Rotational symmetry

Rotational symmetry, also known as radial symmetry in geometry, is the property a shape has when it looks the same after some rotation by a partial turn. An object's degree of rotational symmetry is the number of distinct orientations in which it looks exactly the same for each rotation.
Certain geometric objects are partially symmetric when rotated at certain angles, such as squares rotated by 90°; however, the only geometric objects that are fully rotationally symmetric at any angle are spheres, circles and other spheroids.
Formal treatment
Formally, rotational symmetry is symmetry with respect to some or all rotations in n-dimensional Euclidean space. Rotations are direct isometries, i.e., isometries preserving orientation. Therefore, a symmetry group of rotational symmetry is a subgroup of E+(n), the group of direct isometries (see Euclidean group).
Symmetry with respect to all rotations about all points implies translational symmetry with respect to all translations, so space is homogeneous, and the symmetry group is the whole E(n). With the modified notion of symmetry for vector fields the symmetry group can also be E+(n).
For symmetry with respect to rotations about a point, we can take that point as the origin. These rotations form the special orthogonal group SO(n), the group of n×n orthogonal matrices with determinant 1. For n = 3 this is the rotation group SO(3).
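As a concrete illustration of the matrix characterization just given, the sketch below (assuming NumPy is available; the helper name rotation_matrix_2d is illustrative, not a library function) builds an element of SO(2), verifies the two defining properties of orthogonality and determinant 1, and checks that a 90° rotation maps a square onto itself.

```python
import numpy as np

def rotation_matrix_2d(theta_degrees: float) -> np.ndarray:
    """Return the 2x2 matrix rotating the plane by theta_degrees about the origin."""
    t = np.radians(theta_degrees)
    return np.array([[np.cos(t), -np.sin(t)],
                     [np.sin(t),  np.cos(t)]])

R = rotation_matrix_2d(90.0)

# Membership in SO(2): R^T R = I (orthogonal) and det(R) = 1 (orientation-preserving).
assert np.allclose(R.T @ R, np.eye(2))
assert np.isclose(np.linalg.det(R), 1.0)

# A square centred at the origin has 4-fold symmetry: its vertex set is unchanged
# by a 90-degree rotation.
square = np.array([[1, 1], [-1, 1], [-1, -1], [1, -1]], dtype=float).T
rotated = R @ square
assert {tuple(np.round(v)) for v in rotated.T} == {tuple(v) for v in square.T}
```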
In another definition of the word, the rotation group of an object is the symmetry group within E+(n), the group of direct isometries; in other words, the intersection of the full symmetry group and the group of direct isometries. For chiral objects it is the same as the full symmetry group.
Laws of physics are SO(3)-invariant if they do not distinguish different directions in space. Because of Noether's theorem, the rotational symmetry of a physical system is equivalent to the angular momentum conservation law.
Discrete rotational symmetry
Rotational symmetry of order n, also called n-fold rotational symmetry, or discrete rotational symmetry of the nth order, with respect to a particular point (in 2D) or axis (in 3D) means that rotation by an angle of 360°/n (180°, 120°, 90°, 72°, 60°, 51 3⁄7°, etc.) does not change the object. A "1-fold" symmetry is no symmetry (all objects look alike after a rotation of 360°).
The notation for n-fold symmetry is Cn or simply n. The actual symmetry group is specified by the point or axis of symmetry, together with the n. For each point or axis of symmetry, the abstract group type is the cyclic group of order n, Zn. Although for the latter also the notation Cn is used, the geometric and abstract Cn should be distinguished: there are other symmetry groups of the same abstract group type which are geometrically different, see cyclic symmetry groups in 3D.
The fundamental domain is a sector of 360°/n.
Examples without additional reflection symmetry:
C2, 180°: the dyad; letters Z, N, S; the outlines, albeit not the colors, of the yin and yang symbol; the Union Flag (as divided along the flag's diagonal and rotated about the flag's center point)
C3, 120°: triad, triskelion, Borromean rings; sometimes the term trilateral symmetry is used
C4, 90°: tetrad, swastika
C6, 60°: hexad, Star of David (this one has additional reflection symmetry)
C8, 45°: octad, octagonal muqarnas, as in computer-generated (CG) ceiling designs
Cn is the rotation group of a regular n-sided polygon in 2D and of a regular n-sided pyramid in 3D.
If there is e.g. rotational symmetry with respect to an angle of 100°, then also with respect to one of 20°, the greatest common divisor of 100° and 360°.
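This gcd rule is easy to verify computationally; a minimal sketch in plain Python (function names are illustrative):

```python
from math import gcd

def symmetry_generator(angle_deg: int, full_turn: int = 360) -> int:
    """Smallest rotation angle implied by a known symmetry at angle_deg."""
    return gcd(angle_deg, full_turn)

def order_of_symmetry(angle_deg: int) -> int:
    """n-fold order of the rotational symmetry generated by angle_deg."""
    return 360 // symmetry_generator(angle_deg)

print(symmetry_generator(100))  # 20 -> symmetry under 20-degree rotations
print(order_of_symmetry(100))   # 18 -> 18-fold rotational symmetry
```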
A typical 3D object with rotational symmetry (possibly also with perpendicular axes) but no mirror symmetry is a propeller.
Examples
Multiple symmetry axes through the same point
For discrete symmetry with multiple symmetry axes through the same point, there are the following possibilities:
In addition to an n-fold axis, n perpendicular 2-fold axes: the dihedral groups Dn of order 2n. This is the rotation group of a regular prism, or regular bipyramid. Although the same notation Dn is used, the geometric and abstract Dn should be distinguished: there are other symmetry groups of the same abstract group type which are geometrically different, see dihedral symmetry groups in 3D.
4×3-fold and 3×2-fold axes: the rotation group T of order 12 of a regular tetrahedron. The group is isomorphic to the alternating group A4.
3×4-fold, 4×3-fold, and 6×2-fold axes: the rotation group O of order 24 of a cube and a regular octahedron. The group is isomorphic to the symmetric group S4.
6×5-fold, 10×3-fold, and 15×2-fold axes: the rotation group I of order 60 of a dodecahedron and an icosahedron. The group is isomorphic to the alternating group A5. The group contains 10 versions of D3 and 6 versions of D5 (rotational symmetries like prisms and antiprisms).
In the case of the Platonic solids, the 2-fold axes are through the midpoints of opposite edges, and the number of them is half the number of edges. The other axes are through opposite vertices and through centers of opposite faces, except in the case of the tetrahedron, where the 3-fold axes are each through one vertex and the center of one face.
Rotational symmetry with respect to any angle
Rotational symmetry with respect to any angle is, in two dimensions, circular symmetry. The fundamental domain is a half-line.
In three dimensions we can distinguish cylindrical symmetry and spherical symmetry (no change when rotating about one axis, or for any rotation). That is, no dependence on the angle using cylindrical coordinates and no dependence on either angle using spherical coordinates. The fundamental domain is a half-plane through the axis, and a radial half-line, respectively. Axisymmetric and axisymmetrical are adjectives which refer to an object having cylindrical symmetry, or axisymmetry (i.e. rotational symmetry with respect to a central axis) like a doughnut (torus). An example of approximate spherical symmetry is the Earth (with respect to density and other physical and chemical properties).
In 4D, continuous or discrete rotational symmetry about a plane corresponds to 2D rotational symmetry in every perpendicular plane, about the point of intersection. An object can also have rotational symmetry about two perpendicular planes, e.g. if it is the Cartesian product of two rotationally symmetric 2D figures, as in the case of the duocylinder and various regular duoprisms.
Rotational symmetry with translational symmetry
2-fold rotational symmetry together with single translational symmetry is one of the Frieze groups. A rotocenter is the fixed, or invariant, point of a rotation. There are two rotocenters per primitive cell.
Together with double translational symmetry the rotation groups are the following wallpaper groups, with axes per primitive cell:
p2 (2222): 4×2-fold; rotation group of a parallelogrammic, rectangular, and rhombic lattice.
p3 (333): 3×3-fold; not the rotation group of any lattice (every lattice is the same upside-down, i.e. has 2-fold symmetry, which this group lacks); it is e.g. the rotation group of the regular triangular tiling with the equilateral triangles alternately colored.
p4 (442): 2×4-fold, 2×2-fold; rotation group of a square lattice.
p6 (632): 1×6-fold, 2×3-fold, 3×2-fold; rotation group of a hexagonal lattice.
2-fold rotocenters (including possible 4-fold and 6-fold), if present at all, form the translate of a lattice equal to the translational lattice, scaled by a factor 1/2. In the case of translational symmetry in one dimension, a similar property applies, though the term "lattice" does not apply.
3-fold rotocenters (including possible 6-fold), if present at all, form a regular hexagonal lattice equal to the translational lattice, rotated by 30° (or equivalently 90°), and scaled by a factor 1/√3.
4-fold rotocenters, if present at all, form a regular square lattice equal to the translational lattice, rotated by 45°, and scaled by a factor 1/√2.
6-fold rotocenters, if present at all, form a regular hexagonal lattice which is the translate of the translational lattice.
Scaling of a lattice divides the number of points per unit area by the square of the scale factor. Therefore, the number of 2-, 3-, 4-, and 6-fold rotocenters per primitive cell is 4, 3, 2, and 1, respectively, again including 4-fold as a special case of 2-fold, etc.
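Stated as a short check (using the scale factors listed above, where s is the ratio of the rotocenter-lattice spacing to the translational-lattice spacing), the count of rotocenters per primitive cell is 1/s²:

```latex
% rotocenters per primitive cell = 1 / s^2
\begin{aligned}
\text{2-fold: } s &= \tfrac{1}{2}        & 1/s^2 &= 4\\
\text{3-fold: } s &= \tfrac{1}{\sqrt{3}} & 1/s^2 &= 3\\
\text{4-fold: } s &= \tfrac{1}{\sqrt{2}} & 1/s^2 &= 2\\
\text{6-fold: } s &= 1                   & 1/s^2 &= 1
\end{aligned}
```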
3-fold rotational symmetry at one point and 2-fold at another (or ditto in 3D with respect to parallel axes) implies rotation group p6, i.e. double translational symmetry and 6-fold rotational symmetry at some point (or, in 3D, parallel axis). The translation distance for the symmetry generated by one such pair of rotocenters is 2√3 times their distance.
See also
Ambigram
Axial symmetry
Crystallographic restriction theorem
Lorentz symmetry
Point groups in three dimensions
Screw axis
Space group
Translational symmetry
References
External links
Rotational Symmetry Examples from Math Is Fun
Symmetry
Binocular rivalry
Power-to-weight ratio

Power-to-weight ratio (PWR, also called specific power, or power-to-mass ratio) is a calculation commonly applied to engines and mobile power sources to enable the comparison of one unit or design to another. Power-to-weight ratio is a measurement of actual performance of any engine or power source. It is also used as a measurement of performance of a vehicle as a whole, with the engine's power output being divided by the weight (or mass) of the vehicle, to give a metric that is independent of the vehicle's size. Power-to-weight is often quoted by manufacturers at the peak value, but the actual value may vary in use and variations will affect performance.
The inverse of power-to-weight, weight-to-power ratio (power loading) is a calculation commonly applied to aircraft, cars, and vehicles in general, to enable the comparison of one vehicle's performance to another. Power-to-weight ratio is equal to thrust per unit mass multiplied by the velocity of any vehicle.
Power-to-weight (specific power)
The power-to-weight ratio (specific power) is defined as the power generated by the engine(s) divided by the mass. In this context, the term "weight" is a misnomer, as it colloquially refers to mass: because the divisor is mass rather than weight, the ratio would not become infinite in a zero-gravity (weightless) environment.
A typical turbocharged V8 diesel engine might have a power-to-weight ratio of about 0.65 kW/kg (0.40 hp/lb).
Examples of high power-to-weight ratios can often be found in turbines, because of their ability to operate at very high speeds. For example, the Space Shuttle's main engines used turbopumps (machines consisting of a pump driven by a turbine) to feed the propellants (liquid oxygen and liquid hydrogen) into the engine's combustion chamber. The original liquid hydrogen turbopump, similar in size to an automobile engine, achieved a power-to-weight ratio of roughly 153 kW/kg (93 hp/lb).
Physical interpretation
In classical mechanics, instantaneous power is the limiting value of the average work done per unit time as the time interval Δt approaches zero (i.e. the derivative with respect to time of the work done).
The typically used metric unit of the power-to-weight ratio is W/kg, which equals m²/s³. This allows the power-to-weight ratio to be expressed purely in SI base units. A vehicle's power-to-weight ratio equals its acceleration times its velocity; so at twice the velocity, it experiences half the acceleration, all else being equal.
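A minimal sketch of the relation just stated, a = (P/m)/v, assuming constant peak power and ignoring drag and rolling resistance (the function name and figures are illustrative):

```python
def acceleration(power_w: float, mass_kg: float, speed_ms: float) -> float:
    """Instantaneous acceleration available from power P at speed v: a = P / (m * v)."""
    return power_w / (mass_kg * speed_ms)

# Same 100 kW / 1000 kg vehicle: doubling the speed halves the available acceleration.
print(acceleration(100_000, 1000, 10))  # 10 m/s^2 at 10 m/s
print(acceleration(100_000, 1000, 20))  # 5 m/s^2 at 20 m/s
```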
Propulsive power
If the work to be done is rectilinear motion of a body with constant mass m, whose center of mass is to be accelerated along a (possibly non-straight) line to a speed v, at some angle with respect to the radial direction of a gravitational field, by an onboard powerplant, then the associated kinetic energy is

$$E_k = \tfrac{1}{2} m v^2$$

where:
m is the mass of the body
v is the speed of the center of mass of the body, changing with time.
The work–energy principle states that the work done to the object over a period of time is equal to the difference in its total energy over that period of time, so the rate at which work is done is equal to the rate of change of the kinetic energy (in the absence of potential energy changes).
The work done from time t to time t + Δt along the path C is defined as the line integral $W = \int_C \mathbf{F}\cdot d\mathbf{x}$, so by the fundamental theorem of calculus the power is

$$P(t) = \frac{dW}{dt} = \mathbf{F}(t)\cdot\mathbf{v}(t) = m\,\mathbf{a}(t)\cdot\mathbf{v}(t) = \boldsymbol{\tau}(t)\cdot\boldsymbol{\omega}(t)$$

where:
a is the acceleration of the center of mass of the body, changing with time.
F is the linear force – or thrust – applied upon the center of mass of the body, changing with time.
v is the velocity of the center of mass of the body, changing with time.
τ is the torque applied upon the body about its center of mass, changing with time.
ω is the angular velocity of the body about its center of mass, changing with time.
In propulsion, power is only delivered if the powerplant is in motion, and is transmitted to cause the body to be in motion. It is typically assumed here that mechanical transmission allows the powerplant to operate at peak output power. This assumption allows engine tuning to trade power band width and engine mass for transmission complexity and mass. Electric motors do not suffer from this tradeoff, instead trading their high torque for traction at low speed. The power advantage, or power-to-weight ratio, is then

$$\frac{P}{m} = \mathbf{a}\cdot\mathbf{v}$$

where:
v is the linear speed of the center of mass of the body.
Engine power
The useful power of an engine with shaft power output can be calculated using a dynamometer to measure torque and rotational speed, with maximum power reached when torque multiplied by rotational speed is a maximum. For jet engines the useful power is equal to the flight speed of the aircraft multiplied by the force, known as net thrust, required to make it go at that speed. It is used when calculating propulsive efficiency.
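The dynamometer calculation described above amounts to P = torque × angular velocity; a small sketch, with rpm converted to rad/s (names and figures are illustrative):

```python
import math

def shaft_power_w(torque_nm: float, speed_rpm: float) -> float:
    """Shaft power from measured torque and rotational speed: P = torque * angular velocity."""
    omega = speed_rpm * 2 * math.pi / 60  # convert rpm to rad/s
    return torque_nm * omega

# Example: 400 N*m at 6000 rpm is roughly 251 kW.
print(shaft_power_w(400, 6000) / 1000)
```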
Examples
Engines
Heat engines and heat pumps
Thermal energy is made up from molecular kinetic energy and latent phase energy. Heat engines are able to convert thermal energy in the form of a temperature gradient between a hot source and a cold sink into other desirable mechanical work. Heat pumps take mechanical work to regenerate thermal energy in a temperature gradient. Standard definitions should be used when interpreting how the propulsive power of a jet or rocket engine is transferred to its vehicle.
Electric motors and electromotive generators
An electric motor uses electrical energy to provide mechanical work, usually through the interaction of a magnetic field and current-carrying conductors. By the interaction of mechanical work on an electrical conductor in a magnetic field, electrical energy can be generated.
Fluid engines and fluid pumps
Fluids (liquid and gas) can be used to transmit and/or store energy using pressure and other fluid properties. Hydraulic (liquid) and pneumatic (gas) engines convert fluid pressure into other desirable mechanical or electrical work. Fluid pumps convert mechanical or electrical work into movement or pressure changes of a fluid, or storage in a pressure vessel.
Thermoelectric generators and electrothermal actuators
A variety of effects can be harnessed to produce thermoelectricity, thermionic emission, pyroelectricity and piezoelectricity. Electrical resistance and ferromagnetism of materials can be harnessed to generate thermoacoustic energy from an electric current.
Electrochemical (galvanic) and electrostatic cell systems
(Closed cell) batteries
All electrochemical cell batteries deliver a changing voltage as their chemistry changes from "charged" to "discharged". A nominal output voltage and a cutoff voltage are typically specified for a battery by its manufacturer. The output voltage falls to the cutoff voltage when the battery becomes "discharged". The nominal output voltage is always less than the open-circuit voltage produced when the battery is "charged". The temperature of a battery can affect the power it can deliver, where lower temperatures reduce power. Total energy delivered from a single charge cycle is affected by both the battery temperature and the power it delivers. If the temperature lowers or the power demand increases, the total energy delivered at the point of "discharge" is also reduced.
Battery discharge profiles are often described in terms of a factor of battery capacity. For example, a battery with a nominal capacity quoted in ampere-hours (Ah) at a C/10 rated discharge current (i.e. a current, in amperes, numerically one tenth of the capacity) may safely provide a higher discharge current – and therefore a higher power-to-weight ratio – but only with a lower delivered energy capacity. Power-to-weight ratio for batteries is therefore less meaningful without reference to the corresponding energy-to-weight ratio and cell temperature. This relationship is known as Peukert's law.
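As a rough illustration of Peukert's law mentioned above, the sketch below estimates runtime at a higher-than-rated discharge current. The exponent of 1.2 is a typical lead-acid value used here purely as an assumption, and the model ignores temperature effects:

```python
def peukert_runtime_h(capacity_ah: float, rated_time_h: float,
                      current_a: float, exponent: float = 1.2) -> float:
    """Estimated runtime t = H * (C / (I * H)) ** k for a constant discharge current I."""
    return rated_time_h * (capacity_ah / (current_a * rated_time_h)) ** exponent

# 100 Ah battery rated at the 10-hour rate (C/10 = 10 A):
print(peukert_runtime_h(100, 10, 10))  # 10.0 h at the rated current
print(peukert_runtime_h(100, 10, 25))  # ~3.3 h, i.e. only ~83 Ah delivered
```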
Electrostatic, electrolytic and electrochemical capacitors
Capacitors store electric charge onto two electrodes separated by an electric field semi-insulating (dielectric) medium. Electrostatic capacitors feature planar electrodes onto which electric charge accumulates. Electrolytic capacitors use a liquid electrolyte as one of the electrodes and the electric double layer effect upon the surface of the dielectric-electrolyte boundary to increase the amount of charge stored per unit volume. Electric double-layer capacitors extend both electrodes with a nanoporous material such as activated carbon to significantly increase the surface area upon which electric charge can accumulate, reducing the dielectric medium to nanopores and a very thin high permittivity separator.
While capacitors tend not to be as temperature sensitive as batteries, they are significantly capacity constrained and without the strength of chemical bonds suffer from self-discharge. Power-to-weight ratio of capacitors is usually higher than batteries because charge transport units within the cell are smaller (electrons rather than ions), however energy-to-weight ratio is conversely usually lower.
Fuel cell stacks and flow cell batteries
Fuel cells and flow cells, although perhaps using similar chemistry to batteries, do not contain the energy storage medium or fuel. With a continuous flow of fuel and oxidant, available fuel cells and flow cells continue to convert the energy storage medium into electric energy and waste products. Fuel cells distinctly contain a fixed electrolyte whereas flow cells also require a continuous flow of electrolyte. Flow cells typically have the fuel dissolved in the electrolyte.
Photovoltaics
Vehicles
Power-to-weight ratios for vehicles are usually calculated using curb weight (for cars) or wet weight (for motorcycles), that is, excluding the weight of the driver and any cargo. This can be slightly misleading, especially with regard to motorcycles, where the driver might weigh 1/3 to 1/2 as much as the vehicle itself. In the sport of competitive cycling, athletes' performance is increasingly being expressed in VAM and thus as a power-to-weight ratio in W/kg. This can be measured with a bicycle powermeter or calculated from the gradient of a road climb and the rider's time to ascend it.
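A back-of-the-envelope version of the climbing calculation mentioned above, counting only power expended against gravity (so it understates true power by ignoring rolling and air resistance; names and figures are illustrative):

```python
G = 9.81  # gravitational acceleration, m/s^2

def climbing_power_to_weight(vertical_gain_m: float, time_s: float) -> float:
    """Approximate W/kg from a timed climb, counting only work done against gravity."""
    vam_m_per_h = vertical_gain_m * 3600 / time_s  # "VAM", vertical metres per hour
    return G * vam_m_per_h / 3600                  # W per kg of total (rider + bike) mass

# 800 m of climbing in 40 minutes -> VAM of 1200 m/h -> about 3.3 W/kg against gravity.
print(climbing_power_to_weight(800, 40 * 60))
```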
Locomotives
A locomotive generally must be heavy in order to develop enough adhesion on the rails to start a train. As the coefficient of friction between steel wheels and rails seldom exceeds 0.25 in most cases, improving a locomotive's power-to-weight ratio is often counterproductive. However, the choice of power transmission system, such as variable-frequency drive versus direct-current drive, may support a higher power-to-weight ratio by better managing propulsion power.
Utility and practical vehicles
Most vehicles are designed to meet passenger comfort and cargo carrying requirements. Vehicle designs trade off power-to-weight ratio to increase comfort, cargo space, fuel economy, emissions control, energy security and endurance. Reduced drag and lower rolling resistance in a vehicle design can facilitate increased cargo space without increase in the (zero cargo) power-to-weight ratio. This increases the role flexibility of the vehicle. Energy security considerations can trade off power (typically decreased) and weight (typically increased), and therefore power-to-weight ratio, for fuel flexibility or drive-train hybridisation. Some utility and practical vehicle variants such as hot hatches and sports-utility vehicles reconfigure power (typically increased) and weight to provide the perception of sports car like performance or for other psychological benefit.
Notable low ratio
Common power
Performance luxury, roadsters and mild sports
Increased engine performance is a consideration, but also other features associated with luxury vehicles. Longitudinal engines are common. Bodies vary from hot hatches, sedans (saloons), coupés, convertibles and roadsters. Mid-range dual-sport and cruiser motorcycles tend to have similar power-to-weight ratios.
Sports vehicles
Power-to-weight ratio is an important vehicle characteristic that affects the acceleration of sports vehicles.
Early vehicles
Aircraft
Propeller aircraft depend on high power-to-weight ratios to generate sufficient thrust to achieve sustained flight, and then for speed.
Thrust-to-weight ratio
Jet aircraft produce thrust directly.
Human
Power-to-weight ratio is important in cycling, since it determines acceleration and the speed during hill climbs. Since a cyclist's power-to-weight output decreases with fatigue, it is normally discussed with relation to the length of time that he or she maintains that power. A professional cyclist can produce over 20 W/kg (0.012 hp/lb) as a five-second maximum.
See also
References
Mechanics
Power (physics)
Engineering ratios
Potentiality and actuality

In philosophy, potentiality and actuality are a pair of closely connected principles which Aristotle used to analyze motion, causality, ethics, and physiology in his Physics, Metaphysics, Nicomachean Ethics, and De Anima.
The concept of potentiality, in this context, generally refers to any "possibility" that a thing can be said to have. Aristotle did not consider all possibilities the same, and emphasized the importance of those that become real of their own accord when conditions are right and nothing stops them. Actuality, in contrast to potentiality, is the motion, change or activity that represents an exercise or fulfillment of a possibility, when a possibility becomes real in the fullest sense. Both these concepts therefore reflect Aristotle's belief that events in nature are not all natural in a true sense. As he saw it, many things happen accidentally, and therefore not according to the natural purposes of things.
These concepts, in modified forms, remained very important into the Middle Ages, influencing the development of medieval theology in several ways. In modern times the dichotomy has gradually lost importance, as understandings of nature and deity have changed. However the terminology has also been adapted to new uses, as is most obvious in words like energy and dynamic. These were words first used in modern physics by the German scientist and philosopher, Gottfried Wilhelm Leibniz. Aristotle's concept of entelechy retains influence on recent concepts of biological "entelechy".
Potentiality
"Potentiality" and "potency" are translations of the Ancient Greek word (δύναμις). They refer especially to the way the word is used by Aristotle, as a concept contrasting with "actuality". The Latin translation of dunamis is , which is the root of the English word "potential"; it is also sometimes used in English-language philosophical texts. In early modern philosophy, English authors like Hobbes and Locke used the English word power as their translation of Latin .
is an ordinary Greek word for possibility or capability. Depending on context, it could be translated 'potency', 'potential', 'capacity', 'ability', 'power', 'capability', 'strength', 'possibility', 'force' and is the root of modern English words dynamic, dynamite, and dynamo.
In his philosophy, Aristotle distinguished two meanings of the word . According to his understanding of nature there was both a weak sense of potential, meaning simply that something "might chance to happen or not to happen", and a stronger sense, to indicate how something could be done well. For example, "sometimes we say that those who can merely take a walk, or speak, without doing it as well as they intended, cannot speak or walk." This stronger sense is mainly said of the potentials of living things, although it is also sometimes used for things like musical instruments.
Throughout his works, Aristotle clearly distinguishes things that are stable or persistent, with their own strong natural tendency to a specific type of change, from things that appear to occur by chance. He treats these as having a different and more real existence. "Natures which persist" are said by him to be one of the causes of all things, while natures that do not persist, "might often be slandered as not being at all by one who fixes his thinking sternly upon it as upon a criminal." The potencies which persist in a particular material are one way of describing "the nature itself" of that material, an innate source of motion and rest within that material. In terms of Aristotle's theory of four causes, a material's non-accidental potential is the material cause of the things that can come to be from that material, and one part of how we can understand the substance (ousia, sometimes translated as "thinghood") of any separate thing. (As emphasized by Aristotle, this requires his distinction between accidental causes and natural causes.) According to Aristotle, when we refer to the nature of a thing, we are referring to the form or shape of a thing, which was already present as a potential, an innate tendency to change, in that material before it achieved that form. When things are most "fully at work" we can see more fully what kind of thing they really are.
Actuality
Actuality is often used to translate both energeia (ἐνέργεια) and entelecheia (ἐντελέχεια) (sometimes rendered in English as entelechy). Actuality comes from the Latin actualitas and is a traditional translation, but its normal meaning in Latin is 'anything which is currently happening.'
The two words energeia and entelecheia were coined by Aristotle, and he stated that their meanings were intended to converge. In practice, most commentators and translators consider the two words to be interchangeable. They both refer to something being in its own type of action or at work, as all things are when they are real in the fullest sense, and not just potentially real. For example, "to be a rock is to strain to be at the center of the universe, and thus to be in motion unless constrained otherwise."
Energeia is a word based upon ergon, meaning 'work'. It is the source of the modern word energy, but the term has evolved so much over the course of the history of science that reference to the modern term is not very helpful in understanding the original as used by Aristotle. It is difficult to translate his use of energeia into English with consistency. Joe Sachs renders it with the phrase "being-at-work" and says that "we might construct the word is-at-work-ness from Anglo-Saxon roots to translate energeia into English".
Aristotle says the word can be made clear by looking at examples rather than trying to find a definition. Two examples of energeia in Aristotle's works are pleasure and happiness (eudaimonia). Pleasure is an energeia of the human body and mind, whereas happiness is more simply the energeia of a human being a human.
Kinēsis, translated as movement, motion, or in some contexts change, is also explained by Aristotle as a particular type of energeia. See below.
Entelechy
Entelechy, in Greek entelecheia, was coined by Aristotle and transliterated in Latin as entelechia. According to Sachs:
Aristotle invents the word by combining enteles (ἐντελές, 'complete, full-grown') with echein (= hexis, to be a certain way by the continuing effort of holding on in that condition), while at the same time punning on endelecheia (ἐνδελέχεια, 'persistence') by inserting telos (τέλος, 'completion'). This is a three-ring circus of a word, at the heart of everything in Aristotle's thinking, including the definition of motion.
Sachs therefore proposed a complex neologism of his own, "being-at-work-staying-the-same." Another translation in recent years is "being-at-an-end" (which Sachs has also used).
Entelecheia, as can be seen by its derivation, is a kind of completeness, whereas "the end and completion of any genuine being is its being-at-work". The entelecheia is a continuous being-at-work when something is doing its complete "work". For this reason, the meanings of the two words converge, and they both depend upon the idea that every thing's "thinghood" is a kind of work, or in other words a specific way of being in motion. All things that exist now, and not just potentially, are beings-at-work, and all of them have a tendency towards being-at-work in a particular way that would be their proper and "complete" way.
Sachs explains the convergence of energeia and entelecheia as follows, and uses the word actuality to describe the overlap between them:
Just as energeia extends to entelecheia because it is the activity which makes a thing what it is, entelecheia extends to energeia because it is the end or perfection which has being only in, through, and during activity.
Motion
Aristotle discusses motion in his Physics quite differently from modern science. Aristotle's definition of motion is closely connected to his actuality-potentiality distinction. Taken literally, Aristotle defines motion as the actuality of a "potentiality as such". What Aristotle meant, however, is the subject of several different interpretations. A major difficulty comes from the fact that the terms actuality and potentiality, linked in this definition, are normally understood within Aristotle as opposed to each other. On the other hand, the "as such" is important and is explained at length by Aristotle, giving examples of "potentiality as such". For example, the motion of building is the energeia of the dunamis of the building materials as building materials, as opposed to anything else they might become, and this potential in the unbuilt materials is referred to by Aristotle as "the buildable". So the motion of building is the actualization of "the buildable", and not the actualization of a house as such, nor the actualization of any other possibility which the building materials might have had.
In an influential 1969 paper, Aryeh Kosman divided up previous attempts to explain Aristotle's definition into two types, criticised them, and then gave his own third interpretation. While this has not become a consensus, it has been described as having become "orthodox". This and similar more recent publications are the basis of the following summary.
1. The "process" interpretation
This approach is associated with W. D. Ross; it was also the interpretation of Averroes and Maimonides.
This interpretation is, to use the words of Ross, that "it is the passage to actuality that is kinēsis", as opposed to any potentiality being an actuality.
The argument of Ross for this interpretation requires him to assert that Aristotle actually used the word entelecheia wrongly, or inconsistently, only within his definition, making it mean "actualization", which is in conflict with Aristotle's normal use of words. Critics also object that this explanation cannot account for the "as such" in Aristotle's definition.
2. The "product" interpretation
This interpretation is associated with Thomas Aquinas; by this explanation "the apparent contradiction between potentiality and actuality in Aristotle's definition of motion" is resolved "by arguing that in every motion actuality and potentiality are mixed or blended." Motion is therefore "the actuality of any potentiality insofar as it is still a potentiality." Or in other words:
The Thomistic blend of actuality and potentiality has the characteristic that, to the extent that it is actual it is not potential and to the extent that it is potential it is not actual; the hotter the water is, the less is it potentially hot, and the cooler it is, the less is it actually, the more potentially, hot.
As with the first interpretation, however, critics object that:
One implication of this interpretation is that whatever happens to be the case right now is an entelecheia, as though something as intrinsically unstable as the instantaneous position of an arrow in flight deserved to be described by the word that everywhere else Aristotle reserves for complex organized states that persist, that hold out against internal and external causes that try to destroy them.
In a more recent paper on this subject, Kosman associates the view of Aquinas with those of his own critics, David Charles, Jonathan Beere, and Robert Heineman.
3. The interpretation of Kosman, Coope, Sachs and others
Sachs, amongst other authors (such as Aryeh Kosman and Ursula Coope), proposes that the solution to problems interpreting Aristotle's definition must be found in the distinction Aristotle makes between two different types of potentiality, with only one of those corresponding to the "potentiality as such" appearing in the definition of motion. He writes:
The man with sight, but with his eyes closed, differs from the blind man, although neither is seeing. The first man has the capacity to see, which the second man lacks. There are then potentialities as well as actualities in the world. But when the first man opens his eyes, has he lost the capacity to see? Obviously not; while he is seeing, his capacity to see is no longer merely a potentiality, but is a potentiality which has been put to work. The potentiality to see exists sometimes as active or at-work, and sometimes as inactive or latent.
Coming to motion, Sachs gives the example of a man walking across the room and explains as follows:
"Once he has reached the other side of the room, his potentiality to be there has been actualized in Ross' sense of the term". This is a type of . However, it is not a motion, and not relevant to the definition of motion.
While a man is walking his potentiality to be on the other side of the room is actual just as a potentiality, or in other words the potential as such is an actuality. "The actuality of the potentiality to be on the other side of the room, as just that potentiality, is neither more nor less than the walking across the room."
Sachs, in his commentary on Aristotle's Physics Book III, gives the following results from his understanding of Aristotle's definition of motion:
The genus of which motion is a species is being-at-work-staying-itself, of which the only other species is thinghood. The being-at-work-staying-itself of a potency, as material, is thinghood. The being-at-work-staying-the-same of a potency as a potency is motion.
The importance of actuality in Aristotle's philosophy
The actuality-potentiality distinction in Aristotle is a key element linked to everything in his physics and metaphysics.
Aristotle describes potentiality and actuality, or potency and action, as one of several distinctions between things that exist or do not exist. In a sense, a thing that exists potentially does not exist; but, the potential does exist. And this type of distinction is expressed for several different types of being within Aristotle's categories of being. For example, from Aristotle's Metaphysics, 1017a:
We speak of an entity being a "seeing" thing whether it is currently seeing or just able to see.
We speak of someone having understanding, whether they are using that understanding or not.
We speak of corn existing in a field even when it is not yet ripe.
People sometimes speak of a figure being already present in a rock which could be sculpted to represent that figure.
Within the works of Aristotle, the terms energeia and entelecheia, often translated as actuality, differ from what is merely actual because they specifically presuppose that all things have a proper kind of activity or work which, if achieved, would be their proper end. The Greek for end in this sense is telos, a component word in entelecheia (a work that is the proper end of a thing) and also in teleology. This is an aspect of Aristotle's theory of four causes, and specifically of the formal cause (eidos) and the final cause (telos).
In essence this means that Aristotle did not see things as matter in motion only, but also proposed that all things have their own aims or ends. In other words, for Aristotle (unlike modern science), there is a distinction between things with a natural cause in the strongest sense, and things that truly happen by accident. He also distinguishes non-rational from rational potentialities (e.g. the capacity to heat and the capacity to play the flute, respectively), pointing out that the latter require desire or deliberate choice for their actualization. Because of this style of reasoning, Aristotle is often referred to as having a teleology, and sometimes as having a theory of forms.
While actuality is linked by Aristotle to his concept of a formal cause, potentiality (or potency) on the other hand, is linked by Aristotle to his concepts of hylomorphic matter and material cause. Aristotle wrote for example that "matter exists potentially, because it may attain to the form; but when it exists actually, it is then in the form."
Teleology is a crucial concept throughout Aristotle's philosophy. This means that as well as its central role in his physics and metaphysics, the potentiality-actuality distinction has a significant influence on other areas of Aristotle's thought such as his ethics, biology and psychology.
The active intellect
The active intellect was a concept Aristotle described that requires an understanding of the actuality-potentiality dichotomy. Aristotle described this in his De Anima (Book 3, Chapter 5, 430a10-25) and covered similar ground in his Metaphysics (Book 12, Chapters 7-10). The following is from the De Anima, translated by Joe Sachs, with some parenthetic notes about the Greek. The passage tries to explain "how the human intellect passes from its original state, in which it does not think, to a subsequent state, in which it does." He inferred that the distinction between potentiality and actuality must also exist in the soul itself:
...since in nature one thing is the material [hulē] for each kind [genos] (this is what is in potency all the particular things of that kind) but it is something else that is the causal and productive thing by which all of them are formed, as is the case with an art in relation to its material, it is necessary in the soul [psuchē] too that these distinct aspects be present;
the one sort is intellect [nous] by becoming all things, the other sort by forming all things, in the way an active condition [hexis] like light too makes the colors that are in potency be at work as colors.
This sort of intellect is separate, as well as being without attributes and unmixed, since it is by its thinghood a being-at-work, for what acts is always distinguished in stature above what is acted upon, as a governing source is above the material it works on.
Knowledge [epistēmē], in its being-at-work, is the same as the thing it knows, and while knowledge in potency comes first in time in any one knower, in the whole of things it does not take precedence even in time.
This does not mean that at one time it thinks but at another time it does not think, but when separated it is just exactly what it is, and this alone is deathless and everlasting (though we have no memory, because this sort of intellect is not acted upon, while the sort that is acted upon is destructible), and without this nothing thinks.
This has been referred to as one of "the most intensely studied sentences in the history of philosophy." In the Metaphysics, Aristotle wrote at more length on a similar subject and is often understood to have equated the active intellect with being the "unmoved mover" and God. Nevertheless, as Davidson remarks:
Just what Aristotle meant by potential intellect and active intellect – terms not even explicit in the De Anima and at best implied – and just how he understood the interaction between them remains moot to this day. Students of the history of philosophy continue to debate Aristotle's intent, particularly the question whether he considered the active intellect to be an aspect of the human soul or an entity existing independently of man.
Post-Aristotelian usage
New meanings of energeia or energy
Already in Aristotle's own works, the concept of a distinction between energeia and dunamis was used in many ways, for example to describe the way striking metaphors work, or human happiness. Polybius, about 150 BC, in his work the Histories, uses Aristotle's word energeia both in an Aristotelian way and to describe the "clarity and vividness" of things. Diodorus Siculus, in 60-30 BC, used the term in a very similar way to Polybius. However, Diodorus also uses the term to denote qualities unique to individuals, in ways that could be translated as 'vigor' or 'energy' (in a more modern sense); for a society, 'practice' or 'custom'; for a thing, 'operation' or 'working', like vigor in action.
Platonism and neoplatonism
The notion of potency and act is already implicitly found in Plato, in his cosmological presentation of becoming and forces, linked to the ordering intellect, mainly in the description of the Demiurge and the "Receptacle" in his Timaeus. It has also been associated with the dyad of Plato's unwritten doctrines, and is involved in the question of being and non-being going back to the pre-Socratics, as in Heraclitus's mobilism and Parmenides' immobilism. The mythological concept of primordial Chaos is also classically associated with a disordered prime matter (see also prima materia), which, being passive and full of potentialities, would be ordered into actual forms, as can be seen in Neoplatonism, especially in Plutarch and Plotinus, and among the Church Fathers, and in subsequent medieval and Renaissance philosophy, as in Ramon Llull's Book of Chaos and John Milton's Paradise Lost.
Plotinus was a late classical pagan philosopher and theologian whose monotheistic re-workings of Plato and Aristotle were influential amongst early Christian theologians. In his Enneads he sought to reconcile the ideas of Aristotle and Plato with a form of monotheism that used three fundamental metaphysical principles, which were conceived of in terms consistent with Aristotle's dunamis/energeia dichotomy and one interpretation of his concept of the Active Intellect (discussed above):
The Monad or "the One", sometimes also described as "the Good". This is the dunamis or possibility of existence.
The Intellect, or Intelligence, or, to use the Greek term, Nous, which is described as God, or a Demiurge. It thinks its own contents, which are thoughts, equated to the Platonic ideas or forms. The thinking of this Intellect is the highest activity of life. The actualization of this thinking is the being of the forms. This Intellect is the first principle or foundation of existence. The One is prior to it, but not in the sense that a cause is prior to an effect, but instead Intellect is called an emanation of the One. The One is the possibility of this foundation of existence.
Soul or, to use the Greek term, Psyche. The soul is also an energeia: it acts upon or actualizes its own thoughts and creates "a separate, material cosmos that is the living image of the spiritual or noetic Cosmos contained as a unified thought within the Intelligence."
This was based largely upon Plotinus' reading of Plato, but also incorporated many Aristotelian concepts, including the unmoved mover as energeia.
New Testament usage
Apart from the incorporation of Neoplatonic concepts into Christendom by early Christian theologians such as St. Augustine, the concepts of dunamis and ergon (the morphological root of energeia) are frequently used in the original Greek New Testament: dunamis is used 119 times and ergon is used 161 times, usually with the meanings 'power/ability' and 'act/work', respectively.
Essence-energies debate in medieval Christian theology
In Eastern Orthodox Christianity, St Gregory Palamas wrote about the "energies" (actualities; singular energeia in Greek, actus in Latin) of God in contrast to God's "essence". These are two distinct types of existence, with God's energy being the type of existence which people can perceive, while the essence of God is outside of normal existence or non-existence or human understanding, i.e. transcendental, in that it is not caused or created by anything else.
Palamas gave this explanation as part of his defense of the Eastern Orthodox ascetic practice of hesychasm. Palamism became a standard part of Orthodox dogma after 1351.
In contrast, the position of Western Medieval (or Catholic) Christianity can be found, for example, in the philosophy of Thomas Aquinas, who relied on Aristotle's concept of entelechy when he defined God as actus purus, pure act, actuality unmixed with potentiality. The existence of a truly distinct essence of God which is not actuality is not generally accepted in Catholic theology.
Influence on modal logic
The notion of possibility was greatly analyzed by medieval and modern philosophers. Aristotle's logical work in this area is considered by some to be an anticipation of modal logic and its treatment of potentiality and time. Indeed, many philosophical interpretations of possibility are related to a famous passage on Aristotle's On Interpretation, concerning the truth of the statement: "There will be a sea battle tomorrow."
Contemporary philosophy regards possibility, as studied by modal metaphysics, to be an aspect of modal logic. Modal logic as a named subject owes much to the writings of the Scholastics, in particular William of Ockham and John Duns Scotus, who reasoned informally in a modal manner, mainly to analyze statements about essence and accident.
Influence on early modern physics
Aristotle's metaphysics, his account of nature and causality, was for the most part rejected by the early modern philosophers. Francis Bacon, in his Novum Organum, in one explanation of the case for rejecting the concept of a formal cause or "nature" for each type of thing, argued for example that philosophers must still look for formal causes, but only in the sense of "simple natures" such as colour and weight, which exist in many gradations and modes in very different types of individual bodies. In the works of Thomas Hobbes, then, the traditional Aristotelian terms "power and act" are discussed, but he equates them simply to "cause and effect".
There was an adaptation of at least one aspect of Aristotle's potentiality and actuality distinction, which has become part of modern physics, although, as per Bacon's approach, it is a generalized form of energy, not one connected to specific forms for specific things. The definition of kinetic energy in modern physics, as proportional to the product of mass and the square of velocity, was derived by Leibniz, as a correction of Descartes, based upon Galileo's investigation of falling bodies. He preferred to refer to it as a 'living force' (Latin vis viva), but what he defined is today called kinetic energy, and was seen by Leibniz as a modification of Aristotle's entelecheia, and hence of his concept of the potential for movement which is in things. Instead of each type of physical thing having its own specific tendency to a way of moving or changing, as in Aristotle, Leibniz said that force, power, or motion itself could be transferred between things of different types, in such a way that there is a general conservation of this energy. In other words, Leibniz's modern version of entelechy or energy obeys its own laws of nature, whereas different types of things do not have their own separate laws of nature. Leibniz wrote: "...the entelechy of Aristotle, which has made so much noise, is nothing else but force or activity; that is, a state from which action naturally flows if nothing hinders it. But matter, primary and pure, taken without the souls or lives which are united to it, is purely passive; properly speaking also it is not a substance, but something incomplete."
Leibniz's study of the "entelechy" now known as energy was a part of what he called his new science of "dynamics", based on the Greek word dunamis and his understanding that he was making a modern version of Aristotle's old dichotomy. He also referred to it as the "new science of power and action". It is from him that the modern distinction between statics and dynamics in physics stems. The emphasis on dunamis in the name of this new science comes from the importance of his discovery of potential energy, which is not active but which conserves energy nevertheless. "As 'a science of power and action', dynamics arises when Leibniz proposes an adequate architectonic of laws for constrained, as well as unconstrained, motions."
For Leibniz, like Aristotle, this law of nature concerning entelechies was also understood as a metaphysical law, important not only for physics, but also for understanding life and the soul. A soul, or spirit, according to Leibniz, can be understood as a type of entelechy (or living monad) which has distinct perceptions and memory.
Influence on modern physics
Ideas about potentiality have been related to quantum mechanics, where a wave function in a superposition of potential values (before measurement) has the potential to collapse into one of those values, under the Copenhagen interpretation of quantum mechanics. In particular, the German physicist Werner Heisenberg called this "a quantitative version of the old concept of 'potentia' in Aristotelian philosophy".
Entelechy in modern philosophy and biology
As discussed above, terms derived from dunamis and energeia have become parts of modern scientific vocabulary with a very different meaning from Aristotle's. The original meanings are not used by modern philosophers unless they are commenting on classical or medieval philosophy. In contrast, entelecheia, in the form of entelechy, is a word used much less in technical senses in recent times.
As mentioned above, the concept had occupied a central position in the metaphysics of Leibniz, and is closely related to his monad in the sense that each sentient entity contains its own entire universe within it. But Leibniz' use of this concept influenced more than just the development of the vocabulary of modern physics. Leibniz was also one of the main inspirations for the important movement in philosophy known as German idealism, and within this movement and schools influenced by it entelechy may denote a force propelling one to self-fulfillment.
In the biological vitalism of Hans Driesch, living things develop by entelechy, a common purposive and organising field. Leading vitalists like Driesch argued that many of the basic problems of biology cannot be solved by a philosophy in which the organism is simply considered a machine. Vitalism and its concepts like entelechy have since been discarded as without value for scientific practice by the overwhelming majority of professional biologists.
Potentiality is important to the philosophy of Giorgio Agamben, together with the notion that every potentiality also contains the potentiality to not do something, and that actuality is the "not not-doing" of a potentiality. Agamben notes that thought is unique in that it can reflect on this potentiality in itself, rather than in relation to an object, making the mind a sort of tabula rasa.
However, in philosophy aspects and applications of the concept of entelechy have been explored by scientifically interested philosophers and philosophically inclined scientists alike. One example was the American critic and philosopher Kenneth Burke (1897–1993) whose concept of the "terministic screen" illustrates his thought on the subject.
Prof. Denis Noble argues that, just as teleological causation is necessary to the social sciences, a specific teleological causation in biology, expressing functional purpose, should be restored and that it is already implicit in neo-Darwinism (e.g. "selfish gene"). Teleological analysis proves parsimonious when the level of analysis is appropriate to the complexity of the required 'level' of explanation (e.g. whole body or organ rather than cell mechanism).
See also
Actual infinity
Actus purus
Alexander of Aphrodisias
Essence–Energies distinction
First cause
Henosis
Hylomorphism
Hypokeimenon
Hypostasis (philosophy and religion)
Sumbebekos
Theosis
Unmoved movers
References
Bibliography
Old translations of Aristotle
This 1933 translation is reproduced online at the Perseus Project.
Action (philosophy)
Aristotelianism
Causality
Metaphysical properties
Philosophy of Aristotle
Hunting oscillation

Hunting oscillation is a self-oscillation, usually unwanted, about an equilibrium. The expression came into use in the 19th century and describes how a system "hunts" for equilibrium. The expression is used to describe phenomena in such diverse fields as electronics, aviation, biology, and railway engineering.
Railway wheelsets
A classical hunting oscillation is a swaying motion of a railway vehicle (often called truck hunting or bogie hunting) caused by the coning action on which the directional stability of an adhesion railway depends. It arises from the interaction of adhesion forces and inertial forces. At low speed, adhesion dominates but, as the speed increases, the adhesion forces and inertial forces become comparable in magnitude and the oscillation begins at a critical speed. Above this speed, the motion can be violent, damaging track and wheels and potentially causing derailment. The problem does not occur on systems with a differential because the action depends on both wheels of a wheelset rotating at the same angular rate; however, differentials tend to be rare, and conventional trains have their wheels fixed to the axles in pairs instead. Some trains, like the Talgo 350, have no differential, yet they are mostly not affected by hunting oscillation, as most of their wheels rotate independently from one another. The wheels of the power car, however, can be affected by hunting oscillation, because they are fixed to the axles in pairs as in conventional bogies. Less-conical wheels, and bogies whose wheels are not fixed to an axle in pairs but turn independently of each other, are also cheaper than a suitable differential for the bogies of a train.
The problem was first noticed towards the end of the 19th century, when train speeds became high enough to encounter it. Serious efforts to counteract it got underway in the 1930s, giving rise to lengthened trucks and the side-damping swing-hanger truck. In the development of the Japanese Shinkansen, less-conical wheels and other design changes were used to raise truck design speeds. Advances in wheel and truck design based on research and development efforts in Europe and Japan have extended the speeds of steel-wheel systems well beyond those attained by the original Shinkansen, while the advantage of backwards compatibility keeps such technology dominant over alternatives such as the hovertrain and maglev systems. The speed record for steel-wheeled trains is held by the French TGV, at 574.8 km/h (357 mph).
Kinematic analysis
While a qualitative description provides some understanding of the phenomenon, deeper understanding inevitably requires a mathematical analysis of the vehicle dynamics. Even then, the results may be only approximate.
A kinematic description deals with the geometry of motion, without reference to the forces causing it, so the analysis begins with a description of the geometry of a wheel set running on a straight track. Since Newton's second law relates forces to the acceleration of bodies, the forces acting may then be derived from the kinematics by calculating the accelerations of the components. However, if these forces change the kinematic description (as they do in this case) then the results may only be approximately correct.
Assumptions and non-mathematical description
This kinematic description makes a number of simplifying assumptions since it neglects forces. For one, it assumes that the rolling resistance is zero. A wheelset (not attached to a train or truck) is given a push forward on a straight and level track. The wheelset starts coasting and never slows down, since there are no forces (except downward forces on the wheelset to make it adhere to the track and not slip). If initially the wheelset is centered on the railroad track, then the effective diameters of each wheel are the same and the wheelset rolls down the track in a perfectly straight line forever. But if the wheelset is a little off-center so that the effective diameters (or radii) are different, then the wheelset starts to move in a curve whose radius depends on these wheelset radii, etc. (to be derived later on). The problem is to use kinematic reasoning to find the trajectory of the wheelset, or more precisely, the trajectory of the center of the wheelset projected vertically onto the roadbed in the center of the track. This is a trajectory on the plane of the level earth's surface, plotted on an x–y graph where x is the distance along the railroad and y is the "tracking error", the deviation of the center of the wheelset from the straight line of the railway running down the center of the track (midway between the two rails).
To illustrate that a wheelset trajectory follows a curved path, one may place a nail or screw on a flat table top and give it a push. It will roll in a circular curve because the nail or screw is like a wheelset with extremely different diameter wheels. The head is analogous to a large diameter wheel and the pointed end is like a small diameter wheel. While the nail or screw will turn around in a full circle (and more), the railroad wheelset behaves differently because as soon as it starts to turn in a curve, the effective diameters change in such a way as to decrease the curvature of the path. Note that "radius" and "curvature" refer to the curvature of the trajectory of the wheelset and not the curvature of the railway since this is perfectly straight track. As the wheelset rolls on, the curvature decreases until the wheels reach the point where their effective diameters are equal and the path is no longer curving. But the trajectory has a slope at this point (it is a straight line which crosses diagonally over the centerline of the track) so that it overshoots the centerline of the track and the effective diameters reverse (the formerly smaller diameter wheel becomes the larger diameter and conversely). This results in the wheelset moving in a curve in the opposite direction. Again it overshoots the centerline and this phenomenon continues indefinitely with the wheelset oscillating from side to side. Note that the wheel flange never makes contact with the rail. In this model, the rails are assumed to always contact the wheel tread along the same line on the rail head, which assumes that the rails are knife-edge and only make contact with the wheel tread along a line (of zero width).
Mathematical analysis
The train stays on the track by virtue of the conical shape of the wheel treads. If a wheelset is displaced to one side by an amount (the tracking error), the radius of the tread in contact with the rail on one side is reduced, while on the other side it is increased. The angular velocity is the same for both wheels (they are coupled via a rigid axle), so the larger diameter tread speeds up, while the smaller slows down. The wheel set steers around a centre of curvature defined by the intersection of the generator of a cone passing through the points of contact with the wheels on the rails and the axis of the wheel set. Applying similar triangles, we have for the turn radius:
where the quantities involved are the track gauge, the wheel radius when running straight, and the tread taper (which is the slope of the tread in the horizontal direction perpendicular to the track).
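A hedged reconstruction of this relation, using assumed symbols that do not appear explicitly in the text (y for the lateral displacement of the wheelset, r for the rolling radius when centred, d for the track gauge and k for the tread taper), is

\[ R \approx \frac{r\,d}{2\,k\,y}, \]

so the path curves more sharply the further the wheelset is displaced from the centre of the track.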
The path of the wheel set relative to the straight track is defined by the lateral deviation as a function of the distance along the track; this deviation is sometimes called the tracking error. Provided the direction of motion remains more or less parallel to the rails, the curvature of the path may be related to the second derivative of the deviation with respect to distance along the track, approximately as:
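In the same assumed notation, with x the distance along the track, the small-slope approximation gives

\[ \frac{1}{R} \approx \left|\frac{d^{2}y}{dx^{2}}\right|. \]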
It follows that the trajectory along the track is governed by the equation:
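A sketch of that governing equation in the assumed notation, with the curvature always directed back towards the track centreline:

\[ \frac{d^{2}y}{dx^{2}} = -\frac{2k}{r\,d}\,y. \]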
This is a simple harmonic motion having wavelength:
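In the assumed notation this is Klingel's formula for the kinematic hunting wavelength,

\[ \lambda = 2\pi\sqrt{\frac{r\,d}{2k}}, \]

so a smaller taper or a larger rolling radius gives a longer wavelength.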
This kinematic analysis implies that trains sway from side to side all the time. In fact, this oscillation is damped out below a critical speed and the ride is correspondingly more comfortable. The kinematic result ignores the forces causing the motion. These may be analyzed using the concept of creep (non-linear) but are somewhat difficult to quantify simply, as they arise from the elastic distortion of the wheel and rail at the regions of contact. These are the subject of frictional contact mechanics; an early analysis that included these effects in the study of hunting motion was presented by Carter. See Knothe for a historical overview.
If the motion is substantially parallel with the rails, the angular displacement of the wheel set is given by:
Hence:
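A hedged sketch in the assumed notation, writing θ for the yaw angle and y_0 for the amplitude of the lateral motion: the yaw angle is approximately the slope of the path, and substituting the sinusoidal solution makes the quarter-cycle phase difference explicit:

\[ \theta \approx \frac{dy}{dx}, \qquad y = y_{0}\sin\frac{2\pi x}{\lambda} \;\Rightarrow\; \theta \approx \frac{2\pi y_{0}}{\lambda}\cos\frac{2\pi x}{\lambda}. \]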
The angular deflection also follows a simple harmonic motion, which lags behind the side to side motion by a quarter of a cycle. In many systems which are characterised by harmonic motion involving two different states (in this case the axle yaw deflection and the lateral displacement), the quarter cycle lag between the two motions endows the system with the ability to extract energy from the forward motion. This effect is observed in "flutter" of aircraft wings and "shimmy" of road vehicles, as well as hunting of railway vehicles. The kinematic solution derived above describes the motion at the critical speed.
In practice, below the critical speed, the lag between the two motions is less than a quarter cycle so that the motion is damped out but, above the critical speed, the lag is greater than a quarter cycle so that the motion is amplified.
In order to estimate the inertial forces, it is necessary to express the distance derivatives as time derivatives. This is done using the speed of the vehicle, which is assumed constant:
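A hedged statement of this conversion, writing V as the assumed symbol for the vehicle speed:

\[ \frac{d}{dt} = V\frac{d}{dx}, \qquad \dot{y} = V\frac{dy}{dx}, \qquad \ddot{y} = V^{2}\frac{d^{2}y}{dx^{2}}. \]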
The angular acceleration of the axle in yaw is:
The inertial moment (ignoring gyroscopic effects) is:
where the quantities involved are the force acting along the rails and the moment of inertia of the wheel set.
The maximum frictional force between the wheel and rail is given by:
where the relevant quantities are the axle load and the coefficient of friction. Gross slipping will occur at a combination of speed and axle deflection given by:
This expression yields a significant overestimate of the critical speed, but it does illustrate the physical reason why hunting occurs, i.e. the inertial forces become comparable with the adhesion forces above a certain speed. Limiting friction is a poor representation of the adhesion force in this case.
The actual adhesion forces arise from the distortion of the tread and rail in the region of contact. There is no gross slippage, just elastic distortion and some local slipping (creep slippage). During normal operation these forces are well within the limiting friction constraint. A complete analysis takes these forces into account, using rolling contact mechanics theories.
However, the kinematic analysis assumed that there was no slippage at all at the wheel-rail contact. Now it is clear that there is some creep slippage which makes the calculated sinusoidal trajectory of the wheelset (per Klingel's formula) not exactly correct.
Energy balance
In order to get an estimate of the critical speed, we use the fact that the condition for which this kinematic solution is valid corresponds to the case where there is no net energy exchange with the surroundings, so by considering the kinetic and potential energy of the system, we should be able to derive the critical speed.
Let:
Using the operator:
the angular acceleration equation may be expressed in terms of the angular velocity in yaw:
integrating:
so the kinetic energy due to rotation is:
When the axle yaws, the points of contact move outwards on the treads so that the height of the axle is lowered. The distance between the support points increases to:
(to second order of small quantities).
The displacement of the support point out from the centres of the treads is:
The axle load falls by:
The work done by lowering the axle load is therefore:
This is energy lost from the system, so in order for the motion to continue, an equal amount of energy must be extracted from the forward motion of the wheelset.
The outer wheel velocity is given by:
The kinetic energy is:
for the inner wheel it is
where the mass appearing is that of both wheels.
The increase in kinetic energy is:
The motion will continue at constant amplitude as long as the energy extracted from the forward motion, and manifesting itself as increased kinetic energy of the wheel set at zero yaw, is equal to the potential energy lost by the lowering of the axle load at maximum yaw.
Now, from the kinematics:
but
The translational kinetic energy is
The total kinetic energy is:
The critical speed is found from the energy balance:
Hence the critical speed is given by
This is independent of the wheel taper, but depends on the ratio of the axle load to wheel set mass. If the treads were truly conical in shape, the critical speed would be independent of the taper. In practice, wear on the wheel causes the taper to vary across the tread width, so that the value of taper used to determine the potential energy is different from that used to calculate the kinetic energy. Giving the former its own symbol, the critical speed becomes:
where the additional factor is a shape factor determined by the wheel wear. This result is derived in Wickens (1965) from an analysis of the system dynamics using standard control engineering methods.
Limitation of simplified analysis
The motion of a wheel set is much more complicated than this analysis would indicate. There are additional restraining forces applied by the vehicle suspension and, at high speed, the wheel set will generate additional gyroscopic torques, which will modify the estimate of the critical speed. Conventionally, a railway vehicle has stable motion at low speeds; when it reaches high speeds, the motion becomes unstable. The main purpose of nonlinear analysis of rail vehicle system dynamics is to provide an analytical investigation of bifurcation, nonlinear lateral stability and the hunting behaviour of rail vehicles on tangent track. One such study describes the Bogoliubov method for the analysis.
Studies mostly focus on two matters: treating the vehicle body as a fixed support, and the influence of the nonlinear elements on the calculation of the hunting speed. A real railway vehicle has many more degrees of freedom and, consequently, may have more than one critical speed; it is by no means certain that the lowest is dictated by the wheelset motion. However, the analysis is instructive because it shows why hunting occurs. As the speed increases, the inertial forces become comparable with the adhesion forces. That is why the critical speed depends on the ratio of the axle load (which determines the adhesion force) to the wheelset mass (which determines the inertial forces).
Alternatively, below a certain speed, the energy which is extracted from the forward motion is insufficient to replace the energy lost by lowering the axles and the motion damps out; above this speed, the energy extracted is greater than the loss in potential energy and the amplitude builds up.
The potential energy at maximum axle yaw may be increased by including an elastic constraint on the yaw motion of the axle, so that there is a contribution arising from spring tension. Arranging wheels in bogies to increase the constraint on the yaw motion of wheelsets and applying elastic constraints to the bogie also raises the critical speed. Introducing elastic forces into the equation permits suspension designs which are limited only by the onset of gross slippage, rather than classical hunting. The penalty to be paid for the virtual elimination of hunting is a straight track, with an attendant right-of-way problem and incompatibility with legacy infrastructure.
Hunting is a dynamic problem which can be solved, in principle at least, by active feedback control, which may be adapted to the quality of track. However, the introduction of active control raises reliability and safety issues.
Shortly after the onset of hunting, gross slippage occurs and the wheel flanges impact on the rails, potentially causing damage to both.
Road–rail vehicles
Many road–rail vehicles feature independent axles and suspension systems on each rail wheel. When this is combined with the presence of road wheels on the rail it becomes difficult to use the formulae above. Historically, road–rail vehicles have their front wheels set slightly toe-in, which has been found to minimise hunting whilst the vehicle is being driven on-rail.
See also
Frictional contact mechanics
Rail adhesion
Rail profile
Speed wobble
Vehicle dynamics
Wheelset
For general methods dealing with this class of problem, see
Control engineering
References
Oscillation
Rail technologies | 0.77643 | 0.983256 | 0.763429 |
Hysteresis | Hysteresis is the dependence of the state of a system on its history. For example, a magnet may have more than one possible magnetic moment in a given magnetic field, depending on how the field changed in the past. Plots of a single component of the moment often form a loop or hysteresis curve, where there are different values of one variable depending on the direction of change of another variable. This history dependence is the basis of memory in a hard disk drive and the remanence that retains a record of the Earth's magnetic field magnitude in the past. Hysteresis occurs in ferromagnetic and ferroelectric materials, as well as in the deformation of rubber bands and shape-memory alloys and many other natural phenomena. In natural systems, it is often associated with irreversible thermodynamic change such as phase transitions and with internal friction; and dissipation is a common side effect.
Hysteresis can be found in physics, chemistry, engineering, biology, and economics. It is incorporated in many artificial systems: for example, in thermostats and Schmitt triggers, it prevents unwanted frequent switching.
Hysteresis can be a dynamic lag between an input and an output that disappears if the input is varied more slowly; this is known as rate-dependent hysteresis. However, phenomena such as the magnetic hysteresis loops are mainly rate-independent, which makes a durable memory possible.
Systems with hysteresis are nonlinear, and can be mathematically challenging to model. Some hysteretic models, such as the Preisach model (originally applied to ferromagnetism) and the Bouc–Wen model, attempt to capture general features of hysteresis; and there are also phenomenological models for particular phenomena such as the Jiles–Atherton model for ferromagnetism.
It is difficult to define hysteresis precisely. Isaak D. Mayergoyz wrote "...the very meaning of hysteresis varies from one area to another, from paper to paper and from author to author. As a result, a stringent mathematical definition of hysteresis is needed in order to avoid confusion and ambiguity."
Etymology and history
The term "hysteresis" is derived from , an Ancient Greek word meaning "deficiency" or "lagging behind". It was coined in 1881 by Sir James Alfred Ewing to describe the behaviour of magnetic materials.
Some early work on describing hysteresis in mechanical systems was performed by James Clerk Maxwell. Subsequently, hysteretic models have received significant attention in the works of Ferenc Preisach (Preisach model of hysteresis), Louis Néel and Douglas Hugh Everett in connection with magnetism and absorption. A more formal mathematical theory of systems with hysteresis was developed in the 1970s by a group of Russian mathematicians led by Mark Krasnosel'skii.
Types
Rate-dependent
One type of hysteresis is a lag between input and output. An example is a sinusoidal input that results in a sinusoidal output of the same frequency, but with a phase lag:
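A minimal sketch of such a response, with assumed symbols X for the input, Y for the output and φ for the phase lag:

\[ X(t) = X_{0}\sin(\omega t), \qquad Y(t) = Y_{0}\sin(\omega t - \varphi). \]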
Such behavior can occur in linear systems, and a more general form of response is
where the first term is the instantaneous response and the kernel of the integral is the impulse response to an impulse that occurred a given number of time units in the past. In the frequency domain, input and output are related by a complex generalized susceptibility that can be computed from this impulse response; it is mathematically equivalent to a transfer function in linear filter theory and analogue signal processing.
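A hedged sketch of this general linear-response form, with assumed symbols χ_0 for the instantaneous response and Φ(τ) for the impulse response:

\[ Y(t) = \chi_{0}\,X(t) + \int_{0}^{\infty}\Phi(\tau)\,X(t-\tau)\,d\tau. \]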
This kind of hysteresis is often referred to as rate-dependent hysteresis. If the input is reduced to zero, the output continues to respond for a finite time. This constitutes a memory of the past, but a limited one because it disappears as the output decays to zero. The phase lag depends on the frequency of the input, and goes to zero as the frequency decreases.
When rate-dependent hysteresis is due to dissipative effects like friction, it is associated with power loss.
Rate-independent
Systems with rate-independent hysteresis have a persistent memory of the past that remains after the transients have died out. The future development of such a system depends on the history of states visited, but does not fade as the events recede into the past. If an input variable cycles from one value to another and back again, the output may initially have one value but a different value upon return. The value of the output depends on the path of values that the input passes through, but not on the speed at which it traverses the path. Many authors restrict the term hysteresis to mean only rate-independent hysteresis. Hysteresis effects can be characterized using the Preisach model and the generalized Prandtl−Ishlinskii model.
In engineering
Control systems
In control systems, hysteresis can be used to filter signals so that the output reacts less rapidly than it otherwise would by taking recent system history into account. For example, a thermostat controlling a heater may switch the heater on when the temperature drops below A, but not turn it off until the temperature rises above B. (For instance, if one wishes to maintain a temperature of 20 °C then one might set the thermostat to turn the heater on when the temperature drops to below 18 °C and off when the temperature exceeds 22 °C).
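A minimal sketch of such a thermostat rule in Python, assuming the illustrative set-points of 18 °C and 22 °C mentioned above (the function name and structure are hypothetical, not from any particular controller):

```python
def update_heater(temperature_c, heater_on, on_below=18.0, off_above=22.0):
    """Return the new heater state given the current temperature and the previous state."""
    if temperature_c < on_below:
        return True        # too cold: switch the heater on
    if temperature_c > off_above:
        return False       # warm enough: switch the heater off
    return heater_on       # inside the deadband: keep the previous state

# The same temperature (20 C) can give either output, depending on the history.
state = False
for t in (21.0, 19.0, 17.0, 20.0, 23.0, 20.0):
    state = update_heater(t, state)
    print(t, "C -> heater", "on" if state else "off")
```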
Similarly, a pressure switch can be designed to exhibit hysteresis, with pressure set-points substituted for temperature thresholds.
Electronic circuits
Often, some amount of hysteresis is intentionally added to an electronic circuit to prevent unwanted rapid switching. This and similar techniques are used to compensate for contact bounce in switches, or noise in an electrical signal.
A Schmitt trigger is a simple electronic circuit that exhibits this property.
A latching relay uses a solenoid to actuate a ratcheting mechanism that keeps the relay closed even if power to the relay is terminated.
Some positive feedback from the output to one input of a comparator can increase the natural hysteresis (a function of its gain) it exhibits.
Hysteresis is essential to the workings of some memristors (circuit components which "remember" changes in the current passing through them by changing their resistance).
Hysteresis can be used when connecting arrays of elements such as nanoelectronics, electrochrome cells and memory effect devices using passive matrix addressing. Shortcuts are made between adjacent components (see crosstalk) and the hysteresis helps to keep the components in a particular state while the other components change states. Thus, all rows can be addressed at the same time instead of individually.
In the field of audio electronics, a noise gate often implements hysteresis intentionally to prevent the gate from "chattering" when signals close to its threshold are applied.
User interface design
A hysteresis is sometimes intentionally added to computer algorithms. The field of user interface design has borrowed the term hysteresis to refer to times when the state of the user interface intentionally lags behind the apparent user input. For example, a menu that was drawn in response to a mouse-over event may remain on-screen for a brief moment after the mouse has moved out of the trigger region and the menu region. This allows the user to move the mouse directly to an item on the menu, even if part of that direct mouse path is outside of both the trigger region and the menu region. For instance, right-clicking on the desktop in most Windows interfaces will create a menu that exhibits this behavior.
Aerodynamics
In aerodynamics, hysteresis can be observed when decreasing the angle of attack of a wing after stall, regarding the lift and drag coefficients. The angle of attack at which the flow on top of the wing reattaches is generally lower than the angle of attack at which the flow separates during the increase of the angle of attack.
Hydraulics
Hysteresis can be observed in the stage-flow relationship of a river during rapidly changing conditions such as passing of a flood wave. It is most pronounced in low gradient streams with steep leading edge hydrographs.
Backlash
Moving parts within machines, such as the components of a gear train, normally have a small gap between them, to allow movement and lubrication. As a consequence of this gap, any reversal in direction of a drive part will not be passed on immediately to the driven part. This unwanted delay is normally kept as small as practicable, and is usually called backlash. The amount of backlash will increase with time as the surfaces of moving parts wear.
In mechanics
Elastic hysteresis
In the elastic hysteresis of rubber, the area in the centre of a hysteresis loop is the energy dissipated due to material internal friction.
Elastic hysteresis was one of the first types of hysteresis to be examined.
The effect can be demonstrated using a rubber band with weights attached to it. If the top of a rubber band is hung on a hook and small weights are attached to the bottom of the band one at a time, it will stretch and get longer. As more weights are loaded onto it, the band will continue to stretch because the force the weights are exerting on the band is increasing. When each weight is taken off, or unloaded, the band will contract as the force is reduced. As the weights are taken off, each weight that produced a specific length as it was loaded onto the band now contracts less, resulting in a slightly longer length as it is unloaded. This is because the band does not obey Hooke's law perfectly. The hysteresis loop of an idealized rubber band is shown in the figure.
In terms of force, the rubber band was harder to stretch when it was being loaded than when it was being unloaded. In terms of time, when the band is unloaded, the effect (the length) lagged behind the cause (the force of the weights) because the length has not yet reached the value it had for the same weight during the loading part of the cycle. In terms of energy, more energy was required during the loading than the unloading, the excess energy being dissipated as thermal energy.
Elastic hysteresis is more pronounced when the loading and unloading is done quickly than when it is done slowly. Some materials such as hard metals don't show elastic hysteresis under a moderate load, whereas other hard materials like granite and marble do. Materials such as rubber exhibit a high degree of elastic hysteresis.
When the intrinsic hysteresis of rubber is being measured, the material can be considered to behave like a gas. When a rubber band is stretched it heats up, and if it is suddenly released, it cools down perceptibly. These effects correspond to a large hysteresis from the thermal exchange with the environment and a smaller hysteresis due to internal friction within the rubber. This proper, intrinsic hysteresis can be measured only if the rubber band is thermally isolated.
Small vehicle suspensions using rubber (or other elastomers) can achieve the dual function of springing and damping because rubber, unlike metal springs, has pronounced hysteresis and does not return all the absorbed compression energy on the rebound. Mountain bikes have made use of elastomer suspension, as did the original Mini car.
The primary cause of rolling resistance when a body (such as a ball, tire, or wheel) rolls on a surface is hysteresis. This is attributed to the viscoelastic characteristics of the material of the rolling body.
Contact angle hysteresis
The contact angle formed between a liquid and solid phase will exhibit a range of contact angles that are possible. There are two common methods for measuring this range of contact angles. The first method is referred to as the tilting base method. Once a drop is dispensed on the surface with the surface level, the surface is then tilted from 0° to 90°. As the drop is tilted, the downhill side will be in a state of imminent wetting while the uphill side will be in a state of imminent dewetting. As the tilt increases the downhill contact angle will increase and represents the advancing contact angle while the uphill side will decrease; this is the receding contact angle. The values for these angles just prior to the drop releasing will typically represent the advancing and receding contact angles. The difference between these two angles is the contact angle hysteresis.
The second method is often referred to as the add/remove volume method. When the maximum liquid volume is removed from the drop without the interfacial area decreasing the receding contact angle is thus measured. When volume is added to the maximum before the interfacial area increases, this is the advancing contact angle. As with the tilt method, the difference between the advancing and receding contact angles is the contact angle hysteresis. Most researchers prefer the tilt method; the add/remove method requires that a tip or needle stay embedded in the drop which can affect the accuracy of the values, especially the receding contact angle.
Bubble shape hysteresis
The equilibrium shapes of bubbles expanding and contracting on capillaries (blunt needles) can exhibit hysteresis depending on the relative magnitude of the maximum capillary pressure to ambient pressure, and the relative magnitude of the bubble volume at the maximum capillary pressure to the dead volume in the system. The bubble shape hysteresis is a consequence of gas compressibility, which causes the bubbles to behave differently across expansion and contraction. During expansion, bubbles undergo large non-equilibrium jumps in volume, while during contraction the bubbles are more stable and undergo a relatively smaller jump in volume, resulting in an asymmetry across expansion and contraction. The bubble shape hysteresis is qualitatively similar to the adsorption hysteresis, and as in the contact angle hysteresis, the interfacial properties play an important role in bubble shape hysteresis.
The existence of the bubble shape hysteresis has important consequences in interfacial rheology experiments involving bubbles. As a result of the hysteresis, not all sizes of the bubbles can be formed on a capillary. Further the gas compressibility causing the hysteresis leads to unintended complications in the phase relation between the applied changes in interfacial area to the expected interfacial stresses. These difficulties can be avoided by designing experimental systems to avoid the bubble shape hysteresis.
Adsorption hysteresis
Hysteresis can also occur during physical adsorption processes. In this type of hysteresis, the quantity adsorbed is different when gas is being added than it is when being removed. The specific causes of adsorption hysteresis are still an active area of research, but it is linked to differences in the nucleation and evaporation mechanisms inside mesopores. These mechanisms are further complicated by effects such as cavitation and pore blocking.
In physical adsorption, hysteresis is evidence of mesoporosity; indeed, the definition of mesopores (2–50 nm) is associated with the appearance (50 nm) and disappearance (2 nm) of mesoporosity in nitrogen adsorption isotherms as a function of Kelvin radius. An adsorption isotherm showing hysteresis is said to be of Type IV (for a wetting adsorbate) or Type V (for a non-wetting adsorbate), and hysteresis loops themselves are classified according to how symmetric the loop is. Adsorption hysteresis loops also have the unusual property that it is possible to scan within a hysteresis loop by reversing the direction of adsorption while on a point on the loop. The resulting scans are called "crossing", "converging", or "returning", depending on the shape of the isotherm at this point.
Matric potential hysteresis
The relationship between matric water potential and water content is the basis of the water retention curve. Matric potential measurements (Ψm) are converted to volumetric water content (θ) measurements based on a site or soil specific calibration curve. Hysteresis is a source of water content measurement error. Matric potential hysteresis arises from differences in wetting behaviour causing dry medium to re-wet; that is, it depends on the saturation history of the porous medium. Hysteretic behaviour means that, for example, at a given matric potential (Ψm), the volumetric water content (θ) of a fine sandy soil matrix could be anything between 8% and 25%.
Tensiometers are directly influenced by this type of hysteresis. Two other types of sensors used to measure soil water matric potential are also influenced by hysteresis effects within the sensor itself. Resistance blocks, both nylon and gypsum based, measure matric potential as a function of electrical resistance. The relation between the sensor's electrical resistance and sensor matric potential is hysteretic. Thermocouples measure matric potential as a function of heat dissipation. Hysteresis occurs because measured heat dissipation depends on sensor water content, and the sensor water content–matric potential relationship is hysteretic. Only desorption curves are usually measured during calibration of soil moisture sensors. Despite the fact that it can be a source of significant error, the sensor specific effect of hysteresis is generally ignored.
In materials
Magnetic hysteresis
When an external magnetic field is applied to a ferromagnetic material such as iron, the atomic domains align themselves with it. Even when the field is removed, part of the alignment will be retained: the material has become magnetized. Once magnetized, the magnet will stay magnetized indefinitely. To demagnetize it requires heat or a magnetic field in the opposite direction. This is the effect that provides the element of memory in a hard disk drive.
The relationship between field strength and magnetization is not linear in such materials. If a magnet is demagnetized and the relationship between and is plotted for increasing levels of field strength, follows the initial magnetization curve. This curve increases rapidly at first and then approaches an asymptote called magnetic saturation. If the magnetic field is now reduced monotonically, follows a different curve. At zero field strength, the magnetization is offset from the origin by an amount called the remanence. If the relationship is plotted for all strengths of applied magnetic field the result is a hysteresis loop called the main loop. The width of the middle section is twice the coercivity of the material.
A closer look at a magnetization curve generally reveals a series of small, random jumps in magnetization called Barkhausen jumps. This effect is due to crystallographic defects such as dislocations.
Magnetic hysteresis loops are not exclusive to materials with ferromagnetic ordering. Other magnetic orderings, such as spin glass ordering, also exhibit this phenomenon.
Physical origin
The phenomenon of hysteresis in ferromagnetic materials is the result of two effects: rotation of magnetization and changes in size or number of magnetic domains. In general, the magnetization varies (in direction but not magnitude) across a magnet, but in sufficiently small magnets, it does not. In these single-domain magnets, the magnetization responds to a magnetic field by rotating. Single-domain magnets are used wherever a strong, stable magnetization is needed (for example, magnetic recording).
Larger magnets are divided into regions called domains. Across each domain, the magnetization does not vary; but between domains are relatively thin domain walls in which the direction of magnetization rotates from the direction of one domain to another. If the magnetic field changes, the walls move, changing the relative sizes of the domains. Because the domains are not magnetized in the same direction, the magnetic moment per unit volume is smaller than it would be in a single-domain magnet; but domain walls involve rotation of only a small part of the magnetization, so it is much easier to change the magnetic moment. The magnetization can also change by addition or subtraction of domains (called nucleation and denucleation).
Magnetic hysteresis models
The best-known empirical models of hysteresis are the Preisach and Jiles–Atherton models. These models allow accurate modeling of the hysteresis loop and are widely used in industry. However, these models lose the connection with thermodynamics, and energy consistency is not ensured. A more recent model, with a more consistent thermodynamic foundation, is the vectorial incremental nonconservative consistent hysteresis (VINCH) model of Lavet et al. (2011).
Applications
There are a great variety of applications of the hysteresis in ferromagnets. Many of these make use of their ability to retain a memory, for example magnetic tape, hard disks, and credit cards. In these applications, hard magnets (high-coercivity materials, such as iron oxide) are desirable, such that as much energy is absorbed as possible during the write operation and the resultant magnetized information is not easily erased.
On the other hand, magnetically soft (low coercivity) iron is used for the cores in electromagnets. The low coercivity minimizes the energy loss associated with hysteresis, as the magnetic field periodically reverses in the presence of an alternating current. The low energy loss during a hysteresis loop is the reason why soft iron is used for transformer cores and electric motors.
Electrical hysteresis
Electrical hysteresis typically occurs in ferroelectric material, where domains of polarization contribute to the total polarization. Polarization is the electrical dipole moment (either C·m⁻² or C·m). The mechanism, an organization of the polarization into domains, is similar to that of magnetic hysteresis.
Liquid–solid-phase transitions
Hysteresis manifests itself in state transitions when melting temperature and freezing temperature do not agree. For example, agar melts at 85 °C but does not solidify until it is cooled to around 40 °C. This is to say that once agar is melted at 85 °C, it retains a liquid state until cooled to 40 °C. Therefore, from the temperatures of 40 to 85 °C, agar can be either solid or liquid, depending on which state it was before.
In biology
Cell biology and genetics
Hysteresis in cell biology often follows bistable systems where the same input state can lead to two different, stable outputs. Where bistability can lead to digital, switch-like outputs from the continuous inputs of chemical concentrations and activities, hysteresis makes these systems more resistant to noise. These systems are often characterized by higher values of the input required to switch into a particular state as compared to the input required to stay in the state, allowing for a transition that is not continuously reversible, and thus less susceptible to noise.
Cells undergoing cell division exhibit hysteresis in that it takes a higher concentration of cyclins to switch them from G2 phase into mitosis than to stay in mitosis once begun.
Biochemical systems can also show hysteresis-like output when slowly varying states that are not directly monitored are involved, as in the case of the cell cycle arrest in yeast exposed to mating pheromone. Here, the duration of cell cycle arrest depends not only on the final level of input Fus3, but also on the previously achieved Fus3 levels. This effect is achieved due to the slower time scales involved in the transcription of intermediate Far1, such that the total Far1 activity reaches its equilibrium value slowly, and for transient changes in Fus3 concentration, the response of the system depends on the Far1 concentration achieved with the transient value. Experiments in this type of hysteresis benefit from the ability to change the concentration of the inputs with time. The mechanisms are often elucidated by allowing independent control of the concentration of the key intermediate, for instance, by using an inducible promoter.
Darlington in his classic works on genetics discussed hysteresis of the chromosomes, by which he meant "failure of the external form of the chromosomes to respond immediately to the internal stresses due to changes in their molecular spiral", as they lie in a somewhat rigid medium in the limited space of the cell nucleus.
In developmental biology, cell type diversity is regulated by long range-acting signaling molecules called morphogens that pattern uniform pools of cells in a concentration- and time-dependent manner. The morphogen sonic hedgehog (Shh), for example, acts on limb bud and neural progenitors to induce expression of a set of homeodomain-containing transcription factors to subdivide these tissues into distinct domains. It has been shown that these tissues have a 'memory' of previous exposure to Shh.
In neural tissue, this hysteresis is regulated by a homeodomain (HD) feedback circuit that amplifies Shh signaling. In this circuit, expression of Gli transcription factors, the executors of the Shh pathway, is suppressed. Glis are processed to repressor forms (GliR) in the absence of Shh, but in the presence of Shh, a proportion of Glis are maintained as full-length proteins allowed to translocate to the nucleus, where they act as activators (GliA) of transcription. By reducing Gli expression then, the HD transcription factors reduce the total amount of Gli (GliT), so a higher proportion of GliT can be stabilized as GliA for the same concentration of Shh.
Immunology
There is some evidence that T cells exhibit hysteresis in that it takes a lower signal threshold to activate T cells that have been previously activated. Ras GTPase activation is required for downstream effector functions of activated T cells. Triggering of the T cell receptor induces high levels of Ras activation, which results in higher levels of GTP-bound (active) Ras at the cell surface. Since higher levels of active Ras have accumulated at the cell surface in T cells that have been previously stimulated by strong engagement of the T cell receptor, weaker subsequent T cell receptor signals received shortly afterwards will deliver the same level of activation due to the presence of higher levels of already activated Ras as compared to a naïve cell.
Neuroscience
The property by which some neurons do not return to their basal conditions from a stimulated condition immediately after removal of the stimulus is an example of hysteresis.
Neuropsychology
Neuropsychology, in exploring the neural correlates of consciousness, interfaces with neuroscience, although the complexity of the central nervous system is a challenge to its study (that is, its operation resists easy reduction). Context-dependent memory and state-dependent memory show hysteretic aspects of neurocognition.
Respiratory physiology
Lung hysteresis is evident when observing the compliance of a lung on inspiration versus expiration. The difference in compliance (Δvolume/Δpressure) is due to the additional energy required to overcome surface tension forces during inspiration to recruit and inflate additional alveoli.
The transpulmonary pressure vs volume curve of inhalation is different from the pressure vs volume curve of exhalation, the difference being described as hysteresis. Lung volume at any given pressure during inhalation is less than the lung volume at any given pressure during exhalation.
Voice and speech physiology
A hysteresis effect may be observed in voicing onset versus offset. The threshold value of the subglottal pressure required to start the vocal fold vibration is lower than the threshold value at which the vibration stops, when other parameters are kept constant. In utterances of vowel-voiceless consonant-vowel sequences during speech, the intraoral pressure is lower at the voice onset of the second vowel compared to the voice offset of the first vowel, the oral airflow is lower, the transglottal pressure is larger and the glottal width is smaller.
Ecology and epidemiology
Hysteresis is a commonly encountered phenomenon in ecology and epidemiology, where the observed equilibrium of a system can not be predicted solely based on environmental variables, but also requires knowledge of the system's past history. Notable examples include the theory of spruce budworm outbreaks and behavioral-effects on disease transmission.
It is commonly examined in relation to critical transitions between ecosystem or community types in which dominant competitors or entire landscapes can change in a largely irreversible fashion.
In ocean and climate science
Complex ocean and climate models rely on the principle.
In economics
Economic systems can exhibit hysteresis. For example, export performance is subject to strong hysteresis effects: because of the fixed transportation costs it may take a big push to start a country's exports, but once the transition is made, not much may be required to keep them going.
When some negative shock reduces employment in a company or industry, fewer employed workers then remain. As the employed workers usually have the power to set wages, their reduced number incentivizes them to bargain for even higher wages when the economy recovers, instead of letting the wage settle at the equilibrium level, where the supply and demand of workers would match. This causes hysteresis: the unemployment becomes permanently higher after negative shocks.
Permanently higher unemployment
The idea of hysteresis is used extensively in the area of labor economics, specifically with reference to the unemployment rate. According to theories based on hysteresis, severe economic downturns (recession) and/or persistent stagnation (slow demand growth, usually after a recession) cause unemployed individuals to lose their job skills (commonly developed on the job) or to find that their skills have become obsolete, or become demotivated, disillusioned or depressed or lose job-seeking skills. In addition, employers may use time spent in unemployment as a screening tool, i.e., to weed out less desired employees in hiring decisions. Then, in times of an economic upturn, recovery, or "boom", the affected workers will not share in the prosperity, remaining unemployed for long periods (e.g., over 52 weeks). This makes unemployment "structural", i.e., extremely difficult to reduce simply by increasing the aggregate demand for products and labor without causing increased inflation. That is, it is possible that a ratchet effect in unemployment rates exists, so a short-term rise in unemployment rates tends to persist. For example, traditional anti-inflationary policy (the use of recession to fight inflation) leads to a permanently higher "natural" rate of unemployment (more scientifically known as the NAIRU). This occurs first because inflationary expectations are "sticky" downward due to wage and price rigidities (and so adapt slowly over time rather than being approximately correct as in theories of rational expectations) and second because labor markets do not clear instantly in response to unemployment.
The existence of hysteresis has been put forward as a possible explanation for the persistently high unemployment of many economies in the 1990s. Hysteresis has been invoked by Olivier Blanchard among others to explain the differences in long run unemployment rates between Europe and the United States. Labor market reform (usually meaning institutional change promoting more flexible wages, firing, and hiring) or strong demand-side economic growth may not therefore reduce this pool of long-term unemployed. Thus, specific targeted training programs are presented as a possible policy solution. However, the hysteresis hypothesis suggests such training programs are aided by persistently high demand for products (perhaps with incomes policies to avoid increased inflation), which reduces the transition costs out of unemployment and into paid employment.
Models
Hysteretic models are mathematical models capable of simulating complex nonlinear behavior (hysteresis) characterizing mechanical systems and materials used in different fields of engineering, such as aerospace, civil, and mechanical engineering. Some examples of mechanical systems and materials having hysteretic behavior are:
materials, such as steel, reinforced concrete, wood;
structural elements, such as steel, reinforced concrete, or wood joints;
devices, such as seismic isolators and dampers.
Each subject that involves hysteresis has models that are specific to the subject. In addition, there are hysteretic models that capture general features of many systems with hysteresis. An example is the Preisach model of hysteresis, which represents a hysteresis nonlinearity as a linear superposition of square loops called non-ideal relays. Many complex models of hysteresis arise from the simple parallel connection, or superposition, of elementary carriers of hysteresis termed hysterons.
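As an illustration of the hysteron idea (not a full Preisach implementation), a single non-ideal relay with assumed thresholds alpha < beta can be sketched in Python; a Preisach model would superpose many such relays with different thresholds and weights:

```python
def relay(u, previous_output, alpha, beta):
    """Non-ideal relay hysteron: +1 above beta, -1 below alpha,
    otherwise it remembers its previous output (requires alpha < beta)."""
    if u >= beta:
        return +1
    if u <= alpha:
        return -1
    return previous_output

# Sweeping the input up and then down traces different output branches.
out = -1
for u in (0.0, 0.5, 1.0, 0.5, 0.0, -0.5, -1.0, -0.5, 0.0):
    out = relay(u, out, alpha=-0.6, beta=0.6)
    print(u, "->", out)
```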
A simple and intuitive parametric description of various hysteresis loops may be found in the Lapshin model. Along with the smooth loops, substitution of trapezoidal, triangular or rectangular pulses instead of the harmonic functions allows piecewise-linear hysteresis loops frequently used in discrete automatics to be built in the model. There are implementations of the hysteresis loop model in Mathcad and in R programming language.
The Bouc–Wen model of hysteresis is often used to describe non-linear hysteretic systems. It was introduced by Bouc and extended by Wen, who demonstrated its versatility by producing a variety of hysteretic patterns. This model is able to capture, in analytical form, a range of shapes of hysteretic cycles which match the behaviour of a wide class of hysteretic systems; therefore, given its versatility and mathematical tractability, the Bouc–Wen model has quickly gained popularity and has been extended and applied to a wide variety of engineering problems, including multi-degree-of-freedom (MDOF) systems, buildings, frames, bidirectional and torsional response of hysteretic systems, two- and three-dimensional continua, and soil liquefaction, among others. The Bouc–Wen model and its variants/extensions have been used in applications of structural control, in particular in the modeling of the behaviour of magnetorheological dampers, base isolation devices for buildings and other kinds of damping devices; it has also been used in the modelling and analysis of structures built of reinforced concrete, steel, masonry and timber. The most important extension of the Bouc–Wen model was carried out by Baber and Noori and later by Noori and co-workers. That extended model, named BWBN, can reproduce the complex shear pinching or slip-lock phenomenon that the earlier model could not reproduce. The BWBN model has been used in a wide spectrum of applications, and implementations are available in software such as OpenSees.
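For reference, a commonly quoted single-degree-of-freedom form of the Bouc–Wen model is sketched below; the symbols are assumed rather than taken from the text, with x the displacement, z an internal hysteretic variable, k a stiffness, a the ratio of post- to pre-yield stiffness, and A, β, γ and n shape parameters that control the loop:

\[ F(t) = a\,k\,x(t) + (1-a)\,k\,z(t), \qquad \dot{z} = A\,\dot{x} - \beta\,|\dot{x}|\,|z|^{n-1}z - \gamma\,\dot{x}\,|z|^{n}. \]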
Hysteretic models may have a generalized displacement as input variable and a generalized force as output variable, or vice versa. In particular, in rate-independent hysteretic models, the output variable does not depend on the rate of variation of the input one.
Rate-independent hysteretic models can be classified into four different categories depending on the type of equation that needs to be solved to compute the output variable:
algebraic models
transcendental models
differential models
integral models
List of models
Some notable hysteretic models are listed below with their associated fields.
Bean's critical state model (magnetism)
Bouc–Wen model (structural engineering)
Ising model (magnetism)
Jiles–Atherton model (magnetism)
Novak–Tyson model (cell-cycle control)
Preisach model (magnetism)
Stoner–Wohlfarth model (magnetism)
Energy
When hysteresis occurs with extensive and intensive variables, the work done on the system is the area under the hysteresis graph.
See also
Backlash (engineering)
Bean's critical state model
Black box
Deadband
Fuzzy control system
Hysteresivity
Markov property
Memristor
Path dependence
Path dependence (physics)
Remanence
References
Further reading
Originally published as Volume III/3 of Handbuch der Physik in 1965.
External links
Overview of contact angle Hysteresis
Preisach model of hysteresis – Matlab codes developed by Zs. Szabó
Hysteresis
What's hysteresis?
Dynamical systems with hysteresis (interactive web page)
Magnetization reversal app (coherent rotation)
Elastic hysteresis and rubber bands
Magnetic ordering
Materials science
Nonlinear systems
Dynamical systems | 0.764454 | 0.998648 | 0.76342 |
Hyperspace | In science fiction, hyperspace (also known as nulspace, subspace, overspace, jumpspace and similar terms) is a concept relating to higher dimensions as well as parallel universes and a faster-than-light (FTL) method of interstellar travel. In its original meaning, the term hyperspace was simply a synonym for higher-dimensional space. This usage was most common in 19th-century textbooks and is still occasionally found in academic and popular science texts, for example, Hyperspace (1994). Its science fiction usage originated in the magazine Amazing Stories Quarterly in 1931 and within several decades it became one of the most popular tropes of science fiction, popularized by its use in the works of authors such as Isaac Asimov and E. C. Tubb, and media franchises such as Star Wars.
One of the main reasons for the concept's popularity in science fiction is the impossibility of faster-than-light travel in ordinary space, which hyperspace allows writers to bypass. In most works, hyperspace is described as a higher dimension through which the shape of our three-dimensional space can be distorted to bring distant points close to each other, similar to the concept of a wormhole; or a shortcut-enabling parallel universe that can be travelled through. Usually it can be traversed – the process often known as "jumping" – through a gadget known as a "hyperdrive"; rubber science is sometimes used to explain it. Many works rely on hyperspace as a convenient background tool enabling FTL travel necessary for the plot, with a small minority making it a central element in their storytelling. While most often used in the context of interstellar travel, a minority of works focus on other plot points, such as the inhabitants of hyperspace, hyperspace as an energy source, or even hyperspace as the afterlife.
Concept
The basic premise of hyperspace is that vast distances through space can be traversed quickly by taking a kind of shortcut. There are two common models used to explain this shortcut: folding and mapping. In the folding model, hyperspace is a place of higher dimension through which the shape of our three-dimensional space can be distorted to bring distant points close to each other; a common analogy popularized by Robert A. Heinlein's Starman Jones (1953) is that of crumpling two-dimensional paper or cloth in the third dimension, thus bringing points on its surface into contact. In the mapping model, hyperspace is a parallel universe much smaller than ours (but not necessarily the same shape), which can be entered at a point corresponding to one location in ordinary space and exited at a different point corresponding to another location after travelling a much shorter distance than would be necessary in ordinary space. The Science in Science Fiction compares it to being able to step onto a world map at one's current location, walking across the map to a different continent, and then stepping off the map to find oneself at the new location—noting that the hyperspace "map" could have a significantly more complicated shape, as in Bob Shaw's Night Walk (1967).
Hyperspace is generally seen as a fictional concept not compatible with present-day scientific theories, particularly the theory of relativity. Some science fiction writers attempted quasi-scientific rubber science explanations of this concept. For others, however, it is just a convenient MacGuffin enabling the faster-than-light travel necessary for their story without violating the prohibitions against FTL travel in ordinary space imposed by known laws of physics.
Terminology
The means of accessing hyperspace is often called a "hyperdrive", and navigating hyperspace is typically referred to as "jumping" (as in "the ship will now jump through hyperspace").
A number of related terms (such as imaginary space, Jarnell intersplit, jumpspace, megaflow, N-Space, nulspace, slipstream, overspace, Q-space, subspace, and tau-space) have been used by various writers, although none have gained recognition to rival that of hyperspace. Some works use multiple synonyms; for example, in the Star Trek franchise, the term hyperspace itself is only used briefly in a single 1988 episode ("Coming of Age") of Star Trek: The Next Generation, while a related set of terms – such as subspace, transwarp, and proto-warp – are employed much more often, and most of the travel takes place through the use of a warp drive. Hyperspace travel has also been discussed in the context of wormholes and teleportation, which some writers consider to be similar whereas others view them as separate concepts.
History
Emerging in the early 20th century, within several decades hyperspace became a common element of interstellar space travel stories in science fiction. Kirk Meadowcroft's "The Invisible Bubble" (1928) and John Campbell's Islands of Space (1931) feature the earliest known references to hyperspace, with Campbell, whose story was published in the science fiction magazine Amazing Stories Quarterly, likely being the first writer to use this term in the context of space travel. According to the Historical Dictionary of Science Fiction, the earliest known use of the word "hyper-drive" comes from a preview of Murray Leinster's story "The Manless Worlds" in Thrilling Wonder Stories 1946.
Another early work featuring hyperspace was Nelson Bond's The Scientific Pioneer Returns (1940). Isaac Asimov's Foundation series, first published in Astounding starting in 1942, featured a Galactic Empire traversed via hyperspace using a "hyperatomic drive". In Foundation (1951), hyperspace is described as an "...unimaginable region that was neither space nor time, matter nor energy, something nor nothing, one could traverse the length of the Galaxy in the interval between two neighboring instants of time." E. C. Tubb has been credited with playing an important role in the development of hyperspace lore, writing a number of space operas in the early 1950s in which space travel occurs through that medium. He was also one of the first writers to treat hyperspace as a central part of the plot rather than a convenient background gadget that merely enables faster-than-light space travel.
In 1963, Philip Harbottle called the concept of hyperspace "a fixture" of the science fiction genre, and in 1977 Brian Ash wrote in The Visual Encyclopedia of Science Fiction that it had become the most popular of all faster-than-light methods of travel. The concept would subsequently be further popularized through its use in the Star Wars franchise.
In the 1974 film Dark Star, special effects designer Dan O'Bannon created a visual effect to depict going into hyperspace wherein the stars in space appear to move rapidly toward the camera. This is considered to be the first depiction in cinema history of a ship making the jump into hyperspace. The same effect was later employed in Star Wars (1977) and the "star streaks" are considered one of the visual "staples" of the Star Wars franchise.
Characteristics
Hyperspace is typically described as chaotic and confusing to human senses; often at least unpleasant – transitions to or from hyperspace can cause symptoms such as nausea, for example – and in some cases even hypnotic or dangerous to one's sanity. Visually, hyperspace is often left to the reader's imagination, or depicted as "a swirling gray mist". In some works, it is dark. Exceptions exist; for example, John Russel Fearn's Waters of Eternity (1953) features hyperspace that allows observation of regular space from within.
Many stories feature hyperspace as a dangerous, treacherous place where straying from a preset course can be disastrous. In Frederick Pohl's The Mapmakers (1955), navigational errors and the perils of hyperspace are one of the main plot-driving elements, and in K. Houston Brunner's Fiery Pillar (1955), a ship re-emerges within Earth, causing a catastrophic explosion. In some works, travelling or navigating hyperspace requires not only specialized equipment, but physical or psychological modifications of passengers or at least navigators, as seen in Frank Herbert's Dune (1965), Michael Moorcock's The Sundered Worlds (1966), Vonda McIntyre's Aztecs (1977), and David Brin's The Warm Space (1985).
While generally associated with science fiction, hyperspace-like concepts exist in some works of fantasy, particularly ones which involve movement between different worlds or dimensions. Such travel, usually done through portals rather than vehicles, is usually explained through the existence of magic.
Use
While mainly designed as means of fast space travel, occasionally, some writers have used the hyperspace concept in more imaginative ways, or as a central element of the story. In Arthur C. Clarke's "Technical Error" (1950), a man is laterally reversed by a brief accidental encounter with "hyperspace". In Robert A. Heinlein's Glory Road (1963) and Robert Silverberg's "Nightwings" (1968), it is used for storage. In George R.R. Martin's FTA (1974) hyperspace travel takes longer than in regular space, and in John E. Stith's Redshift Rendezvous (1990), the twist is that the relativistic effects within it appear at lower velocities. Hyperspace is generally unpopulated, save for the space-faring travellers. Early exceptions include Tubb's Dynasty of Doom (1953), Fearn's Waters of Eternity (1953) and Christopher Grimm's Someone to Watch Over Me (1959), which feature denizens of hyperspace. In The Mystery of Element 117 (1949) by Milton Smith, a window is opened into a new "hyperplane of hyperspace" containing those who have already died on Earth, and similarly, in Bob Shaw's The Palace of Eternity (1969), hyperspace is a form of afterlife, where human minds and memories reside after death. In some works, hyperspace is a source of extremely dangerous energy, threatening to destroy the entire world if mishandled (for instance Eando Binder's The Time Contractor from 1937 or Alfred Bester's "The Push of a Finger" from 1942). The concept of hyperspace travel, or space folding, can be used outside space travel as well, for example in Stephen King's short story "Mrs. Todd's Shortcut" it is a means for an elderly lady to take a shortcut while travelling between two cities.
In many stories, a starship cannot enter or leave hyperspace too close to a large concentration of mass, such as a planet or star; this means that hyperspace can only be used after a starship gets to the outside edge of a solar system, so that it must use other means of propulsion to get to and from planets. Other stories require a very large expenditure of energy in order to open a link (sometimes called a jump point) between hyperspace and regular space; this effectively limits access to hyperspace to very large starships, or to large stationary jump gates that can open jump points for smaller vessels. Examples include the "jump" technology in Babylon 5 and the star gate in Arthur C. Clarke's 2001: A Space Odyssey (1968). Just like with the very concept of hyperspace, the reasons given for such restrictions are usually technobabble, but their existence can be an important plot device. Science fiction author Larry Niven published his opinions to that effect in N-Space. According to him, an unrestricted FTL technology would give no limits to what heroes and villains could do. Limiting the places a ship can appear in, or making them more predictable, means that they will meet each other most often around contested planets or space stations, allowing for narratively satisfying battles or other encounters. On the other hand, a less restricted hyperdrive may also allow for dramatic escapes as the pilot "jumps" to hyperspace in the midst of battle to avoid destruction. In 1999 science fiction author James P. Hogan wrote that hyperspace is often treated as a plot-enabling gadget rather than as a fascinating, world-changing item, and that there are next to no works that discuss how hyperspace has been discovered and how such discovery subsequently changed the world.
See also
Minkowski space
Teleportation in fiction
Wormholes in fiction
Warp (video games)
Notes
References
Further reading
External links
Hyperspace by Curtis Saxton at Star Wars Technical Commentaries
Who Invented Hyperspace? Hyperspace in Science Fiction by Sten Odenwald at Astronomy Cafe
Historical Dictionary of Science Fiction entry for hyperspace
Fiction about faster-than-light travel
Fictional dimensions
Science fiction themes
Space
Fiction about teleportation
Sackur–Tetrode equation
The Sackur–Tetrode equation is an expression for the entropy of a monatomic ideal gas.
It is named for Hugo Martin Tetrode (1895–1931) and Otto Sackur (1880–1914), who developed it independently as a solution of Boltzmann's gas statistics and entropy equations, at about the same time in 1912.
Formula
The Sackur–Tetrode equation expresses the entropy S of a monatomic ideal gas in terms of its thermodynamic state—specifically, its volume V, internal energy U, and the number of particles N:

S = k_{\rm B} N \left( \ln\!\left[ \frac{V}{N} \left( \frac{4\pi m}{3h^{2}} \frac{U}{N} \right)^{3/2} \right] + \frac{5}{2} \right)

where k_{\rm B} is the Boltzmann constant, m is the mass of a gas particle and h is the Planck constant.
The equation can also be expressed in terms of the thermal wavelength Λ:

S = k_{\rm B} N \left[ \ln\!\left( \frac{V}{N \Lambda^{3}} \right) + \frac{5}{2} \right]
For a derivation of the Sackur–Tetrode equation, see the Gibbs paradox. For the constraints placed upon the entropy of an ideal gas by thermodynamics alone, see the ideal gas article.
The above expressions assume that the gas is in the classical regime and is described by Maxwell–Boltzmann statistics (with "correct Boltzmann counting"). From the definition of the thermal wavelength, this means the Sackur–Tetrode equation is valid only when

\frac{V}{N \Lambda^{3}} \gg 1,

that is, when the average volume available to each particle is much larger than the cube of the thermal wavelength.
The entropy predicted by the Sackur–Tetrode equation approaches negative infinity as the temperature approaches zero.
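As a concrete illustration of the formula, the following Python sketch evaluates the Sackur–Tetrode entropy for one mole of argon at room temperature and standard pressure; argon and the rounded conditions are assumptions chosen purely for the example, and the constants are typed in by hand rather than imported from a physics library.

```python
import math

# CODATA 2018 constants
k_B = 1.380649e-23      # Boltzmann constant, J/K
h   = 6.62607015e-34    # Planck constant, J s
N_A = 6.02214076e23     # Avogadro constant, 1/mol

def sackur_tetrode_entropy(N, V, T, m):
    """Entropy (J/K) of N monatomic ideal-gas particles of mass m (kg)
    in volume V (m^3) at temperature T (K)."""
    thermal_wavelength = h / math.sqrt(2.0 * math.pi * m * k_B * T)
    # S = k_B N [ ln( V / (N Lambda^3) ) + 5/2 ]
    return k_B * N * (math.log(V / (N * thermal_wavelength**3)) + 2.5)

# Example: one mole of argon (about 39.95 u) at 298.15 K and 100 kPa
m_Ar = 39.948 * 1.66053906660e-27   # particle mass, kg
T, p, N = 298.15, 1.0e5, N_A
V = N * k_B * T / p                  # ideal-gas volume, m^3

print(f"S = {sackur_tetrode_entropy(N, V, T, m_Ar):.1f} J/K per mole")
# prints roughly 155 J/K, close to argon's tabulated standard molar entropy
```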
Sackur–Tetrode constant
The Sackur–Tetrode constant, written S0/R, is equal to S/kBN evaluated at a temperature of T = 1 kelvin, at standard pressure (100 kPa or 101.325 kPa, to be specified), for one mole of an ideal gas composed of particles of mass equal to the atomic mass constant. Its 2018 CODATA recommended value is:
S0/R ≈ −1.1517 for po = 100 kPa
S0/R ≈ −1.1649 for po = 101.325 kPa.
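The definition of the constant can be checked numerically in a few lines; the Python sketch below evaluates S/(kBN) at T = 1 K for a particle of mass equal to the atomic mass constant, at both reference pressures.

```python
import math

k_B = 1.380649e-23        # Boltzmann constant, J/K
h   = 6.62607015e-34      # Planck constant, J s
m_u = 1.66053906660e-27   # atomic mass constant, kg

def sackur_tetrode_constant(p, T=1.0, m=m_u):
    """S/(k_B N) for an ideal gas of particles of mass m at temperature T and pressure p."""
    V_per_N = k_B * T / p                                # volume per particle (ideal gas)
    Lambda = h / math.sqrt(2.0 * math.pi * m * k_B * T)  # thermal wavelength
    return math.log(V_per_N / Lambda**3) + 2.5

print(sackur_tetrode_constant(p=100e3))      # about -1.1517
print(sackur_tetrode_constant(p=101.325e3))  # about -1.1649
```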
Information-theoretic interpretation
In addition to the thermodynamic perspective of entropy, the tools of information theory can be used to provide an information perspective of entropy. In particular, it is possible to derive the Sackur–Tetrode equation in information-theoretic terms. The overall entropy is represented as the sum of four individual entropies, i.e., four distinct sources of missing information. These are positional uncertainty, momenta uncertainty, the quantum mechanical uncertainty principle, and the indistinguishability of the particles. Summing the four pieces, the Sackur–Tetrode equation is then given as

S = k_{\rm B} N \left[ \ln\!\left( \frac{V}{N \Lambda^{3}} \right) + \frac{5}{2} \right],

the same expression as obtained above.
The derivation uses Stirling's approximation, \ln N! \approx N \ln N - N. Strictly speaking, the use of dimensioned arguments to the logarithms is incorrect, however their use is a "shortcut" made for simplicity. If each logarithmic argument were divided by an unspecified standard value expressed in terms of an unspecified standard mass, length and time, these standard values would cancel in the final result, yielding the same conclusion. The individual entropy terms will not be absolute, but will rather depend upon the standards chosen, and will differ with different standards by an additive constant.
References
Equations of state
Ideal gas
Thermodynamic entropy
Magnetohydrodynamic drive
A magnetohydrodynamic drive or MHD accelerator is a method for propelling vehicles using only electric and magnetic fields with no moving parts, accelerating an electrically conductive propellant (liquid or gas) with magnetohydrodynamics. The fluid is directed to the rear and as a reaction, the vehicle accelerates forward.
Studies examining MHD in the field of marine propulsion began in the late 1950s.
Few large-scale marine prototypes have been built, limited by the low electrical conductivity of seawater. Increasing current density is limited by Joule heating and water electrolysis in the vicinity of electrodes, and increasing the magnetic field strength is limited by the cost, size and weight (as well as technological limitations) of electromagnets and the power available to feed them. In 2023 DARPA launched the PUMP program to build a marine engine using superconducting magnets expected to reach a field strength of 20 Tesla.
Stronger technical limitations apply to air-breathing MHD propulsion (where ambient air is ionized) that is still limited to theoretical concepts and early experiments.
Plasma propulsion engines using magnetohydrodynamics for space exploration have also been actively studied as such electromagnetic propulsion offers high thrust and high specific impulse at the same time, and the propellant would last much longer than in chemical rockets.
Principle
The working principle involves the acceleration of an electrically conductive fluid (which can be a liquid or an ionized gas called a plasma) by the Lorentz force, resulting from the cross product of an electric current (motion of charge carriers accelerated by an electric field applied between two electrodes) with a perpendicular magnetic field. The electric field drives positive and negative charge carriers in opposite directions, but their combined motion constitutes a current in a single direction; the cross product of this current with the magnetic field produces a Lorentz body force acting on the fluid as a whole, which is expelled rearward, and the vehicle accelerates forward by reaction.
This is the same working principle as an electric motor (more exactly a linear motor) except that in an MHD drive, the solid moving rotor is replaced by the fluid acting directly as the propellant. As with all electromagnetic devices, an MHD accelerator is reversible: if the ambient working fluid is moving relatively to the magnetic field, charge separation induces an electric potential difference that can be harnessed with electrodes: the device then acts as a power source with no moving parts, transforming the kinetic energy of the incoming fluid into electricity, called an MHD generator.
As the Lorentz force in an MHD converter does not act on a single isolated charged particle nor on electrons in a solid electrical wire, but on a continuous charge distribution in motion, it is a "volumetric" (body) force, a force per unit volume:

\mathbf{f} = \rho \mathbf{E} + \mathbf{J} \times \mathbf{B}

where f is the force density (force per unit volume), ρ the charge density (charge per unit volume), E the electric field, J the current density (current per unit area) and B the magnetic field.
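To give a feel for the magnitudes involved in a seawater conduction thruster, the following Python sketch evaluates the J × B part of this body force for illustrative numbers; the conductivity, field strengths and duct volume are assumptions picked for the example, not data from any actual prototype.

```python
# Rough order-of-magnitude estimate of the J x B body force in a seawater MHD duct.
# All numbers below are illustrative assumptions, not measured values.

sigma = 5.0        # electrical conductivity of seawater, S/m (typical order)
E     = 100.0      # applied electric field between electrodes, V/m (assumed)
B     = 20.0       # magnetic flux density, T (superconducting-magnet scale)

J = sigma * E              # current density from Ohm's law, A/m^2
f = J * B                  # magnitude of J x B (fields assumed perpendicular), N/m^3

duct_volume = 0.5          # active duct volume, m^3 (assumed)
thrust = f * duct_volume   # resulting thrust, N

joule_heating = J**2 / sigma * duct_volume   # power dissipated as heat, W

print(f"J  = {J:.0f} A/m^2")
print(f"f  = {f/1000:.1f} kN/m^3")
print(f"Thrust ~ {thrust/1000:.1f} kN, Joule heating ~ {joule_heating/1e3:.0f} kW")
```

The useful propulsive power also depends on the cruise speed, so a figure like this is only a first rough cut, but it illustrates why Joule heating and electrode phenomena are central limitations.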
Typology
MHD thrusters are classified in two categories according to the way the electromagnetic fields operate:
Conduction devices when a direct current flows in the fluid due to an applied voltage between pairs of electrodes, the magnetic field being steady.
Induction devices when alternating currents are induced by a rapidly varying magnetic field, as eddy currents. No electrodes are required in this case.
As induction MHD accelerators are electrodeless, they do not exhibit the common issues related to conduction systems (especially Joule heating, bubbles and redox from electrolysis) but need much more intense peak magnetic fields to operate. Since one of the biggest issues with such thrusters is the limited energy available on-board, induction MHD drives have not been developed out of the laboratory.
Both systems can put the working fluid in motion according to two main designs:
Internal flow when the fluid is accelerated within and propelled back out of a nozzle of tubular or ring-shaped cross-section, the MHD interaction being concentrated within the pipe (similarly to rocket or jet engines).
External flow when the fluid is accelerated around the whole wetted area of the vehicle, the electromagnetic fields extending around the body of the vehicle. The propulsion force results from the pressure distribution on the shell (as lift on a wing, or how ciliate microorganisms such as Paramecium move water around them).
Internal flow systems concentrate the MHD interaction in a limited volume, preserving stealth characteristics. External field systems on the contrary have the ability to act on a very large expanse of surrounding water volume with higher efficiency and the ability to decrease drag, increasing the efficiency even further.
Marine propulsion
MHD has no moving parts, which means that a good design might be silent, reliable, and efficient. Additionally, the MHD design eliminates many of the wear and friction pieces of the drivetrain with a directly driven propeller by an engine. Problems with current technologies include expense and slow speed compared to a propeller driven by an engine. The extra expense is from the large generator that must be driven by an engine. Such a large generator is not required when an engine directly drives a propeller.
The first prototype, a 3-meter (10-foot) long submarine called EMS-1, was designed and tested in 1966 by Stewart Way, a professor of mechanical engineering at the University of California, Santa Barbara. Way, on leave from his job at Westinghouse Electric, assigned his senior year undergraduate students to build the operational unit. This MHD submarine operated on batteries delivering power to electrodes and electromagnets, which produced a magnetic field of 0.015 tesla. The cruise speed was about 0.4 meter per second (15 inches per second) during the test in the bay of Santa Barbara, California, in accordance with theoretical predictions.
Later, a Japanese prototype, the 3.6-meter long "ST-500", achieved speeds of up to 0.6 m/s in 1979.
In 1991, the world's first full-size prototype Yamato 1 was completed in Japan after 6 years of research and development (R&D) by the Ship & Ocean Foundation (later known as the Ocean Policy Research Foundation). The ship successfully carried a crew of ten plus passengers at speeds of up to 15 km/h (about 8 knots) in Kobe Harbour in June 1992.
Small-scale ship models were later built and studied extensively in the laboratory, leading to successful comparisons between the measurements and the theoretical prediction of ship terminal speeds.
Military research about underwater MHD propulsion included high-speed torpedoes, remotely operated underwater vehicles (ROV), autonomous underwater vehicles (AUV), up to larger ones such as submarines.
Aircraft propulsion
Passive flow control
First studies of the interaction of plasmas with hypersonic flows around vehicles date back to the late 1950s, with the concept of a new kind of thermal protection system for space capsules during high-speed reentry. As low-pressure air is naturally ionized at such very high velocities and altitude, it was thought to use the effect of a magnetic field produced by an electromagnet to replace thermal ablative shields by a "magnetic shield". Hypersonic ionized flow interacts with the magnetic field, inducing eddy currents in the plasma. The current combines with the magnetic field to give Lorentz forces that oppose the flow and detach the bow shock wave further ahead of the vehicle, lowering the heat flux which is due to the brutal recompression of air behind the stagnation point. Such passive flow control studies are still ongoing, but a large-scale demonstrator has yet to be built.
Active flow control
Active flow control by MHD force fields on the contrary involves a direct and imperious action of forces to locally accelerate or slow down the airflow, modifying its velocity, direction, pressure, friction, heat flux parameters, in order to preserve materials and engines from stress, allowing hypersonic flight. It is a field of magnetohydrodynamics also called magnetogasdynamics, magnetoaerodynamics or magnetoplasma aerodynamics, as the working fluid is the air (a gas instead of a liquid) ionized to become electrically conductive (a plasma).
Air ionization is achieved at high altitude (electrical conductivity of air increases as atmospheric pressure reduces according to Paschen's law) using various techniques: high voltage electric arc discharge, RF (microwaves) electromagnetic glow discharge, laser, e-beam or betatron, radioactive source… with or without seeding of low ionization potential alkali substances (like caesium) into the flow.
MHD studies applied to aeronautics try to extend the domain of hypersonic planes to higher Mach regimes:
Action on the boundary layer to prevent laminar flow from becoming turbulent.
Shock wave mitigation for thermal control and reduction of the wave drag and form drag. Some theoretical studies suggest the flow velocity could be controlled everywhere on the wetted area of an aircraft, so shock waves could be totally cancelled when using enough power.
Inlet flow control.
Airflow velocity reduction upstream to feed a scramjet by the use of an MHD generator section combined with an MHD accelerator downstream at the exhaust nozzle, powered by the generator through an MHD bypass system.
The Russian project Ayaks (Ajax) is an example of MHD-controlled hypersonic aircraft concept. A US program also exists to design a hypersonic MHD bypass system, the Hypersonic Vehicle Electric Power System (HVEPS). A working prototype was completed in 2017 under development by General Atomics and the University of Tennessee Space Institute, sponsored by the US Air Force Research Laboratory. These projects aim to develop MHD generators feeding MHD accelerators for a new generation of high-speed vehicles. Such MHD bypass systems are often designed around a scramjet engine, but easier to design turbojets are also considered, as well as subsonic ramjets.
Such studies cover a field of resistive MHD with magnetic Reynolds number ≪ 1 using nonthermal weakly ionized gases, making the development of demonstrators much more difficult to realize than for MHD in liquids. "Cold plasmas" with magnetic fields are subject to the electrothermal instability occurring at a critical Hall parameter, which makes full-scale developments difficult.
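The low-magnetic-Reynolds-number regime mentioned here is easy to illustrate with a short Python sketch; the conductivity, velocity, length scale, field strength and collision frequency below are assumed, representative-order values rather than measurements from any experiment.

```python
import math

mu_0 = 4.0e-7 * math.pi   # vacuum permeability, H/m

# Illustrative, assumed values for a weakly ionized hypersonic airflow
sigma = 10.0     # electrical conductivity, S/m
v     = 2000.0   # flow velocity, m/s
L     = 1.0      # characteristic length, m

R_m = mu_0 * sigma * v * L   # magnetic Reynolds number
print(f"R_m ~ {R_m:.3f}")    # well below 1: resistive MHD regime

# Electron Hall parameter for an assumed field and collision frequency
e, m_e = 1.602176634e-19, 9.1093837015e-31
B  = 1.0       # magnetic flux density, T (assumed)
nu = 1.0e11    # electron collision frequency, 1/s (assumed order for dense air plasma)
hall = e * B / (m_e * nu)
print(f"Hall parameter ~ {hall:.1f}")   # of order unity in this toy case
```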
Prospects
MHD propulsion has been considered as the main propulsion system for both marine and space ships since there is no need to produce lift to counter the gravity of Earth in water (due to buoyancy) nor in space (due to weightlessness), which is ruled out in the case of flight in the atmosphere.
Nonetheless, considering the current problem of the electric power source solved (for example with the availability of a still missing multi-megawatt compact fusion reactor), one could imagine future aircraft of a new kind silently powered by MHD accelerators, able to ionize and direct enough air downward to lift several tonnes. As external flow systems can control the flow over the whole wetted area, limiting thermal issues at high speeds, ambient air would be ionized and radially accelerated by Lorentz forces around an axisymmetric body (shaped as a cylinder, a cone, a sphere…), the entire airframe being the engine. Lift and thrust would arise as a consequence of a pressure difference between the upper and lower surfaces, induced by the Coandă effect. In order to maximize such pressure difference between the two opposite sides, and since the most efficient MHD converters (with a high Hall effect) are disk-shaped, such MHD aircraft would be preferably flattened to take the shape of a biconvex lens. Having no wings nor airbreathing jet engines, it would share no similarities with conventional aircraft, but it would behave like a helicopter whose rotor blades would have been replaced by a "purely electromagnetic rotor" with no moving part, sucking the air downward. Such concepts of flying MHD disks have been developed in the peer review literature from the mid 1970s mainly by physicists Leik Myrabo with the Lightcraft, and Subrata Roy with the Wingless Electromagnetic Air Vehicle (WEAV).
These futuristic visions have been advertised in the media although they still remain beyond the reach of modern technology.
Spacecraft propulsion
A number of experimental methods of spacecraft propulsion are based on magnetohydrodynamics. As this kind of MHD propulsion involves compressible fluids in the form of plasmas (ionized gases) it is also referred to as magnetogasdynamics or magnetoplasmadynamics.
In such electromagnetic thrusters, the working fluid is most of the time ionized hydrazine, xenon or lithium. Depending on the propellant used, it can be seeded with alkali such as potassium or caesium to improve its electrical conductivity. All charged species within the plasma, from positive and negative ions to free electrons, as well as neutral atoms by the effect of collisions, are accelerated in the same direction by the Lorentz "body" force, which results from the combination of a magnetic field with an orthogonal electric field (hence the name of "cross-field accelerator"), these fields not being in the direction of the acceleration. This is a fundamental difference with ion thrusters which rely on electrostatics to accelerate only positive ions using the Coulomb force along a high voltage electric field.
First experimental studies involving cross-field plasma accelerators (square channels and rocket nozzles) date back to the late 1950s. Such systems provide greater thrust and higher specific impulse than conventional chemical rockets and even modern ion drives, at the cost of a higher required energy density.
Some devices also studied nowadays besides cross-field accelerators include the magnetoplasmadynamic thruster sometimes referred to as the Lorentz force accelerator (LFA), and the electrodeless pulsed inductive thruster (PIT).
Even today, these systems are not ready to be launched in space as they still lack a suitable compact power source offering enough energy density (such as hypothetical fusion reactors) to feed the power-greedy electromagnets, especially pulsed inductive ones. The rapid ablation of electrodes under the intense thermal flow is also a concern. For these reasons, studies remain largely theoretical and experiments are still conducted in the laboratory, although over 60 years have passed since the first research in this kind of thrusters.
Fiction
Oregon, a ship in the Oregon Files series of books by author Clive Cussler, has a magnetohydrodynamic drive. This allows the ship to turn very sharply and brake instantly, instead of gliding for a few miles. In Valhalla Rising, Clive Cussler writes the same drive into the powering of Captain Nemo's Nautilus.
The film adaptation of The Hunt for Red October popularized the magnetohydrodynamic drive as a "caterpillar drive" for submarines, a nearly undetectable "silent drive" intended to achieve stealth in submarine warfare. In reality, the current traveling through the water would create gases and noise, and the magnetic fields would induce a detectable magnetic signature. In the film, it was suggested that this sound could be confused with geological activity. In the novel from which the film was adapted, the caterpillar that Red October used was actually a pump-jet of the so-called "tunnel drive" type (the tunnels provided acoustic camouflage for the cavitation from the propellers).
In the Ben Bova novel The Precipice, the ship where some of the action took place, Starpower 1, built to prove that exploration and mining of the Asteroid Belt was feasible and potentially profitable, had a magnetohydrodynamic drive mated to a fusion power plant.
See also
Electrohydrodynamics
Lorentz force, relates electric and magnetic fields to propulsion force
References
External links
Demonstrate Magnetohydrodynamic Propulsion in a Minute
Marine propulsion
Fluid dynamics
Plasma technology and applications
Magnetic propulsion devices
Gravitational binding energy
The gravitational binding energy of a system is the minimum energy which must be added to it in order for the system to cease being in a gravitationally bound state. A gravitationally bound system has a lower (i.e., more negative) gravitational potential energy than the sum of the energies of its parts when these are completely separated—this is what keeps the system aggregated in accordance with the minimum total potential energy principle.
The gravitational binding energy can be conceptually different within the theories of newtonian gravity and Albert Einstein's theory of gravity called General Relativity. In newtonian gravity, the binding energy can be considered to be the linear sum of the interactions between all pairs of microscopic components of the system, while in General Relativity, this is only approximately true if the gravitational fields are all weak. When stronger fields are present within a system, the binding energy is a nonlinear property of the system, and it cannot be conceptually attributed among the elements of the system. In this case the binding energy can be considered to be the (negative) difference between the ADM mass of the system, as it is manifest in its gravitational interaction with other distant systems, and the sum of the energies of all the atoms and other elementary particles of the system if disassembled.
For a spherical body of uniform density, the gravitational binding energy U is given in newtonian gravity by the formula

U = \frac{3GM^{2}}{5R}

where G is the gravitational constant, M is the mass of the sphere, and R is its radius.
Assuming that the Earth is a sphere of uniform density (which it is not, but is close enough to get an order-of-magnitude estimate) with M = 5.97×10²⁴ kg and r = 6.37×10⁶ m, then U = 2.24×10³² J. This is roughly equal to one week of the Sun's total energy output. It is 37.5 MJ/kg, 60% of the absolute value of the potential energy per kilogram at the surface.
The actual depth-dependence of density, inferred from seismic travel times (see Adams–Williamson equation), is given in the Preliminary Reference Earth Model (PREM). Using this, the real gravitational binding energy of Earth can be calculated numerically as U ≈ 2.49×10³² J.
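For the uniform-density estimate, the arithmetic is easy to reproduce; the short Python sketch below uses rounded values of G and of the Earth's mass and mean radius, chosen here purely for illustration.

```python
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24       # mass of the Earth, kg
R = 6.371e6        # mean radius of the Earth, m

U_uniform = 3 * G * M**2 / (5 * R)   # uniform-density sphere
print(f"U (uniform sphere) ~ {U_uniform:.2e} J")          # about 2.2e32 J
print(f"per kilogram       ~ {U_uniform / M / 1e6:.1f} MJ/kg")  # about 37.5 MJ/kg
```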
According to the virial theorem, the gravitational binding energy of a star is about two times its internal thermal energy in order for hydrostatic equilibrium to be maintained. As the gas in a star becomes more relativistic, the gravitational binding energy required for hydrostatic equilibrium approaches zero and the star becomes unstable (highly sensitive to perturbations), which may lead to a supernova in the case of a high-mass star due to strong radiation pressure or to a black hole in the case of a neutron star.
Derivation within Newtonian gravity for a uniform sphere
The gravitational binding energy of a sphere with radius is found by imagining that it is pulled apart by successively moving spherical shells to infinity, the outermost first, and finding the total energy needed for that.
Assuming a constant density ρ, the masses of a shell and the sphere inside it are:

m_{\text{shell}} = 4\pi r^{2}\rho\,dr

and

m_{\text{interior}} = \frac{4}{3}\pi r^{3}\rho
The required energy for a shell is the negative of the gravitational potential energy:

dU = \frac{G\, m_{\text{interior}}\, m_{\text{shell}}}{r} = \frac{16}{3} G \pi^{2} \rho^{2} r^{4}\,dr
Integrating over all shells yields:

U = \frac{16}{3} G \pi^{2} \rho^{2} \int_{0}^{R} r^{4}\,dr = \frac{16}{15} G \pi^{2} \rho^{2} R^{5}
Since ρ is simply equal to the mass of the whole divided by its volume for objects with uniform density, therefore

\rho = \frac{M}{\frac{4}{3}\pi R^{3}} = \frac{3M}{4\pi R^{3}}
And finally, plugging this into our result leads to

U = \frac{16}{15} G \pi^{2} R^{5} \left( \frac{3M}{4\pi R^{3}} \right)^{2} = \frac{3GM^{2}}{5R}
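The shell-by-shell construction can also be checked numerically; the following Python sketch (using Earth-like numbers purely as an example) sums the energy needed to strip away thin shells and compares the total with the closed-form result 3GM²/(5R).

```python
import math

G = 6.674e-11         # m^3 kg^-1 s^-2
M = 5.972e24          # kg, Earth-like example
R = 6.371e6           # m
rho = M / (4.0 / 3.0 * math.pi * R**3)   # uniform density

n = 100_000
dr = R / n
U = 0.0
for i in range(n):
    r = (i + 0.5) * dr                       # midpoint radius of the shell
    m_interior = 4.0 / 3.0 * math.pi * r**3 * rho
    m_shell = 4.0 * math.pi * r**2 * rho * dr
    U += G * m_interior * m_shell / r        # energy to remove this shell to infinity

print(f"numerical: {U:.4e} J")
print(f"analytic : {3 * G * M**2 / (5 * R):.4e} J")
```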
Negative mass component
Two bodies, placed at the distance R from each other and reciprocally not moving, exert a gravitational force on a third body that is slightly smaller when R is small, because the negative binding energy reduces the total mass–energy of the pair. This can be seen as a negative mass component of the system, equal, for uniformly spherical solutions, to:

M_{\text{binding}} = -\frac{3GM^{2}}{5Rc^{2}}
For example, the fact that Earth is a gravitationally-bound sphere of its current size costs about 2.5×10¹⁵ kg of mass (roughly one fourth the mass of Phobos – see above for the same value in Joules), and if its atoms were sparse over an arbitrarily large volume the Earth would weigh its current mass plus about 2.5×10¹⁵ kilograms (and its gravitational pull over a third body would be accordingly stronger).
It can be easily demonstrated that this negative component can never exceed the positive component of a system. A negative binding energy greater than the mass of the system itself would indeed require that the radius of the system be smaller than:

R = \frac{3GM}{5c^{2}}

which is smaller than its Schwarzschild radius:

r_{\rm s} = \frac{2GM}{c^{2}}
and therefore never visible to an external observer. However this is only a Newtonian approximation and in relativistic conditions other factors must be taken into account as well.
Non-uniform spheres
Planets and stars have radial density gradients from their lower density surfaces to their much denser compressed cores. Degenerate matter objects (white dwarfs; neutron star pulsars) have radial density gradients plus relativistic corrections.
Neutron star relativistic equations of state include a graph of radius vs. mass for various models. The most likely radii for a given neutron star mass are bracketed by models AP4 (smallest radius) and MS2 (largest radius). BE is the ratio of gravitational binding energy mass equivalent to observed neutron star gravitational mass of M with radius R,

BE = \frac{0.60\,\beta}{1 - \frac{\beta}{2}}, \qquad \beta = \frac{GM}{Rc^{2}}
Given current values

\frac{G}{c^{2}} = 7.425\times10^{-28}\ \mathrm{m/kg}

and the star mass M expressed relative to the solar mass,

M_{x} = \frac{M}{M_{\odot}}, \qquad M_{\odot} = 1.989\times10^{30}\ \mathrm{kg},

then the relativistic fractional binding energy of a neutron star is

BE \approx \frac{886\,M_{x}}{R - 738\,M_{x}}

with the radius R expressed in metres.
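The following Python sketch evaluates this fractional binding energy using the compactness approximation quoted above, BE ≈ 0.60β/(1 − β/2) with β = GM/(Rc²); the 1.4-solar-mass, 11 km star is an assumed illustrative case, not a prediction of any particular equation of state.

```python
G     = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c     = 2.998e8          # speed of light, m/s
M_sun = 1.989e30         # solar mass, kg

def fractional_binding_energy(M_solar, R_m):
    """BE/M for a neutron star of mass M (in solar masses) and radius R (in metres),
    using the compactness approximation BE ~ 0.60*beta / (1 - beta/2)."""
    beta = G * (M_solar * M_sun) / (R_m * c**2)   # compactness GM/(R c^2)
    return 0.60 * beta / (1.0 - 0.5 * beta)

# Illustrative example: a 1.4 solar-mass star with an 11 km radius
print(f"BE/M ~ {fractional_binding_energy(1.4, 11e3):.3f}")   # roughly 0.12
```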
See also
Stress–energy tensor
Stress–energy–momentum pseudotensor
Nordtvedt effect
References
Binding energy
Mach's principle
In theoretical physics, particularly in discussions of gravitation theories, Mach's principle (or Mach's conjecture) is the name given by Albert Einstein to an imprecise hypothesis often credited to the physicist and philosopher Ernst Mach. The hypothesis attempted to explain how rotating objects, such as gyroscopes and spinning celestial bodies, maintain a frame of reference.
The proposition is that the existence of absolute rotation (the distinction of local inertial frames vs. rotating reference frames) is determined by the large-scale distribution of matter, as exemplified by this anecdote:
You are standing in a field looking at the stars. Your arms are resting freely at your side, and you see that the distant stars are not moving. Now start spinning. The stars are whirling around you and your arms are pulled away from your body. Why should your arms be pulled away when the stars are whirling? Why should they be dangling freely when the stars don't move?
Mach's principle says that this is not a coincidence—that there is a physical law that relates the motion of the distant stars to the local inertial frame. If you see all the stars whirling around you, Mach suggests that there is some physical law which would make it so you would feel a centrifugal force. There are a number of rival formulations of the principle, often stated in vague ways like "mass out there influences inertia here". A very general statement of Mach's principle is "local physical laws are determined by the large-scale structure of the universe".
Mach's concept was a guiding factor in Einstein's development of the general theory of relativity. Einstein realized that the overall distribution of matter would determine the metric tensor which indicates which frame is stationary with respect to rotation. Frame-dragging and conservation of gravitational angular momentum makes this into a true statement in the general theory in certain solutions. But because the principle is so vague, many distinct statements have been made which would qualify as a Mach principle, some of which are false. The Gödel rotating universe is a solution of the field equations that is designed to disobey Mach's principle in the worst possible way. In this example, the distant stars seem to be revolving faster and faster as one moves further away. This example does not completely settle the question of the physical relevance of the principle because it has closed timelike curves.
History
Mach put forth the idea in his book The Science of Mechanics (1883 in German, 1893 in English). Before Mach's time, the basic idea also appears in the writings of George Berkeley. After Mach, the book Absolute or Relative Motion? (1896) by Benedict Friedlaender and his brother Immanuel contained ideas similar to Mach's principle.
Einstein's use of the principle
There is a fundamental issue in relativity theory: if all motion is relative, how can we measure the inertia of a body? We must measure the inertia with respect to something else. But what if we imagine a particle completely on its own in the universe? We might hope to still have some notion of its state of motion. Mach's principle is sometimes interpreted as the statement that such a particle's state of motion has no meaning in that case.
In Mach's words, the principle is embodied as follows:
Albert Einstein seemed to view Mach's principle as something along the lines of:
In this sense, at least some of Mach's principles are related to philosophical holism. Mach's suggestion can be taken as the injunction that gravitation theories should be relational theories. Einstein brought the principle into mainstream physics while working on general relativity. Indeed, it was Einstein who first coined the phrase Mach's principle. There is much debate as to whether Mach really intended to suggest a new physical law since he never states it explicitly.
The writing in which Einstein found inspiration was Mach's book The Science of Mechanics (1883, tr. 1893), where the philosopher criticized Newton's idea of absolute space, in particular the argument that Newton gave sustaining the existence of an advantaged reference system: what is commonly called "Newton's bucket argument".
In his Philosophiae Naturalis Principia Mathematica, Newton tried to demonstrate that one can always decide if one is rotating with respect to the absolute space, measuring the apparent forces that arise only when an absolute rotation is performed. If a bucket is filled with water, and made to rotate, initially the water remains still, but then, gradually, the walls of the vessel communicate their motion to the water, making it curve and climb up the borders of the bucket, because of the centrifugal forces produced by the rotation. This experiment demonstrates that the centrifugal forces arise only when the water is in rotation with respect to the absolute space (represented here by the earth's reference frame, or better, the distant stars); when, instead, the bucket was rotating with respect to the water, no centrifugal forces were produced, indicating that the water was still with respect to the absolute space.
Mach, in his book, says that the bucket experiment only demonstrates that when the water is in rotation with respect to the bucket no centrifugal forces are produced, and that we cannot know how the water would behave if in the experiment the bucket's walls were increased in depth and width until they became leagues big. In Mach's idea this concept of absolute motion should be substituted with a total relativism in which every motion, uniform or accelerated, has sense only in reference to other bodies (i.e., one cannot simply say that the water is rotating, but must specify if it's rotating with respect to the vessel or to the earth). In this view, the apparent forces that seem to permit discrimination between relative and "absolute" motions should only be considered as an effect of the particular asymmetry that there is in our reference system between the bodies which we consider in motion, that are small (like buckets), and the bodies that we believe are still (the earth and distant stars), that are overwhelmingly bigger and heavier than the former.
This same thought had been expressed by the philosopher George Berkeley in his De Motu. It is then not clear, in the passages from Mach just mentioned, if the philosopher intended to formulate a new kind of physical action between heavy bodies. This physical mechanism should determine the inertia of bodies, in a way that the heavy and distant bodies of our universe should contribute the most to the inertial forces. More likely, Mach only suggested a mere "redescription of motion in space as experiences that do not invoke the term space". What is certain is that Einstein interpreted Mach's passage in the former way, originating a long-lasting debate.
Most physicists believe Mach's principle was never developed into a quantitative physical theory that would explain a mechanism by which the stars can have such an effect. Mach himself never made his principle exactly clear. Although Einstein was intrigued and inspired by Mach's principle, Einstein's formulation of the principle is not a fundamental assumption of general relativity, although the principle of equivalence of gravitational and inertial mass is most certainly fundamental.
Mach's principle in general relativity
Because intuitive notions of distance and time no longer apply, what exactly is meant by "Mach's principle" in general relativity is even less clear than in Newtonian physics and at least 21 formulations of Mach's principle are possible, some being considered more strongly Machian than others. A relatively weak formulation is the assertion that the motion of matter in one place should affect which frames are inertial in another.
Einstein, before completing his development of the general theory of relativity, found an effect which he interpreted as being evidence of Mach's principle. We assume a fixed background for conceptual simplicity, construct a large spherical shell of mass, and set it spinning in that background. The reference frame in the interior of this shell will precess with respect to the fixed background. This effect is known as the Lense–Thirring effect. Einstein was so satisfied with this manifestation of Mach's principle that he wrote a letter to Mach expressing this:
The Lense–Thirring effect certainly satisfies the very basic and broad notion that "matter there influences inertia here". The plane of a pendulum suspended inside the shell would not be dragged around if the shell of matter were not present, or if it were not spinning. As for the statement that "inertia originates in a kind of interaction between bodies", this, too, could be interpreted as true in the context of the effect.
More fundamental to the problem, however, is the very existence of a fixed background, which Einstein describes as "the fixed stars". Modern relativists see the imprints of Mach's principle in the initial-value problem. Essentially, we humans seem to wish to separate spacetime into slices of constant time. When we do this, Einstein's equations can be decomposed into one set of equations, which must be satisfied on each slice, and another set, which describe how to move between slices. The equations for an individual slice are elliptic partial differential equations. In general, this means that only part of the geometry of the slice can be given by the scientist, while the geometry everywhere else will then be dictated by Einstein's equations on the slice.
In the context of an asymptotically flat spacetime, the boundary conditions are given at infinity. Heuristically, the boundary conditions for an asymptotically flat universe define a frame with respect to which inertia has meaning. By performing a Lorentz transformation on the distant universe, of course, this inertia can also be transformed.
A stronger form of Mach's principle applies in Wheeler–Mach–Einstein spacetimes, which require spacetime to be spatially compact and globally hyperbolic. In such universes Mach's principle can be stated as the distribution of matter and field energy-momentum (and possibly other information) at a particular moment in the universe determines the inertial frame at each point in the universe (where "a particular moment in the universe" refers to a chosen Cauchy surface).
There have been other attempts to formulate a theory that is more fully Machian, such as the Brans–Dicke theory and the Hoyle–Narlikar theory of gravity, but most physicists argue that none have been fully successful. At an exit poll of experts, held in Tübingen in 1993, when asked the question "Is general relativity perfectly Machian?", 3 respondents replied "yes", and 22 replied "no". To the question "Is general relativity with appropriate boundary conditions of closure of some kind very Machian?" the result was 14 "yes" and 7 "no".
However, Einstein was convinced that a valid theory of gravity would necessarily have to include the relativity of inertia:
Inertial induction
In 1953, in order to express Mach's Principle in quantitative terms, the Cambridge University physicist Dennis W. Sciama proposed the addition of an acceleration-dependent term to the Newtonian gravitation equation. Sciama's acceleration-dependent term was of the form −Gma/(c²r), where m is the mass of the attracting particle, r is the distance between the particles, G is the gravitational constant, a is the relative acceleration and c represents the speed of light in vacuum. Sciama referred to the effect of the acceleration-dependent term as Inertial Induction.
Variations in the statement of the principle
The broad notion that "mass there influences inertia here" has been expressed in several forms.
Hermann Bondi and Joseph Samuel have listed eleven distinct statements that can be called Mach principles, labelled Mach0 through Mach10 (taking inspiration from the Mach number). Though their list is not necessarily exhaustive, it does give a flavor for the variety possible.
The universe, as represented by the average motion of distant galaxies, does not appear to rotate relative to local inertial frames.
Newton's gravitational constant G is a dynamical field.
An isolated body in otherwise empty space has no inertia.
Local inertial frames are affected by the cosmic motion and distribution of matter.
The universe is spatially closed.
The total energy, angular and linear momentum of the universe are zero.
Inertial mass is affected by the global distribution of matter.
If you take away all matter, there is no more space.
The quantity 4πρGT² is a definite number, of order unity, where ρ is the mean density of matter in the universe, and T is the Hubble time.
The theory contains no absolute elements.
Overall rigid rotations and translations of a system are unobservable.
See also
Notes
References
Further reading
This textbook, among other writings by Sciama, helped revive interest in Mach's principle.
External links
Ernst Mach, The Science of Mechanics (tr. 1893) at Archive.org
"Mach's Principle" (1995) from Einstein Studies vol. 6 (13MB PDF)
(originally published in Italian as Gasco E. "Il contributo di mach sull'origine dell'inerzia." Quaderni di Storia della Fisica, 2004.)
Theories of gravity
Principles
Rotation
Philosophy of astronomy
Thought experiments in physics
Praxis (process)
Praxis is the process by which a theory, lesson, or skill is enacted, embodied, realized, applied, or put into practice. "Praxis" may also refer to the act of engaging, applying, exercising, realizing, or practising ideas. This has been a recurrent topic in the field of philosophy, discussed in the writings of Plato, Aristotle, St. Augustine, Francis Bacon, Immanuel Kant, Søren Kierkegaard, Ludwig von Mises, Karl Marx, Antonio Gramsci, Martin Heidegger, Hannah Arendt, Jean-Paul Sartre, Paulo Freire, Murray Rothbard, and many others. It has meaning in the political, educational, spiritual and medical realms.
Origins
The word praxis comes from Ancient Greek. In Ancient Greek the word praxis (πρᾶξις) referred to activity engaged in by free people. The philosopher Aristotle held that there were three basic activities of humans: theoria (thinking), poiesis (making), and praxis (doing). Corresponding to these activities were three types of knowledge: theoretical, the end goal being truth; poietical, the end goal being production; and practical, the end goal being action. Aristotle further divided the knowledge derived from praxis into ethics, economics, and politics. He also distinguished between eupraxia (εὐπραξία, "good praxis") and dyspraxia (δυσπραξία, "bad praxis, misfortune").
Marxism
Young Hegelian August Cieszkowski was one of the earliest philosophers to use the term praxis to mean "action oriented towards changing society" in his 1838 work Prolegomena zur Historiosophie (Prolegomena to a Historiosophy). Cieszkowski argued that while absolute truth had been achieved in the speculative philosophy of Hegel, the deep divisions and contradictions in man's consciousness could only be resolved through concrete practical activity that directly influences social life. Although there is no evidence that Karl Marx himself read this book, it may have had an indirect influence on his thought through the writings of his friend Moses Hess.
Marx uses the term "praxis" to refer to the free, universal, creative and self-creative activity through which man creates and changes his historical world and himself. Praxis is an activity unique to man, which distinguishes him from all other beings. The concept appears in two of Marx's early works: the Economic and Philosophical Manuscripts of 1844 and the Theses on Feuerbach (1845). In the former work, Marx contrasts the free, conscious productive activity of human beings with the unconscious, compulsive production of animals. He also affirms the primacy of praxis over theory, claiming that theoretical contradictions can only be resolved through practical activity. In the latter work, revolutionary practice is a central theme:
Marx here criticizes the materialist philosophy of Ludwig Feuerbach for envisaging objects in a contemplative way. Marx argues that perception is itself a component of man's practical relationship to the world. To understand the world does not mean considering it from the outside, judging it morally or explaining it scientifically. Society cannot be changed by reformers who understand its needs, only by the revolutionary praxis of the mass whose interest coincides with that of society as a whole—the proletariat. This will be an act of society understanding itself, in which the subject changes the object by the very fact of understanding it.
Seemingly inspired by the Theses, the nineteenth century socialist Antonio Labriola called Marxism the "philosophy of praxis". This description of Marxism would appear again in Antonio Gramsci's Prison Notebooks and the writings of the members of the Frankfurt School. Praxis is also an important theme for Marxist thinkers such as Georg Lukács, Karl Korsch, Karel Kosík and Henri Lefebvre, and was seen as the central concept of Marx's thought by Yugoslavia's Praxis School, which established a journal of that name in 1964.
Jean-Paul Sartre
In the Critique of Dialectical Reason, Jean-Paul Sartre posits a view of individual praxis as the basis of human history. In his view, praxis is an attempt to negate human need. In a revision of Marxism and his earlier existentialism, Sartre argues that the fundamental relation of human history is scarcity. Conditions of scarcity generate competition for resources, exploitation of one over another and division of labor, which in its turn creates struggle between classes. Each individual experiences the other as a threat to his or her own survival and praxis; it is always a possibility that one's individual freedom limits another's. Sartre recognizes both natural and man-made constraints on freedom: he calls the non-unified practical activity of humans the "practico-inert". Sartre opposes to individual praxis a "group praxis" that fuses each individual to be accountable to each other in a common purpose. Sartre sees a mass movement in a successful revolution as the best exemplar of such a fused group.
Hannah Arendt
In The Human Condition, Hannah Arendt argues that Western philosophy too often has focused on the contemplative life (vita contemplativa) and has neglected the active life (vita activa). This has led humanity to frequently miss much of the everyday relevance of philosophical ideas to real life. For Arendt, praxis is the highest and most important level of the active life. Thus, she argues that more philosophers need to engage in everyday political action or praxis, which she sees as the true realization of human freedom. According to Arendt, our capacity to analyze ideas, wrestle with them, and engage in active praxis is what makes us uniquely human.
In Maurizio Passerin d'Entrèves's estimation, "Arendt's theory of action and her revival of the ancient notion of praxis represent one of the most original contributions to twentieth century political thought. ... Moreover, by viewing action as a mode of human togetherness, Arendt is able to develop a conception of participatory democracy which stands in direct contrast to the bureaucratized and elitist forms of politics so characteristic of the modern epoch."
Education
Praxis is used by educators to describe a recurring passage through a cyclical process of experiential learning, such as the cycle described and popularised by David A. Kolb.
Paulo Freire defines praxis in Pedagogy of the Oppressed as "reflection and action directed at the structures to be transformed." Through praxis, oppressed people can acquire a critical awareness of their own condition, and, with teacher-students and students-teachers, struggle for liberation.
In the British Channel 4 television documentary New Order: Play at Home, Factory Records owner Tony Wilson describes praxis as "doing something, and then only afterwards, finding out why you did it".
Praxis may be described as a form of critical thinking and comprises the combination of reflection and action. Praxis can be viewed as a progression of cognitive and physical actions:
Taking the action
Considering the impacts of the action
Analysing the results of the action by reflecting upon it
Altering and revising conceptions and planning following reflection
Implementing these plans in further actions
This creates a cycle which can be viewed in terms of educational settings, learners and educational facilitators.
Scott and Marshall (2009) refer to praxis as "a philosophical term referring to human action on the natural and social world". Furthermore, Gramsci (1999) emphasises the power of praxis in Selections from the Prison Notebooks by stating that "The philosophy of praxis does not tend to leave the simple in their primitive philosophy of common sense but rather to lead them to a higher conception of life".
To reveal the inadequacies of religion, folklore, intellectualism and other such 'one-sided' forms of reasoning, Gramsci appeals directly in his later work to Marx's 'philosophy of praxis', describing it as a 'concrete' mode of reasoning. This principally involves the juxtaposition of a dialectical and scientific audit of reality; against all existing normative, ideological, and therefore counterfeit accounts. Essentially a 'philosophy' based on 'a practice', Marx's philosophy, is described correspondingly in this manner, as the only 'philosophy' that is at the same time a 'history in action' or a 'life' itself (Gramsci, Hoare and Nowell-Smith, 1972, p. 332).
Spirituality
Praxis is also key in meditation and spirituality, where emphasis is placed on gaining first-hand experience of concepts and certain areas, such as union with the Divine, which can only be explored through praxis due to the inability of the finite mind (and its tool, language) to comprehend or express the infinite. In an interview for YES! Magazine, Matthew Fox explained it this way:
According to Strong's Concordance, the Hebrew word ta‛am is, properly, a taste. This is, figuratively, perception and, by implication, intelligence; transitively, a mandate: advice, behaviour, decree, discretion, judgment, reason, taste, understanding.
Medicine
Praxis is the ability to perform voluntary skilled movements. The partial or complete inability to do so in the absence of primary sensory or motor impairments is known as apraxia.
See also
Apraxia
Christian theological praxis
Hexis
Lex artis
Orthopraxy
Praxeology
Praxis Discussion Series
Praxis (disambiguation)
Praxis intervention
Praxis school
Practice (social theory)
Theses on Feuerbach
References
Further reading
Paulo Freire (1970), Pedagogy of the Oppressed, Continuum International Publishing Group.
External links
Entry for "praxis" at the Encyclopaedia of Informal Education
Der Begriff Praxis
Concepts in the philosophy of mind
Marxism
Energy (psychological)
Energy is a concept in some psychological theories or models of a postulated unconscious mental functioning on a level between biology and consciousness.
Philosophical accounts
The idea harks back to Aristotle's conception of actus et potentia. In the philosophical context, the term "energy" may have the literal meaning of "activity" or "operation". Henry More, in his 1642 Psychodia platonica; or a platonicall song of the soul, defined an "energy of the soul" as including "every phantasm of the soul". In 1944 Julian Sorell Huxley characterised "mental energy" as "the driving forces of the psyche, emotional as well as intellectual [...]."
Psychoanalytic accounts
In 1874, the concept of "psychodynamics" was proposed with the publication of Lectures on Physiology by German physiologist Ernst Wilhelm von Brücke who, in coordination with physicist Hermann von Helmholtz, one of the formulators of the first law of thermodynamics (conservation of energy), supposed that all living organisms are energy-systems also governed by this principle. During this year, at the University of Vienna, Brücke served as supervisor for first-year medical student Sigmund Freud who adopted this new "dynamic" physiology. In his Lectures on Physiology, Brücke set forth the then-radical view that the living organism is a dynamic system to which the laws of chemistry and physics apply.
In The Ego and the Id, Freud argued that the id was the source of the personality's desires, and therefore of the psychic energy that powered the mind. Freud defined libido as the instinct energy or force. Freud later added the death drive (also contained in the id) as a second source of mental energy. The origins of Freud's basic model, based on the fundamentals of chemistry and physics, according to John Bowlby, stems from Brücke, Meynert, Breuer, Helmholtz, and Herbart.
In 1928, Carl Jung published a seminal essay entitled "On Psychic Energy" which dealt with energy Jung claimed was first discovered by Russian philosopher Nikolaus Grot. Later, the theory of psychodynamics and the concept of "psychic energy" was developed further by those such as Alfred Adler and Melanie Klein.
Wilhelm Reich, a pupil of Freud, proposed a theory built on the root of Freud's concept of libido, of a psychic energy he came to term orgone energy. This was very controversial and Reich was soon rejected and expelled from the Vienna Psychoanalytical Association.
Psychological energy and force are the basis of an attempt to formulate a scientific theory according to which psychological phenomena would be subject to precise laws akin to how physical objects are subject to Newton's laws. This concept of psychological energy is separate and distinct from (or even opposed to) the mystical eastern concept of spiritual energy.
The Myers–Briggs Type Indicator divides people into 16 categories based on whether certain activities leave them feeling energized or drained of energy.
Neuroscientific accounts
Mental energy has been repeatedly compared to, or connected with, the physical quantity energy.
Studies of the 1990s to 2000s (and earlier) have found that mental effort can be measured in terms of increased metabolism in the brain. The modern neuroscientific view is that brain metabolism, measured by functional magnetic resonance imaging or positron emission tomography, is a physical correlate of mental activity.
Criticism
The concept of psychic energy has been criticized because it lacks empirical evidence and there is not a neurological or neuropsychological correlate, unlike with the neural correlates of consciousness.
Shevrin argues that energy may be a systems concept. He theorizes that the strength of an emotion can remain the same while the emotion itself changes. He argues that this intensity can be understood separately from emotion and that this intensity might be considered energy.
However, a significant volume of empirical research on energy psychology has emerged over several decades, much of it published in peer-reviewed medical and psychology journals. It includes a large body of randomized controlled trials; extensive noteworthy uncontrolled trials in which subjects served as their own controls, with measurements taken over time to assess client progress; as well as small pilot studies and collections of case histories that are suggestive of future research directions.
Thus, as of the date of this citation, there have been over 200 review articles, research studies, and meta-analyses published in professional peer-reviewed journals. This includes over 70 randomized controlled trials, 50 clinical outcomes studies, 5 meta-analyses, 4 systematic reviews of various energy psychology modalities, and 9 comparative reviews of energy psychology with other therapies such as EMDR and cognitive behavioral therapy. All but one of the experimental studies have documented the effectiveness of energy psychology modalities. Also, the studies document the efficacy of energy psychology methods for the treatment of physical pain, anxiety, depression, cravings, trauma, PTSD, and peak athletic performance.
Concerning meta-analyses, four revealed a large effect size and one a moderate effect size. The Gilomen & Lee (2015) meta-analysis indicated a moderate effect size of tapping on psychological distress (utilizing Hedges' g rather than the more common Cohen's d), although they opined that the results could be due to factors common to other therapeutic approaches, and not necessarily due to tapping. Nelms & Castel (2016) found a large effect size on tapping for depression, Clond (2017) revealed a large effect size for treating anxiety, and Sebastian & Nelms (2017) also indicated a large effect size for PTSD. Regarding the question of acupoint tapping as an active therapeutic ingredient, the meta-analysis by Church, Stapleton, Kip & Gallo (2020) revealed a large effect size in this regard, supporting tapping as an active therapeutic ingredient.
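For readers unfamiliar with the effect-size measures named in these meta-analyses, the following Python sketch shows how Cohen's d and its small-sample correction, Hedges' g, are computed from two groups' summary statistics; the numbers are invented purely to illustrate the formulas and do not come from any of the studies cited.

```python
import math

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    """Standardized mean difference using the pooled standard deviation."""
    pooled_var = ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2)
    return (mean1 - mean2) / math.sqrt(pooled_var)

def hedges_g(mean1, sd1, n1, mean2, sd2, n2):
    """Cohen's d multiplied by the usual small-sample bias correction factor."""
    d = cohens_d(mean1, sd1, n1, mean2, sd2, n2)
    correction = 1.0 - 3.0 / (4.0 * (n1 + n2) - 9.0)
    return d * correction

# Invented example: symptom scores in two groups of 20 participants each
print(round(cohens_d(24.0, 6.0, 20, 19.5, 6.5, 20), 2))   # about 0.72
print(round(hedges_g(24.0, 6.0, 20, 19.5, 6.5, 20), 2))   # about 0.71
```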
See also
Cathexis
Cognitive load
Death drive
Energy (esotericism)
Energy psychology
Humorism
Id, ego and superego
Libido
Mind
Motivation
Psyche (psychology)
Spoon theory
Theory of mind
References
Further reading
Laplanche, Jean and Pontalis, J.B. (1974). The Language of Psycho-Analysis. Trans. Donald Nicholson-Smith. New York: W. W. Norton & Company, 1974.
Furman, M., and Gallo, F. (2000). The Neurophysics of Human Behavior: Explorations at the Interface of Brain, Mind, Behavior, and Information. Boca Raton, FL: CRC Press.
Gallo, F. (2005). Energy Psychology: Explorations at the Interface of Energy, Cognition, Behavior, and Health. Boca Raton, FL: CRC Press.
Gallo, F. (2007). Energy Tapping for Trauma. Oakland, CA: New Harbinger.
Gallo, F., and Vincenzi, V. (2008). Energy Tapping. Oakland, CA: New Harbinger.
Clond, M. (2016). Emotional freedom techniques for anxiety: A systematic review with meta-analysis. The Journal of Nervous and Mental Disease. 204(5), 388-395.
Gilomen, S. A. & Lee, C. W. (2015). The efficacy of acupoint stimulation in the treatment of psychological distress: A meta-analysis. Journal of Behavior Therapy and Experimental Psychiatry, 48, 140-148.
Johnson, C., Shala, M., Sejdijaj, X., Odell, R., Dabishevci, K. (2001). Thought field therapy: Soothing the bad moments of Kosovo. Journal of Clinical Psychology, 57(10), 1237-1240.
Nelms, J. & Castel, D. (2016). A systematic review and meta-analysis of randomized and nonrandomized trials of emotional freedom techniques (EFT) for the treatment of depression. Explore: The Journal of Science and Healing, 12(6), 416-26.
Sebastian, B., & Nelms, J. (2017). The effectiveness of emotional freedom techniques in the treatment of posttraumatic stress disorder: A meta-analysis. Explore: The Journal of Science and Healing, 13(1), 16-25.
External links
Psychic Energy & Psychoanalytic Theory
Motivation
Psychological concepts | 0.778695 | 0.980312 | 0.763364 |
Physical property | A physical property is any property of a physical system that is measurable. The changes in the physical properties of a system can be used to describe its changes between momentary states. A quantifiable physical property is called physical quantity. Measurable physical quantities are often referred to as observables.
Some physical properties are qualitative, such as shininess, brittleness, etc.; some general qualitative properties admit more specific related quantitative properties, such as in opacity, hardness, ductility, viscosity, etc.
Physical properties are often characterized as intensive and extensive properties. An intensive property does not depend on the size or extent of the system, nor on the amount of matter it contains, while an extensive property is additive: its value for the whole system is the sum of its values for the parts. These classifications are in general only valid in cases when smaller subdivisions of the sample do not interact in some physical or chemical process when combined.
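As a small illustration of the distinction (a sketch with made-up figures, not part of the original text): when two samples of the same substance are combined, extensive properties such as mass and volume add, while an intensive property such as density stays the same.

    # Two samples of the same liquid: mass (kg) and volume (m^3) are extensive.
    sample_a = {"mass": 1.0, "volume": 0.001}
    sample_b = {"mass": 2.0, "volume": 0.002}

    combined = {key: sample_a[key] + sample_b[key] for key in sample_a}  # extensive: values add

    def density(sample):
        # Density is intensive: a ratio of two extensive quantities.
        return sample["mass"] / sample["volume"]

    print(density(sample_a), density(sample_b), density(combined))  # 1000.0 for all three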
Properties may also be classified with respect to the directionality of their nature. For example, isotropic properties do not change with the direction of observation, and anisotropic properties do have spatial variance.
It may be difficult to determine whether a given property is a material property or not. Color, for example, can be seen and measured; however, what one perceives as color is really an interpretation of the reflective properties of a surface and the light used to illuminate it. In this sense, many ostensibly physical properties are called supervenient. A supervenient property is one which is actual, but is secondary to some underlying reality. This is similar to the way in which objects are supervenient on atomic structure. A cup might have the physical properties of mass, shape, color, temperature, etc., but these properties are supervenient on the underlying atomic structure, which may in turn be supervenient on an underlying quantum structure.
Physical properties are contrasted with chemical properties which determine the way a material behaves in a chemical reaction.
List of properties
The physical properties of an object that are traditionally defined by classical mechanics are often called mechanical properties. Other broad categories, commonly cited, are electrical properties, optical properties, thermal properties, etc. Examples of physical properties include:
absorption (physical)
absorption (electromagnetic)
albedo
angular momentum
area
brittleness
boiling point
capacitance
color
concentration
density
dielectric
ductility
distribution
efficacy
elasticity
electric charge
electrical conductivity
electrical impedance
electric field
electric potential
emission
flow rate (mass)
flow rate (volume)
fluidity
frequency
hardness
heat capacity
inductance
intrinsic impedance
intensity
irradiance
length
location
luminance
luminescence
luster
malleability
magnetic field
magnetic flux
mass
melting point
moment
momentum
opacity
permeability
permittivity
plasticity
pressure
radiance
resistivity
reflectivity
refractive index
spin
solubility
specific heat
strength
stiffness
temperature
tension
thermal conductivity (and resistance)
velocity
viscosity
volume
wave impedance
See also
List of materials properties
Physical quantity
Physical test
Test method
References
Bibliography
External links
Physical and Chemical Property Data Sources – a list of references which cover several chemical and physical properties of various materials
Physical phenomena | 0.76611 | 0.996401 | 0.763352 |
Technological change | Technological change (TC) or technological development is the overall process of invention, innovation and diffusion of technology or processes. (See, in The New Palgrave Dictionary of Economics, the entries "technical change" by S. Metcalfe, "biased and unbiased technological change" by Peter L. Rousseau, and "skill-biased technical change" by Giovanni L. Violante.) In essence, technological change covers the invention of technologies (including processes) and their commercialization or release as open source via research and development (producing emerging technologies), the continual improvement of technologies (in which they often become less expensive), and the diffusion of technologies throughout industry or society (which sometimes involves disruption and convergence). In short, technological change is based on both better and more technology.
Modeling technological change
In its earlier days, technological change was illustrated with the 'Linear Model of Innovation', which has now been largely discarded and replaced with a model of technological change that involves innovation at all stages of research, development, diffusion, and use. When speaking about "modeling technological change," this often means the process of innovation. This process of continuous improvement is often modeled as a curve depicting decreasing costs over time (for instance fuel cells, which have become cheaper every year). TC is also often modelled using a learning curve, for example C_t = C_0 * x_t^(-b), where C_t is the unit cost after a cumulative output of x_t units, C_0 is the cost of the first unit, and b is the learning exponent.
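A minimal sketch of that learning-curve formula follows; the cost figure and exponent are made-up assumptions, chosen only to show how unit costs fall as cumulative output doubles.

    # Learning curve: C_t = C_0 * x_t**(-b)
    C0 = 100.0   # assumed cost of the first unit
    b = 0.32     # assumed learning exponent (~20% cost drop per doubling of output)

    def unit_cost(cumulative_output):
        return C0 * cumulative_output ** (-b)

    for x in (1, 2, 4, 8, 16, 32):
        print(f"after {x:2d} units: cost = {unit_cost(x):6.2f}")

    # Each doubling multiplies cost by 2**(-b), roughly 0.80, i.e. a 20% "learning rate".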
Technological change itself is often included in other models (e.g. climate change models) and was often taken as an exogenous factor. These days TC is more often included as an endogenous factor, meaning that it is treated as something that can be influenced. Today, policy can influence the speed and direction of technological change. For example, proponents of the Induced Technological Change hypothesis state that policymakers can steer the direction of technological advances by influencing relative factor prices, as can be seen in the way climate policies impact the use of fossil fuel energy, specifically by making it relatively more expensive. So far, empirical evidence for the existence of policy-induced innovation effects is still lacking, which may be attributed to a variety of reasons beyond the sparsity of models (e.g. long-term policy uncertainty and exogenous drivers of (directed) innovation). A related concept is the notion of Directed Technical Change, with more emphasis on price-induced directional effects rather than policy-induced scale effects.
Invention
The creation of something new, or a "breakthrough" technology. This is often included in the process of product development and relies on research. This can be demonstrated by the invention of spreadsheet software. Newly invented technologies are conventionally patented.
Diffusion
Diffusion pertains to the spread of a technology through a society or industry. The diffusion of a technology generally follows an S-shaped curve: early versions of a technology are rather unsuccessful, followed by a period of successful innovation with high levels of adoption, and finally a dropping off in adoption as the technology reaches its maximum potential in a market. In the case of the personal computer, it has made its way beyond homes and into business settings, such as office workstations and server machines to host websites.
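The S-shaped adoption pattern described above is commonly approximated with a logistic function; the sketch below uses made-up parameters purely to show the slow start, rapid middle, and saturation phases.

    import math

    def adoption(t, saturation=100.0, midpoint=10.0, rate=0.6):
        """Logistic S-curve: percent of the market that has adopted by time t."""
        return saturation / (1.0 + math.exp(-rate * (t - midpoint)))

    for year in range(0, 21, 2):
        share = adoption(year)
        print(f"year {year:2d}: {share:5.1f}%  " + "#" * int(share // 5))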
Technological change as a social process
Underpinning the idea of a technological change as a social process is a general agreement on the importance of social context and communication. According to this model, technological change is seen as a social process involving producers and adopters and others (such as government) who are profoundly affected by cultural setting, political institutions, and marketing strategies.
In free market economies, the maximization of profits is a powerful driver of technological change. Generally, only those technologies that promise to maximize profits for the owners of income-producing capital are developed and reach the market. Any technological product that fails to meet this criterion - even though it may satisfy important societal needs - is eliminated. Therefore, technological change is a social process strongly biased in favor of the financial interests of capital. There are currently no well established democratic processes, such as voting on the social or environmental desirability of a new technology prior to development and marketing, that would allow average citizens to direct the course of technological change.
Elements of diffusion
Emphasis has been on four key elements of the technological change process: (1) an innovative technology (2) communicated through certain channels (3) to members of a social system (4) who adopt it over a period of time. These elements are derived from Everett M. Rogers' diffusion of innovations theory using a communications-type approach.
Innovation
Rogers proposed that there are five main attributes of innovative technologies that influence acceptance. He called these criteria ACCTO, which stands for Advantage, Compatibility, Complexity, Trialability, and Observability. Relative advantage may be economic or non-economic, and is the degree to which an innovation is seen as superior to prior innovations fulfilling the same needs. It is positively related to acceptance (e.g. the higher the relative advantage, the higher the adoption level, and vice versa). Compatibility is the degree to which an innovation appears consistent with existing values, past experiences, habits and needs to the potential adopter; a low level of compatibility will slow acceptance. Complexity is the degree to which an innovation appears difficult to understand and use; the more complex an innovation, the slower its acceptance. Trialability is the perceived degree to which an innovation may be tried on a limited basis, and is positively related to acceptance. Trialability can accelerate acceptance because small-scale testing reduces risk. Observability is the perceived degree to which results of innovating are visible to others and is positively related to acceptance.
Communication channels
Communication channels are the means by which a source conveys a message to a receiver. Information may be exchanged through two fundamentally different, yet complementary, channels of communication. Awareness is more often obtained through the mass media, while uncertainty reduction that leads to acceptance mostly results from face-to-face communication.
Social system
The social system provides a medium through which, and boundaries within which, innovation is adopted. The structure of the social system affects technological change in several ways. Social norms, opinion leaders, change agents, government and the consequences of innovations are all involved. Also involved are cultural setting, nature of political institutions, laws, policies and administrative structures.
Time
Time enters into the acceptance process in many ways. The time dimension relates to the innovativeness of an individual or other adopter, which is the relative earliness or lateness with which an innovation is adopted.
Economics
In economics, technological change is a change in the set of feasible production possibilities.
A technological innovation is Hicks neutral, following John Hicks (1932), if a change in technology does not change the ratio of capital's marginal product to labour's marginal product for a given capital-to-labour ratio. A technological innovation is Harrod neutral (following Roy Harrod) if the technology is labour-augmenting (i.e. helps labor); it is Solow neutral if the technology is capital-augmenting (i.e. helps capital). (J. R. Hicks (1932, 2nd ed., 1963). The Theory of Wages, Ch. VI, Appendix, and Section III. Macmillan.)
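To make the Hicks-neutrality condition concrete, here is a small sketch (an assumed Cobb-Douglas production function, not taken from the text) checking numerically that scaling total factor productivity A leaves the ratio of marginal products unchanged at a fixed capital-to-labour ratio.

    def output(A, K, L, alpha=0.3):
        # Hicks-neutral technology: Y = A * K**alpha * L**(1 - alpha)
        return A * K**alpha * L**(1 - alpha)

    def marginal_product_ratio(A, K, L, h=1e-6):
        mpk = (output(A, K + h, L) - output(A, K, L)) / h   # marginal product of capital
        mpl = (output(A, K, L + h) - output(A, K, L)) / h   # marginal product of labour
        return mpk / mpl

    K, L = 200.0, 100.0
    print(marginal_product_ratio(1.0, K, L))   # ratio before the innovation
    print(marginal_product_ratio(2.0, K, L))   # same ratio after A doubles (Hicks neutral)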
See also
Accelerating change
Cultural lag
Innovation
Investment specific technological progress
Posthumanization
Productivity
Productivity improving technologies (historical)
Second Industrial Revolution
Technical change
Technological innovation system
Technological revolution
Technological transitions
Technological unemployment
Theories of technology
Wait calculation
References
Notes
Further reading
Books
Jones, Charles I. (1997). Introduction to Economic Growth. W. W. Norton.
Kuhn, Thomas Samuel (1996). The Structure of Scientific Revolutions, 3rd edition. University of Chicago Press.
Mansfield, Edwin (2003). Microeconomic Theory and Applications, 11th edition. W. W. Norton
Rogers, Everett (2003). Diffusion of Innovations, 5th edition, Free Press.
Green, L (2001). Technoculture, Allen and Unwin, Crows Nest, pp. 1–20.
Articles
Danna, W. (2007). "They Had a Satellite and They Knew How to Use It", American Journalism, Spring, Vol. 24 Issue 2, pp. 87–110. Online source: abstract and excerpt.
Dickey, Colin (January 2015), A fault in our design. "Perhaps a brighter technological future lies less in the latest gadgets, and rather in learning to understand ourselves better, particularly our capacity to forget what we’ve already learned. The future of technology is nothing without a long view of the past, and a means to embody history’s mistakes and lessons." Aeon
Hanlon, Michael (December 2014), The golden quarter. "Some of our greatest cultural and technological achievements took place between 1945 and 1971. Why has progress stalled?" Aeon
External links
Innovation
Engineering studies | 0.769958 | 0.991396 | 0.763333 |
Hemimetabolism | Hemimetabolism or hemimetaboly, also called partial metamorphosis and paurometabolism, is the mode of development of certain insects that includes three distinct stages: the egg, nymph, and the adult stage, or imago. These groups go through gradual changes; there is no pupal stage. The nymph often has a thin exoskeleton and resembles the adult stage but lacks wings and functional reproductive organs. The hemimetabolous insects differ from ametabolous taxa in that the one and only adult instar undergoes no further moulting.
Orders
All insects of the Pterygota except Holometabola belong to hemimetabolous orders:
Hemiptera (scale insects, aphids, whitefly, cicadas, leafhoppers, and true bugs)
Orthoptera (grasshoppers, locusts, and crickets)
Mantodea (praying mantises)
Blattodea (cockroaches and termites)
Dermaptera (earwigs)
Odonata (dragonflies and damselflies)
Phasmatodea (stick insects)
Phthiraptera (sucking lice)
Ephemeroptera (mayflies)
Plecoptera (stoneflies)
Notoptera (icebugs and gladiators)
Terminology of aquatic entomology
In aquatic entomology, different terminology is used when categorizing insects with gradual or partial metamorphosis. Paurometabolism (gradual) refers to insects whose nymphs occupy the same environment as the adults, as in the family Gerridae of Hemiptera. The hemimetabolous (partial) insects are those whose nymphs, called naiads, occupy aquatic habitats while the adults are terrestrial. This includes all members of the orders Plecoptera, Ephemeroptera, and Odonata. Aquatic entomologists use this categorization because it specifies whether the adult will occupy an aquatic or semi aquatic habitat, or will be terrestrial.
See also
Ametabolism
Holometabolism
Subimago
Metamorphosis
References
Insect developmental biology
Incomplete metamorphosis | 0.774049 | 0.986152 | 0.76333 |
Baroclinity | In fluid dynamics, the baroclinity (often called baroclinicity) of a stratified fluid is a measure of how misaligned the gradient of pressure is from the gradient of density in a fluid. In meteorology a baroclinic flow is one in which the density depends on both temperature and pressure (the fully general case). A simpler case, barotropic flow, allows for density dependence only on pressure, so that the curl of the pressure-gradient force vanishes.
Baroclinity is proportional to:
∇p × ∇ρ,
which is proportional to the sine of the angle between surfaces of constant pressure and surfaces of constant density. Thus, in a barotropic fluid (which is defined by zero baroclinity), these surfaces are parallel.
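The sketch below illustrates this misalignment measure numerically; the pressure and density fields are simple made-up analytic functions (and NumPy is assumed), chosen only so that their gradients meet at a known angle.

    import numpy as np

    # Coarse 2D grid; indexing "ij" means axis 0 is x and axis 1 is y.
    x = np.linspace(0.0, 1.0, 50)
    y = np.linspace(0.0, 1.0, 50)
    X, Y = np.meshgrid(x, y, indexing="ij")

    p = 1.0e5 - 1.0e3 * Y            # pressure surfaces are level (vary with y only)
    rho = 1.2 - 0.1 * (X + Y)        # density surfaces are tilted -> gradients misaligned

    dp_dx, dp_dy = np.gradient(p, x, y)
    drho_dx, drho_dy = np.gradient(rho, x, y)

    # z-component of grad(p) x grad(rho): zero only where the two gradients are parallel.
    baroclinic = dp_dx * drho_dy - dp_dy * drho_dx

    sin_angle = np.abs(baroclinic) / (np.hypot(dp_dx, dp_dy) * np.hypot(drho_dx, drho_dy))
    print(float(sin_angle.mean()))   # ~0.707 here: the gradients meet at 45 degrees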
In Earth's atmosphere, barotropic flow is a better approximation in the tropics, where density surfaces and pressure surfaces are both nearly level, whereas in higher latitudes the flow is more baroclinic. These midlatitude belts of high atmospheric baroclinity are characterized by the frequent formation of synoptic-scale cyclones, although these are not really dependent on the baroclinity term per se: for instance, they are commonly studied on pressure coordinate iso-surfaces where that term has no contribution to vorticity production.
Baroclinic instability
Baroclinic instability is a fluid dynamical instability of fundamental importance in the atmosphere and in the oceans. In the atmosphere it is the principal mechanism shaping the cyclones and anticyclones that dominate weather in mid-latitudes. In the ocean it generates a field of mesoscale eddies (100 km or smaller) that play various roles in oceanic dynamics and the transport of tracers.
Whether a fluid counts as rapidly rotating is determined in this context by the Rossby number, which is a measure of how close the flow is to solid body rotation. More precisely, a flow in solid body rotation has vorticity that is proportional to its angular velocity. The Rossby number is a measure of the departure of the vorticity from that of solid body rotation. The Rossby number must be small for the concept of baroclinic instability to be relevant. When the Rossby number is large, other kinds of instabilities, often referred to as inertial, become more relevant.
The simplest example of a stably stratified flow is an incompressible flow with density decreasing with height.
In a compressible gas such as the atmosphere, the relevant measure is the vertical gradient of the entropy, which must increase with height for the flow to be stably stratified.
The strength of the stratification is measured by asking how large the vertical shear of the horizontal winds has to be in order to destabilize the flow and produce the classic Kelvin–Helmholtz instability. This measure is called the Richardson number. When the Richardson number is large, the stratification is strong enough to prevent this shear instability.
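For concreteness, the sketch below evaluates the standard gradient Richardson number; its formula, Ri = N^2 / (dU/dz)^2 with buoyancy frequency N, is a textbook definition supplied here as an assumption (the text does not state it), and the numbers are made up.

    # Gradient Richardson number: Ri = N^2 / (dU/dz)^2
    g = 9.81                # m s^-2, gravitational acceleration
    theta = 300.0           # K, reference potential temperature
    dtheta_dz = 0.004       # K m^-1, stable stratification
    dU_dz = 0.002           # s^-1, vertical shear of the horizontal wind

    N_squared = (g / theta) * dtheta_dz    # squared buoyancy (Brunt-Vaisala) frequency
    Ri = N_squared / dU_dz**2
    print(Ri)   # ~33, far above the ~0.25 threshold, so shear instability is suppressed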
Before the classic work of Jule Charney and Eric Eady on baroclinic instability in the late 1940s, most theories trying to explain the structure of mid-latitude eddies took as their starting points the high Rossby number or small Richardson number instabilities familiar to fluid dynamicists at that time. The most important feature of baroclinic instability is that it exists even in the situation of rapid rotation (small Rossby number) and strong stable stratification (large Richardson number) typically observed in the atmosphere.
The energy source for baroclinic instability is the potential energy in the environmental flow. As the instability grows, the center of mass of the fluid is lowered.
In growing waves in the atmosphere, cold air moving downwards and equatorwards displaces the warmer air moving polewards and upwards.
Baroclinic instability can be investigated in the laboratory using a rotating, fluid filled annulus. The annulus is heated at the outer wall and cooled at the inner wall, and the resulting fluid flows give rise to baroclinically unstable waves.
The term "baroclinic" refers to the mechanism by which vorticity is generated. Vorticity is the curl of the velocity field. In general, the evolution of vorticity can be broken into contributions from advection (as vortex tubes move with the flow), stretching and twisting (as vortex tubes are pulled or twisted by the flow) and baroclinic vorticity generation, which occurs whenever there is a density gradient along surfaces of constant pressure. Baroclinic flows can be contrasted with barotropic flows in which density and pressure surfaces coincide and there is no baroclinic generation of vorticity.
The study of the evolution of these baroclinic instabilities as they grow and then decay is a crucial part of developing theories for the fundamental characteristics of midlatitude weather.
Baroclinic vector
Beginning with the equation of motion for a frictionless fluid (the Euler equations) and taking the curl, one arrives at the equation of motion for the curl of the fluid velocity, that is to say, the vorticity.
In a fluid that is not all of the same density, a source term appears in the vorticity equation whenever surfaces of constant density (isopycnic surfaces) and surfaces of constant pressure (isobaric surfaces) are not aligned. The material derivative of the local vorticity is given by:
Dω/Dt = (ω · ∇)u − ω(∇ · u) + (1/ρ²) ∇ρ × ∇p
(where u is the velocity and ω = ∇ × u is the vorticity, p is the pressure, and ρ is the density). The baroclinic contribution is the vector:
(1/ρ²) ∇ρ × ∇p
This vector, sometimes called the solenoidal vector, is of interest both in compressible fluids and in incompressible (but inhomogeneous) fluids. Internal gravity waves as well as unstable Rayleigh–Taylor modes can be analyzed from the perspective of the baroclinic vector. It is also of interest in the creation of vorticity by the passage of shocks through inhomogeneous media, such as in the Richtmyer–Meshkov instability.
Experienced divers are familiar with the very slow waves that can be excited at a thermocline or a halocline, which are known as internal waves. Similar waves can be generated between a layer of water and a layer of oil. When the interface between these two surfaces is not horizontal and the system is close to hydrostatic equilibrium, the gradient of the pressure is vertical but the gradient of the density is not. Therefore the baroclinic vector is nonzero, and the sense of the baroclinic vector is to create vorticity to make the interface level out. In the process, the interface overshoots, and the result is an oscillation which is an internal gravity wave. Unlike surface gravity waves, internal gravity waves do not require a sharp interface. For example, in bodies of water, a gradual gradient in temperature or salinity is sufficient to support internal gravity waves driven by the baroclinic vector.
References
Bibliography
External links
Fluid dynamics
Atmospheric dynamics | 0.773277 | 0.987128 | 0.763323 |
Quantum decoherence | Quantum decoherence is the loss of quantum coherence. Quantum decoherence has been studied to understand how quantum systems convert to systems which can be explained by classical mechanics. Beginning out of attempts to extend the understanding of quantum mechanics, the theory has developed in several directions and experimental studies have confirmed some of the key issues. Quantum computing relies on quantum coherence and is one of the primary practical applications of the concept.
Concept
In quantum mechanics,
physical systems are described by a mathematical representation called a wave function; a probabilistic interpretation of the wave function is used to explain various quantum effects. The wave function describes various states and as long as there exists a definite phase relation between different states, the system is said to be coherent. In the absence of outside forces or interactions, coherence is preserved under the laws of quantum physics.
If a quantum system were perfectly isolated, it would maintain coherence indefinitely, but it would be impossible to manipulate or investigate it. If it is not perfectly isolated, for example during a measurement, coherence is shared with the environment and appears to be lost with time ─ a process called quantum decoherence or environmental decoherence. The quantum coherence is not lost but rather mixed with many more degrees of freedom in the environment, analogous to the way energy appears to be lost by friction in classical mechanics when it has actually been converted into heat in the environment.
Decoherence can be viewed as the loss of information from a system into the environment (often modeled as a heat bath), since every system is loosely coupled with the energetic state of its surroundings. Viewed in isolation, the system's dynamics are non-unitary (although the combined system plus environment evolves in a unitary fashion). Thus the dynamics of the system alone are irreversible. As with any coupling, entanglements are generated between the system and environment. These have the effect of sharing quantum information with—or transferring it to—the surroundings.
History and interpretation
Relation to interpretation of quantum mechanics
An interpretation of quantum mechanics is an attempt to explain how the mathematical theory of quantum physics might correspond to experienced reality. Decoherence calculations can be done in any interpretation of quantum mechanics, since those calculations are an application of the standard mathematical tools of quantum theory. However, the subject of decoherence has been closely related to the problem of interpretation throughout its history.
Decoherence has been used to understand the possibility of the collapse of the wave function in quantum mechanics. Decoherence does not generate actual wave-function collapse. It only provides a framework for apparent wave-function collapse, as the quantum nature of the system "leaks" into the environment. That is, components of the wave function are decoupled from a coherent system and acquire phases from their immediate surroundings. A total superposition of the global or universal wavefunction still exists (and remains coherent at the global level), but its ultimate fate remains an interpretational issue.
With respect to the measurement problem, decoherence provides an explanation for the transition of the system to a mixture of states that seem to correspond to those states observers perceive. Moreover, observation indicates that this mixture looks like a proper quantum ensemble in a measurement situation, as the measurements lead to the "realization" of precisely one state in the "ensemble".
The philosophical views of Werner Heisenberg and Niels Bohr have often been grouped together as the "Copenhagen interpretation", despite significant divergences between them on important points. In 1955, Heisenberg suggested that the interaction of a system with its surrounding environment would eliminate quantum interference effects. However, Heisenberg did not provide a detailed account of how this might transpire, nor did he make explicit the importance of entanglement in the process.
Origin of the concepts
Nevill Mott's solution to the iconic Mott problem in 1929 is considered in retrospect to be the first quantum decoherence work. It was cited by the first modern theoretical treatment.
Although he did not use the term, the concept of quantum decoherence was first introduced in 1951 by the American physicist David Bohm, who called it the "destruction of interference in the process of measurement". Bohm later used decoherence to handle the measurement process in the de Broglie-Bohm interpretation of quantum theory.
The significance of decoherence was further highlighted in 1970 by the German physicist H. Dieter Zeh, and it has been a subject of active research since the 1980s. Decoherence has been developed into a complete framework, but there is controversy as to whether it solves the measurement problem, as the founders of decoherence theory admit in their seminal papers.
The study of decoherence as a proper subject began in 1970, with H. Dieter Zeh's paper "On the Interpretation of Measurement in Quantum Theory". Zeh regarded the wavefunction as a physical entity, rather than a calculational device or a compendium of statistical information (as is typical for Copenhagen-type interpretations), and he proposed that it should evolve unitarily, in accord with the Schrödinger equation, at all times. Zeh was initially unaware of Hugh Everett III's earlier work, which also proposed a universal wavefunction evolving unitarily; he revised his paper to reference Everett after learning of Everett's "relative-state interpretation" through an article by Bryce DeWitt. (DeWitt was the one who termed Everett's proposal the many-worlds interpretation, by which name it is commonly known.) For Zeh, the question of how to interpret quantum mechanics was of key importance, and an interpretation along the lines of Everett's was the most natural. Partly because of a general disinterest among physicists for interpretational questions, Zeh's work remained comparatively neglected until the early 1980s, when two papers by Wojciech Zurek invigorated the subject. Unlike Zeh's publications, Zurek's articles were fairly agnostic about interpretation, focusing instead on specific problems of density-matrix dynamics. Zurek's interest in decoherence stemmed from furthering Bohr's analysis of the double-slit experiment in his reply to the Einstein–Podolsky–Rosen paradox, work he had undertaken with Bill Wootters, and he has since argued that decoherence brings a kind of rapprochement between Everettian and Copenhagen-type views.
Decoherence does not claim to provide a mechanism for some actual wave-function collapse; rather it puts forth a reasonable framework for the appearance of wave-function collapse. The quantum nature of the system is simply "leaked" into the environment so that a total superposition of the wave function still exists, but exists—at least for all practical purposes—beyond the realm of measurement. By definition, the claim that a merged but unmeasurable wave function still exists cannot be proven experimentally. Decoherence is needed to understand why a quantum system begins to obey classical probability rules after interacting with its environment (due to the suppression of the interference terms when applying Born's probability rules to the system).
Criticism of the adequacy of decoherence theory to solve the measurement problem has been expressed by Anthony Leggett.
Mechanisms
To examine how decoherence operates, an "intuitive" model is presented below. The model requires some familiarity with quantum theory basics. Analogies are made between visualizable classical phase spaces and Hilbert spaces. A more rigorous derivation in Dirac notation shows how decoherence destroys interference effects and the "quantum nature" of systems. Next, the density matrix approach is presented for perspective.
Phase-space picture
An N-particle system can be represented in non-relativistic quantum mechanics by a wave function ψ(x1, x2, ..., xN), where each xi is a point in 3-dimensional space. This has analogies with the classical phase space. A classical phase space contains a real-valued function in 6N dimensions (each particle contributes 3 spatial coordinates and 3 momenta). In this case a "quantum" phase space, on the other hand, involves a complex-valued function on a 3N-dimensional space. The position and momenta are represented by operators that do not commute, and the wave function lives in the mathematical structure of a Hilbert space. Aside from these differences, however, the rough analogy holds.
Different previously isolated, non-interacting systems occupy different phase spaces. Alternatively we can say that they occupy different lower-dimensional subspaces in the phase space of the joint system. The effective dimensionality of a system's phase space is the number of degrees of freedom present, which—in non-relativistic models—is 6 times the number of a system's free particles. For a macroscopic system this will be a very large dimensionality. When two systems (the environment being one system) start to interact, though, their associated state vectors are no longer constrained to the subspaces. Instead the combined state vector time-evolves along a path through the "larger volume", whose dimensionality is the sum of the dimensions of the two subspaces. The extent to which two vectors interfere with each other is a measure of how "close" they are to each other (formally, their overlap or Hilbert-space inner product) in the phase space. When a system couples to an external environment, the dimensionality of, and hence "volume" available to, the joint state vector increases enormously. Each environmental degree of freedom contributes an extra dimension.
The original system's wave function can be expanded in many different ways as a sum of elements in a quantum superposition. Each expansion corresponds to a projection of the wave vector onto a basis. The basis can be chosen at will. Choosing an expansion where the resulting basis elements interact with the environment in an element-specific way, such elements will—with overwhelming probability—be rapidly separated from each other by their natural unitary time evolution along their own independent paths. After a very short interaction, there is almost no chance of further interference. The process is effectively irreversible. The different elements effectively become "lost" from each other in the expanded phase space created by coupling with the environment. In phase space, this decoupling is monitored through the Wigner quasi-probability distribution. The original elements are said to have decohered. The environment has effectively selected out those expansions or decompositions of the original state vector that decohere (or lose phase coherence) with each other. This is called "environmentally-induced superselection", or einselection. The decohered elements of the system no longer exhibit quantum interference between each other, as in a double-slit experiment. Any elements that decohere from each other via environmental interactions are said to be quantum-entangled with the environment. The converse is not true: not all entangled states are decohered from each other.
Any measuring device or apparatus acts as an environment, since at some stage along the measuring chain, it has to be large enough to be read by humans. It must possess a very large number of hidden degrees of freedom. In effect, the interactions may be considered to be quantum measurements. As a result of an interaction, the wave functions of the system and the measuring device become entangled with each other. Decoherence happens when different portions of the system's wave function become entangled in different ways with the measuring device. For two einselected elements of the entangled system's state to interfere, both the original system and the measuring device in both elements must significantly overlap, in the scalar product sense. If the measuring device has many degrees of freedom, it is very unlikely for this to happen.
As a consequence, the system behaves as a classical statistical ensemble of the different elements rather than as a single coherent quantum superposition of them. From the perspective of each ensemble member's measuring device, the system appears to have irreversibly collapsed onto a state with a precise value for the measured attributes, relative to that element. This provides one explanation of how the Born rule coefficients effectively act as probabilities as per the measurement postulate constituting a solution to the quantum measurement problem.
Dirac notation
Using Dirac notation, let the system initially be in the state
|ψ⟩ = Σ_i |i⟩ ⟨i|ψ⟩,
where the |i⟩ form an einselected basis (environmentally induced selected eigenbasis), and let the environment initially be in the state |ε⟩. The vector basis of the combination of the system and the environment consists of the tensor products of the basis vectors of the two subsystems. Thus, before any interaction between the two subsystems, the joint state can be written as
|before⟩ = Σ_i |i⟩ |ε⟩ ⟨i|ψ⟩,
where |i⟩|ε⟩ is shorthand for the tensor product |i⟩ ⊗ |ε⟩. There are two extremes in the way the system can interact with its environment: either (1) the system loses its distinct identity and merges with the environment (e.g. photons in a cold, dark cavity get converted into molecular excitations within the cavity walls), or (2) the system is not disturbed at all, even though the environment is disturbed (e.g. the idealized non-disturbing measurement). In general, an interaction is a mixture of these two extremes that we examine.
System absorbed by environment
If the environment absorbs the system, each element of the total system's basis interacts with the environment such that
|i⟩|ε⟩ evolves into |ε_i⟩,
and so
|before⟩ = Σ_i |i⟩|ε⟩⟨i|ψ⟩ evolves into |after⟩ = Σ_i |ε_i⟩⟨i|ψ⟩.
The unitarity of time evolution demands that the total state basis remains orthonormal, i.e. the scalar or inner products of the basis vectors must vanish, since ⟨i|j⟩ = δ_ij:
⟨ε_i|ε_j⟩ = δ_ij.
This orthonormality of the environment states is the defining characteristic required for einselection.
System not disturbed by environment
In an idealized measurement, the system disturbs the environment, but is itself undisturbed by the environment. In this case, each element of the basis interacts with the environment such that
|i⟩|ε⟩ evolves into the product |i⟩|ε_i⟩,
and so
|before⟩ = Σ_i |i⟩|ε⟩⟨i|ψ⟩ evolves into |after⟩ = Σ_i |i⟩|ε_i⟩⟨i|ψ⟩.
In this case, unitarity demands that
⟨after|after⟩ = Σ_i |⟨i|ψ⟩|² ⟨ε_i|ε_i⟩ = 1,
where ⟨ε_i|ε_i⟩ = 1 was used. Additionally, decoherence requires, by virtue of the large number of hidden degrees of freedom in the environment, that
⟨ε_i|ε_j⟩ ≈ δ_ij.
As before, this is the defining characteristic for decoherence to become einselection. The approximation becomes more exact as the number of environmental degrees of freedom affected increases.
Note that if the system basis were not an einselected basis, then the last condition is trivial, since the disturbed environment is not a function of , and we have the trivial disturbed environment basis . This would correspond to the system basis being degenerate with respect to the environmentally defined measurement observable. For a complex environmental interaction (which would be expected for a typical macroscale interaction) a non-einselected basis would be hard to define.
Loss of interference and the transition from quantum to classical probabilities
The utility of decoherence lies in its application to the analysis of probabilities, before and after environmental interaction, and in particular to the vanishing of quantum interference terms after decoherence has occurred. If we ask what is the probability of observing the system making a transition from |ψ⟩ to |φ⟩ before it has interacted with its environment, then application of the Born probability rule states that the transition probability is the squared modulus of the scalar product of the two states:
prob_before(ψ → φ) = |⟨φ|ψ⟩|² = |Σ_i φ_i* ψ_i|² = Σ_i |φ_i* ψ_i|² + Σ_{i≠j} φ_i* ψ_i φ_j ψ_j*,
where ψ_i = ⟨i|ψ⟩, φ_i = ⟨i|φ⟩, and so on.
The above expansion of the transition probability has terms with i ≠ j; these can be thought of as representing interference between the different basis elements or quantum alternatives. This is a purely quantum effect and represents the non-additivity of the probabilities of quantum alternatives.
To calculate the probability of observing the system making a quantum leap from |ψ⟩ to |φ⟩ after it has interacted with its environment, application of the Born probability rule states that we must sum over all the relevant possible states |ε_j⟩ of the environment before squaring the modulus:
prob_after(ψ → φ) = Σ_j |Σ_i φ_i* ψ_i ⟨ε_j|ε_i⟩|².
The internal summation vanishes when we apply the decoherence/einselection condition ⟨ε_j|ε_i⟩ ≈ δ_ij, and the formula simplifies to
prob_after(ψ → φ) ≈ Σ_i |φ_i* ψ_i|².
If we compare this with the formula we derived before the environment introduced decoherence, we can see that the effect of decoherence has been to move the summation sign from inside of the modulus sign to outside. As a result, all the cross- or quantum interference-terms
Σ_{i≠j} φ_i* ψ_i φ_j ψ_j*
have vanished from the transition-probability calculation. The decoherence has irreversibly converted quantum behaviour (additive probability amplitudes) to classical behaviour (additive probabilities).
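A small numerical sketch of this before/after contrast (NumPy assumed; the states are random and purely illustrative): the coherent probability squares a summed amplitude, while the decohered one adds the squared terms with no cross terms.

    import numpy as np

    rng = np.random.default_rng(0)
    d = 4
    psi = rng.normal(size=d) + 1j * rng.normal(size=d)
    phi = rng.normal(size=d) + 1j * rng.normal(size=d)
    psi /= np.linalg.norm(psi)
    phi /= np.linalg.norm(phi)

    # Before decoherence: modulus-squared of the summed amplitude (interference included).
    prob_before = abs(np.vdot(phi, psi)) ** 2

    # After decoherence/einselection: the cross terms are gone, probabilities simply add.
    prob_after = float(np.sum(np.abs(np.conj(phi) * psi) ** 2))

    print(prob_before, prob_after)   # the difference is the lost interference contribution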
However, Ballentine shows that the significant reduction of interference by decoherence need not have significance for the transition of quantum systems to classical limits.
In terms of density matrices, the loss of interference effects corresponds to the diagonalization of the "environmentally traced-over" density matrix.
Density-matrix approach
The effect of decoherence on density matrices is essentially the decay or rapid vanishing of the off-diagonal elements of the partial trace of the joint system's density matrix, i.e. the trace, with respect to any environmental basis, of the density matrix of the combined system and its environment. The decoherence irreversibly converts the "averaged" or "environmentally traced-over" density matrix from a pure state to a reduced mixture; it is this that gives the appearance of wave-function collapse. Again, this is called "environmentally induced superselection", or einselection. The advantage of taking the partial trace is that this procedure is indifferent to the environmental basis chosen.
Initially, the density matrix of the combined system can be denoted as
where is the state of the environment.
Then if the transition happens before any interaction takes place between the system and the environment, the environment subsystem has no part and can be traced out, leaving the reduced density matrix for the system:
Now the transition probability will be given as
where , , and etc.
Now consider the case when the transition takes place after the interaction of the system with the environment. The combined density matrix will be
To get the reduced density matrix of the system, we trace out the environment and employ the decoherence/einselection condition and see that the off-diagonal terms vanish (a result obtained by Erich Joos and H. D. Zeh in 1985):
Similarly, the final reduced density matrix after the transition will be
The transition probability will then be given as
which has no contribution from the interference terms
The density-matrix approach has been combined with the Bohmian approach to yield a reduced-trajectory approach, taking into account the system reduced density matrix and the influence of the environment.
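The following sketch (NumPy assumed; the states and overlaps are made up) illustrates the central point of this section: tracing out the environment leaves a reduced density matrix whose off-diagonal elements are weighted by the overlaps of the environment states, so nearly orthogonal environment states leave an almost diagonal, classical-looking mixture.

    import numpy as np

    def reduced_density_matrix(coeffs, env_states):
        """rho_S = Tr_E |Psi><Psi| for |Psi> = sum_i c_i |i> (x) |eps_i>."""
        d = len(coeffs)
        rho = np.zeros((d, d), dtype=complex)
        for i in range(d):
            for j in range(d):
                # Off-diagonal terms carry the environment overlap <eps_j|eps_i>.
                rho[i, j] = coeffs[i] * np.conj(coeffs[j]) * np.vdot(env_states[j], env_states[i])
        return rho

    c = np.array([1.0, 1.0]) / np.sqrt(2)
    e0 = np.array([1.0, 0.0])
    barely_disturbed = [e0, np.array([np.cos(0.1), np.sin(0.1)])]  # nearly identical records
    fully_recorded   = [e0, np.array([0.0, 1.0])]                  # orthogonal "which-path" records

    print(reduced_density_matrix(c, barely_disturbed).round(3))  # off-diagonals ~0.5: coherent
    print(reduced_density_matrix(c, fully_recorded).round(3))    # off-diagonals 0: decohered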
Operator-sum representation
Consider a system S and environment (bath) B, which are closed and can be treated quantum-mechanically. Let and be the system's and bath's Hilbert spaces respectively. Then the Hamiltonian for the combined system is
where are the system and bath Hamiltonians respectively, is the interaction Hamiltonian between the system and bath, and are the identity operators on the system and bath Hilbert spaces respectively. The time-evolution of the density operator of this closed system is unitary and, as such, is given by
where the unitary operator is . If the system and bath are not entangled initially, then we can write . Therefore, the evolution of the system becomes
The system–bath interaction Hamiltonian can be written in a general form as
where is the operator acting on the combined system–bath Hilbert space, and are the operators that act on the system and bath respectively. This coupling of the system and bath is the cause of decoherence in the system alone. To see this, a partial trace is performed over the bath to give a description of the system alone:
is called the reduced density matrix and gives information about the system only. If the bath is written in terms of its set of orthogonal basis kets, that is, if it has been initially diagonalized, then . Computing the partial trace with respect to this (computational) basis gives
where are defined as the Kraus operators and are represented as (the index combines indices and ):
This is known as the operator-sum representation (OSR). A condition on the Kraus operators can be obtained by using the fact that ; this then gives
This restriction determines whether decoherence will occur or not in the OSR. In particular, when there is more than one term present in the sum for , then the dynamics of the system will be non-unitary, and hence decoherence will take place.
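As a concrete instance of the operator-sum representation, here is a sketch (NumPy assumed) using a standard single-qubit phase-damping channel; the Kraus operators and the parameter lam are illustrative choices, not taken from the text. The completeness condition (the sum of K†K equals the identity) is checked, and because more than one Kraus operator appears, applying the channel shrinks the off-diagonal coherences.

    import numpy as np

    lam = 0.3   # assumed dephasing strength
    K0 = np.sqrt(1 - lam) * np.eye(2)
    K1 = np.sqrt(lam) * np.diag([1.0, 0.0])
    K2 = np.sqrt(lam) * np.diag([0.0, 1.0])
    kraus = [K0, K1, K2]

    # Completeness: sum_mu K_mu^dagger K_mu = I
    print(np.allclose(sum(K.conj().T @ K for K in kraus), np.eye(2)))   # True

    rho = 0.5 * np.ones((2, 2))                          # maximally coherent qubit state
    rho_out = sum(K @ rho @ K.conj().T for K in kraus)   # apply the channel in OSR form
    print(rho_out)                                       # off-diagonals reduced to (1 - lam)/2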
Semigroup approach
A more general consideration for the existence of decoherence in a quantum system is given by the master equation, which determines how the density matrix of the system alone evolves in time (see also the Belavkin equation for the evolution under continuous measurement). This uses the Schrödinger picture, where evolution of the state (represented by its density matrix) is considered. The master equation is
where is the system Hamiltonian along with a (possible) unitary contribution from the bath, and is the Lindblad decohering term. The Lindblad decohering term is represented as
The are basis operators for the M-dimensional space of bounded operators that act on the system Hilbert space and are the error generators. The matrix elements represent the elements of a positive semi-definite Hermitian matrix; they characterize the decohering processes and, as such, are called the noise parameters. The semigroup approach is particularly nice, because it distinguishes between the unitary and decohering (non-unitary) processes, which is not the case with the OSR. In particular, the non-unitary dynamics are represented by , whereas the unitary dynamics of the state are represented by the usual Heisenberg commutator. Note that when , the dynamical evolution of the system is unitary. The conditions for the evolution of the system density matrix to be described by the master equation are:
the evolution of the system density matrix is determined by a one-parameter semigroup
the evolution is "completely positive" (i.e. probabilities are preserved)
the system and bath density matrices are initially decoupled
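A minimal sketch of such a master equation in action (NumPy assumed): forward-Euler integration of a single-qubit Lindblad equation with a single dephasing operator. The choice of the Pauli-Z Lindblad operator, the rate, and the step size are assumptions made for illustration, not details given in the text.

    import numpy as np

    # d(rho)/dt = -i[H, rho] + gamma * (Z rho Z - rho)   (dephasing Lindbladian, since Z^2 = I)
    Z = np.diag([1.0, -1.0]).astype(complex)
    H = np.zeros((2, 2), dtype=complex)        # no Hamiltonian term: pure decoherence
    gamma, dt, steps = 1.0, 0.001, 2000

    rho = 0.5 * np.ones((2, 2), dtype=complex)  # start in |+><+|, maximal coherence
    for _ in range(steps):
        drho = -1j * (H @ rho - rho @ H) + gamma * (Z @ rho @ Z - rho)
        rho = rho + dt * drho

    print(rho.round(4))   # populations unchanged; coherences ~0.5 * exp(-2 * gamma * t), t = 2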
Non-unitary modelling examples
Decoherence can be modelled as a non-unitary process by which a system couples with its environment (although the combined system plus environment evolves in a unitary fashion). Thus the dynamics of the system alone, treated in isolation, are non-unitary and, as such, are represented by irreversible transformations acting on the system's Hilbert space . Since the system's dynamics are represented by irreversible representations, then any information present in the quantum system can be lost to the environment or heat bath. Alternatively, the decay of quantum information caused by the coupling of the system to the environment is referred to as decoherence. Thus decoherence is the process by which information of a quantum system is altered by the system's interaction with its environment (which form a closed system), hence creating an entanglement between the system and heat bath (environment). As such, since the system is entangled with its environment in some unknown way, a description of the system by itself cannot be made without also referring to the environment (i.e. without also describing the state of the environment).
Rotational decoherence
Consider a system of N qubits that is coupled to a bath symmetrically. Suppose this system of N qubits undergoes a rotation around the eigenstates of . Then under such a rotation, a random phase will be created between the eigenstates , of . Thus these basis qubits and will transform in the following way:
This transformation is performed by the rotation operator
Since any qubit in this space can be expressed in terms of the basis qubits, then all such qubits will be transformed under this rotation. Consider the th qubit in a pure state where . Before application of the rotation this state is:
.
This state will decohere, since it is not ‘encoded’ with (dependent upon) the dephasing factor . This can be seen by examining the density matrix averaged over the random phase :
,
where is a probability measure of the random phase, . Although not entirely necessary, let us assume for simplicity that this is given by the Gaussian distribution, i.e. , where represents the spread of the random phase. Then the density matrix computed as above is
.
Observe that the off-diagonal elements—the coherence terms—decay as the spread of the random phase, , increases over time (which is a realistic expectation). Thus the density matrices for each qubit of the system become indistinguishable over time. This means that no measurement can distinguish between the qubits, thus creating decoherence between the various qubit states. In particular, this dephasing process causes the qubits to collapse to one of the pure states in . This is why this type of decoherence process is called collective dephasing, because the mutual phases between all qubits of the N-qubit system are destroyed.
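A short Monte Carlo sketch of this collective-dephasing average (NumPy assumed; the Gaussian spreads are made-up values): averaging the pure-state density matrix over the random phase multiplies the off-diagonal coherence by the mean of exp(-i*phi), which for a zero-mean Gaussian shrinks roughly like exp(-sigma^2 / 2).

    import numpy as np

    rng = np.random.default_rng(1)
    a = b = 1 / np.sqrt(2)                      # qubit a|0> + b|1>

    for sigma in (0.1, 1.0, 3.0):               # growing spread of the random phase
        phases = rng.normal(0.0, sigma, size=100_000)
        damping = np.exp(-1j * phases).mean()   # factor multiplying the (0,1) coherence
        coherence = abs(a * np.conj(b) * damping)
        print(f"sigma = {sigma}: |rho_01| = {coherence:.4f}")   # ~0.5 * exp(-sigma**2 / 2)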
Depolarizing
Depolarizing is a non-unitary transformation on a quantum system which maps pure states to mixed states. This is a non-unitary process because any transformation that reverses this process will map states out of their respective Hilbert space thus not preserving positivity (i.e. the original probabilities are mapped to negative probabilities, which is not allowed). The 2-dimensional case of such a transformation would consist of mapping pure states on the surface of the Bloch sphere to mixed states within the Bloch sphere. This would contract the Bloch sphere by some finite amount and the reverse process would expand the Bloch sphere, which cannot happen.
Dissipation
Dissipation is a decohering process by which the populations of quantum states are changed due to entanglement with a bath. An example of this would be a quantum system that can exchange its energy with a bath through the interaction Hamiltonian. If the system is not in its ground state and the bath is at a temperature lower than that of the system's, then the system will give off energy to the bath, and thus higher-energy eigenstates of the system Hamiltonian will decohere to the ground state after cooling and, as such, will all be non-degenerate. Since the states are no longer degenerate, they are not distinguishable, and thus this process is irreversible (non-unitary).
Timescales
Decoherence represents an extremely fast process for macroscopic objects, since these are interacting with many microscopic objects, with an enormous number of degrees of freedom in their natural environment. The process is needed if we are to understand why we tend not to observe quantum behavior in everyday macroscopic objects and why we do see classical fields emerge from the properties of the interaction between matter and radiation for large amounts of matter. The time taken for off-diagonal components of the density matrix to effectively vanish is called the decoherence time. It is typically extremely short for everyday, macroscale processes. A modern basis-independent definition of the decoherence time relies on the short-time behavior of the fidelity between the initial and the time-dependent state or, equivalently, the decay of the purity.
Mathematical details
Assume for the moment that the system in question consists of a subsystem A being studied and the "environment" , and the total Hilbert space is the tensor product of a Hilbert space describing A and a Hilbert space describing , that is,
This is a reasonably good approximation in the case where A and are relatively independent (e.g. there is nothing like parts of A mixing with parts of or conversely). The point is, the interaction with the environment is for all practical purposes unavoidable (e.g. even a single excited atom in a vacuum would emit a photon, which would then go off). Let's say this interaction is described by a unitary transformation U acting upon . Assume that the initial state of the environment is , and the initial state of A is the superposition state
where and are orthogonal, and there is no entanglement initially. Also, choose an orthonormal basis for . (This could be a "continuously indexed basis" or a mixture of continuous and discrete indexes, in which case we would have to use a rigged Hilbert space and be more careful about what we mean by orthonormal, but that's an inessential detail for expository purposes.) Then, we can expand
and
uniquely as
and
respectively. One thing to realize is that the environment contains a huge number of degrees of freedom, a good number of them interacting with each other all the time. This makes the following assumption reasonable in a handwaving way, which can be shown to be true in some simple toy models. Assume that there exists a basis for such that and are all approximately orthogonal to a good degree if i ≠ j and the same thing for and and also for and for any i and j (the decoherence property).
This often turns out to be true (as a reasonable conjecture) in the position basis because how A interacts with the environment would often depend critically upon the position of the objects in A. Then, if we take the partial trace over the environment, we would find the density state is approximately described by
that is, we have a diagonal mixed state, there is no constructive or destructive interference, and the "probabilities" add up classically. The time it takes for U(t) (the unitary operator as a function of time) to display the decoherence property is called the decoherence time.
Experimental observations
Quantitative measurement
The decoherence rate depends on a number of factors, including temperature or uncertainty in position, and many experiments have tried to measure it depending on the external environment.
The process of a quantum superposition gradually obliterated by decoherence was quantitatively measured for the first time by Serge Haroche and his co-workers at the École Normale Supérieure in Paris in 1996. Their approach involved sending individual rubidium atoms, each in a superposition of two states, through a microwave-filled cavity. The two quantum states both cause shifts in the phase of the microwave field, but by different amounts, so that the field itself is also put into a superposition of two states. Due to photon scattering on cavity-mirror imperfection, the cavity field loses phase coherence to the environment. Haroche and his colleagues measured the resulting decoherence via correlations between the states of pairs of atoms sent through the cavity with various time delays between the atoms.
In July 2011, researchers from University of British Columbia and University of California, Santa Barbara showed that applying high magnetic fields to single molecule magnets suppressed two of three known sources of decoherence. They were able to measure the dependence of decoherence on temperature and magnetic field strength.
Applications
Decoherence is a challenge for the practical realization of quantum computers, since such machines are expected to rely heavily on the undisturbed evolution of quantum coherences. They require that the coherence of states be preserved and that decoherence be managed, in order to actually perform quantum computation. The preservation of coherence, and mitigation of decoherence effects, are thus related to the concept of quantum error correction.
In August 2020, scientists reported that ionizing radiation from environmental radioactive materials and cosmic rays may substantially limit the coherence times of qubits if they are not shielded adequately, which may be critical for realizing fault-tolerant superconducting quantum computers in the future.
See also
Dephasing
Dephasing rate SP formula
Einselection
Ghirardi–Rimini–Weber theory
H. Dieter Zeh
Interpretations of quantum mechanics
Objective-collapse theory
Partial trace
Photon polarization
Quantum coherence
Quantum Darwinism
Quantum entanglement
Quantum superposition
Quantum Zeno effect
References
Further reading
Zurek, Wojciech H. (2003). "Decoherence and the transition from quantum to classical – REVISITED", (An updated version of PHYSICS TODAY, 44:36–44 (1991) article)
Berthold-Georg Englert, Marlan O. Scully & Herbert Walther, Quantum Optical Tests of Complementarity, Nature, Vol 351, pp 111–116 (9 May 1991) and (same authors) The Duality in Matter and Light Scientific American, pg 56–61, (December 1994). Demonstrates that complementarity is enforced, and quantum interference effects destroyed, by irreversible object-apparatus correlations, and not, as was previously popularly believed, by Heisenberg's uncertainty principle itself.
Mario Castagnino, Sebastian Fortin, Roberto Laura and Olimpia Lombardi, A general theoretical framework for decoherence in open and closed systems, Classical and Quantum Gravity, 25, pp. 154002–154013, (2008). A general theoretical framework for decoherence is proposed, which encompasses formalisms originally devised to deal just with open or closed systems.
1970 introductions
Articles containing video clips
Decoherence | 0.765928 | 0.996597 | 0.763322 |
Phylogenetics | In biology, phylogenetics is the study of the evolutionary history of life using genetics, which is known as phylogenetic inference. It establishes the relationship between organisms using empirical data and observed heritable traits such as DNA sequences, protein amino acid sequences, and morphology. The result is a phylogenetic tree—a diagram depicting the hypothetical relationships between organisms and their evolutionary history.
The tips of a phylogenetic tree can be living taxa or fossils, which represent the present time or "end" of an evolutionary lineage, respectively. A phylogenetic diagram can be rooted or unrooted. A rooted tree diagram indicates the hypothetical common ancestor of the tree. An unrooted tree diagram (a network) makes no assumption about the ancestral line, and does not show the origin or "root" of the taxa in question or the direction of inferred evolutionary transformations.
In addition to their use for inferring phylogenetic patterns among taxa, phylogenetic analyses are often employed to represent relationships among genes or individual organisms. Such uses have become central to understanding biodiversity, evolution, ecology, and genomes.
Phylogenetics is a component of systematics that uses similarities and differences of the characteristics of species to interpret their evolutionary relationships and origins. Phylogenetics focuses on whether the characteristics of a species reinforce a phylogenetic inference that it diverged from the most recent common ancestor of a taxonomic group.
In the field of cancer research, phylogenetics can be used to study the clonal evolution of tumors and molecular chronology, predicting and showing how cell populations vary throughout the progression of the disease and during treatment, using whole genome sequencing techniques. The evolutionary processes behind cancer progression are quite different from those in most species and are important to phylogenetic inference; these differences manifest in several areas: the types of aberrations that occur, the rates of mutation, the high heterogeneity (variability) of tumor cell subclones, and the absence of genetic recombination.
Phylogenetics can also aid in drug design and discovery. Phylogenetics allows scientists to organize species and can show which species are likely to have inherited particular traits that are medically useful, such as producing biologically active compounds (those that have effects on the human body). For example, in drug discovery, venom-producing animals are particularly useful. Venoms from these animals have yielded several important drugs, e.g., ACE inhibitors and Prialt (ziconotide). To find new venoms, scientists turn to phylogenetics to screen for closely related species that may share the same useful traits. A phylogenetic tree can show which species of fish have evolved venom and which related fish are likely to carry the trait. Using this approach in studying venomous fish, biologists are able to identify the fish species that may be venomous, and they have applied the same approach to other groups such as snakes and lizards.
In forensic science, phylogenetic tools are useful for assessing DNA evidence in court cases. A simple phylogenetic tree of viruses A–E, for instance, shows the relationships between the viruses, e.g., that all of the viruses are descendants of virus A.
HIV forensics uses phylogenetic analysis to track the differences in HIV genes and determine the relatedness of two samples. Phylogenetic analysis has been used in criminal trials to exonerate or implicate individuals. HIV forensics does have its limitations, i.e., it cannot be the sole proof of transmission between individuals, and phylogenetic analysis that shows transmission relatedness does not indicate the direction of transmission.
Taxonomy and classification
Taxonomy is the identification, naming, and classification of organisms. Compared to systemization, classification emphasizes whether a species has characteristics of a taxonomic group. The Linnaean classification system developed in the 1700s by Carolus Linnaeus is the foundation for modern classification methods. Linnaean classification relies on an organism's phenotype or physical characteristics to group and organize species. With the emergence of biochemistry, organism classifications are now usually based on phylogenetic data, and many systematists contend that only monophyletic taxa should be recognized as named groups. The degree to which classification depends on inferred evolutionary history differs depending on the school of taxonomy: phenetics ignores phylogenetic speculation altogether, trying to represent the similarity between organisms instead; cladistics (phylogenetic systematics) tries to reflect phylogeny in its classifications by only recognizing groups based on shared, derived characters (synapomorphies); evolutionary taxonomy tries to take into account both the branching pattern and "degree of difference" to find a compromise between them.
Inference of a phylogenetic tree
Usual methods of phylogenetic inference involve computational approaches implementing the optimality criteria and methods of parsimony, maximum likelihood (ML), and MCMC-based Bayesian inference. All these depend upon an implicit or explicit mathematical model describing the evolution of characters observed.
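As a concrete illustration of the parsimony criterion mentioned above, the following sketch (Python) counts the minimum number of state changes needed to explain one character on a fixed, hypothetical four-taxon tree using Fitch's small-parsimony rule; the tree shape, taxon names, and character states are invented for the example.

# Minimal Fitch small-parsimony sketch: counts the minimum number of state
# changes needed to explain one character on a fixed rooted binary tree.
# The tree, taxa, and states below are hypothetical, chosen for illustration.

def fitch_states(node, states, changes):
    """Post-order pass; returns the candidate state set at this node."""
    if isinstance(node, str):                  # leaf: its observed state
        return {states[node]}
    left, right = node                         # internal node: (left, right) pair
    s_left = fitch_states(left, states, changes)
    s_right = fitch_states(right, states, changes)
    common = s_left & s_right
    if common:                                 # intersection: no extra change needed
        return common
    changes.append(1)                          # disjoint sets: one additional change
    return s_left | s_right

def fitch_score(tree, states):
    changes = []
    fitch_states(tree, states, changes)
    return len(changes)

if __name__ == "__main__":
    tree = (("A", "B"), ("C", "D"))            # the tree ((A,B),(C,D))
    column = {"A": "G", "B": "G", "C": "T", "D": "G"}   # one alignment column
    print(fitch_score(tree, column))           # -> 1 change is sufficient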
Phenetics, popular in the mid-20th century but now largely obsolete, used distance matrix-based methods to construct trees based on overall similarity in morphology or similar observable traits (i.e. in the phenotype or the overall similarity of DNA, not the DNA sequence), which was often assumed to approximate phylogenetic relationships.
Prior to 1950, phylogenetic inferences were generally presented as narrative scenarios. Such methods are often ambiguous and lack explicit criteria for evaluating alternative hypotheses.
Impacts of taxon sampling
In phylogenetic analysis, taxon sampling selects a small group of taxa to represent the evolutionary history of its broader population. This process is also known as stratified sampling or clade-based sampling. The practice occurs given limited resources to compare and analyze every species within a target population. Based on the representative group selected, the construction and accuracy of phylogenetic trees vary, which impacts derived phylogenetic inferences.
Unavailable datasets, such as an organism's incomplete DNA and protein amino acid sequences in genomic databases, directly restrict taxonomic sampling. Consequently, a significant source of error within phylogenetic analysis occurs due to inadequate taxon samples. Accuracy may be improved by increasing the number of genetic samples within its monophyletic group. Conversely, increasing sampling from outgroups extraneous to the target stratified population may decrease accuracy. Long branch attraction is an attributed theory for this occurrence, where nonrelated branches are incorrectly classified together, insinuating a shared evolutionary history.
There is debate over whether increasing the number of taxa sampled improves phylogenetic accuracy more than increasing the number of genes sampled per taxon. Differences in each method's sampling impact the number of nucleotide sites utilized in a sequence alignment, which may contribute to disagreements. For example, phylogenetic trees constructed utilizing a larger number of total nucleotides are generally more accurate, as supported by the bootstrapping replicability of phylogenetic trees from random sampling.
The graphic presented in Taxon Sampling, Bioinformatics, and Phylogenomics compares the correctness of phylogenetic trees generated using fewer taxa and more sites per taxon on the x-axis with more taxa and fewer sites per taxon on the y-axis. With fewer taxa, more genes are sampled amongst the taxonomic group; in comparison, with more taxa added to the taxonomic sampling group, fewer genes are sampled. Each method has the same total number of nucleotide sites sampled. Furthermore, the dotted line represents a 1:1 accuracy between the two sampling methods. As seen in the graphic, most of the plotted points are located below the dotted line, which indicates gravitation toward increased accuracy when sampling fewer taxa with more sites per taxon. The research performed utilizes four different phylogenetic tree construction models to verify the theory: neighbor-joining (NJ), minimum evolution (ME), unweighted maximum parsimony (MP), and maximum likelihood (ML). In the majority of models, sampling fewer taxa with more sites per taxon demonstrated higher accuracy.
Generally, with the alignment of a relatively equal number of total nucleotide sites, sampling more genes per taxon has higher bootstrapping replicability than sampling more taxa. However, unbalanced datasets within genomic databases make increasing the gene comparison per taxon in uncommonly sampled organisms increasingly difficult.
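The bootstrapping mentioned above resamples alignment columns with replacement and re-runs the tree inference on each pseudo-replicate; clade support is then the fraction of replicate trees containing that clade. The fragment below (Python) only sketches the resampling step, under the assumption that the alignment is a dictionary of equal-length sequences and that a separate build_tree function, not defined here, performs the actual inference.

# Sketch of phylogenetic bootstrap resampling. Only the column resampling is
# shown; build_tree stands in for whatever inference method (parsimony, ML,
# Bayesian, ...) is actually used and is assumed rather than implemented here.
import random

def bootstrap_alignments(alignment, replicates=100, seed=0):
    """alignment: dict mapping taxon -> sequence (all sequences equal length).
    Yields pseudo-replicate alignments whose columns are drawn with replacement."""
    rng = random.Random(seed)
    n_sites = len(next(iter(alignment.values())))
    for _ in range(replicates):
        cols = [rng.randrange(n_sites) for _ in range(n_sites)]
        yield {taxon: "".join(seq[i] for i in cols)
               for taxon, seq in alignment.items()}

# Typical use (build_tree is hypothetical):
#   trees = [build_tree(rep) for rep in bootstrap_alignments(alignment, 100)]
#   support for a clade = fraction of trees in which that clade appears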
History
Overview
The term "phylogeny" derives from the German , introduced by Haeckel in 1866, and the Darwinian approach to classification became known as the "phyletic" approach. It can be traced back to Aristotle, who wrote in his Posterior Analytics, "We may assume the superiority ceteris paribus [other things being equal] of the demonstration which derives from fewer postulates or hypotheses."
Ernst Haeckel's recapitulation theory
The modern concept of phylogenetics evolved primarily as a disproof of a previously widely accepted theory. During the late 19th century, Ernst Haeckel's recapitulation theory, or "biogenetic fundamental law", was widely popular. It was often expressed as "ontogeny recapitulates phylogeny", i.e. the development of a single organism during its lifetime, from germ to adult, successively mirrors the adult stages of successive ancestors of the species to which it belongs. But this theory has long been rejected. Instead, ontogeny evolves – the phylogenetic history of a species cannot be read directly from its ontogeny, as Haeckel thought would be possible, but characters from ontogeny can be (and have been) used as data for phylogenetic analyses; the more closely related two species are, the more apomorphies their embryos share.
Timeline of key points
14th century, lex parsimoniae (parsimony principle), William of Ockham, English philosopher, theologian, and Franciscan friar, but the idea actually goes back to Aristotle, as a precursor concept. He introduced the concept of Occam's razor, the problem-solving principle that recommends searching for explanations constructed with the smallest possible set of elements. Though he did not use these exact words, the principle can be summarized as "Entities must not be multiplied beyond necessity." The principle advocates that when presented with competing hypotheses about the same prediction, one should prefer the one that requires the fewest assumptions.
1763, Bayesian probability, Rev. Thomas Bayes, a precursor concept. Bayesian probability began a resurgence in the 1950s, allowing scientists in the computing field to pair traditional Bayesian statistics with other more modern techniques. It is now used as a blanket term for several related interpretations of probability as an amount of epistemic confidence.
18th century, Pierre Simon (Marquis de Laplace), perhaps first to use ML (maximum likelihood), precursor concept. His work gave way to the Laplace distribution, which can be directly linked to least absolute deviations.
1809, evolutionary theory, Philosophie Zoologique, Jean-Baptiste de Lamarck, precursor concept, foreshadowed in the 17th century and 18th century by Voltaire, Descartes, and Leibniz, with Leibniz even proposing evolutionary changes to account for observed gaps suggesting that many species had become extinct, others transformed, and different species that share common traits may have at one time been a single race, also foreshadowed by some early Greek philosophers such as Anaximander in the 6th century BC and the atomists of the 5th century BC, who proposed rudimentary theories of evolution
1837, Darwin's notebooks show an evolutionary tree
1840, American Geologist Edward Hitchcock published what is considered to be the first paleontological "Tree of Life". Many critiques, modifications, and explanations would follow.
1843, distinction between homology and analogy (the latter now referred to as homoplasy), Richard Owen, precursor concept. Homology is the term used to characterize the similarity of features that can be parsimoniously explained by common ancestry. Homoplasy is the term used to describe a feature that has been gained or lost independently in separate lineages over the course of evolution.
1858, Paleontologist Heinrich Georg Bronn (1800–1862) published a hypothetical tree illustrating the paleontological "arrival" of new, similar species following the extinction of an older species. Bronn did not propose a mechanism responsible for such phenomena, precursor concept.
1858, elaboration of evolutionary theory, Darwin and Wallace, also in Origin of Species by Darwin the following year, precursor concept.
1866, Ernst Haeckel, first publishes his phylogeny-based evolutionary tree, precursor concept. Haeckel introduces the now-disproved recapitulation theory. He introduced the term "Cladus" as a taxonomic category just below subphylum.
1893, Dollo's Law of Character State Irreversibility, precursor concept. Dollo's Law of Irreversibility states that "an organism never comes back exactly to its previous state due to the indestructible nature of the past, it always retains some trace of the transitional stages through which it has passed."
1912, ML (maximum likelihood) recommended, analyzed, and popularized by Ronald Fisher, precursor concept. Fisher is one of the main contributors to the early 20th-century revival of Darwinism, and has been called the "greatest of Darwin's successors" for his contributions to the revision of the theory of evolution and his use of mathematics to combine Mendelian genetics and natural selection in the 20th century "modern synthesis".
1921, Tillyard uses term "phylogenetic" and distinguishes between archaic and specialized characters in his classification system.
1940, Lucien Cuénot coined the term "clade" in 1940: "terme nouveau de clade (du grec κλάδος, branche) [a new term, clade (from the Greek klados, meaning branch)]". He used it for evolutionary branching.
1947, Bernhard Rensch introduced the term Kladogenesis in his German book Neuere Probleme der Abstammungslehre Die transspezifische Evolution, translated into English in 1959 as Evolution Above the Species Level (still using the same spelling).
1949, Jackknife resampling, Maurice Quenouille (foreshadowed in '46 by Mahalanobis and extended in '58 by Tukey), precursor concept.
1950, Willi Hennig's classic formalization. Hennig is considered the founder of phylogenetic systematics, and published his first works in German in this year. He also asserted a version of the parsimony principle, stating that the presence of apomorphous characters in different species 'is always reason for suspecting kinship, and that their origin by convergence should not be presumed a priori'. This has been considered a foundational view of phylogenetic inference.
1952, William Wagner's ground plan divergence method.
1957, Julian Huxley adopted Rensch's terminology as "cladogenesis" with a full definition: "Cladogenesis I have taken over directly from Rensch, to denote all splitting, from subspeciation through adaptive radiation to the divergence of phyla and kingdoms." With it he introduced the word "clades", defining it as: "Cladogenesis results in the formation of delimitable monophyletic units, which may be called clades."
1960, Arthur Cain and Geoffrey Ainsworth Harrison coined "cladistic" to mean evolutionary relationship.
1963, first attempt to use ML (maximum likelihood) for phylogenetics, Edwards and Cavalli-Sforza.
1965
Camin-Sokal parsimony, first parsimony (optimization) criterion and first computer program/algorithm for cladistic analysis both by Camin and Sokal.
Character compatibility method, also called clique analysis, introduced independently by Camin and Sokal (loc. cit.) and E. O. Wilson.
1966
English translation of Hennig.
"Cladistics" and "cladogram" coined (Webster's, loc. cit.)
1969
Dynamic and successive weighting, James Farris.
Wagner parsimony, Kluge and Farris.
CI (consistency index), Kluge and Farris.
Introduction of pairwise compatibility for clique analysis, Le Quesne.
1970, Wagner parsimony generalized by Farris.
1971
First successful application of ML (maximum likelihood) to phylogenetics (for protein sequences), Neyman.
Fitch parsimony, Walter M. Fitch. These gave way to the most basic ideas of maximum parsimony. Fitch is known for his work on reconstructing phylogenetic trees from protein and DNA sequences. His definition of orthologous sequences has been referenced in many research publications.
NNI (nearest neighbour interchange), first branch-swapping search strategy, developed independently by Robinson and Moore et al.
ME (minimum evolution), Kidd and Sgaramella-Zonta (it is unclear if this is the pairwise distance method or related to ML as Edwards and Cavalli-Sforza call ML "minimum evolution").
1972, Adams consensus, Adams.
1976, prefix system for ranks, Farris.
1977, Dollo parsimony, Farris.
1979
Nelson consensus, Nelson.
MAST (maximum agreement subtree)((GAS) greatest agreement subtree), a consensus method, Gordon.
Bootstrap, Bradley Efron, precursor concept.
1980, PHYLIP, first software package for phylogenetic analysis, Joseph Felsenstein. A free computational phylogenetics package of programs for inferring evolutionary trees (phylogenies). One such example tree created by PHYLIP, called a "drawgram", generates rooted trees. This image shown in the figure below shows the evolution of phylogenetic trees over time.
1981
Majority consensus, Margush and MacMorris.
Strict consensus, Sokal and Rohlf.
First computationally efficient ML (maximum likelihood) algorithm, Felsenstein. Felsenstein created the Felsenstein maximum likelihood method, used for the inference of phylogeny, which evaluates a hypothesis about evolutionary history in terms of the probability that the proposed model and the hypothesized history would give rise to the observed data set.
1982
PHYSIS, Mikevich and Farris
Branch and bound, Hendy and Penny
1985
First cladistic analysis of eukaryotes based on combined phenotypic and genotypic evidence Diana Lipscomb.
First issue of Cladistics.
First phylogenetic application of bootstrap, Felsenstein.
First phylogenetic application of jackknife, Scott Lanyon.
1986, MacClade, Maddison and Maddison.
1987, neighbor-joining method, Saitou and Nei.
1988, Hennig86 (version 1.5), Farris
Bremer support (decay index), Bremer.
1989
RI (retention index), RCI (rescaled consistency index), Farris.
HER (homoplasy excess ratio), Archie.
1990
combinable components (semi-strict) consensus, Bremer.
SPR (subtree pruning and regrafting), TBR (tree bisection and reconnection), Swofford and Olsen.
1991
DDI (data decisiveness index), Goloboff.
First cladistic analysis of eukaryotes based only on phenotypic evidence, Lipscomb.
1993, implied weighting Goloboff.
1994, reduced consensus: RCC (reduced cladistic consensus) for rooted trees, Wilkinson.
1995, reduced consensus RPC (reduced partition consensus) for unrooted trees, Wilkinson.
1996, first working methods for BI (Bayesian inference) developed independently by Li, by Mau, and by Rannala and Yang, all using MCMC (Markov chain Monte Carlo).
1998, TNT (Tree Analysis Using New Technology), Goloboff, Farris, and Nixon.
1999, Winclada, Nixon.
2003, symmetrical resampling, Goloboff.
2004, 2005, similarity metric (using an approximation to Kolmogorov complexity) or NCD (normalized compression distance), Li et al., Cilibrasi and Vitanyi.
Uses of phylogenetic analysis
Pharmacology
One use of phylogenetic analysis involves the pharmacological examination of closely related groups of organisms. Advances in cladistic analysis through faster computer programs and improved molecular techniques have increased the precision of phylogenetic determination, allowing for the identification of species with pharmacological potential.
Historically, phylogenetic screens for pharmacological purposes were used in a basic manner, such as studying the Apocynaceae family of plants, which includes alkaloid-producing species like Catharanthus, known for producing vincristine, an antileukemia drug. Modern techniques now enable researchers to study close relatives of a species to uncover either a higher abundance of important bioactive compounds (e.g., species of Taxus for taxol) or natural variants of known pharmaceuticals (e.g., species of Catharanthus for different forms of vincristine or vinblastine).
Biodiversity
Phylogenetic analysis has also been applied to biodiversity studies of fungi. Phylogenetic analysis helps understand the evolutionary history of various groups of organisms, identify relationships between different species, and predict future evolutionary changes. Emerging imagery systems and new analysis techniques allow for the discovery of more genetic relationships in biodiverse fields, which can aid in conservation efforts by identifying rare species that could benefit ecosystems globally.
Infectious disease epidemiology
Whole-genome sequence data from outbreaks or epidemics of infectious diseases can provide important insights into transmission dynamics and inform public health strategies. Traditionally, studies have combined genomic and epidemiological data to reconstruct transmission events. However, recent research has explored deducing transmission patterns solely from genomic data using phylodynamics, which involves analyzing the properties of pathogen phylogenies. Phylodynamics uses theoretical models to compare predicted branch lengths with actual branch lengths in phylogenies to infer transmission patterns. Additionally, coalescent theory, which describes probability distributions on trees based on population size, has been adapted for epidemiological purposes. Another source of information within phylogenies that has been explored is "tree shape." These approaches, while computationally intensive, have the potential to provide valuable insights into pathogen transmission dynamics.
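As a minimal illustration of the coalescent models mentioned above, the sketch below (Python) draws successive coalescence waiting times for a sample of lineages under the standard constant-population-size Kingman coalescent, in which the time for k lineages to drop to k − 1 is exponentially distributed with rate k(k − 1)/2 in units of the effective population size; the sample size and seed are arbitrary.

# Kingman coalescent waiting-time sketch: constant population size, time in
# units of N_e generations. The sample size below is an arbitrary example.
import random

def coalescent_times(sample_size, seed=1):
    """Return the waiting times between successive coalescence events."""
    rng = random.Random(seed)
    times = []
    k = sample_size
    while k > 1:
        rate = k * (k - 1) / 2.0          # number of lineage pairs that can coalesce
        times.append(rng.expovariate(rate))
        k -= 1                            # two lineages merge into a common ancestor
    return times

if __name__ == "__main__":
    waits = coalescent_times(10)
    print(sum(waits))                     # tree height; expectation is 2*(1 - 1/10)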
The structure of the host contact network significantly impacts the dynamics of outbreaks, and management strategies rely on understanding these transmission patterns. Pathogen genomes spreading through different contact network structures, such as chains, homogeneous networks, or networks with super-spreaders, accumulate mutations in distinct patterns, resulting in noticeable differences in the shape of phylogenetic trees, as illustrated in Fig. 1. Researchers have analyzed the structural characteristics of phylogenetic trees generated from simulated bacterial genome evolution across multiple types of contact networks. By examining simple topological properties of these trees, researchers can classify them into chain-like, homogeneous, or super-spreading dynamics, revealing transmission patterns. These properties form the basis of a computational classifier used to analyze real-world outbreaks. Computational predictions of transmission dynamics for each outbreak often align with known epidemiological data.
Different transmission networks result in quantitatively different tree shapes. To determine whether tree shapes captured information about underlying disease transmission patterns, researchers simulated the evolution of a bacterial genome over three types of outbreak contact networks—homogeneous, super-spreading, and chain-like. They summarized the resulting phylogenies with five metrics describing tree shape. Figures 2 and 3 illustrate the distributions of these metrics across the three types of outbreaks, revealing clear differences in tree topology depending on the underlying host contact network.
Super-spreader networks give rise to phylogenies with higher Colless imbalance, longer ladder patterns, lower Δw, and deeper trees than those from homogeneous contact networks. Trees from chain-like networks are less variable, deeper, more imbalanced, and narrower than those from other networks.
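Of the tree-shape statistics named above, Colless imbalance is the simplest to compute: for every internal node of a rooted binary tree it adds the absolute difference between the numbers of tips descended from the node's two children. A hedged sketch (Python), with trees encoded as nested tuples invented for the example:

# Colless imbalance sketch: sum over internal nodes of |tips(left) - tips(right)|.
# Trees are nested (left, right) tuples with strings as tips; examples invented.

def tips_and_imbalance(node):
    """Return (number of tips, Colless sum) for the subtree rooted at node."""
    if isinstance(node, str):
        return 1, 0
    left, right = node
    n_l, c_l = tips_and_imbalance(left)
    n_r, c_r = tips_and_imbalance(right)
    return n_l + n_r, c_l + c_r + abs(n_l - n_r)

def colless(tree):
    return tips_and_imbalance(tree)[1]

if __name__ == "__main__":
    balanced = (("A", "B"), ("C", "D"))     # perfectly balanced: imbalance 0
    ladder = ((("A", "B"), "C"), "D")       # fully ladderized: imbalance 3
    print(colless(balanced), colless(ladder))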
Scatter plots can be used to visualize the relationship between two variables in pathogen transmission analysis, such as the number of infected individuals and the time since infection. These plots can help identify trends and patterns, such as whether the spread of the pathogen is increasing or decreasing over time, and can highlight potential transmission routes or super-spreader events. Box plots displaying the range, median, quartiles, and potential outliers of datasets can also be valuable for analyzing pathogen transmission data, helping to identify important features in the data distribution. They may be used to quickly identify differences or similarities in the transmission data.
Disciplines other than biology
Phylogenetic tools and representations (trees and networks) can also be applied to philology, the study of the evolution of oral languages and written text and manuscripts, such as in the field of quantitative comparative linguistics.
Computational phylogenetics can be used to investigate a language as an evolutionary system. The evolution of human language closely corresponds with humans' biological evolution, which allows phylogenetic methods to be applied. The concept of a "tree" serves as an efficient way to represent relationships between languages and language splits. It also serves as a way of testing hypotheses about the connections and ages of language families. For example, relationships among languages can be shown by using cognates as characters. The phylogenetic tree of Indo-European languages shows the relationships between several of the languages in a timeline, as well as the similarity between words and word order.
There are three types of criticism of the use of phylogenetics in philology: the first argues that languages and species are different entities, so the same methods cannot be used to study both; the second concerns how phylogenetic methods are applied to linguistic data; and the third concerns the types of data used to construct the trees.
Bayesian phylogenetic methods, which are sensitive to how treelike the data is, allow for the reconstruction of relationships among languages, locally and globally. The two main reasons for the use of Bayesian phylogenetics are that (1) diverse scenarios can be included in the calculations and (2) the output is a sample of trees rather than a single tree claimed to be true.
The same process can be applied to texts and manuscripts. In paleography, the study of historical writings and manuscripts, texts were replicated by scribes who copied from their source, and alterations, i.e., 'mutations', occurred when a scribe did not precisely copy the source.
Phylogenetics has been applied to archaeological artefacts such as the early hominin hand-axes, late Palaeolithic figurines, Neolithic stone arrowheads, Bronze Age ceramics, and historical-period houses. Bayesian methods have also been employed by archaeologists in an attempt to quantify uncertainty in the tree topology and divergence times of stone projectile point shapes in the European Final Palaeolithic and earliest Mesolithic.
See also
Angiosperm Phylogeny Group
Bauplan
Bioinformatics
Biomathematics
Coalescent theory
EDGE of Existence programme
Evolutionary taxonomy
Language family
Maximum parsimony
Microbial phylogenetics
Molecular phylogeny
Ontogeny
PhyloCode
Phylodynamics
Phylogenesis
Phylogenetic comparative methods
Phylogenetic network
Phylogenetic nomenclature
Phylogenetic tree viewers
Phylogenetics software
Phylogenomics
Phylogeny (psychoanalysis)
Phylogeography
Systematics
References
Bibliography
External links
Archimedean spiral
The Archimedean spiral (also known as Archimedes' spiral or the arithmetic spiral) is a spiral named after the 3rd-century BC Greek mathematician Archimedes. The term Archimedean spiral is sometimes used to refer to the more general class of spirals of this type (see below), in contrast to Archimedes' spiral (the specific arithmetic spiral of Archimedes). It is the locus corresponding to the locations over time of a point moving away from a fixed point with a constant speed along a line that rotates with constant angular velocity. Equivalently, in polar coordinates it can be described by the equation
r = b·θ
with real number b. Changing the parameter b controls the distance between successive loops.
From the above equation it follows that the distance of the moving point from the starting point is proportional to the angle θ as time elapses.
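A short numerical check of this proportionality and of the loop spacing, using the polar form r = b·θ (the value of b below is arbitrary):

# Sample points of the Archimedean spiral r = b*theta (Python).
# b is arbitrary; successive turnings along a fixed ray are 2*pi*b apart.
import math

b = 0.5
for k in range(4):                        # points on the positive x-axis ray
    theta = 2 * math.pi * k               # same direction, successive turnings
    r = b * theta
    x, y = r * math.cos(theta), r * math.sin(theta)
    print(f"turn {k}: r = {r:.4f}, (x, y) = ({x:.4f}, {y:.4f})")
# consecutive values of r differ by 2*pi*b, about 3.1416 here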
Archimedes described such a spiral in his book On Spirals. Conon of Samos was a friend of his and Pappus states that this spiral was discovered by Conon.
Derivation of general equation of spiral
A physical approach is used below to understand the notion of Archimedean spirals.
Suppose a point object moves in the Cartesian system with a constant velocity v directed parallel to the x-axis, with respect to the xy-plane. Let the object be at an arbitrary point (c, 0, 0) at time t = 0. If the xy-plane rotates with a constant angular velocity ω about the z-axis, then the velocity of the point with respect to the z-axis may be written as:
|v0| = √(v² + ω²(vt + c)²)
v_x = v·cos(ωt) − ω(vt + c)·sin(ωt)
v_y = v·sin(ωt) + ω(vt + c)·cos(ωt)
Here vt + c represents the modulus of the position vector of the particle at any time t, with v_x and v_y as the velocity components along the x and y axes, respectively.
The above equations can be integrated by applying integration by parts, leading to the following parametric equations:
x = (vt + c)·cos(ωt)
y = (vt + c)·sin(ωt)
Squaring the two equations and then adding (and some small alterations) results in the Cartesian equation
√(x² + y²) = (v/ω)·arctan(y/x) + c
(using the fact that ωt = θ and θ = arctan(y/x)) or
tan((√(x² + y²) − c)·(ω/v)) = y/x
Its polar form is
r = (v/ω)·θ + c
Arc length and curvature
Given the parametrization in Cartesian coordinates
x(θ) = b·θ·cos(θ),  y(θ) = b·θ·sin(θ),
the arc length from θ1 to θ2 is
s = (b/2)·[θ·√(1 + θ²) + ln(θ + √(1 + θ²))], evaluated between θ1 and θ2,
or, equivalently:
s = (b/2)·[θ·√(1 + θ²) + arcsinh(θ)], evaluated between θ1 and θ2.
The total length from θ1 = 0 to θ2 = θ is therefore
L = (b/2)·[θ·√(1 + θ²) + ln(θ + √(1 + θ²))].
The curvature is given by
κ(θ) = (θ² + 2) / (b·(1 + θ²)^(3/2)).
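These closed forms are easy to cross-check numerically: the speed along the curve is √(r² + (dr/dθ)²) = b·√(1 + θ²), so integrating it should reproduce the arc-length formula. A small sketch (Python), with arbitrary values of b and θ:

# Numerical check of the Archimedean-spiral arc length formula. Compares the
# closed form (b/2)*(t*sqrt(1+t^2) + asinh(t)) with trapezoidal integration of
# the speed b*sqrt(1 + theta^2). The values of b and t are arbitrary examples.
import math

def arc_length_closed(b, t):
    return 0.5 * b * (t * math.sqrt(1 + t * t) + math.asinh(t))

def arc_length_numeric(b, t, steps=100000):
    h = t / steps
    f = lambda u: b * math.sqrt(1 + u * u)
    total = 0.5 * (f(0.0) + f(t))
    for i in range(1, steps):
        total += f(i * h)
    return total * h

b, t = 1.0, 4 * math.pi              # two full turns
print(arc_length_closed(b, t))       # about 80.8
print(arc_length_numeric(b, t))      # agrees to several decimal places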
Characteristics
The Archimedean spiral has the property that any ray from the origin intersects successive turnings of the spiral in points with a constant separation distance (equal to 2πb if θ is measured in radians), hence the name "arithmetic spiral". In contrast to this, in a logarithmic spiral these distances, as well as the distances of the intersection points measured from the origin, form a geometric progression.
The Archimedean spiral has two arms, one for θ > 0 and one for θ < 0. The two arms are smoothly connected at the origin. Only one arm is shown on the accompanying graph. Taking the mirror image of this arm across the y-axis will yield the other arm.
For large θ, a point moves with well-approximated uniform acceleration along the Archimedean spiral, while the spiral corresponds to the locations over time of a point moving away from a fixed point with a constant speed along a line which rotates with constant angular velocity (see the contribution from Mikhail Gaichenkov).
As the Archimedean spiral grows, its evolute asymptotically approaches a circle with radius b.
General Archimedean spiral
Sometimes the term Archimedean spiral is used for the more general group of spirals
r = a + b·θ^(1/c).
The normal Archimedean spiral occurs when c = 1. Other spirals falling into this group include the hyperbolic spiral (c = −1), Fermat's spiral (c = 2), and the lituus (c = −2).
Applications
One method of squaring the circle, due to Archimedes, makes use of an Archimedean spiral. Archimedes also showed how the spiral can be used to trisect an angle. Both approaches relax the traditional limitations on the use of straightedge and compass in ancient Greek geometric proofs.
The Archimedean spiral has a variety of real-world applications. Scroll compressors, used for compressing gases, have rotors that can be made from two interleaved Archimedean spirals, involutes of a circle of the same size that almost resemble Archimedean spirals, or hybrid curves.
Archimedean spirals can be found in spiral antenna, which can be operated over a wide range of frequencies.
The coils of watch balance springs and the grooves of very early gramophone records form Archimedean spirals, making the grooves evenly spaced (although variable track spacing was later introduced to maximize the amount of music that could be cut onto a record).
Asking for a patient to draw an Archimedean spiral is a way of quantifying human tremor; this information helps in diagnosing neurological diseases.
Archimedean spirals are also used in digital light processing (DLP) projection systems to minimize the "rainbow effect", making it look as if multiple colors are displayed at the same time, when in reality red, green, and blue are being cycled extremely quickly. Additionally, Archimedean spirals are used in food microbiology to quantify bacterial concentration through a spiral platter.
They are also used to model the pattern that occurs in a roll of paper or tape of constant thickness wrapped around a cylinder.
Many dynamic spirals (such as the Parker spiral of the solar wind, or the pattern made by a Catherine's wheel) are Archimedean. For instance, the star LL Pegasi shows an approximate Archimedean spiral in the dust clouds surrounding it, thought to be ejected matter from the star that has been shepherded into a spiral by another companion star as part of a double star system.
Construction methods
The Archimedean Spiral cannot be constructed precisely by traditional compass and straightedge methods, since the arithmetic spiral requires the radius of the curve to be incremented constantly as the angle at the origin is incremented. But an arithmetic spiral can be constructed approximately, to varying degrees of precision, by various manual drawing methods. One such method uses compass and straightedge; another method uses a modified string compass.
The common traditional construction uses compass and straightedge to approximate the arithmetic spiral. First, a large circle is constructed and its circumference is subdivided by 12 diameters into 12 arcs (of 30 degrees each; see regular dodecagon). Next, the radius of this circle is itself subdivided into 12 unit segments (radial units), and a series of concentric circles is constructed, each with radius incremented by one radial unit. Starting with the horizontal diameter and the innermost concentric circle, the point is marked where its radius intersects its circumference; one then moves to the next concentric circle and to the next diameter (moving up to construct a counterclockwise spiral, or down for clockwise) to mark the next point. After all points have been marked, successive points are connected by a line approximating the arithmetic spiral (or by a smooth curve of some sort; see French Curve). Depending on the desired degree of precision, this method can be improved by increasing the size of the large outer circle, making more subdivisions of both its circumference and radius, increasing the number of concentric circles (see Polygonal Spiral). Approximating the Archimedean Spiral by this method is of course reminiscent of Archimedes’ famous method of approximating π by doubling the sides of successive polygons (see Polygon approximation of π).
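The point-marking step of this construction is easy to reproduce numerically; the sketch below (Python) computes the twelve marked points, taking one radial unit as an arbitrary scale:

# Points of the traditional 12-sector approximation to the Archimedean spiral.
# Each step advances 30 degrees and one radial unit; the unit is arbitrary.
import math

radial_unit = 1.0
points = []
for k in range(1, 13):
    angle = math.radians(30 * k)      # next diameter, counterclockwise
    radius = k * radial_unit          # next concentric circle
    points.append((radius * math.cos(angle), radius * math.sin(angle)))

for k, (x, y) in enumerate(points, start=1):
    print(f"point {k:2d}: ({x:7.3f}, {y:7.3f})")
# Connecting consecutive points approximates one full turn of the spiral.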
Compass and straightedge construction of the Spiral of Theodorus is another simple method to approximate the Archimedean Spiral.
A mechanical method for constructing the arithmetic spiral uses a modified string compass, where the string wraps and winds (or unwraps/unwinds) about a fixed central pin (that does not pivot), thereby incrementing (or decrementing) the length of the radius (string) as the angle changes (the string winds around the fixed pin which does not pivot). Such a method is a simple way to create an arithmetic spiral, arising naturally from use of a string compass with winding pin (not the loose pivot of a common string compass). The string compass drawing tool has various modifications and designs, and this construction method is reminiscent of string-based methods for creating ellipses (with two fixed pins).
Yet another mechanical method is a variant of the previous string compass method, providing greater precision and more flexibility. Instead of the central pin and string of the string compass, this device uses a non-rotating shaft (column) with helical threads (screw; see Archimedes’ screw) to which are attached two slotted arms: one horizontal arm is affixed to (travels up) the screw threads of the vertical shaft at one end, and holds a drawing tool at the other end; another sloped arm is affixed at one end to the top of the screw shaft, and is joined by a pin loosely fitted in its slot to the slot of the horizontal arm. The two arms rotate together and work in consort to produce the arithmetic spiral: as the horizontal arm gradually climbs the screw, that arm’s slotted attachment to the sloped arm gradually shortens the drawing radius. The angle of the sloped arm remains constant throughout (traces a cone), and setting a different angle varies the pitch of the spiral. This device provides a high degree of precision, depending on the precision with which the device is machined (machining a precise helical screw thread is a related challenge). And of course the use of a screw shaft in this mechanism is reminiscent of Archimedes’ screw.
See also
References
External links
Jonathan Matt making the Archimedean spiral interesting - Video : The surprising beauty of Mathematics - TedX Talks, Green Farms
Page with Java application to interactively explore the Archimedean spiral and its related curves
Online exploration using JSXGraph (JavaScript)
Archimedean spiral at "mathcurve"
Control theory
Control theory is a field of control engineering and applied mathematics that deals with the control of dynamical systems in engineered processes and machines. The objective is to develop a model or algorithm governing the application of system inputs to drive the system to a desired state, while minimizing any delay, overshoot, or steady-state error and ensuring a level of control stability; often with the aim of achieving a degree of optimality.
To do this, a controller with the requisite corrective behavior is required. This controller monitors the controlled process variable (PV), and compares it with the reference or set point (SP). The difference between the actual and desired value of the process variable, called the error signal, or SP-PV error, is applied as feedback to generate a control action that brings the controlled process variable to the same value as the set point. Other aspects which are also studied are controllability and observability. Control theory is used in control system engineering to design automation systems that have revolutionized manufacturing, aircraft, communications and other industries, and created new fields such as robotics.
Extensive use is usually made of a diagrammatic style known as the block diagram. In it the transfer function, also known as the system function or network function, is a mathematical model of the relation between the input and output based on the differential equations describing the system.
Control theory dates from the 19th century, when the theoretical basis for the operation of governors was first described by James Clerk Maxwell. Control theory was further advanced by Edward Routh in 1874, by Charles Sturm, and in 1895 by Adolf Hurwitz, all of whom contributed to the establishment of control stability criteria. From 1922 onwards, PID control theory was developed by Nicolas Minorsky.
Although a major application of mathematical control theory is in control systems engineering, which deals with the design of process control systems for industry, other applications range far beyond this. As the general theory of feedback systems, control theory is useful wherever feedback occurs - thus control theory also has applications in life sciences, computer engineering, sociology and operations research.
History
Although control systems of various types date back to antiquity, a more formal analysis of the field began with a dynamics analysis of the centrifugal governor, conducted by the physicist James Clerk Maxwell in 1868, entitled On Governors. A centrifugal governor was already used to regulate the velocity of windmills. Maxwell described and analyzed the phenomenon of self-oscillation, in which lags in the system may lead to overcompensation and unstable behavior. This generated a flurry of interest in the topic, during which Maxwell's classmate, Edward John Routh, abstracted Maxwell's results for the general class of linear systems. Independently, Adolf Hurwitz analyzed system stability using differential equations in 1895, resulting in what is now known as the Routh–Hurwitz theorem.
A notable application of dynamic control was in the area of crewed flight. The Wright brothers made their first successful test flights on December 17, 1903, and were distinguished by their ability to control their flights for substantial periods (more so than the ability to produce lift from an airfoil, which was known). Continuous, reliable control of the airplane was necessary for flights lasting longer than a few seconds.
By World War II, control theory was becoming an important area of research. Irmgard Flügge-Lotz developed the theory of discontinuous automatic control systems, and applied the bang-bang principle to the development of automatic flight control equipment for aircraft. Other areas of application for discontinuous controls included fire-control systems, guidance systems and electronics.
Sometimes, mechanical methods are used to improve the stability of systems. For example, ship stabilizers are fins mounted beneath the waterline and emerging laterally. In contemporary vessels, they may be gyroscopically controlled active fins, which have the capacity to change their angle of attack to counteract roll caused by wind or waves acting on the ship.
The Space Race also depended on accurate spacecraft control, and control theory has also seen an increasing use in fields such as economics and artificial intelligence. Here, one might say that the goal is to find an internal model that obeys the good regulator theorem. So, for example, in economics, the more accurately a (stock or commodities) trading model represents the actions of the market, the more easily it can control that market (and extract "useful work" (profits) from it). In AI, an example might be a chatbot modelling the discourse state of humans: the more accurately it can model the human state (e.g. on a telephone voice-support hotline), the better it can manipulate the human (e.g. into performing the corrective actions to resolve the problem that caused the phone call to the help-line). These last two examples take the narrow historical interpretation of control theory as a set of differential equations modeling and regulating kinetic motion, and broaden it into a vast generalization of a regulator interacting with a plant.
Open-loop and closed-loop (feedback) control
Classical control theory
Linear and nonlinear control theory
The field of control theory can be divided into two branches:
Linear control theory – This applies to systems made of devices which obey the superposition principle, which means roughly that the output is proportional to the input. They are governed by linear differential equations. A major subclass is systems which in addition have parameters which do not change with time, called linear time invariant (LTI) systems. These systems are amenable to powerful frequency domain mathematical techniques of great generality, such as the Laplace transform, Fourier transform, Z transform, Bode plot, root locus, and Nyquist stability criterion. These lead to a description of the system using terms like bandwidth, frequency response, eigenvalues, gain, resonant frequencies, zeros and poles, which give solutions for system response and design techniques for most systems of interest.
Nonlinear control theory – This covers a wider class of systems that do not obey the superposition principle, and applies to more real-world systems because all real control systems are nonlinear. These systems are often governed by nonlinear differential equations. The few mathematical techniques which have been developed to handle them are more difficult and much less general, often applying only to narrow categories of systems. These include limit cycle theory, Poincaré maps, Lyapunov stability theorem, and describing functions. Nonlinear systems are often analyzed using numerical methods on computers, for example by simulating their operation using a simulation language. If only solutions near a stable point are of interest, nonlinear systems can often be linearized by approximating them by a linear system using perturbation theory, and linear techniques can be used.
Analysis techniques - frequency domain and time domain
Mathematical techniques for analyzing and designing control systems fall into two different categories:
Frequency domain – In this type the values of the state variables, the mathematical variables representing the system's input, output and feedback are represented as functions of frequency. The input signal and the system's transfer function are converted from time functions to functions of frequency by a transform such as the Fourier transform, Laplace transform, or Z transform. The advantage of this technique is that it results in a simplification of the mathematics; the differential equations that represent the system are replaced by algebraic equations in the frequency domain which is much simpler to solve. However, frequency domain techniques can only be used with linear systems, as mentioned above.
Time-domain state space representation – In this type the values of the state variables are represented as functions of time. With this model, the system being analyzed is represented by one or more differential equations. Since frequency domain techniques are limited to linear systems, time domain is widely used to analyze real-world nonlinear systems. Although these are more difficult to solve, modern computer simulation techniques such as simulation languages have made their analysis routine.
In contrast to the frequency domain analysis of the classical control theory, modern control theory utilizes the time-domain state space representation, a mathematical model of a physical system as a set of input, output and state variables related by first-order differential equations. To abstract from the number of inputs, outputs, and states, the variables are expressed as vectors and the differential and algebraic equations are written in matrix form (the latter only being possible when the dynamical system is linear). The state space representation (also known as the "time-domain approach") provides a convenient and compact way to model and analyze systems with multiple inputs and outputs. With p inputs and q outputs, we would otherwise have to write down q × p Laplace transforms to encode all the information about a system. Unlike the frequency domain approach, the use of the state-space representation is not limited to systems with linear components and zero initial conditions. "State space" refers to the space whose axes are the state variables. The state of the system can be represented as a point within that space.
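A minimal sketch of the state-space form xdot = A·x + B·u, y = C·x + D·u, simulated with a simple forward-Euler step (the matrices below describe an arbitrary damped second-order system chosen only for illustration, not any particular plant):

# Forward-Euler simulation of a state-space model xdot = A x + B u, y = C x + D u.
# The particular A, B, C, D form an arbitrary stable second-order example.
import numpy as np

A = np.array([[0.0, 1.0],
              [-2.0, -0.5]])          # lightly damped second-order dynamics
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])            # measure the first state only
D = np.array([[0.0]])

def simulate(x0, u_of_t, t_end=10.0, dt=1e-3):
    x = np.array(x0, dtype=float).reshape(-1, 1)
    outputs, t = [], 0.0
    while t < t_end:
        u = np.array([[u_of_t(t)]])
        outputs.append((C @ x + D @ u).item())
        x = x + dt * (A @ x + B @ u)  # Euler step of xdot = A x + B u
        t += dt
    return outputs

y = simulate([1.0, 0.0], lambda t: 0.0)   # free response from x0 = (1, 0)
print(y[0], y[-1])                        # decays toward zero (A is stable)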
System interfacing - SISO & MIMO
Control systems can be divided into different categories depending on the number of inputs and outputs.
Single-input single-output (SISO) – This is the simplest and most common type, in which one output is controlled by one control signal. Examples are the cruise control example above, or an audio system, in which the control input is the input audio signal and the output is the sound waves from the speaker.
Multiple-input multiple-output (MIMO) – These are found in more complicated systems. For example, modern large telescopes such as the Keck and MMT have mirrors composed of many separate segments each controlled by an actuator. The shape of the entire mirror is constantly adjusted by a MIMO active optics control system using input from multiple sensors at the focal plane, to compensate for changes in the mirror shape due to thermal expansion, contraction, stresses as it is rotated and distortion of the wavefront due to turbulence in the atmosphere. Complicated systems such as nuclear reactors and human cells are simulated by a computer as large MIMO control systems.
Classical SISO system design
The scope of classical control theory is limited to single-input and single-output (SISO) system design, except when analyzing for disturbance rejection using a second input. The system analysis is carried out in the time domain using differential equations, in the complex-s domain with the Laplace transform, or in the frequency domain by transforming from the complex-s domain. Many systems may be assumed to have a second order and single variable system response in the time domain. A controller designed using classical theory often requires on-site tuning due to incorrect design approximations. Yet, due to the easier physical implementation of classical controller designs as compared to systems designed using modern control theory, these controllers are preferred in most industrial applications. The most common controllers designed using classical control theory are PID controllers. A less common implementation may include either or both a Lead or Lag filter. The ultimate end goal is to meet requirements typically provided in the time-domain called the step response, or at times in the frequency domain called the open-loop response. The step response characteristics applied in a specification are typically percent overshoot, settling time, etc. The open-loop response characteristics applied in a specification are typically Gain and Phase margin and bandwidth. These characteristics may be evaluated through simulation including a dynamic model of the system under control coupled with the compensation model.
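A minimal discrete-time PID loop of the kind described here is sketched below (Python); the gains and the toy first-order plant are invented example values, not a tuned industrial design:

# Discrete PID sketch: u = Kp*e + Ki*integral(e) + Kd*de/dt, driving a toy
# first-order plant dpv/dt = -pv + u toward a set point. Values are arbitrary.

def run_pid(setpoint=1.0, kp=2.0, ki=1.0, kd=0.1, dt=0.01, steps=2000):
    pv = 0.0                                  # process variable of the toy plant
    integral = 0.0
    prev_error = setpoint - pv
    for _ in range(steps):
        error = setpoint - pv                 # SP - PV error
        integral += error * dt
        derivative = (error - prev_error) / dt
        u = kp * error + ki * integral + kd * derivative
        prev_error = error
        pv += dt * (-pv + u)                  # Euler step of the plant
    return pv

print(run_pid())                              # settles close to the set point 1.0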
Modern MIMO system design
Modern control theory is carried out in the state space, and can deal with multiple-input and multiple-output (MIMO) systems. This overcomes the limitations of classical control theory in more sophisticated design problems, such as fighter aircraft control, with the limitation that no frequency domain analysis is possible. In modern design, a system is represented to the greatest advantage as a set of decoupled first order differential equations defined using state variables. Nonlinear, multivariable, adaptive and robust control theories come under this division. Matrix methods are significantly limited for MIMO systems where linear independence cannot be assured in the relationship between inputs and outputs. Being fairly new, modern control theory has many areas yet to be explored. Scholars like Rudolf E. Kálmán and Aleksandr Lyapunov are well known among the people who have shaped modern control theory.
Topics in control theory
Stability
The stability of a general dynamical system with no input can be described with Lyapunov stability criteria.
A linear system is called bounded-input bounded-output (BIBO) stable if its output will stay bounded for any bounded input.
Stability for nonlinear systems that take an input is input-to-state stability (ISS), which combines Lyapunov stability and a notion similar to BIBO stability.
For simplicity, the following descriptions focus on continuous-time and discrete-time linear systems.
Mathematically, this means that for a causal linear system to be stable all of the poles of its transfer function must have negative-real values, i.e. the real part of each pole must be less than zero. Practically speaking, stability requires that the transfer function complex poles reside
in the open left half of the complex plane for continuous time, when the Laplace transform is used to obtain the transfer function.
inside the unit circle for discrete time, when the Z-transform is used.
The difference between the two cases is simply due to the traditional method of plotting continuous time versus discrete time transfer functions. The continuous Laplace transform is in Cartesian coordinates where the axis is the real axis and the discrete Z-transform is in circular coordinates where the axis is the real axis.
When the appropriate conditions above are satisfied a system is said to be asymptotically stable; the variables of an asymptotically stable control system always decrease from their initial value and do not show permanent oscillations. Permanent oscillations occur when a pole has a real part exactly equal to zero (in the continuous time case) or a modulus equal to one (in the discrete time case). If a simply stable system response neither decays nor grows over time, and has no oscillations, it is marginally stable; in this case the system transfer function has non-repeated poles at the complex plane origin (i.e. their real and imaginary components are zero in the continuous time case). Oscillations are present when poles with real part equal to zero have an imaginary part not equal to zero.
If a system in question has an impulse response of
x[n] = 0.5^n · u[n]
(where u[n] is the unit step function), then the Z-transform (see this example) is given by
X(z) = 1 / (1 − 0.5·z^(−1)),
which has a pole at z = 0.5 (zero imaginary part). This system is BIBO (asymptotically) stable since the pole is inside the unit circle.
However, if the impulse response was
x[n] = 1.5^n · u[n],
then the Z-transform is
X(z) = 1 / (1 − 1.5·z^(−1)),
which has a pole at z = 1.5 and is not BIBO stable since the pole has a modulus strictly greater than one.
Numerous tools exist for the analysis of the poles of a system. These include graphical systems like the root locus, Bode plots or the Nyquist plots.
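The pole conditions above are easy to check numerically from a transfer function's denominator polynomial; a small sketch (Python with NumPy), where the example denominators are arbitrary:

# Pole-based stability check. Continuous-time systems need all poles in the
# open left half-plane; discrete-time systems need all poles strictly inside
# the unit circle. The example denominator coefficients are arbitrary.
import numpy as np

def is_stable_continuous(denominator):
    poles = np.roots(denominator)
    return bool(np.all(poles.real < 0))

def is_stable_discrete(denominator):
    poles = np.roots(denominator)
    return bool(np.all(np.abs(poles) < 1))

print(is_stable_continuous([1, 2, 5]))   # s^2 + 2s + 5: poles -1 ± 2j -> True
print(is_stable_discrete([1, -0.5]))     # pole at z = 0.5, inside unit circle -> True
print(is_stable_discrete([1, -1.5]))     # pole at z = 1.5, outside -> False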
Mechanical changes can make equipment (and control systems) more stable. Sailors add ballast to improve the stability of ships. Cruise ships use antiroll fins that extend transversely from the side of the ship for perhaps 30 feet (10 m) and are continuously rotated about their axes to develop forces that oppose the roll.
Controllability and observability
Controllability and observability are main issues in the analysis of a system before deciding the best control strategy to be applied, or whether it is even possible to control or stabilize the system. Controllability is related to the possibility of forcing the system into a particular state by using an appropriate control signal. If a state is not controllable, then no signal will ever be able to control the state. If a state is not controllable, but its dynamics are stable, then the state is termed stabilizable. Observability instead is related to the possibility of observing, through output measurements, the state of a system. If a state is not observable, the controller will never be able to determine the behavior of an unobservable state and hence cannot use it to stabilize the system. However, similar to the stabilizability condition above, if a state cannot be observed it might still be detectable.
From a geometrical point of view, looking at the states of each variable of the system to be controlled, every "bad" state of these variables must be controllable and observable to ensure a good behavior in the closed-loop system. That is, if one of the eigenvalues of the system is not both controllable and observable, this part of the dynamics will remain untouched in the closed-loop system. If such an eigenvalue is not stable, the dynamics of this eigenvalue will be present in the closed-loop system which therefore will be unstable. Unobservable poles are not present in the transfer function realization of a state-space representation, which is why sometimes the latter is preferred in dynamical systems analysis.
Solutions to problems of an uncontrollable or unobservable system include adding actuators and sensors.
Control specification
Several different control strategies have been devised in the past years. These vary from extremely general ones (PID controller), to others devoted to very particular classes of systems (especially robotics or aircraft cruise control).
A control problem can have several specifications. Stability, of course, is always present. The controller must ensure that the closed-loop system is stable, regardless of the open-loop stability. A poor choice of controller can even worsen the stability of the open-loop system, which must normally be avoided. Sometimes it would be desired to obtain particular dynamics in the closed loop: i.e. that the poles have Re[λ] < −λ̄, where λ̄ is a fixed value strictly greater than zero, instead of simply asking that Re[λ] < 0.
Another typical specification is the rejection of a step disturbance; including an integrator in the open-loop chain (i.e. directly before the system under control) easily achieves this. Other classes of disturbances need different types of sub-systems to be included.
Other "classical" control theory specifications regard the time-response of the closed-loop system. These include the rise time (the time needed by the control system to reach the desired value after a perturbation), peak overshoot (the highest value reached by the response before reaching the desired value) and others (settling time, quarter-decay). Frequency domain specifications are usually related to robustness (see after).
Modern performance assessments use some variation of integrated tracking error (IAE, ISA, CQI).
Model identification and robustness
A control system must always have some robustness property. A robust controller is such that its properties do not change much if applied to a system slightly different from the mathematical one used for its synthesis. This requirement is important, as no real physical system truly behaves like the series of differential equations used to represent it mathematically. Typically a simpler mathematical model is chosen in order to simplify calculations, otherwise, the true system dynamics can be so complicated that a complete model is impossible.
System identification
The process of determining the equations that govern the model's dynamics is called system identification. This can be done off-line: for example, executing a series of measurements from which to calculate an approximate mathematical model, typically its transfer function or matrix. Such identification from the output, however, cannot take account of unobservable dynamics. Sometimes the model is built directly starting from known physical equations; for example, in the case of a mass-spring-damper system we know that $m\ddot{x}(t) = -kx(t) - b\dot{x}(t)$. Even assuming that a "complete" model is used in designing the controller, all the parameters included in these equations (called "nominal parameters") are never known with absolute precision; the control system will have to behave correctly even when connected to a physical system whose true parameter values differ from the nominal ones.
Some advanced control techniques include an "on-line" identification process (see later). The parameters of the model are calculated ("identified") while the controller itself is running. In this way, if a drastic variation of the parameters ensues, for example, if the robot's arm releases a weight, the controller will adjust itself consequently in order to ensure the correct performance.
Analysis
Analysis of the robustness of a SISO (single-input single-output) control system can be performed in the frequency domain, considering the system's transfer function and using Nyquist and Bode diagrams. Topics include the gain margin and the phase margin. For MIMO (multi-input multi-output) and, in general, more complicated control systems, one must consider the theoretical results devised for each control technique (see next section). That is, if particular robustness qualities are needed, the engineer must choose a control technique that includes these qualities among its properties.
Constraints
A particular robustness issue is the requirement for a control system to perform properly in the presence of input and state constraints. In the physical world every signal is limited. It could happen that a controller will send control signals that cannot be followed by the physical system, for example, trying to rotate a valve at excessive speed. This can produce undesired behavior of the closed-loop system, or even damage or break actuators or other subsystems. Specific control techniques are available to solve the problem: model predictive control (see later) and anti-windup systems. The latter consists of an additional control block that ensures that the control signal never exceeds a given threshold.
System classifications
Linear systems control
For MIMO systems, pole placement can be performed mathematically using a state space representation of the open-loop system and calculating a feedback matrix assigning poles in the desired positions. In complicated systems this can require computer-assisted calculation capabilities, and cannot always ensure robustness. Furthermore, not all system states are in general measured, and so observers must be included and incorporated in the pole placement design.
Nonlinear systems control
Processes in industries like robotics and the aerospace industry typically have strong nonlinear dynamics. In control theory it is sometimes possible to linearize such classes of systems and apply linear techniques, but in many cases it can be necessary to devise from scratch theories permitting control of nonlinear systems. These methods, e.g., feedback linearization, backstepping, sliding mode control, and trajectory linearization control, normally take advantage of results based on Lyapunov's theory. Differential geometry has been widely used as a tool for generalizing well-known linear control concepts to the nonlinear case, as well as showing the subtleties that make it a more challenging problem. Control theory has also been used to decipher the neural mechanism that directs cognitive states.
Decentralized systems control
When the system is controlled by multiple controllers, the problem is one of decentralized control. Decentralization is helpful in many ways, for instance, it helps control systems to operate over a larger geographical area. The agents in decentralized control systems can interact using communication channels and coordinate their actions.
Deterministic and stochastic systems control
A stochastic control problem is one in which the evolution of the state variables is subjected to random shocks from outside the system. A deterministic control problem is not subject to external random shocks.
Main control strategies
Every control system must first guarantee the stability of the closed-loop behavior. For linear systems, this can be obtained by directly placing the poles. Nonlinear control systems use specific theories (normally based on Aleksandr Lyapunov's theory) to ensure stability without regard to the inner dynamics of the system. The ability to fulfill different specifications depends on the model considered and the control strategy chosen.
List of the main control techniques
Optimal control is a particular control technique in which the control signal optimizes a certain "cost index": for example, in the case of a satellite, the jet thrusts needed to bring it to the desired trajectory while consuming the least amount of fuel. Two optimal control design methods have been widely used in industrial applications, as it has been shown they can guarantee closed-loop stability. These are Model Predictive Control (MPC) and linear-quadratic-Gaussian control (LQG). The first can more explicitly take into account constraints on the signals in the system, which is an important feature in many industrial processes. However, the "optimal control" structure in MPC is only a means to achieve such a result, as it does not optimize a true performance index of the closed-loop control system. Together with PID controllers, MPC systems are the most widely used control technique in process control.
Robust control deals explicitly with uncertainty in its approach to controller design. Controllers designed using robust control methods tend to be able to cope with small differences between the true system and the nominal model used for design. The early methods of Bode and others were fairly robust; the state-space methods invented in the 1960s and 1970s were sometimes found to lack robustness. Examples of modern robust control techniques include H-infinity loop-shaping developed by Duncan McFarlane and Keith Glover, Sliding mode control (SMC) developed by Vadim Utkin, and safe protocols designed for control of large heterogeneous populations of electric loads in Smart Power Grid applications. Robust methods aim to achieve robust performance and/or stability in the presence of small modeling errors.
Stochastic control deals with control design with uncertainty in the model. In typical stochastic control problems, it is assumed that there exist random noise and disturbances in the model and the controller, and the control design must take into account these random deviations.
Adaptive control uses on-line identification of the process parameters, or modification of controller gains, thereby obtaining strong robustness properties. Adaptive controls were applied for the first time in the aerospace industry in the 1950s, and have found particular success in that field.
A hierarchical control system is a type of control system in which a set of devices and governing software is arranged in a hierarchical tree. When the links in the tree are implemented by a computer network, then that hierarchical control system is also a form of networked control system.
Intelligent control uses various AI computing approaches like artificial neural networks, Bayesian probability, fuzzy logic, machine learning, evolutionary computation and genetic algorithms or a combination of these methods, such as neuro-fuzzy algorithms, to control a dynamic system.
Self-organized criticality control may be defined as attempts to interfere in the processes by which the self-organized system dissipates energy.
People in systems and control
Many active and historical figures have made significant contributions to control theory, including:
Pierre-Simon Laplace invented the Z-transform in his work on probability theory, now used to solve discrete-time control theory problems. The Z-transform is a discrete-time equivalent of the Laplace transform which is named after him.
Irmgard Flugge-Lotz developed the theory of discontinuous automatic control and applied it to automatic aircraft control systems.
Alexander Lyapunov's work in the 1890s marks the beginning of stability theory.
Harold S. Black invented the concept of negative feedback amplifiers in 1927. He managed to develop stable negative feedback amplifiers in the 1930s.
Harry Nyquist developed the Nyquist stability criterion for feedback systems in the 1930s.
Richard Bellman developed dynamic programming in the 1940s.
Warren E. Dixon, control theorist and a professor
Kyriakos G. Vamvoudakis, developed synchronous reinforcement learning algorithms to solve optimal control and game theoretic problems
Andrey Kolmogorov co-developed the Wiener–Kolmogorov filter in 1941.
Norbert Wiener co-developed the Wiener–Kolmogorov filter and coined the term cybernetics in the 1940s.
John R. Ragazzini introduced digital control and the use of Z-transform in control theory (invented by Laplace) in the 1950s.
Lev Pontryagin introduced the maximum principle and the bang-bang principle.
Pierre-Louis Lions developed viscosity solutions into stochastic control and optimal control methods.
Rudolf E. Kálmán pioneered the state-space approach to systems and control. Introduced the notions of controllability and observability. Developed the Kalman filter for linear estimation.
Ali H. Nayfeh, one of the main contributors to nonlinear control theory, who published many books on perturbation methods
Jan C. Willems introduced the concept of dissipativity, as a generalization of the Lyapunov function to input/state/output systems. The construction of the storage function, as the analogue of a Lyapunov function is called, led to the study of the linear matrix inequality (LMI) in control theory. He pioneered the behavioral approach to mathematical systems theory.
See also
Examples of control systems
Automation
Deadbeat controller
Distributed parameter systems
Fractional-order control
H-infinity loop-shaping
Hierarchical control system
Model predictive control
Optimal control
Process control
Robust control
Servomechanism
State space (controls)
Vector control
Topics in control theory
Coefficient diagram method
Control reconfiguration
Feedback
H infinity
Hankel singular value
Krener's theorem
Lead-lag compensator
Minor loop feedback
Multi-loop feedback
Positive systems
Radial basis function
Root locus
Signal-flow graphs
Stable polynomial
State space representation
Steady state
Transient response
Transient state
Underactuation
Youla–Kucera parametrization
Markov chain approximation method
Other related topics
Adaptive system
Automation and remote control
Bond graph
Control engineering
Control–feedback–abort loop
Controller (control theory)
Cybernetics
Intelligent control
Mathematical system theory
Negative feedback amplifier
Outline of management
People in systems and control
Perceptual control theory
Systems theory
References
Further reading
For Chemical Engineering
External links
Control Tutorials for Matlab, a set of worked-through control examples solved by several different methods.
Control Tuning and Best Practices
Advanced control structures, free on-line simulators explaining the control theory
Control engineering
Computer engineering
Management cybernetics | 0.764804 | 0.997969 | 0.763251 |
Potential | Potential generally refers to a currently unrealized ability. The term is used in a wide variety of fields, from physics to the social sciences to indicate things that are in a state where they are able to change in ways ranging from the simple release of energy by objects to the realization of abilities in people.
The philosopher Aristotle incorporated this concept into his theory of potentiality and actuality (in Greek, dynamis and energeia), translated into Latin as potentia and actualitas (earlier also possibilitas and efficacia), a pair of closely connected principles which he used to analyze motion, causality, ethics, and physiology in his Physics, Metaphysics, Nicomachean Ethics, and De Anima, which is about the human psyche. That which is potential can theoretically be made actual by taking the right action; for example, a boulder on the edge of a cliff has potential to fall that could be actualized by pushing it over the edge.
In physics, a potential may refer to the scalar potential or to the vector potential. In either case, it is a field defined in space, from which many important physical properties may be derived. Leading examples are the gravitational potential and the electric potential, from which the motion of gravitating or electrically charged bodies may be obtained. Specific forces have associated potentials, including the Coulomb potential, the van der Waals potential, the Lennard-Jones potential and the Yukawa potential. In electrochemistry there are Galvani potential, Volta potential, electrode potential, and standard electrode potential. In thermodynamics, the term potential often refers to thermodynamic potential.
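As familiar examples of such scalar potentials (standard textbook forms, quoted here only for illustration), the Newtonian gravitational potential of a point mass $M$ and the Coulomb potential of a point charge $q$ are
$$V_{\text{grav}}(r) = -\frac{GM}{r}, \qquad V_{\text{Coulomb}}(r) = \frac{q}{4\pi\varepsilon_{0} r},$$
from which the corresponding fields follow as the negative gradient, $\mathbf{g} = -\nabla V_{\text{grav}}$ and $\mathbf{E} = -\nabla V_{\text{Coulomb}}$.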
Etymology
“Potential” comes from the Latin word potentialis, from potentia = might, force, power, and hence ability, faculty, capacity, authority, influence. From the verb posse = to be able, to have power. From the adjective potis = able, capable. (The old form of the verb was a compound of the adjective and the verb “to be”, e.g. for possum it was potis sum, etc.) The Latin word potis is cognate with the Sanskrit word patis = “lord”.
Several languages have a potential mood, a grammatical construction which indicates that something is in a potential as opposed to actual state. These include Finnish, Japanese, and Sanskrit.
See also
Potential difference (voltage)
Potential energy
Water potential
References
Potentials | 0.76967 | 0.99166 | 0.76325 |
Larmor formula | In electrodynamics, the Larmor formula is used to calculate the total power radiated by a nonrelativistic point charge as it accelerates. It was first derived by J. J. Larmor in 1897, in the context of the wave theory of light.
When any charged particle (such as an electron, a proton, or an ion) accelerates, energy is radiated in the form of electromagnetic waves. For a particle whose velocity is small relative to the speed of light (i.e., nonrelativistic), the total power that the particle radiates (when considered as a point charge) can be calculated by the Larmor formula:
$$P = \frac{q^{2}a^{2}}{6\pi\varepsilon_{0}c^{3}} \quad \text{(SI units)} \qquad\qquad P = \frac{2}{3}\frac{q^{2}a^{2}}{c^{3}} \quad \text{(Gaussian units)}$$
where $a$ (or $\dot{v}$) is the proper acceleration, $q$ is the charge, and $c$ is the speed of light. A relativistic generalization is given by the Liénard–Wiechert potentials.
In either unit system, the power radiated by a single electron can be expressed in terms of the classical electron radius and electron mass as:
$$P = \frac{2}{3}\frac{m_{e}r_{e}\,a^{2}}{c}.$$
One implication is that an electron orbiting around a nucleus, as in the Bohr model, should lose energy, fall to the nucleus and the atom should collapse. This puzzle was not solved until quantum theory was introduced.
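As a small numerical sketch of the SI form of the formula in Python (the acceleration value is hypothetical and chosen only to make the evaluation concrete):

```python
import math

# Evaluate the SI-unit Larmor formula, P = q^2 a^2 / (6*pi*eps0*c^3),
# for an electron undergoing a hypothetical acceleration of 1e20 m/s^2.
q = 1.602176634e-19       # elementary charge, C
eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
c = 2.99792458e8          # speed of light, m/s

def larmor_power(charge, accel):
    return charge**2 * accel**2 / (6 * math.pi * eps0 * c**3)

print(larmor_power(q, 1e20))  # radiated power in watts, on the order of 1e-13 W
```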
Derivation
To calculate the power radiated by a point charge at a position $\mathbf{r}$, with a velocity $\mathbf{v}$, we integrate the Poynting vector over the surface of a sphere of radius $R$, to get
$$P = \oint \mathbf{S}\cdot\hat{\mathbf{n}}\,R^{2}\,d\Omega, \qquad \mathbf{S} = \frac{c}{4\pi}\,\mathbf{E}\times\mathbf{B} \quad \text{(Gaussian units)}.$$
The electric and magnetic fields are given by the Liénard–Wiechert field equations,
$$\mathbf{E} = q\,\frac{(\hat{\mathbf{n}}-\boldsymbol{\beta})(1-\beta^{2})}{(1-\hat{\mathbf{n}}\cdot\boldsymbol{\beta})^{3}R^{2}} + \frac{q}{c}\,\frac{\hat{\mathbf{n}}\times\big[(\hat{\mathbf{n}}-\boldsymbol{\beta})\times\dot{\boldsymbol{\beta}}\big]}{(1-\hat{\mathbf{n}}\cdot\boldsymbol{\beta})^{3}R}, \qquad \mathbf{B} = \hat{\mathbf{n}}\times\mathbf{E}.$$
The radius vector $\mathbf{R}$ is the distance from the charged particle's position at the retarded time to the point of observation of the electromagnetic fields at the present time, $\boldsymbol{\beta}$ is the charge's velocity divided by $c$, $\dot{\boldsymbol{\beta}}$ is the charge's acceleration divided by $c$, and $\hat{\mathbf{n}} = \mathbf{R}/R$. The variables $\mathbf{R}$, $\boldsymbol{\beta}$, $\dot{\boldsymbol{\beta}}$, and $\hat{\mathbf{n}}$ are all evaluated at the retarded time, $t' = t - R/c$.
We make a Lorentz transformation to the rest frame of the point charge, where $\boldsymbol{\beta}' = 0$, and
$$a'_{\parallel} = \gamma^{3}a_{\parallel}, \qquad a'_{\perp} = \gamma^{2}a_{\perp}.$$
Here, $a'_{\parallel}$ is the rest frame acceleration parallel to $\mathbf{v}$, and $a'_{\perp}$ is the rest frame acceleration perpendicular to $\mathbf{v}$. We integrate the rest frame Poynting vector over the surface of a sphere of radius $R'$, to get
$$P' = \frac{c}{4\pi}\oint \big(\mathbf{E}'\times\mathbf{B}'\big)\cdot\hat{\mathbf{n}}'\,R'^{2}\,d\Omega'.$$
We take the limit $R'\to\infty$. In this limit, only the acceleration (radiation) field survives, and so the electric field is given by
$$\mathbf{E}' = \frac{q}{c^{2}R'}\,\hat{\mathbf{n}}'\times\big(\hat{\mathbf{n}}'\times\mathbf{a}'\big),$$
with all variables evaluated at the present time. Then, the surface integral for the radiated power reduces to
$$P' = \frac{2}{3}\frac{q^{2}a'^{2}}{c^{3}}.$$
The radiated power can be put back in terms of the original acceleration in the moving frame, to give
$$P' = \frac{2}{3}\frac{q^{2}}{c^{3}}\left(\gamma^{6}a_{\parallel}^{2} + \gamma^{4}a_{\perp}^{2}\right).$$
The variables in this equation are in the original moving frame, but the rate of energy emission on the left hand side of the equation is still given in terms of the rest frame variables.
However, the right-hand side will be shown below to be a Lorentz invariant, so the radiated power can be Lorentz transformed to the moving frame, finally giving
$$P = \frac{2}{3}\frac{q^{2}}{c^{3}}\left(\gamma^{6}a_{\parallel}^{2} + \gamma^{4}a_{\perp}^{2}\right) = \frac{2}{3}\frac{q^{2}\gamma^{6}}{c}\left[\dot{\boldsymbol{\beta}}^{2} - \big(\boldsymbol{\beta}\times\dot{\boldsymbol{\beta}}\big)^{2}\right].$$
This result (in two forms) is the same as Liénard's relativistic extension of Larmor's formula, and is given here with all variables at the present time. Its nonrelativistic limit reduces to Larmor's original formula.
At high energies, it appears that the power radiated for acceleration parallel to the velocity is a factor $\gamma^{2}$ larger than that for perpendicular acceleration. However, writing the Liénard formula in terms of the velocity gives a misleading implication. In terms of momentum instead of velocity, the Liénard formula becomes
$$P = \frac{2}{3}\frac{q^{2}}{m^{2}c^{3}}\left[\left(\frac{dp_{\parallel}}{dt}\right)^{2} + \gamma^{2}\left(\frac{dp_{\perp}}{dt}\right)^{2}\right].$$
This shows that the power emitted for $d\mathbf{p}/dt$ perpendicular to the velocity is larger by a factor of $\gamma^{2}$ than the power for $d\mathbf{p}/dt$ parallel to the velocity.
This results in radiation damping being negligible for linear accelerators, but a limiting factor for circular accelerators.
Covariant form
The radiated power is actually a Lorentz scalar, given in covariant form as
$$P = -\frac{2}{3}\frac{q^{2}}{m^{2}c^{3}}\,\frac{dp_{\mu}}{d\tau}\frac{dp^{\mu}}{d\tau}$$
(with metric signature $(+,-,-,-)$). To show this, we reduce the four-vector scalar product to vector notation. We start with
$$\frac{dp_{\mu}}{d\tau}\frac{dp^{\mu}}{d\tau} = \frac{1}{c^{2}}\left(\frac{dE}{d\tau}\right)^{2} - \left(\frac{d\mathbf{p}}{d\tau}\right)^{2}.$$
The time derivatives are
$$\frac{dE}{d\tau} = \gamma\,\mathbf{v}\cdot\frac{d\mathbf{p}}{dt}, \qquad \frac{d\mathbf{p}}{d\tau} = \gamma\,\frac{d\mathbf{p}}{dt}.$$
When these derivatives are used, we get
$$\frac{dp_{\mu}}{d\tau}\frac{dp^{\mu}}{d\tau} = -\gamma^{2}\left[\left(\frac{d\mathbf{p}}{dt}\right)^{2} - \beta^{2}\left(\frac{dp_{\parallel}}{dt}\right)^{2}\right].$$
With this expression for the scalar product, the manifestly invariant form for the power agrees with the vector form above, demonstrating that the radiated power is a Lorentz scalar.
Angular distribution
The angular distribution of radiated power is given by a general formula, applicable whether or not the particle is relativistic. In CGS units, this formula is
$$\frac{dP}{d\Omega} = \frac{q^{2}}{4\pi c^{3}}\,\frac{\left|\hat{\mathbf{n}}\times\big[(\hat{\mathbf{n}}-\boldsymbol{\beta})\times\mathbf{a}\big]\right|^{2}}{(1-\hat{\mathbf{n}}\cdot\boldsymbol{\beta})^{5}},$$
where $\hat{\mathbf{n}}$ is a unit vector pointing from the particle towards the observer. In the case of linear motion (velocity parallel to acceleration), this simplifies to
$$\frac{dP}{d\Omega} = \frac{q^{2}a^{2}}{4\pi c^{3}}\,\frac{\sin^{2}\theta}{(1-\beta\cos\theta)^{5}},$$
where $\theta$ is the angle between the observer and the particle's motion.
Radiation reaction
The radiation from a charged particle carries energy and momentum. In order to satisfy energy and momentum conservation, the charged particle must experience a recoil at the time of emission. The radiation must exert an additional force on the charged particle. This force is known as the Abraham–Lorentz force (or Lorentz self-force) in the nonrelativistic limit, while its relativistic generalizations are known as the Lorentz–Dirac force or the Abraham–Lorentz–Dirac force. The radiation reaction phenomenon is one of the key problems and consequences of the Larmor formula. According to classical electrodynamics, a charged particle produces electromagnetic radiation as it accelerates. The particle loses momentum and energy as a result of the radiation, which carries them away from it. At the same time, the radiation reaction force also acts on the charged particle as a result of the emission.
The dynamics of charged particles are significantly impacted by the existence of this force. In particular, it causes a change in their motion that may be accounted for by the Larmor formula, a factor in the Lorentz-Dirac equation.
According to the Lorentz–Dirac equation, a charged particle's motion will be influenced by a "self-force" resulting from its own radiation. Non-physical behavior such as runaway solutions, in which the particle's velocity or energy becomes infinite in a finite amount of time, might result from this self-force.
A resolution to the paradoxes resulting from the introduction of a self-force due to the emission of electromagnetic radiation is that there is no self-force produced. The acceleration of a charged particle produces electromagnetic radiation, whose outgoing energy reduces the energy of the charged particle. This results in a "radiation reaction" that decreases the acceleration of the charged particle, not as a self-force, but simply as a reduced acceleration of the particle.
Atomic physics
The development of quantum physics, notably the Bohr model of the atom, was able to explain the gap between the classical prediction and the observed stability of atoms. The Bohr model proposed that transitions between distinct energy levels, which electrons could only inhabit, might account for the observed spectral lines of atoms. The wave-like properties of electrons and the idea of energy quantization were used to explain the stability of these electron orbits.
The Larmor formula can only be used for non-relativistic particles, which limits its usefulness. The Liénard-Wiechert potential is a more comprehensive formula that must be employed for particles travelling at relativistic speeds. In certain situations, more intricate calculations including numerical techniques or perturbation theory could be necessary to precisely compute the radiation the charged particle emits.
See also
Atomic theory
Cyclotron radiation
Electromagnetic wave equation
Maxwell's equations in curved spacetime
Radiation reaction
Wave equation
Wheeler–Feynman absorber theory
References
Antennas (radio)
Atomic physics
Electrodynamics
Electromagnetic radiation
Electromagnetism
Eponymous equations of physics | 0.771125 | 0.989777 | 0.763241 |
Flight | Flight or flying is the process by which an object moves through a space without contacting any planetary surface, either within an atmosphere (i.e. air flight or aviation) or through the vacuum of outer space (i.e. spaceflight). This can be achieved by generating aerodynamic lift associated with gliding or propulsive thrust, aerostatically using buoyancy, or by ballistic movement.
Many things can fly, from animal aviators such as birds, bats and insects, to natural gliders/parachuters such as patagial animals, anemochorous seeds and ballistospores, to human inventions like aircraft (airplanes, helicopters, airships, balloons, etc.) and rockets which may propel spacecraft and spaceplanes.
The engineering aspects of flight are the purview of aerospace engineering, which is subdivided into aeronautics, the study of vehicles that travel through the atmosphere; astronautics, the study of vehicles that travel through space; and ballistics, the study of the flight of projectiles.
Types of flight
Buoyant flight
Humans have managed to construct lighter-than-air vehicles that rise off the ground and fly, due to their buoyancy in the air.
An aerostat is a system that remains aloft primarily through the use of buoyancy to give an aircraft the same overall density as air. Aerostats include free balloons, airships, and moored balloons. An aerostat's main structural component is its envelope, a lightweight skin that encloses a volume of lifting gas to provide buoyancy, to which other components are attached.
Aerostats are so named because they use "aerostatic" lift, a buoyant force that does not require lateral movement through the surrounding air mass to effect a lifting force. By contrast, aerodynes primarily use aerodynamic lift, which requires the lateral movement of at least some part of the aircraft through the surrounding air mass.
Aerodynamic flight
Unpowered flight versus powered flight
Some things that fly do not generate propulsive thrust through the air, for example, the flying squirrel. This is termed gliding. Some other things can exploit rising air to climb such as raptors (when gliding) and man-made sailplane gliders. This is termed soaring. However most other birds and all powered aircraft need a source of propulsion to climb. This is termed powered flight.
Animal flight
The only groups of living things that use powered flight are birds, insects, and bats, while many groups have evolved gliding. The extinct pterosaurs, an order of reptiles contemporaneous with the dinosaurs, were also very successful flying animals, and there were apparently some flying dinosaurs (see Flying and gliding animals#Non-avian dinosaurs). Each of these groups' wings evolved independently, with insects the first animal group to evolve flight. The wings of the flying vertebrate groups are all based on the forelimbs, but differ significantly in structure; insect wings are hypothesized to be highly modified versions of structures that form gills in most other groups of arthropods.
Bats are the only mammals capable of sustaining level flight (see bat flight). However, there are several gliding mammals which are able to glide from tree to tree using fleshy membranes between their limbs; some can travel hundreds of meters in this way with very little loss in height. Flying frogs use greatly enlarged webbed feet for a similar purpose, and there are flying lizards which fold out their mobile ribs into a pair of flat gliding surfaces. "Flying" snakes also use mobile ribs to flatten their body into an aerodynamic shape, with a back and forth motion much the same as they use on the ground.
Flying fish can glide using enlarged wing-like fins, and have been observed soaring for hundreds of meters. It is thought that this ability evolved by natural selection because it was an effective means of escape from underwater predators. The longest recorded flight of a flying fish was 45 seconds.
Most birds fly (see bird flight), with some exceptions. The largest birds, the ostrich and the emu, are earthbound flightless birds, as were the now-extinct dodos and the Phorusrhacids, which were the dominant predators of South America in the Cenozoic era. The non-flying penguins have wings adapted for use under water and use the same wing movements for swimming that most other birds use for flight. Most small flightless birds are native to small islands, and lead a lifestyle where flight would offer little advantage.
Among living animals that fly, the wandering albatross has the greatest wingspan, up to ; the great bustard has the greatest weight, topping at .
Most species of insects can fly as adults. Insect flight makes use of either of two basic aerodynamic models: creating a leading edge vortex, found in most insects, and using clap and fling, found in very small insects such as thrips.
Many species of spiders, spider mites and lepidoptera use a technique called ballooning to ride air currents such as thermals, by exposing their gossamer threads, which get lifted by wind and atmospheric electric fields.
Mechanical
Mechanical flight is the use of a machine to fly. These machines include aircraft such as airplanes, gliders, helicopters, autogyros, airships, balloons, ornithopters, as well as spacecraft. Gliders are capable of unpowered flight. Another form of mechanical flight is para-sailing, where a parachute-like object is pulled by a boat. In an airplane, lift is created by the wings; the shape of the wings of the airplane is designed specially for the type of flight desired. There are different types of wings: tapered, semi-tapered, sweptback, rectangular and elliptical. An aircraft wing is sometimes called an airfoil, which is a device that creates lift when air flows across it.
Supersonic
Supersonic flight is flight faster than the speed of sound. Supersonic flight is associated with the formation of shock waves that form a sonic boom that can be heard from the ground, and is frequently startling. The creation of this shockwave requires a significant amount of energy; because of this, supersonic flight is generally less efficient than subsonic flight, which typically takes place at about 85% of the speed of sound.
Hypersonic
Hypersonic flight is very high speed flight where the heat generated by the compression of the air due to the motion through the air causes chemical changes to the air. Hypersonic flight is achieved primarily by reentering spacecraft such as the Space Shuttle and Soyuz.
Ballistic
Atmospheric
Some things generate little or no lift and move only or mostly under the action of momentum, gravity, air drag and in some cases thrust. This is termed ballistic flight. Examples include balls, arrows, bullets, fireworks etc.
Spaceflight
Essentially an extreme form of ballistic flight, spaceflight is the use of space technology to achieve the flight of spacecraft into and through outer space. Examples include ballistic missiles, orbital spaceflight, etc.
Spaceflight is used in space exploration, and also in commercial activities like space tourism and satellite telecommunications. Additional non-commercial uses of spaceflight include space observatories, reconnaissance satellites and other Earth observation satellites.
A spaceflight typically begins with a rocket launch, which provides the initial thrust to overcome the force of gravity and propels the spacecraft from the surface of the Earth. Once in space, the motion of a spacecraft—both when unpropelled and when under propulsion—is covered by the area of study called astrodynamics. Some spacecraft remain in space indefinitely, some disintegrate during atmospheric reentry, and others reach a planetary or lunar surface for landing or impact.
Solid-state propulsion
In 2018, researchers at Massachusetts Institute of Technology (MIT) managed to fly an aeroplane with no moving parts, powered by an "ionic wind" also known as electroaerodynamic thrust.
History
Many human cultures have built devices that fly, from the earliest projectiles such as stones and spears, to the boomerang in Australia, the hot air Kongming lantern, and kites.
Aviation
George Cayley studied flight scientifically in the first half of the 19th century, and in the second half of the 19th century Otto Lilienthal made over 200 gliding flights and was also one of the first to understand flight scientifically. His work was replicated and extended by the Wright brothers who made gliding flights and finally the first controlled and extended, manned powered flights.
Spaceflight
Spaceflight, particularly human spaceflight became a reality in the 20th century following theoretical and practical breakthroughs by Konstantin Tsiolkovsky and Robert H. Goddard. The first orbital spaceflight was in 1957, and Yuri Gagarin was carried aboard the first crewed orbital spaceflight in 1961.
Physics
There are different approaches to flight. If an object has a lower density than air, then it is buoyant and is able to float in the air without expending energy. Heavier-than-air craft, known as aerodynes, include flighted animals and insects, fixed-wing aircraft and rotorcraft. Because such craft are heavier than air, they must generate lift to overcome their weight. The wind resistance caused by the craft moving through the air is called drag and is overcome by propulsive thrust except in the case of gliding.
Some vehicles also use thrust in the place of lift; for example rockets and Harrier jump jets.
Forces
Forces relevant to flight are
Propulsive thrust (except in gliders)
Lift, created by the reaction to an airflow
Drag, created by aerodynamic friction
Weight, created by gravity
Buoyancy, for lighter than air flight
These forces must be balanced for stable flight to occur.
Thrust
A fixed-wing aircraft generates forward thrust when air is pushed in the direction opposite to flight. This can be done in several ways including by the spinning blades of a propeller, or a rotating fan pushing air out from the back of a jet engine, or by ejecting hot gases from a rocket engine. The forward thrust is proportional to the mass of the airstream multiplied by the difference in velocity of the airstream. Reverse thrust can be generated to aid braking after landing by reversing the pitch of variable-pitch propeller blades, or using a thrust reverser on a jet engine. Rotary wing aircraft and thrust vectoring V/STOL aircraft use engine thrust to support the weight of the aircraft, and vector this thrust fore and aft to control forward speed.
Lift
In the context of an air flow relative to a flying body, the lift force is the component of the aerodynamic force that is perpendicular to the flow direction. Aerodynamic lift results when the wing causes the surrounding air to be deflected - the air then causes a force on the wing in the opposite direction, in accordance with Newton's third law of motion.
Lift is commonly associated with the wing of an aircraft, although lift is also generated by rotors on rotorcraft (which are effectively rotating wings, performing the same function without requiring that the aircraft move forward through the air). While common meanings of the word "lift" suggest that lift opposes gravity, aerodynamic lift can be in any direction. When an aircraft is cruising for example, lift does oppose gravity, but lift occurs at an angle when climbing, descending or banking. On high-speed cars, the lift force is directed downwards (called "down-force") to keep the car stable on the road.
Drag
For a solid object moving through a fluid, the drag is the component of the net aerodynamic or hydrodynamic force acting opposite to the direction of the movement. Therefore, drag opposes the motion of the object, and in a powered vehicle it must be overcome by thrust. The process which creates lift also causes some drag.
Lift-to-drag ratio
Aerodynamic lift is created by the motion of an aerodynamic object (wing) through the air, which due to its shape and angle deflects the air. For sustained straight and level flight, lift must be equal and opposite to weight. In general, long narrow wings are able to deflect a large amount of air at a slow speed, whereas smaller wings need a higher forward speed to deflect an equivalent amount of air and thus generate an equivalent amount of lift. Large cargo aircraft tend to use longer wings with higher angles of attack, whereas supersonic aircraft tend to have short wings and rely heavily on high forward speed to generate lift.
However, this lift (deflection) process inevitably causes a retarding force called drag. Because lift and drag are both aerodynamic forces, the ratio of lift to drag is an indication of the aerodynamic efficiency of the airplane. The lift to drag ratio is the L/D ratio, pronounced "L over D ratio." An airplane has a high L/D ratio if it produces a large amount of lift or a small amount of drag. The lift/drag ratio is determined by dividing the lift coefficient by the drag coefficient, CL/CD.
The lift coefficient Cl is equal to the lift L divided by the (density r times half the velocity V squared times the wing area A). [Cl = L / (A * .5 * r * V^2)] The lift coefficient is also affected by the compressibility of the air, which is much greater at higher speeds, so velocity V is not a linear function. Compressibility is also affected by the shape of the aircraft surfaces.
The drag coefficient Cd is equal to the drag D divided by the (density r times half the velocity V squared times the reference area A). [Cd = D / (A * .5 * r * V^2)]
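The quoted definitions can be written as a short Python sketch; the flight-condition numbers below are invented for the example (they loosely resemble a small light aircraft in cruise) and are not taken from the text.

```python
# Lift and drag coefficient definitions: C = force / (0.5 * rho * V^2 * A).
def lift_coefficient(L, rho, V, A):
    return L / (0.5 * rho * V**2 * A)

def drag_coefficient(D, rho, V, A):
    return D / (0.5 * rho * V**2 * A)

rho = 1.225      # air density at sea level, kg/m^3
V = 70.0         # airspeed, m/s (hypothetical)
A = 16.2         # wing area, m^2 (hypothetical)
L = 11000.0      # lift, N (hypothetical)
D = 900.0        # drag, N (hypothetical)

Cl = lift_coefficient(L, rho, V, A)
Cd = drag_coefficient(D, rho, V, A)
print(Cl, Cd, Cl / Cd)   # the last value is the lift-to-drag ratio, L/D
```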
Lift-to-drag ratios for practical aircraft vary from about 4:1 for vehicles and birds with relatively short wings, up to 60:1 or more for vehicles with very long wings, such as gliders. A greater angle of attack relative to the forward movement also increases the extent of deflection, and thus generates extra lift. However a greater angle of attack also generates extra drag.
Lift/drag ratio also determines the glide ratio and gliding range. Since the glide ratio is based only on the relationship of the aerodynamics forces acting on the aircraft, aircraft weight will not affect it. The only effect weight has is to vary the time that the aircraft will glide for – a heavier aircraft gliding at a higher airspeed will arrive at the same touchdown point in a shorter time.
Buoyancy
Air pressure acting up against an object in air is greater than the pressure above pushing down. The buoyancy, in both cases, is equal to the weight of fluid displaced - Archimedes' principle holds for air just as it does for water.
A cubic meter of air at ordinary atmospheric pressure and room temperature has a mass of about 1.2 kilograms, so its weight is about 12 newtons. Therefore, any 1-cubic-meter object in air is buoyed up with a force of 12 newtons. If the mass of the 1-cubic-meter object is greater than 1.2 kilograms (so that its weight is greater than 12 newtons), it falls to the ground when released. If an object of this size has a mass less than 1.2 kilograms, it rises in the air. Any object that has a mass that is less than the mass of an equal volume of air will rise in air - in other words, any object less dense than air will rise.
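Written out as a worked example using the numbers above (and taking $g \approx 9.8\ \mathrm{m/s^{2}}$):
$$F_{b} = \rho_{\text{air}}\,V\,g \approx 1.2\ \mathrm{kg/m^{3}}\times 1\ \mathrm{m^{3}}\times 9.8\ \mathrm{m/s^{2}} \approx 12\ \mathrm{N}.$$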
Thrust to weight ratio
Thrust-to-weight ratio is, as its name suggests, the ratio of instantaneous thrust to weight (where weight means weight at the Earth's standard acceleration of gravity, $g_0$). It is a dimensionless parameter characteristic of rockets and other jet engines and of vehicles propelled by such engines (typically space launch vehicles and jet aircraft).
If the thrust-to-weight ratio is greater than the local gravity strength (expressed in gs), then flight can occur without any forward motion or any aerodynamic lift being required.
If the thrust-to-weight ratio times the lift-to-drag ratio is greater than local gravity then takeoff using aerodynamic lift is possible.
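As a hedged worked example with invented numbers, taking local gravity as 1 g: an aircraft with a thrust-to-weight ratio of about 0.3 and a lift-to-drag ratio of about 15 satisfies
$$\frac{T}{W}\times\frac{L}{D} \approx 0.3\times 15 = 4.5 > 1,$$
so it can take off using aerodynamic lift even though its thrust-to-weight ratio alone is well below 1.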
Flight dynamics
Flight dynamics is the science of air and space vehicle orientation and control in three dimensions. The three critical flight dynamics parameters are the angles of rotation in three dimensions about the vehicle's center of mass, known as pitch, roll and yaw (See Tait-Bryan rotations for an explanation).
The control of these dimensions can involve a horizontal stabilizer (i.e. "a tail"), ailerons and other movable aerodynamic devices which control angular stability, i.e. flight attitude (which in turn affects altitude and heading). Wings are often angled slightly upwards; they have a "positive dihedral angle", which gives inherent roll stabilization.
Energy efficiency
Creating thrust to gain height, and pushing through the air to overcome the drag associated with lift, all takes energy. Different objects and creatures capable of flight vary in the efficiency of their muscles, motors and how well this translates into forward thrust.
Propulsive efficiency determines how much useful propulsive power a vehicle obtains from a unit of fuel.
Range
The range that powered flight articles can achieve is ultimately limited by their drag, as well as how much energy they can store on board and how efficiently they can turn that energy into propulsion.
For powered aircraft, the useful energy is determined by their fuel fraction (what percentage of the takeoff weight is fuel) as well as by the specific energy of the fuel used.
Power-to-weight ratio
All animals and devices capable of sustained flight need relatively high power-to-weight ratios to be able to generate enough lift and/or thrust to achieve take off.
Takeoff and landing
Vehicles that can fly can have different ways to takeoff and land. Conventional aircraft accelerate along the ground until sufficient lift is generated for takeoff, and reverse the process for landing. Some aircraft can take off at low speed; this is called a short takeoff. Some aircraft such as helicopters and Harrier jump jets can take off and land vertically. Rockets also usually take off and land vertically, but some designs can land horizontally.
Guidance, navigation and control
Navigation
Navigation encompasses the systems necessary to calculate current position (e.g. compass, GPS, LORAN, star tracker, inertial measurement unit, and altimeter).
In aircraft, successful air navigation involves piloting an aircraft from place to place without getting lost, breaking the laws applying to aircraft, or endangering the safety of those on board or on the ground.
The techniques used for navigation in the air will depend on whether the aircraft is flying under the visual flight rules (VFR) or the instrument flight rules (IFR). In the latter case, the pilot will navigate exclusively using instruments and radio navigation aids such as beacons, or as directed under radar control by air traffic control. In the VFR case, a pilot will largely navigate using dead reckoning combined with visual observations (known as pilotage), with reference to appropriate maps. This may be supplemented using radio navigation aids.
Guidance
A guidance system is a device or group of devices used in the navigation of a ship, aircraft, missile, rocket, satellite, or other moving object. Typically, guidance is responsible for the calculation of the vector (i.e., direction, velocity) toward an objective.
Control
A conventional fixed-wing aircraft flight control system consists of flight control surfaces, the respective cockpit controls, connecting linkages, and the necessary operating mechanisms to control an aircraft's direction in flight. Aircraft engine controls are also considered as flight controls as they change speed.
Traffic
In the case of aircraft, air traffic is controlled by air traffic control systems.
Collision avoidance is the process of controlling spacecraft to try to prevent collisions.
Flight safety
Air safety is a term encompassing the theory, investigation and categorization of flight failures, and the prevention of such failures through regulation, education and training. It can also be applied in the context of campaigns that inform the public as to the safety of air travel.
See also
Aerodynamics
Levitation
Transvection (flying)
References
Notes
Bibliography
Coulson-Thomas, Colin. The Oxford Illustrated Dictionary. Oxford, UK: Oxford University Press, 1976; first edition 1975.
French, A. P. Newtonian Mechanics (The M.I.T. Introductory Physics Series) (1st ed.). New York: W. W. Norton & Company Inc., 1970.
Honicke, K., R. Lindner, P. Anders, M. Krahl, H. Hadrich and K. Rohricht. Beschreibung der Konstruktion der Triebwerksanlagen. Berlin: Interflug, 1968.
Sutton, George P. and Oscar Biblarz. Rocket Propulsion Elements. New York: Wiley-Interscience, 2000 (7th edition).
Walker, Peter. Chambers Dictionary of Science and Technology. Edinburgh: Chambers Harrap Publishers Ltd., 2000; first edition 1998.
External links
History and photographs of early aeroplanes etc.
'Birds in Flight and Aeroplanes' by Evolutionary Biologist and trained Engineer John Maynard-Smith Freeview video provided by the Vega Science Trust.
Aerodynamics
Sky | 0.767349 | 0.994603 | 0.763208 |
Are | Are commonly refers to:
Are (unit), a unit of area equal to 100 m2
Are, ARE or Åre may also refer to:
Places
Åre, a locality in Sweden
Åre Municipality, a municipality in Sweden
Åre ski resort in Sweden
Are Parish, a municipality in Pärnu County, Estonia
Are, Estonia, a small borough in Are Parish
Are-Gymnasium, a secondary school in Bad Neuenahr-Ahrweiler
Are, Saare County, a village in Pöide Parish, Saare County, Estonia
Arab Republic of Egypt
United Arab Emirates (ISO 3166-1 alpha-3 country code ARE)
Science, technology, and mathematics
Are (moth), a genus of moth
Aircraft Reactor Experiment, a US military program in the 1950s
Algebraic Riccati equation, in control theory
Asymptotic relative efficiency, in statistics
AU-rich element, in genetics
Organisations
Admiralty Research Establishment, a precursor to the UK's Defence Research Agency
Association for Research and Enlightenment, an organization devoted to American claimed psychic Edgar Cayce
Associate of the Royal Society of Painter-Printmakers, in the UK
AIRES, a Colombian airline (ICAO code ARE)
Other uses
are, a form of the English verb "to be"
Are, note name, see Guidonian hand
Are (surname), a surname recorded in Chinese history
Dirk van Are, bishop and lord of Utrecht in the 13th century
Are languages, a subgroup of the Are-Taupota languages
Are language, a language from Papua New Guinea
A.R.E. Weapons, a band from New York City, formed in 1999
Architect Registration Examination, a professional licensure examination in the US
See also
Ar (disambiguation)
ARR (disambiguation)
Arre (disambiguation)
R (disambiguation) | 0.768705 | 0.992838 | 0.7632 |
Reification (fallacy) | Reification (also known as concretism, hypostatization, or the fallacy of misplaced concreteness) is a fallacy of ambiguity, when an abstraction (abstract belief or hypothetical construct) is treated as if it were a concrete real event or physical entity.
In other words, it is the error of treating something that is not concrete, such as an idea, as a concrete thing. A common case of reification is the confusion of a model with reality: "the map is not the territory".
Reification is part of normal usage of natural language, as well as of literature, where a reified abstraction is intended as a figure of speech, and actually understood as such. But the use of reification in logical reasoning or rhetoric is misleading and usually regarded as a fallacy.
A potential consequence of reification is exemplified by Goodhart's law, where changes in the measurement of a phenomenon are mistaken for changes to the phenomenon itself.
Etymology
The term "reification" originates from the combination of the Latin terms res ("thing") and -fication, a suffix related to facere ("to make"). Thus reification can be loosely translated as "thing-making"; the turning of something abstract into a concrete thing or object.
Theory
Reification takes place when natural or social processes are misunderstood or simplified; for example, when human creations are described as "facts of nature, results of cosmic laws, or manifestations of divine will".
Reification may derive from an innate tendency to simplify experience by assuming constancy as much as possible.
Fallacy of misplaced concreteness
According to Alfred North Whitehead, one commits the fallacy of misplaced concreteness when one mistakes an abstract belief, opinion, or concept about the way things are for a physical or "concrete" reality: "There is an error; but it is merely the accidental error of mistaking the abstract for the concrete. It is an example of what might be called the 'Fallacy of Misplaced Concreteness.'" Whitehead proposed the fallacy in a discussion of the relation of spatial and temporal location of objects. He rejects the notion that a concrete physical object in the universe can be ascribed a simple spatial or temporal extension, that is, without reference to its relations to other spatial or temporal extensions.
[...] apart from any essential reference of the relations of [a] bit of matter to other regions of space [...] there is no element whatever which possesses this character of simple location. [... Instead,] I hold that by a process of constructive abstraction we can arrive at abstractions which are the simply located bits of material, and at other abstractions which are the minds included in the scientific scheme. Accordingly, the real error is an example of what I have termed: The Fallacy of Misplaced Concreteness.
Vicious abstractionism
William James used the notion of "vicious abstractionism" and "vicious intellectualism" in various places, especially to criticize Immanuel Kant's and Georg Wilhelm Friedrich Hegel's idealistic philosophies. In The Meaning of Truth, James wrote:
Let me give the name of "vicious abstractionism" to a way of using concepts which may be thus described: We conceive a concrete situation by singling out some salient or important feature in it, and classing it under that; then, instead of adding to its previous characters all the positive consequences which the new way of conceiving it may bring, we proceed to use our concept privatively; reducing the originally rich phenomenon to the naked suggestions of that name abstractly taken, treating it as a case of "nothing but" that concept, and acting as if all the other characters from out of which the concept is abstracted were expunged. Abstraction, functioning in this way, becomes a means of arrest far more than a means of advance in thought. ... The viciously privative employment of abstract characters and class names is, I am persuaded, one of the great original sins of the rationalistic mind.
In a chapter on "The Methods and Snares of Psychology" in The Principles of Psychology, James describes a related fallacy, the psychologist's fallacy, thus: "The great snare of the psychologist is the confusion of his own standpoint with that of the mental fact about which he is making his report. I shall hereafter call this the "psychologist's fallacy" par excellence" (volume 1, p. 196). John Dewey followed James in describing a variety of fallacies, including "the philosophic fallacy", "the analytic fallacy", and "the fallacy of definition".
Use of constructs in science
The concept of a "construct" has a long history in science; it is used in many, if not most, areas of science. A construct is a hypothetical explanatory variable that is not directly observable. For example, the concepts of motivation in psychology, utility in economics, and gravitational field in physics are constructs; they are not directly observable, but instead are tools to describe natural phenomena.
The degree to which a construct is useful and accepted as part of the current paradigm in a scientific community depends on empirical research that has demonstrated that a scientific construct has construct validity (especially, predictive validity).
Stephen Jay Gould draws heavily on the fallacy of reification in his book The Mismeasure of Man. He argues that the error in using intelligence quotient scores to judge people's intelligence is that, just because a quantity called "intelligence" or "intelligence quotient" is defined as a measurable thing, it does not follow that intelligence is real; he thus denies the validity of the construct "intelligence."
Relation to other fallacies
Pathetic fallacy (also known as anthropomorphic fallacy or anthropomorphization) is a specific type of reification. Just as reification is the attribution of concrete characteristics to an abstract idea, a pathetic fallacy is committed when those characteristics are specifically human characteristics, especially thoughts or feelings. Pathetic fallacy is also related to personification, which is a direct and explicit ascription of life and sentience to the thing in question, whereas the pathetic fallacy is much broader and more allusive.
The animistic fallacy involves attributing personal intention to an event or situation.
Reification fallacy should not be confused with other fallacies of ambiguity:
Accentus, where the ambiguity arises from the emphasis (accent) placed on a word or phrase
Amphiboly, a verbal fallacy arising from ambiguity in the grammatical structure of a sentence
Composition, when one assumes that a whole has a property solely because its various parts have that property
Division, when one assumes that various parts have a property solely because the whole has that same property
Equivocation, the misleading use of a word with more than one meaning
As a rhetorical device
The rhetorical devices of metaphor and personification express a form of reification, but short of a fallacy. These devices, by definition, do not apply literally and thus exclude any fallacious conclusion that the formal reification is real. For example, the metaphor known as the pathetic fallacy, "the sea was angry" reifies anger, but does not imply that anger is a concrete substance, or that water is sentient. The distinction is that a fallacy inhabits faulty reasoning, and not the mere illustration or poetry of rhetoric.
Counterexamples
Reification, while usually fallacious, is sometimes considered a valid argument. Thomas Schelling, a game theorist during the Cold War, argued that for many purposes an abstraction shared among disparate people effectively becomes real. Some examples include the effect of round numbers in stock prices, the importance placed on the Dow Jones Industrial index, national borders, preferred numbers, and many others. (Compare the theory of social constructionism.)
See also
All models are wrong
Counterfactual definiteness
Idolatry
Objectification
Philosophical realism
Problem of universals, a debate about the reality of categories
Surrogation
Hypostatic abstraction
References
Informal fallacies | 0.768271 | 0.993398 | 0.763199 |
Aerenchyma | Aerenchyma or aeriferous parenchyma or lacunae, is a modification of the parenchyma to form a spongy tissue that creates spaces or air channels in the leaves, stems and roots of some plants, which allows exchange of gases between the shoot and the root. The channels of air-filled cavities (see image to right) provide a low-resistance internal pathway for the exchange of gases such as oxygen, carbon dioxide and ethylene between the plant above the water and the submerged tissues. Aerenchyma is also widespread in aquatic and wetland plants which must grow in hypoxic soils.
The word "aerenchyma" is Modern Latin derived from Latin for "air" and Greek for "infusion."
Aerenchyma formation and hypoxia
Aerenchyma (air-filled cavities) occur in two forms. Lysigenous aerenchyma form via apoptosis of particular cortical root cells to form air-filled cavities. Schizogenous aerenchyma form via decomposition of pectic substances in the middle lamellae with consequent cell separation.
When soil is flooded, hypoxia develops, as soil microorganisms consume oxygen faster than diffusion occurs. The presence of hypoxic soils is one of the defining characteristics of wetlands. Many wetland plants possess aerenchyma, and in some, such as water-lilies, there is mass flow of atmospheric air through leaves and rhizomes. There are many other chemical consequences of hypoxia. For example, nitrification is inhibited when oxygen is low, and toxic compounds are formed as anaerobic bacteria use nitrate, manganese, and sulfate as alternative electron acceptors. The reduction-oxidation potential of the soil decreases, and metal oxides such as iron and manganese dissolve; however, radial oxygen loss allows re-oxidation of these ions in the rhizosphere.
In general, low oxygen stimulates trees and plants to produce ethylene.
Advantages
The large air-filled cavities provide a low-resistance internal pathway for the exchange of gases between the plant organs above the water and the submerged tissues. This allows plants to grow without incurring the metabolic costs of anaerobic respiration. Moreover, the degradation of cortical cells during aerenchyma formation reduces the metabolic costs of plants during stresses such as drought. Some of the oxygen transported through the aerenchyma leaks through root pores into the surrounding soil. The resulting small rhizosphere of oxygenated soil around individual roots supports microorganisms that prevent the influx of potentially toxic soil components such as sulfide, iron, and manganese.
References
Plant physiology
Plant cells
Wetlands
de:Parenchyma#hym | 0.777546 | 0.98154 | 0.763193 |
Arm swing in human locomotion | Arm swing in human bipedal walking is a natural motion wherein each arm swings with the motion of the opposing leg. Swinging arms in an opposing direction with respect to the lower limb reduces the angular momentum of the body, balancing the rotational motion produced during walking. Although such pendulum-like motion of the arms is not essential for walking, recent studies indicate that arm swing improves the stability and energy efficiency of human locomotion. These positive effects of arm swing have been utilized in sports, especially in racewalking and sprinting.
Kinematics
Studies on the role of arm swing consist mainly of analysis of bipedal walking models and treadmill experiments on human subjects. Bipedal walking models of various complexity levels have provided an explanation for the effects of arm swing on human locomotion. In the course of bipedal walking, the leg swing results in an angular momentum that is balanced by the ground reaction moments on the stance foot. Swinging arms create an angular momentum in the opposing direction of the lower-limb rotation, reducing the total angular momentum of the body. Lower angular momentum of the body results in a decline in the ground reaction moment on the stance foot.
The amplitude and frequency of arm movements are determined by the gait, meaning that the swing motion adapts to changing conditions and perturbations. As the walking speed increases, the amplitude of the arm swing increases accordingly. The frequency of the arm movements changes with the speed as well. Studies showed that at speeds lower than approximately 0.8 m/s the frequency ratio between arm and leg movements is 2:1, whereas above that speed the ratio becomes 1:1.
Theories
Stability
Both simulations on skeletal models and force-plate experiments agree that free arm swing limits the ground reaction moments acting on the stance foot during walking, because the total angular momentum is lowered by the counterbalancing swing of the arms with respect to the lower limbs. In other words, a subject exerts less reaction moment on the ground surface when there is arm swing. This implies that the friction force between the stance foot and the ground surface does not have to be as high as it would be without the arm swing.
Energy efficiency
Whether arm swing is a passive, natural motion caused by the rotation of the torso or an active motion that requires active muscle work has been a central question about arm swing that could illuminate its benefit and function. A recent study of energy consumption during walking showed that at low speeds arm swing is a passive motion dictated by the kinematics of the torso, no different from a pair of pendula hung from the shoulders. Active upper-extremity muscle work, controlled by the brain, only takes part when there is a perturbation and restores that natural motion. However, at higher speeds, the passive motion is insufficient to explain the amplitude of the swing observed in the experiments. The contribution of active muscle work increases with the walking speed. Despite the fact that a certain amount of energy is consumed for the arm movements, the total energy consumption drops, meaning that arm swing still reduces the cost of walking. That reduction in energy is up to 12 percent at certain walking speeds, a significant saving.
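As a rough illustration of the passive-pendulum view, the sketch below compares the natural frequency of an arm modeled as a simple pendulum with a typical stride frequency. The arm length, effective pendulum length and cadence are assumed values for illustration only, not data from the studies cited above.

```python
import math

# Assumed, illustrative values (not from the cited studies)
ARM_LENGTH_M = 0.75                              # shoulder-to-hand length of an adult arm
EFFECTIVE_LENGTH_M = 2.0 * ARM_LENGTH_M / 3.0    # crude effective pendulum length
G = 9.81                                         # gravitational acceleration, m/s^2
STRIDE_FREQUENCY_HZ = 0.9                        # typical full gait-cycle rate at moderate speed

def pendulum_frequency_hz(length_m: float) -> float:
    """Natural frequency of a simple pendulum of the given length."""
    return math.sqrt(G / length_m) / (2.0 * math.pi)

arm_f = pendulum_frequency_hz(EFFECTIVE_LENGTH_M)
print(f"Arm natural frequency ~ {arm_f:.2f} Hz")
print(f"Assumed stride frequency ~ {STRIDE_FREQUENCY_HZ:.2f} Hz")
# If the two frequencies are close, a mostly passive arm swing locked 1:1 to the
# stride is mechanically cheap; a large mismatch would require more active muscle
# work, consistent with the speed dependence described above.
```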
Evolution
The inter-limb coordination in human locomotion, and the question of whether the human gait is based on quadruped locomotion, is another major topic of interest. Some research indicates that inter-limb coordination during human locomotion is organized in a similar way to that in the cat, promoting the view that arm swing may be a residual function from quadruped gait. Another work on the control mechanisms of arm movements during walking corroborated the former findings, showing that a central pattern generator (CPG) might be involved in cyclic arm swing. However, these findings do not imply vestigiality of arm swing, which appears debatable in light of the 2003 evidence on the function of arm swing in bipedal locomotion.
Athletic performance
Energy efficiency of arm swing and its potential in adjusting the momentum of the body have been utilized in sports. Sprinters make use of the contribution of arm swing to the linear momentum in order to achieve a higher forward acceleration. Racewalkers also utilize arm swing for its energy efficiency. Beyond the rhythmic movements of ordinary walking, swinging the arms in the right way helps athletic performance in several disciplines. Standing long jump performance is shown to be improved by swinging the arms forward during the onset of the jump and back and forth during landing, since the linear momentum of the body can be adjusted with the help of moving arms. Use of the arms in adjusting the rotational and linear momentum is also a common practice in somersaulting and gymnastics.
Robotics
The literature on arm swing is partly created by robotics researchers, as stability in locomotion is a significant challenge, especially in humanoid robots. So far, although many humanoid robots preserve static equilibrium during walking, which does not require arm swing, arm movements have been added to a recent humanoid robot walking in dynamic equilibrium.
The pendulum-like motion of the arms is also utilized in passive dynamic walkers, mechanisms that can walk on their own without active control.
Neuromechanical considerations
Understanding the underlying neural mechanisms of the organization of rhythmic arm movement and its coordination with the lower limbs could enable development of effective strategies for rehabilitation of spinal cord injury and stroke patients. Rhythmic arm movements for different tasks (arm swing during walking, cycling the arms while standing, and arm swing while standing) were investigated from this perspective, and the results pointed to a common central control mechanism. Performing the left-lateralised Stroop task while walking on a treadmill tends to reduce arm swing on the right, particularly in older people, suggesting a significant supraspinal contribution to its maintenance. While men of all ages demonstrate this interference effect between cognitive load and right arm swing, women appear to be resistant until the age of 60.
Medical science
The role of arm movements in patients is another active research direction, investigating the strategies adopted to maintain stability in walking. As an example, children with hemiparetic cerebral palsy (CP) showed substantial increases in angular momentum generated by the legs, which were compensated by increased angular momentum of the unaffected arm, showing how arm swing is used to balance the rotational motion of the body. Reduction in bilateral arm coordination may contribute to clinically observed asymmetry in arm swing behavior, which can be a sign of Parkinson's disease. Quantitative measurement of the level of asymmetry in arm swing is considered useful for early and differential diagnosis, and for tracking Parkinson's disease progression.
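One commonly used way to quantify such asymmetry is a normalized difference between left and right arm swing amplitudes. The sketch below uses that generic index with made-up amplitude values; it is an illustration, not the specific metric of any particular study cited here.

```python
def arm_swing_asymmetry(amp_left_deg: float, amp_right_deg: float) -> float:
    """Normalized asymmetry index in percent: 0 = symmetric, 100 = entirely one-sided."""
    total = amp_left_deg + amp_right_deg
    if total == 0:
        raise ValueError("at least one arm must swing")
    return 100.0 * abs(amp_left_deg - amp_right_deg) / total

# Made-up example amplitudes (degrees of shoulder flexion-extension)
print(arm_swing_asymmetry(30.0, 28.0))  # healthy-looking gait, low asymmetry
print(arm_swing_asymmetry(30.0, 12.0))  # markedly reduced swing on one side
```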
See also
Biomechanics of sprint running
Bipedalism
Central pattern generator
Erb's palsy (unilateral reduced arm swing while walking)
Gunslinger's gait (unilateral reduced arm swing while walking)
Hemimotor neglect (may manifest as unilateral reduced arm swing while walking)
Hemiparesis/Hemiplegia (unilateral reduced arm swing while walking)
Interlimb coordination
Parkinson's disease (may manifest as unilateral reduced arm swing while walking)
References
Further reading
External links
Arm Swinging – Experiment
‘LOCOMOTION NEUROMECHANICS’ course APPH 6232 at Georgia Institute of Technology
Arm Swing: Maximize Your Upper Body and Reduce Your Legwork
How Important is Arm Swing to Running Form and Speed?
The Art of Running
Scientists find a reason for arm-swinging as you walk
Walking | 0.785555 | 0.971525 | 0.763187 |
Second law of thermodynamics | The second law of thermodynamics is a physical law based on universal empirical observation concerning heat and energy interconversions. A simple statement of the law is that heat always flows spontaneously from hotter to colder regions of matter (or 'downhill' in terms of the temperature gradient). Another statement is: "Not all heat can be converted into work in a cyclic process."
The second law of thermodynamics establishes the concept of entropy as a physical property of a thermodynamic system. It predicts whether processes are forbidden despite obeying the requirement of conservation of energy as expressed in the first law of thermodynamics and provides necessary criteria for spontaneous processes. For example, the first law allows the process of a cup falling off a table and breaking on the floor, as well as allowing the reverse process of the cup fragments coming back together and 'jumping' back onto the table, while the second law allows the former and denies the latter. The second law may be formulated by the observation that the entropy of isolated systems left to spontaneous evolution cannot decrease, as they always tend toward a state of thermodynamic equilibrium where the entropy is highest at the given internal energy. An increase in the combined entropy of system and surroundings accounts for the irreversibility of natural processes, often referred to in the concept of the arrow of time.
Historically, the second law was an empirical finding that was accepted as an axiom of thermodynamic theory. Statistical mechanics provides a microscopic explanation of the law in terms of probability distributions of the states of large assemblies of atoms or molecules. The second law has been expressed in many ways. Its first formulation, which preceded the proper definition of entropy and was based on caloric theory, is Carnot's theorem, formulated by the French scientist Sadi Carnot, who in 1824 showed that the efficiency of conversion of heat to work in a heat engine has an upper limit. The first rigorous definition of the second law based on the concept of entropy came from German scientist Rudolf Clausius in the 1850s and included his statement that heat can never pass from a colder to a warmer body without some other change, connected therewith, occurring at the same time.
The second law of thermodynamics allows the definition of the concept of thermodynamic temperature, but this has been formally delegated to the zeroth law of thermodynamics.
Introduction
The first law of thermodynamics provides the definition of the internal energy of a thermodynamic system, and expresses its change for a closed system in terms of work and heat. It can be linked to the law of conservation of energy. Conceptually, the first law describes the fundamental principle that systems do not consume or 'use up' energy, that energy is neither created nor destroyed, but is simply converted from one form to another.
The second law is concerned with the direction of natural processes. It asserts that a natural process runs only in one sense, and is not reversible. That is, the state of a natural system itself can be reversed, but not without increasing the entropy of the system's surroundings; the state of the system together with the state of its surroundings cannot be fully reversed without implying the destruction of entropy.
For example, when a path for conduction or radiation is made available, heat always flows spontaneously from a hotter to a colder body. Such phenomena are accounted for in terms of entropy change. A heat pump can reverse this heat flow, but the reversal process and the original process both cause entropy production, thereby increasing the entropy of the system's surroundings. If an isolated system containing distinct subsystems is held initially in internal thermodynamic equilibrium by internal partitioning by impermeable walls between the subsystems, and then some operation makes the walls more permeable, then the system spontaneously evolves to reach a final new internal thermodynamic equilibrium, and its total entropy, S, increases.
In a reversible or quasi-static, idealized process of transfer of energy as heat to a closed thermodynamic system of interest, (which allows the entry or exit of energy – but not transfer of matter), from an auxiliary thermodynamic system, an infinitesimal increment in the entropy of the system of interest is defined to result from an infinitesimal transfer of heat to the system of interest, divided by the common thermodynamic temperature of the system of interest and the auxiliary thermodynamic system:
\mathrm{d}S = \frac{\delta Q}{T} \qquad \text{(closed system; idealized, reversible process)}
Different notations are used for an infinitesimal amount of heat and infinitesimal change of entropy because entropy is a function of state, while heat, like work, is not.
For an actually possible infinitesimal process without exchange of mass with the surroundings, the second law requires that the increment in system entropy fulfills the inequality
\mathrm{d}S > \frac{\delta Q}{T_\text{surr}} \qquad \text{(closed system; actually possible, irreversible process)}
This is because a general process for this case (no mass exchange between the system and its surroundings) may include work being done on the system by its surroundings, which can have frictional or viscous effects inside the system, because a chemical reaction may be in progress, or because heat transfer actually occurs only irreversibly, driven by a finite difference between the system temperature and the temperature of the surroundings.
The equality still applies for pure heat flow (only heat flow, no change in chemical composition and mass),
\mathrm{d}S = \frac{\delta Q}{T} ,
which is the basis of the accurate determination of the absolute entropy of pure substances from measured heat capacity curves and entropy changes at phase transitions, i.e. by calorimetry.
Introducing a set of internal variables to describe the deviation of a thermodynamic system from a chemical equilibrium state in physical equilibrium (with the required well-defined uniform pressure P and temperature T), one can record the equality
\mathrm{d}S = \frac{\delta Q}{T} + \frac{1}{T}\sum_j \Xi_j \, \delta \xi_j
The second term represents work of internal variables that can be perturbed by external influences, but the system cannot perform any positive work via internal variables. This statement introduces the impossibility of the reversion of evolution of the thermodynamic system in time and can be considered as a formulation of the second principle of thermodynamics – the formulation, which is, of course, equivalent to the formulation of the principle in terms of entropy.
The zeroth law of thermodynamics in its usual short statement allows recognition that two bodies in a relation of thermal equilibrium have the same temperature, especially that a test body has the same temperature as a reference thermometric body. For a body in thermal equilibrium with another, there are indefinitely many empirical temperature scales, in general respectively depending on the properties of a particular reference thermometric body. The second law allows a distinguished temperature scale, which defines an absolute, thermodynamic temperature, independent of the properties of any particular reference thermometric body.
Various statements of the law
The second law of thermodynamics may be expressed in many specific ways, the most prominent classical statements being the statement by Rudolf Clausius (1854), the statement by Lord Kelvin (1851), and the statement in axiomatic thermodynamics by Constantin Carathéodory (1909). These statements cast the law in general physical terms citing the impossibility of certain processes. The Clausius and the Kelvin statements have been shown to be equivalent.
Carnot's principle
The historical origin of the second law of thermodynamics was in Sadi Carnot's theoretical analysis of the flow of heat in steam engines (1824). The centerpiece of that analysis, now known as a Carnot engine, is an ideal heat engine fictively operated in the limiting mode of extreme slowness known as quasi-static, so that the heat and work transfers are between subsystems that are always in their own internal states of thermodynamic equilibrium. It represents the theoretical maximum efficiency of a heat engine operating between any two given thermal or heat reservoirs at different temperatures. Carnot's principle was recognized by Carnot at a time when the caloric theory represented the dominant understanding of the nature of heat, before the recognition of the first law of thermodynamics, and before the mathematical expression of the concept of entropy. Interpreted in the light of the first law, Carnot's analysis is physically equivalent to the second law of thermodynamics, and remains valid today. Some samples from his book are:
...wherever there exists a difference of temperature, motive power can be produced.
The production of motive power is then due in steam engines not to an actual consumption of caloric, but to its transportation from a warm body to a cold body ...
The motive power of heat is independent of the agents employed to realize it; its quantity is fixed solely by the temperatures of the bodies between which is effected, finally, the transfer of caloric.
In modern terms, Carnot's principle may be stated more precisely:
The efficiency of a quasi-static or reversible Carnot cycle depends only on the temperatures of the two heat reservoirs, and is the same, whatever the working substance. A Carnot engine operated in this way is the most efficient possible heat engine using those two temperatures.
Clausius statement
The German scientist Rudolf Clausius laid the foundation for the second law of thermodynamics in 1850 by examining the relation between heat transfer and work. His formulation of the second law, which was published in German in 1854, is known as the Clausius statement:
Heat can never pass from a colder to a warmer body without some other change, connected therewith, occurring at the same time.
The statement by Clausius uses the concept of 'passage of heat'. As is usual in thermodynamic discussions, this means 'net transfer of energy as heat', and does not refer to contributory transfers one way and the other.
Heat cannot spontaneously flow from cold regions to hot regions without external work being performed on the system, which is evident from ordinary experience of refrigeration, for example. In a refrigerator, heat is transferred from cold to hot, but only when forced by an external agent, the refrigeration system.
Kelvin statements
Lord Kelvin expressed the second law in several wordings.
It is impossible for a self-acting machine, unaided by any external agency, to convey heat from one body to another at a higher temperature.
It is impossible, by means of inanimate material agency, to derive mechanical effect from any portion of matter by cooling it below the temperature of the coldest of the surrounding objects.
Equivalence of the Clausius and the Kelvin statements
Suppose there is an engine violating the Kelvin statement: i.e., one that drains heat and converts it completely into work (the drained heat is fully converted to work) in a cyclic fashion without any other result. Now pair it with a reversed Carnot engine. The efficiency of a normal heat engine is η and so the efficiency of the reversed heat engine is 1/η. The net and sole effect of the combined pair of engines is to transfer heat from the cooler reservoir to the hotter one, which violates the Clausius statement. This is a consequence of the first law of thermodynamics, which requires the total system's energy to remain the same, where (1) the sign convention of heat is used in which heat entering into (leaving from) an engine is positive (negative) and (2) the efficiency of the reversed engine is obtained from the definition of efficiency of the engine when its operation is not reversed. Thus a violation of the Kelvin statement implies a violation of the Clausius statement, i.e. the Clausius statement implies the Kelvin statement. We can prove in a similar manner that the Kelvin statement implies the Clausius statement, and hence the two are equivalent.
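A minimal bookkeeping of the combined device, with symbols chosen here only for illustration (Q is the heat drained by the Kelvin-violating engine, Q_C the heat the reversed Carnot engine draws from the cold reservoir, Q_H the heat it rejects to the hot reservoir):
\begin{aligned}
W &= Q && \text{(Kelvin-violating engine: all drained heat becomes work)}\\
Q_H &= W + Q_C = Q + Q_C && \text{(reversed Carnot engine driven by that work)}\\
Q_H - Q &= Q_C > 0 && \text{(net heat delivered to the hot reservoir per combined cycle)}
\end{aligned}
The sole net effect is the passage of Q_C from the cold to the hot reservoir with no work supplied from outside, which is exactly what the Clausius statement forbids.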
Planck's proposition
Planck offered the following proposition as derived directly from experience. This is sometimes regarded as his statement of the second law, but he regarded it as a starting point for the derivation of the second law.
It is impossible to construct an engine which will work in a complete cycle, and produce no effect except the production of work and cooling of a heat reservoir.
Relation between Kelvin's statement and Planck's proposition
It is almost customary in textbooks to speak of the "Kelvin–Planck statement" of the law, as for example in the text by ter Haar and Wergeland. This version, also known as the heat engine statement, of the second law states that
It is impossible to devise a cyclically operating device, the sole effect of which is to absorb energy in the form of heat from a single thermal reservoir and to deliver an equivalent amount of work.
Planck's statement
Max Planck stated the second law as follows.
Every process occurring in nature proceeds in the sense in which the sum of the entropies of all bodies taking part in the process is increased. In the limit, i.e. for reversible processes, the sum of the entropies remains unchanged.
Rather like Planck's statement is that of George Uhlenbeck and G. W. Ford for irreversible phenomena.
... in an irreversible or spontaneous change from one equilibrium state to another (as for example the equalization of temperature of two bodies A and B, when brought in contact) the entropy always increases.
Principle of Carathéodory
Constantin Carathéodory formulated thermodynamics on a purely mathematical axiomatic foundation. His statement of the second law is known as the Principle of Carathéodory, which may be formulated as follows:
In every neighborhood of any state S of an adiabatically enclosed system there are states inaccessible from S.
With this formulation, he described the concept of adiabatic accessibility for the first time and provided the foundation for a new subfield of classical thermodynamics, often called geometrical thermodynamics. It follows from Carathéodory's principle that the quantity of energy quasi-statically transferred as heat is a holonomic process function, in other words, \delta Q = T\,\mathrm{d}S.
Though it is almost customary in textbooks to say that Carathéodory's principle expresses the second law and to treat it as equivalent to the Clausius or to the Kelvin-Planck statements, such is not the case. To get all the content of the second law, Carathéodory's principle needs to be supplemented by Planck's principle, that isochoric work always increases the internal energy of a closed system that was initially in its own internal thermodynamic equilibrium.
Planck's principle
In 1926, Max Planck wrote an important paper on the basics of thermodynamics. He indicated the principle
The internal energy of a closed system is increased by an adiabatic process, throughout the duration of which, the volume of the system remains constant.
This formulation does not mention heat and does not mention temperature, nor even entropy, and does not necessarily implicitly rely on those concepts, but it implies the content of the second law. A closely related statement is that "Frictional pressure never does positive work." Planck wrote: "The production of heat by friction is irreversible."
Not mentioning entropy, this principle of Planck is stated in physical terms. It is very closely related to the Kelvin statement given just above. It is relevant that for a system at constant volume and mole numbers, the entropy is a monotonic function of the internal energy. Nevertheless, this principle of Planck is not actually Planck's preferred statement of the second law, which is quoted above, in a previous sub-section of the present section of this present article, and relies on the concept of entropy.
A statement that in a sense is complementary to Planck's principle is made by Claus Borgnakke and Richard E. Sonntag. They do not offer it as a full statement of the second law:
... there is only one way in which the entropy of a [closed] system can be decreased, and that is to transfer heat from the system.
Differing from Planck's just foregoing principle, this one is explicitly in terms of entropy change. Removal of matter from a system can also decrease its entropy.
Relating the second law to the definition of temperature
The second law has been shown to be equivalent to the internal energy defined as a convex function of the other extensive properties of the system. That is, when a system is described by stating its internal energy U, an extensive variable, as a function of its entropy S, volume V, and mole number N, i.e. U = U(S, V, N), then the temperature is equal to the partial derivative of the internal energy with respect to the entropy (essentially equivalent to the first equation, with V and N held constant):
T = \left(\frac{\partial U}{\partial S}\right)_{V, N}
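As a symbolic check of this relation, the sketch below differentiates an assumed closed form for the internal energy of a monatomic ideal gas, U(S, V, N) proportional to N^(5/3) V^(-2/3) exp(2S/3Nk), and recovers the familiar U = (3/2)NkT. The prefactor A is a placeholder lumping the physical constants, not a value from the text.

```python
import sympy as sp

S, V, N, k, A = sp.symbols("S V N k A", positive=True)

# Assumed closed form of U(S, V, N) for a monatomic ideal gas (A lumps constants)
U = A * N**sp.Rational(5, 3) * V**sp.Rational(-2, 3) * sp.exp(2*S / (3*N*k))

T = sp.diff(U, S)                      # temperature as (dU/dS) at constant V, N
print(sp.simplify(T - 2*U/(3*N*k)))    # prints 0: equivalent to U = (3/2) N k T
```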
Second law statements, such as the Clausius inequality, involving radiative fluxes
The Clausius inequality, as well as some other statements of the second law, must be re-stated to have general applicability for all forms of heat transfer, i.e. scenarios involving radiative fluxes. For example, the integrand (đQ/T) of the Clausius expression applies to heat conduction and convection, and the case of ideal infinitesimal blackbody radiation (BR) transfer, but does not apply to most radiative transfer scenarios and in some cases has no physical meaning whatsoever. Consequently, the Clausius inequality was re-stated so that it is applicable to cycles with processes involving any form of heat transfer. The entropy transfer with radiative fluxes is taken separately from that due to heat transfer by conduction and convection, where the temperature is evaluated at the system boundary where the heat transfer occurs. The modified Clausius inequality, for all heat transfer scenarios, can then be expressed as,
In a nutshell, the Clausius inequality is saying that when a cycle is completed, the change in the state property S will be zero, so the entropy that was produced during the cycle must have been transferred out of the system by heat transfer. The đ indicates a path-dependent (inexact) differential, so the corresponding integration depends on the process path.
Due to the inherent emission of radiation from all matter, most entropy flux calculations involve incident, reflected and emitted radiative fluxes. The energy and entropy of unpolarized blackbody thermal radiation are calculated using the spectral energy and entropy radiance expressions derived by Max Planck using equilibrium statistical mechanics,
K_\nu = \frac{2h\nu^3}{c^2}\,\frac{1}{\exp\!\left(\frac{h\nu}{kT}\right)-1}, \qquad
L_\nu = \frac{2k\nu^2}{c^2}\left[\left(1+\frac{c^2 K_\nu}{2h\nu^3}\right)\ln\!\left(1+\frac{c^2 K_\nu}{2h\nu^3}\right)-\frac{c^2 K_\nu}{2h\nu^3}\ln\!\left(\frac{c^2 K_\nu}{2h\nu^3}\right)\right],
where c is the speed of light, k is the Boltzmann constant, h is the Planck constant, ν is frequency, and the quantities Kv and Lv are the energy and entropy fluxes per unit frequency, area, and solid angle. In deriving this blackbody spectral entropy radiance, with the goal of deriving the blackbody energy formula, Planck postulated that the energy of a photon was quantized (partly to simplify the mathematics), thereby starting quantum theory.
A non-equilibrium statistical mechanics approach has also been used to obtain the same result as Planck, indicating it has wider significance and represents a non-equilibrium entropy. A plot of Kv versus frequency (v) for various values of temperature (T) gives a family of blackbody radiation energy spectra, and likewise for the entropy spectra. For non-blackbody radiation (NBR) emission fluxes, the spectral entropy radiance Lv is found by substituting Kv spectral energy radiance data into the Lv expression (noting that emitted and reflected entropy fluxes are, in general, not independent). For the emission of NBR, including graybody radiation (GR), the resultant emitted entropy flux, or radiance L, has a higher ratio of entropy-to-energy (L/K), than that of BR. That is, the entropy flux of NBR emission is farther removed from the conduction and convection q/T result, than that for BR emission. This observation is consistent with Max Planck's blackbody radiation energy and entropy formulas and is consistent with the fact that blackbody radiation emission represents the maximum emission of entropy for all materials with the same temperature, as well as the maximum entropy emission for all radiation with the same energy radiance.
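A small numerical sketch of these spectral quantities, using the formulas quoted above (the entropy radiance expression is one standard Planck form; the temperature and frequency are illustrative choices, not data from the cited work):

```python
import math

h = 6.62607015e-34   # Planck constant, J s
k = 1.380649e-23     # Boltzmann constant, J/K
c = 2.99792458e8     # speed of light, m/s

def energy_radiance(nu_hz: float, T_kelvin: float) -> float:
    """Blackbody spectral energy radiance K_nu (W m^-2 Hz^-1 sr^-1)."""
    return (2.0 * h * nu_hz**3 / c**2) / (math.exp(h * nu_hz / (k * T_kelvin)) - 1.0)

def entropy_radiance(nu_hz: float, K_nu: float) -> float:
    """Spectral entropy radiance L_nu for radiation with energy radiance K_nu."""
    n = c**2 * K_nu / (2.0 * h * nu_hz**3)   # mean photon occupation number
    return (2.0 * k * nu_hz**2 / c**2) * ((1.0 + n) * math.log(1.0 + n) - n * math.log(n))

T = 5800.0            # roughly the solar surface temperature, K
nu = 5.0e14           # a visible-light frequency, Hz
K = energy_radiance(nu, T)
L = entropy_radiance(nu, K)
print(f"K_nu = {K:.3e} W m^-2 Hz^-1 sr^-1")
print(f"L_nu = {L:.3e} W m^-2 Hz^-1 sr^-1 K^-1")
# For graybody radiation (K reduced by an emissivity < 1) the ratio L/K computed
# this way comes out larger than for blackbody radiation at the same temperature,
# in line with the discussion above.
```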
Generalized conceptual statement of the second law principle
Second law analysis is valuable in scientific and engineering analysis in that it provides a number of benefits over energy analysis alone, including the basis for determining energy quality (exergy content), understanding fundamental physical phenomena, and improving performance evaluation and optimization. As a result, a conceptual statement of the principle is very useful in engineering analysis. Thermodynamic systems can be categorized by the four combinations of either entropy (S) up or down, and uniformity (Y) – between system and its environment – up or down. Of the four resulting categories, the 'special' one, category IV, is characterized by movement in the direction of low disorder and low uniformity, counteracting the second-law tendency towards uniformity and disorder.
The second law can be conceptually stated as follows: Matter and energy have the tendency to reach a state of uniformity or internal and external equilibrium, a state of maximum disorder (entropy). Real non-equilibrium processes always produce entropy, causing increased disorder in the universe, while idealized reversible processes produce no entropy and no process is known to exist that destroys entropy. The tendency of a system to approach uniformity may be counteracted, and the system may become more ordered or complex, by the combination of two things, a work or exergy source and some form of instruction or intelligence. Where 'exergy' is the thermal, mechanical, electric or chemical work potential of an energy source or flow, and 'instruction or intelligence', although subjective, is in the context of the set of category IV processes.
Consider a category IV example of robotic manufacturing and assembly of vehicles in a factory. The robotic machinery requires electrical work input and instructions, but when completed, the manufactured products have less uniformity with their surroundings, or more complexity (higher order) relative to the raw materials they were made from. Thus, system entropy or disorder decreases while the tendency towards uniformity between the system and its environment is counteracted. In this example, the instructions, as well as the source of work may be internal or external to the system, and they may or may not cross the system boundary. To illustrate, the instructions may be pre-coded and the electrical work may be stored in an energy storage system on-site. Alternatively, the control of the machinery may be by remote operation over a communications network, while the electric work is supplied to the factory from the local electric grid. In addition, humans may directly play, in whole or in part, the role that the robotic machinery plays in manufacturing. In this case, instructions may be involved, but intelligence is either directly responsible, or indirectly responsible, for the direction or application of work in such a way as to counteract the tendency towards disorder and uniformity.
There are also situations where the entropy spontaneously decreases by means of energy and entropy transfer. When thermodynamic constraints are not present, energy or mass, as well as accompanying entropy, may be spontaneously transferred out of a system as it progresses toward external equilibrium or uniformity in intensive properties with its surroundings. This occurs spontaneously because the energy or mass transferred from the system to its surroundings results in a higher entropy in the surroundings, that is, it results in higher overall entropy of the system plus its surroundings. Note that this transfer of entropy requires dis-equilibrium in properties, such as a temperature difference. One example of this is the cooling crystallization of water that can occur when the system's surroundings are below freezing temperatures. Unconstrained heat transfer can spontaneously occur, leading to water molecules freezing into a crystallized structure of reduced disorder (sticking together in a certain order due to molecular attraction). The entropy of the system decreases, but the system approaches uniformity with its surroundings (category III).
On the other hand, consider the refrigeration of water in a warm environment. Due to refrigeration, as heat is extracted from the water, the temperature and entropy of the water decreases, as the system moves further away from uniformity with its warm surroundings or environment (category IV). The main point, take-away, is that refrigeration not only requires a source of work, it requires designed equipment, as well as pre-coded or direct operational intelligence or instructions to achieve the desired refrigeration effect.
Corollaries
Perpetual motion of the second kind
Before the establishment of the second law, many people who were interested in inventing a perpetual motion machine had tried to circumvent the restrictions of the first law of thermodynamics by extracting the massive internal energy of the environment as the power of the machine. Such a machine is called a "perpetual motion machine of the second kind". The second law declared the impossibility of such machines.
Carnot's theorem
Carnot's theorem (1824) is a principle that limits the maximum efficiency of any possible engine. The efficiency depends solely on the temperatures of the hot and cold thermal reservoirs. Carnot's theorem states:
All irreversible heat engines between two heat reservoirs are less efficient than a Carnot engine operating between the same reservoirs.
All reversible heat engines between two heat reservoirs are equally efficient with a Carnot engine operating between the same reservoirs.
In his ideal model, the caloric converted into work could be reinstated by reversing the motion of the cycle, a concept subsequently known as thermodynamic reversibility. Carnot, however, further postulated that some caloric is lost, not being converted to mechanical work. Hence, no real heat engine could realize the Carnot cycle's reversibility and was condemned to be less efficient.
Though formulated in terms of caloric (see the obsolete caloric theory), rather than entropy, this was an early insight into the second law.
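In modern terms the Carnot limit is η = 1 − T_C/T_H with temperatures in kelvin. A small sketch, with reservoir temperatures chosen arbitrarily for illustration:

```python
def carnot_efficiency(t_hot_k: float, t_cold_k: float) -> float:
    """Maximum efficiency of any heat engine between two reservoirs (temperatures in kelvin)."""
    if t_cold_k <= 0 or t_hot_k <= t_cold_k:
        raise ValueError("require t_hot_k > t_cold_k > 0")
    return 1.0 - t_cold_k / t_hot_k

# Illustrative reservoirs: steam at 773 K, cooling water at 300 K
eta_max = carnot_efficiency(773.0, 300.0)
print(f"Carnot limit: {eta_max:.1%}")   # about 61%; any real engine does worse
```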
Clausius inequality
The Clausius theorem (1854) states that in a cyclic process
\oint \frac{\delta Q}{T_\text{surr}} \le 0 .
The equality holds in the reversible case and the strict inequality holds in the irreversible case, with Tsurr as the temperature of the heat bath (surroundings) here. The reversible case is used to introduce the state function entropy. This is because in a cyclic process the variation of a state function is zero, by the very definition of a state function.
Thermodynamic temperature
For an arbitrary heat engine, the efficiency is:
\eta = \frac{W_n}{q_H} = \frac{q_H + q_C}{q_H} = 1 - \frac{|q_C|}{q_H}
where Wn is the net work done by the engine per cycle, qH > 0 is the heat added to the engine from a hot reservoir, and qC = −|qC| < 0 is waste heat given off to a cold reservoir from the engine. Thus the efficiency depends only on the ratio |qC| / |qH|.
Carnot's theorem states that all reversible engines operating between the same heat reservoirs are equally efficient. Thus, any reversible heat engine operating between temperatures TH and TC must have the same efficiency, that is to say, the efficiency is a function of temperatures only:
\frac{|q_C|}{|q_H|} = f(T_H, T_C)
In addition, a reversible heat engine operating between temperatures T1 and T3 must have the same efficiency as one consisting of two cycles, one between T1 and another (intermediate) temperature T2, and the second between T2 and T3, where T1 > T2 > T3. This is because, if a part of the two-cycle engine is hidden such that it is recognized as an engine between the reservoirs at the temperatures T1 and T3, then the efficiency of this engine must be the same as that of the other engine at the same reservoirs. If we choose engines such that the work done by the one-cycle engine and the two-cycle engine is the same, then the efficiency of each heat engine is written as below:
\eta_1 = 1 - \frac{|q_3|}{|q_1|} = 1 - f(T_1, T_3) ,
\eta_2 = 1 - \frac{|q_2|}{|q_1|} = 1 - f(T_1, T_2) ,
\eta_3 = 1 - \frac{|q_3|}{|q_2|} = 1 - f(T_2, T_3) .
Here, engine 1 is the one-cycle engine, and engines 2 and 3 together make the two-cycle engine with the intermediate reservoir at T2. We have also used the fact that the heat q2 passes through the intermediate thermal reservoir at T2 without losing its energy (i.e., q2 is not lost during its passage through the reservoir at T2). This fact can be proved as follows.
In order to have consistency in the last equation, the heat flowing from engine 2 to the intermediate reservoir must be equal to the heat flowing out from the reservoir to engine 3.
Then
f(T_1, T_3) = \frac{|q_3|}{|q_1|} = \frac{|q_2|}{|q_1|}\,\frac{|q_3|}{|q_2|} = f(T_1, T_2)\, f(T_2, T_3) .
Now consider the case where T1 is a fixed reference temperature: the temperature of the triple point of water, defined as 273.16 K. Then for any T2 and T3,
f(T_2, T_3) = \frac{f(T_1, T_3)}{f(T_1, T_2)} = \frac{273.16\ \text{K}\cdot f(T_1, T_3)}{273.16\ \text{K}\cdot f(T_1, T_2)} .
Therefore, if thermodynamic temperature T* is defined by
T^* = 273.16\ \text{K} \cdot f(T_1, T)
then the function f, viewed as a function of thermodynamic temperatures, is simply
f(T_2, T_3) = \frac{T_3^*}{T_2^*} ,
and the reference temperature T1* = 273.16 K × f(T1,T1) = 273.16 K. (Any reference temperature and any positive numerical value could be used; the choice here corresponds to the Kelvin scale.)
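A quick numerical sanity check of the multiplicative property f(T1,T3) = f(T1,T2)·f(T2,T3) under the identification f(TH,TC) = TC/TH derived above; the temperatures are arbitrary illustrative values:

```python
def f(t_hot: float, t_cold: float) -> float:
    """Ratio |q_C|/|q_H| for a reversible engine, which equals T_C/T_H on the Kelvin scale."""
    return t_cold / t_hot

T1, T2, T3 = 600.0, 450.0, 300.0    # arbitrary temperatures in kelvin, T1 > T2 > T3
direct = f(T1, T3)
composed = f(T1, T2) * f(T2, T3)
print(direct, composed)              # both 0.5: one engine or two stacked engines agree
print(f"Carnot efficiency 1 - f = {1 - direct:.2f}")
```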
Entropy
According to the Clausius equality, for a reversible cyclic process
\oint \frac{\delta Q_\text{rev}}{T} = 0 .
That means the line integral is path independent for reversible processes.
So we can define a state function S called entropy, which for a reversible process or for pure heat transfer satisfies
\mathrm{d}S = \frac{\delta Q}{T} .
With this we can only obtain the difference of entropy by integrating the above formula. To obtain the absolute value, we need the third law of thermodynamics, which states that S = 0 at absolute zero for perfect crystals.
For any irreversible process, since entropy is a state function, we can always connect the initial and terminal states with an imaginary reversible process and integrate along that path to calculate the difference in entropy.
Now reverse the reversible process and combine it with the said irreversible process. Applying the Clausius inequality on this loop, with Tsurr as the temperature of the surroundings,
-\Delta S + \int \frac{\delta Q}{T_\text{surr}} \le 0 .
Thus,
\Delta S \ge \int \frac{\delta Q}{T_\text{surr}} ,
where the equality holds if the transformation is reversible. If the process is an adiabatic process, then \delta Q = 0, so \Delta S \ge 0.
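As a worked illustration of the inequality, the sketch below estimates the total entropy change when a hot metal block equilibrates with a large cool reservoir, treating the block's heat capacity as constant; the numbers are assumptions for illustration only.

```python
import math

# Assumed illustrative values
m_c = 500.0        # heat capacity of the block, J/K (mass * specific heat)
T_block = 400.0    # initial block temperature, K
T_res = 300.0      # reservoir temperature, K (assumed unchanged by the transfer)

q_out = m_c * (T_block - T_res)               # heat leaving the block, J
dS_block = m_c * math.log(T_res / T_block)    # integral of dQ/T for the block (negative)
dS_res = q_out / T_res                        # reservoir gains q_out at constant T_res
dS_total = dS_block + dS_res

print(f"Block:     {dS_block:+.1f} J/K")
print(f"Reservoir: {dS_res:+.1f} J/K")
print(f"Total:     {dS_total:+.1f} J/K  (positive, as the second law requires)")
```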
Energy, available useful work
An important and revealing idealized special case is to consider applying the second law to the scenario of an isolated system (called the total system or universe), made up of two parts: a sub-system of interest, and the sub-system's surroundings. These surroundings are imagined to be so large that they can be considered as an unlimited heat reservoir at temperature TR and pressure PR so that no matter how much heat is transferred to (or from) the sub-system, the temperature of the surroundings will remain TR; and no matter how much the volume of the sub-system expands (or contracts), the pressure of the surroundings will remain PR.
Whatever changes to dS and dSR occur in the entropies of the sub-system and the surroundings individually, the entropy Stot of the isolated total system must not decrease according to the second law of thermodynamics:
\mathrm{d}S_\text{tot} = \mathrm{d}S + \mathrm{d}S_R \ge 0
According to the first law of thermodynamics, the change dU in the internal energy of the sub-system is the sum of the heat δq added to the sub-system, minus any work δw done by the sub-system, plus any net chemical energy entering the sub-system Σ μiR dNi, so that:
\mathrm{d}U = \delta q - \delta w + \sum_i \mu_{iR}\,\mathrm{d}N_i ,
where μiR are the chemical potentials of chemical species in the external surroundings.
Now the heat leaving the reservoir and entering the sub-system is
\delta q = T_R\,(-\mathrm{d}S_R) \le T_R\,\mathrm{d}S ,
where we have first used the definition of entropy in classical thermodynamics (alternatively, in statistical thermodynamics, the relation between entropy change, temperature and absorbed heat can be derived); and then the second law inequality from above.
It therefore follows that any net work δw done by the sub-system must obey
\delta w \le -\mathrm{d}U + T_R\,\mathrm{d}S + \sum_i \mu_{iR}\,\mathrm{d}N_i .
It is useful to separate the work δw done by the subsystem into the useful work δwu that can be done by the sub-system, over and beyond the work pR dV done merely by the sub-system expanding against the surrounding external pressure, giving the following relation for the useful work (exergy) that can be done:
\delta w_u \le -\mathrm{d}\!\left(U - T_R S + p_R V - \sum_i \mu_{iR} N_i\right)
It is convenient to define the right-hand-side as the exact derivative of a thermodynamic potential, called the availability or exergy E of the subsystem,
E = U - T_R S + p_R V - \sum_i \mu_{iR} N_i .
The second law therefore implies that for any process which can be considered as divided simply into a subsystem, and an unlimited temperature and pressure reservoir with which it is in contact,
\delta w_u + \mathrm{d}E \le 0 ,
i.e. the change in the subsystem's exergy plus the useful work done by the subsystem (or, the change in the subsystem's exergy less any work, additional to that done by the pressure reservoir, done on the system) must be less than or equal to zero.
In sum, if a proper infinite-reservoir-like reference state is chosen as the system surroundings in the real world, then the second law predicts a decrease in E for an irreversible process and no change for a reversible process.
That is, \mathrm{d}S_\text{tot} \ge 0 is equivalent to \mathrm{d}E + \delta w_u \le 0.
This expression together with the associated reference state permits a design engineer working at the macroscopic scale (above the thermodynamic limit) to utilize the second law without directly measuring or considering entropy change in a total isolated system (see also Process engineer). Those changes have already been considered by the assumption that the system under consideration can reach equilibrium with the reference state without altering the reference state. An efficiency for a process or collection of processes that compares it to the reversible ideal may also be found (see Exergy efficiency.)
This approach to the second law is widely utilized in engineering practice, environmental accounting, systems ecology, and other disciplines.
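A small sketch of this exergy bookkeeping for a closed sub-system exchanging only heat and volume with a fixed-temperature, fixed-pressure environment (no matter exchange, so the chemical-potential terms drop out); all numbers are assumed for illustration:

```python
# Environment (reservoir) properties, assumed fixed
T_R = 298.15        # K
p_R = 101_325.0     # Pa

def exergy(U: float, S: float, V: float) -> float:
    """Availability E = U - T_R*S + p_R*V for a closed sub-system (no matter exchange)."""
    return U - T_R * S + p_R * V

# Assumed initial and final states of the sub-system (J, J/K, m^3)
E1 = exergy(U=5.0e5, S=1.2e3, V=0.10)
E2 = exergy(U=4.6e5, S=1.1e3, V=0.09)

dE = E2 - E1
print(f"Change in availability dE = {dE:.0f} J")
# The second law bounds the useful (non-expansion) work by  w_u <= -dE;
# along an irreversible path the useful work actually obtained is strictly less.
```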
Direction of spontaneous processes
The second law determines whether a proposed physical or chemical process is forbidden or may occur spontaneously. For isolated systems, no energy is provided by the surroundings and the second law requires that the entropy of the system alone must increase: ΔS > 0. Examples of spontaneous physical processes in isolated systems include the following:
1) Heat can be transferred from a region of higher temperature to a lower temperature (but not the reverse).
2) Mechanical energy can be converted to thermal energy (but not the reverse).
3) A solute can move from a region of higher concentration to a region of lower concentration (but not the reverse).
However, for some non-isolated systems which can exchange energy with their surroundings, the surroundings exchange enough heat with the system, or do sufficient work on the system, so that the processes occur in the opposite direction. This is possible provided the total entropy change of the system plus the surroundings is positive as required by the second law: ΔStot = ΔS + ΔSR > 0. For the three examples given above:
1) Heat can be transferred from a region of lower temperature to a higher temperature in a refrigerator or in a heat pump. These machines must provide sufficient work to the system.
2) Thermal energy can be converted to mechanical work in a heat engine, if sufficient heat is also expelled to the surroundings.
3) A solute can move from a region of lower concentration to a region of higher concentration in the biochemical process of active transport, if sufficient work is provided by a concentration gradient of a chemical such as ATP or by an electrochemical gradient.
Second law in chemical thermodynamics
For a spontaneous chemical process in a closed system at constant temperature and pressure without non-PV work, the Clausius inequality ΔS > Q/Tsurr transforms into a condition for the change in Gibbs free energy
\Delta G < 0 ,
or dG < 0. For a similar process at constant temperature and volume, the change in Helmholtz free energy must be negative, \Delta A < 0. Thus, a negative value of the change in free energy (G or A) is a necessary condition for a process to be spontaneous. This is the most useful form of the second law of thermodynamics in chemistry, where free-energy changes can be calculated from tabulated enthalpies of formation and standard molar entropies of reactants and products. The chemical equilibrium condition at constant T and p without electrical work is dG = 0.
History
The first theory of the conversion of heat into mechanical work is due to Nicolas Léonard Sadi Carnot in 1824. He was the first to realize correctly that the efficiency of this conversion depends on the difference of temperature between an engine and its surroundings.
Recognizing the significance of James Prescott Joule's work on the conservation of energy, Rudolf Clausius was the first to formulate the second law during 1850, in this form: heat does not flow spontaneously from cold to hot bodies. While common knowledge now, this was contrary to the caloric theory of heat popular at the time, which considered heat as a fluid. From there he was able to infer the principle of Sadi Carnot and the definition of entropy (1865).
Established during the 19th century, the Kelvin-Planck statement of the second law says, "It is impossible for any device that operates on a cycle to receive heat from a single reservoir and produce a net amount of work." This statement was shown to be equivalent to the statement of Clausius.
The ergodic hypothesis is also important for the Boltzmann approach. It says that, over long periods of time, the time spent in some region of the phase space of microstates with the same energy is proportional to the volume of this region, i.e. that all accessible microstates are equally probable over a long period of time. Equivalently, it says that time average and average over the statistical ensemble are the same.
There is a traditional doctrine, starting with Clausius, that entropy can be understood in terms of molecular 'disorder' within a macroscopic system. This doctrine is obsolescent.
Account given by Clausius
In 1865, the German physicist Rudolf Clausius stated what he called the "second fundamental theorem in the mechanical theory of heat" in the following form:
\oint \frac{\delta Q}{T} = -N
where Q is heat, T is temperature and N is the "equivalence-value" of all uncompensated transformations involved in a cyclical process. Later, in 1865, Clausius would come to define "equivalence-value" as entropy. On the heels of this definition, that same year, the most famous version of the second law was read in a presentation at the Philosophical Society of Zurich on April 24, at the end of which Clausius concludes:
The entropy of the universe tends to a maximum.
This statement is the best-known phrasing of the second law. Because of the looseness of its language, e.g. universe, as well as lack of specific conditions, e.g. open, closed, or isolated, many people take this simple statement to mean that the second law of thermodynamics applies virtually to every subject imaginable. This is not true; this statement is only a simplified version of a more extended and precise description.
In terms of time variation, the mathematical statement of the second law for an isolated system undergoing an arbitrary transformation is:
\frac{\mathrm{d}S}{\mathrm{d}t} \ge 0
where
S is the entropy of the system and
t is time.
The equality sign applies after equilibration. An alternative way of formulating the second law for isolated systems is:
\frac{\mathrm{d}S}{\mathrm{d}t} = \dot S_i
with
\dot S_i \ge 0 ,
with \dot S_i the sum of the rates of entropy production by all processes inside the system. The advantage of this formulation is that it shows the effect of the entropy production. The rate of entropy production is a very important concept since it determines (limits) the efficiency of thermal machines. Multiplied with ambient temperature T_a it gives the so-called dissipated energy P_\text{diss} = T_a \dot S_i.
The expression of the second law for closed systems (so, allowing heat exchange and moving boundaries, but not exchange of matter) is:
\frac{\mathrm{d}S}{\mathrm{d}t} = \frac{\dot Q}{T} + \dot S_i
with
\dot S_i \ge 0
Here,
\dot Q is the heat flow into the system
T is the temperature at the point where the heat enters the system.
The equality sign holds in the case that only reversible processes take place inside the system. If irreversible processes take place (which is the case in real systems in operation) the >-sign holds. If heat is supplied to the system at several places we have to take the algebraic sum of the corresponding terms.
For open systems (also allowing exchange of matter):
\frac{\mathrm{d}S}{\mathrm{d}t} = \frac{\dot Q}{T} + \dot S + \dot S_i
with
\dot S_i \ge 0
Here, \dot S is the flow of entropy into the system associated with the flow of matter entering the system. It should not be confused with the time derivative of the entropy. If matter is supplied at several places we have to take the algebraic sum of these contributions.
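A minimal sketch of the closed-system balance above for steady heat conduction through a wall between two reservoirs (assumed values; the wall is the 'system', and in steady state dS/dt = 0, so all entropy produced is exported):

```python
# Assumed steady conduction through a wall between two reservoirs
Q_dot = 100.0     # W, heat flow through the wall
T_hot = 500.0     # K, temperature where heat enters the wall
T_cold = 300.0    # K, temperature where heat leaves the wall

# Entropy flows at the two boundaries (in at T_hot, out at T_cold)
S_in = Q_dot / T_hot
S_out = Q_dot / T_cold

# Steady state: dS/dt = 0 = S_in - S_out + S_i  =>  S_i = S_out - S_in
S_i = S_out - S_in
print(f"Entropy production rate S_i = {S_i:.3f} W/K (> 0, irreversible conduction)")
print(f"Dissipated power at an ambient temperature of 300 K: {300.0 * S_i:.1f} W")
```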
Statistical mechanics
Statistical mechanics gives an explanation for the second law by postulating that a material is composed of atoms and molecules which are in constant motion. A particular set of positions and velocities for each particle in the system is called a microstate of the system and because of the constant motion, the system is constantly changing its microstate. Statistical mechanics postulates that, in equilibrium, each microstate that the system might be in is equally likely to occur, and when this assumption is made, it leads directly to the conclusion that the second law must hold in a statistical sense. That is, the second law will hold on average, with a statistical variation on the order of 1/√N where N is the number of particles in the system. For everyday (macroscopic) situations, the probability that the second law will be violated is practically zero. However, for systems with a small number of particles, thermodynamic parameters, including the entropy, may show significant statistical deviations from that predicted by the second law. Classical thermodynamic theory does not deal with these statistical variations.
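A toy simulation of how relative fluctuations shrink with particle number: here, the fraction of non-interacting particles found in the left half of a box, whose spread scales like 1/√N. It is purely illustrative and not tied to any particular system in the text.

```python
import random
import statistics

def left_fraction(n_particles: int) -> float:
    """Fraction of particles that happen to be in the left half of a box."""
    return sum(random.random() < 0.5 for _ in range(n_particles)) / n_particles

random.seed(0)
for n in (100, 10_000, 1_000_000):
    samples = [left_fraction(n) for _ in range(10)]
    spread = statistics.stdev(samples)
    expected = 0.5 / n**0.5
    print(f"N = {n:>9,d}: observed spread ~ {spread:.5f}, expected 1/(2*sqrt(N)) = {expected:.5f}")
```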
Derivation from statistical mechanics
The first mechanical argument of the Kinetic theory of gases that molecular collisions entail an equalization of temperatures and hence a tendency towards equilibrium was due to James Clerk Maxwell in 1860; Ludwig Boltzmann with his H-theorem of 1872 also argued that due to collisions gases should over time tend toward the Maxwell–Boltzmann distribution.
Due to Loschmidt's paradox, derivations of the second law have to make an assumption regarding the past, namely that the system is uncorrelated at some time in the past; this allows for simple probabilistic treatment. This assumption is usually thought as a boundary condition, and thus the second law is ultimately a consequence of the initial conditions somewhere in the past, probably at the beginning of the universe (the Big Bang), though other scenarios have also been suggested.
Given these assumptions, in statistical mechanics, the second law is not a postulate, rather it is a consequence of the fundamental postulate, also known as the equal prior probability postulate, so long as one is clear that simple probability arguments are applied only to the future, while for the past there are auxiliary sources of information which tell us that it was low entropy. The first part of the second law, which states that the entropy of a thermally isolated system can only increase, is a trivial consequence of the equal prior probability postulate, if we restrict the notion of the entropy to systems in thermal equilibrium. The entropy of an isolated system in thermal equilibrium containing an amount of energy E is:
S = k_B \ln\left[\Omega(E)\right]
where \Omega(E) is the number of quantum states in a small interval between E and E + \delta E. Here \delta E is a macroscopically small energy interval that is kept fixed. Strictly speaking this means that the entropy depends on the choice of \delta E. However, in the thermodynamic limit (i.e. in the limit of infinitely large system size), the specific entropy (entropy per unit volume or per unit mass) does not depend on \delta E.
Suppose we have an isolated system whose macroscopic state is specified by a number of variables. These macroscopic variables can, e.g., refer to the total volume, the positions of pistons in the system, etc. Then \Omega(E) will depend on the values of these variables. If a variable is not fixed (e.g. we do not clamp a piston in a certain position), then, because all the accessible states are equally likely in equilibrium, the free variable in equilibrium will be such that \Omega(E) is maximized at the given energy of the isolated system, as that is the most probable situation in equilibrium.
If the variable was initially fixed to some value, then upon release, when the new equilibrium has been reached, the fact that the variable will adjust itself so that \Omega(E) is maximized implies that the entropy will have increased or stayed the same (if the value at which the variable was fixed happened to be the equilibrium value).
Suppose we start from an equilibrium situation and we suddenly remove a constraint on a variable. Then right after we do this, there are a number \Omega of accessible microstates, but equilibrium has not yet been reached, so the actual probabilities of the system being in some accessible state are not yet equal to the prior probability of 1/\Omega. We have already seen that in the final equilibrium state, the entropy will have increased or have stayed the same relative to the previous equilibrium state. Boltzmann's H-theorem, however, proves that the quantity increases monotonically as a function of time during the intermediate out-of-equilibrium state.
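A tiny counting illustration of S = k ln Ω and of entropy increase on removing a constraint, using N two-state particles (coins constrained to show exactly N/2 heads versus free to show any number); this is a textbook toy model, not a system discussed in the text above.

```python
import math

k_B = 1.380649e-23  # J/K

def entropy_from_count(omega: int) -> float:
    """S = k_B ln(Omega) for a given number of accessible microstates."""
    return k_B * math.log(omega)

N = 100  # two-state particles (coins)

# Constrained: exactly N/2 "heads" -> Omega = C(N, N/2)
omega_constrained = math.comb(N, N // 2)
# Constraint removed: any of the 2^N configurations is accessible
omega_free = 2 ** N

S1 = entropy_from_count(omega_constrained)
S2 = entropy_from_count(omega_free)
print(f"Omega constrained = {omega_constrained:.3e}, S = {S1:.3e} J/K")
print(f"Omega free        = {omega_free:.3e}, S = {S2:.3e} J/K (larger, as expected)")
```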
Derivation of the entropy change for reversible processes
The second part of the second law states that the entropy change of a system undergoing a reversible process is given by:
\mathrm{d}S = \frac{\delta Q}{T} ,
where the temperature is defined as:
\frac{1}{k_B T} \equiv \beta \equiv \frac{\mathrm{d}\ln\left[\Omega(E)\right]}{\mathrm{d}E} .
See Microcanonical ensemble for the justification for this definition. Suppose that the system has some external parameter, x, that can be changed. In general, the energy eigenstates of the system will depend on x. According to the adiabatic theorem of quantum mechanics, in the limit of an infinitely slow change of the system's Hamiltonian, the system will stay in the same energy eigenstate and thus change its energy according to the change in energy of the energy eigenstate it is in.
The generalized force, X, corresponding to the external variable x is defined such that X\,\mathrm{d}x is the work performed by the system if x is increased by an amount dx. For example, if x is the volume, then X is the pressure. The generalized force for a system known to be in energy eigenstate E_r is given by:
X = -\frac{\mathrm{d}E_r}{\mathrm{d}x}
Since the system can be in any energy eigenstate within an interval of \delta E, we define the generalized force for the system as the expectation value of the above expression:
X = -\left\langle \frac{\mathrm{d}E_r}{\mathrm{d}x} \right\rangle
To evaluate the average, we partition the energy eigenstates by counting how many of them have a value for \frac{\mathrm{d}E_r}{\mathrm{d}x} within a range between Y and Y + \delta Y. Calling this number \Omega_Y(E), we have:
\Omega(E) = \sum_Y \Omega_Y(E)
The average defining the generalized force can now be written:
X = -\frac{1}{\Omega(E)} \sum_Y Y\, \Omega_Y(E)
We can relate this to the derivative of the entropy with respect to x at constant energy E as follows. Suppose we change x to x + dx. Then \Omega(E) will change because the energy eigenstates depend on x, causing energy eigenstates to move into or out of the range between E and E + \delta E. Let's focus again on the energy eigenstates for which \frac{\mathrm{d}E_r}{\mathrm{d}x} lies within the range between Y and Y + \delta Y. Since these energy eigenstates increase in energy by Y dx, all such energy eigenstates that are in the interval ranging from E − Y dx to E move from below E to above E. There are
N_Y(E) = \frac{\Omega_Y(E)}{\delta E}\, Y\, \mathrm{d}x
such energy eigenstates. If Y\,\mathrm{d}x \le \delta E, all these energy eigenstates will move into the range between E and E + \delta E and contribute to an increase in \Omega. The number of energy eigenstates that move from below E + \delta E to above E + \delta E is given by N_Y(E + \delta E). The difference
N_Y(E) - N_Y(E + \delta E)
is thus the net contribution to the increase in \Omega. If Y\,\mathrm{d}x is larger than \delta E there will be energy eigenstates that move from below E to above E + \delta E. They are counted in both N_Y(E) and N_Y(E + \delta E), therefore the above expression is also valid in that case.
Expressing the above expression as a derivative with respect to E and summing over Y yields the expression:
\left(\frac{\partial \Omega}{\partial x}\right)_E = -\sum_Y Y \frac{\partial \Omega_Y}{\partial E} = \frac{\partial (\Omega X)}{\partial E}
The logarithmic derivative of \Omega with respect to x is thus given by:
\left(\frac{\partial \ln \Omega}{\partial x}\right)_E = \beta X + \frac{\partial X}{\partial E}
The first term is intensive, i.e. it does not scale with system size. In contrast, the last term scales as the inverse system size and will thus vanish in the thermodynamic limit. We have thus found that:
\left(\frac{\partial S}{\partial x}\right)_E = \frac{X}{T}
Combining this with
\left(\frac{\partial S}{\partial E}\right)_x = \frac{1}{T}
gives:
\mathrm{d}S = \left(\frac{\partial S}{\partial E}\right)_x \mathrm{d}E + \left(\frac{\partial S}{\partial x}\right)_E \mathrm{d}x = \frac{\mathrm{d}E}{T} + \frac{X}{T}\,\mathrm{d}x = \frac{\delta Q}{T}
Derivation for systems described by the canonical ensemble
If a system is in thermal contact with a heat bath at some temperature T then, in equilibrium, the probability distribution over the energy eigenvalues is given by the canonical ensemble:
P_j = \frac{\exp\!\left(-\dfrac{E_j}{k_B T}\right)}{Z}
Here Z is a factor that normalizes the sum of all the probabilities to 1; this function is known as the partition function. We now consider an infinitesimal reversible change in the temperature and in the external parameters on which the energy levels depend. It follows from the general formula for the entropy:
S = -k_B \sum_j P_j \ln P_j
that
\mathrm{d}S = -k_B \sum_j \ln\!\left(P_j\right) \mathrm{d}P_j
Inserting the formula for P_j for the canonical ensemble in here gives:
\mathrm{d}S = \frac{1}{T} \sum_j E_j\, \mathrm{d}P_j = \frac{\delta Q}{T}
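A numerical sketch of this derivation for a toy two-level system: compute the canonical probabilities and the Gibbs entropy at two nearby temperatures and check that the entropy change matches δQ/T. The level spacing and temperatures are arbitrary illustrative choices.

```python
import math

k_B = 1.0            # work in units where k_B = 1
levels = [0.0, 1.0]  # energies of a toy two-level system (arbitrary units)

def canonical(T: float):
    """Return (probabilities, mean energy, Gibbs entropy) at temperature T."""
    weights = [math.exp(-e / (k_B * T)) for e in levels]
    Z = sum(weights)
    p = [w / Z for w in weights]
    U = sum(pi * e for pi, e in zip(p, levels))
    S = -k_B * sum(pi * math.log(pi) for pi in p)
    return p, U, S

T1, T2 = 1.000, 1.001                 # a small temperature step
(_, U1, S1), (_, U2, S2) = canonical(T1), canonical(T2)

dQ = U2 - U1                          # heat absorbed (levels fixed, so no work done)
dS = S2 - S1
print(f"dS     = {dS:.6e}")
print(f"dQ / T = {dQ / ((T1 + T2) / 2):.6e}   (agrees to leading order)")
```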
Initial conditions at the Big Bang
As elaborated above, it is thought that the second law of thermodynamics is a result of the very low-entropy initial conditions at the Big Bang. From a statistical point of view, these were very special conditions. On the other hand, they were quite simple, as the universe - or at least the part thereof from which the observable universe developed - seems to have been extremely uniform.
This may seem somewhat paradoxical, since in many physical systems uniform conditions (e.g. mixed rather than separated gases) have high entropy. The paradox is solved once one realizes that gravitational systems have negative heat capacity, so that when gravity is important, uniform conditions (e.g. gas of uniform density) in fact have lower entropy compared to non-uniform ones (e.g. black holes in empty space). Yet another approach is that the universe had high (or even maximal) entropy given its size, but as the universe grew it rapidly came out of thermodynamic equilibrium; its entropy increased only slightly compared to the increase in the maximal possible entropy, and thus it arrived at a very low entropy when compared with the much larger possible maximum given its later size.
As for the reason why initial conditions were such, one suggestion is that cosmological inflation was enough to wipe off non-smoothness, while another is that the universe was created spontaneously where the mechanism of creation implies low-entropy initial conditions.
Living organisms
There are two principal ways of formulating thermodynamics, (a) through passages from one state of thermodynamic equilibrium to another, and (b) through cyclic processes, by which the system is left unchanged, while the total entropy of the surroundings is increased. These two ways help to understand the processes of life. The thermodynamics of living organisms has been considered by many authors, including Erwin Schrödinger (in his book What is Life?) and Léon Brillouin.
To a fair approximation, living organisms may be considered as examples of (b). Approximately, an animal's physical state cycles by the day, leaving the animal nearly unchanged. Animals take in food, water, and oxygen, and, as a result of metabolism, give out breakdown products and heat. Plants take in radiative energy from the sun, which may be regarded as heat, and carbon dioxide and water. They give out oxygen. In this way they grow. Eventually they die, and their remains rot away, turning mostly back into carbon dioxide and water. This can be regarded as a cyclic process. Overall, the sunlight is from a high temperature source, the sun, and its energy is passed to a lower temperature sink, i.e. radiated into space. This is an increase of entropy of the surroundings of the plant. Thus animals and plants obey the second law of thermodynamics, considered in terms of cyclic processes.
Furthermore, the ability of living organisms to grow and increase in complexity, as well as to form correlations with their environment in the form of adaption and memory, is not opposed to the second law – rather, it is akin to general results following from it: Under some definitions, an increase in entropy also results in an increase in complexity, and for a finite system interacting with finite reservoirs, an increase in entropy is equivalent to an increase in correlations between the system and the reservoirs.
Living organisms may be considered as open systems, because matter passes into and out from them. Thermodynamics of open systems is currently often considered in terms of passages from one state of thermodynamic equilibrium to another, or in terms of flows in the approximation of local thermodynamic equilibrium. The problem for living organisms may be further simplified by the approximation of assuming a steady state with unchanging flows. General principles of entropy production for such approximations are a subject of ongoing research.
Gravitational systems
Commonly, systems for which gravity is not important have a positive heat capacity, meaning that their temperature rises with their internal energy. Therefore, when energy flows from a high-temperature object to a low-temperature object, the source temperature decreases while the sink temperature is increased; hence temperature differences tend to diminish over time.
This is not always the case for systems in which the gravitational force is important: systems that are bound by their own gravity, such as stars, can have negative heat capacities. As they contract, both their total energy and their entropy decrease but their internal temperature may increase. This can be significant for protostars and even gas giant planets such as Jupiter. When the entropy of the black-body radiation emitted by the bodies is included, however, the total entropy of the system can be shown to increase even as the entropy of the planet or star decreases.
Non-equilibrium states
The theory of classical or equilibrium thermodynamics is idealized. A main postulate or assumption, often not even explicitly stated, is the existence of systems in their own internal states of thermodynamic equilibrium. In general, a region of space containing a physical system at a given time, as it may be found in nature, is not in thermodynamic equilibrium when read in the most stringent terms. In looser terms, nothing in the entire universe is or has ever been truly in exact thermodynamic equilibrium.
For purposes of physical analysis, it is often convenient to make an assumption of thermodynamic equilibrium. Such an assumption may rely on trial and error for its justification. If the assumption is justified, it can often be very valuable and useful because it makes available the theory of thermodynamics. Elements of the equilibrium assumption are that a system is observed to be unchanging over an indefinitely long time, and that there are so many particles in a system that its particulate nature can be entirely ignored. Under such an equilibrium assumption, in general, there are no macroscopically detectable fluctuations. There is an exception, the case of critical states, which exhibit to the naked eye the phenomenon of critical opalescence. For laboratory studies of critical states, exceptionally long observation times are needed.
In all cases, the assumption of thermodynamic equilibrium, once made, implies as a consequence that no putative candidate "fluctuation" alters the entropy of the system.
It can easily happen that a physical system exhibits internal macroscopic changes that are fast enough to invalidate the assumption of the constancy of the entropy. Or that a physical system has so few particles that the particulate nature is manifest in observable fluctuations. Then the assumption of thermodynamic equilibrium is to be abandoned. There is no unqualified general definition of entropy for non-equilibrium states.
There are intermediate cases, in which the assumption of local thermodynamic equilibrium is a very good approximation, but strictly speaking it is still an approximation, not theoretically ideal.
For non-equilibrium situations in general, it may be useful to consider statistical mechanical definitions of other quantities that may be conveniently called 'entropy', but they should not be confused or conflated with thermodynamic entropy properly defined for the second law. These other quantities indeed belong to statistical mechanics, not to thermodynamics, the primary realm of the second law.
The physics of macroscopically observable fluctuations is beyond the scope of this article.
Arrow of time
The second law of thermodynamics is a physical law that is not symmetric to reversal of the time direction. This does not conflict with symmetries observed in the fundamental laws of physics (particularly CPT symmetry) since the second law applies statistically on time-asymmetric boundary conditions. The second law has been related to the difference between moving forwards and backwards in time, or to the principle that cause precedes effect (the causal arrow of time, or causality).
Irreversibility
Irreversibility in thermodynamic processes is a consequence of the asymmetric character of thermodynamic operations, and not of any internally irreversible microscopic properties of the bodies. Thermodynamic operations are macroscopic external interventions imposed on the participating bodies, not derived from their internal properties. There are reputed "paradoxes" that arise from failure to recognize this.
Loschmidt's paradox
Loschmidt's paradox, also known as the reversibility paradox, is the objection that it should not be possible to deduce an irreversible process from the time-symmetric dynamics that describe the microscopic evolution of a macroscopic system.
In the opinion of Schrödinger, "It is now quite obvious in what manner you have to reformulate the law of entropy – or for that matter, all other irreversible statements – so that they be capable of being derived from reversible models. You must not speak of one isolated system but at least of two, which you may for the moment consider isolated from the rest of the world, but not always from each other." The two systems are isolated from each other by the wall, until it is removed by the thermodynamic operation, as envisaged by the law. The thermodynamic operation is externally imposed, not subject to the reversible microscopic dynamical laws that govern the constituents of the systems. It is the cause of the irreversibility. The statement of the law in this present article complies with Schrödinger's advice. The cause–effect relation is logically prior to the second law, not derived from it.
Poincaré recurrence theorem
The Poincaré recurrence theorem considers a theoretical microscopic description of an isolated physical system. This may be considered as a model of a thermodynamic system after a thermodynamic operation has removed an internal wall. The system will, after a sufficiently long time, return to a microscopically defined state very close to the initial one. The Poincaré recurrence time is the length of time elapsed until the return. It is exceedingly long, likely longer than the life of the universe, and depends sensitively on the geometry of the wall that was removed by the thermodynamic operation. The recurrence theorem may be perceived as apparently contradicting the second law of thermodynamics. More obviously, however, it is simply a microscopic model of thermodynamic equilibrium in an isolated system formed by removal of a wall between two systems. For a typical thermodynamical system, the recurrence time is so large (many many times longer than the lifetime of the universe) that, for all practical purposes, one cannot observe the recurrence. One might wish, nevertheless, to imagine that one could wait for the Poincaré recurrence, and then re-insert the wall that was removed by the thermodynamic operation. It is then evident that the appearance of irreversibility is due to the utter unpredictability of the Poincaré recurrence given only that the initial state was one of thermodynamic equilibrium, as is the case in macroscopic thermodynamics. Even if one could wait for it, one has no practical possibility of picking the right instant at which to re-insert the wall. The Poincaré recurrence theorem provides a solution to Loschmidt's paradox. If an isolated thermodynamic system could be monitored over increasingly many multiples of the average Poincaré recurrence time, the thermodynamic behavior of the system would become invariant under time reversal.
Maxwell's demon
James Clerk Maxwell imagined one container divided into two parts, A and B. Both parts are filled with the same gas at equal temperatures and placed next to each other, separated by a wall. Observing the molecules on both sides, an imaginary demon guards a microscopic trapdoor in the wall. When a faster-than-average molecule from A flies towards the trapdoor, the demon opens it, and the molecule will fly from A to B. The average speed of the molecules in B will have increased while in A they will have slowed down on average. Since average molecular speed corresponds to temperature, the temperature decreases in A and increases in B, contrary to the second law of thermodynamics.
One response to this question was suggested in 1929 by Leó Szilárd and later by Léon Brillouin. Szilárd pointed out that a real-life Maxwell's demon would need to have some means of measuring molecular speed, and that the act of acquiring information would require an expenditure of energy. Likewise, Brillouin demonstrated that the decrease in entropy caused by the demon would be less than the entropy produced by choosing molecules based on their speed.
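A rough numerical illustration of the scale involved in Szilárd's argument (a sketch with an assumed room temperature, not a calculation taken from Szilárd or Brillouin): one bit of information about a molecule lets the demon extract at most on the order of k_B·T·ln 2 of work, so acquiring or erasing that bit must be paid for with at least as much dissipation, which is what preserves the second law.

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0            # assumed room temperature, K

work_per_bit = k_B * T * math.log(2)
print(f"k_B * T * ln 2 at {T} K is about {work_per_bit:.2e} J per bit")  # ~2.9e-21 J
```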
Maxwell's 'demon' repeatedly alters the permeability of the wall between A and B. It is therefore performing thermodynamic operations on a microscopic scale, not just observing ordinary spontaneous or natural macroscopic thermodynamic processes.
Quotations
See also
Zeroth law of thermodynamics
First law of thermodynamics
Third law of thermodynamics
Clausius–Duhem inequality
Fluctuation theorem
Heat death of the universe
History of thermodynamics
Jarzynski equality
Laws of thermodynamics
Maximum entropy thermodynamics
Quantum thermodynamics
Reflections on the Motive Power of Fire
Relativistic heat conduction
Thermal diode
Thermodynamic equilibrium
References
Sources
Atkins, P.W., de Paula, J. (2006). Atkins' Physical Chemistry, eighth edition, W.H. Freeman, New York, .
Attard, P. (2012). Non-equilibrium Thermodynamics and Statistical Mechanics: Foundations and Applications, Oxford University Press, Oxford UK, .
Baierlein, R. (1999). Thermal Physics, Cambridge University Press, Cambridge UK, .
Bailyn, M. (1994). A Survey of Thermodynamics, American Institute of Physics, New York, .
Boltzmann, L. (1896/1964). Lectures on Gas Theory, translated by S.G. Brush, University of California Press, Berkeley.
Borgnakke, C., Sonntag., R.E. (2009). Fundamentals of Thermodynamics, seventh edition, Wiley, .
Buchdahl, H.A. (1966). The Concepts of Classical Thermodynamics, Cambridge University Press, Cambridge UK.
Bridgman, P.W. (1943). The Nature of Thermodynamics, Harvard University Press, Cambridge MA.
Callen, H.B. (1960/1985). Thermodynamics and an Introduction to Thermostatistics, (1st edition 1960) 2nd edition 1985, Wiley, New York, .
A mostly reliable translation is to be found at Kestin, J. (1976). The Second Law of Thermodynamics, Dowden, Hutchinson & Ross, Stroudsburg PA.
Carnot, S. (1824/1986). Reflections on the motive power of fire, Manchester University Press, Manchester UK, . Also here.
Chapman, S., Cowling, T.G. (1939/1970). The Mathematical Theory of Non-uniform gases. An Account of the Kinetic Theory of Viscosity, Thermal Conduction and Diffusion in Gases, third edition 1970, Cambridge University Press, London.
Denbigh, K. (1954/1981). The Principles of Chemical Equilibrium. With Applications in Chemistry and Chemical Engineering, fourth edition, Cambridge University Press, Cambridge UK, .
Eu, B.C. (2002). Generalized Thermodynamics. The Thermodynamics of Irreversible Processes and Generalized Hydrodynamics, Kluwer Academic Publishers, Dordrecht, .
Gibbs, J.W. (1876/1878). On the equilibrium of heterogeneous substances, Trans. Conn. Acad., 3: 108–248, 343–524, reprinted in The Collected Works of J. Willard Gibbs, Ph.D, LL. D., edited by W.R. Longley, R.G. Van Name, Longmans, Green & Co., New York, 1928, volume 1, pp. 55–353.
Griem, H.R. (2005). Principles of Plasma Spectroscopy (Cambridge Monographs on Plasma Physics), Cambridge University Press, New York .
Glansdorff, P., Prigogine, I. (1971). Thermodynamic Theory of Structure, Stability, and Fluctuations, Wiley-Interscience, London, 1971, .
Greven, A., Keller, G., Warnecke (editors) (2003). Entropy, Princeton University Press, Princeton NJ, .
Guggenheim, E.A. (1949). 'Statistical basis of thermodynamics', Research, 2: 450–454.
Guggenheim, E.A. (1967). Thermodynamics. An Advanced Treatment for Chemists and Physicists, fifth revised edition, North Holland, Amsterdam.
Gyarmati, I. (1967/1970) Non-equilibrium Thermodynamics. Field Theory and Variational Principles, translated by E. Gyarmati and W.F. Heinz, Springer, New York.
Kittel, C., Kroemer, H. (1969/1980). Thermal Physics, second edition, Freeman, San Francisco CA, .
Kondepudi, D., Prigogine, I. (1998). Modern Thermodynamics: From Heat Engines to Dissipative Structures, John Wiley & Sons, Chichester, .
Lebon, G., Jou, D., Casas-Vázquez, J. (2008). Understanding Non-equilibrium Thermodynamics: Foundations, Applications, Frontiers, Springer-Verlag, Berlin, .
Lieb, E.H., Yngvason, J. (2003). The Entropy of Classical Thermodynamics, pp. 147–195, Chapter 8 of Entropy, Greven, A., Keller, G., Warnecke (editors) (2003).
Müller, I. (1985). Thermodynamics, Pitman, London, .
Müller, I. (2003). Entropy in Nonequilibrium, pp. 79–109, Chapter 5 of Entropy, Greven, A., Keller, G., Warnecke (editors) (2003).
Münster, A. (1970), Classical Thermodynamics, translated by E.S. Halberstadt, Wiley–Interscience, London, .
Pippard, A.B. (1957/1966). Elements of Classical Thermodynamics for Advanced Students of Physics, original publication 1957, reprint 1966, Cambridge University Press, Cambridge UK.
Planck, M. (1897/1903). Treatise on Thermodynamics, translated by A. Ogg, Longmans Green, London, p. 100.
Planck. M. (1914). The Theory of Heat Radiation, a translation by Masius, M. of the second German edition, P. Blakiston's Son & Co., Philadelphia.
Planck, M. (1926). Über die Begründung des zweiten Hauptsatzes der Thermodynamik, Sitzungsberichte der Preussischen Akademie der Wissenschaften: Physikalisch-mathematische Klasse: 453–463.
Pokrovskii V.N. (2005) Extended thermodynamics in a discrete-system approach, Eur. J. Phys. vol. 26, 769–781.
Quinn, T.J. (1983). Temperature, Academic Press, London, .
Roberts, J.K., Miller, A.R. (1928/1960). Heat and Thermodynamics, (first edition 1928), fifth edition, Blackie & Son Limited, Glasgow.
Schrödinger, E. (1950). Irreversibility, Proc. R. Ir. Acad., A53: 189–195.
ter Haar, D., Wergeland, H. (1966). Elements of Thermodynamics, Addison-Wesley Publishing, Reading MA.
Thomson, W. (1852). On the universal tendency in nature to the dissipation of mechanical energy Philosophical Magazine, Ser. 4, p. 304.
Tisza, L. (1966). Generalized Thermodynamics, M.I.T Press, Cambridge MA.
Truesdell, C. (1980). The Tragicomical History of Thermodynamics 1822–1854, Springer, New York, .
Uffink, J. (2001). Bluff your way in the second law of thermodynamics, Stud. Hist. Phil. Mod. Phys., 32(3): 305–394.
Uffink, J. (2003). Irreversibility and the Second Law of Thermodynamics, Chapter 7 of Entropy, Greven, A., Keller, G., Warnecke (editors) (2003), Princeton University Press, Princeton NJ, .
Uhlenbeck, G.E., Ford, G.W. (1963). Lectures in Statistical Mechanics, American Mathematical Society, Providence RI.
Zemansky, M.W. (1968). Heat and Thermodynamics. An Intermediate Textbook, fifth edition, McGraw-Hill Book Company, New York.
Further reading
Goldstein, Martin, and Inge F., 1993. The Refrigerator and the Universe. Harvard Univ. Press. Chpts. 4–9 contain an introduction to the second law, one a bit less technical than this entry.
Leff, Harvey S., and Rex, Andrew F. (eds.) 2003. Maxwell's Demon 2 : Entropy, classical and quantum information, computing. Bristol UK; Philadelphia PA: Institute of Physics.
Stephen Jay Kline (1999). The Low-Down on Entropy and Interpretive Thermodynamics, La Cañada, CA: DCW Industries. .
External links
Stanford Encyclopedia of Philosophy: "Philosophy of Statistical Mechanics" – by Lawrence Sklar.
Second law of thermodynamics in the MIT Course Unified Thermodynamics and Propulsion from Prof. Z. S. Spakovszky
E.T. Jaynes, 1988, "The evolution of Carnot's principle," in G. J. Erickson and C. R. Smith (eds.), Maximum-Entropy and Bayesian Methods in Science and Engineering, Vol. 1: p. 267.
Caratheodory, C., "Examination of the foundations of thermodynamics," trans. by D. H. Delphenich
The second law of Thermodynamics, BBC Radio 4 discussion with John Gribbin, Peter Atkins & Monica Grady (In Our Time, December 16, 2004)
The Journal of the International Society for the History of Philosophy of Science, 2012
Electromagnetic spectrum
The electromagnetic spectrum is the full range of electromagnetic radiation, organized by frequency or wavelength. The spectrum is divided into separate bands, with different names for the electromagnetic waves within each band. From low to high frequency these are: radio waves, microwaves, infrared, visible light, ultraviolet, X-rays, and gamma rays. The electromagnetic waves in each of these bands have different characteristics, such as how they are produced, how they interact with matter, and their practical applications.
Radio waves, at the low-frequency end of the spectrum, have the lowest photon energy and the longest wavelengths—thousands of kilometers, or more. They can be emitted and received by antennas, and pass through the atmosphere, foliage, and most building materials.
Gamma rays, at the high-frequency end of the spectrum, have the highest photon energies and the shortest wavelengths—much smaller than an atomic nucleus. Gamma rays, X-rays, and extreme ultraviolet rays are called ionizing radiation because their high photon energy is able to ionize atoms, causing chemical reactions. Longer-wavelength radiation such as visible light is nonionizing; the photons do not have sufficient energy to ionize atoms.
Throughout most of the electromagnetic spectrum, spectroscopy can be used to separate waves of different frequencies, so that the intensity of the radiation can be measured as a function of frequency or wavelength. Spectroscopy is used to study the interactions of electromagnetic waves with matter.
History and discovery
Humans have always been aware of visible light and radiant heat but for most of history it was not known that these phenomena were connected or were representatives of a more extensive principle. The ancient Greeks recognized that light traveled in straight lines and studied some of its properties, including reflection and refraction. Light was intensively studied from the beginning of the 17th century leading to the invention of important instruments like the telescope and microscope. Isaac Newton was the first to use the term spectrum for the range of colours that white light could be split into with a prism. Starting in 1666, Newton showed that these colours were intrinsic to light and could be recombined into white light. A debate arose over whether light had a wave nature or a particle nature with René Descartes, Robert Hooke and Christiaan Huygens favouring a wave description and Newton favouring a particle description. Huygens in particular had a well developed theory from which he was able to derive the laws of reflection and refraction. Around 1801, Thomas Young measured the wavelength of a light beam with his two-slit experiment thus conclusively demonstrating that light was a wave.
In 1800, William Herschel discovered infrared radiation. He was studying the temperature of different colours by moving a thermometer through light split by a prism. He noticed that the highest temperature was beyond red. He theorized that this temperature change was due to "calorific rays", a type of light ray that could not be seen. The next year, Johann Ritter, working at the other end of the spectrum, noticed what he called "chemical rays" (invisible light rays that induced certain chemical reactions). These behaved similarly to visible violet light rays, but were beyond them in the spectrum. They were later renamed ultraviolet radiation.
The study of electromagnetism began in 1820 when Hans Christian Ørsted discovered that electric currents produce magnetic fields (Oersted's law). Light was first linked to electromagnetism in 1845, when Michael Faraday noticed that the polarization of light traveling through a transparent material responded to a magnetic field (see Faraday effect). During the 1860s, James Clerk Maxwell developed four partial differential equations (Maxwell's equations) for the electromagnetic field. Two of these equations predicted the possibility and behavior of waves in the field. Analyzing the speed of these theoretical waves, Maxwell realized that they must travel at a speed that was about the known speed of light. This startling coincidence in value led Maxwell to make the inference that light itself is a type of electromagnetic wave. Maxwell's equations predicted an infinite range of frequencies of electromagnetic waves, all traveling at the speed of light. This was the first indication of the existence of the entire electromagnetic spectrum.
Maxwell's predicted waves included waves at very low frequencies compared to infrared, which in theory might be created by oscillating charges in an ordinary electrical circuit of a certain type. Attempting to prove Maxwell's equations and detect such low frequency electromagnetic radiation, in 1886, the physicist Heinrich Hertz built an apparatus to generate and detect what are now called radio waves. Hertz found the waves and was able to infer (by measuring their wavelength and multiplying it by their frequency) that they traveled at the speed of light. Hertz also demonstrated that the new radiation could be both reflected and refracted by various dielectric media, in the same manner as light. For example, Hertz was able to focus the waves using a lens made of tree resin. In a later experiment, Hertz similarly produced and measured the properties of microwaves. These new types of waves paved the way for inventions such as the wireless telegraph and the radio.
In 1895, Wilhelm Röntgen noticed a new type of radiation emitted during an experiment with an evacuated tube subjected to a high voltage. He called this radiation "x-rays" and found that they were able to travel through parts of the human body but were reflected or stopped by denser matter such as bones. Before long, many uses were found for this radiography.
The last portion of the electromagnetic spectrum was filled in with the discovery of gamma rays. In 1900, Paul Villard was studying the radioactive emissions of radium when he identified a new type of radiation that he at first thought consisted of particles similar to known alpha and beta particles, but with the power of being far more penetrating than either. However, in 1910, British physicist William Henry Bragg demonstrated that gamma rays are electromagnetic radiation, not particles, and in 1914, Ernest Rutherford (who had named them gamma rays in 1903 when he realized that they were fundamentally different from charged alpha and beta particles) and Edward Andrade measured their wavelengths, and found that gamma rays were similar to X-rays, but with shorter wavelengths.
The wave-particle debate was rekindled in 1901 when Max Planck discovered that light is absorbed only in discrete "quanta", now called photons, implying that light has a particle nature. This idea was made explicit by Albert Einstein in 1905, but never accepted by Planck and many other contemporaries. The modern position of science is that electromagnetic radiation has both a wave and a particle nature, the wave-particle duality. The contradictions arising from this position are still being debated by scientists and philosophers.
Range
Electromagnetic waves are typically described by any of the following three physical properties: the frequency f, wavelength λ, or photon energy E. Frequencies observed in astronomy range from those of 1 GeV gamma rays down to the local plasma frequency of the ionized interstellar medium (~1 kHz). Wavelength is inversely proportional to the wave frequency, so gamma rays have very short wavelengths that are fractions of the size of atoms, whereas wavelengths on the opposite end of the spectrum can be indefinitely long. Photon energy is directly proportional to the wave frequency, so gamma ray photons have the highest energy (around a billion electron volts), while radio wave photons have very low energy (around a femtoelectronvolt). These relations are illustrated by the following equations:
$$f = \frac{c}{\lambda}, \qquad E = hf, \qquad E = \frac{hc}{\lambda}$$
where:
c is the speed of light in vacuum
h is the Planck constant.
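The relations above can be turned into a few lines of code. The following is a minimal sketch (the constants are standard values; the example wavelengths are illustrative choices, not taken from the article):

```python
c = 299_792_458.0        # speed of light in vacuum, m/s
h = 6.626_070_15e-34     # Planck constant, J s
eV = 1.602_176_634e-19   # joules per electronvolt

def photon(wavelength_m):
    """Return (frequency in Hz, photon energy in eV) for a given vacuum wavelength."""
    f = c / wavelength_m
    return f, h * f / eV

print(photon(550e-9))    # green light: ~5.5e14 Hz, ~2.3 eV
print(photon(0.21))      # ~21 cm radio wave: ~1.4e9 Hz, ~5.9e-6 eV
```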
Whenever electromagnetic waves travel in a medium with matter, their wavelength is decreased. Wavelengths of electromagnetic radiation, whatever medium they are traveling through, are usually quoted in terms of the vacuum wavelength, although this is not always explicitly stated.
Generally, electromagnetic radiation is classified by wavelength into radio wave, microwave, infrared, visible light, ultraviolet, X-rays and gamma rays. The behavior of EM radiation depends on its wavelength. When EM radiation interacts with single atoms and molecules, its behavior also depends on the amount of energy per quantum (photon) it carries.
Spectroscopy can detect a much wider region of the EM spectrum than the visible wavelength range of 400 nm to 700 nm in a vacuum. A common laboratory spectroscope can detect wavelengths from 2 nm to 2500 nm. Detailed information about the physical properties of objects, gases, or even stars can be obtained from this type of device. Spectroscopes are widely used in astrophysics. For example, many hydrogen atoms emit a radio wave photon that has a wavelength of about 21.106 cm (the 21 cm hydrogen line). Also, frequencies of 30 Hz and below can be produced by, and are important in the study of, certain stellar nebulae, and much higher frequencies have been detected from astrophysical sources.
Regions
The types of electromagnetic radiation are broadly classified into the following classes (regions, bands or types):
Gamma radiation
X-ray radiation
Ultraviolet radiation
Visible light (light that humans can see)
Infrared radiation
Microwave radiation
Radio waves
This classification goes in the increasing order of wavelength, which is characteristic of the type of radiation.
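As a rough illustration of this classification, the sketch below sorts a vacuum wavelength into one of the classes listed above. The boundary values are approximate round numbers assumed for the example; as noted just below, the real boundaries are not precisely defined.

```python
def classify(wavelength_m: float) -> str:
    """Return the approximate spectral band for a vacuum wavelength in metres."""
    bands = [
        (1e-11, "gamma rays"),      # below ~0.01 nm
        (1e-8,  "X-rays"),          # ~0.01 nm to 10 nm
        (4e-7,  "ultraviolet"),     # ~10 nm to 400 nm
        (7e-7,  "visible light"),   # ~400 nm to 700 nm
        (1e-3,  "infrared"),        # ~700 nm to 1 mm
        (1e-1,  "microwaves"),      # ~1 mm to 10 cm
    ]
    for upper, name in bands:
        if wavelength_m < upper:
            return name
    return "radio waves"

print(classify(550e-9))  # visible light
print(classify(0.21))    # radio waves (the 21 cm hydrogen line)
```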
There are no precisely defined boundaries between the bands of the electromagnetic spectrum; rather they fade into each other like the bands in a rainbow (which is the sub-spectrum of visible light). Radiation of each frequency and wavelength (or in each band) has a mix of properties of the two regions of the spectrum that bound it. For example, red light resembles infrared radiation in that it can excite and add energy to some chemical bonds and indeed must do so to power the chemical mechanisms responsible for photosynthesis and the working of the visual system.
The distinction between X-rays and gamma rays is partly based on sources: the photons generated from nuclear decay or other nuclear and subnuclear/particle processes are always termed gamma rays, whereas X-rays are generated by electronic transitions involving highly energetic inner atomic electrons. In general, nuclear transitions are much more energetic than electronic transitions, so gamma rays are more energetic than X-rays, but exceptions exist. By analogy to electronic transitions, muonic atom transitions are also said to produce X-rays, even though their energy may exceed that of many nuclear gamma rays, whereas there are many low-energy nuclear transitions (77 are known at very low energies, e.g. the nuclear transition of thorium-229m), and, despite being about one million-fold less energetic than some muonic X-rays, the emitted photons are still called gamma rays due to their nuclear origin.
The convention that EM radiation that is known to come from the nucleus is always called "gamma ray" radiation is the only convention that is universally respected, however. Many astronomical gamma ray sources (such as gamma ray bursts) are known to be too energetic (in both intensity and wavelength) to be of nuclear origin. Quite often, in high-energy physics and in medical radiotherapy, very high energy EMR (in the > 10 MeV region)—which is of higher energy than any nuclear gamma ray—is not called X-ray or gamma ray, but instead by the generic term of "high-energy photons".
The region of the spectrum where a particular observed electromagnetic radiation falls is reference frame-dependent (due to the Doppler shift for light), so EM radiation that one observer would say is in one region of the spectrum could appear to an observer moving at a substantial fraction of the speed of light with respect to the first to be in another part of the spectrum. For example, consider the cosmic microwave background. It was produced when matter and radiation decoupled, by the de-excitation of hydrogen atoms to the ground state. These photons were from Lyman series transitions, putting them in the ultraviolet (UV) part of the electromagnetic spectrum. Now this radiation has undergone enough cosmological red shift to put it into the microwave region of the spectrum for observers moving slowly (compared to the speed of light) with respect to the cosmos.
Rationale for names
Electromagnetic radiation interacts with matter in different ways across the spectrum. These types of interaction are so different that historically different names have been applied to different parts of the spectrum, as though these were different types of radiation. Thus, although these "different kinds" of electromagnetic radiation form a quantitatively continuous spectrum of frequencies and wavelengths, the spectrum remains divided for practical reasons arising from these qualitative interaction differences.
Types of radiation
Radio waves
Radio waves are emitted and received by antennas, which consist of conductors such as metal rod resonators. In artificial generation of radio waves, an electronic device called a transmitter generates an alternating electric current which is applied to an antenna. The oscillating electrons in the antenna generate oscillating electric and magnetic fields that radiate away from the antenna as radio waves. In reception of radio waves, the oscillating electric and magnetic fields of a radio wave couple to the electrons in an antenna, pushing them back and forth, creating oscillating currents which are applied to a radio receiver. Earth's atmosphere is mainly transparent to radio waves, except for layers of charged particles in the ionosphere which can reflect certain frequencies.
Radio waves are extremely widely used to transmit information across distances in radio communication systems such as radio broadcasting, television, two way radios, mobile phones, communication satellites, and wireless networking. In a radio communication system, a radio frequency current is modulated with an information-bearing signal in a transmitter by varying either the amplitude, frequency or phase, and applied to an antenna. The radio waves carry the information across space to a receiver, where they are received by an antenna and the information extracted by demodulation in the receiver. Radio waves are also used for navigation in systems like Global Positioning System (GPS) and navigational beacons, and locating distant objects in radiolocation and radar. They are also used for remote control, and for industrial heating.
The use of the radio spectrum is strictly regulated by governments, coordinated by the International Telecommunication Union (ITU) which allocates frequencies to different users for different uses.
Microwaves
Microwaves are radio waves of short wavelength, from about 10 centimeters to one millimeter, in the SHF and EHF frequency bands. Microwave energy is produced with klystron and magnetron tubes, and with solid state devices such as Gunn and IMPATT diodes. Although they are emitted and absorbed by short antennas, they are also absorbed by polar molecules, coupling to vibrational and rotational modes, resulting in bulk heating. Unlike higher frequency waves such as infrared and visible light which are absorbed mainly at surfaces, microwaves can penetrate into materials and deposit their energy below the surface. This effect is used to heat food in microwave ovens, and for industrial heating and medical diathermy. Microwaves are the main wavelengths used in radar, and are used for satellite communication, and wireless networking technologies such as Wi-Fi. The copper cables (transmission lines) which are used to carry lower-frequency radio waves to antennas have excessive power losses at microwave frequencies, and metal pipes called waveguides are used to carry them. Although at the low end of the band the atmosphere is mainly transparent, at the upper end of the band absorption of microwaves by atmospheric gases limits practical propagation distances to a few kilometers.
Terahertz radiation or sub-millimeter radiation is a region of the spectrum from about 100 GHz to 30 terahertz (THz) between microwaves and far infrared which can be regarded as belonging to either band. Until recently, the range was rarely studied and few sources existed for microwave energy in the so-called terahertz gap, but applications such as imaging and communications are now appearing. Scientists are also looking to apply terahertz technology in the armed forces, where high-frequency waves might be directed at enemy troops to incapacitate their electronic equipment. Terahertz radiation is strongly absorbed by atmospheric gases, making this frequency range useless for long-distance communication.
Infrared radiation
The infrared part of the electromagnetic spectrum covers the range from roughly 300 GHz to 400 THz (1 mm – 750 nm). It can be divided into three parts:
Far-infrared, from 300 GHz to 30 THz (1 mm – 10 μm). The lower part of this range may also be called microwaves or terahertz waves. This radiation is typically absorbed by so-called rotational modes in gas-phase molecules, by molecular motions in liquids, and by phonons in solids. The water in Earth's atmosphere absorbs so strongly in this range that it renders the atmosphere in effect opaque. However, there are certain wavelength ranges ("windows") within the opaque range that allow partial transmission, and can be used for astronomy. The wavelength range from approximately 200 μm up to a few mm is often referred to as Submillimetre astronomy, reserving far infrared for wavelengths below 200 μm.
Mid-infrared, from 30 THz to 120 THz (10–2.5 μm). Hot objects (black-body radiators) can radiate strongly in this range, and human skin at normal body temperature radiates strongly at the lower end of this region. This radiation is absorbed by molecular vibrations, where the different atoms in a molecule vibrate around their equilibrium positions. This range is sometimes called the fingerprint region, since the mid-infrared absorption spectrum of a compound is very specific for that compound.
Near-infrared, from 120 THz to 400 THz (2,500–750 nm). Physical processes that are relevant for this range are similar to those for visible light. The highest frequencies in this region can be detected directly by some types of photographic film, and by many types of solid state image sensors for infrared photography and videography.
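The statement in the mid-infrared item above, that skin at normal body temperature radiates near the long-wavelength end of that band, can be checked with Wien's displacement law. This is a back-of-the-envelope sketch with an assumed skin temperature; it is not drawn from the article's sources.

```python
b = 2.897771955e-3   # Wien's displacement constant, m*K
T_skin = 310.0       # K, an assumed skin/body temperature

peak = b / T_skin
print(f"Black-body peak at {T_skin} K is about {peak * 1e6:.1f} micrometres")  # ~9.3 um, near 10 um
```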
Visible light
Above infrared in frequency comes visible light. The Sun emits its peak power in the visible region, although integrating the entire emission power spectrum through all wavelengths shows that the Sun emits slightly more infrared than visible light. By definition, visible light is the part of the EM spectrum the human eye is the most sensitive to. Visible light (and near-infrared light) is typically absorbed and emitted by electrons in molecules and atoms that move from one energy level to another. This action allows the chemical mechanisms that underlie human vision and plant photosynthesis. The light that excites the human visual system is a very small portion of the electromagnetic spectrum. A rainbow shows the optical (visible) part of the electromagnetic spectrum; infrared (if it could be seen) would be located just beyond the red side of the rainbow whilst ultraviolet would appear just beyond the opposite violet end.
Electromagnetic radiation with a wavelength between 380 nm and 760 nm (400–790 terahertz) is detected by the human eye and perceived as visible light. Other wavelengths, especially near infrared (longer than 760 nm) and ultraviolet (shorter than 380 nm) are also sometimes referred to as light, especially when the visibility to humans is not relevant. White light is a combination of lights of different wavelengths in the visible spectrum. Passing white light through a prism splits it up into the several colours of light observed in the visible spectrum between 400 nm and 780 nm.
If radiation having a frequency in the visible region of the EM spectrum reflects off an object, say, a bowl of fruit, and then strikes the eyes, this results in visual perception of the scene. The brain's visual system processes the multitude of reflected frequencies into different shades and hues, and through this insufficiently understood psychophysical phenomenon, most people perceive a bowl of fruit.
At most wavelengths, however, the information carried by electromagnetic radiation is not directly detected by human senses. Natural sources produce EM radiation across the spectrum, and technology can also manipulate a broad range of wavelengths. Optical fiber transmits light that, although not necessarily in the visible part of the spectrum (it is usually infrared), can carry information. The modulation is similar to that used with radio waves.
Ultraviolet radiation
Next in frequency comes ultraviolet (UV). In frequency (and thus energy), UV rays sit between the violet end of the visible spectrum and the X-ray range. The UV wavelength spectrum ranges from 399 nm to 10 nm and is divided into 3 sections: UVA, UVB, and UVC.
UV is the lowest energy range energetic enough to ionize atoms, separating electrons from them, and thus causing chemical reactions. UV, X-rays, and gamma rays are thus collectively called ionizing radiation; exposure to them can damage living tissue. UV can also cause substances to glow with visible light; this is called fluorescence. UV fluorescence is used in forensics to detect evidence such as blood and urine at a crime scene. UV fluorescence is also used to detect counterfeit money and IDs, as they incorporate materials that glow under UV.
In the middle range of UV, UV rays cannot ionize but can break chemical bonds, making molecules unusually reactive. Sunburn, for example, is caused by the disruptive effects of middle-range UV radiation on skin cells, and such damage is the main cause of skin cancer. UV rays in the middle range can irreparably damage the complex DNA molecules in cells, producing thymine dimers, which makes this radiation a very potent mutagen. Because of the skin cancer caused by UV, sunscreens were developed to combat UV damage. Mid-UV wavelengths are called UVB, and UVB sources such as germicidal lamps are used to kill germs and to sterilize water.
The Sun emits UV radiation (about 10% of its total power), including extremely short wavelength UV that could potentially destroy most life on land (ocean water would provide some protection for life there). However, most of the Sun's damaging UV wavelengths are absorbed by the atmosphere before they reach the surface. The higher energy (shortest wavelength) ranges of UV (called "vacuum UV") are absorbed by nitrogen and, at longer wavelengths, by simple diatomic oxygen in the air. Most of the UV in the mid-range of energy is blocked by the ozone layer, which absorbs strongly in the important 200–315 nm range, the lower energy part of which is too long for ordinary dioxygen in air to absorb. This leaves less than 3% of sunlight at sea level in UV, with all of this remainder at the lower energies. The remainder is UV-A, along with some UV-B. The very lowest energy range of UV between 315 nm and visible light (called UV-A) is not blocked well by the atmosphere, but does not cause sunburn and does less biological damage. However, it is not harmless and does create oxygen radicals, mutations and skin damage.
X-rays
After UV come X-rays, which, like the upper ranges of UV are also ionizing. However, due to their higher energies, X-rays can also interact with matter by means of the Compton effect. Hard X-rays have shorter wavelengths than soft X-rays and as they can pass through many substances with little absorption, they can be used to 'see through' objects with 'thicknesses' less than that equivalent to a few meters of water. One notable use is diagnostic X-ray imaging in medicine (a process known as radiography). X-rays are useful as probes in high-energy physics. In astronomy, the accretion disks around neutron stars and black holes emit X-rays, enabling studies of these phenomena. X-rays are also emitted by stellar corona and are strongly emitted by some types of nebulae. However, X-ray telescopes must be placed outside the Earth's atmosphere to see astronomical X-rays, since the great depth of the atmosphere of Earth is opaque to X-rays (with areal density of 1000 g/cm2), equivalent to 10 meters thickness of water. This is an amount sufficient to block almost all astronomical X-rays (and also astronomical gamma rays—see below).
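The quoted figures can be checked with simple arithmetic. The sketch below uses standard sea-level pressure and the density of water (assumed standard values, not taken from the article's sources) to recover the roughly 1000 g/cm2 column density of the atmosphere and its roughly 10 m water equivalent.

```python
g = 9.80665          # standard gravity, m/s^2
P0 = 101_325.0       # standard sea-level pressure, Pa
rho_water = 1000.0   # density of water, kg/m^3

column = P0 / g                           # mass of air above one square metre, kg/m^2
print(column * 0.1, "g/cm^2")             # 1 kg/m^2 = 0.1 g/cm^2  ->  ~1030 g/cm^2
print(column / rho_water, "m of water")   # ~10.3 m water-equivalent thickness
```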
Gamma rays
After hard X-rays come gamma rays, which were discovered by Paul Ulrich Villard in 1900. These are the most energetic photons, having no defined lower limit to their wavelength. In astronomy they are valuable for studying high-energy objects or regions, however as with X-rays this can only be done with telescopes outside the Earth's atmosphere. Gamma rays are used experimentally by physicists for their penetrating ability and are produced by a number of radioisotopes. They are used for irradiation of foods and seeds for sterilization, and in medicine they are occasionally used in radiation cancer therapy. More commonly, gamma rays are used for diagnostic imaging in nuclear medicine, an example being PET scans. The wavelength of gamma rays can be measured with high accuracy through the effects of Compton scattering.
See also
Notes and references
External links
Australian Radiofrequency Spectrum Allocations Chart (from Australian Communications and Media Authority)
Canadian Table of Frequency Allocations (from Industry Canada)
U.S. Frequency Allocation Chart – Covering the range 3 kHz to 300 GHz (from Department of Commerce)
UK frequency allocation table (from Ofcom, which inherited the Radiocommunications Agency's duties, pdf format)
Flash EM Spectrum Presentation / Tool – Very complete and customizable.
Poster "Electromagnetic Radiation Spectrum" (992 kB)
Fundamental interaction
In physics, the fundamental interactions or fundamental forces are the interactions that do not appear to be reducible to more basic interactions. There are four fundamental interactions known to exist in nature:
gravity
electromagnetism
weak interaction
strong interaction
The gravitational and electromagnetic interactions produce long-range forces whose effects can be seen directly in everyday life. The strong and weak interactions produce forces at subatomic scales and govern nuclear interactions inside atoms.
Some scientists hypothesize that a fifth force might exist, but these hypotheses remain speculative.
Each of the known fundamental interactions can be described mathematically as a field. The gravitational force is attributed to the curvature of spacetime, described by Einstein's general theory of relativity. The other three are discrete quantum fields, and their interactions are mediated by elementary particles described by the Standard Model of particle physics.
Within the Standard Model, the strong interaction is carried by a particle called the gluon and is responsible for quarks binding together to form hadrons, such as protons and neutrons. As a residual effect, it creates the nuclear force that binds the latter particles to form atomic nuclei. The weak interaction is carried by particles called W and Z bosons, and also acts on the nucleus of atoms, mediating radioactive decay. The electromagnetic force, carried by the photon, creates electric and magnetic fields, which are responsible for the attraction between orbital electrons and atomic nuclei which holds atoms together, as well as chemical bonding and electromagnetic waves, including visible light, and forms the basis for electrical technology. Although the electromagnetic force is far stronger than gravity, it tends to cancel itself out within large objects, so over large (astronomical) distances gravity tends to be the dominant force, and is responsible for holding together the large scale structures in the universe, such as planets, stars, and galaxies.
Many theoretical physicists believe these fundamental forces to be related and to become unified into a single force at very high energies on a minuscule scale, the Planck scale, but particle accelerators cannot produce the enormous energies required to experimentally probe this. Devising a common theoretical framework that would explain the relation between the forces in a single theory is perhaps the greatest goal of today's theoretical physicists. The weak and electromagnetic forces have already been unified with the electroweak theory of Sheldon Glashow, Abdus Salam, and Steven Weinberg, for which they received the 1979 Nobel Prize in physics. Some physicists seek to unite the electroweak and strong fields within what is called a Grand Unified Theory (GUT). An even bigger challenge is to find a way to quantize the gravitational field, resulting in a theory of quantum gravity (QG) which would unite gravity in a common theoretical framework with the other three forces. Some theories, notably string theory, seek both QG and GUT within one framework, unifying all four fundamental interactions along with mass generation within a theory of everything (ToE).
History
Classical theory
In his 1687 theory, Isaac Newton postulated space as an infinite and unalterable physical structure existing before, within, and around all objects while their states and relations unfold at a constant pace everywhere, thus absolute space and time. Observing that all objects bearing mass approach one another at a constant rate, but collide with impacts proportional to their masses, Newton inferred that matter exhibits an attractive force. His law of universal gravitation implied there to be instant interaction among all objects. As conventionally interpreted, Newton's theory of motion modelled a central force without a communicating medium. Thus Newton's theory violated the tradition, going back to Descartes, that there should be no action at a distance. Conversely, during the 1820s, when explaining magnetism, Michael Faraday inferred a field filling space and transmitting that force. Faraday conjectured that ultimately, all forces unified into one.
In 1873, James Clerk Maxwell unified electricity and magnetism as effects of an electromagnetic field whose third consequence was light, travelling at constant speed in vacuum. If his electromagnetic field theory held true in all inertial frames of reference, this would contradict Newton's theory of motion, which relied on Galilean relativity. If, instead, his field theory only applied to reference frames at rest relative to a mechanical luminiferous aether—presumed to fill all space whether within matter or in vacuum and to manifest the electromagnetic field—then it could be reconciled with Galilean relativity and Newton's laws. (However, such a "Maxwell aether" was later disproven; Newton's laws did, in fact, have to be replaced.)
Standard Model
The Standard Model of particle physics was developed throughout the latter half of the 20th century. In the Standard Model, the electromagnetic, strong, and weak interactions associate with elementary particles, whose behaviours are modelled in quantum mechanics (QM). For predictive success with QM's probabilistic outcomes, particle physics conventionally models QM events across a field set to special relativity, altogether relativistic quantum field theory (QFT). Force particles, called gauge bosons—force carriers or messenger particles of underlying fields—interact with matter particles, called fermions. Everyday matter consists of atoms, composed of three fermion types: up quarks and down quarks, which constitute the atom's nucleus, and electrons, which orbit it. Atoms interact, form molecules, and manifest further properties through electromagnetic interactions among their electrons absorbing and emitting photons, the electromagnetic field's force carrier, which, if unimpeded, traverse potentially infinite distance. Electromagnetism's QFT is quantum electrodynamics (QED).
The force carriers of the weak interaction are the massive W and Z bosons. Electroweak theory (EWT) covers both electromagnetism and the weak interaction. At the high temperatures shortly after the Big Bang, the weak interaction, the electromagnetic interaction, and the Higgs boson were originally mixed components of a different set of ancient pre-symmetry-breaking fields. As the early universe cooled, these fields split into the long-range electromagnetic interaction, the short-range weak interaction, and the Higgs boson. In the Higgs mechanism, the Higgs field manifests Higgs bosons that interact with some quantum particles in a way that endows those particles with mass. The strong interaction, whose force carrier is the gluon, traversing minuscule distance among quarks, is modeled in quantum chromodynamics (QCD). EWT, QCD, and the Higgs mechanism comprise particle physics' Standard Model (SM). Predictions are usually made using calculational approximation methods, although such perturbation theory is inadequate to model some experimental observations (for instance bound states and solitons). Still, physicists widely accept the Standard Model as science's most experimentally confirmed theory.
Beyond the Standard Model, some theorists work to unite the electroweak and strong interactions within a Grand Unified Theory (GUT). Some attempts at GUTs hypothesize "shadow" particles, such that every known matter particle associates with an undiscovered force particle, and vice versa, altogether supersymmetry (SUSY). Other theorists seek to quantize the gravitational field by the modelling behaviour of its hypothetical force carrier, the graviton and achieve quantum gravity (QG). One approach to QG is loop quantum gravity (LQG). Still other theorists seek both QG and GUT within one framework, reducing all four fundamental interactions to a Theory of Everything (ToE). The most prevalent aim at a ToE is string theory, although to model matter particles, it added SUSY to force particles—and so, strictly speaking, became superstring theory. Multiple, seemingly disparate superstring theories were unified on a backbone, M-theory. Theories beyond the Standard Model remain highly speculative, lacking great experimental support.
Overview of the fundamental interactions
In the conceptual model of fundamental interactions, matter consists of fermions, which carry properties called charges and spin ±1/2 (intrinsic angular momentum ±ħ/2, where ħ is the reduced Planck constant). They attract or repel each other by exchanging bosons.
The interaction of any pair of fermions in perturbation theory can then be modelled thus:
Two fermions go in → interaction by boson exchange → two changed fermions go out.
The exchange of bosons always carries energy and momentum between the fermions, thereby changing their speed and direction. The exchange may also transport a charge between the fermions, changing the charges of the fermions in the process (e.g., turning them from one type of fermion to another). Since bosons carry one unit of angular momentum, the fermion's spin direction will flip from +1/2 to −1/2 (or vice versa) during such an exchange (in units of the reduced Planck constant). Since such interactions result in a change in momentum, they can give rise to classical Newtonian forces. In quantum mechanics, physicists often use the terms "force" and "interaction" interchangeably; for example, the weak interaction is sometimes referred to as the "weak force".
According to the present understanding, there are four fundamental interactions or forces: gravitation, electromagnetism, the weak interaction, and the strong interaction. Their magnitude and behaviour vary greatly, as described in the table below. Modern physics attempts to explain every observed physical phenomenon by these fundamental interactions. Moreover, reducing the number of different interaction types is seen as desirable. Two cases in point are the unification of:
Electric and magnetic force into electromagnetism;
The electromagnetic interaction and the weak interaction into the electroweak interaction; see below.
Both magnitude ("relative strength") and "range" of the associated potential, as given in the table, are meaningful only within a rather complex theoretical framework. The table below lists properties of a conceptual scheme that remains the subject of ongoing research.
The modern (perturbative) quantum mechanical view of the fundamental forces other than gravity is that particles of matter (fermions) do not directly interact with each other, but rather carry a charge, and exchange virtual particles (gauge bosons), which are the interaction carriers or force mediators. For example, photons mediate the interaction of electric charges, and gluons mediate the interaction of color charges. The full theory includes perturbations beyond simply fermions exchanging bosons; these additional perturbations can involve bosons that exchange fermions, as well as the creation or destruction of particles: see Feynman diagrams for examples.
Interactions
Gravity
Gravitation is the weakest of the four interactions at the atomic scale, where electromagnetic interactions dominate.
Gravitation is the most important of the four fundamental forces for astronomical objects over astronomical distances for two reasons. First, gravitation has an infinite effective range, like electromagnetism but unlike the strong and weak interactions. Second, gravity always attracts and never repels; in contrast, astronomical bodies tend toward a near-neutral net electric charge, such that the attraction to one type of charge and the repulsion from the opposite charge mostly cancel each other out.
Even though electromagnetism is far stronger than gravitation, electrostatic attraction is not relevant for large celestial bodies, such as planets, stars, and galaxies, simply because such bodies contain equal numbers of protons and electrons and so have a net electric charge of zero. Nothing "cancels" gravity, since it is only attractive, unlike electric forces which can be attractive or repulsive. On the other hand, all objects having mass are subject to the gravitational force, which only attracts. Therefore, only gravitation matters for the large-scale structure of the universe.
The long range of gravitation makes it responsible for such large-scale phenomena as the structure of galaxies and black holes and, being only attractive, it retards the expansion of the universe. Gravitation also explains astronomical phenomena on more modest scales, such as planetary orbits, as well as everyday experience: objects fall; heavy objects act as if they were glued to the ground, and animals can only jump so high.
Gravitation was the first interaction to be described mathematically. In ancient times, Aristotle hypothesized that objects of different masses fall at different rates. During the Scientific Revolution, Galileo Galilei experimentally determined that this hypothesis was wrong under certain circumstances—neglecting the friction due to air resistance and buoyancy forces if an atmosphere is present (e.g. the case of a dropped air-filled balloon vs a water-filled balloon), all objects accelerate toward the Earth at the same rate. Isaac Newton's law of Universal Gravitation (1687) was a good approximation of the behaviour of gravitation. Present-day understanding of gravitation stems from Einstein's General Theory of Relativity of 1915, a more accurate (especially for cosmological masses and distances) description of gravitation in terms of the geometry of spacetime.
Merging general relativity and quantum mechanics (or quantum field theory) into a more general theory of quantum gravity is an area of active research. It is hypothesized that gravitation is mediated by a massless spin-2 particle called the graviton.
Although general relativity has been experimentally confirmed (at least for weak fields, i.e. not black holes) on all but the smallest scales, there are alternatives to general relativity. These theories must reduce to general relativity in some limit, and the focus of observational work is to establish limits on what deviations from general relativity are possible.
Proposed extra dimensions could explain why gravity is so weak.
Electroweak interaction
Electromagnetism and the weak interaction appear to be very different at everyday low energies, and they can be modeled using two different theories. However, above the unification energy, on the order of 100 GeV, they would merge into a single electroweak force.
The electroweak theory is very important for modern cosmology, particularly for how the universe evolved. This is because shortly after the Big Bang, when the temperature was still above approximately 10¹⁵ K, the electromagnetic force and the weak force were still merged as a combined electroweak force.
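As a rough illustration of how the ~100 GeV unification energy maps onto the ~10¹⁵ K temperature quoted above, one can divide the energy by the Boltzmann constant. The sketch below is only an order-of-magnitude unit conversion, not a cosmological calculation; the constant value is the standard CODATA figure.

```python
# Order-of-magnitude check: temperature corresponding to the ~100 GeV electroweak scale.
k_B_eV = 8.617333262e-5   # Boltzmann constant in eV/K
E_unification_eV = 100e9  # ~100 GeV expressed in eV

T = E_unification_eV / k_B_eV
print(f"T ~ {T:.2e} K")   # ~1.2e15 K, consistent with the ~10^15 K figure above
```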
For contributions to the unification of the weak and electromagnetic interaction between elementary particles, Abdus Salam, Sheldon Glashow and Steven Weinberg were awarded the Nobel Prize in Physics in 1979.
Electromagnetism
Electromagnetism is the force that acts between electrically charged particles. This phenomenon includes the electrostatic force acting between charged particles at rest, and the combined effect of electric and magnetic forces acting between charged particles moving relative to each other.
Electromagnetism has an infinite range, as gravity does, but is vastly stronger. It is the force that binds electrons to atoms, and it holds molecules together. It is responsible for everyday phenomena like light, magnets, electricity, and friction. Electromagnetism fundamentally determines all macroscopic, and many atomic-level, properties of the chemical elements.
In a four kilogram (~1 gallon) jug of water, there is roughly 2×10⁸ coulombs of total electron charge. Thus, if we place two such jugs a meter apart, the electrons in one of the jugs repel those in the other jug with a force on the order of 10²⁶ N.
This force is many times larger than the weight of the planet Earth. The atomic nuclei in one jug also repel those in the other with the same force. However, these repulsive forces are canceled by the attraction of the electrons in jug A with the nuclei in jug B and the attraction of the nuclei in jug A with the electrons in jug B, resulting in no net force. Electromagnetic forces are tremendously stronger than gravity, but tend to cancel out so that for astronomical-scale bodies, gravity dominates.
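The figures above can be checked with a short back-of-the-envelope script. The sketch below uses standard physical constants and assumes 10 electrons per H2O molecule; it reproduces the order of magnitude only, and the exact printed numbers depend on rounding.

```python
# Back-of-envelope: net electron charge in 4 kg of water and the Coulomb force
# between two such charges held 1 m apart.
N_A   = 6.02214076e23      # Avogadro constant, 1/mol
e     = 1.602176634e-19    # elementary charge, C
k_e   = 8.9875517923e9     # Coulomb constant, N m^2 / C^2
M_H2O = 0.018015           # molar mass of water, kg/mol

mass = 4.0                                 # kg of water (~1 gallon)
n_electrons = (mass / M_H2O) * 10 * N_A    # 10 electrons per H2O molecule
q = n_electrons * e                        # total electron charge, ~2e8 C

F = k_e * q**2 / 1.0**2                    # repulsion at 1 m separation, ~4e26 N
earth_weight = 5.972e24 * 9.81             # weight of the Earth, ~6e25 N
print(f"charge ~ {q:.2e} C, force ~ {F:.2e} N, ~{F/earth_weight:.0f}x Earth's weight")
```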
Electrical and magnetic phenomena have been observed since ancient times, but it was only in the 19th century that James Clerk Maxwell discovered that electricity and magnetism are two aspects of the same fundamental interaction. By 1864, Maxwell's equations had rigorously quantified this unified interaction. Maxwell's theory, restated using vector calculus, is the classical theory of electromagnetism, suitable for most technological purposes.
The constant speed of light in vacuum (customarily denoted with a lowercase c) can be derived from Maxwell's equations, which are consistent with the theory of special relativity. Albert Einstein's 1905 theory of special relativity, which follows from the observation that the speed of light is constant no matter how fast the observer is moving, showed that the theoretical result implied by Maxwell's equations has profound implications far beyond electromagnetism on the very nature of time and space.
In another work that departed from classical electromagnetism, Einstein also explained the photoelectric effect by utilizing Max Planck's discovery that light was transmitted in "quanta" of specific energy content based on the frequency, which we now call photons. Starting around 1927, Paul Dirac combined quantum mechanics with the relativistic theory of electromagnetism. Further work in the 1940s by Richard Feynman, Freeman Dyson, Julian Schwinger, and Sin-Itiro Tomonaga completed this theory, which is now called quantum electrodynamics, the revised theory of electromagnetism. Quantum electrodynamics and quantum mechanics provide a theoretical basis for electromagnetic behavior such as quantum tunneling, in which a certain percentage of electrically charged particles move in ways that would be impossible under the classical electromagnetic theory, and which is necessary for everyday electronic devices such as transistors to function.
Weak interaction
The weak interaction or weak nuclear force is responsible for some nuclear phenomena such as beta decay. Electromagnetism and the weak force are now understood to be two aspects of a unified electroweak interaction — this discovery was the first step toward the unified theory known as the Standard Model. In the theory of the electroweak interaction, the carriers of the weak force are the massive gauge bosons called the W and Z bosons. The weak interaction is the only known interaction that does not conserve parity; it is left–right asymmetric. The weak interaction even violates CP symmetry but does conserve CPT.
Strong interaction
The strong interaction, or strong nuclear force, is the most complicated interaction, mainly because of the way it varies with distance. The nuclear force is powerfully attractive between nucleons at distances of about 1 femtometre (fm, or 10⁻¹⁵ metres), but it rapidly decreases to insignificance at distances beyond about 2.5 fm. At distances less than 0.7 fm, the nuclear force becomes repulsive. This repulsive component is responsible for the physical size of nuclei, since the nucleons can come no closer than the force allows.
After the nucleus was discovered in 1908, it was clear that a new force, today known as the nuclear force, was needed to overcome the electrostatic repulsion, a manifestation of electromagnetism, of the positively charged protons. Otherwise, the nucleus could not exist. Moreover, the force had to be strong enough to squeeze the protons into a volume whose diameter is about 10⁻¹⁵ m, much smaller than that of the entire atom. From the short range of this force, Hideki Yukawa predicted that it was associated with a massive force particle, whose mass is approximately 100 MeV.
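Yukawa's mass estimate follows from the relation between the range of a force and the mass of its mediator, m c² ≈ ħc / r. The sketch below is only an order-of-magnitude illustration; the assumed ranges of 1–2 fm are typical values for the nuclear force and are not figures quoted in this article.

```python
# Estimate of the mediator mass from the ~1-2 fm range of the nuclear force,
# using m*c^2 ~ (hbar*c) / range.
hbar_c_MeV_fm = 197.327          # hbar*c in MeV*fm

for range_fm in (1.0, 1.4, 2.0): # assumed effective ranges of the nuclear force, in fm
    mass_MeV = hbar_c_MeV_fm / range_fm
    print(f"range {range_fm:.1f} fm  ->  mediator mass ~ {mass_MeV:.0f} MeV/c^2")
# Roughly 100-200 MeV/c^2, the scale of Yukawa's prediction (the pion is ~140 MeV/c^2).
```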
The 1947 discovery of the pion ushered in the modern era of particle physics. Hundreds of hadrons were discovered from the 1940s to 1960s, and an extremely complicated theory of hadrons as strongly interacting particles was developed. Most notably:
The pions were understood to be oscillations of vacuum condensates;
Jun John Sakurai proposed the rho and omega vector bosons to be force carrying particles for approximate symmetries of isospin and hypercharge;
Geoffrey Chew, Edward K. Burdett and Steven Frautschi grouped the heavier hadrons into families that could be understood as vibrational and rotational excitations of strings.
While each of these approaches offered insights, no approach led directly to a fundamental theory.
Murray Gell-Mann and George Zweig independently proposed fractionally charged quarks in 1964. Throughout the 1960s, different authors considered theories similar to the modern fundamental theory of quantum chromodynamics (QCD) as simple models for the interactions of quarks. The first to hypothesize the gluons of QCD were Moo-Young Han and Yoichiro Nambu, who introduced the quark color charge. Han and Nambu hypothesized that it might be associated with a force-carrying field. At that time, however, it was difficult to see how such a model could permanently confine quarks. Han and Nambu also assigned each quark color an integer electrical charge, so that the quarks were fractionally charged only on average, and they did not expect the quarks in their model to be permanently confined.
In 1971, Murray Gell-Mann and Harald Fritzsch proposed that the Han/Nambu color gauge field was the correct theory of the short-distance interactions of fractionally charged quarks. A little later, David Gross, Frank Wilczek, and David Politzer discovered that this theory had the property of asymptotic freedom, allowing them to make contact with experimental evidence. They concluded that QCD was the complete theory of the strong interactions, correct at all distance scales. The discovery of asymptotic freedom led most physicists to accept QCD since it became clear that even the long-distance properties of the strong interactions could be consistent with experiment if the quarks are permanently confined: the strong force increases indefinitely with distance, trapping quarks inside the hadrons.
Assuming that quarks are confined, Mikhail Shifman, Arkady Vainshtein and Valentine Zakharov were able to compute the properties of many low-lying hadrons directly from QCD, with only a few extra parameters to describe the vacuum. In 1980, Kenneth G. Wilson published computer calculations based on the first principles of QCD, establishing, to a level of confidence tantamount to certainty, that QCD will confine quarks. Since then, QCD has been the established theory of strong interactions.
QCD is a theory of fractionally charged quarks interacting by means of 8 bosonic particles called gluons. The gluons also interact with each other, not just with the quarks, and at long distances the lines of force collimate into strings, loosely modeled by a linear potential, a constant attractive force. In this way, the mathematical theory of QCD not only explains how quarks interact over short distances but also the string-like behavior, discovered by Chew and Frautschi, which they manifest over longer distances.
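To get a feel for the "constant attractive force" implied by a linear potential V(r) = σr, the sketch below converts a commonly quoted QCD string tension of roughly 0.9–1 GeV/fm into SI units. The value of σ is an assumption chosen for illustration, not a number taken from this article.

```python
# Force implied by a linear confining potential V(r) = sigma * r:
# the force dV/dr = sigma is constant, independent of separation.
sigma_GeV_per_fm = 0.9            # assumed string tension, ~0.9-1 GeV/fm (illustrative)
GeV_in_J = 1.602176634e-10        # 1 GeV in joules
fm_in_m  = 1e-15                  # 1 fm in metres

force_N = sigma_GeV_per_fm * GeV_in_J / fm_in_m
print(f"constant quark-antiquark force ~ {force_N:.1e} N")   # ~1.4e5 N, i.e. tens of kilonewtons
```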
Higgs interaction
Conventionally, the Higgs interaction is not counted among the four fundamental forces.
Nonetheless, although not a gauge interaction nor generated by any diffeomorphism symmetry, the Higgs field's cubic Yukawa coupling produces a weakly attractive fifth interaction. After spontaneous symmetry breaking via the Higgs mechanism, Yukawa terms remain of the form
$-\dfrac{m_i}{v}\,\bar{\psi}_i \psi_i\, h\,,$
with Yukawa coupling $\lambda_i = \sqrt{2}\,m_i/v$, particle mass $m_i$ (in eV), and Higgs vacuum expectation value $v \approx 246\ \text{GeV}$. Hence coupled particles can exchange a virtual Higgs boson, yielding classical potentials of the form
$V(r) = -\dfrac{m_i m_j}{4\pi v^{2}}\,\dfrac{e^{-m_H r}}{r}$ (in natural units),
with Higgs mass $m_H \approx 125\ \text{GeV}/c^2$. Because the reduced Compton wavelength of the Higgs boson is so small (about 1.6×10⁻¹⁸ m, comparable to those of the W and Z bosons), this potential has an effective range of a few attometers. Between two electrons, it begins roughly 10¹¹ times weaker than the weak interaction, and grows exponentially weaker at non-zero distances.
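The "few attometers" range quoted above is essentially the reduced Compton wavelength ħ/(mc) of the Higgs boson. The sketch below computes it for the Higgs, W, and Z masses (rounded PDG-style central values, assumed here for illustration); it is a unit-conversion exercise only.

```python
# Reduced Compton wavelength, lambda-bar = hbar*c / (m*c^2), for heavy bosons.
hbar_c_MeV_fm = 197.327   # hbar*c in MeV*fm

masses_MeV = {"Higgs": 125100.0, "W": 80377.0, "Z": 91187.6}   # assumed masses in MeV/c^2
for name, m in masses_MeV.items():
    lam_fm = hbar_c_MeV_fm / m            # wavelength in femtometres
    print(f"{name}: {lam_fm * 1e-15:.2e} m")   # Higgs ~1.6e-18 m, i.e. a few attometres
```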
Beyond the Standard Model
Numerous theoretical efforts have been made to systematize the existing four fundamental interactions on the model of electroweak unification.
Grand Unified Theories (GUTs) are proposals to show that the three fundamental interactions described by the Standard Model are all different manifestations of a single interaction with symmetries that break down and create separate interactions below some extremely high level of energy. GUTs are also expected to predict some of the relationships between constants of nature that the Standard Model treats as unrelated, as well as predicting gauge coupling unification for the relative strengths of the electromagnetic, weak, and strong forces (this was, for example, verified at the Large Electron–Positron Collider in 1991 for supersymmetric theories).
Theories of everything, which integrate GUTs with a quantum gravity theory, face a greater barrier, because no quantum gravity theories, which include string theory, loop quantum gravity, and twistor theory, have secured wide acceptance. Some theories look for a graviton to complete the Standard Model list of force-carrying particles, while others, like loop quantum gravity, emphasize the possibility that spacetime itself may have a quantum aspect to it.
Some theories beyond the Standard Model include a hypothetical fifth force, and the search for such a force is an ongoing line of experimental physics research. In supersymmetric theories, some particles acquire their masses only through supersymmetry breaking effects and these particles, known as moduli, can mediate new forces. Another reason to look for new forces is the discovery that the expansion of the universe is accelerating (also known as dark energy), giving rise to a need to explain a nonzero cosmological constant, and possibly to other modifications of general relativity. Fifth forces have also been suggested to explain phenomena such as CP violations, dark matter, and dark flow.
See also
Quintessence, a hypothesized fifth force
Gerardus 't Hooft
Edward Witten
Howard Georgi
References
Bibliography
Physical phenomena
Molecular geometry
Molecular geometry is the three-dimensional arrangement of the atoms that constitute a molecule. It includes the general shape of the molecule as well as bond lengths, bond angles, torsional angles and any other geometrical parameters that determine the position of each atom.
Molecular geometry influences several properties of a substance including its reactivity, polarity, phase of matter, color, magnetism and biological activity. The angles between bonds that an atom forms depend only weakly on the rest of the molecule, i.e. they can be understood as approximately local and hence transferable properties.
Determination
The molecular geometry can be determined by various spectroscopic methods and diffraction methods. IR, microwave and Raman spectroscopy can give information about the molecular geometry from the details of the vibrational and rotational absorbance detected by these techniques. X-ray crystallography, neutron diffraction and electron diffraction can give molecular structure for crystalline solids based on the distance between nuclei and concentration of electron density. Gas electron diffraction can be used for small molecules in the gas phase. NMR and FRET methods can be used to determine complementary information including relative distances, dihedral angles, angles, and connectivity. Molecular geometries are best determined at low temperature because at higher temperatures the molecular structure is averaged over more accessible geometries (see next section). Larger molecules often exist in multiple stable geometries (conformational isomerism) that are close in energy on the potential energy surface. Geometries can also be computed by ab initio quantum chemistry methods to high accuracy. The molecular geometry can be different as a solid, in solution, and as a gas.
The position of each atom is determined by the nature of the chemical bonds by which it is connected to its neighboring atoms. The molecular geometry can be described by the positions of these atoms in space, evoking bond lengths of two joined atoms, bond angles of three connected atoms, and torsion angles (dihedral angles) of three consecutive bonds.
Influence of thermal excitation
Since the motions of the atoms in a molecule are determined by quantum mechanics, "motion" must be defined in a quantum mechanical way. The overall (external) quantum mechanical motions translation and rotation hardly change the geometry of the molecule. (To some extent rotation influences the geometry via Coriolis forces and centrifugal distortion, but this is negligible for the present discussion.) In addition to translation and rotation, a third type of motion is molecular vibration, which corresponds to internal motions of the atoms such as bond stretching and bond angle variation. The molecular vibrations are harmonic (at least to good approximation), and the atoms oscillate about their equilibrium positions, even at the absolute zero of temperature. At absolute zero all atoms are in their vibrational ground state and show zero point quantum mechanical motion, so that the wavefunction of a single vibrational mode is not a sharp peak, but approximately a Gaussian function (the wavefunction for n = 0 depicted in the article on the quantum harmonic oscillator). At higher temperatures the vibrational modes may be thermally excited (in a classical interpretation one expresses this by stating that "the molecules will vibrate faster"), but they oscillate still around the recognizable geometry of the molecule.
To get a feeling for the probability that the vibration of a molecule may be thermally excited, we inspect the Boltzmann factor β = exp(−ΔE / kT), where ΔE is the excitation energy of the vibrational mode, k the Boltzmann constant and T the absolute temperature. At 298 K (25 °C), typical values for the Boltzmann factor β are:
β = 0.089 for ΔE = 500 cm⁻¹
β = 0.008 for ΔE = 1000 cm⁻¹
β = 0.0007 for ΔE = 1500 cm⁻¹.
(The reciprocal centimeter is an energy unit that is commonly used in infrared spectroscopy; 1 cm⁻¹ corresponds to about 1.986×10⁻²³ J, or 1.24×10⁻⁴ eV). When an excitation energy is 500 cm⁻¹, then about 8.9 percent of the molecules are thermally excited at room temperature. To put this in perspective: the lowest excitation vibrational energy in water is the bending mode (about 1600 cm⁻¹). Thus, at room temperature less than 0.07 percent of all the molecules of a given amount of water will vibrate faster than at absolute zero.
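The Boltzmann factors quoted above can be reproduced with a few lines of Python. The sketch below expresses kT at 298 K in reciprocal centimetres and evaluates β = exp(−ΔE/kT) for the three energies in the list; only standard constants are used.

```python
import math

# Boltzmann factor beta = exp(-dE / kT) for vibrational excitation energies given in cm^-1.
k_B = 1.380649e-23        # Boltzmann constant, J/K
h   = 6.62607015e-34      # Planck constant, J*s
c   = 2.99792458e10       # speed of light in cm/s (so that h*c has units of J*cm)
T   = 298.0               # temperature, K

kT_in_cm1 = k_B * T / (h * c)          # ~207 cm^-1 at room temperature
for dE in (500.0, 1000.0, 1500.0):     # excitation energies in cm^-1
    beta = math.exp(-dE / kT_in_cm1)
    print(f"dE = {dE:4.0f} cm^-1  ->  beta = {beta:.4f}")
# Reproduces beta ~ 0.089, 0.008 and 0.0007 as quoted above.
```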
As stated above, rotation hardly influences the molecular geometry. But, as a quantum mechanical motion, it is thermally excited at relatively (as compared to vibration) low temperatures. From a classical point of view it can be stated that at higher temperatures more molecules will rotate faster, which implies that they have higher angular velocity and angular momentum. In quantum mechanical language: more eigenstates of higher angular momentum become thermally populated with rising temperatures. Typical rotational excitation energies are on the order of a few cm⁻¹. The results of many spectroscopic experiments are broadened because they involve an averaging over rotational states. It is often difficult to extract geometries from spectra at high temperatures, because the number of rotational states probed in the experimental averaging increases with increasing temperature. Thus, many spectroscopic observations can only be expected to yield reliable molecular geometries at temperatures close to absolute zero, because at higher temperatures too many higher rotational states are thermally populated.
Bonding
Molecules are most often held together by covalent bonds involving single, double, and/or triple bonds, where a "bond" is a shared pair of electrons (the other method of bonding between atoms is called ionic bonding and involves a positive cation and a negative anion).
Molecular geometries can be specified in terms of 'bond lengths', 'bond angles' and 'torsional angles'. The bond length is defined to be the average distance between the nuclei of two atoms bonded together in any given molecule. A bond angle is the angle formed between three atoms across at least two bonds. For four atoms bonded together in a chain, the torsional angle is the angle between the plane formed by the first three atoms and the plane formed by the last three atoms.
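Given Cartesian coordinates for the atoms, bond lengths, bond angles and torsion angles can be computed with elementary vector algebra. The sketch below (NumPy, with purely illustrative coordinates and function names of my own choosing) shows one common way to do it.

```python
import numpy as np

def bond_angle(a, b, c):
    """Angle a-b-c in degrees, with b the central atom."""
    u, v = a - b, c - b
    cos_t = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cos_t, -1.0, 1.0)))

def torsion_angle(a, b, c, d):
    """Signed dihedral angle of the chain a-b-c-d in degrees."""
    b1, b2, b3 = b - a, c - b, d - c
    n1, n2 = np.cross(b1, b2), np.cross(b2, b3)
    m1 = np.cross(n1, b2 / np.linalg.norm(b2))
    return np.degrees(np.arctan2(np.dot(m1, n2), np.dot(n1, n2)))

# Illustrative, idealised water geometry: O at the origin, O-H = 0.96 angstrom, H-O-H ~ 104.5 deg.
O  = np.array([0.0, 0.0, 0.0])
H1 = np.array([0.96, 0.0, 0.0])
H2 = 0.96 * np.array([np.cos(np.radians(104.5)), np.sin(np.radians(104.5)), 0.0])
print(round(bond_angle(H1, O, H2), 1))   # -> 104.5
```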
There exists a mathematical relationship among the bond angles for one central atom and four peripheral atoms (labeled 1 through 4), expressed by the following determinant constraint:

$\begin{vmatrix} \cos\theta_{11} & \cos\theta_{12} & \cos\theta_{13} & \cos\theta_{14} \\ \cos\theta_{21} & \cos\theta_{22} & \cos\theta_{23} & \cos\theta_{24} \\ \cos\theta_{31} & \cos\theta_{32} & \cos\theta_{33} & \cos\theta_{34} \\ \cos\theta_{41} & \cos\theta_{42} & \cos\theta_{43} & \cos\theta_{44} \end{vmatrix} = 0.$

This constraint removes one degree of freedom from the choices of (originally) six free bond angles to leave only five choices of bond angles. (The angles θ11, θ22, θ33, and θ44 are always zero, and this relationship can be modified for a different number of peripheral atoms by expanding/contracting the square matrix.)
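For a central atom with four identical peripheral atoms in an ideal tetrahedral arrangement, every off-diagonal angle is arccos(−1/3) ≈ 109.47°, so the determinant above should vanish. A quick numerical check, assuming the Gram-determinant form given above:

```python
import numpy as np

# Matrix of cosines for an ideal tetrahedral centre: cos(theta_ii) = 1 on the
# diagonal, cos(theta_ij) = -1/3 for the six distinct bond-bond angles.
cos_theta = np.full((4, 4), -1.0 / 3.0)
np.fill_diagonal(cos_theta, 1.0)

print(np.linalg.det(cos_theta))   # ~0, up to floating-point round-off
```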
Molecular geometry is determined by the quantum mechanical behavior of the electrons. Using the valence bond approximation this can be understood by the type of bonds between the atoms that make up the molecule. When atoms interact to form a chemical bond, the atomic orbitals of each atom are said to combine in a process called orbital hybridisation. The two most common types of bonds are sigma bonds (usually formed by hybrid orbitals) and pi bonds (formed by unhybridized p orbitals for atoms of main group elements). The geometry can also be understood by molecular orbital theory where the electrons are delocalised.
An understanding of the wavelike behavior of electrons in atoms and molecules is the subject of quantum chemistry.
Isomers
Isomers are types of molecules that share a chemical formula but have different geometries, resulting in different properties:
A pure substance is composed of only one type of isomer of a molecule (all have the same geometrical structure).
Structural isomers have the same chemical formula but different physical arrangements, often forming alternate molecular geometries with very different properties. The atoms are not bonded (connected) together in the same order.
Functional isomers are special kinds of structural isomers, where certain groups of atoms exhibit a special kind of behavior, such as an ether or an alcohol.
Stereoisomers may have many similar physicochemical properties (melting point, boiling point) and at the same time very different biochemical activities. This is because they exhibit a handedness that is commonly found in living systems. One manifestation of this chirality or handedness is that they have the ability to rotate polarized light in different directions.
Protein folding concerns the complex geometries and different isomers that proteins can take.
Types of molecular structure
A bond angle is the geometric angle between two adjacent bonds. Some common shapes of simple molecules include:
Linear: In a linear model, atoms are connected in a straight line. The bond angles are set at 180°. For example, carbon dioxide and nitric oxide have a linear molecular shape.
Trigonal planar: Molecules with the trigonal planar shape are somewhat triangular and in one plane (flat). Consequently, the bond angles are set at 120°. For example, boron trifluoride.
Angular: Angular molecules (also called bent or V-shaped) have a non-linear shape. For example, water (H2O), which has an angle of about 105°. A water molecule has two pairs of bonded electrons and two unshared lone pairs.
Tetrahedral: Tetra- signifies four, and -hedral relates to a face of a solid, so "tetrahedral" literally means "having four faces". This shape is found when there are four bonds all on one central atom, with no extra unshared electron pairs. In accordance with the VSEPR (valence-shell electron pair repulsion) theory, the bond angles between the electron bonds are arccos(−1/3) ≈ 109.47°. For example, methane (CH4) is a tetrahedral molecule.
Octahedral: Octa- signifies eight, and -hedral relates to a face of a solid, so "octahedral" means "having eight faces". The bond angle is 90 degrees. For example, sulfur hexafluoride (SF6) is an octahedral molecule.
Trigonal pyramidal: A trigonal pyramidal molecule has a pyramid-like shape with a triangular base. Unlike the linear and trigonal planar shapes but similar to the tetrahedral orientation, pyramidal shapes require three dimensions in order to fully separate the electrons. Here, there are only three pairs of bonded electrons, leaving one unshared lone pair. Lone pair – bond pair repulsions change the bond angle from the tetrahedral angle to a slightly lower value. For example, ammonia (NH3).
VSEPR table
The bond angles in the table below are ideal angles from the simple VSEPR theory (pronounced "Vesper Theory"), followed by the actual angle for the example given in the following column where this differs. For many cases, such as trigonal pyramidal and bent, the actual angle for the example differs from the ideal angle, and examples differ by different amounts. For example, the angle in H2S (92°) differs from the tetrahedral angle by much more than the angle for H2O (104.48°) does.
3D representations
Line or stick – atomic nuclei are not represented, just the bonds as sticks or lines. As in 2D molecular structures of this type, atoms are implied at each vertex.
Electron density plot – shows the electron density determined either crystallographically or using quantum mechanics rather than distinct atoms or bonds.
Ball and stick – atomic nuclei are represented by spheres (balls) and the bonds as sticks.
Spacefilling models or CPK models (also an atomic coloring scheme in representations) – the molecule is represented by overlapping spheres representing the atoms.
Cartoon – a representation used for proteins where loops, beta sheets, and alpha helices are represented diagrammatically and no atoms or bonds are explicitly represented (e.g. the protein backbone is represented as a smooth pipe).
The greater the number of lone pairs contained in a molecule, the smaller the angles between the atoms of that molecule. The VSEPR theory predicts that lone pairs repel each other, thus pushing the different atoms away from them.
See also
Jemmis mno rules
Lewis structure
Molecular design software
Molecular graphics
Molecular mechanics
Molecular modelling
Molecular symmetry
Molecule editor
Polyhedral skeletal electron pair theory
Quantum chemistry
Ribbon diagram
Styx rule (for boranes)
Topology (chemistry)
References
External links
Molecular Geometry & Polarity Tutorial 3D visualization of molecules to determine polarity.
Molecular Geometry using Crystals 3D structure visualization of molecules using Crystallography.
Exercise
Exercise is physical activity that enhances or maintains fitness and overall health. It is performed for various reasons, including weight loss or maintenance, to aid growth and improve strength, develop muscles and the cardiovascular system, hone athletic skills, improve health, or simply for enjoyment. Many individuals choose to exercise outdoors where they can congregate in groups, socialize, and improve well-being as well as mental health.
In terms of health benefits, 2.5 hours of moderate-intensity exercise per week is usually recommended for reducing the risk of health problems. At the same time, even doing a small amount of exercise is healthier than doing none. Doing as little as an hour and a quarter per week (about 11 minutes a day) of exercise can reduce the risk of early death, cardiovascular disease, stroke, and cancer.
Classification
Physical exercises are generally grouped into three types, depending on the overall effect they have on the human body:
Aerobic exercise is any physical activity that uses large muscle groups and causes the body to use more oxygen than it would while resting. The goal of aerobic exercise is to increase cardiovascular endurance. Examples of aerobic exercise include running, cycling, swimming, brisk walking, skipping rope, rowing, hiking, dancing, playing tennis, continuous training, and long distance running.
Anaerobic exercise, which includes strength and resistance training, can firm, strengthen, and increase muscle mass, as well as improve bone density, balance, and coordination. Examples of strength exercises are push-ups, pull-ups, lunges, squats, and the bench press. Anaerobic exercise also includes weight training, functional training, eccentric training, interval training, sprinting, and high-intensity interval training, which increase short-term muscle strength.
Flexibility exercises stretch and lengthen muscles. Activities such as stretching help to improve joint flexibility and keep muscles limber. The goal is to improve the range of motion which can reduce the chance of injury.
Physical exercise can also include training that focuses on accuracy, agility, power, and speed.
Types of exercise can also be classified as dynamic or static. 'Dynamic' exercises such as steady running, tend to produce a lowering of the diastolic blood pressure during exercise, due to the improved blood flow. Conversely, static exercise (such as weight-lifting) can cause the systolic pressure to rise significantly, albeit transiently, during the performance of the exercise.
Health effects
Physical exercise is important for maintaining physical fitness and can contribute to maintaining a healthy weight, regulating the digestive system, building and maintaining healthy bone density, muscle strength, and joint mobility, promoting physiological well-being, reducing surgical risks, and strengthening the immune system. Some studies indicate that exercise may increase life expectancy and the overall quality of life. People who participate in moderate to high levels of physical exercise have a lower mortality rate compared to individuals who by comparison are not physically active. Moderate levels of exercise have been correlated with preventing aging by reducing inflammatory potential. The majority of the benefits from exercise are achieved with around 3500 metabolic equivalent (MET) minutes per week, with diminishing returns at higher levels of activity. For example, climbing stairs 10 minutes, vacuuming 15 minutes, gardening 20 minutes, running 20 minutes, and walking or bicycling for transportation 25 minutes on a daily basis would together achieve about 3000 MET minutes a week. A lack of physical activity causes approximately 6% of the burden of disease from coronary heart disease, 7% of type 2 diabetes, 10% of breast cancer, and 10% of colon cancer worldwide. Overall, physical inactivity causes 9% of premature mortality worldwide.
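The weekly MET-minute figure quoted above can be reproduced approximately by multiplying each activity's MET value by its daily duration and summing over a week. In the sketch below the MET values are assumptions (typical magnitudes in the spirit of the Compendium of Physical Activities), so the total is only indicative.

```python
# Rough weekly MET-minutes for the example daily routine described above.
# MET values are assumed, illustrative figures -- not taken from this article.
daily_routine = [            # (activity, assumed METs, minutes per day)
    ("climbing stairs", 8.0, 10),
    ("vacuuming",       3.3, 15),
    ("gardening",       3.8, 20),
    ("running",         8.0, 20),
    ("walking or bicycling for transportation", 5.0, 25),
]

weekly_met_minutes = 7 * sum(met * minutes for _, met, minutes in daily_routine)
print(f"~{weekly_met_minutes:.0f} MET-minutes per week")   # on the order of 3000-3500
```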
The American-British writer Bill Bryson wrote: "If someone invented a pill that could do for us all that a moderate amount of exercise achieves, it would instantly become the most successful drug in history."
Fitness
Most people can increase fitness by increasing physical activity levels. Increases in muscle size from resistance training are primarily determined by diet and testosterone, and the degree of improvement varies widely between individuals. This genetic variation in improvement from training is one of the key physiological differences between elite athletes and the larger population. There is evidence that exercising in middle age may lead to better physical ability later in life.
Early motor skills and development is also related to physical activity and performance later in life. Children who are more proficient with motor skills early on are more inclined to be physically active, and thus tend to perform well in sports and have better fitness levels. Early motor proficiency has a positive correlation to childhood physical activity and fitness levels, while less proficiency in motor skills results in a more sedentary lifestyle.
The type and intensity of physical activity performed may have an effect on a person's fitness level. There is some weak evidence that high-intensity interval training may improve a person's VO2 max slightly more than lower intensity endurance training. However, unscientific fitness methods could lead to sports injuries.
Cardiovascular system
The beneficial effect of exercise on the cardiovascular system is well documented. There is a direct correlation between physical inactivity and cardiovascular disease, and physical inactivity is an independent risk factor for the development of coronary artery disease. Low levels of physical exercise increase the risk of cardiovascular diseases mortality.
Children who participate in physical exercise experience greater loss of body fat and increased cardiovascular fitness. Studies have shown that academic stress in youth increases the risk of cardiovascular disease in later years; however, these risks can be greatly decreased with regular physical exercise.
There is a dose-response relationship between the amount of exercise performed, measured in kcal of energy expenditure per week, and all-cause mortality and cardiovascular disease mortality in middle-aged and elderly men. The greatest potential for reduced mortality is seen in sedentary individuals who become moderately active.
Studies have shown that since heart disease is the leading cause of death in women, regular exercise in aging women leads to healthier cardiovascular profiles.
The most beneficial effects of physical activity on cardiovascular disease mortality can be attained through moderate-intensity activity (40–60% of maximal oxygen uptake, depending on age). After a myocardial infarction, survivors who changed their lifestyle to include regular exercise had higher survival rates. Sedentary people are most at risk for mortality from cardiovascular and all other causes. According to the American Heart Association, exercise reduces the risk of cardiovascular diseases, including heart attack and stroke.
Some have suggested that increases in physical exercise might decrease healthcare costs, increase the rate of job attendance, as well as increase the amount of effort women put into their jobs.
Immune system
Although there have been hundreds of studies on physical exercise and the immune system, there is little direct evidence on its connection to illness. Epidemiological evidence suggests that moderate exercise has a beneficial effect on the human immune system; an effect which is modeled in a J curve. Moderate exercise has been associated with a 29% decreased incidence of upper respiratory tract infections (URTI), but studies of marathon runners found that their prolonged high-intensity exercise was associated with an increased risk of infection occurrence. However, another study did not find the effect. Immune cell functions are impaired following acute sessions of prolonged, high-intensity exercise, and some studies have found that athletes are at a higher risk for infections. Studies have shown that strenuous stress for long durations, such as training for a marathon, can suppress the immune system by decreasing the concentration of lymphocytes. The immune systems of athletes and nonathletes are generally similar. Athletes may have a slightly elevated natural killer cell count and cytolytic action, but these are unlikely to be clinically significant.
Vitamin C supplementation has been associated with a lower incidence of upper respiratory tract infections in marathon runners.
Biomarkers of inflammation such as C-reactive protein, which are associated with chronic diseases, are reduced in active individuals relative to sedentary individuals, and the positive effects of exercise may be due to its anti-inflammatory effects. In individuals with heart disease, exercise interventions lower blood levels of fibrinogen and C-reactive protein, an important cardiovascular risk marker. The depression in the immune system following acute bouts of exercise may be one of the mechanisms for this anti-inflammatory effect.
Cancer
A systematic review evaluated 45 studies that examined the relationship between physical activity and cancer survival rates. According to the review, "[there] was consistent evidence from 27 observational studies that physical activity is associated with reduced all-cause, breast cancer–specific, and colon cancer–specific mortality. There is currently insufficient evidence regarding the association between physical activity and mortality for survivors of other cancers." Evidence suggests that exercise may positively affect the quality of life in cancer survivors, including factors such as anxiety, self-esteem and emotional well-being. For people with cancer undergoing active treatment, exercise may also have positive effects on health-related quality of life, such as fatigue and physical functioning. This is likely to be more pronounced with higher intensity exercise.
Exercise may contribute to a reduction of cancer-related fatigue in survivors of breast cancer. Although there is only limited scientific evidence on the subject, people with cancer cachexia are encouraged to engage in physical exercise. Due to various factors, some individuals with cancer cachexia have a limited capacity for physical exercise. Compliance with prescribed exercise is low in individuals with cachexia and clinical trials of exercise in this population often have high drop-out rates.
There is low-quality evidence for an effect of aerobic physical exercises on anxiety and serious adverse events in adults with hematological malignancies. Aerobic physical exercise may result in little to no difference in the mortality, quality of life, or physical functioning. These exercises may result in a slight reduction in depression and reduction in fatigue.
Neurobiological
Depression
Continuous aerobic exercise can induce a transient state of euphoria, colloquially known as a "runner's high" in distance running or a "rower's high" in crew, through the increased biosynthesis of at least three euphoriant neurochemicals: anandamide (an endocannabinoid), β-endorphin (an endogenous opioid), and phenethylamine (a trace amine and amphetamine analog).
Sleep
Preliminary evidence from a 2012 review indicated that physical training for up to four months may increase sleep quality in adults over 40 years of age. A 2010 review suggested that exercise generally improved sleep for most people, and may help with insomnia, but there is insufficient evidence to draw detailed conclusions about the relationship between exercise and sleep. A 2018 systematic review and meta-analysis suggested that exercise can improve sleep quality in people with insomnia.
Libido
One 2013 study found that exercising improved sexual arousal problems related to antidepressant use.
Respiratory system
People who participate in physical exercise experience increased cardiovascular fitness.
There is some level of concern about additional exposure to air pollution when exercising outdoors, especially near traffic.
Mechanism of effects
Skeletal muscle
Resistance training and subsequent consumption of a protein-rich meal promotes muscle hypertrophy and gains in muscle strength by stimulating myofibrillar muscle protein synthesis (MPS) and inhibiting muscle protein breakdown (MPB). The stimulation of muscle protein synthesis by resistance training occurs via phosphorylation of the mechanistic target of rapamycin (mTOR) and subsequent activation of mTORC1, which leads to protein biosynthesis in cellular ribosomes via phosphorylation of mTORC1's immediate targets (the p70S6 kinase and the translation repressor protein 4EBP1). The suppression of muscle protein breakdown following food consumption occurs primarily via increases in plasma insulin. Similarly, increased muscle protein synthesis (via activation of mTORC1) and suppressed muscle protein breakdown (via insulin-independent mechanisms) has also been shown to occur following ingestion of β-hydroxy β-methylbutyric acid.
Aerobic exercise induces mitochondrial biogenesis and an increased capacity for oxidative phosphorylation in the mitochondria of skeletal muscle, which is one mechanism by which aerobic exercise enhances submaximal endurance performance. These effects occur via an exercise-induced increase in the intracellular AMP:ATP ratio, thereby triggering the activation of AMP-activated protein kinase (AMPK) which subsequently phosphorylates peroxisome proliferator-activated receptor gamma coactivator-1α (PGC-1α), the master regulator of mitochondrial biogenesis.
Other peripheral organs
Developing research has demonstrated that many of the benefits of exercise are mediated through the role of skeletal muscle as an endocrine organ. That is, contracting muscles release multiple substances known as myokines which promote the growth of new tissue, tissue repair, and multiple anti-inflammatory functions, which in turn reduce the risk of developing various inflammatory diseases. Exercise reduces levels of cortisol, which causes many health problems, both physical and mental. Endurance exercise before meals lowers blood glucose more than the same exercise after meals. There is evidence that vigorous exercise (90–95% of VO2 max) induces a greater degree of physiological cardiac hypertrophy than moderate exercise (40 to 70% of VO2 max), but it is unknown whether this has any effects on overall morbidity and/or mortality. Both aerobic and anaerobic exercise work to increase the mechanical efficiency of the heart by increasing cardiac volume (aerobic exercise), or myocardial thickness (strength training). Ventricular hypertrophy, the thickening of the ventricular walls, is generally beneficial and healthy if it occurs in response to exercise.
Central nervous system
The effects of physical exercise on the central nervous system may be mediated in part by specific neurotrophic factor hormones released into the blood by muscles, including BDNF, IGF-1, and VEGF.
Public health measures
Community-wide and school campaigns are often used in an attempt to increase a population's level of physical activity. Studies to determine the effectiveness of these types of programs need to be interpreted cautiously as the results vary. There is some evidence that certain types of exercise programmes for older adults, such as those involving gait, balance, co-ordination and functional tasks, can improve balance. Following progressive resistance training, older adults also respond with improved physical function. Brief interventions promoting physical activity may be cost-effective, however this evidence is weak and there are variations between studies.
Environmental approaches appear promising: signs that encourage the use of stairs, as well as community campaigns, may increase exercise levels. The city of Bogotá, Colombia, for example, blocks off sections of its roads on Sundays and holidays to make it easier for its citizens to get exercise. Such pedestrian zones are part of an effort to combat chronic diseases and to maintain a healthy BMI.
Parents can promote physical activity by modelling healthy levels of physical activity or by encouraging physical activity. According to the Centers for Disease Control and Prevention in the United States, children and adolescents should do 60 minutes or more of physical activity each day. Implementing physical exercise in the school system and ensuring an environment in which children can reduce barriers to maintain a healthy lifestyle is essential.
The European Commission's Directorate-General for Education and Culture (DG EAC) has dedicated programs and funds for Health Enhancing Physical Activity (HEPA) projects within its Horizon 2020 and Erasmus+ program, as research showed that too many Europeans are not physically active enough. Financing is available for increased collaboration between players active in this field across the EU and around the world, the promotion of HEPA in the EU and its partner countries, and the European Sports Week. The DG EAC regularly publishes a Eurobarometer on sport and physical activity.
Exercise trends
Worldwide there has been a large shift toward less physically demanding work. This has been accompanied by increasing use of mechanized transportation, a greater prevalence of labor-saving technology in the home, and fewer active recreational pursuits. Personal lifestyle changes, however, can correct the lack of physical exercise.
Research published in 2015 suggests that incorporating mindfulness into physical exercise interventions increases exercise adherence and self-efficacy, and also has positive effects both psychologically and physiologically.
Social and cultural variation
Exercising looks different in every country, as do the motivations behind exercising. In some countries, people exercise primarily indoors (such as at home or health clubs), while in others, people primarily exercise outdoors. People may exercise for personal enjoyment, health and well-being, social interactions, competition or training, etc. These differences could potentially be attributed to a variety of reasons including geographic location and social tendencies.
In Colombia, for example, citizens value and celebrate the outdoor environments of their country. In many instances, they use outdoor activities as social gatherings to enjoy nature and their communities. In Bogotá, Colombia, a 70-mile stretch of road known as the Ciclovía is shut down each Sunday for bicyclists, runners, rollerbladers, skateboarders and other exercisers to work out and enjoy their surroundings.
Similarly to Colombia, citizens of Cambodia tend to exercise socially outside. In this country, public gyms have become quite popular. People will congregate at these outdoor gyms not only to use the public facilities, but also to organize aerobics and dance sessions, which are open to the public.
Sweden has also begun developing outdoor gyms, called utegym. These gyms are free to the public and are often placed in beautiful, picturesque environments. People will swim in rivers, use boats, and run through forests to stay healthy and enjoy the natural world around them. This works particularly well in Sweden due to its geographical location.
Exercise in some areas of China, particularly among those who are retired, seems to be socially grounded. In the mornings, square dances are held in public parks; these gatherings may include Latin dancing, ballroom dancing, tango, or even the jitterbug. Dancing in public allows people to interact with those with whom they would not normally interact, allowing for both health and social benefits.
These sociocultural variations in physical exercise show how people in different geographic locations and social climates have varying motivations and methods of exercising. Physical exercise can improve health and well-being, as well as enhance community ties and appreciation of natural beauty.
Nutrition and recovery
Proper nutrition is as important to health as exercise. When exercising, it becomes even more important to have a good diet to ensure that the body has the correct ratio of macronutrients while providing ample micronutrients, to aid the body with the recovery process following strenuous exercise.
Active recovery is recommended after participating in physical exercise because it removes lactate from the blood more quickly than inactive recovery. Removing lactate from circulation allows for an easy decline in body temperature, which can also benefit the immune system, as an individual may be vulnerable to minor illnesses if the body temperature drops too abruptly after physical exercise. Exercise physiologists recommend the "4-Rs framework":
Rehydration – replacing any fluid and electrolyte deficits
Refuel – consuming carbohydrates to replenish muscle and liver glycogen
Repair – consuming high-quality protein sources with additional supplementation of creatine monohydrate
Rest – getting long and high-quality sleep after exercise, additionally improved by consuming casein proteins, antioxidant-rich fruits, and high-glycemic-index meals
Exercise has an effect on appetite, but whether it increases or decreases appetite varies from individual to individual, and is affected by the intensity and duration of the exercise.
Excessive exercise
History
The benefits of exercise have been known since antiquity. Dating back to 65 BCE, it was Marcus Cicero, Roman politician and lawyer, who stated: "It is exercise alone that supports the spirits, and keeps the mind in vigor." Exercise was also seen to be valued later in history during the Early Middle Ages as a means of survival by the Germanic peoples of Northern Europe.
More recently, exercise was regarded as a beneficial force in the 19th century. In 1858, Archibald MacLaren opened a gymnasium at the University of Oxford and instituted a training regimen for Major Frederick Hammersley and 12 non-commissioned officers. This regimen was assimilated into the training of the British Army, which formed the Army Gymnastic Staff in 1860 and made sport an important part of military life. Several mass exercise movements were started in the early twentieth century as well. The first and most significant of these in the UK was the Women's League of Health and Beauty, founded in 1930 by Mary Bagot Stack, that had 166,000 members in 1937.
The link between physical health and exercise (or lack of it) was further established in 1949 and reported in 1953 by a team led by Jerry Morris. Morris noted that men of similar social class and occupation (bus conductors versus bus drivers) had markedly different rates of heart attacks, depending on the level of exercise they got: bus drivers had a sedentary occupation and a higher incidence of heart disease, while bus conductors were forced to move continually and had a lower incidence of heart disease.
Other animals
Animals like chimpanzees, orangutans, gorillas and bonobos, which are closely related to humans, engage, without ill effect, in considerably less physical activity than is required for human health, raising the question of how this is biochemically possible.
Studies of animals indicate that physical activity may be more adaptable than changes in food intake to regulate energy balance.
Mice having access to activity wheels engaged in voluntary exercise and increased their propensity to run as adults. Artificial selection of mice for voluntary exercise levels showed significant heritability, with "high-runner" breeds having enhanced aerobic capacity, hippocampal neurogenesis, and skeletal muscle morphology.
The effects of exercise training appear to be heterogeneous across non-mammalian species. As examples, exercise training of salmon showed minor improvements of endurance, and a forced swimming regimen of yellowtail amberjack and rainbow trout accelerated their growth rates and altered muscle morphology favorable for sustained swimming. Crocodiles, alligators, and ducks showed elevated aerobic capacity following exercise training. No effect of endurance training was found in most studies of lizards, although one study did report a training effect. In lizards, sprint training had no effect on maximal exercise capacity, and muscular damage from over-training occurred following weeks of forced treadmill exercise.
See also
Active living
Behavioural change theories
Bodybuilding
Exercise hypertension
Exercise intensity
Exercise intolerance
Exercise-induced anaphylaxis
Exercise-induced asthma
Exercise-induced nausea
Kinesiology
Metabolic equivalent
Neurobiological effects of physical exercise
Non-exercise associated thermogenesis
Supercompensation
Unilateral training
Warming up
References
External links
Adult Compendium of Physical Activities – a website containing lists of Metabolic Equivalent of Task (MET) values for a number of physical activities, based upon
MedLinePlus Topic on Exercise and Physical Fitness
Physical activity and the environment – guidance on the promotion and creation of physical environments that support increased levels of physical activity.
Science Daily's reference on physical exercise
Cosmogony
Cosmogony is any model concerning the origin of the cosmos or the universe.
Overview
Scientific theories
In astronomy, cosmogony is the study of the origin of particular astrophysical objects or systems, and is most commonly used in reference to the origin of the universe, the Solar System, or the Earth–Moon system. The prevalent cosmological model of the early development of the universe is the Big Bang theory.
Sean M. Carroll, who specializes in theoretical cosmology and field theory, describes two competing explanations for the origins of the singularity, which is the center of a space in which some characteristic becomes limitless (one example is the singularity of a black hole, where gravity is the characteristic that becomes infinite).
It is generally thought that the universe began at a point of singularity, but among modern cosmologists and physicists a singularity usually represents a lack of understanding, and in the case of cosmology and cosmogony it requires a theory of quantum gravity to understand. When the universe started to expand, what is colloquially known as the Big Bang occurred, which on this account marks the beginning of the universe. The other explanation, held by proponents such as Stephen Hawking, asserts that time did not exist prior to its emergence along with the universe. This assertion implies that the universe does not have a beginning, as time did not exist "prior" to the universe. Hence, it is unclear whether properties such as space or time emerged with the singularity and the known universe.
Despite the research, there is currently no theoretical model that explains the earliest moments of the universe's existence (during the Planck epoch) due to a lack of a testable theory of quantum gravity. Nevertheless, researchers of string theory and its extensions (such as M-theory), and of loop quantum cosmology, like Barton Zwiebach and Washington Taylor, have proposed solutions to assist in the explanation of the universe's earliest moments. Cosmogonists have only tentative theories for the early stages of the universe and its beginning. The proposed theoretical scenarios include string theory, M-theory, the Hartle–Hawking initial state, emergent Universe, string landscape, cosmic inflation, the Big Bang, and the ekpyrotic universe. Some of these proposed scenarios, like string theory, are mutually compatible, whereas others are not.
Mythology
In mythology, creation or cosmogonic myths are narratives describing the beginning of the universe or cosmos.
Some methods of the creation of the universe in mythology include:
the will or action of a supreme being or beings,
the process of metamorphosis,
the copulation of female and male deities,
from chaos,
or via a cosmic egg.
Creation myths may be etiological, attempting to provide explanations for the origin of the universe. For instance, Eridu Genesis, the oldest known creation myth, contains an account of the creation of the world in which the universe was created out of a primeval sea (Abzu). Creation myths vary, but they may share similar deities or symbols. For instance, the ruler of the gods in Greek mythology, Zeus, is similar to the ruler of the gods in Roman mythology, Jupiter. Another example is the ruler of the gods in Tagalog mythology, Bathala, who is similar to various rulers of certain pantheons within Philippine mythology such as the Bisaya's Kaptan.
Compared with cosmology
In the humanities, the distinction between cosmogony and cosmology is blurred. For example, in theology, the cosmological argument for the existence of God (pre-cosmic cosmogonic bearer of personhood) is an appeal to ideas concerning the origin of the universe and is thus cosmogonical. Some religious cosmogonies have an impersonal first cause (for example Taoism).
However, in astronomy, cosmogony can be distinguished from cosmology, which studies the universe and its existence, but does not necessarily inquire into its origins. There is therefore a scientific distinction between cosmological and cosmogonical ideas. Physical cosmology is the science that attempts to explain all observations relevant to the development and characteristics of the universe on its largest scale. Some questions regarding the behaviour of the universe have been described by some physicists and cosmologists as being extra-scientific or metaphysical. Attempted solutions to such questions may include the extrapolation of scientific theories to untested regimes (such as the Planck epoch), or the inclusion of philosophical or religious ideas.
See also
Why there is anything at all
References
External links
Creation myths
Greek words and phrases
Natural philosophy
Origins
Physical cosmology
Concepts in astronomy | 0.765682 | 0.996661 | 0.763125 |
Castigliano's method | Castigliano's method, named after Carlo Alberto Castigliano, is a method for determining the displacements of a linear-elastic system based on the partial derivatives of the energy. He is known for his two theorems. The basic concept may be easy to understand by recalling that a change in energy is equal to the causing force times the resulting displacement. Therefore, the causing force is equal to the change in energy divided by the resulting displacement. Alternatively, the resulting displacement is equal to the change in energy divided by the causing force. Partial derivatives are needed to relate causing forces and resulting displacements to the change in energy.
Examples
For a thin, straight cantilever beam of length $L$ with a load $P$ at the free end, the displacement $\delta$ at the end can be found by Castigliano's second theorem:
$$\delta = \frac{\partial U}{\partial P} = \int_0^L \frac{M(x)}{EI}\,\frac{\partial M(x)}{\partial P}\,dx,$$
where $E$ is Young's modulus, $I$ is the second moment of area of the cross-section, and $M(x) = Px$ is the expression for the internal moment at a point at distance $x$ from the free end. The integral evaluates to
$$\delta = \int_0^L \frac{(Px)(x)}{EI}\,dx = \frac{PL^3}{3EI}.$$
The result is the standard formula given for cantilever beams under end loads.
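As a quick check of this example, the strain energy and its derivative can be evaluated symbolically. The following is a minimal sketch, assuming SymPy is available; the symbol names are illustrative.

```python
# Verify the cantilever end deflection via Castigliano's second theorem.
# U = integral of M(x)^2 / (2 E I) over the beam, with M(x) = P x measured
# from the free end; the deflection under P is dU/dP.
import sympy as sp

x, L, P, E, I = sp.symbols('x L P E I', positive=True)

M = P * x                                          # internal bending moment
U = sp.integrate(M**2 / (2 * E * I), (x, 0, L))    # strain energy of bending
delta = sp.diff(U, P)                              # Castigliano's second theorem

print(sp.simplify(delta))                          # -> L**3*P/(3*E*I)
```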
Castigliano's theorems apply if the strain energy is finite. This is true if $m - s > d/2$, where $m$ is the order of the energy (the highest derivative appearing in the energy), $s$ is the index of the Dirac delta (single force: $s = 0$) and $d$ is the dimension of the space. To second-order equations ($m = 1$) belong two Dirac deltas, the force ($s = 0$) and the dislocation ($s = 1$); to fourth-order equations ($m = 2$) belong four Dirac deltas: force, moment, bend and dislocation ($s = 0, 1, 2, 3$).
Example: If a plate ($m = 1$, $d = 2$) is loaded with a single force ($s = 0$), the inequality is not valid, $1 \not> 1$, and it is also not valid in $d = 3$, $1 \not> 3/2$. Nor does it apply to a membrane (Laplace, $m = 1$) or a Reissner–Mindlin plate ($m = 1$). In general Castigliano's theorems do not apply to 2D and 3D problems. The exception is the Kirchhoff plate ($m = 2$, $d = 2$), since $2 - 0 = 2 > 1$. But a moment ($s = 1$) causes the energy of a Kirchhoff plate to overflow, since $2 - 1 = 1 \not> 1$. In 1D problems the strain energy is finite if $m - s > 1/2$.
Menabrea's theorem is subject to the same restriction: the inequality $m - s > d/2$ must hold, where $s$ is now the order of the support reaction (single force: $s = 0$, moment: $s = 1$). Except for a Kirchhoff plate with a single force as support reaction ($s = 0$), it is generally not valid in 2D and 3D, because the presence of point supports results in infinitely large energy.
External links
Carlo Alberto Castigliano
Castigliano's method: some examples
References
Beam theory
Eponymous theorems of physics
Structural analysis | 0.779065 | 0.979533 | 0.763119 |
Science, technology, engineering, and mathematics | Science, technology, engineering, and mathematics (STEM) is an umbrella term used to group together the distinct but related technical disciplines of science, technology, engineering, and mathematics. The term is typically used in the context of education policy or curriculum choices in schools. It has implications for workforce development, national security concerns (as a shortage of STEM-educated citizens can reduce effectiveness in this area), and immigration policy, with regard to admitting foreign students and tech workers.
There is no universal agreement on which disciplines are included in STEM; in particular, whether or not the science in STEM includes social sciences, such as psychology, sociology, economics, and political science. In the United States, these are typically included by organizations such as the National Science Foundation (NSF), the Department of Labor's O*Net online database for job seekers, and the Department of Homeland Security. In the United Kingdom, the social sciences are categorized separately and are instead grouped with humanities and arts to form another counterpart acronym HASS (humanities, arts, and social sciences), rebranded in 2020 as SHAPE (social sciences, humanities and the arts for people and the economy). Some sources also use HEAL (health, education, administration, and literacy) as the counterpart of STEM.
Terminology
History
Previously referred to as SMET by the NSF, in the early 1990s the acronym STEM was used by a variety of educators, including Charles E. Vela, the founder and director of the Center for the Advancement of Hispanics in Science and Engineering Education (CAHSEE). Moreover, the CAHSEE started a summer program for talented under-represented students in the Washington, D.C., area called the STEM Institute. Based on the program's recognized success and his expertise in STEM education, Charles Vela was asked to serve on numerous NSF and Congressional panels in science, mathematics, and engineering education; it is through this manner that NSF was first introduced to the acronym STEM. One of the first NSF projects to use the acronym was STEMTEC, the Science, Technology, Engineering, and Math Teacher Education Collaborative at the University of Massachusetts Amherst, which was founded in 1998.
In 2001, at the urging of Dr. Peter Faletra, the Director of Workforce Development for Teachers and Scientists at the Office of Science, the acronym was adopted by Rita Colwell and other science administrators in the National Science Foundation (NSF). The Office of Science was also an early adopter of the STEM acronym.
Other variations
A-STEM (arts, science, technology, engineering, and mathematics); more focused and based on humanism and arts.
eSTEM (environmental STEM)
GEMS (girls in engineering, math, and science); used for programs to encourage women to enter these fields.
MINT (mathematics, informatics, natural sciences, and technology)
SHTEAM (science, humanities, technology, engineering, arts, and mathematics)
SMET (science, mathematics, engineering, and technology); previous name
STEAM (science, technology, engineering, arts, and mathematics)
STEAM (science, technology, engineering, agriculture, and mathematics); add agriculture
STEAM (science, technology, engineering, and applied mathematics); has more focus on applied mathematics
STEEM (science, technology, engineering, economics, and mathematics); adds economics as a field
STEMIE (science, technology, engineering, mathematics, invention, and entrepreneurship); adds Inventing and Entrepreneurship as a means to apply STEM to real-world problem-solving and markets.
STEMM (science, technology, engineering, mathematics, and medicine)
STM (scientific, technical, and mathematics or science, technology, and medicine)
STREAM (science, technology, robotics, engineering, arts, and mathematics); adds robotics and arts as fields
STREAM (science, technology, reading, engineering, arts, and mathematics); adds reading and arts
STREAM (science, technology, recreation, engineering, arts, and mathematics); adds recreation and arts
Geographic distribution
Australia
The Australian Curriculum, Assessment, and Reporting Authority 2015 report entitled National STEM School Education Strategy stated that "A renewed national focus on STEM in school education is critical to ensuring that all young Australians are equipped with the necessary STEM skills and knowledge that they will need to succeed." Its goals were to:
"Ensure all students finish school with strong foundational knowledge in STEM and related skills"
"Ensure that students are inspired to take on more challenging STEM subjects"
Events and programs meant to help develop STEM in Australian schools include the Victorian Model Solar Vehicle Challenge, the Maths Challenge (Australian Mathematics Trust), Go Girl Go Global and the Australian Informatics Olympiad.
Canada
Canada ranks 12th out of 16 peer countries in the percentage of its graduates who studied in STEM programs, with 21.2%, a number higher than the United States, but lower than France, Germany, and Austria. The peer country with the greatest proportion of STEM graduates, Finland, has over 30% of its university graduates coming from science, mathematics, computer science, and engineering programs.
SHAD is an annual Canadian summer enrichment program for high-achieving high school students in July. The program focuses on academic learning, particularly in STEAM fields.
Scouts Canada has taken similar measures to their American counterpart to promote STEM fields to youth. Their STEM program began in 2015.
In 2011 Canadian entrepreneur and philanthropist Seymour Schulich established the Schulich Leader Scholarships, $100 million in $60,000 scholarships for students beginning their university education in a STEM program at 20 institutions across Canada. Each year 40 Canadian students would be selected to receive the award, two at each institution, with the goal of attracting gifted youth into the STEM fields. The program also supplies STEM scholarships to five participating universities in Israel.
China
To promote STEM in China, the Chinese government issued a guideline in 2016 on national innovation-driven development strategy, "instructing that by 2020, China should become an innovative country; by 2030, it should be at the forefront of innovative countries; and by 2050, it should become a technology innovation power."
"[I]n May 2018, the launching ceremony and press conference for the 2029 Action Plan for China's STEM Education was held in Beijing, China. This plan aims to allow as many students to benefit from STEM education as possible and equip all students with scientific thinking and the ability to innovate." "In response to encouraging policies by the government, schools in both public and private sectors around the country have begun to carry out STEM education programs."
"However, to effectively implement STEM curricula, full-time teachers specializing in STEM education and relevant content to be taught are needed." Currently, "China lacks qualified STEM teachers and a training system is yet to be established."
Several Chinese cities have taken bold measures to add programming as a compulsory course for elementary and middle school students. This is the case of the city of Chongqing. However, most students from small and medium-sized cities have not been exposed to the concept of STEM until they enter college.
Europe
Several European projects have promoted STEM education and careers in Europe. For instance, Scientix is a European cooperation of STEM teachers, education scientists, and policymakers. The SciChallenge project used a social media contest and student-generated content to increase the motivation of pre-university students for STEM education and careers. The Erasmus programme project AutoSTEM used automata to introduce STEM subjects to very young children.
Finland
The LUMA Center is the leading advocate for STEM-oriented education. Its aim is to promote the instruction and research of natural sciences, mathematics, computer science, and technology across all educational levels in the country. In Finnish, luma stands for "luonnontieteellis-matemaattinen" (literally "scientific-mathematical"). The abbreviation is more or less a direct translation of STEM, with engineering fields included by association; unlike STEM, however, it is also a portmanteau of lu and ma. To address the decline in interest in learning the areas of science, the Finnish National Board of Education launched the LUMA scientific education development program. The project's main goal was to raise the level of Finnish education, enhance students' competencies, improve educational practices, and foster interest in science. The initiative led to the establishment of 13 LUMA centers at universities across Finland, supervised by the LUMA Center.
France
The name of STEM in France is industrial engineering sciences (sciences industrielles or sciences de l'ingénieur). The STEM organization in France is the association UPSTI.
Hong Kong
STEM education has not been promoted among the local schools in Hong Kong until recent years. In November 2015, the Education Bureau of Hong Kong released a document titled Promotion of STEM Education, which proposes strategies and recommendations for promoting STEM education.
India
India is second only to China in STEM graduates, with roughly one STEM graduate per 52 people. The total number of fresh STEM graduates was 2.6 million in 2016. STEM graduates have been contributing to the Indian economy with well-paid salaries locally and abroad for the past two decades. The turnaround of the Indian economy, with comfortable foreign exchange reserves, is mainly attributed to the skills of its STEM graduates. In India, women make up an impressive 43% of STEM graduates, the highest percentage worldwide. However, they hold only 14% of STEM-related jobs. Additionally, among the 280,000 scientists and engineers working in research and development institutes in the country, women represent a mere 14%.
Nigeria
In Nigeria, the Association of Professional Women Engineers of Nigeria (APWEN) has involved girls between the ages of 12 and 19 in science-based courses so that they will pursue science-based courses in higher institutions of learning. The National Science Foundation (NSF) in Nigeria has made conscious efforts to encourage girls to innovate, invent, and build through the "invent it, build it" program sponsored by NNPC.
Pakistan
STEM subjects are taught in Pakistan as part of electives taken in the 9th and 10th grades, culminating in Matriculation exams. These electives are pure sciences (Physics, Chemistry, Biology), mathematics (Physics, Chemistry, Maths), and computer science (Physics, Chemistry, Computer Science). STEM subjects are also offered as electives taken in the 11th and 12th grades, more commonly referred to as first and second year, culminating in Intermediate exams. These electives are FSc pre-medical (Physics, Chemistry, Biology), FSc pre-engineering (Physics, Chemistry, Maths), and ICS (Physics/Statistics, Computer Science, Maths). These electives are intended to aid students in pursuing STEM-related careers in the future by preparing them for the study of these courses at university.
A STEM education project has been approved by the government to establish STEM labs in public schools. The Ministry of Information Technology and Telecommunication has collaborated with Google to launch Pakistan's first grassroots-level Coding Skills Development Program, based on Google's CS First Program, a global initiative aimed at developing coding skills in children. The program aims to develop applied coding skills using gamification techniques for children between the ages of 9 and 14.
The KPITBs Early Age Programming initiative, established in the province of Khyber Pakhtunkhwa, has been successfully introduced in 225 Elementary and Secondary Schools. Many private organizations are working in Pakistan to introduce STEM education in schools.
Philippines
In the Philippines, STEM is a two-year program and strand used in Senior High School (Grades 11 and 12), assigned by the Department of Education (DepEd). The STEM strand is under the Academic Track, which also includes other strands like ABM, HUMSS, and GAS. The purpose of the STEM strand is to educate students in the fields of science, technology, engineering, and mathematics in an interdisciplinary and applied approach, and to give students advanced knowledge and application in the field. After completing the program, students earn a Diploma in Science, Technology, Engineering, and Mathematics. Some colleges and universities require students applying for STEM degrees (such as medicine, engineering, and computer studies) to be STEM graduates; if not, they need to enter a bridging program.
Qatar
In Qatar, AL-Bairaq is an outreach program for high-school students with a curriculum that focuses on STEM, run by the Center for Advanced Materials (CAM) at Qatar University. Each year around 946 students, from about 40 high schools, participate in AL-Bairaq competitions. AL-Bairaq makes use of project-based learning, encourages students to solve authentic problems, and requires them to work with each other as a team to build real solutions. Research has so far shown positive results for the program.
Singapore
STEM is part of the Applied Learning Programme (ALP) that the Singapore Ministry of Education (MOE) has been promoting since 2013, and currently, all secondary schools have such a program. It is expected that by 2023, all primary schools in Singapore will have an ALP. There are no tests or exams for ALPs. The emphasis is for students to learn through experimentation – they try, fail, try, learn from it, and try again. The MOE actively supports schools with ALPs to further enhance and strengthen their capabilities and programs that nurture innovation and creativity.
The Singapore Science Centre established a STEM unit in January 2014, dedicated to igniting students' passion for STEM. To further enrich students' learning experiences, their Industrial Partnership Programme (IPP) creates opportunities for students to get early exposure to real-world STEM industries and careers. Curriculum specialists and STEM educators from the Science Centre will work hand-in-hand with teachers to co-develop STEM lessons, provide training to teachers, and co-teach such lessons to provide students with early exposure and develop their interest in STEM.
Thailand
In 2017, Thai Education Minister Teerakiat Jareonsettasin said after the 49th Southeast Asia Ministers of Education Organisation (SEAMEO) Council Conference in Jakarta that the meeting approved the establishment of two new SEAMEO regional centers in Thailand. One would be the STEM Education Centre, while the other would be a Sufficient Economy Learning Centre.
Teerakiat said that the Thai government had already allocated Bt250 million over five years for the new STEM center. The center will be the regional institution responsible for STEM education promotion. It will not only set up policies to improve STEM education, but it will also be the center for information and experience sharing among the member countries and education experts. According to him, "This is the first SEAMEO regional center for STEM education, as the existing science education center in Malaysia only focuses on the academic perspective. Our STEM education center will also prioritize the implementation and adaptation of science and technology."
The Institute for the Promotion of Teaching Science and Technology has initiated a STEM Education Network. Its goals are to promote integrated learning activities, improve student creativity and application of knowledge, and establish a network of organisations and personnel for the promotion of STEM education in the country.
Turkey
The Turkish STEM Education Task Force (or FeTeMM—Fen Bilimleri, Teknoloji, Mühendislik ve Matematik) is a coalition of academics and teachers working to increase the quality of education in STEM fields rather than focusing on increasing the number of STEM graduates.
United States
In the United States, the acronym began to be used in education and immigration debates in initiatives to begin to address the perceived lack of qualified candidates for high-tech jobs. It also addresses concern that the subjects are often taught in isolation, instead of as an integrated curriculum. Maintaining a citizenry that is well-versed in the STEM fields is a key portion of the public education agenda of the United States. The acronym has been widely used in the immigration debate regarding access to United States work visas for immigrants who are skilled in these fields. It has also become commonplace in education discussions as a reference to the shortage of skilled workers and inadequate education in these areas. The term tends not to refer to the non-professional and less visible sectors of the fields, such as electronics assembly line work.
National Science Foundation
Many organizations in the United States follow the guidelines of the National Science Foundation on what constitutes a STEM field. The NSF uses a broad definition of STEM subjects that includes subjects in the fields of chemistry, computer and information technology science, engineering, geoscience, life sciences, mathematical sciences, physics and astronomy, social sciences (anthropology, economics, psychology, and sociology), and STEM education and learning research.
The NSF is the only American federal agency whose mission includes support for all fields of fundamental science and engineering, except for medical sciences. Its disciplinary program areas include scholarships, grants, and fellowships in fields such as biological sciences, computer and information science and engineering, education and human resources, engineering, environmental research and education, geoscience, international science and engineering, mathematical and physical sciences, social, behavioral and economic sciences, cyberinfrastructure, and polar programs.
Immigration policy
Although many organizations in the United States follow the guidelines of the National Science Foundation on what constitutes a STEM field, the United States Department of Homeland Security (DHS) has its own functional definition used for immigration policy. In 2012, DHS or ICE announced an expanded list of STEM-designated degree programs that qualify eligible graduates on student visas for an optional practical training (OPT) extension. Under the OPT program, international students who graduate from colleges and universities in the United States can stay in the country and receive up to twelve months of training through work experience. Students who graduate from a designated STEM degree program can stay for an additional seventeen months on an OPT STEM extension.
As of 2023, the U.S. faces a shortage of high-skilled workers in STEM, and foreign talents must navigate difficult hurdles to immigrate. Meanwhile, some other countries, such as Australia, Canada, and the United Kingdom, have introduced programs to attract talent at the expense of the United States. In the case of China, the United States risks losing its edge over a strategic rival.
Education
By cultivating an interest in the natural and social sciences in preschool or immediately following school entry, the chances of STEM success in high school can be greatly improved.
STEM supports broadening the study of engineering within each of the other subjects and beginning engineering at younger grades, even elementary school. It also brings STEM education to all students rather than only the gifted programs. In his 2012 budget, President Barack Obama renamed and broadened the "Mathematics and Science Partnership (MSP)" to award block grants to states for improving teacher education in those subjects.
In the 2015 run of the international assessment test the Program for International Student Assessment (PISA), American students came out 35th in mathematics, 24th in reading, and 25th in science, out of 109 countries. The United States also ranked 29th in the percentage of 24-year-olds with science or mathematics degrees.
STEM education often uses new technologies such as 3D printers to encourage interest in STEM fields. STEM education can also leverage the combination of new technologies, such as photovoltaics and environmental sensors, with old technologies such as composting systems and irrigation within land lab environments.
In 2006 the United States National Academies expressed their concern about the declining state of STEM education in the United States. Its Committee on Science, Engineering, and Public Policy developed a list of 10 actions. Their top three recommendations were to:
Increase America's talent pool by improving K–12 science and mathematics education
Strengthen the skills of teachers through additional training in science, mathematics, and technology
Enlarge the pipeline of students prepared to enter college and graduate with STEM degrees
The National Aeronautics and Space Administration also has implemented programs and curricula to advance STEM education to replenish the pool of scientists, engineers, and mathematicians who will lead space exploration in the 21st century.
Individual states, such as California, have run pilot after-school STEM programs to learn what the most promising practices are and how to implement them to increase the chance of student success. Another state to invest in STEM education is Florida, where Florida Polytechnic University, Florida's first public university for engineering and technology dedicated to science, technology, engineering, and mathematics (STEM), was established. During school, STEM programs have been established for many districts throughout the U.S. Some states include New Jersey, Arizona, Virginia, North Carolina, Texas, and Ohio.
Continuing STEM education has expanded to the post-secondary level through masters programs such as the University of Maryland's STEM Program as well as the University of Cincinnati.
Racial gap in STEM fields
In the United States, the National Science Foundation found that the average science score on the 2011 National Assessment of Educational Progress was lower for black and Hispanic students than for white, Asian, and Pacific Islanders. In 2011, eleven percent of the U.S. workforce was black, while only six percent of STEM workers were black. Though STEM in the U.S. has typically been dominated by white males, there have been considerable efforts to create initiatives to make STEM a more racially and gender-diverse field. Some evidence suggests that all students, including black and Hispanic students, have a better chance of earning a STEM degree if they attend a college or university at which their entering academic credentials are at least as high as the average student's.
Gender gaps in STEM
Although women make up 47% of the workforce in the U.S., they hold only 24% of STEM jobs. Research suggests that exposing girls to female inventors at a young age has the potential to reduce the gender gap in technical STEM fields by half. Campaigns from organizations like the National Inventors Hall of Fame aimed to achieve a 50/50 gender balance in their youth STEM programs by 2020. The gender gap in Zimbabwe's STEM fields is also significant, with only 28.79% of women holding STEM degrees compared to 71.21% of men.
American Competitiveness Initiative
In the State of the Union Address on January 31, 2006, President George W. Bush announced the American Competitiveness Initiative. Bush proposed the initiative to address shortfalls in federal government support of educational development and progress at all academic levels in the STEM fields. In detail, the initiative called for significant increases in federal funding for advanced R&D programs (including a doubling of federal funding support for advanced research in the physical sciences through DOE) and an increase in U.S. higher education graduates within STEM disciplines.
The NASA Means Business competition, sponsored by the Texas Space Grant Consortium, furthers that goal. College students compete to develop promotional plans to encourage students in middle and high school to study STEM subjects and to inspire professors in STEM fields to involve their students in outreach activities that support STEM education.
The National Science Foundation has numerous programs in STEM education, including some for K–12 students such as the ITEST Program that supports The Global Challenge Award ITEST Program. STEM programs have been implemented in some Arizona schools. They implement higher cognitive skills for students and enable them to inquire and use techniques used by professionals in the STEM fields.
Project Lead The Way (PLTW) is a provider of STEM education curricular programs to middle and high schools in the United States. Programs include a high school engineering curriculum called Pathway To Engineering, a high school biomedical sciences program, and a middle school engineering and technology program called Gateway To Technology. PLTW programs have been endorsed by President Barack Obama and United States Secretary of Education Arne Duncan as well as various state, national, and business leaders.
STEM Education Coalition
The Science, Technology, Engineering, and Mathematics (STEM) Education Coalition works to support STEM programs for teachers and students at the U.S. Department of Education, the National Science Foundation, and other agencies that offer STEM-related programs. Activity of the STEM Coalition seems to have slowed since September 2008.
Scouting
In 2012, the Boy Scouts of America began handing out awards, titled NOVA and SUPERNOVA, for completing specific requirements appropriate to the scouts' program level in each of the four main STEM areas. The Girl Scouts of the USA has similarly incorporated STEM into their program through the introduction of merit badges such as "Naturalist" and "Digital Art".
SAE is an international organization and provider specializing in supporting education, award, and scholarship programs for STEM subjects, from pre-K to college degrees. It also promotes scientific and technological innovation.
Department of Defense programs
eCybermission is a free, web-based science, mathematics, and technology competition for students in grades six through nine sponsored by the U.S. Army. Each webinar is focused on a different step of the scientific method and is presented by an experienced eCybermission CyberGuide. CyberGuides are military and civilian volunteers with a strong background in STEM and STEM education, who can provide insight into science, technology, engineering, and mathematics to students and team advisers.
STARBASE is an educational program, sponsored by the Office of the Assistant Secretary of Defense for Reserve Affairs. Students interact with military personnel to explore careers and make connections with the "real world". The program provides students with 20–25 hours of experience at the National Guard, Navy, Marines, Air Force Reserve, and Air Force bases across the nation.
SeaPerch is an underwater robotics program that trains teachers to teach their students how to build an underwater remotely operated vehicle (ROV) in an in-school or out-of-school setting. Students build the ROV from a kit composed of low-cost, easily accessible parts, following a curriculum that teaches basic engineering and science concepts with a marine engineering theme.
NASA
NASAStem is a program of the U.S. space agency NASA to increase diversity within its ranks, including age, disability, and gender as well as race/ethnicity.
Legislation
The America COMPETES Act (P.L. 110–69) became law on August 9, 2007. It is intended to increase the nation's investment in science and engineering research and in STEM education from kindergarten to graduate school and postdoctoral education. The act authorizes funding increases for the National Science Foundation, National Institute of Standards and Technology laboratories, and the Department of Energy (DOE) Office of Science over FY2008–FY2010. Robert Gabrys, Director of Education at NASA's Goddard Space Flight Center, articulated success as increased student achievement, early expression of student interest in STEM subjects, and student preparedness to enter the workforce.
Jobs
In November 2012 the White House announcement before the congressional vote on the STEM Jobs Act put President Obama in opposition to many of the Silicon Valley firms and executives who bankrolled his re-election campaign. The Department of Labor identified 14 sectors that are "projected to add substantial numbers of new jobs to the economy or affect the growth of other industries or are being transformed by technology and innovation requiring new sets of skills for workers." The identified sectors were as follows: advanced manufacturing, automotive, construction, financial services, geospatial technology, homeland security, information technology, transportation, aerospace, biotechnology, energy, healthcare, hospitality, and retail.
The Department of Commerce notes STEM fields careers are some of the best-paying and have the greatest potential for job growth in the early 21st century. The report also notes that STEM workers play a key role in the sustained growth and stability of the U.S. economy, and training in STEM fields generally results in higher wages, whether or not they work in a STEM field.
In 2015, there were around 9.0 million STEM jobs in the United States, representing 6.1% of American employment. STEM jobs were increasing by around 9% per year. The Brookings Institution found that the demand for competent technology graduates will surpass the number of capable applicants by at least one million individuals.
According to Pew Research Center, a typical STEM worker earns two-thirds more than those employed in other fields.
Recent progress
According to the 2014 US census "74 percent of those who have a bachelor's degree in science, technology, engineering and math — commonly referred to as STEM — are not employed in STEM occupations."
In September 2017, several large American technology firms collectively pledged to donate $300 million for computer science education in the U.S.
Pew Research Center findings released in 2018 revealed that Americans identified several issues that hamper STEM education, including unconcerned parents, disinterested students, obsolete curriculum materials, and too much focus on state parameters. Fifty-seven percent of survey respondents pointed to students' lack of concentration on learning as one of the main problems of STEM education.
A recent National Assessment of Educational Progress (NAEP) report card made public the technology and engineering literacy scores, which determine whether students can apply technology and engineering proficiency to real-life scenarios. The report showed a gap of 28 points between low-income students and their high-income counterparts. The same report also indicated a 38-point difference between white and black students.
The Smithsonian Science Education Center (SSEC) announced the release of a five-year strategic plan by the Committee on STEM Education of the National Science and Technology Council on December 4, 2018. The plan is entitled "Charting a Course for Success: America's Strategy for STEM Education." The objective is to propose a federal strategy anchored on a vision for the future so that all Americans are given permanent access to premium-quality education in Science, Technology, Engineering, and Mathematics. In the end, the United States can emerge as a world leader in STEM mastery, employment, and innovation. The goals of this plan are building foundations for STEM literacy; enhancing diversity, equality, and inclusion in STEM; and preparing the STEM workforce for the future.
The 2019 fiscal budget proposal of the White House supported the funding plan in President Donald Trump's Memorandum on STEM Education which allocated around $200 million (grant funding) for STEM education every year. This budget also supports STEM through a grant program worth $20 million for career as well as technical education programs.
Events and programs to help develop STEM in US schools
FIRST Tech Challenge
VEX Robotics Competitions
FIRST Robotics Competition
Vietnam
In Vietnam, beginning in 2012, many private education organizations have launched STEM education initiatives.
In 2015, the Ministry of Science and Technology and Liên minh STEM organized the first National STEM Day, followed by many similar events across the country.
In 2015, the Ministry of Education and Training included STEM as an area that needed to be encouraged in the national school year program.
In May 2017, the Prime Minister signed Directive No. 16 stating: "Dramatically change the policies, contents, education and vocational training methods to create a human resource capable of receiving new production technology trends, with a focus on promoting training in science, technology, engineering and mathematics (STEM), foreign languages, information technology in general education" and asking the "Ministry of Education and Training (to): Promote the deployment of science, technology, engineering and mathematics (STEM) education in general education program; Pilot organize in some high schools from 2017 to 2018."
Women
Women constitute 47% of the U.S. workforce and perform 24% of STEM-related jobs. In the UK women perform 13% of STEM-related jobs (2014). In the U.S. women with STEM degrees are more likely to work in education or healthcare rather than STEM fields compared with their male counterparts.
The gender ratio depends on the field of study. For example, in the European Union in 2012 women made up 47.3% of PhD graduates overall, including 51% in social sciences, business, and law, 42% in science, mathematics, and computing, 28% in engineering, manufacturing, and construction, and 59% in health and welfare.
A 2019 study showed that part of the success of women in STEM depends on how women in STEM are evaluated. In grant applications assessed primarily on the project, there was almost no difference in the evaluation of proposals from men and women, but when applications were assessed primarily on the project leader, projects headed by women were funded four percent less often.
Improving the experiences of women in STEM is a major component of increasing the number of women in STEM. One part of this includes the need for role models and mentors who are women in STEM. Along with this, having good resources for information and networking opportunities can improve women's ability to flourish in STEM fields.
Adding to the complexity, global studies indicate that biology may play a significant role in the gender gaps in STEM fields, because the propensity for women to pursue college degrees in STEM fields declines consistently as countries become wealthier and more egalitarian. As women become freer to choose their careers, they are more prone to choose careers that relate to people rather than objects.
LGBTQ+
People identifying within the group LGBTQ+ have faced discrimination in STEM fields throughout history. Few were openly queer in STEM; however, a couple of well-known people are Alan Turing, the father of computer science, and Sara Josephine Baker, an American physician and public-health leader.
Despite recent changes in attitudes towards LGBTQ+ people, discrimination still permeates STEM fields. A recent study has shown that sexual minority students were less likely to have completed a bachelor's degree in a STEM field, having opted to switch their major. Those who remained in a STEM field were, however, more likely to participate in undergraduate research programs. According to the study, sexual minorities did show higher overall retention rates within STEM-related fields compared to heterosexual women. Another study concluded that queer people are more likely to experience exclusion, harassment, and other negative impacts while in a STEM career, while also having fewer opportunities and resources available to them.
Multiple programs and institutions are working towards increasing the inclusion and acceptance of LGBTQ+ people in STEM. In the US, the National Organization of Gay and Lesbian Scientists and Technical Professionals (NOGLSTP) has organized people to address homophobia since the 1980s and now promotes activism and support for queer scientists. Other programs, including 500 Queer Scientists and Pride in STEM, function as visibility campaigns for LGBTQ+ people in STEM worldwide.
Criticism
The focus on increasing participation in STEM fields has attracted criticism. In the 2014 article "The Myth of the Science and Engineering Shortage" in The Atlantic, demographer Michael S. Teitelbaum criticized the efforts of the U.S. government to increase the number of STEM graduates, saying that, among studies on the subject, "No one has been able to find any evidence indicating current widespread labor market shortages or hiring difficulties in science and engineering occupations that require bachelor's degrees or higher", and that "Most studies report that real wages in many—but not all—science and engineering occupations have been flat or slow-growing, and unemployment as high or higher than in many comparably-skilled occupations." Teitelbaum also wrote that the then-current national fixation on increasing STEM participation paralleled previous U.S. government efforts since World War II to increase the number of scientists and engineers, all of which he stated ultimately ended up in "mass layoffs, hiring freezes, and funding cuts"; including one driven by the Space Race of the late 1950s and 1960s, which he wrote led to "a bust of serious magnitude in the 1970s."
IEEE Spectrum contributing editor Robert N. Charette echoed these sentiments in the 2013 article "The STEM Crisis Is a Myth", also noting that there was a "mismatch between earning a STEM degree and having a STEM job" in the United States, with only a minority of STEM graduates working in STEM fields, while less than half of workers in STEM fields have a STEM degree.
Economics writer Ben Casselman, in a 2014 study of post-graduation earnings in the United States for FiveThirtyEight, wrote that, based on the data, science should not be grouped with the other three STEM categories, because, while the other three generally result in high-paying jobs, "many sciences, particularly the life sciences, pay below the overall median for recent college graduates."
A 2017 article from the University of Leicester concluded that "maintaining accounts of a ‘crisis’ in the supply of STEM workers has usually been in the interests of industry, the education sector and government, as well as the lobby groups that represent them. Concerns about a shortage have meant the allocation of significant additional resources to the sector whose representatives have, in turn, become powerful voices in advocating for further funds and further investment."
A 2022 report from Rutgers University stated:
"In the United States, the STEM crisis theme is a perennial policy favorite, appearing every few years as an urgent concern in the nation’s competition with whatever other nation is ascendant, or as the cause of whatever problem is ailing the domestic economy. And the solution is always the same: increase the supply of STEM workers through expanding STEM education. Time and again, serious and empirically grounded studies find little evidence of any systemic failures or an inability of market responses to address whatever supply is required to meet workforce needs."
A study of the UK job market, published in 2022, found similar problems, which have been reported for the USA earlier: "It is not clear that having a degree in the sciences, rather than in other subjects, provides any sort of advantage in terms of short- or long-term employability... While only a minority of STEM graduates ever work in highly-skilled STEM jobs, we identified three particular characteristics of the STEM labour market that may present challenges for employers: STEM employment appears to be predicated on early entry to the sector; a large proportion of STEM graduates are likely to never work in the sector; and there may be more movement out of HS STEM positions by older workers than in other sectors..."
See also
Craft Academy for Excellence in Science and Mathematics
Hard and soft science
List of African American women in STEM fields
Maker culture
NASA RealWorld-InWorld Engineering Design Challenge
National Society of Black Engineers (NSBE)
Pre-STEM
Science, Technology, Engineering and Mathematics Network
Society of Hispanic Professional Engineers (SHPE)
STEM Academy
STEM.org
STEM pipeline
Tech ed
Underrepresented group
References
Citations
Further reading
Kaye Husbands Fealing, Aubrey Incorvaia, and Richard Utz, "Humanizing Science and Engineering for the Twenty-First Century." Issues in Science and Technology, Fall issue, 2022: 54-57.
Carla C. Johnson, et al., eds. (2020) Handbook of research on STEM education (Routledge, 2020).
Unesco publication on girls education in STEM – Cracking the code: girls' and women's education in science, technology, engineering and mathematics (STEM) "http://unesdoc.unesco.org/images/0025/002534/253479E.pdf "
External links
Education by subject
Education policy
Experiential learning
Science education
Technology education
Engineering education
Mathematics education
Learning programs
Science and technology studies | 0.764004 | 0.998831 | 0.763111 |
Forced convection | Forced convection is a mechanism, or type of transport, in which fluid motion is generated by an external source (like a pump, fan, suction device, etc.). Alongside natural convection, thermal radiation, and thermal conduction it is one of the methods of heat transfer and allows significant amounts of heat energy to be transported very efficiently.
Applications
This mechanism is found very commonly in everyday life, including central heating and air conditioning and in many other machines. Forced convection is often encountered by engineers designing or analyzing heat exchangers, pipe flow, and flow over a plate at a different temperature than the stream (the case of a shuttle wing during re-entry, for example).
Mixed convection
In any forced convection situation, some amount of natural convection is always present whenever there are gravitational forces present (i.e., unless the system is in an inertial frame or free-fall). When the natural convection is not negligible, such flows are typically referred to as mixed convection.
Mathematical analysis
When analyzing potentially mixed convection, a parameter called the Archimedes number (Ar) parametrizes the relative strength of free and forced convection. The Archimedes number is the ratio of Grashof number and the square of Reynolds number, which represents the ratio of buoyancy force and inertia force, and which stands in for the contribution of natural convection. When Ar ≫ 1, natural convection dominates and when Ar ≪ 1, forced convection dominates.
When natural convection isn't a significant factor, mathematical analysis with forced convection theories typically yields accurate results. The parameter of importance in forced convection is the Péclet number, which is the ratio of advection (movement by currents) and diffusion (movement from high to low concentrations) of heat.
When the Péclet number is much greater than unity (1), advection dominates diffusion. Similarly, much smaller ratios indicate a higher rate of diffusion relative to advection.
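These dimensionless groups lend themselves to a quick classification of a given problem. The following is a minimal sketch; the function names and the cut-off values of 0.1 and 10 for the mixed-convection band are illustrative assumptions, not standard thresholds.

```python
# Classify a convection problem from its dimensionless groups.
def convection_regime(grashof: float, reynolds: float) -> str:
    """Archimedes number Ar = Gr / Re**2; Ar >> 1 natural, Ar << 1 forced."""
    ar = grashof / reynolds**2
    if ar > 10:          # illustrative threshold for "much greater than 1"
        return "natural convection dominates"
    if ar < 0.1:         # illustrative threshold for "much less than 1"
        return "forced convection dominates"
    return "mixed convection"

def heat_transport_regime(velocity: float, length: float, thermal_diffusivity: float) -> str:
    """Peclet number Pe = u*L/alpha; Pe >> 1 means advection dominates diffusion."""
    pe = velocity * length / thermal_diffusivity
    return "advection-dominated" if pe > 1 else "diffusion-dominated"

print(convection_regime(grashof=1e8, reynolds=1e5))                                # forced convection dominates
print(heat_transport_regime(velocity=1.0, length=0.1, thermal_diffusivity=2e-5))   # advection-dominated
```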
See also
Convective heat transfer
Combined forced and natural convection
References
External links
Thermodynamics
Heat transfer | 0.78305 | 0.974492 | 0.763076 |
Natural units | In physics, natural unit systems are measurement systems for which selected physical constants have been set to 1 through nondimensionalization of physical units. For example, the speed of light may be set to 1, and it may then be omitted, equating mass and energy directly rather than using as a conversion factor in the typical mass–energy equivalence equation . A purely natural system of units has all of its dimensions collapsed, such that the physical constants completely define the system of units and the relevant physical laws contain no conversion constants.
While natural unit systems simplify the form of each equation, it is still necessary to keep track of the non-collapsed dimensions of each quantity or expression in order to reinsert physical constants (such dimensions uniquely determine the full formula). Dimensional analysis in the collapsed system is uninformative as most quantities have the same dimensions.
Systems of natural units
Summary table
where:
$\alpha$ is the fine-structure constant ($\alpha \approx 0.007297$).
A dash (—) indicates where the system is not sufficient to express the quantity.
Stoney units
The Stoney unit system uses the following defining constants:
$c = 1$, $G = 1$, $k_\text{e} = 1$, $e = 1$,
where $c$ is the speed of light, $G$ is the gravitational constant, $k_\text{e}$ is the Coulomb constant, and $e$ is the elementary charge.
George Johnstone Stoney's unit system preceded that of Planck by 30 years. He presented the idea in a lecture entitled "On the Physical Units of Nature" delivered to the British Association in 1874.
Stoney units did not consider the Planck constant, which was discovered only after Stoney's proposal.
Planck units
The Planck unit system uses the following defining constants:
$c = 1$, $\hbar = 1$, $G = 1$, $k_\text{B} = 1$,
where $c$ is the speed of light, $\hbar$ is the reduced Planck constant, $G$ is the gravitational constant, and $k_\text{B}$ is the Boltzmann constant.
Planck units form a system of natural units that is not defined in terms of properties of any prototype, physical object, or even elementary particle. They only refer to the basic structure of the laws of physics: and are part of the structure of spacetime in general relativity, and is at the foundation of quantum mechanics. This makes Planck units particularly convenient and common in theories of quantum gravity, including string theory.
Planck considered only the units based on the universal constants $G$, $h$, $c$, and $k_\text{B}$ to arrive at natural units for length, time, mass, and temperature, but no electromagnetic units. The Planck system of units is now understood to use the reduced Planck constant, $\hbar$, in place of the Planck constant, $h$.
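As a rough numerical illustration, the Planck scales can be computed directly from the defining constants. The sketch below assumes SciPy is available and uses its CODATA values; the variable names are illustrative.

```python
# Compute the Planck length, time, mass and temperature from c, hbar, G and k_B.
import math
from scipy.constants import c, G, hbar, k as k_B

l_P = math.sqrt(hbar * G / c**3)         # Planck length      ~ 1.6e-35 m
t_P = math.sqrt(hbar * G / c**5)         # Planck time        ~ 5.4e-44 s
m_P = math.sqrt(hbar * c / G)            # Planck mass        ~ 2.2e-8 kg
T_P = m_P * c**2 / k_B                   # Planck temperature ~ 1.4e32 K

print(l_P, t_P, m_P, T_P)
```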
Schrödinger units
The Schrödinger system of units (named after Austrian physicist Erwin Schrödinger) is seldom mentioned in literature. Its defining constants are:
$\hbar = 1$, $G = 1$, $k_\text{e} = 1$, $e = 1$.
Geometrized units
Defining constants:
$c = 1$, $G = 1$.
In the geometrized unit system, used in general relativity, the base physical units are chosen so that the speed of light, $c$, and the gravitational constant, $G$, are set to one.
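For a concrete sense of these units, a mass can be expressed as a length through the factor $G/c^2$. The sketch below assumes SciPy is available; the solar-mass value is approximate and included only for illustration.

```python
# Express the mass of the Sun as a length in geometrized units (M -> G*M/c^2).
from scipy.constants import G, c

M_sun = 1.989e30                 # approximate solar mass in kg (illustrative value)
M_sun_geometrized = G * M_sun / c**2
print(M_sun_geometrized)         # ~ 1.48e3 m, i.e. about 1.5 km
```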
Atomic units
The atomic unit system uses the following defining constants:
$e = 1$, $m_\text{e} = 1$, $\hbar = 1$, $4\pi\varepsilon_0 = 1$.
The atomic units were first proposed by Douglas Hartree and are designed to simplify atomic and molecular physics and chemistry, especially the hydrogen atom. For example, in atomic units, in the Bohr model of the hydrogen atom an electron in the ground state has orbital radius, orbital velocity and so on with particularly simple numeric values.
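The scales built into this system follow directly from the defining constants. The sketch below is a minimal illustration, assuming SciPy is available; it compares the derived values with SciPy's tabulated CODATA constants.

```python
# Build the Bohr radius and the Hartree energy from e, m_e, hbar and 4*pi*eps_0,
# and compare them with scipy's tabulated CODATA values.
from scipy.constants import hbar, m_e, e, epsilon_0, pi, physical_constants

a_0 = 4 * pi * epsilon_0 * hbar**2 / (m_e * e**2)        # atomic unit of length
E_h = m_e * e**4 / ((4 * pi * epsilon_0)**2 * hbar**2)   # atomic unit of energy

print(a_0, physical_constants['Bohr radius'][0])          # ~ 5.29e-11 m
print(E_h, physical_constants['Hartree energy'][0])       # ~ 4.36e-18 J (about 27.2 eV)
```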
Natural units (particle and atomic physics)
This natural unit system, used only in the fields of particle and atomic physics, uses the following defining constants:
$c = 1$, $m_\text{e} = 1$, $\hbar = 1$, $\varepsilon_0 = 1$,
where $c$ is the speed of light, $m_\text{e}$ is the electron mass, $\hbar$ is the reduced Planck constant, and $\varepsilon_0$ is the vacuum permittivity.
The vacuum permittivity $\varepsilon_0$ is implicitly used as a nondimensionalization constant, as is evident from the physicists' expression for the fine-structure constant, written $\alpha = e^2/(4\pi)$, which may be compared to the corresponding expression in SI: $\alpha = e^2/(4\pi\varepsilon_0\hbar c)$.
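The SI expression for the fine-structure constant can be checked numerically, and the electron rest energy quoted in the usual $c = 1$ convention. The sketch below assumes SciPy is available and is illustrative only.

```python
# Fine-structure constant in SI form, and the electron mass expressed in MeV.
from scipy.constants import e, epsilon_0, hbar, c, m_e, pi, fine_structure

alpha = e**2 / (4 * pi * epsilon_0 * hbar * c)
print(alpha, fine_structure)         # both ~ 1/137.036

E_e_MeV = m_e * c**2 / (e * 1e6)     # divide by e to get eV, then by 1e6 for MeV
print(E_e_MeV)                       # ~ 0.511; quoted as "m_e ~ 0.511 MeV" when c = 1
```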
Strong units
Defining constants:
$c = 1$, $m_\text{p} = 1$, $\hbar = 1$.
Here, $m_\text{p}$ is the proton rest mass. Strong units are "convenient for work in QCD and nuclear physics, where quantum mechanics and relativity are omnipresent and the proton is an object of central interest".
In this system of units the speed of light changes in inverse proportion to the fine-structure constant; it has therefore gained some interest in recent years in the niche hypothesis of time-variation of fundamental constants.
See also
Anthropic units
Astronomical system of units
Dimensionless physical constant
International System of Units
N-body units
Outline of metrology and measurement
Unit of measurement
Notes and references
External links
The NIST website (National Institute of Standards and Technology) is a convenient source of data on the commonly recognized constants.
K.A. Tomilin: NATURAL SYSTEMS OF UNITS; To the Centenary Anniversary of the Planck System A comparative overview/tutorial of various systems of natural units having historical use.
Pedagogic Aides to Quantum Field Theory Click on the link for Chap. 2 to find an extensive, simplified introduction to natural units.
Natural System Of Units In General Relativity (PDF), by Alan L. Myers (University of Pennsylvania). Equations for conversions from natural to SI units.
Metrology | 0.768572 | 0.992842 | 0.763071 |
Nondimensionalization | Nondimensionalization is the partial or full removal of physical dimensions from an equation involving physical quantities by a suitable substitution of variables. This technique can simplify and parameterize problems where measured units are involved. It is closely related to dimensional analysis. In some physical systems, the term scaling is used interchangeably with nondimensionalization, in order to suggest that certain quantities are better measured relative to some appropriate unit. These units refer to quantities intrinsic to the system, rather than units such as SI units. Nondimensionalization is not the same as converting extensive quantities in an equation to intensive quantities, since the latter procedure results in variables that still carry units.
Nondimensionalization can also recover characteristic properties of a system. For example, if a system has an intrinsic resonance frequency, length, or time constant, nondimensionalization can recover these values. The technique is especially useful for systems that can be described by differential equations. One important use is in the analysis of control systems.
One of the simplest characteristic units is the doubling time of a system experiencing exponential growth, or conversely the half-life of a system experiencing exponential decay; a more natural pair of characteristic units is mean age/mean lifetime, which correspond to base e rather than base 2.
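As a brief worked relation illustrating the base-2 versus base-e pairing (the symbols here are illustrative), consider a quantity decaying exponentially with mean lifetime $\tau$:
$$\frac{dN}{dt} = -\frac{N}{\tau} \quad\Longrightarrow\quad N(t) = N_0\,e^{-t/\tau}, \qquad t_{1/2} = \tau\ln 2.$$
Measuring time in units of the mean lifetime, $\tilde t = t/\tau$, reduces the law to $dN/d\tilde t = -N$ (base $e$), while measuring it in units of the half-life gives $N = N_0\,2^{-t/t_{1/2}}$ (base 2).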
Many illustrative examples of nondimensionalization originate from simplifying differential equations. This is because a large body of physical problems can be formulated in terms of differential equations. Consider the following:
List of dynamical systems and differential equations topics
List of partial differential equation topics
Differential equations of mathematical physics
Although nondimensionalization is well adapted for these problems, it is not restricted to them. An example of a non-differential-equation application is dimensional analysis; another example is normalization in statistics.
Measuring devices are practical examples of nondimensionalization occurring in everyday life. Measuring devices are calibrated relative to some known unit. Subsequent measurements are made relative to this standard. Then, the absolute value of the measurement is recovered by scaling with respect to the standard.
Rationale
Suppose a pendulum is swinging with a particular period T. For such a system, it is advantageous to perform calculations relating to the swinging relative to T. In some sense, this is normalizing the measurement with respect to the period.
Measurements made relative to an intrinsic property of a system will apply to other systems which also have the same intrinsic property. It also allows one to compare a common property of different implementations of the same system. Nondimensionalization determines in a systematic manner the characteristic units of a system to use, without relying heavily on prior knowledge of the system's intrinsic properties
(one should not confuse characteristic units of a system with natural units of nature). In fact, nondimensionalization can suggest the parameters which should be used for analyzing a system. However, it is necessary to start with an equation that describes the system appropriately.
Nondimensionalization steps
To nondimensionalize a system of equations, one must do the following:
Identify all the independent and dependent variables;
Replace each of them with a quantity scaled relative to a characteristic unit of measure to be determined;
Divide through by the coefficient of the highest order polynomial or derivative term;
Choose judiciously the definition of the characteristic unit for each variable so that the coefficients of as many terms as possible become 1;
Rewrite the system of equations in terms of their new dimensionless quantities.
The last three steps are usually specific to the problem where nondimensionalization is applied. However, almost all systems require the first two steps to be performed.
Conventions
There are no restrictions on the variable names used to replace "x" and "t". However, they are generally chosen so that it is convenient and intuitive to use for the problem at hand. For example, if "x" represented mass, the letter "m" might be an appropriate symbol to represent the dimensionless mass quantity.
In this article, the following conventions have been used:
t – represents the independent variable – usually a time quantity. Its nondimensionalized counterpart is $\tau$.
x – represents the dependent variable – can be mass, voltage, or any measurable quantity. Its nondimensionalized counterpart is $\chi$.
A subscript 'c' added to a quantity's variable name is used to denote the characteristic unit used to scale that quantity. For example, if x is a quantity, then xc is the characteristic unit used to scale it.
As an illustrative example, consider a first order differential equation with constant coefficients:
In this equation the independent variable is t, and the dependent variable is x.
Set . This results in the equation
The coefficient of the highest ordered term is in front of the first derivative term. Dividing by this gives
The coefficient in front of only contains one characteristic variable tc, hence it is easiest to choose to set this to unity first:
Subsequently,
The final dimensionless equation in this case becomes completely independent of any parameters with units:
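The steps above can also be carried out symbolically. The sketch below uses SymPy on a generic first-order equation a·dx/dt + b·x = A; the symbols a, b, A and the dimensionless names chi and tau are assumptions made for this sketch, not notation taken from the worked example above:

```python
import sympy as sp

# Generic first-order system a*dx/dt + b*x = A, nondimensionalized by the recipe:
# substitute x = x_c*chi(tau), t = t_c*tau, divide by the leading coefficient,
# then pick t_c and x_c so that as many coefficients as possible become 1.
tau = sp.symbols('tau')
a, b, A, tc, xc = sp.symbols('a b A t_c x_c', positive=True)
chi = sp.Function('chi')

# With dx/dt = (x_c/t_c)*dchi/dtau, the scaled equation reads:
lhs = (a * xc / tc) * sp.Derivative(chi(tau), tau) + b * xc * chi(tau)
rhs = A

# Divide through by the coefficient of the highest derivative, a*x_c/t_c.
coeff = a * xc / tc
lhs, rhs = sp.expand(lhs / coeff), rhs / coeff

# Choosing t_c = a/b normalizes the remaining coefficient on the left, and
# x_c = A/b then normalizes the right-hand side.
lhs = lhs.subs(tc, a / b)
rhs = sp.simplify(rhs.subs([(tc, a / b), (xc, A / b)]))
print(sp.Eq(lhs, rhs))  # Eq(chi(tau) + Derivative(chi(tau), tau), 1)
```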
Substitutions
Suppose for simplicity that a certain system is characterized by two variables – a dependent variable x and an independent variable t, where x is a function of t. Both x and t represent quantities with units. To scale these two variables, assume there are two intrinsic units of measurement xc and tc with the same units as x and t respectively, such that these conditions hold:
These equations are used to replace x and t when nondimensionalizing. If differential operators are needed to describe the original system, their scaled counterparts become dimensionless differential operators.
Differential operators
Consider the relationship
The dimensionless differential operators with respect to the independent variable become
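Writing tau for the scaled independent variable (a symbol assumed here, following the subscript-c convention above with t = tc·tau), the chain rule gives the usual scaling of the derivative operators:

```latex
% Scaling of derivative operators under t = t_c \tau (symbols assumed as above):
\frac{d}{dt} \;=\; \frac{d\tau}{dt}\,\frac{d}{d\tau} \;=\; \frac{1}{t_c}\,\frac{d}{d\tau},
\qquad
\frac{d^{n}}{dt^{n}} \;=\; \frac{1}{t_c^{\,n}}\,\frac{d^{n}}{d\tau^{n}}.
```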
Forcing function
If a system has a forcing function then
Hence, the new forcing function is made to be dependent on the dimensionless quantity .
Linear differential equations with constant coefficients
First order system
Consider the differential equation for a first order system:
The derivation of the characteristic units tc and xc for this system gave
Second order system
A second order system has the form
Substitution step
Replace the variables x and t with their scaled quantities. The equation becomes
This new equation is not dimensionless, although all the variables with units are isolated in the coefficients. Dividing by the coefficient of the highest ordered term, the equation becomes
Now it is necessary to determine the quantities of xc and tc so that the coefficients become normalized. Since there are two free parameters, at most only two coefficients can be made to equal unity.
Determination of characteristic units
Consider the variable tc:
If the first order term is normalized.
If the zeroth order term is normalized.
Both substitutions are valid. However, for pedagogical reasons, the latter substitution is used for second order systems. Choosing this substitution allows xc to be determined by normalizing the coefficient of the forcing function:
The differential equation becomes
The coefficient of the first order term is unitless. Define
The factor 2 is present so that the solutions can be parameterized in terms of ζ. In the context of mechanical or electrical systems, ζ is known as the damping ratio, and is an important parameter required in the analysis of control systems. 2ζ is also known as the linewidth of the system. The result of the definition is the universal oscillator equation.
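For a generic constant-coefficient second-order equation a·x″ + b·x′ + c·x = f (the coefficients a, b, c are assumptions for this sketch), the convention of normalizing the zeroth-order term leads to the standard identifications sketched below:

```python
import math

# Characteristic quantities of a*x'' + b*x' + c*x = f when t_c is chosen to
# normalize the zeroth-order term (t_c = sqrt(a/c)).
def characteristic_quantities(a, b, c):
    t_c = math.sqrt(a / c)                 # characteristic time = 1/omega_0
    omega_0 = 1.0 / t_c                    # natural angular frequency
    zeta = b / (2.0 * math.sqrt(a * c))    # damping ratio; 2*zeta is the linewidth
    return t_c, omega_0, zeta

# Assumed example coefficients: a=1, b=0.4, c=4 give omega_0 = 2 and zeta = 0.1.
print(characteristic_quantities(1.0, 0.4, 4.0))
```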
Higher order systems
The general nth order linear differential equation with constant coefficients has the form:
The function f(t) is known as the forcing function.
If the differential equation only contains real (not complex) coefficients, then such a system behaves as a mixture of first- and second-order systems only. This is because the roots of its characteristic polynomial are either real or complex conjugate pairs. Therefore, understanding how nondimensionalization applies to first- and second-order systems allows the properties of higher order systems to be determined through superposition.
The number of free parameters in a nondimensionalized form of a system increases with its order. For this reason, nondimensionalization is rarely used for higher order differential equations. The need for this procedure has also been reduced with the advent of symbolic computation.
Examples of recovering characteristic units
A variety of systems can be approximated as either first or second order systems. These include mechanical, electrical, fluidic, caloric, and torsional systems. This is because the fundamental physical quantities involved within each of these examples are related through first and second order derivatives.
Mechanical oscillations
Suppose we have a mass attached to a spring and a damper, which in turn are attached to a wall, and a force acting on the mass along the same line.
Define
= displacement from equilibrium [m]
= time [s]
= external force or "disturbance" applied to system [kg⋅m⋅s−2]
= mass of the block [kg]
= damping constant of dashpot [kg⋅s−1]
= force constant of spring [kg⋅s−2]
Suppose the applied force is a sinusoid. The differential equation that describes the motion of the block is
Nondimensionalizing this equation in the same way as described above yields several characteristics of the system:
The intrinsic unit xc corresponds to the distance the block moves per unit force
The characteristic variable tc is equal to the period of the oscillations
The dimensionless variable 2ζ corresponds to the linewidth of the system.
ζ itself is the damping ratio
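A numerical sketch of the characteristic units just listed, for the mass-spring-damper m·x″ + B·x′ + k·x = F(t); the example values are assumed, and tc = sqrt(m/k) is the reciprocal natural angular frequency, i.e. the oscillation period up to a factor of 2π:

```python
import math

# Characteristic units of the damped mass-spring system m*x'' + B*x' + k*x = F(t).
def spring_damper_characteristics(m, B, k):
    x_c = 1.0 / k                          # displacement per unit force [m/N]
    t_c = math.sqrt(m / k)                 # characteristic time = 1/omega_0 [s]
    zeta = B / (2.0 * math.sqrt(m * k))    # damping ratio; 2*zeta is the linewidth
    return x_c, t_c, zeta

# Assumed example values: m = 2 kg, B = 0.5 kg/s, k = 50 kg/s^2.
print(spring_damper_characteristics(2.0, 0.5, 50.0))
```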
Electrical oscillations
First-order series RC circuit
For a series RC attached to a voltage source
with substitutions
The first characteristic unit corresponds to the total charge in the circuit. The second characteristic unit corresponds to the time constant for the system.
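With assumed component values, the two characteristic units for the driven series RC circuit are simply the stored charge C·V0 and the RC time constant:

```python
# Characteristic units of a series RC circuit driven by a source of amplitude V0
# (component values below are assumed examples).
R, C, V0 = 10e3, 1e-6, 5.0   # ohms, farads, volts
q_c = C * V0                 # characteristic charge: total charge stored [C]
t_c = R * C                  # characteristic time: the RC time constant [s]
print(q_c, t_c)              # 5e-06 C, 0.01 s
```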
Second-order series RLC circuit
For a series configuration of R, C, L components where Q is the charge in the system
with the substitutions
The first variable corresponds to the maximum charge stored in the circuit. The resonance frequency is given by the reciprocal of the characteristic time. The last expression is the linewidth of the system. The Ω can be considered as a normalized forcing function frequency.
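With assumed component values, the corresponding quantities for the series RLC circuit L·Q″ + R·Q′ + Q/C = V(t) are:

```python
import math

# Characteristic quantities of the series RLC circuit (assumed example values).
R, L, C = 50.0, 10e-3, 100e-9        # ohms, henries, farads
t_c = math.sqrt(L * C)               # characteristic time [s]
omega_0 = 1.0 / t_c                  # resonance (angular) frequency [rad/s]
zeta = (R / 2.0) * math.sqrt(C / L)  # damping ratio; 2*zeta is the linewidth
print(t_c, omega_0, zeta)
```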
Quantum mechanics
Quantum harmonic oscillator
The Schrödinger equation for the one-dimensional time independent quantum harmonic oscillator is
The modulus square of the wavefunction represents probability density that, when integrated over , gives a dimensionless probability. Therefore, has units of inverse length. To nondimensionalize this, it must be rewritten as a function of a dimensionless variable. To do this, we substitute
where is some characteristic length of this system. This gives us a dimensionless wave function defined via
The differential equation then becomes
To make the term in front of dimensionless, set
The fully nondimensionalized equation is
where we have defined
The factor in front of is in fact (coincidentally) the ground state energy of the harmonic oscillator. Usually, the energy term is not made dimensionless as we are interested in determining the energies of the quantum states. Rearranging the first equation, the familiar equation for the harmonic oscillator becomes
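A numerical sketch with the standard choice of characteristic length, l = sqrt(ħ/(mω)); this choice, and the electron-mass and frequency values below, are assumptions for the example:

```python
import math

# Characteristic length and energy scale of the quantum harmonic oscillator,
# with the standard choice l = sqrt(hbar/(m*omega)), for which the prefactor of
# the energy term is hbar*omega/2 (the ground-state energy).
hbar = 1.054571817e-34     # reduced Planck constant [J*s]
m = 9.1093837015e-31       # particle mass [kg] (electron, assumed example)
omega = 1.0e15             # angular frequency [rad/s] (assumed example)

l = math.sqrt(hbar / (m * omega))   # characteristic length, ~3.4e-10 m
E0 = 0.5 * hbar * omega             # ground-state energy, ~5.3e-20 J
print(l, E0)
```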
Statistical analogs
In statistics, the analogous process is usually dividing a difference (a distance) by a scale factor (a measure of statistical dispersion), which yields a dimensionless number, which is called normalization. Most often, this is dividing errors or residuals by the standard deviation or sample standard deviation, respectively, yielding standard scores and studentized residuals.
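A minimal sketch of this kind of normalization, computing standard scores for an assumed sample:

```python
import statistics

# Standard scores: divide each deviation from the mean by the standard deviation,
# yielding dimensionless values (the sample below is an assumed example).
data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
mean = statistics.mean(data)        # 5.0
sd = statistics.pstdev(data)        # population standard deviation = 2.0
z_scores = [(x - mean) / sd for x in data]
print(z_scores)                     # [-1.5, -0.5, -0.5, -0.5, 0.0, 0.0, 1.0, 2.0]
```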
See also
Buckingham π theorem
Dimensionless number
Natural units
System equivalence
RLC circuit
RL circuit
RC circuit
Logistic equation
Per-unit system
References
External links
Analysis of differential equation models in biology: a case study for clover meristem populations (Application of nondimensionalization to a problem in biology).
Course notes for Mathematical Modelling and Industrial Mathematics Jonathan Evans, Department of Mathematical Sciences, University of Bath. (see Chapter 3).
Scaling of Differential Equations Hans Petter Langtangen, Geir K. Pedersen, Center for Biomedical Computing, Simula Research Laboratory and Department of Informatics, University of Oslo.
Dimensional analysis | 0.772959 | 0.987207 | 0.76307 |
Brillouin scattering | In electromagnetism, Brillouin scattering (also known as Brillouin light scattering or BLS), named after Léon Brillouin, refers to the interaction of light with the material waves in a medium (e.g. electrostriction and magnetostriction). It is mediated by the refractive index dependence on the material properties of the medium; as described in optics, the index of refraction of a transparent material changes under deformation (compression-distension or shear-skewing).
The result of the interaction between the light-wave and the carrier-deformation wave is that a fraction of the transmitted light-wave changes its momentum (thus its frequency and energy) in preferential directions, as if by diffraction caused by an oscillating 3-dimensional diffraction grating.
If the medium is a solid crystal, a macromolecular chain condensate or a viscous liquid or gas, then the low frequency atomic-chain-deformation waves within the transmitting medium (not the transmitted electro-magnetic wave) in the carrier (represented as a quasiparticle) could be for example:
mass oscillation (acoustic) modes (called phonons);
charge displacement modes (in dielectrics, called polarons);
magnetic spin oscillation modes (in magnetic materials, called magnons).
Mechanism
From the perspective of solid state physics, Brillouin scattering is an interaction between an electromagnetic wave and one of the three above-mentioned crystalline lattice waves (e.g. electrostriction and magnetostriction). The scattering is inelastic, i.e., the photon may lose energy (Stokes process) and in the process create one of the three quasiparticle types (phonon, polaron, magnon) or it may gain energy (anti-Stokes process) by absorbing one of those quasiparticle types. Such a shift in photon energy, corresponding to a Brillouin shift in frequency, is equal to the energy of the released or absorbed quasiparticle. Thus, Brillouin scattering can be used to measure the energies, wavelengths and frequencies of various atomic chain oscillation types ('quasiparticles'). A Brillouin shift is commonly measured with a device called a Brillouin spectrometer, the design of which is derived from a Fabry–Pérot interferometer. Alternatively, high-speed photodiodes, such as those recovered from inexpensive 25-gigabit Ethernet optical transceivers, may be used in combination with a software-defined radio or RF spectrum analyzer.
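As an order-of-magnitude illustration, the back-scattered Brillouin frequency shift follows the standard relation νB = 2·n·va/λ0 (a textbook relation not derived here; the silica-fiber values below are assumed):

```python
# Back-scattered Brillouin frequency shift nu_B = 2*n*v_a/lambda0, evaluated for
# assumed silica-fiber parameters at a 1550 nm probe wavelength.
n = 1.45          # refractive index
v_a = 5960.0      # acoustic velocity in silica [m/s]
lam0 = 1550e-9    # vacuum wavelength of the probe light [m]

nu_B = 2 * n * v_a / lam0
print(nu_B / 1e9)  # roughly 11 GHz
```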
Contrast with Rayleigh scattering
Rayleigh scattering, too, can be considered to be due to fluctuations in the density, composition and orientation of molecules within the transmitting medium, and hence of its refraction index, in small volumes of matter (particularly in gases or liquids). The difference is that Rayleigh scattering involves only the random and incoherent thermal fluctuations, in contrast with the correlated, periodic fluctuations (phonons) that cause Brillouin scattering. Moreover, Rayleigh scattering is elastic in that no energy is lost or gained.
Contrast with Raman scattering
Raman scattering is another phenomenon that involves inelastic scattering of light caused by the vibrational properties of matter. The detected range of frequency shifts and other effects are very different compared to Brillouin scattering. In Raman scattering, photons are scattered by the effect of vibrational and rotational transitions in the bonds between first-order neighboring atoms, while Brillouin scattering results from the scattering of photons caused by large scale, low-frequency phonons. The effects of the two phenomena provide very different information about the sample: Raman spectroscopy can be used to determine the transmitting medium's chemical composition and molecular structure, while Brillouin scattering can be used to measure the material's properties on a larger scale – such as its elastic behavior. The frequency shifts from Brillouin scattering, a technique known as Brillouin spectroscopy, are detected with an interferometer while Raman scattering uses either an interferometer or a dispersive (grating) spectrometer.
Stimulated Brillouin scattering
For intense beams of light (e.g. laser) traveling in a medium or in a waveguide, such as an optical fiber, the variations in the electric field of the beam itself may induce acoustic vibrations in the medium via electrostriction or radiation pressure. The beam may display Brillouin scattering as a result of those vibrations, usually in the direction opposite the incoming beam, a phenomenon known as stimulated Brillouin scattering (SBS). For liquids and gases, the frequency shifts typically created are of the order of 1–10 GHz resulting in wavelength shifts of ~1–10 pm in the visible light. Stimulated Brillouin scattering is one effect by which optical phase conjugation can take place.
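The quoted wavelength shifts follow from the frequency shifts via |Δλ| ≈ λ0²·Δν/c; a quick check with assumed numbers:

```python
# Converting a Brillouin frequency shift into the corresponding wavelength shift,
# |dlambda| = lambda0**2 * dnu / c (assumed example numbers).
c = 2.998e8      # speed of light [m/s]
lam0 = 532e-9    # visible (green) wavelength [m]
dnu = 10e9       # 10 GHz frequency shift [Hz]

dlam = lam0**2 * dnu / c
print(dlam * 1e12)  # ~9.4 pm, within the ~1-10 pm range quoted above
```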
Discovery
Inelastic scattering of light caused by acoustic phonons was first predicted by Léon Brillouin in 1914. Leonid Mandelstam is believed to have recognised the possibility of such scattering as early as 1918, but he published his idea only in 1926.
In order to credit Mandelstam, the effect is also called Brillouin-Mandelstam scattering (BMS). Other commonly used names are Brillouin light scattering (BLS) and Brillouin-Mandelstam light scattering (BMLS).
The process of stimulated Brillouin scattering (SBS) was first observed by Chiao et al. in 1964. The optical phase conjugation aspect of the SBS process was discovered by Boris Yakovlevich Zeldovich et al. in 1972.
Fiber optic sensing
Brillouin scattering can also be employed to sense mechanical strain and temperature in optical fibers.
See also
Brillouin spectroscopy
Scattering
Raman scattering
Nonlinear optics
References
Notes
Sources
L.I. Mandelstam, Zh. Russ. Fiz-Khim., Ova. 58, 381 (1926).
B.Ya. Zel’dovich, V.I.Popovichev, V.V.Ragulskii and F.S.Faisullov, "Connection between the wavefronts of the reflected and exciting light in stimulated Mandel’shtam Brillouin scattering," Sov. Phys. JETP, 15, 109 (1972)
External links
CIMIT Center for Integration of Medicine and Innovative Technology
Brillouin scattering in the Encyclopedia of Laser Physics and Technology
Surface Brillouin Scattering, U. Hawaii
List of labs performing Brillouin scattering measurements (source BS Lab in ICMM-CSIC)
Scattering, absorption and radiative transfer (optics)
Scattering
Fiber-optic communications | 0.775816 | 0.983568 | 0.763068 |
Scientific visualization | Scientific visualization (also spelled scientific visualisation) is an interdisciplinary branch of science concerned with the visualization of scientific phenomena. It is also considered a subset of computer graphics, a branch of computer science. The purpose of scientific visualization is to graphically illustrate scientific data to enable scientists to understand, illustrate, and glean insight from their data. Research into how people read and misread various types of visualizations is helping to determine what types and features of visualizations are most understandable and effective in conveying information.
History
One of the earliest examples of three-dimensional scientific visualisation was Maxwell's thermodynamic surface, sculpted in clay in 1874 by James Clerk Maxwell. This prefigured modern scientific visualization techniques that use computer graphics.
Notable early two-dimensional examples include the flow map of Napoleon's March on Moscow produced by Charles Joseph Minard in 1869; the "coxcombs" used by Florence Nightingale in 1857 as part of a campaign to improve sanitary conditions in the British Army; and the dot map used by John Snow in 1855 to visualise the Broad Street cholera outbreak.
Data visualization methods
Criteria for classifications:
dimension of the data
method
texture-based methods
geometry-based approaches such as arrow plots, streamlines, pathlines, timelines, streaklines, particle tracing, surface particles, stream arrows, stream tubes, stream balls, flow volumes and topological analysis
Two-dimensional data sets
Scientific visualization using computer graphics gained in popularity as graphics matured. Primary applications were scalar fields and vector fields from computer simulations and also measured data. The primary methods for visualizing two-dimensional (2D) scalar fields are color mapping and drawing contour lines. 2D vector fields are visualized using glyphs and streamlines or line integral convolution methods. 2D tensor fields are often resolved to a vector field by using one of the two eigenvectors to represent the tensor at each point in the field and then visualized using vector field visualization methods.
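A minimal sketch of these 2D techniques using Matplotlib; the scalar and vector fields are assumed analytic examples, not data referenced in this article:

```python
import numpy as np
import matplotlib.pyplot as plt

# Color mapping plus contour lines for a scalar field, and glyphs plus
# streamlines for a vector field.
xv = np.linspace(-2, 2, 200)
yv = np.linspace(-2, 2, 200)
x, y = np.meshgrid(xv, yv)
scalar = np.exp(-(x**2 + y**2))     # scalar field
u, v = -y, x                        # rotational vector field

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
cf = ax1.contourf(x, y, scalar, levels=20, cmap='viridis')       # color mapping
ax1.contour(x, y, scalar, levels=8, colors='k', linewidths=0.5)  # contour lines
fig.colorbar(cf, ax=ax1)

ax2.streamplot(xv, yv, u, v, density=1.2)                        # streamlines
ax2.quiver(x[::20, ::20], y[::20, ::20], u[::20, ::20], v[::20, ::20],
           color='tab:red')                                      # arrow glyphs
plt.show()
```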
Three-dimensional data sets
For 3D scalar fields the primary methods are volume rendering and isosurfaces. Methods for visualizing vector fields include glyphs (graphical icons) such as arrows, streamlines and streaklines, particle tracing, line integral convolution (LIC) and topological methods. Later, visualization techniques such as hyperstreamlines were developed to visualize 2D and 3D tensor fields.
Topics
Computer animation
Computer animation is the art, technique, and science of creating moving images via the use of computers. It is becoming more common to be created by means of 3D computer graphics, though 2D computer graphics are still widely used for stylistic, low bandwidth, and faster real-time rendering needs. Sometimes the target of the animation is the computer itself, but sometimes the target is another medium, such as film. It is also referred to as CGI (Computer-generated imagery or computer-generated imaging), especially when used in films. Applications include medical animation, which is most commonly utilized as an instructional tool for medical professionals or their patients.
Computer simulation
Computer simulation is a computer program, or network of computers, that attempts to simulate an abstract model of a particular system. Computer simulations have become a useful part of mathematical modelling of many natural systems in physics, and computational physics, chemistry and biology; human systems in economics, psychology, and social science; and in the process of engineering and new technology, to gain insight into the operation of those systems, or to observe their behavior. The simultaneous visualization and simulation of a system is called visulation.
Computer simulations vary from computer programs that run a few minutes, to network-based groups of computers running for hours, to ongoing simulations that run for months. The scale of events being simulated by computer simulations has far exceeded anything possible (or perhaps even imaginable) using the traditional paper-and-pencil mathematical modeling: over 10 years ago, a desert-battle simulation, of one force invading another, involved the modeling of 66,239 tanks, trucks and other vehicles on simulated terrain around Kuwait, using multiple supercomputers in the DoD High Performance Computing Modernization Program.
Information visualization
Information visualization is the study of "the visual representation of large-scale collections of non-numerical information, such as files and lines of code in software systems, library and bibliographic databases, networks of relations on the internet, and so forth".
Information visualization focuses on the creation of approaches for conveying abstract information in intuitive ways. Visual representations and interaction techniques take advantage of the human eye's broad bandwidth pathway into the mind to allow users to see, explore, and understand large amounts of information at once. The key difference between scientific visualization and information visualization is that information visualization is often applied to data that is not generated by scientific inquiry. Some examples are graphical representations of data for business, government, news and social media.
Interface technology and perception
Interface technology and perception shows how new interfaces and a better understanding of underlying perceptual issues create new opportunities for the scientific visualization community.
Surface rendering
Rendering is the process of generating an image from a model, by means of computer programs. The model is a description of three-dimensional objects in a strictly defined language or data structure. It would contain geometry, viewpoint, texture, lighting, and shading information. The image is a digital image or raster graphics image. The term may be by analogy with an "artist's rendering" of a scene. 'Rendering' is also used to describe the process of calculating effects in a video editing file to produce final video output. Important rendering techniques are:
Scanline rendering and rasterisation
A high-level representation of an image necessarily contains elements in a different domain from pixels. These elements are referred to as primitives. In a schematic drawing, for instance, line segments and curves might be primitives. In a graphical user interface, windows and buttons might be the primitives. In 3D rendering, triangles and polygons in space might be primitives.
Ray casting
Ray casting is primarily used for realtime simulations, such as those used in 3D computer games and cartoon animations, where detail is not important, or where it is more efficient to manually fake the details in order to obtain better performance in the computational stage. This is usually the case when a large number of frames need to be animated. The resulting surfaces have a characteristic 'flat' appearance when no additional tricks are used, as if objects in the scene were all painted with matte finish.
Radiosity
Radiosity, also known as Global Illumination, is a method that attempts to simulate the way in which directly illuminated surfaces act as indirect light sources that illuminate other surfaces. This produces more realistic shading and seems to better capture the 'ambience' of an indoor scene. A classic example is the way that shadows 'hug' the corners of rooms.
Ray tracing
Ray tracing is an extension of the same technique developed in scanline rendering and ray casting. Like those, it handles complicated objects well, and the objects may be described mathematically. Unlike scanline and casting, ray tracing is almost always a Monte Carlo technique, that is one based on averaging a number of randomly generated samples from a model.
Volume rendering
Volume rendering is a technique used to display a 2D projection of a 3D discretely sampled data set. A typical 3D data set is a group of 2D slice images acquired by a CT or MRI scanner. Usually these are acquired in a regular pattern (e.g., one slice every millimeter) and usually have a regular number of image pixels in a regular pattern. This is an example of a regular volumetric grid, with each volume element, or voxel represented by a single value that is obtained by sampling the immediate area surrounding the voxel.
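One simple way to see the idea of projecting a sampled volume onto a 2D image is a maximum-intensity projection, sketched below on a synthetic voxel grid (this is only one of several compositing schemes, and the data are assumed):

```python
import numpy as np

# Maximum-intensity projection: collapse a regularly sampled 3D scalar volume
# onto a 2D image by taking the maximum value along each viewing ray (here the
# z axis). The Gaussian "blob" volume is an assumed synthetic data set.
z, y, x = np.mgrid[-1:1:64j, -1:1:64j, -1:1:64j]
volume = np.exp(-8 * (x**2 + y**2 + z**2))

mip_image = volume.max(axis=0)   # 2D projection of the 3D data set
print(mip_image.shape)           # (64, 64)
```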
Volume visualization
According to Rosenblum (1994) "volume visualization examines a set of techniques that allows viewing an object without mathematically representing the other surface. Initially used in medical imaging, volume visualization has become an essential technique for many sciences, portraying phenomena such as clouds, water flows, and molecular and biological structure. Many volume visualization algorithms are computationally expensive and demand large data storage. Advances in hardware and software are generalizing volume visualization as well as real time performances".
Developments of web-based technologies, and in-browser rendering, have allowed for simple volumetric presentation of a cuboid with a changing frame of reference to show volume, mass and density data.
Applications
This section gives a series of examples of how scientific visualization can be applied today.
In the natural sciences
Star formation: The featured plot is a Volume plot of the logarithm of gas/dust density in an Enzo star and galaxy simulation. Regions of high density are white while less dense regions are more blue and also more transparent.
Gravitational waves: Researchers used the Globus Toolkit to harness the power of multiple supercomputers to simulate the gravitational effects of black-hole collisions.
Massive Star Supernovae Explosions: The image shows three-dimensional radiation hydrodynamics calculations of massive star supernova explosions. The DJEHUTY stellar evolution code was used to calculate the explosion of an SN 1987A model in three dimensions.
Molecular rendering: VisIt's general plotting capabilities were used to create the molecular rendering shown in the featured visualization. The original data was taken from the Protein Data Bank and turned into a VTK file before rendering.
In geography and ecology
Terrain visualization: VisIt can read several file formats common in the field of Geographic Information Systems (GIS), allowing one to plot raster data such as terrain data in visualizations. The featured image shows a plot of a DEM dataset containing mountainous areas near Dunsmuir, CA. Elevation lines are added to the plot to help delineate changes in elevation.
Tornado Simulation: This image was created from data generated by a tornado simulation calculated on NCSA's IBM p690 computing cluster. High-definition television animations of the storm produced at NCSA were included in an episode of the PBS television series NOVA called "Hunt for the Supertwister." The tornado is shown by spheres that are colored according to pressure; orange and blue tubes represent the rising and falling airflow around the tornado.
Climate visualization: This visualization depicts the carbon dioxide from various sources that are advected individually as tracers in the atmosphere model. Carbon dioxide from the ocean is shown as plumes during February 1900.
Atmospheric Anomaly in Times Square: In the image, the results from the SAMRAI simulation framework of an atmospheric anomaly in and around Times Square are visualized.
In mathematics
Scientific visualization of mathematical structures has been undertaken for purposes of building intuition and for aiding the forming of mental models.
Higher-dimensional objects can be visualized in form of projections (views) in lower dimensions. In particular, 4-dimensional objects are visualized by means of projection in three dimensions. The lower-dimensional projections of higher-dimensional objects can be used for purposes of virtual object manipulation, allowing 3D objects to be manipulated by operations performed in 2D, and 4D objects by interactions performed in 3D.
In complex analysis, functions of the complex plane are inherently 4-dimensional, but there is no natural geometric projection into lower dimensional visual representations. Instead, colour vision is exploited to capture dimensional information using techniques such as domain coloring.
In the formal sciences
Computer mapping of topographical surfaces: Through computer mapping of topographical surfaces, mathematicians can test theories of how materials will change when stressed. The imaging is part of the work on the NSF-funded Electronic Visualization Laboratory at the University of Illinois at Chicago.
Curve plots: VisIt can plot curves from data read from files and it can be used to extract and plot curve data from higher-dimensional datasets using lineout operators or queries. The curves in the featured image correspond to elevation data along lines drawn on DEM data and were created with the Lineout feature, which allows one to interactively draw a line that specifies a path for data extraction. The resulting data was then plotted as curves.
Image annotations: The featured plot shows Leaf Area Index (LAI), a measure of global vegetative matter, from a NetCDF dataset. The primary plot is the large plot at the bottom, which shows the LAI for the whole world. The plots on top are actually annotations that contain images generated earlier. Image annotations can be used to include material that enhances a visualization such as auxiliary plots, images of experimental data, project logos, etc.
Scatter plot: VisIt's Scatter plot allows visualizing multivariate data of up to four dimensions. The Scatter plot takes multiple scalar variables and uses them for different axes in phase space. The different variables are combined to form coordinates in the phase space and they are displayed using glyphs and colored using another scalar variable.
In the applied sciences
Porsche 911 model (NASTRAN model): The featured plot contains a Mesh plot of a Porsche 911 model imported from a NASTRAN bulk data file. VisIt can read a limited subset of NASTRAN bulk data files, in general enough to import model geometry for visualization.
YF-17 aircraft Plot: The featured image displays plots of a CGNS dataset representing a YF-17 jet aircraft. The dataset consists of an unstructured grid with solution. The image was created by using a pseudocolor plot of the dataset's Mach variable, a Mesh plot of the grid, and Vector plot of a slice through the Velocity field.
City rendering: An ESRI shapefile containing a polygonal description of the building footprints was read in and then the polygons were resampled onto a rectilinear grid, which was extruded into the featured cityscape.
Inbound traffic measured: This image is a visualization study of inbound traffic measured in billions of bytes on the NSFNET T1 backbone for the month of September 1991. The traffic volume range is depicted from purple (zero bytes) to white (100 billion bytes). It represents data collected by Merit Network, Inc.
Organizations
Important laboratories in the field are:
Electronic Visualization Laboratory
Kitware
Los Alamos National Laboratory
NASA Advanced Supercomputing Division
National Center for Supercomputing Applications
Sandia National Laboratory
San Diego Supercomputer Center
Scientific Computing and Imaging Institute
Texas Advanced Computing Center
Conferences in this field, ranked by significance in scientific visualization research, are:
IEEE Visualization
SIGGRAPH
EuroVis
Conference on Human Factors in Computing Systems (CHI)
Eurographics
PacificVis
See further: Computer graphics organizations, Supercomputing facilities
See also
General
Data Presentation Architecture
Data visualization
Mathematical visualization
Molecular graphics
Skin friction line
Sonification
Tensor glyph
Visual analytics
Publications
ACM Transactions on Graphics
IEEE Transactions on Visualization and Computer Graphics
SIAM Journal on Scientific Computing
The Visualization Handbook
Software
Amira
Avizo
Baudline
Bitplane
Dataplot
MeVisLab
NCAR Command Language
Orange
OpenVisus
Origin
ParaView
Tecplot
tomviz
VAPOR
Vis5D
VisAD
VisIt
VTK
:Category:Free data visualization software
References
Further reading
Charles D. Hansen and Christopher R. Johnson (eds.) (2005). The Visualization Handbook. Elsevier.
Bruce H. McCormick, Thomas A. DeFanti and Maxine D. Brown (eds.) (1987). Visualization in Scientific Computing. ACM Press.
Gregory M. Nielson, Hans Hagen and Heinrich Müller (1997). Scientific Visualization: Overviews, Methodologies, and Techniques. IEEE Computer Society.
Clifford A. Pickover (ed.) (1994). Frontiers of Scientific Visualization. New York: John Wiley Inc.
Lawrence J. Rosenblum (ed.) (1994). Scientific Visualization: Advances and challenges. Academic Press.
Will Schroeder, Ken Martin, Bill Lorensen (2003). The Visualization Toolkit. Kitware, Inc.
Leland Wilkinson (2005). The Grammar of Graphics, Springer.
External links
National Institute of Standards and Technology Scientific Visualizations, with an overview of applications.
Scientific Visualization Tutorials, Georgia Tech
NASA Scientific Visualization Studio. They facilitate scientific inquiry and outreach within NASA programs through visualization.
Subunit Studios Scientific and Molecular Visualization Studio. Scientific illustration and animation services for scientists by scientists.
scienceviz.com - Scientific Vizualisation, Simulation and CG Animation for Universities, Architects and Engineers
Articles containing video clips | 0.7806 | 0.977537 | 0.763065 |
Parallax | Parallax is a displacement or difference in the apparent position of an object viewed along two different lines of sight and is measured by the angle or half-angle of inclination between those two lines. Due to foreshortening, nearby objects show a larger parallax than farther objects, so parallax can be used to determine distances.
To measure large distances, such as the distance of a planet or a star from Earth, astronomers use the principle of parallax. Here, the term parallax is the semi-angle of inclination between two sight-lines to the star, as observed when Earth is on opposite sides of the Sun in its orbit. These distances form the lowest rung of what is called "the cosmic distance ladder", the first in a succession of methods by which astronomers determine the distances to celestial objects, serving as a basis for other distance measurements in astronomy forming the higher rungs of the ladder.
Parallax also affects optical instruments such as rifle scopes, binoculars, microscopes, and twin-lens reflex cameras that view objects from slightly different angles. Many animals, along with humans, have two eyes with overlapping visual fields that use parallax to gain depth perception; this process is known as stereopsis. In computer vision the effect is used for computer stereo vision, and there is a device called a parallax rangefinder that uses it to find the range, and in some variations also altitude to a target.
A simple everyday example of parallax can be seen in the dashboards of motor vehicles that use a needle-style mechanical speedometer. When viewed from directly in front, the speed may show exactly 60, but when viewed from the passenger seat, the needle may appear to show a slightly different speed due to the angle of viewing combined with the displacement of the needle from the plane of the numerical dial.
Visual perception
Because the eyes of humans and other animals are in different positions on the head, they present different views simultaneously. This is the basis of stereopsis, the process by which the brain exploits the parallax due to the different views from the eye to gain depth perception and estimate distances to objects.
Animals also use motion parallax, in which the animals (or just the head) move to gain different viewpoints. For example, pigeons (whose eyes do not have overlapping fields of view and thus cannot use stereopsis) bob their heads up and down to see depth.
The motion parallax is exploited also in wiggle stereoscopy, computer graphics that provide depth cues through viewpoint-shifting animation rather than through binocular vision.
Distance measurement
Parallax arises due to a change in viewpoint occurring due to the motion of the observer, of the observed, or both. What is essential is relative motion. By observing parallax, measuring angles, and using geometry, one can determine distance.
Distance measurement by parallax is a special case of the principle of triangulation, which states that one can solve for all the sides and angles in a network of triangles if, in addition to all the angles in the network, the length of at least one side has been measured. Thus, the careful measurement of the length of one baseline can fix the scale of an entire triangulation network. In parallax, the triangle is extremely long and narrow, and by measuring both its shortest side (the motion of the observer) and the small top angle (always less than 1 arcsecond, leaving the other two close to 90 degrees), the length of the long sides (in practice considered to be equal) can be determined.
In astronomy, assuming the angle is small, the distance to a star (measured in parsecs) is the reciprocal of the parallax (measured in arcseconds). For example, the distance to Proxima Centauri is 1/0.7687 ≈ 1.3009 parsecs.
On Earth, a coincidence rangefinder or parallax rangefinder can be used to find distance to a target. In surveying, the problem of resection explores angular measurements from a known baseline for determining an unknown point's coordinates.
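A small numerical sketch of both uses of the small-angle relation, the astronomical rule d [pc] = 1/p [arcsec] and an everyday triangulation d ≈ baseline/angle (the terrestrial numbers are assumed examples):

```python
# Distance from parallax.
def distance_parsec(parallax_arcsec):
    """Distance in parsecs from an annual parallax in arcseconds (small angles)."""
    return 1.0 / parallax_arcsec

print(distance_parsec(0.7687))   # Proxima Centauri: ~1.30 pc

# Small-angle triangulation with an assumed terrestrial baseline:
baseline_m, angle_rad = 0.5, 0.001
print(baseline_m / angle_rad)    # object at about 500 m
```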
Astronomy
Metrology
Measurements made by viewing the position of some marker relative to something to be measured are subject to parallax error if the marker is some distance away from the object of measurement and not viewed from the correct position. For example, if measuring the distance between two ticks on a line with a ruler marked on its top surface, the thickness of the ruler will separate its markings from the ticks. If viewed from a position not exactly perpendicular to the ruler, the apparent position will shift and the reading will be less accurate than the ruler is capable of.
A similar error occurs when reading the position of a pointer against a scale in an instrument such as an analog multimeter. To help the user avoid this problem, the scale is sometimes printed above a narrow strip of mirror, and the user's eye is positioned so that the pointer obscures its reflection, guaranteeing that the user's line of sight is perpendicular to the mirror and therefore to the scale. The same effect alters the speed read on a car's speedometer by a driver in front of it and a passenger off to the side, values read from a graticule, not in actual contact with the display on an oscilloscope, etc.
Photogrammetry
When viewed through a stereo viewer, an aerial picture pair offers a pronounced stereo effect of landscape and buildings. High buildings appear to "keel over" in the direction away from the center of the photograph. Measurements of this parallax are used to deduce the height of the buildings, provided that flying height and baseline distances are known. This is a key component of the process of photogrammetry.
Photography
Parallax error can be seen when taking photos with many types of cameras, such as twin-lens reflex cameras and those including viewfinders (such as rangefinder cameras). In such cameras, the eye sees the subject through different optics (the viewfinder, or a second lens) than the one through which the photo is taken. As the viewfinder is often found above the lens of the camera, photos with parallax error are often slightly lower than intended, the classic example being the image of a person with their head cropped off. This problem is addressed in single-lens reflex cameras, in which the viewfinder sees through the same lens through which the photo is taken (with the aid of a movable mirror), thus avoiding parallax error.
Parallax is also an issue in image stitching, such as for panoramas.
Weapon sights
Parallax affects sighting devices of ranged weapons in many ways. On sights fitted on small arms and bows, etc., the perpendicular distance between the sight and the weapon's launch axis (e.g. the bore axis of a gun)—generally referred to as "sight height"—can induce significant aiming errors when shooting at close range, particularly when shooting at small targets. This parallax error is compensated for (when needed) via calculations that also take in other variables such as bullet drop, windage, and the distance at which the target is expected to be. Sight height can be used to advantage when "sighting in" rifles for field use. A typical hunting rifle (.222 with telescopic sights) sighted in at 75m will still be useful from without needing further adjustment.
Optical sights
In some reticled optical instruments such as telescopes, microscopes or in telescopic sights ("scopes") used on small arms and theodolites, parallax can create problems when the reticle is not coincident with the focal plane of the target image. This is because when the reticle and the target are not at the same focus, the optically corresponded distances being projected through the eyepiece are also different, and the user's eye will register the difference in parallaxes between the reticle and the target (whenever eye position changes) as a relative displacement on top of each other. The term parallax shift refers to the resultant apparent "floating" movements of the reticle over the target image when the user moves his/her head/eye laterally (up/down or left/right) behind the sight, i.e. an error where the reticle does not stay aligned with the user's optical axis.
Some firearm scopes are equipped with a parallax compensation mechanism, which consists of a movable optical element that enables the optical system to shift the focus of the target image at varying distances into the same optical plane of the reticle (or vice versa). Many low-tier telescopic sights may have no parallax compensation because in practice they can still perform very acceptably without eliminating parallax shift. In this case, the scope is often set fixed at a designated parallax-free distance that best suits their intended usage. Typical standard factory parallax-free distances for hunting scopes are 100 yd (or 90 m) to make them suited for hunting shots that rarely exceed 300 yd/m. Some competition and military-style scopes without parallax compensation may be adjusted to be parallax free at ranges up to 300 yd/m to make them better suited for aiming at longer ranges. Scopes for guns with shorter practical ranges, such as airguns, rimfire rifles, shotguns, and muzzleloaders, will have parallax settings for shorter distances, commonly for rimfire scopes and for shotguns and muzzleloaders. Airgun scopes are very often found with adjustable parallax, usually in the form of an adjustable objective (or "AO" for short) design, and may adjust down to as near as .
Non-magnifying reflector or "reflex" sights can be theoretically "parallax free". But since these sights use parallel collimated light this is only true when the target is at infinity. At finite distances, eye movement perpendicular to the device will cause parallax movement in the reticle image in exact relationship to the eye position in the cylindrical column of light created by the collimating optics. Firearm sights, such as some red dot sights, try to correct for this via not focusing the reticle at infinity, but instead at some finite distance, a designed target range where the reticle will show very little movement due to parallax. Some manufacturers market reflector sight models they call "parallax free", but this refers to an optical system that compensates for off axis spherical aberration, an optical error induced by the spherical mirror used in the sight that can cause the reticle position to diverge off the sight's optical axis with change in eye position.
Artillery gunfire
Because of the positioning of field or naval artillery guns, each one has a slightly different perspective of the target relative to the location of the fire-control system itself. Therefore, when aiming its guns at the target, the fire control system must compensate for parallax in order to assure that fire from each gun converges on the target.
Art
Several of Mark Renn's sculptural works play with parallax, appearing abstract until viewed from a specific angle. One such sculpture is The Darwin Gate (pictured) in Shrewsbury, England, which from a certain angle appears to form a dome, according to Historic England, in "the form of a Saxon helmet with a Norman window... inspired by features of St Mary's Church which was attended by Charles Darwin as a boy".
As a metaphor
In a philosophic/geometric sense: an apparent change in the direction of an object, caused by a change in observational position that provides a new line of sight. The apparent displacement, or difference of position, of an object, as seen from two different stations, or points of view. In contemporary writing, parallax can also be the same story, or a similar story from approximately the same timeline, from one book, told from a different perspective in another book. The word and concept feature prominently in James Joyce's 1922 novel, Ulysses. Orson Scott Card also used the term when referring to Ender's Shadow as compared to Ender's Game.
The metaphor is invoked by Slovenian philosopher Slavoj Žižek in his 2006 book The Parallax View, borrowing the concept of "parallax view" from the Japanese philosopher and literary critic Kojin Karatani. Žižek notes
See also
Binocular disparity
Lutz–Kelker bias
Parallax mapping, in computer graphics
Parallax scrolling, in computer graphics
Spectroscopic parallax
Triangulation, wherein a point is calculated given its angles from other known points
Trigonometry
True range multilateration, wherein a point is calculated given its distances from other known points
Xallarap
Notes
References
Bibliography
.
External links
Instructions for having background images on a web page use parallax effects
Actual parallax project measuring the distance to the moon within 2.3%
BBC's Sky at Night program: Patrick Moore demonstrates Parallax using Cricket. (Requires RealPlayer)
Berkeley Center for Cosmological Physics Parallax
Parallax on an educational website, including a quick estimate of distance based on parallax using eyes and a thumb only
Angle
Astrometry
Geometry in computer vision
Optics
Trigonometry
Vision | 0.764339 | 0.998326 | 0.76306 |
Ductility | Ductility refers to the ability of a material to sustain significant plastic deformation before fracture. Plastic deformation is the permanent distortion of a material under applied stress, as opposed to elastic deformation, which is reversible upon removing the stress. Ductility is a critical mechanical performance indicator, particularly in applications that require materials to bend, stretch, or deform in other ways without breaking. The extent of ductility can be quantitatively assessed using the percent elongation at break, given by the equation:
%EL = (lf − l0)/l0 × 100, where lf is the length of the material after fracture and l0 is the original length before testing. This formula helps in quantifying how much a material can stretch under tensile stress before failure, providing key insights into its ductile behavior. Ductility is an important consideration in engineering and manufacturing. It defines a material's suitability for certain manufacturing operations (such as cold working) and its capacity to absorb mechanical overload. Some metals that are generally described as ductile include gold and copper, while platinum is the most ductile of all metals in pure form. However, not all metals experience ductile failure; some, such as cast iron, are instead characterized by brittle failure. Polymers generally can be viewed as ductile materials as they typically allow for plastic deformation.
Inorganic materials, including a wide variety of ceramics and semiconductors, are generally characterized by their brittleness. This brittleness primarily stems from their strong ionic or covalent bonds, which maintain the atoms in a rigid, densely packed arrangement. Such a rigid lattice structure restricts the movement of atoms or dislocations, essential for plastic deformation. The significant difference in ductility observed between metals and inorganic semiconductor or insulator can be traced back to each material’s inherent characteristics, including the nature of their defects, such as dislocations, and their specific chemical bonding properties. Consequently, unlike ductile metals and some organic materials with ductility (%EL) from 1.2% to over 1200%, brittle inorganic semiconductors and ceramic insulators typically show much smaller ductility at room temperature.
Malleability, a similar mechanical property, is characterized by a material's ability to deform plastically without failure under compressive stress. Historically, materials were considered malleable if they were amenable to forming by hammering or rolling. Lead is an example of a material which is relatively malleable but not ductile.
Materials science
Ductility is especially important in metalworking, as materials that crack, break or shatter under stress cannot be manipulated using metal-forming processes such as hammering, rolling, drawing or extruding. Malleable materials can be formed cold using stamping or pressing, whereas brittle materials may be cast or thermoformed.
High degrees of ductility occur due to metallic bonds, which are found predominantly in metals; this leads to the common perception that metals are ductile in general. In metallic bonds valence shell electrons are delocalized and shared between many atoms. The delocalized electrons allow metal atoms to slide past one another without being subjected to strong repulsive forces that would cause other materials to shatter.
The ductility of steel varies depending on the alloying constituents. Increasing the levels of carbon decreases ductility. Many plastics and amorphous solids, such as Play-Doh, are also malleable. The most ductile metal is platinum and the most malleable metal is gold. When highly stretched, such metals distort via formation, reorientation and migration of dislocations and crystal twins without noticeable hardening.
Quantification
Basic definitions
The quantities commonly used to define ductility in a tension test are relative elongation (in percent, sometimes denoted as ) and reduction of area (sometimes denoted as ) at fracture. Fracture strain is the engineering strain at which a test specimen fractures during a uniaxial tensile test. Percent elongation, or engineering strain at fracture, can be written as:
Percent reduction in area can be written as:
where the area of concern is the cross-sectional area of the gauge of the specimen.
According to Shigley's Mechanical Engineering Design, significant denotes about 5.0 percent elongation.
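A small sketch computing both ductility measures from assumed tensile-test readings (gauge length l and gauge cross-sectional area A; the symbols and numbers are assumptions for the example):

```python
# Ductility measures from a tensile test (assumed example measurements).
def percent_elongation(l0, lf):
    """Percent elongation from original gauge length l0 to fracture length lf."""
    return (lf - l0) / l0 * 100.0

def percent_reduction_in_area(A0, Af):
    """Percent reduction in area from original area A0 to fracture area Af."""
    return (A0 - Af) / A0 * 100.0

# Example: a 50 mm gauge stretching to 62 mm, and a 78.5 mm^2 section necking
# down to 51.0 mm^2.
print(percent_elongation(50.0, 62.0))           # 24.0 %
print(percent_reduction_in_area(78.5, 51.0))    # ~35.0 %
```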
Effect of sample dimensions
An important point concerning the value of the ductility (nominal strain at failure) in a tensile test is that it commonly exhibits a dependence on sample dimensions. However, a universal parameter should exhibit no such dependence (and, indeed, there is no dependence for properties such as stiffness, yield stress and ultimate tensile strength). This occurs because the measured strain (displacement) at fracture commonly incorporates contributions from both the uniform deformation occurring up to the onset of necking and the subsequent deformation of the neck (during which there is little or no deformation in the rest of the sample). The significance of the contribution from neck development depends on the "aspect ratio" (length / diameter) of the gauge length, being greater when the ratio is low. This is a simple geometric effect, which has been clearly identified. There have been both experimental studies and theoretical explorations of the effect, mostly based on Finite Element Method (FEM) modelling. Nevertheless, it is not universally appreciated and, since the range of sample dimensions in common use is quite wide, it can lead to highly significant variations (by factors of up to 2 or 3) in ductility values obtained for the same material in different tests.
A more meaningful representation of ductility would be obtained by identifying the strain at the onset of necking, which should be independent of sample dimensions. This point can be difficult to identify on a (nominal) stress-strain curve, because the peak (representing the onset of necking) is often relatively flat. Moreover, some (brittle) materials fracture before the onset of necking, such that there is no peak. In practice, for many purposes it is preferable to carry out a different kind of test, designed to evaluate the toughness (energy absorbed during fracture), rather than use ductility values obtained in tensile tests.
In an absolute sense, "ductility" values are therefore virtually meaningless. The actual (true) strain in the neck at the point of fracture bears no direct relation to the raw number obtained from the nominal stress-strain curve; the true strain in the neck is often considerably higher. Also, the true stress at the point of fracture is usually higher than the apparent value according to the plot. The load often drops while the neck develops, but the sectional area in the neck is also dropping (more sharply), so the true stress there is rising. There is no simple way of estimating this value, since it depends on the geometry of the neck. While the true strain at fracture is a genuine indicator of "ductility", it cannot readily be obtained from a conventional tensile test.
The Reduction in Area (RA) is defined as the decrease in sectional area at the neck (usually obtained by measurement of the diameter at one or both of the fractured ends), divided by the original sectional area. It is sometimes stated that this is a more reliable indicator of the "ductility" than the elongation at failure (partly in recognition of the fact that the latter is dependent on the aspect ratio of the gauge length, although this dependence is far from being universally appreciated). There is something in this argument, but the RA is still some way from being a genuinely meaningful parameter. One objection is that it is not easy to measure accurately, particularly with samples that are not circular in section. Rather more fundamentally, it is affected by both the uniform plastic deformation that took place before necking and by the development of the neck. Furthermore, it is sensitive to exactly what happens in the latter stages of necking, when the true strain is often becoming very high and the behavior is of limited significance in terms of a meaningful definition of strength (or toughness). There has again been extensive study of this issue.
Ductile–brittle transition temperature
Metals can undergo two different types of fractures: brittle fracture or ductile fracture. Failure propagation occurs faster in brittle materials due to the ability for ductile materials to undergo plastic deformation. Thus, ductile materials are able to sustain more stress due to their ability to absorb more energy prior to failure than brittle materials are. The plastic deformation results in the material following a modification of the Griffith equation, where the critical fracture stress increases due to the plastic work required to extend the crack adding to the work necessary to form the crack - work corresponding to the increase in surface energy that results from the formation of an addition crack surface. The plastic deformation of ductile metals is important as it can be a sign of the potential failure of the metal. Yet, the point at which the material exhibits a ductile behavior versus a brittle behavior is not only dependent on the material itself but also on the temperature at which the stress is being applied to the material. The temperature where the material changes from brittle to ductile or vice versa is crucial for the design of load-bearing metallic products. The minimum temperature at which the metal transitions from a brittle behavior to a ductile behavior, or from a ductile behavior to a brittle behavior, is known as the ductile-brittle transition temperature (DBTT). Below the DBTT, the material will not be able to plastically deform, and the crack propagation rate increases rapidly leading to the material undergoing brittle failure rapidly. Furthermore, DBTT is important since, once a material is cooled below the DBTT, it has a much greater tendency to shatter on impact instead of bending or deforming (low temperature embrittlement). Thus, the DBTT indicates the temperature at which, as temperature decreases, a material's ability to deform in a ductile manner decreases and so the rate of crack propagation drastically increases. In other words, solids are very brittle at very low temperatures, and their toughness becomes much higher at elevated temperatures.
For more general applications, it is preferred to have a lower DBTT to ensure the material has a wider ductility range. This ensures that sudden cracks are inhibited so that failures in the metal body are prevented. It has been determined that the more slip systems a material has, the wider the range of temperatures ductile behavior is exhibited at. This is due to the slip systems allowing for more motion of dislocations when a stress is applied to the material. Thus, in materials with a lower amount of slip systems, dislocations are often pinned by obstacles leading to strain hardening, which increases the materials strength which makes the material more brittle. For this reason, FCC (face centered cubic) structures are ductile over a wide range of temperatures, BCC (body centered cubic) structures are ductile only at high temperatures, and HCP (hexagonal closest packed) structures are often brittle over wide ranges of temperatures. This leads to each of these structures having different performances as they approach failure (fatigue, overload, and stress cracking) under various temperatures, and shows the importance of the DBTT in selecting the correct material for a specific application. For example, zamak 3 exhibits good ductility at room temperature but shatters when impacted at sub-zero temperatures. DBTT is a very important consideration in selecting materials that are subjected to mechanical stresses. A similar phenomenon, the glass transition temperature, occurs with glasses and polymers, although the mechanism is different in these amorphous materials. The DBTT is also dependent on the size of the grains within the metal, as typically smaller grain size leads to an increase in tensile strength, resulting in an increase in ductility and decrease in the DBTT. This increase in tensile strength is due to the smaller grain sizes resulting in grain boundary hardening occurring within the material, where the dislocations require a larger stress to cross the grain boundaries and continue to propagate throughout the material. It has been shown that by continuing to refine ferrite grains to reduce their size, from 40 microns down to 1.3 microns, that it is possible to eliminate the DBTT entirely so that a brittle fracture never occurs in ferritic steel (as the DBTT required would be below absolute zero).
In some materials the transition is sharper than in others, and a sharp transition typically requires a temperature-sensitive deformation mechanism. For example, in materials with a body-centered cubic (BCC) lattice the DBTT is readily apparent, because the motion of screw dislocations is very temperature-sensitive: the rearrangement of the dislocation core prior to slip requires thermal activation. This can be problematic for steels with a high ferrite content, and famously resulted in serious hull cracking in Liberty ships in colder waters during World War II, causing many sinkings. The DBTT can also be influenced by external factors such as neutron radiation, which increases internal lattice defects, decreasing ductility and raising the DBTT.
The most accurate method of measuring the DBTT of a material is fracture testing. Typically, four-point bend testing over a range of temperatures is performed on pre-cracked bars of polished material. Two impact tests are typically used to determine the DBTT of specific metals: the Charpy V-notch test and the Izod test. The Charpy V-notch test determines the impact-energy absorption ability, or toughness, of the specimen by measuring the potential-energy difference resulting from the collision between a mass on a free-falling pendulum and the machined V-shaped notch in the sample, with the pendulum breaking through the sample. The DBTT is determined by repeating this test over a variety of temperatures and noting when the resulting fracture changes to brittle behavior, which occurs when the absorbed energy drops dramatically. The Izod test is essentially the same as the Charpy test, the only differentiating factor being the placement of the sample: in the Izod test the sample is held vertically, while in the Charpy test it is placed horizontally with respect to the bottom of the base.
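In practice the DBTT is often estimated by fitting a sigmoidal transition curve to Charpy absorbed-energy data measured over a range of temperatures. A minimal sketch in Python, using illustrative (not measured) data and taking the mid-shelf temperature of the fitted curve as the DBTT:

```python
# Sketch: estimating a DBTT by fitting a tanh transition curve to Charpy
# impact-energy data. The temperatures and energies below are illustrative.
import numpy as np
from scipy.optimize import curve_fit

T = np.array([-80, -60, -40, -20, 0, 20, 40, 60], dtype=float)   # deg C
E = np.array([5, 7, 12, 30, 80, 130, 150, 155], dtype=float)     # joules

def transition(T, E_lower, E_upper, T0, width):
    """Lower/upper shelf energies joined by a smooth tanh step centred at T0."""
    return E_lower + 0.5 * (E_upper - E_lower) * (1.0 + np.tanh((T - T0) / width))

popt, _ = curve_fit(transition, T, E, p0=[5.0, 150.0, 0.0, 20.0])
E_lower, E_upper, T0, width = popt
print(f"Estimated DBTT (mid-shelf temperature): {T0:.1f} degrees C")
```

Other conventions exist, such as taking the temperature at a fixed absorbed energy (for example 27 J) rather than the mid-shelf point.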
For experiments conducted at higher temperatures, dislocation activity increases. At a certain temperature, dislocations shield the crack tip to such an extent that the applied deformation rate is not sufficient for the stress intensity at the crack tip to reach the critical value for fracture (KIC). The temperature at which this occurs is the ductile-brittle transition temperature. If experiments are performed at a higher strain rate, more dislocation shielding is required to prevent brittle fracture, and the transition temperature is raised.
See also
Deformation
Work hardening, which improves ductility in uniaxial tension by delaying the onset of instability
Strength of materials
Further reading
References
External links
Ductility definition at engineersedge.com
DoITPoMS Teaching and Learning Package - "The Ductile-Brittle Transition"
Continuum mechanics
Deformation (mechanics)
Physical properties
Schwarzschild geodesics
In general relativity, Schwarzschild geodesics describe the motion of test particles in the gravitational field of a central fixed mass, that is, motion in the Schwarzschild metric. Schwarzschild geodesics have been pivotal in the validation of Einstein's theory of general relativity. For example, they provide accurate predictions of the anomalous precession of the planets in the Solar System and of the deflection of light by gravity.
Schwarzschild geodesics pertain only to the motion of particles of masses so small they contribute little to the gravitational field. However, they are highly accurate in many astrophysical scenarios provided that the particle mass m is many-fold smaller than the central mass M, e.g., for planets orbiting their star. Schwarzschild geodesics are also a good approximation to the relative motion of two bodies of arbitrary mass, provided that the Schwarzschild mass is set equal to the sum of the two individual masses m1 and m2. This is important in predicting the motion of binary stars in general relativity.
Historical context
The Schwarzschild metric is named in honour of its discoverer Karl Schwarzschild, who found the solution in 1915, only about a month after the publication of Einstein's theory of general relativity. It was the first exact solution of the Einstein field equations other than the trivial flat space solution.
In 1931, Yusuke Hagihara published a paper showing that the trajectory of a test particle in the Schwarzschild metric can be expressed in terms of elliptic functions.
In 1949, Samuil Kaplan showed that there is a minimum radius below which a circular orbit is no longer stable in the Schwarzschild metric.
Schwarzschild metric
An exact solution to the Einstein field equations is the Schwarzschild metric, which corresponds to the external gravitational field of an uncharged, non-rotating, spherically symmetric body of mass M. The Schwarzschild solution can be written as
c²dτ² = (1 − rs/r) c²dt² − dr²/(1 − rs/r) − r²dθ² − r² sin²θ dφ²
where
τ, in the case of a test particle of small positive mass, is the proper time (time measured by a clock moving with the particle) in seconds,
c is the speed of light in meters per second,
t is, for r > rs, the time coordinate (time measured by a stationary clock at infinity) in seconds,
r is, for r > rs, the radial coordinate (circumference of a circle centered at the star divided by 2π) in meters,
θ is the colatitude (angle from North) in radians,
φ is the longitude in radians, and
rs is the Schwarzschild radius of the massive body (in meters), which is related to its mass M by
rs = 2GM/c², where G is the gravitational constant. The classical Newtonian theory of gravity is recovered in the limit as the ratio rs/r goes to zero. In that limit, the metric returns to that defined by special relativity.
In practice, this ratio is almost always extremely small. For example, the Schwarzschild radius of the Earth is roughly 9 mm (about 0.35 inch); at the surface of the Earth, the corrections to Newtonian gravity are only one part in a billion. The Schwarzschild radius of the Sun is much larger, roughly 2953 meters, but at its surface, the ratio is roughly 4 parts in a million. A white dwarf star is much denser, but even here the ratio at its surface is roughly 250 parts in a million. The ratio only becomes large close to ultra-dense objects such as neutron stars (where the ratio is roughly 50%) and black holes.
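A quick numerical check of these figures, using standard rounded values for the constants, masses and radii:

```python
# Sketch: Schwarzschild radius r_s = 2*G*M/c**2 and the surface ratio r_s/r
# for the Earth and the Sun (rounded reference values).
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s

bodies = {
    "Earth": (5.972e24, 6.371e6),   # mass (kg), mean radius (m)
    "Sun":   (1.989e30, 6.957e8),
}

for name, (M, r) in bodies.items():
    r_s = 2 * G * M / c**2
    print(f"{name}: r_s = {r_s:.3g} m, r_s/r = {r_s / r:.2e}")
# Earth: r_s is about 9 mm and r_s/r about 1.4e-9 (one part in a billion);
# Sun:   r_s is about 2.95 km and r_s/r about 4.2e-6 (4 parts in a million).
```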
Orbits of test particles
We may simplify the problem by using symmetry to eliminate one variable from consideration. Since the Schwarzschild metric is symmetrical about the equatorial plane θ = π/2, any geodesic that begins moving in that plane will remain in that plane indefinitely (the plane is totally geodesic). Therefore, we orient the coordinate system so that the orbit of the particle lies in that plane, and fix the θ coordinate to be π/2 so that the metric (of this plane) simplifies to
c²dτ² = (1 − rs/r) c²dt² − dr²/(1 − rs/r) − r²dφ²
Two constants of motion (values that do not change over proper time ) can be identified (cf. the derivation given below). One is the total energy :
and the other is the specific angular momentum:
where is the total angular momentum of the two bodies, and is the reduced mass. When , the reduced mass is approximately equal to . Sometimes it is assumed that . In the case of the planet Mercury this simplification introduces an error more than twice as large as the relativistic effect. When discussing geodesics, can be considered fictitious, and what matters are the constants and . In order to cover all possible geodesics, we need to consider cases in which is infinite (giving trajectories of photons) or imaginary (for tachyonic geodesics). For the photonic case, we also need to specify a number corresponding to the ratio of the two constants, namely , which may be zero or a non-zero real number.
Substituting these constants into the definition of the Schwarzschild metric
yields an equation of motion for the radius as a function of the proper time :
The formal solution to this is
Note that the square root will be imaginary for tachyonic geodesics.
Using the relation higher up between and , we can also write
Since asymptotically the integrand is inversely proportional to , this shows that in the frame of reference if approaches it does so exponentially without ever reaching it. However, as a function of , does reach .
The above solutions are valid while the integrand is finite, but a total solution may involve two or an infinity of pieces, each described by the integral but with alternating signs for the square root.
When and , we can solve for and explicitly:
and for photonic geodesics with zero angular momentum
(Although the proper time is trivial in the photonic case, one can define an affine parameter , and then the solution to the geodesic equation is .)
Another solvable case is that in which and and are constant. In the volume where this gives for the proper time
This is close to solutions with small and positive. Outside of the solution is tachyonic and the "proper time" is space-like:
This is close to other tachyonic solutions with small and negative. The constant tachyonic geodesic outside is not continued by a constant geodesic inside , but rather continues into a "parallel exterior region" (see Kruskal–Szekeres coordinates). Other tachyonic solutions can enter a black hole and re-exit into the parallel exterior region. The constant solution inside the event horizon is continued by a constant solution in a white hole.
When the angular momentum is not zero we can replace the dependence on proper time by a dependence on the angle using the definition of
which yields the equation for the orbit
where, for brevity, two length-scales, and , have been defined by
Note that in the tachyonic case, will be imaginary and real or infinite.
The same equation can also be derived using a Lagrangian approach or the Hamilton–Jacobi equation (see below). The solution of the orbit equation is
This can be expressed in terms of the Weierstrass elliptic function .
Local and delayed velocities
Unlike in classical mechanics, in Schwarzschild coordinates and are not the radial and transverse components of the local velocity (relative to a stationary observer), instead they give the components for the celerity which are related to by
for the radial and
for the transverse component of motion, with . The coordinate bookkeeper far away from the scene observes the Shapiro-delayed velocity , which is given by the relation
and .
The time dilation factor between the bookkeeper and the moving test-particle can also be put into the form
where the numerator is the gravitational, and the denominator is the kinematic component of the time dilation. For a particle falling in from infinity the left factor equals the right factor, since the in-falling velocity matches the escape velocity in this case.
The two constants angular momentum and total energy of a test-particle with mass are in terms of
and
where
and
For massive test particles is the Lorentz factor and is the proper time, while for massless particles like photons is set to and takes the role of an affine parameter. If the particle is massless is replaced with and with , where is the Planck constant and the locally observed frequency.
Exact solution using elliptic functions
The fundamental equation of the orbit is easier to solve if it is expressed in terms of the inverse radius
The right-hand side of this equation is a cubic polynomial, which has three roots, denoted here as , , and
The sum of the three roots equals the coefficient of the term
A cubic polynomial with real coefficients can either have three real roots, or one real root and two complex conjugate roots. If all three roots are real numbers, the roots are labeled so that . If instead there is only one real root, then that is denoted as ; the complex conjugate roots are labeled and . Using Descartes' rule of signs, there can be at most one negative root; is negative if and only if . As discussed below, the roots are useful in determining the types of possible orbits.
Given this labeling of the roots, the solution of the fundamental orbital equation is
where sn represents the function (one of the Jacobi elliptic functions) and is a constant of integration reflecting the initial position. The elliptic modulus of this elliptic function is given by the formula
Newtonian limit
To recover the Newtonian solution for the planetary orbits, one takes the limit as the Schwarzschild radius rs goes to zero. In this case, the third root u3 becomes roughly 1/rs, and much larger than u1 or u2. Therefore, the modulus tends to zero; in that limit, sn becomes the trigonometric sine function
Consistent with Newton's solutions for planetary motions, this formula describes a focal conic of eccentricity
If is a positive real number, then the orbit is an ellipse where and represent the distances of furthest and closest approach, respectively. If is zero or a negative real number, the orbit is a parabola or a hyperbola, respectively. In these latter two cases, represents the distance of closest approach; since the orbit goes to infinity, there is no distance of furthest approach.
Roots and overview of possible orbits
A root represents a point of the orbit where the derivative vanishes, i.e., where . At such a turning point, reaches a maximum, a minimum, or an inflection point, depending on the value of the second derivative, which is given by the formula
If all three roots are distinct real numbers, the second derivative is positive, negative, and positive at u1, u2, and u3, respectively. It follows that a graph of u versus φ may either oscillate between u1 and u2, or it may move away from u3 towards infinity (which corresponds to r going to zero). If u1 is negative, only part of an "oscillation" will actually occur. This corresponds to the particle coming from infinity, getting near the central mass, and then moving away again toward infinity, like the hyperbolic trajectory in the classical solution.
If the particle has just the right amount of energy for its angular momentum, u2 and u3 will merge. There are three solutions in this case. The orbit may spiral in to the radius of the merged root, approaching that radius as (asymptotically) a decreasing exponential in φ, τ, or t. Or one can have a circular orbit at that radius. Or one can have an orbit that spirals down from that radius to the central point. The radius in question is called the inner radius and is between 1.5 and 3 times rs. A circular orbit also results when u2 is equal to u1, and this is called the outer radius. These different types of orbits are discussed below.
If the particle approaches the central mass with sufficient energy and sufficiently low angular momentum then only u3 will be real. This corresponds to the particle falling into a black hole. The orbit spirals in with a finite change in φ.
Precession of orbits
The function sn and its square sn2 have periods of 4K and 2K, respectively, where K is defined by the equation
Therefore, the change in φ over one oscillation of (or, equivalently, one oscillation of ) equals
In the classical limit, u3 approaches and is much larger than or . Hence, is approximately
For the same reasons, the denominator of Δφ is approximately
Since the modulus is close to zero, the period K can be expanded in powers of ; to lowest order, this expansion yields
Substituting these approximations into the formula for Δφ yields a formula for angular advance per radial oscillation
For an elliptical orbit, and represent the inverses of the longest and shortest distances, respectively. These can be expressed in terms of the ellipse's semi-major axis and its orbital eccentricity ,
giving
Substituting the definition of gives the final equation
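Whatever the exact symbolic form of that final equation, the per-orbit advance it expresses, δφ ≈ 6πGM/(c²A(1 − e²)), is easy to evaluate numerically. A minimal sketch for Mercury, with rounded orbital elements, reproducing the famous result of about 43 arcseconds per century:

```python
# Sketch: relativistic perihelion advance per orbit,
#   delta_phi = 6*pi*G*M / (c^2 * A * (1 - e^2)),
# evaluated for Mercury and converted to arcseconds per century.
import math

G, c = 6.674e-11, 2.998e8
M_sun = 1.989e30
A, e = 5.791e10, 0.2056        # semi-major axis (m) and eccentricity of Mercury
T_orbit_days = 87.969

dphi = 6 * math.pi * G * M_sun / (c**2 * A * (1 - e**2))   # radians per orbit
arcsec_per_orbit = math.degrees(dphi) * 3600
orbits_per_century = 100 * 365.25 / T_orbit_days
print(f"{arcsec_per_orbit * orbits_per_century:.1f} arcsec per century")   # ~43
```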
Bending of light by gravity
In the limit as the particle mass m goes to zero (or, equivalently if the light is heading directly toward the central mass, as the length-scale a goes to infinity), the equation for the orbit becomes
Expanding in powers of , the leading order term in this formula gives the approximate angular deflection δφ for a massless particle coming in from infinity and going back out to infinity:
Here, is the impact parameter, somewhat greater than the distance of closest approach, :
Although this formula is approximate, it is accurate for most measurements of gravitational lensing, due to the smallness of the ratio . For light grazing the surface of the sun, the approximate angular deflection is roughly 1.75 arcseconds, roughly one millionth part of a circle.
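A minimal numerical check of this figure, using the weak-field deflection δφ ≈ 2rs/b = 4GM/(c²b) with the impact parameter b taken as the solar radius (rounded constants):

```python
# Sketch: light deflection for a ray grazing the Sun, delta_phi ~= 2*r_s/b.
import math

G, c = 6.674e-11, 2.998e8
M_sun, R_sun = 1.989e30, 6.957e8

r_s = 2 * G * M_sun / c**2
delta_phi = 2 * r_s / R_sun                              # radians
print(f"{math.degrees(delta_phi) * 3600:.2f} arcseconds")   # about 1.75
```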
More generally, the geodesics of a photon emitted from a light source located at a radial coordinate can be calculated as follows, by applying the equation
The equation can be derived as
which leads to
This equation with second derivative can be numerically integrated as follows by a 4th order Runge-Kutta method, considering a step size and with:
,
,
and
.
The value at the next step is
and the value at the next step is
The step can be chosen to be constant or adaptive, depending on the accuracy required on .
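As an illustration of such an integration, here is a minimal RK4 sketch applied to the equivalent second-order photon orbit equation d²u/dφ² = (3/2)rs u² − u, with u = 1/r; the initial conditions and step size are illustrative, not tied to any particular light source:

```python
# Sketch: RK4 integration of the photon orbit equation
#   d^2u/dphi^2 = 1.5*r_s*u**2 - u,   u = 1/r,
# written as a first-order system in (u, du/dphi).
def rhs(state, r_s):
    u, up = state
    return (up, 1.5 * r_s * u**2 - u)

def rk4_step(state, h, r_s):
    k1 = rhs(state, r_s)
    k2 = rhs((state[0] + 0.5*h*k1[0], state[1] + 0.5*h*k1[1]), r_s)
    k3 = rhs((state[0] + 0.5*h*k2[0], state[1] + 0.5*h*k2[1]), r_s)
    k4 = rhs((state[0] + h*k3[0], state[1] + h*k3[1]), r_s)
    return (state[0] + h/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
            state[1] + h/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1]))

# Example: a ray arriving from far away with impact parameter b = 20 r_s,
# so u ~ 0 and du/dphi = 1/b at phi = 0.
r_s, b, h = 1.0, 20.0, 1e-3
state = (1e-6, 1.0 / b)
for _ in range(10_000):
    state = rk4_step(state, h, r_s)
```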
Relation to Newtonian physics
Effective radial potential energy
The equation of motion for the particle derived above
can be rewritten using the definition of the Schwarzschild radius rs as
which is equivalent to a particle moving in a one-dimensional effective potential
The first two terms are well-known classical energies, the first being the attractive Newtonian gravitational potential energy and the second corresponding to the repulsive "centrifugal" potential energy; however, the third term is an attractive energy unique to general relativity. As shown below and elsewhere, this inverse-cubic energy causes elliptical orbits to precess gradually by an angle
δφ ≈ 6πGM / (c²A(1 − e²))
per revolution, where A is the semi-major axis and e is the eccentricity.
The third term is attractive and dominates at small values, giving a critical inner radius rinner at which a particle is drawn inexorably inwards to ; this inner radius is a function of the particle's angular momentum per unit mass or, equivalently, the length-scale defined above.
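Written per unit test-particle mass, and with L denoting the specific angular momentum (a normalization chosen here for definiteness; the article's own convention may instead carry factors of the reduced mass), the effective potential referred to above takes the standard form

\[
V(r) = -\frac{GM}{r} + \frac{L^2}{2r^2} - \frac{GM\,L^2}{c^2 r^3}.
\]

The first two terms are the Newtonian and centrifugal contributions discussed above; the inverse-cube third term is the purely general-relativistic correction responsible for the precession and for the unstable inner circular radius.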
Circular orbits and their stability
The effective potential can be re-written in terms of the length .
Circular orbits are possible when the effective force is zero
i.e., when the two attractive forces — Newtonian gravity (first term) and the attraction unique to general relativity (third term) — are exactly balanced by the repulsive centrifugal force (second term). There are two radii at which this balancing can occur, denoted here as rinner and router
which are obtained using the quadratic formula. The inner radius rinner is unstable, because the attractive third force strengthens much faster than the other two forces when r becomes small; if the particle slips slightly inwards from rinner (where all three forces are in balance), the third force dominates the other two and draws the particle inexorably inwards to r = 0. At the outer radius, however, the circular orbits are stable; the third term is less important and the system behaves more like the non-relativistic Kepler problem.
When is much greater than (the classical case), these formulae become approximately
Substituting the definitions of and rs into router yields the classical formula for a particle of mass orbiting a body of mass .
where ωφ is the orbital angular speed of the particle. This formula is obtained in non-relativistic mechanics by setting the centrifugal force equal to the Newtonian gravitational force:
Where is the reduced mass.
In our notation, the classical orbital angular speed equals
At the other extreme, when a2 approaches 3rs2 from above, the two radii converge to a single value
The quadratic solutions above ensure that router is always greater than 3rs, whereas rinner lies between 1.5rs and 3rs. Circular orbits smaller than 1.5rs are not possible. For massless particles, a goes to infinity, implying that there is a circular orbit for photons at rinner = 1.5rs. The sphere of this radius is sometimes known as the photon sphere.
Precession of elliptical orbits
The orbital precession rate may be derived using this radial effective potential V. A small radial deviation from a circular orbit of radius router will oscillate stably with an angular frequency
which equals
Taking the square root of both sides and performing a Taylor series expansion yields
Multiplying by the period T of one revolution gives the precession of the orbit per revolution
where we have used ωφT = 2π and the definition of the length-scale a. Substituting the definition of the Schwarzschild radius rs gives
This may be simplified using the elliptical orbit's semiaxis A and eccentricity e related by the formula
to give the precession angle
Mathematical derivations of the orbital equation
Christoffel symbols
The non-vanishing Christoffel symbols for the Schwarzschild-metric are:
Geodesic equation
According to Einstein's theory of general relativity, particles of negligible mass travel along geodesics in the space-time. In flat space-time, far from a source of gravity, these geodesics correspond to straight lines; however, they may deviate from straight lines when the space-time is curved. The equation for the geodesic lines is
d²x^μ/dq² + Γ^μ_{αβ} (dx^α/dq)(dx^β/dq) = 0
where Γ represents the Christoffel symbol and the variable q parametrizes the particle's path through space-time, its so-called world line. The Christoffel symbol depends only on the metric tensor , or rather on how it changes with position. The variable q is a constant multiple of the proper time for timelike orbits (which are traveled by massive particles), and is usually taken to be equal to it. For lightlike (or null) orbits (which are traveled by massless particles such as the photon), the proper time is zero and, strictly speaking, cannot be used as the variable q. Nevertheless, lightlike orbits can be derived as the ultrarelativistic limit of timelike orbits, that is, the limit as the particle mass m goes to zero while holding its total energy fixed.
Therefore, to solve for the motion of a particle, the most straightforward way is to solve the geodesic equation, an approach adopted by Einstein and others. The Schwarzschild metric may be written as
where the two functions and its reciprocal are defined for brevity. From this metric, the Christoffel symbols may be calculated, and the results substituted into the geodesic equations
It may be verified that is a valid solution by substitution into the first of these four equations. By symmetry, the orbit must be planar, and we are free to arrange the coordinate frame so that the equatorial plane is the plane of the orbit. This solution simplifies the second and fourth equations.
To solve the second and third equations, it suffices to divide them by and , respectively.
which yields two constants of motion.
Lagrangian approach
Because test particles follow geodesics in a fixed metric, the orbits of those particles may be determined using the calculus of variations, also called the Lagrangian approach. Geodesics in space-time are defined as curves for which small local variations in their coordinates (while holding their endpoints events fixed) make no significant change in their overall length s. This may be expressed mathematically using the calculus of variations
where τ is the proper time, s = cτ is the arc-length in space-time and T is defined as
in analogy with kinetic energy. If the derivative with respect to proper time is represented by a dot for brevity
T may be written as
Constant factors (such as c or the square root of two) don't affect the answer to the variational problem; therefore, taking the variation inside the integral yields Hamilton's principle
The solution of the variational problem is given by Lagrange's equations
When applied to t and φ, these equations reveal two constants of motion
which may be expressed in terms of two constant length-scales, and
As shown above, substitution of these equations into the definition of the Schwarzschild metric yields the equation for the orbit.
Hamiltonian approach
A Lagrangian solution can be recast into an equivalent Hamiltonian form. In this case, the Hamiltonian is given by
Once again, the orbit may be restricted to by symmetry. Since and do not appear in the Hamiltonian, their conjugate momenta are constant; they may be expressed in terms of the speed of light and two constant length-scales and
The derivatives with respect to proper time are given by
Dividing the first equation by the second yields the orbital equation
The radial momentum pr can be expressed in terms of r using the constancy of the Hamiltonian ; this yields the fundamental orbital equation
Hamilton–Jacobi approach
The orbital equation can be derived from the Hamilton–Jacobi equation. The advantage of this approach is that it equates the motion of the particle with the propagation of a wave, and leads neatly into the derivation of the deflection of light by gravity in general relativity, through Fermat's principle. The basic idea is that, due to gravitational slowing of time, parts of a wave-front closer to a gravitating mass move more slowly than those further away, thus bending the direction of the wave-front's propagation.
Using general covariance, the Hamilton–Jacobi equation for a single particle of unit mass can be expressed in arbitrary coordinates as
This is equivalent to the Hamiltonian formulation above, with the partial derivatives of the action taking the place of the generalized momenta. Using the Schwarzschild metric gμν, this equation becomes
where we again orient the spherical coordinate system with the plane of the orbit. The time t and azimuthal angle φ are cyclic coordinates, so that the solution for Hamilton's principal function S can be written
where and are the constant generalized momenta. The Hamilton–Jacobi equation gives an integral solution for the radial part
Taking the derivative of Hamilton's principal function S with respect to the conserved momentum pφ yields
which equals
Taking an infinitesimal variation in φ and r yields the fundamental orbital equation
where the conserved length-scales a and b are defined by the conserved momenta by the equations
Hamilton's principle
The action integral for a particle affected only by gravity is
where is the proper time and is any smooth parameterization of the particle's world line. If one applies the calculus of variations to this, one again gets the equations for a geodesic. To simplify the calculations, one first takes the variation of the square of the integrand. For the metric and coordinates of this case and assuming that the particle is moving in the equatorial plane , that square is
Taking variation of this gives
Motion in longitude
Vary with respect to longitude only to get
Divide by to get the variation of the integrand itself
Thus
Integrating by parts gives
The variation of the longitude is assumed to be zero at the end points, so the first term disappears. The integral can be made nonzero by a perverse choice of unless the other factor inside is zero everywhere. So the equation of motion is
Motion in time
Vary with respect to time only to get
Divide by to get the variation of the integrand itself
Thus
Integrating by parts gives
So the equation of motion is
Conserved momenta
Integrate these equations of motion to determine the constants of integration getting
These two equations for the constants of motion (angular momentum) and (energy) can be combined to form one equation that is true even for photons and other massless particles for which the proper time along a geodesic is zero.
Radial motion
Substituting
and
into the metric equation (and using ) gives
from which one can derive
which is the equation of motion for . The dependence of on can be found by dividing this by
to get
which is true even for particles without mass. If length scales are defined by
and
then the dependence of on simplifies to
See also
Classical central-force problem
Frame fields in general relativity
Kepler problem
Two-body problem in general relativity
Notes
References
Bibliography
Schwarzschild, K. (1916). Über das Gravitationsfeld eines Massenpunktes nach der Einstein'schen Theorie. Sitzungsberichte der Königlich Preussischen Akademie der Wissenschaften 1, 189–196.
scan of the original paper
text of the original paper, in Wikisource
translation by Antoci and Loinger
a commentary on the paper, giving a simpler derivation
Schwarzschild, K. (1916). Über das Gravitationsfeld einer Kugel aus inkompressibler Flüssigkeit. Sitzungsberichte der Königlich Preussischen Akademie der Wissenschaften 1, 424-?.
(See Gravitation (book).)
External links
Excerpt from Reflections on Relativity by Kevin Brown.
Exact solutions in general relativity
Axes conventions
In ballistics and flight dynamics, axes conventions are standardized ways of establishing the location and orientation of coordinate axes for use as a frame of reference. Mobile objects are normally tracked from an external frame considered fixed. Other frames can be defined on those mobile objects to deal with relative positions for other objects. Finally, attitudes or orientations can be described by a relationship between the external frame and the one defined over the mobile object.
The orientation of a vehicle is normally referred to as attitude. It is described normally by the orientation of a frame fixed in the body relative to a fixed reference frame. The attitude is described by attitude coordinates, and consists of at least three coordinates.
While from a geometrical point of view the different methods to describe orientations are defined using only some reference frames, in engineering applications it is important also to describe how these frames are attached to the lab and the body in motion.
Due to the special importance of international conventions in air vehicles, several organizations have published standards to be followed. For example, German DIN has published the DIN 9300 norm for aircraft (adopted by ISO as ISO 1151–2:1985).
Earth bounded axes conventions
World reference frames: ENU and NED
Basically, as lab frame or reference frame, there are two kinds of conventions for the frames:
East, North, Up (ENU), used in geography
North, East, Down (NED), used specially in aerospace
These frames are referenced with respect to global reference frames such as the Earth-centered, Earth-fixed (ECEF) non-inertial system.
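Converting a vector between the two conventions amounts to a fixed axis permutation together with one sign flip, and the transformation is its own inverse. A minimal sketch (NumPy, with the matrix written out explicitly):

```python
# Sketch: converting a 3-vector between ENU (East, North, Up) and
# NED (North, East, Down) axes. The same matrix works in both directions.
import numpy as np

ENU_TO_NED = np.array([[0.0, 1.0, 0.0],
                       [1.0, 0.0, 0.0],
                       [0.0, 0.0, -1.0]])

def enu_to_ned(v):
    """Map a vector expressed in ENU axes to NED axes (and vice versa)."""
    return ENU_TO_NED @ np.asarray(v, dtype=float)

# Example: 3 m/s east, 4 m/s north, 1 m/s up.
print(enu_to_ned([3.0, 4.0, 1.0]))   # -> [ 4.  3. -1.]
```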
World reference frames for attitude description
To establish a standard convention to describe attitudes, it is required to establish at least the axes of the reference system and the axes of the rigid body or vehicle. When an ambiguous notation system is used (such as Euler angles) the convention used should also be stated. Nevertheless, most used notations (matrices and quaternions) are unambiguous.
Tait–Bryan angles are often used to describe a vehicle's attitude with respect to a chosen reference frame, though any other notation can be used. The positive x-axis in vehicles always points in the direction of movement. For the positive y- and z-axes, we have to face two different conventions:
In case of land vehicles like cars, tanks etc., which use the ENU-system (East-North-Up) as external reference (World frame), the vehicle's (body's) positive y- or pitch axis always points to its left, and the positive z- or yaw axis always points up. World frame's origin is fixed at the center of gravity of the vehicle.
By contrast, in case of air and sea vehicles like submarines, ships, airplanes etc., which use the NED-system (North-East-Down) as external reference (World frame), the vehicle's (body's) positive y- or pitch axis always points to its right, and its positive z- or yaw axis always points down. World frame's origin is fixed at the center of gravity of the vehicle.
Finally, in case of space vehicles like the Space Shuttle etc., a modification of the latter convention is used, where the vehicle's (body's) positive y- or pitch axis again always points to its right, and its positive z- or yaw axis always points down, but “down” now may have two different meanings: If a so-called local frame is used as external reference, its positive z-axis points “down” to the center of the Earth as it does in case of the earlier mentioned NED-system, but if the inertial frame is used as reference, its positive z-axis will point now to the north celestial pole, and its positive x-axis to the Vernal Equinox or some other reference meridian.
Frames mounted on vehicles
Especially for aircraft, these frames do not need to agree with the earth-bound frames in the up-down line. It must be agreed what ENU and NED mean in this context.
Conventions for land vehicles
For land vehicles it is rare to describe their complete orientation, except when speaking about electronic stability control or satellite navigation. In this case, the convention is normally the one of the adjacent drawing, where RPY stands for roll-pitch-yaw.
Conventions for sea vehicles
As well as for aircraft, the same terminology is used for the motion of ships and boats. Some commonly used words were introduced in maritime navigation. For example, the yaw angle or heading has a nautical origin, with the meaning of "bending out of the course". Etymologically, it is related to the verb 'to go'. It is related to the concept of bearing. It is typically assigned the shorthand notation ψ.
Conventions for aircraft local reference frames
Coordinates to describe an aircraft attitude (Heading, Elevation and Bank) are normally given relative to a reference control frame located in a control tower, and therefore ENU, relative to the position of the control tower on the earth surface.
Coordinates to describe observations made from an aircraft are normally given relative to its intrinsic axes, but normally using as positive the coordinate pointing downwards, where the interesting points are located. Therefore, they are normally NED.
These axes are normally taken so that X axis is the longitudinal axis pointing ahead, Z axis is the vertical axis pointing downwards, and the Y axis is the lateral one, pointing in such a way that the frame is right-handed.
The motion of an aircraft is often described in terms of rotation about these axes, so rotation about the X-axis is called rolling, rotation about the Y-axis is called pitching, and rotation about the Z-axis is called yawing.
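One common way to assemble these three rotations into an attitude matrix is the intrinsic Z-Y-X (yaw, pitch, roll) composition often used with NED frames; other conventions described in this article lead to different matrices, so the sketch below is one choice rather than the only one:

```python
# Sketch: body-to-reference rotation matrix from yaw, pitch and roll angles
# using the intrinsic Z-Y-X composition R = Rz(yaw) @ Ry(pitch) @ Rx(roll).
import numpy as np

def rotation_matrix(yaw, pitch, roll):
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[cy, -sy, 0.0], [sy, cy, 0.0], [0.0, 0.0, 1.0]])
    Ry = np.array([[cp, 0.0, sp], [0.0, 1.0, 0.0], [-sp, 0.0, cp]])
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cr, -sr], [0.0, sr, cr]])
    return Rz @ Ry @ Rx

print(rotation_matrix(np.deg2rad(90), 0.0, 0.0))   # a pure 90-degree yaw
```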
Frames for space navigation
For satellites orbiting the Earth it is normal to use the Equatorial coordinate system. The projection of the Earth's equator onto the celestial sphere is called the celestial equator. Similarly, the projections of the Earth's north and south geographic poles become the north and south celestial poles, respectively.
Deep-space satellites use other celestial coordinate systems, such as the ecliptic coordinate system.
Local conventions for spacecraft such as satellites
If the goal is to keep the shuttle during its orbits in a constant attitude with respect to the sky, e.g. in order to perform certain astronomical observations, the preferred reference is the inertial frame, and the RPY angle vector (0|0|0) describes an attitude then, where the shuttle's wings are kept permanently parallel to the Earth's equator, its nose points permanently to the vernal equinox, and its belly towards the northern polar star (see picture). (Note that rockets and missiles more commonly follow the conventions for aircraft where the RPY angle vector (0|0|0) points north, rather than toward the vernal equinox).
On the other hand, if the goal is to keep the shuttle during its orbits in a constant attitude with respect to the surface of the Earth, the preferred reference will be the local frame, with the RPY angle vector (0|0|0) describing an attitude where the shuttle's wings are parallel to the Earth's surface, its nose points to its heading, and its belly down towards the centre of the Earth (see picture).
Frames used to describe attitudes
Normally the frames used to describe a vehicle's local observations are the same frames used to describe its attitude with respect to the ground tracking stations. i.e. if an ENU frame is used in a tracking station, also ENU frames are used onboard and these frames are also used to refer local observations.
An important case in which this does not apply is aircraft. Aircraft observations are performed downwards and therefore normally NED axes convention applies. Nevertheless, when attitudes with respect to ground stations are given, a relationship between the local earth-bound frame and the onboard ENU frame is used.
See also
Attitude dynamics and control (spacecraft)
Euler's rotation theorem
Gyroscope
Triad Method
Rotation formalisms in three dimensions
Geographic coordinate system
Astronomical coordinate systems
References
Euclidean symmetries
Rotation in three dimensions
Equations for a falling body
A set of equations describing the trajectories of objects subject to a constant gravitational force under normal Earth-bound conditions. Assuming constant acceleration g due to Earth's gravity, Newton's law of universal gravitation simplifies to F = mg, where F is the force exerted on a mass m by the Earth's gravitational field of strength g. Assuming constant g is reasonable for objects falling to Earth over the relatively short vertical distances of our everyday experience, but is not valid for greater distances involved in calculating more distant effects, such as spacecraft trajectories.
History
Galileo was the first to demonstrate and then formulate these equations. He used a ramp to study rolling balls, the ramp slowing the acceleration enough to measure the time taken for the ball to roll a known distance. He measured elapsed time with a water clock, using an "extremely accurate balance" to measure the amount of water.
The equations ignore air resistance, which has a dramatic effect on objects falling an appreciable distance in air, causing them to quickly approach a terminal velocity. The effect of air resistance varies enormously depending on the size and geometry of the falling object—for example, the equations are hopelessly wrong for a feather, which has a low mass but offers a large resistance to the air. (In the absence of an atmosphere all objects fall at the same rate, as astronaut David Scott demonstrated by dropping a hammer and a feather on the surface of the Moon.)
The equations also ignore the rotation of the Earth, failing to describe the Coriolis effect for example. Nevertheless, they are usually accurate enough for dense and compact objects falling over heights not exceeding the tallest man-made structures.
Overview
Near the surface of the Earth, the acceleration due to gravity is approximately g = 9.807 m/s2 (metres per second squared, which might be thought of as "metres per second, per second"; or 32.18 ft/s2 as "feet per second per second"). A coherent set of units for g, d, t and v is essential. Assuming SI units, g is measured in metres per second squared, so d must be measured in metres, t in seconds and v in metres per second.
In all cases, the body is assumed to start from rest, and air resistance is neglected. Generally, in Earth's atmosphere, all results below will therefore be quite inaccurate after only 5 seconds of fall (at which time an object's velocity will be a little less than the vacuum value of 49 m/s (9.8 m/s2 × 5 s) due to air resistance). Air resistance induces a drag force on any body that falls through any atmosphere other than a perfect vacuum, and this drag force increases with velocity until it equals the gravitational force, leaving the object to fall at a constant terminal velocity.
Terminal velocity depends on atmospheric drag, the coefficient of drag for the object, the (instantaneous) velocity of the object, and the area presented to the airflow.
Apart from the last formula, these formulas also assume that g varies negligibly with height during the fall (that is, they assume constant acceleration). The last equation is more accurate where significant changes in fractional distance from the centre of the planet during the fall cause significant changes in g. This equation occurs in many applications of basic physics.
The following equations start from the general equations of linear motion:
and the equation for universal gravitation (r + d is the distance of the object above the ground from the planet's center of mass):
Equations
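For a body released from rest under constant acceleration g, the relations used in the example below, with d the distance fallen, t the elapsed time and v the instantaneous velocity, are

\[
d = \tfrac{1}{2} g t^2, \qquad
t = \sqrt{\frac{2d}{g}}, \qquad
v = g t = \sqrt{2 g d}.
\]

These restore only the constant-g members of the original set; the "last equation" referred to above, and compared in the example below, instead uses the full inverse-square law.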
Example
The first equation shows that, after one second, an object will have fallen a distance of 1/2 × 9.8 × 12 = 4.9 m. After two seconds it will have fallen 1/2 × 9.8 × 22 = 19.6 m; and so on. On the other hand, the penultimate equation becomes grossly inaccurate at great distances. If an object fell 10000 m to Earth, then the results of both equations differ by only 0.08%; however, if it fell from geosynchronous orbit, which is 42164 km, then the difference changes to almost 64%.
Based on wind resistance, for example, the terminal velocity of a skydiver in a belly-to-earth (i.e., face down) free-fall position is about 195 km/h (122 mph or 54 m/s). This velocity is the asymptotic limiting value of the acceleration process, because the effective forces on the body balance each other more and more closely as the terminal velocity is approached. In this example, a speed of 50% of terminal velocity is reached after only about 3 seconds, while it takes 8 seconds to reach 90%, 15 seconds to reach 99% and so on.
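These percentages are consistent with the textbook solution for a body falling from rest against drag quadratic in speed (the quadratic-drag model is an assumption made here; vt denotes the terminal velocity):

\[
v(t) = v_t \tanh\!\left(\frac{g\,t}{v_t}\right),
\]

which for vt ≈ 54 m/s reaches 50%, 90% and 99% of the terminal velocity after roughly 3, 8 and 15 seconds, matching the figures quoted above.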
Higher speeds can be attained if the skydiver pulls in his or her limbs (see also freeflying). In this case, the terminal velocity increases to about 320 km/h (200 mph or 90 m/s), which is almost the terminal velocity of the peregrine falcon diving down on its prey. The same terminal velocity is reached for a typical .30-06 bullet dropping downwards—when it is returning to earth having been fired upwards, or dropped from a tower—according to a 1920 U.S. Army Ordnance study.
For astronomical bodies other than Earth, and for short distances of fall at other than "ground" level, g in the above equations may be replaced by G(M + m)/r², where G is the gravitational constant, M is the mass of the astronomical body, m is the mass of the falling body, and r is the radius from the falling object to the center of the astronomical body.
Removing the simplifying assumption of uniform gravitational acceleration provides more accurate results. We find from the formula for radial elliptic trajectories:
The time taken for an object to fall from a height to a height , measured from the centers of the two bodies, is given by:
where is the sum of the standard gravitational parameters of the two bodies. This equation should be used whenever there is a significant difference in the gravitational acceleration during the fall.
Note that when this equation gives , as expected; and when it gives , which is the time to collision.
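Where such accuracy matters, the fall time can also be obtained by direct numerical integration of the inverse-square acceleration instead of the closed-form expression. A minimal sketch with illustrative Earth values (mu is the standard gravitational parameter GM):

```python
# Sketch: time to fall from rest at radius x down to radius y under
# inverse-square gravity, integrating dv/dt = -mu/r**2 with a simple
# semi-implicit Euler scheme (accuracy is adequate for an illustration).
mu = 3.986e14            # m^3/s^2, roughly Earth's GM
x, y = 7.0e6, 6.371e6    # start 7000 km from Earth's centre, fall to the surface

r, v, t, dt = x, 0.0, 0.0, 0.01
while r > y:
    v -= mu / r**2 * dt  # update velocity from the local acceleration
    r += v * dt          # then update position with the new velocity
    t += dt
print(f"fall time of roughly {t:.0f} seconds")
```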
Acceleration relative to the rotating Earth
Centripetal force causes the acceleration measured on the rotating surface of the Earth to differ from the acceleration that is measured for a free-falling body: the apparent acceleration in the rotating frame of reference is the total gravity vector minus a small vector toward the north-south axis of the Earth, corresponding to staying stationary in that frame of reference.
See also
De motu antiquiora and Two New Sciences (the earliest modern investigations of the motion of falling bodies)
Equations of motion
Free fall
Gravity
Mean speed theorem, the foundation of the law of falling bodies
Radial trajectory
Notes
References
External links
Falling body equations calculator
Gravity
Equations
Falling
Extreme sport
Action sports, adventure sports or extreme sports are activities perceived as involving a high degree of risk of injury or death. These activities often involve speed, height, a high level of physical exertion and highly specialized gear. Extreme tourism overlaps with extreme sport. The two share the same main attraction, an "adrenaline rush" caused by an element of risk, and differ mostly in the degree of engagement and professionalism.
Definition
There is no precise definition of an 'extreme sport' and the origin of the term is unclear but it gained popularity in the 1990s when it was picked up by marketing companies to promote the X Games and when the Extreme Sports Channel and Extreme International launched. More recently, the commonly used definition from research is "a competitive (comparison or self-evaluative) activity within which the participant is subjected to natural or unusual physical and mental challenges such as speed, height, depth or natural forces and where fast and accurate cognitive perceptual processing may be required for a successful outcome" by Dr. Rhonda Cohen (2012).
While the use of the term "extreme sport" has spread everywhere to describe a multitude of different activities, exactly which sports are considered 'extreme' is debatable. There are, however, several characteristics common to most extreme sports. While they are not the exclusive domain of youth, extreme sports tend to have a younger-than-average target demographic. Extreme sports are also rarely sanctioned by schools for their physical education curriculum. Extreme sports tend to be more solitary than many of the popular traditional sports (rafting and paintballing are notable exceptions, as they are done in teams).
Activities categorized by media as extreme sports differ from traditional sports due to the higher number of inherently uncontrollable variables. These environmental variables are frequently weather and terrain-related, including wind, snow, water and mountains. Because these natural phenomena cannot be controlled, they inevitably affect the outcome of the given activity or event.
In a traditional sporting event, athletes compete against each other under controlled circumstances. While it is possible to create a controlled sporting event such as X Games, there are environmental variables that cannot be held constant for all athletes. Examples include changing snow conditions for snowboarders, rock and ice quality for climbers, and wave height and shape for surfers.
Whilst traditional sporting judgment criteria may be adopted when assessing performance (distance, time, score, etc.), extreme sports performers are often evaluated on more subjective and aesthetic criteria. This results in a tendency to reject unified judging methods, with different sports employing their own ideals and indeed having the ability to evolve their assessment standards with new trends or developments in the sports.
History
The origin of the divergence of the term "extreme sports" from "sports" may date to the 1950s in the appearance of a phrase usually, but wrongly, attributed to Ernest Hemingway. The phrase is;
There are only three sports: bullfighting, motor racing, and mountaineering; all the rest are merely games.
The implication of the phrase was that the word "sport" defined an activity in which one might be killed, other activities being termed "games." The phrase may have been invented by either writer Barnaby Conrad or automotive author Ken Purdy.
The Dangerous Sports Club of Oxford University, England was founded by David Kirke, Chris Baker, Ed Hulton and Alan Weston. They first came to wide public attention by inventing modern day bungee jumping, making the first modern jumps on 1 April 1979 from the Clifton Suspension Bridge, Bristol, England. They followed the Clifton Bridge effort with a jump from the Golden Gate Bridge in San Francisco, California (including the first female bungee jump by Jane Wilmot), and with a televised leap from the Royal Gorge Suspension Bridge in Colorado, sponsored by and televised on the popular American television program That's Incredible! Bungee jumping was treated as a novelty for a few years, then became a craze for young people, and is now an established industry for thrill seekers.
The club also pioneered a surrealist form of skiing, holding three events at St. Moritz, Switzerland, in which competitors were required to devise a sculpture mounted on skis and ride it down a mountain. The event reached its limits when the Club arrived in St. Moritz with a London double-decker bus, wanting to send it down the ski slopes, and the Swiss resort managers refused.
Other Club activities included expedition hang gliding from active volcanoes; the launching of giant (20 m) plastic spheres with pilots suspended in the centre (zorbing); microlight flying; and BASE jumping (in the early days of this sport).
In recent decades the term "extreme sport" was further promoted after the launch of the Extreme Sports Channel and Extremesportscompany.com, and then by the X Games, a multi-sport event created and developed by ESPN. The first X Games (known as the 1995 Extreme Games) were held in Newport and Providence, Rhode Island, and at Mount Snow, Vermont, in the United States.
Certain extreme sports clearly trace back to other extreme sports, or combinations thereof. For example, windsurfing was conceived as a result of efforts to equip a surfboard with a sailing boat's propulsion system (mast and sail). Kitesurfing on the other hand was conceived by combining the propulsion system of kite buggying (a parafoil) with the bi-directional boards used for wakeboarding. Wakeboarding is in turn derived from snowboarding and waterskiing.
Commercialisation
Some contend that the distinction between an extreme sport and a conventional one has as much to do with marketing as with the level of danger involved or the adrenaline generated. For example, rugby union is both dangerous and adrenaline-inducing but is not considered an extreme sport because of its traditional image, because it does not involve high speed or an intention to perform stunts (the aesthetic criteria mentioned above), and because it does not expose athletes to changing environmental variables.
Motivation
A feature of such activities in the view of some is their alleged capacity to induce an adrenaline rush in participants. However, the medical view is that the rush or high associated with the activity is not due to adrenaline being released as a response to fear, but due to increased levels of dopamine, endorphins and serotonin because of the high level of physical exertion. Furthermore, recent studies suggest that the link between adrenaline and 'true' extreme sports is tentative. Brymer and Gray's study defined 'true' extreme sports as a leisure or recreation activity where the most likely outcome of a mismanaged accident or mistake was death. This definition was designed to separate the marketing hype from the activity. Eric Brymer also found that the potential of various extraordinary human experiences, many of which parallel those found in activities such as meditation, was an important part of the extreme sport experience. Those experiences put the participants outside their comfort zone and are often done in conjunction with adventure travel.
Some of the sports have existed for decades, and their proponents span generations, some going on to become well-known personalities. Rock climbing and ice climbing have spawned publicly recognizable names such as Edmund Hillary, Chris Bonington, Wolfgang Güllich and, more recently, Joe Simpson. Another example is surfing, invented centuries ago by the inhabitants of Polynesia, which went on to become the national sport of Hawaii.
Disabled people participate in extreme sports. Nonprofit organizations such as Adaptive Action Sports seek to increase awareness of the participation in action sports by members of the disabled community, as well as increase access to the adaptive technologies that make participation possible and to competitions such as The X Games.
Mortality, health, and thrill
Extreme sports may be perceived as extremely dangerous, conducive to fatalities, near-fatalities and other serious injuries. The perceived risk in an extreme sport has been considered a somewhat necessary part of its appeal, which is partially a result of pressure for athletes to make more money and provide maximum entertainment.
Extreme sports is a sub-category of sports that are described as any kind of sport "of a character or kind farthest removed from the ordinary or average". These kinds of sports often carry out the potential risk of serious and permanent physical injury and even death. However, these sports also have the potential to produce drastic benefits on mental and physical health and provide opportunity for individuals to engage fully with life.
Extreme sports trigger the release of the hormone adrenaline, which can facilitate performance of stunts. It is believed that the implementation of extreme sports on mental health patients improves their perspective and recognition of aspects of life.
In outdoor adventure sports, participants experience the intense thrill usually associated with extreme sports. Even though some extreme sports present a higher level of risk, people still choose to take part for the sake of the adrenaline. According to Sigmund Freud, humans have an instinctual 'death wish', a subconscious inbuilt desire to destroy ourselves, suggesting that in the search for thrills, danger itself can be experienced as pleasurable.
List of extreme and adventure sports
Adventure sports
Bungee jumping
Canyoning
Cave diving
Extreme pogo
Extreme skiing
Alpine ski racing
Flowriding
Freediving
Freeride biking
Freerunning
Freeskiing
Freestyle scootering
Freestyle skiing
Hang gliding
Ice canoeing
Ice climbing
Ice diving
Ice yachting
Inline skating
Ironman Triathlon
Extreme ironing
Foiling
Jetskiing
Kitesurfing
Land windsurfing
Longboarding
Motocross
Motorcycle sport
Mountainboarding
Mountaineering
Mountain biking
Paragliding
Parkour
Rallying
Rock climbing
Scuba diving
Skateboarding
Ski jumping
Skydiving
Skysurfing
Slacklining
Snorkeling
Snowboarding
Snowmobiling (Snocross)
Street luge
Surfing
Technical Diving
Volcano Boarding
Wakeboarding
Water skiing
Waveski
Whitewater kayaking
Windsurfing
Winging
Extreme sports
Air racing
BASE jumping
BMX
Bobsleigh
Bodyboarding
Cliff jumping
Canyoning
Cave diving
Extreme pogo
Extreme skiing
Freeride biking
Freerunning
Hang gliding
Ice canoeing
Ice climbing
Ice diving
Ice yachting
Inline skating
Ironman Triathlon
Kitesurfing
Land windsurfing
Longboarding
Motocross
Motorcycle sport
Mountainboarding
Mountaineering
Mountain biking
Parkour
Rallying
Rock climbing
Sandboarding
Skateboarding
Ski jumping
Skysurfing
Slacklining
Snowmobiling (Snocross)
Street luge
Technical Diving
Volcano Boarding
Wakeboarding
Waveski
Wingsuiting
Whitewater kayaking
See also
Extreme Sports Channel
Extreme tourism and Adventure travel
Extreme Games
References
Further reading
External links
Lifestyles
Sports by type
Adventure
Flying and gliding animals
A number of animals are capable of aerial locomotion, either by powered flight or by gliding. This trait has appeared by evolution many times, without any single common ancestor. Flight has evolved at least four times in separate animals: insects, pterosaurs, birds, and bats. Gliding has evolved on many more occasions. Usually the development is to aid canopy animals in getting from tree to tree, although there are other possibilities. Gliding, in particular, has evolved among rainforest animals, especially in the rainforests in Asia (most especially Borneo) where the trees are tall and widely spaced. Several species of aquatic animals, and a few amphibians and reptiles have also evolved this gliding flight ability, typically as a means of evading predators.
Types
Animal aerial locomotion can be divided into two categories: powered and unpowered. In unpowered modes of locomotion, the animal uses aerodynamic forces exerted on the body due to wind or falling through the air. In powered flight, the animal uses muscular power to generate aerodynamic forces to climb or to maintain steady, level flight. Those who can find air that is rising faster than they are falling can gain altitude by soaring.
Unpowered
These modes of locomotion typically require an animal start from a raised location, converting that potential energy into kinetic energy and using aerodynamic forces to control trajectory and angle of descent. Energy is continually lost to drag without being replaced, thus these methods of locomotion have limited range and duration.
Falling: decreasing altitude under the force of gravity, using no adaptations to increase drag or provide lift.
Parachuting: falling at an angle greater than 45° from the horizontal with adaptations to increase drag forces. Very small animals may be carried up by the wind. Some gliding animals may use their gliding membranes for drag rather than lift, to safely descend.
Gliding flight: falling at an angle less than 45° from the horizontal with lift from adapted aerofoil membranes. This allows slowly falling directed horizontal movement, with streamlining to decrease drag forces for aerofoil efficiency and often with some maneuverability in air. Gliding animals have a lower aspect ratio (wing length/breadth) than true flyers.
Powered flight
Powered flight has evolved at least four times: first in the insects, then in pterosaurs, next in birds, and last in bats. However, studies on theropod dinosaurs suggest multiple (at least three) independent acquisitions of powered flight, and a recent study proposes independent acquisitions among the different bat clades as well. Powered flight uses muscles to generate aerodynamic force, which allows the animal to produce lift and thrust. The animal may ascend without the aid of rising air.
Externally powered
Ballooning and soaring are not powered by muscle, but rather by external aerodynamic sources of energy: the wind and rising thermals, respectively. Both can continue as long as the source of external power is present. Soaring is typically only seen in species capable of powered flight, as it requires extremely large wings.
Ballooning: being carried up into the air from the aerodynamic effect on long strands of silk in the wind. Certain silk-producing arthropods, mostly small or young spiders, secrete a special light-weight gossamer silk for ballooning, sometimes traveling great distances at high altitude.
Soaring: gliding in rising or otherwise moving air that requires specific physiological and morphological adaptations that can sustain the animal aloft without flapping its wings. The rising air is due to thermals, ridge lift or other meteorological features. Under the right conditions, soaring creates a gain of altitude without expending energy. Large wingspans are needed for efficient soaring.
Many species will use multiple of these modes at various times; a hawk will use powered flight to rise, then soar on thermals, then descend via free-fall to catch its prey.
Evolution and ecology
Gliding and parachuting
While gliding occurs independently from powered flight, it has some ecological advantages of its own as it is the simplest form of flight. Gliding is a very energy-efficient way of travelling from tree to tree. Although moving through the canopy running along the branches may be less energetically demanding, the faster transition between trees allows for greater foraging rates in a particular patch. Glide ratios can be dependent on size and current behavior. Higher foraging rates are supported by low glide ratios as smaller foraging patches require less gliding time over shorter distances and greater amounts of food can be acquired in a shorter time period. Low ratios are not as energy efficient as the higher ratios, but an argument made is that many gliding animals eat low energy foods such as leaves and are restricted to gliding because of this, whereas flying animals eat more high energy foods such as fruits, nectar, and insects. Mammals tend to rely on lower glide ratios to increase the amount of time foraging for lower energy food. An equilibrium glide, achieving a constant airspeed and glide angle, is harder to obtain as animal size increases. Larger animals need to glide from much higher heights and longer distances to make it energetically beneficial. Gliding is also very suitable for predator avoidance, allowing for controlled targeted landings to safer areas. In contrast to flight, gliding has evolved independently many times (more than a dozen times among extant vertebrates); however these groups have not radiated nearly as much as have groups of flying animals.
Worldwide, the distribution of gliding animals is uneven, as most inhabit rain forests in Southeast Asia. (Despite seemingly suitable rain forest habitats, few gliders are found in India or New Guinea and none in Madagascar.) Additionally, a variety of gliding vertebrates are found in Africa, a family of hylids (flying frogs) lives in South America, and several species of gliding squirrels are found in the forests of northern Asia and North America. Various factors produce these disparities. In the forests of Southeast Asia, the dominant canopy trees (usually dipterocarps) are taller than the canopy trees of the other forests. Forest structure and distance between trees are influential in the development of gliding within varying species. A higher start provides a competitive advantage of further glides and farther travel. Gliding predators may more efficiently search for prey. The lower abundance of insect and small vertebrate prey for carnivorous animals (such as lizards) in Asian forests may be a factor. In Australia, many mammals (and all mammalian gliders) possess, to some extent, prehensile tails. Globally, smaller gliding species tend to have feather-like tails and larger species have fur-covered, round bushy tails, but smaller animals tend to rely on parachuting rather than developing gliding membranes. The gliding membranes, or patagia, are classified into four groups: the propatagium, digipatagium, plagiopatagium and uropatagium. These membranes consist of two tightly bound layers of skin connected by muscles and connective tissue between the fore and hind limbs.
Powered flight evolution
Powered flight has evolved unambiguously only four times—birds, bats, pterosaurs, and insects (though see above for possible independent acquisitions within bird and bat groups). In contrast to gliding, which has evolved more frequently but typically gives rise to only a handful of species, all three extant groups of powered flyers have a huge number of species, suggesting that flight is a very successful strategy once evolved. Bats, after rodents, have the most species of any mammalian order, about 20% of all mammalian species. Birds have the most species of any class of terrestrial vertebrates. Finally, insects (most of which fly at some point in their life cycle) have more species than all other animal groups combined.
The evolution of flight is one of the most striking and demanding adaptations in animal evolution, and has attracted the attention of many prominent scientists and generated many theories. Additionally, because flying animals tend to be small and have a low mass (both of which increase the surface-area-to-mass ratio), they tend to fossilize infrequently and poorly compared to the larger, heavier-boned terrestrial species they share habitat with. Fossils of flying animals tend to be confined to exceptional fossil deposits formed under highly specific circumstances, resulting in a generally poor fossil record, and a particular lack of transitional forms. Furthermore, as fossils do not preserve behavior or muscle, it can be difficult to discriminate between a poor flyer and a good glider.
Insects were the first to evolve flight, approximately 350 million years ago. The developmental origin of the insect wing remains in dispute, as does the purpose prior to true flight. One suggestion is that wings initially evolved from tracheal gill structures and were used to catch the wind for small insects that live on the surface of the water, while another is that they evolved from paranotal lobes or leg structures and gradually progressed from parachuting, to gliding, to flight for originally arboreal insects.
Pterosaurs were the next to evolve flight, approximately 228 million years ago. These reptiles were close relatives of the dinosaurs, and reached enormous sizes, with some of the last forms being the largest flying animals ever to inhabit the Earth, having wingspans of over 9.1 m (30 ft). However, they spanned a large range of sizes, down to a 250 mm (10 in) wingspan in Nemicolopterus.
Birds have an extensive fossil record, along with many forms documenting both their evolution from small theropod dinosaurs and the numerous bird-like forms of theropod which did not survive the mass extinction at the end of the Cretaceous. Indeed, Archaeopteryx is arguably the most famous transitional fossil in the world, both due to its mix of reptilian and avian anatomy and the luck of being discovered only two years after Darwin's publication of On the Origin of Species. However, the ecology of this transition is considerably more contentious, with various scientists supporting either a "trees down" origin (in which an arboreal ancestor evolved gliding, then flight) or a "ground up" origin (in which a fast-running terrestrial ancestor used wings for a speed boost and to help catch prey). It may also have been a non-linear process, as several non-avian dinosaurs seem to have independently acquired powered flight.
Bats are the most recent to evolve (about 60 million years ago), most likely from a fluttering ancestor, though their poor fossil record has hindered more detailed study.
Only a few animals are known to have specialised in soaring: the larger of the extinct pterosaurs, and some large birds. Powered flight is very energetically expensive for large animals, but for soaring their size is an advantage, as it allows them a low wing loading, that is a large wing area relative to their weight, which maximizes lift. Soaring is very energetically efficient.
Biomechanics
Gliding and parachuting
During a free-fall with no aerodynamic forces, the object accelerates due to gravity, resulting in increasing velocity as the object descends. During parachuting, animals use the aerodynamic forces on their body to counteract the force of gravity. Any object moving through air experiences a drag force that is proportional to surface area and to velocity squared, and this force will partially counter the force of gravity, slowing the animal's descent to a safer speed. If this drag is oriented at an angle to the vertical, the animal's trajectory will gradually become more horizontal, and it will cover horizontal as well as vertical distance. Smaller adjustments can allow turning or other maneuvers. This can allow a parachuting animal to move from a high location on one tree to a lower location on another tree nearby. Specifically in gliding mammals, there are three types of gliding path: S-shaped glides, in which the animal gains altitude after launch and then descends; J-shaped glides, in which it rapidly loses height before gliding; and straight glides, in which it maintains a constant angle of descent.
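Because drag grows with surface area and with the square of the velocity, spreading a membrane lowers the speed at which drag balances weight. The sketch below is a generic quadratic-drag estimate with assumed, illustrative values for air density, mass, drag coefficient and membrane area; none of the numbers are taken from measurements of real gliders.
```python
import math

RHO_AIR = 1.2  # kg/m^3, air density near sea level
G = 9.81       # m/s^2, gravitational acceleration

def terminal_speed(mass_kg, drag_coeff, area_m2):
    """Speed at which quadratic drag 0.5*rho*Cd*A*v^2 balances the weight m*g."""
    return math.sqrt(2.0 * mass_kg * G / (RHO_AIR * drag_coeff * area_m2))

# Hypothetical 100 g animal: projected area with the membrane folded vs. spread.
for label, area in [("membrane folded (0.005 m^2)", 0.005),
                    ("membrane spread (0.03 m^2)", 0.03)]:
    print(f"{label}: terminal speed ~{terminal_speed(0.1, 1.0, area):.1f} m/s")
```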
During gliding, lift plays an increased role. Like drag, lift is proportional to velocity squared. Gliding animals will typically leap or drop from high locations such as trees, just as in parachuting, and as gravitational acceleration increases their speed, the aerodynamic forces also increase. Because the animal can utilize lift and drag to generate greater aerodynamic force, it can glide at a shallower angle than parachuting animals, allowing it to cover greater horizontal distance in the same loss of altitude, and reach trees further away. Successful flights for gliding animals are achieved through five steps: preparation, launch, glide, braking, and landing. Gliding species are better able to control themselves mid-air, with the tail acting as a rudder, making them capable of banked turns or U-turns during flight. During landing, arboreal mammals will extend their fore and hind limbs in front of themselves to brace for landing and to trap air in order to maximize air resistance and lower impact speed.
Powered flight
Unlike most air vehicles, in which the objects that generate lift (wings) and thrust (engine or propeller) are separate and the wings remain fixed, flying animals use their wings to generate both lift and thrust by moving them relative to the body. This has made the flight of organisms considerably harder to understand than that of vehicles, as it involves varying speeds, angles, orientations, areas, and flow patterns over the wings.
A bird or bat flying through the air at a constant speed moves its wings up and down (usually with some fore-aft movement as well). Because the animal is in motion, there is some airflow relative to its body which, combined with the velocity of its wings, generates a faster airflow moving over the wing. This will generate a lift force vector pointing forwards and upwards, and a drag force vector pointing rearwards and upwards. The upwards components of these counteract gravity, keeping the body in the air, while the forward component provides thrust to counteract both the drag from the wing and from the body as a whole. Pterosaur flight likely worked in a similar manner, though no living pterosaurs remain for study.
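The decomposition of wing forces into weight support and thrust described above can be illustrated with a simple two-dimensional downstroke calculation. The forward speed, flapping speed, lift and drag values below are arbitrary assumptions for a hypothetical small flyer, not data for any real species.
```python
import math

def downstroke_forces(forward_speed, flap_speed, lift, drag):
    """Resolve wing lift and drag into vertical (weight-supporting) and
    horizontal (thrust) components during a downstroke, in 2-D.
    The relative wind points rearwards and upwards; drag acts along it,
    lift perpendicular to it (forwards and upwards)."""
    v = math.hypot(forward_speed, flap_speed)
    vertical = lift * forward_speed / v + drag * flap_speed / v
    horizontal = lift * flap_speed / v - drag * forward_speed / v  # > 0 means net thrust
    return vertical, horizontal

# Assumed numbers for a hypothetical ~1 N (100 g) flyer:
# 8 m/s forward, 4 m/s downstroke, 1.2 N of wing lift, 0.15 N of wing drag.
support, thrust = downstroke_forces(8.0, 4.0, 1.2, 0.15)
print(f"vertical support ~{support:.2f} N, net forward thrust ~{thrust:.2f} N")
```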
Insect flight is considerably different, due to their small size, rigid wings, and other anatomical differences. Turbulence and vortices play a much larger role in insect flight, making it even more complex and difficult to study than the flight of vertebrates. There are two basic aerodynamic models of insect flight. Most insects use a method that creates a spiralling leading edge vortex. Some very small insects use the fling-and-clap or Weis-Fogh mechanism in which the wings clap together above the insect's body and then fling apart. As they fling open, the air gets sucked in and creates a vortex over each wing. This bound vortex then moves across the wing and, in the clap, acts as the starting vortex for the other wing. Circulation and lift are increased, at the price of wear and tear on the wings.
Limits and extremes
Flying and soaring
Largest. The largest known flying animal was formerly thought to be Pteranodon, a pterosaur with a wingspan of up to . However, the more recently discovered azhdarchid pterosaur Quetzalcoatlus is much larger, with estimates of the wingspan ranging from . Some other recently discovered azhdarchid pterosaur species, such as Hatzegopteryx, may also have had wingspans of a similar size or even slightly larger. Although it is widely thought that Quetzalcoatlus reached the size limit of a flying animal, the same was once said of Pteranodon. The heaviest living flying animals are the kori bustard and the great bustard with males reaching . The wandering albatross has the greatest wingspan of any living flying animal at . Among living animals which fly over land, the Andean condor and the marabou stork have the largest wingspan at . Studies have shown that it is physically possible for flying animals to reach wingspans, but there is no firm evidence that any flying animal, not even the azhdarchid pterosaurs, got that large.
Smallest. There is no minimum size for getting airborne. Indeed, there are many bacteria floating in the atmosphere that constitute part of the aeroplankton. However, to move about under one's own power and not be overly affected by the wind requires a certain amount of size. The smallest flying vertebrates are the bee hummingbird and the bumblebee bat, both of which may weigh less than . They are thought to represent the lower size limit for endotherm flight. The smallest flying invertebrate is a fairyfly wasp species, Kikiki huna, at (150 μm).
Fastest. The fastest of all known flying animals is the peregrine falcon, which when diving travels at or faster. The fastest animal in flapping horizontal flight may be the Mexican free-tailed bat, said to attain about based on ground speed by an aircraft tracking device; that measurement does not separate any contribution from wind speed, so the observations could be caused by strong tailwinds.
Slowest. Most flying animals need to travel forward to stay aloft. However, some creatures can stay in the same spot, known as hovering, either by rapidly flapping the wings, as do hummingbirds, hoverflies, dragonflies, and some others, or carefully using thermals, as do some birds of prey. The slowest flying non-hovering bird recorded is the American woodcock, at .
Highest flying. There are records of a Rüppell's vulture Gyps rueppelli, a large vulture, being sucked into a jet engine above Côte d'Ivoire in West Africa. The animal that flies highest most regularly is the bar-headed goose Anser indicus, which migrates directly over the Himalayas between its nesting grounds in Tibet and its winter quarters in India. They are sometimes seen flying well above the peak of Mount Everest at .
Gliding and parachuting
Most efficient glider. This can be taken as the animal that moves most horizontal distance per metre fallen. Flying squirrels are known to glide up to , but have a measured glide ratio of about 2. Flying fish have been observed to glide for hundreds of metres on the updrafts at the edge of waves with only their initial leap from the water to provide height, but may be obtaining additional lift from wave motion. On the other hand, albatrosses have measured lift–drag ratios of 20, and thus fall just 1 metre for every 20 metres travelled in still air.
Most maneuverable glider. Many gliding animals have some ability to turn, but which is the most maneuverable is difficult to assess. Even paradise tree snakes, Chinese gliding frogs, and gliding ants have been observed as having considerable capacity to turn in the air.
Flying animals
Extant
Insects
Pterygota: The first of all animals to evolve flight, they are also the only invertebrates that have evolved flight. As they comprise almost all insects, the species are too numerous to list here. Insect flight is an active research field.
Birds
Birds (flying, soaring) – Most of the approximately 10,000 living species can fly (flightless birds are the exception). Bird flight is one of the most studied forms of aerial locomotion in animals. See List of soaring birds for birds that can soar as well as fly.
Mammals
Bats. There are approximately 1,240 bat species, representing about 20% of all classified mammal species. Most bats are nocturnal and many feed on insects while flying at night, using echolocation to home in on their prey.
Extinct
Pterosaurs
Pterosaurs were the first flying vertebrates, and are generally agreed to have been sophisticated flyers. They had large wings formed by a patagium stretching from the torso to a dramatically lengthened fourth finger. There were hundreds of species, most of which are thought to have been intermittent flappers, and many soarers. The largest known flying animals are pterosaurs.
Non-avian dinosaurs
Theropods (gliding and flying). There were several species of theropod dinosaur thought to be capable of gliding or flying, that are not classified as birds (though they are closely related). Some species (Microraptor gui, Microraptor zhaoianus, and Changyuraptor) have been found that were fully feathered on all four limbs, giving them four 'wings' that they are believed to have used for gliding or flying. A recent study indicates that flight may have been acquired independently in various different lineages though it may have only evolved in theropods of the Avialae.
Gliding animals
Extant
Insects
Gliding bristletails. Directed aerial gliding descent is found in some tropical arboreal bristletails, an ancestrally wingless sister taxon to the winged insects. The bristletails' median caudal filament is important for the glide ratio and gliding control.
Gliding ants. The flightless workers of these insects have secondarily gained some capacity to move through the air. Gliding has evolved independently in a number of arboreal ant species from the groups Cephalotini, Pseudomyrmecinae, and Formicinae (mostly Camponotus). Arboreal dolichoderines and non-cephalotine myrmicines, with the exception of Daceton armigerum, do not glide. Living in the rainforest canopy like many other gliders, gliding ants use their gliding to return to the trunk of the tree they live on should they fall or be knocked off a branch. Gliding was first discovered for Cephalotes atreus in the Peruvian rainforest. Cephalotes atreus can make 180 degree turns, and locate the trunk using visual cues, succeeding in landing 80% of the time. Unique among gliding animals, Cephalotini and Pseudomyrmecinae ants glide abdomen first; the Formicinae, however, glide in the more conventional head-first manner.
Gliding immature insects. The wingless immature stages of some insect species that have wings as adults may also show a capacity to glide. These include some species of cockroach, mantis, katydid, stick insect and true bug.
Spiders
Ballooning spiders (parachuting). The young of some species of spiders travel through the air by using silk draglines to catch the wind, as may some smaller species of adult spider, such as the money spider family. This behavior is commonly known as "ballooning". Ballooning spiders make up part of the aeroplankton.
Gliding spiders. Some species of arboreal spider of the genus Selenops can glide back to the trunk of a tree should they fall.
Molluscs
Flying squid. Several oceanic squids of the family Ommastrephidae, such as the Pacific flying squid, will leap out of the water to escape predators, an adaptation similar to that of flying fish. Smaller squids will fly in shoals, and have been observed to cover distances as long as . Small fins towards the back of the mantle do not produce much lift, but do help stabilize the motion of flight. They exit the water by expelling water out of their funnel; indeed, some squid have been observed to continue jetting water while airborne, providing thrust even after leaving the water. This may make flying squid the only animals with jet-propelled aerial locomotion. The neon flying squid has been observed to glide for distances over , at speeds of up to .
Fish
Flying fish. There are over 50 species of flying fish belonging to the family Exocoetidae. They are mostly marine fishes of small to medium size. The largest flying fish can reach lengths of but most species measure less than in length. They can be divided into two-winged varieties and four-winged varieties. Before the fish leaves the water it increases its speed to around 30 body lengths per second and as it breaks the surface and is freed from the drag of the water it can be traveling at around . The glides are usually up to in length, but some have been observed soaring for hundreds of metres using the updraft on the leading edges of waves. The fish can also make a series of glides, each time dipping the tail into the water to produce forward thrust. The longest recorded series of glides, with the fish only periodically dipping its tail in the water, was for 45 seconds. It has been suggested that the genus Exocoetus is on an evolutionary borderline between flight and gliding. It flaps its large pectoral fins while gliding, but does not use a power stroke like flying animals. It has been found that some flying fish can glide as effectively as some flying birds.
Live bearers
Halfbeaks. A group related to the Exocoetidae, one or two hemirhamphid species possess enlarged pectoral fins and show true gliding flight rather than simple leaps. Marshall (1965) reports that Euleptorhamphus viridis can cover in two separate hops.
Trinidadian guppies have been observed exhibiting a gliding response to escape predators.
Freshwater butterflyfish (possibly gliding). Pantodon buchholzi has the ability to jump and possibly glide a short distance. It can move through the air several times the length of its body. While it does this, the fish flaps its large pectoral fins, giving it its common name. However, it is debated whether the freshwater butterflyfish can truly glide; Saidel et al. (2004) argue that it cannot.
Freshwater hatchetfish. In the wild, they have been observed jumping out of the water and gliding (although reports of them achieving powered flight have been brought up many times).
Amphibians
Gliding has evolved independently in two families of tree frogs, the Old World Rhacophoridae and the New World Hylidae. Within each lineage there are a range of gliding abilities from non-gliding, to parachuting, to full gliding.
Rhacophoridae flying frogs. A number of the Rhacophoridae, such as Wallace's flying frog (Rhacophorus nigropalmatus), have adaptations for gliding, the main feature being enlarged toe membranes. For example, the Malayan flying frog Rhacophorus prominanus glides using the membranes between the toes of its limbs, and small membranes located at the heel, the base of the leg, and the forearm. Some of the frogs are quite accomplished gliders, for example, the Chinese flying frog Rhacophorus dennysi can maneuver in the air, making two kinds of turn, either rolling into the turn (a banked turn) or yawing into the turn (a crabbed turn).
Hylidae flying frogs. The other frog family that contains gliders.
Reptiles
Several lizards and snakes are capable of gliding:
Draco lizards. There are 28 species of lizard of the genus Draco, found in Sri Lanka, India, and Southeast Asia. They live in trees, feeding on tree ants, but nest on the forest floor. They can glide for up to and over this distance they lose only in height. Unusually, their patagium (gliding membrane) is supported on elongated ribs rather than the more common situation among gliding vertebrates of having the patagium attached to the limbs. When extended, the ribs form a semicircle on either side of the lizard's body and can be folded to the body like a folding fan.
Gliding lacertids. There are two species of gliding lacertid, of the genus Holaspis, found in Africa. They have fringed toes and tail sides and can flatten their bodies for gliding or parachuting.
Ptychozoon flying geckos. There are six species of gliding gecko, of the genus Ptychozoon, from Southeast Asia. These lizards have small flaps of skin along their limbs, torso, tail, and head that catch the air and enable them to glide.
Luperosaurus flying geckos. A possible sister taxon to Ptychozoon which has similar flaps and folds and also glides.
Thecadactylus flying geckos. At least some species of Thecadactylus, such as T. rapicauda, are known to glide.
Cosymbotus flying gecko. Similar adaptations to Ptychozoon are found in the two species of the gecko genus Cosymbotus.
Chrysopelea snakes. Five species of snake from Southeast Asia, Melanesia, and India. The paradise tree snake of southern Thailand, Malaysia, Borneo, Philippines, and Sulawesi is the most capable glider of those snakes studied. It glides by stretching out its body sideways and opening its ribs so the belly is concave, and by making lateral slithering movements. It can remarkably glide up to and make 90 degree turns.
Mammals
Bats are the only freely flying mammals. A few other mammals can glide or parachute; the best known are flying squirrels and flying lemurs.
Flying squirrels (subfamily Petauristinae). There are more than 40 living species divided between 14 genera of flying squirrel. Flying squirrels are found in Asia (most species), North America (genus Glaucomys) and Europe (Siberian flying squirrel). They inhabit tropical, temperate, and Subarctic environments, with the Glaucomys preferring boreal and montane coniferous forests, specifically landing on red spruce (Picea rubens) trees as landing sites; they are known to rapidly climb trees, but take some time to locate a good landing spot. They tend to be nocturnal and are highly sensitive to light and noise. When a flying squirrel wishes to cross to a tree that is further away than the distance possible by jumping, it extends the cartilage spur on its elbow or wrist. This opens out the flap of furry skin (the patagium) that stretches from its wrist to its ankle. It glides spread-eagle and with its tail fluffed out like a parachute, and grips the tree with its claws when it lands. Flying squirrels have been reported to glide over .
Anomalures or scaly-tailed flying squirrels (family Anomaluridae). These brightly coloured African rodents are not squirrels but have evolved to resemble flying squirrels by convergent evolution. There are seven species, divided into three genera. All but one species have gliding membranes between their front and hind legs. The genus Idiurus contains two particularly small species known as flying mice, but similarly they are not true mice.
Colugos or "flying lemurs" (order Dermoptera). There are two species of colugo. Despite their common name, colugos are not lemurs; true lemurs are primates. Molecular evidence suggests that colugos are a sister group to primates; however, some mammalogists suggest they are a sister group to bats. Found in Southeast Asia, the colugo is probably the mammal most adapted for gliding, with a patagium that is as large as geometrically possible. They can glide as far as with minimal loss of height. They have the most developed propatagium of any gliding mammal, with a mean launch velocity of approximately 3.7 m/s; the Malayan colugo has been known to initiate glides without jumping.
Sifaka, a type of lemur, and possibly some other primates (possible limited gliding or parachuting). A number of primates have been suggested to have adaptations that allow limited gliding or parachuting: sifakas, indris, galagos and saki monkeys. Most notably, the sifaka, a type of lemur, has thick hairs on its forearms that have been argued to provide drag, and a small membrane under its arms that has been suggested to provide lift by having aerofoil properties.
Flying phalangers or wrist-winged gliders (subfamily Petaurinae). Possums found in Australia and New Guinea. The gliding membranes are hardly noticeable until they jump. On jumping, the animal extends all four legs and stretches the loose folds of skin. The subfamily contains seven species. Of the six species in the genus Petaurus, the sugar glider and the Biak glider are the most common species. The lone species in the genus Gymnobelideus, Leadbeater's possum, has only a vestigial gliding membrane.
Greater glider (Petauroides volans). The only species of the genus Petauroides of the family Pseudocheiridae. This marsupial is found in Australia, and was originally classed with the flying phalangers, but is now recognised as separate. Its flying membrane only extends to the elbow, rather than to the wrist as in Petaurinae. It has elongated limbs compared to its non-gliding relatives.
Feather-tailed possums (family Acrobatidae). This family of marsupials contains two genera, each with one species. The feathertail glider (Acrobates pygmaeus), found in Australia is the size of a very small mouse and is the smallest mammalian glider. The feathertail possum (Distoechurus pennatus) is found in New Guinea, but does not glide. Both species have a stiff-haired feather-like tail.
Extinct
Reptiles
Extinct reptiles similar to Draco. There are a number of unrelated extinct lizard-like reptiles with similar "wings" to the Draco lizards. These include the Late Permian Weigeltisauridae, the Triassic Kuehneosauridae and Mecistotrachelos, and the Cretaceous lizard Xianglong. The largest of these, Kuehneosaurus, has a wingspan of , and was estimated to be able to glide about .
Sharovipterygidae. These strange reptiles from the Upper Triassic of Kyrgyzstan and Poland unusually had a membrane on their elongated hind limbs, extending their otherwise normal, flying-squirrel-like patagia significantly. The forelimbs are in contrast much smaller.
Hypuronector. This bizarre drepanosaur displays limb proportions, particularly the elongated forelimbs, that are consistent with a flying or gliding animal with patagia.
Non-avian dinosaurs
Scansoriopterygidae is unique among dinosaurs for the development of membranous wings, unlike the feathered airfoils of other theropods. Much like modern anomalures it developed a bony rod to help support the wing, albeit on the wrist and not the elbow.
Fish
Thoracopteridae is a lineage of Triassic flying fish-like Perleidiformes, having converted their pectoral and pelvic fins into broad wings very similar to those of their modern counterparts. The Ladinian genus Potanichthys is the oldest member of this clade, suggesting that these fish began exploring aerial niches soon after the Permian-Triassic extinction event.
Mammals
Volaticotherium antiquum. A gliding eutriconodont, long considered the earliest gliding mammal until the discovery of contemporary gliding haramiyidans. It lived around 164–165 million years ago, during the Middle-Late Jurassic of what is now China, and used a fur-covered skin membrane to glide through the air. The closely related Argentoconodon is also thought to have been able to glide, based on postcranial similarities.
The haramiyidans Vilevolodon, Xianshou, Maiopatagium and Arboroharamiya, known from the Middle-Late Jurassic of China, had extensive patagia, highly convergent with those of colugos.
A gliding metatherian (possibly a marsupial) is known from the Paleocene of Itaboraí, Brazil.
A gliding rodent belonging to the extinct family Eomyidae, Eomys quercyi, is known from the late Oligocene of Germany.
See also
Animal locomotion
Flying mythological creatures
Insect thermoregulation
Organisms at high altitude
Aerial locomotion in marine animals
References
Further reading
The Pterosaurs: From Deep Time by David Unwin
External links
Canopy Locomotion from Mongabay online magazine
Learn the Secrets of Flight from Vertebrate Flight Exhibit at UCMP
Canopy life
Insect flight, photographs of flying insects – Rolf Nagels
Map of Life - "Gliding mammals" – University of Cambridge
Ethology
Evolution of animals
Natural history | 0.766923 | 0.994876 | 0.762993 |
Treadmill with Vibration Isolation Stabilization | The Treadmill with Vibration Isolation Stabilization System, commonly abbreviated as TVIS, is a treadmill for use on board the International Space Station and is designed to allow astronauts to run without vibrating delicate microgravity science experiments in adjacent labs. International Space Station treadmills, not necessarily described here, have included the original treadmill, the original TVIS, the БД-2, the Combined Operational Load-Bearing External Resistance Treadmill (COLBERT), and the Treadmill 2 (abbreviated as T2). Some share a name, some a design, and some a function; some use different (passive) vibration-suppression systems, and for others it is unclear how they differ.
The name for the treadmill (COLBERT) came about due to a naming contest that NASA held for what became the Tranquility module. Comedian and TV personality Stephen Colbert used his show The Colbert Report to encourage his viewers to write in votes to use "Colbert" during the contest. After the results of the contest were announced, NASA decided to use Colbert's name for the new treadmill in place of naming the Tranquility module after him.
Exercise
Following the advent of space stations that can be inhabited for long periods of time, exposure to weightlessness has been demonstrated to have some deleterious effects on human health. Humans are well-adapted to the physical conditions at the surface of the Earth. In response to an extended period of weightlessness, various physiological systems begin to change and atrophy. Though these changes are usually temporary, long-term health issues can result.
The most common problem experienced by humans in the initial hours of weightlessness is known as space adaptation syndrome or SAS, commonly referred to as space sickness. Symptoms of SAS include nausea and vomiting, vertigo, headaches, lethargy, and overall malaise. The first case of SAS was reported by cosmonaut Gherman Titov in 1961. Since then, roughly 45% to 75% of all people who have flown in space have suffered from this condition. The duration of space sickness varies, but in no case has it lasted for more than 72 hours, after which the body adjusts to the new environment.
The most significant adverse effects of long-term weightlessness are muscle atrophy and deterioration of the skeleton, or spaceflight osteopenia. These effects can be minimized through a regimen of exercise. Other significant effects include fluid redistribution, a slowing of the cardiovascular system, decreased production of red blood cells, balance disorders, and a weakening of the immune system. Lesser symptoms include loss of body mass, nasal congestion, sleep disturbance, excess flatulence, and puffiness of the face. These effects begin to reverse quickly upon return to the Earth.
To prevent some of the effects associated with weightlessness, a treadmill with vibration isolation and stabilization designed for the International Space Station (ISS) was first evaluated during STS-81. Three crew members ran and walked on the device, which floats freely in the micro-gravity experienced during orbit. For the majority of the more than 2 hours of locomotion studied, the treadmill operated well, and vibration transmitted to the vehicle was within the micro-gravity allocation limits that are defined for the ISS. Refinements to the treadmill and harness system, which ultimately led to development of the COLBERT model, were studied after this first flight. One goal of the treadmill design is to offer the possibility of generating 1 g-like loads on the lower extremities while preserving the micro-gravity environment of the ISS for structural safety and vibration free experimental conditions.
The treadmills are intended to help astronauts stay fit, fighting off the bone loss (spaceflight osteopenia) and muscle decay that otherwise comes with space travel. Astronauts use bungee cords to strap themselves to the treadmill in order to remain in contact with the equipment while in micro-gravity. Researchers believe that exercise is a good countermeasure for the bone and muscle density loss that occurs when humans live for a long time without gravity.
Maintenance
The original Treadmill with Vibration Isolation Stabilization (TVIS) that was recessed into the floor of the Zvezda Service Module was decommissioned in June 2013, disposed of on the Russian Progress (50P) in July 2013 and replaced by the Russian БД-2.
Expedition 20 flight engineers Michael Barratt and Koichi Wakata have performed a complete overhaul of that treadmill to extend its life. Both treadmills will continue to be used, which will nearly double the availability of these critical work-out devices for space station crews.
The Treadmill with Vibration Isolation Stabilization system (TVIS) also required repair in 2002, during Expedition 5 while STS-112 was docked. Valery Korzun spent an entire day performing maintenance on the unit.
A design flaw with the COLBERT power system was discovered in September 2010, within 10 months of its being commissioned. A multiple-day in-flight maintenance (IFM) activity was required in October in order to remove COLBERT from its rack and replace key power components.
Naming COLBERT
In early 2009, NASA held an online poll to name what became the Tranquility module. On the 3 March 2009 episode of The Colbert Report, host Stephen Colbert instructed his viewers to suggest "Colbert" as the name for Node 3 in the online poll. On 23 March 2009, it was announced that "Colbert" had garnered the most votes, but NASA did not immediately commit to using the name.
Congressman Chaka Fattah had pledged to use congressional power to ensure that democratic voting is honored in outer space as well as on planet Earth, in response to the possibility that NASA would overrule the voting. On the 14 April 2009 episode of The Colbert Report, astronaut Sunita Williams appeared on the show to announce that NASA decided to name the node 3 "Tranquility", the eighth most popular response in the census, and announced that they would name a new treadmill on the station after the comedian – Combined Operational Load-Bearing External Resistance Treadmill (COLBERT). Colbert was invited to Houston to test the treadmill, and later to Florida for its launch. The treadmill was taken to the ISS in August 2009, aboard STS-128 and was installed in the Tranquility module after the node arrived at the station in February 2010.
NASA poked fun at itself in a humorous press release included in the STS-128 flight day 6 execute package report which claimed that Jon Stewart demanded to be honored similarly but turned down the agency's offer to name the ISS Urine Processor "Space Toilet Environmental Waste Accumulator/Recycling Thingy" (STEWART).
Development
NASA Engineers started development with a Woodway medical treadmill design which is available to anyone on Earth, and they asked Woodway to nickel plate the parts and make some other modifications, but it is fundamentally the same running-in-place device as the commercially available model. The structures which support the treadmill have been adapted for use in space. Without gravity to hold the runner to the surface of the treadmill, designers added elastic straps that fit around the shoulders and waist in order to keep the runner from rocketing across the space station with the first hard step. Designers also had to work out a way to keep the treadmill from shaking the whole station with every step. Preventing vibrations is relatively easy to do on Earth, but the station is floating just like the astronauts are, and it wants to react against any movement that is made inside of it. Even small actions can shake up delicate microgravity experiments taking place inside the station's laboratories. Developing a system to stop the vibrations was the biggest challenge, Wiederhoeft said.
The first station treadmill, which was brought to the Space Station aboard STS-98, relied on a powered system of gyroscopes and mechanisms to reduce vibrations. COLBERT's Vibration Isolation System was designed to work without power, and also to be more reliable than its predecessor. COLBERT will rest on springs that are hooked to dampers, which are then connected to a standard-sized rack that has been extensively reinforced in order to handle the power produced by COLBERT users. The rack alone weighs 2,200 pounds, which is its contractual design limit. The new treadmill is also louder than the first one, a trade-off Wiederhoeft said was necessary to increase its reliability. "Noise and reliability are fighting against each other here," Wiederhoeft said. "With a lot more time we could have had both quiet and reliable. We went for reliable, and did what we could with noise."
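The spring-and-damper arrangement described above behaves like a textbook single-degree-of-freedom vibration isolator, which strongly attenuates disturbances well above its natural frequency. The sketch below evaluates the standard transmissibility formula for assumed, illustrative values of natural frequency and damping ratio; it is not based on COLBERT's actual design parameters.
```python
import math

def transmissibility(freq_hz, natural_freq_hz, damping_ratio):
    """Fraction of the disturbance transmitted through a spring-damper isolator."""
    r = freq_hz / natural_freq_hz
    num = 1.0 + (2.0 * damping_ratio * r) ** 2
    den = (1.0 - r**2) ** 2 + (2.0 * damping_ratio * r) ** 2
    return math.sqrt(num / den)

# Assumed isolator: ~1 Hz natural frequency, 20% of critical damping.
for f in (0.5, 1.0, 2.0, 3.0, 10.0):  # footfall disturbance frequencies, Hz
    t = transmissibility(f, 1.0, 0.2)
    print(f"{f:5.1f} Hz disturbance -> {t:.2f} of it transmitted to the structure")
```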
Development of the treadmills was also utilized in order to further development of commercial products. Possible secondary effects of development include improved vibration and acoustic isolation applications in sensitive equipment such as equipment used in optical, microelectronic and precision manufacturing.
COLBERT delivery
A team of engineers was required in order to prepare COLBERT to survive the rigorous vibrations of the launch process. COLBERT had to be disassembled into scores of parts, separated into more than six bags and strapped to racks inside the Leonardo cargo module, which flew to the International Space Station aboard STS-128. COLBERT was delivered to the Space Station in 2009, and resided first inside the Harmony module, before later being moved to the Tranquility module.
The packing team set out to make sure everything that is launched reaches the station in good working order. "If it's the COLBERT, or if it's something else, it's still not going to be useful in orbit if it's broken," said Pete Gauthier, packing engineer for United Space Alliance. "The difference with something like this is that it's big and it's heavy, so we have to use our biggest bag," he said. "It's easier for the crew if you have all the pieces in one bag, but when you have six bags, you just can't do that." The astronauts on the station are expected to spend about 20 hours putting the whole thing together, including the vibration system. After assembly, the only care COLBERT should need is an occasional greasing of its bearings.
References
External links
Science facilities on the International Space Station
Exercise equipment | 0.780105 | 0.978046 | 0.762979 |
The Information: A History, a Theory, a Flood | The Information: A History, a Theory, a Flood is a book by science history writer James Gleick, published in March 2011, which covers the genesis of the current Information Age. It was on The New York Times best-seller list for three weeks following its debut.
The Information has also been published in ebook formats by Fourth Estate and Random House, and as an audiobook by Random House Audio.
Synopsis
Gleick begins with the tale of colonial European explorers and their fascination with African talking drums and their observed use to send complex widely understood messages back and forth between villages, and over even longer distances by relay. Gleick transitions from the information implications of such drum signaling to the impact that the arrival of long-distance telegraph, and then telephone, communication had on the commercial and social prospects of the industrializing West. Research to improve these technologies ultimately led to our understanding of the essentially digital nature of information, quantized down to the unit of the bit (or qubit).
Starting with the development of symbolic written language (and the eventual perceived need for a dictionary), Gleick examines the history of intellectual insights central to information theory, detailing key figures such as Claude Shannon, Charles Babbage, Ada Byron, Samuel Morse, Alan Turing, Stephen Hawking, Richard Dawkins and John Archibald Wheeler. The author also delves into how digital information is now being understood in relation to physics and genetics. Following the circulation of Claude Shannon's A Mathematical Theory of Communication and Norbert Wiener's Cybernetics, many disciplines attempted to jump on the information theory bandwagon, with varying success. Information theory concepts of data compression and error correction became especially important to the computer and electronics industries.
Gleick finally discusses Wikipedia as an emerging internet-based Library of Babel, investigating the implications of its expansive user-generated content, including the ongoing struggle between inclusionists, deletionists, and vandals. Gleick uses the Jimmy Wales-created article for the Cape Town butchery restaurant Mzoli's as a case study of this struggle. The flood of information that humanity is now exposed to presents new challenges, Gleick says. He argues that because we retain more of our information now than at any previous point in human history, it takes much more effort to delete or remove unwanted information than to accumulate it. This, he suggests, is the ultimate entropy cost of generating additional information, and the answer that finally slays Maxwell's demon.
Reception
In addition to winning major awards for science writing and history, The Information received mostly positive reviews. In The New York Times, Janet Maslin said it is "so ambitious, illuminating and sexily theoretical that it will amount to aspirational reading for many of those who have the mettle to tackle it." Other admirers were Nicholas Carr for The Daily Beast and physicist Freeman Dyson for The New York Review of Books. Science fiction author Cory Doctorow in his BoingBoing review called Gleick "one of the great science writers of all time", "Not a biographer of scientists... but a biographer of the idea itself." Tim Wu for Slate praised "a mind-bending explanation of theory" but wished Gleick had examined the economic importance of information more deeply. Ian Pindar writing for The Guardian complained that The Information does not fully address the relationship between social control of information (censorship, propaganda) and access to political power.
Awards and honors
2012 Royal Society Winton Prize for Science Books, winner.
2012 PEN/E. O. Wilson Literary Science Writing Award, winner.
2012 Andrew Carnegie Medal for Excellence in Nonfiction, finalist.
2012 Hessell-Tiltman Prize, winner.
2011 National Book Critics Circle Award, finalist (Nonfiction).
2011 Salon Book Award (Nonfiction).
2011 New York Times Bestseller
2011 Time Magazine's Best Books of the Year
See also
Decoding Reality: The Universe as Quantum Information, 2010 book by Vlatko Vedral
Decoding the Universe, 2007 book by Charles Seife
References
External links
Bits in the ether – Author Page
The Information Palace – Essay by James Gleick on origin and evolving meaning of the word 'information'.
Wiki is not paper – Essay
After Words interview with Gleick on The Information, June 18, 2011, C-SPAN
2011 non-fiction books
Popular science books
Works about information
Pantheon Books books
Fourth Estate books | 0.780359 | 0.977727 | 0.762978 |
Plasma parameters | Plasma parameters define various characteristics of a plasma, an electrically conductive collection of charged and neutral particles of various species (electrons and ions) that responds collectively to electromagnetic forces. Such particle systems can be studied statistically, i.e., their behaviour can be described based on a limited number of global parameters instead of tracking each particle separately.
Fundamental
The fundamental plasma parameters in a steady state are
the number density of each particle species present in the plasma,
the temperature of each species,
the mass of each species,
the charge of each species,
and the magnetic flux density.
Using these parameters and physical constants, other plasma parameters can be derived.
Other
All quantities are in Gaussian (cgs) units except energy and temperature, which are in electronvolts. For the sake of simplicity, a single ionic species is assumed. The ion mass is expressed in units of the proton mass and the ion charge in units of the elementary charge (in the case of a fully ionized atom, the charge number equals the respective atomic number). The other physical quantities used are the Boltzmann constant, the speed of light, and the Coulomb logarithm.
Frequencies
Lengths
Velocities
Dimensionless
number of particles in a Debye sphere
Alfvén speed to speed of light ratio
electron plasma frequency to gyrofrequency ratio
ion plasma frequency to gyrofrequency ratio
thermal pressure to magnetic pressure ratio, or beta, β
magnetic field energy to ion rest energy ratio
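As a rough illustration of how such derived quantities follow from the fundamental parameters, the sketch below evaluates a few characteristic frequencies, lengths, velocities and dimensionless ratios in SI units (rather than the Gaussian units used in this article) for an assumed hydrogen plasma; the density, temperature and field values are arbitrary examples.
```python
import math

# Physical constants (SI)
e = 1.602176634e-19      # elementary charge, C
me = 9.1093837015e-31    # electron mass, kg
mp = 1.67262192369e-27   # proton mass, kg
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m
mu0 = 4e-7 * math.pi     # vacuum permeability, H/m
c = 2.99792458e8         # speed of light, m/s

# Assumed fundamental parameters for a hydrogen plasma (arbitrary example values)
n = 1e19        # number density, m^-3
Te_eV = 10.0    # electron temperature, eV
B = 0.5         # magnetic flux density, T

Te_J = Te_eV * e                                   # temperature as an energy, J
omega_pe = math.sqrt(n * e**2 / (eps0 * me))       # electron plasma frequency, rad/s
omega_ce = e * B / me                              # electron gyrofrequency, rad/s
lambda_D = math.sqrt(eps0 * Te_J / (n * e**2))     # Debye length, m
v_A = B / math.sqrt(mu0 * n * mp)                  # Alfven speed, m/s
N_D = (4.0 / 3.0) * math.pi * n * lambda_D**3      # particles in a Debye sphere
beta = n * Te_J / (B**2 / (2.0 * mu0))             # electron thermal / magnetic pressure

print(f"omega_pe = {omega_pe:.3e} rad/s, omega_ce = {omega_ce:.3e} rad/s")
print(f"Debye length = {lambda_D:.3e} m, N_D = {N_D:.3e}")
print(f"v_A/c = {v_A/c:.3e}, omega_pe/omega_ce = {omega_pe/omega_ce:.2f}, beta = {beta:.3e}")
```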
Collisionality
In the study of tokamaks, collisionality is a dimensionless parameter which expresses the ratio of the electron-ion collision frequency to the banana orbit frequency.
The plasma collisionality is defined as
$$\nu^{*} = \nu_\mathrm{ei}\,\epsilon^{-3/2}\,q\,R\,\sqrt{\frac{m_\mathrm{i}}{k_\mathrm{B} T_\mathrm{i}}},$$
where $\nu_\mathrm{ei}$ denotes the electron–ion collision frequency, $R$ is the major radius of the plasma, $\epsilon$ is the inverse aspect-ratio, and $q$ is the safety factor. The plasma parameters $m_\mathrm{i}$ and $T_\mathrm{i}$ denote, respectively, the mass and temperature of the ions, and $k_\mathrm{B}$ is the Boltzmann constant.
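A minimal numerical sketch of the collisionality expression above, in SI units; every input value (including the electron–ion collision frequency itself) is an assumed, tokamak-like example rather than a computed or measured quantity:
```python
import math

e = 1.602176634e-19      # elementary charge, C (1 eV in joules)
mp = 1.67262192369e-27   # proton mass, kg

def collisionality(nu_ei, R, eps, q, T_i_eV, m_i):
    """nu* = nu_ei * eps**(-3/2) * q * R / v_th,i  with  v_th,i = sqrt(kB*T_i/m_i)."""
    v_th_i = math.sqrt(T_i_eV * e / m_i)  # ion thermal speed, with T_i converted to joules
    return nu_ei * eps**-1.5 * q * R / v_th_i

# Assumed example: nu_ei = 1e4 s^-1, R = 1.7 m, eps = 0.3, q = 2, T_i = 1 keV deuterium
print(f"nu* ~ {collisionality(1e4, 1.7, 0.3, 2.0, 1000.0, 2.0 * mp):.2f}")
```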
Electron temperature
Temperature is a statistical quantity whose formal definition is
$$T = \left(\frac{\partial U}{\partial S}\right)_{V,N},$$
or the change in internal energy with respect to entropy, holding volume and particle number constant. A practical definition comes from the fact that the atoms, molecules, or whatever particles in a system have an average kinetic energy. The average means to average over the kinetic energy of all the particles in a system.
If the velocities of a group of electrons, e.g., in a plasma, follow a Maxwell–Boltzmann distribution, then the electron temperature is defined as the temperature of that distribution. For other distributions, not assumed to be in equilibrium or have a temperature, two-thirds of the average energy is often referred to as the temperature, since for a Maxwell–Boltzmann distribution with three degrees of freedom, $\langle E \rangle = \tfrac{3}{2} k_\mathrm{B} T$.
The SI unit of temperature is the kelvin (K), but using the above relation the electron temperature is often expressed in terms of the energy unit electronvolt (eV). Each kelvin (1 K) corresponds to about $8.617\times10^{-5}$ eV; this factor is the ratio of the Boltzmann constant to the elementary charge. Each eV is equivalent to 11,605 kelvins, which can be calculated from the relation $1\,\mathrm{eV}/k_\mathrm{B} \approx 11{,}605\ \mathrm{K}$.
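A short sketch of these unit conversions, together with the $\langle E \rangle = \tfrac{3}{2} k_\mathrm{B} T$ relation mentioned above; the 2 eV example temperature is an arbitrary choice:
```python
kB = 1.380649e-23    # Boltzmann constant, J/K
e = 1.602176634e-19  # elementary charge, C (so 1 eV = e joules)

kelvin_per_eV = e / kB   # ~11,605 K per eV
eV_per_kelvin = kB / e   # ~8.617e-5 eV per K
print(f"1 eV corresponds to {kelvin_per_eV:.0f} K")
print(f"1 K  corresponds to {eV_per_kelvin:.3e} eV")

# Mean kinetic energy of a Maxwell-Boltzmann distribution at Te = 2 eV (arbitrary example):
Te_eV = 2.0
mean_energy_eV = 1.5 * Te_eV  # <E> = (3/2) kB T, expressed in eV
print(f"Te = {Te_eV} eV ({Te_eV * kelvin_per_eV:.0f} K) -> <E> = {mean_energy_eV} eV")
```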
The electron temperature of a plasma can be several orders of magnitude higher than the temperature of the neutral species or of the ions. This is a result of two facts. Firstly, many plasma sources heat the electrons more strongly than the ions. Secondly, atoms and ions are much heavier than electrons, and energy transfer in a two-body collision is much more efficient if the masses are similar. Therefore, equilibration of the temperature happens very slowly, and is not achieved during the time range of the observation.
See also
Ball-pen probe
Langmuir probe
References
NRL Plasma Formulary – Naval Research Laboratory (2018)
Plasma parameters
Astrophysics | 0.772687 | 0.98743 | 0.762974 |
Landau–Lifshitz–Gilbert equation | In physics, the Landau–Lifshitz–Gilbert equation (usually abbreviated as LLG equation), named for Lev Landau, Evgeny Lifshitz, and T. L. Gilbert, is a name used for a differential equation describing the dynamics (typically the precessional motion) of magnetization in a solid. It is a modified version by Gilbert of the original equation of Landau and Lifshitz. The LLG equation is similar to the Bloch equation, but they differ in the form of the damping term. The LLG equation describes a more general scenario of magnetization dynamics beyond the simple Larmor precession. In particular, the effective field driving the precessional motion of the magnetization $\mathbf{M}$ is not restricted to real magnetic fields; it incorporates a wide range of mechanisms including magnetic anisotropy, exchange interaction, and so on.
The various forms of the LLG equation are commonly used in micromagnetics to model the effects of a magnetic field and other magnetic interactions on ferromagnetic materials. It provides a practical way to model the time-domain behavior of magnetic elements. Recent developments generalize the LLG equation to include the influence of spin-polarized currents in the form of spin-transfer torque.
Landau–Lifshitz equation
In a ferromagnet, the magnitude of the magnetization $M$ at each spacetime point is approximated by the saturation magnetization $M_\mathrm{s}$ (although it can be smaller when averaged over a chunk of volume). The Landau–Lifshitz equation, a precursor of the LLG equation, phenomenologically describes the rotation of the magnetization in response to the effective field $\mathbf{H}_\mathrm{eff}$, which accounts for not only a real magnetic field but also internal magnetic interactions such as exchange and anisotropy. An earlier, but equivalent, equation (the Landau–Lifshitz equation) was introduced by Landau and Lifshitz in 1935:
$$\frac{\mathrm{d}\mathbf{M}}{\mathrm{d}t} = -\gamma\, \mathbf{M} \times \mathbf{H}_\mathrm{eff} - \lambda\, \mathbf{M} \times \left(\mathbf{M} \times \mathbf{H}_\mathrm{eff}\right),$$
where $\gamma$ is the electron gyromagnetic ratio and $\lambda$ is a phenomenological damping parameter, often replaced by
$$\lambda = \alpha \frac{\gamma}{M_\mathrm{s}},$$
where $\alpha$ is a dimensionless constant called the damping factor. The effective field $\mathbf{H}_\mathrm{eff}$ is a combination of the external magnetic field, the demagnetizing field, and various internal magnetic interactions involving quantum mechanical effects, which is typically defined as the functional derivative of the magnetic free energy with respect to the local magnetization $\mathbf{M}$. To solve this equation, additional conditions for the demagnetizing field must be included to accommodate the geometry of the material.
Landau–Lifshitz–Gilbert equation
In 1955 Gilbert replaced the damping term in the Landau–Lifshitz (LL) equation by one that depends on the time derivative of the magnetization:
$$\frac{\mathrm{d}\mathbf{M}}{\mathrm{d}t} = -\gamma\, \mathbf{M} \times \mathbf{H}_\mathrm{eff} + \frac{\alpha}{M_\mathrm{s}}\, \mathbf{M} \times \frac{\mathrm{d}\mathbf{M}}{\mathrm{d}t}.$$
This is the Landau–Lifshitz–Gilbert (LLG) equation, where $\alpha$ is the damping parameter, which is characteristic of the material. It can be transformed into the Landau–Lifshitz equation:
$$\frac{\mathrm{d}\mathbf{M}}{\mathrm{d}t} = -\gamma'\, \mathbf{M} \times \mathbf{H}_\mathrm{eff} - \lambda\, \mathbf{M} \times \left(\mathbf{M} \times \mathbf{H}_\mathrm{eff}\right),$$
where
$$\gamma' = \frac{\gamma}{1+\alpha^{2}} \qquad \text{and} \qquad \lambda = \frac{\alpha\gamma}{\left(1+\alpha^{2}\right) M_\mathrm{s}}.$$
In this form of the LL equation, the precessional term depends on the damping term. This better represents the behavior of real ferromagnets when the damping is large.
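In practice the LLG equation is integrated numerically, for instance for a single macrospin. The sketch below integrates the explicit Landau–Lifshitz form given above for one unit magnetization vector in a static field, with assumed, illustrative values for the field, damping factor and time step; it is a toy demonstration, not a production micromagnetics code.
```python
import numpy as np

gamma = 1.76e11   # electron gyromagnetic ratio, rad s^-1 T^-1
alpha = 0.05      # assumed dimensionless damping factor
H_eff = np.array([0.0, 0.0, 0.1])  # assumed static effective field (as induction), T

def llg_rhs(m):
    """dm/dt for a unit magnetization vector m, Landau-Lifshitz form of the LLG equation."""
    gamma_p = gamma / (1.0 + alpha**2)
    precession = -gamma_p * np.cross(m, H_eff)
    damping = -gamma_p * alpha * np.cross(m, np.cross(m, H_eff))
    return precession + damping

# Start tilted away from the field and integrate with a fixed-step RK4 scheme.
m = np.array([1.0, 0.0, 0.1])
m /= np.linalg.norm(m)
dt = 2e-13  # s
for _ in range(50000):  # 10 ns of dynamics
    k1 = llg_rhs(m)
    k2 = llg_rhs(m + 0.5 * dt * k1)
    k3 = llg_rhs(m + 0.5 * dt * k2)
    k4 = llg_rhs(m + dt * k3)
    m = m + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
    m /= np.linalg.norm(m)  # renormalize; |m| is conserved by the exact dynamics

print("final m:", m)  # the moment precesses and relaxes towards the field direction (0, 0, 1)
```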
Landau–Lifshitz–Gilbert–Slonczewski equation
In 1996 John Slonczewski expanded the model to account for the spin-transfer torque, i.e. the torque induced upon the magnetization by spin-polarized current flowing through the ferromagnet. This is commonly written in terms of the unit moment defined by $\mathbf{m} = \mathbf{M}/M_\mathrm{s}$:
$$\frac{\mathrm{d}\mathbf{m}}{\mathrm{d}t} = -\gamma\, \mathbf{m} \times \mathbf{H}_\mathrm{eff} + \alpha\, \mathbf{m} \times \frac{\mathrm{d}\mathbf{m}}{\mathrm{d}t} + \tau_{\parallel}\, \mathbf{m} \times \left(\mathbf{m} \times \mathbf{p}\right) + \tau_{\perp}\, \mathbf{m} \times \mathbf{p},$$
where $\alpha$ is the dimensionless damping parameter, $\tau_{\parallel}$ and $\tau_{\perp}$ are driving torques, and $\mathbf{p}$ is the unit vector along the polarization of the current.
References and footnotes
Further reading
This is only an abstract; the full report is "Armor Research Foundation Project No. A059, Supplementary Report, May 1, 1956", but was never published. A description of the work is given in
External links
Magnetization dynamics applet
Eponymous equations of physics
Magnetic ordering
Partial differential equations
Lev Landau | 0.776146 | 0.983029 | 0.762974 |
Wheeler–Feynman absorber theory | The Wheeler–Feynman absorber theory (also called the Wheeler–Feynman time-symmetric theory), named after its originators, the physicists Richard Feynman and John Archibald Wheeler, is a theory of electrodynamics based on a relativistically correct extension of action at a distance between electron particles. The theory postulates no independent electromagnetic field. Rather, the whole theory is encapsulated by the Lorentz-invariant action of particle trajectories defined as
$$S = -\sum_i m_i c \int \sqrt{\mathrm{d}x_{i\mu}\,\mathrm{d}x_i^{\mu}} \;-\; \sum_{i<j} \frac{e_i e_j}{c} \iint \delta\!\left(s_{ij}^{2}\right) \mathrm{d}x_{i\mu}\,\mathrm{d}x_j^{\mu},$$
where $s_{ij}^{2} = \eta_{\mu\nu}\left(x_i^{\mu} - x_j^{\mu}\right)\left(x_i^{\nu} - x_j^{\nu}\right)$.
The absorber theory is invariant under time-reversal transformation, consistent with the lack of any physical basis for microscopic time-reversal symmetry breaking. Another key principle resulting from this interpretation, and somewhat reminiscent of Mach's principle and the work of Hugo Tetrode, is that elementary particles are not self-interacting. This immediately removes the problem of electron self-energy, which otherwise gives an infinite contribution to the energy of the electromagnetic field.
Motivation
Wheeler and Feynman begin by observing that classical electromagnetic field theory was designed before the discovery of electrons: charge is a continuous substance in the theory. An electron particle does not naturally fit into the theory: should a point charge see the effect of its own field? They reconsider the fundamental problem of a collection of point charges, taking up a field-free action at a distance theory developed separately by Karl Schwarzschild, Hugo Tetrode, and Adriaan Fokker. Unlike the instantaneous action at a distance theories of the early 1800s, these "direct interaction" theories are based on interaction propagation at the speed of light. They differ from the classical field theory in three ways: 1) no independent field is postulated; 2) the point charges do not act upon themselves; 3) the equations are time symmetric. Wheeler and Feynman propose to develop these equations into a relativistically correct generalization of electromagnetism based on Newtonian mechanics.
Problems with previous direct-interaction theories
The Tetrode-Fokker work left unsolved two major problems. First, in a non-instantaneous action at a distance theory, the equal action-reaction of Newton's laws of motion conflicts with causality. If an action propagates forward in time, the reaction would necessarily propagate backwards in time. Second, existing explanations of radiation reaction force or radiation resistance depended upon accelerating electrons interacting with their own field; the direct interaction models explicitly omit self-interaction.
Absorber and radiation resistance
Wheeler and Feynman postulate the "universe" of all other electrons as an absorber of radiation to overcome these issues and extend the direct interaction theories.
Rather than considering an unphysical isolated point charge, they model all charges in the universe with a uniform absorber in a shell around a charge. As the charge moves relative to the absorber, it radiates into the absorber which "pushes back", causing the radiation resistance.
Key result
Feynman and Wheeler obtained their result in a very simple and elegant way. They considered all the charged particles (emitters) present in our universe and assumed all of them to generate time-reversal symmetric waves, i.e. half-retarded plus half-advanced fields. The resulting field is
$$E_{\mathrm{tot}}(\mathbf{x},t) = \sum_{j} \tfrac{1}{2}\left[E_{j}^{\mathrm{ret}}(\mathbf{x},t) + E_{j}^{\mathrm{adv}}(\mathbf{x},t)\right].$$
Then they observed that if the relation
$$E_{\mathrm{free}}(\mathbf{x},t) = \sum_{j} \tfrac{1}{2}\left[E_{j}^{\mathrm{ret}}(\mathbf{x},t) - E_{j}^{\mathrm{adv}}(\mathbf{x},t)\right] = 0$$
holds, then $E_{\mathrm{free}}$, being a solution of the homogeneous Maxwell equation, can be added to the half-sum to obtain the total field
$$E_{\mathrm{tot}}(\mathbf{x},t) = \sum_{j} E_{j}^{\mathrm{ret}}(\mathbf{x},t).$$
The total field is then the observed pure retarded field.
The assumption that the free field is identically zero is the core of the absorber idea. It means that the radiation emitted by each particle is completely absorbed by all other particles present in the universe. To better understand this point, it may be useful to consider how the absorption mechanism works in common materials. At the microscopic scale, it results from the sum of the incoming electromagnetic wave and the waves generated from the electrons of the material, which react to the external perturbation. If the incoming wave is absorbed, the result is a zero outgoing field. In the absorber theory the same concept is used, however, in presence of both retarded and advanced waves.
Arrow of time ambiguity
The resulting wave appears to have a preferred time direction, because it respects causality. However, this is only an illusion. Indeed, it is always possible to reverse the time direction by simply exchanging the labels emitter and absorber. Thus, the apparently preferred time direction results from the arbitrary labelling. Wheeler and Feynman claimed that thermodynamics picked the observed direction; cosmological selections have also been proposed.
The requirement of time-reversal symmetry, in general, is difficult to reconcile with the principle of causality. Maxwell's equations and the equations for electromagnetic waves have, in general, two possible solutions: a retarded (delayed) solution and an advanced one. Accordingly, any charged particle generates waves, say at time $t_0$ and point $x_0$, which will arrive at point $x_1$ at the instant $t_1 = t_0 + |x_1 - x_0|/c$ (here $c$ is the speed of light), after the emission (retarded solution), and other waves, which will arrive at the same place at the instant $t_2 = t_0 - |x_1 - x_0|/c$, before the emission (advanced solution). The latter, however, violates the causality principle: advanced waves could be detected before their emission. Thus the advanced solutions are usually discarded in the interpretation of electromagnetic waves.
In the absorber theory, instead charged particles are considered as both emitters and absorbers, and the emission process is connected with the absorption process as follows: Both the retarded waves from emitter to absorber and the advanced waves from absorber to emitter are considered. The sum of the two, however, results in causal waves, although the anti-causal (advanced) solutions are not discarded a priori.
Alternatively, the way that Wheeler/Feynman came up with the primary equation is: They assumed that their Lagrangian only interacted when and where the fields for the individual particles were separated by a proper time of zero. So since only massless particles propagate from emission to detection with zero proper time separation, this Lagrangian automatically demands an electromagnetic like interaction.
New interpretation of radiation damping
One of the major results of the absorber theory is the elegant and clear interpretation of the electromagnetic radiation process. A charged particle that experiences acceleration is known to emit electromagnetic waves, i.e., to lose energy. Thus, the Newtonian equation for the particle must contain a dissipative force (damping term), which takes into account this energy loss. In the causal interpretation of electromagnetism, Hendrik Lorentz and Max Abraham proposed that such a force, later called Abraham–Lorentz force, is due to the retarded self-interaction of the particle with its own field. This first interpretation, however, is not completely satisfactory, as it leads to divergences in the theory and needs some assumptions on the structure of charge distribution of the particle. Paul Dirac generalized the formula to make it relativistically invariant. While doing so, he also suggested a different interpretation. He showed that the damping term can be expressed in terms of a free field acting on the particle at its own position:
$$E_{\mathrm{damping}}(\mathbf{x}_j,t) = \tfrac{1}{2}\left[E_{j}^{\mathrm{ret}}(\mathbf{x}_j,t) - E_{j}^{\mathrm{adv}}(\mathbf{x}_j,t)\right].$$
However, Dirac did not propose any physical explanation of this interpretation.
A clear and simple explanation can instead be obtained in the framework of absorber theory, starting from the simple idea that each particle does not interact with itself. This is actually the opposite of the first Abraham–Lorentz proposal. The field acting on the particle at its own position (the point $\mathbf{x}_j$) is then
$$E(\mathbf{x}_j,t) = \sum_{k \neq j} \tfrac{1}{2}\left[E_{k}^{\mathrm{ret}}(\mathbf{x}_j,t) + E_{k}^{\mathrm{adv}}(\mathbf{x}_j,t)\right].$$
If we add the free-field term (which vanishes) to this expression, we obtain
$$E(\mathbf{x}_j,t) = \sum_{k \neq j} E_{k}^{\mathrm{ret}}(\mathbf{x}_j,t) + \tfrac{1}{2}\left[E_{j}^{\mathrm{ret}}(\mathbf{x}_j,t) - E_{j}^{\mathrm{adv}}(\mathbf{x}_j,t)\right]$$
and, thanks to Dirac's result, this is
$$E(\mathbf{x}_j,t) = \sum_{k \neq j} E_{k}^{\mathrm{ret}}(\mathbf{x}_j,t) + E_{\mathrm{damping}}(\mathbf{x}_j,t).$$
Thus, the damping force is obtained without the need for self-interaction, which is known to lead to divergences, and also giving a physical justification to the expression derived by Dirac.
Developments since original formulation
Gravity theory
Inspired by the Machian nature of the Wheeler–Feynman absorber theory for electrodynamics, Fred Hoyle and Jayant Narlikar proposed their own theory of gravity in the context of general relativity. This model still exists in spite of recent astronomical observations that have challenged the theory. Stephen Hawking had criticized the original Hoyle-Narlikar theory believing that the advanced waves going off to infinity would lead to a divergence, as indeed they would, if the universe were only expanding.
Transactional interpretation of quantum mechanics
Again inspired by the Wheeler–Feynman absorber theory, the transactional interpretation of quantum mechanics (TIQM) first proposed in 1986 by John G. Cramer, describes quantum interactions in terms of a standing wave formed by retarded (forward-in-time) and advanced (backward-in-time) waves. Cramer claims it avoids the philosophical problems with the Copenhagen interpretation and the role of the observer, and resolves various quantum paradoxes, such as quantum nonlocality, quantum entanglement and retrocausality.
Attempted resolution of causality
T. C. Scott and R. A. Moore demonstrated that the apparent acausality suggested by the presence of advanced Liénard–Wiechert potentials could be removed by recasting the theory in terms of retarded potentials only, without the complications of the absorber idea.
The Lagrangian describing a particle under the influence of the time-symmetric potential generated by another particle is
where is the relativistic kinetic energy functional of particle , and and are respectively the retarded and advanced Liénard–Wiechert potentials acting on particle and generated by particle . The corresponding Lagrangian for particle is
It was originally demonstrated with computer algebra and then proven analytically that
is a total time derivative, i.e. a divergence in the calculus of variations, and thus it gives no contribution to the Euler–Lagrange equations. Thanks to this result the advanced potentials can be eliminated; here the total derivative plays the same role as the free field. The Lagrangian for the N-body system is therefore
The resulting Lagrangian is symmetric under the exchange of with . For this Lagrangian will generate exactly the same equations of motion of and . Therefore, from the point of view of an outside observer, everything is causal. This formulation reflects particle-particle symmetry with the variational principle applied to the N-particle system as a whole, and thus Tetrode's Machian principle. Only if we isolate the forces acting on a particular body do the advanced potentials make their appearance. This recasting of the problem comes at a price: the N-body Lagrangian depends on all the time derivatives of the curves traced by all particles, i.e. the Lagrangian is infinite-order. However, much progress was made in examining the unresolved issue of quantizing the theory. Also, this formulation recovers the Darwin Lagrangian, from which the Breit equation was originally derived, but without the dissipative terms. This ensures agreement with theory and experiment, up to but not including the Lamb shift. Numerical solutions for the classical problem were also found. Furthermore, Moore showed that a model by Feynman and Albert Hibbs is amenable to the methods of higher than first-order Lagrangians and revealed chaotic-like solutions. Moore and Scott showed that the radiation reaction can be alternatively derived using the notion that, on average, the net dipole moment is zero for a collection of charged particles, thereby avoiding the complications of the absorber theory.
The acausality is thus merely apparent, and the entire problem goes away. An opposing view was held by Einstein.
Alternative Lamb shift calculation
As mentioned previously, a serious criticism against the absorber theory is that its Machian assumption that point particles do not act on themselves does not allow (infinite) self-energies and consequently an explanation for the Lamb shift according to quantum electrodynamics (QED). Ed Jaynes proposed an alternate model where the Lamb-like shift is due instead to the interaction with other particles very much along the same notions of the Wheeler–Feynman absorber theory itself. One simple model is to calculate the motion of an oscillator coupled directly with many other oscillators. Jaynes has shown that it is easy to get both spontaneous emission and Lamb shift behavior in classical mechanics. Furthermore, Jaynes' alternative provides a solution to the process of "addition and subtraction of infinities" associated with renormalization.
This model leads to the same type of Bethe logarithm (an essential part of the Lamb shift calculation), vindicating Jaynes' claim that two different physical models can be mathematically isomorphic to each other and therefore yield the same results, a point also apparently made by Scott and Moore on the issue of causality.
Relationship to quantum field theory
This universal absorber theory is mentioned in the chapter titled "Monster Minds" in Feynman's autobiographical work Surely You're Joking, Mr. Feynman! and in Vol. II of the Feynman Lectures on Physics. It led to the formulation of a framework of quantum mechanics using a Lagrangian and action as starting points, rather than a Hamiltonian, namely the formulation using Feynman path integrals, which proved useful in Feynman's earliest calculations in quantum electrodynamics and quantum field theory in general. Both retarded and advanced fields appear respectively as retarded and advanced propagators and also in the Feynman propagator and the Dyson propagator. In hindsight, the relationship between retarded and advanced potentials shown here is not so surprising in view of the fact that, in quantum field theory, the advanced propagator can be obtained from the retarded propagator by exchanging the roles of field source and test particle (usually within the kernel of a Green's function formalism). In quantum field theory, advanced and retarded fields are simply viewed as mathematical solutions of Maxwell's equations whose combinations are decided by the boundary conditions.
See also
Abraham–Lorentz force
Causality
Paradox of radiation of charged particles in a gravitational field
Retrocausality
Symmetry in physics and T-symmetry
Transactional interpretation
Two-state vector formalism
Notes
Sources
Electromagnetism
Richard Feynman
Solar luminosity | The solar luminosity is a unit of radiant flux (power emitted in the form of photons) conventionally used by astronomers to measure the luminosity of stars, galaxies and other celestial objects in terms of the output of the Sun.
One nominal solar luminosity is defined by the International Astronomical Union to be $3.828 \times 10^{26}$ W. The Sun is a weakly variable star, and its actual luminosity therefore fluctuates. The major fluctuation is the eleven-year solar cycle (sunspot cycle) that causes a quasi-periodic variation of about ±0.1%. Other variations over the last 200–300 years are thought to be much smaller than this.
Determination
Solar luminosity is related to solar irradiance (the solar constant). Solar irradiance is responsible for the orbital forcing that causes the Milankovitch cycles, which determine Earthly glacial cycles. The mean irradiance at the top of the Earth's atmosphere is sometimes known as the solar constant, $S_\odot$. Irradiance is defined as power per unit area, so the solar luminosity (total power emitted by the Sun) is the irradiance received at the Earth (solar constant) multiplied by the area of the sphere whose radius is the mean distance between the Earth and the Sun:
$$L_\odot = 4\pi k A_0^2\, S_\odot$$
where $A_0$ is the unit distance (the value of the astronomical unit in metres) and $k$ is a constant (whose value is very close to one) that reflects the fact that the mean distance from the Earth to the Sun is not exactly one astronomical unit.
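As a quick numerical illustration of the relation above, the sketch below multiplies a representative solar-constant value by the area of a sphere of radius one astronomical unit (the numeric inputs are assumed reference values, not quantities defined in this article) and recovers a luminosity close to the nominal figure.

```python
import math

# Illustrative check of L_sun = 4*pi*k*A0^2*S, taking the correction factor k ~ 1.
S = 1361.0             # total solar irradiance at 1 au, in W m^-2 (assumed reference value)
A0 = 1.495978707e11    # astronomical unit, in metres

L_sun = 4.0 * math.pi * A0**2 * S
print(f"L_sun ~ {L_sun:.3e} W")    # about 3.83e26 W, close to the nominal 3.828e26 W
```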
See also
Sun
Solar mass
Solar radius
Nuclear fusion
Active region
Triple-alpha process
References
Further reading
External links
LISIRD: LASP Interactive Solar Irradiance Datacenter
Stellar Luminosity Calculator
Solar Luminosity
Variation of Solar Luminosity
Luminosity
Stellar astronomy
Units of power
Units of measurement in astronomy
Date A Live | Date A Live is a Japanese light novel series written by Kōshi Tachibana and illustrated by Tsunako. Fujimi Shobo published 22 volumes from March 2011 to March 2020 under their Fujimi Fantasia Bunko imprint. Yen Press holds the license to publish the light novel in English.
Five manga were published by Kadokawa Shoten and Fujimi Shobo in Monthly Shōnen Ace and Monthly Dragon Age. An anime television series adaptation produced by AIC Plus+ aired between April and June 2013. A second season by Production IMS aired between April and June 2014. An original anime film, Date A Live: Mayuri Judgement, was released in August 2015. A spin-off light novel series, Date A Live Fragment: Date A Bullet, began publication in March 2017. A third season by J.C.Staff aired between January and March 2019. A fourth season by Geek Toys aired from April to June 2022. A fifth season aired from April to June 2024.
Plot
The series begins with a strange phenomenon called a "spatial quake" devastating the center of Eurasia, resulting in at least 150 million casualties. For the next 30 years, smaller spatial quakes plague the world on an irregular basis. In the present, Shido Itsuka, a seemingly ordinary high school student, comes across a mysterious girl at the ground zero of a spatial quake. He learns from his adoptive sister Kotori the girl is one of the "Spirits" from different dimensions who are the real cause of the spatial quakes, which occur when Spirits manifest themselves in the real world. He also learns Kotori is the commander of the airship Fraxinus, crewed by the organization Ratatoskr and its parent company Asgard Electronics.
Shido is recruited by Ratatoskr to make use of his mysterious ability to seal the Spirits' powers thus stopping them from being a threat to mankind. However, there is a catch: to seal a Spirit's power, he must make each Spirit fall in love with him and make her kiss him. Moreover, Shido and his companions face the opposition of the AST (Anti-Spirit Team), a special unit designed to suppress the threat posed by Spirits by eliminating them, which is backed by DEM (Deus Ex Machina) Industries, a conglomerate led by Sir Isaac Ray Pelham Westcott who intends to exploit the powers of the Spirits for his own agenda. As Shido successfully keeps sealing more and more Spirits, he gains allies to help him with his dates with other Spirits but also increases the competition among them for his attention and affection, much to his chagrin.
Media
Light novels
Date A Live began as a light novel series written by Koushi Tachibana with illustrations by Tsunako. The first volume was published on March 19, 2011, under Fujimi Shobo's Fujimi Fantasia Bunko. Twenty-two volumes have been released in Japan. During their panel at the 2020 Crunchyroll Expo, Yen Press announced that they have licensed the light novel.
Manga
The series received a total of five manga adaptions, all of which were published by Kadokawa Shoten and Fujimi Shobo in Monthly Shōnen Ace and Monthly Dragon Age.
Anime
The anime adaptation was directed by Keitaro Motonaga and produced by AIC Plus+. The series was streamed in lower quality on Niconico, with each episode available a week before its TV premiere. The first episode was streamed on March 31 and aired on Tokyo MX on April 6, 2013. The final episode was streamed on Niconico on June 16 and aired on Tokyo MX on June 22. The opening theme is performed by sweet ARMS, a vocal group consisting of Iori Nomizu, Misuzu Togashi, Kaori Sadohara, and Misato. The series makes use of four ending themes: "Hatsukoi Winding Road", by Kayoko Tsumita, Risako Murai and Midori Tsukimiya; and "Save The World", "Save My Heart" and one further song, all three by Nomizu.
Following the TV broadcast of the final episode of the first season, a second season was announced, which was set to air in April 2014. The opening theme, sung by sweet ARMS, is titled "Trust in You", and the ending theme, sung by Kaori Sadohara, is titled "Day to Story". The animation production was held by Production IMS. An unaired episode was bundled with the third volume of the Date A Live Encore short story collection, which was released on December 9, 2014.
The first and second season have been licensed by Funimation for streaming and home video release in North America and by Madman Entertainment in Australia.
In his Twitter account, Tachibana announced Date A Live would get a third new anime series. Animation production was held by J.C.Staff, with the cast and staff reprising their respective roles from the previous seasons. The series aired from January 11 to March 29, 2019. The opening theme is sung by sweet ARMS titled "I Swear", and the ending theme is sung by Erii Yamazaki titled "Last Promise". The third season ran for 12 episodes. Crunchyroll simulcast the third season, while Funimation produced a simuldub. In Australia and New Zealand, AnimeLab simulcast the third season.
On September 17, 2019, a new anime project was announced. It was later announced to be an anime adaptation of the Date A Live Fragment: Date A Bullet spin-off novels.
On March 16, 2020, it was announced that the series would get a fourth season. The season is produced by Geek Toys and was scheduled to premiere in October 2021, but was delayed to 2022 for "various reasons". Jun Nakagawa directed the fourth season, with Fumihiko Shimo writing the series' scripts, Naoto Nakamura designing the characters, and Go Sakabe returning to compose the series' music. It aired from April 8 to June 24, 2022. The opening theme is sung by Miyu Tomita titled "OveR" and the ending theme is sung by sweet ARMS titled "S.O.S".
Following Sony's acquisition of Crunchyroll, the series was moved from Funimation to Crunchyroll. On April 21, 2022, Crunchyroll announced that the season would receive an English dub, which premiered the following day.
After the conclusion of the fourth season, a fifth season was announced. The main cast and staff of the fourth season returned. It aired from April 10 to June 26, 2024. The opening theme is sung by Miyu Tomita titled "Paradoxes" and the ending theme is sung by sweet ARMS titled "Hitohira". Crunchyroll also licensed the season.
Theatrical film
An animated theatrical film was announced via the official Twitter account of the television series as the airing of the second television season concluded. At a "Date A Live II" event, the staff unveiled the film's title and the premiere date of August 22, 2015, with an original story supervised by the original light novel author, Koushi Tachibana. Nobunaga Shimazaki, the voice actor of Shido Itsuka, introduced a silhouette of the new title character, named Mayuri. During the events of "Tohka's Birthday" on 10 April, Sora Amamiya was confirmed to be voicing Mayuri.
Video games
A video game named Date A Live: Rinne Utopia, produced by Compile Heart and Sting Entertainment, was released on June 27, 2013, for the PlayStation 3. A promotional video was shown at Anime Contents Expo 2013. The game features a new original character named Rinne Sonogami, voiced by Kana Hanazawa. A PlayStation Vita version of the game was released on July 30, 2015, and features new characters and scenarios.
Another video game was released on June 26, 2014, for the PlayStation 3, featuring another new character voiced by Suzuko Mimori. A new installment compiling both past games, produced by Compile Heart and Sting Entertainment, was released on July 30, 2015, for the PlayStation Vita. It is a de facto sequel with new characters and new scenarios. The game features the Yamai Sisters, Miku Izayoi, Rinne Sonogami, Maria Arusu, as well as Marina Arusu, and a new original character voiced by Ayane Sakura. A promotional video was shown at the events of Date A Fes II. An English version of Date A Live: Rio Reincarnation was released on the PlayStation 4 and Steam platforms on July 23, 2019. Two CGs were modified in the English PlayStation 4 version of the game.
A fourth video game, produced again by Compile Heart, was scheduled to be released on July 18, 2019, for the PlayStation 4, in Japan. The limited edition of the game includes a Tsunako-designed box, special books (a Koushi Tachibana-written short story, etc.), and a drama CD. Due to various reasons, the release date had been pushed back to September 24, 2020.
A free-to-play mobile game titled Date A Live: Spirit Pledge was released in China on September 21, 2018, for Android and iOS. A beta test of a global version started on July 26, 2020.
Reception
The first volume of the first anime season placed eighth place amongst Blu-ray sales in Japan during its debut week within the Oricon charts. The PS3 game Date A Live: Rinne Utopia sold 23,340 physical retail copies within the first week of release in Japan. By October 2015, the series as a whole had sold over four million copies.
On June 12, 2015, the Chinese Ministry of Culture listed Date A Live II among 38 anime and manga titles banned in China.
See also
King's Proposal, another light novel series written by Kōshi Tachibana and illustrated by Tsunako
References
External links
Date A Live at Fujimi Shobo
2013 video games
2014 anime television series debuts
2014 video games
2015 video games
2019 anime television series debuts
2022 anime television series debuts
2024 anime television series debuts
Anime International Company
Anime and manga based on light novels
Anime and manga set in schools
Book series introduced in 2011
Censored television series
Cross-dressing in anime and manga
Crunchyroll anime
Dystopian anime and manga
Dystopian novels
Fujimi Fantasia Bunko
Fujimi Shobo manga
Funimation
Geek Toys
Harem anime and manga
J.C.Staff
Japan-exclusive video games
Japanese science fiction novels
Japanese science fiction television series
Kadokawa Dwango franchises
Light novels
Muse Communication
PlayStation 3 games
PlayStation Vita games
Production IMS
Romantic comedy anime and manga
Science fantasy anime and manga
Shōnen manga
Television censorship in China
Television shows based on light novels
Tokyo MX original programming
Video games developed in Japan
Works banned in China
Yen Press titles
Fresnel equations | The Fresnel equations (or Fresnel coefficients) describe the reflection and transmission of light (or electromagnetic radiation in general) when incident on an interface between different optical media. They were deduced by French engineer and physicist Augustin-Jean Fresnel, who was the first to understand that light is a transverse wave, at a time when no one realized that the waves were electric and magnetic fields. For the first time, polarization could be understood quantitatively, as Fresnel's equations correctly predicted the differing behaviour of waves of the s and p polarizations incident upon a material interface.
Overview
When light strikes the interface between a medium with refractive index and a second medium with refractive index , both reflection and refraction of the light may occur. The Fresnel equations give the ratio of the reflected wave's electric field to the incident wave's electric field, and the ratio of the transmitted wave's electric field to the incident wave's electric field, for each of two components of polarization. (The magnetic fields can also be related using similar coefficients.) These ratios are generally complex, describing not only the relative amplitudes but also the phase shifts at the interface.
The equations assume the interface between the media is flat and that the media are homogeneous and isotropic. The incident light is assumed to be a plane wave, which is sufficient to solve any problem since any incident light field can be decomposed into plane waves and polarizations.
S and P polarizations
There are two sets of Fresnel coefficients for two different linear polarization components of the incident wave. Since any polarization state can be resolved into a combination of two orthogonal linear polarizations, this is sufficient for any problem. Likewise, unpolarized (or "randomly polarized") light has an equal amount of power in each of two linear polarizations.
The s polarization refers to polarization of a wave's electric field normal to the plane of incidence (the direction in the derivation below); then the magnetic field is in the plane of incidence. The p polarization refers to polarization of the electric field in the plane of incidence (the plane in the derivation below); then the magnetic field is normal to the plane of incidence. The names "s" and "p" for the polarization components refer to German "senkrecht" (perpendicular or normal) and "parallel" (parallel to the plane of incidence).
Although the reflection and transmission are dependent on polarization, at normal incidence there is no distinction between them so all polarization states are governed by a single set of Fresnel coefficients (and another special case is mentioned below in which that is true).
Configuration
An incident plane wave strikes the interface between two media of refractive indices $n_1$ and $n_2$ at a point $O$. Part of the wave is reflected and part refracted. The angles that the incident, reflected and refracted rays make to the normal of the interface are denoted $\theta_\mathrm{i}$, $\theta_\mathrm{r}$ and $\theta_\mathrm{t}$, respectively.
The relationship between these angles is given by the law of reflection:
$$\theta_\mathrm{i} = \theta_\mathrm{r}$$
and Snell's law:
$$n_1 \sin\theta_\mathrm{i} = n_2 \sin\theta_\mathrm{t}.$$
The behavior of light striking the interface is explained by considering the electric and magnetic fields that constitute an electromagnetic wave, and the laws of electromagnetism, as shown below. The ratio of waves' electric field (or magnetic field) amplitudes are obtained, but in practice one is more often interested in formulae which determine power coefficients, since power (or irradiance) is what can be directly measured at optical frequencies. The power of a wave is generally proportional to the square of the electric (or magnetic) field amplitude.
Power (intensity) reflection and transmission coefficients
We call the fraction of the incident power that is reflected from the interface the reflectance (or reflectivity, or power reflection coefficient) , and the fraction that is refracted into the second medium is called the transmittance (or transmissivity, or power transmission coefficient) . Note that these are what would be measured right at each side of an interface and do not account for attenuation of a wave in an absorbing medium following transmission or reflection.
The reflectance for s-polarized light is
$$R_\mathrm{s} = \left|\frac{Z_2\cos\theta_\mathrm{i} - Z_1\cos\theta_\mathrm{t}}{Z_2\cos\theta_\mathrm{i} + Z_1\cos\theta_\mathrm{t}}\right|^2,$$
while the reflectance for p-polarized light is
$$R_\mathrm{p} = \left|\frac{Z_2\cos\theta_\mathrm{t} - Z_1\cos\theta_\mathrm{i}}{Z_2\cos\theta_\mathrm{t} + Z_1\cos\theta_\mathrm{i}}\right|^2,$$
where $Z_1$ and $Z_2$ are the wave impedances of media 1 and 2, respectively.
We assume that the media are non-magnetic (i.e., $\mu_1 = \mu_2 = \mu_0$), which is typically a good approximation at optical frequencies (and for transparent media at other frequencies). Then the wave impedances are determined solely by the refractive indices $n_1$ and $n_2$:
$$Z_i = \frac{Z_0}{n_i},$$
where $Z_0$ is the impedance of free space and $i = 1, 2$. Making this substitution, we obtain equations using the refractive indices:
$$R_\mathrm{s} = \left|\frac{n_1\cos\theta_\mathrm{i} - n_2\cos\theta_\mathrm{t}}{n_1\cos\theta_\mathrm{i} + n_2\cos\theta_\mathrm{t}}\right|^2 = \left|\frac{n_1\cos\theta_\mathrm{i} - n_2\sqrt{1-\left(\frac{n_1}{n_2}\sin\theta_\mathrm{i}\right)^2}}{n_1\cos\theta_\mathrm{i} + n_2\sqrt{1-\left(\frac{n_1}{n_2}\sin\theta_\mathrm{i}\right)^2}}\right|^2,$$
$$R_\mathrm{p} = \left|\frac{n_1\cos\theta_\mathrm{t} - n_2\cos\theta_\mathrm{i}}{n_1\cos\theta_\mathrm{t} + n_2\cos\theta_\mathrm{i}}\right|^2 = \left|\frac{n_1\sqrt{1-\left(\frac{n_1}{n_2}\sin\theta_\mathrm{i}\right)^2} - n_2\cos\theta_\mathrm{i}}{n_1\sqrt{1-\left(\frac{n_1}{n_2}\sin\theta_\mathrm{i}\right)^2} + n_2\cos\theta_\mathrm{i}}\right|^2.$$
The second form of each equation is derived from the first by eliminating $\theta_\mathrm{t}$ using Snell's law and trigonometric identities.
As a consequence of conservation of energy, one can find the transmitted power (or more correctly, irradiance: power per unit area) simply as the portion of the incident power that isn't reflected:
$$T_\mathrm{s} = 1 - R_\mathrm{s}$$
and
$$T_\mathrm{p} = 1 - R_\mathrm{p}.$$
Note that all such intensities are measured in terms of a wave's irradiance in the direction normal to the interface; this is also what is measured in typical experiments. That number could be obtained from irradiances in the direction of an incident or reflected wave (given by the magnitude of a wave's Poynting vector) multiplied by $\cos\theta$ for a wave at an angle $\theta$ to the normal direction (or equivalently, taking the dot product of the Poynting vector with the unit vector normal to the interface). This complication can be ignored in the case of the reflection coefficient, since $\cos\theta_\mathrm{r} = \cos\theta_\mathrm{i}$, so that the ratio of reflected to incident irradiance in the wave's direction is the same as in the direction normal to the interface.
Although these relationships describe the basic physics, in many practical applications one is concerned with "natural light" that can be described as unpolarized. That means that there is an equal amount of power in the s and p polarizations, so that the effective reflectivity of the material is just the average of the two reflectivities:
$$R_\mathrm{eff} = \frac{1}{2}\left(R_\mathrm{s} + R_\mathrm{p}\right).$$
For low-precision applications involving unpolarized light, such as computer graphics, rather than rigorously computing the effective reflection coefficient for each angle, Schlick's approximation is often used.
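As a worked illustration of the formulas above, the following sketch evaluates $R_\mathrm{s}$, $R_\mathrm{p}$ and their unpolarized average for an air-to-glass interface, and compares the average with Schlick's approximation. The refractive indices are assumed example values, and the functions apply only to lossless, non-magnetic media as discussed in this section.

```python
import numpy as np

def fresnel_R(n1, n2, theta_i):
    """Power reflectances (R_s, R_p) for a planar interface between lossless,
    non-magnetic media, for incidence angle theta_i (radians) in medium 1."""
    sin_t = n1 * np.sin(theta_i) / n2
    if abs(sin_t) >= 1.0:                    # at or beyond the critical angle
        return 1.0, 1.0
    cos_i, cos_t = np.cos(theta_i), np.sqrt(1.0 - sin_t**2)
    r_s = (n1 * cos_i - n2 * cos_t) / (n1 * cos_i + n2 * cos_t)
    r_p = (n2 * cos_i - n1 * cos_t) / (n2 * cos_i + n1 * cos_t)
    return r_s**2, r_p**2

def schlick(n1, n2, theta_i):
    """Schlick's approximation to the unpolarized reflectance (light entering
    the denser medium)."""
    r0 = ((n1 - n2) / (n1 + n2))**2
    return r0 + (1.0 - r0) * (1.0 - np.cos(theta_i))**5

for deg in (0, 30, 45, 60, 80):
    th = np.radians(deg)
    Rs, Rp = fresnel_R(1.0, 1.5, th)
    print(f"{deg:3d} deg:  R_s={Rs:.4f}  R_p={Rp:.4f}  "
          f"average={(Rs + Rp) / 2:.4f}  Schlick={schlick(1.0, 1.5, th):.4f}")
```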
Special cases
Normal incidence
For the case of normal incidence, $\theta_\mathrm{i} = \theta_\mathrm{t} = 0$, and there is no distinction between s and p polarization. Thus, the reflectance simplifies to
$$R_0 = \left|\frac{n_1 - n_2}{n_1 + n_2}\right|^2.$$
For common glass surrounded by air, the power reflectance at normal incidence can be seen to be about 4%, or 8% accounting for both sides of a glass pane.
Brewster's angle
At a dielectric interface from $n_1$ to $n_2$, there is a particular angle of incidence at which $R_\mathrm{p}$ goes to zero and a p-polarised incident wave is purely refracted, thus all reflected light is s-polarised. This angle is known as Brewster's angle, and is around 56° for $n_1 = 1$ and $n_2 = 1.5$ (typical glass).
Total internal reflection
When light travelling in a denser medium strikes the surface of a less dense medium (i.e., $n_1 > n_2$), beyond a particular incidence angle known as the critical angle, all light is reflected and $R_\mathrm{s} = R_\mathrm{p} = 1$. This phenomenon, known as total internal reflection, occurs at incidence angles for which Snell's law predicts that the sine of the angle of refraction would exceed unity (whereas in fact $\sin\theta \le 1$ for all real $\theta$). For glass with $n = 1.5$ surrounded by air, the critical angle is approximately 42°.
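A quick numerical check of the two special angles quoted above; the index values are assumed, typical of air and glass.

```python
import math

n_air, n_glass = 1.0, 1.5                              # assumed representative indices

brewster = math.degrees(math.atan(n_glass / n_air))    # air -> glass
critical = math.degrees(math.asin(n_air / n_glass))    # glass -> air
print(f"Brewster angle ~ {brewster:.1f} deg")          # ~56.3 deg
print(f"Critical angle ~ {critical:.1f} deg")          # ~41.8 deg
```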
45° incidence
Reflection at 45° incidence is very commonly used for making 90° turns. For the case of light traversing from a less dense medium into a denser one at 45° incidence, it follows algebraically from the above equations that $R_\mathrm{p}$ equals the square of $R_\mathrm{s}$:
$$R_\mathrm{p} = R_\mathrm{s}^2.$$
This can be used to either verify the consistency of the measurements of $R_\mathrm{s}$ and $R_\mathrm{p}$, or to derive one of them when the other is known. This relationship is only valid for the simple case of a single plane interface between two homogeneous materials, not for films on substrates, where a more complex analysis is required.
Measurements of $R_\mathrm{s}$ and $R_\mathrm{p}$ at 45° can be used to estimate the reflectivity at normal incidence. The "average of averages" obtained by calculating first the arithmetic as well as the geometric average of $R_\mathrm{s}$ and $R_\mathrm{p}$, and then averaging these two averages again arithmetically, gives a value for $R_0$ with an error of less than about 3% for most common optical materials. This is useful because measurements at normal incidence can be difficult to achieve in an experimental setup since the incoming beam and the detector will obstruct each other. However, since the dependence of $R_\mathrm{s}$ and $R_\mathrm{p}$ on the angle of incidence for angles below 10° is very small, a measurement at about 5° will usually be a good approximation for normal incidence, while allowing for a separation of the incoming and reflected beam.
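The 45° identity can be confirmed numerically. The sketch below re-implements the power reflectances (same assumptions as before: a single interface between lossless, non-magnetic media) and checks that $R_\mathrm{p}$ at 45° equals the square of $R_\mathrm{s}$ for a few assumed index pairs.

```python
import math

def power_reflectances(n1, n2, theta_i):
    """(R_s, R_p) for a single lossless, non-magnetic interface."""
    cos_i = math.cos(theta_i)
    cos_t = math.sqrt(1.0 - (n1 * math.sin(theta_i) / n2) ** 2)
    r_s = (n1 * cos_i - n2 * cos_t) / (n1 * cos_i + n2 * cos_t)
    r_p = (n2 * cos_i - n1 * cos_t) / (n2 * cos_i + n1 * cos_t)
    return r_s**2, r_p**2

# At 45 degrees incidence from the less dense side, R_p should equal R_s squared.
for n1, n2 in [(1.0, 1.5), (1.0, 1.33), (1.0, 2.4)]:
    Rs, Rp = power_reflectances(n1, n2, math.radians(45.0))
    print(f"n2/n1 = {n2 / n1:.2f}:  R_s^2 = {Rs**2:.6f}   R_p = {Rp:.6f}")
```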
Complex amplitude reflection and transmission coefficients
The above equations relating powers (which could be measured with a photometer for instance) are derived from the Fresnel equations which solve the physical problem in terms of electromagnetic field complex amplitudes, i.e., considering phase shifts in addition to their amplitudes. Those underlying equations supply generally complex-valued ratios of those EM fields and may take several different forms, depending on the formalism used. The complex amplitude coefficients for reflection and transmission are usually represented by lower case and (whereas the power coefficients are capitalized). As before, we are assuming the magnetic permeability, of both media to be equal to the permeability of free space as is essentially true of all dielectrics at optical frequencies.
In the following equations and graphs, we adopt the following conventions. For s polarization, the reflection coefficient is defined as the ratio of the reflected wave's complex electric field amplitude to that of the incident wave, whereas for p polarization is the ratio of the waves complex magnetic field amplitudes (or equivalently, the negative of the ratio of their electric field amplitudes). The transmission coefficient is the ratio of the transmitted wave's complex electric field amplitude to that of the incident wave, for either polarization. The coefficients and are generally different between the s and p polarizations, and even at normal incidence (where the designations s and p do not even apply!) the sign of is reversed depending on whether the wave is considered to be s or p polarized, an artifact of the adopted sign convention (see graph for an air-glass interface at 0° incidence).
The equations consider a plane wave incident on a plane interface at angle of incidence , a wave reflected at angle , and a wave transmitted at angle . In the case of an interface into an absorbing material (where is complex) or total internal reflection, the angle of transmission does not generally evaluate to a real number. In that case, however, meaningful results can be obtained using formulations of these relationships in which trigonometric functions and geometric angles are avoided; the inhomogeneous waves launched into the second medium cannot be described using a single propagation angle.
Using this convention,
$$r_\mathrm{s} = \frac{n_1\cos\theta_\mathrm{i} - n_2\cos\theta_\mathrm{t}}{n_1\cos\theta_\mathrm{i} + n_2\cos\theta_\mathrm{t}}, \qquad t_\mathrm{s} = \frac{2 n_1\cos\theta_\mathrm{i}}{n_1\cos\theta_\mathrm{i} + n_2\cos\theta_\mathrm{t}},$$
$$r_\mathrm{p} = \frac{n_2\cos\theta_\mathrm{i} - n_1\cos\theta_\mathrm{t}}{n_2\cos\theta_\mathrm{i} + n_1\cos\theta_\mathrm{t}}, \qquad t_\mathrm{p} = \frac{2 n_1\cos\theta_\mathrm{i}}{n_2\cos\theta_\mathrm{i} + n_1\cos\theta_\mathrm{t}}.$$
One can see that $t_\mathrm{s} = r_\mathrm{s} + 1$ and $t_\mathrm{p} = \frac{n_1}{n_2}\left(r_\mathrm{p} + 1\right)$. One can write very similar equations applying to the ratio of the waves' magnetic fields, but comparison of the electric fields is more conventional.
Because the reflected and incident waves propagate in the same medium and make the same angle with the normal to the surface, the power reflection coefficient $R$ is just the squared magnitude of $r$:
$$R = |r|^2.$$
On the other hand, calculation of the power transmission coefficient $T$ is less straightforward, since the light travels in different directions in the two media. What's more, the wave impedances in the two media differ; power (irradiance) is given by the square of the electric field amplitude divided by the characteristic impedance of the medium (or by the square of the magnetic field multiplied by the characteristic impedance). This results in:
$$T = \frac{n_2\cos\theta_\mathrm{t}}{n_1\cos\theta_\mathrm{i}}\,|t|^2,$$
using the above definition of $t$. The introduced factor of $n_2/n_1$ is the reciprocal of the ratio of the media's wave impedances. The $\cos\theta$ factors adjust the waves' powers so they are reckoned in the direction normal to the interface, for both the incident and transmitted waves, so that full power transmission corresponds to $T = 1$.
In the case of total internal reflection where the power transmission $T$ is zero, $t$ nevertheless describes the electric field (including its phase) just beyond the interface. This is an evanescent field which does not propagate as a wave (thus $T = 0$) but has nonzero values very close to the interface. The phase shift of the reflected wave on total internal reflection can similarly be obtained from the phase angles of $r_\mathrm{s}$ and $r_\mathrm{p}$ (whose magnitudes are unity in this case). These phase shifts are different for s and p waves, which is the well-known principle by which total internal reflection is used to effect polarization transformations.
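The following is a minimal sketch of the complex amplitude coefficients for non-magnetic media under the sign convention described above (with $r_\mathrm{p}$ defined from the magnetic-field ratio). Beyond the critical angle, the cosine of the transmission angle is continued to an imaginary value via the principal square root, which is a branch choice assumed by this sketch; the reflection coefficients then have unit magnitude and a nonzero phase, as described in the text.

```python
import numpy as np

def fresnel_amplitudes(n1, n2, theta_i):
    """Complex r_s, t_s, r_p, t_p for a planar interface between non-magnetic media."""
    cos_i = np.cos(theta_i)
    sin_t = n1 * np.sin(theta_i) / n2
    cos_t = np.sqrt(1.0 - sin_t**2 + 0j)      # complex continuation past the critical angle
    r_s = (n1 * cos_i - n2 * cos_t) / (n1 * cos_i + n2 * cos_t)
    t_s = 2.0 * n1 * cos_i / (n1 * cos_i + n2 * cos_t)
    r_p = (n2 * cos_i - n1 * cos_t) / (n2 * cos_i + n1 * cos_t)
    t_p = 2.0 * n1 * cos_i / (n2 * cos_i + n1 * cos_t)
    return r_s, t_s, r_p, t_p

# Glass-to-air incidence at 50 degrees, beyond the ~41.8 degree critical angle:
r_s, t_s, r_p, t_p = fresnel_amplitudes(1.5, 1.0, np.radians(50.0))
print(abs(r_s), abs(r_p))                                    # both ~1: total internal reflection
print(np.degrees(np.angle(r_s)), np.degrees(np.angle(r_p)))  # different phase shifts for s and p
```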
Alternative forms
In the above formula for $r_\mathrm{s}$, if we put $n_2 = n_1\sin\theta_\mathrm{i}/\sin\theta_\mathrm{t}$ (Snell's law) and multiply the numerator and denominator by $\tfrac{1}{n_1}\sin\theta_\mathrm{t}$, we obtain
$$r_\mathrm{s} = -\frac{\sin(\theta_\mathrm{i} - \theta_\mathrm{t})}{\sin(\theta_\mathrm{i} + \theta_\mathrm{t})}.$$
If we do likewise with the formula for $r_\mathrm{p}$, the result is easily shown to be equivalent to
$$r_\mathrm{p} = \frac{\tan(\theta_\mathrm{i} - \theta_\mathrm{t})}{\tan(\theta_\mathrm{i} + \theta_\mathrm{t})}.$$
These formulas are known respectively as Fresnel's sine law and Fresnel's tangent law. Although at normal incidence these expressions reduce to 0/0, one can see that they yield the correct results in the limit as $\theta_\mathrm{i} \to 0$.
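A quick numerical consistency check of the sine and tangent forms against the index forms used earlier in this article; the indices and angle are arbitrary example values.

```python
import math

n1, n2 = 1.0, 1.5
theta_i = math.radians(30.0)
theta_t = math.asin(n1 * math.sin(theta_i) / n2)   # Snell's law

# Index forms (same sign convention as above)
r_s_index = (n1 * math.cos(theta_i) - n2 * math.cos(theta_t)) / \
            (n1 * math.cos(theta_i) + n2 * math.cos(theta_t))
r_p_index = (n2 * math.cos(theta_i) - n1 * math.cos(theta_t)) / \
            (n2 * math.cos(theta_i) + n1 * math.cos(theta_t))

# Fresnel's sine law and tangent law
r_s_trig = -math.sin(theta_i - theta_t) / math.sin(theta_i + theta_t)
r_p_trig = math.tan(theta_i - theta_t) / math.tan(theta_i + theta_t)

print(r_s_index, r_s_trig)   # the two values should agree
print(r_p_index, r_p_trig)   # likewise
```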
Multiple surfaces
When light makes multiple reflections between two or more parallel surfaces, the multiple beams of light generally interfere with one another, resulting in net transmission and reflection amplitudes that depend on the light's wavelength. The interference, however, is seen only when the surfaces are at distances comparable to or smaller than the light's coherence length, which for ordinary white light is few micrometers; it can be much larger for light from a laser.
An example of interference between reflections is the iridescent colours seen in a soap bubble or in thin oil films on water. Applications include Fabry–Pérot interferometers, antireflection coatings, and optical filters. A quantitative analysis of these effects is based on the Fresnel equations, but with additional calculations to account for interference.
The transfer-matrix method, or the recursive Rouard method can be used to solve multiple-surface problems.
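As an illustration of the transfer-matrix approach mentioned above, here is a minimal sketch for a single non-absorbing film at normal incidence. The film and substrate indices are assumed example values (roughly a magnesium-fluoride-like layer on glass), and the characteristic-matrix formulation used here is the standard thin-film one, not a method taken from this article.

```python
import numpy as np

def single_film_reflectance(n0, n1, n2, d, wavelength):
    """Normal-incidence reflectance of one homogeneous film (index n1, thickness d)
    on a substrate (index n2) in an ambient medium (index n0), computed from the
    film's characteristic (transfer) matrix. Optical admittances are taken
    proportional to the refractive indices, which is valid for non-magnetic media."""
    delta = 2.0 * np.pi * n1 * d / wavelength            # phase thickness of the film
    M = np.array([[np.cos(delta), 1j * np.sin(delta) / n1],
                  [1j * n1 * np.sin(delta), np.cos(delta)]])
    B, C = M @ np.array([1.0, n2])                       # combine film with the substrate
    r = (n0 * B - C) / (n0 * B + C)
    return abs(r)**2

lam = 550e-9                                             # design wavelength (assumed)
d_quarter = lam / (4.0 * 1.38)                           # quarter-wave layer of n = 1.38
print(single_film_reflectance(1.0, 1.38, 1.5, d_quarter, lam))  # ~0.013: antireflection
print(single_film_reflectance(1.0, 1.38, 1.5, 0.0, lam))        # zero thickness: ~0.04 (bare glass)
```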
History
In 1808, Étienne-Louis Malus discovered that when a ray of light was reflected off a non-metallic surface at the appropriate angle, it behaved like one of the two rays emerging from a doubly-refractive calcite crystal. He later coined the term polarization to describe this behavior. In 1815, the dependence of the polarizing angle on the refractive index was determined experimentally by David Brewster. But the reason for that dependence was such a deep mystery that in late 1817, Thomas Young was moved to write:
In 1821, however, Augustin-Jean Fresnel derived results equivalent to his sine and tangent laws (above), by modeling light waves as transverse elastic waves with vibrations perpendicular to what had previously been called the plane of polarization. Fresnel promptly confirmed by experiment that the equations correctly predicted the direction of polarization of the reflected beam when the incident beam was polarized at 45° to the plane of incidence, for light incident from air onto glass or water; in particular, the equations gave the correct polarization at Brewster's angle. The experimental confirmation was reported in a "postscript" to the work in which Fresnel first revealed his theory that light waves, including "unpolarized" waves, were purely transverse.
Details of Fresnel's derivation, including the modern forms of the sine law and tangent law, were given later, in a memoir read to the French Academy of Sciences in January 1823. That derivation combined conservation of energy with continuity of the tangential vibration at the interface, but failed to allow for any condition on the normal component of vibration. The first derivation from electromagnetic principles was given by Hendrik Lorentz in 1875.
In the same memoir of January 1823, Fresnel found that for angles of incidence greater than the critical angle, his formulas for the reflection coefficients ( and ) gave complex values with unit magnitudes. Noting that the magnitude, as usual, represented the ratio of peak amplitudes, he guessed that the argument represented the phase shift, and verified the hypothesis experimentally. The verification involved
calculating the angle of incidence that would introduce a total phase difference of 90° between the s and p components, for various numbers of total internal reflections at that angle (generally there were two solutions),
subjecting light to that number of total internal reflections at that angle of incidence, with an initial linear polarization at 45° to the plane of incidence, and
checking that the final polarization was circular.
Thus he finally had a quantitative theory for what we now call the Fresnel rhomb — a device that he had been using in experiments, in one form or another, since 1817 (see Fresnel rhomb §History).
The success of the complex reflection coefficient inspired James MacCullagh and Augustin-Louis Cauchy, beginning in 1836, to analyze reflection from metals by using the Fresnel equations with a complex refractive index.
Four weeks before he presented his completed theory of total internal reflection and the rhomb, Fresnel submitted a memoir in which he introduced the needed terms linear polarization, circular polarization, and elliptical polarization, and in which he explained optical rotation as a species of birefringence: linearly-polarized light can be resolved into two circularly-polarized components rotating in opposite directions, and if these propagate at different speeds, the phase difference between them — hence the orientation of their linearly-polarized resultant — will vary continuously with distance.
Thus Fresnel's interpretation of the complex values of his reflection coefficients marked the confluence of several streams of his research and, arguably, the essential completion of his reconstruction of physical optics on the transverse-wave hypothesis (see Augustin-Jean Fresnel).
Derivation
Here we systematically derive the above relations from electromagnetic premises.
Material parameters
In order to compute meaningful Fresnel coefficients, we must assume that the medium is (approximately) linear and homogeneous. If the medium is also isotropic, the four field vectors are related by
where and are scalars, known respectively as the (electric) permittivity and the (magnetic) permeability of the medium. For a vacuum, these have the values and , respectively. Hence we define the relative permittivity (or dielectric constant) , and the relative permeability .
In optics it is common to assume that the medium is non-magnetic, so that . For ferromagnetic materials at radio/microwave frequencies, larger values of must be taken into account. But, for optically transparent media, and for all other materials at optical frequencies (except possible metamaterials), is indeed very close to 1; that is, .
In optics, one usually knows the refractive index of the medium, which is the ratio of the speed of light in a vacuum to the speed of light in the medium. In the analysis of partial reflection and transmission, one is also interested in the electromagnetic wave impedance , which is the ratio of the amplitude of to the amplitude of . It is therefore desirable to express and in terms of and , and thence to relate to . The last-mentioned relation, however, will make it convenient to derive the reflection coefficients in terms of the wave admittance , which is the reciprocal of the wave impedance .
In the case of uniform plane sinusoidal waves, the wave impedance or admittance is known as the intrinsic impedance or admittance of the medium. This case is the one for which the Fresnel coefficients are to be derived.
Electromagnetic plane waves
In a uniform plane sinusoidal electromagnetic wave, the electric field has the form
$$\mathbf{E}(\mathbf{r},t) = \mathbf{E}_k\, e^{i(\mathbf{k}\cdot\mathbf{r} - \omega t)},$$
where $\mathbf{E}_k$ is the (constant) complex amplitude vector, $i$ is the imaginary unit, $\mathbf{k}$ is the wave vector (whose magnitude $k$ is the angular wavenumber), $\mathbf{r}$ is the position vector, $\omega$ is the angular frequency, $t$ is time, and it is understood that the real part of the expression is the physical field.
To advance the phase by the angle ϕ, we replace by (that is, we replace by ), with the result that the (complex) field is multiplied by . So a phase advance is equivalent to multiplication by a complex constant with a negative argument. This becomes more obvious when the field is factored as , where the last factor contains the time-dependence. That factor also implies that differentiation w.r.t. time corresponds to multiplication by .
If ℓ is the component of in the direction of , the field can be written . If the argument of is to be constant, ℓ must increase at the velocity known as the phase velocity . This in turn is equal to Solving for gives
As usual, we drop the time-dependent factor , which is understood to multiply every complex field quantity. The electric field for a uniform plane sine wave will then be represented by the location-dependent phasor
For fields of that form, Faraday's law and the Maxwell-Ampère law respectively reduce to
Putting and , as above, we can eliminate and to obtain equations in only and :
If the material parameters and are real (as in a lossless dielectric), these equations show that form a right-handed orthogonal triad, so that the same equations apply to the magnitudes of the respective vectors. Taking the magnitude equations and substituting from, we obtain
where and are the magnitudes of and . Multiplying the last two equations gives
Dividing (or cross-multiplying) the same two equations gives , where
This is the intrinsic admittance.
From we obtain the phase velocity For a vacuum this reduces to Dividing the second result by the first gives
For a non-magnetic medium (the usual case), this becomes .
Taking the reciprocal of, we find that the intrinsic impedance is In a vacuum this takes the value known as the impedance of free space. By division, For a non-magnetic medium, this becomes
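For reference, the chain of relations derived in this passage can be written compactly; the symbols below are the standard ones (admittance $Y$, impedance $Z$, phase velocity $v_p$) and are assumed here because the original markup is not preserved:
$$Y = \sqrt{\frac{\epsilon}{\mu}}, \qquad Z = \frac{1}{Y} = \sqrt{\frac{\mu}{\epsilon}}, \qquad v_p = \frac{1}{\sqrt{\mu\epsilon}}, \qquad c = \frac{1}{\sqrt{\mu_0\epsilon_0}},$$
$$n = \frac{c}{v_p} = \sqrt{\mu_\mathrm{r}\epsilon_\mathrm{r}}, \qquad Z_0 = \sqrt{\frac{\mu_0}{\epsilon_0}}, \qquad Z = Z_0\sqrt{\frac{\mu_\mathrm{r}}{\epsilon_\mathrm{r}}}, \qquad \text{so that } Z = \frac{Z_0}{n} \text{ and } Y = \frac{n}{Z_0} \text{ for } \mu_\mathrm{r} = 1.$$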
Wave vectors
In Cartesian coordinates , let the region have refractive index , intrinsic admittance , etc., and let the region have refractive index , intrinsic admittance , etc. Then the plane is the interface, and the axis is normal to the interface (see diagram). Let and (in bold roman type) be the unit vectors in the and directions, respectively. Let the plane of incidence be the plane (the plane of the page), with the angle of incidence measured from towards . Let the angle of refraction, measured in the same sense, be , where the subscript stands for transmitted (reserving for reflected).
In the absence of Doppler shifts, ω does not change on reflection or refraction. Hence, by, the magnitude of the wave vector is proportional to the refractive index.
So, for a given , if we redefine as the magnitude of the wave vector in the reference medium (for which ), then the wave vector has magnitude in the first medium (region in the diagram) and magnitude in the second medium. From the magnitudes and the geometry, we find that the wave vectors are
where the last step uses Snell's law. The corresponding dot products in the phasor form are
Hence:
The s components
For the s polarization, the field is parallel to the axis and may therefore be described by its component in the direction. Let the reflection and transmission coefficients be and , respectively. Then, if the incident field is taken to have unit amplitude, the phasor form of its -component is
and the reflected and transmitted fields, in the same form, are
Under the sign convention used in this article, a positive reflection or transmission coefficient is one that preserves the direction of the transverse field, meaning (in this context) the field normal to the plane of incidence. For the s polarization, that means the field. If the incident, reflected, and transmitted fields (in the above equations) are in the -direction ("out of the page"), then the respective fields are in the directions of the red arrows, since form a right-handed orthogonal triad. The fields may therefore be described by their components in the directions of those arrows, denoted by . Then, since ,
At the interface, by the usual interface conditions for electromagnetic fields, the tangential components of the and fields must be continuous; that is,
When we substitute from equations to and then from, the exponential factors cancel out, so that the interface conditions reduce to the simultaneous equations
which are easily solved for and , yielding
and
At normal incidence , indicated by an additional subscript 0, these results become
and
At grazing incidence , we have , hence and .
The p components
For the p polarization, the incident, reflected, and transmitted fields are parallel to the red arrows and may therefore be described by their components in the directions of those arrows. Let those components be (redefining the symbols for the new context). Let the reflection and transmission coefficients be and . Then, if the incident field is taken to have unit amplitude, we have
If the fields are in the directions of the red arrows, then, in order for to form a right-handed orthogonal triad, the respective fields must be in the direction ("into the page") and may therefore be described by their components in that direction. This is consistent with the adopted sign convention, namely that a positive reflection or transmission coefficient is one that preserves the direction of the transverse field the field in the case of the p polarization. The agreement of the other field with the red arrows reveals an alternative definition of the sign convention: that a positive reflection or transmission coefficient is one for which the field vector in the plane of incidence points towards the same medium before and after reflection or transmission.
So, for the incident, reflected, and transmitted fields, let the respective components in the direction be . Then, since ,
At the interface, the tangential components of the and fields must be continuous; that is,
When we substitute from equations and and then from, the exponential factors again cancel out, so that the interface conditions reduce to
Solving for and , we find
and
At normal incidence indicated by an additional subscript 0, these results become
and
At , we again have , hence and .
Comparing and with and, we see that at normal incidence, under the adopted sign convention, the transmission coefficients for the two polarizations are equal, whereas the reflection coefficients have equal magnitudes but opposite signs. While this clash of signs is a disadvantage of the convention, the attendant advantage is that the signs agree at grazing incidence.
Power ratios (reflectivity and transmissivity)
The Poynting vector for a wave is a vector whose component in any direction is the irradiance (power per unit area) of that wave on a surface perpendicular to that direction. For a plane sinusoidal wave the Poynting vector is , where and are due only to the wave in question, and the asterisk denotes complex conjugation. Inside a lossless dielectric (the usual case), and are in phase, and at right angles to each other and to the wave vector ; so, for s polarization, using the and components of and respectively (or for p polarization, using the and components of and ), the irradiance in the direction of is given simply by , which is in a medium of intrinsic impedance . To compute the irradiance in the direction normal to the interface, as we shall require in the definition of the power transmission coefficient, we could use only the component (rather than the full component) of or or, equivalently, simply multiply by the proper geometric factor, obtaining .
From equations and, taking squared magnitudes, we find that the reflectivity (ratio of reflected power to incident power) is
for the s polarization, and
for the p polarization. Note that when comparing the powers of two such waves in the same medium and with the same cosθ, the impedance and geometric factors mentioned above are identical and cancel out. But in computing the power transmission (below), these factors must be taken into account.
The simplest way to obtain the power transmission coefficient (transmissivity, the ratio of transmitted power to incident power in the direction normal to the interface, i.e. the direction) is to use (conservation of energy). In this way we find
for the s polarization, and
for the p polarization.
In the case of an interface between two lossless media (for which ϵ and μ are real and positive), one can obtain these results directly using the squared magnitudes of the amplitude transmission coefficients that we found earlier in equations and. But, for given amplitude (as noted above), the component of the Poynting vector in the direction is proportional to the geometric factor and inversely proportional to the wave impedance . Applying these corrections to each wave, we obtain two ratios multiplying the square of the amplitude transmission coefficient:
for the s polarization, and
for the p polarization. The last two equations apply only to lossless dielectrics, and only at incidence angles smaller than the critical angle (beyond which, of course, ).
For unpolarized light:
where .
Equal refractive indices
From equations and, we see that two dissimilar media will have the same refractive index, but different admittances, if the ratio of their permeabilities is the inverse of the ratio of their permittivities. In that unusual situation we have (that is, the transmitted ray is undeviated), so that the cosines in equations,,,, and to cancel out, and all the reflection and transmission ratios become independent of the angle of incidence; in other words, the ratios for normal incidence become applicable to all angles of incidence. When extended to spherical reflection or scattering, this results in the Kerker effect for Mie scattering.
Non-magnetic media
Since the Fresnel equations were developed for optics, they are usually given for non-magnetic materials. Dividing by) yields
For non-magnetic media we can substitute the vacuum permeability for , so that
that is, the admittances are simply proportional to the corresponding refractive indices. When we make these substitutions in equations to and equations to, the factor cμ0 cancels out. For the amplitude coefficients we obtain:
For the case of normal incidence these reduce to:
The power reflection coefficients become:
The power transmissions can then be found from .
Brewster's angle
For equal permeabilities (e.g., non-magnetic media), if $\theta_\mathrm{i}$ and $\theta_\mathrm{t}$ are complementary, we can substitute $\sin\theta_\mathrm{i}$ for $\cos\theta_\mathrm{t}$, and $\sin\theta_\mathrm{t}$ for $\cos\theta_\mathrm{i}$, so that the numerator in the equation for $r_\mathrm{p}$ becomes $n_2\sin\theta_\mathrm{t} - n_1\sin\theta_\mathrm{i}$, which is zero (by Snell's law). Hence $r_\mathrm{p} = 0$ and only the s-polarized component is reflected. This is what happens at the Brewster angle. Substituting $\cos\theta_\mathrm{B}$ for $\sin\theta_\mathrm{t}$ in Snell's law, we readily obtain
$$\theta_\mathrm{B} = \arctan\frac{n_2}{n_1}$$
for Brewster's angle.
Equal permittivities
Although it is not encountered in practice, the equations can also apply to the case of two media with a common permittivity but different refractive indices due to different permeabilities. From equations and, if is fixed instead of , then becomes inversely proportional to , with the result that the subscripts 1 and 2 in equations to are interchanged (due to the additional step of multiplying the numerator and denominator by ). Hence, in and, the expressions for and in terms of refractive indices will be interchanged, so that Brewster's angle will give instead of , and any beam reflected at that angle will be p-polarized instead of s-polarized. Similarly, Fresnel's sine law will apply to the p polarization instead of the s polarization, and his tangent law to the s polarization instead of the p polarization.
This switch of polarizations has an analog in the old mechanical theory of light waves (see §History, above). One could predict reflection coefficients that agreed with observation by supposing (like Fresnel) that different refractive indices were due to different densities and that the vibrations were normal to what was then called the plane of polarization, or by supposing (like MacCullagh and Neumann) that different refractive indices were due to different elasticities and that the vibrations were parallel to that plane. Thus the condition of equal permittivities and unequal permeabilities, although not realistic, is of some historical interest.
See also
Jones calculus
Polarization mixing
Index-matching material
Field and power quantities
Fresnel rhomb, Fresnel's apparatus to produce circularly polarised light
Reflection loss
Specular reflection
Schlick's approximation
Snell's window
X-ray reflectivity
Plane of incidence
Reflections of signals on conducting lines
Notes
References
Sources
M. Born and E. Wolf, 1970, Principles of Optics, 4th Ed., Oxford: Pergamon Press.
J.Z. Buchwald, 1989, The Rise of the Wave Theory of Light: Optical Theory and Experiment in the Early Nineteenth Century, University of Chicago Press, .
R.E. Collin, 1966, Foundations for Microwave Engineering, Tokyo: McGraw-Hill.
O. Darrigol, 2012, A History of Optics: From Greek Antiquity to the Nineteenth Century, Oxford, .
A. Fresnel, 1866 (ed. H. de Senarmont, E. Verdet, and L. Fresnel), Oeuvres complètes d'Augustin Fresnel, Paris: Imprimerie Impériale (3 vols., 1866–70), vol.1 (1866).
E. Hecht, 1987, Optics, 2nd Ed., Addison Wesley, .
E. Hecht, 2002, Optics, 4th Ed., Addison Wesley, .
F.A. Jenkins and H.E. White, 1976, Fundamentals of Optics, 4th Ed., New York: McGraw-Hill, .
H. Lloyd, 1834, "Report on the progress and present state of physical optics", Report of the Fourth Meeting of the British Association for the Advancement of Science (held at Edinburgh in 1834), London: J. Murray, 1835, pp.295–413.
W. Whewell, 1857, History of the Inductive Sciences: From the Earliest to the Present Time, 3rd Ed., London: J.W. Parker & Son, vol.2.
E. T. Whittaker, 1910, A History of the Theories of Aether and Electricity: From the Age of Descartes to the Close of the Nineteenth Century, London: Longmans, Green, & Co.
Further reading
Encyclopaedia of Physics (2nd Edition), R.G. Lerner, G.L. Trigg, VHC publishers, 1991, ISBN (Verlagsgesellschaft) 3-527-26954-1, ISBN (VHC Inc.) 0-89573-752-3
McGraw Hill Encyclopaedia of Physics (2nd Edition), C.B. Parker, 1994,
External links
Fresnel Equations – Wolfram.
Fresnel equations calculator
FreeSnell – Free software computes the optical properties of multilayer materials.
Thinfilm – Web interface for calculating optical properties of thin films and multilayer materials (reflection & transmission coefficients, ellipsometric parameters Psi & Delta).
Simple web interface for calculating single-interface reflection and refraction angles and strengths.
Reflection and transmittance for two dielectrics – Mathematica interactive webpage that shows the relations between index of refraction and reflection.
A self-contained first-principles derivation of the transmission and reflection probabilities from a multilayer with complex indices of refraction.
Eponymous equations of physics
Light
Geometrical optics
Physical optics
Polarization (waves)
History of physics | 0.764929 | 0.997421 | 0.762956 |
Beam emittance | In accelerator physics, emittance is a property of a charged particle beam. It refers to the area occupied by the beam in a position-and-momentum phase space.
Each particle in a beam can be described by its position and momentum along each of three orthogonal axes, for a total of six position and momentum coordinates. When the position and momentum for a single axis are plotted on a two dimensional graph, the average spread of the coordinates on this plot is the emittance. As such, a beam will have three emittances, one along each axis, which can be described independently. As particle momentum along an axis is usually described as an angle relative to that axis, an area on a position-momentum plot will have dimensions of length × angle (for example, millimeters × milliradian).
Emittance is important for analysis of particle beams. As long as the beam is only subjected to conservative forces, Liouville's theorem shows that emittance is a conserved quantity. If the distribution over phase space is represented as a cloud in a plot (see figure), emittance is the area of the cloud. A variety of more exact definitions handle the fuzzy borders of the cloud and the case of a cloud that does not have an elliptical shape. In addition, the emittance along each axis is independent unless the beam passes through beamline elements (such as solenoid magnets) which correlate them.
A low-emittance particle beam is a beam where the particles are confined to a small distance and have nearly the same momentum, which is a desirable property for ensuring that the entire beam is transported to its destination. In a colliding beam accelerator, keeping the emittance small means that the likelihood of particle interactions will be greater resulting in higher luminosity. In a synchrotron light source, low emittance means that the resulting x-ray beam will be small, and result in higher brightness.
Definitions
The coordinate system used to describe the motion of particles in an accelerator has three orthogonal axes, but rather than being centered on a fixed point in space, they are oriented with respect to the trajectory of an "ideal" particle moving through the accelerator with no deviation from the intended speed, position, or direction. Motion along this design trajectory is referred to as the longitudinal axis, and the two axes perpendicular to this trajectory (usually oriented horizontally and vertically) are referred to as transverse axes. The most common convention is for the longitudinal axis to be labelled and the transverse axes to be labelled and .
Emittance has units of length, but is usually referred to as "length × angle", for example, "millimeter × milliradians". It can be measured in all three spatial dimensions.
Geometric transverse emittance
When a particle moves through a circular accelerator or storage ring, the position and angle of the particle in the x direction will trace an ellipse in phase space. (All of this section applies equivalently to and ) This ellipse can be described by the following equation:
where x and x′ are the position and angle of the particle, and α, β, and γ are the Courant–Snyder (Twiss) parameters, calculated from the shape of the ellipse.
The emittance is given by the area of this ellipse, and has units of length × angle. However, many sources will move the factor of π into the units of emittance rather than including its numerical value, giving units of "length × angle × π."
This formula is the single particle emittance, which describes the area enclosed by the trajectory of a single particle in phase space. However, emittance is more useful as a description of the collective properties of the particles in a beam, rather than of a single particle. Since beam particles are not necessarily distributed uniformly in phase space, definitions of emittance for an entire beam will be based on the area of the ellipse required to enclose a specific fraction of the beam particles.
If the beam is distributed in phase space with a Gaussian distribution, the emittance of the beam may be specified in terms of the root mean square width of the beam and the fraction of the beam to be included in the emittance.
The equation for the emittance of a Gaussian beam is:
ε(F) = −2π (σ²/β) ln(1 − F),
where σ is the root mean square width of the beam, β is the Courant–Snyder beta function, and F is the fraction of the beam to be enclosed in the ellipse, given as a number between 0 and 1. Here the factor of π is shown on the right of the equation, and would often be included in the units of emittance, rather than being multiplied into the computed value.
The value chosen for will depend on the application and the author, and a number of different choices exist in the literature. Some common choices and their equivalent definition of emittance are:
{| class="wikitable"
|-
! Emittance !! Fraction of beam (F)
|-
| || 0.15
|-
| || 0.39
|-
| || 0.87
|-
| || 0.95
|}
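As a numerical sketch of the Gaussian-beam relation stated above, the following assumes the form ε(F) = −2π(σ²/β)ln(1 − F); the function name, argument names, and the example values (σ = 1 mm, β = 10 m, F = 0.95) are illustrative assumptions.

```python
import math

def gaussian_emittance(sigma, beta, fraction, include_pi=True):
    """Emittance of the ellipse enclosing a given fraction of a Gaussian beam.

    Assumes eps(F) = -2*pi*(sigma**2/beta)*ln(1 - F); whether the factor of pi
    is multiplied in or kept in the units is a matter of convention.
    """
    if not 0.0 < fraction < 1.0:
        raise ValueError("fraction must be strictly between 0 and 1")
    eps = -2.0 * (sigma**2 / beta) * math.log(1.0 - fraction)
    return math.pi * eps if include_pi else eps

# Illustrative numbers: sigma = 1 mm, beta = 10 m, 95% of the beam enclosed
print(gaussian_emittance(sigma=1e-3, beta=10.0, fraction=0.95))  # in m*rad
```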
While the x and y axes are generally equivalent mathematically, in horizontal rings where the x coordinate represents the plane of the ring, consideration of dispersion can be added to the equation of the emittance. Because the magnetic force of a bending magnet is dependent on the energy of the particle being bent, particles of different energies will be bent along different trajectories through the magnet, even if their initial position and angle are the same. The effect of this dispersion on the beam emittance is given by:
where D is the dispersion at location s, p0 is the ideal particle momentum, and σp is the root mean square of the momentum difference of the particles in the beam from the ideal momentum. (This definition assumes F = 0.15.)
Longitudinal emittance
The geometrical definition of longitudinal emittance is more complex than that of transverse emittance. While the and coordinates represent deviation from a reference trajectory which remains static, the coordinate represents deviation from a reference particle, which is itself moving with a specified energy. This deviation can be expressed in terms of distance along the reference trajectory, time of flight along the reference trajectory (how "early" or "late" the particle is compared to the reference), or phase (for a specified reference frequency).
In turn, the conjugate coordinate is generally not expressed as an angle. Since it represents the change in z over time, it corresponds to the forward motion of the particle. This can be given in absolute terms, as a velocity, momentum, or energy, or in relative terms, as a fraction of the position, momentum, or energy of the reference particle.
However, the fundamental concept of emittance is the same—the positions of the particles in a beam are plotted along one axis of a phase space plot, the rate of change of those positions over time is plotted on the other axis, and the emittance is a measure of the area occupied on that plot.
One possible definition of longitudinal emittance is given by:
where the integral is taken along a path which tightly encloses the beam particles in phase space, ω is the reference frequency, and the longitudinal coordinate is the phase of the particles relative to a reference particle. Longitudinal equations such as this one often must be solved numerically, rather than analytically.
RMS emittance
The geometric definition of emittance assumes that the distribution of particles in phase space can be reasonably well characterized by an ellipse. In addition, the definitions using the root mean square of the particle distribution assume a Gaussian particle distribution.
In cases where these assumptions do not hold, it is still possible to define a beam emittance using the moments of the distribution. Here, the RMS emittance is defined to be
εrms = sqrt(⟨x²⟩⟨x′²⟩ − ⟨xx′⟩²),
where ⟨x²⟩ is the variance of the particle's position, ⟨x′²⟩ is the variance of the angle a particle makes with the direction of travel in the accelerator (x′ = dx/ds, with s along the direction of travel), and ⟨xx′⟩ represents the angle-position correlation of particles in the beam. This definition is equivalent to the geometric emittance in the case of an elliptical particle distribution in phase space.
The emittance may also be expressed in terms of the determinant of the variance-covariance matrix of the beam's phase space coordinates, where it becomes clear that the quantity describes an effective area occupied by the beam in terms of its second order statistics.
Depending on context, some definitions of RMS emittance will add a scaling factor to correspond to a fraction of the total distribution, to facilitate comparison with geometric emittances using the same fraction.
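Because the RMS emittance is built from second moments, it can be computed directly from tracked or measured particle coordinates. The sketch below is illustrative only: the generated bunch, its parameters, and the function name are assumptions rather than data from the text.

```python
import numpy as np

def rms_emittance(x, xp):
    """RMS emittance from particle coordinates (x in m, xp = dx/ds in rad).

    Computes sqrt(<x^2><x'^2> - <x x'>^2) with moments taken about the mean,
    i.e. the square root of the determinant of the 2x2 covariance matrix.
    """
    cov = np.cov(np.asarray(x), np.asarray(xp))   # 2x2 variance-covariance matrix
    return float(np.sqrt(np.linalg.det(cov)))

# Illustrative correlated Gaussian bunch of 10,000 particles
rng = np.random.default_rng(0)
x = rng.normal(0.0, 1e-3, 10_000)                  # 1 mm RMS size
xp = 0.5 * x + rng.normal(0.0, 0.5e-3, 10_000)     # slope adds an x-x' correlation
print(rms_emittance(x, xp))                        # roughly 5e-7 m*rad
```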
RMS emittance in higher dimensions
It is sometimes useful to talk about phase space area for either the four dimensional transverse phase space (i.e., x, x′, y, y′) or the full six dimensional phase space of particles (i.e., x, x′, y, y′, z, z′). The RMS emittance generalizes to the full three-dimensional case as the square root of the determinant of the corresponding higher-dimensional covariance matrix:
In the absence of correlations between the different axes in the particle accelerator, most of these matrix elements become zero and we are left with the product of the emittances along each axis.
Normalized emittance
Although the previous definitions of emittance remain constant for linear beam transport, they do change when the particles undergo acceleration (an effect called adiabatic damping). In some applications, such as for linear accelerators, photoinjectors, and the accelerating sections of larger systems, it becomes important to compare beam quality across different energies. Normalized emittance, which is invariant under acceleration, is used for this purpose.
Normalized emittance in one dimension is given by:
The angle x′ in the prior definition has been replaced with the normalized transverse momentum βxγ, where γ is the Lorentz factor and βx = vx/c is the normalized transverse velocity.
Normalized emittance is related to the previous definitions of emittance through the Lorentz factor γ and the normalized velocity βz = vz/c in the direction of the beam's travel: εn = βzγ ε.
The normalized emittance does not change as a function of energy and so can be used to indicate beam degradation if the particles are accelerated. For speeds close to the speed of light, where βz is close to one, the geometric emittance is approximately inversely proportional to the energy. In this case, the physical width of the beam will vary inversely with the square root of the energy.
Higher dimensional versions of the normalized emittance can be defined in analogy to the RMS version by replacing all angles with their corresponding momenta.
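A small sketch of the relation εn = βzγε described above, for an electron beam; the rest-energy constant, function name, and example values are illustrative assumptions.

```python
import math

ELECTRON_REST_ENERGY_MEV = 0.51099895  # electron rest energy in MeV

def normalized_emittance(geometric_emittance, kinetic_energy_mev,
                         rest_energy_mev=ELECTRON_REST_ENERGY_MEV):
    """Convert geometric emittance to normalized emittance, eps_n = beta*gamma*eps.

    beta here is the normalized velocity along the beam direction.
    """
    gamma = 1.0 + kinetic_energy_mev / rest_energy_mev
    beta = math.sqrt(1.0 - 1.0 / gamma**2)
    return beta * gamma * geometric_emittance

# Illustrative example: 1e-6 m*rad geometric emittance at 100 MeV kinetic energy
print(normalized_emittance(1e-6, 100.0))
```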
Measurement
Quadrupole scan technique
One of the most fundamental methods of measuring beam emittance is the quadrupole scan method. The emittance of the beam for a particular plane of interest (i.e., horizontal or vertical) can be obtained by varying the field strength of a quadrupole (or quadrupoles) upstream of a monitor (i.e., a wire or a screen).
The properties of a beam can be described by the following beam matrix of second moments: σ = [[σ11, σ12], [σ12, σ22]], with σ11 = ⟨x²⟩, σ12 = ⟨xx′⟩ and σ22 = ⟨x′²⟩,
where x′ is the derivative of x with respect to the longitudinal coordinate. The forces experienced by the beam as it travels down the beam line and passes through the quadrupole(s) are described using the transfer matrix R of the beam line, including the quadrupole(s) and other beam line components such as drifts: R = R2 RQ R1.
Here R1 is the transfer matrix between the original beam position and the quadrupole(s), RQ is the transfer matrix of the quadrupole(s), and R2 is the transfer matrix between the quadrupole(s) and the monitor screen. During the quadrupole scan process, R1 and R2 stay constant, and RQ changes with the field strength of the quadrupole(s).
The final beam when it reaches the monitor screen at distance s from its original position can be described by another beam matrix σs.
The final beam matrix σs can be calculated from the original beam matrix σ by matrix multiplication with the beam line transfer matrix R: σs = R σ Rᵀ,
where Rᵀ is the transpose of R.
Now, focusing on the (1,1) element of the final beam matrix throughout the matrix multiplications, we get the equation:
σs,11 = R11² σ11 + 2 R11 R12 σ12 + R12² σ22.
Here the middle term has a factor of 2 because σ12 = σ21.
Now divide both sides of the above equation by R12²; the equation becomes:
σs,11 / R12² = σ11 (R11/R12)² + 2 σ12 (R11/R12) + σ22,
which is a quadratic equation of the variable R11/R12. Since the RMS emittance εRMS is defined as above,
the RMS emittance of the original beam can be calculated using its beam matrix elements:
εRMS = sqrt(σ11 σ22 − σ12²).
To obtain the emittance measurement, the following procedure is employed:
For each value (or value combination) of the quadrupole(s), the beam line transfer matrix is calculated to determine the values of R11 and R12.
The beam propagates through the varied beam line, and is observed at the monitor screen, where the beam size is measured.
Repeat steps 1 and 2 to obtain a series of values for σs,11/R12² and R11/R12, and fit the results with a parabola y = A u² + B u + C, where u = R11/R12.
Equate the parabola fit parameters with the original beam matrix elements: A = σ11, B = 2σ12, C = σ22.
Calculate the RMS emittance of the original beam: εRMS = sqrt(AC − B²/4).
If the length L of the quadrupole is short compared to its focal length f = 1/(kL), where k is the field strength of the quadrupole, its transfer matrix can be approximated by the thin lens approximation: RQ ≈ [[1, 0], [−1/f, 1]].
Then the RMS emittance can be calculated by fitting a parabola to values of the measured beam size versus quadrupole strength k.
By adding additional quadrupoles, this technique can be extended to a full 4-D reconstruction.
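As a sketch of the fitting step in the procedure above, the following assumes that the transfer-matrix elements R11 and R12 have already been computed for each quadrupole setting and that the mean-square beam size has been measured at the screen; the function and variable names are illustrative, and the identification of the fit parameters with beam-matrix elements follows the quadratic relation given earlier.

```python
import numpy as np

def quad_scan_emittance(R11, R12, sigma11_screen):
    """Estimate the RMS emittance from a quadrupole scan.

    R11, R12: transfer-matrix elements from the reconstruction point to the
    screen, one pair per quadrupole setting.
    sigma11_screen: measured mean-square beam size <x^2> at the screen.

    Fits y = A*u**2 + B*u + C with u = R11/R12 and y = sigma11_screen/R12**2,
    then identifies A = sigma11, B = 2*sigma12, C = sigma22 of the initial beam.
    """
    R11 = np.asarray(R11, dtype=float)
    R12 = np.asarray(R12, dtype=float)
    y = np.asarray(sigma11_screen, dtype=float) / R12**2
    u = R11 / R12
    A, B, C = np.polyfit(u, y, 2)                 # parabola fit
    sigma11, sigma12, sigma22 = A, B / 2.0, C
    return float(np.sqrt(sigma11 * sigma22 - sigma12**2))
```

With noisy measurements the fitted combination σ11σ22 − σ12² can come out negative, in which case more quadrupole settings or better beam-size measurements are needed before the square root is meaningful.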
Mask-based reconstruction
Another fundamental method for measuring emittance is by using a predefined mask to imprint a pattern on the beam and sample the remaining beam at a screen downstream. Two such masks are pepper pots and TEM grids. A schematic of the TEM grid measurement is shown below.
By using the knowledge of the spacing of the features in the mask one can extract information about the beam size at the mask plane. By measuring the spacing between the same features on the sampled beam downstream, one can extract information about the angles in the beam. The quantities of merit can be extracted as described in Marx et al.
The choice of mask is generally dependent on the charge of the beam; low-charge beams are better suited to the TEM grid mask over the pepper pot, as more of the beam is transmitted.
Emittance of electrons versus heavy particles
To understand why the RMS emittance takes on a particular value in a storage ring, one needs to distinguish between electron storage rings and storage rings with heavier particles (such as protons). In an electron storage ring, radiation is an important effect, whereas when other particles are stored, it is typically a small effect. When radiation is important, the particles undergo radiation damping (which slowly decreases emittance turn after turn) and quantum excitation causing diffusion which leads to an equilibrium emittance. When no radiation is present, the emittances remain constant (apart from impedance effects and intrabeam scattering). In this case, the emittance is determined by the initial particle distribution. In particular if one injects a "small" emittance, it remains small, whereas if one injects a "large" emittance, it remains large.
Acceptance
The acceptance, also called admittance, is the maximum emittance that a beam transport system or analyzing system is able to transmit. This is the size of the chamber transformed into phase space and does not suffer from the ambiguities of the definition of beam emittance.
Conservation of emittance
Lenses can focus a beam, reducing its size in one transverse dimension while increasing its angular spread, but cannot change the total emittance. This is a result of Liouville's theorem. Ways of reducing the beam emittance include radiation damping, stochastic cooling, and electron cooling.
Emittance and brightness
Emittance is also related to the brightness of the beam. In microscopy brightness is very often used, because it includes the current in the beam and most systems are circularly symmetric. Consider the brightness of the incident beam at the sample,
where I indicates the beam current, ε represents the total emittance of the incident beam, and λ is the wavelength of the incident electron.
The intrinsic emittance εi, describing a normal distribution in the initial phase space, is diffused by the emittance εa introduced by aberrations. The total emittance is approximately the sum in quadrature, ε² ≈ εi² + εa². Under the assumption of uniform illumination of the aperture with a given current per unit angle, we have the following emittance-brightness relation,
See also
Accelerator physics
Etendue
Mean transverse energy
References
Accelerator physics | 0.782732 | 0.974722 | 0.762946 |
Thermodynamic cycle | A thermodynamic cycle consists of linked sequences of thermodynamic processes that involve transfer of heat and work into and out of the system, while varying pressure, temperature, and other state variables within the system, and that eventually returns the system to its initial state. In the process of passing through a cycle, the working fluid (system) may convert heat from a warm source into useful work, and dispose of the remaining heat to a cold sink, thereby acting as a heat engine. Conversely, the cycle may be reversed and use work to move heat from a cold source and transfer it to a warm sink thereby acting as a heat pump. If at every point in the cycle the system is in thermodynamic equilibrium, the cycle is reversible. Whether carried out reversible or irreversibly, the net entropy change of the system is zero, as entropy is a state function.
During a closed cycle, the system returns to its original thermodynamic state of temperature and pressure. Process quantities (or path quantities), such as heat and work, are process dependent. For a cycle for which the system returns to its initial state the first law of thermodynamics applies: Win + Qin = Wout + Qout.
The above states that there is no change of the internal energy of the system over the cycle: Win + Qin represents the total work and heat input during the cycle and Wout + Qout the total work and heat output during the cycle. The repeating nature of the process path allows for continuous operation, making the cycle an important concept in thermodynamics. Thermodynamic cycles are often represented mathematically as quasistatic processes in the modeling of the workings of an actual device.
Heat and work
Two primary classes of thermodynamic cycles are power cycles and heat pump cycles. Power cycles are cycles which convert some heat input into a mechanical work output, while heat pump cycles transfer heat from low to high temperatures by using mechanical work as the input. Cycles composed entirely of quasistatic processes can operate as power or heat pump cycles by controlling the process direction. On a pressure–volume (PV) diagram or temperature–entropy diagram, the clockwise and counterclockwise directions indicate power and heat pump cycles, respectively.
Relationship to work
Because the net variation in state properties during a thermodynamic cycle is zero, it forms a closed loop on a PV diagram. A PV diagram's Y axis shows pressure (P) and X axis shows volume (V). The area enclosed by the loop is the work (W) done by the process: W = ∮ P dV. (1)
This work is equal to the balance of heat (Q) transferred into the system: W = Q = Qin − Qout. (2)
Equation (2) is consistent with the First Law; even though the internal energy changes during the course of the cyclic process, when the cyclic process finishes the system's internal energy is the same as the energy it had when the process began.
If the cyclic process moves clockwise around the loop, then W will be positive, and it represents a heat engine. If it moves counterclockwise, then W will be negative, and it represents a heat pump.
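As a numerical illustration of the loop integral above, the following sketch sums P dV around a cycle given as ordered (P, V) points; the rectangular example cycle and its values are purely illustrative.

```python
def cycle_work(P, V):
    """Net work of a closed cycle, the loop integral of P dV (trapezoidal sum).

    P, V: sequences tracing the cycle in order; the last point is joined back
    to the first. A positive result corresponds to a clockwise (heat-engine)
    loop on the PV diagram, a negative result to a heat-pump loop.
    """
    P = list(P) + [P[0]]
    V = list(V) + [V[0]]
    return sum(0.5 * (P[i] + P[i + 1]) * (V[i + 1] - V[i]) for i in range(len(P) - 1))

# Illustrative rectangular cycle: isobars at 2e5 Pa and 1e5 Pa, isochores at 1 and 2 m^3
P = [2e5, 2e5, 1e5, 1e5]
V = [1.0, 2.0, 2.0, 1.0]
print(cycle_work(P, V))  # 1e5 J, the enclosed rectangular area, traversed clockwise
```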
A list of thermodynamic processes
The following processes are often used to describe different stages of a thermodynamic cycle:
Adiabatic : No energy transfer as heat (Q) during that part of the cycle. Energy transfer is considered as work done by the system only.
Isothermal : The process is at a constant temperature during that part of the cycle (T=constant, ΔT=0). Energy transfer is considered as heat removed from or work done by the system.
Isobaric : Pressure in that part of the cycle will remain constant. (P=constant, ΔP=0). Energy transfer is considered as heat removed from or work done by the system.
Isochoric : The process is constant volume (V=constant, ΔV=0). Energy transfer is considered as heat removed from the system, as the work done by the system is zero.
Isentropic : The process is one of constant entropy (S=constant, ΔS=0). It is adiabatic (no heat or mass exchange) and reversible.
Isenthalpic : The process that proceeds without any change in enthalpy or specific enthalpy.
Polytropic : The process that obeys the relation PVⁿ = constant.
Reversible : The process where the net entropy production is zero; δQ = T dS.
Example: The Otto cycle
The Otto cycle is an example of a reversible thermodynamic cycle.
1→2: Isentropic / adiabatic expansion: Constant entropy (s), Decrease in pressure (P), Increase in volume (v), Decrease in temperature (T)
2→3: Isochoric cooling: Constant volume(v), Decrease in pressure (P), Decrease in entropy (S), Decrease in temperature (T)
3→4: Isentropic / adiabatic compression: Constant entropy (s), Increase in pressure (P), Decrease in volume (v), Increase in temperature (T)
4→1: Isochoric heating: Constant volume (v), Increase in pressure (P), Increase in entropy (S), Increase in temperature (T)
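For the ideal, air-standard Otto cycle these four processes combine into a closed-form thermal efficiency that depends only on the compression ratio. That standard result is not derived in the text above, so the following sketch (with an assumed heat-capacity ratio γ = 1.4 and an illustrative compression ratio) is offered only as an illustration.

```python
def otto_efficiency(compression_ratio, gamma=1.4):
    """Thermal efficiency of the ideal (air-standard) Otto cycle.

    eta = 1 - r**(1 - gamma); gamma = 1.4 is an assumption appropriate for air.
    """
    return 1.0 - compression_ratio ** (1.0 - gamma)

print(otto_efficiency(8.0))  # about 0.565 for an illustrative compression ratio of 8
```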
Power cycles
Thermodynamic power cycles are the basis for the operation of heat engines, which supply most of the world's electric power and run the vast majority of motor vehicles. Power cycles can be organized into two categories: real cycles and ideal cycles. Cycles encountered in real world devices (real cycles) are difficult to analyze because of the presence of complicating effects (friction), and the absence of sufficient time for the establishment of equilibrium conditions. For the purpose of analysis and design, idealized models (ideal cycles) are created; these ideal models allow engineers to study the effects of major parameters that dominate the cycle without having to spend significant time working out intricate details present in the real cycle model.
Power cycles can also be divided according to the type of heat engine they seek to model. The most common cycles used to model internal combustion engines are the Otto cycle, which models gasoline engines, and the Diesel cycle, which models diesel engines. Cycles that model external combustion engines include the Brayton cycle, which models gas turbines, the Rankine cycle, which models steam turbines, the Stirling cycle, which models hot air engines, and the Ericsson cycle, which also models hot air engines.
For example, the pressure-volume mechanical work output from the ideal Stirling cycle (net work out), consisting of 4 thermodynamic processes, is:
Wnet = W1-2 + W2-3 + W3-4 + W4-1 (3)
For the ideal Stirling cycle, no volume change happens in processes 4-1 and 2-3, thus equation (3) simplifies to:
Wnet = W1-2 + W3-4
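A small sketch of this simplified net work, assuming an ideal-gas working fluid and perfect regeneration so that only the two isothermal strokes contribute; the temperatures, volumes, and function name are illustrative assumptions.

```python
import math

R_GAS = 8.314462618  # molar gas constant, J/(mol*K)

def stirling_net_work(n_mol, T_hot, T_cold, V_small, V_large):
    """Net work per cycle of the ideal Stirling cycle with an ideal gas.

    W = n*R*T_hot*ln(V_large/V_small) + n*R*T_cold*ln(V_small/V_large)
      = n*R*(T_hot - T_cold)*ln(V_large/V_small)
    """
    return n_mol * R_GAS * (T_hot - T_cold) * math.log(V_large / V_small)

# Illustrative numbers: 1 mol, reservoirs at 600 K and 300 K, volume ratio 2
print(stirling_net_work(1.0, 600.0, 300.0, 1.0e-3, 2.0e-3))  # about 1.7 kJ
```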
Heat pump cycles
Thermodynamic heat pump cycles are the models for household heat pumps and refrigerators. There is no difference between the two except the purpose of the refrigerator is to cool a very small space while the household heat pump is intended to warm or cool a house. Both work by moving heat from a cold space to a warm space. The most common refrigeration cycle is the vapor compression cycle, which models systems using refrigerants that change phase. The absorption refrigeration cycle is an alternative that absorbs the refrigerant in a liquid solution rather than evaporating it. Gas refrigeration cycles include the reversed Brayton cycle and the Hampson–Linde cycle. Multiple compression and expansion cycles allow gas refrigeration systems to liquify gases.
Modeling real systems
Thermodynamic cycles may be used to model real devices and systems, typically by making a series of assumptions. Simplifying assumptions are often necessary to reduce the problem to a more manageable form. For example, as shown in the figure, devices such as a gas turbine or jet engine can be modeled as a Brayton cycle. The actual device is made up of a series of stages, each of which is itself modeled as an idealized thermodynamic process. Although each stage which acts on the working fluid is a complex real device, they may be modelled as idealized processes which approximate their real behavior. If energy is added by means other than combustion, then a further assumption is that the exhaust gases would be passed from the exhaust to a heat exchanger that would sink the waste heat to the environment and the working gas would be reused at the inlet stage.
The difference between an idealized cycle and actual performance may be significant. For example, the following images illustrate the differences in work output predicted by an ideal Stirling cycle and the actual performance of a Stirling engine:
As the net work output for a cycle is represented by the interior of the cycle, there is a significant difference between the predicted work output of the ideal cycle and the actual work output shown by a real engine. It may also be observed that the real individual processes diverge from their idealized counterparts; e.g., isochoric expansion (process 1-2) occurs with some actual volume change.
Well-known thermodynamic cycles
In practice, simple idealized thermodynamic cycles are usually made out of four thermodynamic processes. Any thermodynamic processes may be used. However, when idealized cycles are modeled, processes where one state variable is kept constant are often used, such as:
adiabatic (no heat transfer)
isothermal (constant temperature)
isobaric (constant pressure)
isochoric (constant volume)
isentropic (constant entropy)
isenthalpic (constant enthalpy)
Some example thermodynamic cycles and their constituent processes are as follows:
Ideal cycle
An ideal cycle is simple to analyze and consists of:
TOP (A) and BOTTOM (C) of the loop: a pair of parallel isobaric processes
RIGHT (B) and LEFT (D) of the loop: a pair of parallel isochoric processes
If the working substance is a perfect gas, the internal energy U is only a function of T for a closed system, since its internal pressure vanishes. Therefore, the internal energy changes of a perfect gas undergoing various processes connecting initial state to final state are always given by the formula ΔU = n CV ΔT,
assuming that CV is constant, for any process undergone by a perfect gas.
Under this set of assumptions, for the isobaric processes A and C we have W = P ΔV and Q = ΔU + P ΔV, whereas for the isochoric processes B and D we have W = 0 and Q = ΔU.
The total work done per cycle is W = (PA − PC)(VB − VD), which is just the area of the rectangle. If the total heat flow per cycle is required, this is easily obtained: since ΔU = 0 over the cycle, we have Q = W.
Thus, the total heat flow per cycle is calculated without knowing the heat capacities and temperature changes for each step (although this information would be needed to assess the thermodynamic efficiency of the cycle).
Carnot cycle
The Carnot cycle is a cycle composed of the totally reversible processes of isentropic compression and expansion and isothermal heat addition and rejection. The thermal efficiency of a Carnot cycle depends only on the absolute temperatures of the two reservoirs in which heat transfer takes place, and for a power cycle is:
η = 1 − TL/TH,
where TL is the lowest cycle temperature and TH the highest. For a Carnot cycle operated as a heat pump, the coefficient of performance is:
COPHP = TH/(TH − TL),
and for a refrigerator the coefficient of performance is:
COPR = TL/(TH − TL).
The second law of thermodynamics limits the efficiency and COP for all cyclic devices to levels at or below the Carnot efficiency. The Stirling cycle and Ericsson cycle are two other reversible cycles that use regeneration to obtain isothermal heat transfer.
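A minimal sketch of these three Carnot limits; the reservoir temperatures in the example are illustrative.

```python
def carnot_efficiency(T_hot, T_cold):
    """Carnot power-cycle efficiency from absolute reservoir temperatures (K)."""
    return 1.0 - T_cold / T_hot

def carnot_cop_heat_pump(T_hot, T_cold):
    """Carnot coefficient of performance for heating."""
    return T_hot / (T_hot - T_cold)

def carnot_cop_refrigerator(T_hot, T_cold):
    """Carnot coefficient of performance for cooling."""
    return T_cold / (T_hot - T_cold)

# Illustrative reservoirs: 500 K source, 300 K sink
print(carnot_efficiency(500.0, 300.0))        # 0.4
print(carnot_cop_heat_pump(500.0, 300.0))     # 2.5
print(carnot_cop_refrigerator(500.0, 300.0))  # 1.5
```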
Stirling cycle
A Stirling cycle is like an Otto cycle, except that the adiabats are replaced by isotherms. It is also the same as an Ericsson cycle with the isobaric processes replaced by constant-volume processes.
TOP and BOTTOM of the loop: a pair of quasi-parallel isothermal processes
LEFT and RIGHT sides of the loop: a pair of parallel isochoric processes
Heat flows into the loop through the top isotherm and the left isochore, and some of this heat flows back out through the bottom isotherm and the right isochore, but most of the heat flow is through the pair of isotherms. This makes sense since all the work done by the cycle is done by the pair of isothermal processes, which are described by Q=W. This suggests that all the net heat comes in through the top isotherm. In fact, all of the heat which comes in through the left isochore comes out through the right isochore: since the top isotherm is all at the same warmer temperature and the bottom isotherm is all at the same cooler temperature, and since change in energy for an isochore is proportional to change in temperature, then all of the heat coming in through the left isochore is cancelled out exactly by the heat going out the right isochore.
State functions and entropy
If Z is a state function then the balance of Z remains unchanged during a cyclic process:
∮ dZ = 0.
Entropy is a state function and is defined in an absolute sense through the Third Law of Thermodynamics as
S = ∫ δQrev/T,
where a reversible path is chosen from absolute zero to the final state, so that for an isothermal reversible process
ΔS = Qrev/T.
In general, for any cyclic process the state points can be connected by reversible paths, so that ∮ dS = ∮ δQrev/T = 0,
meaning that the net entropy change of the working fluid over a cycle is zero.
See also
Entropy
Economizer
Thermogravitational cycle
References
Further reading
Halliday, Resnick & Walker. Fundamentals of Physics, 5th edition. John Wiley & Sons, 1997. Chapter 21, Entropy and the Second Law of Thermodynamics.
Çengel, Yunus A., and Michael A. Boles. Thermodynamics: An Engineering Approach, 7th ed. New York: McGraw-Hill, 2011. Print.
Hill and Peterson. "Mechanics and Thermodynamics of Propulsion", 2nd ed. Prentice Hall, 1991. 760 pp.
External links
Equilibrium chemistry
Thermodynamic processes
Thermodynamic systems
Cycle | 0.770803 | 0.989758 | 0.762908 |
Animal locomotion | In ethology, animal locomotion is any of a variety of methods that animals use to move from one place to another. Some modes of locomotion are (initially) self-propelled, e.g., running, swimming, jumping, flying, hopping, soaring and gliding. There are also many animal species that depend on their environment for transportation, a type of mobility called passive locomotion, e.g., sailing (some jellyfish), kiting (spiders), rolling (some beetles and spiders) or riding other animals (phoresis).
Animals move for a variety of reasons, such as to find food, a mate, a suitable microhabitat, or to escape predators. For many animals, the ability to move is essential for survival and, as a result, natural selection has shaped the locomotion methods and mechanisms used by moving organisms. For example, migratory animals that travel vast distances (such as the Arctic tern) typically have a locomotion mechanism that costs very little energy per unit distance, whereas non-migratory animals that must frequently move quickly to escape predators are likely to have energetically costly, but very fast, locomotion.
The anatomical structures that animals use for movement, including cilia, legs, wings, arms, fins, or tails are sometimes referred to as locomotory organs or locomotory structures.
Etymology
The term "locomotion" is formed in English from Latin loco "from a place" (ablative of locus "place") + motio "motion, a moving".
The movement of the whole body from one place to another is called locomotion.
Aquatic
Swimming
In water, staying afloat is possible using buoyancy. If an animal's body is less dense than water, it can stay afloat. This requires little energy to maintain a vertical position, but requires more energy for locomotion in the horizontal plane compared to less buoyant animals. The drag encountered in water is much greater than in air. Morphology is therefore important for efficient locomotion, which is in most cases essential for basic functions such as catching prey. A fusiform, torpedo-like body form is seen in many aquatic animals, though the mechanisms they use for locomotion are diverse.
The primary means by which fish generate thrust is by oscillating the body from side-to-side, the resulting wave motion ending at a large tail fin. Finer control, such as for slow movements, is often achieved with thrust from pectoral fins (or front limbs in marine mammals). Some fish, e.g. the spotted ratfish (Hydrolagus colliei) and batiform fish (electric rays, sawfishes, guitarfishes, skates and stingrays) use their pectoral fins as the primary means of locomotion, sometimes termed labriform swimming. Marine mammals oscillate their body in an up-and-down (dorso-ventral) direction.
Other animals, e.g. penguins, diving ducks, move underwater in a manner which has been termed "aquatic flying". Some fish propel themselves without a wave motion of the body, as in the slow-moving seahorses and Gymnotus.
Other animals, such as cephalopods, use jet propulsion to travel fast, taking in water then squirting it back out in an explosive burst. Other swimming animals may rely predominantly on their limbs, much as humans do when swimming. Though life on land originated from the seas, terrestrial animals have returned to an aquatic lifestyle on several occasions, such as the fully aquatic cetaceans, now very distinct from their terrestrial ancestors.
Dolphins sometimes ride on the bow waves created by boats or surf on naturally breaking waves.
Benthic
Benthic locomotion is movement by animals that live on, in, or near the bottom of aquatic environments. In the sea, many animals walk over the seabed. Echinoderms primarily use their tube feet to move about. The tube feet typically have a tip shaped like a suction pad that can create a vacuum through contraction of muscles. This, along with some stickiness from the secretion of mucus, provides adhesion. Waves of tube feet contractions and relaxations move along the adherent surface and the animal moves slowly along. Some sea urchins also use their spines for benthic locomotion.
Crabs typically walk sideways (a behaviour that gives us the word crabwise). This is because of the articulation of the legs, which makes a sidelong gait more efficient. However, some crabs walk forwards or backwards, including raninids, Libinia emarginata and Mictyris platycheles. Some crabs, notably the Portunidae and Matutidae, are also capable of swimming, the Portunidae especially so as their last pair of walking legs are flattened into swimming paddles.
A stomatopod, Nannosquilla decemspinosa, can escape by rolling itself into a self-propelled wheel and somersault backwards at a speed of 72 rpm. They can travel more than 2 m using this unusual method of locomotion.
Aquatic surface
Velella, the by-the-wind sailor, is a cnidarian with no means of propulsion other than sailing. A small rigid sail projects into the air and catches the wind. Velella sails always align along the direction of the wind where the sail may act as an aerofoil, so that the animals tend to sail downwind at a small angle to the wind.
While larger animals such as ducks can move on water by floating, some small animals move across it without breaking through the surface. This surface locomotion takes advantage of the surface tension of water. Animals that move in such a way include the water strider. Water striders have legs that are hydrophobic, preventing them from interfering with the structure of water. Another form of locomotion (in which the surface layer is broken) is used by the basilisk lizard.
Aerial
Active flight
Gravity is the primary obstacle to flight. Because it is impossible for any organism to have a density as low as that of air, flying animals must generate enough lift to ascend and remain airborne. One way to achieve this is with wings, which when moved through the air generate an upward lift force on the animal's body. Flying animals must be very light to achieve flight, the largest living flying animals being birds of around 20 kilograms. Other structural adaptations of flying animals include reduced and redistributed body weight, fusiform shape and powerful flight muscles; there may also be physiological adaptations. Active flight has independently evolved at least four times, in the insects, pterosaurs, birds, and bats. Insects were the first taxon to evolve flight, approximately 400 million years ago (mya), followed by pterosaurs approximately 220 mya, birds approximately 160 mya, then bats about 60 mya.
Gliding
Rather than active flight, some (semi-) arboreal animals reduce their rate of falling by gliding. Gliding is heavier-than-air flight without the use of thrust; the term "volplaning" also refers to this mode of flight in animals. This mode of flight involves flying a greater distance horizontally than vertically and therefore can be distinguished from a simple descent like a parachute. Gliding has evolved on more occasions than active flight. There are examples of gliding animals in several major taxonomic classes such as the invertebrates (e.g., gliding ants), reptiles (e.g., banded flying snake), amphibians (e.g., flying frog), mammals (e.g., sugar glider, squirrel glider).
Some aquatic animals also regularly use gliding, for example, flying fish, octopus and squid. The flights of flying fish are typically around 50 meters (160 ft), though they can use updrafts at the leading edge of waves to cover distances of up to . To glide upward out of the water, a flying fish moves its tail up to 70 times per second.
Several oceanic squid, such as the Pacific flying squid, leap out of the water to escape predators, an adaptation similar to that of flying fish. Smaller squids fly in shoals, and have been observed to cover distances as long as 50 m. Small fins towards the back of the mantle help stabilize the motion of flight. They exit the water by expelling water out of their funnel, indeed some squid have been observed to continue jetting water while airborne providing thrust even after leaving the water. This may make flying squid the only animals with jet-propelled aerial locomotion. The neon flying squid has been observed to glide for distances over , at speeds of up to .
Soaring
Soaring birds can maintain flight without wing flapping, using rising air currents. Many gliding birds are able to "lock" their extended wings by means of a specialized tendon. Soaring birds may alternate glides with periods of soaring in rising air. Five principal types of lift are used: thermals, ridge lift, lee waves, convergences and dynamic soaring.
Examples of soaring flight by birds are the use of:
Thermals and convergences by raptors such as vultures
Ridge lift by gulls near cliffs
Wave lift by migrating birds
Dynamic effects near the surface of the sea by albatrosses
Ballooning
Ballooning is a method of locomotion used by spiders. Certain silk-producing arthropods, mostly small or young spiders, secrete a special light-weight gossamer silk for ballooning, sometimes traveling great distances at high altitude.
Terrestrial
Forms of locomotion on land include walking, running, hopping or jumping, dragging and crawling or slithering. Here friction and buoyancy are no longer an issue, but a strong skeletal and muscular framework are required in most terrestrial animals for structural support. Each step also requires much energy to overcome inertia, and animals can store elastic potential energy in their tendons to help overcome this. Balance is also required for movement on land. Human infants learn to crawl first before they are able to stand on two feet, which requires good coordination as well as physical development. Humans are bipedal animals, standing on two feet and keeping one on the ground at all times while walking. When running, only one foot is on the ground at any one time at most, and both leave the ground briefly. At higher speeds momentum helps keep the body upright, so more energy can be used in movement.
Jumping
Jumping (saltation) can be distinguished from running, galloping, and other gaits where the entire body is temporarily airborne by the relatively long duration of the aerial phase and high angle of initial launch. Many terrestrial animals use jumping (including hopping or leaping) to escape predators or catch prey—however, relatively few animals use this as a primary mode of locomotion. Those that do include the kangaroo and other macropods, rabbit, hare, jerboa, hopping mouse, and kangaroo rat. Kangaroo rats often leap 2 m and reportedly up to 2.75 m at speeds up to almost . They can quickly change their direction between jumps. The rapid locomotion of the banner-tailed kangaroo rat may minimize energy cost and predation risk. Its use of a "move-freeze" mode may also make it less conspicuous to nocturnal predators. Frogs are, relative to their size, the best jumpers of all vertebrates. The Australian rocket frog, Litoria nasuta, can leap over , more than fifty times its body length.
Peristalsis and looping
Other animals move in terrestrial habitats without the aid of legs. Earthworms crawl by a peristalsis, the same rhythmic contractions that propel food through the digestive tract.
Leeches and geometer moth caterpillars move by looping or inching (measuring off a length with each movement), using their paired circular and longitudinal muscles (as for peristalsis) along with the ability to attach to a surface at both anterior and posterior ends. One end is attached, often the thicker end, and the other end, often thinner, is projected forward peristaltically until it touches down, as far as it can reach; then the first end is released, pulled forward, and reattached; and the cycle repeats. In the case of leeches, attachment is by a sucker at each end of the body.
Sliding
Due to its low coefficient of friction, ice provides the opportunity for other modes of locomotion. Penguins either waddle on their feet or slide on their bellies across the snow, a movement called tobogganing, which conserves energy while moving quickly. Some pinnipeds perform a similar behaviour called sledding.
Climbing
Some animals are specialized for moving on non-horizontal surfaces. One common habitat for such climbing animals is in trees; for example, the gibbon is specialized for arboreal movement, travelling rapidly by brachiation (see below).
Others living on rock faces such as in mountains move on steep or even near-vertical surfaces by careful balancing and leaping. Perhaps the most exceptional are the various types of mountain-dwelling caprids (e.g., Barbary sheep, yak, ibex, rocky mountain goat, etc.), whose adaptations can include a soft rubbery pad between their hooves for grip, hooves with sharp keratin rims for lodging in small footholds, and prominent dew claws. Another case is the snow leopard, which being a predator of such caprids also has spectacular balance and leaping abilities, such as ability to leap up to 17m (50ft).
Some light animals are able to climb up smooth sheer surfaces or hang upside down by adhesion using suckers. Many insects can do this, though much larger animals such as geckos can also perform similar feats.
Walking and running
Species have different numbers of legs resulting in large differences in locomotion.
Modern birds, though classified as tetrapods, usually have only two functional legs, which some (e.g., ostrich, emu, kiwi) use as their primary, bipedal, mode of locomotion. A few modern mammalian species are habitual bipeds, i.e., whose normal method of locomotion is two-legged. These include the macropods, kangaroo rats and mice, springhare, hopping mice, pangolins and homininan apes. Bipedalism is rarely found outside terrestrial animals—though at least two types of octopus walk bipedally on the sea floor using two of their arms, so they can use the remaining arms to camouflage themselves as a mat of algae or floating coconut.
There are no three-legged animals—though some macropods, such as kangaroos, that alternate between resting their weight on their muscular tails and their two hind legs could be looked at as an example of tripedal locomotion in animals.
Many familiar animals are quadrupedal, walking or running on four legs. A few birds use quadrupedal movement in some circumstances. For example, the shoebill sometimes uses its wings to right itself after lunging at prey. The newly hatched hoatzin bird has claws on its thumb and first finger enabling it to dexterously climb tree branches until its wings are strong enough for sustained flight. These claws are gone by the time the bird reaches adulthood.
A relatively few animals use five limbs for locomotion. Prehensile quadrupeds may use their tail to assist in locomotion and when grazing, the kangaroos and other macropods use their tail to propel themselves forward with the four legs used to maintain balance.
Insects generally walk with six legs—though some insects such as nymphalid butterflies do not use the front legs for walking.
Arachnids have eight legs. Most arachnids lack extensor muscles in the distal joints of their appendages. Spiders and whipscorpions extend their limbs hydraulically using the pressure of their hemolymph. Solifuges and some harvestmen extend their knees by the use of highly elastic thickenings in the joint cuticle. Scorpions, pseudoscorpions and some harvestmen have evolved muscles that extend two leg joints (the femur-patella and patella-tibia joints) at once.
The scorpion Hadrurus arizonensis walks by using two groups of legs (left 1, right 2, Left 3, Right 4 and Right 1, Left 2, Right 3, Left 4) in a reciprocating fashion. This alternating tetrapod coordination is used over all walking speeds.
Centipedes and millipedes have many sets of legs that move in metachronal rhythm. Some echinoderms locomote using the many tube feet on the underside of their arms. Although the tube feet resemble suction cups in appearance, the gripping action is a function of adhesive chemicals rather than suction. Other chemicals and relaxation of the ampullae allow for release from the substrate. The tube feet latch on to surfaces and move in a wave, with one arm section attaching to the surface as another releases. Some multi-armed, fast-moving starfish such as the sunflower seastar (Pycnopodia helianthoides) pull themselves along with some of their arms while letting others trail behind. Other starfish turn up the tips of their arms while moving, which exposes the sensory tube feet and eyespot to external stimuli. Most starfish cannot move quickly, a typical speed being that of the leather star (Dermasterias imbricata), which can manage just in a minute. Some burrowing species from the genera Astropecten and Luidia have points rather than suckers on their long tube feet and are capable of much more rapid motion, "gliding" across the ocean floor. The sand star (Luidia foliolata) can travel at a speed of per minute. Sunflower starfish are quick, efficient hunters, moving at a speed of using 15,000 tube feet.
Many animals temporarily change the number of legs they use for locomotion in different circumstances. For example, many quadrupedal animals switch to bipedalism to reach low-level browse on trees. The genus of Basiliscus are arboreal lizards that usually use quadrupedalism in the trees. When frightened, they can drop to water below and run across the surface on their hind limbs at about 1.5 m/s for a distance of approximately before they sink to all fours and swim. They can also sustain themselves on all fours while "water-walking" to increase the distance travelled above the surface by about 1.3 m. When cockroaches run rapidly, they rear up on their two hind legs like bipedal humans; this allows them to run at speeds up to 50 body lengths per second, equivalent to a "couple hundred miles per hour, if you scale up to the size of humans." When grazing, kangaroos use a form of pentapedalism (four legs plus the tail) but switch to hopping (bipedalism) when they wish to move at a greater speed.
Powered cartwheeling
The Moroccan flic-flac spider (Cebrennus rechenbergi) uses a series of rapid, acrobatic flic-flac movements of its legs similar to those used by gymnasts, to actively propel itself off the ground, allowing it to move both down and uphill, even at a 40 percent incline. This behaviour is different than other huntsman spiders, such as Carparachne aureoflava from the Namib Desert, which uses passive cartwheeling as a form of locomotion. The flic-flac spider can reach speeds of up to 2 m/s using forward or back flips to evade threats.
Subterranean
Some animals move through solids such as soil by burrowing using peristalsis, as in earthworms, or other methods. In loose solids such as sand some animals, such as the golden mole, marsupial mole, and the pink fairy armadillo, are able to move more rapidly, "swimming" through the loose substrate. Burrowing animals include moles, ground squirrels, naked mole-rats, tilefish, and mole crickets.
Arboreal locomotion
Arboreal locomotion is the locomotion of animals in trees. Some animals may only scale trees occasionally, while others are exclusively arboreal. These habitats pose numerous mechanical challenges to animals moving through them, leading to a variety of anatomical, behavioural and ecological consequences as well as variations throughout different species. Furthermore, many of these same principles may be applied to climbing without trees, such as on rock piles or mountains. The earliest known tetrapod with specializations that adapted it for climbing trees was Suminia, a synapsid of the late Permian, about 260 million years ago. Some invertebrate animals are exclusively arboreal in habitat, for example, the tree snail.
Brachiation (from brachium, Latin for "arm") is a form of arboreal locomotion in which primates swing from tree limb to tree limb using only their arms. During brachiation, the body is alternately supported under each forelimb. This is the primary means of locomotion for the small gibbons and siamangs of southeast Asia. Some New World monkeys such as spider monkeys and muriquis are "semibrachiators" and move through the trees with a combination of leaping and brachiation. Some New World species also practice suspensory behaviors by using their prehensile tail, which acts as a fifth grasping hand.
Pandas are known to swing their heads laterally as they ascend vertical surfaces, astonishingly utilizing the head as a propulsive limb in an anatomical way that was thought to be practiced only by certain species of birds.
Energetics
Animal locomotion requires energy to overcome various forces including friction, drag, inertia and gravity, although the influence of these depends on the circumstances. In terrestrial environments, gravity must be overcome whereas the drag of air has little influence. In aqueous environments, friction (or drag) becomes the major energetic challenge with gravity being less of an influence. Remaining in the aqueous environment, animals with natural buoyancy expend little energy to maintain a vertical position in a water column. Others naturally sink, and must spend energy to remain afloat. Drag is also an energetic influence in flight, and the aerodynamically efficient body shapes of flying birds indicate how they have evolved to cope with this. Limbless organisms moving on land must energetically overcome surface friction, however, they do not usually need to expend significant energy to counteract gravity.
Newton's third law of motion is widely used in the study of animal locomotion: if at rest, to move forwards an animal must push something backwards. Terrestrial animals must push the solid ground, swimming and flying animals must push against a fluid (either water or air). The effect of forces during locomotion on the design of the skeletal system is also important, as is the interaction between locomotion and muscle physiology, in determining how the structures and effectors of locomotion enable or limit animal movement. The energetics of locomotion involves the energy expenditure by animals in moving. Energy consumed in locomotion is not available for other efforts, so animals typically have evolved to use the minimum energy possible during movement. However, in the case of certain behaviors, such as locomotion to escape a predator, performance (such as speed or maneuverability) is more crucial, and such movements may be energetically expensive. Furthermore, animals may use energetically expensive methods of locomotion when environmental conditions (such as being within a burrow) preclude other modes.
The most common metric of energy use during locomotion is the net (also termed "incremental") cost of transport, defined as the amount of energy (e.g., Joules) needed above baseline metabolic rate to move a given distance. For aerobic locomotion, most animals have a nearly constant cost of transport—moving a given distance requires the same caloric expenditure, regardless of speed. This constancy is usually accomplished by changes in gait. The net cost of transport of swimming is lowest, followed by flight, with terrestrial limbed locomotion being the most expensive per unit distance. However, because of the speeds involved, flight requires the most energy per unit time. This does not mean that an animal that normally moves by running would be a more efficient swimmer; however, these comparisons assume an animal is specialized for that form of motion. Another consideration here is body mass—heavier animals, though using more total energy, require less energy per unit mass to move. Physiologists generally measure energy use by the amount of oxygen consumed, or the amount of carbon dioxide produced, in an animal's respiration. In terrestrial animals, the cost of transport is typically measured while they walk or run on a motorized treadmill, either wearing a mask to capture gas exchange or with the entire treadmill enclosed in a metabolic chamber. For small rodents, such as deer mice, the cost of transport has also been measured during voluntary wheel running.
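A small sketch of the net cost of transport as defined above; the function name and the example numbers are illustrative assumptions rather than measured values.

```python
def net_cost_of_transport(metabolic_power_w, basal_power_w, speed_m_per_s, body_mass_kg=None):
    """Net (incremental) cost of transport.

    Energy above the baseline metabolic rate needed to move one metre (J/m);
    if a body mass is given, the mass-specific value (J/(kg*m)) is also returned.
    """
    cot = (metabolic_power_w - basal_power_w) / speed_m_per_s
    if body_mass_kg is None:
        return cot
    return cot, cot / body_mass_kg

# Illustrative numbers only: 800 W while running, 80 W at rest, 3 m/s, 70 kg
print(net_cost_of_transport(800.0, 80.0, 3.0, 70.0))  # (240.0 J/m, about 3.4 J/(kg*m))
```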
Energetics is important for explaining the evolution of foraging economic decisions in organisms; for example, a study of the African honey bee, A. m. scutellata, has shown that honey bees may trade the high sucrose content of viscous nectar off for the energetic benefits of warmer, less concentrated nectar, which also reduces their consumption and flight time.
Passive locomotion
Passive locomotion in animals is a type of mobility in which the animal depends on its environment for transportation; such animals are vagile but not motile.
Hydrozoans
The Portuguese man o' war (Physalia physalis) lives at the surface of the ocean. The gas-filled bladder, or pneumatophore (sometimes called a "sail"), remains at the surface, while the remainder is submerged. Because the Portuguese man o' war has no means of propulsion, it is moved by a combination of winds, currents, and tides. The sail is equipped with a siphon. In the event of a surface attack, the sail can be deflated, allowing the organism to briefly submerge.
Mollusca
The violet sea-snail (Janthina janthina) uses a buoyant foam raft stabilized by amphiphilic mucins to float at the sea surface.
Arachnids
The wheel spider (Carparachne aureoflava) is a huntsman spider approximately 20 mm in size and native to the Namib Desert of Southern Africa. The spider escapes parasitic pompilid wasps by flipping onto its side and cartwheeling down sand dunes at speeds of up to 44 turns per second. If the spider is on a sloped dune, its rolling speed may be 1 metre per second.
A spider (usually limited to individuals of a small species), or spiderling after hatching, climbs as high as it can, stands on raised legs with its abdomen pointed upwards ("tiptoeing"), and then releases several silk threads from its spinnerets into the air. These form a triangle-shaped parachute that carries the spider away on updrafts of wind, where even the slightest breeze is enough to transport it. The Earth's static electric field may also provide lift in windless conditions.
Insects
The larva of Cicindela dorsalis, the eastern beach tiger beetle, is notable for its ability to leap into the air, loop its body into a rotating wheel and roll along the sand at a high speed using wind to propel itself. If the wind is strong enough, the larva can cover up to in this manner. This remarkable ability may have evolved to help the larva escape predators such as the thynnid wasp Methocha.
Members of the largest subfamily of cuckoo wasps, Chrysidinae, are generally kleptoparasites, laying their eggs in host nests, where their larvae consume the host egg or larva while it is still young. Chrysidines are distinguished from the members of other subfamilies in that most have flattened or concave lower abdomens and can curl into a defensive ball when attacked by a potential host, a process known as conglobation. Protected by hard chitin in this position, they are expelled from the nest without injury and can search for a less hostile host.
Fleas can jump vertically up to 18 cm and horizontally up to 33 cm; however, although this form of locomotion is initiated by the flea, it has little control over the jump: fleas always jump in the same direction, with very little variation in trajectory between individual jumps.
Crustaceans
Although stomatopods typically display the standard locomotion types seen in true shrimp and lobsters, one species, Nannosquilla decemspinosa, has been observed flipping itself into a crude wheel. The species lives in shallow, sandy areas. At low tide, N. decemspinosa is often stranded by its short rear legs, which are sufficient for locomotion when the body is supported by water, but not on dry land. The mantis shrimp then performs a forward flip in an attempt to roll towards the next tide pool. N. decemspinosa has been observed to roll repeatedly for , but it typically travels less than . Again, the animal initiates the movement but has little control during its locomotion.
Animal transport
Some animals change location because they are attached to, or reside on, another animal or moving structure. This is arguably more accurately termed "animal transport".
Remoras
Remoras are a family (Echeneidae) of ray-finned fish. They grow to long, and their distinctive first dorsal fins take the form of a modified oval, sucker-like organ with slat-like structures that open and close to create suction and take a firm hold against the skin of larger marine animals. By sliding backward, the remora can increase the suction, or it can release itself by swimming forward. Remoras sometimes attach to small boats. They swim well on their own, with a sinuous, or curved, motion. When the remora reaches about , the disc is fully formed and the remora can then attach to other animals. The remora's lower jaw projects beyond the upper, and the animal lacks a swim bladder. Some remoras associate primarily with specific host species. They are commonly found attached to sharks, manta rays, whales, turtles, and dugongs. Smaller remoras also fasten onto fish such as tuna and swordfish, and some small remoras travel in the mouths or gills of large manta rays, ocean sunfish, swordfish, and sailfish. The remora benefits by using the host as transport and protection, and also feeds on materials dropped by the host.
Angler fish
In some species of anglerfish, when a male finds a female, he bites into her skin and releases an enzyme that digests the skin of his mouth and her body, fusing the pair down to the blood-vessel level. The male becomes dependent on the female host for survival by receiving nutrients via their shared circulatory system, and provides sperm to the female in return. After fusing, males increase in volume and become much larger relative to free-living males of the species. They live and remain reproductively functional as long as the female lives, and can take part in multiple spawnings. This extreme sexual dimorphism ensures that, when the female is ready to spawn, she has a mate immediately available. Multiple males can be incorporated into a single female, with up to eight males in some species, though some taxa appear to have a one-male-per-female rule.
Parasites
Many parasites are transported by their hosts. For example, endoparasites such as tapeworms live in the alimentary tracts of other animals, and depend on the host's ability to move to distribute their eggs. Ectoparasites such as fleas can move around on the body of their host, but are transported much longer distances by the host's locomotion. Some ectoparasites such as lice can opportunistically hitch a ride on a fly (phoresis) and attempt to find a new host.
Changes between media
Some animals locomote between different media, e.g., from aquatic to aerial. This often requires different modes of locomotion in the different media and may require a distinct transitional locomotor behaviour.
There are a large number of semi-aquatic animals (animals that spend part of their life cycle in water, or generally have part of their anatomy underwater). Semi-aquatic species are found among the major taxa of mammals (e.g., beaver, otter, polar bear), birds (e.g., penguins, ducks), reptiles (e.g., anaconda, bog turtle, marine iguana) and amphibians (e.g., salamanders, frogs, newts).
Fish
Some fish use multiple modes of locomotion. Walking fish may swim freely or at other times "walk" along the ocean or river floor, but not on land (e.g., the flying gurnard—which does not actually fly—and batfishes of the family Ogcocephalidae). Amphibious fish are fish that are able to leave the water for extended periods of time. These fish use a range of terrestrial locomotory modes, such as lateral undulation, tripod-like walking (using paired fins and tail), and jumping. Many of these locomotory modes incorporate multiple combinations of pectoral, pelvic and tail fin movement. Examples include eels, mudskippers and the walking catfish. Flying fish can make powerful, self-propelled leaps out of the water into the air, where their long, wing-like fins enable gliding flight for considerable distances above the water's surface. This uncommon ability is a natural defence mechanism to evade predators. The flights of flying fish are typically around 50 m, though they can use updrafts at the leading edge of waves to cover distances of up to . They can travel at speeds of more than . Maximum altitude is above the surface of the sea. Some accounts have them landing on ships' decks.
Marine mammals
When swimming, several marine mammals, such as dolphins, porpoises and pinnipeds, frequently leap above the water surface whilst maintaining horizontal locomotion. This is done for various reasons. When travelling, jumping can save dolphins and porpoises energy, as there is less friction while in the air. This type of travel is known as "porpoising". Other reasons for dolphins and porpoises performing porpoising include orientation, social displays, fighting, non-verbal communication, entertainment and attempting to dislodge parasites. In pinnipeds, two types of porpoising have been identified. "High porpoising" occurs most often near (within 100 m of) the shore and is often followed by minor course changes; this may help seals get their bearings on beaching or rafting sites. "Low porpoising" is typically observed relatively far (more than 100 m) from shore and is often aborted in favour of anti-predator movements; this may be a way for seals to maximize sub-surface vigilance and thereby reduce their vulnerability to sharks.
Some whales raise their (entire) body vertically out of the water in a behaviour known as "breaching".
Birds
Some semi-aquatic birds use terrestrial locomotion, surface swimming, underwater swimming and flying (e.g., ducks, swans). Diving birds also use diving locomotion (e.g., dippers, auks). Some birds (e.g., ratites) have lost the primary locomotion of flight. The largest of these, ostriches, when being pursued by a predator, have been known to reach speeds over , and can maintain a steady speed of , which makes the ostrich the world's fastest two-legged animal. Ostriches can also locomote by swimming. Penguins either waddle on their feet or slide on their bellies across the snow, a movement called tobogganing, which conserves energy while moving quickly. They also jump with both feet together if they want to move more quickly or cross steep or rocky terrain. To get onto land, penguins sometimes propel themselves upwards at great speed to leap out of the water.
Changes during the life-cycle
An animal's mode of locomotion may change considerably during its life-cycle. Barnacles are exclusively marine and tend to live in shallow and tidal waters. They have two nektonic (active swimming) larval stages, but as adults, they are sessile (non-motile) suspension feeders. Frequently, adults are found attached to moving objects such as whales and ships, and are thereby transported (passive locomotion) around the oceans.
Function
Animals locomote for a variety of reasons, such as to find food, a mate, a suitable microhabitat, or to escape predators.
Food procurement
Animals use locomotion in a wide variety of ways to procure food. Terrestrial methods include ambush predation, social predation and grazing. Aquatic methods include filter feeding, grazing, ram feeding, suction feeding, protrusion and pivot feeding. Other methods include parasitism and parasitoidism.
Quantifying body and limb movement
The study of animal locomotion is a branch of biology that investigates and quantifies how animals move. It is an application of kinematics, used to understand how the movements of animal limbs relate to the motion of the whole animal, for instance when walking or flying.
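As a minimal illustration of the kind of quantity such kinematic studies extract, the sketch below computes a joint angle from digitized 2D marker coordinates; the hip, knee and ankle positions are invented for the example.

```python
import math

def joint_angle(proximal, joint, distal):
    """Angle (degrees) at `joint` formed by two limb segments,
    given 2D marker coordinates such as hip, knee and ankle."""
    v1 = (proximal[0] - joint[0], proximal[1] - joint[1])
    v2 = (distal[0] - joint[0], distal[1] - joint[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    norm = math.hypot(*v1) * math.hypot(*v2)
    return math.degrees(math.acos(dot / norm))

# Made-up marker positions (metres) from one video frame:
hip, knee, ankle = (0.00, 0.40), (0.05, 0.20), (0.00, 0.00)
print(f"knee angle: {joint_angle(hip, knee, ankle):.1f} degrees")
```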
See also
Animal migration
Animal navigation
Bird feet and legs
Feather
Joint
Kinesis (biology)
Microswimmer
Movement of Animals (book)
Role of skin in locomotion
Sessile
Taxis
References
Further reading
McNeill Alexander, Robert. (2003) Principles of Animal Locomotion. Princeton University Press, Princeton, N.J.
External links
Beetle Orientation
Unified Physics Theory Explains Animals' Running, Flying And Swimming
Ethology
Zoology
Articles containing video clips | 0.766383 | 0.995456 | 0.7629 |
Pair production | Pair production is the creation of a subatomic particle and its antiparticle from a neutral boson. Examples include creating an electron and a positron, a muon and an antimuon, or a proton and an antiproton. Pair production often refers specifically to a photon creating an electron–positron pair near a nucleus. As energy must be conserved, for pair production to occur the incoming energy of the photon must be above a threshold of at least the total rest mass energy of the two particles created. (As the electron is the lightest, and hence lowest mass/energy, elementary particle, it requires the least energetic photons of all possible pair-production processes.) Conservation of energy and momentum are the principal constraints on the process.
All other conserved quantum numbers (angular momentum, electric charge, lepton number) of the produced particles must sum to zero; thus, the created particles have opposite values of each quantum number. For instance, if one particle has an electric charge of +1 the other must have an electric charge of −1, or if one particle has a strangeness of +1 then the other must have a strangeness of −1.
The probability of pair production in photon–matter interactions increases with photon energy and also increases approximately as the square of atomic number of (hence, number of protons in) the nearby atom.
Photon to electron and positron
For photons with high photon energy (MeV scale and higher), pair production is the dominant mode of photon interaction with matter. These interactions were first observed in Patrick Blackett's counter-controlled cloud chamber, leading to the 1948 Nobel Prize in Physics. If the photon is near an atomic nucleus, the energy of a photon can be converted into an electron–positron pair:
$\gamma \, (+\,Z) \to e^{-} + e^{+}$
The photon's energy is converted to particle mass in accordance with Einstein's equation, $E = mc^2$, where $E$ is energy, $m$ is mass and $c$ is the speed of light. The photon must have higher energy than the sum of the rest mass energies of an electron and positron (2 × 511 keV = 1.022 MeV, corresponding to a photon wavelength of about 1.2 pm) for the production to occur. (Thus, pair production does not occur in medical X-ray imaging because these X-rays only contain ~ 150 keV.)
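As a quick numerical check of this threshold (a sketch using standard physical constants, independent of any particular source), the minimum photon energy for electron–positron pair production and the corresponding wavelength can be computed directly:

```python
# Threshold for electron-positron pair production: E_gamma >= 2 * m_e * c^2.
# Standard constants; scipy.constants could be used instead if available.
M_E = 9.1093837e-31      # electron mass, kg
C = 2.99792458e8         # speed of light, m/s
H = 6.62607015e-34       # Planck constant, J*s
EV = 1.602176634e-19     # joules per electronvolt

threshold_J = 2 * M_E * C**2
threshold_MeV = threshold_J / EV / 1e6
wavelength_m = H * C / threshold_J   # photon wavelength at threshold

print(f"threshold energy ~ {threshold_MeV:.3f} MeV")           # ~1.022 MeV
print(f"threshold wavelength ~ {wavelength_m * 1e12:.2f} pm")   # ~1.21 pm
```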
The photon must be near a nucleus in order to satisfy conservation of momentum, as an electron–positron pair produced in free space cannot satisfy conservation of both energy and momentum. Because of this, when pair production occurs, the atomic nucleus receives some recoil. The reverse of this process is electron–positron annihilation.
Basic kinematics
These properties can be derived through the kinematics of the interaction. Using four-vector notation, the conservation of energy–momentum before and after the interaction gives:

$p_\gamma = p_{e^-} + p_{e^+} + p_R$

where $p_R$ is the recoil of the nucleus. Note the modulus of the four-vector

$A = (A^0, \mathbf{A})$

is

$A^2 = (A^0)^2 - |\mathbf{A}|^2$

which implies that $(p_\gamma)^2 = 0$ for all cases and $(p_{e^-})^2 = (p_{e^+})^2 = m_e^2 c^2$. We can square the conservation equation

$(p_\gamma)^2 = \left(p_{e^-} + p_{e^+} + p_R\right)^2$

However, in most cases the recoil of the nucleus is small compared to the energy of the photon and can be neglected. Taking this approximation of $p_R \approx 0$ and expanding the remaining relation

$(p_\gamma)^2 \approx (p_{e^-})^2 + 2\, p_{e^-} \cdot p_{e^+} + (p_{e^+})^2$

$0 \approx 2 m_e^2 c^2 + 2\left( \frac{E_{e^-} E_{e^+}}{c^2} - \mathbf{p}_{e^-} \cdot \mathbf{p}_{e^+} \right)$

Therefore, this approximation can only be satisfied if the electron and positron are emitted in very nearly the same direction, that is, with opening angle $\theta_{e^- e^+} \approx 0$.
This derivation is a semi-classical approximation. An exact derivation of the kinematics can be done taking into account the full quantum mechanical scattering of photon and nucleus.
Energy transfer
The energy transfer to the electron and positron in pair production interactions is given by

$(E_k^{\mathrm{pp}})_{\mathrm{tr}} = h\nu - 2 m_e c^2$

where $h$ is the Planck constant, $\nu$ is the frequency of the photon and $2 m_e c^2$ is the combined rest mass energy of the electron–positron pair. In general the electron and positron can be emitted with different kinetic energies, but the average transferred to each (ignoring the recoil of the nucleus) is

$\bar{E}_k^{\mathrm{pp}} = \tfrac{1}{2}\left(h\nu - 2 m_e c^2\right)$
Cross section
The exact analytic form for the cross section of pair production must be calculated through quantum electrodynamics in the form of Feynman diagrams and results in a complicated function. To simplify, the cross section can be written as:

$\sigma = \alpha\, r_e^2\, Z^2\, P(E, Z)$

where $\alpha$ is the fine-structure constant, $r_e$ is the classical electron radius, $Z$ is the atomic number of the material, and $P(E, Z)$ is a complicated function that depends on the photon energy and atomic number. Cross sections are tabulated for different materials and energies.
In 2008 the Titan laser, aimed at a 1 millimeter-thick gold target, was used to generate positron–electron pairs in large numbers.
Astronomy
Pair production is invoked in the heuristic explanation of hypothetical Hawking radiation. According to quantum mechanics, particle pairs are constantly appearing and disappearing as a quantum foam. In a region of strong gravitational tidal forces, the two particles in a pair may sometimes be wrenched apart before they have a chance to mutually annihilate. When this happens in the region around a black hole, one particle may escape while its antiparticle partner is captured by the black hole.
Pair production is also the mechanism behind the hypothesized pair-instability supernova type of stellar explosion, where pair production suddenly lowers the pressure inside a supergiant star, leading to a partial implosion, and then explosive thermonuclear burning. Supernova SN 2006gy is hypothesized to have been a pair production type supernova.
See also
Breit–Wheeler process
Dirac equation
Matter creation
Meitner–Hupfeld effect
Landau–Pomeranchuk–Migdal effect
Two-photon physics
References
External links
Theory of photon-impact bound-free pair production
Particle physics
Nuclear physics
Antimatter | 0.768098 | 0.993233 | 0.7629 |
X-ray crystallography | X-ray crystallography is the experimental science of determining the atomic and molecular structure of a crystal, in which the crystalline structure causes a beam of incident X-rays to diffract in specific directions. By measuring the angles and intensities of the X-ray diffraction, a crystallographer can produce a three-dimensional picture of the density of electrons within the crystal and the positions of the atoms, as well as their chemical bonds, crystallographic disorder, and other information.
X-ray crystallography has been fundamental in the development of many scientific fields. In its first decades of use, this method determined the size of atoms, the lengths and types of chemical bonds, and the atomic-scale differences between various materials, especially minerals and alloys. The method has also revealed the structure and function of many biological molecules, including vitamins, drugs, proteins and nucleic acids such as DNA. X-ray crystallography is still the primary method for characterizing the atomic structure of materials and in differentiating materials that appear similar in other experiments. X-ray crystal structures can also help explain unusual electronic or elastic properties of a material, shed light on chemical interactions and processes, or serve as the basis for designing pharmaceuticals against diseases.
Modern work involves a number of steps all of which are important. The preliminary steps include preparing good quality samples, careful recording of the diffracted intensities, and processing of the data to remove artifacts. A variety of different methods are then used to obtain an estimate of the atomic structure, generically called direct methods. With an initial estimate further computational techniques such as those involving difference maps are used to complete the structure. The final step is a numerical refinement of the atomic positions against the experimental data, sometimes assisted by ab-initio calculations. In almost all cases new structures are deposited in databases available to the international community.
History
Crystals, though long admired for their regularity and symmetry, were not investigated scientifically until the 17th century. Johannes Kepler hypothesized in his work Strena seu de Nive Sexangula (A New Year's Gift of Hexagonal Snow) (1611) that the hexagonal symmetry of snowflake crystals was due to a regular packing of spherical water particles. The Danish scientist Nicolas Steno (1669) pioneered experimental investigations of crystal symmetry. Steno showed that the angles between the faces are the same in every exemplar of a particular type of crystal. René Just Haüy (1784) discovered that every face of a crystal can be described by simple stacking patterns of blocks of the same shape and size. Hence, William Hallowes Miller in 1839 was able to give each face a unique label of three small integers, the Miller indices which remain in use for identifying crystal faces. Haüy's study led to the idea that crystals are a regular three-dimensional array (a Bravais lattice) of atoms and molecules; a single unit cell is repeated indefinitely along three principal directions. In the 19th century, a complete catalog of the possible symmetries of a crystal was worked out by Johan Hessel, Auguste Bravais, Evgraf Fedorov, Arthur Schönflies and (belatedly) William Barlow (1894). Barlow proposed several crystal structures in the 1880s that were validated later by X-ray crystallography; however, the available data were too scarce in the 1880s to accept his models as conclusive.
Wilhelm Röntgen discovered X-rays in 1895. Physicists were uncertain of the nature of X-rays, but suspected that they were waves of electromagnetic radiation. The Maxwell theory of electromagnetic radiation was well accepted, and experiments by Charles Glover Barkla showed that X-rays exhibited phenomena associated with electromagnetic waves, including transverse polarization and spectral lines akin to those observed in the visible wavelengths. Barkla created the X-ray notation for sharp spectral lines, noting in 1909 two separate energies, which he at first named "A" and "B"; then, supposing that there might be lines prior to "A", he started an alphabetic numbering beginning with "K". Single-slit experiments in the laboratory of Arnold Sommerfeld suggested that X-rays had a wavelength of about 1 angstrom. X-rays are not only waves but also have particle properties, which led Sommerfeld to coin the name Bremsstrahlung for the continuous spectra formed when electrons bombard a material. Albert Einstein introduced the photon concept in 1905, but it was not broadly accepted until 1922, when Arthur Compton confirmed it by the scattering of X-rays from electrons. The particle-like properties of X-rays, such as their ionization of gases, had prompted William Henry Bragg to argue in 1907 that X-rays were not electromagnetic radiation. Bragg's view proved unpopular and the observation of X-ray diffraction by Max von Laue in 1912 confirmed that X-rays are a form of electromagnetic radiation.
The idea that crystals could be used as a diffraction grating for X-rays arose in 1912 in a conversation between Paul Peter Ewald and Max von Laue in the English Garden in Munich. Ewald had proposed a resonator model of crystals for his thesis, but this model could not be validated using visible light, since the wavelength was much larger than the spacing between the resonators. Von Laue realized that electromagnetic radiation of a shorter wavelength was needed, and suggested that X-rays might have a wavelength comparable to the unit-cell spacing in crystals. Von Laue worked with two technicians, Walter Friedrich and his assistant Paul Knipping, to shine a beam of X-rays through a copper sulfate crystal and record its diffraction on a photographic plate. After being developed, the plate showed a large number of well-defined spots arranged in a pattern of intersecting circles around the spot produced by the central beam. The results were presented to the Bavarian Academy of Sciences and Humanities in June 1912 as "Interferenz-Erscheinungen bei Röntgenstrahlen" (Interference phenomena in X-rays). Von Laue developed a law that connects the scattering angles and the size and orientation of the unit-cell spacings in the crystal, for which he was awarded the Nobel Prize in Physics in 1914.
After Von Laue's pioneering research, the field developed rapidly, most notably by physicists William Lawrence Bragg and his father William Henry Bragg. In 1912–1913, the younger Bragg developed Bragg's law, which connects the scattering with evenly spaced planes within a crystal. The Braggs, father and son, shared the 1915 Nobel Prize in Physics for their work in crystallography. The earliest structures were generally simple; as computational and experimental methods improved over the next decades, it became feasible to deduce reliable atomic positions for more complicated arrangements of atoms.
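Bragg's law, nλ = 2d sin θ, relates the X-ray wavelength λ, the spacing d between lattice planes and the scattering angle θ. The sketch below applies it with illustrative values (Cu Kα radiation and an NaCl d-spacing); the numbers serve only as an example.

```python
import math

def bragg_angle_deg(d_spacing_angstrom, wavelength_angstrom, order=1):
    """Diffraction angle theta (degrees) from Bragg's law: n*lambda = 2*d*sin(theta)."""
    s = order * wavelength_angstrom / (2.0 * d_spacing_angstrom)
    if s > 1.0:
        raise ValueError("No diffraction: n*lambda exceeds 2d")
    return math.degrees(math.asin(s))

# Illustrative values: Cu K-alpha radiation (~1.5406 Angstrom) and the
# d-spacing of the NaCl (200) planes (~2.82 Angstrom).
print(f"theta ~ {bragg_angle_deg(2.82, 1.5406):.1f} degrees")
```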
The earliest structures were simple inorganic crystals and minerals, but even these revealed fundamental laws of physics and chemistry. The first atomic-resolution structure to be "solved" (i.e., determined) in 1914 was that of table salt. The distribution of electrons in the table-salt structure showed that crystals are not necessarily composed of covalently bonded molecules, and proved the existence of ionic compounds. The structure of diamond was solved in the same year, proving the tetrahedral arrangement of its chemical bonds and showing that the length of the C–C single bond was about 1.52 angstroms. Other early structures included copper, calcium fluoride (CaF2, also known as fluorite), calcite (CaCO3) and pyrite (FeS2) in 1914; spinel (MgAl2O4) in 1915; the rutile and anatase forms of titanium dioxide (TiO2) in 1916; pyrochroite (Mn(OH)2) and, by extension, brucite (Mg(OH)2) in 1919. Also in 1919, sodium nitrate (NaNO3) and caesium dichloroiodide (CsICl2) were determined by Ralph Walter Graystone Wyckoff, and the wurtzite (hexagonal ZnS) structure was determined in 1920.
The structure of graphite was solved in 1916 by the related method of powder diffraction, which was developed by Peter Debye and Paul Scherrer and, independently, by Albert Hull in 1917. The structure of graphite was determined from single-crystal diffraction in 1924 by two groups independently. Hull also used the powder method to determine the structures of various metals, such as iron and magnesium.
Contributions in different areas
Chemistry
X-ray crystallography has led to a better understanding of chemical bonds and non-covalent interactions. The initial studies revealed the typical radii of atoms, and confirmed many theoretical models of chemical bonding, such as the tetrahedral bonding of carbon in the diamond structure, the octahedral bonding of metals observed in ammonium hexachloroplatinate (IV), and the resonance observed in the planar carbonate group and in aromatic molecules. Kathleen Lonsdale's 1928 structure of hexamethylbenzene established the hexagonal symmetry of benzene and showed a clear difference in bond length between the aliphatic C–C bonds and aromatic C–C bonds; this finding led to the idea of resonance between chemical bonds, which had profound consequences for the development of chemistry. Her conclusions were anticipated by William Henry Bragg, who published models of naphthalene and anthracene in 1921 based on other molecules, an early form of molecular replacement.
The first structure of an organic compound, hexamethylenetetramine, was solved in 1923. This was rapidly followed by several studies of different long-chain fatty acids, which are an important component of biological membranes. In the 1930s, the structures of much larger molecules with two-dimensional complexity began to be solved. A significant advance was the structure of phthalocyanine, a large planar molecule that is closely related to porphyrin molecules important in biology, such as heme, corrin and chlorophyll.
In the 1920s, Victor Moritz Goldschmidt and later Linus Pauling developed rules for eliminating chemically unlikely structures and for determining the relative sizes of atoms. These rules led to the structure of brookite (1928) and an understanding of the relative stability of the rutile, brookite and anatase forms of titanium dioxide.
The distance between two bonded atoms is a sensitive measure of the bond strength and its bond order; thus, X-ray crystallographic studies have led to the discovery of even more exotic types of bonding in inorganic chemistry, such as metal-metal double bonds, metal-metal quadruple bonds, and three-center, two-electron bonds. X-ray crystallography—or, strictly speaking, an inelastic Compton scattering experiment—has also provided evidence for the partly covalent character of hydrogen bonds. In the field of organometallic chemistry, the X-ray structure of ferrocene initiated scientific studies of sandwich compounds, while that of Zeise's salt stimulated research into "back bonding" and metal-pi complexes. Finally, X-ray crystallography had a pioneering role in the development of supramolecular chemistry, particularly in clarifying the structures of the crown ethers and the principles of host–guest chemistry.
Materials science and mineralogy
The application of X-ray crystallography to mineralogy began with the structure of garnet, which was determined in 1924 by Menzer. A systematic X-ray crystallographic study of the silicates was undertaken in the 1920s. This study showed that, as the Si/O ratio is altered, the silicate crystals exhibit significant changes in their atomic arrangements. Machatschki extended these insights to minerals in which aluminium substitutes for the silicon atoms of the silicates. The first application of X-ray crystallography to metallurgy also occurred in the mid-1920s. Most notably, Linus Pauling's structure of the alloy Mg2Sn led to his theory of the stability and structure of complex ionic crystals. Many complicated inorganic and organometallic systems have been analyzed using single-crystal methods, such as fullerenes, metalloporphyrins, and other complicated compounds. Single-crystal diffraction is also used in the pharmaceutical industry. The Cambridge Structural Database contains over 1,000,000 structures as of June 2019; most of these structures were determined by X-ray crystallography.
On October 17, 2012, the Curiosity rover on the planet Mars at "Rocknest" performed the first X-ray diffraction analysis of Martian soil. The results from the rover's CheMin analyzer revealed the presence of several minerals, including feldspar, pyroxenes and olivine, and suggested that the Martian soil in the sample was similar to the "weathered basaltic soils" of Hawaiian volcanoes.
Biological macromolecular crystallography
X-ray crystallography of biological molecules took off with Dorothy Crowfoot Hodgkin, who solved the structures of cholesterol (1937), penicillin (1946) and vitamin B12 (1956), for which she was awarded the Nobel Prize in Chemistry in 1964. In 1969, she succeeded in solving the structure of insulin, on which she worked for over thirty years.
Crystal structures of proteins (which are irregular and hundreds of times larger than cholesterol) began to be solved in the late 1950s, beginning with the structure of sperm whale myoglobin by Sir John Cowdery Kendrew, for which he shared the Nobel Prize in Chemistry with Max Perutz in 1962. Since that success, over 130,000 X-ray crystal structures of proteins, nucleic acids and other biological molecules have been determined. The nearest competing method in number of structures analyzed is nuclear magnetic resonance (NMR) spectroscopy, which has resolved fewer than one tenth as many. Crystallography can solve structures of arbitrarily large molecules, whereas solution-state NMR is restricted to relatively small ones (less than 70 kDa). X-ray crystallography is used routinely to determine how a pharmaceutical drug interacts with its protein target and what changes might improve it. However, intrinsic membrane proteins remain challenging to crystallize because they require detergents or other denaturants to solubilize them in isolation, and such detergents often interfere with crystallization. Membrane proteins are a large component of the genome, and include many proteins of great physiological importance, such as ion channels and receptors. Helium cryogenics are used to reduce radiation damage in protein crystals.
Methods
Overview
Two limiting cases of X-ray crystallography—"small-molecule" (which includes continuous inorganic solids) and "macromolecular" crystallography—are often used. Small-molecule crystallography typically involves crystals with fewer than 100 atoms in their asymmetric unit; such crystal structures are usually so well resolved that the atoms can be discerned as isolated "blobs" of electron density. In contrast, macromolecular crystallography often involves tens of thousands of atoms in the unit cell. Such crystal structures are generally less well-resolved; the atoms and chemical bonds appear as tubes of electron density, rather than as isolated atoms. In general, small molecules are also easier to crystallize than macromolecules; however, X-ray crystallography has proven possible even for viruses and proteins with hundreds of thousands of atoms, through improved crystallographic imaging and technology.
The technique of single-crystal X-ray crystallography has three basic steps. The first—and often most difficult—step is to obtain an adequate crystal of the material under study. The crystal should be sufficiently large (typically larger than 0.1 mm in all dimensions), pure in composition and regular in structure, with no significant internal imperfections such as cracks or twinning.
In the second step, the crystal is placed in an intense beam of X-rays, usually of a single wavelength (monochromatic X-rays), producing the regular pattern of reflections. The angles and intensities of diffracted X-rays are measured, with each compound having a unique diffraction pattern. As the crystal is gradually rotated, previous reflections disappear and new ones appear; the intensity of every spot is recorded at every orientation of the crystal. Multiple data sets may have to be collected, with each set covering slightly more than half a full rotation of the crystal and typically containing tens of thousands of reflections.
In the third step, these data are combined computationally with complementary chemical information to produce and refine a model of the arrangement of atoms within the crystal. The final, refined model of the atomic arrangement—now called a crystal structure—is usually stored in a public database.
Crystallization
Although crystallography can be used to characterize the disorder in an impure or irregular crystal, crystallography generally requires a pure crystal of high regularity to solve the structure of a complicated arrangement of atoms. Pure, regular crystals can sometimes be obtained from natural or synthetic materials, such as samples of metals, minerals or other macroscopic materials. The regularity of such crystals can sometimes be improved with macromolecular crystal annealing and other methods. However, in many cases, obtaining a diffraction-quality crystal is the chief barrier to solving its atomic-resolution structure.
Small-molecule and macromolecular crystallography differ in the range of possible techniques used to produce diffraction-quality crystals. Small molecules generally have few degrees of conformational freedom, and may be crystallized by a wide range of methods, such as chemical vapor deposition and recrystallization. By contrast, macromolecules generally have many degrees of freedom and their crystallization must be carried out so as to maintain a stable structure. For example, proteins and larger RNA molecules cannot be crystallized if their tertiary structure has been unfolded; therefore, the range of crystallization conditions is restricted to solution conditions in which such molecules remain folded.
Protein crystals are almost always grown in solution. The most common approach is to lower the solubility of its component molecules very gradually; if this is done too quickly, the molecules will precipitate from solution, forming a useless dust or amorphous gel on the bottom of the container. Crystal growth in solution is characterized by two steps: nucleation of a microscopic crystallite (possibly having only 100 molecules), followed by growth of that crystallite, ideally to a diffraction-quality crystal. The solution conditions that favor the first step (nucleation) are not always the same conditions that favor the second step (subsequent growth). The solution conditions should disfavor the first step (nucleation) but favor the second (growth), so that only one large crystal forms per droplet. If nucleation is favored too much, a shower of small crystallites will form in the droplet, rather than one large crystal; if favored too little, no crystal will form whatsoever. Other approaches involve crystallizing proteins under oil, where aqueous protein solutions are dispensed under liquid oil, and water evaporates through the layer of oil. Different oils have different evaporation permeabilities, therefore yielding different rates of concentration change for a given precipitant/protein mixture.
It is difficult to predict good conditions for nucleation or growth of well-ordered crystals. In practice, favorable conditions are identified by screening; a very large batch of the molecules is prepared, and a wide variety of crystallization solutions are tested. Hundreds, even thousands, of solution conditions are generally tried before finding the successful one. The various conditions can use one or more physical mechanisms to lower the solubility of the molecule; for example, some may change the pH, some contain salts of the Hofmeister series or chemicals that lower the dielectric constant of the solution, and still others contain large polymers such as polyethylene glycol that drive the molecule out of solution by entropic effects. It is also common to try several temperatures for encouraging crystallization, or to gradually lower the temperature so that the solution becomes supersaturated. These methods require large amounts of the target molecule, as they use high concentration of the molecule(s) to be crystallized. Due to the difficulty in obtaining such large quantities (milligrams) of crystallization-grade protein, robots have been developed that are capable of accurately dispensing crystallization trial drops that are in the order of 100 nanoliters in volume. This means that 10-fold less protein is used per experiment when compared to crystallization trials set up by hand (in the order of 1 microliter).
Several factors are known to inhibit crystallization. The growing crystals are generally held at a constant temperature and protected from shocks or vibrations that might disturb their crystallization. Impurities in the molecules or in the crystallization solutions are often inimical to crystallization. Conformational flexibility in the molecule also tends to make crystallization less likely, due to entropy. Molecules that tend to self-assemble into regular helices are often unwilling to assemble into crystals. Crystals can be marred by twinning, which can occur when a unit cell can pack equally favorably in multiple orientations; although recent advances in computational methods may allow solving the structure of some twinned crystals. Having failed to crystallize a target molecule, a crystallographer may try again with a slightly modified version of the molecule; even small changes in molecular properties can lead to large differences in crystallization behavior.
Data collection
Mounting the crystal
The crystal is mounted for measurements so that it may be held in the X-ray beam and rotated. There are several methods of mounting. In the past, crystals were loaded into glass capillaries with the crystallization solution (the mother liquor). Crystals of small molecules are typically attached with oil or glue to a glass fiber or a loop, which is made of nylon or plastic and attached to a solid rod. Protein crystals are scooped up by a loop, then flash-frozen with liquid nitrogen. This freezing reduces the radiation damage of the X-rays, as well as thermal motion (the Debye-Waller effect). However, untreated protein crystals often crack if flash-frozen; therefore, they are generally pre-soaked in a cryoprotectant solution before freezing. This pre-soak may itself cause the crystal to crack, ruining it for crystallography. Generally, successful cryo-conditions are identified by trial and error.
The capillary or loop is mounted on a goniometer, which allows it to be positioned accurately within the X-ray beam and rotated. Since both the crystal and the beam are often very small, the crystal must be centered within the beam to within ~25 micrometers accuracy, which is aided by a camera focused on the crystal. The most common type of goniometer is the "kappa goniometer", which offers three angles of rotation: the ω angle, which rotates about an axis perpendicular to the beam; the κ angle, about an axis at ~50° to the ω axis; and, finally, the φ angle about the loop/capillary axis. When the κ angle is zero, the ω and φ axes are aligned. The κ rotation allows for convenient mounting of the crystal, since the arm in which the crystal is mounted may be swung out towards the crystallographer. The oscillations carried out during data collection (mentioned below) involve the ω axis only. An older type of goniometer is the four-circle goniometer, and its relatives such as the six-circle goniometer.
Recording the reflections
The relative intensities of the reflections provide the information needed to determine the arrangement of molecules within the crystal in atomic detail. The intensities of these reflections may be recorded with photographic film, an area detector (such as a pixel detector) or with a charge-coupled device (CCD) image sensor. The peaks at small angles correspond to low-resolution data, whereas those at high angles represent high-resolution data; thus, an upper limit on the eventual resolution of the structure can be determined from the first few images. Some measures of diffraction quality can be determined at this point, such as the mosaicity of the crystal and its overall disorder, as observed in the peak widths. Some pathologies of the crystal that would render it unfit for solving the structure can also be diagnosed quickly at this point.
One set of spots is insufficient to reconstruct the whole crystal; it represents only a small slice of the full three dimensional set. To collect all the necessary information, the crystal must be rotated step-by-step through 180°, with an image recorded at every step; actually, slightly more than 180° is required to cover reciprocal space, due to the curvature of the Ewald sphere. However, if the crystal has a higher symmetry, a smaller angular range such as 90° or 45° may be recorded. The rotation axis should be changed at least once, to avoid developing a "blind spot" in reciprocal space close to the rotation axis. It is customary to rock the crystal slightly (by 0.5–2°) to catch a broader region of reciprocal space.
Multiple data sets may be necessary for certain phasing methods. For example, multi-wavelength anomalous dispersion phasing requires that the scattering be recorded at least three (and usually four, for redundancy) wavelengths of the incoming X-ray radiation. A single crystal may degrade too much during the collection of one data set, owing to radiation damage; in such cases, data sets on multiple crystals must be taken.
Crystal symmetry, unit cell, and image scaling
The recorded series of two-dimensional diffraction patterns, each corresponding to a different crystal orientation, is converted into a three-dimensional set. Data processing begins with indexing the reflections. This means identifying the dimensions of the unit cell and which image peak corresponds to which position in reciprocal space. A byproduct of indexing is to determine the symmetry of the crystal, i.e., its space group. Some space groups can be eliminated from the beginning. For example, reflection symmetries cannot be observed in chiral molecules; thus, only 65 space groups of the 230 possible are allowed for protein molecules, which are almost always chiral. Indexing is generally accomplished using an autoindexing routine. Having assigned symmetry, the data is then integrated. This converts the hundreds of images containing the thousands of reflections into a single file, consisting of (at the very least) records of the Miller index of each reflection, and an intensity for each reflection (at this stage the file often also includes error estimates and measures of partiality, i.e., what part of a given reflection was recorded on that image).
A full data set may consist of hundreds of separate images taken at different orientations of the crystal. These have to be merged and scaled, using peaks that appear in two or more images (merging) and scaling so that there is a consistent intensity scale. Optimizing the intensity scale is critical because the relative intensity of the peaks is the key information from which the structure is determined. The repetitive technique of crystallographic data collection and the often high symmetry of crystalline materials cause the diffractometer to record many symmetry-equivalent reflections multiple times. This allows calculating the symmetry-related R-factor, a reliability index based upon how similar the measured intensities of symmetry-equivalent reflections are, thus assessing the quality of the data.
Initial phasing
The intensity of each diffraction 'spot' is proportional to the modulus squared of the structure factor. The structure factor is a complex number containing information relating to both the amplitude and phase of a wave. In order to obtain an interpretable electron density map, both amplitude and phase must be known (an electron density map allows a crystallographer to build a starting model of the molecule). The phase cannot be directly recorded during a diffraction experiment: this is known as the phase problem. Initial phase estimates can be obtained in a variety of ways:
Ab initio phasing or direct methods – This is usually the method of choice for small molecules (<1000 non-hydrogen atoms), and has been used successfully to solve the phase problems for small proteins. If the resolution of the data is better than 1.4 Å (140 pm), direct methods can be used to obtain phase information, by exploiting known phase relationships between certain groups of reflections.
Molecular replacement – if a related structure is known, it can be used as a search model in molecular replacement to determine the orientation and position of the molecules within the unit cell. The phases obtained this way can be used to generate electron density maps.
Anomalous X-ray scattering (MAD or SAD phasing) – the X-ray wavelength may be scanned past an absorption edge of an atom, which changes the scattering in a known way. By recording full sets of reflections at three different wavelengths (far below, far above and in the middle of the absorption edge) one can solve for the substructure of the anomalously diffracting atoms and hence the structure of the whole molecule. The most popular method of incorporating anomalous scattering atoms into proteins is to express the protein in a methionine auxotroph (a host incapable of synthesizing methionine) in a medium rich in selenomethionine, which contains selenium atoms. A multi-wavelength anomalous dispersion (MAD) experiment can then be conducted around the absorption edge, which should then yield the position of any methionine residues within the protein, providing initial phases.
Heavy atom methods (multiple isomorphous replacement) – If electron-dense metal atoms can be introduced into the crystal, direct methods or Patterson-space methods can be used to determine their location and to obtain initial phases. Such heavy atoms can be introduced either by soaking the crystal in a heavy atom-containing solution, or by co-crystallization (growing the crystals in the presence of a heavy atom). As in multi-wavelength anomalous dispersion phasing, the changes in the scattering amplitudes can be interpreted to yield the phases. Although this is the original method by which protein crystal structures were solved, it has largely been superseded by multi-wavelength anomalous dispersion phasing with selenomethionine.
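To make the phase problem above concrete, the sketch below builds a toy one-dimensional "crystal" (an invented density, not real data), computes its structure factors, and then attempts Fourier synthesis with and without the phases; the density is recovered only when both amplitudes and phases are available, which is why the phasing methods listed above are needed.

```python
import numpy as np

# Toy 1D "crystal": an invented electron density sampled on a grid over one unit cell.
x = np.linspace(0.0, 1.0, 200, endpoint=False)
rho_true = np.exp(-((x - 0.3) / 0.03) ** 2) + 0.6 * np.exp(-((x - 0.7) / 0.05) ** 2)

# Structure factors F(h); the experiment measures only the amplitudes |F(h)|.
h = np.arange(-20, 21)
F = np.array([np.sum(rho_true * np.exp(2j * np.pi * k * x)) for k in h]) / len(x)
amplitudes, phases = np.abs(F), np.angle(F)

# Fourier synthesis: the density is recoverable only when amplitude AND phase are known.
rho_with_phases = np.real(sum(a * np.exp(1j * p) * np.exp(-2j * np.pi * k * x)
                              for a, p, k in zip(amplitudes, phases, h)))
rho_zero_phases = np.real(sum(a * np.exp(-2j * np.pi * k * x)
                              for a, k in zip(amplitudes, h)))  # phases discarded

print("max error with phases:   ", np.round(np.max(np.abs(rho_with_phases - rho_true)), 3))
print("max error without phases:", np.round(np.max(np.abs(rho_zero_phases - rho_true)), 3))
```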
Model building and phase refinement
Having obtained initial phases, an initial model can be built. The atomic positions in the model and their respective Debye-Waller factors (or B-factors, accounting for the thermal motion of the atom) can be refined to fit the observed diffraction data, ideally yielding a better set of phases. A new model can then be fit to the new electron density map and successive rounds of refinement are carried out. This iterative process continues until the correlation between the diffraction data and the model is maximized. The agreement is measured by an R-factor defined as

$R = \frac{\sum \big|\, |F_{\mathrm{obs}}| - |F_{\mathrm{calc}}| \,\big|}{\sum |F_{\mathrm{obs}}|}$

where $F_{\mathrm{obs}}$ and $F_{\mathrm{calc}}$ are the observed and calculated structure factors. A similar quality criterion is Rfree, which is calculated from a subset (~10%) of reflections that were not included in the structure refinement. Both R factors depend on the resolution of the data. As a rule of thumb, Rfree should be approximately the resolution in angstroms divided by 10; thus, a data set with 2 Å resolution should yield a final Rfree ~ 0.2. Chemical bonding features such as stereochemistry, hydrogen bonding and the distribution of bond lengths and angles are complementary measures of the model quality. In iterative model building, it is common to encounter phase bias or model bias: because phase estimates come from the model, each round of calculated map tends to show density wherever the model has density, regardless of whether there truly is density there. This problem can be mitigated by maximum-likelihood weighting and checking using omit maps.
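A minimal sketch of how such agreement statistics are computed from observed and calculated structure-factor amplitudes (the amplitude lists are hypothetical; real refinement programs do far more, including weighting and maintaining the work/free split):

```python
def r_factor(f_obs, f_calc):
    """Crystallographic R-factor from observed and calculated structure-factor amplitudes."""
    num = sum(abs(o - c) for o, c in zip(f_obs, f_calc))
    den = sum(f_obs)
    return num / den

# Hypothetical amplitudes; in practice the "free" set is a ~10% subset of reflections
# excluded from refinement, and Rfree is computed over that subset only.
work_obs, work_calc = [120.0, 85.0, 60.0, 30.0], [115.0, 90.0, 55.0, 33.0]
free_obs, free_calc = [100.0, 45.0], [90.0, 52.0]

print(f"R_work ~ {r_factor(work_obs, work_calc):.3f}")
print(f"R_free ~ {r_factor(free_obs, free_calc):.3f}")
```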
It may not be possible to observe every atom in the asymmetric unit. In many cases, crystallographic disorder smears the electron density map. Weakly scattering atoms such as hydrogen are routinely invisible. It is also possible for a single atom to appear multiple times in an electron density map, e.g., if a protein sidechain has multiple (<4) allowed conformations. In still other cases, the crystallographer may detect that the covalent structure deduced for the molecule was incorrect, or changed. For example, proteins may be cleaved or undergo post-translational modifications that were not detected prior to the crystallization.
Disorder
A common challenge in refinement of crystal structures results from crystallographic disorder. Disorder can take many forms but in general involves the coexistence of two or more species or conformations. Failure to recognize disorder results in flawed interpretation. Pitfalls from improper modeling of disorder are illustrated by the discounted hypothesis of bond stretch isomerism. Disorder is modelled with respect to the relative population of the components, often only two, and their identity. In structures of large molecules and ions, solvent and counterions are often disordered.
Applied computational data analysis
The use of computational methods for powder X-ray diffraction data analysis is now generalized. It typically compares the experimental data to the simulated diffractogram of a model structure, taking into account the instrumental parameters, and refines the structural or microstructural parameters of the model using a least-squares-based minimization algorithm. Most available tools allowing phase identification and structural refinement are based on the Rietveld method, some of them being open and free software such as FullProf Suite, Jana2006, MAUD, Rietan, GSAS, etc. while others are available under commercial licenses such as Diffrac.Suite TOPAS, Match!, etc. Most of these tools also allow Le Bail refinement (also referred to as profile matching), that is, refinement of the cell parameters based on the Bragg peak positions and peak profiles, without taking into account the crystallographic structure by itself. More recent tools allow the refinement of both structural and microstructural data, such as the FAULTS program included in the FullProf Suite, which allows the refinement of structures with planar defects (e.g. stacking faults, twinnings, intergrowths).
Deposition of the structure
Once the model of a molecule's structure has been finalized, it is often deposited in a crystallographic database such as the Cambridge Structural Database (for small molecules), the Inorganic Crystal Structure Database (ICSD) (for inorganic compounds) or the Protein Data Bank (for protein and sometimes nucleic acids). Many structures obtained in private commercial ventures to crystallize medicinally relevant proteins are not deposited in public crystallographic databases.
Contribution of women to X-ray crystallography
A number of women were pioneers in X-ray crystallography at a time when they were excluded from most other branches of physical science.
Kathleen Lonsdale was a research student of William Henry Bragg, who had 11 women research students out of a total of 18. She is known for both her experimental and theoretical work. Lonsdale joined his crystallography research team at the Royal Institution in London in 1923, and after getting married and having children, went back to work with Bragg as a researcher. She confirmed the structure of the benzene ring, carried out studies of diamond, was one of the first two women to be elected to the Royal Society in 1945, and in 1949 was appointed the first female tenured professor of chemistry and head of the Department of Crystallography at University College London. Lonsdale always advocated greater participation of women in science and said in 1970: "Any country that wants to make full use of all its potential scientists and technologists could do so, but it must not expect to get the women quite so simply as it gets the men.... Is it utopian, then, to suggest that any country that really wants married women to return to a scientific career, when her children no longer need her physical presence, should make special arrangements to encourage her to do so?" During this period, Lonsdale began a collaboration with William T. Astbury on a set of 230 space-group tables, which was published in 1924 and became an essential tool for crystallographers.
In 1932 Dorothy Hodgkin joined the laboratory of the physicist John Desmond Bernal, who was a former student of Bragg, in Cambridge, UK. She and Bernal took the first X-ray photographs of crystalline proteins. Hodgkin also played a role in the foundation of the International Union of Crystallography. She was awarded the Nobel Prize in Chemistry in 1964 for her work using X-ray techniques to study the structures of penicillin, insulin and vitamin B12. Her work on penicillin began in 1942 during the war and on vitamin B12 in 1948. While her group slowly grew, their predominant focus was on the X-ray analysis of natural products. She is the only British woman ever to have won a Nobel Prize in a science subject.
Rosalind Franklin took the X-ray photograph of a DNA fibre that proved key to James Watson and Francis Crick's discovery of the double helix, for which they both won the Nobel Prize for Physiology or Medicine in 1962. Watson revealed in his autobiographic account of the discovery of the structure of DNA, The Double Helix, that he had used Franklin's X-ray photograph without her permission. Franklin died of cancer in her 30s, before Watson received the Nobel Prize. Franklin also carried out important structural studies of carbon in coal and graphite, and of plant and animal viruses.
Isabella Karle of the United States Naval Research Laboratory developed an experimental approach to the mathematical theory of crystallography. Her work improved the speed and accuracy of chemical and biomedical analysis. Yet only her husband Jerome shared the 1985 Nobel Prize in Chemistry with Herbert Hauptman, "for outstanding achievements in the development of direct methods for the determination of crystal structures". Other prize-giving bodies have showered Isabella with awards in her own right.
Women have written many textbooks and research papers in the field of X-ray crystallography. For many years Lonsdale edited the International Tables for Crystallography, which provide information on crystal lattices, symmetry, and space groups, as well as mathematical, physical and chemical data on structures. Olga Kennard of the University of Cambridge, founded and ran the Cambridge Crystallographic Data Centre, an internationally recognized source of structural data on small molecules, from 1965 until 1997. Jenny Pickworth Glusker, a British scientist, co-authored Crystal Structure Analysis: A Primer, first published in 1971 and as of 2010 in its third edition. Eleanor Dodson, an Australian-born biologist, who began as Dorothy Hodgkin's technician, was the main instigator behind CCP4, the collaborative computing project that currently shares more than 250 software tools with protein crystallographers worldwide.
Nobel Prizes involving X-ray crystallography
See also
Beevers–Lipson strip
Bragg diffraction
Crystallographic database
Crystallographic point groups
Difference density map
Electron diffraction
Energy-dispersive X-ray diffraction
Flack parameter
Grazing incidence diffraction
Henderson limit
International Year of Crystallography
Multipole density formalism
Neutron diffraction
Powder diffraction
Ptychography
Scherrer equation
Small angle X-ray scattering (SAXS)
Structure determination
Ultrafast x-ray
Wide angle X-ray scattering (WAXS)
X-ray diffraction
Notes
References
Further reading
International Tables for Crystallography
Bound collections of articles
Textbooks
Applied computational data analysis
Historical
External links
Tutorials
Learning Crystallography
Simple, non technical introduction
The Crystallography Collection, video series from the Royal Institution
"Small Molecule Crystalization" (PDF) at Illinois Institute of Technology website
International Union of Crystallography
Crystallography 101
Interactive structure factor tutorial, demonstrating properties of the diffraction pattern of a 2D crystal.
Picturebook of Fourier Transforms, illustrating the relationship between crystal and diffraction pattern in 2D.
Lecture notes on X-ray crystallography and structure determination
Online lecture on Modern X-ray Scattering Methods for Nanoscale Materials Analysis by Richard J. Matyi
Interactive Crystallography Timeline from the Royal Institution
Primary databases
Crystallography Open Database (COD)
Protein Data Bank (PDB)
Nucleic Acid Databank (NDB)
Cambridge Structural Database (CSD)
Inorganic Crystal Structure Database (ICSD)
Biological Macromolecule Crystallization Database (BMCD)
Derivative databases
PDBsum
Proteopedia – the collaborative, 3D encyclopedia of proteins and other molecules
RNABase
HIC-Up database of PDB ligands
Structural Classification of Proteins database
CATH Protein Structure Classification
List of transmembrane proteins with known 3D structure
Orientations of Proteins in Membranes database
Structural validation
MolProbity structural validation suite
ProSA-web
NQ-Flipper (check for unfavorable rotamers of Asn and Gln residues)
DALI server (identifies proteins similar to a given protein)
Laboratory techniques in condensed matter physics
Crystallography
Diffraction
Materials science
Protein structure
Protein methods
Protein imaging
Synchrotron-related techniques
Articles containing video clips
Crystallography
Stability theory
In mathematics, stability theory addresses the stability of solutions of differential equations and of trajectories of dynamical systems under small perturbations of initial conditions. The heat equation, for example, is a stable partial differential equation because small perturbations of initial data lead to small variations in temperature at a later time as a result of the maximum principle. In partial differential equations one may measure the distances between functions using Lp norms or the sup norm, while in differential geometry one may measure the distance between spaces using the Gromov–Hausdorff distance.
In dynamical systems, an orbit is called Lyapunov stable if the forward orbit of any point that starts in a small enough neighborhood of it stays in a small (but perhaps larger) neighborhood of it. Various criteria have been developed to prove stability or instability of an orbit. Under favorable circumstances, the question may be reduced to a well-studied problem involving eigenvalues of matrices. A more general method involves Lyapunov functions. In practice, any one of a number of different stability criteria is applied.
Overview in dynamical systems
Many parts of the qualitative theory of differential equations and dynamical systems deal with asymptotic properties of solutions and the trajectories—what happens with the system after a long period of time. The simplest kind of behavior is exhibited by equilibrium points, or fixed points, and by periodic orbits. If a particular orbit is well understood, it is natural to ask next whether a small change in the initial condition will lead to similar behavior. Stability theory addresses the following questions: Will a nearby orbit indefinitely stay close to a given orbit? Will it converge to the given orbit? In the former case, the orbit is called stable; in the latter case, it is called asymptotically stable and the given orbit is said to be attracting.
An equilibrium solution to an autonomous system of first order ordinary differential equations is called:
stable if for every (small) ε > 0 there exists a δ > 0 such that every solution x(t) having initial conditions within distance δ of the equilibrium x_e, i.e. ‖x(t_0) − x_e‖ < δ, remains within distance ε, i.e. ‖x(t) − x_e‖ < ε, for all t ≥ t_0.
asymptotically stable if it is stable and, in addition, there exists δ_0 > 0 such that whenever ‖x(t_0) − x_e‖ < δ_0 then x(t) → x_e as t → ∞.
Stability means that the trajectories do not change too much under small perturbations. The opposite situation, where a nearby orbit is getting repelled from the given orbit, is also of interest. In general, perturbing the initial state in some directions results in the trajectory asymptotically approaching the given one and in other directions to the trajectory getting away from it. There may also be directions for which the behavior of the perturbed orbit is more complicated (neither converging nor escaping completely), and then stability theory does not give sufficient information about the dynamics.
One of the key ideas in stability theory is that the qualitative behavior of an orbit under perturbations can be analyzed using the linearization of the system near the orbit. In particular, at each equilibrium of a smooth dynamical system with an n-dimensional phase space, there is a certain n×n matrix A whose eigenvalues characterize the behavior of the nearby points (Hartman–Grobman theorem). More precisely, if all eigenvalues are negative real numbers or complex numbers with negative real parts then the point is a stable attracting fixed point, and the nearby points converge to it at an exponential rate, cf Lyapunov stability and exponential stability. If none of the eigenvalues are purely imaginary (or zero) then the attracting and repelling directions are related to the eigenspaces of the matrix A with eigenvalues whose real part is negative and, respectively, positive. Analogous statements are known for perturbations of more complicated orbits.
Stability of fixed points in 2D
The paradigmatic case is the stability of the origin under the linear autonomous differential equation dx/dt = Ax, where x ∈ R² and A is a 2-by-2 real matrix.
We would sometimes perform a change of basis by x' = Cx for some invertible matrix C, which gives dx'/dt = (CAC⁻¹)x'. We say that CAC⁻¹ is "A in the new basis". Since det(CAC⁻¹) = det A and tr(CAC⁻¹) = tr A, we can classify the stability of the origin using det A and tr A, while freely using change of basis.
Classification of stability types
If det A = 0, then the rank of A is zero or one.
If the rank is zero, then A = 0, and there is no flow.
If the rank is one, then ker A and im A are both one-dimensional.
If ker A = im A, then let v span ker A, and let w be a preimage of v (so that Aw = v); in the (v, w) basis, A has rows (0, 1) and (0, 0), and so the flow is a shearing along the v direction. In this case, tr A = 0.
If ker A ≠ im A, then let v span ker A and let w span im A; in the (v, w) basis, A has rows (0, 0) and (0, λ) for some nonzero real number λ.
If λ > 0, then it is unstable, diverging at a rate of λ from ker A along parallel translates of im A.
If λ < 0, then it is stable, converging at a rate of λ to ker A along parallel translates of im A.
If det A ≠ 0, we first find the Jordan normal form of the matrix, to obtain a basis in which A takes one of three possible forms:
a diagonal matrix with rows (λ₁, 0) and (0, λ₂), where λ₁ and λ₂ are nonzero real numbers.
If λ₁, λ₂ > 0, then the origin is a source, with integral curves of the form y = c·x^(λ₂/λ₁).
Similarly, if λ₁, λ₂ < 0, the origin is a sink.
If λ₁ > 0 > λ₂ or λ₂ > 0 > λ₁, then det A < 0, and the origin is a saddle point, with integral curves of the form y = c·x^(λ₂/λ₁).
a Jordan block with rows (λ, 1) and (0, λ), where λ ≠ 0. This case is called the "degenerate node". The flow can be solved explicitly: e^(At) equals e^(λt) times the matrix with rows (1, t) and (0, 1), so x(t) = e^(λt)(x(0) + t·y(0)) and y(t) = e^(λt)·y(0). The integral curves in this basis are central dilations of a single curve, plus the x-axis.
If λ > 0, then the origin is a degenerate source. Otherwise it is a degenerate sink.
In both cases, tr(A)² = 4·det A.
a matrix with rows (a, b) and (−b, a), where b ≠ 0. In this case, the eigenvalues are a ± bi, det A = a² + b² > 0 and tr A = 2a.
If a < 0, then this is a spiral sink. In this case, tr A < 0 and tr(A)² < 4·det A. The integral lines are logarithmic spirals.
If a > 0, then this is a spiral source. In this case, tr A > 0 and tr(A)² < 4·det A. The integral lines are logarithmic spirals.
If a = 0, then this is a rotation ("neutral stability") at a rate of b, moving neither towards nor away from the origin. In this case, tr A = 0 and det A > 0. The integral lines are circles.
The summary is shown in the stability diagram. In each case, except the case of tr(A)² = 4·det A, the values of det A and tr A allow unique classification of the type of flow.
For the special case of tr(A)² = 4·det A, there are two cases that cannot be distinguished by det A and tr A. In both cases, A has only one eigenvalue λ, with algebraic multiplicity 2.
If the eigenvalue λ has a two-dimensional eigenspace (geometric multiplicity 2), then the system is a central node (sometimes called a "star", or "dicritical node") which is either a source (when λ > 0) or a sink (when λ < 0).
If it has a one-dimensional eigenspace (geometric multiplicity 1), then the system is a degenerate node (if λ ≠ 0) or a shearing flow (if λ = 0).
Area-preserving flow
When tr A = 0, we have det(e^(At)) = e^(t·tr A) = 1, so the flow is area-preserving. In this case, the type of flow is classified by det A.
If det A > 0, then it is a rotation ("neutral stability") around the origin.
If det A = 0, then it is a shearing flow.
If det A < 0, then the origin is a saddle point.
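The trace–determinant classification above lends itself to a direct computational check. The following is a minimal sketch (in Python with NumPy; the function name and tolerance are illustrative choices, not part of any standard library), covering the generic cases by det A and tr A alone:
<syntaxhighlight lang="python">
import numpy as np

def classify_origin(A, tol=1e-12):
    """Classify the fixed point at the origin of x' = Ax for a real 2x2 matrix A,
    using only det(A) and tr(A), as in the trace-determinant classification above."""
    det = float(np.linalg.det(A))
    tr = float(np.trace(A))
    disc = tr**2 - 4.0 * det            # discriminant of the characteristic polynomial

    if abs(det) < tol:                  # rank 0 or 1
        return "degenerate (det A = 0): no flow, shear, or line of equilibria"
    if det < 0:
        return "saddle point (unstable)"
    if abs(tr) < tol:
        return "rotation / neutral stability"
    kind = "sink (stable)" if tr < 0 else "source (unstable)"
    if disc < -tol:
        return "spiral " + kind
    if disc > tol:
        return "node " + kind
    return "star or degenerate node " + kind   # tr(A)^2 = 4 det A: not distinguished by (det A, tr A)

# Example: det A = 5 > 0, tr A = -2 < 0, tr^2 - 4 det < 0, so a spiral sink
print(classify_origin(np.array([[-1.0, 2.0], [-2.0, -1.0]])))
</syntaxhighlight>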
Stability of fixed points
The simplest kind of an orbit is a fixed point, or an equilibrium. If a mechanical system is in a stable equilibrium state then a small push will result in a localized motion, for example, small oscillations as in the case of a pendulum. In a system with damping, a stable equilibrium state is moreover asymptotically stable. On the other hand, for an unstable equilibrium, such as a ball resting on a top of a hill, certain small pushes will result in a motion with a large amplitude that may or may not converge to the original state.
There are useful tests of stability for the case of a linear system. Stability of a nonlinear system can often be inferred from the stability of its linearization.
Maps
Let f : R → R be a continuously differentiable function with a fixed point a, f(a) = a. Consider the dynamical system obtained by iterating the function f: x_{n+1} = f(x_n), n = 0, 1, 2, ....
The fixed point a is stable if the absolute value of the derivative of f at a is strictly less than 1, and unstable if it is strictly greater than 1. This is because near the point a, the function f has a linear approximation with slope f′(a): f(x) ≈ f(a) + f′(a)(x − a).
Thus x_{n+1} − a = f(x_n) − f(a) ≈ f′(a)(x_n − a),
which means that the derivative f′(a) measures the rate at which the successive iterates approach the fixed point a or diverge from it. If the derivative at a is exactly 1 or −1, then more information is needed in order to decide stability.
There is an analogous criterion for a continuously differentiable map f : Rⁿ → Rⁿ with a fixed point a, expressed in terms of its Jacobian matrix at a, J = J_a(f). If all eigenvalues of J are real or complex numbers with absolute value strictly less than 1, then a is a stable fixed point; if at least one of them has absolute value strictly greater than 1, then a is unstable. Just as for n = 1, the case of the largest absolute value being 1 needs to be investigated further, as the Jacobian matrix test is inconclusive. The same criterion holds more generally for diffeomorphisms of a smooth manifold.
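As a concrete sketch of the derivative criterion (Python; the logistic map and the parameter value are arbitrary illustrative choices), the non-trivial fixed point of f(x) = r·x·(1 − x) has |f′| < 1 for r = 2.5, and iteration indeed converges to it:
<syntaxhighlight lang="python">
def f(x, r=2.5):
    """Logistic map, a simple continuously differentiable map of the line."""
    return r * x * (1.0 - x)

def df(x, r=2.5):
    """Its derivative f'(x) = r(1 - 2x)."""
    return r * (1.0 - 2.0 * x)

r = 2.5
x_star = 1.0 - 1.0 / r                    # non-trivial fixed point: f(x*) = x*
print("fixed point:", x_star, " f(x*):", f(x_star, r))
print("|f'(x*)| =", abs(df(x_star, r)))   # 0.5 < 1, so the fixed point is stable

x = x_star + 0.1                          # start nearby and iterate
for _ in range(50):
    x = f(x, r)
print("after 50 iterations:", x)          # converges back towards x* = 0.6
</syntaxhighlight>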
Linear autonomous systems
The stability of fixed points of a system of constant coefficient linear differential equations of first order can be analyzed using the eigenvalues of the corresponding matrix.
An autonomous system x′ = Ax, where x(t) ∈ Rⁿ and A is an n×n matrix with real entries, has the constant solution x(t) = 0.
(In a different language, the origin 0 ∈ Rⁿ is an equilibrium point of the corresponding dynamical system.) This solution is asymptotically stable as t → ∞ ("in the future") if and only if for all eigenvalues λ of A, Re(λ) < 0. Similarly, it is asymptotically stable as t → −∞ ("in the past") if and only if for all eigenvalues λ of A, Re(λ) > 0. If there exists an eigenvalue λ of A with Re(λ) > 0 then the solution is unstable for t → ∞.
Application of this result in practice, in order to decide the stability of the origin for a linear system, is facilitated by the Routh–Hurwitz stability criterion. The eigenvalues of a matrix are the roots of its characteristic polynomial. A polynomial in one variable with real coefficients is called a Hurwitz polynomial if the real parts of all roots are strictly negative. The Routh–Hurwitz theorem implies a characterization of Hurwitz polynomials by means of an algorithm that avoids computing the roots.
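Numerically, the eigenvalue condition is easy to check directly; the sketch below (Python with NumPy; the function name is illustrative and the matrix is an arbitrary example, a damped oscillator written as a first-order system) is one way to do it:
<syntaxhighlight lang="python">
import numpy as np

def origin_asymptotically_stable(A):
    """True if every eigenvalue of A has strictly negative real part,
    i.e. the origin of x' = Ax is asymptotically stable as t -> +infinity."""
    return bool(np.all(np.linalg.eigvals(A).real < 0))

# Damped harmonic oscillator x'' + 0.4 x' + x = 0, written as a first-order system
A = np.array([[0.0, 1.0],
              [-1.0, -0.4]])
print(np.linalg.eigvals(A))             # a complex pair with real part -0.2
print(origin_asymptotically_stable(A))  # True
</syntaxhighlight>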
Non-linear autonomous systems
Asymptotic stability of fixed points of a non-linear system can often be established using the Hartman–Grobman theorem.
Suppose that v is a C¹ vector field in Rⁿ which vanishes at a point p, v(p) = 0. Then the corresponding autonomous system x′ = v(x) has the constant solution x(t) = p.
Let J = J_p(v) be the n×n Jacobian matrix of the vector field v at the point p. If all eigenvalues of J have strictly negative real part then this solution is asymptotically stable. This condition can be tested using the Routh–Hurwitz criterion.
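As a sketch of this linearization test (Python with SymPy; the damped pendulum and its damping coefficient are illustrative choices), the Jacobian of the vector field is computed symbolically and evaluated at the equilibrium:
<syntaxhighlight lang="python">
import sympy as sp

x, y = sp.symbols('x y', real=True)   # x = angle, y = angular velocity
c = sp.Rational(1, 2)                 # damping coefficient (illustrative value)

# Damped pendulum x'' + c x' + sin(x) = 0 as a first-order system:
# x' = y, y' = -sin(x) - c*y
v = sp.Matrix([y, -sp.sin(x) - c * y])

J = v.jacobian([x, y])                # Jacobian matrix of the vector field
J0 = J.subs({x: 0, y: 0})             # evaluated at the equilibrium (0, 0)

print(J0)                             # Matrix([[0, 1], [-1, -1/2]])
print(list(J0.eigenvals()))           # both eigenvalues have real part -1/4 < 0,
                                      # so the equilibrium is asymptotically stable
</syntaxhighlight>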
Lyapunov function for general dynamical systems
A general way to establish Lyapunov stability or asymptotic stability of a dynamical system is by means of Lyapunov functions.
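A minimal sketch of such a check (Python with SymPy; the planar system and the candidate function V = x² + y² are chosen purely for illustration): the derivative of V along trajectories is computed symbolically and turns out to be negative definite, so the origin is asymptotically stable.
<syntaxhighlight lang="python">
import sympy as sp

x, y = sp.symbols('x y', real=True)

# Example system: x' = -x + y, y' = -x - y
fx = -x + y
fy = -x - y

V = x**2 + y**2                                             # candidate Lyapunov function
Vdot = sp.expand(sp.diff(V, x) * fx + sp.diff(V, y) * fy)   # dV/dt along trajectories
print(Vdot)                                                 # -2*x**2 - 2*y**2, negative definite
</syntaxhighlight>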
See also
Chaos theory
Lyapunov stability
Hyperstability
Linear stability
Orbital stability
Stability criterion
Stability radius
Structural stability
von Neumann stability analysis
References
External links
Stable Equilibria by Michael Schreiber, The Wolfram Demonstrations Project.
Limit sets
Mathematical and quantitative methods (economics)
Wuxing (Chinese philosophy)
Wuxing (五行), usually translated as Five Phases or Five Agents, is a fivefold conceptual scheme used in many traditional Chinese fields of study to explain a wide array of phenomena, including cosmic cycles, the interactions between internal organs, the succession of political regimes, and the properties of herbal medicines.
The agents are Fire, Water, Wood, Metal, and Earth. The wuxing system has been in use since it was formulated in the second or first century BCE during the Han dynasty. It appears in many seemingly disparate fields of early Chinese thought, including music, feng shui, alchemy, astrology, martial arts, military strategy, I Ching divination, and traditional medicine, serving as a metaphysics based on cosmic analogy.
Etymology
Wuxing originally referred to the five major planets (Jupiter, Saturn, Mercury, Mars, Venus), which, together with the Sun and the Moon, were conceived as creating five forces of earthly life. This is why the word is composed of Chinese characters meaning "five" and "moving". "Moving" is shorthand for "planets", since the word for planets in Chinese literally translates as "moving stars". Some of the Mawangdui Silk Texts (before 168 BC) also connect the wuxing to the wude (the Five Virtues) and the Five Emotions. Scholars believe that various predecessors to the concept of wuxing were merged into one system with many interpretations during the Han dynasty.
Wuxing was first translated into English as "the Five Elements", drawing deliberate parallels with the Greek arrangement of the four elements. This translation is still in common use among practitioners of Traditional Chinese medicine, such as in the name of Five Element acupuncture. However, this analogy is misleading. The four elements are concerned with form, substance and quantity, whereas wuxing are "primarily concerned with process, change, and quality". For example, the wuxing element "Wood" is more accurately thought of as the "vital essence" of trees rather than the physical substance wood. This led sinologist Nathan Sivin to propose the alternative translation "five phases" in 1987. But "phase" also fails to capture the full meaning of wuxing. In some contexts, the wuxing are indeed associated with physical substances. Historian of Chinese medicine Manfred Porkert proposed the (somewhat unwieldy) term "Evolutive Phase". Perhaps the most widely accepted translation among modern scholars is "the five agents", proposed by Marc Kalinowski.
Cycles
In traditional doctrine, the five phases are connected in two cycles of interactions: a generating or creation ( shēng) cycle, also known as "mother-son"; and an overcoming or destructive ( kè) cycle, also known as "grandfather-grandson" (see diagram). Each of the two cycles can be analyzed going forward or reversed. There is also an "overacting" or excessive version of the destructive cycle.
Inter-promoting
The generating cycle ( xiāngshēng) is:
Wood feeds Fire
Fire produces Earth (ash, lava)
Earth bears Metal (geological processes produce minerals)
Metal collects Water (water vapor condenses on metal, for example)
Water nourishes Wood (water allows trees, flowers and other plants to grow)
Weakening
The reverse generating cycle (/ xiāngxiè) is:
Wood depletes Water
Water rusts Metal
Metal impoverishes Earth (erosion, destructive mining of minerals)
Earth smothers Fire
Fire burns Wood (forest fires)
Inter-regulating
The destructive cycle ( xiāngkè) is:
Wood grasps (or stabilizes) Earth (roots of trees can prevent soil erosion)
Earth contains (or directs) Water (dams or river banks)
Water dampens (or regulates) Fire
Fire melts (or refines or shapes) Metal
Metal chops (or carves) Wood
Overacting
The excessive destructive cycle ( xiāngchéng) is:
Wood depletes Earth (depletion of nutrients in soil, over-farming, overcultivation)
Earth obstructs Water (over-damming)
Water extinguishes Fire
Fire melts Metal (affecting its integrity)
Metal makes Wood rigid so that it snaps easily.
Counteracting
A reverse or deficient destructive cycle ( xiāngwǔ or xiānghào) is:
Wood dulls Metal
Metal de-energizes Fire (conducting heat away)
Fire evaporates Water
Water muddies (or destabilizes) Earth
Earth rots Wood (buried wood rots)
Celestial stem
Ming neiyin
In Ziwei divination, neiyin further classifies the Five Elements into 60 ming, or life orders, based on the ganzhi. Similar to the astrology zodiac, the ming is used by fortune-tellers to analyse individual personality and destiny.
Applications
The wuxing schema is applied to explain phenomena in various fields.
Phases of the Year
The five phases are around 73 days each and are usually used to describe the transformations of nature rather than their formative states.
Wood/Spring: a period of growth, expanding which generates abundant vitality, movement and wind.
Fire/Summer: a period of swelling, flowering and expansion with heat.
Earth can be seen as a period of stillness transitioning between the other phases or seasons or when relating to transformative seasonal periods it can be seen as late Summer. This period is associated with stability, leveling and dampness.
Metal/Autumn: a period of harvesting, transmuting, contracting, collecting and dryness.
Water/Winter: a period of retreat, stillness, consolidation and coolness.
Cosmology and feng shui
The art of feng shui (Chinese geomancy) is based on wuxing, with the structure of the cosmos mirroring the five phases, as well as the eight trigrams. Each phase has a complex network of associations with different aspects of nature (see table): colors, seasons and shapes all interact according to the cycles.
An interaction or energy flow can be expansive, destructive, or exhaustive, depending on the cycle to which it belongs. By understanding these energy flows, a feng shui practitioner attempts to rearrange energy to benefit the client.
Dynastic transitions
According to the Warring States period political philosopher Zou Yan, each of the five elements possesses a personified virtue, which indicates the foreordained destiny of a dynasty; hence the cyclic succession of the elements also indicates dynastic transitions. Zou Yan claims that the Mandate of Heaven sanctions the legitimacy of a dynasty by sending self-manifesting auspicious signs in the ritual color (yellow, blue, white, red, and black) that matches the element of the new dynasty (Earth, Wood, Metal, Fire, and Water). From the Qin dynasty onward, most Chinese dynasties invoked the theory of the Five Elements to legitimize their reign.
Chinese medicine
The interdependence of zangfu networks in the body was said to be a circle of five things, and so mapped by the Chinese doctors onto the five phases.
In order to explain the integrity and complexity of the human body, Chinese medical scientists and physicians use the Five Elements theory to classify the human body's endogenous influences on organs, physiological activities, pathological reactions, and environmental or exogenous (external, environmental) influences. This diagnostic capacity is extensively used in traditional five phase acupuncture today, as opposed to the modern Confucian styled eight principles based Traditional Chinese medicine. Furthermore, in combination the two systems are a formative and functional study of postnatal and prenatal influencing on genetics, psychology, sociology and ecology.
Music
The Huainanzi and the Yueling chapter of the Book of Rites make the following correlations:
Qing is a Chinese color word used for both green and blue. Modern Mandarin has separate words for each, but like many other languages, older forms of Chinese did not distinguish between green and blue.
In most modern music, various five note or seven note scales (e.g., the major scale) are defined by selecting five or seven frequencies from the set of twelve semi-tones in the Equal tempered tuning. The Chinese shi'er lü system of tuning is closest to the ancient Greek tuning of Pythagoras.
Martial arts
Tai chi uses the five elements to designate different directions, positions or footwork patterns: forward, backward, left, right and centre, or three steps forward (attack) and two steps back (retreat).
The Five Steps:
Jinbu – forward step
Tuibu – backward step
Zuogu – left step
Youpan – right step
Zhongding – central position, balance, equilibrium
The martial art of xingyiquan uses the five elements metaphorically to represent five different states of combat.
Wuxing heqidao, Gogyo Aikido (五行合气道) is a life art with roots in Confucian, Taoists and Buddhist theory. It centers around applied peace and health studies rather than defence or physical action. It emphasizes the unification of mind, body and environment using the physiological theory of yin, yang and five-element Traditional Chinese medicine. Its movements, exercises, and teachings cultivate, direct, and harmonise the qi.
Gogyo
The Japanese term is gogyo (Japanese: 五行, romanized: gogyō). During the 5th and 6th centuries (Kofun period), Japan adopted various philosophical disciplines such as Taoism, Chinese Buddhism and Confucianism through monks and physicians from China. In particular, wuxing was adapted into gogyo, which developed within the Onmyōdō system; this contrasts with the form-based theory of Godai, which was introduced to Japan through Indian and Tibetan Buddhism. These theories have been extensively practiced in Japanese acupuncture and traditional Kampo medicine.
See also
Acupuncture
Classical element
Color in Chinese culture
Flying Star Feng Shui
Humorism
Qi
Wuxing painting
Zangfu
Yin and yang
Notes
References
Further reading
Feng Youlan (Yu-lan Fung), A History of Chinese Philosophy, volume 2, p. 13
Joseph Needham, Science and Civilization in China, volume 2, pp. 262–23.
External links
Wuxing (Wu-hsing). The Internet Encyclopedia of Philosophy, .
Classical Chinese philosophy
Taoist cosmology
Eastern esotericism
Operator (physics)
An operator is a function over a space of physical states onto another space of states. The simplest example of the utility of operators is the study of symmetry (which makes the concept of a group useful in this context). Because of this, they are useful tools in classical mechanics. Operators are even more important in quantum mechanics, where they form an intrinsic part of the formulation of the theory.
Operators in classical mechanics
In classical mechanics, the movement of a particle (or system of particles) is completely determined by the Lagrangian L(q, dq/dt, t) or, equivalently, the Hamiltonian H(q, p, t), a function of the generalized coordinates q, the generalized velocities dq/dt and the conjugate momenta p.
If either L or H is independent of a generalized coordinate q, meaning the L and H do not change when q is changed, which in turn means the dynamics of the particle are still the same even when q changes, the corresponding momenta conjugate to those coordinates will be conserved (this is part of Noether's theorem, and the invariance of motion with respect to the coordinate q is a symmetry). Operators in classical mechanics are related to these symmetries.
More technically, when H is invariant under the action of a certain group of transformations G:
S ∈ G, H(S(q, p)) = H(q, p).
The elements of G are physical operators, which map physical states among themselves.
Table of classical mechanics operators
where R(n̂, θ) is the rotation matrix about an axis defined by the unit vector n̂ and angle θ.
Generators
If the transformation is infinitesimal, the operator action should be of the form 1 + εA,
where 1 is the identity operator, ε is a parameter with a small value, and A will depend on the transformation at hand, and is called a generator of the group. Again, as a simple example, we will derive the generator of the space translations on 1D functions.
As it was stated, T_a f(x) = f(x − a). If a = ε is infinitesimal, then we may write T_ε f(x) = f(x − ε) ≈ f(x) − ε f′(x).
This formula may be rewritten as T_ε f(x) = (1 − ε·D) f(x),
where D = d/dx is the generator of the translation group, which in this case happens to be the derivative operator. Thus, it is said that the generator of translations is the derivative.
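A quick numerical sketch of this statement (Python; the test function, the point and the step size are arbitrary, and the convention T_a f(x) = f(x − a) from above is assumed): applying 1 − ε·d/dx to a function reproduces the infinitesimal translation up to terms of order ε².
<syntaxhighlight lang="python">
import numpy as np

eps = 1e-3
x = 0.7
f, df = np.sin, np.cos                   # a smooth test function and its derivative

translated = f(x - eps)                  # (T_eps f)(x) = f(x - eps)
generator_step = f(x) - eps * df(x)      # (1 - eps * d/dx) f(x)

print(translated, generator_step)
print("difference:", abs(translated - generator_step))   # of order eps**2
</syntaxhighlight>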
The exponential map
The whole group may be recovered, under normal circumstances, from the generators, via the exponential map. In the case of the translations the idea works like this.
The translation for a finite value of a may be obtained by repeated application of the infinitesimal translation: T_a = lim_{N→∞} (T_{a/N})^N,
with (T_{a/N})^N standing for the application of T_{a/N}, N times. If N is large, each of the factors may be considered to be infinitesimal: T_a = lim_{N→∞} (1 − (a/N)·D)^N.
But this limit may be rewritten as an exponential: T_a = exp(−a·D).
To be convinced of the validity of this formal expression, we may expand the exponential in a power series: T_a f(x) = (1 − a·D + a²D²/2! − a³D³/3! + ⋯) f(x).
The right-hand side may be rewritten as f(x) − a f′(x) + (a²/2!) f″(x) − (a³/3!) f‴(x) + ⋯,
which is just the Taylor expansion of f(x − a), which was our original value for T_a f(x).
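This power-series identity can be checked with a computer algebra system. The sketch below (Python with SymPy; the test function sin(x), the truncation order and the evaluation point are illustrative) sums the first terms of exp(−a·d/dx) acting on f and compares the result with f(x − a):
<syntaxhighlight lang="python">
import sympy as sp

x, a = sp.symbols('x a', real=True)
f = sp.sin(x)                            # illustrative test function

# Partial sum of exp(-a*D) f = sum_n (-a)**n / n! * d^n f / dx^n
N = 12
series = sum((-a)**n / sp.factorial(n) * sp.diff(f, x, n) for n in range(N + 1))

point = {x: 1.0, a: 0.3}
print(float(series.subs(point)))         # approximately sin(0.7)
print(float(sp.sin(x - a).subs(point)))  # the two values agree to high precision
</syntaxhighlight>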
The mathematical properties of physical operators are a topic of great importance in itself. For further information, see C*-algebra and Gelfand–Naimark theorem.
Operators in quantum mechanics
The mathematical formulation of quantum mechanics (QM) is built upon the concept of an operator.
Physical pure states in quantum mechanics are represented as unit-norm vectors (probabilities are normalized to one) in a special complex Hilbert space. Time evolution in this vector space is given by the application of the evolution operator.
Any observable, i.e., any quantity which can be measured in a physical experiment, should be associated with a self-adjoint linear operator. The operators must yield real eigenvalues, since they are values which may come up as the result of the experiment. Mathematically this means the operators must be Hermitian. The probability of each eigenvalue is related to the projection of the physical state on the subspace related to that eigenvalue. See below for mathematical details about Hermitian operators.
In the wave mechanics formulation of QM, the wavefunction varies with space and time, or equivalently momentum and time (see position and momentum space for details), so observables are differential operators.
In the matrix mechanics formulation, the norm of the physical state should stay fixed, so the evolution operator should be unitary, and the operators can be represented as matrices. Any other symmetry, mapping a physical state into another, should keep this restriction.
Wavefunction
The wavefunction must be square-integrable (see Lp spaces), meaning: ∫ |ψ(x)|² dx < ∞,
and normalizable, so that: ∫ |ψ(x)|² dx = 1.
Two cases of eigenstates (and eigenvalues) are:
for discrete eigenstates |φ_i⟩ forming a discrete basis, so any state is a sum |ψ⟩ = Σ_i c_i |φ_i⟩, where the c_i are complex numbers such that |c_i|² = c_i*·c_i is the probability of measuring the state |φ_i⟩, and the corresponding set of eigenvalues a_i is also discrete - either finite or countably infinite. In this case, the inner product of two eigenstates is given by ⟨φ_i|φ_j⟩ = δ_ij, where δ_ij denotes the Kronecker delta. However,
for a continuum of eigenstates |φ⟩ forming a continuous basis, any state is an integral |ψ⟩ = ∫ c(φ)|φ⟩ dφ, where c(φ) is a complex function such that |c(φ)|² = c(φ)*·c(φ) is the probability density of measuring the state |φ⟩, and there is an uncountably infinite set of eigenvalues a. In this case, the inner product of two eigenstates is defined as ⟨φ′|φ⟩ = δ(φ′ − φ), where here δ denotes the Dirac delta.
Linear operators in wave mechanics
Let ψ be the wavefunction for a quantum system, and Â be any linear operator for some observable A (such as position, momentum, energy, angular momentum etc.). If ψ is an eigenfunction of the operator Â, then Âψ = aψ,
where a is the eigenvalue of the operator, corresponding to the measured value of the observable, i.e. the observable A has a measured value a.
If ψ is an eigenfunction of a given operator Â, then a definite quantity (the eigenvalue a) will be observed if a measurement of the observable A is made on the state ψ. Conversely, if ψ is not an eigenfunction of Â, then it has no eigenvalue for Â, and the observable does not have a single definite value in that case. Instead, measurements of the observable A will yield each eigenvalue with a certain probability (related to the decomposition of ψ relative to the orthonormal eigenbasis of Â).
In bra–ket notation the above can be written in terms of the expressions Â|ψ⟩ and a|ψ⟩,
which are equal if |ψ⟩ is an eigenvector, or eigenket, of the observable Â.
Due to linearity, vectors can be defined in any number of dimensions, as each component of the vector acts on the function separately. One mathematical example is the del operator, which is itself a vector (useful in momentum-related quantum operators, in the table below).
An operator in n-dimensional space can be written: Â = Σ_j e_j Â_j,
where the e_j are basis vectors corresponding to each component operator Â_j. Each component will yield a corresponding eigenvalue a_j. Acting this on the wave function ψ: Âψ = Σ_j e_j (Â_j ψ) = Σ_j e_j (a_j ψ),
in which we have used Â_j ψ = a_j ψ.
In bra–ket notation: Â|ψ⟩ = Σ_j e_j Â_j |ψ⟩ = Σ_j e_j a_j |ψ⟩.
Commutation of operators on Ψ
If two observables A and B have linear operators Â and B̂, the commutator is defined by [Â, B̂] = ÂB̂ − B̂Â.
The commutator is itself a (composite) operator. Acting the commutator on ψ gives: [Â, B̂]ψ = ÂB̂ψ − B̂Âψ.
If ψ is an eigenfunction with eigenvalues a and b for observables A and B respectively, and if the operators commute: [Â, B̂]ψ = 0,
then the observables A and B can be measured simultaneously with infinite precision, i.e., with uncertainties ΔA = 0 and ΔB = 0 simultaneously. ψ is then said to be the simultaneous eigenfunction of A and B. To illustrate this: [Â, B̂]ψ = ÂB̂ψ − B̂Âψ = Â(bψ) − B̂(aψ) = abψ − baψ = 0.
It shows that measurement of A and B does not cause any shift of state, i.e., initial and final states are same (no disturbance due to measurement). Suppose we measure A to get value a. We then measure B to get the value b. We measure A again. We still get the same value a. Clearly the state (ψ) of the system is not destroyed and so we are able to measure A and B simultaneously with infinite precision.
If the operators do not commute: [Â, B̂]ψ ≠ 0,
they cannot be prepared simultaneously to arbitrary precision, and there is an uncertainty relation between the observables: ΔA·ΔB ≥ (1/2)|⟨[Â, B̂]⟩|,
even if ψ is an eigenfunction the above relation holds. Notable pairs are the position-and-momentum and energy-and-time uncertainty relations, and the angular momenta (spin, orbital and total) about any two orthogonal axes (such as L_x and L_y, or s_y and s_z, etc.).
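The canonical position–momentum commutator can be verified symbolically. A minimal sketch (Python with SymPy; ħ is kept as a symbol and ψ is an arbitrary test function, both illustrative):
<syntaxhighlight lang="python">
import sympy as sp

x = sp.Symbol('x', real=True)
hbar = sp.Symbol('hbar', positive=True)
psi = sp.Function('psi')(x)              # arbitrary test wavefunction

def X(f):
    """Position operator: multiplication by x."""
    return x * f

def P(f):
    """Momentum operator in the position basis: -i*hbar*d/dx."""
    return -sp.I * hbar * sp.diff(f, x)

commutator = sp.simplify(X(P(psi)) - P(X(psi)))
print(commutator)                        # I*hbar*psi(x), i.e. [x, p] psi = i*hbar*psi
</syntaxhighlight>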
Expectation values of operators on Ψ
The expectation value (equivalently the average or mean value) is the average measurement of an observable, for a particle in region R. The expectation value ⟨Â⟩ of the operator Â is calculated from: ⟨Â⟩ = ∫_R ψ*(r) Â ψ(r) d³r = ⟨ψ|Â|ψ⟩.
This can be generalized to any function F of an operator: ⟨F(Â)⟩ = ∫_R ψ*(r) [F(Â) ψ(r)] d³r = ⟨ψ|F(Â)|ψ⟩.
An example of F is the 2-fold action of Â on ψ, i.e. squaring an operator or applying it twice: ⟨Â²⟩ = ∫_R ψ*(r) Â² ψ(r) d³r = ⟨ψ|Â²|ψ⟩.
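As a numerical sketch (Python with NumPy; the Gaussian wavefunction, its centre x0 and width sigma are arbitrary illustrative choices), the expectation values of x and x² can be computed by direct integration and agree with the analytic values x0 and x0² + sigma²:
<syntaxhighlight lang="python">
import numpy as np

x = np.linspace(-10.0, 10.0, 20001)
x0, sigma = 1.5, 0.8
psi = (2 * np.pi * sigma**2) ** -0.25 * np.exp(-(x - x0) ** 2 / (4 * sigma**2))

norm = np.trapz(np.abs(psi) ** 2, x)           # ~ 1.0 (normalized)
exp_x = np.trapz(psi.conj() * x * psi, x)      # <x>   ~ 1.5
exp_x2 = np.trapz(psi.conj() * x**2 * psi, x)  # <x^2> ~ 1.5**2 + 0.8**2 = 2.89
print(norm, exp_x, exp_x2)
</syntaxhighlight>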
Hermitian operators
The definition of a Hermitian operator is: Â = Â†, where Â† denotes the Hermitian conjugate (conjugate transpose) of Â.
Following from this, in bra–ket notation: ⟨φ_i|Â|φ_j⟩ = ⟨φ_j|Â|φ_i⟩*.
Important properties of Hermitian operators include:
real eigenvalues,
eigenvectors with different eigenvalues are orthogonal,
eigenvectors can be chosen to be a complete orthonormal basis,
Operators in matrix mechanics
An operator can be written in matrix form to map one basis vector to another. Since the operators are linear, the matrix is a linear transformation (also known as a transition matrix) between bases. Each basis element |φ_j⟩ can be connected to another by the expression A_ij = ⟨φ_i|Â|φ_j⟩,
which is a matrix element of the operator Â.
A further property of a Hermitian operator is that eigenfunctions corresponding to different eigenvalues are orthogonal. In matrix form, operators allow real eigenvalues to be found, corresponding to measurements. Orthogonality allows a suitable basis set of vectors to represent the state of the quantum system. The eigenvalues of the operator are also evaluated in the same way as for the square matrix, by solving the characteristic polynomial: det(Â − a·Î) = 0,
where Î is the n × n identity matrix, which as an operator corresponds to the identity operator. For a discrete basis: Σ_i |φ_i⟩⟨φ_i| = Î,
while for a continuous basis: ∫ |φ⟩⟨φ| dφ = Î.
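A small sketch of the matrix picture (Python with NumPy; the 3×3 Hermitian matrix is an arbitrary example): the eigenvalues come out real and the eigenvectors form an orthonormal basis, as expected for a Hermitian operator.
<syntaxhighlight lang="python">
import numpy as np

# An arbitrary Hermitian matrix standing in for an operator in a 3-dimensional basis
A = np.array([[2.0, 1.0 - 1.0j, 0.0],
              [1.0 + 1.0j, 3.0, 0.5j],
              [0.0, -0.5j, 1.0]])
assert np.allclose(A, A.conj().T)           # Hermitian: A equals its conjugate transpose

eigenvalues, eigenvectors = np.linalg.eigh(A)
print(eigenvalues)                          # all real
print(np.allclose(eigenvectors.conj().T @ eigenvectors, np.eye(3)))  # orthonormal eigenbasis
</syntaxhighlight>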
Inverse of an operator
A non-singular operator Â has an inverse Â⁻¹ defined by: Â Â⁻¹ = Â⁻¹ Â = Î.
If an operator has no inverse, it is a singular operator. In a finite-dimensional space, an operator is non-singular if and only if its determinant is nonzero: det(Â) ≠ 0,
and hence the determinant is zero for a singular operator.
Table of QM operators
The operators used in quantum mechanics are collected in the table below (see for example). The bold-face vectors with circumflexes are not unit vectors, they are 3-vector operators; all three spatial components taken together.
{| class="wikitable"
|- style="vertical-align:top;"
! scope="col" | Operator (common name/s)
! scope="col" | Cartesian component
! scope="col" | General definition
! scope="col" | SI unit
! scope="col" | Dimension
|- style="vertical-align:top;"
! Position
|
|
| m
| [L]
|- style="vertical-align:top;"
!rowspan="2"| Momentum
| General
| General
| J s m−1 = N s
| [M] [L] [T]−1
|- style="vertical-align:top;"
| Electromagnetic field
| Electromagnetic field (uses kinetic momentum; A, vector potential)
| J s m−1 = N s
| [M] [L] [T]−1
|- style="vertical-align:top;"
!rowspan="3"| Kinetic energy
| Translation
|
| J
| [M] [L]2 [T]−2
|- style="vertical-align:top;"
| Electromagnetic field
| Electromagnetic field (A, vector potential)
| J
| [M] [L]2 [T]−2
|- style="vertical-align:top;"
| Rotation (I, moment of inertia)
| Rotation
| J
| [M] [L]2 [T]−2
|- style="vertical-align:top;"
! Potential energy
| N/A
|
| J
| [M] [L]2 [T]−2
|- style="vertical-align:top;"
! Total energy
| N/A
| Time-dependent potential:
Time-independent:
| J
| [M] [L]2 [T]−2
|- style="vertical-align:top;"
! Hamiltonian
|
|
| J
| [M] [L]2 [T]−2
|- style="vertical-align:top;"
! Angular momentum operator
|
|
| J s = N s m
| [M] [L]2 [T]−1
|- style="vertical-align:top;"
! Spin angular momentum
|
where
are the Pauli matrices for spin-1/2 particles.
|
where σ is the vector whose components are the Pauli matrices.
| J s = N s m
| [M] [L]2 [T]−1
|- style="vertical-align:top;"
! Total angular momentum
|
|
| J s = N s m
| [M] [L]2 [T]−1
|- style="vertical-align:top;"
! Transition dipole moment (electric)
|
|
| C m
| [I] [T] [L]
|}
Examples of applying quantum operators
The procedure for extracting information from a wave function is as follows. Consider the momentum p of a particle as an example. The momentum operator in position basis in one dimension is: p̂ = −iħ ∂/∂x.
Letting this act on ψ we obtain: p̂ψ = −iħ ∂ψ/∂x;
if ψ is an eigenfunction of p̂, then the momentum eigenvalue p is the value of the particle's momentum, found by: −iħ ∂ψ/∂x = pψ.
For three dimensions the momentum operator uses the nabla operator to become: p̂ = −iħ∇.
In Cartesian coordinates (using the standard Cartesian basis vectors e_x, e_y, e_z) this can be written: p̂ = −iħ (e_x ∂/∂x + e_y ∂/∂y + e_z ∂/∂z),
that is: p̂_x = −iħ ∂/∂x, p̂_y = −iħ ∂/∂y, p̂_z = −iħ ∂/∂z.
The process of finding eigenvalues is the same. Since this is a vector and operator equation, if ψ is an eigenfunction, then each component of the momentum operator will have an eigenvalue corresponding to that component of momentum. Acting p̂ on ψ obtains: p̂_x ψ = −iħ ∂ψ/∂x = p_x ψ, p̂_y ψ = −iħ ∂ψ/∂y = p_y ψ, p̂_z ψ = −iħ ∂ψ/∂z = p_z ψ.
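A sketch of this calculation (Python with SymPy; the plane wave is the standard illustrative choice of eigenfunction): exp(i·k·x) is an eigenfunction of −iħ·d/dx with momentum eigenvalue ħk.
<syntaxhighlight lang="python">
import sympy as sp

x = sp.Symbol('x', real=True)
k, hbar = sp.symbols('k hbar', positive=True)
psi = sp.exp(sp.I * k * x)                  # plane wave

p_psi = -sp.I * hbar * sp.diff(psi, x)      # momentum operator acting on psi
print(sp.simplify(p_psi / psi))             # hbar*k: the momentum eigenvalue
</syntaxhighlight>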
See also
Bounded linear operator
Representation theory
References
Operator theory
Theoretical physics
Natural environment
The natural environment or natural world encompasses all biotic and abiotic things occurring naturally, meaning in this case not artificial. The term is most often applied to Earth or some parts of Earth. This environment encompasses the interaction of all living species, climate, weather and natural resources that affect human survival and economic activity.
The concept of the natural environment can be distinguished as components:
Complete ecological units that function as natural systems without massive civilized human intervention, including all vegetation, microorganisms, soil, rocks, plateaus, mountains, the atmosphere and natural phenomena that occur within their boundaries and their nature.
Universal natural resources and physical phenomena that lack clear-cut boundaries, such as air, water and climate, as well as energy, radiation, electric charge and magnetism, not originating from civilized human actions.
In contrast to the natural environment is the built environment. In built environments, such as urban settings and areas of agricultural land conversion, humans have fundamentally transformed landscapes and the natural environment is greatly changed into a simplified human environment. Even acts which seem less extreme, such as building a mud hut or a photovoltaic system in the desert, produce a modified, artificial environment. Though many animals build things to provide a better environment for themselves, they are not human, hence beaver dams and the works of mound-building termites are thought of as natural.
People cannot find absolutely natural environments on Earth, and naturalness usually varies in a continuum, from 100% natural at one extreme to 0% natural at the other. The massive environmental changes of humanity in the Anthropocene have fundamentally affected all natural environments, including through climate change, biodiversity loss and pollution from plastic and other chemicals in the air and water. More precisely, we can consider the different aspects or components of an environment, and see that their degree of naturalness is not uniform. In an agricultural field, for instance, the mineralogic composition of the soil may be similar to that of an undisturbed forest soil, while its structure is quite different.
Composition
Earth science generally recognizes four spheres, the lithosphere, the hydrosphere, the atmosphere and the biosphere as correspondent to rocks, water, air and life respectively. Some scientists include as part of the spheres of the Earth, the cryosphere (corresponding to ice) as a distinct portion of the hydrosphere, as well as the pedosphere (to soil) as an active and intermixed sphere. Earth science (also known as geoscience, the geographical sciences or the Earth Sciences), is an all-embracing term for the sciences related to the planet Earth. There are four major disciplines in earth sciences, namely geography, geology, geophysics and geodesy. These major disciplines use physics, chemistry, biology, chronology and mathematics to build a qualitative and quantitative understanding of the principal areas or spheres of Earth.
Geological activity
The Earth's crust or lithosphere, is the outermost solid surface of the planet and is chemically, physically and mechanically different from underlying mantle. It has been generated greatly by igneous processes in which magma cools and solidifies to form solid rock. Beneath the lithosphere lies the mantle which is heated by the decay of radioactive elements. The mantle though solid is in a state of rheic convection. This convection process causes the lithospheric plates to move, albeit slowly. The resulting process is known as plate tectonics. Volcanoes result primarily from the melting of subducted crust material or of rising mantle at mid-ocean ridges and mantle plumes.
Water on Earth
Most water is found in various kinds of natural body of water.
Oceans
An ocean is a major body of saline water and a component of the hydrosphere. Approximately 71% of the surface of the Earth (an area of some 362 million square kilometers) is covered by ocean, a continuous body of water that is customarily divided into several principal oceans and smaller seas. More than half of this area is over 3,000 meters (9,800 ft) deep. Average oceanic salinity is around 35 parts per thousand (ppt) (3.5%), and nearly all seawater has a salinity in the range of 30 to 38 ppt. Though generally recognized as several separate oceans, these waters comprise one global, interconnected body of salt water often referred to as the World Ocean or global ocean. The deep seabeds are more than half the Earth's surface, and are among the least-modified natural environments. The major oceanic divisions are defined in part by the continents, various archipelagos and other criteria, these divisions are : (in descending order of size) the Pacific Ocean, the Atlantic Ocean, the Indian Ocean, the Southern Ocean and the Arctic Ocean.
Rivers
A river is a natural watercourse, usually freshwater, flowing toward an ocean, a lake, a sea or another river. A few rivers simply flow into the ground and dry up completely without reaching another body of water.
The water in a river is usually in a channel, made up of a stream bed between banks. In larger rivers there is often also a wider floodplain shaped by waters over-topping the channel. Flood plains may be very wide in relation to the size of the river channel. Rivers are a part of the hydrological cycle. Water within a river is generally collected from precipitation through surface runoff, groundwater recharge, springs and the release of water stored in glaciers and snowpacks.
Small rivers may also be called by several other names, including stream, creek and brook. Their current is confined within a bed and stream banks. Streams play an important corridor role in connecting fragmented habitats and thus in conserving biodiversity. The study of streams and waterways in general is known as surface hydrology.
Lakes
A lake (from Latin lacus) is a terrain feature, a body of water that is localized to the bottom of basin. A body of water is considered a lake when it is inland, is not part of an ocean and is larger and deeper than a pond.
Natural lakes on Earth are generally found in mountainous areas, rift zones and areas with ongoing or recent glaciation. Other lakes are found in endorheic basins or along the courses of mature rivers. In some parts of the world, there are many lakes because of chaotic drainage patterns left over from the last ice age. All lakes are temporary over geologic time scales, as they will slowly fill in with sediments or spill out of the basin containing them.
Ponds
A pond is a body of standing water, either natural or human-made, that is usually smaller than a lake. A wide variety of human-made bodies of water are classified as ponds, including water gardens designed for aesthetic ornamentation, fish ponds designed for commercial fish breeding and solar ponds designed to store thermal energy. Ponds and lakes are distinguished from streams by their current speed. While currents in streams are easily observed, ponds and lakes possess thermally driven micro-currents and moderate wind-driven currents. These features distinguish a pond from many other aquatic terrain features, such as stream pools and tide pools.
Human impact on water
Humans impact the water in different ways such as modifying rivers (through dams and stream channelization), urbanization and deforestation. These impact lake levels, groundwater conditions, water pollution, thermal pollution, and marine pollution. Humans modify rivers by using direct channel manipulation: they build dams and reservoirs and manipulate the direction of rivers and the path of water. Dams can usefully create reservoirs and hydroelectric power. However, reservoirs and dams may negatively impact the environment and wildlife. Dams stop fish migration and the movement of organisms downstream. Urbanization affects the environment because of deforestation and changing lake levels, groundwater conditions, etc. Deforestation and urbanization go hand in hand. Deforestation may cause flooding, declining stream flow and changes in riverside vegetation. The changing vegetation occurs because when trees cannot get adequate water they start to deteriorate, leading to a decreased food supply for the wildlife in an area.
Atmosphere, climate and weather
The atmosphere of the Earth serves as a key factor in sustaining the planetary ecosystem. The thin layer of gases that envelops the Earth is held in place by the planet's gravity. Dry air consists of 78% nitrogen, 21% oxygen, 1% argon, inert gases and carbon dioxide. The remaining gases are often referred to as trace gases. The atmosphere includes greenhouse gases such as carbon dioxide, methane, nitrous oxide and ozone. Filtered air includes trace amounts of many other chemical compounds. Air also contains a variable amount of water vapor and suspensions of water droplets and ice crystals seen as clouds. Many natural substances may be present in tiny amounts in an unfiltered air sample, including dust, pollen and spores, sea spray, volcanic ash and meteoroids. Various industrial pollutants also may be present, such as chlorine (elementary or in compounds), fluorine compounds, elemental mercury, and sulphur compounds such as sulphur dioxide (SO2).
The ozone layer of the Earth's atmosphere plays an important role in reducing the amount of ultraviolet (UV) radiation that reaches the surface. As DNA is readily damaged by UV light, this serves to protect life at the surface. The atmosphere also retains heat during the night, thereby reducing the daily temperature extremes.
Layers of the atmosphere
Principal layers
Earth's atmosphere can be divided into five main layers. These layers are mainly determined by whether temperature increases or decreases with altitude. From highest to lowest, these layers are:
Exosphere: The outermost layer of Earth's atmosphere extends from the exobase upward, mainly composed of hydrogen and helium.
Thermosphere: The top of the thermosphere is the bottom of the exosphere, called the exobase. Its height varies with solar activity and ranges from about 500 to 1,000 km (1,600,000 to 3,300,000 feet). The International Space Station orbits in this layer. Put another way, the thermosphere is Earth's second highest atmospheric layer, extending from approximately 260,000 feet at the mesopause to the thermopause at altitudes ranging from 1,600,000 to 3,300,000 feet.
Mesosphere: The mesosphere extends from the stratopause to about 80–85 km (260,000–280,000 ft). It is the layer where most meteors burn up upon entering the atmosphere.
Stratosphere: The stratosphere extends from the tropopause to about 50 km (31 mi). The stratopause, which is the boundary between the stratosphere and mesosphere, typically is at 50 to 55 km (31 to 34 mi).
Troposphere: The troposphere begins at the surface and extends to between about 7 km (23,000 ft) at the poles and 17 km (56,000 ft) at the equator, with some variation due to weather. The troposphere is mostly heated by transfer of energy from the surface, so on average the lowest part of the troposphere is warmest and temperature decreases with altitude. The tropopause is the boundary between the troposphere and stratosphere.
Other layers
Within the five principal layers determined by temperature there are several layers determined by other properties.
The ozone layer is contained within the stratosphere. It is mainly located in the lower portion of the stratosphere from about 15 to 35 km (9 to 22 mi), though the thickness varies seasonally and geographically. About 90% of the ozone in our atmosphere is contained in the stratosphere.
The ionosphere: The part of the atmosphere that is ionized by solar radiation stretches from about 50 to 1,000 km (31 to 620 mi) and typically overlaps both the exosphere and the thermosphere. It forms the inner edge of the magnetosphere.
The homosphere and heterosphere: The homosphere includes the troposphere, stratosphere and mesosphere. The upper part of the heterosphere is composed almost completely of hydrogen, the lightest element.
The planetary boundary layer is the part of the troposphere that is nearest the Earth's surface and is directly affected by it, mainly through turbulent diffusion.
Effects of global warming
The dangers of global warming are being increasingly studied by a wide global consortium of scientists. These scientists are increasingly concerned about the potential long-term effects of global warming on our natural environment and on the planet. Of particular concern is how climate change and global warming caused by anthropogenic, or human-made releases of greenhouse gases, most notably carbon dioxide, can act interactively and have adverse effects upon the planet, its natural environment and humans' existence. It is clear the planet is warming, and warming rapidly. This is due to the greenhouse effect, which is caused by greenhouse gases, which trap heat inside the Earth's atmosphere because of their more complex molecular structure which allows them to vibrate and in turn trap heat and release it back towards the Earth. This warming is also responsible for the extinction of natural habitats, which in turn leads to a reduction in wildlife population. The most recent report from the Intergovernmental Panel on Climate Change (the group of the leading climate scientists in the world) concluded that the earth will warm anywhere from 2.7 to almost 11 degrees Fahrenheit (1.5 to 6 degrees Celsius) between 1990 and 2100.
Efforts have been increasingly focused on the mitigation of greenhouse gases that are causing climatic changes, on developing adaptative strategies to global warming, to assist humans, other animal, and plant species, ecosystems, regions and nations in adjusting to the effects of global warming. Some examples of recent collaboration to address climate change and global warming include:
The United Nations Framework Convention Treaty and convention on Climate Change, to stabilize greenhouse gas concentrations in the atmosphere at a level that would prevent dangerous anthropogenic interference with the climate system.
The Kyoto Protocol, which is the protocol to the international Framework Convention on Climate Change treaty, again with the objective of reducing greenhouse gases in an effort to prevent anthropogenic climate change.
The Western Climate Initiative, to identify, evaluate, and implement collective and cooperative ways to reduce greenhouse gases in the region, focusing on a market-based cap-and-trade system.
A significantly profound challenge is to identify the natural environmental dynamics in contrast to environmental changes not within natural variances. A common solution is to adopt a static view that neglects the existence of natural variances. Methodologically, this view could be defended when looking at processes which change slowly and at short time series, while the problem arises when fast processes become essential to the object of the study.
Climate
Climate looks at the statistics of temperature, humidity, atmospheric pressure, wind, rainfall, atmospheric particle count and other meteorological elements in a given region over long periods of time. Weather, on the other hand, is the present condition of these same elements over periods up to two weeks.
Climates can be classified according to the average and typical ranges of different variables, most commonly temperature and precipitation. The most commonly used classification scheme is the one originally developed by Wladimir Köppen. The Thornthwaite system, in use since 1948, uses evapotranspiration as well as temperature and precipitation information to study animal species diversity and the potential impacts of climate changes.
Weather
Weather is a set of all the phenomena occurring in a given atmospheric area at a given time. Most weather phenomena occur in the troposphere, just below the stratosphere. Weather refers, generally, to day-to-day temperature and precipitation activity, whereas climate is the term for the average atmospheric conditions over longer periods of time. When used without qualification, "weather" is understood to be the weather of Earth.
Weather occurs due to density (temperature and moisture) differences between one place and another. These differences can occur due to the sun angle at any particular spot, which varies by latitude from the tropics. The strong temperature contrast between polar and tropical air gives rise to the jet stream. Weather systems in the mid-latitudes, such as extratropical cyclones, are caused by instabilities of the jet stream flow. Because the Earth's axis is tilted relative to its orbital plane, sunlight is incident at different angles at different times of the year. On the Earth's surface, temperatures usually range ±40 °C (100 °F to −40 °F) annually. Over thousands of years, changes in the Earth's orbit have affected the amount and distribution of solar energy received by the Earth and influenced long-term climate.
Surface temperature differences in turn cause pressure differences. Higher altitudes are cooler than lower altitudes due to differences in compressional heating. Weather forecasting is the application of science and technology to predict the state of the atmosphere for a future time and a given location. The atmosphere is a chaotic system, and small changes to one part of the system can grow to have large effects on the system as a whole. Human attempts to control the weather have occurred throughout human history, and there is evidence that civilized human activity such as agriculture and industry has inadvertently modified weather patterns.
Life
Evidence suggests that life on Earth has existed for about 3.7 billion years. All known life forms share fundamental molecular mechanisms, and based on these observations, theories on the origin of life attempt to find a mechanism explaining the formation of a primordial single cell organism from which all life originates. There are many different hypotheses regarding the path that might have been taken from simple organic molecules via pre-cellular life to protocells and metabolism.
Although there is no universal agreement on the definition of life, scientists generally accept that the biological manifestation of life is characterized by organization, metabolism, growth, adaptation, response to stimuli and reproduction. Life may also be said to be simply the characteristic state of organisms. In biology, the science of living organisms, "life" is the condition which distinguishes active organisms from inorganic matter, including the capacity for growth, functional activity and the continual change preceding death.
A diverse variety of living organisms (life forms) can be found in the biosphere on Earth, and properties common to these organisms—plants, animals, fungi, protists, archaea, and bacteria—are a carbon- and water-based cellular form with complex organization and heritable genetic information. Living organisms undergo metabolism, maintain homeostasis, possess a capacity to grow, respond to stimuli, reproduce and, through natural selection, adapt to their environment in successive generations. More complex living organisms can communicate through various means.
Ecosystems
An ecosystem (also called an environment) is a natural unit consisting of all plants, animals, and micro-organisms (biotic factors) in an area functioning together with all of the non-living physical (abiotic) factors of the environment.
Central to the ecosystem concept is the idea that living organisms are continually engaged in a highly interrelated set of relationships with every other element constituting the environment in which they exist. Eugene Odum, one of the founders of the science of ecology, stated: "Any unit that includes all of the organisms (i.e.: the "community") in a given area interacting with the physical environment so that a flow of energy leads to clearly defined trophic structure, biotic diversity, and material cycles (i.e.: exchange of materials between living and nonliving parts) within the system is an ecosystem."
The human ecosystem concept is then grounded in the deconstruction of the human/nature dichotomy, and the emergent premise that all species are ecologically integrated with each other, as well as with the abiotic constituents of their biotope.
A more significant number or variety of species or biological diversity of an ecosystem may contribute to greater resilience of an ecosystem because there are more species present at a location to respond to change and thus "absorb" or reduce its effects. This reduces the effect before the ecosystem's structure changes to a different state. This is not universally the case and there is no proven relationship between the species diversity of an ecosystem and its ability to provide goods and services on a sustainable level.
The term ecosystem can also pertain to human-made environments, such as human ecosystems and human-influenced ecosystems. It can describe any situation in which there is a relationship between living organisms and their environment. Few areas on the surface of the Earth today remain free from human contact, although some genuine wilderness areas continue to exist without any form of human intervention.
Biogeochemical cycles
Global biogeochemical cycles are critical to life, most notably those of water, oxygen, carbon, nitrogen and phosphorus.
The nitrogen cycle is the transformation of nitrogen and nitrogen-containing compounds in nature. It is a cycle which includes gaseous components.
The water cycle is the continuous movement of water on, above, and below the surface of the Earth. Water can change states among liquid, vapour, and ice at various places in the water cycle. Although the balance of water on Earth remains fairly constant over time, individual water molecules can come and go.
The carbon cycle is the biogeochemical cycle by which carbon is exchanged among the biosphere, pedosphere, geosphere, hydrosphere, and atmosphere of the Earth.
The oxygen cycle is the movement of oxygen within and between its three main reservoirs: the atmosphere, the biosphere, and the lithosphere. The main driving factor of the oxygen cycle is photosynthesis, which is responsible for the modern Earth's atmospheric composition and life.
The phosphorus cycle is the movement of phosphorus through the lithosphere, hydrosphere, and biosphere. The atmosphere does not play a significant role in the movements of phosphorus, because phosphorus and phosphorus compounds are usually solids at the typical ranges of temperature and pressure found on Earth.
Wilderness
Wilderness is generally defined as a natural environment on Earth that has not been significantly modified by human activity. The WILD Foundation goes into more detail, defining wilderness as: "The most intact, undisturbed wild natural areas left on our planet – those last truly wild places that humans do not control and have not developed with roads, pipelines or other industrial infrastructure." Wilderness areas and protected parks are considered important for the survival of certain species, ecological studies, conservation, solitude, and recreation. Wilderness is deeply valued for cultural, spiritual, moral, and aesthetic reasons. Some nature writers believe wilderness areas are vital for the human spirit and creativity.
The word, "wilderness", derives from the notion of wildness; in other words that which is not controllable by humans. The word etymology is from the Old English wildeornes, which in turn derives from wildeor meaning wild beast (wild + deor = beast, deer). From this point of view, it is the wildness of a place that makes it a wilderness. The mere presence or activity of people does not disqualify an area from being "wilderness". Many ecosystems that are, or have been, inhabited or influenced by activities of people may still be considered "wild". This way of looking at wilderness includes areas within which natural processes operate without very noticeable human interference.
Wildlife includes all non-domesticated plants, animals and other organisms. Domesticating wild plant and animal species for human benefit has occurred many times all over the planet, and has a major impact on the environment, both positive and negative. Wildlife can be found in all ecosystems. Deserts, rain forests, plains, and other areas, including the most developed urban sites, all have distinct forms of wildlife. While the term in popular culture usually refers to animals that are untouched by civilized human factors, most scientists agree that wildlife around the world is now affected by human activities.
Challenges
It is the common understanding of the natural environment that underlies environmentalism, a broad political, social and philosophical movement that advocates various actions and policies in the interest of protecting what remains of nature in the natural environment, or restoring or expanding the role of nature in this environment. While true wilderness is increasingly rare, wild nature (e.g., unmanaged forests, uncultivated grasslands, wildlife, wildflowers) can be found in many locations previously inhabited by humans.
Goals for the benefit of people and natural systems, commonly expressed by environmental scientists and environmentalists include:
Elimination of pollution and toxicants in air, water, soil, buildings, manufactured goods, and food.
Preservation of biodiversity and protection of endangered species.
Conservation and sustainable use of resources such as water, land, air, energy, raw materials, and natural resources.
Halting human-induced global warming, which represents pollution, a threat to biodiversity, and a threat to human populations.
Shifting from fossil fuels to renewable energy in electricity, heating and cooling, and transportation, which addresses pollution, global warming, and sustainability. This may include public transportation and distributed generation, which have benefits for traffic congestion and electric reliability.
Shifting from meat-intensive diets to largely plant-based diets in order to help mitigate biodiversity loss and climate change.
Establishment of nature reserves for recreational purposes and ecosystem preservation.
Sustainable and less polluting waste management including waste reduction (or even zero waste), reuse, recycling, composting, waste-to-energy, and anaerobic digestion of sewage sludge.
Reducing profligate consumption and clamping down on illegal fishing and logging.
Slowing and stabilisation of human population growth.
Reducing the import of second hand electronic appliances from developed countries to developing countries.
Criticism
In some cultures the term environment is meaningless because there is no separation between people and what they view as the natural world, or their surroundings. Specifically, in the United States and Arabian countries, many native cultures do not recognize the "environment" as something separate from themselves, nor do they see themselves as environmentalists.
See also
Biophilic design
Citizen's dividend
Conservation movement
Environmental history of the United States
Gaia hypothesis
Geological engineering
Greening
Index of environmental articles
List of conservation topics
List of environmental books
List of environmental issues
List of environmental websites
Natural capital
Natural history
Natural landscape
Nature-based solutions
Sustainability
Sustainable agriculture
Timeline of environmental history
References
Further reading
Allaby, Michael, and Chris Park, eds. A dictionary of environment and conservation (Oxford University Press, 2013), with a British emphasis.
External links
UNEP - United Nations Environment Programme
BBC - Science and Nature.
Science.gov – Environment & Environmental Quality
Habitat
Earth
Chapman–Kolmogorov equation

In mathematics, specifically in the theory of Markovian stochastic processes in probability theory, the Chapman–Kolmogorov equation (CKE) is an identity relating the joint probability distributions of different sets of coordinates on a stochastic process. The equation was derived independently by the British mathematician Sydney Chapman and the Russian mathematician Andrey Kolmogorov. The CKE is prominently used in recent variational Bayesian methods.
Mathematical description
Suppose that { f_i } is an indexed collection of random variables, that is, a stochastic process. Let

p_{i_1,\ldots,i_n}(f_1,\ldots,f_n)

be the joint probability density function of the values of the random variables f_1 to f_n. Then, the Chapman–Kolmogorov equation is

p_{i_1,\ldots,i_{n-1}}(f_1,\ldots,f_{n-1}) = \int_{-\infty}^{\infty} p_{i_1,\ldots,i_n}(f_1,\ldots,f_n)\, df_n ,
i.e. a straightforward marginalization over the nuisance variable.
(Note that nothing yet has been assumed about the temporal (or any other) ordering of the random variables—the above equation applies equally to the marginalization of any of them.)
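As a minimal numerical illustration (not from the source), a discretized joint distribution can be stored as an array, and the Chapman–Kolmogorov marginalization then becomes a sum over the axis of the nuisance variable; the grid sizes in this Python sketch are arbitrary.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical joint probability mass function p(f1, f2, f3) on a 4 x 5 x 6 grid.
p123 = rng.random((4, 5, 6))
p123 /= p123.sum()

# Chapman–Kolmogorov marginalization: sum out the nuisance variable f3 (last axis).
p12 = p123.sum(axis=-1)

assert np.isclose(p12.sum(), 1.0)   # still a normalized distribution
assert p12.shape == (4, 5)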
In terms of Markov kernels
If we consider the Markov kernels induced by the transitions of a Markov process, the Chapman–Kolmogorov equation can be seen as giving a way of composing kernels, generalizing the way stochastic matrices compose. Given a measurable space (S, \mathcal{S}) and a Markov kernel k \colon S \times \mathcal{S} \to [0,1], the two-step transition kernel is given by

k^2(x, B) = \int_S k(y, B)\, k(x, dy)

for all x \in S and B \in \mathcal{S}.
One can interpret this as a sum, over all intermediate states, of pairs of independent probabilistic transitions.
More generally, given measurable spaces (S_1, \mathcal{S}_1), (S_2, \mathcal{S}_2) and (S_3, \mathcal{S}_3), and Markov kernels k \colon S_1 \times \mathcal{S}_2 \to [0,1] and l \colon S_2 \times \mathcal{S}_3 \to [0,1], we get a composite kernel l \circ k \colon S_1 \times \mathcal{S}_3 \to [0,1] by

(l \circ k)(x, B) = \int_{S_2} l(y, B)\, k(x, dy)

for all x \in S_1 and B \in \mathcal{S}_3.
Because of this, Markov kernels, like stochastic matrices, form a category.
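For finite state spaces, a Markov kernel can be represented as a row-stochastic matrix, and kernel composition reduces to matrix multiplication. The following Python sketch is illustrative only; the matrices are made up.

import numpy as np

# Hypothetical finite-state kernels written as row-stochastic matrices:
# rows index the current state, columns the next state, and each row sums to 1.
k = np.array([[0.7, 0.3],
              [0.2, 0.8]])        # kernel from a 2-state space S1 to a 2-state space S2
l = np.array([[0.5, 0.4, 0.1],
              [0.1, 0.6, 0.3]])   # kernel from S2 to a 3-state space S3

# Composition of kernels reduces to matrix multiplication:
# (l ∘ k)[x, z] = sum over intermediate states y of k[x, y] * l[y, z]
composite = k @ l

assert np.allclose(composite.sum(axis=1), 1.0)   # the composite is again row-stochastic
print(composite)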
Application to time-dilated Markov chains
When the stochastic process under consideration is Markovian, the Chapman–Kolmogorov equation is equivalent to an identity on transition densities. In the Markov chain setting, one assumes that i_1 < ... < i_n. Then, because of the Markov property,

p_{i_1,\ldots,i_n}(f_1,\ldots,f_n) = p_{i_1}(f_1)\, p_{i_2;i_1}(f_2 \mid f_1) \cdots p_{i_n;i_{n-1}}(f_n \mid f_{n-1}),

where the conditional probability p_{i;j}(f_i \mid f_j) is the transition probability between the times i > j. So, the Chapman–Kolmogorov equation takes the form

p_{i_3;i_1}(f_3 \mid f_1) = \int_{-\infty}^{\infty} p_{i_3;i_2}(f_3 \mid f_2)\, p_{i_2;i_1}(f_2 \mid f_1)\, df_2 .
Informally, this says that the probability of going from state 1 to state 3 can be found from the probabilities of going from 1 to an intermediate state 2 and then from 2 to 3, by adding up over all the possible intermediate states 2.
When the probability distribution on the state space of a Markov chain is discrete and the Markov chain is homogeneous, the Chapman–Kolmogorov equations can be expressed in terms of (possibly infinite-dimensional) matrix multiplication, thus:

P(t + s) = P(t)\, P(s),

where P(t) is the transition matrix of jump t, i.e., P(t) is the matrix such that entry (i,j) contains the probability of the chain moving from state i to state j in t steps.

As a corollary, it follows that to calculate the transition matrix of jump t, it is sufficient to raise the transition matrix of jump one to the power of t, that is

P(t) = P^t .
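A short Python sketch (with a made-up transition matrix) checking both the matrix form and the sum over intermediate states, assuming a homogeneous three-state chain:

import numpy as np

# Hypothetical one-step transition matrix P of a homogeneous 3-state Markov chain.
P = np.array([[0.9, 0.1, 0.0],
              [0.2, 0.5, 0.3],
              [0.0, 0.4, 0.6]])

# Chapman–Kolmogorov in matrix form: P(2) = P(1) P(1), and more generally P(t) = P**t.
P2 = np.linalg.matrix_power(P, 2)
assert np.allclose(P2, P @ P)

# Entry-wise this is the sum over intermediate states: P2[i, j] = sum_m P[i, m] * P[m, j].
i, j = 0, 2
assert np.isclose(P2[i, j], sum(P[i, m] * P[m, j] for m in range(3)))
print(P2[i, j])   # probability of moving from state 0 to state 2 in two steps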
The differential form of the Chapman–Kolmogorov equation is known as a master equation.
See also
Fokker–Planck equation (also known as Kolmogorov forward equation)
Kolmogorov backward equation
Examples of Markov chains
Category of Markov kernels
Citations
Further reading
External links
Equations
Markov processes
Stochastic calculus
Energy transition

An energy transition (or energy system transformation) is a major structural change to energy supply and consumption in an energy system. Currently, a transition to sustainable energy is underway to limit climate change. Most sustainable energy is renewable energy, so another term for energy transition is renewable energy transition. The current transition aims to reduce greenhouse gas emissions from energy quickly and sustainably, mostly by phasing down fossil fuels and changing as many processes as possible to operate on low-carbon electricity. A previous energy transition perhaps took place during the Industrial Revolution, from 1760 onwards, from wood and other biomass to coal, followed by oil and later natural gas.
Over three-quarters of the world's energy needs are met by burning fossil fuels, but this usage emits greenhouse gases. Energy production and consumption are responsible for most human-caused greenhouse gas emissions. To meet the goals of the 2015 Paris Agreement on climate change, emissions must be reduced as soon as possible and reach net-zero by mid-century. Since the late 2010s, the renewable energy transition has also been driven by the rapidly falling cost of both solar and wind power. Another benefit of the energy transition is its potential to reduce the health and environmental impacts of the energy industry.
Heating of buildings is being electrified, with heat pumps being by far the most efficient technology. To improve the flexibility of electrical grids, the installation of energy storage and super grids is vital to enable the use of variable, weather-dependent technologies. However, fossil-fuel subsidies are slowing the energy transition.
Definition
An energy transition is a broad shift in technologies and behaviours that are needed to replace one source of energy with another. A prime example is the change from a pre-industrial system relying on traditional biomass, wind, water and muscle power to an industrial system characterized by pervasive mechanization, steam power and the use of coal.
The IPCC does not define energy transition in the glossary of its Sixth Assessment Report but it does define transition as: "The process of changing from one state or condition to another in a given period of time. Transition can occur in individuals, firms, cities, regions and nations, and can be based on incremental or transformative change."
Development of the term
After the 1973 oil crisis, the term energy transition was coined by politicians and media. It was popularised by US President Jimmy Carter in his 1977 Address to the Nation on Energy, calling to "look back into history to understand our energy problem. Twice in the last several hundred years, there has been a transition in the way people use energy ... Because we are now running out of gas and oil, we must prepare quickly for a third change to strict conservation and to the renewed use of coal and to permanent renewable energy sources like solar power." The term was later globalised after the second oil shock of 1979, during the 1981 United Nations Conference on New and Renewable Sources of Energy.
From the 1990s, debates on energy transition have increasingly taken climate change mitigation into account. Parties to the 2015 Paris Agreement committed to limit global warming to "well below 2 °C, preferably 1.5 °C" compared to pre-industrial levels. This requires a rapid energy transition with a downshift of fossil fuel production to stay within the carbon emissions budget.
In this context, the term energy transition encompasses a reorientation of energy policy. This could imply a shift from centralized to distributed generation. It also includes attempts to replace overproduction and avoidable energy consumption with energy-saving measures and increased efficiency.
The historical transitions from locally supplied wood, water and wind energies to globally supplied fossil and nuclear fuels have induced growth in end-use demand through the rapid expansion of engineering research, education and standardisation. The mechanisms for whole-systems change include the new discipline of Transition Engineering amongst all engineering professions, entrepreneurs, researchers and educators.
Examples of past energy transitions
Historic approaches to past energy transitions are shaped by two main discourses. One argues that humankind experienced several energy transitions in its past, while the other suggests the term "energy additions" as better reflecting the changes in global energy supply in the last three centuries.
The chronologically first discourse was most broadly described by Vaclav Smil. It underlines the change in the energy mix of countries and the global economy. By looking at data in percentages of the primary energy source used in a given context, it paints a picture of the world's energy systems as having changed significantly over time, going from biomass to coal, to oil, and now a mix of mostly coal, oil and natural gas. Until the 1950s, the economic mechanism behind energy systems was local rather than global.
The second discourse was most broadly described by Jean-Baptiste Fressoz. It emphasises that the term "energy transition" was first used by politicians, not historians, to describe a goal to achieve in the future – not as a concept to analyse past trends. When looking at the sheer amount of energy being used by humankind, the picture is one of ever-increasing consumption of all the main energy sources available to humankind. For instance, the increased use of coal in the 19th century did not replace wood consumption, indeed more wood was burned. Another example is the deployment of passenger cars in the 20th century. This evolution triggered an increase in both oil consumption (to drive the car) and coal consumption (to make the steel needed for the car). In other words, according to this approach, humankind never performed a single energy transition in its history but performed several energy additions.
Contemporary energy transitions differ in terms of motivation and objectives, drivers and governance. As development progressed, different national systems became more and more integrated becoming the large, international systems seen today. Historical changes of energy systems have been extensively studied. While historical energy changes were generally protracted affairs, unfolding over many decades, this does not necessarily hold true for the present energy transition, which is unfolding under very different policy and technological conditions.
For current energy systems, many lessons can be learned from history. The need for large amounts of firewood in early industrial processes in combination with prohibitive costs for overland transportation led to a scarcity of accessible (e.g. affordable) wood, and eighteenth century glass-works "operated like a forest clearing enterprise". When Britain had to resort to coal after largely having run out of wood, the resulting fuel crisis triggered a chain of events that two centuries later culminated in the Industrial Revolution. Similarly, increased use of peat and coal were vital elements paving the way for the Dutch Golden Age, roughly spanning the entire 17th century. Another example where resource depletion triggered technological innovation and a shift to new energy sources is 19th century whaling: whale oil eventually became replaced by kerosene and other petroleum-derived products. To speed up the energy transition it is also conceivable that there will be government buyouts or bailouts of coal mining regions.
Drivers for current energy transition
Climate change mitigation and co-benefits
A rapid energy transition to very-low or zero-carbon sources is required to mitigate the effects of climate change. Coal, oil and gas combustion account for 89% of emissions and still provide 78% of primary energy consumption.
Despite the knowledge about the risks of climate change and the increasing number of climate policies adopted since the 1980s, energy transitions have not accelerated towards decarbonization beyond historical trends and remain far off track in achieving climate targets.
The deployment of renewable energy can generate co-benefits of climate change mitigation: positive socio-economic effects on employment, industrial development, health and energy access. Depending on the country and the deployment scenario, replacing coal power plants can more than double the number of jobs per average MW capacity. The energy transition could create many green jobs, for example in Africa. The costs for retraining workers for the renewable energy industry were found to be trivial for both coal in the U.S. and oil sands in Canada; for the latter, reallocating only 2–6% of a single year's federal, provincial, and territorial oil and gas subsidies would provide oil and gas workers with a new career of approximately equivalent pay. In non-electrified rural areas, the deployment of solar mini-grids can significantly improve electricity access.
Employment opportunities created by the green transition are associated with the use of renewable energy sources or with building activity for infrastructure improvements and renovations.
Energy security
Another important driver is energy security and independence, which have gained importance in Europe and Taiwan since the 2022 Russian invasion of Ukraine. Unlike Europe's dependence on Russian gas in the 2010s, even if China stops supplying solar panels, those already installed continue generating electricity. Militaries are using and developing electric vehicles, particularly for their stealth, though not tanks. As of 2023, renewable energy in Taiwan is far too small to help in a blockade.
Centralised facilities such as oil refineries and thermal power plants can be put out of action by air attack, whereas decentralised generation such as solar and wind, although it can also be attacked, may be less vulnerable. Solar power and batteries reduce the need for risky fuel convoys. However, large hydropower plants are vulnerable. Some say that nuclear power plants are unlikely to be military targets, but others conclude that civil nuclear power plants in war zones can be weaponised and exploited by hostile forces, not only to impede energy supplies (and thus shatter the public morale of the adversary) but also to blackmail and coerce the decision-makers of the attacked state and their international allies with the prospect of a man-made nuclear disaster.
Economic development
For many developing economies, for example in the mineral-rich countries of Sub-Saharan Africa, the transition to renewable energies is predicted to become a driver of sustainable economic development. The International Energy Agency (IEA) has identified 37 minerals as critical for clean energy technologies and estimates that by 2050 global demand for these will increase by 235 per cent. Africa has large reserves of many of these so-called "green minerals", such as bauxite, cobalt, copper, chromium, manganese and graphite. The African Union has outlined a policy framework, the Africa Mining Vision, to leverage the continent's mineral reserves in pursuit of sustainable development and socio-economic transformation. Achieving these goals requires mineral-rich African economies to transition from commodity export to the manufacture of higher value-added products.
Cost competitiveness of renewable energies
From 2010 to 2019, the competitiveness of wind and solar power substantially increased. Unit costs of solar energy dropped sharply by 85%, wind energy by 55%, and lithium-ion batteries by 85%. This has made wind and solar power the cheapest form of new installation in many regions. Levelized costs for onshore wind or solar combined with a few hours of storage are already lower than for gas peaking power plants. In 2021, renewables accounted for more than 80% of all newly installed electricity generating capacity.
Key technologies and approaches
The emissions reductions necessary to keep global warming below 2°C will require a system-wide transformation of the way energy is produced, distributed, stored, and consumed. For a society to replace one form of energy with another, multiple technologies and behaviours in the energy system must change.
Many climate change mitigation pathways envision three main aspects of a low-carbon energy system:
The use of low-emission energy sources to produce electricity
Electrification – that is increased use of electricity instead of directly burning fossil fuels
Accelerated adoption of energy efficiency measures
Renewable energy
The most important energy sources in the low carbon energy transition are wind power and solar power. Each could reduce net emissions by 4 billion tons of CO2 equivalent per year, half of that potential at lower net lifetime costs than the reference. Other renewable energy sources include bioenergy, geothermal energy and tidal energy, but they currently have higher net lifetime costs.
As of 2022, hydroelectricity is the largest source of renewable electricity in the world, having provided 16% of the world's total electricity in 2019. However, because of its heavy dependence on geography and the generally high environmental and social impact of hydroelectric power plants, the growth potential of this technology is limited. Wind and solar power are considered more scalable, but still require vast quantities of land and materials; they have higher potential for growth. These sources have grown nearly exponentially in recent decades thanks to rapidly decreasing costs. In 2019, wind power supplied 5.3% of worldwide electricity while solar power supplied 2.6%.
While production from most types of hydropower plants can be actively controlled, production from wind and solar power depends on the weather. Electrical grids must be extended and adjusted to avoid wastage. Dammed hydropower is a dispatchable source, while solar and wind are variable renewable energy sources. These sources require dispatchable backup generation or energy storage to provide continuous and reliable electricity. For this reason, storage technologies also play a key role in the renewable energy transition. As of 2020, the largest scale storage technology is pumped storage hydroelectricity, accounting for the great majority of energy storage capacity installed worldwide. Other important forms of energy storage are electric batteries and power to gas.
The "Electricity Grids and Secure Energy Transitions" report by the IEA emphasizes the necessity of increasing grid investments to over $600 billion annually by 2030, up from $300 billion, to accommodate the integration of renewable energy. By 2040, the grid must expand by more than 80 million kilometers to manage renewable sources, which are projected to account for over 80% of the global power capacity increase over the next two decades. Failure to enhance grid infrastructure timely could lead to an additional 58 gigatonnes of CO2 emissions by 2050, significantly risking a 2°C global temperature rise.
Integration of variable renewable energy sources
With the integration of renewable energy, local electricity production is becoming more variable. It has been recommended that "coupling sectors, energy storage, smart grids, demand side management, sustainable biofuels, hydrogen electrolysis and derivatives will ultimately be needed to accommodate large shares of renewables in energy systems". Fluctuations can be smoothened by combining wind and sun power and by extending electricity grids over large areas. This reduces the dependence on local weather conditions.
With highly variable prices, electricity storage and grid extension become more competitive. Researchers have found that "costs for accommodating the integration of variable renewable energy sources in electricity systems are expected to be modest until 2030". Furthermore, "it will be more challenging to supply the entire energy system with renewable energy".
Fast fluctuations increase with a high integration of wind and solar energy. They can be addressed by operating reserves. Large-scale batteries can react within seconds and are increasingly used to keep the electricity grid stable.
100% renewable energy
Nuclear power
In the 1970s and 1980s, nuclear power gained a large share in some countries. In France and Slovakia more than half of the electrical power is still nuclear. It is a low carbon energy source but comes with risks and increasing costs. Since the late 1990s, deployment has slowed down. Decommissioning is increasing, as many reactors are close to the end of their lifetime or are shut down well before it because of anti-nuclear sentiment. Germany stopped its last three nuclear power plants in mid-April 2023. On the other hand, the China General Nuclear Power Group is aiming for 200 GW by 2035, produced by 150 additional reactors.
Electrification
With the switch to clean energy sources where power is generated via electricity, end uses of energy such as transportation and heating need to be electrified to run on these clean energy sources. Concurrent with this switch is an expansion of the grid to handle larger amounts of generated electricity to supply to these end uses. Two key areas of electrification are electric vehicles and heat pumps.
It is easier to sustainably produce electricity than it is to sustainably produce liquid fuels. Therefore, adoption of electric vehicles is a way to make transport more sustainable. While electric vehicle technology is relatively mature in road transport, electric shipping and aviation are still early in their development, hence sustainable liquid fuels may have a larger role to play in these sectors.
A key sustainable solution to heating is electrification (heat pumps, or the less efficient electric heater). The IEA estimates that heat pumps currently provide only 5% of space and water heating requirements globally, but could provide over 90%. Use of ground source heat pumps not only reduces total annual energy loads associated with heating and cooling, it also flattens the electric demand curve by eliminating the extreme summer peak electric supply requirements. However, heat pumps and resistive heating alone will not be sufficient for the electrification of industrial heat. This is because several processes require higher temperatures that cannot be achieved with these types of equipment. For example, the production of ethylene via steam cracking requires temperatures as high as 900 °C. Hence, drastically new processes are required. Nevertheless, power-to-heat is expected to be the first step in the electrification of the chemical industry, with large-scale implementation expected by 2025.
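As a rough illustration of why heat pumps matter for electrification, the Python sketch below compares the electricity needed to deliver the same heat with a resistive heater and with a heat pump; the heat demand and the coefficient of performance (COP) are assumed values for illustration, not figures from the source.

# Illustrative comparison of electricity needed to deliver the same amount of heat.
heat_demand_kwh = 10_000        # assumed annual heat demand of a dwelling, kWh (thermal)
cop_heat_pump = 3.5             # assumed seasonal coefficient of performance of a heat pump
cop_resistive = 1.0             # resistive heating converts electricity to heat roughly 1:1

electricity_heat_pump = heat_demand_kwh / cop_heat_pump
electricity_resistive = heat_demand_kwh / cop_resistive

print(f"Heat pump:        {electricity_heat_pump:,.0f} kWh of electricity")
print(f"Resistive heater: {electricity_resistive:,.0f} kWh of electricity")
print(f"Reduction:        {1 - electricity_heat_pump / electricity_resistive:.0%}")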
Economic and geopolitical aspects
A shift in energy sources has the potential to redefine relations and dependencies between countries, stakeholders and companies. Countries or land owners with resources – fossil or renewable – face massive losses or gains depending on the development of any energy transition. In 2021, energy costs reached 13% of global gross domestic product.
Global rivalries have contributed to the driving forces of the economics behind the low carbon energy transition. Technological innovations developed within a country have the potential to become an economic force.
Influences
The energy transition discussion is heavily influenced by contributions from the fossil fuel industries.
One way that oil companies are able to continue their work despite growing environmental, social and economic concerns is by lobbying local and national governments.
Historically, the fossil fuel lobby has been highly successful in limiting regulations. From 1988 to 2005, Exxon Mobil, one of the largest oil companies in the world, spent nearly $16 million in anti-climate change lobbying and providing misleading information about climate change to the general public. The fossil fuel industry acquires significant support through the existing banking and investment structure. The concept that the industry should no longer be financially supported has led to the social movement known as divestment. Divestment is defined as the removal of investment capital from stocks, bonds or funds in oil, coal and gas companies for both moral and financial reasons.
Banks, investing firms, governments, universities, institutions and businesses are all being challenged with this new moral argument against their existing investments in the fossil fuel industry, and many, such as the Rockefeller Brothers Fund, the University of California, and New York City, have begun making the shift to more sustainable, eco-friendly investments.
In 2024 the International Renewable Energy Agency (IRENA) projected that by 2050, over half of the world's energy will be carried by electricity and over three-quarters of the global energy mix will be from renewables. Although overtaken by both biomass and clean hydrogen, fossil fuels were still projected to supply 12% of energy. The transition is expected to reshape geopolitical power by reducing reliance on long-distance fossil fuel trade and enhancing the importance of regional energy markets.
Social and environmental aspects
Impacts
A renewable energy transition can present negative social impacts for some people who rely on the existing energy economy or who are affected by mining for minerals required for the transition. This has led to calls for a just transition, which the IPCC defines as, "A set of principles, processes and practices that aim to ensure that no people, workers, places, sectors, countries or regions are left behind in the transition from a high-carbon to a low carbon economy."
Use of local energy sources may stabilise and stimulate some local economies, create opportunities for energy trade between communities, states and regions, and increase energy security.
Coal mining is economically important in some regions, and a transition to renewables would decrease its viability and could have severe impacts on the communities that rely on this business. Not only do these communities face energy poverty already, but they also face economic collapse when the coal mining businesses move elsewhere or disappear altogether. This broken system perpetuates the poverty and vulnerability that decreases the adaptive capacity of coal mining communities. Potential mitigation could include expanding the program base for vulnerable communities to assist with new training programs, opportunities for economic development and subsidies to assist with the transition.
Increasing energy prices resulting from an energy transition may negatively impact developing countries including Vietnam and Indonesia.
Increased mining for lithium, cobalt, nickel, copper, and other critical minerals needed for expansion of renewable energy infrastructure has created increased environmental conflict and environmental justice issues for some communities.
Labour
A large portion of the global workforce works directly or indirectly for the fossil fuel economy. Moreover, many other industries are currently dependent on unsustainable energy sources (such as the steel industry or cement and concrete industry). Transitioning these workforces during the rapid period of economic change requires considerable forethought and planning. The international labor movement has advocated for a just transition that addresses these concerns.
Recently, an energy crisis has affected the nations of Europe as a result of dependence on Russian natural gas, which was cut off during the Russia–Ukraine war.
This shows that humanity is still heavily dependent on fossil fuel energy sources, and care should be taken to manage the transition smoothly, lest energy-shortage shocks cripple the very efforts needed to drive it.
Risks and barriers
Amongst the key issues to consider in relation to the pace of the global transition to renewables is how well individual electric companies are able to adapt to the changing reality of the power sector. For example, to date, the uptake of renewables by electric utilities has remained slow, hindered by their continued investment in fossil fuel generation capacity.
Incomplete regulations on clean energy uptake and concerns about electricity shortages have been identified as key barriers to the energy transition in coal-dependent, fast developing economies such as Vietnam.
Examples by country
From 2000 to 2012 coal was the source of energy with the largest total growth. The use of oil and natural gas also had considerable growth, followed by hydropower and renewable energy. Renewable energy grew at a rate faster than at any other time in history during this period. The demand for nuclear energy decreased, in part due to fear-mongering and inaccurate media portrayals of some nuclear disasters (Three Mile Island in 1979, Chernobyl in 1986, and Fukushima in 2011).
More recently, consumption of coal has declined relative to low carbon energy. Coal dropped from about 29% of the global total primary energy consumption in 2015 to 27% in 2017, and non-hydro renewables were up to about 4% from 2%.
Asia
China
India
Under the Paris climate accords, India has set a renewable energy goal of meeting 50% of its total energy consumption from renewable sources. As of 2022, the Central Electricity Authority reports the country well on track to achieve this goal, with 160 GW of capacity from clean sources such as solar, wind, hydro and nuclear power plants, about 40% of its total capacity. India is ranked third on Ernst and Young's renewable energy country attractiveness index, behind the US and China.
Hydroelectric power plants have been a major part of India's energy infrastructure since independence in 1947. Former Prime Minister Jawaharlal Nehru called them the "temples of modern India" and believed them to be key drivers of modernity and industrialism for the nascent republic. Notable examples of hydro power stations include the 2400 MW Tehri hydropower complex, the 1960 MW Koyna hydroelectric project and the 1670 MW Srisailam Dam. Recently, India has given due importance to emerging renewable technologies like solar power plants and wind farms. The country houses three of the world's top five solar farms, including the world's largest, the 2255 MW Bhadla Solar Park, the world's second-largest, the 2000 MW Pavagada Solar Park, and the 1000 MW Kurnool Ultra Mega Solar Park.
While there has been positive change, air pollution from coal still kills many people, and India has to cut down its reliance on traditional coal-based power production, which still accounts for around 50% of its energy production. India is also moving towards its goal for electrification of the automotive industry, aiming to have at least 30% EV ownership among private vehicles by 2030.
Vietnam
Vietnam has led Southeast Asia in solar and wind uptake, achieving about 20 GW in 2022 from almost zero in 2017. Thailand has the highest number of EV registrations, with 218,000 in 2022. The energy transition in Southeast Asia can be summarized as challenging, achievable, and interdependent. This implies that while there are obstacles, feasibility largely relies on international support.
Public demand for improved local environmental quality and the government's aim to promote a green economy have been found to be key drivers of the energy transition in Vietnam, as has the government's ambition to attract international support for green growth initiatives. Thanks to a relatively more conducive investment environment, Vietnam is poised for a faster energy transition than some other ASEAN members.
Europe
European Union
The European Green Deal is a set of policy initiatives by the European Commission with the overarching aim of making Europe climate neutral by 2050. An impact-assessed plan will also be presented to increase the EU's greenhouse gas emission reduction target for 2030 to at least 50% and towards 55% compared with 1990 levels. The plan is to review each existing law on its climate merits, and also to introduce new legislation on the circular economy, building renovation, biodiversity, farming and innovation. The president of the European Commission, Ursula von der Leyen, stated that the European Green Deal would be Europe's "man on the Moon moment", as the plan would make Europe the first climate-neutral continent.
A survey found that digitally advanced companies put more money into energy-saving strategies. In the European Union, 59% of companies that have made investments in both basic and advanced technologies have also invested in energy efficiency measures, compared to only 50% of US firms in the same category. Overall, there is a significant disparity between businesses' digital profiles and investments in energy efficiency.
Germany
Germany has played an outsized role in the transition away from fossil fuels and nuclear power to renewables. The energy transition in Germany is known as die Energiewende (literally, "the energy turn"), indicating a turn away from old fuels and technologies to new ones. The key policy document outlining the Energiewende was published by the German government in September 2010, some six months before the Fukushima nuclear accident; legislative support was passed in September 2010.
The policy has been embraced by the German federal government and has resulted in a huge expansion of renewables, particularly wind power. Germany's share of renewables increased from around 5% in 1999 to 17% in 2010, close to the OECD average of 18%. In 2022, Germany's share reached 46.2%, surpassing the OECD average. A large driver of this increase in the share of renewable energy is the decrease in the cost of capital. Germany boasts some of the lowest costs of capital for solar and onshore wind energy worldwide. In 2021, the International Renewable Energy Agency reported capital costs of around 1.1% and 2.4% for solar and onshore wind respectively. This constitutes a significant decrease from the early 2000s, when capital costs hovered around 5.1% and 4.5% respectively. This decrease in capital costs was influenced by a variety of economic and political drivers. Following the global financial crisis of 2008–2009, Germany eased the refinancing regulations on banks by giving out cheap loans with low interest rates in order to stimulate the economy again.
During this period, the industry around renewable energies also started to experience learning effects in manufacturing, project organisation as well as financing thanks to rising investment and order volumes. This coupled with various forms of subsidies contributed to a large reduction of the capital cost and the levelized cost of electricity (LCOE) for solar and onshore wind power. As the technologies have matured and become integral parts of the existing sociotechnical systems it is to be expected that in the future, experience effects and general interest rates will be key determinants for the cost-competitiveness of these technologies.
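To make the cost metric concrete, here is a minimal Python sketch of a levelized cost of electricity (LCOE) calculation, discounted lifetime costs divided by discounted lifetime output; all project numbers are illustrative assumptions, not values from the source.

def lcoe(capex, annual_opex, annual_mwh, discount_rate, lifetime_years):
    """Levelized cost of electricity: discounted lifetime costs / discounted lifetime output."""
    discounted_costs = capex + sum(annual_opex / (1 + discount_rate) ** t
                                   for t in range(1, lifetime_years + 1))
    discounted_energy = sum(annual_mwh / (1 + discount_rate) ** t
                            for t in range(1, lifetime_years + 1))
    return discounted_costs / discounted_energy

# Illustrative 1 MW onshore wind project at roughly 30% capacity factor.
print(lcoe(capex=1_200_000, annual_opex=30_000,
           annual_mwh=2_628, discount_rate=0.05, lifetime_years=25))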
Producers have been guaranteed a fixed feed-in tariff for 20 years, ensuring a fixed income. Energy co-operatives have been created, and efforts were made to decentralize control and profits. The large energy companies have a disproportionately small share of the renewables market. Nuclear power stations were closed, and the existing nine stations will close earlier than necessary, in 2022.
The reduction of reliance on nuclear stations has had the consequence of increased reliance on fossil fuels. One factor that has inhibited efficient employment of new renewable energy has been the lack of an accompanying investment in power infrastructure to bring the power to market. It is believed 8300 km of power lines must be built or upgraded.
Different Länder have varying attitudes to the construction of new power lines. Industry has had their rates frozen and so the increased costs of the Energiewende have been passed on to consumers, who have had rising electricity bills. Germans in 2013 had some of the highest electricity costs in Europe. Nonetheless, for the first time in more than ten years, electricity prices for household customers fell at the beginning of 2015.
Switzerland
Due to the high share of hydroelectricity (59.6%) and nuclear power (31.7%) in electricity production, Switzerland's per capita energy-related emissions are 28% lower than the European Union average and roughly equal to those of France. On 21 May 2017, Swiss voters accepted the new Energy Act establishing the 'energy strategy 2050'. The aims of the energy strategy 2050 are: to reduce energy consumption; to increase energy efficiency; and to promote renewable energies (such as water, solar, wind and geothermal power as well as biomass fuels). The Energy Act of 2006 forbids the construction of new nuclear power plants in Switzerland.
United Kingdom
By law, greenhouse gas emissions by the United Kingdom will be reduced to net zero by 2050. To help in reaching this statutory goal, national energy policy is mainly focusing on the country's off-shore wind power and delivering new and advanced nuclear power. The increase in national renewable power – particularly from biomass – together with the 20% of electricity generated by nuclear power in the United Kingdom meant that by 2019 low carbon British electricity had overtaken that generated by fossil fuels.
In order to meet the net zero target energy networks must be strengthened. Electricity is only a part of energy in the United Kingdom, so natural gas used for industrial and residential heat and petroleum used for transport in the United Kingdom must also be replaced by either electricity or another form of low-carbon energy, such as sustainable bioenergy crops or green hydrogen.
Although the need for the energy transition is not disputed by any major political party, in 2020 there was debate about how much of the funding to escape the COVID-19 recession should be spent on the transition, and how many jobs could be created, for example in improving energy efficiency in British housing. Some believe that, due to post-COVID government debt, funding for the transition will be insufficient. Brexit may significantly affect the energy transition, but this is unclear. The government is urging UK business to sponsor the climate change conference in 2021, possibly including energy companies, but only if they have a credible short-term plan for the energy transition.
See also
References
Energy infrastructure
Emissions reduction
Energy policy
Energy development
Renewable energy
Renewable energy commercialization
Introduction to entropy

In thermodynamics, entropy is a numerical quantity that shows that many physical processes can go in only one direction in time. For example, cream and coffee can be mixed together, but cannot be "unmixed"; a piece of wood can be burned, but cannot be "unburned". The word 'entropy' has entered popular usage to refer to a lack of order or predictability, or of a gradual decline into disorder. A more physical interpretation of thermodynamic entropy refers to spread of energy or matter, or to extent and diversity of microscopic motion.
If a movie that shows coffee being mixed or wood being burned is played in reverse, it would depict processes highly improbable in reality. Mixing coffee and burning wood are "irreversible". Irreversibility is described by a law of nature known as the second law of thermodynamics, which states that in an isolated system (a system not connected to any other system) which is undergoing change, entropy increases over time.
Entropy does not increase indefinitely. A body of matter and radiation eventually will reach an unchanging state, with no detectable flows, and is then said to be in a state of thermodynamic equilibrium. Thermodynamic entropy has a definite value for such a body and is at its maximum value. When bodies of matter or radiation, initially in their own states of internal thermodynamic equilibrium, are brought together so as to intimately interact and reach a new joint equilibrium, then their total entropy increases. For example, a glass of warm water with an ice cube in it will have a lower entropy than that same system some time later when the ice has melted leaving a glass of cool water. Such processes are irreversible: A glass of cool water will not spontaneously turn into a glass of warm water with an ice cube in it. Some processes in nature are almost reversible. For example, the orbiting of the planets around the Sun may be thought of as practically reversible: A movie of the planets orbiting the Sun which is run in reverse would not appear to be impossible.
While the second law, and thermodynamics in general, accurately predicts the intimate interactions of complex physical systems, scientists are not content with simply knowing how a system behaves; they also want to know why it behaves the way it does. The question of why entropy increases until equilibrium is reached was answered in 1877 by the physicist Ludwig Boltzmann. The theory developed by Boltzmann and others is known as statistical mechanics. Statistical mechanics explains thermodynamics in terms of the statistical behavior of the atoms and molecules which make up the system. The theory not only explains thermodynamics, but also a host of other phenomena which are outside the scope of thermodynamics.
Explanation
Thermodynamic entropy
The concept of thermodynamic entropy arises from the second law of thermodynamics. This law of entropy increase quantifies the reduction in the capacity of an isolated compound thermodynamic system to do thermodynamic work on its surroundings, or indicates whether a thermodynamic process may occur. For example, whenever there is a suitable pathway, heat spontaneously flows from a hotter body to a colder one.
Thermodynamic entropy is measured as a change in the entropy of a system containing a sub-system which undergoes heat transfer to its surroundings (inside the system of interest). It is based on the macroscopic relationship between heat flow into the sub-system and the temperature at which it occurs, summed over the boundary of that sub-system.
Following the formalism of Clausius, the basic calculation can be mathematically stated as:

ΔS = q/T,

where ΔS is the increase or decrease in entropy, q is the heat added to the system or subtracted from it, and T is temperature. The 'equals' sign and the symbol Δ imply that the heat transfer should be so small and slow that it scarcely changes the temperature T.
If the temperature is allowed to vary, the equation must be integrated over the temperature path. This calculation of entropy change does not allow the determination of absolute value, only differences. In this context, the Second Law of Thermodynamics may be stated: for heat transferred over any valid process for any system, whether isolated or not,

ΔS ≥ q/T.
According to the first law of thermodynamics, which deals with the conservation of energy, the loss of heat will result in a decrease in the internal energy of the thermodynamic system. Thermodynamic entropy provides a comparative measure of the amount of decrease in internal energy and the corresponding increase in internal energy of the surroundings at a given temperature. In many cases, a visualization of the second law is that energy of all types changes from being localized to becoming dispersed or spread out, if it is not hindered from doing so. When applicable, entropy increase is the quantitative measure of that kind of a spontaneous process: how much energy has been effectively lost or become unavailable, by dispersing itself, or spreading itself out, as assessed at a specific temperature. For this assessment, when the temperature is higher, the amount of energy dispersed is assessed as 'costing' proportionately less. This is because a hotter body is generally more able to do thermodynamic work, other factors, such as internal energy, being equal. This is why a steam engine has a hot firebox.
The second law of thermodynamics deals only with changes of entropy. The absolute entropy (S) of a system may be determined using the third law of thermodynamics, which specifies that the entropy of all perfectly crystalline substances is zero at the absolute zero of temperature. The entropy at another temperature is then equal to the increase in entropy on heating the system reversibly from absolute zero to the temperature of interest.
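As an illustrative Python sketch (not from the source), the absolute entropy at a temperature T can be estimated by numerically integrating measured heat capacities, S(T) ≈ ∫ (Cp/T') dT' from 0 to T; the heat-capacity values below are made-up placeholders.

import numpy as np

# Hypothetical constant-pressure heat capacity data C_p(T) in J/(mol*K).
T = np.array([1.0, 10.0, 50.0, 100.0, 150.0, 200.0, 250.0, 298.15])   # temperature, K
Cp = np.array([0.001, 0.5, 8.0, 15.0, 19.0, 22.0, 24.0, 25.0])        # heat capacity

# Third law: S = 0 at absolute zero for a perfect crystal, so the absolute entropy is
# approximately the integral of C_p / T from 0 to T (trapezoidal rule; the tiny
# contribution below the first data point is neglected).
integrand = Cp / T
S = float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(T)))
print(f"Estimated absolute entropy at {T[-1]} K: {S:.1f} J/(mol*K)")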
Statistical mechanics and information entropy
Thermodynamic entropy bears a close relationship to the concept of information entropy (H). Information entropy is a measure of the "spread" of a probability density or probability mass function. Thermodynamics makes no assumptions about the atomistic nature of matter, but when matter is viewed in this way, as a collection of particles constantly moving and exchanging energy with each other, and which may be described in a probabilistic manner, information theory may be successfully applied to explain the results of thermodynamics. The resulting theory is known as statistical mechanics.
An important concept in statistical mechanics is the idea of the microstate and the macrostate of a system. If we have a container of gas, for example, and we know the position and velocity of every molecule in that system, then we know the microstate of that system. If we only know the thermodynamic description of that system, the pressure, volume, temperature, and/or the entropy, then we know the macrostate of that system. Boltzmann realized that there are many different microstates that can yield the same macrostate, and, because the particles are colliding with each other and changing their velocities and positions, the microstate of the gas is always changing. But if the gas is in equilibrium, there seems to be no change in its macroscopic behavior: No changes in pressure, temperature, etc. Statistical mechanics relates the thermodynamic entropy of a macrostate to the number of microstates that could yield that macrostate. In statistical mechanics, the entropy of the system is given by Ludwig Boltzmann's equation:
S = k_B ln W,

where S is the thermodynamic entropy, W is the number of microstates that may yield the macrostate, and k_B is the Boltzmann constant. The natural logarithm of the number of microstates, ln W, is known as the information entropy of the system. This can be illustrated by a simple example:
If you flip two coins, you can have four different results. If H is heads and T is tails, we can have (H,H), (H,T), (T,H), and (T,T). We can call each of these a "microstate" for which we know exactly the results of the process. But what if we have less information? Suppose we only know the total number of heads. This can be either 0, 1, or 2. We can call these "macrostates". Only microstate (T,T) will give macrostate zero, (H,T) and (T,H) will give macrostate 1, and only (H,H) will give macrostate 2. So we can say that the information entropy of macrostates 0 and 2 is ln(1), which is zero, but the information entropy of macrostate 1 is ln(2), which is about 0.69. Of all the microstates, macrostate 1 accounts for half of them.
It turns out that if you flip a large number of coins, the macrostates at or near half heads and half tails account for almost all of the microstates. In other words, for a million coins, you can be fairly sure that about half will be heads and half tails. The macrostates around a 50–50 ratio of heads to tails will be the "equilibrium" macrostate. A real physical system in equilibrium has a huge number of possible microstates and almost all of them belong to the equilibrium macrostate, and that is the macrostate you will almost certainly see if you wait long enough. In the coin example, if you start out with a very unlikely macrostate (like all heads, for example, with zero entropy) and begin flipping one coin at a time, the entropy of the macrostate will start increasing, just as thermodynamic entropy does, and after a while, the coins will most likely be at or near that 50–50 macrostate, which has the greatest information entropy – the equilibrium entropy.
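A small Python sketch of the coin example (illustrative): for n coins, the number of microstates of the macrostate "k heads" is the binomial coefficient C(n, k), and its information entropy is ln C(n, k), which peaks near k = n/2.

from math import comb, log

n = 20   # number of coins (illustrative)
for k in range(n + 1):
    microstates = comb(n, k)      # number of microstates that give the macrostate "k heads"
    entropy = log(microstates)    # information entropy of that macrostate, ln W
    print(f"{k:2d} heads: {microstates:7d} microstates, entropy = {entropy:.2f}")
# The macrostates near k = n/2 have by far the most microstates, so a long run of random
# flips almost certainly ends up at or near this maximum-entropy ("equilibrium") macrostate.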
The macrostate of a system is what we know about the system, for example the temperature, pressure, and volume of a gas in a box. For each set of values of temperature, pressure, and volume there are many arrangements of molecules which result in those values. The number of arrangements of molecules which could result in the same values for temperature, pressure and volume is the number of microstates.
The concept of information entropy has been developed to describe any of several phenomena, depending on the field and the context in which it is being used. When it is applied to the problem of a large number of interacting particles, along with some other constraints, like the conservation of energy, and the assumption that all microstates are equally likely, the resultant theory of statistical mechanics is extremely successful in explaining the laws of thermodynamics.
Example of increasing entropy
Ice melting provides an example in which entropy increases in a small 'universe', a thermodynamic system consisting of the surroundings (the warm room) and the entity of glass container, ice and water which has been allowed to reach thermodynamic equilibrium at the melting temperature of ice. In this system, some heat (δQ) from the warmer surroundings at 298 K (25 °C; 77 °F) transfers to the cooler system of ice and water at its constant temperature (T) of 273 K (0 °C; 32 °F), the melting temperature of ice. The entropy of the system, which is δQ/T, increases by δQ/273 K. The heat δQ for this process is the energy required to change water from the solid state to the liquid state, and is called the enthalpy of fusion, i.e. ΔH for ice fusion.
The entropy of the surrounding room decreases less than the entropy of the ice and water increases: the room temperature of 298 K is larger than 273 K and therefore the ratio (entropy change) of δQ/298 K for the surroundings is smaller than the ratio (entropy change) of δQ/273 K for the ice and water system. This is always true in spontaneous events in a thermodynamic system and it shows the predictive importance of entropy: the final net entropy after such an event is always greater than was the initial entropy.
As the temperature of the cool water rises to that of the room and the room further cools imperceptibly, the sum of the δQ/T over the continuous range, "at many increments", in the initially cool to finally warm water can be found by calculus. The entire miniature 'universe', i.e. this thermodynamic system, has increased in entropy. Energy has spontaneously become more dispersed and spread out in that 'universe' than when the glass of ice and water was introduced and became a 'system' within it.
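A numerical Python sketch of this example (illustrative; it uses the molar enthalpy of fusion quoted later in the text and treats the room as a reservoir at fixed temperature):

delta_Q = 6008.0        # J per mole, enthalpy of fusion of ice (figure quoted later in the text)
T_ice = 273.15          # K, melting temperature of ice
T_room = 298.15         # K, temperature of the surrounding room

dS_system = delta_Q / T_ice           # the ice-and-water system gains entropy
dS_surroundings = -delta_Q / T_room   # the warmer room loses less entropy than the system gains
dS_total = dS_system + dS_surroundings

print(f"System:       +{dS_system:.1f} J/K per mole")
print(f"Surroundings: {dS_surroundings:.1f} J/K per mole")
print(f"Total:        +{dS_total:.1f} J/K per mole (positive, as the second law requires)")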
Origins and uses
Originally, entropy was named to describe the "waste heat", or more accurately, energy loss, from heat engines and other mechanical devices which could never run with 100% efficiency in converting energy into work. Later, the term came to acquire several additional descriptions, as more was understood about the behavior of molecules on the microscopic level. In the late 19th century, the word "disorder" was used by Ludwig Boltzmann in developing statistical views of entropy using probability theory to describe the increased molecular movement on the microscopic level. That was before quantum behavior came to be better understood by Werner Heisenberg and those who followed. Descriptions of thermodynamic (heat) entropy on the microscopic level are found in statistical thermodynamics and statistical mechanics.
For most of the 20th century, textbooks tended to describe entropy as "disorder", following Boltzmann's early conceptualisation of the "motional" (i.e. kinetic) energy of molecules. More recently, there has been a trend in chemistry and physics textbooks to describe entropy as energy dispersal. Entropy can also involve the dispersal of particles, which are themselves energetic. Thus there are instances where both particles and energy disperse at different rates when substances are mixed together.
The mathematics developed in statistical thermodynamics were found to be applicable in other disciplines. In particular, information sciences developed the concept of information entropy, which lacks the Boltzmann constant inherent in thermodynamic entropy.
Classical calculation of entropy
When the word 'entropy' was first defined and used in 1865, the very existence of atoms was still controversial, though it had long been speculated that temperature was due to the motion of microscopic constituents and that "heat" was the transferring of that motion from one place to another. Entropy change, ΔS, was described in macroscopic terms that could be directly measured, such as volume, temperature, or pressure. However, today the classical equation of entropy, ΔS = q_rev/T, can be explained, part by part, in modern terms describing how molecules are responsible for what is happening:
ΔS is the change in entropy of a system (some physical substance of interest) after some motional energy ("heat") has been transferred to it by fast-moving molecules. So, ΔS = S_final − S_initial.
Then, ΔS = q_rev/T, the quotient of the motional energy ("heat") q that is transferred "reversibly" (rev) to the system from the surroundings (or from another system in contact with the first system) divided by T, the absolute temperature at which the transfer occurs.
"Reversible" or "reversibly" (rev) simply means that T, the temperature of the system, has to stay (almost) exactly the same while any energy is being transferred to or from it. That is easy in the case of phase changes, where the system absolutely must stay in the solid or liquid form until enough energy is given to it to break bonds between the molecules before it can change to a liquid or a gas. For example, in the melting of ice at 273.15 K, no matter what temperature the surroundings are – from 273.20 K to 500 K or even higher, the temperature of the ice will stay at 273.15 K until the last molecules in the ice are changed to liquid water, i.e., until all the hydrogen bonds between the water molecules in ice are broken and new, less-exactly fixed hydrogen bonds between liquid water molecules are formed. This amount of energy necessary for ice melting per mole has been found to be 6008 joules at 273 K. Therefore, the entropy change per mole is , or 22 J/K.
When the temperature is not at the melting or boiling point of a substance no intermolecular bond-breaking is possible, and so any motional molecular energy ("heat") from the surroundings transferred to a system raises its temperature, making its molecules move faster and faster. As the temperature is constantly rising, there is no longer a particular value of "T" at which energy is transferred. However, a "reversible" energy transfer can be measured at a very small temperature increase, and a cumulative total can be found by adding each of many small temperature intervals or increments. For example, to find the entropy change from 300 K to 310 K, measure the amount of energy transferred at dozens or hundreds of temperature increments, say from 300.00 K to 300.01 K and then 300.01 to 300.02 and so on, dividing the q by each T, and finally adding them all.
Calculus can be used to make this calculation easier if the effect of energy input to the system is linearly dependent on the temperature change, as in simple heating of a system at moderate to relatively high temperatures. Thus, the energy being transferred "per incremental change in temperature" (the heat capacity, C_p), multiplied by the integral of dT/T from T_initial to T_final, is directly given by ΔS = C_p ln(T_final/T_initial).
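A small numerical comparison of the two procedures just described (added for illustration; the constant heat capacity is an assumed value, not from the text):

```python
# Summing q/T over many small temperature increments versus the closed form C*ln(T2/T1),
# assuming a constant heat capacity C.
import math

C = 75.3          # J/(mol·K), roughly the molar heat capacity of liquid water (assumed)
T1, T2 = 300.0, 310.0
steps = 10_000

dT = (T2 - T1) / steps
total = 0.0
T = T1
for _ in range(steps):
    q = C * dT                    # heat added in this small increment
    total += q / (T + dT / 2)     # divide by the mid-temperature of the step
    T += dT

print(total)                      # numerical sum of q/T
print(C * math.log(T2 / T1))      # analytic C*ln(T2/T1); the two agree closely
```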
Alternate explanations of entropy
Thermodynamic entropy
A measure of energy unavailable for work: This is an often-repeated phrase which, although it is true, requires considerable clarification to be understood. It is only true for cyclic reversible processes, and is in this sense misleading. By "work" is meant moving an object, for example, lifting a weight, or bringing a flywheel up to speed, or carrying a load up a hill. To convert heat into work, using a coal-burning steam engine, for example, one must have two systems at different temperatures, and the amount of work you can extract depends on how large the temperature difference is, and how large the systems are. If one of the systems is at room temperature, and the other system is much larger, and near absolute zero temperature, then almost ALL of the energy of the room temperature system can be converted to work. If they are both at the same room temperature, then NONE of the energy of the room temperature system can be converted to work. Entropy is then a measure of how much energy cannot be converted to work, given these conditions. More precisely, for an isolated system comprising two closed systems at different temperatures, in the process of reaching equilibrium the amount of entropy lost by the hot system, multiplied by the temperature of the hot system, is the amount of energy that cannot be converted to work.
An indicator of irreversibility: fitting closely with the 'unavailability of energy' interpretation is the 'irreversibility' interpretation. Spontaneous thermodynamic processes are irreversible, in the sense that they do not spontaneously undo themselves. Thermodynamic processes artificially imposed by agents in the surroundings of a body also have irreversible effects on the body. For example, when James Prescott Joule used a device that delivered a measured amount of mechanical work from the surroundings through a paddle that stirred a body of water, the energy transferred was received by the water as heat. There was scarce expansion of the water doing thermodynamic work back on the surroundings. The body of water showed no sign of returning the energy by stirring the paddle in reverse. The work transfer appeared as heat, and was not recoverable without a suitably cold reservoir in the surroundings. Entropy gives a precise account of such irreversibility.
Dispersal: Edward A. Guggenheim proposed an ordinary language interpretation of entropy that may be rendered as "dispersal of modes of microscopic motion throughout their accessible range". Later, along with a criticism of the idea of entropy as 'disorder', the dispersal interpretation was advocated by Frank L. Lambert, and is used in some student textbooks.
The interpretation properly refers to dispersal in abstract microstate spaces, but it may be loosely visualised in some simple examples of spatial spread of matter or energy. If a partition is removed from between two different gases, the molecules of each gas spontaneously disperse as widely as possible into their respective newly accessible volumes; this may be thought of as mixing. If a partition that blocks heat transfer between two bodies of different temperatures is removed so that heat can pass between the bodies, then energy spontaneously disperses or spreads as heat from the hotter to the colder.
Beyond such loose visualizations, in a general thermodynamic process, considered microscopically, spontaneous dispersal occurs in abstract microscopic phase space. According to Newton's and other laws of motion, phase space provides a systematic scheme for the description of the diversity of microscopic motion that occurs in bodies of matter and radiation. The second law of thermodynamics may be regarded as quantitatively accounting for the intimate interactions, dispersal, or mingling of such microscopic motions. In other words, entropy may be regarded as measuring the extent of diversity of motions of microscopic constituents of bodies of matter and radiation in their own states of internal thermodynamic equilibrium.
Information entropy and statistical mechanics
As a measure of disorder: Traditionally, 20th century textbooks have introduced entropy as order and disorder so that it provides "a measurement of the disorder or randomness of a system". It has been argued that ambiguities in, and arbitrary interpretations of, the terms used (such as "disorder" and "chaos") contribute to widespread confusion and can hinder comprehension of entropy for most students. On the other hand, in a convenient though arbitrary interpretation, "disorder" may be sharply defined as the Shannon entropy of the probability distribution of microstates given a particular macrostate, in which case the connection of "disorder" to thermodynamic entropy is straightforward, but arbitrary and not immediately obvious to anyone unfamiliar with information theory.
Missing information: The idea that information entropy is a measure of how much one does not know about a system is quite useful.
If, instead of using the natural logarithm to define information entropy, we use the base-2 logarithm, then the information entropy is roughly equal to the average number of (carefully chosen) yes/no questions that would have to be asked to get complete information about the system under study. In the introductory example of two flipped coins, for the macrostate which contains one head and one tail, one would only need one question to determine its exact state (e.g. "is the first one heads?"), and instead of expressing the entropy as ln(2) one could say, equivalently, that it is log2(2), which equals the number of binary questions we would need to ask: one. When measuring entropy using the natural logarithm (ln), the unit of information entropy is called a "nat", but when it is measured using the base-2 logarithm, the unit of information entropy is called a "shannon" (alternatively, "bit"). This is just a difference in units, much like the difference between inches and centimeters (1 nat = log2(e) shannons). Thermodynamic entropy is equal to the Boltzmann constant times the information entropy expressed in nats. The information entropy expressed with the unit shannon (Sh) is equal to the number of yes–no questions that need to be answered in order to determine the microstate from the macrostate.
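The unit bookkeeping for the two-coin example can be spelled out in a few lines (illustrative only):

```python
# ln(2) nats versus 1 shannon (bit) for the one-head-one-tail macrostate.
import math

S_nats = math.log(2)        # entropy in nats
S_shannons = math.log2(2)   # the same entropy in shannons (bits): exactly 1

print(S_nats, S_shannons)
print(S_nats * math.log2(math.e))   # converting nats -> shannons; equals 1
# 1 nat = log2(e) ≈ 1.4427 shannons, as stated above.
```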
The concepts of "disorder" and "spreading" can be analyzed with this information entropy concept in mind. For example, if we take a new deck of cards out of the box, it is arranged in "perfect order" (spades, hearts, diamonds, clubs, each suit beginning with the ace and ending with the king), we may say that we then have an "ordered" deck with an information entropy of zero. If we thoroughly shuffle the deck, the information entropy will be about 225.6 shannons: We will need to ask about 225.6 questions, on average, to determine the exact order of the shuffled deck. We can also say that the shuffled deck has become completely "disordered" or that the ordered cards have been "spread" throughout the deck. But information entropy does not say that the deck needs to be ordered in any particular way. If we take our shuffled deck and write down the names of the cards, in order, then the information entropy becomes zero. If we again shuffle the deck, the information entropy would again be about 225.6 shannons, even if by some miracle it reshuffled to the same order as when it came out of the box, because even if it did, we would not know that. So the concept of "disorder" is useful if, by order, we mean maximal knowledge and by disorder we mean maximal lack of knowledge. The "spreading" concept is useful because it gives a feeling to what happens to the cards when they are shuffled. The probability of a card being in a particular place in an ordered deck is either 0 or 1, in a shuffled deck it is 1/52. The probability has "spread out" over the entire deck. Analogously, in a physical system, entropy is generally associated with a "spreading out" of mass or energy.
The connection between thermodynamic entropy and information entropy is given by Boltzmann's equation, which says that S = k_B ln W. If we take the base-2 logarithm of W, it will yield the average number of yes–no questions we must ask about the physical system in order to determine its microstate, given its macrostate.
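To get a feel for the size of these numbers, the conversion from thermodynamic entropy to yes/no questions can be evaluated for an assumed entropy of 1 J/K (an arbitrary illustrative value):

```python
# S_thermo = k_B * ln(W), so the number of yes/no questions (shannons) needed to
# pin down the microstate is log2(W) = S_thermo / (k_B * ln 2).
import math

k_B = 1.380649e-23          # J/K (exact SI value)
S_thermo = 1.0              # assumed thermodynamic entropy of 1 J/K, for illustration

questions = S_thermo / (k_B * math.log(2))
print(f"{questions:.3e} yes/no questions")   # ~1e23 bits for just 1 J/K of entropy
```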
See also
Entropy (classical thermodynamics)
Entropy (energy dispersal)
Second law of thermodynamics
Statistical mechanics
Thermodynamics
List of textbooks on thermodynamics and statistical mechanics
References
Further reading
Chapters 4–12 touch on entropy.
Thermodynamic entropy
Electromagnetic mass
Electromagnetic mass was initially a concept of classical mechanics, denoting how much the electromagnetic field, or the self-energy, contributes to the mass of charged particles. It was first derived by J. J. Thomson in 1881 and was for some time also considered a dynamical explanation of inertial mass per se. Today, the relation of mass, momentum, velocity, and all forms of energy – including electromagnetic energy – is analyzed on the basis of Albert Einstein's special relativity and mass–energy equivalence. As to the cause of the mass of elementary particles, the Higgs mechanism in the framework of the relativistic Standard Model is currently used. However, some problems concerning the electromagnetic mass and self-energy of charged particles are still studied.
Charged particles
Rest mass and energy
It was recognized by J. J. Thomson in 1881 that a charged sphere moving in a space filled with a medium of a specific inductive capacity (the electromagnetic aether of James Clerk Maxwell) is harder to set in motion than an uncharged body. (Similar considerations had already been made by George Gabriel Stokes (1843) with respect to hydrodynamics; he showed that the inertia of a body moving in an incompressible perfect fluid is increased.) So due to this self-induction effect, electrostatic energy behaves as having some sort of momentum and "apparent" electromagnetic mass, which can increase the ordinary mechanical mass of the bodies, or in more modern terms, the increase should arise from their electromagnetic self-energy. This idea was worked out in more detail by Oliver Heaviside (1889), Thomson (1893), George Frederick Charles Searle (1897), Max Abraham (1902), Hendrik Lorentz (1892, 1904), and was directly applied to the electron by using the Abraham–Lorentz force. Now, the electrostatic energy and mass of an electron at rest were calculated to be
where e is the charge, uniformly distributed on the surface of a sphere, and a is the classical electron radius, which must be nonzero to avoid infinite energy accumulation. Thus the formula for this electromagnetic energy–mass relation is m_em = (4/3) E_em/c².
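The magnitudes involved can be sketched numerically. The block below (added for illustration, not part of the original article) assumes the standard textbook expressions E_em = e²/(8πε₀a) for the self-energy of a surface-charged sphere and m_em = (4/3)E_em/c², evaluated with a equal to the classical electron radius; the resulting fraction of the electron mass depends on the assumed charge distribution and radius convention.

```python
# Hedged numerical sketch of the classical electron radius and the corresponding
# "electromagnetic mass" of a surface-charged sphere of that radius.
import math

e    = 1.602176634e-19      # C
eps0 = 8.8541878128e-12     # F/m
c    = 2.99792458e8         # m/s
m_e  = 9.1093837015e-31     # kg

r_e  = e**2 / (4 * math.pi * eps0 * m_e * c**2)   # classical electron radius
E_em = e**2 / (8 * math.pi * eps0 * r_e)          # self-energy of a surface charge (assumed form)
m_em = (4 / 3) * E_em / c**2                      # Abraham–Lorentz electromagnetic mass

print(f"r_e  ≈ {r_e:.3e} m")                                  # ≈ 2.82e-15 m
print(f"m_em ≈ {m_em:.3e} kg ({m_em / m_e:.3f} of m_e)")      # a sizeable fraction of m_e
```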
This was discussed in connection with the proposal of the electrical origin of matter, so Wilhelm Wien (1900) and Max Abraham (1902) came to the conclusion that the total mass of a body is identical to its electromagnetic mass. Wien stated that if it is assumed that gravitation is an electromagnetic effect too, then there has to be a proportionality between electromagnetic energy, inertial mass, and gravitational mass. When one body attracts another, the electromagnetic energy store of gravitation is, according to Wien, diminished by the amount (where m is the attracted mass, G the gravitational constant, r the distance):
Henri Poincaré argued in 1906 that if mass is in fact a product of the electromagnetic field in the aether – implying that no "real" mass exists – and if matter is inseparably connected with mass, then matter does not exist at all and electrons are only concavities in the aether.
Mass and speed
Thomson and Searle
Thomson (1893) noticed that electromagnetic momentum and energy of charged bodies, and therefore their masses, depend on the speed of the bodies as well. He wrote:
In 1897, Searle gave a more precise formula for the electromagnetic energy of a charged sphere in motion:
and like Thomson he concluded:
Longitudinal and transverse mass
From Searle's formula, Walter Kaufmann (1901) and Max Abraham (1902) derived the formula for the electromagnetic mass of moving bodies:
However, it was shown by Abraham (1902), that this value is only valid in the longitudinal direction ("longitudinal mass"), i.e., that the electromagnetic mass also depends on the direction of the moving bodies with respect to the aether. Thus Abraham also derived the "transverse mass":
On the other hand, already in 1899 Lorentz assumed that the electrons undergo length contraction in the line of motion, which leads to results for the acceleration of moving electrons that differ from those given by Abraham. Lorentz obtained factors of k³l parallel to the direction of motion and kl perpendicular to the direction of motion, where k = 1/√(1 − v²/c²) (the Lorentz factor) and l is an undetermined factor. Lorentz expanded his 1899 ideas in his famous 1904 paper, where he set the factor l to unity, thus:
,
So, eventually Lorentz arrived at the same conclusion as Thomson in 1893: no body can reach the speed of light because the mass becomes infinitely large at this velocity.
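For a numerical feel for this conclusion, the sketch below evaluates Lorentz's longitudinal and transverse masses in their standard modern form, m_L = γ³m and m_T = γm (with l = 1); the code and the rest-mass value are illustrative additions, not taken from the source.

```python
# Longitudinal and transverse masses as functions of v/c; both diverge as v -> c.
import math

def gamma(beta: float) -> float:
    return 1.0 / math.sqrt(1.0 - beta**2)

m0 = 1.0   # rest (electromagnetic) mass in arbitrary units
for beta in (0.1, 0.5, 0.9, 0.99, 0.999):
    g = gamma(beta)
    print(f"v/c = {beta:5.3f}   m_L = {g**3 * m0:10.3f}   m_T = {g * m0:8.3f}")
```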
Additionally, a third electron model was developed by Alfred Bucherer and Paul Langevin, in which the electron contracts in the line of motion, and expands perpendicular to it, so that the volume remains constant. This gives:
Kaufmann's experiments
The predictions of the theories of Abraham and Lorentz were supported by the experiments of Walter Kaufmann (1901), but the experiments were not precise enough to distinguish between them. In 1905 Kaufmann conducted another series of experiments (Kaufmann–Bucherer–Neumann experiments) which confirmed Abraham's and Bucherer's predictions, but contradicted Lorentz's theory and the "fundamental assumption of Lorentz and Einstein", i.e., the relativity principle. In the following years experiments by Alfred Bucherer (1908), Gunther Neumann (1914) and others seemed to confirm Lorentz's mass formula. It was later pointed out that the Bucherer–Neumann experiments were also not precise enough to distinguish between the theories – it was not until 1940 that the required precision was achieved, eventually confirming Lorentz's formula and refuting Abraham's by these kinds of experiments. (However, other experiments of a different kind had already refuted Abraham's and Bucherer's formulas long before.)
Poincaré stresses and the 4/3 problem
The idea of an electromagnetic nature of matter, however, had to be given up. Abraham (1904, 1905) argued that non-electromagnetic forces were necessary to prevent Lorentz's contractile electrons from exploding. He also showed that different results for the longitudinal electromagnetic mass can be obtained in Lorentz's theory, depending on whether the mass is calculated from its energy or its momentum, so a non-electromagnetic potential (corresponding to 1/3 of the electron's electromagnetic energy) was necessary to render these masses equal. Abraham doubted whether it was possible to develop a model satisfying all of these properties.
To solve those problems, Henri Poincaré in 1905 and 1906 introduced some sort of pressure ("Poincaré stresses") of non-electromagnetic nature. As required by Abraham, these stresses contribute non-electromagnetic energy to the electrons, amounting to 1/4 of their total energy or to 1/3 of their electromagnetic energy. So, the Poincaré stresses remove the contradiction in the derivation of the longitudinal electromagnetic mass, they prevent the electron from exploding, they remain unaltered by a Lorentz transformation (i.e. they are Lorentz invariant), and were also thought of as a dynamical explanation of length contraction. However, Poincaré still assumed that only the electromagnetic energy contributes to the mass of the bodies.
As was later noted, the problem lies in the 4/3 factor of the electromagnetic rest mass – given above as m_em = (4/3) E_em/c² when derived from the Abraham–Lorentz equations. However, when it is derived from the electron's electrostatic energy alone, we have m_es = E_em/c², where the 4/3 factor is missing. This can be solved by adding the non-electromagnetic energy E_P = E_em/3 of the Poincaré stresses to E_em; the electron's total energy now becomes:
Thus the missing 4/3 factor is restored when the mass is related to the electromagnetic energy alone, and it disappears when the total energy is considered.
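The bookkeeping behind this paragraph can be written out explicitly; the equations below are a reconstruction using the standard textbook expressions, since the original formulas did not survive extraction.

```latex
% Poincaré-stress bookkeeping for the 4/3 problem (standard form, reconstructed)
\[
  m_{\mathrm{em}} \;=\; \frac{4}{3}\,\frac{E_{\mathrm{em}}}{c^{2}}
  \qquad\text{(from the Abraham--Lorentz momentum),}
\]
\[
  E_{\mathrm{P}} \;=\; \tfrac{1}{3}E_{\mathrm{em}},
  \qquad
  E_{\mathrm{tot}} \;=\; E_{\mathrm{em}} + E_{\mathrm{P}} \;=\; \tfrac{4}{3}E_{\mathrm{em}},
\]
\[
  \Rightarrow\quad
  m \;=\; \frac{E_{\mathrm{tot}}}{c^{2}}
    \;=\; \frac{4}{3}\,\frac{E_{\mathrm{em}}}{c^{2}}
    \;=\; m_{\mathrm{em}} .
\]
```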
Inertia of energy and radiation paradoxes
Radiation pressure
Another way of deriving some sort of electromagnetic mass was based on the concept of radiation pressure. These pressures or tensions in the electromagnetic field were derived by James Clerk Maxwell (1874) and Adolfo Bartoli (1876). Lorentz recognized in 1895 that those tensions also arise in his theory of the stationary aether. So if the electromagnetic field of the aether is able to set bodies in motion, the action/reaction principle demands that the aether must be set in motion by matter as well. However, Lorentz pointed out that any tension in the aether requires the mobility of the aether parts, which is not possible since in his theory the aether is immobile (unlike contemporaries such as Thomson, who used fluid descriptions). This represents a violation of the reaction principle that Lorentz accepted consciously. He continued by saying that one can only speak of fictitious tensions, since they are only mathematical models in his theory to ease the description of the electrodynamic interactions.
Mass of the fictitious electromagnetic fluid
In 1900 Poincaré studied the conflict between the action/reaction principle and Lorentz's theory. He tried to determine whether the center of gravity still moves with a uniform velocity when electromagnetic fields and radiation are involved. He noticed that the action/reaction principle does not hold for matter alone, but that the electromagnetic field has its own momentum (such a momentum was also derived by Thomson in 1893 in a more complicated way). Poincaré concluded that the electromagnetic field energy behaves like a fictitious fluid („fluide fictif“) with a mass density of E/c² (in other words, m_em = E/c²). Now, if the center of mass frame (COM-frame) is defined by both the mass of matter and the mass of the fictitious fluid, and if the fictitious fluid is indestructible – neither created nor destroyed – then the motion of the center of mass frame remains uniform.
But this electromagnetic fluid is not indestructible, because it can be absorbed by matter (which according to Poincaré was the reason why he regarded the em-fluid as "fictitious" rather than "real"). Thus the COM-principle would be violated again. As Einstein later did, an easy solution would be to assume that the mass of the em-field is transferred to matter in the absorption process. But Poincaré created another solution: He assumed that there exists an immobile non-electromagnetic energy fluid at each point in space, also carrying a mass proportional to its energy. When the fictitious em-fluid is destroyed or absorbed, its electromagnetic energy and mass are not carried away by moving matter, but are transferred into the non-electromagnetic fluid and remain at exactly the same place in that fluid. (Poincaré added that one should not be too surprised by these assumptions, since they are only mathematical fictions.) In this way, the motion of the COM-frame, including matter, fictitious em-fluid, and fictitious non-em-fluid, at least theoretically remains uniform.
However, since only matter and electromagnetic energy are directly observable by experiment (not the non-em-fluid), Poincaré's resolution still violates the reaction principle and the COM-theorem when an emission/absorption process is practically considered. This leads to a paradox when changing frames: if waves are radiated in a certain direction, the emitting device will suffer a recoil from the momentum of the fictitious fluid. Then, Poincaré performed a Lorentz boost (to first order in v/c) to the frame of the moving source. He noted that energy conservation holds in both frames, but that the law of conservation of momentum is violated. This would allow perpetual motion, a notion which he abhorred. The laws of nature would have to be different in the different frames of reference, and the relativity principle would not hold. Therefore, he argued that also in this case there has to be another compensating mechanism in the ether.
Poincaré came back to this topic in 1904. This time he rejected his own solution that motions in the ether can compensate the motion of matter, because any such motion is unobservable and therefore scientifically worthless. He also abandoned the concept that energy carries mass and wrote in connection to the above-mentioned recoil:
These iterative developments culminated in his 1906 publication "The End of Matter", in which he notes that when applying the methodology of using electric or magnetic field deflections to determine charge-to-mass ratios, one finds that the apparent mass added by charge makes up all of the apparent mass, thus the "real mass is equal to zero." He goes on to postulate that electrons are only holes or motion effects in the aether, while the aether itself is the only thing "endowed with inertia."
He then goes on to address the possibility that all matter might share this same quality and thereby his position changes from viewing aether as a "fictitious fluid" to suggesting it might be the only thing that actually exists in the universe, finally stating "In this system there is no actual matter, there are only holes in the aether."
Finally, he repeats this problem of "Newton's principle" from 1904 again in a 1908 publication: in his section on "the principle of reaction" he notes that the actions of radiation pressure cannot be tied solely to matter, in light of Fizeau's proof that the Hertz notion of total aether drag is untenable. This he clarifies in the next section, in his own explanation of mass–energy equivalence:
Thus Poincaré's mass of a fictitious fluid led him to, instead, later find that the mass of matter itself was "fictitious."
Einstein's own 1906 publication grants credit to Poincaré for previously exploring the mass–energy equivalence, and it is from these comments that it is commonly reported that Lorentz ether theory is "mathematically equivalent."
Momentum and cavity radiation
However, Poincaré's idea of momentum and mass associated with radiation proved to be fruitful, when in 1903 Max Abraham introduced the term „electromagnetic momentum“, having a field density of per cm3 and per cm2. Contrary to Lorentz and Poincaré, who considered momentum as a fictitious force, he argued that it is a real physical entity, and therefore conservation of momentum is guaranteed.
In 1904, Friedrich Hasenöhrl specifically associated inertia with radiation by studying the dynamics of a moving cavity. Hasenöhrl suggested that part of the mass of a body (which he called apparent mass) can be thought of as radiation bouncing around a cavity. The apparent mass of radiation depends on the temperature (because every heated body emits radiation) and is proportional to its energy, and he first concluded that m = (8/3) E/c². However, in 1905 Hasenöhrl published a summary of a letter written to him by Abraham. Abraham concluded that Hasenöhrl's formula for the apparent mass of radiation was not correct, and on the basis of his definition of electromagnetic momentum and longitudinal electromagnetic mass Abraham changed it to m = (4/3) E/c², the same value as for the electromagnetic mass of a body at rest. Hasenöhrl recalculated his own derivation and verified Abraham's result. He also noticed the similarity between the apparent mass and the electromagnetic mass that Poincaré would comment on in 1906. However, Hasenöhrl stated that this energy–apparent-mass relation only holds as long as a body radiates, i.e. if the temperature of the body is greater than 0 K.
Modern view
Mass–energy equivalence
The idea that the principal relations between mass, energy, momentum and velocity can only be considered on the basis of dynamical interactions of matter was superseded when Albert Einstein found out in 1905 that considerations based on the special principle of relativity require that all forms of energy (not only electromagnetic) contribute to the mass of bodies (mass–energy equivalence). That is, the entire mass of a body is a measure of its energy content by E = mc², and Einstein's considerations were independent of assumptions about the constitution of matter. By this equivalence, Poincaré's radiation paradox can be solved without using "compensating forces", because the mass of matter itself (not the non-electromagnetic aether fluid as suggested by Poincaré) is increased or diminished by the mass of electromagnetic energy in the course of the emission/absorption process. Also the idea of an electromagnetic explanation of gravitation was superseded in the course of developing general relativity.
So every theory dealing with the mass of a body must be formulated in a relativistic way from the outset. This is for example the case in the current quantum field explanation of mass of elementary particles in the framework of the Standard Model, the Higgs mechanism. Because of this, the idea that any form of mass is completely caused by interactions with electromagnetic fields, is not relevant any more.
Relativistic mass
The concepts of longitudinal and transverse mass (equivalent to those of Lorentz) were also used by Einstein in his first papers on relativity. However, in special relativity they apply to the entire mass of matter, not only to the electromagnetic part. Later it was shown by physicists like Richard Chace Tolman that expressing mass as the ratio of force and acceleration is not advantageous. Therefore, a similar concept without direction-dependent terms, in which force is defined as F = dp/dt, was used as the relativistic mass
This concept is sometimes still used in modern physics textbooks, although the term 'mass' is now considered by many to refer to invariant mass, see mass in special relativity.
Self-energy
When the special case of the electromagnetic self-energy or self-force of charged particles is discussed, some sort of "effective" electromagnetic mass is sometimes introduced in modern texts as well – not as an explanation of mass per se, but in addition to the ordinary mass of bodies. Many different reformulations of the Abraham–Lorentz force have been derived – for instance, in order to deal with the 4/3 problem (see next section) and other problems that arose from this concept. Such questions are discussed in connection with renormalization, and on the basis of quantum mechanics and quantum field theory, which must be applied when the electron is considered physically point-like. At distances located in the classical domain, the classical concepts again come into play. A rigorous derivation of the electromagnetic self-force, including the contribution to the mass of the body, was published by Gralla et al. (2009).
4/3 problem
Max von Laue in 1911 also used the Abraham–Lorentz equations of motion in his development of special relativistic dynamics, so that also in special relativity the 4/3 factor is present when the electromagnetic mass of a charged sphere is calculated. This contradicts the mass–energy equivalence formula, which requires the relation m_em = E_em/c² without the 4/3 factor, or in other words, four-momentum doesn't properly transform like a four-vector when the 4/3 factor is present. Laue found a solution equivalent to Poincaré's introduction of a non-electromagnetic potential (Poincaré stresses), but Laue showed its deeper, relativistic meaning by employing and advancing Hermann Minkowski's spacetime formalism. Laue's formalism requires additional components and forces, which guarantee that spatially extended systems (where both electromagnetic and non-electromagnetic energies are combined) form a stable or "closed system" and transform as a four-vector. That is, the 4/3 factor arises only with respect to the electromagnetic mass, while the closed system has total rest mass and energy related by m = E/c².
Another solution was found by authors such as Enrico Fermi (1922), Paul Dirac (1938), Fritz Rohrlich (1960), or Julian Schwinger (1983), who pointed out that the electron's stability and the 4/3-problem are two different things. They showed that the preceding definitions of four-momentum are non-relativistic per se, and by changing the definition into a relativistic form, the electromagnetic mass can simply be written as m_em = E_em/c², and thus the 4/3 factor doesn't appear at all. So every part of the system, not only "closed" systems, properly transforms as a four-vector. However, binding forces like the Poincaré stresses are still necessary to prevent the electron from exploding due to Coulomb repulsion. But on the basis of the Fermi–Rohrlich definition, this is only a dynamical problem and has nothing to do with the transformation properties any more.
Other solutions have also been proposed; for instance, Valery Morozov (2011) considered the movement of an imponderable charged sphere. It turned out that a flux of non-electromagnetic energy exists in the sphere's body. This flux has a momentum exactly equal to 1/3 of the sphere's electromagnetic momentum, regardless of the sphere's internal structure or the material it is made of. The problem was solved without invoking any additional hypotheses. In this model, the sphere's tensions are not connected with its mass.
See also
History of special relativity
Abraham–Lorentz force
Wheeler–Feynman absorber theory
Secondary sources
Primary sources
Special relativity
Obsolete theories in physics
Accelerated Reader
Accelerated Reader (AR) is an educational program created by Renaissance Learning. It is designed to monitor and manage students' independent reading practice and comprehension in both English and Spanish. The program assesses students' performance through quizzes and tests based on the books they have read. As the students read and take quizzes, they are awarded points. AR monitors students' progress and establishes personalised reading goals according to their reading levels.
Components
ATOS
ATOS is a readability formula designed by Renaissance Learning.
Books with quizzes in Accelerated Reader are assigned an ATOS readability level. This ATOS score is used by AR in combination with a book length to assign a point value to each book. It can also be used by students to help choose books of appropriate reading levels.
Quiz
Accelerated Reader (AR) quizzes (going up to 7th grade) are available on fiction and non-fiction books, textbooks, supplemental materials, and magazines. Most are in the form of reading practice quizzes, although some are curriculum-based with multiple subjects.
Many of the company's quizzes are available in an optional recorded-voice format for primary-level books, in which the quiz questions and answers are read to the student taking the quiz. These quizzes are designed to help emergent English and Spanish readers take the quizzes without additional assistance.
The Renaissance Place version of Accelerated Reader also includes quizzes designed to practice vocabulary. The quizzes use words from books, and are taken after the book has been read. Bookmarks can be printed out to display the vocabulary words so that as students read, they can refer to the bookmark for help. The quizzes will keep track of the words learned.
Reports
Reports are generated on demand to help students, teachers, and parents monitor student progress. Reports are available regarding student reading, comprehension, amount of reading, diagnostic information, and other variables. Customizable reports available in the Renaissance Place edition can also report district-level information.
The TOPS Report (The Opportunity to Praise Students) reports quiz results after each quiz is taken. Diagnostic Reports identify students in need of intervention based on various factors. The Student Record Report is a complete record of the books the student has read.
Evaluation research
A number of studies have been conducted regarding the effectiveness of using Accelerated Reader in the classroom. The following two studies were reviewed by the What Works Clearinghouse and were found to meet their research standards.
In a study conducted in Memphis, Tennessee, 1,665 students and 76 teachers from 12 schools (grades K-8) were surveyed. The study involved randomly selecting some teachers to implement the Accelerated Reader software, while others continued with the regular curriculum without the software. The results indicated that students in classrooms utilising the Accelerated Reader program showed academic gains.
In another study, Nunnery, Ross, and McDonald evaluated the reading achievement of students in grades 3–8. They examined the impact of individual, classroom, and school factors on reading achievement. The findings revealed that students in classrooms using the Accelerated Reader program outperformed those in control classrooms. Additionally, students with learning disabilities in classrooms with high levels of Accelerated Reader implementation showed better performance compared to similar students in classrooms with low or no implementation.
Other evaluations
In a controlled evaluation, Holmes and Brown found that two schools using the School Renaissance program achieved statistically significantly higher standardized test scores compared with two comparison schools that only used the Renaissance program in a limited way. Because so many schools in the United States are using Accelerated Reader, it was difficult for the authors of this study to find two schools in Georgia that were not already using Accelerated Reader. The authors noted:
In 2003, Samuels and Wu found that after six months, third and fifth grade students who used Accelerated Reader demonstrated twice the gain in reading comprehension of those who did not use Accelerated Reader. The comparison students completed book reports, suggesting that delayed feedback through book reports is not as useful as the immediate feedback provided by Accelerated Reader. In another study, Samuels and Wu found that, after controlling for the amount of time spent reading each day, students in Accelerated Reader classrooms in a Minnesota elementary school outperformed students in control classrooms.
Researcher Keith Topping completed many studies on Accelerated Reader that found the software to be an effective assessment tool for informing curriculum decisions.
Criticism
Renaissance Learning, the developer of Accelerated Reader, has outlined the primary purpose of the program as an assessment tool to gauge whether students have read a book, not to assess higher-order thinking skills, to teach or otherwise replace curriculum, to supersede the role of the teacher, or to provide an extrinsic reward. Educator Jim Trelease, however, describes Accelerated Reader, along with Scholastic's Reading Counts!, as "reading incentive software" in an article exploring the pros and cons of the two software packages. Stephen D. Krashen, in a 2003 literature review, also asserts that reading incentives are one aspect of Accelerated Reader. He reiterates prior research stating that reading for incentives does not create long-term readers.
Renaissance Place does include recognizing setting and understanding sequence as examples of higher-order thinking. Turner and Paris's study explores the role of classroom literacy tasks in which students take end-of-book tests called Reading Practice Quizzes that are composed of literal-recall questions to which there is only one answer. Turner and Paris would classify these quizzes as "closed tasks." They concluded that open-ended tasks are more supportive of literacy growth in the future.
Florida Center for Reading Research, citing two studies that support the product, noted both the lack of available books in a school's library and the lack of assessment of "inferential or critical thinking skills" as weaknesses of the software. Their guide also noted a number of strengths of the software, including its ability to motivate students and provide immediate results on student's reading habits and progress.
Use of the program has been criticized by Scholastic as preventing children from reading at a variety of difficulty levels. A PowerPoint from Scholastic made in 2006 indicates that 39% of children between the ages of five and ten have read a Harry Potter novel, with 68% of students in that age range having an interest in reading or re-reading a Harry Potter book. For example, the ATOS reading level of Harry Potter and the Philosopher's Stone is 5.5 (with ATOS numbers corresponding to grade levels). This would indicate that students below that grade range may not be able to read and comprehend the book. Since teachers, parents and students use readability levels to select books, this may discourage students from reading the book, as the student is under pressure to earn Accelerated Reader points during the school year. However, students can take tests and earn points for books at any ATOS level.
References
External links
Accelerated Reader
ERIC - Education Resources Information Center
National Center on Student Progress Monitoring
Software for children
Renaissance Learning software
Children's educational video games
Yang–Mills theory
Yang–Mills theory is a quantum field theory for nuclear binding devised by Chen Ning Yang and Robert Mills in 1953, as well as a generic term for the class of similar theories. The Yang–Mills theory is a gauge theory based on a special unitary group SU(N), or more generally any compact Lie group. A Yang–Mills theory seeks to describe the behavior of elementary particles using these non-abelian Lie groups and is at the core of the unification of the electromagnetic and weak forces (i.e. U(1) × SU(2)) as well as quantum chromodynamics, the theory of the strong force (based on SU(3)). Thus it forms the basis of the understanding of the Standard Model of particle physics.
History and qualitative description
Gauge theory in electrodynamics
All known fundamental interactions can be described in terms of gauge theories, but working this out took decades. Hermann Weyl's pioneering work on this project started in 1915 when his colleague Emmy Noether proved that every conserved physical quantity has a matching symmetry, and culminated in 1928 when he published his book applying the geometrical theory of symmetry (group theory) to quantum mechanics. Weyl named the relevant symmetry in Noether's theorem the "gauge symmetry", by analogy to distance standardization in railroad gauges.
Erwin Schrödinger in 1922, three years before working on his equation, connected Weyl's group concept to electron charge. Schrödinger showed that the group U(1) produced a phase shift in electromagnetic fields that matched the conservation of electric charge. As the theory of quantum electrodynamics developed in the 1930s and 1940s, the U(1) group transformations played a central role. Many physicists thought there must be an analog for the dynamics of nucleons.
Chen Ning Yang in particular was obsessed with this possibility.
Yang and Mills find the nuclear force gauge theory
Yang's core idea was to look for a conserved quantity in nuclear physics comparable to electric charge and use it to develop a corresponding gauge theory comparable to electrodynamics. He settled on conservation of isospin, a quantum number that distinguishes a neutron from a proton, but he made no progress on a theory. Taking a break from Princeton in the summer of 1953, Yang met a collaborator who could help: Robert Mills. As Mills himself describes:"During the academic year 1953–1954, Yang was a visitor to Brookhaven National Laboratory ... I was at Brookhaven also ... and was assigned to the same office as Yang. Yang, who has demonstrated on a number of occasions his generosity to physicists beginning their careers, told me about his idea of generalizing gauge invariance and we discussed it at some length ... I was able to contribute something to the discussions, especially with regard to the quantization procedures, and to a small degree in working out the formalism; however, the key ideas were Yang's."
In the summer of 1953, Yang and Mills extended the concept of gauge theory for abelian groups, e.g. quantum electrodynamics, to non-abelian groups, selecting the group SU(2) to provide an explanation for isospin conservation in collisions involving the strong interactions. Yang's presentation of the work at Princeton in February 1954 was challenged by Pauli, who asked about the mass of the field developed with the gauge invariance idea. Pauli knew that this might be an issue, as he had worked on applying gauge invariance himself but chose not to publish it, viewing the massless excitations of the theory as "unphysical 'shadow particles'". Yang and Mills published in October 1954; near the end of the paper, they admit:
This problem of unphysical massless excitations blocked further progress.
The idea was set aside until 1960, when the concept of particles acquiring mass through symmetry breaking in massless theories was put forward, initially by Jeffrey Goldstone, Yoichiro Nambu, and Giovanni Jona-Lasinio. This prompted a significant restart of Yang–Mills theory studies that proved successful in the formulation of both electroweak unification and quantum chromodynamics (QCD). The electroweak interaction is described by the gauge group SU(2) × U(1), while QCD is an SU(3) Yang–Mills theory. The massless gauge bosons of the electroweak theory mix after spontaneous symmetry breaking to produce the three massive bosons of the weak interaction (W+, W−, and Z) as well as the still-massless photon field. The dynamics of the photon field and its interactions with matter are, in turn, governed by the gauge theory of quantum electrodynamics. The Standard Model combines the strong interaction with the unified electroweak interaction (unifying the weak and electromagnetic interaction) through the symmetry group SU(3) × SU(2) × U(1). In the current epoch the strong interaction is not unified with the electroweak interaction, but from the observed running of the coupling constants it is believed they all converge to a single value at very high energies.
Phenomenology at lower energies in quantum chromodynamics is not completely understood due to the difficulties of managing such a theory with a strong coupling. This may be the reason why confinement has not been theoretically proven, though it is a consistent experimental observation. This shows why QCD confinement at low energy is a mathematical problem of great relevance, and why the Yang–Mills existence and mass gap problem is a Millennium Prize Problem.
Parallel work on non-Abelian gauge theories
In 1953, in a private correspondence, Wolfgang Pauli formulated a six-dimensional theory of Einstein's field equations of general relativity, extending the five-dimensional theory of Kaluza, Klein, Fock, and others to a higher-dimensional internal space. However, there is no evidence that Pauli developed the Lagrangian of a gauge field or the quantization of it. Because Pauli found that his theory "leads to some rather unphysical shadow particles", he refrained from publishing his results formally. Although Pauli did not publish his six-dimensional theory, he gave two seminar lectures about it in Zürich in November 1953.
In January 1954 Ronald Shaw, a graduate student at the University of Cambridge, also developed a non-Abelian gauge theory for nuclear forces.
However, the theory needed massless particles in order to maintain gauge invariance. Since no such massless particles were known at the time, Shaw and his supervisor Abdus Salam chose not to publish their work.
Shortly after Yang and Mills published their paper in October 1954, Salam encouraged Shaw to publish his work to mark his contribution. Shaw declined, and instead it only forms a chapter of his PhD thesis published in 1956.
Mathematical overview
Yang–Mills theories are special examples of gauge theories with a non-abelian symmetry group given by the Lagrangian
with T^a the generators of the Lie algebra, indexed by a, corresponding to the F-quantities (the curvature or field-strength form) satisfying
Here, the f^abc are structure constants of the Lie algebra (totally antisymmetric if the generators of the Lie algebra are normalised such that tr(T^a T^b) is proportional to δ^ab), and the covariant derivative is defined as
is the identity matrix (matching the size of the generators), is the vector potential, and is the coupling constant. In four dimensions, the coupling constant is a pure number and for a group one has
The relation
can be derived by the commutator
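The equations referred to in the preceding lines did not survive extraction; for orientation, the standard textbook forms they correspond to are collected below (a reconstruction; sign conventions and factors of g vary between sources).

```latex
% Standard Yang–Mills expressions (reconstructed; conventions vary)
\[
  \mathcal{L} \;=\; -\tfrac{1}{4}\, F^{a}_{\mu\nu} F^{a\,\mu\nu},
  \qquad
  F^{a}_{\mu\nu} \;=\; \partial_{\mu}A^{a}_{\nu} - \partial_{\nu}A^{a}_{\mu}
    + g\, f^{abc} A^{b}_{\mu} A^{c}_{\nu},
\]
\[
  D_{\mu} \;=\; \partial_{\mu} - i g\, T^{a} A^{a}_{\mu},
  \qquad
  [D_{\mu}, D_{\nu}] \;=\; -\,i g\, T^{a} F^{a}_{\mu\nu},
  \qquad
  [T^{a}, T^{b}] \;=\; i f^{abc}\, T^{c}.
\]
```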
The field has the property of being self-interacting and the equations of motion that one obtains are said to be semilinear, as nonlinearities are both with and without derivatives. This means that one can manage this theory only by perturbation theory with small nonlinearities.
Note that the transition between "upper" ("contravariant") and "lower" ("covariant") vector or tensor components is trivial for a indices (e.g. ), whereas for μ and ν it is nontrivial, corresponding e.g. to the usual Lorentz signature,
From the given Lagrangian one can derive the equations of motion given by
Putting these can be rewritten as
A Bianchi identity holds
which is equivalent to the Jacobi identity
since Define the dual strength tensor
then the Bianchi identity can be rewritten as
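The corresponding standard expressions for the equations of motion, the Bianchi identity and the dual field strength are given below (again a hedged reconstruction, since the originals were stripped; the sign of the source term depends on conventions).

```latex
% Equations of motion, Bianchi identity, and dual field strength (standard forms, reconstructed)
\[
  D^{\mu} F^{a}_{\mu\nu}
  \;=\; \partial^{\mu} F^{a}_{\mu\nu} + g\, f^{abc} A^{b\,\mu} F^{c}_{\mu\nu}
  \;=\; 0,
\]
\[
  D_{\mu} F^{a}_{\nu\rho} + D_{\nu} F^{a}_{\rho\mu} + D_{\rho} F^{a}_{\mu\nu} \;=\; 0
  \qquad\text{(Bianchi identity)},
\]
\[
  \tilde{F}^{a\,\mu\nu} \;=\; \tfrac{1}{2}\,\varepsilon^{\mu\nu\rho\sigma} F^{a}_{\rho\sigma},
  \qquad
  D_{\mu} \tilde{F}^{a\,\mu\nu} \;=\; 0 ,
\]
with an external current the equation of motion becomes
\(\,D^{\mu} F^{a}_{\mu\nu} = -\,j^{a}_{\nu}\).
```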
A source enters into the equations of motion as
Note that the currents must properly change under gauge group transformations.
We give here some comments about the physical dimensions of the coupling. In d dimensions, the field scales as and the coupling must scale as . This implies that Yang–Mills theory is not renormalizable for dimensions greater than four. Furthermore, for d = 4 the coupling is dimensionless, and both the field and the square of the coupling have the same dimensions as the field and the coupling of a massless quartic scalar field theory. So, these theories share scale invariance at the classical level.
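The power counting behind these remarks, in its standard form (a reconstruction, with M denoting mass dimension):

```latex
% Mass dimensions of the gauge field and coupling in d spacetime dimensions
\[
  [A_{\mu}] = M^{\frac{d-2}{2}},
  \qquad
  [g] = M^{\frac{4-d}{2}},
\]
so the coupling is dimensionless precisely at \(d = 4\) and carries negative mass
dimension (signalling non-renormalizability by power counting) for \(d > 4\).
```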
Quantization
A method of quantizing the Yang–Mills theory is by functional methods, i.e. path integrals. One introduces a generating functional for n-point functions as
but this integral has no meaning as it is because the potential vector can be arbitrarily chosen due to the gauge freedom. This problem was already known for quantum electrodynamics but here becomes more severe due to non-abelian properties of the gauge group. A way out has been given by Ludvig Faddeev and Victor Popov with the introduction of a ghost field (see Faddeev–Popov ghost) that has the property of being unphysical since, although it agrees with Fermi–Dirac statistics, it is a complex scalar field, which violates the spin–statistics theorem. So, we can write the generating functional as
being
for the field,
for the gauge fixing and
for the ghost. This is the expression commonly used to derive Feynman's rules (see Feynman diagram). Here we have c^a for the ghost field, while ξ fixes the choice of gauge for the quantization. Feynman's rules obtained from this functional are the following
These rules for Feynman's diagrams can be obtained when the generating functional given above is rewritten as
with
being the generating functional of the free theory. Expanding in the coupling g and computing the functional derivatives, we are able to obtain all the n-point functions with perturbation theory. Using the LSZ reduction formula, we get from the n-point functions the corresponding process amplitudes, cross sections and decay rates. The theory is renormalizable and corrections are finite at any order of perturbation theory.
For quantum electrodynamics the ghost field decouples because the gauge group is abelian. This can be seen from the coupling between the gauge field and the ghost field, which is proportional to the structure constants. For the abelian case, all the structure constants are zero and so there is no coupling. In the non-abelian case, the ghost field appears as a useful way to rewrite the quantum field theory without physical consequences for the observables of the theory such as cross sections or decay rates.
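In the standard covariant-gauge form (a reconstruction; conventions differ between texts), the gauge-fixing and ghost terms read as follows, and the ghost–gluon coupling is indeed proportional to the structure constants:

```latex
% Covariant gauge-fixing and Faddeev–Popov ghost terms (standard form, reconstructed)
\[
  S_{\mathrm{gf}} \;=\; -\frac{1}{2\xi}\int d^{4}x\,
      \bigl(\partial^{\mu} A^{a}_{\mu}\bigr)^{2},
  \qquad
  S_{\mathrm{ghost}} \;=\; -\int d^{4}x\;
      \bar{c}^{\,a}\,\partial^{\mu}\bigl(D_{\mu} c\bigr)^{a},
\]
\[
  \bigl(D_{\mu} c\bigr)^{a} \;=\; \partial_{\mu} c^{a}
      + g\, f^{abc} A^{b}_{\mu} c^{c},
\]
where \(c^{a}\), \(\bar{c}^{\,a}\) are the ghost and antighost fields and \(\xi\)
is the gauge-fixing parameter; setting \(f^{abc}=0\) removes the ghost--gauge coupling.
```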
One of the most important results obtained for Yang–Mills theory is asymptotic freedom. This result can be obtained by assuming that the coupling constant is small (so small nonlinearities), as for high energies, and applying perturbation theory. The relevance of this result is due to the fact that a Yang–Mills theory that describes strong interaction and asymptotic freedom permits proper treatment of experimental results coming from deep inelastic scattering.
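As a numerical illustration of asymptotic freedom (added here, not part of the original article), the standard one-loop running of the SU(3) coupling can be evaluated; the value of Λ and the flavour count below are assumed for illustration only.

```python
# One-loop running coupling alpha_s(Q) = 4*pi / (beta0 * ln(Q^2 / Lambda^2)) for SU(3).
import math

def alpha_s(Q: float, n_f: int = 5, Lambda: float = 0.2) -> float:
    """One-loop strong coupling at scale Q (GeV); Lambda in GeV (assumed value)."""
    beta0 = 11 - 2 * n_f / 3          # one-loop beta coefficient for SU(3) with n_f flavours
    return 4 * math.pi / (beta0 * math.log(Q**2 / Lambda**2))

for Q in (1.0, 10.0, 100.0, 1000.0):
    print(f"Q = {Q:7.1f} GeV   alpha_s ≈ {alpha_s(Q):.3f}")
# The coupling shrinks as Q grows: interactions become weak at high energies
# (asymptotic freedom) and grow strong in the infrared, where perturbation theory fails.
```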
To obtain the behavior of the Yang–Mills theory at high energies, and so to prove asymptotic freedom, one applies perturbation theory assuming a small coupling. This is verified a posteriori in the ultraviolet limit. In the opposite limit, the infrared limit, the situation is the opposite, as the coupling is too large for perturbation theory to be reliable. Most of the difficulty that research meets lies in managing the theory at low energies. That is the interesting case, being inherent to the description of hadronic matter and, more generally, to all the observed bound states of gluons and quarks and their confinement (see hadrons). The most used method to study the theory in this limit is to try to solve it on computers (see lattice gauge theory). In this case, large computational resources are needed to be sure the correct limit of infinite volume (smaller lattice spacing) is obtained. This is the limit the results must be compared with. Smaller spacing and larger coupling are not independent of each other, and larger computational resources are needed for each. As of today, the situation appears somewhat satisfactory for the hadronic spectrum and the computation of the gluon and ghost propagators, but the glueball and hybrid spectra are still an open question in view of the experimental observation of such exotic states. Indeed, the resonance
is not seen in any of such lattice computations and contrasting interpretations have been put forward. This is a hotly debated issue.
Open problems
Yang–Mills theories met with general acceptance in the physics community after Gerard 't Hooft, in 1972, worked out their renormalization, relying on a formulation of the problem worked out by his advisor Martinus Veltman.
Renormalizability is obtained even if the gauge bosons described by this theory are massive, as in the electroweak theory, provided the mass is only an "acquired" one, generated by the Higgs mechanism.
The mathematics of the Yang–Mills theory is a very active field of research, yielding e.g. invariants of differentiable structures on four-dimensional manifolds via work of Simon Donaldson. Furthermore, the field of Yang–Mills theories was included in the Clay Mathematics Institute's list of "Millennium Prize Problems". Here the prize-problem consists, especially, in a proof of the conjecture that the lowest excitations of a pure Yang–Mills theory (i.e. without matter fields) have a finite mass-gap with regard to the vacuum state. Another open problem, connected with this conjecture, is a proof of the confinement property in the presence of additional fermions.
In physics, the study of Yang–Mills theories does not usually start from perturbation analysis or analytical methods; more recently, it has proceeded from the systematic application of numerical methods to lattice gauge theories.
See also
Aharonov–Bohm effect
Coulomb gauge
Deformed Hermitian Yang–Mills equations
Gauge covariant derivative
Gauge theory (mathematics)
Hermitian Yang–Mills equations
Kaluza–Klein theory
Lattice gauge theory
Lorenz gauge
N = 4 supersymmetric Yang–Mills theory
Propagator
Quantum gauge theory
Field theoretical formulation of the standard model
Symmetry in physics
Two-dimensional Yang–Mills theory
Weyl gauge
Yang–Mills equations
Yang–Mills existence and mass gap
Yang–Mills–Higgs equations
References
Further reading
Books
Articles
External links
Gauge theories
Symmetry
Axial parallelism
Axial parallelism (also called gyroscopic stiffness, inertia or rigidity, or "rigidity in space") is the characteristic of a rotating body in which the direction of the axis of rotation remains fixed as the object moves through space. In astronomy, this characteristic is found in astronomical bodies in orbit. It is the same effect that causes a gyroscope's axis of rotation to remain constant as Earth rotates, allowing the devices to measure Earth's rotation.
Examples
Earth's axial parallelism
Throughout its orbit, the Earth, with its axis tilted at 23.5 degrees, exhibits approximate axial parallelism, maintaining the axis's direction towards Polaris (the "North Star") year-round. Together with the Earth's axial tilt, this is one of the primary reasons for the Earth's seasons. It is also the reason that the stars appear fixed in the night sky, with a seemingly "fixed" pole star, throughout Earth's orbit around the Sun.
Minor variation in the direction of the axis, known as axial precession, takes place over the course of 26,000 years. As a result, over the next 11,000 years the Earth's axis will move to point towards Vega instead of Polaris.
Other astronomical examples
Axial parallelism is widely observed in astronomy. For example, the axial parallelism of the Moon's orbital plane is a key factor in the phenomenon of eclipses. The Moon's orbital axis precesses a full circle during its roughly 18.6-year nodal precession cycle. When the Moon's orbital tilt is aligned with the ecliptic tilt, the Moon's orbit reaches about 29 degrees from Earth's equator, while when the two tilts are anti-aligned (about 9 years later), the orbit's inclination to the equator is only about 18 degrees.
In addition, the rings of Saturn remain in a fixed direction as that planet revolves around the Sun.
Explanation
Early gyroscopes were used to demonstrate the principle, most notably in Foucault's gyroscope experiment. Prior to the invention of the gyroscope, the effect had been explained by scientists in various ways. The early modern astronomer David Gregory, a contemporary of Isaac Newton, wrote:
To explain the Motion of the Celestial Bodies about their proper Axes, given in Position, and the Revolutions of them… If a Body be said to be moved about a given Axe, being in other respects not moved, that Axe is suppos'd to be unmov'd, and every point out of it to describe a Circle, to whose Plane the Axis is perpendicular. And for that reason, if a Body be carried along a line, and at the same time be revolved about a given Axe; the Axe, in all the time of the Body's motion, will continue parallel to it self. Nor is any thing else required to preserve this Parallelism, than that no other Motion besides these two be impressed upon the Body; for if there be no other third Motion in it, its Axe will continue always parallel to the Right-line, to which it was once parallel.
This gyroscopic effect is described in modern times as "gyroscopic stiffness" or "rigidity in space". The Newtonian mechanical explanation is known as the conservation of angular momentum.
See also
Axial tilt
Polar motion
Rotation around a fixed axis
True polar wander
References
Technical factors of astrology
Celestial mechanics
Time constant
In physics and engineering, the time constant, usually denoted by the Greek letter τ (tau), is the parameter characterizing the response to a step input of a first-order, linear time-invariant (LTI) system. The time constant is the main characteristic unit of a first-order LTI system. It gives the speed of the response.
In the time domain, the usual choice to explore the time response is through the step response to a step input, or the impulse response to a Dirac delta function input. In the frequency domain (for example, looking at the Fourier transform of the step response, or using an input that is a simple sinusoidal function of time) the time constant also determines the bandwidth of a first-order time-invariant system, that is, the frequency at which the output signal power drops to half the value it has at low frequencies.
The time constant is also used to characterize the frequency response of various signal processing systems – magnetic tapes, radio transmitters and receivers, record cutting and replay equipment, and digital filters – which can be modelled or approximated by first-order LTI systems. Other examples include time constant used in control systems for integral and derivative action controllers, which are often pneumatic, rather than electrical.
Time constants are a feature of the lumped system analysis (lumped capacity analysis method) for thermal systems, used when objects cool or warm uniformly under the influence of convective cooling or warming.
Physically, the time constant represents the elapsed time required for the system response to decay to zero if the system had continued to decay at the initial rate; because of the progressive change in the rate of decay, the response will have actually decreased in value to 1/e ≈ 36.8% of its initial value in this time (say from a step decrease). In an increasing system, the time constant is the time for the system's step response to reach 1 − 1/e ≈ 63.2% of its final (asymptotic) value (say from a step increase). In radioactive decay the time constant is related to the decay constant (λ), and it represents both the mean lifetime of a decaying system (such as an atom) before it decays, and the time it takes for all but 36.8% of the atoms to decay. For this reason, the time constant is longer than the half-life, which is the time for only 50% of the atoms to decay.
Differential equation
First order LTI systems are characterized by the differential equation
τ dV/dt + V = f(t),
where τ represents the exponential decay constant and V is a function of time t, V = V(t).
The right-hand side is the forcing function f(t), describing an external driving function of time, which can be regarded as the system input, to which V(t) is the response, or system output. Classical examples for f(t) are:
The Heaviside step function, often denoted by u(t);
the impulse function, often denoted by δ(t); and also the sinusoidal input function
f(t) = A sin(2πft)
or
f(t) = A exp(jωt),
where A is the amplitude of the forcing function, f is the frequency in hertz, and ω = 2πf is the frequency in radians per second.
Example solution
An example solution to the differential equation with initial value V0 and no forcing function is
V(t) = V0 e^(−t/τ),
where V0 is the initial value of V. Thus, the response is an exponential decay with time constant τ.
Discussion
Suppose
V(t) = V0 e^(−t/τ).
This behavior is referred to as a "decaying" exponential function. The time τ (tau) is referred to as the "time constant" and can be used (as in this case) to indicate how rapidly an exponential function decays.
Here:
t is time (generally t > 0 in control engineering)
V0 is the initial value (see "specific cases" below).
Specific cases
Let ; then , and so
Let ; then
Let , and so
Let ; then
After a period of one time constant the function reaches e^(−1) ≈ 37% of its initial value. In case 4, after five time constants the function reaches a value less than 1% of its original. In most cases this 1% threshold is considered sufficient to assume that the function has decayed to zero – as a rule of thumb, in control engineering a stable system is one that exhibits such an overall damped behavior.
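The decay figures quoted above can be checked numerically. The following short Python sketch (an illustrative addition, not part of the original article; the value of τ is arbitrary) evaluates the normalized response exp(−t/τ) at integer multiples of the time constant:

    import math

    tau = 1.0  # time constant in seconds (value chosen only for illustration)
    for n in range(1, 6):
        t = n * tau
        remaining = math.exp(-t / tau)   # fraction of the initial value remaining
        print(f"t = {n} tau: {remaining:.4f} of V0 remains")
    # After 1 tau about 0.3679 (37%) remains; after 5 tau about 0.0067 (<1%).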
Relation of time constant to bandwidth
Suppose the forcing function is chosen as sinusoidal so:
(Response to a real cosine or sine wave input can be obtained by taking the real or imaginary part of the final result by virtue of Euler's formula.) The general solution to this equation for times t ≥ 0, assuming V(t = 0) = V0, is:
For long times the decaying exponentials become negligible and the steady-state solution or long-time solution is:
The magnitude of this response is:
By convention, the bandwidth of this system is the frequency where |V|² drops to half-value, or where ωτ = 1. This is the usual bandwidth convention, defined as the frequency range where power drops by less than half (at most −3 dB). Using the frequency in hertz, rather than radians/s, the bandwidth is f3dB = 1/(2πτ).
The notation f3dB stems from the expression of power in decibels and the observation that half-power corresponds to a drop in the value of |V|² by a factor of 1/2 or by 3 decibels.
Thus, the time constant determines the bandwidth of this system.
Step response with arbitrary initial conditions
Suppose the forcing function is chosen as a step input so:
with u(t) the unit step function. The general solution to this equation for times t ≥ 0, assuming V(t = 0) = V0, is:
(It may be observed that this response is the ω → 0 limit of the above response to a sinusoidal input.)
The long-time solution is time independent and independent of initial conditions:
The time constant remains the same for the same system regardless of the starting conditions. Simply stated, a system approaches its final, steady-state situation at a constant rate, regardless of how close it is to that value at any arbitrary starting point.
For example, consider an electric motor whose startup is well modelled by a first-order LTI system. Suppose that, when started from rest, the motor takes one time constant to reach 63% of its nominal speed of 100 RPM, or 63 RPM—a shortfall of 37 RPM. Then it will be found that, after the next time constant, the motor has sped up an additional 23 RPM, which equals 63% of that 37 RPM difference. This brings it to 86 RPM—still 14 RPM low. After a third time constant, the motor will have gained an additional 9 RPM (63% of that 14 RPM difference), putting it at 95 RPM.
In fact, given any initial speed s RPM below the nominal 100 RPM, one time constant later this particular motor will have gained an additional 0.63 × (100 − s) RPM.
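A short numerical sketch of the motor example follows. The nominal speed and the 63% figure come from the text above; the normalized time constant is an assumption made purely so the example runs with concrete numbers.

    import math

    nominal = 100.0   # nominal speed in RPM (from the example above)
    tau = 1.0         # one time constant, in arbitrary units (assumed for illustration)
    for k in range(1, 4):
        t = k * tau
        speed = nominal * (1.0 - math.exp(-t / tau))  # first-order step response from rest
        print(f"after {k} time constant(s): {speed:.0f} RPM")
    # Prints roughly 63, 86 and 95 RPM, matching the figures in the example.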
Examples
Time constants in electrical circuits
In an RL circuit composed of a single resistor and inductor, the time constant (in seconds) is
τ = L / R,
where R is the resistance (in ohms) and L is the inductance (in henrys).
Similarly, in an RC circuit composed of a single resistor and capacitor, the time constant (in seconds) is:
τ = R C,
where R is the resistance (in ohms) and C is the capacitance (in farads).
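As a concrete illustration (the component values below are assumed, not taken from the article), the time constants of a simple RC and a simple RL circuit follow directly from these formulas:

    R = 10e3    # resistance in ohms (assumed example value)
    C = 100e-6  # capacitance in farads (assumed example value)
    L = 50e-3   # inductance in henrys (assumed example value)

    tau_rc = R * C   # time constant of the RC circuit, in seconds
    tau_rl = L / R   # time constant of the RL circuit, in seconds
    print(f"RC time constant: {tau_rc:.3f} s")   # 1.000 s
    print(f"RL time constant: {tau_rl:.2e} s")   # 5.00e-06 s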
Electrical circuits are often more complex than these examples, and may exhibit multiple time constants (See Step response and Pole splitting for some examples.) In the case where feedback is present, a system may exhibit unstable, increasing oscillations. In addition, physical electrical circuits are seldom truly linear systems except for very low amplitude excitations; however, the approximation of linearity is widely used.
In digital electronic circuits another measure, the FO4, is often used; it can be converted to time constant units.
Thermal time constant
Time constants are a feature of the lumped system analysis (lumped capacity analysis method) for thermal systems, used when objects cool or warm uniformly under the influence of convective cooling or warming. In this case, the heat transfer from the body to the ambient at a given time is proportional to the temperature difference between the body and the ambient:
F = h As (T(t) − Ta),
where h is the heat transfer coefficient, As is the surface area, T is the temperature function, i.e., T(t) is the body temperature at time t, and Ta is the constant ambient temperature. The positive sign indicates the convention that F is positive when heat is leaving the body because its temperature is higher than the ambient temperature (F is an outward flux). As heat is lost to the ambient, this heat transfer leads to a drop in temperature of the body given by:
ρ cp V dT/dt = −F,
where ρ = density, cp = specific heat and V is the body volume. The negative sign indicates the temperature drops when the heat transfer is outward from the body (that is, when F > 0). Equating these two expressions for the heat transfer,
ρ cp V dT/dt = −h As (T(t) − Ta).
Evidently, this is a first-order LTI system that can be cast in the form:
dT/dt + T/τ = Ta/τ,
with
τ = ρ cp V / (h As).
In other words, larger masses ρV with higher heat capacities cp lead to slower changes in temperature (longer time constant τ), while larger surface areas As with higher heat transfer h lead to more rapid temperature change (shorter time constant τ).
Comparison with the introductory differential equation suggests the possible generalization to time-varying ambient temperatures Ta. However, retaining the simple constant ambient example, by substituting the variable ΔT ≡ (T − Ta), one finds:
dΔT/dt + ΔT/τ = 0.
Systems for which cooling satisfies the above exponential equation are said to satisfy Newton's law of cooling. The solution to this equation suggests that, in such systems, the difference between the temperature of the system and its surroundings ΔT as a function of time t is given by:
ΔT(t) = ΔT0 e^(−t/τ),
where ΔT0 is the initial temperature difference, at time t = 0. In words, the body approaches the temperature of the ambient at an exponential rate determined by the time constant.
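A minimal numerical sketch of lumped-capacity cooling follows; all material and geometric values are assumed, order-of-magnitude numbers chosen only for illustration:

    import math

    rho, cp, V = 2700.0, 900.0, 1e-3   # aluminium-like density (kg/m^3), specific heat (J/(kg*K)), 1 L volume (assumed)
    h, As = 15.0, 0.06                 # convective coefficient (W/(m^2*K)) and surface area (m^2), assumed
    tau = rho * cp * V / (h * As)      # thermal time constant in seconds

    dT0 = 60.0                          # initial temperature difference above ambient, in kelvin (assumed)
    for t in (0.0, tau, 3 * tau):
        dT = dT0 * math.exp(-t / tau)   # Newton's law of cooling
        print(f"t = {t:7.0f} s: {dT:5.1f} K above ambient")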
Time constants in biophysics
In an excitable cell such as a muscle or neuron, the time constant of the membrane potential is
τ = rm cm,
where rm is the resistance across the membrane and cm is the capacitance of the membrane.
The resistance across the membrane is a function of the number of open ion channels and the capacitance is a function of the properties of the lipid bilayer.
The time constant is used to describe the rise and fall of membrane voltage, where the rise is described by
V(t) = Vmax (1 − e^(−t/τ))
and the fall is described by
V(t) = Vmax e^(−t/τ),
where voltage is in millivolts, time is in seconds, and τ is in seconds.
Vmax is defined as the maximum voltage change from the resting potential, where
Vmax = rm I,
where rm is the resistance across the membrane and I is the membrane current.
Setting t = τ for the rise sets V(t) equal to 0.63 Vmax. This means that the time constant is the time elapsed after 63% of Vmax has been reached.
Setting t = τ for the fall sets V(t) equal to 0.37 Vmax, meaning that the time constant is the time elapsed after it has fallen to 37% of Vmax.
The larger a time constant is, the slower the rise or fall of the potential of a neuron. A long time constant can result in temporal summation, or the algebraic summation of repeated potentials. A short time constant rather produces a coincidence detector through spatial summation.
Exponential decay
In exponential decay, such as of a radioactive isotope, the time constant can be interpreted as the mean lifetime. The half-life THL or T1/2 is related to the exponential time constant τ by
T1/2 = τ ln 2 ≈ 0.693 τ.
The reciprocal of the time constant is called the decay constant, and is denoted λ = 1/τ.
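For example, using the well-known half-life of carbon-14 as input, the mean lifetime and decay constant follow directly from these relations (the sketch below is an illustrative addition):

    import math

    half_life = 5730.0                 # half-life of carbon-14 in years
    tau = half_life / math.log(2)      # mean lifetime (time constant), about 8267 years
    lam = 1.0 / tau                    # decay constant in 1/years
    print(f"mean lifetime: {tau:.0f} years, decay constant: {lam:.3e} per year")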
Meteorological sensors
A time constant is the amount of time it takes for a meteorological sensor to respond to a rapid change in the measured quantity, until it is reporting values within the accuracy tolerance usually expected of the sensor.
This most often applies to measurements of temperature, dew-point temperature, humidity and air pressure. Radiosondes are especially affected due to their rapid increase in altitude.
See also
RC time constant
Cutoff frequency
Exponential decay
Lead–lag compensator
Length constant
Rise time
Fall time
Frequency response
Impulse response
Step response
Settling time
Notes
References
External links
Conversion of time constant τ to cutoff frequency fc and vice versa
All about circuits – Voltage and current calculations
Energy and Thermal Time Constant of Buildings
Physical constants
Neuroscience
Durations
Constitutive equation
In physics and engineering, a constitutive equation or constitutive relation is a relation between two or more physical quantities (especially kinetic quantities as related to kinematic quantities) that is specific to a material or substance or field, and approximates its response to external stimuli, usually as applied fields or forces. They are combined with other equations governing physical laws to solve physical problems; for example in fluid mechanics the flow of a fluid in a pipe, in solid state physics the response of a crystal to an electric field, or in structural analysis, the connection between applied stresses or loads to strains or deformations.
Some constitutive equations are simply phenomenological; others are derived from first principles. A common approximate constitutive equation frequently is expressed as a simple proportionality using a parameter taken to be a property of the material, such as electrical conductivity or a spring constant. However, it is often necessary to account for the directional dependence of the material, and the scalar parameter is generalized to a tensor. Constitutive relations are also modified to account for the rate of response of materials and their non-linear behavior. See the article Linear response function.
Mechanical properties of matter
The first constitutive equation (constitutive law) was developed by Robert Hooke and is known as Hooke's law. It deals with the case of linear elastic materials. Following this discovery, this type of equation, often called a "stress-strain relation" in this example, but also called a "constitutive assumption" or an "equation of state" was commonly used. Walter Noll advanced the use of constitutive equations, clarifying their classification and the role of invariance requirements, constraints, and definitions of terms
like "material", "isotropic", "aeolotropic", etc. The class of "constitutive relations" of the form stress rate = f (velocity gradient, stress, density) was the subject of Walter Noll's dissertation in 1954 under Clifford Truesdell.
In modern condensed matter physics, the constitutive equation plays a major role. See Linear constitutive equations and Nonlinear correlation functions.
Definitions
Deformation of solids
Friction
Friction is a complicated phenomenon. Macroscopically, the friction force F between the interface of two materials can be modelled as proportional to the reaction force R at a point of contact between two interfaces through a dimensionless coefficient of friction μf, which depends on the pair of materials:
F = μf R.
This can be applied to static friction (friction preventing two stationary objects from slipping on their own), kinetic friction (friction between two objects scraping/sliding past each other), or rolling (frictional force which prevents slipping but causes a torque to be exerted on a round object).
Stress and strain
The stress-strain constitutive relation for linear materials is commonly known as Hooke's law. In its simplest form, the law defines the spring constant (or elasticity constant) k in a scalar equation, stating the tensile/compressive force is proportional to the extended (or contracted) displacement x:
F = k x,
meaning the material responds linearly. Equivalently, in terms of the stress σ, Young's modulus E, and strain ε (dimensionless):
σ = E ε.
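A small numeric sketch of the scalar form of Hooke's law; the spring constant, displacement, modulus and strain below are assumed illustrative values (200 GPa is a typical steel-like Young's modulus):

    k = 2.0e3        # spring constant in N/m (assumed)
    x = 0.01         # extension in metres (assumed)
    F = k * x        # tensile force, here 20 N

    E = 200e9        # Young's modulus in Pa (typical steel-like value)
    eps = 1e-3       # strain, dimensionless (assumed)
    sigma = E * eps  # stress in Pa, here 200 MPa
    print(F, sigma)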
In general, forces which deform solids can be normal to a surface of the material (normal forces) or tangential (shear forces); this can be described mathematically using the stress tensor:
σij = Cijkl εkl,   εij = Sijkl σkl,
where C is the elasticity tensor and S is the compliance tensor.
Solid-state deformations
Several classes of deformations in elastic materials are the following:
Plastic The applied force induces non-recoverable deformations in the material when the stress (or elastic strain) reaches a critical magnitude, called the yield point.
Elastic The material recovers its initial shape after deformation.
Viscoelastic If the time-dependent resistive contributions are large, and cannot be neglected. Rubbers and plastics have this property, and certainly do not satisfy Hooke's law. In fact, elastic hysteresis occurs.
Anelastic If the material is close to elastic, but the applied force induces additional time-dependent resistive forces (i.e. depend on rate of change of extension/compression, in addition to the extension/compression). Metals and ceramics have this characteristic, but it is usually negligible, although not so much when heating due to friction occurs (such as vibrations or shear stresses in machines).
Hyperelastic The applied force induces displacements in the material following a strain energy density function.
Collisions
The relative speed of separation vseparation of an object A after a collision with another object B is related to the relative speed of approach vapproach by the coefficient of restitution, defined by Newton's experimental impact law:
e = vseparation / vapproach,
which depends on the materials A and B are made from, since the collision involves interactions at the surfaces of A and B. Usually 0 ≤ e ≤ 1, in which e = 1 for completely elastic collisions, and e = 0 for completely inelastic collisions. It is possible for e ≥ 1 to occur – for superelastic (or explosive) collisions.
Deformation of fluids
The drag equation gives the drag force D on an object of cross-section area A moving through a fluid of density ρ at velocity v (relative to the fluid):
D = (1/2) cd ρ A v²,
where the drag coefficient (dimensionless) cd depends on the geometry of the object and the drag forces at the interface between the fluid and object.
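A hedged numerical example of the drag equation; the drag coefficient, frontal area and speed below are assumed, order-of-magnitude values for a passenger car:

    rho = 1.225    # air density in kg/m^3 at sea level
    v = 30.0       # speed relative to the air in m/s (assumed)
    cd = 0.30      # drag coefficient, dimensionless (assumed, typical car value)
    A = 2.2        # frontal area in m^2 (assumed)

    D = 0.5 * cd * rho * A * v**2   # drag force in newtons
    print(f"drag force: {D:.0f} N")  # roughly 360 N for these values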
For a Newtonian fluid of viscosity μ, the shear stress τ is linearly related to the strain rate (transverse flow velocity gradient) ∂u/∂y (units s−1). In a uniform shear flow:
τ = μ ∂u/∂y,
with u(y) the variation of the flow velocity u in the cross-flow (transverse) direction y. In general, for a Newtonian fluid, the relationship between the elements τij of the shear stress tensor and the deformation of the fluid is given by
τij = 2μ (eij − Δ δij / 3)
with eij = (1/2)(∂vi/∂xj + ∂vj/∂xi) and Δ = Σk ekk,
where vi are the components of the flow velocity vector in the corresponding xi coordinate directions, eij are the components of the strain rate tensor, Δ is the volumetric strain rate (or dilatation rate) and δij is the Kronecker delta.
The ideal gas law is a constitutive relation in the sense the pressure p and volume V are related to the temperature T, via the number of moles n of gas:
p V = n R T,
where R is the gas constant (J⋅K−1⋅mol−1).
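As a worked example of the ideal gas law as a constitutive relation (the amount of gas and the conditions are assumed; one mole at 0 °C in 22.4 L should come out close to one atmosphere):

    R = 8.314      # gas constant in J/(K*mol)
    n = 1.0        # amount of gas in moles (assumed)
    T = 273.15     # temperature in kelvin (0 degrees Celsius)
    V = 0.0224     # volume in m^3 (about 22.4 L)

    p = n * R * T / V    # pressure in pascals
    print(f"p = {p:.0f} Pa")   # roughly 101 kPa, about one atmosphere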
Electromagnetism
Constitutive equations in electromagnetism and related areas
In both classical and quantum physics, the precise dynamics of a system form a set of coupled differential equations, which are almost always too complicated to be solved exactly, even at the level of statistical mechanics. In the context of electromagnetism, this remark applies to not only the dynamics of free charges and currents (which enter Maxwell's equations directly), but also the dynamics of bound charges and currents (which enter Maxwell's equations through the constitutive relations). As a result, various approximation schemes are typically used.
For example, in real materials, complex transport equations must be solved to determine the time and spatial response of charges, for example, the Boltzmann equation or the Fokker–Planck equation or the Navier–Stokes equations. For example, see magnetohydrodynamics, fluid dynamics, electrohydrodynamics, superconductivity, plasma modeling. An entire physical apparatus for dealing with these matters has developed. See for example, linear response theory, Green–Kubo relations and Green's function (many-body theory).
These complex theories provide detailed formulas for the constitutive relations describing the electrical response of various materials, such as permittivities, permeabilities, conductivities and so forth.
It is necessary to specify the relations between the displacement field D and the electric field E, and between the magnetic H-field H and the magnetic flux density B, before doing calculations in electromagnetism (i.e. applying Maxwell's macroscopic equations). These equations specify the response of bound charge and current to the applied fields and are called constitutive relations.
Determining the constitutive relationship between the auxiliary fields D and H and the E and B fields starts with the definition of the auxiliary fields themselves:
D = ε0 E + P,   H = B/μ0 − M,
where P is the polarization field and M is the magnetization field which are defined in terms of microscopic bound charges and bound current respectively. Before getting to how to calculate M and P it is useful to examine the following special cases.
Without magnetic or dielectric materials
In the absence of magnetic or dielectric materials, the constitutive relations are simple:
D = ε0 E,   H = B/μ0,
where ε0 and μ0 are two universal constants, called the permittivity of free space and permeability of free space, respectively.
Isotropic linear materials
In an (isotropic) linear material, where P is proportional to E, and M is proportional to B, the constitutive relations are also straightforward. In terms of the polarization P and the magnetization M they are:
P = ε0 χe E,   M = χm H,
where χe and χm are the electric and magnetic susceptibilities of a given material respectively. In terms of D and H the constitutive relations are:
D = ε E,   H = B/μ,
where ε and μ are constants (which depend on the material), called the permittivity and permeability, respectively, of the material. These are related to the susceptibilities by:
ε = ε0 (1 + χe),   μ = μ0 (1 + χm).
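A short sketch of the isotropic linear relations; the susceptibility and applied field below are assumed illustrative values, and the last printed quantity confirms that D = ε0 E + P is consistent with D = εE:

    eps0 = 8.854e-12      # vacuum permittivity in F/m
    chi_e = 4.0           # electric susceptibility of the material (assumed)
    eps = eps0 * (1 + chi_e)   # permittivity of the material

    E = 1.0e3             # applied electric field in V/m (assumed)
    P = eps0 * chi_e * E  # polarization in C/m^2
    D = eps * E           # electric displacement in C/m^2
    print(P, D, D - eps0 * E)   # the first and last values agree: D = eps0*E + P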
General case
For real-world materials, the constitutive relations are not linear, except approximately. Calculating the constitutive relations from first principles involves determining how P and M are created from a given E and B. These relations may be empirical (based directly upon measurements), or theoretical (based upon statistical mechanics, transport theory or other tools of condensed matter physics). The detail employed may be macroscopic or microscopic, depending upon the level necessary to the problem under scrutiny.
In general, the constitutive relations can usually still be written:
D = ε E,   H = B/μ,
but ε and μ are not, in general, simple constants, but rather functions of E, B, position and time, and tensorial in nature. Examples include dispersion and absorption, nonlinearity, anisotropy, and hysteresis.
As a variation of these examples, in general materials are bianisotropic, where D and B depend on both E and H, through the additional coupling constants ξ and ζ:
D = ε E + ξ H,   B = μ H + ζ E.
In practice, some materials properties have a negligible impact in particular circumstances, permitting neglect of small effects. For example: optical nonlinearities can be neglected for low field strengths; material dispersion is unimportant when frequency is limited to a narrow bandwidth; material absorption can be neglected for wavelengths for which a material is transparent; and metals with finite conductivity often are approximated at microwave or longer wavelengths as perfect metals with infinite conductivity (forming hard barriers with zero skin depth of field penetration).
Some man-made materials such as metamaterials and photonic crystals are designed to have customized permittivity and permeability.
Calculation of constitutive relations
The theoretical calculation of a material's constitutive equations is a common, important, and sometimes difficult task in theoretical condensed-matter physics and materials science. In general, the constitutive equations are theoretically determined by calculating how a molecule responds to the local fields through the Lorentz force. Other forces may need to be modeled as well such as lattice vibrations in crystals or bond forces. Including all of the forces leads to changes in the molecule which are used to calculate P and M as a function of the local fields.
The local fields differ from the applied fields due to the fields produced by the polarization and magnetization of nearby material; an effect which also needs to be modeled. Further, real materials are not continuous media; the local fields of real materials vary wildly on the atomic scale. The fields need to be averaged over a suitable volume to form a continuum approximation.
These continuum approximations often require some type of quantum mechanical analysis such as quantum field theory as applied to condensed matter physics. See, for example, density functional theory, Green–Kubo relations and Green's function.
A different set of homogenization methods (evolving from a tradition in treating materials such as conglomerates and laminates) are based upon approximation of an inhomogeneous material by a homogeneous effective medium (valid for excitations with wavelengths much larger than the scale of the inhomogeneity).
The theoretical modeling of the continuum-approximation properties of many real materials often rely upon experimental measurement as well. For example, ε of an insulator at low frequencies can be measured by making it into a parallel-plate capacitor, and ε at optical-light frequencies is often measured by ellipsometry.
Thermoelectric and electromagnetic properties of matter
These constitutive equations are often used in crystallography, a field of solid-state physics.
Photonics
Refractive index
The (absolute) refractive index of a medium n (dimensionless) is an inherently important property of geometric and physical optics defined as the ratio of the luminal speed in vacuum c0 to that in the medium c:
n = c0/c = √(εμ/(ε0 μ0)) = √(εr μr),
where ε is the permittivity and εr the relative permittivity of the medium; likewise μ is the permeability and μr is the relative permeability of the medium. The vacuum permittivity is ε0 and the vacuum permeability is μ0. In general, n (and εr) are complex numbers.
The relative refractive index is defined as the ratio of the two refractive indices. Absolute is for one material; relative applies to every possible pair of interfaces:
nAB = nA / nB.
Speed of light in matter
As a consequence of the definition, the speed of light in matter is
c = c0 / n;
for the special case of vacuum, ε = ε0 and μ = μ0, so n = 1 and c = c0.
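A small numeric illustration (the refractive index of water at optical frequencies is a standard tabulated value, about 1.33):

    c0 = 2.998e8     # speed of light in vacuum, m/s
    n_water = 1.33   # refractive index of water (approximate tabulated value)

    c_water = c0 / n_water    # speed of light in water
    print(f"{c_water:.3e} m/s")   # about 2.25e8 m/s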
Piezooptic effect
The piezooptic effect relates the stresses in solids σ to the dielectric impermeability a, which are coupled by a fourth-rank tensor called the piezooptic coefficient Π (units Pa−1):
Δa = Π σ.
Transport phenomena
Definitions
Definitive laws
There are several laws which describe the transport of matter, or properties of it, in an almost identical way. In every case, in words they read:
Flux (density) is proportional to a gradient; the constant of proportionality is the characteristic of the material.
In general the constant must be replaced by a 2nd rank tensor, to account for directional dependences of the material.
See also
Defining equation (physical chemistry)
Governing equation
Principle of material objectivity
Rheology
Notes
References
Elasticity (physics)
Equations of physics
Electric and magnetic fields in matter
Rotation (mathematics)
Rotation in mathematics is a concept originating in geometry. Any rotation is a motion of a certain space that preserves at least one point. It can describe, for example, the motion of a rigid body around a fixed point. Rotation can have a sign (as in the sign of an angle): a clockwise rotation is a negative magnitude so a counterclockwise turn has a positive magnitude.
A rotation is different from other types of motions: translations, which have no fixed points, and (hyperplane) reflections, each of them having an entire (n − 1)-dimensional flat of fixed points in an n-dimensional space.
Mathematically, a rotation is a map. All rotations about a fixed point form a group under composition called the rotation group (of a particular space). But in mechanics and, more generally, in physics, this concept is frequently understood as a coordinate transformation (importantly, a transformation of an orthonormal basis), because for any motion of a body there is an inverse transformation which if applied to the frame of reference results in the body being at the same coordinates. For example, in two dimensions rotating a body clockwise about a point keeping the axes fixed is equivalent to rotating the axes counterclockwise about the same point while the body is kept fixed. These two types of rotation are called active and passive transformations.
Related definitions and terminology
The rotation group is a Lie group of rotations about a fixed point. This (common) fixed point or center is called the center of rotation and is usually identified with the origin. The rotation group is a point stabilizer in a broader group of (orientation-preserving) motions.
For a particular rotation:
The axis of rotation is a line of its fixed points. They exist only in three dimensions (n = 3).
The plane of rotation is a plane that is invariant under the rotation. Unlike the axis, its points are not fixed themselves. The axis (where present) and the plane of a rotation are orthogonal.
A representation of rotations is a particular formalism, either algebraic or geometric, used to parametrize a rotation map. This meaning is somehow inverse to the meaning in the group theory.
Rotations of (affine) spaces of points and of respective vector spaces are not always clearly distinguished. The former are sometimes referred to as affine rotations (although the term is misleading), whereas the latter are vector rotations. See the article below for details.
Definitions and representations
In Euclidean geometry
A motion of a Euclidean space is the same as its isometry: it leaves the distance between any two points unchanged after the transformation. But a (proper) rotation also has to preserve the orientation structure. The "improper rotation" term refers to isometries that reverse (flip) the orientation. In the language of group theory the distinction is expressed as direct vs indirect isometries in the Euclidean group, where the former comprise the identity component. Any direct Euclidean motion can be represented as a composition of a rotation about the fixed point and a translation.
In one-dimensional space, there are only trivial rotations. In two dimensions, only a single angle is needed to specify a rotation about the origin – the angle of rotation that specifies an element of the circle group (also known as U(1)). The rotation is acting to rotate an object counterclockwise through an angle θ about the origin; see below for details. Composition of rotations sums their angles modulo 1 turn, which implies that all two-dimensional rotations about the same point commute. Rotations about different points, in general, do not commute. Any two-dimensional direct motion is either a translation or a rotation; see Euclidean plane isometry for details.
Rotations in three-dimensional space differ from those in two dimensions in a number of important ways. Rotations in three dimensions are generally not commutative, so the order in which rotations are applied is important even about the same point. Also, unlike the two-dimensional case, a three-dimensional direct motion, in general position, is not a rotation but a screw operation. Rotations about the origin have three degrees of freedom (see rotation formalisms in three dimensions for details), the same as the number of dimensions.
A three-dimensional rotation can be specified in a number of ways. The most usual methods are:
Euler angles (pictured at the left). Any rotation about the origin can be represented as the composition of three rotations defined as the motion obtained by changing one of the Euler angles while leaving the other two constant. They constitute a mixed axes of rotation system because angles are measured with respect to a mix of different reference frames, rather than a single frame that is purely external or purely intrinsic. Specifically, the first angle moves the line of nodes around the external axis z, the second rotates around the line of nodes and the third is an intrinsic rotation (a spin) around an axis fixed in the body that moves. Euler angles are typically denoted as α, β, γ, or φ, θ, ψ. This presentation is convenient only for rotations about a fixed point.
Axis–angle representation (pictured at the right) specifies an angle with the axis about which the rotation takes place. It can be easily visualised. There are two variants to represent it:
as a pair consisting of the angle and a unit vector for the axis, or
as a Euclidean vector obtained by multiplying the angle with this unit vector, called the rotation vector (although, strictly speaking, it is a pseudovector).
Matrices, versors (quaternions), and other algebraic things: see the section Linear and Multilinear Algebra Formalism for details.
A general rotation in four dimensions has only one fixed point, the centre of rotation, and no axis of rotation; see rotations in 4-dimensional Euclidean space for details. Instead the rotation has two mutually orthogonal planes of rotation, each of which is fixed in the sense that points in each plane stay within the planes. The rotation has two angles of rotation, one for each plane of rotation, through which points in the planes rotate. If these are ω1 and ω2 then all points not in the planes rotate through an angle between ω1 and ω2. Rotations in four dimensions about a fixed point have six degrees of freedom. A four-dimensional direct motion in general position is a rotation about a certain point (as in all even Euclidean dimensions), but screw operations exist also.
Linear and multilinear algebra formalism
When one considers motions of the Euclidean space that preserve the origin, the distinction between points and vectors, important in pure mathematics, can be erased because there is a canonical one-to-one correspondence between points and position vectors. The same is true for geometries other than Euclidean, but whose space is an affine space with a supplementary structure; see an example below. Alternatively, the vector description of rotations can be understood as a parametrization of geometric rotations up to their composition with translations. In other words, one vector rotation presents many equivalent rotations about all points in the space.
A motion that preserves the origin is the same as a linear operator on vectors that preserves the same geometric structure but expressed in terms of vectors. For Euclidean vectors, this expression is their magnitude (Euclidean norm). In components, such an operator is expressed with an n × n orthogonal matrix that is multiplied by column vectors.
As it was already stated, a (proper) rotation is different from an arbitrary fixed-point motion in its preservation of the orientation of the vector space. Thus, the determinant of a rotation orthogonal matrix must be 1. The only other possibility for the determinant of an orthogonal matrix is −1, and this result means the transformation is a hyperplane reflection, a point reflection (for odd n), or another kind of improper rotation. Matrices of all proper rotations form the special orthogonal group.
Two dimensions
In two dimensions, to carry out a rotation using a matrix, the point (x, y) to be rotated counterclockwise is written as a column vector, then multiplied by a rotation matrix calculated from the angle θ:
R(θ) = [ cos θ  −sin θ ; sin θ  cos θ ].
The coordinates of the point after rotation are (x′, y′), and the formulae for x′ and y′ are
x′ = x cos θ − y sin θ
y′ = x sin θ + y cos θ.
The vectors (x, y) and (x′, y′) have the same magnitude and are separated by an angle θ as expected.
Points on the plane can be also presented as complex numbers: the point (x, y) in the plane is represented by the complex number
z = x + iy.
This can be rotated through an angle θ by multiplying it by e^(iθ), then expanding the product using Euler's formula as follows:
e^(iθ) z = (cos θ + i sin θ)(x + iy) = (x cos θ − y sin θ) + i (x sin θ + y cos θ) = x′ + i y′,
and equating real and imaginary parts gives the same result as a two-dimensional matrix:
x′ = x cos θ − y sin θ
y′ = x sin θ + y cos θ.
Since complex numbers form a commutative ring, vector rotations in two dimensions are commutative, unlike in higher dimensions. They have only one degree of freedom, as such rotations are entirely determined by the angle of rotation.
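The equivalence of the matrix form and the complex-number form of a two-dimensional rotation can be checked directly; the point and angle below are arbitrary choices made only for illustration:

    import cmath
    import math

    theta = math.radians(30.0)   # rotation angle (assumed example)
    x, y = 2.0, 1.0              # point to rotate (assumed example)

    # Matrix form: (x', y') = (x cos t - y sin t, x sin t + y cos t)
    xm = x * math.cos(theta) - y * math.sin(theta)
    ym = x * math.sin(theta) + y * math.cos(theta)

    # Complex form: multiply x + iy by e^{i theta}
    zc = complex(x, y) * cmath.exp(1j * theta)

    print(xm, ym)              # both representations give the same rotated point
    print(zc.real, zc.imag)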
Three dimensions
As in two dimensions, a matrix can be used to rotate a point (x, y, z) to a point (x′, y′, z′). The matrix used is a 3 × 3 matrix, A.
This is multiplied by a column vector representing the point to give the result (x′, y′, z′).
The set of all appropriate matrices together with the operation of matrix multiplication is the rotation group SO(3). The matrix A is a member of the three-dimensional special orthogonal group, SO(3), that is, it is an orthogonal matrix with determinant 1. That it is an orthogonal matrix means that its rows are a set of orthogonal unit vectors (so they are an orthonormal basis) as are its columns, making it simple to spot and check if a matrix is a valid rotation matrix.
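The orthogonality and determinant conditions just stated translate directly into a numerical check. The sketch below builds a rotation about the z-axis and verifies both properties; NumPy is used purely for convenience, and the angle is an arbitrary test value:

    import numpy as np

    theta = np.radians(40.0)                     # arbitrary rotation angle
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s, 0.0],
                  [s,  c, 0.0],
                  [0.0, 0.0, 1.0]])              # rotation about the z-axis

    print(np.allclose(R @ R.T, np.eye(3)))       # rows/columns are orthonormal
    print(np.isclose(np.linalg.det(R), 1.0))     # determinant is +1, so R is in SO(3)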
Above-mentioned Euler angles and axis–angle representations can be easily converted to a rotation matrix.
Another possibility to represent a rotation of three-dimensional Euclidean vectors are quaternions described below.
Quaternions
Unit quaternions, or versors, are in some ways the least intuitive representation of three-dimensional rotations. They are not the three-dimensional instance of a general approach. They are more compact than matrices and easier to work with than all other methods, so are often preferred in real-world applications.
A versor (also called a rotation quaternion) consists of four real numbers, constrained so the norm of the quaternion is 1. This constraint limits the degrees of freedom of the quaternion to three, as required. Unlike matrices and complex numbers two multiplications are needed:
x′ = q x q⁻¹,
where q is the versor, q⁻¹ is its inverse, and x is the vector treated as a quaternion with zero scalar part. The quaternion can be related to the rotation vector form of the axis–angle rotation by the exponential map over the quaternions,
q = e^(v/2),
where v is the rotation vector treated as a quaternion.
A single multiplication by a versor, either left or right, is itself a rotation, but in four dimensions. Any four-dimensional rotation about the origin can be represented with two quaternion multiplications: one left and one right, by two different unit quaternions.
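A minimal, self-contained sketch of the versor formula x′ = q x q⁻¹ follows; the axis, angle and vector are arbitrary test values, and for a unit quaternion the inverse is simply the conjugate:

    import math

    def qmul(a, b):
        # Hamilton product of two quaternions given as (w, x, y, z)
        w1, x1, y1, z1 = a
        w2, x2, y2, z2 = b
        return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
                w1*x2 + x1*w2 + y1*z2 - z1*y2,
                w1*y2 - x1*z2 + y1*w2 + z1*x2,
                w1*z2 + x1*y2 - y1*x2 + z1*w2)

    def rotate(v, axis, angle):
        # Rotate 3-vector v by angle (radians) about a unit axis using a versor
        half = angle / 2.0
        q = (math.cos(half),) + tuple(math.sin(half) * a for a in axis)
        q_conj = (q[0], -q[1], -q[2], -q[3])          # inverse of a unit quaternion
        w, x, y, z = qmul(qmul(q, (0.0,) + tuple(v)), q_conj)
        return (x, y, z)

    print(rotate((1.0, 0.0, 0.0), (0.0, 0.0, 1.0), math.pi / 2))  # approximately (0, 1, 0)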
Further notes
More generally, coordinate rotations in any dimension are represented by orthogonal matrices. The set of all orthogonal matrices in dimensions which describe proper rotations (determinant = +1), together with the operation of matrix multiplication, forms the special orthogonal group .
Matrices are often used for doing transformations, especially when a large number of points are being transformed, as they are a direct representation of the linear operator. Rotations represented in other ways are often converted to matrices before being used. They can be extended to represent rotations and transformations at the same time using homogeneous coordinates. Projective transformations are represented by 4 × 4 matrices. They are not rotation matrices, but a transformation that represents a Euclidean rotation has a 3 × 3 rotation matrix in the upper left corner.
The main disadvantage of matrices is that they are more expensive to calculate and do calculations with. Also in calculations where numerical instability is a concern matrices can be more prone to it, so calculations to restore orthonormality, which are expensive to do for matrices, need to be done more often.
More alternatives to the matrix formalism
As was demonstrated above, there exist three multilinear algebra rotation formalisms: one with U(1), or complex numbers, for two dimensions, and two others with versors, or quaternions, for three and four dimensions.
In general (even for vectors equipped with a non-Euclidean Minkowski quadratic form) the rotation of a vector space can be expressed as a bivector. This formalism is used in geometric algebra and, more generally, in the Clifford algebra representation of Lie groups.
In the case of a positive-definite Euclidean quadratic form, the double covering group of the isometry group is known as the Spin group, Spin(n). It can be conveniently described in terms of a Clifford algebra. Unit quaternions give the group Spin(3) ≅ SU(2).
In non-Euclidean geometries
In spherical geometry, a direct motion of the n-sphere (an example of the elliptic geometry) is the same as a rotation of (n + 1)-dimensional Euclidean space about the origin. For odd n, most of these motions do not have fixed points on the n-sphere and, strictly speaking, are not rotations of the sphere; such motions are sometimes referred to as Clifford translations. Rotations about a fixed point in elliptic and hyperbolic geometries are not different from Euclidean ones.
Affine geometry and projective geometry do not have a distinct notion of rotation.
In relativity
A generalization of a rotation applies in special relativity, where it can be considered to operate on a four-dimensional space, spacetime, spanned by three space dimensions and one of time. In special relativity, this space is called Minkowski space, and the four-dimensional rotations, called Lorentz transformations, have a physical interpretation. These transformations preserve a quadratic form called the spacetime interval.
If a rotation of Minkowski space is in a space-like plane, then this rotation is the same as a spatial rotation in Euclidean space. By contrast, a rotation in a plane spanned by a space-like dimension and a time-like dimension is a hyperbolic rotation, and if this plane contains the time axis of the reference frame, is called a "Lorentz boost". These transformations demonstrate the pseudo-Euclidean nature of the Minkowski space. Hyperbolic rotations are sometimes described as squeeze mappings and frequently appear on Minkowski diagrams that visualize (1 + 1)-dimensional pseudo-Euclidean geometry on planar drawings. The study of relativity deals with the Lorentz group generated by the space rotations and hyperbolic rotations.
Whereas rotations, in physics and astronomy, correspond to rotations of the celestial sphere as a 2-sphere in the Euclidean 3-space, Lorentz transformations from SO+(1, 3) induce conformal transformations of the celestial sphere. It is a broader class of the sphere transformations known as Möbius transformations.
Discrete rotations
Importance
Rotations define important classes of symmetry: rotational symmetry is an invariance with respect to a particular rotation. The circular symmetry is an invariance with respect to all rotations about a fixed axis.
As was stated above, Euclidean rotations are applied to rigid body dynamics. Moreover, most of mathematical formalism in physics (such as the vector calculus) is rotation-invariant; see rotation for more physical aspects. Euclidean rotations and, more generally, Lorentz symmetry described above are thought to be symmetry laws of nature. In contrast, the reflectional symmetry is not a precise symmetry law of nature.
Generalizations
The complex-valued matrices analogous to real orthogonal matrices are the unitary matrices, which represent rotations in complex space. The set of all unitary matrices in a given dimension n forms a unitary group U(n) of degree n; and its subgroup representing proper rotations (those that preserve the orientation of space) is the special unitary group SU(n) of degree n. These complex rotations are important in the context of spinors. The elements of SU(2) are used to parametrize three-dimensional Euclidean rotations (see above), as well as respective transformations of the spin (see representation theory of SU(2)).
See also
Aircraft principal axes
Change of basis
Charts on SO(3)
Rotations and reflections in two dimensions
CORDIC
Squeeze mapping
Infinitesimal rotation matrix
Irrational rotation
Orientation (geometry)
Rodrigues' rotation formula
Rotation of axes
Vortex
Footnotes
References
Euclidean symmetries
Rotational symmetry
Linear operators
Unitary operators
Hardware acceleration
Hardware acceleration is the use of computer hardware designed to perform specific functions more efficiently when compared to software running on a general-purpose central processing unit (CPU). Any transformation of data that can be calculated in software running on a generic CPU can also be calculated in custom-made hardware, or in some mix of both.
To perform computing tasks more efficiently, generally one can invest time and money in improving the software, improving the hardware, or both. There are various approaches with advantages and disadvantages in terms of decreased latency, increased throughput, and reduced energy consumption. Typical advantages of focusing on software may include greater versatility, more rapid development, lower non-recurring engineering costs, heightened portability, and ease of updating features or patching bugs, at the cost of overhead to compute general operations. Advantages of focusing on hardware may include speedup, reduced power consumption, lower latency, increased parallelism and bandwidth, and better utilization of area and functional components available on an integrated circuit; at the cost of lower ability to update designs once etched onto silicon and higher costs of functional verification, times to market, and the need for more parts. In the hierarchy of digital computing systems ranging from general-purpose processors to fully customized hardware, there is a tradeoff between flexibility and efficiency, with efficiency increasing by orders of magnitude when any given application is implemented higher up that hierarchy. This hierarchy includes general-purpose processors such as CPUs, more specialized processors such as programmable shaders in a GPU, fixed-function implemented on field-programmable gate arrays (FPGAs), and fixed-function implemented on application-specific integrated circuits (ASICs).
Hardware acceleration is advantageous for performance, and practical when the functions are fixed, so updates are not as needed as in software solutions. With the advent of reprogrammable logic devices such as FPGAs, the restriction of hardware acceleration to fully fixed algorithms has eased since 2010, allowing hardware acceleration to be applied to problem domains requiring modification to algorithms and processing control flow. The disadvantage, however, is that in many open source projects, it requires proprietary libraries that not all vendors are keen to distribute or expose, making it difficult to integrate in such projects.
Overview
Integrated circuits are designed to handle various operations on both analog and digital signals. In computing, digital signals are the most common and are typically represented as binary numbers. Computer hardware and software use this binary representation to perform computations. This is done by processing Boolean functions on the binary input, and then outputting the results for storage or further processing by other devices.
Computational equivalence of hardware and software
Because all Turing machines can run any computable function, it is always possible to design custom hardware that performs the same function as a given piece of software. Conversely, software can always be used to emulate the function of a given piece of hardware. Custom hardware may offer higher performance per watt for the same functions that can be specified in software. Hardware description languages (HDLs) such as Verilog and VHDL can model the same semantics as software and synthesize the design into a netlist that can be programmed to an FPGA or composed into the logic gates of an ASIC.
Stored-program computers
The vast majority of software-based computing occurs on machines implementing the von Neumann architecture, collectively known as stored-program computers. Computer programs are stored as data and executed by processors. Such processors must fetch and decode instructions, as well as load data operands from memory (as part of the instruction cycle), to execute the instructions constituting the software program. Relying on a common cache for code and data leads to the "von Neumann bottleneck", a fundamental limitation on the throughput of software on processors implementing the von Neumann architecture. Even in the modified Harvard architecture, where instructions and data have separate caches in the memory hierarchy, there is overhead to decoding instruction opcodes and multiplexing available execution units on a microprocessor or microcontroller, leading to low circuit utilization. Modern processors that provide simultaneous multithreading exploit under-utilization of available processor functional units and instruction level parallelism between different hardware threads.
Hardware execution units
Hardware execution units do not in general rely on the von Neumann or modified Harvard architectures and do not need to perform the instruction fetch and decode steps of an instruction cycle and incur those stages' overhead. If needed calculations are specified in a register transfer level (RTL) hardware design, the time and circuit area costs that would be incurred by instruction fetch and decoding stages can be reclaimed and put to other uses.
This reclamation saves time, power, and circuit area in computation. The reclaimed resources can be used for increased parallel computation, other functions, communication, or memory, as well as increased input/output capabilities. This comes at the cost of general-purpose utility.
Emerging hardware architectures
Greater RTL customization of hardware designs allows emerging architectures such as in-memory computing, transport triggered architectures (TTA) and networks-on-chip (NoC) to further benefit from increased locality of data to execution context, thereby reducing computing and communication latency between modules and functional units.
Custom hardware is limited in parallel processing capability only by the area and logic blocks available on the integrated circuit die. Therefore, hardware is much more free to offer massive parallelism than software on general-purpose processors, offering a possibility of implementing the parallel random-access machine (PRAM) model.
It is common to build multicore and manycore processing units out of microprocessor IP core schematics on a single FPGA or ASIC. Similarly, specialized functional units can be composed in parallel, as in digital signal processing, without being embedded in a processor IP core. Therefore, hardware acceleration is often employed for repetitive, fixed tasks involving little conditional branching, especially on large amounts of data. This is how Nvidia's CUDA line of GPUs is implemented.
Implementation metrics
As device mobility has increased, new metrics have been developed that measure the relative performance of specific acceleration protocols, considering characteristics such as physical hardware dimensions, power consumption, and operations throughput. These can be summarized into three categories: task efficiency, implementation efficiency, and flexibility. Appropriate metrics consider the area of the hardware along with both the corresponding operations throughput and energy consumed.
Applications
Examples of hardware acceleration include bit blit acceleration functionality in graphics processing units (GPUs), use of memristors for accelerating neural networks, and regular expression hardware acceleration for spam control in the server industry, intended to prevent regular expression denial of service (ReDoS) attacks. The hardware that performs the acceleration may be part of a general-purpose CPU, or a separate unit called a hardware accelerator, though they are usually referred to with a more specific term, such as 3D accelerator, or cryptographic accelerator.
Traditionally, processors were sequential (instructions are executed one by one), and were designed to run general purpose algorithms controlled by instruction fetch (for example, moving temporary results to and from a register file). Hardware accelerators improve the execution of a specific algorithm by allowing greater concurrency, having specific datapaths for their temporary variables, and reducing the overhead of instruction control in the fetch-decode-execute cycle.
Modern processors are multi-core and often feature parallel "single-instruction; multiple data" (SIMD) units. Even so, hardware acceleration still yields benefits. Hardware acceleration is suitable for any computation-intensive algorithm which is executed frequently in a task or program. Depending upon the granularity, hardware acceleration can vary from a small functional unit, to a large functional block (like motion estimation in MPEG-2).
Hardware acceleration units by application
See also
Coprocessor
DirectX Video Acceleration (DXVA)
Direct memory access (DMA)
High-level synthesis
C to HDL
Flow to HDL
Soft microprocessor
Flynn's taxonomy of parallel computer architectures
Single instruction, multiple data (SIMD)
Single instruction, multiple threads (SIMT)
Multiple instructions, multiple data (MIMD)
Computer for operations with functions
References
External links
Application-specific integrated circuits
Central processing unit
Computer optimization
Gate arrays
Graphics hardware
Articles with example C code
Understeer and oversteer
Understeer and oversteer are vehicle dynamics terms used to describe the sensitivity of the vehicle to changes in steering angle associated with changes in lateral acceleration. This sensitivity is defined for a level road for a given steady state operating condition by the Society of Automotive Engineers (SAE) in document J670 and by the International Organization for Standardization (ISO) in document 8855. Whether the vehicle is understeer or oversteer depends on the rate of change of the understeer angle. The Understeer Angle is the amount of additional steering (at the road wheels, not the hand wheel) that must be added in any given steady-state maneuver beyond the Ackermann steer angle. The Ackermann Steer Angle is the steer angle at which the vehicle would travel about a curve when there is no lateral acceleration required (at negligibly low speed).
The Understeer Gradient (U) is the rate of change of the understeer angle with respect to lateral acceleration on a level road for a given steady state operating condition.
The vehicle is Understeer if the understeer gradient is positive, Oversteer if the understeer gradient is negative, and Neutral steer if the understeer gradient is zero.
Car and motorsport enthusiasts often use the terminology informally in magazines and blogs to describe vehicle response to steering in a variety of manoeuvres.
Dynamics
Test to determine understeer gradient
Several tests can be used to determine understeer gradient: constant radius (repeat tests at different speeds), constant speed (repeat tests with different steering angles), or constant steer (repeat tests at different speeds). Formal descriptions of these three kinds of testing are provided by ISO. Gillespie goes into some detail on two of the measurement methods.
Results depend on the type of test, so simply giving a deg/g value is not sufficient; it is also necessary to indicate the type of procedure used to measure the gradient.
Vehicles are inherently nonlinear systems, and it is normal for U to vary over the range of testing. It is possible for a vehicle to show understeer in some conditions and oversteer in others. Therefore, it is necessary to specify the speed and lateral acceleration whenever reporting understeer/oversteer characteristics.
Contributions to understeer gradient
Many properties of the vehicle affect the understeer gradient, including tyre cornering stiffness, camber thrust, lateral force compliance steer, self aligning torque, lateral weight transfer, and compliance in the steering system. Weight distribution affects the normal force on each tyre and therefore its grip. These individual contributions can be identified analytically or by measurement in a Bundorf analysis.
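As a rough quantitative companion to the list above, here is a minimal sketch that assumes the classical linear single-track ("bicycle") model, which this article does not spell out: the gradient is estimated from the static axle loads and tyre cornering stiffnesses only, and the compliance terms of a full Bundorf analysis are omitted. All numbers are illustrative assumptions.

```python
# Sketch: bicycle-model understeer gradient K = Wf/Cf - Wr/Cr
# (weights in N, cornering stiffnesses in N/rad, result in rad per g).
import math

def understeer_gradient(W_front, W_rear, C_front, C_rear):
    """K > 0 understeer, K < 0 oversteer, K = 0 neutral steer."""
    return W_front / C_front - W_rear / C_rear

m = 1500.0                               # vehicle mass, kg (assumed)
g = 9.81
W_f, W_r = 0.55 * m * g, 0.45 * m * g    # assumed 55/45 front/rear weight split
C_f, C_r = 80e3, 90e3                    # assumed axle cornering stiffnesses, N/rad

K = understeer_gradient(W_f, W_r, C_f, C_r)
label = "understeer" if K > 0 else "oversteer" if K < 0 else "neutral steer"
print(f"understeer gradient: {math.degrees(K):.2f} deg/g ({label})")
```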
In contrast to limit handling behavior
Great care must be taken to avoid conflating the understeer/oversteer behavior with the limit behavior of a vehicle. The physics are very different. They have different handling implications and different causes. The former is concerned with tyre distortion effects due to slip and camber angles as increasing levels of lateral acceleration are attained. The latter is concerned with the limiting friction case in which either the front or rear wheels become saturated first. It is best to use race drivers' descriptive terms "push (plow)" and "loose (spin)" for limit behavior so that these concepts are not confused.
Limit handling characteristics
Tyres transmit lateral (side to side) and longitudinal (front to back) forces to the ground. The total traction force (grip) available to a tyre is the vector sum of the lateral and longitudinal forces, a function of the normal force and coefficient of friction. If the lateral and longitudinal forces presented at the tyre during operation exceed the tyre's available traction force, then the tyre is said to be saturated and will lose its grip on the ground and start to slip.
Push (plow) can be understood as a condition where, while cornering, the front tyres become saturated before the rear and slip first. Since the front tyres cannot provide any additional lateral force and the rear tyres can, the front of the vehicle will follow a path of greater radius than the rear and if there are no changes to the steering angle (i.e. the steering wheel stays in the same position), the vehicle's front will slide to the outside of the curve.
If the rear tyres become saturated before the front, the front tyres will keep the front of the vehicle on the desired path but the rear tyres will slip and follow a path with a greater radius. The back end will swing out and the vehicle will turn toward the inside of the curve. If the steering angle is not changed, then the front wheels will trace out a smaller and smaller circle while the rear wheels continue to swing around the front of the car. This is what is happening when a car 'spins out'. A car susceptible to being loose is sometimes known as 'tail happy', as in the way a dog wags its tail when happy and a common problem is fishtailing.
In real-world driving, there are continuous changes in speed, acceleration (vehicle braking or accelerating), steering angle, etc. Those changes constantly alter the load distribution of the vehicle, which, along with changes in tyre temperatures and road surface conditions, constantly changes the maximum traction force available at each tyre. Generally, though, it is changes to the center of mass which cause tyre saturation and inform limit handling characteristics.
If the center of mass is moved forward, the understeer gradient tends to increase due to tyre load sensitivity. When the center of mass is moved rearward, the understeer gradient tends to decrease. The shifting of the center of mass is proportional to acceleration and affected by the height of the center of mass. When braking, more of the vehicle's weight (load) is put on the front tyres and less on the rear tyres. Conversely, when the vehicle accelerates, the opposite happens: the weight shifts to the rear tyres. Similarly, as the center of mass of the load is shifted from one side to the other, the traction of the inside or outside tyres changes. In extreme cases, the inside or front tyres may completely lift off the ground, eliminating or reducing the steering input that can be transferred to the ground.
While weight distribution and suspension geometry have the greatest effect on measured understeer gradient in a steady-state test, power distribution, brake bias and front-rear weight transfer will also affect which wheels lose traction first in many real-world scenarios.
Limit conditions
When an understeer vehicle is taken to the grip limit of the tyres, where it is no longer possible to increase lateral acceleration, the vehicle will follow a path with a radius larger than intended. Although the vehicle cannot increase lateral acceleration, it is dynamically stable.
When an oversteer vehicle is taken to the grip limit of the tyres, it becomes dynamically unstable with a tendency to spin. Although the vehicle is unstable in open-loop control, a skilled driver can maintain control past the point of instability with countersteering and/or correct use of the throttle or even brakes; this is done purposely in the sport of drifting.
If a rear-wheel-drive vehicle has enough power to spin the rear wheels, it can initiate oversteer at any time by sending enough engine power to the wheels that they start spinning. Once traction is broken, they are relatively free to swing laterally. Under braking load, more work is typically done by the front brakes. If this forward bias is too great, then the front tyres may lose traction, causing understeer.
Related measures
Understeer gradient is one of the main measures for characterizing steady-state cornering behavior. It is involved in other properties such as characteristic speed (the speed for an understeer vehicle where the steer angle needed to negotiate a turn is twice the Ackermann angle), lateral acceleration gain (g's/deg), yaw velocity gain (1/s), and critical speed (the speed where an oversteer vehicle has infinite lateral acceleration gain).
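A hedged sketch of how these related measures follow from the understeer gradient, again assuming the linear bicycle-model relations rather than anything stated explicitly in this article; the wheelbase and gradient values are illustrative.

```python
# Sketch: characteristic speed (understeer) and critical speed (oversteer),
# with the gradient K expressed in rad per g.
import math

def characteristic_speed(wheelbase, K, g=9.81):
    """Speed at which the required steer angle is twice the Ackermann angle (K > 0)."""
    return math.sqrt(g * wheelbase / K)

def critical_speed(wheelbase, K, g=9.81):
    """Speed at which an oversteer vehicle (K < 0) becomes unstable."""
    return math.sqrt(-g * wheelbase / K)

L = 2.7   # wheelbase in metres (assumed)
print(f"understeer, K=+0.03 rad/g: v_char ~ {characteristic_speed(L, 0.03):.1f} m/s")
print(f"oversteer,  K=-0.01 rad/g: v_crit ~ {critical_speed(L, -0.01):.1f} m/s")
```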
References
Automotive steering technologies
Automotive safety
Tires
Vehicle dynamics
Bremsstrahlung
In particle physics, bremsstrahlung (German for 'braking radiation') is electromagnetic radiation produced by the deceleration of a charged particle when deflected by another charged particle, typically an electron by an atomic nucleus. The moving particle loses kinetic energy, which is converted into radiation (i.e., photons), thus satisfying the law of conservation of energy. The term is also used to refer to the process of producing the radiation. Bremsstrahlung has a continuous spectrum, which becomes more intense and whose peak intensity shifts toward higher frequencies as the change of the energy of the decelerated particles increases.
Broadly speaking, bremsstrahlung or braking radiation is any radiation produced due to the acceleration (positive or negative) of a charged particle, which includes synchrotron radiation (i.e., photon emission by a relativistic particle), cyclotron radiation (i.e. photon emission by a non-relativistic particle), and the emission of electrons and positrons during beta decay. However, the term is frequently used in the more narrow sense of radiation from electrons (from whatever source) slowing in matter.
Bremsstrahlung emitted from plasma is sometimes referred to as free–free radiation. This refers to the fact that the radiation in this case is created by electrons that are free (i.e., not in an atomic or molecular bound state) before, and remain free after, the emission of a photon. In the same parlance, bound–bound radiation refers to discrete spectral lines (an electron "jumps" between two bound states), while free–bound radiation refers to the radiative combination process, in which a free electron recombines with an ion.
This article uses SI units, along with the scaled single-particle charge .
Classical description
If quantum effects are negligible, an accelerating charged particle radiates power as described by the Larmor formula and its relativistic generalization.
Total radiated power
The total radiated power is
where (the velocity of the particle divided by the speed of light), is the Lorentz factor, is the vacuum permittivity, signifies a time derivative of and is the charge of the particle.
In the case where velocity is parallel to acceleration (i.e., linear motion), the expression reduces to
where is the acceleration. For the case of acceleration perpendicular to the velocity, for example in synchrotrons, the total power is
Power radiated in the two limiting cases is proportional to or . Since , we see that for particles with the same energy the total radiated power goes as or , which accounts for why electrons lose energy to bremsstrahlung radiation much more rapidly than heavier charged particles (e.g., muons, protons, alpha particles). This is the reason a TeV energy electron-positron collider (such as the proposed International Linear Collider) cannot use a circular tunnel (requiring constant acceleration), while a proton-proton collider (such as the Large Hadron Collider) can utilize a circular tunnel. The electrons lose energy due to bremsstrahlung at a rate times higher than protons do.
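The following sketch writes the two limiting cases in standard SI form (the article itself uses a scaled single-particle charge, so its own prefactors may look different) to make the gamma-to-the-sixth versus gamma-to-the-fourth scaling and the electron-versus-proton comparison concrete; the acceleration value is an arbitrary assumption.

```python
# Sketch of the relativistic Larmor (Lienard) limits in SI units.
import math

EPS0 = 8.8541878128e-12   # vacuum permittivity, F/m
C = 299792458.0           # speed of light, m/s

def power_parallel(q, a, gamma):
    """P = q^2 a^2 gamma^6 / (6 pi eps0 c^3), acceleration parallel to velocity."""
    return q**2 * a**2 * gamma**6 / (6 * math.pi * EPS0 * C**3)

def power_perpendicular(q, a, gamma):
    """P = q^2 a^2 gamma^4 / (6 pi eps0 c^3), acceleration perpendicular to velocity."""
    return q**2 * a**2 * gamma**4 / (6 * math.pi * EPS0 * C**3)

q = 1.602176634e-19
E = 1e12 * q                      # 1 TeV total energy, in joules
m_e, m_p = 9.109e-31, 1.673e-27   # electron and proton masses, kg
gamma_e, gamma_p = E / (m_e * C**2), E / (m_p * C**2)
a = 1e18                          # arbitrary illustrative acceleration, m/s^2

ratio = power_perpendicular(q, a, gamma_e) / power_perpendicular(q, a, gamma_p)
print(f"electron/proton radiated-power ratio at equal energy: {ratio:.2e} (~ (m_p/m_e)^4)")
```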
Angular distribution
The most general formula for radiated power as a function of angle is:
where is a unit vector pointing from the particle towards the observer, and is an infinitesimal solid angle.
In the case where velocity is parallel to acceleration (for example, linear motion), this simplifies to
where is the angle between and the direction of observation .
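A small sketch of the collinear case in standard SI form (again, the article's scaled-charge convention would change only the prefactor): the pattern is proportional to sin²θ/(1 − β cos θ)⁵, and the angle of peak emission collapses toward the forward direction as β approaches 1.

```python
# Sketch of the angular distribution for acceleration parallel to velocity.
import math

def dP_dOmega_parallel(q, a, beta, theta, eps0=8.8541878128e-12, c=299792458.0):
    """dP/dOmega = q^2 a^2 / (16 pi^2 eps0 c^3) * sin^2(theta) / (1 - beta cos(theta))^5."""
    return (q**2 * a**2 / (16 * math.pi**2 * eps0 * c**3)
            * math.sin(theta)**2 / (1 - beta * math.cos(theta))**5)

def peak_angle(beta):
    """Angle of maximum emission; tends toward ~1/(2*gamma) as beta -> 1."""
    return math.acos((math.sqrt(1 + 15 * beta**2) - 1) / (3 * beta))

for beta in (0.1, 0.9, 0.99):
    print(f"beta={beta}: peak emission at {math.degrees(peak_angle(beta)):.1f} degrees")
```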
Simplified quantum-mechanical description
The full quantum-mechanical treatment of bremsstrahlung is very involved. The "vacuum case" of the interaction of one electron, one ion, and one photon, using the pure Coulomb potential, has an exact solution that was probably first published by Arnold Sommerfeld in 1931. This analytical solution involves complicated mathematics, and several numerical calculations have been published, such as by Karzas and Latter. Other approximate formulas have been presented, such as in recent work by Weinberg and Pradler and Semmelrock.
This section gives a quantum-mechanical analog of the prior section, but with some simplifications to illustrate the important physics. We give a non-relativistic treatment of the special case of an electron of mass , charge , and initial speed decelerating in the Coulomb field of a gas of heavy ions of charge and number density . The emitted radiation is a photon of frequency and energy . We wish to find the emissivity which is the power emitted per (solid angle in photon velocity space * photon frequency), summed over both transverse photon polarizations. We express it as an approximate classical result times the free−free emission Gaunt factor gff accounting for quantum and other corrections:
if , that is, the electron does not have enough kinetic energy to emit the photon. A general, quantum-mechanical formula for exists but is very complicated, and usually is found by numerical calculations. We present some approximate results with the following additional assumptions:
Vacuum interaction: we neglect any effects of the background medium, such as plasma screening effects. This is reasonable for photon frequency much greater than the plasma frequency with the plasma electron density. Note that light waves are evanescent for and a significantly different approach would be needed.
Soft photons: , that is, the photon energy is much less than the initial electron kinetic energy.
With these assumptions, two unitless parameters characterize the process: , which measures the strength of the electron-ion Coulomb interaction, and , which measures the photon "softness" and we assume is always small (the choice of the factor 2 is for later convenience). In the limit , the quantum-mechanical Born approximation gives:
In the opposite limit , the full quantum-mechanical result reduces to the purely classical result
where is the Euler–Mascheroni constant. Note that which is a purely classical expression without the Planck constant .
A semi-classical, heuristic way to understand the Gaunt factor is to write it as where and are maximum and minimum "impact parameters" for the electron-ion collision, in the presence of the photon electric field. With our assumptions, : for larger impact parameters, the sinusoidal oscillation of the photon field provides "phase mixing" that strongly reduces the interaction. is the larger of the quantum-mechanical de Broglie wavelength and the classical distance of closest approach where the electron-ion Coulomb potential energy is comparable to the electron's initial kinetic energy.
The above approximations generally apply as long as the argument of the logarithm is large, and break down when it is less than unity. Namely, these forms for the Gaunt factor become negative, which is unphysical. A rough approximation to the full calculations, with the appropriate Born and classical limits, is
Thermal bremsstrahlung in a medium: emission and absorption
This section discusses bremsstrahlung emission and the inverse absorption process (called inverse bremsstrahlung) in a macroscopic medium. We start with the equation of radiative transfer, which applies to general processes and not just bremsstrahlung:
is the radiation spectral intensity, or power per (area × solid angle × photon frequency) summed over both polarizations. is the emissivity, analogous to defined above, and is the absorptivity. and are properties of the matter, not the radiation, and account for all the particles in the medium – not just a pair of one electron and one ion as in the prior section. If is uniform in space and time, then the left-hand side of the transfer equation is zero, and we find
If the matter and radiation are also in thermal equilibrium at some temperature, then must be the blackbody spectrum:
Since and are independent of , this means that must be the blackbody spectrum whenever the matter is in equilibrium at some temperature – regardless of the state of the radiation. This allows us to immediately know both and once one is known – for matter in equilibrium.
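A minimal sketch of the equilibrium relation used above (Kirchhoff's law of thermal radiation): once the temperature is known, the emissivity and absorptivity determine each other through the Planck spectrum, j_ν = α_ν B_ν(T). The numerical inputs below are illustrative.

```python
# Sketch: Planck spectrum and the Kirchhoff relation between emission and absorption.
import math

H = 6.62607015e-34      # Planck constant, J s
KB = 1.380649e-23       # Boltzmann constant, J/K
C = 299792458.0         # speed of light, m/s

def planck_B_nu(nu, T):
    """Blackbody spectral radiance B_nu(T) in W m^-2 Hz^-1 sr^-1."""
    return 2 * H * nu**3 / C**2 / math.expm1(H * nu / (KB * T))

def emissivity_from_absorptivity(alpha_nu, nu, T):
    """In thermal equilibrium, j_nu = alpha_nu * B_nu(T) (and vice versa)."""
    return alpha_nu * planck_B_nu(nu, T)

nu = 1e3 * 1.602e-19 / H   # photon frequency corresponding to 1 keV
print(f"B_nu at T=1.16e8 K, h*nu=1 keV: {planck_B_nu(nu, 1.16e8):.3e} W m^-2 Hz^-1 sr^-1")
```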
In plasma: approximate classical results
NOTE: this section currently gives formulas that apply in the Rayleigh–Jeans limit , and does not use a quantized (Planck) treatment of radiation. Thus a usual factor like does not appear. The appearance of in below is due to the quantum-mechanical treatment of collisions.
In a plasma, the free electrons continually collide with the ions, producing bremsstrahlung. A complete analysis requires accounting for both binary Coulomb collisions as well as collective (dielectric) behavior. A detailed treatment is given by Bekefi, while a simplified one is given by Ichimaru. In this section we follow Bekefi's dielectric treatment, with collisions included approximately via the cutoff wavenumber,
Consider a uniform plasma, with thermal electrons distributed according to the Maxwell–Boltzmann distribution with the temperature . Following Bekefi, the power spectral density (power per angular frequency interval per volume, integrated over the whole sr of solid angle, and in both polarizations) of the bremsstrahlung radiated, is calculated to be
where is the electron plasma frequency, is the photon frequency, is the number density of electrons and ions, and other symbols are physical constants. The second bracketed factor is the index of refraction of a light wave in a plasma, and shows that emission is greatly suppressed for (this is the cutoff condition for a light wave in a plasma; in this case the light wave is evanescent). This formula thus only applies for . This formula should be summed over ion species in a multi-species plasma.
The special function is defined in the exponential integral article, and the unitless quantity is
is a maximum or cutoff wavenumber, arising due to binary collisions, and can vary with ion species. Roughly, when (typical in plasmas that are not too cold), where eV is the Hartree energy, and is the electron thermal de Broglie wavelength. Otherwise, where is the classical Coulomb distance of closest approach.
For the usual case , we find
The formula for is approximate, in that it neglects enhanced emission occurring for slightly above
In the limit , we can approximate as where is the Euler–Mascheroni constant. The leading, logarithmic term is frequently used, and resembles the Coulomb logarithm that occurs in other collisional plasma calculations. For the log term is negative, and the approximation is clearly inadequate. Bekefi gives corrected expressions for the logarithmic term that match detailed binary-collision calculations.
The total emission power density, integrated over all frequencies, is
and decreases with ; it is always positive. For , we find
Note the appearance of due to the quantum nature of . In practical units, a commonly used version of this formula for is
This formula is 1.59 times the one given above, with the difference due to details of binary collisions. Such ambiguity is often expressed by introducing Gaunt factor , e.g. in one finds
where everything is expressed in CGS units.
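For orientation, here is a hedged sketch of the frequency-integrated emission in the practical CGS form commonly quoted in radiative-processes textbooks; the coefficient and the frequency-averaged Gaunt factor are assumptions taken from that standard literature, not values reproduced from this article.

```python
# Sketch: total thermal bremsstrahlung power density,
# eps_ff ~ 1.4e-27 * sqrt(T) * n_e * n_i * Z^2 * g_B  [erg s^-1 cm^-3],
# with T in kelvin, densities in cm^-3, and g_B a Gaunt factor of order 1.2.
import math

def bremsstrahlung_power_cgs(T_kelvin, n_e, n_i, Z=1.0, g_B=1.2):
    return 1.4e-27 * math.sqrt(T_kelvin) * n_e * n_i * Z**2 * g_B

# Illustrative intracluster-medium-like numbers (assumed): T ~ 1e8 K, n ~ 1e-3 cm^-3
print(f"{bremsstrahlung_power_cgs(1e8, 1e-3, 1e-3):.2e} erg s^-1 cm^-3")
```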
Relativistic corrections
For very high temperatures there are relativistic corrections to this formula, that is, additional terms of the order of
Bremsstrahlung cooling
If the plasma is optically thin, the bremsstrahlung radiation leaves the plasma, carrying part of the internal plasma energy. This effect is known as the bremsstrahlung cooling. It is a type of radiative cooling. The energy carried away by bremsstrahlung is called bremsstrahlung losses and represents a type of radiative losses. One generally uses the term bremsstrahlung losses in the context when the plasma cooling is undesired, as e.g. in fusion plasmas.
Polarizational bremsstrahlung
Polarizational bremsstrahlung (sometimes referred to as "atomic bremsstrahlung") is the radiation emitted by the target's atomic electrons as the target atom is polarized by the Coulomb field of the incident charged particle. Polarizational bremsstrahlung contributions to the total bremsstrahlung spectrum have been observed in experiments involving relatively massive incident particles, resonance processes, and free atoms. However, there is still some debate as to whether or not there are significant polarizational bremsstrahlung contributions in experiments involving fast electrons incident on solid targets.
It is worth noting that the term "polarizational" is not meant to imply that the emitted bremsstrahlung is polarized. Also, the angular distribution of polarizational bremsstrahlung is theoretically quite different than ordinary bremsstrahlung.
Sources
X-ray tube
In an X-ray tube, electrons are accelerated in a vacuum by an electric field towards a piece of material called the "target". X-rays are emitted as the electrons hit the target.
Already in the early 20th century physicists found out that X-rays consist of two components, one independent of the target material and another with characteristics of fluorescence. Now we say that the output spectrum consists of a continuous spectrum of X-rays with additional sharp peaks at certain energies. The former is due to bremsstrahlung, while the latter are characteristic X-rays associated with the atoms in the target. For this reason, bremsstrahlung in this context is also called continuous X-rays. The German term itself was introduced in 1909 by Arnold Sommerfeld in order to explain the nature of the first variety of X-rays.
The shape of this continuum spectrum is approximately described by Kramers' law.
The formula for Kramers' law is usually given as the distribution of intensity (photon count) against the wavelength of the emitted radiation:
The constant is proportional to the atomic number of the target element, and is the minimum wavelength given by the Duane–Hunt law.
The spectrum has a sharp cutoff at which is due to the limited energy of the incoming electrons. For example, if an electron in the tube is accelerated through 60 kV, then it will acquire a kinetic energy of 60 keV, and when it strikes the target it can create X-rays with energy of at most 60 keV, by conservation of energy. (This upper limit corresponds to the electron coming to a stop by emitting just one X-ray photon. Usually the electron emits many photons, and each has an energy less than 60 keV.) A photon with energy of at most 60 keV has wavelength of at least , so the continuous X-ray spectrum has exactly that cutoff, as seen in the graph. More generally the formula for the low-wavelength cutoff, the Duane–Hunt law, is:
where is the Planck constant, is the speed of light, is the voltage that the electrons are accelerated through, is the elementary charge, and is picometres.
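A small sketch of the Duane–Hunt cutoff and the Kramers spectral shape discussed above; the scale constant K is left arbitrary (it is proportional to the target's atomic number), and only the cutoff wavelength is computed from physical constants.

```python
# Sketch: Duane-Hunt cutoff lambda_min = h*c/(e*V) and the Kramers continuum shape
# I(lambda) ~ K * (lambda/lambda_min - 1) / lambda^2 above the cutoff.
H = 6.62607015e-34        # Planck constant, J s
C = 299792458.0           # speed of light, m/s
E_CHARGE = 1.602176634e-19

def duane_hunt_lambda_min(voltage_volts):
    """Shortest emitted wavelength, in metres."""
    return H * C / (E_CHARGE * voltage_volts)

def kramers_intensity(lam, lambda_min, K=1.0):
    """Relative continuum intensity per wavelength; zero at and below the cutoff."""
    return 0.0 if lam <= lambda_min else K * (lam / lambda_min - 1.0) / lam**2

lam_min = duane_hunt_lambda_min(60e3)   # a 60 kV tube, as in the text
print(f"lambda_min at 60 kV: {lam_min * 1e12:.1f} pm")
```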
Beta decay
Beta particle-emitting substances sometimes exhibit a weak radiation with continuous spectrum that is due to bremsstrahlung (see the "outer bremsstrahlung" below). In this context, bremsstrahlung is a type of "secondary radiation", in that it is produced as a result of stopping (or slowing) the primary radiation (beta particles). It is very similar to X-rays produced by bombarding metal targets with electrons in X-ray generators (as above) except that it is produced by high-speed electrons from beta radiation.
Inner and outer bremsstrahlung
The "inner" bremsstrahlung (also known as "internal bremsstrahlung") arises from the creation of the electron and its loss of energy (due to the strong electric field in the region of the nucleus undergoing decay) as it leaves the nucleus. Such radiation is a feature of beta decay in nuclei, but it is occasionally (less commonly) seen in the beta decay of free neutrons to protons, where it is created as the beta electron leaves the proton.
In electron and positron emission by beta decay the photon's energy comes from the electron-nucleon pair, with the spectrum of the bremsstrahlung decreasing continuously with increasing energy of the beta particle. In electron capture, the energy comes at the expense of the neutrino, and the spectrum is greatest at about one third of the normal neutrino energy, decreasing to zero electromagnetic energy at normal neutrino energy. Note that in the case of electron capture, bremsstrahlung is emitted even though no charged particle is emitted. Instead, the bremsstrahlung radiation may be thought of as being created as the captured electron is accelerated toward being absorbed. Such radiation may be at frequencies that are the same as soft gamma radiation, but it exhibits none of the sharp spectral lines of gamma decay, and thus is not technically gamma radiation.
The internal process is to be contrasted with the "outer" bremsstrahlung due to the impingement on the nucleus of electrons coming from the outside (i.e., emitted by another nucleus), as discussed above.
Radiation safety
In some cases, such as the decay of , the bremsstrahlung produced by shielding the beta radiation with the normally used dense materials (e.g. lead) is itself dangerous; in such cases, shielding must be accomplished with low density materials, such as Plexiglas (Lucite), plastic, wood, or water; as the atomic number is lower for these materials, the intensity of bremsstrahlung is significantly reduced, but a larger thickness of shielding is required to stop the electrons (beta radiation).
In astrophysics
The dominant luminous component in a cluster of galaxies is the 10⁷ to 10⁸ kelvin intracluster medium. The emission from the intracluster medium is characterized by thermal bremsstrahlung. This radiation is in the energy range of X-rays and can be easily observed with space-based telescopes such as Chandra X-ray Observatory, XMM-Newton, ROSAT, ASCA, EXOSAT, Suzaku, RHESSI and future missions like IXO and Astro-H.
Bremsstrahlung is also the dominant emission mechanism for H II regions at radio wavelengths.
In electric discharges
In electric discharges, for example as laboratory discharges between two electrodes or as lightning discharges between cloud and ground or within clouds, electrons produce Bremsstrahlung photons while scattering off air molecules. These photons become manifest in terrestrial gamma-ray flashes and are the source for beams of electrons, positrons, neutrons and protons. The appearance of Bremsstrahlung photons also influences the propagation and morphology of discharges in nitrogen–oxygen mixtures with low percentages of oxygen.
Quantum mechanical description
The complete quantum mechanical description was first performed by Bethe and Heitler. They assumed plane waves for electrons which scatter at the nucleus of an atom, and derived a cross section which relates the complete geometry of that process to the frequency of the emitted photon. The quadruply differential cross section, which shows a quantum mechanical symmetry to pair production, is
where is the atomic number, the fine-structure constant, the reduced Planck constant and the speed of light. The kinetic energy of the electron in the initial and final state is connected to its total energy or its momenta via
where is the mass of an electron. Conservation of energy gives
where is the photon energy. The directions of the emitted photon and the scattered electron are given by
where is the momentum of the photon.
The differentials are given as
The absolute value of the virtual photon between the nucleus and electron is
The range of validity is given by the Born approximation
where this relation has to be fulfilled for the velocity of the electron in the initial and final state.
For practical applications (e.g. in Monte Carlo codes) it can be interesting to focus on the relation between the frequency of the emitted photon and the angle between this photon and the incident electron. Köhn and Ebert integrated the quadruply differential cross section by Bethe and Heitler over and and obtained:
with
and
However, a much simpler expression for the same integral can be found in (Eq. 2BN) and in (Eq. 4.1).
An analysis of the doubly differential cross section above shows that electrons whose kinetic energy is larger than the rest energy (511 keV) emit photons in forward direction while electrons with a small energy emit photons isotropically.
Electron–electron bremsstrahlung
One mechanism, considered important for small atomic numbers Z, is the scattering of a free electron off the shell electrons of an atom or molecule. Since electron–electron bremsstrahlung scales with Z while the usual electron–nucleus bremsstrahlung scales with Z², electron–electron bremsstrahlung is negligible for metals. For air, however, it plays an important role in the production of terrestrial gamma-ray flashes.
See also
Beamstrahlung
Cyclotron radiation
Wiggler (synchrotron)
Free-electron laser
History of X-rays
Landau–Pomeranchuk–Migdal effect
Nuclear fusion: bremsstrahlung losses
Radiation length characterising energy loss by bremsstrahlung by high energy electrons in matter
Synchrotron light source
References
Further reading
External links
Index of Early Bremsstrahlung Articles
Atomic physics
Plasma phenomena
Scattering
Quantum electrodynamics
Diamagnetism
Diamagnetism is the property of materials that are repelled by a magnetic field; an applied magnetic field creates an induced magnetic field in them in the opposite direction, causing a repulsive force. In contrast, paramagnetic and ferromagnetic materials are attracted by a magnetic field. Diamagnetism is a quantum mechanical effect that occurs in all materials; when it is the only contribution to the magnetism, the material is called diamagnetic. In paramagnetic and ferromagnetic substances, the weak diamagnetic force is overcome by the attractive force of magnetic dipoles in the material. The magnetic permeability of diamagnetic materials is less than the permeability of vacuum, μ0. In most materials, diamagnetism is a weak effect which can be detected only by sensitive laboratory instruments, but a superconductor acts as a strong diamagnet because it entirely expels any magnetic field from its interior (the Meissner effect).
Diamagnetism was first discovered when Anton Brugmans observed in 1778 that bismuth was repelled by magnetic fields. In 1845, Michael Faraday demonstrated that it was a property of matter and concluded that every material responded (in either a diamagnetic or paramagnetic way) to an applied magnetic field. On a suggestion by William Whewell, Faraday first referred to the phenomenon as diamagnetic (the prefix dia- meaning through or across), then later changed it to diamagnetism.
A simple rule of thumb is used in chemistry to determine whether a particle (atom, ion, or molecule) is paramagnetic or diamagnetic: If all electrons in the particle are paired, then the substance made of this particle is diamagnetic; If it has unpaired electrons, then the substance is paramagnetic.
Materials
Diamagnetism is a property of all materials, and always makes a weak contribution to the material's response to a magnetic field. However, other forms of magnetism (such as ferromagnetism or paramagnetism) are so much stronger that, when different forms of magnetism are present in a material, the diamagnetic contribution is usually negligible. Substances where the diamagnetic behaviour is the strongest effect are termed diamagnetic materials, or diamagnets. Diamagnetic materials are those that are generally thought of as non-magnetic, and include water, wood, most organic compounds such as petroleum and some plastics, and many metals including copper, particularly the heavy ones with many core electrons, such as mercury, gold and bismuth. The magnetic susceptibility values of various molecular fragments are called Pascal's constants (named after the French chemist Paul Pascal).
Diamagnetic materials, like water, or water-based materials, have a relative magnetic permeability that is less than or equal to 1, and therefore a magnetic susceptibility less than or equal to 0, since susceptibility is defined as χv = μv − 1. This means that diamagnetic materials are repelled by magnetic fields. However, since diamagnetism is such a weak property, its effects are not observable in everyday life. For example, the magnetic susceptibility of diamagnets such as water is . The most strongly diamagnetic material is bismuth, , although pyrolytic carbon may have a susceptibility of in one plane. Nevertheless, these values are orders of magnitude smaller than the magnetism exhibited by paramagnets and ferromagnets. Because χv is derived from the ratio of the internal magnetic field to the applied field, it is a dimensionless value.
In rare cases, the diamagnetic contribution can be stronger than paramagnetic contribution. This is the case for gold, which has a magnetic susceptibility less than 0 (and is thus by definition a diamagnetic material), but when measured carefully with X-ray magnetic circular dichroism, has an extremely weak paramagnetic contribution that is overcome by a stronger diamagnetic contribution.
Superconductors
Superconductors may be considered perfect diamagnets, because they expel all magnetic fields (except in a thin surface layer) due to the Meissner effect.
Demonstrations
Curving water surfaces
If a powerful magnet (such as a supermagnet) is covered with a layer of water (that is thin compared to the diameter of the magnet) then the field of the magnet significantly repels the water. This causes a slight dimple in the water's surface that may be seen by a reflection in its surface.
Levitation
Diamagnets may be levitated in stable equilibrium in a magnetic field, with no power consumption. Earnshaw's theorem seems to preclude the possibility of static magnetic levitation. However, Earnshaw's theorem applies only to objects with positive susceptibilities, such as ferromagnets (which have a permanent positive moment) and paramagnets (which induce a positive moment). These are attracted to field maxima, which do not exist in free space. Diamagnets (which induce a negative moment) are attracted to field minima, and there can be a field minimum in free space.
A thin slice of pyrolytic graphite, which is an unusually strongly diamagnetic material, can be stably floated in a magnetic field, such as that from rare earth permanent magnets. This can be done with all components at room temperature, making a visually effective and relatively convenient demonstration of diamagnetism.
The Radboud University Nijmegen, the Netherlands, has conducted experiments where water and other substances were successfully levitated. Most spectacularly, a live frog (see figure) was levitated.
In September 2009, NASA's Jet Propulsion Laboratory (JPL) in Pasadena, California announced it had successfully levitated mice using a superconducting magnet, an important step forward since mice are closer biologically to humans than frogs. JPL said it hopes to perform experiments regarding the effects of microgravity on bone and muscle mass.
Recent experiments studying the growth of protein crystals have led to a technique using powerful magnets to allow growth in ways that counteract Earth's gravity.
A simple homemade device for demonstration can be constructed out of bismuth plates and a few permanent magnets that levitate a permanent magnet.
Theory
The electrons in a material generally settle in orbitals with effectively zero resistance and act like current loops. Thus it might be imagined that diamagnetic effects in general would be common, since any applied magnetic field would generate currents in these loops that would oppose the change, in a similar way to superconductors, which are essentially perfect diamagnets. However, since the electrons are rigidly held in orbitals by the charge of the protons and are further constrained by the Pauli exclusion principle, many materials exhibit diamagnetism but typically respond very little to the applied field.
The Bohr–Van Leeuwen theorem proves that there cannot be any diamagnetism or paramagnetism in a purely classical system. However, the classical theory of Langevin for diamagnetism gives the same prediction as the quantum theory. The classical theory is given below.
Langevin diamagnetism
Paul Langevin's theory of diamagnetism (1905) applies to materials containing atoms with closed shells (see dielectrics). A field with intensity , applied to an electron with charge and mass , gives rise to Larmor precession with frequency . The number of revolutions per unit time is, so the current for an atom with electrons is (in SI units)
The magnetic moment of a current loop is equal to the current times the area of the loop. Suppose the field is aligned with the axis. The average loop area can be given as , where is the mean square distance of the electrons perpendicular to the axis. The magnetic moment is therefore
If the distribution of charge is spherically symmetric, we can suppose that the distribution of coordinates are independent and identically distributed. Then , where is the mean square distance of the electrons from the nucleus. Therefore, . If is the number of atoms per unit volume, the volume diamagnetic susceptibility in SI units is
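The Langevin result referred to above can be written, following standard solid-state texts (assumed here, since the article's own formula is not shown), as χ = −μ0 N Z e² ⟨r²⟩ / (6mₑ). A minimal sketch with made-up illustrative numbers:

```python
# Sketch: Langevin (classical) diamagnetic susceptibility in SI units.
MU0 = 1.25663706212e-6     # vacuum permeability, H/m
E_CHARGE = 1.602176634e-19
M_E = 9.1093837015e-31

def langevin_susceptibility(N_atoms_per_m3, Z, r2_mean_m2):
    """chi = -mu0 * N * Z * e^2 * <r^2> / (6 * m_e), dimensionless."""
    return -MU0 * N_atoms_per_m3 * Z * E_CHARGE**2 * r2_mean_m2 / (6 * M_E)

# Illustrative (assumed) numbers: ~5e28 atoms/m^3, Z = 10, <r^2> ~ (1 angstrom)^2
chi = langevin_susceptibility(5e28, 10, (1e-10)**2)
print(f"chi_v ~ {chi:.1e} (small and negative, as expected for a diamagnet)")
```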
In atoms, Langevin susceptibility is of the same order of magnitude as Van Vleck paramagnetic susceptibility.
In metals
The Langevin theory is not the full picture for metals because there are also non-localized electrons. The theory that describes diamagnetism in a free electron gas is called Landau diamagnetism, named after Lev Landau, and instead considers the weak counteracting field that forms when the electrons' trajectories are curved due to the Lorentz force. Landau diamagnetism, however, should be contrasted with Pauli paramagnetism, an effect associated with the polarization of delocalized electrons' spins. For the bulk case of a 3D system and low magnetic fields, the (volume) diamagnetic susceptibility can be calculated using Landau quantization, which in SI units is
where is the Fermi energy. This is equivalent to , exactly times Pauli paramagnetic susceptibility, where is the Bohr magneton and is the density of states (number of states per energy per volume). This formula takes into account the spin degeneracy of the carriers (spin-1/2 electrons).
In doped semiconductors the ratio between Landau and Pauli susceptibilities may change due to the effective mass of the charge carriers differing from the electron mass in vacuum, increasing the diamagnetic contribution. The formula presented here only applies for the bulk; in confined systems like quantum dots, the description is altered due to quantum confinement. Additionally, for strong magnetic fields, the susceptibility of delocalized electrons oscillates as a function of the field strength, a phenomenon known as the De Haas–Van Alphen effect, also first described theoretically by Landau.
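A minimal sketch of the bulk free-electron-gas relations described above, assuming the standard textbook forms: the Landau susceptibility equals −1/3 of the Pauli susceptibility, with the free-electron density of states evaluated at the Fermi energy.

```python
# Sketch: Landau diamagnetic susceptibility of a 3D free electron gas (SI units).
import math

MU0 = 1.25663706212e-6
MU_B = 9.2740100783e-24     # Bohr magneton, J/T
HBAR = 1.054571817e-34
M_E = 9.1093837015e-31

def fermi_energy(n):
    """Free-electron Fermi energy for electron density n (m^-3), in joules."""
    return HBAR**2 / (2 * M_E) * (3 * math.pi**2 * n) ** (2 / 3)

def landau_susceptibility(n):
    g_EF = 3 * n / (2 * fermi_energy(n))   # density of states at E_F, both spins
    chi_pauli = MU0 * MU_B**2 * g_EF       # Pauli paramagnetic susceptibility
    return -chi_pauli / 3.0                # Landau result: -1/3 of Pauli

n_cu = 8.5e28   # roughly copper's conduction-electron density, m^-3 (assumed)
print(f"Landau chi_v for n ~ 8.5e28 m^-3: {landau_susceptibility(n_cu):.1e}")
```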
See also
Antiferromagnetism
Magnetochemistry
Moses effect
References
External links
The Feynman Lectures on Physics Vol. II Ch. 34: The Magnetism of Matter
Electric and magnetic fields in matter
Magnetic levitation
Magnetism
K-epsilon turbulence model
The K-epsilon (k-ε) turbulence model is one of the most common models used in computational fluid dynamics (CFD) to simulate mean flow characteristics for turbulent flow conditions. It is a two-equation model that gives a general description of turbulence by means of two transport equations (partial differential equations, PDEs). The original impetus for the K-epsilon model was to improve the mixing-length model, as well as to find an alternative to algebraically prescribing turbulent length scales in flows of moderate to high complexity.
The first transported variable is the turbulent kinetic energy (k).
The second transported variable is the rate of dissipation of turbulent kinetic energy (ε).
Principle
Unlike earlier turbulence models, the k-ε model focuses on the mechanisms that affect the turbulent kinetic energy. The mixing length model lacks this kind of generality. The underlying assumption of this model is that the turbulent viscosity is isotropic; in other words, the ratio between Reynolds stress and mean rate of deformation is the same in all directions.
Standard k-ε turbulence model
The exact k-ε equations contain many unknown and unmeasurable terms. For a much more practical approach, the standard k-ε turbulence model (Launder and Spalding, 1974) is used which is based on our best understanding of the relevant processes, thus minimizing unknowns and presenting a set of equations which can be applied to a large number of turbulent applications.
For turbulent kinetic energy k
For dissipation
where
represents velocity component in corresponding direction
represents component of rate of deformation
represents eddy viscosity
The equations also contain several adjustable constants. The values of these constants have been arrived at by numerous iterations of data fitting for a wide range of turbulent flows. These are as follows:
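The standard values usually quoted for these constants (the Launder–Spalding set; listed here as an assumption from the standard formulation, since the article's own table is not reproduced) are collected below, together with the usual eddy-viscosity relation ν_t = C_μ k²/ε.

```python
# Sketch: standard k-epsilon model constants and the eddy-viscosity relation.
K_EPSILON_CONSTANTS = {
    "C_mu": 0.09,       # eddy-viscosity coefficient
    "C_1eps": 1.44,     # production coefficient in the epsilon equation
    "C_2eps": 1.92,     # destruction coefficient in the epsilon equation
    "sigma_k": 1.0,     # turbulent Prandtl number for k
    "sigma_eps": 1.3,   # turbulent Prandtl number for epsilon
}

def eddy_viscosity(k, eps, C_mu=K_EPSILON_CONSTANTS["C_mu"]):
    """Kinematic eddy viscosity nu_t = C_mu * k^2 / eps (m^2/s for SI inputs)."""
    return C_mu * k**2 / eps

print(eddy_viscosity(k=0.5, eps=2.0))   # illustrative values -> 0.01125 m^2/s
```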
Applications
The k-ε model has been tailored specifically for planar shear layers and recirculating flows. This model is the most widely used and validated turbulence model with applications ranging from industrial to environmental flows, which explains its popularity. It is usually useful for free-shear layer flows with relatively small pressure gradients as well as in confined flows where the Reynolds shear stresses are most important. It can also be stated as the simplest turbulence model for which only initial and/or boundary conditions need to be supplied.
However it is more expensive in terms of memory than the mixing length model as it requires two extra PDEs. This model would be an inappropriate choice for problems such as inlets and compressors as accuracy has been shown experimentally to be reduced for flows containing large adverse pressure gradients. The k-ε model also performs poorly in a variety of important cases such as unconfined flows, curved boundary layers, rotating flows and flows in non-circular ducts.
Other models
Realizable k-ε Model: An immediate benefit of the realizable k-ε model is that it provides improved predictions for the spreading rate of both planar and round jets. It also exhibits superior performance for flows involving rotation, boundary layers under strong adverse pressure gradients, separation, and recirculation. In virtually every measure of comparison, the realizable k-ε model demonstrates a superior ability to capture the mean flow of the complex structures.
k-ω Model: used when there are wall effects present within the case.
Reynolds stress equation model: In case of complex turbulent flows, Reynolds stress models are able to provide better predictions. Such flows include turbulent flows with high degrees of anisotropy, significant streamline curvature, flow separation, zones of recirculation and influence of mean rotation effects.
References
Notes
'An Introduction to Computational Fluid Dynamics: The Finite Volume Method (2nd Edition)', H. Versteeg, W. Malalasekera; Pearson Education Limited; 2007.
'Turbulence Modeling for CFD' 2nd Ed., Wilcox C. D.; DCW Industries; 1998.
'An introduction to turbulence and its measurement', Bradshaw, P.; Pergamon Press; 1971.
Turbulence models
Electric displacement field
In physics, the electric displacement field (denoted by D) or electric induction is a vector field that appears in Maxwell's equations. It accounts for the electromagnetic effects of polarization and that of an electric field, combining the two in an auxiliary field. It plays a major role in topics such as the capacitance of a material, as well as the response of dielectrics to an electric field, and how shapes can change due to electric fields in piezoelectricity or flexoelectricity as well as the creation of voltages and charge transfer due to elastic strains.
In any material, if there is an inversion center then the charge at, for instance, and are the same. This means that there is no dipole. If an electric field is applied to an insulator, then (for instance) the negative charges can move slightly towards the positive side of the field, and the positive charges in the other direction. This leads to an induced dipole which is described as a polarization. There can be slightly different movements of the negative electrons and positive nuclei in molecules, or different displacements of the atoms in an ionic compound. Materials which do not have an inversion center display piezoelectricity and always have a polarization; in others spatially varying strains can break the inversion symmetry and lead to polarization, the flexoelectric effect. Other stimuli such as magnetic fields can lead to polarization in some materials, this being called the magnetoelectric effect.
Definition
The electric displacement field "D" is defined as D = ε0E + P, where ε0 is the vacuum permittivity (also called permittivity of free space), E is the electric field, and P is the (macroscopic) density of the permanent and induced electric dipole moments in the material, called the polarization density.
The displacement field satisfies Gauss's law in a dielectric: ∇ · D = ρf.
In this equation, ρf is the number of free charges per unit volume. These charges are the ones that have made the volume non-neutral, and they are sometimes referred to as the space charge. This equation says, in effect, that the flux lines of D must begin and end on the free charges. In contrast, the bound charge density is the density of all those charges that are part of a dipole, each of which is neutral. In the example of an insulating dielectric between metal capacitor plates, the only free charges are on the metal plates and the dielectric contains only dipoles. If the dielectric is replaced by a doped semiconductor or an ionised gas, etc., then electrons move relative to the ions, and if the system is finite they both contribute to ρf at the edges.
D is not determined exclusively by the free charge. As E has a curl of zero in electrostatic situations, it follows that ∇ × D = ∇ × P.
The effect of this equation can be seen in the case of an object with a "frozen in" polarization like a bar electret, the electric analogue to a bar magnet. There is no free charge in such a material, but the inherent polarization gives rise to an electric field, demonstrating that the D field is not determined entirely by the free charge. The electric field is determined by using the above relation along with other boundary conditions on the polarization density to yield the bound charges, which will, in turn, yield the electric field.
In a linear, homogeneous, isotropic dielectric with instantaneous response to changes in the electric field, P depends linearly on the electric field, P = ε0χE,
where the constant of proportionality χ is called the electric susceptibility of the material. Thus D = ε0(1 + χ)E = εE,
where ε = ε0εr is the permittivity, and εr = 1 + χ the relative permittivity of the material.
In linear, homogeneous, isotropic media, ε is a constant. However, in linear anisotropic media it is a tensor, and in nonhomogeneous media it is a function of position inside the medium. It may also depend upon the electric field (nonlinear materials) and have a time dependent response. Explicit time dependence can arise if the materials are physically moving or changing in time (e.g. reflections off a moving interface give rise to Doppler shifts). A different form of time dependence can arise in a time-invariant medium, as there can be a time delay between the imposition of the electric field and the resulting polarization of the material. In this case, P is a convolution of the impulse response susceptibility χ and the electric field E. Such a convolution takes on a simpler form in the frequency domain: by Fourier transforming the relationship and applying the convolution theorem, one obtains the following relation for a linear time-invariant medium:
where is the frequency of the applied field. The constraint of causality leads to the Kramers–Kronig relations, which place limitations upon the form of the frequency dependence. The phenomenon of a frequency-dependent permittivity is an example of material dispersion. In fact, all physical materials have some material dispersion because they cannot respond instantaneously to applied fields, but for many problems (those concerned with a narrow enough bandwidth) the frequency-dependence of ε can be neglected.
At a boundary, (D1 − D2) · n̂ = σf, where σf is the free charge density and the unit normal n̂ points in the direction from medium 2 to medium 1.
History
The earliest known use of the term is from the year 1864, in James Clerk Maxwell's paper A Dynamical Theory of the Electromagnetic Field. Maxwell introduced the term D, specific capacity of electric induction, in a form different from the modern and familiar notations.
It was Oliver Heaviside who reformulated the complicated Maxwell's equations to the modern form. It wasn't until 1884 that Heaviside, concurrently with Willard Gibbs and Heinrich Hertz, grouped the equations together into a distinct set. This group of four equations was known variously as the Hertz–Heaviside equations and the Maxwell–Hertz equations, and is sometimes still known as the Maxwell–Heaviside equations; hence, it was probably Heaviside who lent D the present significance it now has.
Example: Displacement field in a capacitor
Consider an infinite parallel plate capacitor where the space between the plates is empty or contains a neutral, insulating medium. In both cases, the free charges are only on the metal capacitor plates. Since the flux lines D end on free charges, and there are the same number of uniformly distributed charges of opposite sign on both plates, then the flux lines must all simply traverse the capacitor from one side to the other. In SI units, the charge density on the plates is proportional to the value of the D field between the plates. This follows directly from Gauss's law, by integrating over a small rectangular box straddling one plate of the capacitor:
On the sides of the box, dA is perpendicular to the field, so the integral over this section is zero, as is the integral on the face that is outside the capacitor where D is zero. The only surface that contributes to the integral is therefore the surface of the box inside the capacitor, and hence
where A is the surface area of the top face of the box and is the free surface charge density on the positive plate. If the space between the capacitor plates is filled with a linear homogeneous isotropic dielectric with permittivity , then there is a polarization induced in the medium, and so the voltage difference between the plates is
where d is their separation.
Introducing the dielectric increases ε by a factor and either the voltage difference between the plates will be smaller by this factor, or the charge must be higher. The partial cancellation of fields in the dielectric allows a larger amount of free charge to dwell on the two plates of the capacitor per unit of potential drop than would be possible if the plates were separated by vacuum.
If the distance d between the plates of a finite parallel plate capacitor is much smaller than its lateral dimensions
we can approximate it using the infinite case and obtain its capacitance as
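A short sketch of the parallel-plate example worked through above, in SI units: D equals the free surface charge density, the field and voltage follow from the permittivity, and the thin-gap capacitance is C = εA/d. The numerical inputs are illustrative assumptions.

```python
# Sketch: displacement field, electric field, voltage and capacitance of a
# parallel-plate capacitor with a linear homogeneous isotropic dielectric.
EPS0 = 8.8541878128e-12   # vacuum permittivity, F/m

def parallel_plate(sigma_free, eps_r, area, separation):
    D = sigma_free                        # from Gauss's law for the pillbox
    E = D / (eps_r * EPS0)                # field inside the dielectric
    V = E * separation                    # potential difference between plates
    C = eps_r * EPS0 * area / separation  # thin-gap capacitance
    return D, E, V, C

# Illustrative (assumed) numbers: 1 uC/m^2, eps_r = 4, 0.01 m^2 plates, 1 mm gap
D, E, V, C = parallel_plate(1e-6, 4.0, 1e-2, 1e-3)
print(f"D={D:.1e} C/m^2  E={E:.2e} V/m  V={V:.1f} V  C={C:.1e} F")
```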
See also
Polarization density
Electric susceptibility
Magnetic field
Electric dipole moment
References
Electric and magnetic fields in matter
Thermobaric weapon
A thermobaric weapon, also called an aerosol bomb, or a vacuum bomb, is a type of explosive munition that works by dispersing an aerosol cloud of gas, liquid or powdered explosive. The fuel is usually a single compound, rather than a mixture of multiple molecules. Many types of thermobaric weapons can be fitted to hand-held launchers, and can also be launched from airplanes.
Terminology
The term thermobaric is derived from the Greek words for 'heat' and 'pressure': thermobarikos (θερμοβαρικός), from thermos (θερμός) 'hot' + baros (βάρος) 'weight, pressure' + suffix -ikos (-ικός) '-ic'.
Other terms used for the family of weapons are high-impulse thermobaric weapons, heat and pressure weapons, vacuum bombs, and fuel-air explosives (FAE).
Mechanism
Most conventional explosives consist of a fuel–oxidiser premix, but thermobaric weapons consist only of fuel and as a result are significantly more energetic than conventional explosives of equal weight. Their reliance on atmospheric oxygen makes them unsuitable for use under water, at high altitude, and in adverse weather. They are, however, considerably more effective when used in enclosed spaces such as tunnels, buildings, and non-hermetically sealed field fortifications (foxholes, covered slit trenches, bunkers).
The initial explosive charge detonates as it hits its target, opening the container and dispersing the fuel mixture as a cloud. The typical blast wave of a thermobaric weapon lasts significantly longer than that of a conventional explosive.
In contrast to an explosive that uses oxidation in a confined region to produce a blast front emanating from a single source, a thermobaric flame front accelerates to a large volume, which produces pressure fronts within the mixture of fuel and oxidant and then also in the surrounding air.
Thermobaric explosives apply the principles underlying accidental unconfined vapor cloud explosions, which include those from dispersions of flammable dusts and droplets. Such dust explosions happened most often in flour mills and their storage containers, grain bins (corn silos etc.), and later in coal mines, prior to the 20th century. Accidental unconfined vapor cloud explosions now happen most often in partially or completely empty oil tankers, refinery tanks, and vessels, such as the Buncefield fire in the United Kingdom in 2005, where the blast wave woke people from its centre.
A typical weapon consists of a container packed with a fuel substance, the centre of which has a small conventional-explosive "scatter charge". Fuels are chosen on the basis of the exothermicity of their oxidation, ranging from powdered metals, such as aluminium or magnesium, to organic materials, possibly with a self-contained partial oxidant. The most recent development involves the use of nanofuels.
A thermobaric bomb's effective yield depends on a combination of a number of factors such as how well the fuel is dispersed, how rapidly it mixes with the surrounding atmosphere and the initiation of the igniter and its position relative to the container of fuel. In some designs, strong munitions cases allow the blast pressure to be contained long enough for the fuel to be heated well above its autoignition temperature so that once the container bursts, the superheated fuel autoignites progressively as it comes into contact with atmospheric oxygen.
Conventional upper and lower limits of flammability apply to such weapons. Close in, blast from the dispersal charge, compressing and heating the surrounding atmosphere, has some influence on the lower limit. The upper limit has been demonstrated to influence the ignition of fogs above pools of oil strongly. That weakness may be eliminated by designs in which the fuel is preheated well above its ignition temperature so that its cooling during its dispersion still results in a minimal ignition delay on mixing. The continual combustion of the outer layer of fuel molecules, as they come into contact with the air, generates added heat which maintains the temperature of the interior of the fireball, and thus sustains the detonation.
In confinement, a series of reflective shock waves is generated, which maintain the fireball and can extend its duration to between 10 and 50 ms as exothermic recombination reactions occur. Further damage can result as the gases cool and pressure drops sharply, leading to a partial vacuum. This rarefaction effect has given rise to the misnomer "vacuum bomb". Piston-type afterburning is also believed to occur in such structures, as flame-fronts accelerate through it.
Fuel–air explosive
A fuel–air explosive (FAE) device consists of a container of fuel and two separate explosive charges. After the munition is dropped or fired, the first explosive charge bursts open the container at a predetermined height and disperses the fuel in a cloud that mixes with atmospheric oxygen (the size of the cloud varies with the size of the munition). The cloud of fuel flows around objects and into structures. The second charge then detonates the cloud and creates a massive blast wave. The blast wave can destroy reinforced buildings, equipment, and kill or injure people. The antipersonnel effect of the blast wave is more severe in foxholes and tunnels and in enclosed spaces, such as bunkers and caves.
Effects
Conventional countermeasures such as barriers (sandbags) and personnel armour are not effective against thermobaric weapons. A Human Rights Watch report of 1 February 2000 quotes a study made by the US Defense Intelligence Agency:
According to a US Central Intelligence Agency study,
Another Defense Intelligence Agency document speculates that, because the "shock and pressure waves cause minimal damage to brain tissue... it is possible that victims of FAEs are not rendered unconscious by the blast, but instead suffer for several seconds or minutes while they suffocate".
Development
German
The first attempts occurred during the First World War when incendiary shells (in German 'Brandgranate') used a slow but intense burning material, such as tar impregnated tissue and gunpowder dust. These shells burned for approximately 2 minutes after the shell exploded and spread the burning elements in every direction.
In World War II, the German Wehrmacht attempted to develop a vacuum bomb, under the direction of the Austrian physicist Mario Zippermayr.
The weapon was claimed by a weapons specialist (K.L. Bergmann) to have been tested on the Eastern front under the code-name "Taifun B" and to have been ready for deployment during the Normandy invasion in June 1944. Apparently, canisters of a charcoal, aluminium and aviation fuel mixture would have been launched, followed by a secondary launch of incendiary rockets. It was destroyed by a Western artillery barrage minutes before being fired, just before Operation Cobra.
United States
FAEs were developed by the United States for use in the Vietnam War. The CBU-55 FAE fuel-air cluster bomb was mostly developed by the US Naval Weapons Center at China Lake, California.
Current American FAE munitions include the following:
BLU-73 FAE I
BLU-95 (FAE-II)
BLU-96 (FAE-II)
CBU-72 FAE I
AGM-114 Hellfire missile
XM1060 grenade
SMAW-NE round for rocket launcher
The XM1060 40-mm grenade is a small-arms thermobaric device, which was fielded by US forces in Afghanistan in 2002 and proved to be popular against targets in enclosed spaces, such as caves. Since the 2003 invasion of Iraq, the US Marine Corps has introduced a thermobaric "Novel Explosive" (SMAW-NE) round for the Mk 153 SMAW rocket launcher. One team of Marines reported that they had destroyed a large one-story masonry building with one round from . The AGM-114N Hellfire II uses a Metal Augmented Charge (MAC) warhead, which contains a thermobaric explosive fill that uses aluminium powder coated or mixed with PTFE, layered between the charge casing and a PBXN-112 explosive mixture. When the PBXN-112 detonates, the aluminium mixture is dispersed and rapidly burns. The result is a sustained high pressure that is extremely effective against people and structures.
Soviet, later Russian
Following the FAEs developed by the United States for use in the Vietnam War, Soviet scientists quickly developed their own FAE weapons. Since the war in Afghanistan, research and development have continued, and Russian forces now field a wide array of third-generation FAE warheads, such as the RPO-A. The Russian armed forces have developed thermobaric ammunition variants for several of their weapons, such as the TBG-7V thermobaric grenade with a lethality radius of , which can be launched from the RPG-7 rocket-propelled grenade launcher. The GM-94 is a pump-action grenade launcher designed mainly to fire thermobaric grenades for close combat. The grenade weighs and contains of explosive; its lethality radius is , but due to the deliberate "fragmentation-free" design of the grenade, a distance of is considered safe.
The RPO-A and upgraded RPO-M are infantry-portable rocket propelled grenades designed to fire thermobaric rockets. The RPO-M, for instance, has a thermobaric warhead with a TNT equivalence of and destructive capabilities similar to a high-explosive fragmentation artillery shell. The RShG-1 and the RShG-2 are thermobaric variants of the RPG-27 and RPG-26 respectively. The RShG-1 is the more powerful variant, with its warhead having a lethality radius and producing about the same effect as of TNT. The RMG is a further derivative of the RPG-26 that uses a tandem-charge warhead, with the precursor high-explosive anti-tank (HEAT) warhead blasting an opening for the main thermobaric charge to enter and detonate inside. The RMG's precursor HEAT warhead can penetrate 300 mm of reinforced concrete or over 100 mm of rolled homogeneous armour, thus allowing the -diameter thermobaric warhead to detonate inside.
Other examples include the semi-automatic command to line of sight (SACLOS) or millimeter-wave active radar homing guided thermobaric variants of the 9M123 Khrizantema, the 9M133F-1 thermobaric warhead variant of the 9M133 Kornet, and the 9M131F thermobaric warhead variant of the 9K115-2 Metis-M, all of which are anti-tank missiles. The Kornet has since been upgraded to the Kornet-EM, and its thermobaric variant has a maximum range of and has a TNT equivalence of . The 9M55S thermobaric cluster warhead rocket was built to be fired from the BM-30 Smerch MLRS. A dedicated carrier of thermobaric weapons is the purpose-built TOS-1, a 24-tube MLRS designed to fire thermobaric rockets. A full salvo from the TOS-1 will cover a rectangle . The Iskander-M theatre ballistic missile can also carry a thermobaric warhead.
Many Russian Air Force munitions have thermobaric variants. The S-8 rocket has the S-8DM and S-8DF thermobaric variants. The S-8's larger counterpart, the S-13, has the S-13D and S-13DF thermobaric variants. The S-13DF's warhead weighs only , but its power is equivalent to of TNT. The KAB-500-OD variant of the KAB-500KR has a thermobaric warhead. The ODAB-500PM and ODAB-500PMV unguided bombs carry a fuel–air explosive each. The ODAB-1500 is a larger version of the bomb. The KAB-1500S GLONASS/GPS guided bomb also has a thermobaric variant. Its fireball will cover a radius and its lethal zone is a radius. The 9M120 Ataka-V and the 9K114 Shturm ATGMs both have thermobaric variants.
In September 2007, Russia exploded the largest thermobaric weapon ever made, and claimed that its yield was equivalent to that of a nuclear weapon. Russia named this particular ordnance the "Father of All Bombs" in response to the American-developed Massive Ordnance Air Blast (MOAB) bomb, which has the backronym "Mother of All Bombs" and once held the title of the most powerful non-nuclear weapon in history.
Iraq
Iraq was alleged to possess the technology as early as 1990.
Israel
Israel was alleged to possess thermobaric technology as early as 1990, according to Pentagon sources.
Spain
In 1983, a programme of military research was launched in collaboration between the Spanish Ministry of Defence (Directorate General of Armament and Material, DGAM) and Explosivos Alaveses (EXPAL), a subsidiary of Unión Explosivos Río Tinto (ERT). The goal of the programme was to develop a thermobaric bomb, the BEAC (Bomba Explosiva de Aire-Combustible). A prototype was tested successfully in a foreign location owing to safety and confidentiality concerns. The Spanish Air and Space Force has an undetermined number of BEACs in its inventory.
China
In 1996, the People's Liberation Army (PLA) began development of the , a portable thermobaric rocket launcher based on the Soviet RPO-A Shmel. Introduced in 2000, it is reported to weigh 3.5 kg and to contain 2.1 kg of thermobaric filler. An improved version, called the PF-97A, was introduced in 2008.
China is reported to have other thermobaric weapons, including bombs, grenades and rockets. Research continues on thermobaric weapons capable of reaching 2,500 degrees.
Brazil
In 2004, at the request of EMAER (Estado Maior da Aeronáutica - Military Staff of Aeronautics) and DIRMAB (Diretoria de Material Aeronáutico e Bélico - Board of Aeronautical and Military Equipment), the IAE (Instituto de Aeronautica e Espaço - Institute of Aeronautics and Space) began developing a thermobaric weapon project called Trocano.
Trocano (tɾoˈkɐnu) is a thermobaric weapon similar in design to the United States' MOAB weapon or Russia's FOAB. Like the US weapon, the Trocano was designed to be pallet-loaded into a C-130 Hercules - "Hércules" (ˈɛʁkuleʃ) - aircraft, and deployed using a parachute to drag it from the C-130's cargo bay and separate from its pallet, at which point the bomb's own aerodynamics determine its drop trajectory.
United Kingdom
In 2009, the British Ministry of Defence (MoD) acknowledged that Army Air Corps (AAC) AgustaWestland Apaches had used AGM-114 Hellfire missiles purchased from the United States against Taliban forces in Afghanistan. The MoD stated that 20 missiles, described as "blast fragmentation warheads", were used in 2008 and a further 20 in 2009. MoD officials told Guardian journalist Richard Norton-Taylor that the missiles were "particularly designed to take down structures and kill everyone in the buildings", as AAC AgustaWestland Apaches had previously been equipped with weapon systems deemed ineffective against the Taliban. The MoD also stated that "British pilots' rules of engagement were strict and everything a pilot sees from the cockpit is recorded."
In 2018, the MoD accidentally divulged the details of General Atomics MQ-9 Reapers utilised by the Royal Air Force (RAF) during the Syrian civil war, which revealed that the drones were equipped with AGM-114 Hellfire missiles. The MoD had sent a report to a British publication, Drone Wars, in response to a freedom of information request. In the report, it was stated that AGM-114N Hellfire missiles which contained a thermobaric warhead were used by RAF attack drones in Syria.
India
Based on the high-explosive squash head (HESH) round, a 120 mm thermobaric round was developed in the 2010s by the Indian Ministry of Defence. The round packs a thermobaric explosive into a tank shell to increase effectiveness against enemy bunkers and light armoured vehicles.
The design and development of the round was taken up by the Armament Research and Development Establishment (ARDE). The rounds were designed for the Arjun MBT. The thermobaric rounds contain a fuel-rich explosive composition known as a thermobaric explosive. As the name implies, when the shells hit a target they produce blast overpressure and heat for hundreds of milliseconds. The overpressure and heat cause damage to fortified structures such as bunkers and buildings, as well as to soft targets such as enemy personnel and light armoured vehicles.
Serbia
The company Balkan Novoteh, formed in 2011, offers the TG-1 thermobaric hand grenade on the market.
The Military Technical Institute in Belgrade has developed a technology for producing cast-cured thermobaric PBX explosives. More recently, the TRAYAL Corporation's Factory of Explosives and Pyrotechnics has been producing cast-cured thermobaric PBX formulations.
Ukraine
In 2017 Ukroboronprom's Scientific Research Institute for Chemical Products in conjunction with (aka Artem Holding Company) announced to the market its new product, the . These can be combined with the grenade launcher, a demonstration of which was witnessed by Oleksandr Turchynov. The grenades, of approximately 600 grams, "create a two second fire cloud with a volume of not less than 13 m³, inside of which the temperature reaches 2,500 degrees. This temperature allows not only for the destruction of the enemy, but are also able to disable lightly armored vehicles." The firm showed them at the Azerbaijan International Defense Exhibition in 2018.
In 2024, Ukraine started using drones rigged with thermobaric explosives to strike Russian positions in the Russo-Ukrainian War.
History
Attempted prohibitions
In 1980, Mexico, Switzerland and Sweden presented a joint motion to the United Nations to prohibit the use of thermobaric weapons, to no avail.
The United Nations Institute for Disarmament Research categorises these weapons as "enhanced blast weapons"; there was pressure to regulate them around 2010, again to no avail.
Military use
United States
FAEs such as first-generation CBU-55 fuel–air weapons saw extensive use in the Vietnam War. A second generation of FAE weapons was based on those and was used by the United States in Iraq during Operation Desert Storm. A total of 254 CBU-72s were dropped by the United States Marine Corps, mostly from A-6Es. They were targeted against minefields and personnel in trenches but proved more useful as a psychological weapon.
The US military used thermobaric weapons in Afghanistan. On 3 March 2002, a single laser guided thermobaric bomb was used by the United States Air Force against cave complexes in which Al-Qaeda and Taliban fighters had taken refuge in the Gardez region of Afghanistan. The SMAW-NE was used by the US Marines during the First Battle of Fallujah and the Second Battle of Fallujah. The AGM-114N Hellfire II was first used by US forces in 2003 in Iraq.
Soviet Union
FAEs were reportedly used against China in the 1969 Sino-Soviet border conflict.
The TOS-1 system was test fired in Panjshir Valley during the Soviet–Afghan War in the late 1980s. MiG-27 attack aircraft of the 134th APIB used ODAB-500S/P fuel–air bombs against Mujahideen forces in Afghanistan, but they were found to be unreliable and dangerous to ground crew.
Russia
Russian military forces reportedly used ground-delivered thermobaric weapons during the battles for Grozny in the First and Second Chechen Wars to attack dug-in Chechen fighters. Both the TOS-1 heavy MLRS and the RPO-A Shmel shoulder-fired rocket system are reported to have been used during the Chechen Wars. Russia used the RPO-A Shmel in the First Battle of Grozny, where it was judged to be a very useful weapon.
It was thought that, during the September 2004 Beslan school hostage crisis, a multitude of handheld thermobaric weapons were used by the Russian Armed Forces in their efforts to retake the school. The RPO-A and either the TBG-7V thermobaric rocket fired from the RPG-7 or rockets from the RShG-1 or the RShG-2 are claimed to have been used by the Spetsnaz during the initial storming of the school. At least three and as many as nine RPO-A casings were later found at the positions of the Spetsnaz. In July 2005, the Russian government admitted to the use of the RPO-A during the crisis.
During the 2022 Russian invasion of Ukraine, CNN reported that Russian forces were moving thermobaric weapons into Ukraine. On 28 February 2022, Ukraine's ambassador to the United States accused Russia of deploying a thermobaric bomb. Russia has claimed to have used the weapon in March 2024 against Ukrainian soldiers in an unspecified location (denied by Ukraine), and during the August 2024 Ukrainian incursion into Kursk Oblast.
United Kingdom
During the War in Afghanistan, British forces, including the Army Air Corps and Royal Air Force, used thermobaric AGM-114N Hellfire missiles against the Taliban. In the Syrian civil war, British military drones used AGM-114N Hellfire missiles; in the first three months of 2018, British drones fired 92 Hellfire missiles in Syria.
Israel
A report by Human Rights Watch claimed Israel has used thermobaric weaponry in the past, including during the 2008–2009 conflict in Gaza. Moreover, Euro-Med Human Rights Monitor states that Israel appears to be using thermobaric weaponry in the 2023 Israel-Hamas War. Both organizations claim that the use of this weaponry in densely populated neighborhoods violates international humanitarian law due to its damaging effects on civilians and civilian structures. The Eurasian Times reported that an Israeli AH-64D Apache attack helicopter was photographed with a 'mystery' warhead with a red band, which was speculated to be a thermobaric warhead capable of destroying Hamas tunnels and multi-story buildings.
Syria
Reports by rebel fighters of the Free Syrian Army claim the Syrian Air Force used such weapons against residential areas occupied by the rebel fighters, such as during the Battle of Aleppo and in Kafar Batna. Others contend that in 2012 the Syrian government used such a bomb in Azaz. A United Nations panel of human rights investigators reported that the Syrian government had used thermobaric bombs against the rebellious town of Al-Qusayr in March 2013.
The Russian and Syrian governments have used thermobaric bombs and other thermobaric munitions during the Syrian civil war against insurgents and insurgent-held civilian areas.
Ukraine
Mikhail Tolstykh, a controversial figure and top-ranking pro-Russian officer in the War in Donbass, was killed on 8 February 2017 at his office in Donetsk by an RPO-A rocket fired by members of the Security Service of Ukraine. In March 2023, soldiers from the 59th Motorised Brigade of Ukraine publicised the destruction of a derelict Russian infantry fighting vehicle by a thermobaric RGT-27S2 hand grenade delivered by a Mavic 3 drone.
Non-state actor use
Thermobaric and fuel–air explosives have been used in guerrilla warfare since the 1983 Beirut barracks bombing in Lebanon, which used a gas-enhanced explosive mechanism that was probably propane, butane, or acetylene. The explosive used by the bombers in the US 1993 World Trade Center bombing incorporated the FAE principle by using three tanks of bottled hydrogen gas to enhance the blast.
Jemaah Islamiyah bombers used a shock-dispersed solid fuel charge, based on the thermobaric principle, to attack the Sari nightclub during the 2002 Bali bombings.
In 2023, an Israeli reporter accused Hamas of firing thermobaric rockets into civilian houses as part of its October 7 surprise attack on Israel. Hamas and other Palestinian militant groups, such as the Palestinian Islamic Jihad, have claimed multiple attacks against Israeli forces with thermobaric rockets during the 2023 Israeli ground operation in the Gaza Strip.
International law
International law does not prohibit the use of thermobaric munitions, fuel-air explosive devices, or vacuum bombs against military targets. To date, all attempts to regulate or restrict thermobaric weapons have failed.
According to some scholars, thermobaric weapons are not intrinsically indiscriminate by nature, as they are often engineered for precision targeting. This precision can provide humanitarian advantages by potentially minimizing collateral damage, and it also lessens the number of munitions needed to engage the chosen military targets effectively. Nonetheless, authors holding this view recommend that the use of thermobaric weapons in populated areas should be minimised due to their wide-area impact and multiple harm mechanisms.
In media
In the 1995 film Outbreak, a thermobaric weapon (referred to as a fuel air bomb) is used to destroy an African village to keep the perfect biological weapon (a virus) a secret, and later nearly used to wipe out a US town to keep the original virus intact.
See also
Bunker buster
Flame fougasse
References
External links
Explosive weapons
Ammunition
Anti-personnel weapons
Reductionism
Reductionism is any of several related philosophical ideas regarding the associations between phenomena which can be described in terms of simpler or more fundamental phenomena. It is also described as an intellectual and philosophical position that interprets a complex system as the sum of its parts.
Definitions
The Oxford Companion to Philosophy suggests that reductionism is "one of the most used and abused terms in the philosophical lexicon" and suggests a three-part division:
Ontological reductionism: a belief that the whole of reality consists of a minimal number of parts.
Methodological reductionism: the scientific attempt to provide an explanation in terms of ever-smaller entities.
Theory reductionism: the suggestion that a newer theory does not replace or absorb an older one, but reduces it to more basic terms. Theory reduction itself is divisible into three parts: translation, derivation, and explanation.
Reductionism can be applied to any phenomenon, including objects, problems, explanations, theories, and meanings.
For the sciences, application of methodological reductionism attempts explanation of entire systems in terms of their individual, constituent parts and their interactions. For example, the temperature of a gas is reduced to nothing beyond the average kinetic energy of its molecules in motion. Thomas Nagel and others speak of 'psychophysical reductionism' (the attempted reduction of psychological phenomena to physics and chemistry), and 'physico-chemical reductionism' (the attempted reduction of biology to physics and chemistry). In a very simplified and sometimes contested form, reductionism is said to imply that a system is nothing but the sum of its parts.
However, a more nuanced opinion is that a system is composed entirely of its parts, but the system will have features that none of the parts have (which, in essence, is the basis of emergentism). "The point of mechanistic explanations is usually showing how the higher level features arise from the parts."
Other definitions are used by other authors. For example, what John Polkinghorne terms 'conceptual' or 'epistemological' reductionism is the definition provided by Simon Blackburn and by Jaegwon Kim: that form of reductionism which concerns a program of replacing the facts or entities involved in one type of discourse with other facts or entities from another type, thereby providing a relationship between them. Richard Jones distinguishes ontological and epistemological reductionism, arguing that many ontological and epistemological reductionists affirm the need for different concepts for different degrees of complexity while affirming a reduction of theories.
The idea of reductionism can be expressed by "levels" of explanation, with higher levels reducible if need be to lower levels. This use of levels of understanding in part expresses our human limitations in remembering detail. However, "most philosophers would insist that our role in conceptualizing reality [our need for a hierarchy of "levels" of understanding] does not change the fact that different levels of organization in reality do have different 'properties'."
Reductionism does not preclude the existence of what might be termed emergent phenomena, but it does imply the ability to understand those phenomena completely in terms of the processes from which they are composed. This reductionist understanding is very different from ontological or strong emergentism, which holds that what emerges in "emergence" is more than the sum of the processes from which it emerges, whether in the ontological sense or in the epistemological sense.
Ontological reductionism
Richard Jones divides ontological reductionism into two: the reductionism of substances (e.g., the reduction of mind to matter) and the reduction of the number of structures operating in nature (e.g., the reduction of one physical force to another). This permits scientists and philosophers to affirm the former while being anti-reductionists regarding the latter.
Nancey Murphy has claimed that there are two species of ontological reductionism: one that claims that wholes are nothing more than their parts; and atomist reductionism, claiming that wholes are not "really real". She admits that the phrase "really real" is apparently senseless but she has tried to explicate the supposed difference between the two.
Ontological reductionism denies the idea of ontological emergence, and claims that emergence is an epistemological phenomenon that only exists through analysis or description of a system, and does not exist fundamentally.
In some scientific disciplines, ontological reductionism takes two forms: token-identity theory and type-identity theory. In this case, "token" refers to a biological process.
Token ontological reductionism is the idea that every item that exists is a sum item. For perceivable items, it affirms that every perceivable item is a sum of items with a lesser degree of complexity. Token ontological reduction of biological things to chemical things is generally accepted.
Type ontological reductionism is the idea that every type of item is a sum type of item, and that every perceivable type of item is a sum of types of items with a lesser degree of complexity. Type ontological reduction of biological things to chemical things is often rejected.
Michael Ruse has criticized ontological reductionism as an improper argument against vitalism.
Methodological reductionism
In a biological context, methodological reductionism means attempting to explain all biological phenomena in terms of their underlying biochemical and molecular processes.
In religion
Anthropologists Edward Burnett Tylor and James George Frazer employed some religious reductionist arguments.
Theory reductionism
Theory reduction is the process by which a more general theory absorbs a special theory. It can be further divided into translation, derivation, and explanation. For example, both Kepler's laws of the motion of the planets and Galileo's theories of motion formulated for terrestrial objects are reducible to Newtonian theories of mechanics because all the explanatory power of the former are contained within the latter. Furthermore, the reduction is considered beneficial because Newtonian mechanics is a more general theory—that is, it explains more events than Galileo's or Kepler's. Besides scientific theories, theory reduction more generally can be the process by which one explanation subsumes another.
In mathematics
In mathematics, reductionism can be interpreted as the philosophy that all mathematics can (or ought to) be based on a common foundation, which for modern mathematics is usually axiomatic set theory. Ernst Zermelo was one of the major advocates of such an opinion; he also developed much of axiomatic set theory. It has been argued that the generally accepted method of justifying mathematical axioms by their usefulness in common practice can potentially weaken Zermelo's reductionist claim.
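To make the set-theoretic reduction concrete, the sketch below (in Python, used here only as notation) encodes the natural numbers as von Neumann ordinals, with 0 as the empty set and n + 1 as n ∪ {n}; the function names are illustrative rather than drawn from any particular formalisation.

# Von Neumann encoding of the natural numbers: 0 is the empty set, n + 1 is n U {n}.
# Python's frozenset stands in for the pure sets of axiomatic set theory.

def zero():
    return frozenset()

def successor(n):
    # n + 1 is the set containing all elements of n, together with n itself
    return n | frozenset([n])

def encode(k):
    # Build the set-theoretic representative of the ordinary integer k
    n = zero()
    for _ in range(k):
        n = successor(n)
    return n

def decode(n):
    # The cardinality of a von Neumann numeral recovers the ordinary integer
    return len(n)

three = encode(3)                 # {{}, {{}}, {{}, {{}}}}
print(decode(three))              # 3
print(decode(successor(three)))   # 4

On this encoding, arithmetic facts become facts about sets, which is the sense in which the number-theoretic vocabulary is reduced to set-theoretic vocabulary.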
Jouko Väänänen has argued for second-order logic as a foundation for mathematics instead of set theory, whereas others have argued for category theory as a foundation for certain aspects of mathematics.
The incompleteness theorems of Kurt Gödel, published in 1931, caused doubt about the attainability of an axiomatic foundation for all of mathematics. Any such foundation would have to include axioms powerful enough to describe the arithmetic of the natural numbers (a subset of all mathematics). Yet Gödel proved that, for any consistent recursively enumerable axiomatic system powerful enough to describe the arithmetic of the natural numbers, there are (model-theoretically) true propositions about the natural numbers that cannot be proved from the axioms. Such propositions are known as formally undecidable propositions. For example, the continuum hypothesis is undecidable in the Zermelo–Fraenkel set theory as shown by Cohen.
In science
Reductionist thinking and methods form the basis for many of the well-developed topics of modern science, including much of physics, chemistry and molecular biology. Classical mechanics in particular is seen as a reductionist framework. For instance, we understand the solar system in terms of its components (the sun and the planets) and their interactions. Statistical mechanics can be considered as a reconciliation of macroscopic thermodynamic laws with the reductionist method of explaining macroscopic properties in terms of microscopic components, although it has been argued that reduction in physics 'never goes all the way in practice'.
In computer science
The role of reduction in computer science can be thought of as a precise and unambiguous mathematical formalization of the philosophical idea of "theory reductionism". In a general sense, a problem (or set) is said to be reducible to another problem (or set) if there is a computable/feasible method to translate the questions of the former into the latter, so that, if one knows how to computably/feasibly solve the latter problem, then one can computably/feasibly solve the former. Thus, the latter is at least as "hard" to solve as the former.
Reduction in theoretical computer science is pervasive in both the abstract mathematical foundations of computation and the real-world performance or capability analysis of algorithms. More specifically, reduction is a foundational and central concept, not only in the realm of mathematical logic and abstract computation in computability (or recursive) theory, where it assumes the form of e.g. Turing reduction, but also in the realm of real-world computation in time (or space) complexity analysis of algorithms, where it assumes the form of e.g. polynomial-time reduction.
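As a minimal illustration of the idea, the sketch below carries out a classic polynomial-time reduction, answering INDEPENDENT-SET questions by translating them into VERTEX-COVER questions; the brute-force cover checker stands in for whatever solver one has for the "harder" problem, and all names are purely illustrative.

from itertools import combinations

def has_vertex_cover(vertices, edges, size):
    # Brute-force decision procedure for the target problem (for illustration only).
    # Checking covers of exactly `size` suffices, since any smaller cover can be padded.
    for cover in combinations(vertices, size):
        cover = set(cover)
        if all(u in cover or v in cover for u, v in edges):
            return True
    return False

def has_independent_set(vertices, edges, k):
    # The reduction: G has an independent set of size >= k exactly when it has
    # a vertex cover of size <= |V| - k, so translate the question and delegate.
    return has_vertex_cover(vertices, edges, len(vertices) - k)

vertices = ["a", "b", "c", "d"]
edges = [("a", "b"), ("b", "c"), ("c", "d")]    # a path on four vertices
print(has_independent_set(vertices, edges, 2))  # True: {a, c} is independent
print(has_independent_set(vertices, edges, 3))  # False: no independent set of size 3

The translation step itself takes only polynomial time, which is what makes the reduction informative about the relative hardness of the two problems.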
Criticism
Free will
Philosophers of the Enlightenment worked to insulate human free will from reductionism. Descartes separated the material world of mechanical necessity from the world of mental free will. German philosophers introduced the concept of the "noumenal" realm that is not governed by the deterministic laws of "phenomenal" nature, where every event is completely determined by chains of causality. The most influential formulation was by Immanuel Kant, who distinguished between the causal deterministic framework the mind imposes on the world—the phenomenal realm—and the world as it exists for itself, the noumenal realm, which, as he believed, included free will. To insulate theology from reductionism, 19th century post-Enlightenment German theologians, especially Friedrich Schleiermacher and Albrecht Ritschl, used the Romantic method of basing religion on the human spirit, so that it is a person's feeling or sensibility about spiritual matters that comprises religion.
Causation
Most common philosophical understandings of causation involve reducing it to some collection of non-causal facts. Opponents of these reductionist views have given arguments that the non-causal facts in question are insufficient to determine the causal facts.
Alfred North Whitehead's metaphysics opposed reductionism. He referred to this as the "fallacy of misplaced concreteness". His scheme was to frame a rational, general understanding of phenomena, derived from our reality.
In science
An alternative term for ontological reductionism is fragmentalism, often used in a pejorative sense. In cognitive psychology, George Kelly developed "constructive alternativism" as a form of personal construct psychology and an alternative to what he considered "accumulative fragmentalism". For this theory, knowledge is seen as the construction of successful mental models of the exterior world, rather than the accumulation of independent "nuggets of truth". Others argue that inappropriate use of reductionism limits our understanding of complex systems. In particular, ecologist Robert Ulanowicz says that science must develop techniques to study ways in which larger scales of organization influence smaller ones, and also ways in which feedback loops create structure at a given level, independently of details at a lower level of organization. He advocates and uses information theory as a framework to study propensities in natural systems. The limits of the application of reductionism are claimed to be especially evident at levels of organization with greater complexity, including living cells, biological neural networks, ecosystems, society, and other systems formed from assemblies of large numbers of diverse components linked by multiple feedback loops.
See also
Antireductionism
Eliminative materialism
Emergentism
Further facts
Materialism
Multiple realizability
Physicalism
Technological determinism
References
Further reading
Churchland, Patricia (1986), Neurophilosophy: Toward a Unified Science of the Mind-Brain. MIT Press.
Dawkins, Richard (1976), The Selfish Gene. Oxford University Press; 2nd edition, December 1989.
Dennett, Daniel C. (1995) Darwin's Dangerous Idea. Simon & Schuster.
Descartes (1637), Discourses, Part V.
Dupre, John (1993), The Disorder of Things. Harvard University Press.
Galison, Peter and David J. Stump, eds. (1996), The Disunity of the Sciences: Boundaries, Contexts, and Power. Stanford University Press.
Jones, Richard H. (2013), Analysis & the Fullness of Reality: An Introduction to Reductionism & Emergence. Jackson Square Books.
Laughlin, Robert (2005), A Different Universe: Reinventing Physics from the Bottom Down. Basic Books.
Nagel, Ernest (1961), The Structure of Science. New York.
Pinker, Steven (2002), The Blank Slate: The Modern Denial of Human Nature. Viking Penguin.
Ruse, Michael (1988), Philosophy of Biology. Albany, NY.
Rosenberg, Alexander (2006), Darwinian Reductionism or How to Stop Worrying and Love Molecular Biology. University of Chicago Press.
Eric Scerri The reduction of chemistry to physics has become a central aspect of the philosophy of chemistry. See several articles by this author.
Weinberg, Steven (1992), Dreams of a Final Theory: The Scientist's Search for the Ultimate Laws of Nature, Pantheon Books.
Weinberg, Steven (2002) describes what he terms the culture war among physicists in his review of A New Kind of Science.
Capra, Fritjof (1982), The Turning Point.
Lopez, F., Il pensiero olistico di Ippocrate. Riduzionismo, antiriduzionismo, scienza della complessità nel trattato sull'Antica Medicina, vol. IIA, Ed. Pubblisfera, Cosenza Italy 2008.
Maureen L Pope, Personal construction of formal knowledge, Humanities Social Science and Law, 13.4, December, 1982, pp. 3–14
Tara W. Lumpkin, Perceptual Diversity: Is Polyphasic Consciousness Necessary for Global Survival? December 28, 2006, bioregionalanimism.com
Vandana Shiva, 1995, Monocultures, Monopolies and the Masculinisation of Knowledge. International Development Research Centre (IDRC) Reports: Gender Equity. 23: 15–17. Gender and Equity (v. 23, no. 2, July 1995)
The Anti-Realist Side of the Debate: A Theory's Predictive Success does not Warrant Belief in the Unobservable Entities it Postulates Andre Kukla and Joel Walmsley.
External links
Alyssa Ney, "Reductionism" in: Internet Encyclopedia of Philosophy.
Ingo Brigandt and Alan Love, "Reductionism in Biology" in: The Stanford Encyclopedia of Philosophy.
John Dupré: The Disunity of Science—an interview at the Galilean Library covering criticisms of reductionism.
Monica Anderson: Reductionism Considered Harmful
Reduction and Emergence in Chemistry, Internet Encyclopedia of Philosophy.
Metatheory of science
Metaphysical theories
Sociological theories
Analytic philosophy
Epistemology of science
Cognition
Epistemological theories
Emergence
Spacecraft propulsion
Spacecraft propulsion is any method used to accelerate spacecraft and artificial satellites. In-space propulsion exclusively deals with propulsion systems used in the vacuum of space and should not be confused with space launch or atmospheric entry.
Several methods of pragmatic spacecraft propulsion have been developed, each having its own drawbacks and advantages. Most satellites have simple, reliable chemical thrusters (often monopropellant rockets) or resistojet rockets for orbital station-keeping, while a few use momentum wheels for attitude control. Russian and antecedent Soviet bloc satellites have used electric propulsion for decades, and newer Western geo-orbiting spacecraft are starting to use it for north–south station-keeping and orbit raising. Interplanetary vehicles mostly use chemical rockets as well, although a few have used electric propulsion such as ion thrusters and Hall-effect thrusters. A variety of technologies is needed to support everything from small satellites and robotic deep space exploration to space stations and human missions to Mars.
Hypothetical in-space propulsion technologies describe propulsion technologies that could meet future space science and exploration needs. These propulsion technologies are intended to provide effective exploration of the Solar System and may permit mission designers to plan missions to "fly anytime, anywhere, and complete a host of science objectives at the destinations" and with greater reliability and safety. With a wide range of possible missions and candidate propulsion technologies, the question of which technologies are "best" for future missions is a difficult one; expert opinion now holds that a portfolio of propulsion technologies should be developed to provide optimum solutions for a diverse set of missions and destinations.
Purpose and function
Space exploration is about reaching the destination safely (mission enabling), quickly (reduced transit times), with a large quantity of payload mass, and relatively inexpensively (lower cost). The act of reaching the destination requires an in-space propulsion system, and the other metrics are modifiers to this fundamental action. Propulsion technologies can significantly improve a number of critical aspects of the mission.
When launching a spacecraft from Earth, a propulsion method must overcome a higher gravitational pull to provide a positive net acceleration. When in space, the purpose of a propulsion system is to change the velocity of a spacecraft; this change is commonly expressed as delta-v (Δv).
In-space propulsion begins where the upper stage of the launch vehicle leaves off, performing the functions of primary propulsion, reaction control, station keeping, precision pointing, and orbital maneuvering. The main engines used in space provide the primary propulsive force for orbit transfer, planetary trajectories, and extra planetary landing and ascent. The reaction control and orbital maneuvering systems provide the propulsive force for orbit maintenance, position control, station keeping, and spacecraft attitude control.
In orbit, any additional impulse, even tiny, will result in a change in the orbit path, in two ways:
Prograde/retrograde (i.e. acceleration in the tangential/opposite in tangential direction), which increases/decreases altitude of orbit
Perpendicular to orbital plane, which changes orbital inclination.
Earth's surface is situated fairly deep in a gravity well; the escape velocity required to leave it is 11.2 kilometers per second. Thus, for destinations beyond Earth orbit, propulsion systems need enough propellant and sufficiently high efficiency. The same is true for other planets and moons, albeit some have shallower gravity wells.
As human beings evolved in a gravitational field of "one g" (9.81 m/s²), it would be most comfortable for a human spaceflight propulsion system to provide that acceleration continuously (though human bodies can tolerate much larger accelerations over short periods). The occupants of a rocket or spaceship having such a propulsion system would be free from the ill effects of free fall, such as nausea, muscular weakness, reduced sense of taste, or leaching of calcium from their bones.
Theory
The Tsiolkovsky rocket equation shows, using the law of conservation of momentum, that for a rocket engine propulsion method to change the momentum of a spacecraft, it must change the momentum of something else in the opposite direction. In other words, the rocket must exhaust mass opposite the spacecraft's acceleration direction, with such exhausted mass called propellant or reaction mass. For this to happen, both reaction mass and energy are needed. The impulse provided by launching a particle of reaction mass with mass m at velocity v is mv. But this particle has kinetic energy mv²/2, which must come from somewhere. In a conventional solid, liquid, or hybrid rocket, fuel is burned, providing the energy, and the reaction products are allowed to flow out of the engine nozzle, providing the reaction mass. In an ion thruster, electricity is used to accelerate ions behind the spacecraft. Here other sources must provide the electrical energy (e.g. a solar panel or a nuclear reactor), whereas the ions provide the reaction mass.
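A short worked example of this bookkeeping, assuming illustrative (not mission-specific) numbers, applies the Tsiolkovsky relation Δv = ve ln(m0/m1):

import math

def delta_v(exhaust_velocity, initial_mass, final_mass):
    # Tsiolkovsky rocket equation: delta-v = ve * ln(m0 / m1)
    return exhaust_velocity * math.log(initial_mass / final_mass)

ve = 4400.0      # m/s, roughly a hydrogen/oxygen engine (illustrative)
m0 = 100000.0    # kg, vehicle plus propellant before the burn
m1 = 30000.0     # kg, vehicle after the propellant is spent
print(round(delta_v(ve, m0, m1)))   # about 5300 m/s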
The rate of change of velocity is called acceleration and the rate of change of momentum is called force. To reach a given velocity, one can apply a small acceleration over a long period of time, or a large acceleration over a short time; similarly, one can achieve a given impulse with a large force over a short time or a small force over a long time. This means that for maneuvering in space, a propulsion method that produces tiny accelerations for a long time can often produce the same impulse as another which produces large accelerations for a short time. However, when launching from a planet, tiny accelerations cannot overcome the planet's gravitational pull and so cannot be used.
Some designs, however, operate without internal reaction mass by taking advantage of magnetic fields or light pressure to change the spacecraft's momentum.
Efficiency
When discussing the efficiency of a propulsion system, designers often focus on the effective use of the reaction mass, which must be carried along with the rocket and is irretrievably consumed when used. Spacecraft performance can be quantified in amount of change in momentum per unit of propellant consumed, also called specific impulse. This is a measure of the amount of impulse that can be obtained from a fixed amount of reaction mass. The higher the specific impulse, the better the efficiency. Ion propulsion engines have high specific impulse (~3000 s) and low thrust whereas chemical rockets like monopropellant or bipropellant rocket engines have a low specific impulse (~300 s) but high thrust.
The impulse per unit weight-on-Earth (typically designated Isp) has units of seconds. Because the weight on Earth of the reaction mass is often unimportant when discussing vehicles in space, specific impulse can also be discussed in terms of impulse per unit mass, with the same units as velocity (e.g., meters per second). This measure is equivalent to the effective exhaust velocity of the engine and is typically designated ve. Either the change in momentum per unit of propellant used by a spacecraft, or the velocity of the propellant exiting the spacecraft, can be used to measure its "specific impulse". The two values differ by a factor of the standard acceleration due to gravity, gn, 9.80665 m/s².
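A minimal conversion sketch between the two conventions, assuming only the standard-gravity factor quoted above:

STANDARD_GRAVITY = 9.80665  # m/s^2, the factor relating the two conventions

def isp_seconds_to_exhaust_velocity(isp_seconds):
    # Specific impulse quoted in seconds -> effective exhaust velocity in m/s
    return isp_seconds * STANDARD_GRAVITY

def exhaust_velocity_to_isp_seconds(ve):
    # Effective exhaust velocity in m/s -> specific impulse in seconds
    return ve / STANDARD_GRAVITY

print(isp_seconds_to_exhaust_velocity(300))   # ~2942 m/s, a typical chemical engine
print(isp_seconds_to_exhaust_velocity(3000))  # ~29420 m/s, a typical ion thruster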
In contrast to chemical rockets, electrodynamic rockets use electric or magnetic fields to accelerate a charged propellant. The benefit of this method is that it can achieve exhaust velocities, and therefore specific impulses, more than 10 times greater than those of a chemical engine, producing steady thrust with far less fuel. With a conventional chemical propulsion system, 2% of a rocket's total mass might make it to the destination, with the other 98% having been consumed as fuel. With an electric propulsion system, 70% of what is aboard in low Earth orbit can make it to a deep-space destination.
However, there is a trade-off. The propellants of chemical rockets store most of the energy needed to propel them, whereas their electromagnetic equivalents must carry or produce the power required to create and accelerate the propellant. Because there are currently practical limits on the amount of power available on a spacecraft, these engines are not suitable for launch vehicles or for situations in which a spacecraft needs a quick, large impulse, such as when it brakes to enter a capture orbit. Even so, because electrodynamic rockets offer very high specific impulse, mission planners are increasingly willing to sacrifice power and thrust (and the extra time it will take to get a spacecraft where it needs to go) in order to save large amounts of propellant mass.
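The scale of that trade-off can be made concrete with the rocket equation; the sketch below compares the propellant fraction required for an illustrative 5 km/s manoeuvre at chemical and ion exhaust velocities (the figures are assumptions chosen only to show the trend, not data for any particular vehicle).

import math

def propellant_fraction(delta_v, exhaust_velocity):
    # From the rocket equation: m_propellant / m0 = 1 - exp(-delta_v / ve)
    return 1.0 - math.exp(-delta_v / exhaust_velocity)

dv = 5000.0  # m/s, an illustrative deep-space manoeuvre
for label, ve in [("chemical, Isp ~ 450 s", 4400.0),
                  ("ion, Isp ~ 3000 s", 29400.0)]:
    print(label, round(100 * propellant_fraction(dv, ve)), "% of initial mass is propellant")
# chemical: roughly 68 %; ion: roughly 16 %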
Operating domains
Spacecraft operate in many areas of space. These include orbital maneuvering, interplanetary travel, and interstellar travel.
Orbital
Artificial satellites are first launched to the desired altitude by conventional liquid- or solid-propelled rockets, after which the satellite may use onboard propulsion systems for orbital station-keeping. Once in the desired orbit, they often need some form of attitude control so that they are correctly pointed with respect to the Earth, the Sun, and possibly some astronomical object of interest. They are also subject to drag from the thin atmosphere, so that to stay in orbit for a long period of time some form of propulsion is occasionally necessary to make small corrections (orbital station-keeping). Many satellites need to be moved from one orbit to another from time to time, and this also requires propulsion. A satellite's useful life is usually over once it has exhausted its ability to adjust its orbit.
Interplanetary
For interplanetary travel, a spacecraft can use its engines to leave Earth's orbit. This is not strictly necessary, as the initial boost given by the launch rocket, gravity slingshots, and a monopropellant or bipropellant attitude-control propulsion system can be enough for exploration of the solar system (see New Horizons). Once it has done so, it must make its way to its destination. Current interplanetary spacecraft do this with a series of short-term trajectory adjustments. In between these adjustments, the spacecraft typically moves along its trajectory without accelerating. The most fuel-efficient means to move from one circular orbit to another is with a Hohmann transfer orbit: the spacecraft begins in a roughly circular orbit around the Sun. A short period of thrust in the direction of motion accelerates or decelerates the spacecraft into an elliptical orbit around the Sun which is tangential to its previous orbit and also to the orbit of its destination. The spacecraft falls freely along this elliptical orbit until it reaches its destination, where another short period of thrust accelerates or decelerates it to match the orbit of its destination. Special methods such as aerobraking or aerocapture are sometimes used for this final orbital adjustment.
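A minimal sketch of the two burns of such a transfer, using the vis-viva equation for coplanar circular orbits about the Sun, is given below; it ignores planetary gravity wells and launch/arrival phasing, and the orbital radii are illustrative.

import math

MU_SUN = 1.327e20  # m^3/s^2, the Sun's gravitational parameter

def hohmann_delta_vs(r1, r2, mu=MU_SUN):
    # Two-impulse Hohmann transfer between coplanar circular orbits of radii r1 and r2.
    a_transfer = (r1 + r2) / 2.0                           # semi-major axis of the transfer ellipse
    v_circ1 = math.sqrt(mu / r1)                           # circular speed at departure
    v_circ2 = math.sqrt(mu / r2)                           # circular speed at arrival
    v_dep = math.sqrt(mu * (2.0 / r1 - 1.0 / a_transfer))  # transfer-ellipse speed at r1
    v_arr = math.sqrt(mu * (2.0 / r2 - 1.0 / a_transfer))  # transfer-ellipse speed at r2
    return abs(v_dep - v_circ1), abs(v_circ2 - v_arr)

au = 1.496e11  # metres
dv1, dv2 = hohmann_delta_vs(1.0 * au, 1.524 * au)   # Earth's orbit to Mars's orbit
print(round(dv1), round(dv2))   # roughly 2900 m/s and 2600 m/s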
Some spacecraft propulsion methods such as solar sails provide very low but inexhaustible thrust; an interplanetary vehicle using one of these methods would follow a rather different trajectory, either constantly thrusting against its direction of motion in order to decrease its distance from the Sun, or constantly thrusting along its direction of motion to increase its distance from the Sun. The concept has been successfully tested by the Japanese IKAROS solar sail spacecraft.
Interstellar
Because interstellar distances are great, a tremendous velocity is needed to get a spacecraft to its destination in a reasonable amount of time. Acquiring such a velocity on launch and getting rid of it on arrival remains a formidable challenge for spacecraft designers. No spacecraft capable of short duration (compared to human lifetime) interstellar travel has yet been built, but many hypothetical designs have been discussed.
Propulsion technology
Spacecraft propulsion technology can be of several types, such as chemical, electric or nuclear. They are distinguished based on the physics of the propulsion system and how thrust is generated. Other experimental and more theoretical types are also included, depending on their technical maturity. Additionally, there may be credible meritorious in-space propulsion concepts not foreseen or reviewed at the time of publication, and which may be shown to be beneficial to future mission applications.
Almost all types are reaction engines, which produce thrust by expelling reaction mass, in accordance with Newton's third law of motion. Examples include jet engines, rocket engines, pump-jet, and more uncommon variations such as Hall–effect thrusters, ion drives, mass drivers, and nuclear pulse propulsion.
Chemical propulsion
A large fraction of rocket engines in use today are chemical rockets; that is, they obtain the energy needed to generate thrust by chemical reactions to create a hot gas that is expanded to produce thrust. Many different propellant combinations are used to obtain these chemical reactions, including, for example, hydrazine, liquid oxygen, liquid hydrogen, nitrous oxide, and hydrogen peroxide. They can be used as a monopropellant or in bi-propellant configurations.
Rocket engines provide essentially the highest specific powers and high specific thrusts of any engine used for spacecraft propulsion. Most rocket engines are internal combustion heat engines (although non-combusting forms exist). Rocket engines generally produce a high-temperature reaction mass, as a hot gas, which is achieved by combusting a solid, liquid or gaseous fuel with an oxidiser within a combustion chamber. The extremely hot gas is then allowed to escape through a high-expansion ratio bell-shaped nozzle, a feature that gives a rocket engine its characteristic shape. The effect of the nozzle is to accelerate the mass, converting most of the thermal energy into kinetic energy; exhaust speeds as high as 10 times the speed of sound at sea level are common.
Green chemical propulsion
The dominant form of chemical propulsion for satellites has historically been hydrazine; however, this fuel is highly toxic and at risk of being banned across Europe. Non-toxic 'green' alternatives are now being developed to replace hydrazine. Nitrous oxide-based alternatives are garnering traction and government support, with development being led by the commercial companies Dawn Aerospace, Impulse Space, and Launcher. The first nitrous oxide-based system flown in space was by D-Orbit onboard their ION Satellite Carrier (space tug) in 2021, using six Dawn Aerospace B20 thrusters, launched upon a SpaceX Falcon 9 rocket.
Electric propulsion
Rather than relying on high temperature and fluid dynamics to accelerate the reaction mass to high speeds, there are a variety of methods that use electrostatic or electromagnetic forces to accelerate the reaction mass directly, where the reaction mass is usually a stream of ions.
Ion propulsion rockets typically heat a plasma or charged gas inside a magnetic bottle and release it via a magnetic nozzle so that no solid matter needs to come in contact with the plasma. Such an engine uses electric power, first to ionize atoms, and then to create a voltage gradient to accelerate the ions to high exhaust velocities. For these drives, at the highest exhaust speeds, energetic efficiency and thrust are all inversely proportional to exhaust velocity. Their very high exhaust velocity means they require huge amounts of energy and thus with practical power sources provide low thrust, but use hardly any fuel.
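That trade-off follows from the kinetic energy carried away by the exhaust: for an idealised thruster the jet power is P = F·ve/2, so at fixed power the available thrust falls as exhaust velocity rises. A minimal sketch, with the power and efficiency values being assumptions for illustration only:

def thrust_from_power(power_watts, exhaust_velocity, efficiency):
    # Idealised jet relation P_jet = F * ve / 2, rearranged as F = 2 * eta * P / ve
    return 2.0 * efficiency * power_watts / exhaust_velocity

power = 10000.0  # W of electrical power available (assumed for illustration)
for ve in (5000.0, 30000.0, 100000.0):  # m/s
    thrust = thrust_from_power(power, ve, efficiency=0.7)
    print(ve, "m/s ->", round(thrust * 1000, 1), "mN")
# higher exhaust velocity buys propellant economy at the price of thrust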
Electric propulsion is commonly used for station keeping on commercial communications satellites and for prime propulsion on some scientific space missions because of their high specific impulse. However, they generally have very small values of thrust and therefore must be operated for long durations to provide the total impulse required by a mission.
The idea of electric propulsion dates to 1906, when Robert Goddard considered the possibility in his personal notebook. Konstantin Tsiolkovsky published the idea in 1911.
Electric propulsion methods include:
Ion thrusters, which accelerate ions first and later neutralize the ion beam with an electron stream emitted from a cathode called a neutralizer;
Electrostatic ion thrusters
Field-emission electric propulsion
MagBeam thrusters
Hall-effect thrusters
Colloid thrusters
Electrothermal thrusters, wherein electromagnetic fields are used to generate a plasma that heats the bulk propellant; the thermal energy imparted to the propellant gas is then converted into kinetic energy by a nozzle made either of physical material or formed by magnetic fields;
Arcjets using DC current or microwaves
Helicon double-layer thrusters
Resistojets
Electromagnetic thrusters, wherein ions are accelerated either by the Lorentz Force or by the effect of electromagnetic fields where the electric field is not in the direction of the acceleration;
Plasma propulsion engines
Magnetoplasmadynamic thrusters
Electrodeless plasma thrusters
Pulsed inductive thrusters
Pulsed plasma thrusters
Variable specific impulse magnetoplasma rockets (VASIMR)
Vacuum arc thrusters
Mass drivers designed for propulsion.
Power sources
For some missions, particularly reasonably close to the Sun, solar energy may be sufficient, and has often been used, but for others further out or at higher power, nuclear energy is necessary; engines drawing their power from a nuclear source are called nuclear electric rockets.
Current nuclear power generators are approximately half the weight of solar panels per watt of energy supplied, at terrestrial distances from the Sun. Chemical power generators are not used due to the far lower total available energy. Beamed power to the spacecraft is considered to have potential, according to NASA and the University of Colorado Boulder.
With any current source of electrical power, chemical, nuclear or solar, the maximum amount of power that can be generated limits the amount of thrust that can be produced to a small value. Power generation adds significant mass to the spacecraft, and ultimately the weight of the power source limits the performance of the vehicle.
Nuclear propulsion
Nuclear fuels typically have very high specific energy, much higher than chemical fuels, which means that they can generate large amounts of energy per unit mass. This makes them valuable in spaceflight, as it can enable high specific impulses, sometimes even at high thrusts. The machinery to do this is complex, but research has developed methods for their use in propulsion systems, and some have been tested in a laboratory.
Here, nuclear propulsion refers to systems in which the nuclear reaction directly powers the thrust, as opposed to a nuclear electric rocket, in which a nuclear reactor provides power (instead of solar panels) for another type of electric propulsion.
Nuclear propulsion methods include:
Fission-fragment rockets
Fission sails
Fusion rockets
Nuclear thermal rockets (NTR)
Nuclear pulse propulsion
Nuclear salt-water rockets
Radioisotope rockets
Without internal reaction mass
There are several different space drives that need little or no reaction mass to function.
Reaction wheels
Many spacecraft use reaction wheels or control moment gyroscopes to control orientation in space. A satellite or other space vehicle is subject to the law of conservation of angular momentum, which constrains a body from a net change in angular momentum without an external torque. Thus, for a vehicle to change its relative orientation without expending reaction mass, another part of the vehicle may rotate in the opposite direction. Non-conservative external forces, primarily gravitational and atmospheric, can contribute up to several degrees per day to angular momentum, so such systems are designed to "bleed off" undesired rotational energies built up over time.
EM wave-based propulsion
The law of conservation of momentum is usually taken to imply that any engine which uses no reaction mass cannot accelerate the center of mass of a spaceship (changing orientation, on the other hand, is possible). But space is not empty, especially space inside the Solar System; there are gravitation fields, magnetic fields, electromagnetic waves, solar wind and solar radiation. Electromagnetic waves in particular are known to contain momentum, despite being massless; specifically, the momentum density P of an EM wave is quantitatively 1/c² times the Poynting vector S, i.e. P = S/c², where c is the velocity of light. Field propulsion methods which do not rely on reaction mass thus must try to take advantage of this fact by coupling to a momentum-bearing field such as an EM wave that exists in the vicinity of the craft; however, because many of these phenomena are diffuse in nature, corresponding propulsion structures must be proportionately large.
Solar and magnetic sails
The concept of solar sails relies on radiation pressure from electromagnetic energy, but solar sails require a large collection surface to function effectively. E-sails propose to use very thin and lightweight wires holding an electric charge to deflect particles, which may offer more controllable directionality.
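To indicate the magnitudes involved, the sketch below estimates the photon-pressure force on an idealised flat, perfectly reflecting sail facing the Sun at 1 AU, using the roughly 1361 W/m² solar irradiance; the sail area and perfect-reflection assumption are illustrative.

C = 299792458.0                # m/s, speed of light
SOLAR_IRRADIANCE_1AU = 1361.0  # W/m^2 at one astronomical unit

def sail_force(area_m2, irradiance=SOLAR_IRRADIANCE_1AU, reflectivity=1.0):
    # Radiation pressure on a sail facing the Sun: absorbed light delivers S/c
    # per unit area; perfect reflection doubles the momentum transfer.
    pressure = (1.0 + reflectivity) * irradiance / C
    return pressure * area_m2

area = 1000.0  # m^2, roughly a 32 m x 32 m sail (illustrative)
print(round(sail_force(area) * 1000, 1), "mN")   # about 9.1 mN, but continuous and propellant-free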
Magnetic sails deflect charged particles from the solar wind with a magnetic field, thereby imparting momentum to the spacecraft. For instance, the so-called Magsail is a large superconducting loop proposed for acceleration/deceleration in the solar wind and deceleration in the interstellar medium. A variant is the mini-magnetospheric plasma propulsion system and its successor, the magnetoplasma sail, which inject plasma at a low rate to enhance the magnetic field to more effectively deflect charged particles in a plasma wind.
Japan launched a solar sail-powered spacecraft, IKAROS in May 2010, which successfully demonstrated propulsion and guidance (and is still active as of this date). As further proof of the solar sail concept, NanoSail-D became the first such powered satellite to orbit Earth. As of August 2017, NASA confirmed the Sunjammer solar sail project was concluded in 2014 with lessons learned for future space sail projects. The U.K. Cubesail programme will be the first mission to demonstrate solar sailing in low Earth orbit, and the first mission to demonstrate full three-axis attitude control of a solar sail.
Other propulsion types
The concept of a gravitational slingshot is a form of propulsion to carry a space probe onward to other destinations without the expense of reaction mass; harnessing the gravitational energy of other celestial objects allows the spacecraft to gain kinetic energy. However, more energy can be obtained from the gravity assist if rockets are used via the Oberth effect.
A tether propulsion system employs a long cable with a high tensile strength to change a spacecraft's orbit, such as by interaction with a planet's magnetic field or through momentum exchange with another object.
Beam-powered propulsion is another method of propulsion without reaction mass, and includes sails pushed by laser, microwave, or particle beams.
Advanced propulsion technology
Advanced, and in some cases theoretical, propulsion technologies may use chemical or nonchemical physics to produce thrust but are generally considered to be of lower technical maturity with challenges that have not been overcome. For both human and robotic exploration, traversing the solar system is a struggle against time and distance. The most distant planets are 4.5–6 billion kilometers from the Sun and to reach them in any reasonable time requires much more capable propulsion systems than conventional chemical rockets. Rapid inner solar system missions with flexible launch dates are difficult, requiring propulsion systems that are beyond today's current state of the art. The logistics, and therefore the total system mass required to support sustained human exploration beyond Earth to destinations such as the Moon, Mars, or near-Earth objects, are daunting unless more efficient in-space propulsion technologies are developed and fielded.
A variety of hypothetical propulsion techniques have been considered that require a deeper understanding of the properties of space, particularly inertial frames and the vacuum state. Such methods are highly speculative and include:
Black hole starship
Differential sail
Gravitational shielding
Field propulsion
Diametric drive
Disjunction drive
Pitch drive
Bias drive
Photon rocket
Quantum vacuum thruster
Nano electrokinetic thruster
Reactionless drive
Abraham–Minkowski drive
Alcubierre drive
Dean drive
EmDrive
Heim theory
Woodward effect
A NASA assessment of its Breakthrough Propulsion Physics Program divides such proposals into those that are non-viable for propulsion purposes, those that are of uncertain potential, and those that are not impossible according to current theories.
Table of methods
Below is a summary of some of the more popular, proven technologies, followed by increasingly speculative methods. Four numbers are shown. The first is the effective exhaust velocity: the equivalent speed at which the propellant leaves the vehicle. This is not necessarily the most important characteristic of the propulsion method; thrust, power consumption, and other factors can be. However,
if the delta-v is much more than the exhaust velocity, then exorbitant amounts of fuel are necessary (see the section on calculations, above), and
if the exhaust velocity is much more than the delta-v, then proportionally more energy is needed; if the power is limited, as with solar energy, this means that the journey takes a proportionally longer time.
The second and third are the typical amounts of thrust and the typical burn times of the method; outside a gravitational potential, small amounts of thrust applied over a long period will give the same effect as large amounts of thrust over a short period, if the object is not significantly influenced by gravity. The fourth is the maximum delta-v the technique can give without staging. For rocket-like propulsion systems, this is a function of mass fraction and exhaust velocity; mass fraction for rocket-like systems is usually limited by propulsion system weight and tankage weight. For a system to achieve this limit, the payload may need to be a negligible percentage of the vehicle, and so the practical limit on some systems can be much lower.
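The exponential penalty noted above can be made concrete with the Tsiolkovsky rocket equation, which was covered earlier in the document. The short sketch below is illustrative only; the exhaust velocity and delta-v values are assumed and not taken from any table here.

```python
import math

def mass_ratio(delta_v, v_exhaust):
    """Tsiolkovsky rocket equation: m0/m1 = exp(delta_v / v_e)."""
    return math.exp(delta_v / v_exhaust)

v_e = 4500.0  # m/s, roughly a hydrogen/oxygen chemical engine (illustrative)
for dv in (4_500, 9_000, 13_500, 18_000):  # m/s
    r = mass_ratio(dv, v_e)
    prop_fraction = 1.0 - 1.0 / r
    print(f"delta-v = {dv:6d} m/s -> mass ratio {r:6.1f}, "
          f"propellant fraction {prop_fraction:.3f}")
```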
Table Notes
Planetary and atmospheric propulsion
Launch-assist mechanisms
There have been many ideas proposed for launch-assist mechanisms that have the potential of substantially reducing the cost of getting to orbit. Proposed non-rocket spacelaunch launch-assist mechanisms include:
Skyhook (requires reusable suborbital launch vehicle, not feasible using presently available materials)
Space elevator (tether from Earth's surface to geostationary orbit, cannot be built with existing materials)
Launch loop (a very fast enclosed rotating loop about 80 km tall)
Space fountain (a very tall building held up by a stream of masses fired from its base)
Orbital ring (a ring around Earth with spokes hanging down off bearings)
Electromagnetic catapult (railgun, coilgun) (an electric gun)
Rocket sled launch
Space gun (Project HARP, ram accelerator) (a chemically powered gun)
Beam-powered propulsion (rockets and jets powered from the ground via a beam)
High-altitude platforms to assist initial stage
Air-breathing engines
Studies generally show that conventional air-breathing engines, such as ramjets or turbojets, are basically too heavy (have too low a thrust/weight ratio) to give significant performance improvement when installed on a launch vehicle. However, launch vehicles can be air launched from separate lift vehicles (e.g. B-29, Pegasus Rocket and White Knight) which do use such propulsion systems. Jet engines mounted on a launch rail could also be used in this way.
On the other hand, very lightweight or very high-speed engines have been proposed that take advantage of the air during ascent:
SABRE – a lightweight hydrogen fuelled turbojet with precooler
ATREX – a lightweight hydrogen fuelled turbojet with precooler
Liquid air cycle engine – a hydrogen-fuelled jet engine that liquefies the air before burning it in a rocket engine
Scramjet – jet engines that use supersonic combustion
Shcramjet – similar to a scramjet engine; however, it takes advantage of shockwaves produced by the aircraft in the combustion chamber to increase overall efficiency.
Normal rocket launch vehicles fly almost vertically, roll over at an altitude of some tens of kilometers, and then burn sideways for orbit; this initial vertical climb wastes propellant but is optimal as it greatly reduces air drag. Airbreathing engines burn propellant much more efficiently and this would permit a far flatter launch trajectory. The vehicles would typically fly approximately tangentially to Earth's surface until leaving the atmosphere, then perform a rocket burn to bridge the final delta-v to orbital velocity.
For spacecraft already in very low-orbit, air-breathing electric propulsion could use residual gases in the upper atmosphere as a propellant. Air-breathing electric propulsion could make a new class of long-lived, low-orbiting missions feasible on Earth, Mars or Venus.
Planetary arrival and landing
When a vehicle is to enter orbit around its destination planet, or when it is to land, it must adjust its velocity. This can be done using any of the methods listed above (provided they can generate a high enough thrust), but there are methods that can take advantage of planetary atmospheres and/or surfaces.
Aerobraking allows a spacecraft to reduce the high point of an elliptical orbit by repeated brushes with the atmosphere at the low point of the orbit. This can save a considerable amount of fuel because it takes much less delta-V to enter an elliptical orbit compared to a low circular orbit. Because the braking is done over the course of many orbits, heating is comparatively minor, and a heat shield is not required. This has been done on several Mars missions such as Mars Global Surveyor, 2001 Mars Odyssey, and Mars Reconnaissance Orbiter, and at least one Venus mission, Magellan.
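To see why the loose elliptical capture orbit is so much cheaper, the hedged sketch below applies the vis-viva equation with assumed, Mars-like numbers (gravitational parameter, periapsis altitude, hyperbolic excess speed, apoapsis altitude). It is an order-of-magnitude illustration, not mission data.

```python
import math

mu = 4.2828e13            # m^3/s^2, Mars gravitational parameter (assumed)
r_p = 3_390e3 + 400e3     # m, periapsis radius at ~400 km altitude (illustrative)
v_inf = 3_000.0           # m/s, hyperbolic excess speed (illustrative)

def vis_viva(r, a):
    """Orbital speed at radius r for an orbit with semi-major axis a."""
    return math.sqrt(mu * (2.0 / r - 1.0 / a))

v_hyp = math.sqrt(v_inf**2 + 2 * mu / r_p)        # arrival speed at periapsis
v_circ = math.sqrt(mu / r_p)                      # low circular orbit speed
a_ellipse = (r_p + (3_390e3 + 40_000e3)) / 2.0    # loose elliptical capture orbit
v_ell = vis_viva(r_p, a_ellipse)

print(f"burn to circular orbit  : {v_hyp - v_circ:7.0f} m/s")
print(f"burn to elliptical orbit: {v_hyp - v_ell:7.0f} m/s")
```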
Aerocapture is a much more aggressive maneuver, converting an incoming hyperbolic orbit to an elliptical orbit in one pass. This requires a heat shield and more precise navigation, because it must be completed in one pass through the atmosphere, and unlike aerobraking no preview of the atmosphere is possible. If the intent is to remain in orbit, then at least one more propulsive maneuver is required after aerocapture; otherwise the low point of the resulting orbit will remain in the atmosphere, resulting in eventual re-entry. Aerocapture has not yet been tried on a planetary mission, but the re-entry skips by Zond 6 and Zond 7 upon lunar return were aerocapture maneuvers, because they turned a hyperbolic orbit into an elliptical orbit. On these missions, because there was no attempt to raise the perigee after the aerocapture, the resulting orbit still intersected the atmosphere, and re-entry occurred at the next perigee.
A ballute is an inflatable drag device.
Parachutes can land a probe on a planet or moon with an atmosphere, usually after the atmosphere has scrubbed off most of the velocity, using a heat shield.
Airbags can soften the final landing.
Lithobraking, or stopping by impacting the surface, is usually done by accident. However, it may be done deliberately with the probe expected to survive (see, for example, the Deep Impact spacecraft), in which case very sturdy probes are required.
Research
Development of technologies will result in technical solutions that improve thrust levels, specific impulse, power, specific mass, (or specific power), volume, system mass, system complexity, operational complexity, commonality with other spacecraft systems, manufacturability, durability, and cost. These types of improvements will yield decreased transit times, increased payload mass, safer spacecraft, and decreased costs. In some instances, the development of technologies within this technology area will result in mission-enabling breakthroughs that will revolutionize space exploration. There is no single propulsion technology that will benefit all missions or mission types; the requirements for in-space propulsion vary widely according to their intended application.
One institution focused on developing primary propulsion technologies aimed at benefitting near and mid-term science missions by reducing cost, mass, and/or travel times is the Glenn Research Center (GRC). Electric propulsion architectures are of particular interest to the GRC, including ion and Hall thrusters. One system combines solar sails, a form of propellantless propulsion which relies on naturally-occurring starlight for propulsion energy, and Hall thrusters. Other propulsion technologies being developed include advanced chemical propulsion and aerocapture.
Defining technologies
The term "mission pull" defines a technology or a performance characteristic necessary to meet a planned NASA mission requirement. Any other relationship between a technology and a mission (an alternate propulsion system, for example) is categorized as "technology push." Also, a space demonstration refers to the spaceflight of a scaled version of a particular technology or of a critical technology subsystem. On the other hand, a space validation would serve as a qualification flight for future mission implementation. A successful validation flight would not require any additional space testing of a particular technology before it can be adopted for a science or exploration mission.
Testing
Spacecraft propulsion systems are often first statically tested on Earth's surface, within the atmosphere, but many systems require a vacuum chamber to be tested fully. Rockets are usually tested at a rocket engine test facility well away from habitation and other buildings for safety reasons. Ion drives are far less dangerous and require much less stringent safety measures; usually only a moderately large vacuum chamber is needed. Static firing of engines is done at ground test facilities, and systems which cannot be adequately tested on the ground and require launches may be employed at a launch site.
In fiction
In science fiction, space ships use various means to travel, some of them scientifically plausible (like solar sails or ramjets), others, mostly or entirely fictitious (like anti-gravity, warp drive, spindizzy or hyperspace travel).
Further reading
See also
Anti-gravity
Artificial gravity
Atmospheric entry
Breakthrough Propulsion Physics Program
Flight dynamics (spacecraft)
Index of aerospace engineering articles
Interplanetary Transport Network
Interplanetary travel
List of aerospace engineering topics
Lists of rockets
Orbital maneuver
Orbital mechanics
Pulse detonation engine
Rocket
Rocket engine nozzles
Satellite
Spaceflight
Space launch
Space travel using constant acceleration
Specific impulse
Tsiolkovsky rocket equation
References
External links
NASA Breakthrough Propulsion Physics project
Different Rockets
Earth-to-Orbit Transportation Bibliography
Spaceflight Propulsion – a detailed survey by Greg Goebel, in the public domain
Johns Hopkins University, Chemical Propulsion Information Analysis Center
Tool for Liquid Rocket Engine Thermodynamic Analysis
Smithsonian National Air and Space Museum's How Things Fly website
Fullerton, Richard K. "Advanced EVA Roadmaps and Requirements." Proceedings of the 31st International Conference on Environmental Systems. 2001.
Atomic Rocket – Engines: A site listing and detailing real, theoretical and fantasy space engines.
Particle image velocimetry
Particle image velocimetry (PIV) is an optical method of flow visualization used in education and research. It is used to obtain instantaneous velocity measurements and related properties in fluids. The fluid is seeded with tracer particles which, for sufficiently small particles, are assumed to faithfully follow the flow dynamics (the degree to which the particles faithfully follow the flow is represented by the Stokes number). The fluid with entrained particles is illuminated so that particles are visible. The motion of the seeding particles is used to calculate speed and direction (the velocity field) of the flow being studied.
Other techniques used to measure flows are laser Doppler velocimetry and hot-wire anemometry. The main difference between PIV and those techniques is that PIV produces two-dimensional or even three-dimensional vector fields, while the other techniques measure the velocity at a point. During PIV, the particle concentration is such that it is possible to identify individual particles in an image, but not with certainty to track it between images. When the particle concentration is so low that it is possible to follow an individual particle it is called particle tracking velocimetry, while laser speckle velocimetry is used for cases where the particle concentration is so high that it is difficult to observe individual particles in an image.
Typical PIV apparatus consists of a camera (normally a digital camera with a charge-coupled device (CCD) chip in modern systems), a strobe or laser with an optical arrangement to limit the physical region illuminated (normally a cylindrical lens to convert a light beam to a line), a synchronizer to act as an external trigger for control of the camera and laser, the seeding particles and the fluid under investigation. A fiber-optic cable or liquid light guide may connect the laser to the lens setup. PIV software is used to post-process the optical images.
History
Particle image velocimetry (PIV) is a non-intrusive optical flow measurement technique used to study fluid flow patterns and velocities. PIV has found widespread applications in various fields of science and engineering, including aerodynamics, combustion, oceanography, and biofluids. The development of PIV can be traced back to the early 20th century when researchers started exploring different methods to visualize and measure fluid flow.
The early days of PIV can be credited to the pioneering work of Ludwig Prandtl, a German physicist and engineer, who is often regarded as the father of modern aerodynamics. In the 1920s, Prandtl and his colleagues used shadowgraph and schlieren techniques to visualize and measure flow patterns in wind tunnels. These methods relied on the refractive index differences between the fluid regions of interest and the surrounding medium to generate contrast in the images. However, these methods were limited to qualitative observations and did not provide quantitative velocity measurements.
The early PIV setups were relatively simple and used photographic film as the image recording medium. A laser was used to illuminate particles, such as oil droplets or smoke, added to the flow, and the resulting particle motion was captured on film. The films were then developed and analyzed to obtain flow velocity information. These early PIV systems had limited spatial resolution and were labor-intensive, but they provided valuable insights into fluid flow behavior.
The advent of lasers in the 1960s revolutionized the field of flow visualization and measurement. Lasers provided a coherent and monochromatic light source that could be easily focused and directed, making them ideal for optical flow diagnostics. In the late 1960s and early 1970s, researchers such as Arthur L. Lavoie, Hervé L. J. H. Scohier, and Adrian Fouriaux independently proposed the concept of particle image velocimetry (PIV). PIV was initially used for studying air flows and measuring wind velocities, but its applications soon extended to other areas of fluid dynamics.
In the 1980s, the development of charge-coupled devices (CCDs) and digital image processing techniques revolutionized PIV. CCD cameras replaced photographic film as the image recording medium, providing higher spatial resolution, faster data acquisition, and real-time processing capabilities. Digital image processing techniques allowed for accurate and automated analysis of the PIV images, greatly reducing the time and effort required for data analysis.
The advent of digital imaging and computer processing capabilities in the 1980s and 1990s revolutionized PIV, leading to the development of advanced PIV techniques, such as multi-frame PIV, stereo-PIV, and time-resolved PIV. These techniques allowed for higher accuracy, higher spatial and temporal resolution, and three-dimensional measurements, expanding the capabilities of PIV and enabling its application in more complex flow systems.
In the following decades, PIV continued to evolve and advance in several key areas. One significant advancement was the use of dual or multiple exposures in PIV, which allowed for the measurement of both instantaneous and time-averaged velocity fields. Another was stereoscopic PIV ("stereo PIV" or "stereo-PIV"), which uses two cameras viewing the light sheet from different angles to capture image pairs with a known time delay, allowing for the measurement of three-component velocity vectors in a plane. This provided a more complete picture of the flow field and enabled the study of complex flows, such as turbulence and vortices.
In the 2000s and beyond, PIV continued to evolve with the development of high-power lasers, high-speed cameras, and advanced image analysis algorithms. These advancements have enabled PIV to be used in extreme conditions, such as high-speed flows, combustion systems, and microscale flows, opening up new frontiers for PIV research. PIV has also been integrated with other measurement techniques, such as temperature and concentration measurements, and has been used in emerging fields, such as microscale and nanoscale flows, granular flows, and additive manufacturing.
The advancement of PIV has been driven by the development of new laser sources, cameras, and image analysis techniques. Advances in laser technology have led to the use of high-power lasers, such as Nd:YAG lasers and diode lasers, which provide increased illumination intensity and allow for measurements in more challenging environments, such as high-speed flows and combustion systems. High-speed cameras with improved sensitivity and frame rates have also been developed, enabling the capture of transient flow phenomena with high temporal resolution. Furthermore, advanced image analysis techniques, such as correlation-based algorithms, phase-based methods, and machine learning algorithms, have been developed to enhance the accuracy and efficiency of PIV measurements.
Another major advancement in PIV was the development of digital correlation algorithms for image analysis. These algorithms allowed for more accurate and efficient processing of PIV images, enabling higher spatial resolution and faster data acquisition rates. Various correlation algorithms, such as cross-correlation, Fourier-transform-based correlation, and adaptive correlation, were developed and widely used in PIV research.
PIV has also benefited from the development of computational fluid dynamics (CFD) simulations, which have become powerful tools for predicting and analyzing fluid flow behavior. PIV data can be used to validate and calibrate CFD simulations, and in turn, CFD simulations can provide insights into the interpretation and analysis of PIV data. The combination of experimental PIV measurements and numerical simulations has enabled researchers to gain a deeper understanding of fluid flow phenomena and has led to new discoveries and advancements in various scientific and engineering fields.
In addition to the technical advancements, PIV has also been integrated with other measurement techniques, such as temperature and concentration measurements, to provide more comprehensive and multi-parameter flow measurements. For example, combining PIV with thermographic phosphors or laser-induced fluorescence allows for simultaneous measurement of velocity and temperature or concentration fields, providing valuable data for studying heat transfer, mixing, and chemical reactions in fluid flows.
Applications
The historical development of PIV has been driven by the need for accurate and non-intrusive flow measurements in various fields of science and engineering. The early years of PIV were marked by the development of basic PIV techniques, such as two-frame PIV, and the application of PIV in fundamental fluid dynamics research, primarily in academic settings. As PIV gained popularity, researchers started using it in more practical applications, such as aerodynamics, combustion, and oceanography.
As PIV continues to advance and evolve, it is expected to find further applications in a wide range of fields, from fundamental research in fluid dynamics to practical applications in engineering, environmental science, and medicine. The continued development of PIV techniques, including advancements in lasers, cameras, image analysis algorithms, and integration with other measurement techniques, will further enhance its capabilities and broaden its applications.
In aerodynamics, PIV has been used to study the flow over aircraft wings, rotor blades, and other aerodynamic surfaces, providing insights into the flow behavior and aerodynamic performance of these systems.
As PIV gained popularity, it found applications in a wide range of fields beyond aerodynamics, including combustion, oceanography, biofluids, and microscale flows. In combustion research, PIV has been used to study the details of combustion processes, such as flame propagation, ignition, and fuel spray dynamics, providing valuable insights into the complex interactions between fuel and air in combustion systems. In oceanography, PIV has been used to study the motion of water currents, waves, and turbulence, aiding in the understanding of ocean circulation patterns and coastal erosion. In biofluids research, PIV has been applied to study blood flow in arteries and veins, respiratory flow, and the motion of cilia and flagella in microorganisms, providing important information for understanding physiological processes and disease mechanisms.
PIV has also been used in new and emerging fields, such as microscale and nanoscale flows, granular flows, and multiphase flows. Micro-PIV and nano-PIV have been used to study flows in microchannels, nanopores, and biological systems at the microscale and nanoscale, providing insights into the unique behaviors of fluids at these length scales. PIV has been applied to study the motion of particles in granular flows, such as avalanches and landslides, and to investigate multiphase flows, such as bubbly flows and oil-water flows, which are important in environmental and industrial processes. In microscale flows, conventional measurement techniques are challenging to apply due to the small length scales involved. Micro-PIV has been used to study flows in microfluidic devices, such as lab-on-a-chip systems, and to investigate phenomena such as droplet formation, mixing, and cell motion, with applications in drug delivery, biomedical diagnostics, and microscale engineering.
PIV has also found applications in advanced manufacturing processes, such as additive manufacturing, where understanding and optimizing fluid flow behavior is critical for achieving high-quality and high-precision products. PIV has been used to study the flow dynamics of gases, liquids, and powders in additive manufacturing processes, providing insights into the process parameters that affect the quality and properties of the manufactured products.
PIV has also been used in environmental science to study the dispersion of pollutants in air and water, sediment transport in rivers and coastal areas, and the behavior of pollutants in natural and engineered systems. In energy research, PIV has been used to study the flow behavior in wind turbines, hydroelectric power plants, and combustion processes in engines and turbines, aiding in the development of more efficient and environmentally friendly energy systems.
Equipment and apparatus
Seeding particles
The seeding particles are an inherently critical component of the PIV system. Depending on the fluid under investigation, the particles must be able to match the fluid properties reasonably well. Otherwise they will not follow the flow satisfactorily enough for the PIV analysis to be considered accurate. Ideal particles will have the same density as the fluid system being used, and are spherical (these particles are called microspheres). While the actual particle choice is dependent on the nature of the fluid, generally for macro PIV investigations they are glass beads, polystyrene, polyethylene, aluminum flakes or oil droplets (if the fluid under investigation is a gas). Refractive index for the seeding particles should be different from the fluid which they are seeding, so that the laser sheet incident on the fluid flow will reflect off of the particles and be scattered towards the camera.
The particles are typically of a diameter in the order of 10 to 100 micrometers. As for sizing, the particles should be small enough so that the response time of the particles to the motion of the fluid is reasonably short to accurately follow the flow, yet large enough to scatter a significant quantity of the incident laser light. For some experiments involving combustion, seeding particle size may be smaller, in the order of 1 micrometer, to avoid the quenching effect that the inert particles may have on flames. Due to the small size of the particles, the particles' motion is dominated by Stokes' drag and settling or rising effects. For spherical particles (microspheres) at a very low Reynolds number, the ability of the particles to follow the fluid's flow is inversely proportional to the difference in density between the particles and the fluid, and also inversely proportional to the square of their diameter. The scattered light from the particles is dominated by Mie scattering and so is also proportional to the square of the particles' diameters. Thus the particle size needs to be balanced to scatter enough light to accurately visualize all particles within the laser sheet plane, but small enough to accurately follow the flow.
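The response-time argument above is often estimated with the Stokes-drag time scale τ = ρ_p d²/(18 μ) and compared to a flow time scale through the Stokes number. The sketch below uses assumed values for an oil droplet in air and an assumed flow time scale; it only illustrates how such a check might be made when choosing seeding.

```python
# Sketch: particle response time and Stokes number for PIV seeding.
# All numbers are illustrative assumptions, not values from the article.
rho_p = 920.0      # kg/m^3, oil droplet density
d_p = 1.0e-6       # m, droplet diameter
mu_air = 1.81e-5   # Pa*s, dynamic viscosity of air at ~20 C

tau_p = rho_p * d_p**2 / (18.0 * mu_air)   # Stokes-drag response time

# Flow time scale: e.g. a 1 mm flow structure convected at 10 m/s (assumed)
tau_flow = 1.0e-3 / 10.0
stokes_number = tau_p / tau_flow

print(f"particle response time: {tau_p*1e6:.2f} microseconds")
print(f"Stokes number:          {stokes_number:.3e}  (<< 1 means faithful tracing)")
```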
The seeding mechanism needs to also be designed so as to seed the flow to a sufficient degree without overly disturbing the flow.
Camera
To perform PIV analysis on the flow, two exposures of laser light are required upon the camera from the flow. Originally, with the inability of cameras to capture multiple frames at high speeds, both exposures were captured on the same frame and this single frame was used to determine the flow. A process called autocorrelation was used for this analysis. However, as a result of autocorrelation the direction of the flow becomes unclear, as it is not clear which particle spots are from the first pulse and which are from the second pulse. Faster digital cameras using CCD or CMOS chips were developed since then that can capture two frames at high speed with a few hundred ns difference between the frames. This has allowed each exposure to be isolated on its own frame for more accurate cross-correlation analysis. The limitation of typical cameras is that this fast speed is limited to a pair of shots. This is because each pair of shots must be transferred to the computer before another pair of shots can be taken. Typical cameras can only take a pair of shots at a much slower speed. High speed CCD or CMOS cameras are available but are much more expensive.
Laser and optics
For macro PIV setups, lasers are predominant due to their ability to produce high-power light beams with short pulse durations. This yields short exposure times for each frame. Nd:YAG lasers, commonly used in PIV setups, emit primarily at a 1064 nm wavelength and its harmonics (532 nm, 266 nm, etc.). For safety reasons, the laser emission is typically bandpass filtered to isolate the 532 nm harmonic (this is green light, the only harmonic visible to the naked eye). A fiber-optic cable or liquid light guide might be used to direct the laser light to the experimental setup.
The optics consist of a spherical lens and cylindrical lens combination. The cylindrical lens expands the laser into a plane while the spherical lens compresses the plane into a thin sheet. This is critical as the PIV technique cannot generally measure motion normal to the laser sheet and so ideally this is eliminated by maintaining an entirely 2-dimensional laser sheet. The spherical lens cannot compress the laser sheet into an actual 2-dimensional plane. The minimum thickness is on the order of the wavelength of the laser light and occurs at a finite distance from the optics setup (the focal point of the spherical lens). This is the ideal location to place the analysis area of the experiment.
The correct lens for the camera should also be selected to properly focus on and visualize the particles within the investigation area.
Synchronizer
The synchronizer acts as an external trigger for both the camera(s) and the laser. While analogue systems in the form of a photosensor, rotating aperture and a light source have been used in the past, most systems in use today are digital. Controlled by a computer, the synchronizer can dictate the timing of each frame of the CCD camera's sequence in conjunction with the firing of the laser to within 1 ns precision. Thus the time between each pulse of the laser and the placement of the laser shot in reference to the camera's timing can be accurately controlled. Knowledge of this timing is critical as it is needed to determine the velocity of the fluid in the PIV analysis. Stand-alone electronic synchronizers, called digital delay generators, offer variable resolution timing from as low as 250 ps to as high as several ms. With up to eight channels of synchronized timing, they offer the means to control several flash lamps and Q-switches as well as provide for multiple camera exposures.
Analysis
The frames are split into a large number of interrogation areas, or windows. It is then possible to calculate a displacement vector for each window with help of signal processing and autocorrelation or cross-correlation techniques. This is converted to a velocity using the time between laser shots and the physical size of each pixel on the camera. The size of the interrogation window should be chosen to have at least 6 particles per window on average.
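A minimal sketch of this interrogation-window step is shown below: FFT-based cross-correlation of one window pair, peak detection, and conversion to velocity. The pixel size and pulse separation are assumed values, the test images are synthetic, and no sub-pixel peak fit or window overlap is included.

```python
import numpy as np

def window_displacement(win_a, win_b):
    """Integer-pixel displacement of win_b relative to win_a via
    FFT-based circular cross-correlation (minimal sketch, no sub-pixel fit)."""
    a = win_a - win_a.mean()
    b = win_b - win_b.mean()
    corr = np.real(np.fft.ifft2(np.conj(np.fft.fft2(a)) * np.fft.fft2(b)))
    corr = np.fft.fftshift(corr)
    peak_y, peak_x = np.unravel_index(np.argmax(corr), corr.shape)
    cy, cx = np.array(corr.shape) // 2
    return peak_x - cx, peak_y - cy   # displacement in pixels (dx, dy)

# Synthetic test: shift a random particle image by (dx, dy) = (3, 1) pixels
rng = np.random.default_rng(0)
frame_a = rng.random((32, 32))
frame_b = np.roll(frame_a, shift=(1, 3), axis=(0, 1))

dx_px, dy_px = window_displacement(frame_a, frame_b)

pixel_size = 20e-6   # m per pixel in the object plane (assumed)
dt = 100e-6          # s between laser pulses (assumed)
print(f"displacement: ({dx_px}, {dy_px}) px")
print(f"velocity: ({dx_px*pixel_size/dt:.3f}, {dy_px*pixel_size/dt:.3f}) m/s")
```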
The synchronizer controls the timing between image exposures and also permits image pairs to be acquired at various times along the flow. For accurate PIV analysis, it is ideal that the region of the flow that is of interest should display an average particle displacement of about 8 pixels. This is a compromise between a longer time spacing which would allow the particles to travel further between frames, making it harder to identify which interrogation window traveled to which point, and a shorter time spacing, which could make it overly difficult to identify any displacement within the flow.
The scattered light from each particle should be in the region of 2 to 4 pixels across on the image. If too large an area is recorded, particle image size drops and peak locking might occur with loss of sub pixel precision. There are methods to overcome the peak locking effect, but they require some additional work.
If there is in-house PIV expertise and time to develop a system, it is possible, though not trivial, to build a custom PIV system. Research grade PIV systems do, however, have high power lasers and high end camera specifications for being able to take measurements with the broadest spectrum of experiments required in research.
PIV is closely related to digital image correlation, an optical displacement measurement technique that uses correlation techniques to study the deformation of solid materials.
Pros and cons
Advantages
The method is, to a large degree, nonintrusive. The added tracers (if they are properly chosen) generally cause negligible distortion of the fluid flow.
Optical measurement avoids the need for Pitot tubes, hotwire anemometers or other intrusive Flow measurement probes. The method is capable of measuring an entire two-dimensional cross section (geometry) of the flow field simultaneously.
High speed data processing allows the generation of large numbers of image pairs which, on a personal computer may be analysed in real time or at a later time, and a high quantity of near-continuous information may be gained.
Sub pixel displacement values allow a high degree of accuracy, since each vector is the statistical average for many particles within a particular tile. Displacement can typically be accurate down to 10% of one pixel on the image plane.
Drawbacks
In some cases the particles will, due to their higher density, not perfectly follow the motion of the fluid (gas/liquid). If experiments are done in water, for instance, it is easily possible to find very cheap particles (e.g. plastic powder with a diameter of ~60 μm) with the same density as water. If the densities still do not match, the density of the fluid can be tuned by increasing or decreasing its temperature. This leads to slight changes in the Reynolds number, so the fluid velocity or the size of the experimental object has to be changed to account for this.
Particle image velocimetry methods will in general not be able to measure components along the z-axis (towards or away from the camera). These components might not only be missed, they might also introduce interference in the data for the x/y-components caused by parallax. These problems do not exist in stereoscopic PIV, which uses two cameras to measure all three velocity components.
Since the resulting velocity vectors are based on cross-correlating the intensity distributions over small areas of the flow, the resulting velocity field is a spatially averaged representation of the actual velocity field. This obviously has consequences for the accuracy of spatial derivatives of the velocity field, vorticity, and spatial correlation functions that are often derived from PIV velocity fields.
PIV systems used in research often use class IV lasers and high-resolution, high-speed cameras, which bring cost and safety constraints.
More complex PIV setups
Stereoscopic PIV
Stereoscopic PIV utilises two cameras with separate viewing angles to extract the z-axis displacement. Both cameras must be focused on the same spot in the flow and must be properly calibrated to have the same point in focus.
In fundamental fluid mechanics, displacement per unit time in the X, Y and Z directions is commonly denoted by the variables U, V and W. As was previously described, basic PIV extracts the U and V displacements as functions of the in-plane X and Y directions. This enables calculation of the ∂U/∂x, ∂U/∂y, ∂V/∂x and ∂V/∂y velocity gradients. However, the other five terms of the velocity gradient tensor cannot be found from this information. The stereoscopic PIV analysis also grants the Z-axis displacement component, W, within that plane. Not only does this grant the Z-axis velocity of the fluid at the plane of interest, but two more velocity gradient terms can be determined: ∂W/∂x and ∂W/∂y. The velocity gradient components ∂U/∂z, ∂V/∂z and ∂W/∂z cannot be determined.
Together, the nine velocity gradient components form the velocity gradient tensor, with entries ∂U/∂x, ∂U/∂y, ∂U/∂z, ∂V/∂x, ∂V/∂y, ∂V/∂z, ∂W/∂x, ∂W/∂y and ∂W/∂z.
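Assuming the stereoscopic vector fields are stored on a regular grid, the six in-plane derivatives can be formed with finite differences, for example with numpy.gradient as in the sketch below; the array names, synthetic fields and grid spacing are all illustrative.

```python
import numpy as np

# Assumed: U, V, W are 2-D arrays of the three velocity components measured by
# stereoscopic PIV on a regular grid in the light-sheet (x-y) plane.
ny, nx = 64, 64
dx = dy = 1.0e-3                      # m, vector spacing (assumed)
x = np.arange(nx) * dx
y = np.arange(ny) * dy
X, Y = np.meshgrid(x, y)
U = np.sin(2 * np.pi * X / x.max())   # synthetic fields, for illustration only
V = np.cos(2 * np.pi * Y / y.max())
W = X * Y

grads = {}
for name, comp in (("U", U), ("V", V), ("W", W)):
    d_dy, d_dx = np.gradient(comp, dy, dx)   # derivatives along (y, x)
    grads[f"d{name}/dx"] = d_dx
    grads[f"d{name}/dy"] = d_dy

# dU/dz, dV/dz and dW/dz are not available from a single measurement plane.
vorticity_z = grads["dV/dx"] - grads["dU/dy"]
print("measurable components:", sorted(grads))
print("peak |omega_z|:", np.abs(vorticity_z).max())
```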
Dual plane stereoscopic PIV
This is an expansion of stereoscopic PIV by adding a second plane of investigation directly offset from the first one. Four cameras are required for this analysis. The two planes of laser light are created by splitting the laser emission with a beam splitter into two beams. Each beam is then polarized orthogonally with respect to one another. Next, they are transmitted through a set of optics and used to illuminate one of the two planes simultaneously.
The four cameras are paired into groups of two. Each pair focuses on one of the laser sheets in the same manner as single-plane stereoscopic PIV. Each of the four cameras has a polarizing filter designed to only let pass the polarized scattered light from the respective planes of interest. This essentially creates a system by which two separate stereoscopic PIV analysis setups are run simultaneously with only a minimal separation distance between the planes of interest.
This technique allows the determination of the three velocity gradient components single-plane stereoscopic PIV could not calculate: ∂U/∂z, ∂V/∂z and ∂W/∂z. With this technique, the entire velocity gradient tensor of the fluid at the 2-dimensional plane of interest can be quantified. A difficulty arises in that the laser sheets should be maintained close enough together so as to approximate a two-dimensional plane, yet offset enough that meaningful velocity gradients can be found in the z-direction.
Multi-plane stereoscopic PIV
There are several extensions of the dual-plane stereoscopic PIV idea available. There is an option to create several parallel laser sheets using a set of beamsplitters and quarter-wave plates, providing three or more planes, using a single laser unit and stereoscopic PIV setup, called XPIV.
Micro PIV
With the use of an epifluorescent microscope, microscopic flows can be analyzed. MicroPIV makes use of fluorescing particles that excite at a specific wavelength and emit at another wavelength. Laser light is reflected through a dichroic mirror, travels through an objective lens that focuses on the point of interest, and illuminates a regional volume. The emission from the particles, along with reflected laser light, shines back through the objective, the dichroic mirror and through an emission filter that blocks the laser light. Where PIV draws its 2-dimensional analysis properties from the planar nature of the laser sheet, microPIV utilizes the ability of the objective lens to focus on only one plane at a time, thus creating a 2-dimensional plane of viewable particles.
MicroPIV particles are on the order of several hundred nm in diameter, meaning they are extremely susceptible to Brownian motion. Thus, a special ensemble-averaging analysis technique must be used. The cross-correlations of a series of basic PIV analyses are averaged together to determine the actual velocity field. As a consequence, only steady flows can be investigated. Special preprocessing techniques must also be used, since the images tend to have a zero-displacement bias from background noise and low signal-to-noise ratios. Usually, high numerical aperture objectives are also used to capture the maximum emission light possible. Optic choice is also critical for the same reasons.
Holographic PIV
Holographic PIV (HPIV) encompasses a variety of experimental techniques which use the interference of coherent light scattered by a particle and a reference beam to encode information of the amplitude and phase of the scattered light incident on a sensor plane. This encoded information, known as a hologram, can then be used to reconstruct the original intensity field by illuminating the hologram with the original reference beam via optical methods or digital approximations. The intensity field is interrogated using 3-D cross-correlation techniques to yield a velocity field.
Off-axis HPIV uses separate beams to provide the object and reference waves. This setup is used to avoid speckle noise from being generated by interference of the two waves within the scattering medium, which would occur if they were both propagated through the medium. An off-axis experiment is a highly complex optical system comprising numerous optical elements, and the reader is referred to an example schematic in Sheng et al. for a more complete presentation.
In-line holography is another approach that provides some unique advantages for particle imaging. Perhaps the largest of these is the use of forward scattered light, which is orders of magnitude brighter than scattering oriented normal to the beam direction. Additionally, the optical setup of such systems is much simpler because the residual light does not need to be separated and recombined at a different location. The in-line configuration also provides a relatively easy extension to apply CCD sensors, creating a separate class of experiments known as digital in-line holography. The complexity of such setups shifts from the optical setup to image post-processing, which involves the use of simulated reference beams. Further discussion of these topics is beyond the scope of this article and is treated in Arroyo and Hinsch.
A variety of issues degrade the quality of HPIV results. The first class of issues involves the reconstruction itself. In holography, the object wave of a particle is typically assumed to be spherical; however, due to Mie scattering theory, this wave is a complex shape which can distort the reconstructed particle. Another issue is the presence of substantial speckle noise which lowers the overall signal-to-noise ratio of particle images. This effect is of greater concern for in-line holographic systems because the reference beam is propagated through the volume along with the scattered object beam. Noise can also be introduced through impurities in the scattering medium, such as temperature variations and window blemishes. Because holography requires coherent imaging, these effects are much more severe than traditional imaging conditions. The combination of these factors increases the complexity of the correlation process. In particular, the speckle noise in an HPIV recording often prevents traditional image-based correlation methods from being used. Instead, single particle identification and correlation are implemented, which set limits on particle number density. A more comprehensive outline of these error sources is given in Meng et al.
In light of these issues, it may seem that HPIV is too complicated and error-prone to be used for flow measurements. However, many impressive results have been obtained with all holographic approaches. Svizher and Cohen used a hybrid HPIV system to study the physics of hairpin vortices. Tao et al. investigated the alignment of vorticity and strain rate tensors in high Reynolds number turbulence. As a final example, Sheng et al. used holographic microscopy to perform near-wall measurements of turbulent shear stress and velocity in turbulent boundary layers.
Scanning PIV
By using a rotating mirror, a high-speed camera and correcting for geometric changes, PIV can be performed nearly instantly on a set of planes throughout the flow field. Fluid properties between the planes can then be interpolated. Thus, a quasi-volumetric analysis can be performed on a target volume. Scanning PIV can be performed in conjunction with the other 2-dimensional PIV methods described to approximate a 3-dimensional volumetric analysis.
Tomographic PIV
Tomographic PIV is based on the illumination, recording, and reconstruction of tracer particles within a 3-D measurement volume. The technique uses several cameras to record simultaneous views of the illuminated volume, which is then reconstructed to yield a discretized 3-D intensity field. A pair of intensity fields are analyzed using 3-D cross-correlation algorithms to calculate the 3-D, 3-C velocity field within the volume. The technique was originally developed by Elsinga et al. in 2006.
The reconstruction procedure is a complex under-determined inverse problem. The primary complication is that a single set of views can result from a large number of 3-D volumes. Procedures to properly determine the unique volume from a set of views are the foundation for the field of tomography. In most Tomo-PIV experiments, the multiplicative algebraic reconstruction technique (MART) is used. The advantage of this pixel-by-pixel reconstruction technique is that it avoids the need to identify individual particles. Reconstructing the discretized 3-D intensity field is computationally intensive and, beyond MART, several developments have sought to significantly reduce this computational expense, for example the multiple line-of-sight simultaneous multiplicative algebraic reconstruction technique (MLOS-SMART), which takes advantage of the sparsity of the 3-D intensity field to reduce memory storage and calculation requirements.
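The sketch below shows the multiplicative update at the heart of MART on a tiny synthetic system; the weighting matrix, measurement vector and relaxation factor are invented for illustration, and a real Tomo-PIV reconstruction involves far larger, carefully calibrated camera models.

```python
import numpy as np

def mart(A, b, n_iter=50, mu=1.0, eps=1e-12):
    """Toy MART: multiplicative update of voxel intensities x so that A @ x ~ b.
    A[i, j] = weight of voxel j along line of sight i (all entries >= 0)."""
    x = np.ones(A.shape[1])            # positive initial guess
    for _ in range(n_iter):
        for i in range(A.shape[0]):    # loop over lines of sight (pixels)
            proj = A[i] @ x
            if proj > eps:
                # voxels on this ray are scaled toward matching measurement b[i]
                x *= (b[i] / proj) ** (mu * A[i])
    return x

rng = np.random.default_rng(1)
x_true = rng.random(20)                          # synthetic voxel intensities
A = (rng.random((40, 20)) < 0.2).astype(float)   # sparse ray-weight matrix
b = A @ x_true                                   # simulated pixel measurements

x_rec = mart(A, b)
print("relative error:", np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))
```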
As a rule of thumb, at least four cameras are needed for acceptable reconstruction accuracy, and best results are obtained when the cameras are placed at approximately 30 degrees normal to the measurement volume. Many additional factors are necessary to consider for a successful experiment.
Tomo-PIV has been applied to a broad range of flows. Examples include the structure of a turbulent boundary layer/shock wave interaction, the vorticity of a cylinder wake or pitching airfoil, rod-airfoil aeroacoustic experiments, and measurements of small-scale, micro flows. More recently, Tomo-PIV has been used together with 3-D particle tracking velocimetry to understand predator-prey interactions, and a portable version of Tomo-PIV has been used to study unique swimming organisms in Antarctica.
Thermographic PIV
Thermographic PIV is based on the use of thermographic phosphors as seeding particles. The use of these thermographic phosphors permits simultaneous measurement of velocity and temperature in a flow.
Thermographic phosphors consist of ceramic host materials doped with rare-earth or transition metal ions, which exhibit phosphorescence when they are illuminated with UV light. The decay time and the spectra of this phosphorescence are temperature sensitive and offer two different methods to measure temperature. The decay-time method consists of fitting the phosphorescence decay to an exponential function and is normally used in point measurements, although it has been demonstrated in surface measurements. The intensity ratio between two different spectral lines of the phosphorescence emission, tracked using spectral filters, is also temperature-dependent and can be employed for surface measurements.
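As a rough illustration of the decay-time route, the sketch below fits a single-exponential model to a synthetic phosphorescence trace and converts the fitted lifetime to temperature through a hypothetical calibration curve; all constants are assumptions, not published phosphor data.

```python
import numpy as np
from scipy.optimize import curve_fit

def decay(t, amplitude, tau, offset):
    """Single-exponential phosphorescence decay model."""
    return amplitude * np.exp(-t / tau) + offset

# Synthetic signal: lifetime of 25 microseconds plus noise (illustrative)
rng = np.random.default_rng(2)
t = np.linspace(0, 200e-6, 400)
signal = decay(t, 1.0, 25e-6, 0.02) + 0.01 * rng.standard_normal(t.size)

popt, _ = curve_fit(decay, t, signal, p0=(1.0, 10e-6, 0.0))
tau_fit = popt[1]

# Hypothetical calibration: decay time vs temperature, measured beforehand
cal_tau = np.array([40e-6, 30e-6, 20e-6, 10e-6])   # s
cal_T = np.array([300.0, 400.0, 500.0, 600.0])     # K
temperature = np.interp(tau_fit, cal_tau[::-1], cal_T[::-1])

print(f"fitted decay time: {tau_fit*1e6:.1f} microseconds")
print(f"temperature from calibration: {temperature:.0f} K")
```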
The micrometre-sized phosphor particles used in thermographic PIV are seeded into the flow as a tracer and, after illumination with a thin laser light sheet, the temperature of the particles can be measured from the phosphorescence, normally using an intensity ratio technique. It is important that the particles are of small size so that they not only follow the flow satisfactorily but also rapidly assume its temperature. For a diameter of 2 μm, the thermal slip between particle and gas is as small as the velocity slip.
Illumination of the phosphor is achieved using UV light. Most thermographic phosphors absorb light in a broad band in the UV and therefore can be excited using an Nd:YAG laser. Theoretically, the same light can be used both for PIV and temperature measurements, but this would mean that UV-sensitive cameras are needed. In practice, two different beams originating from separate lasers are overlapped. While one of the beams is used for velocity measurements, the other is used to measure the temperature.
The use of thermographic phosphors offers some advantageous features including ability to survive in reactive and high temperature environments, chemical stability and insensitivity of their phosphorescence emission to pressure and gas composition. In addition, thermographic phosphors emit light at different wavelengths, allowing spectral discrimination against excitation light and background.
Thermographic PIV has been demonstrated for time-averaged and single-shot measurements. Recently, time-resolved high-speed (3 kHz) measurements have also been successfully performed.
Artificial Intelligence PIV
With the development of artificial intelligence, there have been scientific publications and commercial software proposing PIV calculations based on deep learning and convolutional neural networks. The methodology used stems mainly from optical flow neural networks popular in machine vision. A data set that includes particle images is generated to train the parameters of the networks. The result is a deep neural network for PIV which can provide estimation of dense motion, up to one vector per pixel if the recorded images allow. AI PIV promises a dense velocity field, not limited by the size of the interrogation window, which limits traditional PIV to one vector per 16 × 16 pixels.
Real time processing and applications of PIV
With the advance of digital technologies, real-time processing and applications of PIV have become possible. For instance, GPUs can be used to substantially speed up the direct or Fourier-transform-based correlation of single interrogation windows. Similarly, multi-processing, parallel or multi-threading approaches on several CPUs or multi-core CPUs are beneficial for the distributed processing of multiple interrogation windows or multiple images. Some applications use real-time image processing methods, such as FPGA-based on-the-fly image compression or image processing. More recently, real-time PIV measurement and processing capabilities have been implemented for future use in active flow control with flow-based feedback.
Applications
PIV has been applied to a wide range of flow problems, varying from the flow over an aircraft wing in a wind tunnel to vortex formation in prosthetic heart valves. 3-dimensional techniques have been sought to analyze turbulent flow and jets.
Rudimentary PIV algorithms based on cross-correlation can be implemented in a matter of hours, while more sophisticated algorithms may require a significant investment of time. Several open source implementations are available. Application of PIV in the US education system has been limited due to high price and safety concerns of industrial research grade PIV systems.
Granular PIV: velocity measurement in granular flows and avalanches
PIV can also be used to measure the velocity field of the free surface and basal boundary in granular flows such as those in shaken containers, tumblers and avalanches.
This analysis is particularly well-suited for nontransparent media such as sand, gravel, quartz, or other granular materials that are common in geophysics. This PIV approach is called "granular PIV". The set-up for granular PIV differs from the usual PIV setup in that the optical surface structure which is produced by illumination of the surface of the granular flow is already sufficient to detect the motion. This means one does not need to add tracer particles in the bulk material.
See also
Digital image correlation
Hot-wire anemometry
Laser Doppler velocimetry
Molecular tagging velocimetry
Particle tracking velocimetry
Notes
References
Katz, J.; Sheng, J. (2010). "Applications of Holography in Fluid Mechanics and Particle Dynamics". Annual Review of Fluid Mechanics. 42: 531–555. doi:10.1146/annurev-fluid-121108-145508.
Bibliography
External links
PIV research at the Laboratory for Experimental Fluid Dynamics (J. Katz lab)
Burgers' equation
Burgers' equation or Bateman–Burgers equation is a fundamental partial differential equation and convection–diffusion equation occurring in various areas of applied mathematics, such as fluid mechanics, nonlinear acoustics, gas dynamics, and traffic flow. The equation was first introduced by Harry Bateman in 1915 and later studied by Johannes Martinus Burgers in 1948. For a given field u(x, t) and diffusion coefficient (or kinematic viscosity, as in the original fluid mechanical context) ν, the general form of Burgers' equation (also known as viscous Burgers' equation) in one space dimension is the dissipative system:

∂u/∂t + u ∂u/∂x = ν ∂²u/∂x².
The convection term u ∂u/∂x can also be rewritten as ∂(u²/2)/∂x. When the diffusion term is absent (i.e. ν = 0), Burgers' equation becomes the inviscid Burgers' equation:

∂u/∂t + u ∂u/∂x = 0,
which is a prototype for conservation equations that can develop discontinuities (shock waves).
The reason for the formation of sharp gradients for small values of ν becomes intuitively clear when one examines the left-hand side of the equation. The term ∂u/∂t + u ∂u/∂x is evidently a wave operator describing a wave propagating in the positive x-direction with a speed u. Since the wave speed is u, regions exhibiting large values of u will be propagated rightwards quicker than regions exhibiting smaller values of u; in other words, if u is initially decreasing in the x-direction, then larger u's that lie on the back side will catch up with smaller u's on the front side. The role of the diffusive term on the right-hand side is essentially to stop the gradient becoming infinite.
Inviscid Burgers' equation
The inviscid Burgers' equation is a conservation equation, more generally a first order quasilinear hyperbolic equation. The solution to the equation, together with the initial condition u(x, 0) = f(x), can be constructed by the method of characteristics. Let t be the parameter characterising any given characteristic in the x-t plane; then the characteristic equations are given by

dx/dt = u,   du/dt = 0.

Integration of the second equation tells us that u is constant along the characteristic, and integration of the first equation shows that the characteristics are straight lines, i.e.,

x = ut + ξ,

where ξ is the point (or parameter) on the x-axis (t = 0) of the x-t plane from which the characteristic curve is drawn. Since u at the x-axis is known from the initial condition, and u is unchanged as we move along the characteristic emanating from each point x = ξ, we write u = f(ξ) on each characteristic. Therefore, the family of trajectories of characteristics parametrized by ξ is

x = f(ξ)t + ξ.

Thus, the solution is given by

u(x, t) = f(ξ) = f(x − ut).

This is an implicit relation that determines the solution of the inviscid Burgers' equation provided characteristics don't intersect. If the characteristics do intersect, a classical solution to the PDE does not exist, and this leads to the formation of a shock wave. Whether characteristics can intersect or not depends on the initial condition. In fact, the breaking time before a shock wave can be formed is given by

t_b = −1 / min_ξ f′(ξ),

which is finite only if f′(ξ) takes negative values somewhere.
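A short numerical sketch of these formulas: for an assumed smooth initial profile, propagate the characteristics x = ξ + f(ξ)t and estimate the breaking time from the minimum of f′.

```python
import numpy as np

def f(xi):
    """Smooth initial profile u(x, 0) = f(x) (an assumed example)."""
    return np.exp(-xi**2)

xi = np.linspace(-4.0, 4.0, 2001)

# Breaking time: characteristics first cross at t_b = -1 / min f'(xi)
df = np.gradient(f(xi), xi)
t_break = -1.0 / df.min()
print(f"estimated breaking time: {t_break:.3f}")

# Before t_break the map xi -> x = xi + f(xi) t is monotone, so the
# characteristic solution u(x, t) = f(xi) is still single-valued.
t = 0.5 * t_break
x_char = xi + f(xi) * t
print("solution single-valued at t = 0.5 t_b:", bool(np.all(np.diff(x_char) > 0)))
```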
Complete integral of the inviscid Burgers' equation
The implicit solution described above, containing an arbitrary function, is called the general integral. However, the inviscid Burgers' equation, being a first-order partial differential equation, also has a complete integral which contains two arbitrary constants (for the two independent variables). Subrahmanyan Chandrasekhar provided the complete integral in 1943, which is given by

u(x, t) = (ax + b) / (at + 1),

where a and b are arbitrary constants. The complete integral satisfies a linear initial condition, i.e., u(x, 0) = ax + b. One can also construct the general integral using the above complete integral.
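The complete integral quoted above can be checked symbolically; the sketch below verifies with sympy that it satisfies the inviscid equation and reduces to a linear profile at t = 0.

```python
import sympy as sp

x, t, a, b = sp.symbols('x t a b')
u = (a * x + b) / (a * t + 1)

residual = sp.diff(u, t) + u * sp.diff(u, x)
print(sp.simplify(residual))        # 0      -> satisfies u_t + u u_x = 0
print(sp.simplify(u.subs(t, 0)))    # a*x + b -> linear initial condition
```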
Viscous Burgers' equation
The viscous Burgers' equation can be converted to a linear equation by the Cole–Hopf transformation

u = −2ν (1/φ) ∂φ/∂x,

which turns it into the equation

∂/∂x [ (1/φ) (∂φ/∂t − ν ∂²φ/∂x²) ] = 0,

which can be integrated with respect to x to obtain

∂φ/∂t − ν ∂²φ/∂x² = φ g(t),

where g(t) is an arbitrary function of time. Introducing the transformation φ → φ exp(−∫ g dt) (which does not affect the function u), the required equation reduces to that of the heat equation

∂φ/∂t = ν ∂²φ/∂x².

The diffusion equation can be solved. That is, if φ(x, 0) = φ_0(x), then

φ(x, t) = (4πνt)^(−1/2) ∫ φ_0(x′) exp[−(x − x′)²/(4νt)] dx′.

The initial function φ_0(x) is related to the initial function u(x, 0) = f(x) by

φ_0(x) = exp[ −(1/(2ν)) ∫_0^x f(x′) dx′ ],

where the lower limit of the integral is chosen arbitrarily. Inverting the Cole–Hopf transformation, we have

u(x, t) = −2ν ∂/∂x ln{ (4πνt)^(−1/2) ∫ exp[ −(x − x′)²/(4νt) − (1/(2ν)) ∫_0^{x′} f(x″) dx″ ] dx′ },

which simplifies, by getting rid of the time-dependent prefactor in the argument of the logarithm, to

u(x, t) = −2ν ∂/∂x ln{ ∫ exp[ −(x − x′)²/(4νt) − (1/(2ν)) ∫_0^{x′} f(x″) dx″ ] dx′ }.

This solution is derived from the solution of the heat equation for φ that decays to zero as x → ±∞; other solutions for u can be obtained starting from solutions of φ that satisfy different boundary conditions.
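Where evaluating these integrals is inconvenient, the viscous equation can also be integrated numerically. The sketch below uses a simple explicit finite-difference scheme (upwind convection, central diffusion) on a periodic domain; the scheme, grid and time step are illustrative choices, not a recommended production method.

```python
import numpy as np

# Explicit finite-difference sketch for u_t + u u_x = nu u_xx (periodic domain).
nu = 0.05
nx, L = 400, 2 * np.pi
dx = L / nx
x = np.arange(nx) * dx
u = np.sin(x) + 0.5                      # smooth initial condition (assumed)

dt = 0.2 * min(dx / np.abs(u).max(), dx**2 / (2 * nu))   # conservative step
t_end, t = 1.0, 0.0
while t < t_end:
    up = np.roll(u, -1)    # u[i+1]
    um = np.roll(u, 1)     # u[i-1]
    conv = np.where(u > 0, u * (u - um) / dx, u * (up - u) / dx)  # upwind
    diff = nu * (up - 2 * u + um) / dx**2
    u = u + dt * (diff - conv)
    t += dt

print("max |u| after integration:", np.abs(u).max())
```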
Some explicit solutions of the viscous Burgers' equation
Explicit expressions for the viscous Burgers' equation are available. Some of the physically relevant solutions are given below:
Steadily propagating traveling wave
If f(x) is such that f(−∞) = u_L and f(+∞) = u_R with u_L > u_R, then we have a traveling-wave solution (with a constant speed c = (u_L + u_R)/2) given by

u(x, t) = c − ((u_L − u_R)/2) tanh[ (u_L − u_R)(x − ct)/(4ν) ].

This solution, which was originally derived by Harry Bateman in 1915, is used to describe the variation of pressure across a weak shock wave. As x − ct → −∞ the solution tends to u_L, as x − ct → +∞ it tends to u_R, and the thickness of the transition layer is proportional to ν.
Delta function as an initial condition
If u(x, 0) = 2νRe δ(x), where Re (say, the Reynolds number) is a constant, then we have

u(x, t) = √(ν/(πt)) (e^Re − 1) exp[−x²/(4νt)] / { 1 + ((e^Re − 1)/2) erfc[x/√(4νt)] }.

In the limit Re → 0, the limiting behaviour is a diffusional spreading of a source and is therefore given by

u(x, t) = Re √(ν/(πt)) exp[−x²/(4νt)].

On the other hand, in the limit Re → ∞, the solution approaches that of the aforementioned Chandrasekhar's shock-wave solution of the inviscid Burgers' equation and is given by

u(x, t) = x/t for 0 < x < √(4νRe t), and 0 otherwise.

The shock wave location and its speed are given by x_s = √(4νRe t) and v_s = √(νRe/t).
N-wave solution
The N-wave solution comprises a compression wave followed by a rarefaction wave. A solution of this type is given by
where Re_0 may be regarded as the initial Reynolds number at time t = t_0, and Re(t) may be regarded as the time-varying Reynolds number.
Other forms
Multi-dimensional Burgers' equation
In two or more dimensions, the Burgers' equation becomes
One can also extend the equation for the vector field , as in
Generalized Burgers' equation
The generalized Burgers' equation extends the quasilinear convective term to a more general form, i.e.,
where is any arbitrary function of u. The inviscid equation is still a quasilinear hyperbolic equation for and its solution can be constructed using the method of characteristics as before.
Stochastic Burgers' equation
Adding space-time noise , where is a Wiener process, forms a stochastic Burgers' equation
This stochastic PDE is the one-dimensional version of the Kardar–Parisi–Zhang equation in a field upon substituting .
See also
Chaplygin's equation
Conservation equation
Euler–Tricomi equation
Fokker–Planck equation
KdV-Burgers equation
References
External links
Burgers' Equation at EqWorld: The World of Mathematical Equations.
Burgers' Equation at NEQwiki, the nonlinear equations encyclopedia.
Conservation equations
Equations of fluid dynamics
Fluid dynamics
Condensation
Condensation is the change of the state of matter from the gas phase into the liquid phase, and is the reverse of vaporization. The word most often refers to the water cycle. It can also be defined as the change in the state of water vapor to liquid water when in contact with a liquid or solid surface or cloud condensation nuclei within the atmosphere. When the transition happens from the gaseous phase into the solid phase directly, the change is called deposition.
Initiation
Condensation is initiated by the formation of atomic/molecular clusters of that species within its gaseous volume—like rain drop or snow flake formation within clouds—or at the contact between such gaseous phase and a liquid or solid surface. In clouds, this can be catalyzed by water-nucleating proteins, produced by atmospheric microbes, which are capable of binding gaseous or liquid water molecules.
Reversibility scenarios
A few distinct reversibility scenarios emerge here with respect to the nature of the surface.
absorption into the surface of a liquid (either of the same substance or one of its solvents)—is reversible as evaporation.
adsorption (as dew droplets) onto solid surface at pressures and temperatures higher than the species' triple point—also reversible as evaporation.
adsorption onto solid surface (as supplemental layers of solid) at pressures and temperatures lower than the species' triple point—is reversible as sublimation.
Most common scenarios
Condensation commonly occurs when a vapor is cooled and/or compressed to its saturation limit, i.e., when the molecular density in the gas phase reaches its maximal threshold. Vapor cooling and compressing equipment that collects condensed liquids is called a "condenser".
Measurement
Psychrometry measures the rates of condensation through evaporation into the air moisture at various atmospheric pressures and temperatures. Water is the product of its vapor condensation—condensation is the process of such phase conversion.
Applications of condensation
Condensation is a crucial component of distillation, an important laboratory and industrial chemistry application.
Because condensation is a naturally occurring phenomenon, it can often be used to generate water in large quantities for human use. Many structures are made solely for the purpose of collecting water from condensation, such as air wells and fog fences. Such systems can often be used to retain soil moisture in areas where active desertification is occurring—so much so that some organizations educate people living in affected areas about water condensers to help them deal effectively with the situation.
It is also a crucial process in forming particle tracks in a cloud chamber. In this case, ions produced by an incident particle act as nucleation centers for the condensation of the vapor producing the visible "cloud" trails.
Commercial applications of condensation, by consumers as well as industry, include power generation, water desalination, thermal management, refrigeration, and air conditioning.
Biological adaptation
Numerous living beings use water made accessible by condensation. A few examples of these are the Australian thorny devil, the darkling beetles of the Namibian coast, and the coast redwoods of the West Coast of the United States.
Condensation in building construction
Condensation in building construction is an unwanted phenomenon as it may cause dampness, mold health issues, wood rot, corrosion, weakening of mortar and masonry walls, and energy penalties due to increased heat transfer. To alleviate these issues, the indoor air humidity needs to be lowered, or air ventilation in the building needs to be improved. This can be done in a number of ways, for example opening windows, turning on extractor fans, using dehumidifiers, drying clothes outside and covering pots and pans whilst cooking. Air conditioning or ventilation systems can be installed that help remove moisture from the air, and move air throughout a building. The amount of water vapor that can be stored in the air can be increased simply by increasing the temperature. However, this can be a double-edged sword as most condensation in the home occurs when warm, moisture-heavy air comes into contact with a cool surface. As the air is cooled, it can no longer hold as much water vapor. This leads to deposition of water on the cool surface. This is very apparent when central heating is used in combination with single-glazed windows in winter.
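As a rough illustration of why a cool surface triggers condensation, the sketch below estimates the dew point of indoor air with the Magnus approximation; the coefficients (17.27 and 237.7 °C) are commonly quoted values for this approximation and are an assumption here, not part of the text above. Any surface colder than the computed dew point will collect condensate.

```python
import math

def dew_point_celsius(temp_c, rel_humidity_pct, a=17.27, b=237.7):
    """Dew point via the Magnus approximation (illustrative coefficients)."""
    gamma = math.log(rel_humidity_pct / 100.0) + a * temp_c / (b + temp_c)
    return b * gamma / (a - gamma)

# Warm, moist indoor air at 21 degC and 65% relative humidity: any surface below
# roughly 14 degC (e.g. a single-glazed window in winter) will be wetted by condensation.
print(round(dew_point_celsius(21.0, 65.0), 1))
```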
Interstructure condensation may be caused by thermal bridges, or by insufficient or missing insulation, damp proofing, or insulated glazing.
See also
Air well (condenser)
Bose–Einstein condensate
Cloud physics
Condenser (heat transfer)
DNA condensation
Dropwise condensation
Groasis Waterboxx
Kelvin equation
Liquefaction of gases
Phase diagram
Phase transition
Retrograde condensation
Surface condenser
References
Sources
Phase transitions
Orthogenesis
Orthogenesis, also known as orthogenetic evolution, progressive evolution, evolutionary progress, or progressionism, is an obsolete biological hypothesis that organisms have an innate tendency to evolve in a definite direction towards some goal (teleology) due to some internal mechanism or "driving force". According to the theory, the largest-scale trends in evolution have an absolute goal such as increasing biological complexity. Prominent historical figures who have championed some form of evolutionary progress include Jean-Baptiste Lamarck, Pierre Teilhard de Chardin, and Henri Bergson.
The term orthogenesis was introduced by Wilhelm Haacke in 1893 and popularized by Theodor Eimer five years later. Proponents of orthogenesis had rejected the theory of natural selection as the organizing mechanism in evolution for a rectilinear (straight-line) model of directed evolution. With the emergence of the modern synthesis, in which genetics was integrated with evolution, orthogenesis and other alternatives to Darwinism were largely abandoned by biologists, but the notion that evolution represents progress is still widely shared; modern supporters include E. O. Wilson and Simon Conway Morris. The evolutionary biologist Ernst Mayr made the term effectively taboo in the journal Nature in 1948, by stating that it implied "some supernatural force". The American paleontologist George Gaylord Simpson (1953) attacked orthogenesis, linking it with vitalism by describing it as "the mysterious inner force". Despite this, many museum displays and textbook illustrations continue to give the impression that evolution is directed.
The philosopher of biology Michael Ruse notes that in popular culture, evolution and progress are synonyms, while the unintentionally misleading image of the March of Progress, from apes to modern humans, has been widely imitated.
Definition
The term orthogenesis (from Ancient Greek orthós, "straight", and Ancient Greek , "origin") was first used by the biologist Wilhelm Haacke in 1893. Theodor Eimer was the first to give the word a definition; he defined orthogenesis as "the general law according to which evolutionary development takes place in a noticeable direction, above all in specialized groups".
In 1922, the zoologist Michael F. Guyer wrote:
According to Susan R. Schrepfer in 1983:
In 1988, Francisco J. Ayala defined progress as "systematic change in a feature belonging to all the members of a sequence in such a way that posterior members of the sequence exhibit an improvement of that feature". He argued that there are two elements in this definition, directional change and improvement according to some standard. Whether a directional change constitutes an improvement is not a scientific question; therefore Ayala suggested that science should focus on the question of whether there is directional change, without regard to whether the change is "improvement". This may be compared to Stephen Jay Gould's suggestion of "replacing the idea of progress with an operational notion of directionality".
In 1989, Peter J. Bowler defined orthogenesis as:
In 1996, Michael Ruse defined orthogenesis as "the view that evolution has a kind of momentum of its own that carries organisms along certain tracks".
History
Medieval
The possibility of progress is embedded in the mediaeval great chain of being, with a linear sequence of forms from lowest to highest. The concept, indeed, had its roots in Aristotle's biology, from insects that produced only a grub, to fish that laid eggs, and on up to animals with blood and live birth. The medieval chain, as in Ramon Lull's Ladder of Ascent and Descent of the Mind, 1305, added steps or levels above humans, with orders of angels reaching up to God at the top.
Pre-Darwinian
The orthogenesis hypothesis had a significant following in the 19th century when evolutionary mechanisms such as Lamarckism were being proposed. The French zoologist Jean-Baptiste Lamarck (1744–1829) himself accepted the idea, and it had a central role in his theory of inheritance of acquired characteristics, the hypothesized mechanism of which resembled the "mysterious inner force" of orthogenesis. Orthogenesis was particularly accepted by paleontologists who saw in their fossils a directional change, and in invertebrate paleontology thought there was a gradual and constant directional change. Those who accepted orthogenesis in this way, however, did not necessarily accept that the mechanism that drove orthogenesis was teleological (had a definite goal). Charles Darwin himself rarely used the term "evolution" now so commonly used to describe his theory, because the term was strongly associated with orthogenesis, as had been common usage since at least 1647. His grandfather, the physician and polymath Erasmus Darwin, was both progressionist and vitalist, seeing "the whole cosmos [as] a living thing propelled by an internal vital force" towards "greater perfection". Robert Chambers, in his popular anonymously published 1844 book Vestiges of the Natural History of Creation presented a sweeping narrative account of cosmic transmutation, culminating in the evolution of humanity. Chambers included detailed analysis of the fossil record.
With Darwin
Ruse observed that "Progress (sic, his capitalisation) became essentially a nineteenth-century belief. It gave meaning to life—it offered inspiration—after the collapse [with Malthus's pessimism and the shock of the French Revolution] of the foundations of the past."
The Baltic German biologist Karl Ernst von Baer (1792–1876) argued for an orthogenetic force in nature, reasoning in a review of Darwin's 1859 On the Origin of Species that "Forces which are not directed—so-called blind forces—can never produce order."
In 1864, the Swiss anatomist Albert von Kölliker (1817–1905) presented his orthogenetic theory, heterogenesis, arguing for wholly separate lines of descent with no common ancestor.
In 1884, the Swiss botanist Carl Nägeli (1817–1891) proposed a version of orthogenesis involving an "inner perfecting principle". Gregor Mendel died that same year; Nägeli, who proposed that an "idioplasm" transmitted inherited characteristics, dissuaded Mendel from continuing to work on plant genetics. According to Nägeli many evolutionary developments were nonadaptive and variation was internally programmed. Charles Darwin saw this as a serious challenge, replying that "There must be some efficient cause for each slight individual difference", but was unable to provide a specific answer without knowledge of genetics. Further, Darwin was himself somewhat progressionist, believing for example that "Man" was "higher" than the barnacles he studied.
Darwin indeed wrote in his 1859 Origin of Species:
In 1898, after studying butterfly coloration, Theodor Eimer (1843–1898) introduced the term orthogenesis with a widely read book, On Orthogenesis: And the Impotence of Natural Selection in Species Formation. Eimer claimed there were trends in evolution with no adaptive significance that would be difficult to explain by natural selection. To supporters of orthogenesis, in some cases species could be led by such trends to extinction. Eimer linked orthogenesis to neo-Lamarckism in his 1890 book Organic Evolution as the Result of the Inheritance of Acquired Characteristics According to the Laws of Organic Growth. He used examples such as the evolution of the horse to argue that evolution had proceeded in a regular single direction that was difficult to explain by random variation. Gould described Eimer as a materialist who rejected any vitalist or teleological approach to orthogenesis, arguing that Eimer's criticism of natural selection was common amongst many evolutionists of his generation; they were searching for alternative mechanisms, as they had come to believe that natural selection could not create new species.
Nineteenth and twentieth centuries
Numerous versions of orthogenesis (see table) have been proposed. Debate centred on whether such theories were scientific, or whether orthogenesis was inherently vitalistic or essentially theological. For example, biologists such as Maynard M. Metcalf (1914), John Merle Coulter (1915), David Starr Jordan (1920) and Charles B. Lipman (1922) claimed evidence for orthogenesis in bacteria, fish populations and plants. In 1950, the German paleontologist Otto Schindewolf argued that variation tends to move in a predetermined direction. He believed this was purely mechanistic, denying any kind of vitalism, but that evolution occurs due to a periodic cycle of evolutionary processes dictated by factors internal to the organism. In 1964 George Gaylord Simpson argued that orthogenetic theories such as those promulgated by Du Noüy and Sinnott were essentially theology rather than biology.
Though evolution is not progressive, it does sometimes proceed in a linear way, reinforcing characteristics in certain lineages, but such examples are entirely consistent with the modern neo-Darwinian theory of evolution. These examples have sometimes been referred to as orthoselection but are not strictly orthogenetic, and simply appear as linear and constant changes because of environmental and molecular constraints on the direction of change. The term orthoselection was first used by Ludwig Hermann Plate, and was incorporated into the modern synthesis by Julian Huxley and Bernard Rensch.
Recent work has supported the mechanism and existence of mutation biased adaptation, meaning that limited local orthogenesis is now seen as possible.
Theories
For the columns for other philosophies of evolution (i.e., combined theories including any of Lamarckism, Mutationism, Natural selection, and Vitalism), "yes" means that person definitely supports the theory; "no" means explicit opposition to the theory; a blank means the matter is apparently not discussed, not part of the theory.
The various alternatives to Darwinian evolution by natural selection were not necessarily mutually exclusive. The evolutionary philosophy of the American palaeontologist Edward Drinker Cope is a case in point. Cope, a religious man, began his career denying the possibility of evolution. In the 1860s, he accepted that evolution could occur, but, influenced by Agassiz, rejected natural selection. Cope accepted instead the theory of recapitulation of evolutionary history during the growth of the embryo - that ontogeny recapitulates phylogeny, which Agassiz believed showed a divine plan leading straight up to man, in a pattern revealed both in embryology and palaeontology. Cope did not go so far, seeing that evolution created a branching tree of forms, as Darwin had suggested. Each evolutionary step was however non-random: the direction was determined in advance and had a regular pattern (orthogenesis), and steps were not adaptive but part of a divine plan (theistic evolution). This left unanswered the question of why each step should occur, and Cope switched his theory to accommodate functional adaptation for each change. Still rejecting natural selection as the cause of adaptation, Cope turned to Lamarckism to provide the force guiding evolution. Finally, Cope supposed that Lamarckian use and disuse operated by causing a vitalist growth-force substance, "bathmism", to be concentrated in the areas of the body being most intensively used; in turn, it made these areas develop at the expense of the rest. Cope's complex set of beliefs thus assembled five evolutionary philosophies: recapitulationism, orthogenesis, theistic evolution, Lamarckism, and vitalism. Other palaeontologists and field naturalists continued to hold beliefs combining orthogenesis and Lamarckism until the modern synthesis in the 1930s.
Status
In science
The stronger versions of the orthogenetic hypothesis began to lose popularity when it became clear that they were inconsistent with the patterns found by paleontologists in the fossil record, which were non-rectilinear (richly branching) with many complications. The hypothesis was abandoned by mainstream biologists when no mechanism could be found that would account for the process, and the theory of evolution by natural selection came to prevail. The historian of biology Edward J. Larson commented that
The modern synthesis of the 1930s and 1940s, in which the genetic mechanisms of evolution were incorporated, appeared to refute the hypothesis for good. As more was understood about these mechanisms it came to be held that there was no naturalistic way in which the newly discovered mechanism of heredity could be far-sighted or have a memory of past trends. Orthogenesis was seen to lie outside the methodological naturalism of the sciences.
By 1948, the evolutionary biologist Ernst Mayr, as editor of the journal Evolution, made the use of the term orthogenesis taboo: "It might be well to abstain from use of the word 'orthogenesis' .. since so many of the geneticists seem to be of the opinion that the use of the term implies some supernatural force." For these and other reasons, belief in evolutionary progress has remained "a persistent heresy", among evolutionary biologists including E. O. Wilson and Simon Conway Morris, although often denied or veiled. The philosopher of biology Michael Ruse wrote that "some of the most significant of today's evolutionists are progressionists, and that because of this we find (absolute) progressionism alive and well in their work." He argued that progressionism has harmed the status of evolutionary biology as a mature, professional science. Presentations of evolution remain characteristically progressionist, with humans at the top of the "Tower of Time" in the Smithsonian Institution in Washington D.C., while Scientific American magazine could illustrate the history of life leading progressively from mammals to dinosaurs to primates and finally man. Ruse noted that at the popular level, progress and evolution are simply synonyms, as they were in the nineteenth century, though confidence in the value of cultural and technological progress has declined.
The discipline of evolutionary developmental biology, however, is open to an expanded concept of heredity that incorporates the physics of self-organization. With its rise in the late 20th-early 21st centuries, ideas of constraint and preferred directions of morphological change have made a reappearance in evolutionary theory.
In popular culture
In popular culture, progressionist images of evolution are widespread. The historian Jennifer Tucker, writing in The Boston Globe, notes that Thomas Henry Huxley's 1863 illustration comparing the skeletons of apes and humans "has become an iconic and instantly recognizable visual shorthand for evolution." She calls its history extraordinary, saying that it is "one of the most intriguing, and most misleading, drawings in the modern history of science." Nobody, Tucker observes, supposes that the "monkey-to-man" sequence accurately depicts Darwinian evolution. The Origin of Species had only one illustration, a diagram showing that random events create a process of branching evolution, a view that Tucker notes is broadly acceptable to modern biologists. But Huxley's image recalled the great chain of being, implying with the force of a visual image a "logical, evenly paced progression" leading up to Homo sapiens, a view denounced by Stephen Jay Gould in Wonderful Life.
Popular perception, however, had seized upon the idea of linear progress. Edward Linley Sambourne's Man is But a Worm, drawn for Punch's Almanack, mocked the idea of any evolutionary link between humans and animals, with a sequence from chaos to earthworm to apes, primitive men, a Victorian beau, and Darwin in a pose that according to Tucker recalls Michelangelo's figure of Adam in his fresco adorning the ceiling of the Sistine Chapel. This was followed by a flood of variations on the evolution-as-progress theme, including The New Yorker's 1925 "The Rise and Fall of Man", the sequence running from a chimpanzee to Neanderthal man, Socrates, and finally the lawyer William Jennings Bryan who argued for the anti-evolutionist prosecution in the Scopes Trial on the State of Tennessee law limiting the teaching of evolution. Tucker noted that Rudolph Franz Zallinger's 1965 "The Road to Homo Sapiens" fold-out illustration in F. Clark Howell's Early Man, showing a sequence of 14 walking figures ending with modern man, fitted the palaeoanthropological discoveries "not into a branching Darwinian scheme, but into the framework of the original Huxley diagram." Howell ruefully commented that the "powerful and emotional" graphic had overwhelmed his Darwinian text.
Sliding between meanings
Scientists, Ruse argues, continue to slide easily from one notion of progress to another: even committed Darwinians like Richard Dawkins embed the idea of cultural progress in a theory of cultural units, memes, that act much like genes. Dawkins can speak of "progressive rather than random ... trends in evolution". Dawkins and John Krebs deny the "earlier [Darwinian] prejudice" that there is anything "inherently progressive about evolution", but, Ruse argues, the feeling of progress comes from evolutionary arms races which remain in Dawkins's words "by far the most satisfactory explanation for the existence of the advanced and complex machinery that animals and plants possess".
Ruse concludes his detailed analysis of the idea of Progress, meaning a progressionist philosophy, in evolutionary biology by stating that evolutionary thought came out of that philosophy. Before Darwin, Ruse argues, evolution was just a pseudoscience; Darwin made it respectable, but "only as popular science". "There it remained frozen, for nearly another hundred years", until mathematicians such as Fisher provided "both models and status", enabling evolutionary biologists to construct the modern synthesis of the 1930s and 1940s. That made biology a professional science, at the price of ejecting the notion of progress. That, Ruse argues, was a significant cost to "people [biologists] still firmly committed to Progress" as a philosophy.
Facilitated variation
Biology has largely rejected the idea that evolution is guided in any way, but the evolution of some features is indeed facilitated by the genes of the developmental-genetic toolkit studied in evolutionary developmental biology. An example is the development of wing pattern in some species of Heliconius butterfly, which have independently evolved similar patterns. These butterflies are Müllerian mimics of each other, so natural selection is the driving force, but their wing patterns, which arose in separate evolutionary events, are controlled by the same genes.
See also
Adaptive mutation
Convergent evolution (contrastable with orthogenesis, not involving teleology)
Devolution
Directed evolution (in protein engineering)
Directed evolution (transhumanism)
Evolutionism
Evolution of biological complexity
History of evolutionary thought
Structuralism
Teleonomy
Teleological argument
References
Sources
Further reading
Bateson, William (1909). "Heredity and variation in modern lights", in Darwin and Modern Science (A.C. Seward ed.) Cambridge University Press. Chapter V.
Dennett, Daniel (1995). Darwin's Dangerous Idea. Simon & Schuster. .
Huxley, Julian (1942). Evolution: The Modern Synthesis, London: George Allen and Unwin.
Simpson, George G. (1957). Life Of The Past: Introduction to Paleontology. Yale University Press, p. 119.
Wilkins, John (1997). "What is macroevolution?" 13 October 2004.
External links
What our most famous evolutionary cartoon gets wrong
Non-Darwinian evolution
History of evolutionary biology
Teleology
Vitalism
Obsolete biology theories
Hooke's law
In physics, Hooke's law is an empirical law which states that the force needed to extend or compress a spring by some distance scales linearly with respect to that distance—that is, where is a constant factor characteristic of the spring (i.e., its stiffness), and is small compared to the total possible deformation of the spring. The law is named after 17th-century British physicist Robert Hooke. He first stated the law in 1676 as a Latin anagram. He published the solution of his anagram in 1678 as: ("as the extension, so the force" or "the extension is proportional to the force"). Hooke states in the 1678 work that he had been aware of the law since 1660.
Hooke's equation holds (to some extent) in many other situations where an elastic body is deformed, such as wind blowing on a tall building, and a musician plucking a string of a guitar. An elastic body or material for which this equation can be assumed is said to be linear-elastic or Hookean.
Hooke's law is only a first-order linear approximation to the real response of springs and other elastic bodies to applied forces. It must eventually fail once the forces exceed some limit, since no material can be compressed beyond a certain minimum size, or stretched beyond a maximum size, without some permanent deformation or change of state. Many materials will noticeably deviate from Hooke's law well before those elastic limits are reached.
On the other hand, Hooke's law is an accurate approximation for most solid bodies, as long as the forces and deformations are small enough. For this reason, Hooke's law is extensively used in all branches of science and engineering, and is the foundation of many disciplines such as seismology, molecular mechanics and acoustics. It is also the fundamental principle behind the spring scale, the manometer, the galvanometer, and the balance wheel of the mechanical clock.
The modern theory of elasticity generalizes Hooke's law to say that the strain (deformation) of an elastic object or material is proportional to the stress applied to it. However, since general stresses and strains may have multiple independent components, the "proportionality factor" may no longer be just a single real number, but rather a linear map (a tensor) that can be represented by a matrix of real numbers.
In this general form, Hooke's law makes it possible to deduce the relation between strain and stress for complex objects in terms of intrinsic properties of the materials they are made of. For example, one can deduce that a homogeneous rod with uniform cross section will behave like a simple spring when stretched, with a stiffness directly proportional to its cross-section area and inversely proportional to its length.
Formal definition
Linear springs
Consider a simple helical spring that has one end attached to some fixed object, while the free end is being pulled by a force whose magnitude is . Suppose that the spring has reached a state of equilibrium, where its length is not changing anymore. Let be the amount by which the free end of the spring was displaced from its "relaxed" position (when it is not being stretched). Hooke's law states that or, equivalently,
where is a positive real number, characteristic of the spring. A spring with spaces between the coils can be compressed, and the same formula holds for compression, with and both negative in that case.
According to this formula, the graph of the applied force as a function of the displacement will be a straight line passing through the origin, whose slope is .
Hooke's law for a spring is also stated under the convention that is the restoring force exerted by the spring on whatever is pulling its free end. In that case, the equation becomes since the direction of the restoring force is opposite to that of the displacement.
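A minimal sketch of the two sign conventions follows; the numbers and function names are illustrative and not taken from the text.

```python
def applied_force(k, x):
    """Force needed to hold the spring at displacement x (F = k x)."""
    return k * x

def restoring_force(k, x):
    """Force the spring exerts back on whatever displaced it (F = -k x)."""
    return -k * x

k = 250.0   # spring constant in N/m
x = 0.04    # displacement in m
print(applied_force(k, x), restoring_force(k, x))   # 10.0 N applied, -10.0 N restoring
```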
Torsional springs
The torsional analog of Hooke's law applies to torsional springs. It states that the torque (τ) required to rotate an object is directly proportional to the angular displacement (θ) from the equilibrium position. It describes the relationship between the torque applied to an object and the resulting angular deformation due to torsion. Mathematically, it can be expressed as:
Where:
τ is the torque, measured in newton-meters (N·m).
k is the torsional constant (measured in N·m/radian), which characterizes the stiffness of the torsional spring or the resistance to angular displacement.
θ is the angular displacement (measured in radians) from the equilibrium position.
Just as in the linear case, this law shows that the torque is proportional to the angular displacement, and the negative sign indicates that the torque acts in a direction opposite to the angular displacement, providing a restoring force to bring the system back to equilibrium.
General "scalar" springs
Hooke's spring law usually applies to any elastic object, of arbitrary complexity, as long as both the deformation and the stress can be expressed by a single number that can be both positive and negative.
For example, when a block of rubber attached to two parallel plates is deformed by shearing, rather than stretching or compression, the shearing force and the sideways displacement of the plates obey Hooke's law (for small enough deformations).
Hooke's law also applies when a straight steel bar or concrete beam (like the one used in buildings), supported at both ends, is bent by a weight placed at some intermediate point. The displacement in this case is the deviation of the beam, measured in the transversal direction, relative to its unloaded shape.
Vector formulation
In the case of a helical spring that is stretched or compressed along its axis, the applied (or restoring) force and the resulting elongation or compression have the same direction (which is the direction of said axis). Therefore, if and are defined as vectors, Hooke's equation still holds and says that the force vector is the elongation vector multiplied by a fixed scalar.
General tensor form
Some elastic bodies will deform in one direction when subjected to a force with a different direction. One example is a horizontal wood beam with non-square rectangular cross section that is bent by a transverse load that is neither vertical nor horizontal. In such cases, the magnitude of the displacement will be proportional to the magnitude of the force , as long as the direction of the latter remains the same (and its value is not too large); so the scalar version of Hooke's law will hold. However, the force and displacement vectors will not be scalar multiples of each other, since they have different directions. Moreover, the ratio between their magnitudes will depend on the direction of the vector .
Yet, in such cases there is often a fixed linear relation between the force and deformation vectors, as long as they are small enough. Namely, there is a function from vectors to vectors, such that , and for any real numbers , and any displacement vectors , . Such a function is called a (second-order) tensor.
With respect to an arbitrary Cartesian coordinate system, the force and displacement vectors can be represented by 3 × 1 matrices of real numbers. Then the tensor connecting them can be represented by a 3 × 3 matrix of real coefficients, that, when multiplied by the displacement vector, gives the force vector:
That is, for . Therefore, Hooke's law can be said to hold also when and are vectors with variable directions, except that the stiffness of the object is a tensor , rather than a single real number .
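The sketch below illustrates this tensor relation for a hypothetical anisotropic body: the 3 × 3 stiffness matrix and the displacement are made-up numbers, chosen only to show that the resulting force need not be parallel to the displacement.

```python
import numpy as np

# Hypothetical symmetric stiffness tensor (N/m) relating displacement to force.
kappa = np.array([[400.0,  50.0,   0.0],
                  [ 50.0, 300.0,  20.0],
                  [  0.0,  20.0, 150.0]])

x = np.array([0.01, 0.0, 0.0])   # displacement purely along the first axis (m)
F = kappa @ x                    # resulting force vector (N)

print(F)                         # [4.0, 0.5, 0.0]: not parallel to x
```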
Hooke's law for continuous media
The stresses and strains of the material inside a continuous elastic material (such as a block of rubber, the wall of a boiler, or a steel bar) are connected by a linear relationship that is mathematically similar to Hooke's spring law, and is often referred to by that name.
However, the strain state in a solid medium around some point cannot be described by a single vector. The same parcel of material, no matter how small, can be compressed, stretched, and sheared at the same time, along different directions. Likewise, the stresses in that parcel can be at once pushing, pulling, and shearing.
In order to capture this complexity, the relevant state of the medium around a point must be represented by two-second-order tensors, the strain tensor (in lieu of the displacement ) and the stress tensor (replacing the restoring force ). The analogue of Hooke's spring law for continuous media is then where is a fourth-order tensor (that is, a linear map between second-order tensors) usually called the stiffness tensor or elasticity tensor. One may also write it as where the tensor , called the compliance tensor, represents the inverse of said linear map.
In a Cartesian coordinate system, the stress and strain tensors can be represented by 3 × 3 matrices
Being a linear mapping between the nine numbers and the nine numbers , the stiffness tensor is represented by a matrix of real numbers . Hooke's law then says that
where .
All three tensors generally vary from point to point inside the medium, and may vary with time as well. The strain tensor merely specifies the displacement of the medium particles in the neighborhood of the point, while the stress tensor specifies the forces that neighboring parcels of the medium are exerting on each other. Therefore, they are independent of the composition and physical state of the material. The stiffness tensor , on the other hand, is a property of the material, and often depends on physical state variables such as temperature, pressure, and microstructure.
Due to the inherent symmetries of , , and , only 21 elastic coefficients of the latter are independent. This number can be further reduced by the symmetry of the material: 9 for an orthorhombic crystal, 5 for an hexagonal structure, and 3 for a cubic symmetry. For isotropic media (which have the same physical properties in any direction), can be reduced to only two independent numbers, the bulk modulus and the shear modulus , that quantify the material's resistance to changes in volume and to shearing deformations, respectively.
Analogous laws
Since Hooke's law is a simple proportionality between two quantities, its formulas and consequences are mathematically similar to those of many other physical laws, such as those describing the motion of fluids, or the polarization of a dielectric by an electric field.
In particular, the tensor equation relating elastic stresses to strains is entirely similar to the equation relating the viscous stress tensor and the strain rate tensor in flows of viscous fluids; although the former pertains to static stresses (related to amount of deformation) while the latter pertains to dynamical stresses (related to the rate of deformation).
Units of measurement
In SI units, displacements are measured in meters (m), and forces in newtons (N or kg·m/s²). Therefore, the spring constant , and each element of the tensor , is measured in newtons per meter (N/m), or kilograms per second squared (kg/s²).
For continuous media, each element of the stress tensor is a force divided by an area; it is therefore measured in units of pressure, namely pascals (Pa, or N/m², or kg/(m·s²)). The elements of the strain tensor are dimensionless (displacements divided by distances). Therefore, the entries of are also expressed in units of pressure.
General application to elastic materials
Objects that quickly regain their original shape after being deformed by a force, with the molecules or atoms of their material returning to the initial state of stable equilibrium, often obey Hooke's law.
Hooke's law only holds for some materials under certain loading conditions. Steel exhibits linear-elastic behavior in most engineering applications; Hooke's law is valid for it throughout its elastic range (i.e., for stresses below the yield strength). For some other materials, such as aluminium, Hooke's law is only valid for a portion of the elastic range. For these materials a proportional limit stress is defined, below which the errors associated with the linear approximation are negligible.
Rubber is generally regarded as a "non-Hookean" material because its elasticity is stress dependent and sensitive to temperature and loading rate.
Generalizations of Hooke's law for the case of large deformations are provided by models of neo-Hookean solids and Mooney–Rivlin solids.
Derived formulae
Tensional stress of a uniform bar
A rod of any elastic material may be viewed as a linear spring. The rod has length and cross-sectional area . Its tensile stress is linearly proportional to its fractional extension or strain by the modulus of elasticity :
The modulus of elasticity may often be considered constant. In turn,
(that is, the fractional change in length), and since
it follows that:
The change in length may be expressed as
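A short numerical sketch of these relations is given below, assuming the standard result that treating the rod as a spring of stiffness k = AE/L gives an extension ΔL = FL/(AE); the rod dimensions and load are illustrative assumptions.

```python
import math

def bar_extension(force, length, area, youngs_modulus):
    """Extension of a uniform rod treated as a linear spring:
    stress = F/A, strain = stress/E, so dL = F*L/(A*E) and k = A*E/L."""
    return force * length / (area * youngs_modulus)

# Illustrative steel rod: 2 m long, 10 mm diameter, E ~ 200 GPa, 10 kN axial load.
area = math.pi * (0.010 / 2.0) ** 2
print(round(bar_extension(10e3, 2.0, area, 200e9) * 1000.0, 2), "mm")   # about 1.27 mm
```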
Spring energy
The potential energy stored in a spring is given by which comes from adding up the energy it takes to incrementally compress the spring. That is, the integral of force over displacement. Since the external force has the same general direction as the displacement, the potential energy of a spring is always non-negative. Substituting gives
This potential can be visualized as a parabola on the -plane such that . As the spring is stretched in the positive -direction, the potential energy increases parabolically (the same thing happens as the spring is compressed). Since the change in potential energy changes at a constant rate:
Note that the rate of change of the rate of change of (i.e., its second derivative with respect to the displacement) is constant even when the displacement and acceleration are zero.
Relaxed force constants (generalized compliance constants)
Relaxed force constants (the inverse of generalized compliance constants) are uniquely defined for molecular systems, in contradistinction to the usual "rigid" force constants, and thus their use allows meaningful correlations to be made between force fields calculated for reactants, transition states, and products of a chemical reaction. Just as the potential energy can be written as a quadratic form in the internal coordinates, so it can also be written in terms of generalized forces. The resulting coefficients are termed compliance constants. A direct method exists for calculating the compliance constant for any internal coordinate of a molecule, without the need to do the normal mode analysis. The suitability of relaxed force constants (inverse compliance constants) as covalent bond strength descriptors was demonstrated as early as 1980. Recently, the suitability as non-covalent bond strength descriptors was demonstrated too.
Harmonic oscillator
A mass attached to the end of a spring is a classic example of a harmonic oscillator. By pulling slightly on the mass and then releasing it, the system will be set in sinusoidal oscillating motion about the equilibrium position. To the extent that the spring obeys Hooke's law, and that one can neglect friction and the mass of the spring, the amplitude of the oscillation will remain constant; and its frequency will be independent of its amplitude, determined only by the mass and the stiffness of the spring:
This phenomenon made possible the construction of accurate mechanical clocks and watches that could be carried on ships and in people's pockets.
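The sketch below evaluates the usual small-oscillation result f = (1/2π)√(k/m) implied by the text for an illustrative mass–spring pair; the numerical values are made up.

```python
import math

def natural_frequency_hz(k, m):
    """Small-oscillation frequency of a mass on a Hookean spring: f = sqrt(k/m) / (2*pi)."""
    return math.sqrt(k / m) / (2.0 * math.pi)

print(round(natural_frequency_hz(k=400.0, m=1.0), 2))   # about 3.18 Hz
```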
Rotation in gravity-free space
If the mass were attached to a spring with force constant and rotating in free space, the spring tension would supply the required centripetal force:
Since and , then:
Given that , this leads to the same frequency equation as above:
Linear elasticity theory for continuous media
Isotropic materials
Isotropic materials are characterized by properties which are independent of direction in space. Physical equations involving isotropic materials must therefore be independent of the coordinate system chosen to represent them. The strain tensor is a symmetric tensor. Since the trace of any tensor is independent of any coordinate system, the most complete coordinate-free decomposition of a symmetric tensor is to represent it as the sum of a constant tensor and a traceless symmetric tensor. Thus in index notation:
where is the Kronecker delta. In direct tensor notation:
where is the second-order identity tensor.
The first term on the right is the constant tensor, also known as the volumetric strain tensor, and the second term is the traceless symmetric tensor, also known as the deviatoric strain tensor or shear tensor.
The most general form of Hooke's law for isotropic materials may now be written as a linear combination of these two tensors:
where is the bulk modulus and is the shear modulus.
Using the relationships between the elastic moduli, these equations may also be expressed in various other ways. A common form of Hooke's law for isotropic materials, expressed in direct tensor notation, is
where and are the Lamé constants, is the second-rank identity tensor, and I is the symmetric part of the fourth-rank identity tensor. In index notation:
The inverse relationship is
Therefore, the compliance tensor in the relation is
In terms of Young's modulus and Poisson's ratio, Hooke's law for isotropic materials can then be expressed as
This is the form in which the strain is expressed in terms of the stress tensor in engineering. The expression in expanded form is
where is Young's modulus and is Poisson's ratio. (See 3-D elasticity).
In matrix form, Hooke's law for isotropic materials can be written as
where is the engineering shear strain. The inverse relation may be written as
which can be simplified thanks to the Lamé constants:
In vector notation this becomes
where is the identity tensor.
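As a concrete sketch of these isotropic relations, the code below converts Young's modulus and Poisson's ratio to the Lamé constants and applies σ = λ tr(ε) I + 2με to a small strain state; the conversion formulas used are the standard isotropic relations, and the numerical values are illustrative.

```python
import numpy as np

def lame_constants(E, nu):
    """Standard isotropic conversions: lambda = E*nu/((1+nu)(1-2nu)), mu = E/(2(1+nu))."""
    lam = E * nu / ((1.0 + nu) * (1.0 - 2.0 * nu))
    mu = E / (2.0 * (1.0 + nu))
    return lam, mu

def isotropic_stress(strain, E, nu):
    """Hooke's law for an isotropic solid: sigma = lambda*tr(eps)*I + 2*mu*eps."""
    lam, mu = lame_constants(E, nu)
    return lam * np.trace(strain) * np.eye(3) + 2.0 * mu * strain

# Illustrative values roughly like steel (E = 200 GPa, nu = 0.3) and a laterally
# constrained uniaxial strain state eps_11 = 1e-3, eps_22 = eps_33 = 0.
eps = np.zeros((3, 3))
eps[0, 0] = 1e-3
sigma = isotropic_stress(eps, E=200e9, nu=0.3)
print(sigma[0, 0] / 1e6, sigma[1, 1] / 1e6)   # axial ~269 MPa, lateral ~115 MPa
```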
Plane stress
Under plane stress conditions, . In that case Hooke's law takes the form
In vector notation this becomes
The inverse relation is usually written in the reduced form
Plane strain
Under plane strain conditions, . In this case Hooke's law takes the form
Anisotropic materials
The symmetry of the Cauchy stress tensor and the generalized Hooke's laws implies that . Similarly, the symmetry of the infinitesimal strain tensor implies that . These symmetries are called the minor symmetries of the stiffness tensor c. This reduces the number of elastic constants from 81 to 36.
If in addition, since the displacement gradient and the Cauchy stress are work conjugate, the stress–strain relation can be derived from a strain energy density functional, then
The arbitrariness of the order of differentiation implies that . These are called the major symmetries of the stiffness tensor. This reduces the number of elastic constants from 36 to 21. The major and minor symmetries indicate that the stiffness tensor has only 21 independent components.
Matrix representation (stiffness tensor)
It is often useful to express the anisotropic form of Hooke's law in matrix notation, also called Voigt notation. To do this we take advantage of the symmetry of the stress and strain tensors and express them as six-dimensional vectors in an orthonormal coordinate system as
Then the stiffness tensor (c) can be expressed as
and Hooke's law is written as
Similarly the compliance tensor (s) can be written as
Change of coordinate system
If a linear elastic material is rotated from a reference configuration to another, then the material is symmetric with respect to the rotation if the components of the stiffness tensor in the rotated configuration are related to the components in the reference configuration by the relation
where are the components of an orthogonal rotation matrix . The same relation also holds for inversions.
In matrix notation, if the transformed basis (rotated or inverted) is related to the reference basis by
then
In addition, if the material is symmetric with respect to the transformation then
Orthotropic materials
Orthotropic materials have three orthogonal planes of symmetry. If the basis vectors are normals to the planes of symmetry then the coordinate transformation relations imply that
The inverse of this relation is commonly written as
where
is the Young's modulus along axis
is the shear modulus in direction on the plane whose normal is in direction
is the Poisson's ratio that corresponds to a contraction in direction when an extension is applied in direction .
Under plane stress conditions, , Hooke's law for an orthotropic material takes the form
The inverse relation is
The transposed form of the above stiffness matrix is also often used.
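The following sketch assembles the Voigt-notation compliance matrix from engineering constants, as described above. It assumes the convention that ν_ij is the contraction in direction j per unit extension in direction i (so symmetry requires ν_ij/E_i = ν_ji/E_j); other index conventions are in use, and the material constants below are illustrative composite-like values, not from the text.

```python
import numpy as np

def orthotropic_compliance(E1, E2, E3, G23, G31, G12, nu12, nu13, nu23):
    """Voigt-notation compliance matrix of an orthotropic solid from engineering
    constants, using engineering shear strains and nu_ij = contraction in j per
    extension in i, so the matrix is symmetric via nu_ji/E_j = nu_ij/E_i."""
    S = np.zeros((6, 6))
    S[0, 0], S[1, 1], S[2, 2] = 1.0 / E1, 1.0 / E2, 1.0 / E3
    S[0, 1] = S[1, 0] = -nu12 / E1
    S[0, 2] = S[2, 0] = -nu13 / E1
    S[1, 2] = S[2, 1] = -nu23 / E2
    S[3, 3], S[4, 4], S[5, 5] = 1.0 / G23, 1.0 / G31, 1.0 / G12
    return S

# Illustrative unidirectional-composite-like constants (Pa and dimensionless).
S = orthotropic_compliance(E1=140e9, E2=10e9, E3=10e9,
                           G23=3e9, G31=5e9, G12=5e9,
                           nu12=0.3, nu13=0.3, nu23=0.4)
C = np.linalg.inv(S)   # stiffness matrix, so that stress = C @ strain in Voigt form
```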
Transversely isotropic materials
A transversely isotropic material is symmetric with respect to a rotation about an axis of symmetry. For such a material, if is the axis of symmetry, Hooke's law can be expressed as
More frequently, the axis is taken to be the axis of symmetry and the inverse Hooke's law is written as
Universal elastic anisotropy index
To grasp the degree of anisotropy of any class, a universal elastic anisotropy index (AU) was formulated. It replaces the Zener ratio, which is suited for cubic crystals.
Thermodynamic basis
Linear deformations of elastic materials can be approximated as adiabatic. Under these conditions and for quasistatic processes the first law of thermodynamics for a deformed body can be expressed as
where is the increase in internal energy and is the work done by external forces. The work can be split into two terms
where is the work done by surface forces while is the work done by body forces. If is a variation of the displacement field in the body, then the two external work terms can be expressed as
where is the surface traction vector, is the body force vector, represents the body and represents its surface. Using the relation between the Cauchy stress and the surface traction, (where is the unit outward normal to ), we have
Converting the surface integral into a volume integral via the divergence theorem gives
Using the symmetry of the Cauchy stress and the identity
we have the following
From the definition of strain and from the equations of equilibrium we have
Hence we can write
and therefore the variation in the internal energy density is given by
An elastic material is defined as one in which the total internal energy is equal to the potential energy of the internal forces (also called the elastic strain energy). Therefore, the internal energy density is a function of the strains, and the variation of the internal energy can be expressed as
Since the variation of strain is arbitrary, the stress–strain relation of an elastic material is given by
For a linear elastic material, the quantity is a linear function of , and can therefore be expressed as
where c is a fourth-rank tensor of material constants, also called the stiffness tensor. We can see why c must be a fourth-rank tensor by noting that, for a linear elastic material,
In index notation
The right-hand side constant requires four indices and is a fourth-rank quantity. We can also see that this quantity must be a tensor because it is a linear transformation that takes the strain tensor to the stress tensor. We can also show that the constant obeys the tensor transformation rules for fourth-rank tensors.
See also
Acoustoelastic effect
Elastic potential energy
Laws of science
List of scientific laws named after people
Quadratic form
Series and parallel springs
Spring system
Simple harmonic motion of a mass on a spring
Sine wave
Solid mechanics
Spring pendulum
Notes
References
Hooke's law - The Feynman Lectures on Physics
Hooke's Law - Classical Mechanics - Physics - MIT OpenCourseWare
External links
JavaScript Applet demonstrating Springs and Hooke's law
JavaScript Applet demonstrating Spring Force
1676 in science
Springs (mechanical)
Elasticity (physics)
Solid mechanics
Structural analysis
Moody chart
In engineering, the Moody chart or Moody diagram (also Stanton diagram) is a graph in non-dimensional form that relates the Darcy–Weisbach friction factor fD, Reynolds number Re, and surface roughness for fully developed flow in a circular pipe. It can be used to predict pressure drop or flow rate down such a pipe.
History
In 1944, Lewis Ferry Moody plotted the Darcy–Weisbach friction factor against Reynolds number Re for various values of relative roughness ε / D. This chart became commonly known as the Moody chart or Moody diagram. It adapts the work of Hunter Rouse, but uses the more practical choice of coordinates employed by R. J. S. Pigott, whose work was based upon an analysis of some 10,000 experiments from various sources. Measurements of fluid flow in artificially roughened pipes by J. Nikuradse were at the time too recent to include in Pigott's chart.
The chart's purpose was to provide a graphical representation of the function of C. F. Colebrook in collaboration with C. M. White, which provided a practical form of transition curve to bridge the transition zone between smooth and rough pipes, the region of incomplete turbulence.
Description
Moody's team used the available data (including that of Nikuradse) to show that fluid flow in rough pipes could be described by four dimensionless quantities: Reynolds number, pressure loss coefficient, diameter ratio of the pipe and the relative roughness of the pipe. They then produced a single plot which showed that all of these collapsed onto a series of lines, now known as the Moody chart. This dimensionless chart is used to work out pressure drop (Pa) (or head loss (m)) and flow rate through pipes. Head loss can be calculated using the Darcy–Weisbach equation in which the Darcy friction factor appears:
Pressure drop can then be evaluated as:
or directly from
where is the density of the fluid, is the average velocity in the pipe, is the friction factor from the Moody chart, is the length of the pipe and is the pipe diameter.
The chart plots Darcy–Weisbach friction factor against Reynolds number Re for a variety of relative roughnesses, the ratio of the mean height of roughness of the pipe to the pipe diameter or .
The Moody chart can be divided into two regimes of flow: laminar and turbulent. For the laminar flow regime (< ~3000), roughness has no discernible effect, and the Darcy–Weisbach friction factor was determined analytically by Poiseuille:
For the turbulent flow regime, the relationship between the friction factor the Reynolds number Re, and the relative roughness is more complex. One model for this relationship is the Colebrook equation (which is an implicit equation in ):
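A small sketch combining the two regimes is given below. It assumes the commonly quoted Colebrook form 1/√f = −2 log₁₀(ε/(3.7D) + 2.51/(Re√f)), solved by fixed-point iteration, and the standard Darcy–Weisbach head loss h = f (L/D) V²/(2g); the laminar/turbulent switch at Re ≈ 3000 follows the text, and the pipe dimensions are illustrative.

```python
import math

def darcy_friction_factor(Re, rel_roughness, iterations=50):
    """Moody-chart friction factor: Poiseuille's 64/Re for laminar flow,
    otherwise the Colebrook equation solved by fixed-point iteration."""
    if Re < 3000.0:
        return 64.0 / Re
    f = 0.02  # initial guess in the usual turbulent range
    for _ in range(iterations):
        f = (-2.0 * math.log10(rel_roughness / 3.7 + 2.51 / (Re * math.sqrt(f)))) ** -2
    return f

def head_loss(f, L, D, V, g=9.81):
    """Darcy-Weisbach head loss: h = f * (L/D) * V^2 / (2*g)."""
    return f * (L / D) * V ** 2 / (2.0 * g)

# Illustrative case: water at 2 m/s in a 100 m long, 50 mm commercial steel pipe
# (absolute roughness about 0.045 mm).
Re, eps_over_D = 1.0e5, 0.045e-3 / 0.050
f = darcy_friction_factor(Re, eps_over_D)
print(round(f, 4), round(head_loss(f, 100.0, 0.050, 2.0), 2))   # f ~ 0.022, h ~ 8.9 m
```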
Fanning friction factor
This formula must not be confused with the Fanning equation, using the Fanning friction factor , equal to one-fourth of the Darcy–Weisbach friction factor . Here the pressure drop is:
References
See also
Friction loss
Darcy friction factor formulae
Fluid dynamics
Hydraulics
Piping
GADGET
GADGET is free software for cosmological N-body/SPH simulations written by Volker Springel at the Max Planck Institute for Astrophysics. The name is an acronym of "GAlaxies with Dark matter and Gas intEracT". It is released under the GNU GPL. It can be used to study, for example, galaxy formation and dark matter.
Description
GADGET computes gravitational forces with a hierarchical tree algorithm (optionally in combination with a particle-mesh scheme for long-range gravitational forces) and represents fluids by means of smoothed-particle hydrodynamics (SPH). The code can be used for studies of isolated systems, or for simulations that include the cosmological expansion of space, both with or without periodic boundary conditions. In all these types of simulations, GADGET follows the evolution of a self-gravitating collisionless N-body system, and allows gas dynamics to be optionally included. Both the force computation and the time stepping of GADGET are fully adaptive, with a dynamic range which is, in principle, unlimited.
GADGET can therefore be used to address a wide array of astrophysically interesting problems, ranging from colliding and merging galaxies, to the formation of large-scale structure in the universe. With the inclusion of additional physical processes such as radiative cooling and heating, GADGET can also be used to study the dynamics of the gaseous intergalactic medium, or to address star formation and its regulation by feedback processes.
History
The first public version (GADGET-1, released in March 2000) was created as part of Volker Springel's PhD project under the supervision of Simon White. Later, the code was continuously improved during postdocs of Volker Springel at the Center for Astrophysics Harvard & Smithsonian and the Max Planck Institute, in collaboration with Simon White and Lars Hernquist.
The second public version (GADGET-2, released in May 2005) contains most of these improvements, except for the numerous physics modules developed for the code that go beyond gravity and ordinary gas-dynamics. The most important changes lie in a new time integration model, a new tree-code module, a new communication scheme for gravitational and SPH forces, a new domain decomposition strategy, a novel SPH formulation based on entropy as independent variable, and finally, in the addition of the TreePM functionality.
See also
Computational physics
Millennium Run
References
External links
GADGET homepage
Free astronomy software
Cosmological simulation
State function
In the thermodynamics of equilibrium, a state function, function of state, or point function for a thermodynamic system is a mathematical function relating several state variables or state quantities (that describe equilibrium states of a system) that depend only on the current equilibrium thermodynamic state of the system (e.g. gas, liquid, solid, crystal, or emulsion), not the path which the system has taken to reach that state. A state function describes equilibrium states of a system, thus also describing the type of system. A state variable is typically a state function so the determination of other state variable values at an equilibrium state also determines the value of the state variable as the state function at that state. The ideal gas law is a good example. In this law, one state variable (e.g., pressure, volume, temperature, or the amount of substance in a gaseous equilibrium system) is a function of other state variables so is regarded as a state function. A state function could also describe the number of a certain type of atoms or molecules in a gaseous, liquid, or solid form in a heterogeneous or homogeneous mixture, or the amount of energy required to create such a system or change the system into a different equilibrium state.
Internal energy, enthalpy, and entropy are examples of state quantities or state functions because they quantitatively describe an equilibrium state of a thermodynamic system, regardless of how the system has arrived in that state. In contrast, mechanical work and heat are process quantities or path functions because their values depend on a specific "transition" (or "path") between two equilibrium states that a system has taken to reach the final equilibrium state. Exchanged heat (in certain discrete amounts) can be associated with changes of state function such as enthalpy. The description of the system heat exchange is done by a state function, and thus enthalpy changes point to an amount of heat. This can also apply to entropy when heat is compared to temperature. The description breaks down for quantities exhibiting hysteresis.
History
It is likely that the term "functions of state" was used in a loose sense during the 1850s and 1860s by those such as Rudolf Clausius, William Rankine, Peter Tait, and William Thomson. By the 1870s, the term had acquired a use of its own. In his 1873 paper "Graphical Methods in the Thermodynamics of Fluids", Willard Gibbs states: "The quantities v, p, t, ε, and η are determined when the state of the body is given, and it may be permitted to call them functions of the state of the body."
Overview
A thermodynamic system is described by a number of thermodynamic parameters (e.g. temperature, volume, or pressure) which are not necessarily independent. The number of parameters needed to describe the system is the dimension of the state space of the system. For example, a monatomic gas with a fixed number of particles is a simple case of a two-dimensional system. Any two-dimensional system is uniquely specified by two parameters. Choosing a different pair of parameters, such as pressure and volume instead of pressure and temperature, creates a different coordinate system in two-dimensional thermodynamic state space but is otherwise equivalent. Pressure and temperature can be used to find volume, pressure and volume can be used to find temperature, and temperature and volume can be used to find pressure. An analogous statement holds for higher-dimensional spaces, as described by the state postulate.
Generally, a state space is defined by an equation of the form F(P, V, T, ...) = 0, where P denotes pressure, T denotes temperature, V denotes volume, and the ellipsis denotes other possible state variables like particle number N and entropy S. If the state space is two-dimensional as in the above example, it can be visualized as a three-dimensional graph (a surface in three-dimensional space). However, the labels of the axes are not unique (since there are more than three state variables in this case), and only two independent variables are necessary to define the state.
When a system changes state continuously, it traces out a "path" in the state space. The path can be specified by noting the values of the state parameters as the system traces out the path, whether as a function of time or a function of some other external variable. For example, having the pressure P(t) and volume V(t) as functions of time from time t₀ to t₁ will specify a path in two-dimensional state space. Any function of time can then be integrated over the path. For example, to calculate the work done by the system from time t₀ to time t₁, calculate

W(t₀, t₁) = ∫ P dV = ∫ P(t) (dV/dt) dt,

where both integrals run from t₀ to t₁. In order to calculate the work W in the above integral, the functions P(t) and V(t) must be known at each time t over the entire path. In contrast, a state function only depends upon the system parameters' values at the endpoints of the path. For example, the following equation can be used to calculate the work plus the integral of V dP over the path:

Φ(t₀, t₁) = ∫ P (dV/dt) dt + ∫ V (dP/dt) dt = ∫ d(PV)/dt dt = P(t₁)V(t₁) − P(t₀)V(t₀).

In the equation, the integrand d(PV)/dt can be expressed as the exact differential of the function P(t)V(t). Therefore, the integral can be expressed as the difference in the value of P(t)V(t) at the end points of the integration. The product PV is therefore a state function of the system.
The notation d will be used for an exact differential. In other words, the integral of dΦ will be equal to Φ(t₁) − Φ(t₀). The symbol δ will be reserved for an inexact differential, which cannot be integrated without full knowledge of the path. For example, δW = P dV will be used to denote an infinitesimal increment of work.
State functions represent quantities or properties of a thermodynamic system, while non-state functions represent a process during which the state functions change. For example, the state function PV is proportional to the internal energy of an ideal gas, but the work W is the amount of energy transferred as the system performs work. Internal energy is identifiable; it is a particular form of energy. Work is the amount of energy that has changed its form or location.
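To make the path dependence concrete, here is a minimal numerical sketch (not part of the original article; the gas, the two paths, and all numerical values are chosen purely for illustration). It integrates W = ∫ P dV for one mole of an ideal gas along two different paths joining the same two equilibrium states and compares the results with the change in the state function PV:

```python
import numpy as np

R = 8.314                 # gas constant, J/(mol*K); one mole assumed throughout
V1, T1 = 0.010, 300.0     # initial equilibrium state: volume (m^3), temperature (K)
V2, T2 = 0.020, 300.0     # final equilibrium state: same temperature, doubled volume
P1, P2 = R * T1 / V1, R * T2 / V2   # pressures fixed by the ideal gas law

def work(pressure_of_V, Va, Vb, n=200_000):
    """Numerically evaluate W = integral of P dV along a path given as P(V) (trapezoidal rule)."""
    V = np.linspace(Va, Vb, n)
    P = pressure_of_V(V)
    return float(np.sum(0.5 * (P[1:] + P[:-1]) * np.diff(V)))

# Path A: isothermal expansion, P(V) = RT/V along the whole path.
W_A = work(lambda V: R * T1 / V, V1, V2)

# Path B: first drop the pressure to P2 at constant volume (no work done),
# then expand from V1 to V2 at the constant pressure P2.
W_B = work(lambda V: np.full_like(V, P2), V1, V2)

print(f"work along path A : {W_A:10.1f} J")                 # path dependent
print(f"work along path B : {W_B:10.1f} J")                 # a different value
print(f"change in P*V     : {P2 * V2 - P1 * V1:10.1f} J")   # fixed by the endpoints alone
```

The two work integrals differ, while P₂V₂ − P₁V₁ is determined by the endpoints alone, which is exactly the sense in which PV is a state function and W is not.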
List of state functions
The following are considered to be state functions in thermodynamics:
Mass
Energy
Enthalpy
Internal energy
Gibbs free energy
Helmholtz free energy
Exergy
Entropy
Pressure
Temperature
Volume
Chemical composition
Pressure altitude
Specific volume or its reciprocal, density
Particle number
See also
Markov property
Conservative vector field
Nonholonomic system
Equation of state
State variable
Notes
References
External links
Thermodynamic properties
Continuum mechanics | 0.769082 | 0.991507 | 0.76255 |
History of physics | Physics is a branch of science whose primary objects of study are matter and energy. Discoveries of physics find applications throughout the natural sciences and in technology. Historically, physics emerged from the scientific revolution of the 17th century, grew rapidly in the 19th century, then was transformed by a series of discoveries in the 20th century. Physics today may be divided loosely into classical physics and modern physics.
Many detailed articles on specific topics are available through the Outline of the history of physics.
Ancient history
Elements of what became physics were drawn primarily from the fields of astronomy, optics, and mechanics, which were methodologically united through the study of geometry. These mathematical disciplines began in antiquity with the Babylonians and with Hellenistic writers such as Archimedes and Ptolemy. Ancient philosophy, meanwhile, included what was called "Physics".
Greek concept
The move towards a rational understanding of nature began at least as early as the Archaic period in Greece (650–480 BCE) with the Pre-Socratic philosophers. The philosopher Thales of Miletus (7th and 6th centuries BCE), dubbed "the Father of Science" for refusing to accept various supernatural, religious or mythological explanations for natural phenomena, proclaimed that every event had a natural cause. Around 580 BCE, Thales suggested that water is the basic element, experimented with the attraction between magnets and rubbed amber, and formulated the first recorded cosmologies. Anaximander, developer of a proto-evolutionary theory, disputed Thales' ideas and proposed that rather than water, a substance called apeiron was the building block of all matter. Around 500 BCE, Heraclitus proposed that the only basic law governing the Universe was the principle of change and that nothing remains in the same state indefinitely. He and his contemporary Parmenides were among the first scholars in ancient physics to contemplate the role of time in the universe, a key concept that is still an issue in modern physics.
During the classical period in Greece (6th, 5th and 4th centuries BCE) and in Hellenistic times, natural philosophy slowly developed into an exciting and contentious field of study. Aristotle (Aristotélēs; 384–322 BCE), a student of Plato, promoted the concept that observation of physical phenomena could ultimately lead to the discovery of the natural laws governing them. Aristotle's writings cover physics, metaphysics, poetry, theater, music, logic, rhetoric, linguistics, politics, government, ethics, biology and zoology. He wrote the first work which refers to that line of study as "Physics" – in the 4th century BCE, Aristotle founded the system known as Aristotelian physics. He attempted to explain ideas such as motion (and gravity) with the theory of four elements. Aristotle believed that all matter was made up of aether, or some combination of four elements: earth, water, air, and fire. According to Aristotle, these four terrestrial elements are capable of inter-transformation and move toward their natural place, so a stone falls downward toward the center of the cosmos, but flames rise upward toward the circumference. Eventually, Aristotelian physics became enormously popular for many centuries in Europe, informing the scientific and scholastic developments of the Middle Ages. It remained the mainstream scientific paradigm in Europe until the time of Galileo Galilei and Isaac Newton.
Early in Classical Greece, knowledge that the Earth is spherical ("round") was common. Around 240 BCE, as the result of a seminal experiment, Eratosthenes (276–194 BCE) accurately estimated its circumference. In contrast to Aristotle's geocentric views, Aristarchus of Samos presented an explicit argument for a heliocentric model of the Solar System, i.e. for placing the Sun, not the Earth, at its centre. Seleucus of Seleucia, a follower of Aristarchus' heliocentric theory, stated that the Earth rotated on its own axis and, in turn, revolved around the Sun. Though the arguments he used were lost, Plutarch stated that Seleucus was the first to prove the heliocentric system through reasoning.
In the 3rd century BCE, the Greek mathematician Archimedes of Syracuse (287–212 BCE) – generally considered to be the greatest mathematician of antiquity and one of the greatest of all time – laid the foundations of hydrostatics and statics and worked out the underlying mathematics of the lever. A leading scientist of classical antiquity, Archimedes also developed elaborate systems of pulleys to move large objects with a minimum of effort. The Archimedes screw underpins modern hydroengineering, and his machines of war helped to hold back the armies of Rome in the Second Punic War. Archimedes even tore apart the arguments of Aristotle and his metaphysics, pointing out that it was impossible to separate mathematics and nature, and proved it by converting mathematical theories into practical inventions. Furthermore, in his work On Floating Bodies, around 250 BCE, Archimedes developed the law of buoyancy, also known as Archimedes' principle. In mathematics, Archimedes used the method of exhaustion to calculate the area under the arc of a parabola with the summation of an infinite series, and gave a remarkably accurate approximation of pi. He also defined the spiral bearing his name, formulae for the volumes of surfaces of revolution and an ingenious system for expressing very large numbers. He also developed the principles of equilibrium states and centers of gravity, ideas that would influence future scholars like Galileo and Newton.
Hipparchus (190–120 BCE), focusing on astronomy and mathematics, used sophisticated geometrical techniques to map the motion of the stars and planets, even predicting the times that Solar eclipses would happen. He added calculations of the distance of the Sun and Moon from the Earth, based upon his improvements to the observational instruments used at that time. Another of the early physicists was Ptolemy (90–168 CE), one of the leading minds during the time of the Roman Empire. Ptolemy was the author of several scientific treatises, at least three of which were of continuing importance to later Islamic and European science. The first is the astronomical treatise now known as the Almagest (in Greek, Ἡ Μεγάλη Σύνταξις, "The Great Treatise", originally Μαθηματικὴ Σύνταξις, "Mathematical Treatise"). The second is the Geography, which is a thorough discussion of the geographic knowledge of the Greco-Roman world.
Much of the accumulated knowledge of the ancient world was lost. Even of the works of the most respected thinkers, few fragments survived. Although Hipparchus wrote at least fourteen books, almost nothing of his direct work survived. Of the 150 reputed Aristotelian works, only 30 exist, and some of those are "little more than lecture notes".
India and China
Important physical and mathematical traditions also existed in ancient Chinese and Indian sciences.
In Indian philosophy, Maharishi Kanada was the first to systematically develop a theory of atomism around 200 BCE, though some authors have assigned him an earlier era in the 6th century BCE. It was further elaborated by the Buddhist atomists Dharmakirti and Dignāga during the 1st millennium CE. Pakudha Kaccayana, a 6th-century BCE Indian philosopher and contemporary of Gautama Buddha, had also propounded ideas about the atomic constitution of the material world. These philosophers believed that the other elements (except ether) were physically palpable and hence comprised minuscule particles of matter. The last minuscule particle of matter that could not be subdivided further was termed Parmanu. These philosophers considered the atom to be indestructible and hence eternal. The Buddhists thought atoms to be minute objects invisible to the naked eye that come into being and vanish in an instant. The Vaisheshika school of philosophers believed that an atom was a mere point in space; it was also the first to depict relations between motion and applied force. Indian theories about the atom are highly abstract and enmeshed in philosophy, as they were based on logic and not on personal experience or experimentation. In Indian astronomy, Aryabhata's Aryabhatiya (499 CE) proposed the Earth's rotation, while Nilakantha Somayaji (1444–1544) of the Kerala school of astronomy and mathematics proposed a semi-heliocentric model resembling the Tychonic system.
The study of magnetism in Ancient China dates back to the 4th century BCE (it is mentioned in the Book of the Devil Valley Master). A main contributor to this field was Shen Kuo (1031–1095), a polymath and statesman who was the first to describe the magnetic-needle compass used for navigation and who established the concept of true north. In optics, Shen Kuo independently developed a camera obscura.
Islamic world
In the 7th to 15th centuries, scientific progress occurred in the Muslim world. Many classic works in the Indian, Assyrian, Sassanian (Persian) and Greek languages, including the works of Aristotle, were translated into Arabic. Important contributions were made by Ibn al-Haytham (965–1040), an Arab or Persian scientist, considered to be a founder of modern optics. Ptolemy and Aristotle theorised that light either shone from the eye to illuminate objects or that "forms" emanated from objects themselves, whereas al-Haytham (known by the Latin name "Alhazen") suggested that light travels to the eye in rays from different points on an object. The works of Ibn al-Haytham and al-Biruni (973–1050), a Persian scientist, eventually passed on to Western Europe where they were studied by scholars such as Roger Bacon and Vitello.
Ibn al-Haytham used controlled experiments in his work on optics, although to what extent his approach differed from Ptolemy's is a matter of debate. Mechanicians in the Arabic tradition, such as Bīrūnī and al-Khazini, developed a sophisticated "science of weight", carrying out measurements of specific weights and volumes.
Ibn Sīnā (980–1037), known as "Avicenna", was a polymath from Bukhara (in present-day Uzbekistan) responsible for important contributions to physics, optics, philosophy and medicine. He published his theory of motion in Book of Healing (1020), where he argued that an impetus is imparted to a projectile by the thrower. He viewed it as persistent, requiring external forces such as air resistance to dissipate it. Ibn Sina made a distinction between 'force' and 'inclination' (called "mayl"), and argued that an object gained mayl when the object is in opposition to its natural motion. He concluded that continuation of motion is attributed to the inclination that is transferred to the object, and that object will be in motion until the mayl is spent. This conception of motion is consistent with Newton's first law of motion, inertia, which states that an object in motion will stay in motion unless it is acted on by an external force. This idea which dissented from the Aristotelian view was later described as "impetus" by John Buridan, who was likely influenced by Ibn Sina's Book of Healing.
Hibat Allah Abu'l-Barakat al-Baghdaadi adopted and modified Ibn Sina's theory on projectile motion. In his Kitab al-Mu'tabar, Abu'l-Barakat stated that the mover imparts a violent inclination (mayl qasri) on the moved and that this diminishes as the moving object distances itself from the mover. He also proposed an explanation of the acceleration of falling bodies by the accumulation of successive increments of power with successive increments of velocity. According to Shlomo Pines, al-Baghdaadi's theory of motion was "the oldest negation of Aristotle's fundamental dynamic law [namely, that a constant force produces a uniform motion], [and is thus an] anticipation in a vague fashion of the fundamental law of classical mechanics [namely, that a force applied continuously produces acceleration]." Jean Buridan and Albert of Saxony later referred to Abu'l-Barakat in explaining that the acceleration of a falling body is a result of its increasing impetus.
Ibn Bajjah (died 1138), known as "Avempace" in Europe, proposed that for every force there is always a reaction force. Ibn Bajjah was a critic of Ptolemy and he worked on creating a new theory of velocity to replace the one theorized by Aristotle. Two later philosophers supported the theories Avempace created, known as Avempacean dynamics: Thomas Aquinas, a Catholic priest, and John Duns Scotus. Galileo went on to adopt Avempace's formula "that the velocity of a given object is the difference of the motive power of that object and the resistance of the medium of motion".
Nasir al-Din al-Tusi (1201–1274), a Persian astronomer and mathematician who died in Baghdad, introduced the Tusi couple. Copernicus later drew heavily on the work of al-Tusi and his students, but without acknowledgment.
Medieval Europe
Awareness of ancient works re-entered the West through translations from Arabic to Latin. Their re-introduction, combined with Judeo-Islamic theological commentaries, had a great influence on Medieval philosophers such as Thomas Aquinas. Scholastic European scholars, who sought to reconcile the philosophy of the ancient classical philosophers with Christian theology, proclaimed Aristotle the greatest thinker of the ancient world. In cases where they did not directly contradict the Bible, Aristotelian physics became the foundation for the physical explanations of the European Churches. Quantification became a core element of medieval physics.
Based on Aristotelian physics, Scholastic physics described things as moving according to their essential nature. Celestial objects were described as moving in circles, because perfect circular motion was considered an innate property of objects that existed in the uncorrupted realm of the celestial spheres. The theory of impetus, the ancestor to the concepts of inertia and momentum, was developed along similar lines by medieval philosophers such as John Philoponus and Jean Buridan. Motions below the lunar sphere were seen as imperfect, and thus could not be expected to exhibit consistent motion. More idealized motion in the "sublunary" realm could only be achieved through artifice, and prior to the 17th century, many did not view artificial experiments as a valid means of learning about the natural world. Physical explanations in the sublunary realm revolved around tendencies. Stones contained the element earth, and earthly objects tended to move in a straight line toward the centre of the earth (and the universe in the Aristotelian geocentric view) unless otherwise prevented from doing so.
Scientific Revolution
During the 16th and 17th centuries, a large advancement of scientific progress known as the Scientific Revolution took place in Europe. Dissatisfaction with older philosophical approaches had begun earlier and had produced other changes in society, such as the Protestant Reformation, but the revolution in science began when natural philosophers began to mount a sustained attack on the Scholastic philosophical programme and supposed that mathematical descriptive schemes adopted from such fields as mechanics and astronomy could actually yield universally valid characterizations of motion and other concepts.
Nicolaus Copernicus
A breakthrough in astronomy was made by Polish astronomer Nicolaus Copernicus (1473–1543) when, in 1543, he gave strong arguments for the heliocentric model of the Solar System, ostensibly as a means to render tables charting planetary motion more accurate and to simplify their production. In heliocentric models of the Solar System, the Earth orbits the Sun along with the other planets, contradicting the system of the Greek-Egyptian astronomer Ptolemy (2nd century CE; see above), which placed the Earth at the center of the Universe and had been accepted for over 1,400 years. The Greek astronomer Aristarchus of Samos had suggested that the Earth revolves around the Sun, but Copernicus's reasoning led to lasting general acceptance of this "revolutionary" idea. Copernicus's book presenting the theory (De revolutionibus orbium coelestium, "On the Revolutions of the Celestial Spheres") was published just before his death in 1543 and, as it is now generally considered to mark the beginning of modern astronomy, is also considered to mark the beginning of the Scientific Revolution. Copernicus's new perspective, along with the accurate observations made by Tycho Brahe, enabled German astronomer Johannes Kepler (1571–1630) to formulate his laws regarding planetary motion that remain in use today.
Galileo Galilei
The Italian mathematician, astronomer, and physicist Galileo Galilei (1564–1642) was a supporter of Copernicanism who made numerous astronomical discoveries, carried out empirical experiments and improved the telescope. As a mathematician, Galileo's role in the university culture of his era was subordinated to the three major topics of study: law, medicine, and theology (which was closely allied to philosophy). Galileo, however, felt that the descriptive content of the technical disciplines warranted philosophical interest, particularly because mathematical analysis of astronomical observations – notably, Copernicus's analysis of the relative motions of the Sun, Earth, Moon, and planets – indicated that philosophers' statements about the nature of the universe could be shown to be in error. Galileo also performed mechanical experiments, insisting that motion itself – regardless of whether it was produced "naturally" or "artificially" (i.e. deliberately) – had universally consistent characteristics that could be described mathematically.
Galileo's early studies at the University of Pisa were in medicine, but he was soon drawn to mathematics and physics. At 19, he discovered (and, subsequently, verified) the isochronal nature of the pendulum when, using his pulse, he timed the oscillations of a swinging lamp in Pisa's cathedral and found that it remained the same for each swing regardless of the swing's amplitude. He soon became known through his invention of a hydrostatic balance and for his treatise on the center of gravity of solid bodies. While teaching at the University of Pisa (1589–92), he initiated his experiments concerning the laws of bodies in motion that brought results so contradictory to the accepted teachings of Aristotle that strong antagonism was aroused. He found that bodies do not fall with velocities proportional to their weights. The story in which Galileo is said to have dropped weights from the Leaning Tower of Pisa is apocryphal, but he did find that the path of a projectile is a parabola and is credited with conclusions that anticipated Newton's laws of motion (e.g. the notion of inertia). Among these is what is now called Galilean relativity, the first precisely formulated statement about properties of space and time outside three-dimensional geometry.
Galileo has been called the "father of modern observational astronomy", the "father of modern physics", the "father of science", and "the father of modern science". According to Stephen Hawking, "Galileo, perhaps more than any other single person, was responsible for the birth of modern science." As religious orthodoxy decreed a geocentric or Tychonic understanding of the Solar system, Galileo's support for heliocentrism provoked controversy and he was tried by the Inquisition. Found "vehemently suspect of heresy", he was forced to recant and spent the rest of his life under house arrest.
The contributions that Galileo made to observational astronomy include the telescopic confirmation of the phases of Venus; his discovery, in 1609, of Jupiter's four largest moons (subsequently given the collective name of the "Galilean moons"); and the observation and analysis of sunspots. Galileo also pursued applied science and technology, inventing, among other instruments, a military compass. His discovery of the Jovian moons was published in 1610 and enabled him to obtain the position of mathematician and philosopher to the Medici court. As such, he was expected to engage in debates with philosophers in the Aristotelian tradition and received a large audience for his own publications such as the Discourses and Mathematical Demonstrations Concerning Two New Sciences (published abroad following his arrest for the publication of Dialogue Concerning the Two Chief World Systems) and The Assayer. Galileo's interest in experimenting with and formulating mathematical descriptions of motion established experimentation as an integral part of natural philosophy. This tradition, combining with the non-mathematical emphasis on the collection of "experimental histories" by philosophical reformists such as William Gilbert and Francis Bacon, drew a significant following in the years leading up to and following Galileo's death, including Evangelista Torricelli and the participants in the Accademia del Cimento in Italy; Marin Mersenne and Blaise Pascal in France; Christiaan Huygens in the Netherlands; and Robert Hooke and Robert Boyle in England.
René Descartes
The French philosopher René Descartes (1596–1650) was well-connected to, and influential within, the experimental philosophy networks of the day. Descartes had a more ambitious agenda, however, which was geared toward replacing the Scholastic philosophical tradition altogether. Questioning the reality interpreted through the senses, Descartes sought to re-establish philosophical explanatory schemes by reducing all perceived phenomena to being attributable to the motion of an invisible sea of "corpuscles". (Notably, he reserved human thought and God from his scheme, holding these to be separate from the physical universe). In proposing this philosophical framework, Descartes supposed that different kinds of motion, such as that of planets versus that of terrestrial objects, were not fundamentally different, but were merely different manifestations of an endless chain of corpuscular motions obeying universal principles. Particularly influential were his explanations for circular astronomical motions in terms of the vortex motion of corpuscles in space (Descartes argued, in accord with the beliefs, if not the methods, of the Scholastics, that a vacuum could not exist), and his explanation of gravity in terms of corpuscles pushing objects downward.
Descartes, like Galileo, was convinced of the importance of mathematical explanation, and he and his followers were key figures in the development of mathematics and geometry in the 17th century. Cartesian mathematical descriptions of motion held that all mathematical formulations had to be justifiable in terms of direct physical action, a position held by Huygens and the German philosopher Gottfried Leibniz, who, while following in the Cartesian tradition, developed his own philosophical alternative to Scholasticism, which he outlined in his 1714 work, the Monadology. Descartes has been dubbed the "Father of Modern Philosophy", and much subsequent Western philosophy is a response to his writings, which are studied closely to this day. In particular, his Meditations on First Philosophy continues to be a standard text at most university philosophy departments. Descartes' influence in mathematics is equally apparent; the Cartesian coordinate system – allowing algebraic equations to be expressed as geometric shapes in a two-dimensional coordinate system – was named after him. He is credited as the father of analytical geometry, the bridge between algebra and geometry, important to the discovery of calculus and analysis.
Christiaan Huygens
The Dutch physicist, mathematician, astronomer and inventor Christiaan Huygens (1629–1695) was the leading scientist in Europe between Galileo and Newton. Huygens came from a family of nobility that had an important position in the Dutch society of the 17th century, a time in which the Dutch Republic flourished economically and culturally. This period of the history of the Netherlands, roughly between 1588 and 1702, is also referred to as the Dutch Golden Age, an era during the Scientific Revolution when Dutch science was among the most acclaimed in Europe. At this time, intellectuals and scientists like René Descartes, Baruch Spinoza, Pierre Bayle, Antonie van Leeuwenhoek, John Locke and Hugo Grotius resided in the Netherlands, and it was in this intellectual environment that Christiaan Huygens grew up. Christiaan's father, Constantijn Huygens, was, apart from being an important poet, secretary and diplomat to the Princes of Orange. Through his contacts and intellectual interests he knew many scientists of his time, including René Descartes and Marin Mersenne, and it was because of these contacts that Christiaan Huygens became aware of their work; Descartes' mechanistic philosophy in particular was to have a great influence on Huygens' own work. Descartes was later impressed by the skills Christiaan Huygens showed in geometry, as was Mersenne, who christened him "the new Archimedes" (which led Constantijn to refer to his son as "my little Archimedes").
A child prodigy, Huygens began his correspondence with Marin Mersenne when he was 17 years old. Huygens became interested in games of chance when he encountered the work of Fermat, Blaise Pascal and Girard Desargues. It was Blaise Pascal who encouraged him to write Van Rekeningh in Spelen van Gluck, which Frans van Schooten translated and published as De Ratiociniis in Ludo Aleae in 1657. The book is the earliest known scientific treatment of the subject, and at the time the most coherent presentation of a mathematical approach to games of chance. Two years later Huygens derived geometrically the now standard formulae in classical mechanics for the centripetal and centrifugal forces in his work De vi Centrifuga (1659). Around the same time Huygens' research in horology resulted in the invention of the pendulum clock, a breakthrough in timekeeping and the most accurate timekeeper for almost 300 years. Theoretical research into the way the pendulum works eventually led to the publication of one of his most important achievements: the Horologium Oscillatorium. This work was published in 1673 and became one of the three most important 17th-century works on mechanics (the other two being Galileo's Discourses and Mathematical Demonstrations Relating to Two New Sciences (1638) and Newton's Philosophiæ Naturalis Principia Mathematica (1687)). The Horologium Oscillatorium is the first modern treatise in which a physical problem (the accelerated motion of a falling body) is idealized by a set of parameters and then analyzed mathematically, and it constitutes one of the seminal works of applied mathematics. It is for this reason that Huygens has been called the first theoretical physicist and one of the founders of modern mathematical physics. Huygens' Horologium Oscillatorium had a tremendous influence on the history of physics, especially on the work of Isaac Newton, who greatly admired the work. For instance, the laws Huygens described in the Horologium Oscillatorium are structurally the same as Newton's first two laws of motion.
Five years after the publication of his Horologium Oscillatorium, Huygens described his wave theory of light. Though proposed in 1678, it was not published until 1690 in his Traité de la Lumière. His mathematical theory of light was initially rejected in favour of Newton's corpuscular theory of light, until Augustin-Jean Fresnel adopted Huygens' principle to give a complete explanation of the rectilinear propagation and diffraction effects of light in 1821. Today this principle is known as the Huygens–Fresnel principle.
As an astronomer, Huygens began grinding lenses with his brother Constantijn jr. to build telescopes for astronomical research. He was the first to identify the rings of Saturn as "a thin, flat ring, nowhere touching, and inclined to the ecliptic," and discovered the first of Saturn's moons, Titan, using a refracting telescope.
Apart from the many important discoveries Huygens made in physics and astronomy, and his inventions of ingenious devices, he was also the first to bring mathematical rigor to the description of physical phenomena. Because of this, and the fact that he developed institutional frameworks for scientific research on the continent, he has been referred to as "the leading actor in 'the making of science in Europe'".
Isaac Newton
The late 17th and early 18th centuries saw the achievements of Cambridge University physicist and mathematician Sir Isaac Newton (1642–1727). Newton, a fellow of the Royal Society of England, combined his own discoveries in mechanics and astronomy with earlier ones to create a single system for describing the workings of the universe. Newton formulated three laws of motion, which describe the relationship between forces and the motion of objects, as well as the law of universal gravitation, the latter of which could be used to explain the behavior not only of falling bodies on the Earth but also of planets and other celestial bodies. To arrive at his results, Newton invented one form of an entirely new branch of mathematics: calculus (also invented independently by Gottfried Leibniz), which was to become an essential tool in much of the later development in most branches of physics. Newton's findings were set forth in his Philosophiæ Naturalis Principia Mathematica ("Mathematical Principles of Natural Philosophy"), the publication of which in 1687 marked the beginning of the modern period of mechanics and astronomy.
Newton was able to refute the Cartesian mechanical tradition that all motions should be explained with respect to the immediate force exerted by corpuscles. Using his three laws of motion and law of universal gravitation, Newton removed the idea that objects followed paths determined by natural shapes and instead demonstrated that not only regularly observed paths, but all the future motions of any body could be deduced mathematically based on knowledge of their existing motion, their mass, and the forces acting upon them. However, observed celestial motions did not precisely conform to a Newtonian treatment, and Newton, who was also deeply interested in theology, imagined that God intervened to ensure the continued stability of the solar system.
Newton's principles (but not his mathematical treatments) proved controversial with Continental philosophers, who found his lack of metaphysical explanation for movement and gravitation philosophically unacceptable. Beginning around 1700, a bitter rift opened between the Continental and British philosophical traditions, which was stoked by heated, ongoing, and viciously personal disputes between the followers of Newton and Leibniz concerning priority over the analytical techniques of calculus, which each had developed independently. Initially, the Cartesian and Leibnizian traditions prevailed on the Continent (leading to the dominance of the Leibnizian calculus notation everywhere except Britain). Newton himself remained privately disturbed at the lack of a philosophical understanding of gravitation while insisting in his writings that none was necessary to infer its reality. As the 18th century progressed, Continental natural philosophers increasingly accepted the Newtonians' willingness to forgo ontological metaphysical explanations for mathematically described motions.
Newton built the first functioning reflecting telescope and developed a theory of color, published in Opticks, based on the observation that a prism decomposes white light into the many colours forming the visible spectrum. While Newton explained light as being composed of tiny particles, a rival theory of light which explained its behavior in terms of waves was presented in 1690 by Christiaan Huygens. However, the belief in the mechanistic philosophy coupled with Newton's reputation meant that the wave theory saw relatively little support until the 19th century. Newton also formulated an empirical law of cooling, studied the speed of sound, investigated power series, demonstrated the generalised binomial theorem and developed a method for approximating the roots of a function. His work on infinite series was inspired by Simon Stevin's decimals. Most importantly, Newton showed that the motions of objects on Earth and of celestial bodies are governed by the same set of natural laws, which were neither capricious nor malevolent. By demonstrating the consistency between Kepler's laws of planetary motion and his own theory of gravitation, Newton also removed the last doubts about heliocentrism. By bringing together all the ideas set forth during the Scientific Revolution, Newton effectively established the foundation for modern society in mathematics and science.
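The method for approximating the roots of a function mentioned above survives, in modern form, as the Newton–Raphson iteration. The sketch below is an illustrative modern rendering in Python, not a reconstruction of Newton's own procedure; the function, tolerance, and starting value are arbitrary choices.

```python
def newton_root(f, f_prime, x0, tol=1e-12, max_iter=50):
    """Approximate a root of f using the iteration x -> x - f(x)/f'(x)."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / f_prime(x)
        x -= step
        if abs(step) < tol:          # stop once the update is negligibly small
            break
    return x

# Example: the square root of 2 as the positive root of f(x) = x^2 - 2.
print(newton_root(lambda x: x * x - 2.0, lambda x: 2.0 * x, x0=1.0))  # approx. 1.4142135623730951
```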
Other achievements
Other branches of physics also received attention during the period of the Scientific Revolution. William Gilbert, court physician to Queen Elizabeth I, published an important work on magnetism in 1600, describing how the earth itself behaves like a giant magnet. Robert Boyle (1627–1691) studied the behavior of gases enclosed in a chamber and formulated the gas law named for him; he also contributed to physiology and to the founding of modern chemistry. Another important factor in the scientific revolution was the rise of learned societies and academies in various countries. The earliest of these were in Italy and Germany and were short-lived. More influential were the Royal Society of England (1660) and the Academy of Sciences in France (1666). The former was a private institution in London and included such scientists as John Wallis, William Brouncker, Thomas Sydenham, John Mayow, and Christopher Wren (who contributed not only to architecture but also to astronomy and anatomy); the latter, in Paris, was a government institution and included as a foreign member the Dutchman Huygens. In the 18th century, important royal academies were established at Berlin (1700) and at St. Petersburg (1724). The societies and academies provided the principal opportunities for the publication and discussion of scientific results during and after the scientific revolution. In 1690, James Bernoulli showed that the cycloid is the solution to the tautochrone problem; and the following year, in 1691, Johann Bernoulli showed that a chain freely suspended from two points will form a catenary, the curve with the lowest possible center of gravity available to any chain hung between two fixed points. He then showed, in 1696, that the cycloid is the solution to the brachistochrone problem.
Early thermodynamics
A precursor of the engine was designed by the German scientist Otto von Guericke who, in 1650, designed and built the world's first vacuum pump to create a vacuum, as demonstrated in the Magdeburg hemispheres experiment. He was driven to make a vacuum to disprove Aristotle's long-held supposition that 'Nature abhors a vacuum'. Shortly thereafter, Irish physicist and chemist Boyle learned of Guericke's designs and, in 1656, in coordination with English scientist Robert Hooke, built an air pump. Using this pump, Boyle and Hooke noticed the pressure-volume correlation for a gas: PV = k, where P is pressure, V is volume and k is a constant; this relationship is known as Boyle's Law. At that time, air was assumed to be a system of motionless particles, and not interpreted as a system of moving molecules; the concept of thermal motion came two centuries later. Therefore, Boyle's publication in 1660 speaks of a mechanical concept: the air spring. Later, after the invention of the thermometer, the property temperature could be quantified. This tool gave Gay-Lussac the opportunity to derive his law, which led shortly afterwards to the ideal gas law. But already before the establishment of the ideal gas law, an associate of Boyle's named Denis Papin built a bone digester in 1679, a closed vessel with a tightly fitting lid that confines steam until a high pressure is generated.
Later designs implemented a steam release valve to keep the machine from exploding. By watching the valve rhythmically move up and down, Papin conceived of the idea of a piston and cylinder engine. He did not, however, follow through with his design. Nevertheless, in 1697, based on Papin's designs, engineer Thomas Savery built the first engine. Although these early engines were crude and inefficient, they attracted the attention of the leading scientists of the time. Before 1698 and the invention of the Savery engine, horses had been used to power pulleys, attached to buckets, which lifted water out of flooded salt mines in England. In the years to follow, more variations of steam engines were built, such as the Newcomen engine and later the Watt engine. In time, these early engines would be used in place of horses. Thus, each engine began to be associated with a certain amount of "horsepower" depending upon how many horses it had replaced. The main problem with these first engines was that they were slow and clumsy, converting less than 2% of the input fuel into useful work. In other words, large quantities of coal (or wood) had to be burned to yield only a small fraction of work output. Hence the need for a new science of engine dynamics was born.
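Returning to the pressure–volume relation PV = k stated above, the following sketch (illustrative numbers only, not Boyle's and Hooke's actual measurements) shows how, for a fixed amount of trapped gas at constant temperature, halving the volume doubles the pressure while the product PV stays constant:

```python
# Boyle's law for a fixed amount of gas at constant temperature: P * V = k.
k = 101_325 * 0.001   # hypothetical sample: 1 litre of gas at atmospheric pressure (Pa*m^3)

for V in (0.0010, 0.0005, 0.00025):          # compress to one half, then one quarter
    P = k / V
    print(f"V = {V:.5f} m^3  ->  P = {P:>9,.0f} Pa   (P*V = {P * V:.2f} J)")
```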
18th-century developments
During the 18th century, the mechanics founded by Newton was developed by several scientists as more mathematicians learned calculus and elaborated upon its initial formulation. The application of mathematical analysis to problems of motion was known as rational mechanics, or mixed mathematics (and was later termed classical mechanics).
Mechanics
In 1714, Brook Taylor derived the fundamental frequency of a stretched vibrating string in terms of its tension and mass per unit length by solving a differential equation. The Swiss mathematician Daniel Bernoulli (1700–1782) made important mathematical studies of the behavior of gases, anticipating the kinetic theory of gases developed more than a century later, and has been referred to as the first mathematical physicist. In 1733, Daniel Bernoulli derived the fundamental frequency and harmonics of a hanging chain by solving a differential equation. In 1734, Bernoulli solved the differential equation for the vibrations of an elastic bar clamped at one end. Bernoulli's treatment of fluid dynamics and his examination of fluid flow was introduced in his 1738 work Hydrodynamica.
Rational mechanics dealt primarily with the development of elaborate mathematical treatments of observed motions, using Newtonian principles as a basis, and emphasized improving the tractability of complex calculations and developing legitimate means of analytical approximation. A representative contemporary textbook was published by Johann Baptiste Horvath. By the end of the century analytical treatments were rigorous enough to verify the stability of the Solar System solely on the basis of Newton's laws without reference to divine intervention – even as deterministic treatments of systems as simple as the three body problem in gravitation remained intractable. In 1705, Edmond Halley predicted the periodicity of Halley's Comet, William Herschel discovered Uranus in 1781, and Henry Cavendish measured the gravitational constant and determined the mass of the Earth in 1798. In 1783, John Michell suggested that some objects might be so massive that not even light could escape from them.
In 1739, Leonhard Euler solved the ordinary differential equation for a forced harmonic oscillator and noticed the resonance phenomenon. In 1742, Colin Maclaurin discovered his uniformly rotating self-gravitating spheroids. In 1742, Benjamin Robins published his New Principles in Gunnery, establishing the science of aerodynamics. British work, carried on by mathematicians such as Taylor and Maclaurin, fell behind Continental developments as the century progressed. Meanwhile, work flourished at scientific academies on the Continent, led by such mathematicians as Bernoulli and Euler, as well as Joseph-Louis Lagrange, Pierre-Simon Laplace, and Adrien-Marie Legendre. In 1743, Jean le Rond d'Alembert published his Traité de dynamique, in which he introduced the concept of generalized forces for accelerating systems and systems with constraints, and applied the new idea of virtual work to solve dynamical problems, an approach now known as D'Alembert's principle and a rival to Newton's second law of motion. In 1747, Pierre Louis Maupertuis applied minimum principles to mechanics. In 1759, Euler solved the partial differential equation for the vibration of a rectangular drum. In 1764, Euler examined the partial differential equation for the vibration of a circular drum and found one of the Bessel function solutions. In 1776, John Smeaton published a paper on experiments relating power, work, momentum and kinetic energy, and supporting the conservation of energy. In 1788, Lagrange presented his equations of motion in Mécanique analytique, in which the whole of mechanics was organized around the principle of virtual work. In 1789, Antoine Lavoisier stated the law of conservation of mass. The rational mechanics developed in the 18th century received expositions in both Lagrange's Mécanique analytique and Laplace's Traité de mécanique céleste (1799–1825).
Thermodynamics
During the 18th century, thermodynamics was developed through the theories of weightless "imponderable fluids", such as heat ("caloric"), electricity, and phlogiston (which was rapidly overthrown as a concept following Lavoisier's identification of oxygen gas late in the century). Assuming that these concepts were real fluids, their flow could be traced through a mechanical apparatus or chemical reactions. This tradition of experimentation led to the development of new kinds of experimental apparatus, such as the Leyden Jar; and new kinds of measuring instruments, such as the calorimeter, and improved versions of old ones, such as the thermometer. Experiments also produced new concepts, such as the University of Glasgow experimenter Joseph Black's notion of latent heat and Philadelphia intellectual Benjamin Franklin's characterization of electrical fluid as flowing between places of excess and deficit (a concept later reinterpreted in terms of positive and negative charges). Franklin also showed that lightning is electricity in 1752.
The accepted theory of heat in the 18th century viewed it as a kind of fluid, called caloric; although this theory was later shown to be erroneous, a number of scientists adhering to it nevertheless made important discoveries useful in developing the modern theory, including Joseph Black (1728–1799) and Henry Cavendish (1731–1810). Opposed to this caloric theory, which had been developed mainly by the chemists, was the less accepted theory dating from Newton's time that heat is due to the motions of the particles of a substance. This mechanical theory gained support in 1798 from the cannon-boring experiments of Count Rumford (Benjamin Thompson), who found a direct relationship between heat and mechanical energy.
While it was recognized early in the 18th century that finding absolute theories of electrostatic and magnetic force akin to Newton's principles of motion would be an important achievement, none were forthcoming. This situation only slowly changed as experimental practice became more widespread and more refined in the early years of the 19th century in places such as the newly established Royal Institution in London. Meanwhile, the analytical methods of rational mechanics began to be applied to experimental phenomena, most influentially with the French mathematician Joseph Fourier's analytical treatment of the flow of heat, as published in 1822. Joseph Priestley proposed an electrical inverse-square law in 1767, and Charles-Augustin de Coulomb introduced the inverse-square law of electrostatics in 1785.
At the end of the century, the members of the French Academy of Sciences had attained clear dominance in the field. At the same time, the experimental tradition established by Galileo and his followers persisted. The Royal Society and the French Academy of Sciences were major centers for the performance and reporting of experimental work. Experiments in mechanics, optics, magnetism, static electricity, chemistry, and physiology were not clearly distinguished from each other during the 18th century, but significant differences in explanatory schemes and, thus, experiment design were emerging. Chemical experimenters, for instance, defied attempts to enforce a scheme of abstract Newtonian forces onto chemical affiliations, and instead focused on the isolation and classification of chemical substances and reactions.
19th century
Mechanics
In 1821, William Hamilton began his analysis of Hamilton's characteristic function.
In 1835, he stated Hamilton's canonical equations of motion.
In 1813, Peter Ewart supported the idea of the conservation of energy in his paper On the measure of moving force.
In 1829, Gaspard Coriolis introduced the terms of work (force times distance) and kinetic energy with the meanings they have today.
In 1841, Julius Robert von Mayer, an amateur scientist, wrote a paper on the conservation of energy, although his lack of academic training led to its rejection.
In 1847, Hermann von Helmholtz formally stated the law of conservation of energy.
Electromagnetism
In 1800, Alessandro Volta invented the electric battery (known as the voltaic pile) and thus improved the way electric currents could also be studied. A year later, Thomas Young demonstrated the wave nature of light – which received strong experimental support from the work of Augustin-Jean Fresnel – and the principle of interference.
In 1820, Hans Christian Ørsted found that a current-carrying conductor gives rise to a magnetic force surrounding it, and within a week after Ørsted's discovery reached France, André-Marie Ampère discovered that two parallel electric currents will exert forces on each other.
In 1821, Michael Faraday built an electricity-powered motor, while Georg Ohm stated his law of electrical resistance in 1826, expressing the relationship between voltage, current, and resistance in an electric circuit.
In 1831, Faraday (and independently Joseph Henry) discovered the reverse effect, the production of an electric potential or current through magnetism – known as electromagnetic induction; these two discoveries are the basis of the electric motor and the electric generator, respectively.
Laws of thermodynamics
In the 19th century, the connection between heat and mechanical energy was established quantitatively by Julius Robert von Mayer and James Prescott Joule, who measured the mechanical equivalent of heat in the 1840s. In 1849, Joule published results from his series of experiments (including the paddlewheel experiment) which show that heat is a form of energy, a fact that was accepted in the 1850s. The relation between heat and energy was important for the development of steam engines, and in 1824 the experimental and theoretical work of Sadi Carnot was published. Carnot captured some of the ideas of thermodynamics in his discussion of the efficiency of an idealized engine. Sadi Carnot's work provided a basis for the formulation of the first law of thermodynamics – a restatement of the law of conservation of energy – which was stated around 1850 by William Thomson, later known as Lord Kelvin, and Rudolf Clausius. Lord Kelvin, who had extended the concept of absolute zero from gases to all substances in 1848, drew upon the engineering theory of Lazare Carnot, Sadi Carnot, and Émile Clapeyron – as well as the experimentation of James Prescott Joule on the interchangeability of mechanical, chemical, thermal, and electrical forms of work – to formulate the first law.
Kelvin and Clausius also stated the second law of thermodynamics, which was originally formulated in terms of the fact that heat does not spontaneously flow from a colder body to a hotter one. Other formulations followed quickly (for example, the second law was expounded in Thomson and Peter Guthrie Tait's influential work Treatise on Natural Philosophy) and Kelvin in particular understood some of the law's general implications. The idea that gases consist of molecules in motion had been discussed in some detail by Daniel Bernoulli in 1738, but had fallen out of favor, and was revived by Clausius in 1857. In 1850, Hippolyte Fizeau and Léon Foucault measured the speed of light in water and found that it is slower than in air, in support of the wave model of light. In 1852, Joule and Thomson demonstrated that a rapidly expanding gas cools, later named the Joule–Thomson effect or Joule–Kelvin effect. Hermann von Helmholtz put forward the idea of the heat death of the universe in 1854, the same year that Clausius established the importance of dQ/T (Clausius's theorem), though he did not yet name the quantity.
Statistical mechanics (a fundamentally new approach to science)
In 1859, James Clerk Maxwell worked out the mathematics of the distribution of velocities of the molecules of a gas, discovering the distribution law of molecular velocities. Maxwell also showed that electric and magnetic fields are propagated outward from their source at a speed equal to that of light and that light is one of several kinds of electromagnetic radiation, differing only in frequency and wavelength from the others. The wave theory of light was widely accepted by the time of Maxwell's work on the electromagnetic field, and afterward the study of light and that of electricity and magnetism were closely related. In 1864 Maxwell published his papers on a dynamical theory of the electromagnetic field, and stated that light is an electromagnetic phenomenon in the 1873 publication of his Treatise on Electricity and Magnetism. This work drew upon theoretical work by German theoreticians such as Carl Friedrich Gauss and Wilhelm Weber. The encapsulation of heat in particulate motion, and the addition of electromagnetic forces to Newtonian dynamics, established an enormously robust theoretical underpinning for physical observations.
The prediction that light represented a transmission of energy in wave form through a "luminiferous ether", and the seeming confirmation of that prediction with Helmholtz's student Heinrich Hertz's 1888 detection of electromagnetic radiation, was a major triumph for physical theory and raised the possibility that even more fundamental theories based on the field could soon be developed. Experimental confirmation of Maxwell's theory was provided by Hertz, who generated and detected electric waves in 1886 and verified their properties, at the same time foreshadowing their application in radio, television, and other devices. In 1887, Heinrich Hertz discovered the photoelectric effect. Research on electromagnetic waves began soon after, with many scientists and inventors conducting experiments on their properties. In the mid to late 1890s Guglielmo Marconi developed a radio wave based wireless telegraphy system (see invention of radio).
The atomic theory of matter had been proposed again in the early 19th century by the chemist John Dalton and became one of the hypotheses of the kinetic-molecular theory of gases developed by Clausius and James Clerk Maxwell to explain the laws of thermodynamics.
The kinetic theory in turn led to a revolutionary approach to science, the statistical mechanics of Ludwig Boltzmann (1844–1906) and Josiah Willard Gibbs (1839–1903), which studies the statistics of microstates of a system and uses statistics to determine the state of a physical system. Interrelating the statistical likelihood of certain states of organization of these particles with the energy of those states, Clausius reinterpreted the dissipation of energy to be the statistical tendency of molecular configurations to pass toward increasingly likely, increasingly disorganized states (coining the term "entropy" to describe the disorganization of a state). The statistical versus absolute interpretations of the second law of thermodynamics set up a dispute that would last for several decades (producing arguments such as "Maxwell's demon"), and that would not be held to be definitively resolved until the behavior of atoms was firmly established in the early 20th century. In 1902, James Jeans found the length scale required for gravitational perturbations to grow in a static nearly homogeneous medium.
Other developments
In 1822, botanist Robert Brown discovered Brownian motion: pollen grains in water undergoing movement resulting from their bombardment by the fast-moving atoms or molecules in the liquid.
In 1834, Carl Jacobi discovered his uniformly rotating self-gravitating ellipsoids (the Jacobi ellipsoid).
In 1834, John Russell observed a nondecaying solitary water wave (soliton) in the Union Canal near Edinburgh and used a water tank to study the dependence of solitary water wave velocities on wave amplitude and water depth.
In 1835, Gaspard Coriolis examined theoretically the mechanical efficiency of waterwheels, and deduced the Coriolis effect.
In 1842, Christian Doppler proposed the Doppler effect.
In 1851, Léon Foucault showed the Earth's rotation with a huge pendulum (Foucault pendulum).
There were important advances in continuum mechanics in the first half of the century, namely formulation of laws of elasticity for solids and discovery of Navier–Stokes equations for fluids.
20th century: birth of modern physics
At the end of the 19th century, physics had evolved to the point at which classical mechanics could cope with highly complex problems involving macroscopic situations; thermodynamics and kinetic theory were well established; geometrical and physical optics could be understood in terms of electromagnetic waves; and the conservation laws for energy and momentum (and mass) were widely accepted. So profound were these and other developments that it was generally accepted that all the important laws of physics had been discovered and that, henceforth, research would be concerned with clearing up minor problems and particularly with improvements of method and measurement.
However, around 1900 serious doubts arose about the completeness of the classical theories – the triumph of Maxwell's theories, for example, was undermined by inadequacies that had already begun to appear – owing to their inability to explain certain physical phenomena, such as the energy distribution in blackbody radiation and the photoelectric effect, and because some of the theoretical formulations led to paradoxes when pushed to the limit. Prominent physicists such as Hendrik Lorentz, Emil Cohn, Ernst Wiechert and Wilhelm Wien believed that some modification of Maxwell's equations might provide the basis for all physical laws. These shortcomings of classical physics were never to be resolved and new ideas were required. At the beginning of the 20th century a major revolution shook the world of physics, which led to a new era, generally referred to as modern physics.
Radiation experiments
In the 19th century, experimenters began to detect unexpected forms of radiation: Wilhelm Röntgen caused a sensation with his discovery of X-rays in 1895; in 1896 Henri Becquerel discovered that certain kinds of matter emit radiation of their own accord. In 1897, J. J. Thomson discovered the electron, and new radioactive elements found by Marie and Pierre Curie raised questions about the supposedly indestructible atom and the nature of matter. Marie and Pierre coined the term "radioactivity" to describe this property of matter, and isolated the radioactive elements radium and polonium. Ernest Rutherford and Frederick Soddy identified two of Becquerel's forms of radiation with electrons and the element helium. Rutherford identified and named two types of radioactivity and in 1911 interpreted experimental evidence as showing that the atom consists of a dense, positively charged nucleus surrounded by negatively charged electrons. Classical theory, however, predicted that this structure should be unstable. Classical theory had also failed to explain successfully two other experimental results that appeared in the late 19th century. One of these was the demonstration by Albert A. Michelson and Edward W. Morley – known as the Michelson–Morley experiment – which showed there did not seem to be a preferred frame of reference, at rest with respect to the hypothetical luminiferous ether, for describing electromagnetic phenomena. Studies of radiation and radioactive decay continued to be a preeminent focus for physical and chemical research through the 1930s, when the discovery of nuclear fission by Lise Meitner and Otto Frisch opened the way to the practical exploitation of what came to be called "atomic" energy.
Albert Einstein's theory of relativity
In 1905, a 26-year-old German physicist named Albert Einstein (then a patent clerk in Bern, Switzerland) showed how measurements of time and space are affected by motion between an observer and what is being observed. Einstein's radical theory of relativity revolutionized science. Although Einstein made many other important contributions to science, the theory of relativity alone represents one of the greatest intellectual achievements of all time. Although the concept of relativity was not introduced by Einstein, he recognised that the speed of light in vacuum is constant, i.e., the same for all observers, and an absolute upper limit to speed. This does not impact a person's day-to-day life since most objects travel at speeds much slower than light speed. For objects travelling near light speed, however, the theory of relativity shows that clocks associated with those objects will run more slowly and that the objects shorten in length according to measurements of an observer on Earth. Einstein also derived the equation $E = mc^2$, which expresses the equivalence of mass and energy.
Special relativity
Einstein argued that the speed of light was a constant in all inertial reference frames and that electromagnetic laws should remain valid independent of reference frame – assertions which rendered the ether "superfluous" to physical theory, and that held that observations of time and length varied relative to how the observer was moving with respect to the object being measured (what came to be called the "special theory of relativity"). It also followed that mass and energy were interchangeable quantities according to the equation $E = mc^2$. In another paper published the same year, Einstein asserted that electromagnetic radiation was transmitted in discrete quantities ("quanta"), according to a constant that the theoretical physicist Max Planck had posited in 1900 to arrive at an accurate theory for the distribution of blackbody radiation – an assumption that explained the strange properties of the photoelectric effect.
The special theory of relativity is a formulation of the relationship between physical observations and the concepts of space and time. The theory arose out of contradictions between electromagnetism and Newtonian mechanics and had great impact on both those areas. The original historical issue was whether it was meaningful to discuss the electromagnetic wave-carrying "ether" and motion relative to it and also whether one could detect such motion, as was unsuccessfully attempted in the Michelson–Morley experiment. Einstein demolished these questions and the ether concept in his special theory of relativity. However, his basic formulation does not involve detailed electromagnetic theory. It arises out of the question: "What is time?" Newton, in the Principia (1686), had given an unambiguous answer: "Absolute, true, and mathematical time, of itself, and from its own nature, flows equably without relation to anything external, and by another name is called duration." This definition is basic to all classical physics.
Einstein had the genius to question it, and found that it was incomplete. Instead, each "observer" necessarily makes use of his or her own scale of time, and for two observers in relative motion, their time-scales will differ. This induces a related effect on position measurements. Space and time become intertwined concepts, fundamentally dependent on the observer. Each observer presides over his or her own space-time framework or coordinate system. There being no absolute frame of reference, all observers of given events make different but equally valid (and reconcilable) measurements. What remains absolute is stated in Einstein's relativity postulate: "The basic laws of physics are identical for two observers who have a constant relative velocity with respect to each other."
Special relativity had a profound effect on physics: begun as a rethinking of the theory of electromagnetism, it uncovered a new symmetry law of nature, now called Poincaré symmetry, which replaced the old Galilean symmetry.
Special relativity exerted another long-lasting effect on dynamics. Although initially it was credited with the "unification of mass and energy", it became evident that relativistic dynamics established a firm distinction between rest mass, which is an invariant (observer independent) property of a particle or system of particles, and the energy and momentum of a system. The latter two are separately conserved in all situations but not invariant with respect to different observers. The term mass in particle physics underwent a semantic change, and since the late 20th century it almost exclusively denotes the rest (or invariant) mass.
General relativity
By 1916, Einstein was able to generalize this further, to deal with all states of motion including non-uniform acceleration, which became the general theory of relativity. In this theory Einstein also specified a new concept, the curvature of space-time, which described the gravitational effect at every point in space. In fact, the curvature of space-time completely replaced Newton's universal law of gravitation. According to Einstein, gravitational force in the normal sense is a kind of illusion caused by the geometry of space. The presence of a mass causes a curvature of space-time in the vicinity of the mass, and this curvature dictates the space-time path that all freely-moving objects must follow. It was also predicted from this theory that light should be subject to gravity – all of which was verified experimentally. This aspect of relativity explained the phenomenon of light bending around the Sun, and it predicted black holes as well as properties of the cosmic microwave background radiation – a discovery that exposed fundamental anomalies in the classic Steady-State hypothesis. Einstein received the Nobel Prize in 1921, awarded for his services to theoretical physics and especially for his explanation of the photoelectric effect.
The gradual acceptance of Einstein's theories of relativity and the quantized nature of light transmission, and of Niels Bohr's model of the atom created as many problems as they solved, leading to a full-scale effort to reestablish physics on new fundamental principles. Expanding relativity to cases of accelerating reference frames (the "general theory of relativity") in the 1910s, Einstein posited an equivalence between the inertial force of acceleration and the force of gravity, leading to the conclusion that space is curved and finite in size, and the prediction of such phenomena as gravitational lensing and the distortion of time in gravitational fields.
Quantum mechanics
Although relativity resolved the electromagnetic phenomena conflict demonstrated by Michelson and Morley, a second theoretical problem was the explanation of the distribution of electromagnetic radiation emitted by a black body; experiment showed that at shorter wavelengths, toward the ultraviolet end of the spectrum, the energy approached zero, but classical theory predicted it should become infinite. This glaring discrepancy, known as the ultraviolet catastrophe, was solved by the new theory of quantum mechanics. Quantum mechanics is the theory of atoms and subatomic systems. Approximately the first 30 years of the 20th century represent the time of the conception and evolution of the theory. The basic ideas of quantum theory were introduced in 1900 by Max Planck (1858–1947), who was awarded the Nobel Prize for Physics in 1918 for his discovery of the quantified nature of energy. The quantum theory (which previously relied on the "correspondence" at large scales between the quantized world of the atom and the continuities of the "classical" world) was accepted when the Compton effect established that light carries momentum and can scatter off particles, and when Louis de Broglie asserted that matter can be seen as behaving as a wave in much the same way as electromagnetic waves behave like particles (wave–particle duality).
In 1905, Einstein used the quantum theory to explain the photoelectric effect, and in 1913 the Danish physicist Niels Bohr used the same constant to explain the stability of Rutherford's atom as well as the frequencies of light emitted by hydrogen gas. The quantized theory of the atom gave way to a full-scale quantum mechanics in the 1920s. New principles of a "quantum" rather than a "classical" mechanics, formulated in matrix-form by Werner Heisenberg, Max Born, and Pascual Jordan in 1925, were based on the probabilistic relationship between discrete "states" and denied the possibility of causality. Quantum mechanics was extensively developed by Heisenberg, Wolfgang Pauli, Paul Dirac, and Erwin Schrödinger, who established an equivalent theory based on waves in 1926; but Heisenberg's 1927 "uncertainty principle" (indicating the impossibility of precisely and simultaneously measuring position and momentum) and the "Copenhagen interpretation" of quantum mechanics (named after Bohr's home city) continued to deny the possibility of fundamental causality, though opponents such as Einstein would metaphorically assert that "God does not play dice with the universe". The new quantum mechanics became an indispensable tool in the investigation and explanation of phenomena at the atomic level. Also in the 1920s, the Indian scientist Satyendra Nath Bose's work on photons and quantum mechanics provided the foundation for Bose–Einstein statistics, the theory of the Bose–Einstein condensate.
The spin–statistics theorem established that any particle in quantum mechanics may be either a boson (statistically Bose–Einstein) or a fermion (statistically Fermi–Dirac). It was later found that all fundamental bosons transmit forces, such as the photon that transmits electromagnetism.
Fermions are particles "like electrons and nucleons" and are the usual constituents of matter. Fermi–Dirac statistics later found numerous other uses, from astrophysics (see Degenerate matter) to semiconductor design.
Contemporary physics
Quantum field theory
As the philosophically inclined continued to debate the fundamental nature of the universe, quantum theories continued to be produced, beginning with Paul Dirac's formulation of a relativistic quantum theory in 1928. However, attempts to quantize electromagnetic theory entirely were stymied throughout the 1930s by theoretical formulations yielding infinite energies. This situation was not considered adequately resolved until after World War II ended, when Julian Schwinger, Richard Feynman and Sin-Itiro Tomonaga independently posited the technique of renormalization, which allowed for an establishment of a robust quantum electrodynamics (QED).
Meanwhile, new theories of fundamental particles proliferated with the rise of the idea of the quantization of fields through "exchange forces" regulated by an exchange of short-lived "virtual" particles, which were allowed to exist according to the laws governing the uncertainties inherent in the quantum world. Notably, Hideki Yukawa proposed that the positive charges of the nucleus were kept together courtesy of a powerful but short-range force mediated by a particle with a mass between that of the electron and proton. This particle, the "pion", was identified in 1947 as part of what became a slew of particles discovered after World War II. Initially, such particles were found as ionizing radiation left by cosmic rays, but increasingly came to be produced in newer and more powerful particle accelerators.
Outside particle physics, significant advances of the time were:
the invention of the laser (1964 Nobel Prize in Physics);
the theoretical and experimental research of superconductivity, especially the invention of a quantum theory of superconductivity by Vitaly Ginzburg and Lev Landau (1962 Nobel Prize in Physics) and, later, its explanation via Cooper pairs (1972 Nobel Prize in Physics). The Cooper pair was an early example of quasiparticles.
Unified field theories
Einstein believed that all fundamental interactions in nature can be explained in a single theory. Unified field theories were numerous attempts to "merge" several interactions. One of many formulations of such theories (as well as field theories in general) is a gauge theory, a generalization of the idea of symmetry. Eventually the Standard Model (see below) succeeded in unifying the strong, weak, and electromagnetic interactions. All attempts to unify gravitation with the other interactions have so far failed.
Particle physics and the Standard Model
When Chien-Shiung Wu's experiment showed that parity is violated in weak interactions, a series of discoveries followed. The interaction of these particles by scattering and decay provided a key to new fundamental quantum theories. Murray Gell-Mann and Yuval Ne'eman brought some order to these new particles by classifying them according to certain qualities, beginning with what Gell-Mann referred to as the "Eightfold Way". While its further development, the quark model, at first seemed inadequate to describe strong nuclear forces, allowing the temporary rise of competing theories such as the S-matrix theory, the establishment of quantum chromodynamics in the 1970s finalized a set of fundamental and exchange particles. This allowed for the establishment of a "standard model" based on the mathematics of gauge invariance, which successfully described all forces except for gravitation, and which remains generally accepted within its domain of application.
The Standard Model, based on Yang–Mills theory, groups the electroweak interaction theory and quantum chromodynamics into a structure denoted by the gauge group SU(3)×SU(2)×U(1). The formulation of the unification of the electromagnetic and weak interactions in the standard model is due to Abdus Salam, Steven Weinberg and, subsequently, Sheldon Glashow. Electroweak theory was later confirmed experimentally (by observation of neutral weak currents), and distinguished by the 1979 Nobel Prize in Physics.
Since the 1970s, fundamental particle physics has provided insights into early universe cosmology, particularly the Big Bang theory proposed as a consequence of Einstein's general theory of relativity. However, starting in the 1990s, astronomical observations have also provided new challenges, such as the need for new explanations of galactic stability ("dark matter") and the apparent acceleration in the expansion of the universe ("dark energy").
While accelerators have confirmed most aspects of the Standard Model by detecting expected particle interactions at various collision energies, no theory reconciling general relativity with the Standard Model has yet been found, although supersymmetry and string theory were believed by many theorists to be a promising avenue forward. The Large Hadron Collider, however, which began operating in 2008, has so far found no evidence supporting supersymmetry or string theory.
Cosmology
Cosmology may be said to have become a serious research question with the publication of Einstein's General Theory of Relativity in 1915, although it did not enter the scientific mainstream until the period known as the "Golden age of general relativity".
About a decade later, in the midst of what was dubbed the "Great Debate", Hubble and Slipher discovered the expansion of the universe in the 1920s by measuring the redshifts of Doppler spectra from galactic nebulae. Using Einstein's general relativity, Lemaître and Gamow formulated what would become known as the big bang theory. A rival, called the steady state theory, was devised by Hoyle, Gold, Narlikar and Bondi.
Cosmic microwave background radiation was verified in the 1960s by Penzias and Wilson, and this discovery favoured the big bang at the expense of the steady state scenario. These observations were later refined by Smoot et al. (1989), among other contributors, using data from the Cosmic Background Explorer (COBE) and the Wilkinson Microwave Anisotropy Probe (WMAP) satellites. The 1980s (the same decade as the launch of COBE) also saw the proposal of inflation theory by Alan Guth.
Recently the problems of dark matter and dark energy have risen to the top of the cosmology agenda.
Higgs boson
On July 4, 2012, physicists working at CERN's Large Hadron Collider announced that they had discovered a new subatomic particle greatly resembling the Higgs boson, a potential key to an understanding of why elementary particles have mass and indeed to the existence of diversity and life in the universe. For now, some physicists are calling it a "Higgslike" particle. Joe Incandela, of the University of California, Santa Barbara, said, "It's something that may, in the end, be one of the biggest observations of any new phenomena in our field in the last 30 or 40 years, going way back to the discovery of quarks, for example." Michael Turner, a cosmologist at the University of Chicago and the chairman of the physics center board, said:
Peter Higgs was one of six physicists, working in three independent groups, who, in 1964, invented the notion of the Higgs field ("cosmic molasses"). The others were Tom Kibble of Imperial College, London; Carl Hagen of the University of Rochester; Gerald Guralnik of Brown University; and François Englert and Robert Brout, both of Université libre de Bruxelles.
Although they have never been seen, Higgslike fields play an important role in theories of the universe and in string theory. Under certain conditions, according to the strange accounting of Einsteinian physics, they can become suffused with energy that exerts an antigravitational force. Such fields have been proposed as the source of an enormous burst of expansion, known as inflation, early in the universe and, possibly, as the secret of the dark energy that now seems to be speeding up the expansion of the universe.
Physical sciences
With increased accessibility to and elaboration upon advanced analytical techniques in the 19th century, physics was defined as much, if not more, by those techniques than by the search for universal principles of motion and energy, and the fundamental nature of matter. Fields such as acoustics, geophysics, astrophysics, aerodynamics, plasma physics, low-temperature physics, and solid-state physics joined optics, fluid dynamics, electromagnetism, and mechanics as areas of physical research. In the 20th century, physics also became closely allied with such fields as electrical, aerospace and materials engineering, and physicists began to work in government and industrial laboratories as much as in academic settings. Following World War II, the population of physicists increased dramatically, and came to be centered on the United States, while, in more recent decades, physics has become a more international pursuit than at any time in its previous history.
Articles on the history of physics
On branches of physics
History of astronomy (timeline)
History of condensed matter (timeline)
History of aerodynamics
History of materials science (timeline)
History of fluid mechanics (timeline)
History of metamaterials
History of nanotechnology
History of superconductivity
History of computational physics (timeline)
History of electromagnetic theory (timeline)
History of electrical engineering
History of classical field theory
History of Maxwell's equations
History of optics
History of spectroscopy
History of geophysics
History of gravity, spacetime and cosmology
History of the Big Bang theory
History of cosmology (timeline)
History of gravitational theory (timeline)
History of general relativity
History of special relativity (timeline)
History of Lorentz transformations
History of classical mechanics (timeline)
History of variational principles in physics
History of nuclear physics
Discovery of nuclear fission
History of nuclear fusion
History of nuclear power
History of nuclear weapons
History of quantum mechanics (timeline)
Atomic theory
History of molecular theory
History of quantum field theory
History of quantum information (timeline)
History of subatomic physics (timeline)
History of thermodynamics (timeline)
History of energy
History of entropy
History of perpetual motion machines
On specific discoveries
Discovery of cosmic microwave background radiation
History of graphene
First observation of gravitational waves
Subatomic particles (timeline)
Search for the Higgs boson
Discovery of the neutron
Historical periods
Classical physics
Copernican Revolution
Golden age of physics
Golden age of cosmology
Modern physics
Physics in the medieval Islamic world
Astronomy in the medieval Islamic world
Noisy intermediate-scale quantum era
See also
Notes
References
Sources
Further reading
Buchwald, Jed Z. and Robert Fox, eds. The Oxford Handbook of the History of Physics (2014) 976pp; excerpt
A selection of 56 articles, written by physicists. Commentaries and notes by Lloyd Motz and Dale McAdoo.
de Haas, Paul, "Historic Papers in Physics (20th Century)"
External links
"Selected Works about Isaac Newton and His Thought" from The Newton Project.
Physics | 0.765826 | 0.995717 | 0.762546 |
Solving the geodesic equations | Solving the geodesic equations is a procedure used in mathematics, particularly Riemannian geometry, and in physics, particularly in general relativity, that results in obtaining geodesics. Physically, these represent the paths of (usually ideal) particles with no proper acceleration, their motion satisfying the geodesic equations. Because the particles are subject to no proper acceleration, the geodesics generally represent the straightest path between two points in a curved spacetime.
The differential geodesic equation
On an n-dimensional Riemannian manifold $M$, the geodesic equation written in a coordinate chart with coordinates $x^a$ is:
$$\frac{d^2 x^a}{ds^2} + \Gamma^{a}{}_{bc}\,\frac{dx^b}{ds}\frac{dx^c}{ds} = 0,$$
where the coordinates $x^a(s)$ are regarded as the coordinates of a curve γ(s) in $M$ and $\Gamma^{a}{}_{bc}$ are the Christoffel symbols. The Christoffel symbols are functions of the metric and are given by:
$$\Gamma^{a}{}_{bc} = \tfrac{1}{2}\, g^{ad}\left(g_{cd,b} + g_{bd,c} - g_{bc,d}\right),$$
where the comma indicates a partial derivative with respect to the coordinates:
$$g_{ab,c} = \frac{\partial g_{ab}}{\partial x^{c}}.$$
As the manifold has dimension $n$, the geodesic equations are a system of $n$ second-order ordinary differential equations for the coordinate variables. Thus, allied with initial conditions, the system can, according to the Picard–Lindelöf theorem, be solved. One can also use a Lagrangian approach to the problem: defining
$$L = \tfrac{1}{2}\, g_{ab}\,\frac{dx^a}{ds}\frac{dx^b}{ds}$$
and applying the Euler–Lagrange equation.
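Once the Christoffel symbols are known, the system can be integrated numerically. The sketch below is illustrative rather than a general-purpose solver; it assumes NumPy and SciPy are available and uses the unit 2-sphere as a toy example, whose only non-zero Christoffel symbols are $\Gamma^\theta{}_{\phi\phi} = -\sin\theta\cos\theta$ and $\Gamma^\phi{}_{\theta\phi} = \Gamma^\phi{}_{\phi\theta} = \cot\theta$.

```python
import numpy as np
from scipy.integrate import solve_ivp

def geodesic_rhs(s, y):
    """Right-hand side of the geodesic ODE system on the unit 2-sphere."""
    theta, phi, dtheta, dphi = y
    d2theta = np.sin(theta) * np.cos(theta) * dphi**2
    d2phi = -2.0 * (np.cos(theta) / np.sin(theta)) * dtheta * dphi
    return [dtheta, dphi, d2theta, d2phi]

# Initial conditions: start on the equator, heading "north-east".
y0 = [np.pi / 2, 0.0, 0.5, 0.5]
sol = solve_ivp(geodesic_rhs, (0.0, 10.0), y0, rtol=1e-9, atol=1e-9)

# Geodesics of the sphere are great circles, so the quantity
# g_ab (dx^a/ds)(dx^b/ds) = theta'^2 + sin(theta)^2 phi'^2
# must stay constant along the solution -- a quick sanity check.
theta, phi, dtheta, dphi = sol.y
speed2 = dtheta**2 + np.sin(theta)**2 * dphi**2
print(speed2.min(), speed2.max())   # both ~0.5
```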
Heuristics
As the laws of physics can be written in any coordinate system, it is convenient to choose one that simplifies the geodesic equations. Mathematically, this means a coordinate chart is chosen in which the geodesic equations have a particularly tractable form.
Effective potentials
When the geodesic equations can be separated into terms containing only an undifferentiated variable and terms containing only its derivative, the former may be consolidated into an effective potential dependent only on position. In this case, many of the heuristic methods of analysing energy diagrams apply, in particular the location of turning points.
Solution techniques
Solving the geodesic equations means obtaining an exact solution, possibly even the general solution, of the geodesic equations. Most approaches implicitly employ the point symmetry group of the system of geodesic equations. This often yields a result giving a family of solutions implicitly, but in many examples it does yield the general solution in explicit form.
In general relativity, to obtain timelike geodesics it is often simplest to start from the spacetime metric, after dividing by $ds^2$ to obtain the form
$$1 = g_{\mu\nu}\,\dot{x}^{\mu}\dot{x}^{\nu},$$
where the dot represents differentiation with respect to $s$. Because timelike geodesics are maximal, one may apply the Euler–Lagrange equation directly, and thus obtain a set of equations equivalent to the geodesic equations. This method has the advantage of bypassing a tedious calculation of Christoffel symbols.
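A minimal symbolic sketch of this Lagrangian route follows, assuming SymPy is available; it again uses the unit 2-sphere rather than a spacetime metric so that the output stays short, but the same few lines apply to any metric written out in coordinates.

```python
import sympy as sp
from sympy.calculus.euler import euler_equations

s = sp.symbols('s')
theta = sp.Function('theta')(s)
phi = sp.Function('phi')(s)

# Lagrangian L = (1/2) g_ab x'^a x'^b for the unit 2-sphere metric
# ds^2 = d(theta)^2 + sin(theta)^2 d(phi)^2
L = sp.Rational(1, 2) * (theta.diff(s)**2 + sp.sin(theta)**2 * phi.diff(s)**2)

for eq in euler_equations(L, [theta, phi], s):
    sp.pprint(sp.simplify(eq))

# The printed equations are equivalent to the geodesic equations
#   theta'' - sin(theta) cos(theta) phi'^2 = 0
#   phi''   + 2 cot(theta) theta' phi'     = 0
# i.e. no Christoffel symbols had to be written down by hand.
```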
See also
Geodesics of the Schwarzschild vacuum
Mathematics of general relativity
Transition from special relativity to general relativity
References
General relativity
Mathematical methods in general relativity | 0.782497 | 0.974472 | 0.762521 |
ISO 31 | ISO 31 (Quantities and units, International Organization for Standardization, 1992) is a superseded international standard concerning physical quantities, units of measurement, their interrelationships and their presentation. It was revised and replaced by ISO/IEC 80000.
Parts
The standard comes in 14 parts:
ISO 31-0: General principles (replaced by ISO/IEC 80000-1:2009)
ISO 31-1: Space and time (replaced by ISO/IEC 80000-3:2007)
ISO 31-2: Periodic and related phenomena (replaced by ISO/IEC 80000-3:2007)
ISO 31-3: Mechanics (replaced by ISO/IEC 80000-4:2006)
ISO 31-4: Heat (replaced by ISO/IEC 80000-5)
ISO 31-5: Electricity and magnetism (replaced by ISO/IEC 80000-6)
ISO 31-6: Light and related electromagnetic radiations (replaced by ISO/IEC 80000-7)
ISO 31-7: Acoustics (replaced by ISO/IEC 80000-8:2007)
ISO 31-8: Physical chemistry and molecular physics (replaced by ISO/IEC 80000-9)
ISO 31-9: Atomic and nuclear physics (replaced by ISO/IEC 80000-10)
ISO 31-10: Nuclear reactions and ionizing radiations (replaced by ISO/IEC 80000-10)
ISO 31-11: Mathematical signs and symbols for use in the physical sciences and technology (replaced by ISO 80000-2:2009)
ISO 31-12: Characteristic numbers (replaced by ISO/IEC 80000-11)
ISO 31-13: Solid state physics (replaced by ISO/IEC 80000-12)
A second international standard on quantities and units was IEC 60027. The ISO 31 and IEC 60027 standards were revised by the two standardization organizations in collaboration to integrate both standards into a joint standard ISO/IEC 80000 – Quantities and units, in which the quantities and equations used with the SI are to be referred to as the International System of Quantities (ISQ). ISO/IEC 80000 supersedes both ISO 31 and part of IEC 60027.
Coined words
ISO 31-0 introduced several new words into the English language that are direct spelling-calques from the French. Some of these words have been used in scientific literature.
Related national standards
Canada: CAN/CSA-Z234-1-89 Canadian Metric Practice Guide (covers some aspects of ISO 31-0, but is not a comprehensive list of physical quantities comparable to ISO 31)
United States: There are several national SI guidance documents, such as NIST SP 811, NIST SP 330, NIST SP 814, IEEE/ASTM SI 10, SAE J916. These cover many aspects of the ISO 31-0 standard, but lack the comprehensive list of quantities and units defined in the remaining parts of ISO 31.
See also
SI – the international system of units
BIPM – publishes freely available information on SI units, which overlaps with some of the material covered in ISO 31-0
IUPAP – much of the material in ISO 31 comes originally from Document IUPAP-25 of the Commission for Symbols, Units and Nomenclature (SUN Commission) of the International Union of Pure and Applied Physics
IUPAC – some of the material in ISO 31 originates from the Interdivisional Committee on Terminology, Nomenclature and Symbols of the International Union of Pure and Applied Chemistry
Quantities, Units and Symbols in Physical Chemistry – this IUPAC "Green Book" covers many ISO 31 definitions
IEC 60027 Letter symbols to be used in electrical technology
ISO 1000 SI Units and Recommendations for the use of their multiples and of certain other units (bundled with ISO 31 as the ISO Standards Handbook – Quantities and units)
Notes
References
(contains both ISO 31 and ISO 1000)
External links
ISO TC12 standards – Quantities, units, symbols, conversion factors
Measurement | 0.790578 | 0.964506 | 0.762517 |
Holonomic constraints | In classical mechanics, holonomic constraints are relations between the position variables (and possibly time) that can be expressed in the following form:
where $\{u_1, u_2, u_3, \ldots, u_n\}$ are generalized coordinates that describe the system (in unconstrained configuration space). For example, the motion of a particle constrained to lie on the surface of a sphere is subject to a holonomic constraint, but if the particle is able to fall off the sphere under the influence of gravity, the constraint becomes non-holonomic. For the first case, the holonomic constraint may be given by the equation
$$r^2 - a^2 = 0,$$
where $r$ is the distance from the centre of a sphere of radius $a$, whereas the second non-holonomic case may be given by
$$r^2 - a^2 \geq 0.$$
Velocity-dependent constraints (also called semi-holonomic constraints) such as
$$f(u_1, u_2, \ldots, u_n, \dot{u}_1, \dot{u}_2, \ldots, \dot{u}_n, t) = 0$$
are not usually holonomic.
Holonomic system
In classical mechanics a system may be defined as holonomic if all constraints of the system are holonomic. For a constraint to be holonomic it must be expressible as a function:
$$f(u_1, u_2, u_3, \ldots, u_n, t) = 0,$$
i.e. a holonomic constraint depends only on the coordinates $u_j$ and possibly time $t$. It does not depend on the velocities or any higher-order derivative with respect to $t$. A constraint that cannot be expressed in the form shown above is a nonholonomic constraint.
Introduction
As described above, a holonomic system is (simply speaking) a system in which one can deduce the state of a system by knowing only the change of positions of the components of the system over time, but not needing to know the velocity or in what order the components moved relative to each other. In contrast, a nonholonomic system is often a system where the velocities of the components over time must be known to be able to determine the change of state of the system, or a system where a moving part is not able to be bound to a constraint surface, real or imaginary. Examples of holonomic systems are gantry cranes, pendulums, and robotic arms. Examples of nonholonomic systems are Segways, unicycles, and automobiles.
Terminology
The configuration space lists the displacement of the components of the system, one for each degree of freedom. A system that can be described using a configuration space is called scleronomic.
The event space is identical to the configuration space except for the addition of a variable to represent the change in the system over time (if needed to describe the system). A system that must be described using an event space, instead of only a configuration space, is called rheonomic. Many systems can be described either scleronomically or rheonomically. For example, the total allowable motion of a pendulum can be described with a scleronomic constraint, but the motion over time of a pendulum must be described with a rheonomic constraint.
The state space is the configuration space, plus terms describing the velocity of each term in the configuration space.
The state-time space adds time $t$.
Examples
Gantry crane
As shown on the right, a gantry crane is an overhead crane that is able to move its hook in 3 axes as indicated by the arrows. Intuitively, we can deduce that the crane should be a holonomic system as, for a given movement of its components, it doesn't matter what order or velocity the components move: as long as the total displacement of each component from a given starting condition is the same, all parts and the system as a whole will end up in the same state. Mathematically we can prove this as such:
We can define the configuration space of the system as:
We can say that the deflection of each component of the crane from its "zero" position are , , and , for the blue, green, and orange components, respectively. The orientation and placement of the coordinate system does not matter in whether a system is holonomic, but in this example the components happen to move parallel to its axes. If the origin of the coordinate system is at the back-bottom-left of the crane, then we can write the position constraint equation as:
Where is the height of the crane. Optionally, we may simplify to the standard form where all constants are placed after the variables:
Because we have derived a constraint equation in holonomic form (specifically, our constraint equation has the form where ), we can see that this system must be holonomic.
Pendulum
As shown on the right, a simple pendulum is a system composed of a weight and a string. The string is attached at the top end to a pivot and at the bottom end to a weight. Being inextensible, the string’s length is a constant. This system is holonomic because it obeys the holonomic constraint
$$x^2 + y^2 - L^2 = 0,$$
where $(x, y)$ is the position of the weight and $L$ is the length of the string.
Rigid body
The particles of a rigid body obey the holonomic constraint
$$\|\mathbf{x}_i - \mathbf{x}_j\|^2 - L_{ij}^2 = 0,$$
where $\mathbf{x}_i$, $\mathbf{x}_j$ are respectively the positions of particles $P_i$ and $P_j$, and $L_{ij}$ is the distance between them. If a given system is holonomic, rigidly attaching additional parts to components of the system in question cannot make it non-holonomic, assuming that the degrees of freedom are not reduced (in other words, assuming the configuration space is unchanged).
Pfaffian form
Consider the following differential form of a constraint:
$$\sum_j A_{ij}\,du_j + A_i\,dt = 0,$$
where $A_{ij}$ and $A_i$ are the coefficients of the differentials $du_j$ and $dt$ for the ith constraint equation. This form is called the Pfaffian form or the differential form.
If the differential form is integrable, i.e., if there is a function $f_i(u_1, u_2, u_3, \ldots, u_n, t)$ satisfying the equality
$$df_i = \sum_j A_{ij}\,du_j + A_i\,dt = 0,$$
then this constraint is a holonomic constraint; otherwise, it is nonholonomic. Therefore, all holonomic and some nonholonomic constraints can be expressed using the differential form. Examples of nonholonomic constraints that cannot be expressed this way are those that are dependent on generalized velocities. With a constraint equation in Pfaffian form, whether the constraint is holonomic or nonholonomic depends on whether the Pfaffian form is integrable. See Universal test for holonomic constraints below for a description of a test to verify the integrability (or lack of) of a Pfaffian form constraint.
Universal test for holonomic constraints
When the constraint equation of a system is written in Pfaffian constraint form, there exists a mathematical test to determine whether the system is holonomic.
For a constraint equation, or sets of constraint equations (note that variable(s) representing time can be included, as $t$ from above, simply by treating them as additional coordinates), in the following form:
$$A_{i1}\,du_1 + A_{i2}\,du_2 + \cdots + A_{in}\,du_n = 0,$$
we can use the test equation:
$$A_\gamma\left(\frac{\partial A_\beta}{\partial u_\alpha} - \frac{\partial A_\alpha}{\partial u_\beta}\right) + A_\beta\left(\frac{\partial A_\alpha}{\partial u_\gamma} - \frac{\partial A_\gamma}{\partial u_\alpha}\right) + A_\alpha\left(\frac{\partial A_\gamma}{\partial u_\beta} - \frac{\partial A_\beta}{\partial u_\gamma}\right) = 0,$$
where $(u_\alpha, u_\beta, u_\gamma)$ ranges over the $\binom{n}{3}$ combinations of three of the variables, giving $\binom{n}{3}$ test equations per constraint equation, for all sets of constraint equations.
In other words, a system of three variables would have to be tested once with one test equation, using the three terms of the constraint equation (in any order); to test a system of four variables, the test would have to be performed up to four times with four different test equations, each using a different combination of three of the four terms in the constraint equation (each combination in any order). For a system of five variables, ten tests would have to be performed on a holonomic system to verify that fact, and for a system of five variables with three sets of constraint equations, thirty tests (assuming a simplification like a change-of-variable could not be performed to reduce that number). For this reason, it is advisable when using this method on systems of more than three variables to use common sense as to whether the system in question is holonomic, and only pursue testing if the system likely is not. Additionally, it is likewise best to use mathematical intuition to try to predict which test would fail first and begin with that one, skipping tests at first that seem likely to succeed.
If every test equation is true for the entire set of combinations for all constraint equations, the system is holonomic. If it is untrue for even one test combination, the system is nonholonomic.
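For a single constraint in three variables, the test above amounts to checking that $\mathbf{A}\cdot(\nabla\times\mathbf{A})$ vanishes identically. The following SymPy sketch implements that check; the two example constraints are hypothetical illustrations, not taken from this article.

```python
import sympy as sp

u1, u2, u3 = sp.symbols('u1 u2 u3')

def holonomic_test(A, variables):
    """A . (curl A) for one Pfaffian constraint A1 du1 + A2 du2 + A3 du3 = 0
    in three variables; the constraint is holonomic iff this simplifies to 0."""
    A1, A2, A3 = A
    x, y, z = variables
    expr = (A1 * (sp.diff(A3, y) - sp.diff(A2, z))
            + A2 * (sp.diff(A1, z) - sp.diff(A3, x))
            + A3 * (sp.diff(A2, x) - sp.diff(A1, y)))
    return sp.simplify(expr)

# Hypothetical integrable example: u2 du1 + u1 du2 + du3 = 0
print(holonomic_test((u2, u1, sp.Integer(1)), (u1, u2, u3)))   # -> 0 (holonomic)

# Hypothetical non-integrable example: du1 + u1 du2 + du3 = 0
print(holonomic_test((sp.Integer(1), u1, sp.Integer(1)), (u1, u2, u3)))   # -> 1 (nonholonomic)
```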
Example
Consider this dynamical system described by a constraint equation in Pfaffian form.
The configuration space, by inspection, is . Because there are only three terms in the configuration space, there will be only one test equation needed.
We can organize the terms of the constraint equation as such, in preparation for substitution:
Substituting the terms, our test equation becomes:
After calculating all partial derivatives, we get:
Simplifying, we find that:
We see that our test equation is true, and thus, the system must be holonomic.
We have finished our test, but now knowing that the system is holonomic, we may wish to find the holonomic constraint equation. We can attempt to find it by integrating each term of the Pfaffian form and attempting to unify them into one equation, as such:
It's easy to see that we can combine the results of our integrations to find the holonomic constraint equation:
where C is the constant of integration.
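A short SymPy sketch of this integrate-and-unify step follows, applied to the hypothetical exact Pfaffian form $u_2\,du_1 + u_1\,du_2 + du_3 = 0$ (the same one used in the test sketch above), not to the example worked in this section.

```python
import sympy as sp

u1, u2, u3 = sp.symbols('u1 u2 u3')
coeffs = {u1: u2, u2: u1, u3: sp.Integer(1)}   # coefficient of each differential

# Integrate each coefficient with respect to its own variable ...
parts = [sp.integrate(c, v) for v, c in coeffs.items()]
print(parts)                                   # [u1*u2, u1*u2, u3]

# ... and unify the results into a single function f; f = C is then the
# holonomic constraint. Check that df reproduces the Pfaffian coefficients.
f = u1 * u2 + u3
print([sp.simplify(sp.diff(f, v) - coeffs[v]) for v in (u1, u2, u3)])   # [0, 0, 0]
```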
Constraints of constant coefficients
For a given Pfaffian constraint where every coefficient of every differential is a constant, in other words, a constraint in the form:
$$A_{i1}\,du_1 + A_{i2}\,du_2 + \cdots + A_{in}\,du_n + A_i\,dt = 0, \qquad \text{with all } A_{ij}, A_i \text{ constant,}$$
the constraint must be holonomic.
We may prove this as follows: consider a system of constraints in Pfaffian form where every coefficient of every differential is a constant, as described directly above. To test whether this system of constraints is holonomic, we use the universal test. We can see that in the test equation, there are three terms that must sum to zero. Therefore, if each of those three terms in every possible test equation is zero, then all test equations are true and thus the system is holonomic. Each term of each test equation is in the form:
$$A_\gamma\left(\frac{\partial A_\beta}{\partial u_\alpha} - \frac{\partial A_\alpha}{\partial u_\beta}\right),$$
where:
$A_\alpha$, $A_\beta$, and $A_\gamma$ are some combination (with $\binom{n}{3}$ total combinations) of the coefficients $A_{i1}, A_{i2}, \ldots, A_{in}$ of a given constraint $i$;
$u_\alpha$, $u_\beta$, and $u_\gamma$ are the corresponding combination of the variables $u_1, u_2, \ldots, u_n$.
Additionally, there are as many sets of test equations as there are constraint equations.
We can see that, by definition, all of the coefficients $A$ are constants. It is well known in calculus that any derivative (full or partial) of any constant is $0$. Hence, we can reduce each partial derivative to:
$$\frac{\partial A_\beta}{\partial u_\alpha} = 0,$$
and hence each term is zero, the left side of each test equation is zero, each test equation is true, and the system is holonomic.
Configuration spaces of two or one variable
Any system that can be described by a Pfaffian constraint and has a configuration space or state space of only two variables or one variable is holonomic.
We may prove this as such: consider a dynamical system with a configuration space or state space described as:
if the system is described by a state space, we simply take one of the two variables to be the time variable $t$. This system will be described in Pfaffian form:
with sets of constraints. The system will be tested by using the universal test. However, the universal test requires three variables in the configuration or state space. To accommodate this, we simply add a dummy variable to the configuration or state space to form:
Because the dummy variable is by definition not a measure of anything in the system, its coefficient in the Pfaffian form must be $0$. Thus we revise our Pfaffian form:
Now we may use the test as such, for a given constraint if there are a set of constraints:
Upon realizing that every partial derivative with respect to the dummy variable vanishes – because the dummy variable cannot appear in the coefficients used to describe the system – we see that the test equation must be true for all sets of constraint equations and thus the system must be holonomic. A similar proof can be conducted with one actual variable in the configuration or state space and two dummy variables to confirm that one-degree-of-freedom systems describable in Pfaffian form are also always holonomic.
In conclusion, we realize that even though it is possible to model nonholonomic systems in Pfaffian form, any system modellable in Pfaffian form with two or fewer degrees of freedom (the number of degrees of freedom is equal to the number of terms in the configuration space) must be holonomic.
Important note: realize that the test equation was satisfied automatically, because the dummy variable, and hence the dummy differential included in the test, will differentiate anything that is a function of the actual configuration or state space variables to $0$. Having a system with a configuration or state space of:
$$(u_1, u_2, u_3)$$
and a set of constraints where one or more constraints are in the Pfaffian form:
$$A_{i1}\,du_1 + A_{i2}\,du_2 + 0\,du_3 = 0$$
does not guarantee the system is holonomic, as even though one differential has a coefficient of $0$, there are still three degrees of freedom described in the configuration or state space.
Transformation to independent generalized coordinates
The holonomic constraint equations can help us easily remove some of the dependent variables in our system. For example, if we want to remove $x_d$, which is a parameter in the constraint equation $f(x_1, x_2, \ldots, x_N, t) = 0$, we can rearrange the equation into the following form, assuming it can be done,
$$x_d = g(x_1, x_2, \ldots, x_{d-1}, x_{d+1}, \ldots, x_N, t),$$
and replace $x_d$ in every equation of the system using the above function. This can always be done for general physical systems, provided that the derivative of $f$ is continuous; then, by the implicit function theorem, the solution $g$ is guaranteed in some open set. Thus, it is possible to remove all occurrences of the dependent variable $x_d$.
Suppose that a physical system has $N$ degrees of freedom. Now, $h$ holonomic constraints are imposed on the system. Then, the number of degrees of freedom is reduced to $m = N - h$. We can use $m$ independent generalized coordinates $q_1, q_2, \ldots, q_m$ to completely describe the motion of the system. The transformation equation can be expressed as follows:
$$x_i = x_i(q_1, q_2, \ldots, q_m, t), \qquad i = 1, 2, \ldots, N.$$
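As an illustration of this elimination, the following SymPy sketch parameterizes the planar pendulum of the earlier example by a single generalized coordinate (the angle from the downward vertical); the symbols and the unit-mass kinetic energy are choices made here for brevity.

```python
import sympy as sp

t = sp.symbols('t')
L = sp.symbols('L', positive=True)     # string length
q = sp.Function('q')(t)                # generalized coordinate: angle from vertical

# Dependent coordinates expressed through the single generalized coordinate
x = L * sp.sin(q)
y = -L * sp.cos(q)

# The holonomic constraint x^2 + y^2 - L^2 = 0 is satisfied identically
print(sp.simplify(x**2 + y**2 - L**2))           # -> 0

# Kinetic energy per unit mass, now a function of q alone
T = sp.Rational(1, 2) * (x.diff(t)**2 + y.diff(t)**2)
print(sp.simplify(T))                            # -> L**2*Derivative(q(t), t)**2/2
```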
Classification of physical systems
In order to study classical physics rigorously and methodically, we need to classify systems. Based on previous discussion, we can classify physical systems into holonomic systems and non-holonomic systems. One of the conditions for the applicability of many theorems and equations is that the system must be a holonomic system. For example, if a physical system is a holonomic system and a monogenic system, then Hamilton's principle is the necessary and sufficient condition for the correctness of Lagrange's equation.
See also
Nonholonomic system
Goryachev–Chaplygin top
Pfaffian constraint
Udwadia–Kalaba equation
References
Classical mechanics | 0.770141 | 0.990072 | 0.762494 |
Field propulsion | Field propulsion is the concept of spacecraft propulsion where no propellant is necessary but instead momentum of the spacecraft is changed by an interaction of the spacecraft with external force fields, such as gravitational and magnetic fields from stars and planets. Proposed drives that use field propulsion are often called a reactionless or propellantless drive.
Types
Practical methods
Although not presently in wide use for space, there exist proven terrestrial examples of "field propulsion", in which electromagnetic fields act upon a conducting medium such as seawater or plasma for propulsion; this approach is known as magnetohydrodynamics or MHD. MHD is similar in operation to electric motors; however, rather than using moving parts or metal conductors, fluid or plasma conductors are employed. The EMS-1 and more recently the Yamato 1 are examples of such electromagnetic field propulsion systems, first described in 1994. There is potential to apply MHD to the space environment, as in experiments like NASA's electrodynamic tether, Lorentz Actuated Orbits, the wingless electromagnetic air vehicle, and the magnetoplasmadynamic thruster (which does use propellant).
Electrohydrodynamics is another method whereby electrically charged fluids are used for propulsion and boundary layer control such as ion propulsion
Other practical methods which could be loosely considered as field propulsion include: The gravity assist trajectory, which uses planetary gravity fields and orbital momentum; Solar sails and magnetic sails use respectively the radiation pressure and solar wind for spacecraft thrust; aerobraking uses the atmosphere of a planet to change relative velocity of a spacecraft. The last two actually involve the exchange of momentum with physical particles and are not usually expressed as an interaction with fields, but they are sometimes included as examples of field propulsion since no spacecraft propellant is required. An example is the Magsail magnetic sail design.
Speculative methods
Other concepts that have been proposed are speculative, using "frontier physics" and concepts from modern physics. So far none of these methods have been unambiguously demonstrated, much less proven practical.
The Woodward effect is based on a controversial concept of inertia and certain solutions to the equations for General Relativity. Experiments attempting to conclusively demonstrate this effect have been conducted since the 1990s.
In contrast, examples of proposals for field propulsion that rely on physics outside the present paradigms are various schemes for faster-than-light, warp drive and antigravity, and often amount to little more than catchy descriptive phrases, with no known physical basis. Until it is shown that the conservation of energy and momentum break down under certain conditions (or scales), any such schemes worthy of discussion must rely on energy and momentum transfer to the spacecraft from some external source such as a local force field, which in turn must obtain it from still other momentum and/or energy sources in the cosmos (in order to satisfy conservation of both energy and momentum).
Several people have speculated that the Casimir effect could be used to create a propellantless drive, often described as the "Casimir Sail", or a "Quantum Sail".
Field propulsion based on physical structure of space
This concept is based on the general relativity theory and the quantum field theory from which the idea that space has a physical structure can be proposed. The macroscopic structure is described by the general relativity theory and the microscopic structure by the quantum field theory.
The idea is to deform space around the spacecraft. By deforming the space it would be possible to create a region with higher pressure behind the spacecraft than in front of it. Due to the pressure gradient a force would be exerted on the spacecraft, which in turn creates thrust for propulsion. Due to the purely theoretical nature of this propulsion concept it is hard to determine the amount of thrust and the maximum velocity that could be achieved. Currently there are two different concepts for such a field propulsion system: one that is purely based on general relativity and one based on quantum field theory.
In the general relativistic field propulsion system, space is considered to be an elastic field similar to rubber, which means that space itself can be treated as an infinite elastic body. If space-time curves, a normal inward surface stress is generated which serves as a pressure field. By creating a great number of those curved surfaces behind the spacecraft it is possible to achieve a unidirectional surface force which can be used for the acceleration of the spacecraft.
For the quantum field theoretical propulsion system it is assumed, as stated by quantum field theory and quantum electrodynamics, that the quantum vacuum consists of an electromagnetic field in a non-radiating mode at a zero-point energy state, the lowest possible energy state. It is also theorized that matter is composed of elementary primary charged entities, partons, which are bound together as elementary oscillators. By applying an electromagnetic zero-point field, a Lorentz force is applied on the partons. Using this on a dielectric material could affect the inertia of the mass and in that way create an acceleration of the material without creating stress or strain inside the material.
Conservation Laws
Conservation of momentum is a fundamental requirement of propulsion systems because in experiments momentum is always conserved. This conservation law is implicit in the published work of Newton and Galileo, but arises on a fundamental level from the spatial translation symmetry of the laws of physics, as given by Noether's theorem. In each of the propulsion technologies, some form of energy exchange is required with momentum directed backward at the speed of light 'c' or some lesser velocity 'v' to balance the forward change of momentum. In the absence of interaction with an external field, the power 'P' that is required to create a thrust force 'F' is given by $P = Fv/2$ when mass is ejected at exhaust velocity $v$, or by $P = Fc$ if mass-free energy is ejected.
For a photon rocket the efficiency is too small to be competitive. Other technologies may have better efficiency if the ejection velocity is less than the speed of light, or if a local field can interact with another large-scale field of the same type residing in space, which is the intent of field effect propulsion.
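A back-of-the-envelope sketch of these relations in Python; the exhaust velocities quoted for the ion and chemical cases are round illustrative figures, not data from this article.

```python
c = 2.998e8   # speed of light, m/s

def power_per_newton(exhaust_velocity):
    """Ideal jet power (W) needed per newton of thrust when ejecting mass,
    using P = F v / 2."""
    return exhaust_velocity / 2.0

print(f"photon rocket (P = F c):    {c:.1e} W per N")
print(f"ion drive, ~30 km/s:        {power_per_newton(30e3):.1e} W per N")
print(f"chemical rocket, ~4.5 km/s: {power_per_newton(4.5e3):.1e} W per N")
```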
Advantages
The main advantage of a field propulsion system is that no propellant is needed, only an energy source. This means that no propellant has to be stored and transported with the spacecraft, which makes it attractive for long-term interplanetary or even interstellar crewed missions. With current technology a large amount of fuel meant for the way back has to be brought to the destination, which increases the mass of the overall spacecraft significantly. The increased mass of fuel thus requires more force to accelerate it, requiring even more fuel, which is the primary drawback of current rocket technology. Approximately 83% of a hydrogen–oxygen powered rocket that can achieve orbit is fuel.
Limits
The idea that with field propulsion no fuel tank would be required is technically inaccurate. The energy required to reach the high speeds involved becomes non-negligible for interstellar travel. For example, a 1-tonne spaceship traveling at 1/10 of the speed of light carries a kinetic energy of about 4.5 × 10^17 joules, equivalent to about 5 kg according to the mass–energy equivalence. This means that to accelerate to such a speed, no matter how this is achieved, the spaceship must have converted at least 5 kg of mass/energy into kinetic energy, assuming 100% efficiency. Although such mass has not been "expelled", it has still been "disposed of".
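A short numeric check of the figures in the preceding paragraph (the non-relativistic kinetic-energy formula is adequate at 0.1 c):

```python
c = 2.998e8      # speed of light, m/s
m = 1000.0       # spacecraft mass, kg (1 tonne)
v = 0.1 * c      # 1/10 of the speed of light

kinetic_energy = 0.5 * m * v**2           # ~4.5e17 J
mass_equivalent = kinetic_energy / c**2   # E = m c^2  ->  ~5 kg

print(f"kinetic energy:  {kinetic_energy:.2e} J")
print(f"mass equivalent: {mass_equivalent:.2f} kg")
```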
See also
References
External links
Examples of current field propulsion systems for ships.
Example of a possible field propulsion system based on existing physics and links to papers on the topic. broken link
Y. Minami., An Introduction to Concepts of Field Propulsion, JBIS,56,350-359(2003).
Minami Y., Musha T., Field Propulsion Systems for Space Travel, the Seventh IAA Symposium on Realistic Near-Term Advanced Scientific Space Missions, 11–13 July 2011, Aosta, Italy
Ed.T.Musha, Y.Minami, Field Propulsion System for Space Travel: Physics of Non-Conventional Propulsion Methods for Interstellar Travel, 2011 .
Field Resonance Propulsion Concept - NASA
ASPS
Biasing Nature's Omni-Vector Tensors via Dense, Co-aligned, Asymmetric Angular-Acceleration of Energy
Spacecraft propulsion
Science fiction themes
Hypothetical technology | 0.781875 | 0.975175 | 0.762465 |
Mohr's circle | Mohr's circle is a two-dimensional graphical representation of the transformation law for the Cauchy stress tensor.
Mohr's circle is often used in calculations relating to mechanical engineering for materials' strength, geotechnical engineering for strength of soils, and structural engineering for strength of built structures. It is also used for calculating stresses in many planes by reducing them to vertical and horizontal components. These are called principal planes in which principal stresses are calculated; Mohr's circle can also be used to find the principal planes and the principal stresses in a graphical representation, and is one of the easiest ways to do so.
After performing a stress analysis on a material body assumed as a continuum, the components of the Cauchy stress tensor at a particular material point are known with respect to a coordinate system. The Mohr circle is then used to determine graphically the stress components acting on a rotated coordinate system, i.e., acting on a differently oriented plane passing through that point.
The abscissa and ordinate ($\sigma_\mathrm{n}$, $\tau_\mathrm{n}$) of each point on the circle are the magnitudes of the normal stress and shear stress components, respectively, acting on the rotated coordinate system. In other words, the circle is the locus of points that represent the state of stress on individual planes at all their orientations, where the axes represent the principal axes of the stress element.
19th-century German engineer Karl Culmann was the first to conceive a graphical representation for stresses while considering longitudinal and vertical stresses in horizontal beams during bending. His work inspired fellow German engineer Christian Otto Mohr (the circle's namesake), who extended it to both two- and three-dimensional stresses and developed a failure criterion based on the stress circle.
Alternative graphical methods for the representation of the stress state at a point include the Lamé's stress ellipsoid and Cauchy's stress quadric.
The Mohr circle can be applied to any symmetric 2x2 tensor matrix, including the strain and moment of inertia tensors.
Motivation
Internal forces are produced between the particles of a deformable object, assumed as a continuum, as a reaction to applied external forces, i.e., either surface forces or body forces. This reaction follows from Euler's laws of motion for a continuum, which are equivalent to Newton's laws of motion for a particle. A measure of the intensity of these internal forces is called stress. Because the object is assumed as a continuum, these internal forces are distributed continuously within the volume of the object.
In engineering, e.g., structural, mechanical, or geotechnical, the stress distribution within an object, for instance stresses in a rock mass around a tunnel, airplane wings, or building columns, is determined through a stress analysis. Calculating the stress distribution implies the determination of stresses at every point (material particle) in the object. According to Cauchy, the stress at any point in an object (Figure 2), assumed as a continuum, is completely defined by the nine stress components of a second order tensor of type (2,0) known as the Cauchy stress tensor, :
After the stress distribution within the object has been determined with respect to a coordinate system , it may be necessary to calculate the components of the stress tensor at a particular material point with respect to a rotated coordinate system , i.e., the stresses acting on a plane with a different orientation passing through that point of interest —forming an angle with the coordinate system (Figure 3). For example, it is of interest to find the maximum normal stress and maximum shear stress, as well as the orientation of the planes where they act upon. To achieve this, it is necessary to perform a tensor transformation under a rotation of the coordinate system. From the definition of tensor, the Cauchy stress tensor obeys the tensor transformation law. A graphical representation of this transformation law for the Cauchy stress tensor is the Mohr circle for stress.
Mohr's circle for two-dimensional state of stress
In two dimensions, the stress tensor at a given material point with respect to any two perpendicular directions is completely defined by only three stress components. For the particular coordinate system (x, y) these stress components are: the normal stresses σx and σy, and the shear stress τxy. From the balance of angular momentum, the symmetry of the Cauchy stress tensor can be demonstrated. This symmetry implies that τxy = τyx. Thus, the Cauchy stress tensor can be written as:
The objective is to use the Mohr circle to find the stress components σn and τn on a rotated coordinate system (x′, y′), i.e., on a differently oriented plane passing through the point of interest and perpendicular to the x–y plane (Figure 4). The rotated coordinate system makes an angle θ with the original coordinate system (x, y).
Equation of the Mohr circle
To derive the equation of the Mohr circle for the two-dimensional cases of plane stress and plane strain, first consider a two-dimensional infinitesimal material element around a material point (Figure 4), with a unit area in the direction parallel to the - plane, i.e., perpendicular to the page or screen.
From equilibrium of forces on the infinitesimal element, the magnitudes of the normal stress σn and the shear stress τn are given by:
{| class="toccolours collapsible collapsed" width="60%" style="text-align:left"
!Derivation of Mohr's circle parametric equations - Equilibrium of forces
|-
|From equilibrium of forces in the direction of (-axis) (Figure 4), and knowing that the area of the plane where acts is , we have:
However, knowing that
we obtain
Now, from equilibrium of forces in the direction of τn (Figure 4), and knowing the area of the plane on which τn acts, we have:
However, knowing that
we obtain
Both equations can also be obtained by applying the tensor transformation law on the known Cauchy stress tensor, which is equivalent to performing the static equilibrium of forces in the direction of and .
{| class="toccolours collapsible collapsed" width="60%" style="text-align:left"
!Derivation of Mohr's circle parametric equations - Tensor transformation
|-
|The stress tensor transformation law can be stated as
Expanding the right hand side, and knowing that and , we have:
However, knowing that
we obtain
However, knowing that
we obtain
It is not necessary at this moment to calculate the stress component acting on the plane perpendicular to the plane of action of as it is not required for deriving the equation for the Mohr circle.
These two equations are the parametric equations of the Mohr circle. In these equations, 2θ is the parameter, and σn and τn are the coordinates. This means that by choosing a coordinate system with abscissa σn and ordinate τn, giving values to the parameter θ will place the points obtained lying on a circle.
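Written out explicitly, in the engineering mechanics sign convention used in this derivation and with the notation σx, σy, τxy and θ defined above, the parametric equations take the standard form
\sigma_\mathrm{n} = \frac{\sigma_x + \sigma_y}{2} + \frac{\sigma_x - \sigma_y}{2}\cos 2\theta + \tau_{xy}\sin 2\theta
\tau_\mathrm{n} = -\,\frac{\sigma_x - \sigma_y}{2}\sin 2\theta + \tau_{xy}\cos 2\theta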
Eliminating the parameter from these parametric equations will yield the non-parametric equation of the Mohr circle. This can be achieved by rearranging the equations for and , first transposing the first term in the first equation and squaring both sides of each of the equations then adding them. Thus we have
where
This is the equation of a circle (the Mohr circle) of the form
with radius R centered at a point with coordinates (σavg, 0) in the (σn, τn) coordinate system.
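In the same notation, the non-parametric form and the quantities appearing in it are
\left(\sigma_\mathrm{n} - \sigma_\mathrm{avg}\right)^{2} + \tau_\mathrm{n}^{2} = R^{2},
\qquad
\sigma_\mathrm{avg} = \frac{\sigma_x + \sigma_y}{2},
\qquad
R = \sqrt{\left(\frac{\sigma_x - \sigma_y}{2}\right)^{2} + \tau_{xy}^{2}}.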
Sign conventions
There are two separate sets of sign conventions that need to be considered when using the Mohr Circle: one sign convention for stress components in the "physical space", and another for stress components in the "Mohr-circle space". In addition, within each of the two sets of sign conventions, the engineering mechanics (structural engineering and mechanical engineering) literature follows a different sign convention from the geomechanics literature. There is no standard sign convention, and the choice of a particular sign convention is influenced by convenience for calculation and interpretation for the particular problem in hand. A more detailed explanation of these sign conventions is presented below.
The previous derivation for the equation of the Mohr Circle using Figure 4 follows the engineering mechanics sign convention. The engineering mechanics sign convention will be used for this article.
Physical-space sign convention
From the convention of the Cauchy stress tensor (Figure 3 and Figure 4), the first subscript in the stress components denotes the face on which the stress component acts, and the second subscript indicates the direction of the stress component. Thus is the shear stress acting on the face with normal vector in the positive direction of the -axis, and in the positive direction of the -axis.
In the physical-space sign convention, positive normal stresses are outward to the plane of action (tension), and negative normal stresses are inward to the plane of action (compression) (Figure 5).
In the physical-space sign convention, positive shear stresses act on positive faces of the material element in the positive direction of an axis. Also, positive shear stresses act on negative faces of the material element in the negative direction of an axis. A positive face has its normal vector in the positive direction of an axis, and a negative face has its normal vector in the negative direction of an axis. For example, the shear stresses and are positive because they act on positive faces, and they act as well in the positive direction of the -axis and the -axis, respectively (Figure 3). Similarly, the respective opposite shear stresses and acting in the negative faces have a negative sign because they act in the negative direction of the -axis and -axis, respectively.
Mohr-circle-space sign convention
In the Mohr-circle-space sign convention, normal stresses have the same sign as normal stresses in the physical-space sign convention: positive normal stresses act outward to the plane of action, and negative normal stresses act inward to the plane of action.
Shear stresses, however, have a different convention in the Mohr-circle space compared to the convention in the physical space. In the Mohr-circle-space sign convention, positive shear stresses rotate the material element in the counterclockwise direction, and negative shear stresses rotate the material in the clockwise direction. This way, the shear stress component is positive in the Mohr-circle space, and the shear stress component is negative in the Mohr-circle space.
Two options exist for drawing the Mohr-circle space, which produce a mathematically correct Mohr circle:
Positive shear stresses are plotted upward (Figure 5, sign convention #1)
Positive shear stresses are plotted downward, i.e., the -axis is inverted (Figure 5, sign convention #2).
Plotting positive shear stresses upward makes the angle on the Mohr circle have a positive rotation clockwise, which is opposite to the physical space convention. That is why some authors prefer plotting positive shear stresses downward, which makes the angle on the Mohr circle have a positive rotation counterclockwise, similar to the physical space convention for shear stresses.
To overcome the "issue" of having the shear stress axis downward in the Mohr-circle space, there is an alternative sign convention where positive shear stresses are assumed to rotate the material element in the clockwise direction and negative shear stresses are assumed to rotate the material element in the counterclockwise direction (Figure 5, option 3). This way, positive shear stresses are plotted upward in the Mohr-circle space and the angle has a positive rotation counterclockwise in the Mohr-circle space. This alternative sign convention produces a circle that is identical to the sign convention #2 in Figure 5 because a positive shear stress is also a counterclockwise shear stress, and both are plotted downward. Also, a negative shear stress is a clockwise shear stress, and both are plotted upward.
This article follows the engineering mechanics sign convention for the physical space and the alternative sign convention for the Mohr-circle space (sign convention #3 in Figure 5)
Drawing Mohr's circle
Assuming we know the stress components σx, σy, and τxy at a point P in the object under study, as shown in Figure 4, the following are the steps to construct the Mohr circle for the state of stresses at P:
Draw the Cartesian coordinate system (σn, τn) with a horizontal σn-axis and a vertical τn-axis.
Plot two points A and B in the (σn, τn) space corresponding to the known stress components on the two perpendicular planes A and B, respectively (Figure 4 and 6), following the chosen sign convention.
Draw the diameter of the circle by joining points A and B with a straight line AB.
Draw the Mohr Circle. The centre of the circle is the midpoint of the diameter line AB, which corresponds to the intersection of this line with the σn axis.
Finding principal normal stresses
The magnitudes of the principal stresses are the abscissas of the points C and E (Figure 6) where the circle intersects the σn-axis. The magnitude of the major principal stress σ1 is always the greatest absolute value of the abscissa of these two points. Likewise, the magnitude of the minor principal stress σ2 is always the lowest absolute value of the abscissa of these two points. As expected, the ordinates of these two points are zero, corresponding to the magnitude of the shear stress components on the principal planes. Alternatively, the values of the principal stresses can be found by
where the magnitude of the average normal stress is the abscissa of the centre , given by
and the length of the radius of the circle (based on the equation of a circle passing through two points), is given by
Finding maximum and minimum shear stresses
The maximum and minimum shear stresses correspond to the ordinates of the highest and lowest points on the circle, respectively. These points are located at the intersection of the circle with the vertical line passing through the center of the circle, . Thus, the magnitude of the maximum and minimum shear stresses are equal to the value of the circle's radius
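The relations above are easy to evaluate numerically. The following short Python sketch (an illustration, not part of the original article; the function and variable names are chosen here for convenience) computes the centre, radius, principal stresses and maximum in-plane shear stress from given σx, σy and τxy:
import math

def mohr_2d(sigma_x, sigma_y, tau_xy):
    # Centre (average normal stress) and radius of the Mohr circle
    sigma_avg = 0.5 * (sigma_x + sigma_y)
    radius = math.hypot(0.5 * (sigma_x - sigma_y), tau_xy)
    # Principal stresses are the intersections of the circle with the sigma_n axis
    sigma_1 = sigma_avg + radius
    sigma_2 = sigma_avg - radius
    # Maximum in-plane shear stress equals the radius
    tau_max = radius
    return sigma_avg, radius, sigma_1, sigma_2, tau_max

# Example usage with arbitrary values (units are whatever the inputs use)
print(mohr_2d(50.0, -10.0, 40.0))   # -> (20.0, 50.0, 70.0, -30.0, 50.0)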
Finding stress components on an arbitrary plane
As mentioned before, after the two-dimensional stress analysis has been performed we know the stress components , , and at a material point . These stress components act in two perpendicular planes and passing through as shown in Figure 5 and 6. The Mohr circle is used to find the stress components and , i.e., coordinates of any point on the circle, acting on any other plane passing through making an angle with the plane . For this, two approaches can be used: the double angle, and the Pole or origin of planes.
Double angle
As shown in Figure 6, to determine the stress components acting on a plane at an angle counterclockwise to the plane on which acts, we travel an angle in the same counterclockwise direction around the circle from the known stress point to point , i.e., an angle between lines and in the Mohr circle.
The double angle approach relies on the fact that the angle between the normal vectors to any two physical planes passing through (Figure 4) is half the angle between two lines joining their corresponding stress points on the Mohr circle and the centre of the circle.
This double angle relation comes from the fact that the parametric equations for the Mohr circle are a function of . It can also be seen that the planes and in the material element around of Figure 5 are separated by an angle , which in the Mohr circle is represented by a angle (double the angle).
Pole or origin of planes
The second approach involves the determination of a point on the Mohr circle called the pole or the origin of planes. Any straight line drawn from the pole will intersect the Mohr circle at a point that represents the state of stress on a plane inclined at the same orientation (parallel) in space as that line. Therefore, knowing the stress components and on any particular plane, one can draw a line parallel to that plane through the particular coordinates and on the Mohr circle and find the pole as the intersection of such line with the Mohr circle. As an example, let's assume we have a state of stress with stress components , , and , as shown on Figure 7. First, we can draw a line from point parallel to the plane of action of , or, if we choose otherwise, a line from point parallel to the plane of action of . The intersection of any of these two lines with the Mohr circle is the pole. Once the pole has been determined, to find the state of stress on a plane making an angle with the vertical, or in other words a plane having its normal vector forming an angle with the horizontal plane, then we can draw a line from the pole parallel to that plane (See Figure 7). The normal and shear stresses on that plane are then the coordinates of the point of intersection between the line and the Mohr circle.
Finding the orientation of the principal planes
The orientation of the planes where the maximum and minimum principal stresses act, also known as principal planes, can be determined by measuring in the Mohr circle the angles ∠BOC and ∠BOE, respectively, and taking half of each of those angles. Thus, the angle ∠BOC between and is double the angle which the major principal plane makes with plane .
The angles θp1 and θp2 can also be found from the following equation
This equation defines two values for θp which are 90° apart (Figure). This equation can be derived directly from the geometry of the circle, or by making the parametric equation of the circle for τn equal to zero (the shear stress in the principal planes is always zero).
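In the notation used above, the equation referred to is the standard relation
\tan 2\theta_\mathrm{p} = \frac{2\tau_{xy}}{\sigma_x - \sigma_y}.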
Example
Assume a material element under a state of stress as shown in Figure 8 and Figure 9, with the plane of one of its sides oriented 10° with respect to the horizontal plane.
Using the Mohr circle, find:
The orientation of their planes of action.
The maximum shear stresses and orientation of their planes of action.
The stress components on a horizontal plane.
Check the answers using the stress transformation formulas or the stress transformation law.
Solution:
Following the engineering mechanics sign convention for the physical space (Figure 5), the stress components for the material element in this example are:
.
Following the steps for drawing the Mohr circle for this particular state of stress, we first draw a Cartesian coordinate system with the -axis upward.
We then plot two points A(50,40) and B(-10,-40), representing the state of stress at planes A and B, as shown in both Figure 8 and Figure 9. These points follow the engineering mechanics sign convention for the Mohr-circle space (Figure 5), which assumes positive normal stresses outward from the material element, and positive shear stresses on each plane rotating the material element clockwise. This way, the shear stress acting on plane B is negative and the shear stress acting on plane A is positive.
The diameter of the circle is the line joining points A and B. The centre of the circle is the intersection of this line with the σn-axis. Knowing both the location of the centre and the length of the diameter, we are able to plot the Mohr circle for this particular state of stress.
The abscissas of the points E and C (Figure 8 and Figure 9), where the circle intersects the σn-axis, are the magnitudes of the minimum and maximum normal stresses, respectively; the ordinates of points E and C are the magnitudes of the shear stresses acting on the minor and major principal planes, respectively, which are zero for principal planes.
Even though the idea for using the Mohr circle is to graphically find different stress components by actually measuring the coordinates for different points on the circle, it is more convenient to confirm the results analytically. Thus, the radius and the abscissa of the centre of the circle are
and the principal stresses are
The coordinates for both points H and G (Figure 8 and Figure 9) are the magnitudes of the minimum and maximum shear stresses, respectively; the abscissas for both points H and G are the magnitudes for the normal stresses acting on the same planes where the minimum and maximum shear stresses act, respectively.
The magnitudes of the minimum and maximum shear stresses can be found analytically by
and the normal stresses acting on the same planes where the minimum and maximum shear stresses act are equal to
We can choose to either use the double angle approach (Figure 8) or the Pole approach (Figure 9) to find the orientation of the principal normal stresses and principal shear stresses.
Using the double angle approach we measure the angles ∠BOC and ∠BOE in the Mohr Circle (Figure 8) to find double the angle the major principal stress and the minor principal stress make with plane B in the physical space. To obtain a more accurate value for these angles, instead of manually measuring the angles, we can use the analytical expression
One solution is: .
From inspection of Figure 8, this value corresponds to the angle ∠BOE. Thus, the minor principal angle is
Then, the major principal angle is
Remember that in this particular example and are angles with respect to the plane of action of (oriented in the -axis) and not angles with respect to the plane of action of (oriented in the -axis).
Using the Pole approach, we first localize the Pole or origin of planes. For this, we draw through point A on the Mohr circle a line inclined 10° with the horizontal, or, in other words, a line parallel to plane A where acts. The Pole is where this line intersects the Mohr circle (Figure 9). To confirm the location of the Pole, we could draw a line through point B on the Mohr circle parallel to the plane B where acts. This line would also intersect the Mohr circle at the Pole (Figure 9).
From the Pole, we draw lines to different points on the Mohr circle. The coordinates of the points where these lines intersect the Mohr circle indicate the stress components acting on a plane in the physical space having the same inclination as the line. For instance, the line from the Pole to point C in the circle has the same inclination as the plane in the physical space where acts. This plane makes an angle of 63.435° with plane B, both in the Mohr-circle space and in the physical space. In the same way, lines are traced from the Pole to points E, D, F, G and H to find the stress components on planes with the same orientation.
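The numbers quoted in this example can be reproduced directly from the two plotted points A(50, 40) and B(−10, −40) (a verification sketch only; units are omitted, as in the example itself, and the variable names are chosen here):
import math

# Stress points plotted on the Mohr circle (sigma, tau) for planes A and B
A = (50.0, 40.0)
B = (-10.0, -40.0)

# Centre O and radius R of the circle through A and B (A and B are diametrically opposite)
centre = (0.5 * (A[0] + B[0]), 0.0)                     # (20.0, 0.0)
R = math.hypot(A[0] - centre[0], A[1])                  # 50.0

# Principal stresses: intersections with the sigma_n axis
sigma_max = centre[0] + R                               # 70.0 (point C)
sigma_min = centre[0] - R                               # -30.0 (point E)
tau_max = R                                             # 50.0

# Angle BOC on the circle; half of it is the angle between plane B and
# the major principal plane in physical space (double angle rule)
OB = (B[0] - centre[0], B[1])                           # (-30.0, -40.0)
OC = (sigma_max - centre[0], 0.0)                       # (50.0, 0.0)
angle_BOC = math.degrees(math.acos(
    (OB[0] * OC[0] + OB[1] * OC[1]) / (math.hypot(*OB) * math.hypot(*OC))))
print(sigma_max, sigma_min, tau_max, angle_BOC / 2)     # 70.0 -30.0 50.0 63.43...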
Mohr's circle for a general three-dimensional state of stresses
To construct the Mohr circle for a general three-dimensional case of stresses at a point, the values of the principal stresses and their principal directions must be first evaluated.
Considering the principal axes as the coordinate system, instead of the general , , coordinate system, and assuming that , then the normal and shear components of the stress vector , for a given plane with unit vector , satisfy the following equations
Knowing that , we can solve for , , , using the Gauss elimination method which yields
Since , and is non-negative, the numerators from these equations satisfy
as the denominator and
as the denominator and
as the denominator and
These expressions can be rewritten as
which are the equations of the three Mohr's circles for stress , , and , with radii , , and , and their centres with coordinates , , , respectively.
These equations for the Mohr circles show that all admissible stress points lie on these circles or within the shaded area enclosed by them (see Figure 10). Stress points satisfying the equation for circle lie on, or outside circle . Stress points satisfying the equation for circle lie on, or inside circle . And finally, stress points satisfying the equation for circle lie on, or outside circle .
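With the ordering σ1 ≥ σ2 ≥ σ3 assumed above, the three circles have centres ((σ2 + σ3)/2, 0), ((σ1 + σ3)/2, 0) and ((σ1 + σ2)/2, 0), radii (σ2 − σ3)/2, (σ1 − σ3)/2 and (σ1 − σ2)/2, and the admissible combinations of normal and shear stress satisfy the standard inequalities
\left(\sigma_\mathrm{n} - \tfrac{\sigma_2 + \sigma_3}{2}\right)^{2} + \tau_\mathrm{n}^{2} \ge \left(\tfrac{\sigma_2 - \sigma_3}{2}\right)^{2},
\qquad
\left(\sigma_\mathrm{n} - \tfrac{\sigma_1 + \sigma_3}{2}\right)^{2} + \tau_\mathrm{n}^{2} \le \left(\tfrac{\sigma_1 - \sigma_3}{2}\right)^{2},
\qquad
\left(\sigma_\mathrm{n} - \tfrac{\sigma_1 + \sigma_2}{2}\right)^{2} + \tau_\mathrm{n}^{2} \ge \left(\tfrac{\sigma_1 - \sigma_2}{2}\right)^{2}.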
See also
Critical plane analysis
References
Bibliography
External links
Mohr's Circle and more circles by Rebecca Brannon
DoITPoMS Teaching and Learning Package- "Stress Analysis and Mohr's Circle"
Classical mechanics
Elasticity (physics)
Solid mechanics
Mechanics
Circles
Potential gradient | In physics, chemistry, and biology, a potential gradient is the local rate of change of the potential with respect to displacement, i.e. the spatial derivative, or gradient. This quantity frequently occurs in equations of physical processes because it leads to some form of flux.
Definition
One dimension
The simplest definition for a potential gradient F in one dimension is the following:
where is some type of scalar potential and is displacement (not distance) in the direction, the subscripts label two different positions , and potentials at those points, . In the limit of infinitesimal displacements, the ratio of differences becomes a ratio of differentials:
The direction of the electric potential gradient is from the point at lower potential toward the point at higher potential, i.e. in the direction of increasing potential.
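In symbols, with ϕ1 and ϕ2 denoting the potentials at the positions x1 and x2 (the notation assumed here), the one-dimensional definition reads
F = \frac{\phi_2 - \phi_1}{x_2 - x_1} \;\longrightarrow\; F = \frac{\mathrm{d}\phi}{\mathrm{d}x}.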
Three dimensions
In three dimensions, Cartesian coordinates make it clear that the resultant potential gradient is the sum of the potential gradients in each direction:
where are unit vectors in the directions. This can be compactly written in terms of the gradient operator ,
although this final form holds in any curvilinear coordinate system, not just Cartesian.
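Explicitly, with ex, ey, ez the unit vectors along the Cartesian axes, this is the standard form
\mathbf{F} = \frac{\partial \phi}{\partial x}\,\mathbf{e}_x + \frac{\partial \phi}{\partial y}\,\mathbf{e}_y + \frac{\partial \phi}{\partial z}\,\mathbf{e}_z = \nabla \phi.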
This expression represents a significant feature of any conservative vector field , namely has a corresponding potential .
Using Stokes' theorem, this is equivalently stated as
meaning the curl, denoted ∇×, of the vector field vanishes.
Physics
Newtonian gravitation
In the case of the gravitational field , which can be shown to be conservative, it is equal to the gradient in gravitational potential :
There are opposite signs between gravitational field and potential, because the potential gradient and field are opposite in direction: as the potential increases, the gravitational field strength decreases and vice versa.
Electromagnetism
In electrostatics, the electric field is independent of time , so there is no induction of a time-dependent magnetic field by Faraday's law of induction:
which implies is the gradient of the electric potential , identical to the classical gravitational field:
In electrodynamics, the field is time dependent and induces a time-dependent field also (again by Faraday's law), so the curl of is not zero like before, which implies the electric field is no longer the gradient of electric potential. A time-dependent term must be added:
where is the electromagnetic vector potential. This last potential expression in fact reduces Faraday's law to an identity.
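Using the usual sign convention (stated here as the standard convention rather than quoted from this text), these two cases read
\mathbf{E} = -\nabla V \quad \text{(electrostatics)},
\qquad
\mathbf{E} = -\nabla V - \frac{\partial \mathbf{A}}{\partial t} \quad \text{(electrodynamics)}.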
Fluid mechanics
In fluid mechanics, the velocity field describes the fluid motion. An irrotational flow means the velocity field is conservative, or equivalently the vorticity pseudovector field is zero:
This allows the velocity potential to be defined simply as:
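In the sign convention most commonly used for irrotational flow (an assumption here, since some texts adopt the opposite sign), this reads
\nabla \times \mathbf{v} = \mathbf{0} \quad\Longrightarrow\quad \mathbf{v} = \nabla \phi.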
Chemistry
In an electrochemical half-cell, at the interface between the electrolyte (an ionic solution) and the metal electrode, the standard electric potential difference is:
where R = gas constant, T = temperature of solution, z = valency of the metal, e = elementary charge, NA = Avogadro constant, and aM+z is the activity of the ions in solution. Quantities with superscript ⊖ denote the measurement is taken under standard conditions. The potential gradient is relatively abrupt, since there is an almost definite boundary between the metal and solution, hence the interface term.
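The relation referred to above is the Nernst equation; with the quantities defined in this paragraph, and writing the Faraday constant as F = NA e, it takes the standard form
\Delta\phi = \Delta\phi^{\ominus} + \frac{RT}{z N_\mathrm{A} e}\,\ln a_{M^{z+}}.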
Biology
In biology, a potential gradient is the net difference in electric charge across a cell membrane.
Non-uniqueness of potentials
Since gradients in potentials correspond to physical fields, it makes no difference if a constant is added on (it is erased by the gradient operator which includes partial differentiation). This means there is no way to tell what the "absolute value" of the potential "is" – the zero value of potential is completely arbitrary and can be chosen anywhere by convenience (even "at infinity"). This idea also applies to vector potentials, and is exploited in classical field theory and also gauge field theory.
Absolute values of potentials are not physically observable, only gradients and path-dependent potential differences are. However, the Aharonov–Bohm effect is a quantum mechanical effect which illustrates that non-zero electromagnetic potentials along a closed loop (even when the and fields are zero everywhere in the region) lead to changes in the phase of the wave function of an electrically charged particle in the region, so the potentials appear to have measurable significance.
Potential theory
Field equations, such as Gauss's laws for electricity, for magnetism, and for gravity, can be written in the form:
where is the electric charge density, monopole density (should they exist), or mass density and is a constant (in terms of physical constants , , and other numerical factors).
Scalar potential gradients lead to Poisson's equation:
A general theory of potentials has been developed to solve this equation for the potential. The gradient of that solution gives the physical field, solving the field equation.
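In the notation of this section, writing the constant as X (a symbol introduced here, since the original one is not shown) and combining the field equation with the potential gradient gives
\nabla \cdot \mathbf{F} = X\rho, \qquad \mathbf{F} = \nabla\phi \quad\Longrightarrow\quad \nabla^{2}\phi = X\rho,
with the overall sign absorbed into the convention chosen for the potential.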
See also
Tensors in curvilinear coordinates
References
Concepts in physics
Spatial gradient
Convergent evolution | Convergent evolution is the independent evolution of similar features in species of different periods or epochs in time. Convergent evolution creates analogous structures that have similar form or function but were not present in the last common ancestor of those groups. The cladistic term for the same phenomenon is homoplasy. The recurrent evolution of flight is a classic example, as flying insects, birds, pterosaurs, and bats have independently evolved the useful capacity of flight. Functionally similar features that have arisen through convergent evolution are analogous, whereas homologous structures or traits have a common origin but can have dissimilar functions. Bird, bat, and pterosaur wings are analogous structures, but their forelimbs are homologous, sharing an ancestral state despite serving different functions.
The opposite of convergence is divergent evolution, where related species evolve different traits. Convergent evolution is similar to parallel evolution, which occurs when two independent species evolve in the same direction and thus independently acquire similar characteristics; for instance, gliding frogs have evolved in parallel from multiple types of tree frog.
Many instances of convergent evolution are known in plants, including the repeated development of C4 photosynthesis, seed dispersal by fleshy fruits adapted to be eaten by animals, and carnivory.
Overview
In morphology, analogous traits arise when different species live in similar ways and/or a similar environment, and so face the same environmental factors. When occupying similar ecological niches (that is, a distinctive way of life) similar problems can lead to similar solutions. The British anatomist Richard Owen was the first to identify the fundamental difference between analogies and homologies.
In biochemistry, physical and chemical constraints on mechanisms have caused some active site arrangements such as the catalytic triad to evolve independently in separate enzyme superfamilies.
In his 1989 book Wonderful Life, Stephen Jay Gould argued that if one could "rewind the tape of life [and] the same conditions were encountered again, evolution could take a very different course." Simon Conway Morris disputes this conclusion, arguing that convergence is a dominant force in evolution, and given that the same environmental and physical constraints are at work, life will inevitably evolve toward an "optimum" body plan, and at some point, evolution is bound to stumble upon intelligence, a trait presently identified with at least primates, corvids, and cetaceans.
Distinctions
Cladistics
In cladistics, a homoplasy is a trait shared by two or more taxa for any reason other than that they share a common ancestry. Taxa which do share ancestry are part of the same clade; cladistics seeks to arrange them according to their degree of relatedness to describe their phylogeny. Homoplastic traits caused by convergence are therefore, from the point of view of cladistics, confounding factors which could lead to an incorrect analysis.
Atavism
In some cases, it is difficult to tell whether a trait has been lost and then re-evolved convergently, or whether a gene has simply been switched off and then re-enabled later. Such a re-emerged trait is called an atavism. From a mathematical standpoint, an unused gene (selectively neutral) has a steadily decreasing probability of retaining potential functionality over time. The time scale of this process varies greatly in different phylogenies; in mammals and birds, there is a reasonable probability of remaining in the genome in a potentially functional state for around 6 million years.
Parallel vs. convergent evolution
When two species are similar in a particular character, evolution is defined as parallel if the ancestors were also similar, and convergent if they were not. Some scientists have argued that there is a continuum between parallel and convergent evolution, while others maintain that despite some overlap, there are still important distinctions between the two.
When the ancestral forms are unspecified or unknown, or the range of traits considered is not clearly specified, the distinction between parallel and convergent evolution becomes more subjective. For instance, the striking example of similar placental and marsupial forms is described by Richard Dawkins in The Blind Watchmaker as a case of convergent evolution, because mammals on each continent had a long evolutionary history prior to the extinction of the dinosaurs under which to accumulate relevant differences.
At molecular level
Proteins
Protease active sites
The enzymology of proteases provides some of the clearest examples of convergent evolution. These examples reflect the intrinsic chemical constraints on enzymes, leading evolution to converge on equivalent solutions independently and repeatedly.
Serine and cysteine proteases use different amino acid functional groups (alcohol or thiol) as a nucleophile. In order to activate that nucleophile, they orient an acidic and a basic residue in a catalytic triad. The chemical and physical constraints on enzyme catalysis have caused identical triad arrangements to evolve independently more than 20 times in different enzyme superfamilies.
Threonine proteases use the amino acid threonine as their catalytic nucleophile. Unlike cysteine and serine, threonine is a secondary alcohol (i.e. has a methyl group). The methyl group of threonine greatly restricts the possible orientations of triad and substrate, as the methyl clashes with either the enzyme backbone or the histidine base. Consequently, most threonine proteases use an N-terminal threonine in order to avoid such steric clashes.
Several evolutionarily independent enzyme superfamilies with different protein folds use the N-terminal residue as a nucleophile. This commonality of active site but difference of protein fold indicates that the active site evolved convergently in those families.
Cone snail and fish insulin
Conus geographus produces a distinct form of insulin that is more similar to fish insulin protein sequences than to insulin from more closely related molluscs, suggesting convergent evolution, though with the possibility of horizontal gene transfer.
Ferrous iron uptake via protein transporters in land plants and chlorophytes
Distant homologues of the metal ion transporters ZIP in land plants and chlorophytes have converged in structure, likely to take up Fe2+ efficiently. The IRT1 proteins from Arabidopsis thaliana and rice have extremely different amino acid sequences from Chlamydomonas's IRT1, but their three-dimensional structures are similar, suggesting convergent evolution.
Na+,K+-ATPase and Insect resistance to cardiotonic steroids
Many examples of convergent evolution exist in insects in terms of developing resistance at a molecular level to toxins. One well-characterized example is the evolution of resistance to cardiotonic steroids (CTSs) via amino acid substitutions at well-defined positions of the α-subunit of Na+,K+-ATPase (ATPalpha). Variation in ATPalpha has been surveyed in various CTS-adapted species spanning six insect orders. Among 21 CTS-adapted species, 58 (76%) of 76 amino acid substitutions at sites implicated in CTS resistance occur in parallel in at least two lineages. 30 of these substitutions (40%) occur at just two sites in the protein (positions 111 and 122). CTS-adapted species have also recurrently evolved neo-functionalized duplications of ATPalpha, with convergent tissue-specific expression patterns.
Nucleic acids
Convergence occurs at the level of DNA and the amino acid sequences produced by translating structural genes into proteins. Studies have found convergence in amino acid sequences in echolocating bats and the dolphin; among marine mammals; between giant and red pandas; and between the thylacine and canids. Convergence has also been detected in a type of non-coding DNA, cis-regulatory elements, such as in their rates of evolution; this could indicate either positive selection or relaxed purifying selection.
In animal morphology
Bodyplans
Swimming animals including fish such as herrings, marine mammals such as dolphins, and ichthyosaurs (of the Mesozoic) all converged on the same streamlined shape. A similar shape and swimming adaptations are even present in molluscs, such as Phylliroe. The fusiform bodyshape (a tube tapered at both ends) adopted by many aquatic animals is an adaptation to enable them to travel at high speed in a high drag environment. Similar body shapes are found in the earless seals and the eared seals: they still have four legs, but these are strongly modified for swimming.
The marsupial fauna of Australia and the placental mammals of the Old World have several strikingly similar forms, developed in two clades, isolated from each other. The body, and especially the skull shape, of the thylacine (Tasmanian tiger or Tasmanian wolf) converged with those of Canidae such as the red fox, Vulpes vulpes.
Echolocation
As a sensory adaptation, echolocation has evolved separately in cetaceans (dolphins and whales) and bats, but from the same genetic mutations.
Electric fishes
The Gymnotiformes of South America and the Mormyridae of Africa independently evolved passive electroreception (around 119 and 110 million years ago, respectively). Around 20 million years after acquiring that ability, both groups evolved active electrogenesis, producing weak electric fields to help them detect prey.
Eyes
One of the best-known examples of convergent evolution is the camera eye of cephalopods (such as squid and octopus), vertebrates (including mammals) and cnidaria (such as jellyfish). Their last common ancestor had at most a simple photoreceptive spot, but a range of processes led to the progressive refinement of camera eyes—with one sharp difference: the cephalopod eye is "wired" in the opposite direction, with blood and nerve vessels entering from the back of the retina, rather than the front as in vertebrates. As a result, vertebrates have a blind spot.
Flight
Birds and bats have homologous limbs because they are both ultimately derived from terrestrial tetrapods, but their flight mechanisms are only analogous, so their wings are examples of functional convergence. The two groups have independently evolved their own means of powered flight. Their wings differ substantially in construction. The bat wing is a membrane stretched across four extremely elongated fingers and the legs. The airfoil of the bird wing is made of feathers, strongly attached to the forearm (the ulna) and the highly fused bones of the wrist and hand (the carpometacarpus), with only tiny remnants of two fingers remaining, each anchoring a single feather. So, while the wings of bats and birds are functionally convergent, they are not anatomically convergent. Birds and bats also share a high concentration of cerebrosides in the skin of their wings. This improves skin flexibility, a trait useful for flying animals; other mammals have a far lower concentration. The extinct pterosaurs independently evolved wings from their fore- and hindlimbs, while insects have wings that evolved separately from different organs.
Flying squirrels and sugar gliders are much alike in their body plans, with gliding wings stretched between their limbs, but flying squirrels are placental mammals while sugar gliders are marsupials, widely separated within the mammal lineage from the placentals.
Hummingbird hawk-moths and hummingbirds have evolved similar flight and feeding patterns.
Insect mouthparts
Insect mouthparts show many examples of convergent evolution. The mouthparts of different insect groups consist of a set of homologous organs, specialised for the dietary intake of that insect group. Convergent evolution of many groups of insects led from original biting-chewing mouthparts to different, more specialised, derived function types. These include, for example, the proboscis of flower-visiting insects such as bees and flower beetles, or the biting-sucking mouthparts of blood-sucking insects such as fleas and mosquitos.
Opposable thumbs
Opposable thumbs allowing the grasping of objects are most often associated with primates, like humans and other apes, monkeys, and lemurs. Opposable thumbs also evolved in giant pandas, but these are completely different in structure, having six fingers including the thumb, which develops from a wrist bone entirely separately from other fingers.
Primates
Convergent evolution in humans includes blue eye colour and light skin colour. When humans migrated out of Africa, they moved to more northern latitudes with less intense sunlight. It was beneficial to them to reduce their skin pigmentation. It appears certain that there was some lightening of skin colour before European and East Asian lineages diverged, as there are some skin-lightening genetic differences that are common to both groups. However, after the lineages diverged and became genetically isolated, the skin of both groups lightened more, and that additional lightening was due to different genetic changes.
Lemurs and humans are both primates. Ancestral primates had brown eyes, as most primates do today. The genetic basis of blue eyes in humans has been studied in detail and much is known about it. It is not the case that one gene locus is responsible, say with brown dominant to blue eye colour. However, a single locus is responsible for about 80% of the variation. In lemurs, the differences between blue and brown eyes are not completely known, but the same gene locus is not involved.
In plants
The annual life-cycle
While most plant species are perennial, about 6% follow an annual life cycle, living for only one growing season. The annual life cycle independently emerged in over 120 plant families of angiosperms. The prevalence of annual species increases under hot-dry summer conditions in the four species-rich families of annuals (Asteraceae, Brassicaceae, Fabaceae, and Poaceae), indicating that the annual life cycle is adaptive.
Carbon fixation
C4 photosynthesis, one of the three major carbon-fixing biochemical processes, has arisen independently up to 40 times. About 7,600 plant species of angiosperms use C4 carbon fixation, with many monocots including 46% of grasses such as maize and sugar cane, and dicots including several species in the Chenopodiaceae and the Amaranthaceae.
Fruits
Fruits with a wide variety of structural origins have converged to become edible. Apples are pomes with five carpels; their accessory tissues form the apple's core, surrounded by structures from outside the botanical fruit, the receptacle or hypanthium. Other edible fruits include other plant tissues; the fleshy part of a tomato is the walls of the pericarp. This implies convergent evolution under selective pressure, in this case the competition for seed dispersal by animals through consumption of fleshy fruits.
Seed dispersal by ants (myrmecochory) has evolved independently more than 100 times, and is present in more than 11,000 plant species. It is one of the most dramatic examples of convergent evolution in biology.
Carnivory
Carnivory has evolved multiple times independently in plants in widely separated groups. In three species studied, Cephalotus follicularis, Nepenthes alata and Sarracenia purpurea, there has been convergence at the molecular level. Carnivorous plants secrete enzymes into the digestive fluid they produce. By studying phosphatase, glycoside hydrolase, glucanase, RNAse and chitinase enzymes as well as a pathogenesis-related protein and a thaumatin-related protein, the authors found many convergent amino acid substitutions. These changes were not at the enzymes' catalytic sites, but rather on the exposed surfaces of the proteins, where they might interact with other components of the cell or the digestive fluid. The authors also found that homologous genes in the non-carnivorous plant Arabidopsis thaliana tend to have their expression increased when the plant is stressed, leading the authors to suggest that stress-responsive proteins have often been co-opted in the repeated evolution of carnivory.
Methods of inference
Phylogenetic reconstruction and ancestral state reconstruction proceed by assuming that evolution has occurred without convergence. Convergent patterns may, however, appear at higher levels in a phylogenetic reconstruction, and are sometimes explicitly sought by investigators. The methods applied to infer convergent evolution depend on whether pattern-based or process-based convergence is expected. Pattern-based convergence is the broader term, for when two or more lineages independently evolve patterns of similar traits. Process-based convergence is when the convergence is due to similar forces of natural selection.
Pattern-based measures
Earlier methods for measuring convergence incorporate ratios of phenotypic and phylogenetic distance by simulating evolution with a Brownian motion model of trait evolution along a phylogeny. More recent methods also quantify the strength of convergence. One drawback to keep in mind is that these methods can confuse long-term stasis with convergence due to phenotypic similarities. Stasis occurs when there is little evolutionary change among taxa.
Distance-based measures assess the degree of similarity between lineages over time. Frequency-based measures assess the number of lineages that have evolved in a particular trait space.
Process-based measures
Methods to infer process-based convergence fit models of selection to a phylogeny and continuous trait data to determine whether the same selective forces have acted upon lineages. This uses the Ornstein–Uhlenbeck process to test different scenarios of selection. Other methods rely on an a priori specification of where shifts in selection have occurred.
See also
: the presence of multiple alleles in ancestral populations might lead to the impression that convergent evolution has occurred.
Iterative evolution – The repeated evolution of a specific trait or body plan from the same ancestral lineage at different points in time.
Breeding back – A form of selective breeding to recreate the traits of an extinct species, but the genome will differ from the original species.
Orthogenesis (contrastable with convergent evolution; involves teleology)
Contingency (evolutionary biology) – effect of evolutionary history on outcomes
Notes
References
Further reading
External links
Convergent evolution
Evolutionary biology terminology
Euler angles | The Euler angles are three angles introduced by Leonhard Euler to describe the orientation of a rigid body with respect to a fixed coordinate system.
They can also represent the orientation of a mobile frame of reference in physics or the orientation of a general basis in three dimensional linear algebra.
Classic Euler angles usually take the inclination angle in such a way that zero degrees represent the vertical orientation. Alternative forms were later introduced by Peter Guthrie Tait and George H. Bryan intended for use in aeronautics and engineering in which zero degrees represent the horizontal position.
Chained rotations equivalence
Euler angles can be defined by elemental geometry or by composition of rotations (i.e. chained rotations). The geometrical definition demonstrates that three composed elemental rotations (rotations about the axes of a coordinate system) are always sufficient to reach any target frame.
The three elemental rotations may be extrinsic (rotations about the axes xyz of the original coordinate system, which is assumed to remain motionless), or intrinsic (rotations about the axes of the rotating coordinate system XYZ, solidary with the moving body, which changes its orientation with respect to the extrinsic frame after each elemental rotation).
In the sections below, an axis designation with a prime mark superscript (e.g., z″) denotes the new axis after an elemental rotation.
Euler angles are typically denoted as α, β, γ, or ψ, θ, φ. Different authors may use different sets of rotation axes to define Euler angles, or different names for the same angles. Therefore, any discussion employing Euler angles should always be preceded by their definition.
Without considering the possibility of using two different conventions for the definition of the rotation axes (intrinsic or extrinsic), there exist twelve possible sequences of rotation axes, divided into two groups:
Proper Euler angles (z-x-z, x-y-x, y-z-y, z-y-z, x-z-x, y-x-y)
Tait–Bryan angles (x-y-z, y-z-x, z-x-y, x-z-y, z-y-x, y-x-z).
Tait–Bryan angles are also called Cardan angles; nautical angles; heading, elevation, and bank; or yaw, pitch, and roll. Sometimes, both kinds of sequences are called "Euler angles". In that case, the sequences of the first group are called proper or classic Euler angles.
Classic Euler angles
The Euler angles are three angles introduced by Swiss mathematician Leonhard Euler (1707–1783) to describe the orientation of a rigid body with respect to a fixed coordinate system.
Geometrical definition
The axes of the original frame are denoted as x, y, z and the axes of the rotated frame as X, Y, Z. The geometrical definition (sometimes referred to as static) begins by defining the line of nodes (N) as the intersection of the planes xy and XY (it can also be defined as the common perpendicular to the axes z and Z and then written as the vector product N = z × Z). Using it, the three Euler angles can be defined as follows:
α (or φ) is the signed angle between the x axis and the N axis (x-convention – it could also be defined between y and N, called y-convention).
β (or θ) is the angle between the z axis and the Z axis.
γ (or ψ) is the signed angle between the N axis and the X axis (x-convention).
Euler angles between two reference frames are defined only if both frames have the same handedness.
Conventions by intrinsic rotations
Intrinsic rotations are elemental rotations that occur about the axes of a coordinate system XYZ attached to a moving body. Therefore, they change their orientation after each elemental rotation. The XYZ system rotates, while xyz is fixed. Starting with XYZ overlapping xyz, a composition of three intrinsic rotations can be used to reach any target orientation for XYZ.
Euler angles can be defined by intrinsic rotations. The rotated frame XYZ may be imagined to be initially aligned with xyz, before undergoing the three elemental rotations represented by Euler angles. Its successive orientations may be denoted as follows:
x-y-z or x0-y0-z0 (initial)
x′-y′-z′ or x1-y1-z1 (after first rotation)
x″-y″-z″ or x2-y2-z2 (after second rotation)
X-Y-Z or x3-y3-z3 (final)
For the above-listed sequence of rotations, the line of nodes N can be simply defined as the orientation of X after the first elemental rotation. Hence, N can be simply denoted x′. Moreover, since the third elemental rotation occurs about Z, it does not change the orientation of Z. Hence Z coincides with z″. This allows us to simplify the definition of the Euler angles as follows:
α (or φ) represents a rotation around the z axis,
β (or θ) represents a rotation around the x′ axis,
γ (or ψ) represents a rotation around the z″ axis.
Conventions by extrinsic rotations
Extrinsic rotations are elemental rotations that occur about the axes of the fixed coordinate system xyz. The XYZ system rotates, while xyz is fixed. Starting with XYZ overlapping xyz, a composition of three extrinsic rotations can be used to reach any target orientation for XYZ. The Euler or Tait–Bryan angles (α, β, γ) are the amplitudes of these elemental rotations. For instance, the target orientation can be reached as follows (note the reversed order of Euler angle application):
The XYZ system rotates about the z axis by γ. The X axis is now at angle γ with respect to the x axis.
The XYZ system rotates again, but this time about the x axis by β. The Z axis is now at angle β with respect to the z axis.
The XYZ system rotates a third time, about the z axis again, by angle α.
In sum, the three elemental rotations occur about z, x and z. Indeed, this sequence is often denoted z-x-z (or 3-1-3). Sets of rotation axes associated with both proper Euler angles and Tait–Bryan angles are commonly named using this notation (see above for details).
If each step of the rotation acts on the rotating coordinate system XYZ, the rotation is intrinsic (Z-X'-Z''). Intrinsic rotation can also be denoted 3-1-3.
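This equivalence can be checked numerically. The sketch below (an illustration; the helper names are chosen here and are not from any particular library) composes the elemental rotation matrices by the two rules, intrinsic rotations multiplying on the right and extrinsic rotations on the left, and confirms that the intrinsic z-x′-z″ sequence equals the extrinsic z-x-z sequence with the angles applied in reverse order:
import numpy as np

def Rz(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def Rx(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

alpha, beta, gamma = 0.3, 1.1, -0.7     # arbitrary test angles in radians

# Intrinsic z-x'-z'': each new rotation is about a moving axis -> multiply on the right
R_intrinsic = np.eye(3)
for step in (Rz(alpha), Rx(beta), Rz(gamma)):
    R_intrinsic = R_intrinsic @ step

# Extrinsic z-x-z with the angles in reverse order (gamma first):
# each new rotation is about a fixed axis -> multiply on the left
R_extrinsic = np.eye(3)
for step in (Rz(gamma), Rx(beta), Rz(alpha)):
    R_extrinsic = step @ R_extrinsic

print(np.allclose(R_intrinsic, R_extrinsic))   # True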
Signs, ranges and conventions
Angles are commonly defined according to the right-hand rule. Namely, they have positive values when they represent a rotation that appears clockwise when looking in the positive direction of the axis, and negative values when the rotation appears counter-clockwise. The opposite convention (left hand rule) is less frequently adopted.
About the ranges (using interval notation):
for α and γ, the range is defined modulo 2π radians. For instance, a valid range could be [−π, π].
for β, the range covers π radians (but cannot be said to be modulo π). For example, it could be [0, π] or [−π/2, π/2].
The angles α, β and γ are uniquely determined except for the singular case that the xy and the XY planes are identical, i.e. when the z axis and the Z axis have the same or opposite directions. Indeed, if the z axis and the Z axis are the same, β = 0 and only (α + γ) is uniquely defined (not the individual values), and, similarly, if the z axis and the Z axis are opposite, β = π and only (α − γ) is uniquely defined (not the individual values). These ambiguities are known as gimbal lock in applications.
There are six possibilities of choosing the rotation axes for proper Euler angles. In all of them, the first and third rotation axes are the same. The six possible sequences are:
z1-x′-z2″ (intrinsic rotations) or z2-x-z1 (extrinsic rotations)
x1-y′-x2″ (intrinsic rotations) or x2-y-x1 (extrinsic rotations)
y1-z′-y2″ (intrinsic rotations) or y2-z-y1 (extrinsic rotations)
z1-y′-z2″ (intrinsic rotations) or z2-y-z1 (extrinsic rotations)
x1-z′-x2″ (intrinsic rotations) or x2-z-x1 (extrinsic rotations)
y1-x′-y2″ (intrinsic rotations) or y2-x-y1 (extrinsic rotations)
Precession, nutation and intrinsic rotation
Precession, nutation, and intrinsic rotation (spin) are defined as the movements obtained by changing one of the Euler angles while leaving the other two constant. These motions are not expressed in terms of the external frame, or in terms of the co-moving rotated body frame, but in a mixture. They constitute a mixed axes of rotation system, where the first angle moves the line of nodes around the external axis z, the second rotates around the line of nodes N and the third one is an intrinsic rotation around Z, an axis fixed in the body that moves.
The static definition implies that:
α (precession) represents a rotation around the z axis,
β (nutation) represents a rotation around the N or x′ axis,
γ (intrinsic rotation) represents a rotation around the Z or z″ axis.
If β is zero, there is no rotation about N. As a consequence, Z coincides with z, α and γ represent rotations about the same axis (z), and the final orientation can be obtained with a single rotation about z, by an angle equal to α + γ.
As an example, consider a top. The top spins around its own axis of symmetry; this corresponds to its intrinsic rotation. It also rotates around its pivotal axis, with its center of mass orbiting the pivotal axis; this rotation is a precession. Finally, the top can wobble up and down; the inclination angle is the nutation angle. The same example can be seen with the movements of the earth.
Though all three movements can be represented by a rotation operator with constant coefficients in some frame, they cannot be represented by these operators all at the same time. Given a reference frame, at most one of them will be coefficient-free. Only precession can be expressed in general as a matrix in the basis of the space without dependencies of the other angles.
These movements also behave as a gimbal set. Given a set of frames, able to move each with respect to the former according to just one angle, like a gimbal, there will exist an external fixed frame, one final frame and two frames in the middle, which are called "intermediate frames". The two in the middle work as two gimbal rings that allow the last frame to reach any orientation in space.
Tait–Bryan angles
Figure: Tait–Bryan angles, z-y′-x″ sequence (intrinsic rotations; N coincides with y). The angle rotation sequence is ψ, θ, φ; note that in this case θ is a negative angle.
The second type of formalism is called Tait–Bryan angles, after Scottish mathematical physicist Peter Guthrie Tait (1831–1901) and English applied mathematician George H. Bryan (1864–1928). It is the convention normally used for aerospace applications, so that zero degrees elevation represents the horizontal attitude. Tait–Bryan angles represent the orientation of the aircraft with respect to the world frame. When dealing with other vehicles, different axes conventions are possible.
Definitions
The definitions and notations used for Tait–Bryan angles are similar to those described above for proper Euler angles (geometrical definition, intrinsic rotation definition, extrinsic rotation definition). The only difference is that Tait–Bryan angles represent rotations about three distinct axes (e.g. x-y-z, or x-y′-z″), while proper Euler angles use the same axis for both the first and third elemental rotations (e.g., z-x-z, or z-x′-z″).
This implies a different definition for the line of nodes in the geometrical construction. In the proper Euler angles case it was defined as the intersection between two homologous Cartesian planes (parallel when Euler angles are zero; e.g. xy and XY). In the Tait–Bryan angles case, it is defined as the intersection of two non-homologous planes (perpendicular when Euler angles are zero; e.g. xy and YZ).
Conventions
The three elemental rotations may occur either about the axes of the original coordinate system, which remains motionless (extrinsic rotations), or about the axes of the rotating coordinate system, which changes its orientation after each elemental rotation (intrinsic rotations).
There are six possible choices of the rotation axes for Tait–Bryan angles. The six sequences are listed below; a short numerical check of the intrinsic/extrinsic equivalence follows the list.
x-y′-z″ (intrinsic rotations) or z-y-x (extrinsic rotations)
y-z′-x″ (intrinsic rotations) or x-z-y (extrinsic rotations)
z-x′-y″ (intrinsic rotations) or y-x-z (extrinsic rotations)
x-z′-y″ (intrinsic rotations) or y-z-x (extrinsic rotations)
z-y′-x″ (intrinsic rotations) or x-y-z (extrinsic rotations): the intrinsic rotations are known as: yaw, pitch and roll
y-x′-z″ (intrinsic rotations) or z-x-y (extrinsic rotations)
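The pairing of each intrinsic sequence with its reversed extrinsic counterpart can be verified numerically. The following is a minimal sketch assuming right-handed active elemental rotations that pre-multiply column vectors; the angle values and helper names Rx, Ry, Rz are illustrative only:

```python
import numpy as np

def Rx(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]], dtype=float)

def Ry(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]], dtype=float)

def Rz(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]], dtype=float)

psi, theta, phi = 0.4, -0.2, 1.1   # hypothetical yaw, pitch, roll values

# Intrinsic z-y'-x'': each new rotation multiplies on the right (rotating frame).
R_intrinsic = Rz(psi) @ Ry(theta) @ Rx(phi)

# Extrinsic x-y-z: rotations about the fixed axes x, then y, then z
# multiply on the left, giving the same product built in reverse order.
R_extrinsic = np.eye(3)
for step in (Rx(phi), Ry(theta), Rz(psi)):
    R_extrinsic = step @ R_extrinsic

print(np.allclose(R_intrinsic, R_extrinsic))  # True
```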
Signs and ranges
The Tait–Bryan convention is widely used in engineering for different purposes. There are several axes conventions in practice for choosing the mobile and fixed axes, and these conventions determine the signs of the angles. Therefore, signs must be studied carefully in each case.
The range for the angles ψ and φ covers 2π radians. For θ the range covers π radians.
Alternative names
These angles are normally taken as one in the external reference frame (heading, bearing), one in the intrinsic moving frame (bank) and one in a middle frame, representing an elevation or inclination with respect to the horizontal plane, which is equivalent to the line of nodes for this purpose.
As chained rotations
For an aircraft, they can be obtained with three rotations around its principal axes if done in the proper order and starting from a frame coincident with the reference frame.
A yaw will obtain the bearing,
a pitch will yield the elevation, and
a roll gives the bank angle.
Therefore, in aerospace they are sometimes called yaw, pitch, and roll. Notice that this will not work if the rotations are applied in any other order or if the airplane axes start in any position non-equivalent to the reference frame.
Tait–Bryan angles, following z-y′-x″ (intrinsic rotations) convention, are also known as nautical angles, because they can be used to describe the orientation of a ship or aircraft, or Cardan angles, after the Italian mathematician and physicist Gerolamo Cardano, who first described in detail the Cardan suspension and the Cardan joint.
Angles of a given frame
A common problem is to find the Euler angles of a given frame. The fastest way to get them is to write the three given vectors as the columns of a matrix and compare it with the expression of the theoretical matrix (see later table of matrices). Hence the three Euler angles can be calculated. Nevertheless, the same result can be reached avoiding matrix algebra and using only elemental geometry. Here we present the results for the two most commonly used conventions: ZXZ for proper Euler angles and ZYX for Tait–Bryan. Notice that any other convention can be obtained just by changing the names of the axes.
Proper Euler angles
Assuming a frame with unit vectors (X, Y, Z) given by their coordinates as in the main diagram, it can be seen that:
cos β = Z₃.
And, since sin²β = 1 − cos²β, for 0 < β < π we have
sin β = √(1 − Z₃²).
As Z₂ is the double projection of a unitary vector,
cos α · sin β = −Z₂, so cos α = −Z₂ / √(1 − Z₃²).
There is a similar construction for Y₃, projecting it first over the plane defined by the axis z and the line of nodes. As the angle between the planes is π/2 − β and cos(π/2 − β) = sin β, this leads to:
sin β · cos γ = Y₃, so cos γ = Y₃ / √(1 − Z₃²),
and finally, using the inverse cosine function,
α = arccos(−Z₂ / √(1 − Z₃²)), β = arccos(Z₃), γ = arccos(Y₃ / √(1 − Z₃²)).
Tait–Bryan angles
Assuming a frame with unit vectors (X, Y, Z) given by their coordinates as in this new diagram (notice that the angle θ is negative), it can be seen that:
sin θ = −X₃.
As before, cos²θ = 1 − sin²θ, and for −π/2 < θ < π/2 we have
cos θ = √(1 − X₃²).
In a way analogous to the former one:
sin ψ = X₂ / √(1 − X₃²), cos ψ = X₁ / √(1 − X₃²).
Looking for similar expressions to the former ones:
sin φ = Y₃ / √(1 − X₃²), cos φ = Z₃ / √(1 − X₃²).
Last remarks
Note that the inverse sine and cosine functions yield two possible values for the argument. In this geometrical description, only one of the solutions is valid. When Euler angles are defined as a sequence of rotations, all the solutions can be valid, but there will be only one inside the angle ranges. This is because the sequence of rotations to reach the target frame is not unique if the ranges are not previously defined.
For computational purposes, it may be useful to represent the angles using atan2(y, x). For example, in the case of proper Euler angles:
α = atan2(Z₁, −Z₂), β = atan2(√(Z₁² + Z₂²), Z₃), γ = atan2(X₃, Y₃).
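A minimal sketch of this atan2-based extraction for the ZXZ convention, assuming numpy and the active, pre-multiplying elemental matrices used above; the function name proper_euler_zxz and the test angles are hypothetical:

```python
import numpy as np

def Rz(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]], dtype=float)

def Rx(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]], dtype=float)

def proper_euler_zxz(R):
    # Columns of R are the body unit vectors X, Y, Z expressed in the fixed frame.
    X, Y, Z = R[:, 0], R[:, 1], R[:, 2]
    alpha = np.arctan2(Z[0], -Z[1])
    beta = np.arctan2(np.hypot(Z[0], Z[1]), Z[2])
    gamma = np.arctan2(X[2], Y[2])
    return alpha, beta, gamma

# Round-trip check with hypothetical angles (beta away from the 0 / pi singularity).
a, b, g = 0.5, 1.2, -0.8
R = Rz(a) @ Rx(b) @ Rz(g)          # intrinsic z-x'-z'' composition
print(np.allclose(proper_euler_zxz(R), (a, b, g)))  # True
```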
Conversion to other orientation representations
Euler angles are one way to represent orientations. There are others, and it is possible to change to and from other conventions. Three parameters are always required to describe orientations in a 3-dimensional Euclidean space. They can be given in several ways, Euler angles being one of them; see charts on SO(3) for others.
The most common orientation representations are the rotation matrix, the axis–angle representation and the quaternions, also known as Euler–Rodrigues parameters, which provide another mechanism for representing 3D rotations. This is equivalent to the special unitary group description.
Expressing rotations in 3D as unit quaternions instead of matrices has some advantages, illustrated by the sketch after this list:
Concatenating rotations is computationally faster and numerically more stable.
Extracting the angle and axis of rotation is simpler.
Interpolation is more straightforward. See for example slerp.
Quaternions do not suffer from gimbal lock as Euler angles do.
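The first two points can be illustrated with a small pure-numpy sketch; the scalar-first quaternion layout and the helper names are arbitrary choices made here, not a prescribed API:

```python
import numpy as np

def quat_from_axis_angle(axis, angle):
    # Unit quaternion (w, x, y, z) for a rotation of `angle` about unit vector `axis`.
    axis = np.asarray(axis, dtype=float)
    axis = axis / np.linalg.norm(axis)
    return np.concatenate([[np.cos(angle / 2)], np.sin(angle / 2) * axis])

def quat_mul(q, r):
    # Hamilton product: the composed rotation "q after r".
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def quat_to_matrix(q):
    # Standard conversion of a unit quaternion to an active rotation matrix.
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

# Compose a rotation about z with a rotation about y, both ways, and compare.
q = quat_mul(quat_from_axis_angle([0, 0, 1], 0.6), quat_from_axis_angle([0, 1, 0], -0.3))
M = quat_to_matrix(quat_from_axis_angle([0, 0, 1], 0.6)) @ quat_to_matrix(quat_from_axis_angle([0, 1, 0], -0.3))
print(np.allclose(quat_to_matrix(q), M))  # True: 4 numbers per rotation instead of 9
```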
Regardless, the rotation matrix calculation is the first step for obtaining the other two representations.
Rotation matrix
Any orientation can be achieved by composing three elemental rotations, starting from a known standard orientation. Equivalently, any rotation matrix R can be decomposed as a product of three elemental rotation matrices. For instance:
R = X(α) Y(β) Z(γ)
is a rotation matrix that may be used to represent a composition of extrinsic rotations about axes z, y, x (in that order), or a composition of intrinsic rotations about axes x-y′-z″ (in that order). However, both the definition of the elemental rotation matrices X, Y, Z, and their multiplication order depend on the choices taken by the user about the definition of both rotation matrices and Euler angles (see, for instance, Ambiguities in the definition of rotation matrices). Unfortunately, different sets of conventions are adopted by users in different contexts. The following table was built according to this set of conventions:
Each matrix is meant to operate by pre-multiplying column vectors (see Ambiguities in the definition of rotation matrices)
Each matrix is meant to represent an active rotation (the composing and composed matrices are supposed to act on the coordinates of vectors defined in the initial fixed reference frame and give as a result the coordinates of a rotated vector defined in the same reference frame).
Each matrix is meant to represent, primarily, a composition of intrinsic rotations (around the axes of the rotating reference frame) and, secondarily, the composition of three extrinsic rotations (which corresponds to the constructive evaluation of the R matrix by the multiplication of three truly elemental matrices, in reverse order).
Right handed reference frames are adopted, and the right hand rule is used to determine the sign of the angles α, β, γ.
For the sake of simplicity, the table of matrix products uses the following nomenclature:
X, Y, Z are the matrices representing the elemental rotations about the axes x, y, z of the fixed frame (e.g., Xα represents a rotation about x by an angle α).
s and c represent sine and cosine (e.g., sα represents the sine of α).
These tabular results are available in numerous textbooks. For each column the last row constitutes the most commonly used convention.
To change the formulas for passive rotations (or find reverse active rotation), transpose the matrices (then each matrix transforms the initial coordinates of a vector remaining fixed to the coordinates of the same vector measured in the rotated reference system; same rotation axis, same angles, but now the coordinate system rotates, rather than the vector).
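As a small illustration of this active/passive relationship, the following sketch builds the composed matrix for one intrinsic sequence and checks that its transpose performs the inverse, coordinate-transformation role; the helper names and angle values are illustrative assumptions:

```python
import numpy as np

def Rx(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]], dtype=float)

def Ry(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]], dtype=float)

def Rz(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]], dtype=float)

alpha, beta, gamma = 0.3, 0.9, -0.5          # hypothetical example angles

# Active matrix for the intrinsic x-y'-z'' sequence (equivalently extrinsic z-y-x).
R_active = Rx(alpha) @ Ry(beta) @ Rz(gamma)

# Passive (coordinate-transformation) matrix: the transpose of the active one.
R_passive = R_active.T

v_fixed = np.array([1.0, 2.0, 3.0])          # a vector expressed in the fixed frame
v_rotated_frame = R_passive @ v_fixed        # same vector, in the rotated frame
print(np.allclose(R_active @ v_rotated_frame, v_fixed))   # True: they are inverses
```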
The following table contains formulas for the angles α, β and γ from the elements of a rotation matrix R.
Properties
The Euler angles form a chart on all of SO(3), the special orthogonal group of rotations in 3D space. The chart is smooth except for a polar coordinate style singularity along β = 0. See charts on SO(3) for a more complete treatment.
The space of rotations is called in general "The Hypersphere of rotations", though this is a misnomer: the group Spin(3) is isometric to the hypersphere S3, but the rotation space SO(3) is instead isometric to the real projective space RP3, which is a 2-fold quotient space of the hypersphere. This 2-to-1 ambiguity is the mathematical origin of spin in physics.
A similar three angle decomposition applies to SU(2), the special unitary group of rotations in complex 2D space, with the difference that β ranges from 0 to 2π. These are also called Euler angles.
The Haar measure for SO(3) in Euler angles is given by the Hopf angle parametrisation of SO(3), dV ∝ sin β · dα · dβ · dγ, where (β, α) parametrise S2, the space of rotation axes.
For example, to generate uniformly randomized orientations, let α and γ be uniform from 0 to 2π, let z be uniform from −1 to 1, and let β = arccos(z).
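A minimal numpy sketch of that recipe (the seed and the sample count are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_euler_zxz(n):
    # Uniform (Haar) random orientations expressed as proper Euler angles (alpha, beta, gamma).
    alpha = rng.uniform(0.0, 2.0 * np.pi, n)
    gamma = rng.uniform(0.0, 2.0 * np.pi, n)
    z = rng.uniform(-1.0, 1.0, n)      # cos(beta) must be uniform, not beta itself
    beta = np.arccos(z)
    return np.stack([alpha, beta, gamma], axis=1)

angles = random_euler_zxz(5)
print(angles)   # five orientations drawn uniformly with respect to the Haar measure
```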
Geometric algebra
Other properties of Euler angles and rotations in general can be found from the geometric algebra, a higher level abstraction, in which the quaternions are an even subalgebra. The principal tool in geometric algebra is the rotor R = e^(−(θ/2) I u), where θ is the angle of rotation, u is the rotation axis (a unitary vector) and I is the pseudoscalar (the trivector in R3).
Higher dimensions
It is possible to define parameters analogous to the Euler angles in dimensions higher than three.
In four dimensions and above, the concept of "rotation about an axis" loses meaning and instead becomes "rotation in a plane." The number of Euler angles needed to represent the group SO(n) is n(n − 1)/2, equal to the number of planes containing two distinct coordinate axes in n-dimensional Euclidean space.
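That count can be checked directly by enumerating the coordinate planes (a trivial sketch):

```python
from itertools import combinations

for n in range(2, 7):
    planes = list(combinations(range(n), 2))   # unordered pairs of distinct coordinate axes
    assert len(planes) == n * (n - 1) // 2
    print(n, len(planes))   # 2 1, 3 3, 4 6, 5 10, 6 15
```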
In SO(4) a rotation matrix is defined by two unit quaternions, and therefore has six degrees of freedom, three from each quaternion.
Applications
Vehicles and moving frames
The main advantage of Euler angles over other orientation descriptions is that they are directly measurable from a gimbal mounted in a vehicle. As gyroscopes keep their rotation axis constant, angles measured in a gyro frame are equivalent to angles measured in the lab frame. Therefore, gyros are used to know the actual orientation of moving spacecraft, and Euler angles are directly measurable. The intrinsic rotation angle cannot be read from a single gimbal, so there has to be more than one gimbal in a spacecraft. Normally there are at least three for redundancy. There is also a relation to the well-known gimbal lock problem of mechanical engineering.
When studying rigid bodies in general, one calls the xyz system space coordinates, and the XYZ system body coordinates. The space coordinates are treated as unmoving, while the body coordinates are considered embedded in the moving body. Calculations involving acceleration, angular acceleration, angular velocity, angular momentum, and kinetic energy are often easiest in body coordinates, because then the moment of inertia tensor does not change in time. If one also diagonalizes the rigid body's moment of inertia tensor (with nine components, six of which are independent), then one has a set of coordinates (called the principal axes) in which the moment of inertia tensor has only three components.
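As a sketch of that last step, the principal axes and principal moments can be obtained by diagonalizing a symmetric inertia tensor with a standard eigendecomposition; the numerical values below are hypothetical:

```python
import numpy as np

# Hypothetical symmetric moment-of-inertia tensor in body coordinates (6 independent entries).
I_body = np.array([
    [4.0, 0.5, 0.2],
    [0.5, 3.0, 0.1],
    [0.2, 0.1, 2.0],
])

# Eigendecomposition of a symmetric matrix: eigenvalues are the principal moments,
# eigenvector columns are the principal axes (an orthonormal frame).
principal_moments, principal_axes = np.linalg.eigh(I_body)

# In the principal-axes frame the tensor is diagonal (only three components survive).
I_principal = principal_axes.T @ I_body @ principal_axes
print(np.round(I_principal, 12))        # diagonal matrix of the principal moments
print(principal_moments)
```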
The angular velocity of a rigid body takes a simple form using Euler angles in the moving frame. Also, Euler's rigid-body equations are simpler because the inertia tensor is constant in that frame.
Crystallographic texture
In materials science, crystallographic texture (or preferred orientation) can be described using Euler angles. In texture analysis, the Euler angles provide a mathematical depiction of the orientation of individual crystallites within a polycrystalline material, allowing for the quantitative description of the macroscopic material.
The most common definition of the angles is due to Bunge and corresponds to the ZXZ convention. It is important to note, however, that the application generally involves axis transformations of tensor quantities, i.e. passive rotations. Thus the matrix that corresponds to the Bunge Euler angles is the transpose of that shown in the table above.
Others
Euler angles, normally in the Tait–Bryan convention, are also used in robotics for speaking about the degrees of freedom of a wrist. They are also used in electronic stability control in a similar way.
Gun fire control systems require corrections to gun-order angles (bearing and elevation) to compensate for deck tilt (pitch and roll). In traditional systems, a stabilizing gyroscope with a vertical spin axis corrects for deck tilt, and stabilizes the optical sights and radar antenna. However, gun barrels point in a direction different from the line of sight to the target, to anticipate target movement and fall of the projectile due to gravity, among other factors. Gun mounts roll and pitch with the deck plane, but also require stabilization. Gun orders include angles computed from the vertical gyro data, and those computations involve Euler angles.
Euler angles are also used extensively in the quantum mechanics of angular momentum. In quantum mechanics, explicit descriptions of the representations of SO(3) are very important for calculations, and almost all the work has been done using Euler angles. In the early history of quantum mechanics, when physicists and chemists had a sharply negative reaction towards abstract group theoretic methods (called the Gruppenpest), reliance on Euler angles was also essential for basic theoretical work.
Many mobile computing devices contain accelerometers which can determine these devices' Euler angles with respect to the earth's gravitational attraction. These are used in applications such as games, bubble level simulations, and kaleidoscopes.
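A common way to recover the two tilt angles (pitch and roll) from a static accelerometer reading is sketched below; the axis convention is an assumption and differs between devices, and the function name is illustrative:

```python
import numpy as np

def pitch_roll_from_accel(ax, ay, az):
    # Device at rest: the accelerometer measures the reaction to gravity.
    # Assumed axis convention (varies by device): x forward, y to the side, z up.
    roll = np.arctan2(ay, az)
    pitch = np.arctan2(-ax, np.hypot(ay, az))
    return pitch, roll

# Device lying flat and level: gravity reaction is +1 g along z.
print(pitch_roll_from_accel(0.0, 0.0, 9.81))   # (0.0, 0.0)

# A hypothetical tilted reading: expect roughly 30 degrees of pitch, 0 of roll.
g = 9.81
print(np.degrees(pitch_roll_from_accel(-g * np.sin(np.radians(30)), 0.0,
                                        g * np.cos(np.radians(30)))))
```

Heading (yaw) cannot be recovered from gravity alone; a magnetometer or gyroscope is needed for that.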
Computer graphics libraries such as three.js use them to point the camera.
See also
3D projection
Axis-angle representation
Conversion between quaternions and Euler angles
Davenport chained rotations
Euler's rotation theorem
Gimbal lock
Quaternion
Quaternions and spatial rotation
Rotation formalisms in three dimensions
Spherical coordinate system
References
Bibliography
External links
David Eberly. Euler Angle Formulas, Geometric Tools
An interactive tutorial on Euler angles available at https://www.mecademic.com/en/how-is-orientation-in-space-represented-with-euler-angles
EulerAngles an iOS app for visualizing in 3D the three rotations associated with Euler angles
Orientation Library "orilib", a collection of routines for rotation / orientation manipulation, including special tools for crystal orientations
Online tool to convert rotation matrices available at rotation converter (numerical conversion)
Online tool to convert symbolic rotation matrices (dead, but still available from the Wayback Machine) symbolic rotation converter
Rotation, Reflection, and Frame Change: Orthogonal tensors in computational engineering mechanics, IOP Publishing
Euler Angles, Quaternions, and Transformation Matrices for Space Shuttle Analysis, NASA
Rotation in three dimensions
Euclidean symmetries
Angle
Analytic geometry