Angular momentum coupling
In quantum mechanics, angular momentum coupling is the procedure of constructing eigenstates of total angular momentum out of eigenstates of separate angular momenta. For instance, the orbit and spin of a single particle can interact through spin–orbit interaction, in which case the complete physical picture must include spin–orbit coupling. Or two charged particles, each with a well-defined angular momentum, may interact by Coulomb forces, in which case coupling of the two one-particle angular momenta to a total angular momentum is a useful step in the solution of the two-particle Schrödinger equation.
In both cases the separate angular momenta are no longer constants of motion, but the sum of the two angular momenta usually still is. Angular momentum coupling in atoms is of importance in atomic spectroscopy. Angular momentum coupling of electron spins is of importance in quantum chemistry. Also in the nuclear shell model angular momentum coupling is ubiquitous.
In astronomy, spin–orbit coupling reflects the general law of conservation of angular momentum, which holds for celestial systems as well. In simple cases, the direction of the angular momentum vector is neglected, and the spin–orbit coupling is the ratio between the frequency with which a planet or other celestial body spins about its own axis to that with which it orbits another body. This is more commonly known as orbital resonance. Often, the underlying physical effects are tidal forces.
General theory and detailed origin
Angular momentum conservation
Conservation of angular momentum is the principle that the total angular momentum of a system has a constant magnitude and direction if the system is subjected to no external torque. Angular momentum is a property of a physical system that is a constant of motion (also referred to as a conserved property, time-independent and well-defined) in two situations:
The system experiences a spherically symmetric potential field.
The system moves (in quantum mechanical sense) in isotropic space.
In both cases the angular momentum operator commutes with the Hamiltonian of the system. By Heisenberg's uncertainty relation this means that the angular momentum and the energy (eigenvalue of the Hamiltonian) can be measured at the same time.
An example of the first situation is an atom whose electrons only experience the Coulomb force of its atomic nucleus. If we ignore the electron–electron interaction (and other small interactions such as spin–orbit coupling), the orbital angular momentum of each electron commutes with the total Hamiltonian. In this model the atomic Hamiltonian is a sum of kinetic energies of the electrons and the spherically symmetric electron–nucleus interactions. The individual electron angular momenta commute with this Hamiltonian. That is, they are conserved properties of this approximate model of the atom.
An example of the second situation is a rigid rotor moving in field-free space. A rigid rotor has a well-defined, time-independent, angular momentum.
These two situations originate in classical mechanics. The third kind of conserved angular momentum, associated with spin, does not have a classical counterpart. However, all rules of angular momentum coupling apply to spin as well.
In general the conservation of angular momentum implies full rotational symmetry
(described by the groups SO(3) and SU(2)) and, conversely, spherical symmetry implies conservation of angular momentum. If two or more physical systems have conserved angular momenta, it can be useful to combine these momenta to a total angular momentum of the combined system—a conserved property of the total system.
The building of eigenstates of the total conserved angular momentum from the angular momentum eigenstates of the individual subsystems is referred to as angular momentum coupling.
Application of angular momentum coupling is useful when there is an interaction between subsystems that, without interaction, would have conserved angular momentum. By the very interaction the spherical symmetry of the subsystems is broken, but the angular momentum of the total system remains a constant of motion. Use of the latter fact is helpful in the solution of the Schrödinger equation.
Examples
As an example we consider two electrons, in an atom (say the helium atom) labeled with i = 1 and 2. If there is no electron–electron interaction, but only electron–nucleus interaction, then the two electrons can be rotated around the nucleus independently of each other; nothing happens to their energy. The expectation values of both operators, ℓ_1 and ℓ_2, are conserved.
However, if we switch on the electron–electron interaction that depends on the distance d(1,2) between the electrons, then only a simultaneous and equal rotation of the two electrons will leave d(1,2) invariant. In such a case the expectation value of neither ℓ_1 nor ℓ_2 is a constant of motion in general, but the expectation value of the total orbital angular momentum operator L = ℓ_1 + ℓ_2 is. Given the eigenstates of ℓ_1 and ℓ_2, the construction of eigenstates of L (which still is conserved) is the coupling of the angular momenta of electrons 1 and 2.
The total orbital angular momentum quantum number L is restricted to integer values and must satisfy the triangular condition |ℓ_1 − ℓ_2| ≤ L ≤ ℓ_1 + ℓ_2, such that the three nonnegative integer values could correspond to the three sides of a triangle.
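As a minimal illustration (a Python sketch with a hypothetical helper name, not part of the article), the triangular condition can be turned into a list of the allowed total quantum numbers, which run in unit steps from |ℓ_1 − ℓ_2| to ℓ_1 + ℓ_2; the same rule applies to half-integer spins.

```python
from fractions import Fraction

def allowed_total_j(j1, j2):
    """Return the allowed total angular momentum quantum numbers
    J = |j1 - j2|, |j1 - j2| + 1, ..., j1 + j2 (triangular condition).
    Works for integer and half-integer inputs, e.g. Fraction(1, 2)."""
    j1, j2 = Fraction(j1), Fraction(j2)
    low, high = abs(j1 - j2), j1 + j2
    return [low + k for k in range(int(high - low) + 1)]

# Two p electrons (l1 = l2 = 1): total L can be 0, 1 or 2.
print(allowed_total_j(1, 1))               # [Fraction(0, 1), Fraction(1, 1), Fraction(2, 1)]

# Orbital l = 1 coupled to spin s = 1/2: total j can be 1/2 or 3/2.
print(allowed_total_j(1, Fraction(1, 2)))  # [Fraction(1, 2), Fraction(3, 2)]
```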
In quantum mechanics, coupling also exists between angular momenta belonging to different Hilbert spaces of a single object, e.g. its spin and its orbital angular momentum. If the spin has half-integer values, such as for an electron, then the total (orbital plus spin) angular momentum will also be restricted to half-integer values.
Reiterating slightly differently the above: one expands the quantum states of composed systems (i.e. made of subunits like two hydrogen atoms or two electrons) in basis sets which are made of tensor products of quantum states which in turn describe the subsystems individually. We assume that the states of the subsystems can be chosen as eigenstates of their angular momentum operators (and of their component along any arbitrary axis).
The subsystems are therefore correctly described by a pair of ℓ, m quantum numbers (see angular momentum for details). When there is interaction among the subsystems, the total Hamiltonian contains terms that do not commute with the angular momentum operators acting on the subsystems only. However, these terms do commute with the total angular momentum operator. Sometimes one refers to the non-commuting interaction terms in the Hamiltonian as angular momentum coupling terms, because they necessitate the angular momentum coupling.
Spin–orbit coupling
The behavior of atoms and smaller particles is well described by the theory of quantum mechanics, in which each particle has an intrinsic angular momentum called spin and specific configurations (of e.g. electrons in an atom) are described by a set of quantum numbers. Collections of particles also have angular momenta and corresponding quantum numbers, and under different circumstances the angular momenta of the parts couple in different ways to form the angular momentum of the whole. Angular momentum coupling is a category including some of the ways that subatomic particles can interact with each other.
In atomic physics, spin–orbit coupling, also known as spin-pairing, describes a weak magnetic interaction, or coupling, of the particle spin and the orbital motion of this particle, e.g. the electron spin and its motion around an atomic nucleus. One of its effects is to separate the energy of internal states of the atom, e.g. spin-aligned and spin-antialigned that would otherwise be identical in energy. This interaction is responsible for many of the details of atomic structure.
In solid-state physics, the spin coupling with the orbital motion can lead to splitting of energy bands due to Dresselhaus or Rashba effects.
In the macroscopic world of orbital mechanics, the term spin–orbit coupling is sometimes used in the same sense as spin–orbit resonance.
LS coupling
In light atoms (generally Z ≤ 30), electron spins si interact among themselves so they combine to form a total spin angular momentum S. The same happens with orbital angular momenta ℓi, forming a total orbital angular momentum L. The interaction between the quantum numbers L and S is called Russell–Saunders coupling (after Henry Norris Russell and Frederick Saunders) or LS coupling. Then S and L couple together and form a total angular momentum J:
J = L + S,
where L and S are the totals:
L = Σ_i ℓ_i,  S = Σ_i s_i.
This is an approximation which is good as long as any external magnetic fields are weak. In larger magnetic fields, these two momenta decouple, giving rise to a different splitting pattern in the energy levels (the Paschen–Back effect), and the size of the LS coupling term becomes small.
For an extensive example on how LS-coupling is practically applied, see the article on term symbols.
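As an illustrative aside (a sketch under stated assumptions, not taken from the article), the same triangle rule applied repeatedly reproduces the LS-coupling bookkeeping for a configuration of several electrons: the ℓ_i are coupled to the possible totals L, the s_i to the possible totals S, and each (L, S) pair then yields J = |L − S|, …, L + S. The helper names below are invented, and for equivalent electrons the sketch ignores the Pauli restrictions that exclude some combinations.

```python
from fractions import Fraction
from itertools import product

def couple(j1, j2):
    """Allowed totals when coupling two angular momenta (triangle rule)."""
    j1, j2 = Fraction(j1), Fraction(j2)
    return [abs(j1 - j2) + k for k in range(int(j1 + j2 - abs(j1 - j2)) + 1)]

def couple_many(js):
    """Couple a list of angular momenta one at a time; return the sorted set of
    possible totals (each may arise along several coupling paths)."""
    totals = {Fraction(js[0])}
    for j in js[1:]:
        totals = {t for old in totals for t in couple(old, j)}
    return sorted(totals)

# LS coupling for two p electrons (l = 1 each, s = 1/2 each):
L_values = couple_many([1, 1])                   # 0, 1, 2
S_values = couple_many([Fraction(1, 2)] * 2)     # 0, 1
for L, S in product(L_values, S_values):
    print(f"L={L}, S={S}  ->  J in {couple(L, S)}")
```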
jj coupling
In heavier atoms the situation is different. In atoms with bigger nuclear charges, spin–orbit interactions are frequently as large as or larger than spin–spin interactions or orbit–orbit interactions. In this situation, each orbital angular momentum ℓ_i tends to combine with the corresponding individual spin angular momentum s_i, originating an individual total angular momentum j_i. These then couple up to form the total angular momentum J:
J = Σ_i j_i = Σ_i (ℓ_i + s_i).
This description, facilitating calculation of this kind of interaction, is known as jj coupling.
Spin–spin coupling
Spin–spin coupling is the coupling of the intrinsic angular momentum (spin) of different particles.
J-coupling between pairs of nuclear spins is an important feature of nuclear magnetic resonance (NMR) spectroscopy as it can
provide detailed information about the structure and conformation of molecules. Spin–spin coupling between nuclear spin and electronic spin is responsible for hyperfine structure in atomic spectra.
Term symbols
Term symbols are used to represent the states and spectral transitions of atoms; they are found from the coupling of angular momenta mentioned above. When the state of an atom has been specified with a term symbol, the allowed transitions can be found through selection rules by considering which transitions would conserve angular momentum. A photon has spin 1, and when there is a transition with emission or absorption of a photon the atom will need to change state to conserve angular momentum. The term symbol selection rules are: ΔS = 0; ΔL = 0, ±1; Δℓ = ±1; ΔJ = 0, ±1.
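A small sketch (hypothetical helper, not from the article) that applies these term-level selection rules to a candidate electric-dipole transition between two LS-coupled states given as (S, L, J) triples; the one-electron rule Δℓ = ±1 is not checked here.

```python
def dipole_allowed(S1, L1, J1, S2, L2, J2):
    """Check the LS-coupling electric-dipole selection rules:
    dS = 0; dL = 0, +/-1; dJ = 0, +/-1 (with J = 0 -> J = 0 forbidden)."""
    dS, dL, dJ = S2 - S1, L2 - L1, J2 - J1
    if dS != 0:
        return False
    if abs(dL) > 1:
        return False
    if abs(dJ) > 1 or (J1 == 0 and J2 == 0):
        return False
    return True

# 1S0 -> 1P1 (S=0, L=0, J=0  ->  S=0, L=1, J=1): allowed
print(dipole_allowed(0, 0, 0, 0, 1, 1))   # True
# Singlet -> triplet (dS = 1): forbidden in pure LS coupling
print(dipole_allowed(0, 0, 0, 1, 1, 1))   # False
```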
The expression "term symbol" is derived from the "term series" associated with the Rydberg states of an atom and their energy levels. In the Rydberg formula the frequency or wave number of the light emitted by a hydrogen-like atom is proportional to the difference between the two terms of a transition. The series known to early spectroscopy were designated sharp, principal, diffuse, and fundamental and consequently the letters and were used to represent the orbital angular momentum states of an atom.
Relativistic effects
In very heavy atoms, relativistic shifting of the energies of the electron energy levels accentuates spin–orbit coupling effect. Thus, for example, uranium molecular orbital diagrams must directly incorporate relativistic symbols when considering interactions with other atoms.
Nuclear coupling
In atomic nuclei, the spin–orbit interaction is much stronger than for atomic electrons, and is incorporated directly into the nuclear shell model. In addition, unlike atomic–electron term symbols, the lowest energy state is not L − S, but rather, ℓ + s. All nuclear levels whose ℓ value (orbital angular momentum) is greater than zero are thus split in the shell model to create states designated by ℓ + s and ℓ − s. Due to the nature of the shell model, which assumes an average potential rather than a central Coulombic potential, the nucleons that go into the ℓ + s and ℓ − s nuclear states are considered degenerate within each orbital (e.g. the 2p3/2 contains four nucleons, all of the same energy. Higher in energy is the 2p1/2, which contains two equal-energy nucleons).
See also
Clebsch–Gordan coefficients
Angular momentum diagrams (quantum mechanics)
Spherical basis
Notes
External links
LS and jj coupling
Term symbol
Web calculator of spin couplings: shell model, atomic term symbol
Angular momentum
Atomic physics
Rotational symmetry
Ariadne's thread (logic)
Ariadne's thread, named for the legend of Ariadne, is a method of solving a problem that has multiple apparent ways to proceed—such as a physical maze, a logic puzzle, or an ethical dilemma—through an exhaustive application of logic to all available routes. What gives the method its name is the record it keeps, which makes it possible to trace one's steps back and to take, point by point, a series of found truths in a contingent, ordered search that reaches an end position. This record can take the form of a mental note, a physical marking, or even a philosophical debate; it is the process itself that assumes the name.
Implementation
The key element to applying Ariadne's thread to a problem is the creation and maintenance of a record—physical or otherwise—of the problem's available and exhausted options at all times. This record is referred to as the "thread", regardless of its actual medium. The purpose the record serves is to permit backtracking—that is, reversing earlier decisions and trying alternatives. Given the record, applying the algorithm is straightforward:
At any moment that there is a choice to be made, make one arbitrarily from those not already marked as failures, and follow it logically as far as possible.
If a contradiction results, back up to the last decision made, mark it as a failure, and try another decision at the same point. If no other options exist there, back up to the last place in the record that does have options, mark the failure at that level, and proceed onward.
This algorithm will terminate upon either finding a solution or marking all initial choices as failures; in the latter case, there is no solution. If a thorough examination is desired even though a solution has been found, one can revert to the previous decision, mark the success, and continue on as if a solution were never found; the algorithm will exhaust all decisions and find all solutions.
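The record-and-backtrack procedure above maps directly onto a depth-first backtracking search. Below is a minimal generic sketch in Python (the function names and the problem interface are illustrative assumptions, not part of the original text): the `thread` list is the record of choices made so far, and exhausted options are discarded as the recursion unwinds.

```python
def ariadne(options, is_contradiction, is_solution, thread=None, all_solutions=None):
    """Generic Ariadne's-thread / backtracking search.

    options(thread)          -> iterable of choices available at this point
    is_contradiction(thread) -> True if the current partial path is a dead end
    is_solution(thread)      -> True if the current path solves the problem
    Returns a list of complete threads (each a list of choices)."""
    if thread is None:
        thread = []
    if all_solutions is None:
        all_solutions = []
    if is_contradiction(thread):
        return all_solutions                      # back up: this branch is a failure
    if is_solution(thread):
        all_solutions.append(list(thread))        # record the solution, keep searching
        return all_solutions
    for choice in options(thread):                # try each remaining option in turn
        thread.append(choice)
        ariadne(options, is_contradiction, is_solution, thread, all_solutions)
        thread.pop()                              # erase the explored choice from the record
    return all_solutions

# Toy example: find all ways to pick three digits (0-5) summing to exactly 5.
sols = ariadne(
    options=lambda t: range(0, 6) if len(t) < 3 else [],
    is_contradiction=lambda t: sum(t) > 5,
    is_solution=lambda t: len(t) == 3 and sum(t) == 5,
)
print(len(sols), "solutions, e.g.", sols[0])
```

Because the search only stops when every initial choice has been marked as a failure or recorded as a solution, it exhausts the whole space and finds all solutions, as described above.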
Distinction from trial and error
The terms "Ariadne's thread" and "trial and error" are often used interchangeably, which is not necessarily correct. They have two distinctive differences:
"Trial and error" implies that each "trial" yields some particular value to be studied and improved upon, removing "errors" from each iteration to enhance the quality of future trials. Ariadne's thread has no such mechanism, and hence all decisions made are arbitrary. For example, the scientific method is trial and error; puzzle-solving is Ariadne's thread.
Trial-and-error approaches are rarely concerned with how many solutions may exist to a problem, and indeed often assume only one correct solution exists. Ariadne's thread makes no such assumption, and is capable of locating all possible solutions to a purely logical problem.
In short, trial and error approaches a desired solution; Ariadne's thread blindly exhausts the search space completely, finding any and all solutions. Each has its appropriate distinct uses. They can be employed in tandem—for example, although the editing of a Wikipedia article is arguably a trial-and-error process (given how in theory it approaches an ideal state), article histories provide the record for which Ariadne's thread may be applied, reverting detrimental edits and restoring the article back to the most recent error-free version, from which other options may be attempted.
Applications
Obviously, Ariadne's thread may be applied to the solving of mazes in the same manner as the legend; an actual thread can be used as the record, or chalk or a similar marker can be applied to label passages. If the maze is on paper, the thread may well be a pencil.
Logic problems of all natures may be resolved via Ariadne's thread, the maze being but an example. At present, it is most prominently applied to Sudoku puzzles, used to attempt values for as-yet-unsolved cells. The medium of the thread for puzzle-solving can vary widely, from a pencil to numbered chits to a computer program, but all accomplish the same task. Note that as the compilation of Ariadne's thread is an inductive process, and due to its exhaustiveness leaves no room for actual study, it is largely frowned upon as a solving method, to be employed only as a last resort when deductive methods fail.
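A compact sketch of this last-resort use on Sudoku (assuming a 9×9 grid given as a flat list of 81 integers with 0 for empty cells; the helper names are illustrative): each unsolved cell is tried with every non-conflicting digit, and the thread is effectively the call stack, which is unwound whenever a contradiction appears.

```python
def sudoku_candidates(grid, pos):
    """Digits that do not conflict with the row, column, or 3x3 box of cell `pos`."""
    r, c = divmod(pos, 9)
    used = set(grid[r * 9:r * 9 + 9])                       # row
    used |= set(grid[c::9])                                  # column
    br, bc = 3 * (r // 3), 3 * (c // 3)
    used |= {grid[(br + i) * 9 + bc + j] for i in range(3) for j in range(3)}  # box
    return [d for d in range(1, 10) if d not in used]

def solve(grid, pos=0):
    """Fill `grid` in place by Ariadne's-thread backtracking; returns True if solved."""
    while pos < 81 and grid[pos] != 0:
        pos += 1
    if pos == 81:
        return True                                          # no empty cell left
    for digit in sudoku_candidates(grid, pos):
        grid[pos] = digit                                    # arbitrary choice
        if solve(grid, pos + 1):
            return True
        grid[pos] = 0                                        # mark failure, back up
    return False
```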
Artificial intelligence is heavily dependent upon Ariadne's thread when it comes to game-playing, most notably in programs which play chess; the possible moves are the decisions, game-winning states the solutions, and game-losing states failures. Due to the massive depth of many games, most algorithms cannot afford to apply Ariadne's thread entirely on every move due to time constraints, and therefore work in tandem with a heuristic that evaluates game states and limits a breadth-first search only to those that are most likely to be beneficial, a trial-and-error process.
Even circumstances where the concept of "solution" is not so well defined have had Ariadne's thread applied to them, such as navigating the World Wide Web, making sense of patent law, and in philosophy; "Ariadne's Thread" is a popular name for websites of many purposes, but primarily for those that feature philosophical or ethical debate.
See also
Brute-force search
Depth-first search
Labyrinth
Deductive reasoning
Computer chess
J. Hillis Miller
Gordian Knot
References
Solving Sudoku Step-by-step guide by Michael Mepham; includes history of Ariadne's thread and demonstration of application
Constructing Sudoku A flow chart shows how to construct and solve Sudoku by using Ariadne's thread (back-tracking technique)
Ariadne and the Minotaur: The Cultural Role of a Philosophy of Rhetoric Article by Andrea Battistini detailing Ariadne's thread as a philosophical metaphor
Philosophy in Labyrinths A study of the logic behind and meaning of labyrinths; includes rather literal interpretations of Ariadne's thread.
Logic
Philosophical analogies
Philosophical methodology
Problem solving methods
Ariadne
Van Allen radiation belt
The Van Allen radiation belt is a zone of energetic charged particles, most of which originate from the solar wind, that are captured by and held around a planet by that planet's magnetosphere. Earth has two such belts, and sometimes others may be temporarily created. The belts are named after James Van Allen, who published an article describing the belts in 1958.
Earth's two main belts extend from an altitude of about above the surface, in which region radiation levels vary. The belts are in the inner region of Earth's magnetic field. They trap energetic electrons and protons. Other nuclei, such as alpha particles, are less prevalent. Most of the particles that form the belts are thought to come from the solar wind while others arrive as cosmic rays. By trapping the solar wind, the magnetic field deflects those energetic particles and protects the atmosphere from destruction.
The belts endanger satellites, which must have their sensitive components protected with adequate shielding if they spend significant time near that zone. Apollo astronauts who passed through the Van Allen belts received a very low and harmless dose of radiation.
In 2013, the Van Allen Probes detected a transient, third radiation belt, which persisted for four weeks.
Discovery
Kristian Birkeland, Carl Størmer, Nicholas Christofilos, and Enrico Medi had investigated the possibility of trapped charged particles in the decades before the Space Age, forming a theoretical basis for the formation of radiation belts. The second Soviet satellite Sputnik 2, which had detectors designed by Sergei Vernov, followed by the US satellites Explorer 1 and Explorer 3, confirmed the existence of the belt in early 1958; the belt was later named after James Van Allen from the University of Iowa. The trapped radiation was first mapped by Explorer 4, Pioneer 3, and Luna 1.
The term Van Allen belts refers specifically to the radiation belts surrounding Earth; however, similar radiation belts have been discovered around other planets. The Sun does not support long-term radiation belts, as it lacks a stable, global dipole field. The Earth's atmosphere limits the belts' particles to regions above 200–1,000 km (124–620 miles), while the belts do not extend past 8 Earth radii (RE). The belts are confined to a volume which extends about 65° on either side of the celestial equator.
Research
The NASA Van Allen Probes mission aims at understanding (to the point of predictability) how populations of relativistic electrons and ions in space form or change in response to changes in solar activity and the solar wind.
NASA Institute for Advanced Concepts–funded studies have proposed magnetic scoops to collect antimatter that naturally occurs in the Van Allen belts of Earth, although only about 10 micrograms of antiprotons are estimated to exist in the entire belt.
The Van Allen Probes mission successfully launched on August 30, 2012. The primary mission was scheduled to last two years with expendables expected to last four. The probes were deactivated in 2019 after running out of fuel and are expected to deorbit during the 2030s. NASA's Goddard Space Flight Center manages the Living With a Star program—of which the Van Allen Probes were a project, along with Solar Dynamics Observatory (SDO). The Applied Physics Laboratory was responsible for the implementation and instrument management for the Van Allen Probes.
Radiation belts exist around other planets and moons in the solar system that have magnetic fields powerful enough to sustain them. To date, most of these radiation belts have been poorly mapped. The Voyager Program (namely Voyager 2) only nominally confirmed the existence of similar belts around Uranus and Neptune.
Geomagnetic storms can cause electron density to increase or decrease relatively quickly (i.e., approximately one day or less). Longer-timescale processes determine the overall configuration of the belts. After electron injection increases electron density, electron density is often observed to decay exponentially. Those decay time constants are called "lifetimes." Measurements from the Van Allen Probe B's Magnetic Electron Ion Spectrometer (MagEIS) show long electron lifetimes (i.e., longer than 100 days) in the inner belt; short electron lifetimes of around one or two days are observed in the "slot" between the belts; and energy-dependent electron lifetimes of roughly five to 20 days are found in the outer belt.
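Those lifetimes are the time constants τ of an assumed exponential decay, n(t) = n0·exp(−t/τ). As a rough illustrative sketch (with made-up numbers, not MagEIS data), τ can be estimated from a series of flux measurements by a linear fit to the logarithm:

```python
import numpy as np

def fit_lifetime(days, flux):
    """Estimate the decay lifetime tau (in days) assuming flux ~ exp(-t / tau),
    by fitting a straight line to log(flux) versus time."""
    slope, _intercept = np.polyfit(days, np.log(flux), 1)
    return -1.0 / slope

# Synthetic example: a 10-day lifetime plus a little measurement noise.
rng = np.random.default_rng(0)
t = np.arange(0, 30, 2.0)                      # days since the injection event
flux = 1e6 * np.exp(-t / 10.0) * rng.normal(1.0, 0.02, t.size)
print(f"fitted lifetime ~ {fit_lifetime(t, flux):.1f} days")
```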
Inner belt
The inner Van Allen Belt extends typically from an altitude of 0.2 to 2 Earth radii (L values of 1.2 to 3) or to above the Earth. In certain cases, when solar activity is stronger or in geographical areas such as the South Atlantic Anomaly, the inner boundary may decline to roughly 200 km above the Earth's surface. The inner belt contains high concentrations of electrons in the range of hundreds of keV and energetic protons with energies exceeding 100 MeV—trapped by the relatively strong magnetic fields in the region (as compared to the outer belt).
It is thought that proton energies exceeding 50 MeV in the lower belts at lower altitudes are the result of the beta decay of neutrons created by cosmic ray collisions with nuclei of the upper atmosphere. The source of lower energy protons is believed to be proton diffusion, due to changes in the magnetic field during geomagnetic storms.
Due to the slight offset of the belts from Earth's geometric center, the inner Van Allen belt makes its closest approach to the surface at the South Atlantic Anomaly.
In March 2014, a pattern resembling "zebra stripes" was observed in the radiation belts by the Radiation Belt Storm Probes Ion Composition Experiment (RBSPICE) onboard Van Allen Probes. The initial theory proposed in 2014 was that—due to the tilt in Earth's magnetic field axis—the planet's rotation generated an oscillating, weak electric field that permeates through the entire inner radiation belt. A 2016 study instead concluded that the zebra stripes were an imprint of ionospheric winds on radiation belts.
Outer belt
The outer belt consists mainly of high-energy (0.1–10 MeV) electrons trapped by the Earth's magnetosphere. It is more variable than the inner belt, as it is more easily influenced by solar activity. It is almost toroidal in shape, beginning at an altitude of 3 Earth radii and extending to 10 Earth radii (RE)— above the Earth's surface. Its greatest intensity is usually around 4 to 5 RE. The outer electron radiation belt is mostly produced by inward radial diffusion and local acceleration due to transfer of energy from whistler-mode plasma waves to radiation belt electrons. Radiation belt electrons are also constantly removed by collisions with Earth's atmosphere, losses to the magnetopause, and their outward radial diffusion. The gyroradii of energetic protons would be large enough to bring them into contact with the Earth's atmosphere. Within this belt, the electrons have a high flux and at the outer edge (close to the magnetopause), where geomagnetic field lines open into the geomagnetic "tail", the flux of energetic electrons can drop to the low interplanetary levels within about —a decrease by a factor of 1,000.
In 2014, it was discovered that the inner edge of the outer belt is characterized by a very sharp transition, below which highly relativistic electrons (> 5MeV) cannot penetrate. The reason for this shield-like behavior is not well understood.
The trapped particle population of the outer belt is varied, containing electrons and various ions. Most of the ions are in the form of energetic protons, but a certain percentage are alpha particles and O+ oxygen ions—similar to those in the ionosphere but much more energetic. This mixture of ions suggests that ring current particles probably originate from more than one source.
The outer belt is larger than the inner belt, and its particle population fluctuates widely. Energetic (radiation) particle fluxes can increase and decrease dramatically in response to geomagnetic storms, which are themselves triggered by magnetic field and plasma disturbances produced by the Sun. The increases are due to storm-related injections and acceleration of particles from the tail of the magnetosphere. Another cause of variability of the outer belt particle populations is the wave-particle interactions with various plasma waves in a broad range of frequencies.
On February 28, 2013, a third radiation belt—consisting of high-energy ultrarelativistic charged particles—was reported to be discovered. In a news conference by NASA's Van Allen Probe team, it was stated that this third belt is a product of coronal mass ejection from the Sun. It has been represented as a separate creation which splits the Outer Belt, like a knife, on its outer side, and exists separately as a storage container of particles for a month's time, before merging once again with the Outer Belt.
The unusual stability of this third, transient belt has been explained as due to a 'trapping' by the Earth's magnetic field of ultrarelativistic particles as they are lost from the second, traditional outer belt. While the outer zone, which forms and disappears over a day, is highly variable due to interactions with the atmosphere, the ultrarelativistic particles of the third belt are thought not to scatter into the atmosphere, as they are too energetic to interact with atmospheric waves at low latitudes. This absence of scattering and the trapping allows them to persist for a long time, finally only being destroyed by an unusual event, such as the shock wave from the Sun.
Flux values
In the belts, at a given point, the flux of particles of a given energy decreases sharply with energy.
At the magnetic equator, electrons of energies exceeding 500 keV (resp. 5 MeV) have omnidirectional fluxes ranging from 1.2×10^6 (resp. 3.7×10^4) up to 9.4×10^9 (resp. 2×10^7) particles per square centimeter per second.
The proton belts contain protons with kinetic energies ranging from about 100 keV, which can penetrate 0.6 μm of lead, to over 400 MeV, which can penetrate 143 mm of lead.
Most published flux values for the inner and outer belts may not show the maximum probable flux densities that are possible in the belts. There is a reason for this discrepancy: the flux density and the location of the peak flux is variable, depending primarily on solar activity, and the number of spacecraft with instruments observing the belt in real time has been limited. The Earth has not experienced a solar storm of Carrington event intensity and duration, while spacecraft with the proper instruments have been available to observe the event.
Radiation levels in the belts would be dangerous to humans if they were exposed for an extended period of time. The Apollo missions minimised hazards for astronauts by sending spacecraft at high speeds through the thinner areas of the upper belts, bypassing inner belts completely, except for the Apollo 14 mission where the spacecraft traveled through the heart of the trapped radiation belts.
Antimatter confinement
In 2011, a study confirmed earlier speculation that the Van Allen belt could confine antiparticles. The Payload for Antimatter Matter Exploration and Light-nuclei Astrophysics (PAMELA) experiment detected levels of antiprotons orders of magnitude higher than are expected from normal particle decays while passing through the South Atlantic Anomaly. This suggests the Van Allen belts confine a significant flux of antiprotons produced by the interaction of the Earth's upper atmosphere with cosmic rays. The energy of the antiprotons has been measured in the range from 60 to 750 MeV.
Research funded by the NASA Institute for Advanced Concepts concluded that harnessing these antiprotons for spacecraft propulsion would be feasible. Researchers believed that this approach would have advantages over antiproton generation at CERN because collecting the particles in situ eliminates transportation losses and costs. Jupiter and Saturn are also possible sources, but the Earth belt is the most productive. Jupiter is less productive than might be expected due to magnetic shielding from cosmic rays of much of its atmosphere. In 2019, CMS announced that the construction of a device that would be capable of collecting these particles has already begun. NASA will use this device to collect these particles and transport them to institutes all around the world for further examination. These so-called "antimatter containers" could be used for industrial purposes as well in the future.
Implications for space travel
Spacecraft travelling beyond low Earth orbit enter the zone of radiation of the Van Allen belts. Beyond the belts, they face additional hazards from cosmic rays and solar particle events. A region between the inner and outer Van Allen belts lies at 2 to 4 Earth radii and is sometimes referred to as the "safe zone".
Solar cells, integrated circuits, and sensors can be damaged by radiation. Geomagnetic storms occasionally damage electronic components on spacecraft. Miniaturization and digitization of electronics and logic circuits have made satellites more vulnerable to radiation, as the total electric charge in these circuits is now small enough so as to be comparable with the charge of incoming ions. Electronics on satellites must be hardened against radiation to operate reliably. The Hubble Space Telescope, among other satellites, often has its sensors turned off when passing through regions of intense radiation. A satellite shielded by 3 mm of aluminium in an elliptic orbit passing the radiation belts will receive about 2,500 rem (25 Sv) per year. (For comparison, a full-body dose of 5 Sv is deadly.) Almost all radiation will be received while passing the inner belt.
The Apollo missions marked the first event where humans traveled through the Van Allen belts, which was one of several radiation hazards known by mission planners. The astronauts had low exposure in the Van Allen belts due to the short period of time spent flying through them.
Astronauts' overall exposure was actually dominated by solar particles once outside Earth's magnetic field. The total radiation received by the astronauts varied from mission-to-mission but was measured to be between 0.16 and 1.14 rads (1.6 and 11.4 mGy), much less than the standard of 5 rem (50 mSv) per year set by the United States Atomic Energy Commission for people who work with radioactivity.
Causes
It is generally understood that the inner and outer Van Allen belts result from different processes. The inner belt is mainly composed of energetic protons produced from the decay of so-called "albedo" neutrons, which are themselves the result of cosmic ray collisions in the upper atmosphere. The outer Van Allen belt consists mainly of electrons. They are injected from the geomagnetic tail following geomagnetic storms, and are subsequently energized through wave-particle interactions.
In the inner belt, particles that originate from the Sun are trapped in the Earth's magnetic field. Particles spiral along the magnetic lines of flux as they move "latitudinally" along those lines. As particles move toward the poles, the magnetic field line density increases, and their "latitudinal" velocity is slowed and can be reversed, deflecting the particles back towards the equatorial region, causing them to bounce back and forth between the Earth's poles. In addition to both spiralling around and moving along the flux lines, the electrons drift slowly in an eastward direction, while the protons drift westward.
The gap between the inner and outer Van Allen belts is sometimes called the "safe zone" or "safe slot", and is the location of medium Earth orbits. The gap is caused by the VLF radio waves, which scatter particles in pitch angle, which adds new ions to the atmosphere. Solar outbursts can also dump particles into the gap, but those drain out in a matter of days. The VLF radio waves were previously thought to be generated by turbulence in the radiation belts, but recent work by J.L. Green of the Goddard Space Flight Center compared maps of lightning activity collected by the Microlab 1 spacecraft with data on radio waves in the radiation-belt gap from the IMAGE spacecraft; the results suggest that the radio waves are actually generated by lightning within Earth's atmosphere. The generated radio waves strike the ionosphere at the correct angle to pass through only at high latitudes, where the lower ends of the gap approach the upper atmosphere. These results are still being debated in the scientific community.
Proposed removal
Draining the charged particles from the Van Allen belts would open up new orbits for satellites and make travel safer for astronauts.
High Voltage Orbiting Long Tether, or HiVOLT, is a concept proposed by Russian physicist V. V. Danilov and further refined by Robert P. Hoyt and Robert L. Forward for draining and removing the radiation fields of the Van Allen radiation belts that surround the Earth.
Another proposal for draining the Van Allen belts involves beaming very-low-frequency (VLF) radio waves from the ground into the Van Allen belts.
Draining radiation belts around other planets has also been proposed, for example, before exploring Europa, which orbits within Jupiter's radiation belt.
As of 2024, it remains uncertain if there are any negative unintended consequences to removing these radiation belts.
See also
Dipole model of the Earth's magnetic field
L-shell
List of artificial radiation belts
Space weather
Paramagnetism
Explanatory notes
Citations
External links
An explanation of the belts by David P. Stern and Mauricio Peredo
Background: Trapped particle radiation models—Introduction to the trapped radiation belts by SPENVIS
SPENVIS—Space Environment, Effects, and Education System—Gateway to the SPENVIS orbital dose calculation software
The Van Allen Probes Web Site Johns Hopkins University Applied Physics Laboratory
1958 in science
Articles containing video clips
Geomagnetism
Space physics
Space plasmas
Geoid
The geoid is the shape that the ocean surface would take under the influence of the gravity of Earth, including gravitational attraction and Earth's rotation, if other influences such as winds and tides were absent. This surface is extended through the continents (such as might be approximated with very narrow hypothetical canals). According to Gauss, who first described it, it is the "mathematical figure of the Earth", a smooth but irregular surface whose shape results from the uneven distribution of mass within and on the surface of Earth. It can be known only through extensive gravitational measurements and calculations. Despite being an important concept for almost 200 years in the history of geodesy and geophysics, it has been defined to high precision only since advances in satellite geodesy in the late 20th century.
The geoid is often expressed as a geoid undulation or geoidal height above a given reference ellipsoid, which is a slightly flattened sphere whose equatorial bulge is caused by the planet's rotation. Generally the geoidal height rises where the Earth's material is locally more dense and exerts greater gravitational force than the surrounding areas. The geoid in turn serves as a reference coordinate surface for various vertical coordinates, such as orthometric heights, geopotential heights, and dynamic heights (see Geodesy#Heights).
All points on a geoid surface have the same geopotential (the sum of gravitational potential energy and centrifugal potential energy). At this surface, apart from temporary tidal fluctuations, the force of gravity acts everywhere perpendicular to the geoid, meaning that plumb lines point perpendicular and bubble levels are parallel to the geoid.
Being an equigeopotential means the geoid corresponds to the free surface of water at rest (if only the Earth's gravity and rotational acceleration were at work); this is also a sufficient condition for a ball to remain at rest instead of rolling over the geoid.
Earth's gravity acceleration (the vertical derivative of geopotential) is thus non-uniform over the geoid.
Description
The geoid surface is irregular, unlike the reference ellipsoid (which is a mathematical idealized representation of the physical Earth as an ellipsoid), but is considerably smoother than Earth's physical surface. Although the "ground" of the Earth has excursions on the order of +8,800 m (Mount Everest) and −11,000 m (Marianas Trench), the geoid's deviation from an ellipsoid ranges from +85 m (Iceland) to −106 m (southern India), less than 200 m total.
If the ocean were of constant density and undisturbed by tides, currents or weather, its surface would resemble the geoid. The permanent deviation between the geoid and mean sea level is called ocean surface topography. If the continental land masses were crisscrossed by a series of tunnels or canals, the sea level in those canals would also very nearly coincide with the geoid. Geodesists are able to derive the heights of continental points above the geoid by spirit leveling.
Being an equipotential surface, the geoid is, by definition, a surface upon which the force of gravity is perpendicular everywhere, apart from temporary tidal fluctuations. This means that when traveling by ship, one does not notice the undulation of the geoid; neglecting tides, the local vertical (plumb line) is always perpendicular to the geoid and the local horizon tangential to it. Likewise, spirit levels will always be parallel to the geoid.
Simplified example
Earth's gravitational field is not uniform. An oblate spheroid is typically used as the idealized Earth, but even if the Earth were spherical and did not rotate, the strength of gravity would not be the same everywhere because density varies throughout the planet. This is due to magma distributions, the density and weight of different geological compositions in the Earth's crust, mountain ranges, deep sea trenches, crust compaction due to glaciers, and so on.
If that sphere were then covered in water, the water would not be the same height everywhere. Instead, the water level would be higher or lower with respect to Earth's center, depending on the integral of the strength of gravity from the center of the Earth to that location. The geoid level coincides with where the water would be. Generally the geoid rises where the Earth's material is locally more dense, exerts greater gravitational force, and pulls more water from the surrounding area.
Formulation
The geoid undulation (also known as geoid height or geoid anomaly), N, is the height of the geoid relative to a given ellipsoid of reference.
The undulation is not standardized, as different countries use different mean sea levels as reference, but most commonly refers to the EGM96 geoid.
In maps and common use, the height over the mean sea level (such as orthometric height, H) is used to indicate the height of elevations while the ellipsoidal height, h, results from the GPS system and similar GNSS:
h = H + N,  or equivalently  N = h − H.
(An analogous relationship exists between normal heights and the quasigeoid, which disregards local density variations.)
In practice, many handheld GPS receivers interpolate N in a pre-computed geoid map (a lookup table).
So a GPS receiver on a ship may, during the course of a long voyage, indicate height variations, even though the ship will always be at sea level (neglecting the effects of tides). That is because GPS satellites, orbiting about the center of gravity of the Earth, can measure heights only relative to a geocentric reference ellipsoid. To obtain one's orthometric height, a raw GPS reading must be corrected. Conversely, height determined by spirit leveling from a tide gauge, as in traditional land surveying, is closer to orthometric height. Modern GPS receivers have a grid implemented in their software by which they obtain, from the current position, the height of the geoid (e.g., the EGM96 geoid) over the World Geodetic System (WGS) ellipsoid. They are then able to correct the height above the WGS ellipsoid to the height above the EGM96 geoid. When height is not zero on a ship, the discrepancy is due to other factors such as ocean tides, atmospheric pressure (meteorological effects), local sea surface topography, and measurement uncertainties.
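A minimal sketch of that correction (the geoid-undulation grid and values here are invented placeholders, not an actual EGM96 lookup): the receiver bilinearly interpolates N from a pre-computed grid and subtracts it from the ellipsoidal height h to report an orthometric height H = h − N.

```python
import numpy as np

def interpolate_undulation(lat, lon, grid_lats, grid_lons, grid_N):
    """Bilinear interpolation of the geoid undulation N (metres) from a
    pre-computed lat/lon grid, as a handheld receiver's lookup table would do."""
    i = min(max(np.searchsorted(grid_lats, lat, side="right") - 1, 0), len(grid_lats) - 2)
    j = min(max(np.searchsorted(grid_lons, lon, side="right") - 1, 0), len(grid_lons) - 2)
    t = (lat - grid_lats[i]) / (grid_lats[i + 1] - grid_lats[i])
    u = (lon - grid_lons[j]) / (grid_lons[j + 1] - grid_lons[j])
    return ((1 - t) * (1 - u) * grid_N[i, j] + t * (1 - u) * grid_N[i + 1, j]
            + (1 - t) * u * grid_N[i, j + 1] + t * u * grid_N[i + 1, j + 1])

# Toy 2x2 "geoid map" (values in metres are placeholders, not EGM96 data).
grid_lats = np.array([50.0, 51.0])
grid_lons = np.array([4.0, 5.0])
grid_N = np.array([[45.1, 45.4],
                   [45.9, 46.3]])

h_ellipsoidal = 100.0                                   # from GNSS, metres
N = interpolate_undulation(50.5, 4.5, grid_lats, grid_lons, grid_N)
H_orthometric = h_ellipsoidal - N                       # height above the geoid
print(f"N = {N:.2f} m, orthometric height H = {H_orthometric:.2f} m")
```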
Determination
The undulation of the geoid N is closely related to the disturbing potential T according to Bruns' formula (named after Heinrich Bruns):
N = T/γ
where γ is the force of normal gravity, computed from the normal field potential U.
Another way of determining N is using values of the gravity anomaly Δg, the differences between true and normal reference gravity, as per Stokes' formula (or Stokes' integral), published in 1849 by George Gabriel Stokes:
N = (R/(4π γ_0)) ∬_σ Δg S(ψ) dσ
The integral kernel S, called Stokes function, was derived by Stokes in closed analytical form.
Note that determining N anywhere on Earth by this formula requires Δg to be known everywhere on Earth, including oceans, polar areas, and deserts. For terrestrial gravimetric measurements this is a near-impossibility, in spite of close international co-operation within the International Association of Geodesy (IAG), e.g., through the International Gravity Bureau (BGI, Bureau Gravimétrique International).
Another approach for geoid determination is to combine multiple information sources: not just terrestrial gravimetry, but also satellite geodetic data on the figure of the Earth, from analysis of satellite orbital perturbations, and lately from satellite gravity missions such as GOCE and GRACE. In such combination solutions, the low-resolution part of the geoid solution is provided by the satellite data, while a 'tuned' version of the above Stokes equation is used to calculate the high-resolution part, from terrestrial gravimetric data from a neighbourhood of the evaluation point only.
Calculating the undulation is mathematically challenging.
The precise geoid solution by Petr Vaníček and co-workers improved on the Stokesian approach to geoid computation. Their solution enables millimetre-to-centimetre accuracy in geoid computation, an order-of-magnitude improvement from previous classical solutions.
Geoid undulations display uncertainties which can be estimated by using several methods, e.g., least-squares collocation (LSC), fuzzy logic, artificial neural networks, radial basis functions (RBF), and geostatistical techniques. Geostatistical approach has been defined as the most-improved technique in prediction of geoid undulation.
Relationship to mass density
Variations in the height of the geoidal surface are related to anomalous density distributions within the Earth. Geoid measures thus help understanding the internal structure of the planet. Synthetic calculations show that the geoidal signature of a thickened crust (for example, in orogenic belts produced by continental collision) is positive, opposite to what should be expected if the thickening affects the entire lithosphere. Mantle convection also changes the shape of the geoid over time.
The surface of the geoid is higher than the reference ellipsoid wherever there is a positive gravity anomaly or negative disturbing potential (mass excess) and lower than the reference ellipsoid wherever there is a negative gravity anomaly or positive disturbing potential (mass deficit).
This relationship can be understood by recalling that gravity potential is defined so that it has negative values and is inversely proportional to distance from the body.
So, while a mass excess will strengthen the gravity acceleration, it will decrease the gravity potential. As a consequence, the geoid's defining equipotential surface will be found displaced away from the mass excess.
Analogously, a mass deficit will weaken the gravity pull but will increase the geopotential at a given distance, causing the geoid to move towards the mass deficit.
The presence of a localized inclusion in the background medium will rotate the gravity acceleration vectors slightly towards or away from a denser or lighter body, respectively, causing a bump or dimple in the equipotential surface.
The largest absolute deviation can be found in the Indian Ocean Geoid Low, 106 meters below the average sea level.
Another large feature is the North Atlantic Geoid High (or North Atlantic Geoid Swell), caused in part by the weight of ice cover over North America and northern Europe in the Late Cenozoic Ice Age.
Temporal change
Recent satellite missions, such as the Gravity Field and Steady-State Ocean Circulation Explorer (GOCE) and
GRACE, have enabled the study of time-variable geoid signals. The first products based on GOCE satellite data became available online in June 2010, through the European Space Agency. ESA launched the satellite in March 2009 on a mission to map Earth's gravity with unprecedented accuracy and spatial resolution. On 31 March 2011, a new geoid model was unveiled at the Fourth International GOCE User Workshop hosted at the Technical University of Munich, Germany. Studies using the time-variable geoid computed from GRACE data have provided information on global hydrologic cycles, mass balances of ice sheets, and postglacial rebound. From postglacial rebound measurements, time-variable GRACE data can be used to deduce the viscosity of Earth's mantle.
Spherical harmonics representation
Spherical harmonics are often used to approximate the shape of the geoid. The current best such set of spherical harmonic coefficients is EGM2020 (Earth Gravitational Model 2020), determined in an international collaborative project led by the National Imagery and Mapping Agency (now the National Geospatial-Intelligence Agency, or NGA). The mathematical description of the non-rotating part of the potential function in this model is:
V(φ, λ, r) = (GM/r) [1 + Σ_{n=2}^{n_max} (a/r)^n Σ_{m=0}^{n} P̄_nm(sin φ) (C̄_nm cos mλ + S̄_nm sin mλ)],
where φ and λ are geocentric (spherical) latitude and longitude respectively, P̄_nm are the fully normalized associated Legendre polynomials of degree n and order m, and C̄_nm and S̄_nm are the numerical coefficients of the model based on measured data. The above equation describes the Earth's gravitational potential V, not the geoid itself, at location (φ, λ, r), the co-ordinate r being the geocentric radius, i.e., distance from the Earth's centre. The geoid is a particular equipotential surface, and is somewhat involved to compute. The gradient of this potential also provides a model of the gravitational acceleration. The most commonly used EGM96 contains a full set of coefficients to degree and order 360 (i.e., n_max = 360), describing details in the global geoid as small as 55 km (or 110 km, depending on the definition of resolution). The number of coefficients, C̄_nm and S̄_nm, can be determined by first observing in the equation for V that for a specific value of n there are two coefficients for every value of m except for m = 0. There is only one coefficient when m = 0, since then sin(mλ) = 0 and S̄_n0 plays no role. There are thus (2n + 1) coefficients for every value of n. Using these facts and the formula Σ_{l=1}^{L} l = L(L + 1)/2, it follows that the total number of coefficients is given by
Σ_{n=2}^{n_max} (2n + 1) = n_max(n_max + 2) − 3 = 130,317
using the EGM96 value of n_max = 360.
For many applications, the complete series is unnecessarily complex and is truncated after a few (perhaps several dozen) terms.
Still, even higher resolution models have been developed. Many of the authors of EGM96 have published EGM2008. It incorporates much of the new satellite gravity data (e.g., the Gravity Recovery and Climate Experiment), and supports up to degree and order 2160 (1/6 of a degree, requiring over 4 million coefficients), with additional coefficients extending to degree 2190 and order 2159. EGM2020 is the international follow-up that was originally scheduled for 2020 (still unreleased in 2024), containing the same number of harmonics generated with better data.
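A small sketch of the coefficient count quoted above (a direct sum, with the closed form n_max(n_max + 2) − 3 as a cross-check; the degree values for the different models are taken from the text):

```python
def coefficient_count(n_max):
    """Number of (C, S) coefficients for a model complete to degree and
    order n_max: 2n + 1 per degree n, summed for n = 2 .. n_max."""
    return sum(2 * n + 1 for n in range(2, n_max + 1))

for n_max in (360, 2160):
    closed_form = n_max * (n_max + 2) - 3
    assert coefficient_count(n_max) == closed_form
    print(n_max, coefficient_count(n_max))
# 360  -> 130317   (EGM96)
# 2160 -> 4669917  (order of the "over 4 million" figure quoted for EGM2008)
```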
See also
Deflection of the vertical
Geodetic datum
Geopotential
International Terrestrial Reference Frame
Physical geodesy
Planetary geoid
Areoid (Mars' geoid)
Selenoid (Moon's geoid)
References
Further reading
External links
NGA webpage on Earth Gravitational Models
NASA webpage on EGM96
NOAA webpage on Geoid Models
International Centre for Global Earth Models (ICGEM)
International Service for the Geoid (ISG)
Gravimetry
Geodesy
Vertical datums
Vertical position
Maxwell–Boltzmann statistics
In statistical mechanics, Maxwell–Boltzmann statistics describes the distribution of classical material particles over various energy states in thermal equilibrium. It is applicable when the temperature is high enough or the particle density is low enough to render quantum effects negligible.
The expected number of particles with energy ε_i for Maxwell–Boltzmann statistics is
⟨N_i⟩ = g_i / e^((ε_i − μ)/(kT)) = (N/Z) g_i e^(−ε_i/(kT)),
where:
ε_i is the energy of the i-th energy level,
⟨N_i⟩ is the average number of particles in the set of states with energy ε_i,
g_i is the degeneracy of energy level i, that is, the number of states with energy ε_i which may nevertheless be distinguished from each other by some other means,
μ is the chemical potential,
k is the Boltzmann constant,
T is absolute temperature,
N is the total number of particles: N = Σ_i ⟨N_i⟩,
Z is the partition function: Z = Σ_i g_i e^(−ε_i/(kT)),
e is Euler's number.
Equivalently, the number of particles is sometimes expressed as
⟨N_i⟩ = e^(−(ε_i − μ)/(kT)) = (N/Z) e^(−ε_i/(kT)),
where the index i now specifies a particular state rather than the set of all states with energy ε_i, and Z = Σ_i e^(−ε_i/(kT)).
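As a numerical illustration (a sketch with arbitrary energy levels and degeneracies, not from the article), the first form above can be evaluated directly: compute the partition function Z and distribute N particles over the levels in proportion to g_i·exp(−ε_i/kT).

```python
import numpy as np

def mb_occupations(energies_eV, degeneracies, N, T):
    """Average occupation <N_i> = (N / Z) * g_i * exp(-eps_i / kT) for
    Maxwell-Boltzmann statistics. Energies in eV, temperature in kelvin."""
    k_eV = 8.617333262e-5                     # Boltzmann constant in eV/K
    eps = np.asarray(energies_eV, dtype=float)
    g = np.asarray(degeneracies, dtype=float)
    boltzmann_factors = g * np.exp(-eps / (k_eV * T))
    Z = boltzmann_factors.sum()               # partition function
    return N * boltzmann_factors / Z

# Three arbitrary levels at 0, 0.05 and 0.10 eV with degeneracies 1, 3, 5.
occ = mb_occupations([0.0, 0.05, 0.10], [1, 3, 5], N=1000, T=300)
print(occ, occ.sum())                         # occupations sum back to N = 1000
```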
History
Maxwell–Boltzmann statistics grew out of the Maxwell–Boltzmann distribution, most likely as a distillation of the underlying technique. The distribution was first derived by Maxwell in 1860 on heuristic grounds. Boltzmann later, in the 1870s, carried out significant investigations into the physical origins of this distribution. The distribution can be derived on the ground that it maximizes the entropy of the system.
Applicability
Maxwell–Boltzmann statistics is used to derive the Maxwell–Boltzmann distribution of an ideal gas. However, it can also be used to extend that distribution to particles with a different energy–momentum relation, such as relativistic particles (resulting in Maxwell–Jüttner distribution), and to other than three-dimensional spaces.
Maxwell–Boltzmann statistics is often described as the statistics of "distinguishable" classical particles. In other words, the configuration of particle A in state 1 and particle B in state 2 is different from the case in which particle B is in state 1 and particle A is in state 2. This assumption leads to the proper (Boltzmann) statistics of particles in the energy states, but yields non-physical results for the entropy, as embodied in the Gibbs paradox.
At the same time, there are no real particles that have the characteristics required by Maxwell–Boltzmann statistics. Indeed, the Gibbs paradox is resolved if we treat all particles of a certain type (e.g., electrons, protons, photons, etc.) as principally indistinguishable. Once this assumption is made, the particle statistics change. The change in entropy in the entropy of mixing example may be viewed as an example of a non-extensive entropy resulting from the distinguishability of the two types of particles being mixed.
Quantum particles are either bosons (following Bose–Einstein statistics) or fermions (subject to the Pauli exclusion principle, following instead Fermi–Dirac statistics). Both of these quantum statistics approach the Maxwell–Boltzmann statistics in the limit of high temperature and low particle density.
Derivations
Maxwell–Boltzmann statistics can be derived in various statistical mechanical thermodynamic ensembles:
The grand canonical ensemble, exactly.
The canonical ensemble, exactly.
The microcanonical ensemble, but only in the thermodynamic limit.
In each case it is necessary to assume that the particles are non-interacting, and that multiple particles can occupy the same state and do so independently.
Derivation from microcanonical ensemble
Suppose we have a container with a huge number of very small particles all with identical physical characteristics (such as mass, charge, etc.). Let's refer to this as the system. Assume that though the particles have identical properties, they are distinguishable. For example, we might identify each particle by continually observing their trajectories, or by placing a marking on each one, e.g., drawing a different number on each one as is done with lottery balls.
The particles are moving inside that container in all directions with great speed. Because the particles are speeding around, they possess some energy. The Maxwell–Boltzmann distribution is a mathematical function that describes how many particles in the container have a certain energy. More precisely, the Maxwell–Boltzmann distribution gives the non-normalized probability (this means that the probabilities do not add up to 1) that the state corresponding to a particular energy is occupied.
In general, there may be many particles with the same amount of energy ε. Let the number of particles with energy ε_1 be N_1, the number of particles possessing another energy ε_2 be N_2, and so forth for all the possible energies {ε_i | i = 1, 2, 3, …}. To describe this situation, we say that N_i is the occupation number of the energy level i. If we know all the occupation numbers {N_i | i = 1, 2, 3, …}, then we know the total energy of the system. However, because we can distinguish between which particles are occupying each energy level, the set of occupation numbers does not completely describe the state of the system. To completely describe the state of the system, or the microstate, we must specify exactly which particles are in each energy level. Thus when we count the number of possible states of the system, we must count each and every microstate, and not just the possible sets of occupation numbers.
To begin with, assume that there is only one state at each energy level (there is no degeneracy). What follows next is a bit of combinatorial thinking which has little to do in accurately describing the reservoir of particles. For instance, let's say there is a total of k boxes labelled 1, 2, …, k. With the concept of combination, we could calculate how many ways there are to arrange N balls into the set of boxes, where the order of balls within each box isn't tracked. First, we select N_1 balls from a total of N balls to place into box 1, and continue to select for each box from the remaining balls, ensuring that every ball is placed in one of the boxes. The total number of ways that the balls can be arranged is
W = [N!/(N_1!(N − N_1)!)] × [(N − N_1)!/(N_2!(N − N_1 − N_2)!)] × ⋯ × [(N − N_1 − ⋯ − N_{k−1})!/(N_k! 0!)]
As every ball has been placed into a box, N_1 + N_2 + ⋯ + N_k = N, and we simplify the expression as
W = N!/(N_1! N_2! ⋯ N_k!)
This is just the multinomial coefficient, the number of ways of arranging N items into k boxes, the $l$-th box holding $N_l$ items, ignoring the permutation of items in each box.
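As a quick sanity check on this counting argument, here is a short Python sketch (my own illustration, with made-up occupation numbers); it compares the multinomial-coefficient formula with a brute-force enumeration of all assignments of labelled balls to boxes.

```python
from itertools import product
from math import factorial

def multinomial(ns):
    """Number of ways to arrange sum(ns) labelled balls so box l holds ns[l] balls."""
    w = factorial(sum(ns))
    for n in ns:
        w //= factorial(n)
    return w

def brute_force(ns):
    """Enumerate every assignment of labelled balls to boxes and count those
    matching the occupation numbers ns."""
    n_balls, n_boxes = sum(ns), len(ns)
    count = 0
    for assignment in product(range(n_boxes), repeat=n_balls):
        if all(assignment.count(box) == ns[box] for box in range(n_boxes)):
            count += 1
    return count

occupations = (2, 1, 3)          # N_a = 2, N_b = 1, N_c = 3  (N = 6 balls, k = 3 boxes)
print(multinomial(occupations))  # 60
print(brute_force(occupations))  # 60, agrees with W = N!/(N_a! N_b! N_c!)
```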
Now, consider the case where there is more than one way to put $N_i$ particles in the box (i.e. taking the degeneracy problem into consideration). If the $i$-th box has a "degeneracy" of $g_i$, that is, it has $g_i$ "sub-boxes" ($g_i$ boxes with the same energy $\varepsilon_i$; these states/boxes with the same energy are called degenerate states), such that any way of filling the $i$-th box where the number in the sub-boxes is changed is a distinct way of filling the box, then the number of ways of filling the $i$-th box must be increased by the number of ways of distributing the $N_i$ objects in the "sub-boxes". The number of ways of placing $N_i$ distinguishable objects in $g_i$ "sub-boxes" is $g_i^{N_i}$ (the first object can go into any of the $g_i$ boxes, the second object can also go into any of the $g_i$ boxes, and so on). Thus the number of ways $W$ that a total of $N$ particles can be classified into energy levels according to their energies, while each level $i$ has $g_i$ distinct states such that the $i$-th level accommodates $N_i$ particles, is:
$$W = N! \prod_i \frac{g_i^{N_i}}{N_i!}.$$
This is the form for W first derived by Boltzmann. Boltzmann's fundamental equation $S = k\,\ln W$ relates the thermodynamic entropy S to the number of microstates W, where k is the Boltzmann constant. It was pointed out by Gibbs, however, that the above expression for W does not yield an extensive entropy, and is therefore faulty. This problem is known as the Gibbs paradox. The problem is that the particles considered by the above equation are not indistinguishable. In other words, for two particles (A and B) in two energy sublevels the population represented by [A,B] is considered distinct from the population [B,A], while for indistinguishable particles they are not. If we carry out the argument for indistinguishable particles, we are led to the Bose–Einstein expression for W:
$$W = \prod_i \frac{(N_i + g_i - 1)!}{N_i!\,(g_i - 1)!}.$$
The Maxwell–Boltzmann distribution follows from this Bose–Einstein distribution for temperatures well above absolute zero, implying that $g_i \gg 1$. The Maxwell–Boltzmann distribution also requires low density, implying that $g_i \gg N_i$. Under these conditions, we may use Stirling's approximation for the factorial,
$$N! \approx N^N e^{-N},$$
to write:
$$W \approx \prod_i \frac{(N_i + g_i)^{N_i + g_i}}{N_i^{N_i}\, g_i^{g_i}}.$$
Using the fact that $(1 + N_i/g_i)^{g_i} \approx e^{N_i}$ for $g_i \gg N_i$, we can again use Stirling's approximation to write:
$$W \approx \prod_i \frac{g_i^{N_i}}{N_i!}.$$
This is essentially a division by N! of Boltzmann's original expression for W, and this correction is referred to as correct Boltzmann counting.
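To illustrate how good Stirling's approximation already is for modest N, here is a small numerical check (my own illustrative sketch, not part of the original text); it compares ln N! with the leading terms N ln N − N used above.

```python
import math

# Compare ln N! with the Stirling estimate N ln N - N used in the derivation.
for n in (10, 100, 1000, 10000):
    exact = math.lgamma(n + 1)          # ln(N!)
    stirling = n * math.log(n) - n      # leading Stirling terms
    rel_err = (exact - stirling) / exact
    print(f"N={n:6d}  ln N! = {exact:12.2f}  N ln N - N = {stirling:12.2f}  rel. error = {rel_err:.2e}")
```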
We wish to find the $N_i$ for which the function $W$ is maximized, while considering the constraint that there is a fixed number of particles $\left(N = \sum_i N_i\right)$ and a fixed energy $\left(E = \sum_i N_i \varepsilon_i\right)$ in the container. The maxima of $W$ and $\ln W$ are achieved by the same values of $N_i$ and, since it is easier to accomplish mathematically, we will maximize the latter function instead. We constrain our solution using Lagrange multipliers, forming the function:
$$f(N_1, N_2, \ldots, N_n) = \ln W + \alpha\Big(N - \sum_i N_i\Big) + \beta\Big(E - \sum_i N_i \varepsilon_i\Big).$$
Finally, writing out $\ln W$ with Stirling's approximation,
$$f(N_1, \ldots, N_n) = \alpha N + \beta E + \sum_i \left(N_i \ln g_i - N_i \ln N_i + N_i - \alpha N_i - \beta \varepsilon_i N_i\right).$$
In order to maximize the expression above we apply Fermat's theorem (stationary points), according to which local extrema, if they exist, must be at critical points, where the partial derivatives vanish:
$$\frac{\partial f}{\partial N_i} = \ln g_i - \ln N_i - \alpha - \beta \varepsilon_i = 0.$$
By solving the equations above we arrive at an expression for $N_i$:
$$N_i = g_i\, e^{-\alpha - \beta \varepsilon_i}.$$
Substituting this expression for $N_i$ into the equation for $\ln W$ and assuming that $N \gg 1$ yields:
$$\ln W = \alpha N + \beta E + N,$$
or, rearranging:
$$E = \frac{1}{\beta}\ln W - \frac{\alpha}{\beta} N - \frac{1}{\beta} N.$$
Boltzmann realized that this is just an expression of the Euler-integrated fundamental equation of thermodynamics. Identifying E as the internal energy, the Euler-integrated fundamental equation states that:
$$E = TS - PV + \mu N,$$
where T is the temperature, P is pressure, V is volume, and μ is the chemical potential. Boltzmann's equation is the realization that the entropy is proportional to $\ln W$, with the constant of proportionality being the Boltzmann constant. Using the ideal gas equation of state (PV = NkT), it follows immediately that $\beta = 1/kT$ and $\alpha = -\mu/kT$, so that the populations may now be written:
$$N_i = g_i\, e^{(\mu - \varepsilon_i)/kT}.$$
Note that the above formula is sometimes written:
$$N_i = z\, g_i\, e^{-\varepsilon_i/kT},$$
where $z = e^{\mu/kT}$ is the absolute activity.
Alternatively, we may use the fact that
$$\sum_i N_i = N$$
to obtain the population numbers as
$$N_i = N\,\frac{g_i\, e^{-\varepsilon_i/kT}}{Z},$$
where Z is the partition function defined by:
$$Z = \sum_i g_i\, e^{-\varepsilon_i/kT}.$$
In an approximation where $\varepsilon_i$ is considered to be a continuous variable, the Thomas–Fermi approximation yields a continuous degeneracy $g$ proportional to $\sqrt{\varepsilon}$, so that:
$$\frac{N(\varepsilon)\,d\varepsilon}{N} = \frac{\sqrt{\varepsilon}\, e^{-\varepsilon/kT}\, d\varepsilon}{\int_0^\infty \sqrt{\varepsilon}\, e^{-\varepsilon/kT}\, d\varepsilon},$$
which is just the Maxwell–Boltzmann distribution for the energy.
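A minimal numerical sketch of the discrete result above (the energy levels, degeneracies, temperature, and particle number below are invented purely for illustration): it evaluates $N_i = N g_i e^{-\varepsilon_i/kT}/Z$ for a toy level scheme and checks that the populations sum to N.

```python
import math

k_B = 1.380649e-23  # J/K

def mb_populations(N, energies, degeneracies, T):
    """Maxwell-Boltzmann occupation numbers N_i = N g_i exp(-e_i/kT) / Z."""
    boltzmann = [g * math.exp(-e / (k_B * T)) for e, g in zip(energies, degeneracies)]
    Z = sum(boltzmann)                      # partition function
    return [N * b / Z for b in boltzmann]

# Hypothetical three-level system (energies in joules), 1e6 particles at 300 K.
energies = [0.0, 2.0e-21, 5.0e-21]
degeneracies = [1, 3, 5]
pops = mb_populations(1e6, energies, degeneracies, 300.0)
print(pops)              # most particles sit in the low-lying levels
print(sum(pops))         # ~1e6: the populations add up to the total particle number
```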
Derivation from canonical ensemble
In the above discussion, the Boltzmann distribution function was obtained via directly analysing the multiplicities of a system. Alternatively, one can make use of the canonical ensemble. In a canonical ensemble, a system is in thermal contact with a reservoir. While energy is free to flow between the system and the reservoir, the reservoir is thought to have infinitely large heat capacity as to maintain constant temperature, T, for the combined system.
In the present context, our system is assumed to have the energy levels $\varepsilon_i$ with degeneracies $g_i$. As before, we would like to calculate the probability that our system has energy $\varepsilon_i$.
If our system is in state $s_1$, then there would be a corresponding number of microstates available to the reservoir. Call this number $\Omega_R(s_1)$. By assumption, the combined system (of the system we are interested in and the reservoir) is isolated, so all microstates are equally probable. Therefore, for instance, if $\Omega_R(s_1) = 2\,\Omega_R(s_2)$, we can conclude that our system is twice as likely to be in state $s_1$ than $s_2$. In general, if $P(s_i)$ is the probability that our system is in state $s_i$,
$$\frac{P(s_1)}{P(s_2)} = \frac{\Omega_R(s_1)}{\Omega_R(s_2)}.$$
Since the entropy of the reservoir is $S_R = k \ln \Omega_R$, the above becomes
$$\frac{P(s_1)}{P(s_2)} = \frac{e^{S_R(s_1)/k}}{e^{S_R(s_2)/k}} = e^{(S_R(s_1) - S_R(s_2))/k}.$$
Next we recall the thermodynamic identity (from the first law of thermodynamics):
$$dS_R = \frac{1}{T}\left(dU_R + P\, dV_R - \mu\, dN_R\right).$$
In a canonical ensemble, there is no exchange of particles, so the $dN_R$ term is zero. Similarly, $dV_R = 0$. This gives
$$S_R(s_1) - S_R(s_2) = \frac{1}{T}\big(U_R(s_1) - U_R(s_2)\big) = -\frac{1}{T}\big(E(s_1) - E(s_2)\big),$$
where $U_R(s_i)$ and $E(s_i)$ denote the energies of the reservoir and of the system at $s_i$, respectively. For the second equality we have used the conservation of energy. Substituting into the first equation relating $P(s_1)$ and $P(s_2)$:
$$\frac{P(s_1)}{P(s_2)} = \frac{e^{-E(s_1)/kT}}{e^{-E(s_2)/kT}},$$
which implies, for any state s of the system
$$P(s) = \frac{1}{Z}\, e^{-E(s)/kT},$$
where Z is an appropriately chosen "constant" to make total probability 1. (Z is constant provided that the temperature T is invariant.) It is given by
$$Z = \sum_s e^{-E(s)/kT},$$
where the index s runs through all microstates of the system. Z is sometimes called the Boltzmann sum over states (or "Zustandssumme" in the original German). If we index the summation via the energy eigenvalues instead of all possible states, degeneracy must be taken into account. The probability of our system having energy $\varepsilon_i$ is simply the sum of the probabilities of all corresponding microstates:
$$P(\varepsilon_i) = \frac{g_i}{Z}\, e^{-\varepsilon_i/kT},$$
where, with obvious modification,
$$Z = \sum_i g_i\, e^{-\varepsilon_i/kT};$$
this is the same result as before.
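The sketch below (an illustrative example with invented microstate energies) makes this concrete: summing the Boltzmann factor over individual microstates gives the same energy probabilities as summing over energy levels weighted by their degeneracies.

```python
import math

kT = 1.0  # work in units where kT = 1

# Hypothetical microstates: two share energy 0.5, three share energy 1.2,
# so the level degeneracies are g = {0.0: 1, 0.5: 2, 1.2: 3}.
microstate_energies = [0.0, 0.5, 0.5, 1.2, 1.2, 1.2]

# Sum over microstates.
Z_states = sum(math.exp(-E / kT) for E in microstate_energies)
P_by_states = {}
for E in microstate_energies:
    P_by_states[E] = P_by_states.get(E, 0.0) + math.exp(-E / kT) / Z_states

# Sum over energy levels with degeneracies.
levels = {0.0: 1, 0.5: 2, 1.2: 3}
Z_levels = sum(g * math.exp(-E / kT) for E, g in levels.items())
P_by_levels = {E: g * math.exp(-E / kT) / Z_levels for E, g in levels.items()}

print(P_by_states)
print(P_by_levels)   # identical: P(eps_i) = g_i e^{-eps_i/kT} / Z
```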
Comments on this derivation:
Notice that in this formulation, the initial assumption "... suppose the system has total N particles..." is dispensed with. Indeed, the number of particles possessed by the system plays no role in arriving at the distribution. Rather, how many particles would occupy states with energy $\varepsilon_i$ follows as an easy consequence.
What has been presented above is essentially a derivation of the canonical partition function. As one can see by comparing the definitions, the Boltzmann sum over states is equal to the canonical partition function.
Exactly the same approach can be used to derive Fermi–Dirac and Bose–Einstein statistics. However, there one would replace the canonical ensemble with the grand canonical ensemble, since there is exchange of particles between the system and the reservoir. Also, the system one considers in those cases is a single particle state, not a particle. (In the above discussion, we could have assumed our system to be a single atom.)
See also
Bose–Einstein statistics
Fermi–Dirac statistics
Boltzmann factor
Notes
References
Bibliography
Carter, Ashley H., "Classical and Statistical Thermodynamics", Prentice–Hall, Inc., 2001, New Jersey.
Raj Pathria, "Statistical Mechanics", Butterworth–Heinemann, 1996.
Concepts in physics
James Clerk Maxwell
Ludwig Boltzmann
Rayleigh–Jeans law
In physics, the Rayleigh–Jeans law is an approximation to the spectral radiance of electromagnetic radiation as a function of wavelength from a black body at a given temperature through classical arguments. For wavelength $\lambda$, it is
$$B_\lambda(T) = \frac{2 c k_\mathrm{B} T}{\lambda^4},$$
where $B_\lambda$ is the spectral radiance (the power emitted per unit emitting area, per steradian, per unit wavelength), $c$ is the speed of light, $k_\mathrm{B}$ is the Boltzmann constant, and $T$ is the temperature in kelvins. For frequency $\nu$, the expression is instead
$$B_\nu(T) = \frac{2 \nu^2 k_\mathrm{B} T}{c^2}.$$
The Rayleigh–Jeans law agrees with experimental results at large wavelengths (low frequencies) but strongly disagrees at short wavelengths (high frequencies). This inconsistency between observations and the predictions of classical physics is commonly known as the ultraviolet catastrophe. Planck's law, which gives the correct radiation at all frequencies, has the Rayleigh–Jeans law as its low-frequency limit.
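As a concrete illustration (my own sketch; the temperature and wavelengths are arbitrary choices), the following evaluates the wavelength form of the law and shows the $\lambda^{-4}$ growth toward short wavelengths that produces the ultraviolet catastrophe.

```python
k_B = 1.380649e-23   # J/K
c = 2.99792458e8     # m/s

def rayleigh_jeans_wavelength(lam, T):
    """Spectral radiance B_lambda(T) = 2 c k T / lambda^4  (W per m^2 per sr per m)."""
    return 2.0 * c * k_B * T / lam**4

def rayleigh_jeans_frequency(nu, T):
    """Spectral radiance B_nu(T) = 2 nu^2 k T / c^2  (W per m^2 per sr per Hz)."""
    return 2.0 * nu**2 * k_B * T / c**2

T = 300.0  # kelvin (arbitrary illustrative temperature)
for lam in (1e-2, 1e-3, 1e-4, 1e-5, 1e-6):   # 1 cm down to 1 micrometre
    print(f"lambda = {lam:8.1e} m   B_lambda = {rayleigh_jeans_wavelength(lam, T):10.3e}")
# The lambda**-4 growth at short wavelengths is the ultraviolet catastrophe.
```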
Historical development
In 1900, the British physicist Lord Rayleigh derived the $\lambda^{-4}$ dependence of the Rayleigh–Jeans law based on classical physical arguments, relying upon the equipartition theorem. This law predicted an energy output that diverges towards infinity as wavelength approaches zero (as frequency tends to infinity). Measurements of the spectral emission of actual black bodies revealed that the emission agreed with Rayleigh's calculation at low frequencies but diverged at high frequencies, reaching a maximum and then falling with frequency, so the total energy emitted is finite. Rayleigh recognized the unphysical behavior of his formula at high frequencies and introduced an ad hoc cutoff to correct it, but experimentalists found that his cutoff did not agree with data. Hendrik Lorentz also presented a derivation of the wavelength dependence in 1903. More complete derivations, which included the proportionality constant, were presented in 1905 by Rayleigh and Sir James Jeans and independently by Albert Einstein. Rayleigh believed that this discrepancy could be resolved by the equipartition theorem failing to be valid for high-frequency vibrations, while Jeans argued that the underlying cause was matter and luminiferous aether not being in thermal equilibrium.
Rayleigh published his first derivation of the frequency dependence in June 1900. Planck discovered the curve now known as Planck's law in October of that year and presented it in December. Planck's original intent was to find a satisfactory derivation of Wien's expression for the blackbody radiation curve, which accurately described the data at high frequencies. Planck found Wien's original derivation inadequate and devised his own. Then, after learning that the most recent experimental results disagreed with his predictions for low frequencies, Planck revised his calculation, obtaining what is now called Planck's law.
Comparison to Planck's law
In 1900 Max Planck empirically obtained an expression for black-body radiation expressed in terms of wavelength (Planck's law):
$$B_\lambda(T) = \frac{2 h c^2}{\lambda^5}\,\frac{1}{e^{hc/(\lambda k_\mathrm{B} T)} - 1},$$
where h is the Planck constant and $k_\mathrm{B}$ is the Boltzmann constant. Planck's law does not suffer from an ultraviolet catastrophe and agrees well with the experimental data, but its full significance (which ultimately led to quantum theory) was only appreciated several years later. Since
$$\frac{hc}{\lambda k_\mathrm{B} T} \ll 1,$$
then, in the limit of high temperatures or long wavelengths, the term in the exponential becomes small, and the exponential is well approximated with the Taylor polynomial's first-order term:
$$e^{hc/(\lambda k_\mathrm{B} T)} \approx 1 + \frac{hc}{\lambda k_\mathrm{B} T}.$$
So
$$\frac{1}{e^{hc/(\lambda k_\mathrm{B} T)} - 1} \approx \frac{\lambda k_\mathrm{B} T}{hc}.$$
This results in Planck's blackbody formula reducing to
$$B_\lambda(T) = \frac{2 c k_\mathrm{B} T}{\lambda^4},$$
which is identical to the classically derived Rayleigh–Jeans expression.
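A small numerical comparison (an illustrative sketch; the temperature is chosen arbitrarily) shows the Rayleigh–Jeans expression converging to Planck's law at long wavelengths and overshooting it badly at short ones.

```python
import math

h = 6.62607015e-34   # J s
c = 2.99792458e8     # m/s
k_B = 1.380649e-23   # J/K

def planck(lam, T):
    """Planck spectral radiance B_lambda(T)."""
    x = h * c / (lam * k_B * T)
    return (2.0 * h * c**2 / lam**5) / math.expm1(x)

def rayleigh_jeans(lam, T):
    """Classical Rayleigh-Jeans spectral radiance."""
    return 2.0 * c * k_B * T / lam**4

T = 300.0
for lam in (1e-1, 1e-2, 1e-3, 1e-4, 1e-5, 1e-6):
    ratio = rayleigh_jeans(lam, T) / planck(lam, T)
    print(f"lambda = {lam:8.1e} m   RJ/Planck = {ratio:12.4g}")
# The ratio tends to 1 for long wavelengths (hc/(lambda k T) << 1) and blows up at short ones.
```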
The same argument can be applied to the blackbody radiation expressed in terms of frequency $\nu$. In the limit of small frequencies, that is $h\nu \ll k_\mathrm{B} T$,
$$B_\nu(T) = \frac{2 h \nu^3}{c^2}\,\frac{1}{e^{h\nu/(k_\mathrm{B} T)} - 1} \approx \frac{2 \nu^2 k_\mathrm{B} T}{c^2}.$$
This last expression is the Rayleigh–Jeans law in the limit of small frequencies.
Consistency of frequency- and wavelength-dependent expressions
When comparing the frequency- and wavelength-dependent expressions of the Rayleigh–Jeans law, it is important to remember that
$$B_\lambda(T) = \frac{dP}{dA\, d\Omega\, d\lambda}$$
and
$$B_\nu(T) = \frac{dP}{dA\, d\Omega\, d\nu}.$$
Note that these two expressions then have different units, as a step in wavelength is not equivalent to a step in frequency. Therefore,
$$B_\lambda(T) \neq B_\nu(T)$$
even after substituting the value $\lambda = c/\nu$, because $B_\lambda$ has units of energy emitted per unit time per unit area of emitting surface, per unit solid angle, per unit wavelength, whereas $B_\nu$ has units of energy emitted per unit time per unit area of emitting surface, per unit solid angle, per unit frequency. To be consistent, we must use the equality
$$B_\lambda\, d\lambda = B_\nu\, d\nu,$$
where both sides now have units of power (energy emitted per unit time) per unit area of emitting surface, per unit solid angle.
Starting with the Rayleigh–Jeans law in terms of wavelength, we get
$$B_\nu(T) = B_\lambda(T)\left|\frac{d\lambda}{d\nu}\right|,$$
where
$$\left|\frac{d\lambda}{d\nu}\right| = \left|\frac{d}{d\nu}\left(\frac{c}{\nu}\right)\right| = \frac{c}{\nu^2}.$$
This leads to
$$B_\nu(T) = \frac{2 c k_\mathrm{B} T}{(c/\nu)^4}\cdot\frac{c}{\nu^2} = \frac{2 \nu^2 k_\mathrm{B} T}{c^2}.$$
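The sketch below (again just an illustrative numerical check) verifies that the two forms agree once the Jacobian $|d\lambda/d\nu| = c/\nu^2$ is included: $B_\lambda(\lambda)\cdot c/\nu^2$ equals $B_\nu(\nu)$ at $\nu = c/\lambda$.

```python
k_B = 1.380649e-23   # J/K
c = 2.99792458e8     # m/s

def B_lambda(lam, T):
    return 2.0 * c * k_B * T / lam**4

def B_nu(nu, T):
    return 2.0 * nu**2 * k_B * T / c**2

T = 5000.0           # arbitrary temperature
lam = 1.0e-3         # 1 mm, deep in the Rayleigh-Jeans regime
nu = c / lam

lhs = B_nu(nu, T)
rhs = B_lambda(lam, T) * c / nu**2     # B_lambda |d lambda / d nu| = B_lambda * c / nu**2
print(lhs, rhs)                        # both give the same radiance per unit frequency
```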
Other forms of Rayleigh–Jeans law
Depending on the application, the Planck function can be expressed in 3 different forms. The first involves energy emitted per unit time per unit area of emitting surface, per unit solid angle, per spectral unit. In this form, the Planck function and associated Rayleigh–Jeans limits are given by
or
Alternatively, Planck's law can be written as an expression for emitted power integrated over all solid angles. In this form, the Planck function and associated Rayleigh–Jeans limits are given by
or
In other cases, Planck's law is written as
for energy per unit volume (energy density). In this form, the Planck function and associated Rayleigh–Jeans limits are given by
or
See also
Stefan–Boltzmann law
Wien's displacement law
Wien approximation
Sakuma–Hattori equation
References
External links
Derivation of Rayleigh–Jeans law
Derivation of modes a wave in a cube
Foundational quantum physics
Obsolete theories in physics
Convective available potential energy
In meteorology, convective available potential energy (commonly abbreviated as CAPE) is a measure of the capacity of the atmosphere to support upward air movement that can lead to cloud formation and storms. Some atmospheric conditions, such as very warm, moist air in an atmosphere that cools rapidly with height, can promote strong and sustained upward air movement, possibly stimulating the formation of cumulus clouds or cumulonimbus (thunderstorm clouds). In that situation the potential energy of the atmosphere to cause upward air movement is very high, so CAPE (a measure of potential energy) would be high and positive. By contrast, other conditions, such as a less warm air parcel or a parcel in an atmosphere with a temperature inversion (in which the temperature increases above a certain height), have much less capacity to support vigorous upward air movement, thus the potential energy level (CAPE) would be much lower, as would the probability of thunderstorms.
More technically, CAPE is the integrated amount of work that the upward (positive) buoyancy force would perform on a given mass of air (called an air parcel) if it rose vertically through the entire atmosphere. Positive CAPE will cause the air parcel to rise, while negative CAPE will cause the air parcel to sink.
Nonzero CAPE is an indicator of atmospheric instability in any given atmospheric sounding, a necessary condition for the development of cumulus and cumulonimbus clouds with attendant severe weather hazards.
Mechanics
CAPE exists within the conditionally unstable layer of the troposphere, the free convective layer (FCL), where an ascending air parcel is warmer than the ambient air. CAPE is measured in joules per kilogram of air (J/kg). Any value greater than 0 J/kg indicates instability and an increasing possibility of thunderstorms and hail. Generic CAPE is calculated by integrating vertically the local buoyancy of a parcel from the level of free convection (LFC) to the equilibrium level (EL):
$$\mathrm{CAPE} = \int_{z_\mathrm{f}}^{z_\mathrm{n}} g\left(\frac{T_\mathrm{v,parcel} - T_\mathrm{v,env}}{T_\mathrm{v,env}}\right) dz,$$
where $z_\mathrm{f}$ is the height of the level of free convection, $z_\mathrm{n}$ is the height of the equilibrium level (neutral buoyancy), $T_\mathrm{v,parcel}$ is the virtual temperature of the specific parcel, $T_\mathrm{v,env}$ is the virtual temperature of the environment (note that temperatures must be in the Kelvin scale), and $g$ is the acceleration due to gravity. This integral is the work done by the buoyant force minus the work done against gravity; hence it is the excess energy that can become kinetic energy.
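As an illustration of how this integral is evaluated in practice, here is a minimal numerical sketch (the sounding values below are invented for demonstration, not real data): it integrates the parcel–environment virtual temperature difference over height with the trapezoidal rule, keeping only positively buoyant layers.

```python
g = 9.81  # m/s^2

def cape(heights, tv_parcel, tv_env):
    """Trapezoidal estimate of CAPE (J/kg) between the LFC and the EL.
    heights in metres, virtual temperatures in kelvins, lowest level first."""
    total = 0.0
    for i in range(len(heights) - 1):
        b0 = g * (tv_parcel[i] - tv_env[i]) / tv_env[i]
        b1 = g * (tv_parcel[i + 1] - tv_env[i + 1]) / tv_env[i + 1]
        # only count layers where the parcel is positively buoyant
        b0, b1 = max(b0, 0.0), max(b1, 0.0)
        total += 0.5 * (b0 + b1) * (heights[i + 1] - heights[i])
    return total

# Hypothetical sounding between the LFC (~1.5 km) and the EL (~11 km).
heights   = [1500, 3000, 5000, 7000, 9000, 11000]           # m
tv_env    = [285.0, 275.0, 262.0, 248.0, 234.0, 222.0]      # K
tv_parcel = [287.0, 279.0, 267.0, 253.0, 237.0, 222.0]      # K

print(f"CAPE = {cape(heights, tv_parcel, tv_env):.0f} J/kg")
```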
CAPE for a given region is most often calculated from a thermodynamic or sounding diagram (e.g., a Skew-T log-P diagram) using air temperature and dew point data usually measured by a weather balloon.
CAPE is effectively positive buoyancy, expressed B+ or simply B; the opposite of convective inhibition (CIN), which is expressed as B-, and can be thought of as "negative CAPE". As with CIN, CAPE is usually expressed in J/kg but may also be expressed as m2/s2, as the values are equivalent. In fact, CAPE is sometimes referred to as positive buoyant energy (PBE). This type of CAPE is the maximum energy available to an ascending parcel and to moist convection. When a layer of CIN is present, the layer must be eroded by surface heating or mechanical lifting, so that convective boundary layer parcels may reach their level of free convection (LFC).
On a sounding diagram, CAPE is the positive area above the LFC, the area between the parcel's virtual temperature line and the environmental virtual temperature line where the ascending parcel is warmer than the environment. Neglecting the virtual temperature correction may result in substantial relative errors in the calculated value of CAPE for small CAPE values. CAPE may also exist below the LFC, but if a layer of CIN (subsidence) is present, it is unavailable to deep, moist convection until CIN is exhausted. When there is mechanical lift to saturation, cloud base begins at the lifted condensation level (LCL); absent forcing, cloud base begins at the convective condensation level (CCL) where heating from below causes spontaneous buoyant lifting to the point of condensation when the convective temperature is reached. When CIN is absent or is overcome, saturated parcels at the LCL or CCL, which had been small cumulus clouds, will rise to the LFC, and then spontaneously rise until hitting the stable layer of the equilibrium level. The result is deep, moist convection (DMC), or simply, a thunderstorm.
When a parcel is unstable, it will continue to move vertically, in either direction, depending on whether it receives upward or downward forcing, until it reaches a stable layer (though momentum, gravity, and other forcing may cause the parcel to continue). There are multiple types of CAPE: downdraft CAPE (DCAPE) estimates the potential strength of rain- and evaporatively-cooled downdrafts. Other types of CAPE may depend on the depth being considered. Other examples are surface-based CAPE (SBCAPE), mixed-layer or mean-layer CAPE (MLCAPE), most-unstable or maximum-usable CAPE (MUCAPE), and normalized CAPE (NCAPE).
Fluid elements displaced upwards or downwards in such an atmosphere expand or compress adiabatically in order to remain in pressure equilibrium with their surroundings, and in this manner become less or more dense.
If the adiabatic decrease or increase in density is less than the decrease or increase in the density of the ambient (not moved) medium, then the displaced fluid element will be subject to downwards or upwards pressure, which will function to restore it to its original position. Hence, there will be a counteracting force to the initial displacement. Such a condition is referred to as convective stability.
On the other hand, if adiabatic decrease or increase in density is greater than in the ambient fluid, the upwards or downwards displacement will be met with an additional force in the same direction exerted by the ambient fluid. In these circumstances, small deviations from the initial state will become amplified. This condition is referred to as convective instability.
Convective instability is also termed static instability, because the instability does not depend on the existing motion of the air; this contrasts with dynamic instability where instability is dependent on the motion of air and its associated effects such as dynamic lifting.
Significance to thunderstorms
Thunderstorms form when air parcels are lifted vertically. Deep, moist convection requires a parcel to be lifted to the LFC where it then rises spontaneously until reaching a layer of non-positive buoyancy. The atmosphere is warm at the surface and lower levels of the troposphere where there is mixing (the planetary boundary layer (PBL)), but becomes substantially cooler with height. The temperature profile of the atmosphere, the change in temperature, the degree that it cools with height, is the lapse rate. When the rising air parcel cools more slowly than the surrounding atmosphere, it remains warmer and less dense. The parcel continues to rise freely (convectively; without mechanical lift) through the atmosphere until it reaches an area of air less dense (warmer) than itself.
The amount, and shape, of the positive-buoyancy area modulates the speed of updrafts, thus extreme CAPE can result in explosive thunderstorm development; such rapid development usually occurs when CAPE stored by a capping inversion is released when the "lid" is broken by heating or mechanical lift. The amount of CAPE also modulates how low-level vorticity is entrained and then stretched in the updraft, with importance to tornadogenesis. The most important CAPE for tornadoes is within the lowest 1 to 3 km (0.6 to 1.9 mi) of the atmosphere, whilst deep layer CAPE and the width of CAPE at mid-levels is important for supercells. Tornado outbreaks tend to occur within high CAPE environments. Large CAPE is required for the production of very large hail, owing to updraft strength, although a rotating updraft may be stronger with less CAPE. Large CAPE also promotes lightning activity.
Two notable days for severe weather exhibited CAPE values over 5 kJ/kg. Two hours before the 1999 Oklahoma tornado outbreak occurred on May 3, 1999, the CAPE value sounding at Oklahoma City was at 5.89 kJ/kg. A few hours later, an F5 tornado ripped through the southern suburbs of the city. Also on May 4, 2007, CAPE values of 5.5 kJ/kg were reached and an EF5 tornado tore through Greensburg, Kansas. On these days, it was apparent that conditions were ripe for tornadoes and CAPE wasn't a crucial factor. However, extreme CAPE, by modulating the updraft (and downdraft), can allow for exceptional events, such as the deadly F5 tornadoes that hit Plainfield, Illinois on August 28, 1990, and Jarrell, Texas on May 27, 1997, on days which weren't readily apparent as conducive to large tornadoes. CAPE was estimated to exceed 8 kJ/kg in the environment of the Plainfield storm and was around 7 kJ/kg for the Jarrell storm.
Severe weather and tornadoes can develop in an area of low CAPE values. The surprise severe weather event that occurred in Illinois and Indiana on April 20, 2004, is a good example. Important in that case was that although overall CAPE was weak, there was strong CAPE in the lowest levels of the troposphere, which enabled an outbreak of minisupercells producing large, long-track, intense tornadoes.
Example from meteorology
A good example of convective instability can be found in our own atmosphere. If dry mid-level air is drawn over very warm, moist air in the lower troposphere, a hydrolapse (an area of rapidly decreasing dew point temperatures with height) results in the region where the moist boundary layer and mid-level air meet. As daytime heating increases mixing within the moist boundary layer, some of the moist air will begin to interact with the dry mid-level air above it. Owing to thermodynamic processes, as the dry mid-level air is slowly saturated its temperature begins to drop, increasing the adiabatic lapse rate. Under certain conditions, the lapse rate can increase significantly in a short amount of time, resulting in convection. High convective instability can lead to severe thunderstorms and tornadoes as moist air which is trapped in the boundary layer eventually becomes highly positively buoyant relative to the adiabatic lapse rate and escapes as a rapidly rising bubble of humid air triggering the development of a cumulus or cumulonimbus cloud.
Limitations
As with most parameters used in meteorology, there are some caveats to keep in mind, one of which is what CAPE represents physically and in what instances CAPE can be used. One example where the more common method for determining CAPE might start to break down is in the presence of tropical cyclones (TCs), such as tropical depressions, tropical storms, or hurricanes.
The more common method of determining CAPE can break down near tropical cyclones because CAPE assumes that liquid water is lost instantaneously during condensation, making the process irreversible upon adiabatic descent. This is not realistic for tropical cyclones. One way to make the process more realistic for tropical cyclones is to use reversible CAPE (RCAPE for short). RCAPE assumes the opposite extreme to the standard convention of CAPE: that no liquid water is lost during the process. This gives parcels a greater density owing to water loading.
RCAPE is calculated using the same formula as CAPE, the difference in the formula being in the virtual temperature. In this new formulation, we replace the parcel saturation mixing ratio (which leads to the condensation and vanishing of liquid water) with the parcel water content. This slight change can drastically change the values we get through the integration.
RCAPE does have some limitations, one of which is that it assumes no evaporation; this is consistent with use within a TC, but it should be used sparingly elsewhere.
Another limitation of both CAPE and RCAPE is that currently, both systems do not consider entrainment.
See also
Atmospheric thermodynamics
Lifted index
Maximum potential intensity
References
Further reading
Barry, R.G. and Chorley, R.J. Atmosphere, weather and climate (7th ed) Routledge 1998 p. 80-81
External links
Map of current global CAPE
Severe weather and convection
Atmospheric thermodynamics
Fluid dynamics
Meteorological quantities
Electric current
An electric current is a flow of charged particles, such as electrons or ions, moving through an electrical conductor or space. It is defined as the net rate of flow of electric charge through a surface. The moving particles are called charge carriers, which may be one of several types of particles, depending on the conductor. In electric circuits the charge carriers are often electrons moving through a wire. In semiconductors they can be electrons or holes. In an electrolyte the charge carriers are ions, while in plasma, an ionized gas, they are ions and electrons.
In the International System of Units (SI), electric current is expressed in units of ampere (sometimes called an "amp", symbol A), which is equivalent to one coulomb per second. The ampere is an SI base unit and electric current is a base quantity in the International System of Quantities (ISQ). Electric current is also known as amperage and is measured using a device called an ammeter.
Electric currents create magnetic fields, which are used in motors, generators, inductors, and transformers. In ordinary conductors, they cause Joule heating, which creates light in incandescent light bulbs. Time-varying currents emit electromagnetic waves, which are used in telecommunications to broadcast information.
Symbol
The conventional symbol for current is $I$, which originates from the French phrase intensité du courant (current intensity). Current intensity is often referred to simply as current. The $I$ symbol was used by André-Marie Ampère, after whom the unit of electric current is named, in formulating Ampère's force law (1820). The notation travelled from France to Great Britain, where it became standard, although at least one journal did not change from using $C$ to $I$ until 1896.
Conventions
The conventional direction of current, also known as conventional current, is arbitrarily defined as the direction in which positive charges flow. In a conductive material, the moving charged particles that constitute the electric current are called charge carriers. In metals, which make up the wires and other conductors in most electrical circuits, the positively charged atomic nuclei of the atoms are held in a fixed position, and the negatively charged electrons are the charge carriers, free to move about in the metal. In other materials, notably the semiconductors, the charge carriers can be positive or negative, depending on the dopant used. Positive and negative charge carriers may even be present at the same time, as happens in an electrolyte in an electrochemical cell.
A flow of positive charges gives the same electric current, and has the same effect in a circuit, as an equal flow of negative charges in the opposite direction. Since current can be the flow of either positive or negative charges, or both, a convention is needed for the direction of current that is independent of the type of charge carriers. Negatively charged carriers, such as the electrons (the charge carriers in metal wires and many other electronic circuit components), therefore flow in the opposite direction of conventional current flow in an electrical circuit.
Reference direction
A current in a wire or circuit element can flow in either of two directions. When defining a variable to represent the current, the direction representing positive current must be specified, usually by an arrow on the circuit schematic diagram. This is called the reference direction of the current $I$. When analyzing electrical circuits, the actual direction of current through a specific circuit element is usually unknown until the analysis is completed. Consequently, the reference directions of currents are often assigned arbitrarily. When the circuit is solved, a negative value for the current implies the actual direction of current through that circuit element is opposite that of the chosen reference direction.
Ohm's law
Ohm's law states that the current through a conductor between two points is directly proportional to the potential difference across the two points. Introducing the constant of proportionality, the resistance, one arrives at the usual mathematical equation that describes this relationship:
$$I = \frac{V}{R},$$
where I is the current through the conductor in units of amperes, V is the potential difference measured across the conductor in units of volts, and R is the resistance of the conductor in units of ohms. More specifically, Ohm's law states that the R in this relation is constant, independent of the current.
Alternating and direct current
In alternating current (AC) systems, the movement of electric charge periodically reverses direction. AC is the form of electric power most commonly delivered to businesses and residences. The usual waveform of an AC power circuit is a sine wave, though certain applications use alternative waveforms, such as triangular or square waves. Audio and radio signals carried on electrical wires are also examples of alternating current. An important goal in these applications is recovery of information encoded (or modulated) onto the AC signal.
In contrast, direct current (DC) refers to a system in which the movement of electric charge is in only one direction (sometimes called unidirectional flow). Direct current is produced by sources such as batteries, thermocouples, solar cells, and commutator-type electric machines of the dynamo type. Alternating current can also be converted to direct current through use of a rectifier. Direct current may flow in a conductor such as a wire, but can also flow through semiconductors, insulators, or even through a vacuum as in electron or ion beams. An old name for direct current was galvanic current.
Occurrences
Natural observable examples of electric current include lightning, static electric discharge, and the solar wind, the source of the polar auroras.
Man-made occurrences of electric current include the flow of conduction electrons in metal wires such as the overhead power lines that deliver electrical energy across long distances and the smaller wires within electrical and electronic equipment. Eddy currents are electric currents that occur in conductors exposed to changing magnetic fields. Similarly, electric currents occur, particularly in the surface, of conductors exposed to electromagnetic waves. When oscillating electric currents flow at the correct voltages within radio antennas, radio waves are generated.
In electronics, other forms of electric current include the flow of electrons through resistors or through the vacuum in a vacuum tube, the flow of ions inside a battery, and the flow of holes within metals and semiconductors.
A biological example of current is the flow of ions in neurons and nerves, responsible for both thought and sensory perception.
Measurement
Current can be measured using an ammeter.
Electric current can be directly measured with a galvanometer, but this method involves breaking the electrical circuit, which is sometimes inconvenient.
Current can also be measured without breaking the circuit by detecting the magnetic field associated with the current.
Devices, at the circuit level, use various techniques to measure current:
Shunt resistors
Hall effect current sensor transducers
Transformers (however, DC cannot be measured this way)
Magnetoresistive field sensors
Rogowski coils
Current clamps
Resistive heating
Joule heating, also known as ohmic heating and resistive heating, is the process of power dissipation by which the passage of an electric current through a conductor increases the internal energy of the conductor, converting thermodynamic work into heat. The phenomenon was first studied by James Prescott Joule in 1841. Joule immersed a length of wire in a fixed mass of water and measured the temperature rise due to a known current through the wire for a 30 minute period. By varying the current and the length of the wire he deduced that the heat produced was proportional to the square of the current multiplied by the electrical resistance of the wire.
This relationship is known as Joule's Law. The SI unit of energy was subsequently named the joule and given the symbol J. The commonly known SI unit of power, the watt (symbol: W), is equivalent to one joule per second.
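A small worked example in the spirit of Joule's experiment (the current, resistance, water mass, and duration are made-up values for illustration): it applies P = I²R and converts the energy dissipated over a 30-minute run into a temperature rise of a known mass of water.

```python
# Joule heating: P = I^2 * R, energy E = P * t.
I = 2.0        # amperes (assumed)
R = 5.0        # ohms (assumed)
t = 30 * 60    # seconds (a 30-minute run, as in Joule's measurements)

P = I**2 * R               # power dissipated in the wire, watts
E = P * t                  # energy delivered to the water, joules

# Temperature rise of 0.5 kg of water (specific heat ~4186 J/(kg K)).
m_water = 0.5
c_water = 4186.0
delta_T = E / (m_water * c_water)

print(f"P = {P:.1f} W, E = {E:.0f} J, the water warms by about {delta_T:.1f} K")
```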
Electromagnetism
Electromagnet
In an electromagnet a coil of wires behaves like a magnet when an electric current flows through it. When the current is switched off, the coil loses its magnetism immediately.
Electric current produces a magnetic field. The magnetic field can be visualized as a pattern of circular field lines surrounding the wire that persists as long as there is current.
Electromagnetic induction
Magnetic fields can also be used to make electric currents. When a changing magnetic field is applied to a conductor, an electromotive force (EMF) is induced, which starts an electric current, when there is a suitable path.
Radio waves
When an electric current flows in a suitably shaped conductor at radio frequencies, radio waves can be generated. These travel at the speed of light and can cause electric currents in distant conductors.
Conduction mechanisms in various media
In metallic solids, electric charge flows by means of electrons, from lower to higher electrical potential. In other media, any stream of charged objects (ions, for example) may constitute an electric current. To provide a definition of current independent of the type of charge carriers, conventional current is defined as moving in the same direction as the positive charge flow. So, in metals where the charge carriers (electrons) are negative, conventional current is in the opposite direction to the overall electron movement. In conductors where the charge carriers are positive, conventional current is in the same direction as the charge carriers.
In a vacuum, a beam of ions or electrons may be formed. In other conductive materials, the electric current is due to the flow of both positively and negatively charged particles at the same time. In still others, the current is entirely due to positive charge flow. For example, the electric currents in electrolytes are flows of positively and negatively charged ions. In a common lead-acid electrochemical cell, electric currents are composed of positive hydronium ions flowing in one direction, and negative sulfate ions flowing in the other. Electric currents in sparks or plasma are flows of electrons as well as positive and negative ions. In ice and in certain solid electrolytes, the electric current is entirely composed of flowing ions.
Metals
In a metal, some of the outer electrons in each atom are not bound to the individual molecules as they are in molecular solids, or in full bands as they are in insulating materials, but are free to move within the metal lattice. These conduction electrons can serve as charge carriers, carrying a current. Metals are particularly conductive because there are many of these free electrons. With no external electric field applied, these electrons move about randomly due to thermal energy but, on average, there is zero net current within the metal. At room temperature, the average speed of these random motions is about $10^6$ metres per second. Given a surface through which a metal wire passes, electrons move in both directions across the surface at an equal rate. As George Gamow wrote in his popular science book, One, Two, Three...Infinity (1947), "The metallic substances differ from all other materials by the fact that the outer shells of their atoms are bound rather loosely, and often let one of their electrons go free. Thus the interior of a metal is filled up with a large number of unattached electrons that travel aimlessly around like a crowd of displaced persons. When a metal wire is subjected to electric force applied on its opposite ends, these free electrons rush in the direction of the force, thus forming what we call an electric current."
When a metal wire is connected across the two terminals of a DC voltage source such as a battery, the source places an electric field across the conductor. The moment contact is made, the free electrons of the conductor are forced to drift toward the positive terminal under the influence of this field. The free electrons are therefore the charge carrier in a typical solid conductor.
For a steady flow of charge through a surface, the current I (in amperes) can be calculated with the following equation:
$$I = \frac{Q}{t},$$
where Q is the electric charge transferred through the surface over a time t. If Q and t are measured in coulombs and seconds respectively, I is in amperes.
More generally, electric current can be represented as the rate at which charge flows through a given surface as:
$$I = \frac{dQ}{dt}.$$
Electrolytes
Electric currents in electrolytes are flows of electrically charged particles (ions). For example, if an electric field is placed across a solution of Na+ and Cl− (and conditions are right) the sodium ions move towards the negative electrode (cathode), while the chloride ions move towards the positive electrode (anode). Reactions take place at both electrode surfaces, neutralizing each ion.
Water-ice and certain solid electrolytes called proton conductors contain positive hydrogen ions ("protons") that are mobile. In these materials, electric currents are composed of moving protons, as opposed to the moving electrons in metals.
In certain electrolyte mixtures, brightly coloured ions are the moving electric charges. The slow progress of the colour makes the current visible.
Gases and plasmas
In air and other ordinary gases below the breakdown field, the dominant source of electrical conduction is via relatively few mobile ions produced by radioactive gases, ultraviolet light, or cosmic rays. Since the electrical conductivity is low, gases are dielectrics or insulators. However, once the applied electric field approaches the breakdown value, free electrons become sufficiently accelerated by the electric field to create additional free electrons by colliding, and ionizing, neutral gas atoms or molecules in a process called avalanche breakdown. The breakdown process forms a plasma that contains enough mobile electrons and positive ions to make it an electrical conductor. In the process, it forms a light emitting conductive path, such as a spark, arc or lightning.
Plasma is the state of matter where some of the electrons in a gas are stripped or "ionized" from their molecules or atoms. A plasma can be formed by high temperature, or by application of a high electric or alternating magnetic field as noted above. Due to their lower mass, the electrons in a plasma accelerate more quickly in response to an electric field than the heavier positive ions, and hence carry the bulk of the current. The free ions recombine to create new chemical compounds (for example, breaking atmospheric oxygen into single oxygen [O2 → 2O], which then recombine creating ozone [O3]).
Vacuum
Since a "perfect vacuum" contains no charged particles, it normally behaves as a perfect insulator. However, metal electrode surfaces can cause a region of the vacuum to become conductive by injecting free electrons or ions through either field electron emission or thermionic emission. Thermionic emission occurs when the thermal energy exceeds the metal's work function, while field electron emission occurs when the electric field at the surface of the metal is high enough to cause tunneling, which results in the ejection of free electrons from the metal into the vacuum. Externally heated electrodes are often used to generate an electron cloud as in the filament or indirectly heated cathode of vacuum tubes. Cold electrodes can also spontaneously produce electron clouds via thermionic emission when small incandescent regions (called cathode spots or anode spots) are formed. These are incandescent regions of the electrode surface that are created by a localized high current. These regions may be initiated by field electron emission, but are then sustained by localized thermionic emission once a vacuum arc forms. These small electron-emitting regions can form quite rapidly, even explosively, on a metal surface subjected to a high electrical field. Vacuum tubes and sprytrons are some of the electronic switching and amplifying devices based on vacuum conductivity.
Superconductivity
Superconductivity is a phenomenon of exactly zero electrical resistance and expulsion of magnetic fields occurring in certain materials when cooled below a characteristic critical temperature. It was discovered by Heike Kamerlingh Onnes on April 8, 1911 in Leiden. Like ferromagnetism and atomic spectral lines, superconductivity is a quantum mechanical phenomenon. It is characterized by the Meissner effect, the complete ejection of magnetic field lines from the interior of the superconductor as it transitions into the superconducting state. The occurrence of the Meissner effect indicates that superconductivity cannot be understood simply as the idealization of perfect conductivity in classical physics.
Semiconductor
In a semiconductor it is sometimes useful to think of the current as due to the flow of positive "holes" (the mobile positive charge carriers that are places where the semiconductor crystal is missing a valence electron). This is the case in a p-type semiconductor. A semiconductor has electrical conductivity intermediate in magnitude between that of a conductor and an insulator. This means a conductivity roughly in the range of $10^{-2}$ to $10^{4}$ siemens per centimeter (S⋅cm$^{-1}$).
In the classic crystalline semiconductors, electrons can have energies only within certain bands (i.e. ranges of levels of energy). Energetically, these bands are located between the energy of the ground state, the state in which electrons are tightly bound to the atomic nuclei of the material, and the free electron energy, the latter describing the energy required for an electron to escape entirely from the material. The energy bands each correspond to many discrete quantum states of the electrons, and most of the states with low energy (closer to the nucleus) are occupied, up to a particular band called the valence band. Semiconductors and insulators are distinguished from metals because the valence band in any given metal is nearly filled with electrons under usual operating conditions, while very few (semiconductor) or virtually none (insulator) of them are available in the conduction band, the band immediately above the valence band.
The ease of exciting electrons in the semiconductor from the valence band to the conduction band depends on the band gap between the bands. The size of this energy band gap serves as an arbitrary dividing line (roughly 4 eV) between semiconductors and insulators.
With covalent bonds, an electron moves by hopping to a neighboring bond. The Pauli exclusion principle requires that the electron be lifted into the higher anti-bonding state of that bond. For delocalized states, for example in one dimension, that is, in a nanowire, for every energy there is a state with electrons flowing in one direction and another state with the electrons flowing in the other. For a net current to flow, more states for one direction than for the other direction must be occupied. For this to occur, energy is required, as in the semiconductor the next higher states lie above the band gap. Often this is stated as: full bands do not contribute to the electrical conductivity. However, as a semiconductor's temperature rises above absolute zero, there is more energy in the semiconductor to spend on lattice vibration and on exciting electrons into the conduction band. The current-carrying electrons in the conduction band are known as free electrons, though they are often simply called electrons if that is clear in context.
Current density and Ohm's law
Current density is the rate at which charge passes through a chosen unit area. It is defined as a vector whose magnitude is the current per unit cross-sectional area. As discussed in Reference direction, the direction is arbitrary. Conventionally, if the moving charges are positive, then the current density has the same sign as the velocity of the charges. For negative charges, the sign of the current density is opposite to the velocity of the charges. In SI units, current density (symbol: j) is expressed in the SI base units of amperes per square metre.
In linear materials such as metals, and under low frequencies, the current density across the conductor surface is uniform. In such conditions, Ohm's law states that the current is directly proportional to the potential difference between two ends (across) of that metal (ideal) resistor (or other ohmic device):
$$I = \frac{V}{R},$$
where $I$ is the current, measured in amperes; $V$ is the potential difference, measured in volts; and $R$ is the resistance, measured in ohms. For alternating currents, especially at higher frequencies, skin effect causes the current to spread unevenly across the conductor cross-section, with higher density near the surface, thus increasing the apparent resistance.
Drift speed
The mobile charged particles within a conductor move constantly in random directions, like the particles of a gas. (More accurately, a Fermi gas.) To create a net flow of charge, the particles must also move together with an average drift rate. Electrons are the charge carriers in most metals and they follow an erratic path, bouncing from atom to atom, but generally drifting in the opposite direction of the electric field. The speed they drift at can be calculated from the equation:
$$I = n A v Q,$$
where
$I$ is the electric current
$n$ is the number of charged particles per unit volume (or charge carrier density)
$A$ is the cross-sectional area of the conductor
$v$ is the drift velocity, and
$Q$ is the charge on each particle.
Typically, electric charges in solids flow slowly. For example, in a copper wire of cross-section 0.5 mm2, carrying a current of 5 A, the drift velocity of the electrons is on the order of a millimetre per second. To take a different example, in the near-vacuum inside a cathode-ray tube, the electrons travel in near-straight lines at about a tenth of the speed of light.
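The following sketch reproduces that order-of-magnitude claim (the carrier density used for copper, about 8.5 × 10^28 free electrons per cubic metre, is a commonly quoted figure and is the only number not taken from the text above):

```python
# Drift velocity from I = n A v Q  =>  v = I / (n A Q)
I = 5.0                 # amperes, as in the example above
A = 0.5e-6              # 0.5 mm^2 cross-section, in m^2
n = 8.5e28              # free electrons per m^3 in copper (commonly quoted value)
Q = 1.602176634e-19     # elementary charge, coulombs

v = I / (n * A * Q)
print(f"drift velocity ~ {v*1000:.2f} mm/s")   # on the order of a millimetre per second
```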
Any accelerating electric charge, and therefore any changing electric current, gives rise to an electromagnetic wave that propagates at very high speed outside the surface of the conductor. This speed is usually a significant fraction of the speed of light, as can be deduced from Maxwell's equations, and is therefore many times faster than the drift velocity of the electrons. For example, in AC power lines, the waves of electromagnetic energy propagate through the space between the wires, moving from a source to a distant load, even though the electrons in the wires only move back and forth over a tiny distance.
The ratio of the speed of the electromagnetic wave to the speed of light in free space is called the velocity factor, and depends on the electromagnetic properties of the conductor and the insulating materials surrounding it, and on their shape and size.
The magnitudes (not the natures) of these three velocities can be illustrated by an analogy with the three similar velocities associated with gases. (See also hydraulic analogy.)
The low drift velocity of charge carriers is analogous to air motion; in other words, winds.
The high speed of electromagnetic waves is roughly analogous to the speed of sound in a gas (sound waves move through air much faster than large-scale motions such as convection)
The random motion of charges is analogous to heat, the thermal velocity of randomly vibrating gas particles.
See also
Current density
Displacement current
Electric shock
Electrical measurements
History of electrical engineering
Polarity symbols
International System of Quantities
SI electromagnetism units
Single-phase electric power
Static electricity
Three-phase electric power
Two-phase electric power
Notes
References
SI base quantities
Electromagnetic quantities
Collision
In physics, a collision is any event in which two or more bodies exert forces on each other in a relatively short time. Although the most common use of the word collision refers to incidents in which two or more objects collide with great force, the scientific use of the term implies nothing about the magnitude of the force.
Types of collisions
A collision is a short-duration interaction between two (or more) bodies that causes a change in the motion of the bodies involved, due to the internal forces acting between them during the interaction. Collisions involve forces (there is a change in velocity). The magnitude of the velocity difference just before impact is called the closing speed. All collisions conserve momentum. What distinguishes different types of collisions is whether they also conserve the kinetic energy of the system before and after the collision. Collisions are of three types:
Perfectly inelastic collision. If most or all of the total kinetic energy is lost (dissipated as heat, sound, etc. or absorbed by the objects themselves), the collision is inelastic. A "perfectly inelastic" collision (also called a "perfectly plastic" collision) is the limiting case of inelastic collision in which the two bodies coalesce and move together after impact. An example of such a collision is a car crash: cars crumple inward when crashing rather than bouncing off each other. This is by design, for the safety of the occupants and bystanders should a crash occur; the frame of the car absorbs the energy of the crash instead.
Inelastic collision. If most of the kinetic energy is conserved (i.e. the objects continue moving afterwards) but some is lost, the collision is inelastic. An example of this is a baseball bat hitting a baseball: the kinetic energy of the bat is transferred to the ball, greatly increasing the ball's velocity, while the sound of the bat hitting the ball represents the loss of energy. An inelastic collision is sometimes also called a plastic collision.
Elastic collision. If all of the total kinetic energy is conserved (i.e. no energy is released as sound, heat, etc.), the collision is said to be perfectly elastic. For macroscopic bodies such a system is an idealization and cannot occur in reality, due to the second law of thermodynamics.
The degree to which a collision is elastic or inelastic is quantified by the coefficient of restitution, a value that generally ranges between zero and one. A perfectly elastic collision has a coefficient of restitution of one; a perfectly inelastic collision has a coefficient of restitution of zero. The line of impact is the line that is collinear to the common normal of the surfaces that are closest or in contact during impact. This is the line along which internal force of collision acts during impact, and Newton's coefficient of restitution is defined only along this line.
Collisions in ideal gases approach perfectly elastic collisions, as do scattering interactions of sub-atomic particles which are deflected by the electromagnetic force. Some large-scale interactions like the slingshot type gravitational interactions between satellites and planets are almost perfectly elastic.
Examples
Billiards
Collisions play an important role in cue sports. Because the collisions between billiard balls are nearly elastic, and the balls roll on a surface that produces low rolling friction, their behavior is often used to illustrate Newton's laws of motion. After a frictionless collision of a moving ball with a stationary one of equal mass, the angle between the directions of the two balls is 90 degrees. This is an important fact that professional billiards players take into account, although it assumes the ball is moving across the table without friction rather than rolling with friction.
Consider an elastic collision in two dimensions of any two masses m1 and m2, with respective initial velocities u1 and u2 where u2 = 0, and final velocities V1 and V2.
Conservation of momentum gives m1u1 = m1V1 + m2V2.
Conservation of energy for an elastic collision gives (1/2)m1|u1|2 = (1/2)m1|V1|2 + (1/2)m2|V2|2.
Now consider the case m1 = m2: we obtain u1 = V1 + V2 and |u1|2 = |V1|2 + |V2|2.
Taking the dot product of each side of the former equation with itself, |u1|2 = u1•u1 = |V1|2 + |V2|2 + 2V1•V2. Comparing this with the latter equation gives V1•V2 = 0, so they are perpendicular unless V1 is the zero vector (which occurs if and only if the collision is head-on).
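The algebra above can be checked numerically; the sketch below (with an arbitrarily chosen incoming velocity and line of impact) resolves an equal-mass elastic collision with one ball initially at rest and confirms that the outgoing velocities are perpendicular while momentum and kinetic energy are conserved.

```python
import math

def elastic_equal_mass(u1, n_hat):
    """2D elastic collision of equal masses, second ball at rest.
    u1: incoming velocity of ball 1; n_hat: unit vector along the line of impact.
    The velocity component of u1 along n_hat is transferred to ball 2."""
    u_dot_n = u1[0] * n_hat[0] + u1[1] * n_hat[1]
    v2 = (u_dot_n * n_hat[0], u_dot_n * n_hat[1])
    v1 = (u1[0] - v2[0], u1[1] - v2[1])
    return v1, v2

u1 = (2.0, 0.5)                                  # arbitrary incoming velocity
theta = math.radians(30.0)                       # arbitrary line of impact
n_hat = (math.cos(theta), math.sin(theta))

v1, v2 = elastic_equal_mass(u1, n_hat)
dot = v1[0] * v2[0] + v1[1] * v2[1]
ke_in = u1[0]**2 + u1[1]**2
ke_out = v1[0]**2 + v1[1]**2 + v2[0]**2 + v2[1]**2

print("V1 . V2 =", round(dot, 12))               # ~0: outgoing directions are perpendicular
print("KE conserved:", math.isclose(ke_in, ke_out))
print("momentum conserved:", (round(v1[0] + v2[0], 12), round(v1[1] + v2[1], 12)) == u1)
```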
Perfect inelastic collision
In a perfect inelastic collision, i.e., a zero coefficient of restitution, the colliding particles coalesce. It is necessary to consider conservation of momentum:
m1u1 + m2u2 = (m1 + m2)v
where v is the final velocity, which is hence given by
v = (m1u1 + m2u2)/(m1 + m2)
The reduction of total kinetic energy is equal to the total kinetic energy before the collision in a center of momentum frame with respect to the system of two particles, because in such a frame the kinetic energy after the collision is zero. In this frame most of the kinetic energy before the collision is that of the particle with the smaller mass. In another frame, in addition to the reduction of kinetic energy there may be a transfer of kinetic energy from one particle to the other; the fact that this depends on the frame shows how relative this is.
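A short numerical sketch of these statements (the masses and velocities are chosen arbitrarily): it computes the common final velocity, the kinetic energy lost, and checks that the loss equals the pre-collision kinetic energy measured in the centre-of-momentum frame.

```python
m1, u1 = 2.0, 3.0     # kg, m/s (arbitrary)
m2, u2 = 1.0, -1.0    # kg, m/s (arbitrary)

# Perfectly inelastic collision: momentum conservation gives the common velocity.
v = (m1 * u1 + m2 * u2) / (m1 + m2)

ke_before = 0.5 * m1 * u1**2 + 0.5 * m2 * u2**2
ke_after = 0.5 * (m1 + m2) * v**2
ke_lost = ke_before - ke_after

# Kinetic energy before the collision in the centre-of-momentum frame.
ke_com = 0.5 * m1 * (u1 - v)**2 + 0.5 * m2 * (u2 - v)**2

print(f"final velocity v = {v:.3f} m/s")
print(f"kinetic energy lost = {ke_lost:.3f} J")
print(f"KE in COM frame before impact = {ke_com:.3f} J")   # equals the energy lost
```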
With time reversed we have the situation of two objects pushed away from each other, e.g. shooting a projectile, or a rocket applying thrust (compare the derivation of the Tsiolkovsky rocket equation).
Animal locomotion
Collisions of an animal's foot or paw with the underlying substrate are generally termed ground reaction forces. These collisions are inelastic, as kinetic energy is not conserved. An important research topic in prosthetics is quantifying the forces generated during the foot-ground collisions associated with both disabled and non-disabled gait. This quantification typically requires subjects to walk across a force platform (sometimes called a "force plate") as well as detailed kinematic and dynamic (sometimes termed kinetic) analysis.
Hypervelocity impacts
Hypervelocity is very high velocity, approximately over 3,000 meters per second (11,000 km/h, 6,700 mph, 10,000 ft/s, or Mach 8.8). In particular, hypervelocity is velocity so high that the strength of materials upon impact is very small compared to inertial stresses. Thus, metals and fluids behave alike under hypervelocity impact. An impact under extreme hypervelocity results in vaporization of the impactor and target. For structural metals, hypervelocity is generally considered to be over 2,500 m/s (5,600 mph, 9,000 km/h, 8,200 ft/s, or Mach 7.3). Meteorite craters are also examples of hypervelocity impacts.
See also
Ballistic pendulum
Coefficient of restitution
Collision detection
Contact mechanics
Elastic collision
Friction
Impact crater
Impact event
Inelastic collision
Kinetic theory of gases - collisions between molecules
Projectile
Notes
References
External links
Three Dimensional Collision - Oblique inelastic collision between two homogeneous spheres.
One Dimensional Collision - One Dimensional Collision Flash Applet.
Two Dimensional Collision - Two Dimensional Collision Flash Applet.
Mechanics
Avrami equation

The Avrami equation describes how solids transform from one phase to another at constant temperature. It can specifically describe the kinetics of crystallisation, can be applied generally to other changes of phase in materials, like chemical reaction rates, and can even be meaningful in analyses of ecological systems.
The equation is also known as the Johnson–Mehl–Avrami–Kolmogorov (JMAK) equation. It was developed independently by Johnson and Mehl, by Avrami (in a series of articles published in the Journal of Chemical Physics between 1939 and 1941), and by Kolmogorov, who treated the crystallization of a solid statistically in 1937 (in Russian: Kolmogorov, A. N., Izv. Akad. Nauk. SSSR., 1937, 3, 355).
Transformation kinetics
Transformations are often seen to follow a characteristic s-shaped, or sigmoidal, profile where the transformation rates are low at the beginning and the end of the transformation but rapid in between.
The initial slow rate can be attributed to the time required for a significant number of nuclei of the new phase to form and begin growing. During the intermediate period the transformation is rapid as the nuclei grow into particles and consume the old phase while nuclei continue to form in the remaining parent phase.
Once the transformation approaches completion, there remains little untransformed material for further nucleation, and the production of new particles begins to slow. Additionally, the previously formed particles begin to touch one another, forming a boundary where growth stops.
Derivation
The simplest derivation of the Avrami equation makes a number of significant assumptions and simplifications:
Nucleation occurs randomly and homogeneously over the entire untransformed portion of the material.
The growth rate does not depend on the extent of transformation.
Growth occurs at the same rate in all directions.
If these conditions are met, then a transformation of α into β will proceed by the nucleation of new particles at a rate Ṅ per unit volume, which grow at a rate Ġ into spherical particles and only stop growing when they impinge upon each other. During a time interval τ to τ + dτ, nucleation and growth can only take place in untransformed material. However, the problem is more easily solved by applying the concept of an extended volume – the volume of the new phase that would form if the entire sample was still untransformed. During the time interval τ to τ + dτ the number of nuclei N that appear in a sample of volume V will be given by

N = Ṅ V dτ,

where Ṅ is one of two parameters in this simple model: the nucleation rate per unit volume, which is assumed to be constant. Since growth is isotropic, constant and unhindered by previously transformed material, each nucleus will grow into a sphere of radius Ġ(t − τ), and so the extended volume of β due to nuclei appearing in the time interval will be

dV_β^e = (4π/3) Ġ³ (t − τ)³ Ṅ V dτ,

where Ġ is the second of the two parameters in this simple model: the growth velocity of a crystal, which is also assumed constant. The integration of this equation between τ = 0 and τ = t will yield the total extended volume that appears in the time interval:

V_β^e = (π/3) Ṅ V Ġ³ t⁴.

Only a fraction of this extended volume is real; some portion of it lies on previously transformed material and is virtual. Since nucleation occurs randomly, the fraction of the extended volume that forms during each time increment that is real will be proportional to the volume fraction of untransformed α. Thus

dV_β = dV_β^e (1 − V_β/V),

rearranged

(1 − V_β/V)⁻¹ dV_β = dV_β^e,

and upon integration:

ln(1 − Y) = −V_β^e / V,

where Y = V_β/V is the volume fraction of β.

Given the previous equations, this can be reduced to the more familiar form of the Avrami (JMAK) equation, which gives the fraction Y of transformed material after a hold time t at a given temperature:

Y = 1 − exp(−K tⁿ),

where K = πṄĠ³/3 and n = 4.

This can be rewritten as

ln(−ln(1 − Y)) = ln K + n ln t,

which allows the determination of the constants n and K from a plot of ln(−ln(1 − Y)) vs ln t. If the transformation follows the Avrami equation, this yields a straight line with slope n and intercept ln K.
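As an illustration of this fitting procedure, the following Python sketch generates synthetic transformation data from assumed, purely illustrative values of K and n and recovers them from the slope and intercept of the double-logarithmic plot described above.

```python
import numpy as np

# Hedged sketch: recover n and K from synthetic transformation data via the
# linearized Avrami plot ln(-ln(1 - Y)) vs ln(t). K_true and n_true are
# arbitrary illustrative values, not measured constants.
n_true, K_true = 4.0, 1e-4          # exponent and rate constant (t in seconds)
t = np.linspace(1.0, 30.0, 50)      # hold times
Y = 1.0 - np.exp(-K_true * t**n_true)

# Keep points away from Y ~ 0 and Y ~ 1, where the double log is ill-conditioned.
mask = (Y > 0.01) & (Y < 0.99)
x = np.log(t[mask])
y = np.log(-np.log(1.0 - Y[mask]))

n_fit, lnK_fit = np.polyfit(x, y, 1)   # slope = n, intercept = ln K
print(f"fitted n = {n_fit:.3f}, fitted K = {np.exp(lnK_fit):.3e}")
```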
Final crystallite (domain) size
Crystallization is largely over when Y reaches values close to 1, which will be at a crystallization time t_X defined by K t_X⁴ ≈ 1, as then the exponential term in the above expression for Y will be small. Thus crystallization takes a time of order

t_X ~ (Ṅ Ġ³)^(−1/4),

i.e., crystallization takes a time that decreases as one over the one-quarter power of the nucleation rate per unit volume, Ṅ, and one over the three-quarters power of the growth velocity Ġ. Typical crystallites grow for some fraction of the crystallization time and so have a linear dimension Ġ t_X, or

crystallite size ~ Ġ t_X ~ (Ġ/Ṅ)^(1/4),
i.e., the one quarter power of the ratio of the growth velocity to the nucleation rate per unit volume. Thus the size of the final crystals only depends on this ratio, within this model, and as we should expect, fast growth rates and slow nucleation rates result in large crystals. The average volume of the crystallites is of order this typical linear size cubed.
This all assumes an exponent of n = 4, which is appropriate for uniform (homogeneous) nucleation in three dimensions. Thin films, for example, may be effectively two-dimensional, in which case, if nucleation is again uniform, the exponent is n = 3. In general, for uniform nucleation and growth, n = d + 1, where d is the dimensionality of space in which crystallization occurs.
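These order-of-magnitude estimates can be evaluated directly. The short sketch below uses assumed, illustrative values of the nucleation rate per unit volume and the growth velocity; the numbers are not meant to represent any particular material.

```python
# Hedged sketch of the scaling estimates above, using arbitrary illustrative
# values of the nucleation rate per unit volume (Ndot) and growth velocity (G).
Ndot = 1e12   # nuclei per m^3 per s (assumed)
G = 1e-9      # growth velocity in m/s (assumed)

t_x = (Ndot * G**3) ** (-0.25)   # crystallization time scale, ~ (Ndot G^3)^(-1/4)
size = (G / Ndot) ** 0.25        # typical crystallite linear dimension, ~ (G/Ndot)^(1/4)

print(f"crystallization time ~ {t_x:.3g} s")
print(f"crystallite size     ~ {size:.3g} m")
```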
Interpretation of Avrami constants
Originally, n was held to have an integer value between 1 and 4, which reflected the nature of the transformation in question. In the derivation above, for example, the value of 4 can be said to have contributions from three dimensions of growth and one representing a constant nucleation rate. Alternative derivations exist, where n has a different value.
If the nuclei are preformed, and so all present from the beginning, the transformation is only due to the 3-dimensional growth of the nuclei, and n has a value of 3.
An interesting condition occurs when nucleation occurs on specific sites (such as grain boundaries or impurities) that rapidly saturate soon after the transformation begins. Initially, nucleation may be random, and growth unhindered, leading to high values for n (3 or 4). Once the nucleation sites are consumed, the formation of new particles will cease.
Furthermore, if the distribution of nucleation sites is non-random, then the growth may be restricted to 1 or 2 dimensions. Site saturation may lead to n values of 1, 2 or 3 for surface, edge and point sites respectively.
Applications in biophysics
The Avrami equation was applied in cancer biophysics in two aspects. The first aspect is connected with tumor growth and cancer-cell kinetics, which can be described by a sigmoidal curve. In this context the Avrami function was discussed as an alternative to the widely used Gompertz curve. In the second aspect the Avrami nucleation and growth theory was used together with the multi-hit theory of carcinogenesis to show how a cancer cell is created. The number of oncogenic mutations in cellular DNA can be treated as nucleation particles which can transform the whole DNA molecule into a cancerous one (neoplastic transformation). This model was applied to clinical data of gastric cancer, and shows that Avrami's constant n is between 4 and 5, which suggests fractal geometry of carcinogenic dynamics. Similar findings were published for breast and ovarian cancers, where n = 5.3.
Multiple Fitting of a Single Dataset (MFSDS)
The Avrami equation was used by Ivanov et al. to fit a dataset generated by another model, the so-called αDg, multiple times, each time to a sequence of upper values of α starting from α = 0, in order to generate a sequence of values of the Avrami parameter n. This approach was shown to be effective for a given experimental dataset, and the n values obtained follow the general direction predicted by fitting the α21 model multiple times.
References
External links
IUPAC Compendium of Chemical Terminology 2nd ed. (the "Gold Book"), Oxford (1997)
Crystallography
Equations
Artificial gravity

Artificial gravity is the creation of an inertial force that mimics the effects of a gravitational force, usually by rotation.
Artificial gravity, or rotational gravity, is thus the appearance of a centrifugal force in a rotating frame of reference (the transmission of centripetal acceleration via normal force in the non-rotating frame of reference), as opposed to the force experienced in linear acceleration, which by the equivalence principle is indistinguishable from gravity.
In a more general sense, "artificial gravity" may also refer to the effect of linear acceleration, e.g. by means of a rocket engine.
Rotational simulated gravity has been used in simulations to help astronauts train for extreme conditions.
Rotational simulated gravity has been proposed as a solution in human spaceflight to the adverse health effects caused by prolonged weightlessness.
However, there are no current practical outer space applications of artificial gravity for humans due to concerns about the size and cost of a spacecraft necessary to produce a useful centripetal force comparable to the gravitational field strength on Earth (g).
There is also concern about the effect of such a system on the inner ear of the occupants: the rotation used to create artificial gravity can disturb the inner ear, leading to nausea and disorientation, and these adverse effects may prove intolerable for the occupants.
Centripetal force
In the context of a rotating space station, it is the radial force provided by the spacecraft's hull that acts as centripetal force. Thus, the "gravity" force felt by an object is the centrifugal force perceived in the rotating frame of reference as pointing "downwards" towards the hull.
By Newton's Third Law, the value of little g (the perceived "downward" acceleration) is equal in magnitude and opposite in direction to the centripetal acceleration. It was tested with satellites like Bion 3 (1975) and Bion 4 (1977); they both had centrifuges on board to put some specimens in an artificial gravity environment.
Differences from normal gravity
From the perspective of people rotating with the habitat, artificial gravity by rotation behaves similarly to normal gravity but with the following differences, which can be mitigated by increasing the radius of a space station; a numerical sketch of these effects follows the list.
Centrifugal force varies with distance: Unlike real gravity, the apparent force felt by observers in the habitat pushes radially outward from the axis, and the centrifugal force is directly proportional to the distance from the axis of the habitat. With a small radius of rotation, a standing person's head would feel significantly less gravity than their feet. Likewise, passengers who move in a space station experience changes in apparent weight in different parts of the body.
The Coriolis effect gives an apparent force that acts on objects that are moving relative to a rotating reference frame. This apparent force acts at right angles to the motion and the rotation axis and tends to curve the motion in the opposite sense to the habitat's spin. If an astronaut inside a rotating artificial gravity environment moves towards or away from the axis of rotation, they will feel a force pushing them in or against the direction of spin. These forces act on the semicircular canals of the inner ear and can cause dizziness. Lengthening the period of rotation (lower spin rate) reduces the Coriolis force and its effects. It is generally believed that at 2 rpm or less, no adverse effects from the Coriolis forces will occur, although humans have been shown to adapt to rates as high as 23 rpm.
Changes in the rotation axis or rate of a spin would cause a disturbance in the artificial gravity field and stimulate the semicircular canals (refer to above). Any movement of mass within the station, including a movement of people, would shift the axis and could potentially cause a dangerous wobble. Thus, the rotation of a space station would need to be adequately stabilized, and any operations to deliberately change the rotation would need to be done slowly enough to be imperceptible. One possible solution to prevent the station from wobbling would be to use its liquid water supply as ballast which could be pumped between different sections of the station as required.
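To put rough numbers on the first two differences above, the following sketch assumes a habitat floor radius of 225 m (chosen so that 1 g is reached at roughly the often-quoted 2 rpm limit) and computes the required spin rate, the head-to-foot difference in apparent gravity, and the Coriolis acceleration for a person walking at 1 m/s; all values are illustrative.

```python
import numpy as np

# Hedged sketch of the rotating-habitat numbers discussed above, for an
# assumed illustrative habitat radius; target apparent gravity is 9.81 m/s^2.
g_target = 9.81          # m/s^2
radius = 225.0           # metres, assumed floor radius of the habitat

omega = np.sqrt(g_target / radius)          # required angular velocity, rad/s
rpm = omega * 60.0 / (2.0 * np.pi)
print(f"spin rate for 1 g at r = {radius} m: {rpm:.2f} rpm")

# Gravity gradient: apparent gravity at head height (1.8 m above the floor,
# i.e. 1.8 m closer to the axis) relative to the feet.
g_head = omega**2 * (radius - 1.8)
print(f"head-to-foot difference: {100 * (1 - g_head / g_target):.2f} %")

# Coriolis acceleration for a person moving at 1 m/s relative to the habitat.
a_coriolis = 2.0 * omega * 1.0
print(f"Coriolis acceleration at 1 m/s: {a_coriolis:.3f} m/s^2")
```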
Human spaceflight
The Gemini 11 mission attempted in 1966 to produce artificial gravity by rotating the capsule around the Agena Target Vehicle to which it was attached by a 36-meter tether. They were able to generate a small amount of artificial gravity, about 0.00015 g, by firing their side thrusters to slowly rotate the combined craft like a slow-motion pair of bolas. The resultant force was too small to be felt by either astronaut, but objects were observed moving towards the "floor" of the capsule.
Health benefits
Artificial gravity has been suggested as a solution to various health risks associated with spaceflight. In 1964, the Soviet space program believed that a human could not survive more than 14 days in space for fear that the heart and blood vessels would be unable to adapt to the weightless conditions. This fear was eventually discovered to be unfounded as spaceflights have now lasted up to 437 consecutive days, with missions aboard the International Space Station commonly lasting 6 months. However, the question of human safety in space did launch an investigation into the physical effects of prolonged exposure to weightlessness. In June 1991, a Spacelab Life Sciences 1 flight performed 18 experiments on two men and two women over nine days. In an environment without gravity, it was concluded that the response of white blood cells and muscle mass decreased. Additionally, within the first 24 hours spent in a weightless environment, blood volume decreased by 10%. Long weightless periods can cause brain swelling and eyesight problems. Upon return to Earth, the effects of prolonged weightlessness continue to affect the human body as fluids pool back to the lower body, the heart rate rises, a drop in blood pressure occurs, and there is a reduced tolerance for exercise.
Artificial gravity, for its ability to mimic the behavior of gravity on the human body, has been suggested as one of the most encompassing manners of combating the physical effects inherent in weightless environments. Other measures that have been suggested as symptomatic treatments include exercise, diet, and Pingvin suits. However, criticism of those methods lies in the fact that they do not fully eliminate health problems and require a variety of solutions to address all issues. Artificial gravity, in contrast, would remove the weightlessness inherent in space travel. By implementing artificial gravity, space travelers would never have to experience weightlessness or the associated side effects. Especially in a modern-day six-month journey to Mars, exposure to artificial gravity is suggested in either a continuous or intermittent form to prevent extreme debilitation to the astronauts during travel.
Proposals
Several proposals have incorporated artificial gravity into their design:
Discovery II: a 2005 vehicle proposal capable of delivering a 172-metric-ton crewed payload to Jupiter's orbit in 118 days. A very small portion of the 1,690-metric-ton craft would incorporate a centrifugal crew station.
Multi-Mission Space Exploration Vehicle (MMSEV): a 2011 NASA proposal for a long-duration crewed space transport vehicle; it included a rotational artificial gravity space habitat intended to promote crew health for a crew of up to six persons on missions of up to two years in duration. The torus-ring centrifuge would utilize both standard metal-frame and inflatable spacecraft structures and would provide 0.11 to 0.69 g if built with the diameter option.
ISS Centrifuge Demo: a 2011 NASA proposal for a demonstration project preparatory to the final design of the larger torus centrifuge space habitat for the Multi-Mission Space Exploration Vehicle. The structure would have an outside diameter of with a ring interior cross-section diameter of . It would provide 0.08 to 0.51 g partial gravity. This test and evaluation centrifuge would have the capability to become a Sleep Module for the ISS crew.
Mars Direct: A plan for a crewed Mars mission created by NASA engineers Robert Zubrin and David Baker in 1990, later expanded upon in Zubrin's 1996 book The Case for Mars. The "Mars Habitat Unit", which would carry astronauts to Mars to join the previously launched "Earth Return Vehicle", would have had artificial gravity generated during flight by tying the spent upper stage of the booster to the Habitat Unit, and setting them both rotating about a common axis.
The proposed Tempo3 mission rotates two halves of a spacecraft connected by a tether to test the feasibility of simulating gravity on a crewed mission to Mars.
The Mars Gravity Biosatellite was a proposed mission meant to study the effect of artificial gravity on mammals. An artificial gravity field of 0.38 g (equivalent to Mars's surface gravity) was to be produced by rotation (32 rpm, radius of ca. 30 cm). Fifteen mice would have orbited Earth (Low Earth orbit) for five weeks and then land alive. However, the program was canceled on 24 June 2009, due to a lack of funding and shifting priorities at NASA.
Vast Space is a private company that proposes to build the world's first artificial gravity space station using the rotating spacecraft concept.
Martian Gravity Simulator in a Lunar Lava Tube and/or Cave is an unproven concept that proposes to build the world's first artificial Martian gravity simulator in a lunar lava tube or cave using inflatable architecture with a rotating internal structure, more precisely a large low-pressure inflatable sphere with one or two rotating higher-pressure tori within it.
Issues with implementation
Some of the reasons that artificial gravity remains unused today in spaceflight trace back to the problems inherent in implementation. One of the realistic methods of creating artificial gravity is the centrifugal effect caused by the centripetal force of the floor of a rotating structure pushing up on the person. In that model, however, issues arise in the size of the spacecraft. As expressed by John Page and Matthew Francis, the smaller a spacecraft (the shorter the radius of rotation), the more rapid the rotation that is required. As such, to simulate gravity, it would be better to utilize a larger spacecraft that rotates slowly.
The size requirement arises from the differing forces on parts of the body at different distances from the axis of rotation. If parts of the body closer to the rotational axis experience a force that is significantly different from parts farther from the axis, then this could have adverse effects. Additionally, questions remain as to the best way to initially set the rotating motion in place without disturbing the stability of the whole spacecraft's orbit. At the moment, there is no ship massive enough to meet the rotation requirements, and the costs associated with building, maintaining, and launching such a craft are extensive.
In general, given the relatively minor health effects observed in today's typically shorter spaceflights, and the large cost of researching a technology that is not yet strictly needed, the development of artificial gravity technology has been slow and sporadic.
As the length of typical spaceflights increases, the need for artificial gravity for passengers will grow, as will the knowledge and resources available to create it. It is likely only a matter of time before conditions are suitable for completing the development of artificial gravity technology, which will almost certainly be required as average mission durations increase.
In science fiction
Several science fiction novels, films, and series have featured artificial gravity production.
In the movie 2001: A Space Odyssey, a rotating centrifuge in the Discovery spacecraft provides artificial gravity.
In the 1999 television series Cowboy Bebop, a rotating ring in the Bebop spacecraft creates artificial gravity throughout the spacecraft.
In the novel The Martian, the Hermes spacecraft achieves artificial gravity by design; it employs a ringed structure, at whose periphery forces around 40% of Earth's gravity are experienced, similar to Mars' gravity.
In the novel Project Hail Mary by the same author, weight on the titular ship Hail Mary is provided initially by engine thrust, as the ship is capable of sustained constant acceleration; it is also able to separate, turn the crew compartment inwards, and rotate to produce artificial gravity while in orbit.
The movie Interstellar features a spacecraft called the Endurance that can rotate on its central axis to create artificial gravity, controlled by retro thrusters on the ship.
The 2021 film Stowaway features the upper stage of a launch vehicle connected by 450-meter long tethers to the ship's main hull, acting as a counterweight for inertia-based artificial gravity.
In the television series For All Mankind, the space hotel Polaris, later renamed Phoenix after being purchased and converted into a space vessel by Helios Aerospace for their own Mars mission, features a wheel-like structure controlled by thrusters to create artificial gravity, whilst a central axial hub operates in zero gravity as a docking station.
Linear acceleration
Linear acceleration is another method of generating artificial gravity, by using the thrust from a spacecraft's engines to create the illusion of being under a gravitational pull. A spacecraft under constant acceleration in a straight line would have the appearance of a gravitational pull in the direction opposite to that of the acceleration, as the thrust from the engines would cause the spacecraft to "push" itself up into the objects and persons inside of the vessel, thus creating the feeling of weight. This is because of Newton's third law: the weight that one would feel standing in a linearly accelerating spacecraft would not be a true gravitational pull, but simply the reaction of oneself pushing against the craft's hull as it pushes back. Similarly, objects that would otherwise be free-floating within the spacecraft if it were not accelerating would "fall" towards the engines when it started accelerating, as a consequence of Newton's first law: the floating object would remain at rest, while the spacecraft would accelerate towards it, so that to an observer inside the craft the object would appear to be "falling".
To emulate artificial gravity on Earth, spacecraft using linear acceleration gravity may be built similar to a skyscraper, with its engines as the bottom "floor". If the spacecraft were to accelerate at the rate of 1 g—Earth's gravitational pull—the individuals inside would be pressed into the hull at the same force, and thus be able to walk and behave as if they were on Earth.
This form of artificial gravity is desirable because it could functionally create the illusion of a gravity field that is uniform and unidirectional throughout a spacecraft, without the need for large, spinning rings, whose fields may not be uniform, not unidirectional with respect to the spacecraft, and require constant rotation. This would also have the advantage of relatively high speed: a spaceship accelerating at 1 g, 9.8 m/s², for the first half of the journey, and then decelerating for the other half, could reach Mars within a few days. Similarly, a hypothetical spacecraft using constant acceleration of 1 g for one year would reach relativistic speeds and allow for a round trip to the nearest star, Proxima Centauri. As such, low-impulse but long-term linear acceleration has been proposed for various interplanetary missions. For example, even heavy (100 ton) cargo payloads could be transported to Mars while retaining approximately 55 percent of the LEO vehicle mass upon arrival into Mars orbit, providing a low-gravity gradient to the spacecraft during the entire journey.
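The "few days" figure can be checked with a simple flip-and-burn estimate. The sketch below ignores orbital mechanics, planetary motion, and relativistic effects, and assumes an illustrative Earth-Mars close-approach distance of about 0.52 AU.

```python
import math

# Hedged sketch: flip-and-burn transit time under constant 1 g acceleration.
# The Earth-Mars distance used is an assumed illustrative close-approach value.
a = 9.81                     # m/s^2
d = 7.8e10                   # metres, roughly 0.52 AU (assumed)

# Accelerate over d/2, then decelerate over d/2: each half takes sqrt(d / a),
# since d/2 = (1/2) a t^2 implies t = sqrt(d / a).
t_total = 2.0 * math.sqrt(d / a)
v_peak = a * math.sqrt(d / a)            # speed at the midpoint flip

print(f"transit time ~ {t_total / 86400:.1f} days")
print(f"peak speed   ~ {v_peak / 1000:.0f} km/s")
```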
This form of gravity is not without challenges, however. At present, the only practical engines that could provide an acceleration comparable to Earth's gravitational pull are chemical reaction rockets, which expel reaction mass to achieve thrust, and thus the acceleration could only last for as long as a vessel had fuel. The vessel would also need to accelerate constantly and at a constant rate to maintain the gravitational effect, and thus would not have gravity while its engines were off, and could experience significant swings in g-forces if the vessel were to accelerate above or below 1 g. Further, for point-to-point journeys, such as Earth-Mars transits, vessels would need to constantly accelerate for half the journey, turn off their engines, perform a 180° flip, reactivate their engines, and then begin decelerating towards the target destination, requiring everything inside the vessel to experience weightlessness and possibly be secured down for the duration of the flip.
A propulsion system with a very high specific impulse (that is, good efficiency in the use of reaction mass that must be carried along and used for propulsion on the journey) could accelerate more slowly, producing useful levels of artificial gravity for long periods of time. A variety of electric propulsion systems provide examples. Two examples of this long-duration, low-thrust, high-impulse propulsion that have either been practically used on spacecraft or are planned for near-term in-space use are Hall effect thrusters and Variable Specific Impulse Magnetoplasma Rockets (VASIMR). Both provide very high specific impulse but relatively low thrust, compared to the more typical chemical reaction rockets. They are thus ideally suited for long-duration firings, which would provide limited but long-term, milli-g levels of artificial gravity in spacecraft.
In a number of science fiction plots, acceleration is used to produce artificial gravity for interstellar spacecraft, propelled by as yet theoretical or hypothetical means.
This effect of linear acceleration is well understood, and is routinely used for 0 g cryogenic fluid management for post-launch (subsequent) in-space firings of upper stage rockets.
Roller coasters, especially launched roller coasters or those that rely on electromagnetic propulsion, can provide linear acceleration "gravity", and so can relatively high acceleration vehicles, such as sports cars. Linear acceleration can be used to provide air-time on roller coasters and other thrill rides.
Simulating lunar gravity
In January 2022, China was reported by the South China Morning Post to have built a small research facility to simulate low lunar gravity with the help of magnets. The facility was reportedly partly inspired by the work of Andre Geim (who later shared the 2010 Nobel Prize in Physics for his research on graphene) and Michael Berry, who both shared the Ig Nobel Prize in Physics in 2000 for the magnetic levitation of a frog.
Simulating microgravity
Parabolic flight
Weightless Wonder is the nickname for the NASA aircraft that flies parabolic trajectories. Briefly, it provides a nearly weightless environment to train astronauts, conduct research, and film motion pictures. The parabolic trajectory creates a vertical linear acceleration that matches that of gravity, giving zero-g for a short time, usually 20–30 seconds, followed by approximately 1.8g for a similar period. The nickname Vomit Comet is also used, referring to motion sickness that aircraft passengers often experience during these parabolic trajectories. Such reduced gravity aircraft are nowadays operated by several organizations worldwide.
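The quoted 20–30 seconds of weightlessness follows from simple projectile kinematics. The sketch below assumes an entry airspeed and pitch-up angle for the aircraft; both are illustrative values rather than official flight parameters.

```python
import math

# Hedged sketch: duration of the zero-g phase of a parabolic flight, treating
# the aircraft as a projectile between pitch-up and pull-out. Entry speed and
# pitch angle are assumed, illustrative values.
g = 9.81                      # m/s^2
speed = 220.0                 # m/s, assumed true airspeed entering the parabola
pitch = math.radians(45.0)    # assumed pitch-up angle

v_vertical = speed * math.sin(pitch)
t_zero_g = 2.0 * v_vertical / g      # time for the vertical velocity to reverse

print(f"zero-g duration ~ {t_zero_g:.0f} s")
```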
Neutral buoyancy
The Neutral Buoyancy Laboratory (NBL) is an astronaut training facility at the Sonny Carter Training Facility at the NASA Johnson Space Center in Houston, Texas. The NBL is a large indoor pool of water, the largest in the world, in which astronauts may perform simulated EVA tasks in preparation for space missions. The NBL contains full-sized mock-ups of the Space Shuttle cargo bay, flight payloads, and the International Space Station (ISS).
The principle of neutral buoyancy is used to simulate the weightless environment of space. The suited astronauts are lowered into the pool using an overhead crane and their weight is adjusted by support divers so that they experience no buoyant force and no rotational moment about their center of mass. The suits worn in the NBL are down-rated from fully flight-rated EMU suits like those in use on the space shuttle and International Space Station.
The NBL tank is in length, wide, and deep, and contains 6.2 million gallons (23.5 million liters) of water. Divers breathe nitrox while working in the tank.
Neutral buoyancy in a pool is not weightlessness, since the balance organs in the inner ear still sense the up-down direction of gravity. Also, there is a significant amount of drag presented by water. Generally, drag effects are minimized by doing tasks slowly in the water. Another difference between neutral buoyancy simulation in a pool and actual EVA during spaceflight is that the temperature of the pool and the lighting conditions are maintained constant.
Graviton control or generator
Speculative or fictional mechanisms
In science fiction, artificial gravity (or cancellation of gravity) or "paragravity" is sometimes present in spacecraft that are neither rotating nor accelerating. At present, there is no confirmed technique as such that can simulate gravity other than actual rotation or acceleration. There have been many claims over the years of such a device. Eugene Podkletnov, a Russian engineer, has claimed since the early 1990s to have made such a device consisting of a spinning superconductor producing a powerful "gravitomagnetic field", but there has been no verification or even negative results from third parties. In 2006, a research group funded by ESA claimed to have created a similar device that demonstrated positive results for the production of gravitomagnetism, although it produced only 0.0001 g. This result has not been replicated.
See also
References
External links
List of peer review papers on artificial gravity
TEDx talk about artificial gravity
Overview of artificial gravity in Sci-Fi and Space Science
NASA's Java simulation of artificial gravity
Variable Gravity Research Facility (xGRF), concept with tethered rotating satellites, perhaps a Bigelow expandable module and a spent upper stage as a counterweight
Gravity
Space colonization
Scientific speculation
Space medicine
Rotation
Laser cooling

Laser cooling includes several techniques where atoms, molecules, and small mechanical systems are cooled with laser light. The directed energy of lasers is often associated with heating materials, e.g. laser cutting, so it can be counterintuitive that laser cooling often results in sample temperatures approaching absolute zero. It is a routine step in many atomic physics experiments where the laser-cooled atoms are then subsequently manipulated and measured, or in technologies, such as atom-based quantum computing architectures. Laser cooling relies on the change in momentum when an object, such as an atom, absorbs and re-emits a photon (a particle of light). For example, if laser light illuminates a warm cloud of atoms from all directions and the laser's frequency is tuned below an atomic resonance, the atoms will be cooled. This common type of laser cooling relies on the Doppler effect where individual atoms will preferentially absorb laser light from the direction opposite to the atom's motion. The absorbed light is re-emitted by the atom in a random direction. After repeated emission and absorption of light the net effect on the cloud of atoms is that they will expand more slowly. The slower expansion reflects a decrease in the velocity distribution of the atoms, which corresponds to a lower temperature and therefore the atoms have been cooled. For an ensemble of particles, the thermodynamic temperature is proportional to the variance of their velocities; therefore, the narrower the velocity distribution, the lower the temperature of the particles.
The 1997 Nobel Prize in Physics was awarded to Claude Cohen-Tannoudji, Steven Chu, and William Daniel Phillips "for development of methods to cool and trap atoms with laser light".
History
Radiation pressure
Radiation pressure is the force that electromagnetic radiation exerts on matter. In 1873 Maxwell published his treatise on electromagnetism in which he predicted radiation pressure. The force was experimentally demonstrated for the first time by Lebedev and reported at a conference in Paris in 1900, and later published in more detail in 1901. Following Lebedev's measurements Nichols and Hull also demonstrated the force of radiation pressure in 1901, with a refined measurement reported in 1903.
Atoms and molecules have bound states and transitions can occur between these states in the presence of light. Sodium is historically notable because it has a strong transition at 589 nm, a wavelength which is close to the peak sensitivity of the human eye. This made it easy to see the interaction of light with sodium atoms. In 1933, Otto Frisch deflected an atomic beam of sodium atoms with light.
This was the first realization of radiation pressure acting on an atom or molecule.
Laser cooling proposals
The introduction of lasers in atomic physics experiments was the precursor to the laser cooling proposals in the mid 1970s. Laser cooling was proposed separately in 1975 by two different research groups: Hänsch and Schawlow, and Wineland and Dehmelt. Both proposals outlined the simplest laser cooling process, known as Doppler cooling, where laser light tuned below an atom's resonant frequency is preferentially absorbed by atoms moving towards the laser and after absorption a photon is emitted in a random direction. This process is repeated many times and in a configuration with counterpropagating laser cooling light the velocity distribution of the atoms is reduced.
In 1977 Ashkin submitted a paper which describes how Doppler cooling could be used to provide the necessary damping to load atoms into an optical trap. In this work he emphasized how this could allow for long spectroscopic measurements which would increase precision because the atoms would be held in place. He also discussed overlapping optical traps to study interactions between different atoms.
Initial realizations
Following the laser cooling proposals, in 1978 two research groups succeeded in laser cooling atoms: Wineland, Drullinger and Walls of NIST, and Neuhauser, Hohenstatt, Toschek and Dehmelt of the University of Washington. The NIST group wanted to reduce the effect of Doppler broadening on spectroscopy. They cooled magnesium ions in a Penning trap to below 40 K. The Washington group cooled barium ions.
The research from both groups served to illustrate the mechanical properties of light.
Influenced by Wineland's work on laser cooling ions, William Phillips applied the same principles to laser cool neutral atoms. In 1982, he published the first paper in which neutral atoms were laser cooled. The process used is now known as the Zeeman slower and is a standard technique for slowing an atomic beam.
Modern advances
Atoms
The Doppler cooling limit for electric dipole transitions is typically in the hundreds of microkelvins. In the 1980s this limit was seen as the lowest achievable temperature. It was therefore a surprise when sodium atoms were cooled to 43 microkelvin, below their Doppler cooling limit of 240 microkelvin. This unexpectedly low temperature was explained by considering the interaction of polarized laser light with more atomic states and transitions; previous conceptions of laser cooling had been too simplistic. The major laser cooling breakthroughs of the 1970s and 1980s led to several improvements to preexisting technology and to new discoveries at temperatures just above absolute zero. The cooling processes were utilized to make atomic clocks more accurate and to improve spectroscopic measurements, and led to the observation of a new state of matter at ultracold temperatures. The new state of matter, the Bose–Einstein condensate, was observed in 1995 by Eric Cornell, Carl Wieman, and Wolfgang Ketterle.
Exotic atoms
Most laser cooling experiments bring the atoms close to at rest in the laboratory frame, but cooling of relativistic atoms has also been achieved, where the effect of cooling manifests as a narrowing of the velocity distribution. In 1990, a group at JGU successfully laser-cooled a beam of 7Li+ at in a storage ring from to lower than , using two counter-propagating lasers addressing the same transition, but at and , respectively, to compensate for the large Doppler shift.
Laser cooling of antimatter has also been demonstrated, first in 2021 by the ALPHA collaboration on antihydrogen atoms.
Molecules
Molecules are significantly more challenging to laser cool than atoms because molecules have vibrational and rotational degrees of freedom. These extra degrees of freedom result in more energy levels that can be populated from excited state decays, requiring more lasers compared to atoms to address the more complex level structure. Vibrational decays are particularly challenging because there are no symmetry rules that restrict the vibrational states that can be populated.
In 2010, a team at Yale successfully laser-cooled a diatomic molecule. In 2016, a group at MPQ successfully cooled formaldehyde to via optoelectric Sisyphus cooling. In 2022, a group at Harvard successfully laser cooled and trapped CaOH to in a magneto-optical trap.
Mechanical systems
Starting in the 2000s, laser cooling was applied to small mechanical systems, ranging from small cantilevers to the mirrors used in the LIGO observatory. These devices are connected to a larger substrate, such as a mechanical membrane attached to a frame, or they are held in optical traps; in both cases the mechanical system is a harmonic oscillator. Laser cooling reduces the random vibrations of the mechanical oscillator, removing thermal phonons from the system.
In 2007, an MIT team successfully laser-cooled a macro-scale (1 gram) object to 0.8 K. In 2011, a team from the California Institute of Technology and the University of Vienna became the first to laser-cool a (10 μm × 1 μm) mechanical object to its quantum ground state.
Methods
The first example of laser cooling, and also still the most common method (so much so that it is still often referred to simply as 'laser cooling') is Doppler cooling.
Doppler cooling
Doppler cooling, which is usually accompanied by a magnetic trapping force to give a magneto-optical trap, is by far the most common method of laser cooling. It is used to cool low density gases down to the Doppler cooling limit, which for rubidium-85 is around 150 microkelvins.
In Doppler cooling, initially, the frequency of light is tuned slightly below an electronic transition in the atom. Because the light is detuned to the "red" (i.e., at lower frequency) of the transition, the atoms will absorb more photons if they move towards the light source, due to the Doppler effect. Thus if one applies light from two opposite directions, the atoms will always scatter more photons from the laser beam pointing opposite to their direction of motion. In each scattering event the atom loses a momentum equal to the momentum of the photon. If the atom, which is now in the excited state, then emits a photon spontaneously, it will be kicked by the same amount of momentum, but in a random direction. Since the initial momentum change is a pure loss (opposing the direction of motion), while the subsequent change is random, the probable result of the absorption and emission process is to reduce the momentum of the atom, and therefore its speed—provided its initial speed was larger than the recoil speed from scattering a single photon. If the absorption and emission are repeated many times, the average speed, and therefore the kinetic energy of the atom, will be reduced. Since the temperature of a group of atoms is a measure of the average random internal kinetic energy, this is equivalent to cooling the atoms.
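The size of each momentum kick and the resulting Doppler cooling limit can be estimated in a few lines. The sketch below uses the rubidium cooling transition at 780 nm and an approximate natural linewidth of 2π × 6 MHz, and reproduces the roughly 150 microkelvin Doppler limit quoted above; the linewidth value is an approximation.

```python
import scipy.constants as c

# Hedged sketch: single-photon recoil velocity and Doppler cooling limit for
# rubidium on its 780 nm cooling transition (approximate natural linewidth).
wavelength = 780e-9                  # m
mass = 85 * c.atomic_mass            # kg, approximate mass of a Rb-85 atom
gamma = 2 * c.pi * 6.07e6            # 1/s, approximate natural linewidth

v_recoil = c.h / (wavelength * mass)         # photon momentum kick divided by mass
T_doppler = c.hbar * gamma / (2 * c.k)       # Doppler limit: hbar * Gamma / (2 kB)

print(f"recoil velocity per photon: {v_recoil * 1000:.2f} mm/s")
print(f"Doppler limit temperature:  {T_doppler * 1e6:.0f} microkelvin")
```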
Other methods
Other methods of laser cooling include:
Sisyphus cooling
Resolved sideband cooling
Raman sideband cooling
Velocity selective coherent population trapping (VSCPT)
Gray molasses
Optical molasses
Cavity-mediated cooling
Use of a Zeeman slower
Electromagnetically induced transparency (EIT) cooling
Anti-Stokes cooling in solids
Polarization gradient cooling
Applications
Laser cooling is very common in the field of atomic physics. Reducing the random motion of atoms has several benefits, including the ability to trap atoms with optical or magnetic fields. Spectroscopic measurements of a cold atomic sample will also have reduced systematic uncertainties due to thermal motion.
Often multiple laser cooling techniques are used in a single experiment to prepare a cold sample of atoms, which is then subsequently manipulated and measured. In a representative experiment, a vapor of strontium atoms is generated in a hot oven and exits the oven as an atomic beam. After leaving the oven the atoms are Doppler cooled in two dimensions transverse to their motion to reduce loss of atoms due to divergence of the atomic beam. The atomic beam is then slowed and cooled with a Zeeman slower to optimize the atom loading efficiency into a magneto-optical trap (MOT), which Doppler cools the atoms on a broad transition using lasers at 461 nm. The MOT then switches from light at 461 nm to light at 689 nm, which drives a much narrower transition, to realize even colder atoms. The atoms are then transferred into an optical dipole trap where evaporative cooling brings them to temperatures at which they can be effectively loaded into an optical lattice.
Laser cooling is important for quantum computing efforts based on neutral atoms and trapped atomic ions. In an ion trap Doppler cooling reduces the random motion of the ions so they form a well-ordered crystal structure in the trap. After Doppler cooling the ions are often cooled to their motional ground state to reduce decoherence during quantum gates between ions.
Equipment
Laser cooling atoms (and molecules especially) requires specialized experimental equipment that when assembled forms a cold atom machine. Such a machine generally consists of two parts: a vacuum chamber which houses the laser cooled atoms and the laser systems used for cooling, as well as for preparing and manipulating atomic states and detecting the atoms.
Vacuum system
In order for atoms to be laser cooled, the atoms cannot collide with room temperature background gas particles. Such collisions will drastically heat the atoms, and knock them out of weak traps. Acceptable collision rates for cold atom machines typically require vacuum pressures of 10⁻⁹ Torr, and very often hundreds or even thousands of times lower pressures are necessary. To achieve these low pressures, a vacuum chamber is needed. The vacuum chamber typically includes windows so that the atoms can be addressed with lasers (e.g. for laser cooling) and so that light emitted by the atoms, or absorption of light by the atoms, can be detected. The vacuum chamber also requires an atomic source for the atom(s) to be laser cooled. The atomic source is generally heated to produce thermal atoms that can be laser cooled. For ion trapping experiments the vacuum system must also hold the ion trap, with the appropriate electric feedthroughs for the trap. Neutral atom systems very often employ a magneto-optical trap (MOT) as one of the early stages in collecting and cooling atoms. For a MOT, magnetic field coils are typically placed outside of the vacuum chamber to generate magnetic field gradients for the MOT.
Lasers
The lasers required for cold atom machines are entirely dependent on the choice of atom. Each atom has unique electronic transitions at very distinct wavelengths that must be driven for the atom to be laser cooled. Rubidium, for example, is a very commonly used atom which requires driving two transitions with laser light at 780 nm that are separated by a few GHz. The light for rubidium can be generated from a single laser at 780 nm and an electro-optic modulator. Generally tens of mW (and often hundreds of mW to cool significantly more atoms) are used to cool neutral atoms. Trapped ions, on the other hand, require microwatts of optical power, as they are generally tightly confined and the laser light can be focused to a small spot size. The strontium ion, for example, requires light at both 422 nm and 1092 nm in order to be Doppler cooled. Because of the small Doppler shifts involved with laser cooling, lasers with very narrow linewidths, on the order of a few MHz, are required for laser cooling. Such lasers are generally stabilized to spectroscopy reference cells, optical cavities, or sometimes wavemeters so the laser light can be precisely tuned relative to the atomic transitions.
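As a rough illustration of why narrow-linewidth lasers are needed, the sketch below evaluates the first-order Doppler shift v/λ at the 780 nm rubidium wavelength for a few atomic speeds; by the end of cooling the shifts become comparable to or smaller than the few-MHz scale mentioned above. The speeds chosen are illustrative.

```python
# Hedged sketch: why MHz-level laser linewidths matter for laser cooling.
# An atom moving at speed v sees laser light Doppler-shifted by v / wavelength.
wavelength = 780e-9        # m, rubidium cooling light (example from the text)

for v in (300.0, 10.0, 0.1):                 # m/s: oven-like, slowed, near-cold
    doppler_shift = v / wavelength           # Hz
    print(f"v = {v:7.1f} m/s  ->  Doppler shift ~ {doppler_shift / 1e6:9.3f} MHz")
```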
See also
Particle beam cooling
References
Additional sources
Laser Cooling HyperPhysics
PhysicsWorld series of articles by Chad Orzel:
Cold: how physicists learned to manipulate and move particles with laser cooling
Colder: how physicists beat the theoretical limit for laser cooling and laid the foundations for a quantum revolution
Coldest: how a letter to Einstein and advances in laser-cooling technology led physicists to new quantum states of matter
Thermodynamics
Atomic physics
Cooling technology
Laser applications
History of gravitational theory

In physics, theories of gravitation postulate mechanisms of interaction governing the movements of bodies with mass. There have been numerous theories of gravitation since ancient times. The first extant sources discussing such theories are found in ancient Greek philosophy. This work was furthered through the Middle Ages by Indian, Islamic, and European scientists, before gaining great strides during the Renaissance and Scientific Revolution—culminating in the formulation of Newton's law of gravity. This was superseded by Albert Einstein's theory of relativity in the early 20th century.
Greek philosopher Aristotle found that objects immersed in a medium tend to fall at speeds proportional to their weight. Vitruvius understood that objects fall based on their specific gravity. In the 6th century CE, Byzantine Alexandrian scholar John Philoponus modified the Aristotelian concept of gravity with the theory of impetus. In the 7th century, Indian astronomer Brahmagupta spoke of gravity as an attractive force. In the 14th century, European philosophers Jean Buridan and Albert of Saxony—who were influenced by certain Islamic scholars—developed the theory of impetus and linked it to the acceleration and mass of objects. Albert also developed a law of proportion regarding the relationship between the speed of an object in free fall and the time elapsed.
Italians of the 16th century found that objects in free fall tend to accelerate equally. In 1632, Galileo Galilei put forth the basic principle of relativity. The existence of the gravitational constant was explored by various researchers from the mid-17th century, helping Isaac Newton formulate his law of universal gravitation. Newton's classical mechanics were superseded in the early 20th century, when Einstein developed the special and general theories of relativity. An elemental force carrier of gravity is hypothesized in quantum gravity approaches such as string theory, in a potentially unified theory of everything.
Antiquity
Classical antiquity
Heraclitus, Anaxagoras, Empedocles and Leucippus
The pre-Socratic Ionian Greek philosopher Heraclitus used the word logos ('word') to describe a kind of law which keeps the cosmos in harmony, moving all objects, including the stars, winds, and waves. Anaxagoras (c. 500 – c. 428 BC), another Ionian philosopher, introduced the concept of nous (cosmic mind) as an ordering force.
In the cosmogonic works of the Greek philosopher Empedocles (c. 494 – c. 434/443 BC), two opposing fundamental cosmic forces, "attraction" and "repulsion", were distinguished, which Empedocles personified as "Love" and "Strife" (Philotes and Neikos).
The ancient atomist Leucippus (5th-century BCE) proposed the cosmos was created when a large group of atoms came together and swirled as a vortex. The smaller atoms became the celestial bodies of the cosmos. The larger atoms in the center came together as a membrane from which the Earth was formed.
Aristotle
In the 4th century BCE, Greek philosopher Aristotle taught that there is no effect or motion without a cause. The cause of the downward natural motion of heavy bodies, such as the elements earth and water, was related to their nature (gravity), which caused them to move downward toward the center of the (geocentric) universe. For this reason Aristotle supported a spherical Earth, since "every portion of earth has weight until it reaches the centre, and the jostling of parts greater and smaller would bring about not a waved surface, but rather compression and convergence of part and part until the centre is reached". On the other hand, light bodies such as the elements fire and air were moved by their nature (levity) upward toward the celestial sphere of the Moon (see sublunary sphere). Astronomical objects near the fixed stars are composed of aether, whose natural motion is circular. Beyond them is the prime mover, the final cause of all motion in the cosmos. In his Physics, Aristotle correctly asserted that objects immersed in a medium tend to fall at speeds proportional to their weight and inversely proportional to the density of the medium.
Strato of Lampsacus, Epicurus and Aristarchus of Samos
Greek philosopher Strato of Lampsacus (c. 335 – c. 269 BCE) rejected the Aristotelian belief of "natural places" in exchange for a mechanical view in which objects do not gain weight as they fall, instead arguing that the greater impact was due to an increase in speed.
Epicurus (c. 341–270 BCE) viewed weight as an inherent property of atoms which influences their movement. These atoms move downward in constant free fall within an infinite vacuum without resistance at equal speed, regardless of their mass. On the other hand, upward motion is due to atomic collisions. Epicureans deviated from older atomist theories like Democritus' (c. 460–c. 370 BCE) by proposing the idea that atoms may randomly deviate from their expected course.
Greek astronomer Aristarchus of Samos (c. 310 – c. 230 BCE) theorized Earth's rotation around its own axis and the orbit of Earth around the Sun in a heliocentric cosmology. Seleucus of Seleucia (c. 190 – c. 150 BCE) supported his cosmology and also described gravitational effects of the Moon on the tidal range.
Archimedes
The 3rd-century-BCE Greek physicist Archimedes (c. 287 – c. 212 BCE) discovered the centre of mass of a triangle. He also postulated that if the centres of gravity of two equal weights were not the same, the centre of gravity of both together would be located in the middle of the line that joins them. In On Floating Bodies, Archimedes claimed that for any object submerged in a fluid there is an equivalent upward buoyant force to the weight of the fluid displaced by the object's volume. The fluids described by Archimedes are not self-gravitating, since he assumes that "any fluid at rest is the surface of a sphere whose centre is the same as that of the Earth".
Hipparchus of Nicaea, Lucretius and Vitruvius
Greek astronomer Hipparchus of Nicaea (c. 190 – c. 120 BCE) also rejected Aristotelian physics and followed Strato in adopting some form of theory of impetus to explain motion. The poem De rerum natura by Lucretius (c. 99 – c. 55 BCE) asserts that more massive bodies fall faster through a medium because the medium resists them less, but that in a vacuum all bodies fall with equal speed. Roman engineer and architect Vitruvius (c. 85 – c. 15 BCE) contends in his De architectura that gravity is not dependent on a substance's weight but rather on its 'nature' (cf. specific gravity):
If the quicksilver is poured into a vessel, and a stone weighing one hundred pounds is laid upon it, the stone swims on the surface, and cannot depress the liquid, nor break through, nor separate it. If we remove the hundred pound weight, and put on a scruple of gold, it will not swim, but will sink to the bottom of its own accord. Hence, it is undeniable that the gravity of a substance depends not on the amount of its weight, but on its nature.
Plutarch, Pliny the Elder, and Claudius Ptolemy
Greek philosopher Plutarch attested the existence of Roman astronomers who rejected Aristotelian physics, "even contemplating theories of inertia and universal gravitation", and suggested that gravitational attraction was not unique to the Earth. The gravitational effects of the Moon on the tides were noticed by Pliny the Elder (23–79 CE) in his Naturalis Historia and Claudius Ptolemy (100 – c. 170 CE) in his Tetrabiblos.
Byzantine era
John Philoponus
In the 6th century CE, the Byzantine Alexandrian scholar John Philoponus proposed the theory of impetus, which modifies Aristotle's theory that "continuation of motion depends on continued action of a force" by incorporating a causative force which diminishes over time. He also noted in his commentary on Aristotle's Physics that "if one lets fall simultaneously from the same height two bodies differing greatly in weight, one will find that the ratio of the times of their motion does not correspond to the ratios of their weights, but the difference in time is a very small one".
Indian subcontinent
Brahmagupta
Brahmagupta (c. 598 – c. 668 CE) was the first among Indian mathematicians and astronomers to describe gravity as an attractive force, using the term "gurutvākarṣaṇam (गुरुत्वाकर्षणम्)":
The earth on all its sides is the same; all people on the earth stand upright, and all heavy things fall down to the earth by a law of nature, for it is the nature of the earth to attract and to keep things, as it is the nature of water to flow ... If a thing wants to go deeper down than the earth, let it try. The earth is the only low thing, and seeds always return to it, in whatever direction you may throw them away, and never rise upwards from the earth.
Bhāskarāchārya
Another famous Indian mathematician and astronomer, Bhāskarā II (Bhāskarāchārya, "Bhāskara, the teacher", c. 1114 – c. 1185), describes gravity as an inherent attractive property of Earth in the section Golādhyāyah (On Spherics) of his treatise Siddhānta Shiromani:
The property of attraction is inherent in the Earth. By this property the Earth attracts any unsupported heavy thing towards it: The thing appears to be falling but it is in a state of being drawn to Earth. ... It is manifest from this that ... people situated at distances of a fourth part of the circumference [of earth] from us or in the opposite hemisphere, cannot by any means fall downwards [in space].
Islamic world
Abu Ma'shar
Ancient Greeks like Posidonius had associated the tides in the sea with the influence of moonlight. Circa 850 AD, Abu Ma'shar al-Balkhi (Albumasar) recorded the tides and the Moon's position and noticed high tides when the Moon was below the horizon. Abu Ma'shar considered an alternative explanation in which the Moon and the sea had to share some astrological virtue that attracted them to each other. This work was translated into Latin and became one of the two main theories for tides among European scholars.
Ibn Sina
In the 11th century CE, Persian polymath Ibn Sina (Avicenna) agreed with Philoponus' theory that "the moved object acquires an inclination from the mover" as an explanation for projectile motion. Ibn Sina then published his own theory of impetus in The Book of Healing (c. 1020). Unlike Philoponus, who believed that it was a temporary virtue that would decline even in a vacuum, Ibn Sina viewed it as persistent, requiring external forces such as air resistance to dissipate it. Ibn Sina made a distinction between 'force' and 'inclination' (mayl), and argued that an object gained mayl when the object is in opposition to its natural motion. He concluded that continuation of motion is attributed to the inclination that is transferred to the object, and that the object will be in motion until the mayl is spent. The Iraqi polymath Ibn al-Haytham described gravity as a force by which a heavier body moves towards the centre of the Earth. He also argued that the force of gravity acts only towards the centre of the Earth, not in different directions.
Al-Biruni
Another 11th-century Persian polymath, Al-Biruni, proposed that heavenly bodies have mass, weight, and gravity, just like the Earth. He criticized both Aristotle and Ibn Sina for holding the view that only the Earth has these properties. The 12th-century scholar Al-Khazini suggested that the gravity an object contains varies depending on its distance from the centre of the universe (referring to the centre of the Earth). Al-Biruni and Al-Khazini studied the theory of the centre of gravity, and generalized and applied it to three-dimensional bodies. Fine experimental methods were also developed for determining the specific gravity or specific weight of objects, based on the theory of balances and weighing.
Abu'l-Barakāt al-Baghdādī
In the 12th century, Abu'l-Barakāt al-Baghdādī adopted and modified Ibn Sina's theory on projectile motion. In his Kitab al-Mu'tabar, Abu'l-Barakat stated that the mover imparts a violent inclination (mayl qasri) on the moved and that this diminishes as the moving object distances itself from the mover. According to Shlomo Pines, al-Baghdādī's theory of motion was "the oldest negation of Aristotle's fundamental dynamic law [namely, that a constant force produces a uniform motion], [and is thus an] anticipation in a vague fashion of the fundamental law of classical mechanics [namely, that a force applied continuously produces acceleration]."
European Renaissance
14th century
Jean Buridan, the Oxford Calculators, Albert of Saxony
In the 14th century, both the French philosopher Jean Buridan and the Oxford Calculators (the Merton School) of Merton College, Oxford, rejected the Aristotelian concept of gravity. They attributed the motion of objects to an impetus (akin to momentum), which varies according to velocity and mass; Buridan was influenced in this by Ibn Sina's Book of Healing. Buridan and the philosopher Albert of Saxony (c. 1320–1390) adopted Abu'l-Barakat's theory that the acceleration of a falling body is a result of its increasing impetus. Influenced by Buridan, Albert developed a law of proportion regarding the relationship between the speed of an object in free fall and the time elapsed. He also theorized that mountains and valleys are caused by erosion—displacing the Earth's centre of gravity.
Uniform and difform motion
The roots of Domingo de Soto's expression uniform difform motion [uniformly accelerated motion] lie in the Oxford Calculators' terms "uniform" motion and "difform" motion. "Uniform" motion was used differently then than it would be now: it might have referred both to constant speed and to motion in which all parts of a body are moving at equal speed. Apparently, the Calculators did not illustrate the different types of motion with real-world examples. John of Holland at the University of Prague illustrated uniform motion with what would later be called uniform velocity, but also with a falling stone (all parts moving at the same speed), and with a sphere in uniform rotation. He did, however, make distinctions between different kinds of "uniform" motion. Difform motion was exemplified by walking at increasing speed.
Mean speed theorem
Also in the 14th century, the Merton School developed the mean speed theorem: a uniformly accelerated body starting from rest travels the same distance as a body with uniform speed whose speed is half the final velocity of the accelerated body. The mean speed theorem was proved by Nicole Oresme (c. 1323–1382) and would be influential in later gravitational equations. Written as a modern equation: $s = \frac{1}{2} v_f t$, where $s$ is the distance travelled, $v_f$ the final velocity, and $t$ the elapsed time.
However, since small time intervals could not be measured, the relationship between time and distance was not so evident as the equation suggests. More generally, equations, which were not widely used until after Galileo's time, imply a clarity that was not there.
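As a modern reconstruction (not the Calculators' or Oresme's own notation), the theorem for a body uniformly accelerated from rest can be written out as follows, with $a$ the constant acceleration, $t$ the elapsed time, $v_f = at$ the final speed, and $s$ the distance covered:

```latex
% Mean speed theorem for a body uniformly accelerated from rest.
% a: constant acceleration, t: elapsed time, v_f = a t: final speed, s: distance covered.
\[
  \bar{v} = \frac{0 + v_f}{2} = \frac{v_f}{2},
  \qquad
  s = \bar{v}\, t = \frac{1}{2} v_f\, t = \frac{1}{2} a t^{2}.
\]
```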
15th–17th century
Leonardo da Vinci
Leonardo da Vinci (1452–1519) made drawings recording the acceleration of falling objects. He wrote that the "mother and origin of gravity" is energy. He describes two pairs of physical powers which stem from a metaphysical origin and have an effect on everything: abundance of force and motion, and gravity and resistance. He associates gravity with the 'cold' classical elements, water and earth, and calls its energy infinite. In Codex Arundel, Leonardo recorded that if a water-pouring vase moves transversally (sideways), simulating the trajectory of a vertically falling object, it produces a right triangle with equal leg length, composed of falling material that forms the hypotenuse and the vase trajectory forming one of the legs. On the hypotenuse, Leonardo noted the equivalence of the two orthogonal motions, one effected by gravity and the other proposed by the experimenter.
Nicolaus Copernicus, Petrus Apianus
By 1514, Nicolaus Copernicus had written an outline of his heliocentric model, in which he stated that Earth's centre is the centre of both its rotation and the orbit of the Moon. In 1533, German humanist Petrus Apianus described the exertion of gravity:
Since it is apparent that in the descent [along the arc] there is more impediment acquired, it is clear that gravity is diminished on this account. But because this comes about by reason of the position of heavy bodies, let it be called a positional gravity [i.e. gravitas secundum situm]
Francesco Beato and Luca Ghini
By 1544, according to Benedetto Varchi, the experiments of at least two Italians, Francesco Beato, a Dominican philosopher at Pisa, and Luca Ghini, a physician and botanist from Bologna, had dispelled the Aristotelian claim that objects fall at speeds proportional to their weight.
Domingo de Soto
In 1551, Domingo de Soto theorized that objects in free fall accelerate uniformly in his book Physicorum Aristotelis quaestiones. This idea was subsequently explored in more detail by Galileo Galilei, who derived his kinematics from the 14th-century Merton College and Jean Buridan, and possibly De Soto as well.
Simon Stevin
In 1585, Flemish polymath Simon Stevin performed a demonstration for Jan Cornets de Groot, a local politician in the Dutch city of Delft. Stevin dropped two lead balls from the Nieuwe Kerk in that city. From the sound of the impacts, Stevin deduced that the balls had fallen at the same speed. The result was published in 1586.
Galileo Galilei
Galileo successfully applied mathematics to the acceleration of falling objects, correctly hypothesizing in a 1604 letter to Paolo Sarpi that the distance of a falling object is proportional to the square of the time elapsed.
Written with modern symbols: $d \propto t^{2}$.
The result was published in Two New Sciences in 1638. In the same book, Galileo suggested that the slight variance of speed of falling objects of different mass was due to air resistance, and that objects would fall completely uniformly in a vacuum. The relation of the distance of objects in free fall to the square of the time taken was confirmed by Italian Jesuits Grimaldi and Riccioli between 1640 and 1650. They also made a calculation of the gravity of Earth by recording the oscillations of a pendulum.
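In present-day notation (a reconstruction; Galileo did not write the law this way, and the value of g used here is the modern one), the time-squared relation becomes an explicit formula with a sample value:

```latex
% Free fall from rest: distance grows as the square of the elapsed time.
% g is the modern value of the gravitational acceleration at Earth's surface.
\[
  d = \frac{1}{2} g t^{2},
  \qquad
  t = 2~\mathrm{s} \;\Rightarrow\; d \approx \frac{1}{2}\,(9.8~\mathrm{m/s^2})\,(2~\mathrm{s})^{2} \approx 19.6~\mathrm{m}.
\]
```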
Johannes Kepler
In his Astronomia nova (1609), Johannes Kepler proposed an attractive force of limited radius between any "kindred" bodies:
Gravity is a mutual corporeal disposition among kindred bodies to unite or join together; thus the earth attracts a stone much more than the stone seeks the earth. (The magnetic faculty is another example of this sort).... If two stones were set near one another in some place in the world outside the sphere of influence of a third kindred body, these stones, like two magnetic bodies, would come together in an intermediate place, each approaching the other by a space proportional to the bulk [moles] of the other....
Evangelista Torricelli
A disciple of Galileo, Evangelista Torricelli reiterated Aristotle's model involving a gravitational centre, adding his view that a system can only be in equilibrium when the common centre itself is unable to fall.
European Enlightenment
The relation of the distance of objects in free fall to the square of the time taken was confirmed by Francesco Maria Grimaldi and Giovanni Battista Riccioli between 1640 and 1650. They also calculated the strength of Earth's gravity by recording the oscillations of a pendulum.
Mechanical explanations
In 1644, René Descartes proposed that no empty space can exist and that a continuum of matter causes every motion to be curvilinear. Thus, centrifugal force thrusts relatively light matter away from the central vortices of celestial bodies, lowering density locally and thereby creating centripetal pressure. Using aspects of this theory, between 1669 and 1690, Christiaan Huygens designed a mathematical vortex model. In one of his proofs, he shows that the distance elapsed by an object dropped from a spinning wheel will increase proportionally to the square of the wheel's rotation time. In 1671, Robert Hooke speculated that gravitation is the result of bodies emitting waves in the aether. Nicolas Fatio de Duillier (1690) and Georges-Louis Le Sage (1748) proposed a corpuscular model using some sort of screening or shadowing mechanism. In 1784, Le Sage posited that gravity could be a result of the collision of atoms, and in the early 19th century, he expanded Daniel Bernoulli's theory of corpuscular pressure to the universe as a whole. A similar model was later created by Hendrik Lorentz (1853–1928), who used electromagnetic radiation instead of corpuscles.
English mathematician Isaac Newton used Descartes' argument that curvilinear motion constrains inertia, and in 1675 argued that aether streams attract all bodies to one another. Newton (1717) and Leonhard Euler (1760) proposed a model in which the aether loses density near mass, leading to a net force acting on bodies. Further mechanical explanations of gravitation (including Le Sage's theory) were created between 1650 and 1900 to explain Newton's theory, but mechanistic models eventually fell out of favor because most of them led to an unacceptable amount of drag (air resistance), which was not observed, while others violated the law of conservation of energy or were incompatible with modern thermodynamics.
'Weight' before Newton
Before Newton, 'weight' had the double meaning 'amount' and 'heaviness'.
Mass as distinct from weight
In 1686, Newton gave the concept of mass its name. In the first paragraph of Principia, Newton defined quantity of matter as “density and bulk conjunctly”, and mass as quantity of matter.
Newton's law of universal gravitation
In 1679, Robert Hooke wrote to Isaac Newton of his hypothesis concerning orbital motion, which partly depends on an inverse-square force. In 1684, both Hooke and Newton told Edmond Halley that they had proven the inverse-square law of planetary motion, in January and August, respectively. While Hooke refused to produce his proofs, Newton was prompted to compose De motu corporum in gyrum ('On the motion of bodies in an orbit'), in which he mathematically derives Kepler's laws of planetary motion. In 1687, with Halley's support (and to Hooke's dismay), Newton published Philosophiæ Naturalis Principia Mathematica (Mathematical Principles of Natural Philosophy), which hypothesizes the inverse-square law of universal gravitation. In his own words:
I deduced that the forces which keep the planets in their orbs must be reciprocally as the squares of their distances from the centres about which they revolve; and thereby compared the force requisite to keep the moon in her orb with the force of gravity at the surface of the earth; and found them to answer pretty nearly.
Newton's original formula was:
$\text{force of gravity} \propto \frac{\text{mass of object 1} \times \text{mass of object 2}}{(\text{distance between centers})^{2}},$
where the symbol $\propto$ means "is proportional to". To make this into an equal-sided formula or equation, there needed to be a multiplying factor or constant that would give the correct force of gravity no matter the value of the masses or distance between them – the gravitational constant. Newton would need an accurate measure of this constant to prove his inverse-square law. Reasonably accurate measurements were not available until the Cavendish experiment by Henry Cavendish in 1797.
In Newton's theory (rewritten using more modern mathematics) the density of mass $\rho$ generates a scalar field, the gravitational potential $\Phi$ in joules per kilogram, by
$\operatorname{div}\big(\operatorname{grad}\,\Phi\big) = 4\pi G \rho.$
Using the Nabla operator $\nabla$ for the gradient and divergence (partial derivatives), this can be conveniently written as:
$\nabla^{2} \Phi = 4\pi G \rho.$
This scalar field governs the motion of a free-falling particle by:
$\ddot{\vec{r}}(t) = -\nabla \Phi\big(\vec{r}(t)\big).$
At distance $r$ from an isolated mass $M$, the scalar field is
$\Phi = -\frac{GM}{r}.$
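A minimal numerical sketch of these formulas, assuming standard SI values for the gravitational constant and for the Earth's mass and mean radius (the function names and constants below are illustrative, not from any particular source):

```python
# Newtonian gravity: potential Phi(r) = -G*M/r and field strength |grad Phi| = G*M/r^2
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24     # mass of the Earth, kg (assumed standard value)
R_EARTH = 6.371e6      # mean radius of the Earth, m (assumed standard value)

def potential(mass, r):
    """Gravitational potential in J/kg at distance r (m) from an isolated mass (kg)."""
    return -G * mass / r

def field_strength(mass, r):
    """Magnitude of the gravitational acceleration in m/s^2 at distance r."""
    return G * mass / r ** 2

print(potential(M_EARTH, R_EARTH))       # about -6.3e7 J/kg at the surface
print(field_strength(M_EARTH, R_EARTH))  # about 9.8 m/s^2, the familiar g
```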
The Principia sold out quickly, inspiring Newton to publish a second edition in 1713.
However the theory of gravity itself was not accepted quickly.
The theory of gravity faced two barriers. First, scientists such as Gottfried Wilhelm Leibniz complained that it relied on action at a distance, and that the mechanism of gravity was "invisible, intangible, and not mechanical". The French philosopher Voltaire countered these concerns, ultimately writing his own book to explain aspects of it to French readers in 1738, which helped to popularize Newton's theory.
Second, detailed comparisons with astronomical data were not initially favorable. Among the most conspicuous issues was the so-called great inequality of Jupiter and Saturn. Comparisons of ancient astronomical observations to those of the early 1700s implied that the orbit of Saturn was increasing in diameter while that of Jupiter was decreasing. Ultimately this meant Saturn would exit the Solar System and Jupiter would collide with other planets or the Sun. The problem was tackled first by Leonhard Euler in 1748, then by Joseph-Louis Lagrange in 1763, and by Pierre-Simon Laplace in 1773. Each effort improved the mathematical treatment until the issue was resolved by Laplace in 1784, approximately 100 years after Newton's first publication on gravity. Laplace showed that the changes were periodic but with immensely long periods beyond any existing measurements.
Successes such as the solution to the mystery of the great inequality of Jupiter and Saturn accumulated. In 1755, Prussian philosopher Immanuel Kant published a cosmological manuscript based on Newtonian principles, in which he develops an early version of the nebular hypothesis. Edmond Halley proposed that the similar-looking objects appearing every 76 years were in fact a single comet. The appearance of the comet in 1759, now named after him, within a month of predictions based on Newton's gravity greatly improved scientific opinion of the theory. Newton's theory enjoyed its greatest success when it was used to predict the existence of Neptune based on motions of Uranus that could not be accounted for by the actions of the other planets. Calculations by John Couch Adams and Urbain Le Verrier both predicted the general position of the planet. In 1846, Le Verrier sent his position to Johann Gottfried Galle, asking him to verify it. The same night, Galle spotted Neptune near the position Le Verrier had predicted.
Not every comparison was successful. By the end of the 19th century, Le Verrier showed that the orbit of Mercury could not be accounted for entirely under Newtonian gravity, and all searches for another perturbing body (such as a planet orbiting the Sun even closer than Mercury) were fruitless. Even so, Newton's theory is thought to be exceptionally accurate in the limit of weak gravitational fields and low speeds.
At the end of the 19th century, many tried to combine Newton's force law with the established laws of electrodynamics (like those of Wilhelm Eduard Weber, Carl Friedrich Gauss, and Bernhard Riemann) in order to explain the anomalous perihelion precession of Mercury. In 1890, Maurice Lévy succeeded in doing so by combining the laws of Weber and Riemann, whereby the speed of gravity is equal to the speed of light. In another attempt, Paul Gerber (1898) succeeded in deriving the correct formula for the perihelion shift (which was identical to the formula later used by Albert Einstein). These hypotheses were rejected because of the outdated laws they were based on, being superseded by those of James Clerk Maxwell.
Modern era
In 1900, Hendrik Lorentz tried to explain gravity on the basis of his ether theory and Maxwell's equations. He assumed, like Ottaviano Fabrizio Mossotti and Johann Karl Friedrich Zöllner, that the attraction of oppositely charged particles is stronger than the repulsion of equally charged particles. The resulting net force is exactly what is known as universal gravitation, in which the speed of gravity is that of light. However, Lorentz calculated that the value this model gave for the perihelion advance of Mercury was much too low.
In the late 19th century, Lord Kelvin pondered the possibility of a theory of everything. He proposed that every body pulsates, which might be an explanation of gravitation and electric charges. His ideas were largely mechanistic and required the existence of the aether, which the Michelson–Morley experiment failed to detect in 1887. This, combined with Mach's principle, led to gravitational models which feature action at a distance.
Albert Einstein developed his revolutionary theory of relativity in papers published in 1905 and 1915; these account for the perihelion precession of Mercury. In 1914, Gunnar Nordström attempted to unify gravity and electromagnetism in his theory of five-dimensional gravitation. General relativity was proven in 1919, when Arthur Eddington observed gravitational lensing around a solar eclipse, matching Einstein's equations. This resulted in Einstein's theory superseding Newtonian physics. Thereafter, German mathematician Theodor Kaluza promoted the idea of general relativity with a fifth dimension, which in 1921 Swedish physicist Oskar Klein gave a physical interpretation of in a prototypical string theory, a possible model of quantum gravity and potential theory of everything.
Einstein's field equations include a cosmological constant to account for the alleged staticity of the universe. However, Edwin Hubble observed in 1929 that the universe appears to be expanding. By the 1930s, Paul Dirac developed the hypothesis that gravitation should slowly and steadily decrease over the course of the history of the universe. Alan Guth and Alexei Starobinsky proposed in 1980 that cosmic inflation in the very early universe could have been driven by a negative pressure field, a concept later coined 'dark energy', which was found in 2013 to compose around 68.3% of the universe.
In 1922, Jacobus Kapteyn proposed the existence of dark matter, an unseen form of matter whose gravitational pull moves stars in galaxies at higher velocities than the visible matter alone can account for. It was found in 2013 to comprise 26.8% of the universe. Along with dark energy, dark matter is an outlier in Einstein's relativity, and an explanation for its apparent effects is a requirement for a successful theory of everything.
In 1957, Hermann Bondi proposed that negative gravitational mass (combined with negative inertial mass) would comply with the strong equivalence principle of general relativity and Newton's laws of motion. Bondi's proof yielded singularity-free solutions for the relativity equations.
Early theories of gravity attempted to explain planetary orbits (Newton) and more complicated orbits (e.g. Lagrange). Then came unsuccessful attempts to combine gravity with either wave or corpuscular theories. The whole landscape of physics was changed with the discovery of Lorentz transformations, and this led to attempts to reconcile them with gravity. At the same time, experimental physicists started testing the foundations of gravity and relativity—Lorentz invariance, the gravitational deflection of light, the Eötvös experiment. These considerations led to, and then beyond, the development of general relativity.
Einstein (1905, 1908, 1912)
In 1905, Albert Einstein published a series of papers in which he established the special theory of relativity and the fact that mass and energy are equivalent. In 1907, in what he described as "the happiest thought of my life", Einstein realized that someone who is in free fall experiences no gravitational field. In other words, gravitation is exactly equivalent to acceleration.
Einstein's two-part publication in 1912 (and before in 1908) is really only important for historical reasons. By then he knew of the gravitational redshift and the deflection of light. He had realized that Lorentz transformations are not generally applicable, but retained them. The theory states that the speed of light is constant in free space but varies in the presence of matter. The theory was only expected to hold when the source of the gravitational field is stationary. It includes the principle of least action:
$\delta \int ds = 0, \qquad ds^{2} = \eta_{\mu\nu}\, dx^{\mu}\, dx^{\nu},$
where $\eta_{\mu\nu}$ is the Minkowski metric, and there is a summation from 1 to 4 over the indices $\mu$ and $\nu$.
The joint work of Einstein and Grossmann (1913) includes Riemannian geometry and tensor calculus.
The equations of electrodynamics exactly match those of general relativity. One further equation of the theory, which expresses the stress–energy tensor as a function of the matter density, is not in general relativity.
Lorentz-invariant models (1905–1910)
Based on the principle of relativity, Henri Poincaré (1905, 1906), Hermann Minkowski (1908), and Arnold Sommerfeld (1910) tried to modify Newton's theory and to establish a Lorentz invariant gravitational law, in which the speed of gravity is that of light. As in Lorentz's model, the value for the perihelion advance of Mercury was much too low.
Abraham (1912)
Meanwhile, Max Abraham developed an alternative model of gravity in which the speed of light depends on the gravitational field strength and so is variable almost everywhere. Abraham's 1914 review of gravitation models is said to be excellent, but his own model was poor.
Nordström (1912)
The first approach of Nordström (1912) was to retain the Minkowski metric and a constant value of the speed of light, but to let mass depend on the gravitational field strength. The field strength was required to satisfy a field equation whose source is the rest mass energy, with the d'Alembertian as the wave operator; the mass reduces to its ordinary value when the gravitational potential vanishes; and the equation of motion is written in terms of the four-velocity, with the dot denoting differentiation with respect to time.
The second approach of Nordström (1913) is remembered as the first logically consistent relativistic field theory of gravitation ever formulated. In this theory (in the notation of Pais rather than Nordström), gravitation is carried by a scalar field.
This theory is Lorentz invariant, satisfies the conservation laws, correctly reduces to the Newtonian limit and satisfies the weak equivalence principle.
Einstein and Fokker (1914)
This theory is Einstein's first treatment of gravitation in which general covariance is strictly obeyed. By writing the metric in a suitable form, they relate the Einstein–Grossmann theory to Nordström's theory. They also state a relation whereby the trace of the stress–energy tensor is proportional to the curvature of space.
Between 1911 and 1915, Einstein developed the idea that gravitation is equivalent to acceleration, initially stated as the equivalence principle, into his general theory of relativity, which fuses the three dimensions of space and the one dimension of time into the four-dimensional fabric of spacetime. However, it does not unify gravity with quanta—individual particles of energy, which Einstein himself had postulated the existence of in 1905.
General relativity
In general relativity, the effects of gravitation are ascribed to spacetime curvature instead of to a force. The starting point for general relativity is the equivalence principle, which equates free fall with inertial motion. The issue that this creates is that free-falling objects can accelerate with respect to each other. To deal with this difficulty, Einstein proposed that spacetime is curved by matter, and that free-falling objects are moving along locally straight paths in curved spacetime. More specifically, Einstein and David Hilbert discovered the field equations of general relativity, which relate the presence of matter and the curvature of spacetime. These field equations are a set of 10 simultaneous, non-linear, differential equations. The solutions of the field equations are the components of the metric tensor of spacetime, which describes its geometry. The geodesic paths of spacetime are calculated from the metric tensor.
Notable solutions of the Einstein field equations include:
The Schwarzschild solution, which describes spacetime surrounding a spherically symmetrical non-rotating uncharged massive object. For objects with radii smaller than the Schwarzschild radius, this solution generates a black hole with a central singularity.
The Reissner–Nordström solution, in which the central object has an electrical charge. For charges with a geometrized length less than the geometrized length of the mass of the object, this solution produces black holes with an event horizon surrounding a Cauchy horizon.
The Kerr solution for rotating massive objects. This solution also produces black holes with multiple horizons.
The cosmological Robertson–Walker solution (from 1922 and 1924), which predicts the expansion of the universe.
General relativity has enjoyed much success because its predictions (not called for by older theories of gravity) have been regularly confirmed. For example:
General relativity accounts for the anomalous perihelion precession of Mercury.
Gravitational lensing was first confirmed in 1919, and has more recently been strongly confirmed through the use of a quasar which passes behind the Sun as seen from the Earth.
The expansion of the universe (predicted by the Robertson–Walker metric) was confirmed by Edwin Hubble in 1929.
The prediction that time runs slower at lower potentials has been confirmed by the Pound–Rebka experiment, the Hafele–Keating experiment, and the GPS.
The time delay of light passing close to a massive object was first identified by Irwin Shapiro in 1964 in interplanetary spacecraft signals.
Gravitational radiation has been indirectly confirmed through studies of binary pulsars such as PSR 1913+16.
In 2015, the LIGO experiments directly detected gravitational radiation from two colliding black holes, making this the first direct observation of both gravitational waves and black holes.
It is believed that neutron star mergers (since detected in 2017) and black hole formation may also create detectable amounts of gravitational radiation.
Quantum gravity
Several decades after the discovery of general relativity, it was realized that it cannot be the complete theory of gravity because it is incompatible with quantum mechanics. Later it was understood that it is possible to describe gravity in the framework of quantum field theory like the other fundamental forces. In this framework, the attractive force of gravity arises due to exchange of virtual gravitons, in the same way as the electromagnetic force arises from exchange of virtual photons. This reproduces general relativity in the classical limit, but only at the linearized level and under the postulate that the conditions for the applicability of the Ehrenfest theorem hold, which is not always the case. Moreover, this approach fails at short distances of the order of the Planck length.
See also
Anti-gravity
History of physics
References
Footnotes
Citations
Sources
(Reprinted from "The enigma of Domingo de Soto: Uniformiter difformis and falling bodies in late medieval physics". (1968). Isis, 59(4), 384–401).
(Reprinted from White, K. (Ed.). (1997). Hispanic philosophy in the age of discovery. Studies in Philosophy and the History of Philosophy 29. Catholic University of America Press).
Theories of gravity
History of physics
Day length fluctuations
The length of the day (LOD), which has increased over the long term of Earth's history due to tidal effects, is also subject to fluctuations on a shorter scale of time. Exact measurements of time by atomic clocks and satellite laser ranging have revealed that the LOD is subject to a number of different changes. These subtle variations have periods that range from a few weeks to a few years. They are attributed to interactions between the dynamic atmosphere and Earth itself. The International Earth Rotation and Reference Systems Service monitors the changes.
In the absence of external torques, the total angular momentum of Earth as a whole system must be constant. Internal torques are due to relative movements and mass redistribution of Earth's core, mantle, crust, oceans, atmosphere, and cryosphere. In order to keep the total angular momentum constant, a change of the angular momentum in one region must necessarily be balanced by angular momentum changes in the other regions.
Crustal movements (such as continental drift) or polar cap melting are slow secular (non-periodic) events. The characteristic coupling time between core and mantle has been estimated to be on the order of ten years, and the so-called 'decade fluctuations' of Earth's rotation rate are thought to result from fluctuations within the core, transferred to the mantle. The length of day (LOD) varies significantly even for time scales from a few years down to weeks, and the observed fluctuations in the LOD, after eliminating the effects of external torques, are a direct consequence of the action of internal torques. These short-term fluctuations are very probably generated by the interaction between the solid Earth and the atmosphere.
The length of day of other planets also varies, particularly that of the planet Venus, which has such a dynamic and strong atmosphere that its length of day fluctuates by up to 20 minutes.
Observations
Any change of the axial component of the atmospheric angular momentum (AAM) must be accompanied by a corresponding change of the angular momentum of Earth's crust and mantle (due to the law of conservation of angular momentum). Because the moment of inertia of the system mantle-crust is only slightly influenced by atmospheric pressure loading, this mainly requires a change in the angular velocity of the solid Earth; i.e., a change of LOD. The LOD can presently be measured to a high accuracy over integration times of only a few hours, and general circulation models of the atmosphere allow high precision determination of changes in AAM in the model. A comparison between AAM and LOD shows that they are highly correlated. In particular, one recognizes an annual period of LOD with an amplitude of 0.34 milliseconds, maximizing on February 3, and a semiannual period with an amplitude of 0.29 milliseconds, maximizing on May 8, as well as 10‑day fluctuations of the order of 0.1 milliseconds. Interseasonal fluctuations reflecting El Niño events and quasi-biennial oscillations have also been observed. There is now general agreement that most of the changes in LOD on time scales from weeks to a few years are excited by changes in AAM.
Exchange of angular momentum
One means of exchange of angular momentum between the atmosphere and the non-gaseous parts of the Earth is evaporation and precipitation. The water cycle moves massive quantities of water between the oceans and the atmosphere. As a mass of water (vapour) rises, its rate of rotation must slow due to conservation of angular momentum. Equally, when it falls as rain, its rate of rotation will increase to conserve angular momentum. Any net global transfer of water mass from the oceans to the atmosphere, or the opposite, implies a change in the speed of rotation of the solid/liquid Earth, which will be reflected in the LOD.
Observational evidence shows that there is no significant time delay between the change of AAM and its corresponding change of LOD for periods longer than about 10 days. This implies a strong coupling between atmosphere and solid Earth due to surface friction with a time constant of about 7 days, the spin-down time of the Ekman layer. This spin-down time is the characteristic time for the transfer of atmospheric axial angular momentum to Earth's surface and vice versa.
The zonal wind-component on the ground, which is most effective for the transfer of axial angular momentum between Earth and atmosphere, is the component describing rigid rotation of the atmosphere. The zonal wind of this component has the amplitude u at the equator relative to the ground, where u > 0 indicates superrotation and u < 0 indicates retrograde rotation with respect to the solid Earth. All other wind terms merely redistribute the AAM with latitude, an effect that cancels out when averaged over the globe.
Surface friction allows the atmosphere to 'pick up' angular momentum from Earth in the case of retrograde rotation, or to release it to Earth in the case of superrotation. Averaging over longer time scales, no exchange of AAM with the solid Earth takes place; Earth and atmosphere are decoupled. This implies that the ground-level zonal wind component responsible for rigid rotation must be zero on average. Indeed, the observed meridional structure of the climatic mean zonal wind on the ground shows westerly winds (from the west) in middle latitudes beyond about ±30° latitude and easterly winds (from the east) in low latitudes—the trade winds—as well as near the poles (prevailing winds).
The atmosphere picks up angular momentum from Earth at low and high latitudes and transfers the same amount to Earth at middle latitudes.
Any short-term fluctuation of the rigidly rotating zonal wind component is then accompanied by a corresponding change in LOD. In order to estimate the order of magnitude of that effect, one may consider the total atmosphere to rotate rigidly with velocity u (in m/s) without surface friction. This value is then related, through conservation of angular momentum, to the corresponding change of the length of day (in milliseconds); a rough numerical version of this relation is sketched below.
The annual component of the change of the length of day, about 0.34 ms, then corresponds to a superrotation of roughly 0.9 m/s, and the semiannual component of about 0.29 ms to roughly 0.8 m/s.
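A rough order-of-magnitude version of that estimate, assuming approximate textbook values for the moments of inertia of the atmosphere and of the mantle–crust system (the exact coefficients vary between treatments):

```python
# Estimate the equatorial superrotation u implied by a change in the length of day (LOD),
# by conserving angular momentum between a rigidly rotating atmosphere and the mantle+crust.
OMEGA = 7.292e-5        # Earth's rotation rate, rad/s
LOD = 86400.0           # length of day, s
R = 6.371e6             # Earth radius, m
I_MANTLE = 7.1e37       # moment of inertia of mantle + crust, kg m^2 (approximate value)
I_ATM = 1.4e32          # moment of inertia of the atmosphere, kg m^2 (approximate value)

def superrotation_from_lod(delta_lod_ms):
    """Equatorial wind speed u (m/s) of a rigidly rotating atmosphere that balances
    a change delta_lod_ms (milliseconds) in the length of day."""
    delta_omega = OMEGA * (delta_lod_ms * 1e-3) / LOD   # change of Earth's angular velocity
    return I_MANTLE * delta_omega * R / I_ATM           # u = I_mantle * dOmega * R / I_atm

print(round(superrotation_from_lod(0.34), 2))  # annual component: ~0.9 m/s
print(round(superrotation_from_lod(0.29), 2))  # semiannual component: ~0.8 m/s
```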
See also
Atmospheric super-rotation
References
Further reading
Day
Earth
Meteorological phenomena
Geodesy
Astrometry
Exploratory data analysis
In statistics, exploratory data analysis (EDA) is an approach to analyzing data sets in order to summarize their main characteristics, often using statistical graphics and other data visualization methods. A statistical model can be used or not, but primarily EDA is for seeing what the data can tell us beyond the formal modeling, and thereby contrasts with traditional hypothesis testing. Exploratory data analysis has been promoted by John Tukey since 1970 to encourage statisticians to explore the data, and possibly formulate hypotheses that could lead to new data collection and experiments. EDA is different from initial data analysis (IDA), which focuses more narrowly on checking assumptions required for model fitting and hypothesis testing, and on handling missing values and making transformations of variables as needed. EDA encompasses IDA.
Overview
Tukey defined data analysis in 1961 as: "Procedures for analyzing data, techniques for interpreting the results of such procedures, ways of planning the gathering of data to make its analysis easier, more precise or more accurate, and all the machinery and results of (mathematical) statistics which apply to analyzing data."
Exploratory data analysis is a technique for analyzing and investigating a data set and summarizing its main characteristics. A main advantage of EDA is that it provides visualization of the data during and after the analysis.
Tukey's championing of EDA encouraged the development of statistical computing packages, especially S at Bell Labs. The S programming language inspired the systems S-PLUS and R. This family of statistical-computing environments featured vastly improved dynamic visualization capabilities, which allowed statisticians to identify outliers, trends and patterns in data that merited further study.
Tukey's EDA was related to two other developments in statistical theory: robust statistics and nonparametric statistics, both of which tried to reduce the sensitivity of statistical inferences to errors in formulating statistical models. Tukey promoted the use of the five-number summary of numerical data—the two extremes (maximum and minimum), the median, and the quartiles—because the median and quartiles, being functions of the empirical distribution, are defined for all distributions, unlike the mean and standard deviation; moreover, the quartiles and median are more robust to skewed or heavy-tailed distributions than traditional summaries (the mean and standard deviation). The packages S, S-PLUS, and R included routines using resampling statistics, such as Quenouille and Tukey's jackknife and Efron's bootstrap, which are nonparametric and robust (for many problems).
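For example, a five-number summary can be computed directly from empirical quantiles; this is a generic sketch using NumPy's default quantile interpolation, which differs slightly from Tukey's original hinges, and the data values are arbitrary:

```python
import numpy as np

def five_number_summary(x):
    """Return (minimum, first quartile, median, third quartile, maximum)."""
    x = np.asarray(x, dtype=float)
    return (x.min(),
            np.percentile(x, 25),
            np.median(x),
            np.percentile(x, 75),
            x.max())

data = [2.1, 3.4, 3.9, 4.2, 4.8, 5.0, 5.3, 6.1, 7.7, 19.5]  # note the outlier at 19.5
print(five_number_summary(data))  # the median and quartiles barely move, unlike the mean and std
```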
Exploratory data analysis, robust statistics, nonparametric statistics, and the development of statistical programming languages facilitated statisticians' work on scientific and engineering problems. Such problems included the fabrication of semiconductors and the understanding of communications networks, which concerned Bell Labs. These statistical developments, all championed by Tukey, were designed to complement the analytic theory of testing statistical hypotheses, particularly the Laplacian tradition's emphasis on exponential families.
Development
John W. Tukey wrote the book Exploratory Data Analysis in 1977. Tukey held that too much emphasis in statistics was placed on statistical hypothesis testing (confirmatory data analysis); more emphasis needed to be placed on using data to suggest hypotheses to test. In particular, he held that confusing the two types of analyses and employing them on the same set of data can lead to systematic bias owing to the issues inherent in testing hypotheses suggested by the data.
The objectives of EDA are to:
Enable unexpected discoveries in the data
Suggest hypotheses about the causes of observed phenomena
Assess assumptions on which statistical inference will be based
Support the selection of appropriate statistical tools and techniques
Provide a basis for further data collection through surveys or experiments
Many EDA techniques have been adopted into data mining. They are also being taught to young students as a way to introduce them to statistical thinking.
Techniques and tools
There are a number of tools that are useful for EDA, but EDA is characterized more by the attitude taken than by particular techniques.
Typical graphical techniques used in EDA are:
Box plot
Histogram
Multi-vari chart
Run chart
Pareto chart
Scatter plot (2D/3D)
Stem-and-leaf plot
Parallel coordinates
Odds ratio
Targeted projection pursuit
Heat map
Bar chart
Horizon graph
Glyph-based visualization methods such as PhenoPlot and Chernoff faces
Projection methods such as grand tour, guided tour and manual tour
Interactive versions of these plots
Dimensionality reduction:
Multidimensional scaling
Principal component analysis (PCA)
Multilinear PCA
Nonlinear dimensionality reduction (NLDR)
Iconography of correlations
Typical quantitative techniques are:
Median polish
Trimean
Ordination
History
Many EDA ideas can be traced back to earlier authors, for example:
Francis Galton emphasized order statistics and quantiles.
Arthur Lyon Bowley used precursors of the stemplot and five-number summary (Bowley actually used a "seven-figure summary", including the extremes, deciles and quartiles, along with the median—see his Elementary Manual of Statistics (3rd edn., 1920), p. 62, where he defines "the maximum and minimum, median, quartiles and two deciles" as the "seven positions").
Andrew Ehrenberg articulated a philosophy of data reduction (see his book of the same name).
The Open University course Statistics in Society (MDST 242), took the above ideas and merged them with Gottfried Noether's work, which introduced statistical inference via coin-tossing and the median test.
Example
Findings from EDA are orthogonal to the primary analysis task. To illustrate, consider an example from Cook et al. where the analysis task is to find the variables which best predict the tip that a dining party will give to the waiter. The variables available in the data collected for this task are: the tip amount, total bill, payer gender, smoking/non-smoking section, time of day, day of the week, and size of the party. The primary analysis task is approached by fitting a regression model where the tip rate is the response variable. The fitted model is
(tip rate) = 0.18 - 0.01 × (party size)
which says that as the size of the dining party increases by one person (leading to a higher bill), the tip rate will decrease by 1%, on average.
However, exploring the data reveals other interesting features not described by this model.
What is learned from the plots is different from what is illustrated by the regression model, even though the experiment was not designed to investigate any of these other trends. The patterns found by exploring the data suggest hypotheses about tipping that may not have been anticipated in advance, and which could lead to interesting follow-up experiments where the hypotheses are formally stated and tested by collecting new data.
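The data described here closely resemble the 'tips' dataset distributed with the seaborn library, so a sketch of the primary analysis might look as follows (assuming seaborn and NumPy are installed; the fitted coefficients will only approximate the ones quoted above):

```python
import numpy as np
import seaborn as sns

tips = sns.load_dataset("tips")   # columns: total_bill, tip, sex, smoker, day, time, size
tip_rate = tips["tip"] / tips["total_bill"]

# Primary analysis task: simple least-squares fit of tip rate against party size
slope, intercept = np.polyfit(tips["size"], tip_rate, deg=1)
print(f"tip rate = {intercept:.2f} + {slope:.2f} x (party size)")

# Exploratory step: look at the raw relationship before trusting the model
sns.scatterplot(x="total_bill", y="tip", hue="smoker", data=tips)
```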
Software
JMP, an EDA package from SAS Institute.
KNIME, Konstanz Information Miner – Open-Source data exploration platform based on Eclipse.
Minitab, an EDA and general statistics package widely used in industrial and corporate settings.
Orange, an open-source data mining and machine learning software suite.
Python, an open-source programming language widely used in data mining and machine learning.
R, an open-source programming language for statistical computing and graphics. Together with Python one of the most popular languages for data science.
TinkerPlots an EDA software for upper elementary and middle school students.
Weka an open source data mining package that includes visualization and EDA tools such as targeted projection pursuit.
See also
Anscombe's quartet, on importance of exploration
Data dredging
Predictive analytics
Structured data analysis (statistics)
Configural frequency analysis
Descriptive statistics
References
Bibliography
Andrienko, N & Andrienko, G (2005) Exploratory Analysis of Spatial and Temporal Data. A Systematic Approach. Springer.
Cook, D. and Swayne, D.F. (with A. Buja, D. Temple Lang, H. Hofmann, H. Wickham, M. Lawrence) (2007-12-12). Interactive and Dynamic Graphics for Data Analysis: With R and GGobi. Springer. ISBN 9780387717616.
Hoaglin, D C; Mosteller, F & Tukey, John Wilder (Eds) (1985). Exploring Data Tables, Trends and Shapes. ISBN 978-0-471-09776-1.
Hoaglin, D C; Mosteller, F & Tukey, John Wilder (Eds) (1983). Understanding Robust and Exploratory Data Analysis. ISBN 978-0-471-09777-8.
Young, F. W. Valero-Mora, P. and Friendly M. (2006) Visual Statistics: Seeing your data with Dynamic Interactive Graphics. Wiley ISBN 978-0-471-68160-1 Jambu M. (1991) Exploratory and Multivariate Data Analysis. Academic Press ISBN 0123800900
S. H. C. DuToit, A. G. W. Steyn, R. H. Stumpf (1986) Graphical Exploratory Data Analysis. Springer ISBN 978-1-4612-9371-2
Leinhardt, G., Leinhardt, S., Exploratory Data Analysis: New Tools for the Analysis of Empirical Data, Review of Research in Education, Vol. 8, 1980 (1980), pp. 85–157.
Theus, M., Urbanek, S. (2008), Interactive Graphics for Data Analysis: Principles and Examples, CRC Press, Boca Raton, FL,
Young, F. W. Valero-Mora, P. and Friendly M. (2006) Visual Statistics: Seeing your data with Dynamic Interactive Graphics. Wiley
Jambu M. (1991) Exploratory and Multivariate Data Analysis. Academic Press
S. H. C. DuToit, A. G. W. Steyn, R. H. Stumpf (1986) Graphical Exploratory Data Analysis. Springer
External links
Carnegie Mellon University – free online course on Probability and Statistics, with a module on EDA
Exploratory data analysis chapter: engineering statistics handbook
Popular Mechanics
Popular Mechanics (often abbreviated as PM or PopMech) is a magazine of popular science and technology, featuring automotive, home, outdoor, electronics, science, do-it-yourself, and technology topics. Military topics, aviation and transportation of all types, space, tools and gadgets are commonly featured.
It was founded in 1902 by Henry Haven Windsor, who was the editor and—as owner of the Popular Mechanics Company—the publisher. For decades, the tagline of the monthly magazine was "Written so you can understand it." In 1958, PM was purchased by the Hearst Corporation, now Hearst Communications.
In 2013, the US edition changed from twelve to ten issues per year, and in 2014 the tagline was changed to "How your world works." The magazine added a podcast in recent years, including regular features Most Useful Podcast Ever and How Your World Works.
History
Popular Mechanics was founded in Chicago by Henry Haven Windsor, with the first issue dated January 11, 1902. His concept was that it would explain "the way the world works" in plain language, with photos and illustrations to aid comprehension. For decades, its tagline was "Written so you can understand it." The magazine was a weekly until September 1902, when it became a monthly. The Popular Mechanics Company was owned by the Windsor family and printed in Chicago until the Hearst Corporation purchased the magazine in 1958. In 1962, the editorial offices moved to New York City. In 2020, Popular Mechanics relocated to Easton, Pennsylvania, along with the additional brands in the Hearst Enthusiast Group (Bicycling and Runner's World). That location also includes Popular Mechanics' testing facility, called the Test Zone.
From the first issue, the magazine featured a large illustration of a technological subject, a look that evolved into the magazine's characteristic full-page, full-color illustration and a small 6.5" x 9.5" trim size beginning with the July 1911 issue. It maintained the small format until 1975, when it switched to the larger standard trim size. Popular Mechanics adopted full-color cover illustrations in 1915, and the look was widely imitated by later technology magazines.
Several international editions were introduced after World War II, starting with a French edition, followed by Spanish in 1947, and Swedish and Danish in 1949. In 2002, the print magazine was being published in English, Chinese, and Spanish and distributed worldwide. South African and Russian editions were introduced that same year.
The March 1962 issue of Popular Mechanics aided the June 1962 Alcatraz escape attempt, in which three men, Frank Morris and John and Clarence Anglin, used the magazine as a reference to build life vests and a raft out of rubber raincoats and contact cement.
Articles have been contributed by notable people including Guglielmo Marconi, Thomas Edison, Jules Verne, Barney Oldfield, Knute Rockne, Winston Churchill, Charles Kettering, Tom Wolfe and Buzz Aldrin, as well as some US presidents including Teddy Roosevelt and Ronald Reagan. Comedian and car expert Jay Leno had a regular column, Jay Leno's Garage, starting in March 1999.
Editors
*In general, dates are the inclusive issues for which an editor was responsible. For decades, the lead time to go from submission to print was three months, so some of the dates might not correspond exactly with employment dates. As the Popular Mechanics web site has become more dominant and the importance of print issues has declined, editorial changes have more immediate impact.
Awards
National Magazine Awards
1986 National Magazine Award in the Leisure Interest category for the Popular Mechanics Woodworking Guide, November 1986.
2008 National Magazine Award in the Personal Service category for its "Know Your Footprint: Energy, Water and Waste" series, as well as nominations for General Excellence and Personal Service (a second nomination).
2011 National Magazine Award nomination for "General Excellence" in the "Finance, Technology and Lifestyle magazines" category.
2016 National Magazine Award Finalist in "Personal Service" category for "How to Buy a Car" and "Magazine Section" category for "How Your World Works."
2017 National Magazine Award nomination in the "Magazine Section" category for "Know-How" and in "Feature Writing" for "Climb Aboard, Ye Who Seek the Truth."
Altogether, the magazine has received 10 National Magazine Award nominations, including 2012 nominations in the Magazine of the Year and General Excellence categories, and it was a 2015 finalist in both categories.
Other awards
2011 Stater Bros Route 66 Cruisin’ Hall of Fame inductee in "Entertainment/Media" category.
2016 Ad Age "Magazine of the Year."
2017 Webby Awards Honoree for "How to Fix Flying" in the category of "Best Individual Editorial Experience (websites and mobile sites.)"
2019 Defence Media Awards Finalist in "Best Training, Simulation and Readiness" category for "The Air Force Is Changing How Special Ops Fighters Are Trained"
2021 American Nuclear Society "Darlene Schmidt Science News Award" to contributor Caroline Delbert for her "passion and interest in all things nuclear and radiation."
2022 Aerospace Media Awards finalist in the category "Best Propulsion" for "The Space Shuttle Engines Will Rise Again" by Joe Pappalardo.
In popular culture
In 1999, the magazine was a puzzle on Wheel of Fortune. In April 2001, Popular Mechanics was the first magazine to go to space, traveling to the International Space Station aboard the Soyuz TM-32 spacecraft. In December 2002, an issue featured the cover story and image of "The Real Face of Jesus" using data from forensic anthropologists and computer programmers.
In March 2005, Popular Mechanics released an issue dedicated to debunking 9/11 conspiracy theories, which has been used frequently for discrediting 9/11 "trutherism." In 2006, the magazine published a book based on that article entitled "Debunking 9/11 Myths: Why Conspiracy Theories Can't Stand Up to the Facts," with a foreword by then senator John McCain.
An October 2015 issue of Popular Mechanics, featuring director Ridley Scott, included an interactive cover that unlocked special content about Scott's film The Martian. In June 2016, the magazine ran a cover story with then-Vice President of the United States Joe Biden called "Things My Father Taught Me" for its fatherhood issue. Apple Inc. CEO Tim Cook guest-edited the September/October 2022 issue of Popular Mechanics.
The magazine is mentioned in the 2013 film The Wolf of Wall Street.
Criticisms
In June 2020, following several high-profile takedowns of statues of controversial historical figures, Popular Mechanics faced criticism from primarily conservative commentators and news outlets for an article that provided detailed instructions on how to take down statues.
In early December 2020, Popular Mechanics published an article titled "Leaked Government Photo Shows 'Motionless, Cube-Shaped' UFO." In late December, paranormal claims investigator and fellow of the Committee for Skeptical Inquiry (CSI), Kenny Biddle, investigated the claim in Skeptical Inquirer, reporting that he and investigator and CSI fellow Mick West identified the supposed UFO as a mylar Batman balloon.
Further reading
A nearly complete archive of Popular Mechanics issues from 1905 through 2005 is available through Google Books.
Popular Mechanics' cover art is the subject of Tom Burns' 2015 Texas Tech PhD dissertation, titled Useful fictions: How Popular Mechanics builds technological literacy through magazine cover illustration.
Darren Orr wrote an analysis of the state of Popular Mechanics in 2014 as partial fulfillment of requirements for a master's degree in journalism from University of Missouri-Columbia.
References
External links
Overview on Google Books
Popular Mechanics South African edition
Works by or about Popular Mechanics at Google Books
Monthly magazines published in the United States
Science and technology magazines published in the United States
Hearst Communications publications
Magazines established in 1902
Magazines published in New York City
Popular science magazines
Ten times annually magazines
Speed
In kinematics, the speed (commonly referred to as v) of an object is the magnitude of the change of its position over time or the magnitude of the change of its position per unit of time; it is thus a non-negative scalar quantity. The average speed of an object in an interval of time is the distance travelled by the object divided by the duration of the interval; the instantaneous speed is the limit of the average speed as the duration of the time interval approaches zero. Speed is the magnitude of velocity (a vector), which indicates additionally the direction of motion.
Speed has the dimensions of distance divided by time. The SI unit of speed is the metre per second (m/s), but the most common unit of speed in everyday usage is the kilometre per hour (km/h) or, in the US and the UK, miles per hour (mph). For air and marine travel, the knot is commonly used.
The fastest possible speed at which energy or information can travel, according to special relativity, is the speed of light in vacuum, c = 299,792,458 metres per second (approximately 1,079,000,000 km/h or 671,000,000 mph). Matter cannot quite reach the speed of light, as this would require an infinite amount of energy. In relativity physics, the concept of rapidity replaces the classical idea of speed.
Definition
Historical definition
Italian physicist Galileo Galilei is usually credited with being the first to measure speed by considering the distance covered and the time it takes. Galileo defined speed as the distance covered per unit of time. In equation form, that is
$v = \frac{d}{t},$
where $v$ is speed, $d$ is distance, and $t$ is time. A cyclist who covers 30 metres in a time of 2 seconds, for example, has a speed of 15 metres per second. Objects in motion often have variations in speed (a car might travel along a street at 50 km/h, slow to 0 km/h, and then reach 30 km/h).
Instantaneous speed
Speed at some instant, or assumed constant during a very short period of time, is called instantaneous speed. By looking at a speedometer, one can read the instantaneous speed of a car at any instant. A car travelling at 50 km/h generally goes for less than one hour at a constant speed, but if it did go at that speed for a full hour, it would travel 50 km. If the vehicle continued at that speed for half an hour, it would cover half that distance (25 km). If it continued for only one minute, it would cover about 833 m.
In mathematical terms, the instantaneous speed $v$ is defined as the magnitude of the instantaneous velocity $\vec{v}$, that is, the derivative of the position $\vec{r}$ with respect to time: $v = |\vec{v}| = \left|\frac{d\vec{r}}{dt}\right|.$
If $s$ is the length of the path (also known as the distance) travelled until time $t$, the speed equals the time derivative of $s$: $v = \frac{ds}{dt}.$
In the special case where the velocity is constant (that is, constant speed in a straight line), this can be simplified to $v = \frac{s}{t}$. The average speed over a finite time interval is the total distance travelled divided by the time duration.
Average speed
Different from instantaneous speed, average speed is defined as the total distance covered divided by the time interval. For example, if a distance of 80 kilometres is driven in 1 hour, the average speed is 80 kilometres per hour. Likewise, if 320 kilometres are travelled in 4 hours, the average speed is also 80 kilometres per hour. When a distance in kilometres (km) is divided by a time in hours (h), the result is in kilometres per hour (km/h).
Average speed does not describe the speed variations that may have taken place during shorter time intervals (as it is the entire distance covered divided by the total time of travel), and so average speed is often quite different from a value of instantaneous speed. If the average speed and the time of travel are known, the distance travelled can be calculated by rearranging the definition to $d = \bar{v} t$.
Using this equation for an average speed of 80 kilometres per hour on a 4-hour trip, the distance covered is found to be 320 kilometres.
Expressed in graphical language, the slope of a tangent line at any point of a distance-time graph is the instantaneous speed at this point, while the slope of a chord line of the same graph is the average speed during the time interval covered by the chord. The average speed of an object is thus $\bar{v} = \frac{s}{t}$.
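A small numerical illustration of the two notions; the sampled positions below are hypothetical values for an accelerating object:

```python
# Average speed over the whole trip versus a finite-difference estimate of instantaneous speed.
positions = [0.0, 1.0, 4.0, 9.0, 16.0]   # metres travelled at each sample (hypothetical values)
times = [0.0, 1.0, 2.0, 3.0, 4.0]        # seconds

average_speed = (positions[-1] - positions[0]) / (times[-1] - times[0])
print(average_speed)                      # 4.0 m/s over the full interval

# Central-difference estimate of the instantaneous speed near t = 3 s
instantaneous_speed = (positions[4] - positions[2]) / (times[4] - times[2])
print(instantaneous_speed)                # 6.0 m/s: higher than the average, since the motion is speeding up
```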
Difference between speed and velocity
Speed denotes only how fast an object is moving, whereas velocity describes both how fast and in which direction the object is moving. If a car is said to travel at 60 km/h, its speed has been specified. However, if the car is said to move at 60 km/h to the north, its velocity has now been specified.
The big difference can be discerned when considering movement around a circle. When something moves in a circular path and returns to its starting point, its average velocity is zero, but its average speed is found by dividing the circumference of the circle by the time taken to move around the circle. This is because the average velocity is calculated by considering only the displacement between the starting and end points, whereas the average speed considers only the total distance travelled.
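A quick check of this statement for one full revolution around a circle (the radius and period below are arbitrary example values):

```python
import math

radius = 5.0    # metres (example value)
period = 10.0   # seconds for one full revolution (example value)

# Displacement after one full lap is zero, so the average velocity is zero.
average_velocity = 0.0 / period
# The path length is the circumference, so the average speed is nonzero.
average_speed = 2 * math.pi * radius / period

print(average_velocity)   # 0.0 m/s
print(average_speed)      # about 3.14 m/s
```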
Tangential speed
Units
Units of speed include:
metres per second (symbol m s−1 or m/s), the SI derived unit;
kilometres per hour (symbol km/h);
miles per hour (symbol mi/h or mph);
knots (nautical miles per hour, symbol kn or kt);
feet per second (symbol fps or ft/s);
Mach number (dimensionless), speed divided by the speed of sound;
in natural units (dimensionless), speed divided by the speed of light in vacuum (symbol c = 299792458 m/s).
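The short Python sketch below converts a speed in metres per second into several of the units listed above; the conversion factors are the standard ones (1 mph = 0.44704 m/s, 1 knot = 0.514444 m/s, 1 ft/s = 0.3048 m/s), while the speed of sound used for the Mach number is an assumed sea-level reference value, since Mach number depends on local conditions.

```python
def convert_speed(v_ms, speed_of_sound_ms=340.3):
    """Convert a speed in m/s into the other units listed above.

    The Mach number depends on the medium and temperature; 340.3 m/s is
    an assumed sea-level reference value for the speed of sound.
    """
    return {
        "m/s": v_ms,
        "km/h": v_ms * 3.6,
        "mph": v_ms / 0.44704,
        "knots": v_ms / 0.514444,
        "ft/s": v_ms / 0.3048,
        "Mach": v_ms / speed_of_sound_ms,
        "fraction of c": v_ms / 299792458.0,
    }

print(convert_speed(15.0))   # the 15 m/s cyclist from the opening example
```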
Examples of different speeds
Psychology
According to Jean Piaget, the intuition for the notion of speed in humans precedes that of duration, and is based on the notion of outdistancing. Piaget studied this subject inspired by a question asked to him in 1928 by Albert Einstein: "In what order do children acquire the concepts of time and speed?" Children's early concept of speed is based on "overtaking", taking only temporal and spatial orders into consideration, specifically: "A moving object is judged to be more rapid than another when at a given moment the first object is behind and a moment or so later ahead of the other object."
See also
Air speed
List of vehicle speed records
Typical projectile speeds
Speedometer
V speeds
References
Richard P. Feynman, Robert B. Leighton, Matthew Sands. The Feynman Lectures on Physics, Volume I, Section 8–2. Addison-Wesley, Reading, Massachusetts (1963).
Physical quantities
Temporal rates
Recoil
Recoil (often called knockback, kickback or simply kick) is the rearward thrust generated when a gun is discharged. In technical terms, the recoil is a result of conservation of momentum: according to Newton's third law, the force required to accelerate something will evoke an equal but opposite reaction force, which means the forward momentum gained by the projectile and exhaust gases (ejecta) is mathematically balanced out by an equal and opposite momentum exerted back upon the gun.
Basics
Any launching system (weapon or not) generates recoil. However, recoil only constitutes a problem in the field of artillery and firearms due to the magnitude of the forces at play. Gun chamber pressures and projectile acceleration forces are tremendous, on the order of tens to hundreds of megapascals and tens of thousands of times the acceleration of gravity (g's), both necessary to launch the projectile at useful velocity during the very short time (typically only a few milliseconds) it is travelling inside the barrel. Meanwhile, the same pressure acting on the base of the projectile acts on the rear face of the gun chamber, accelerating the gun rearward during firing with just the same force with which it is accelerating the projectile forward.
This moves the gun rearward and generates the recoil momentum. This recoil momentum is equal in magnitude to the combined momentum (mass multiplied by velocity) of the projectile and propellant gases, and opposite in direction: the projectile moves forward, the recoil is rearward. The heavier and the faster the projectile, the more recoil will be generated. The gun acquires a rearward velocity equal to the ratio of this momentum to the mass of the gun: the heavier the gun, the slower the rearward velocity.
As an example, an 8 g (124 gr) bullet of 9×19mm Parabellum flying forward at 350 m/s muzzle speed carries enough momentum to push the 0.8 kg pistol firing it rearward at 3.5 m/s, if unopposed by the shooter.
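A minimal sketch of this momentum balance in Python, using the figures from the example above and neglecting the propellant gases (their contribution is treated later in this article):

```python
# Conservation of momentum for the pistol example above,
# neglecting the contribution of the propellant gases.
bullet_mass_kg = 0.008      # 8 g (124 gr) 9x19mm Parabellum bullet
muzzle_velocity_ms = 350.0  # forward muzzle speed
gun_mass_kg = 0.8           # pistol mass

bullet_momentum = bullet_mass_kg * muzzle_velocity_ms   # forward momentum, kg*m/s
gun_velocity = bullet_momentum / gun_mass_kg            # rearward speed of the unrestrained gun

print(bullet_momentum)  # 2.8 kg*m/s
print(gun_velocity)     # 3.5 m/s rearward, as stated above
```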
Countering recoil
In order to bring the rearward-moving gun to a halt, the momentum acquired by the gun is dissipated by a forward-acting counter-recoil force applied to the gun over a period of time during and after the projectile exits the muzzle. In hand-held small arms, the shooter will apply this force using their own body, resulting in a noticeable impulse commonly referred to as a "kick". In heavier mounted guns, such as heavy machine guns or artillery pieces, recoil momentum is transferred through the platform on which the weapon is mounted. Gun mounts of practical weight are typically not strong enough to withstand the maximum forces accelerating the gun during the short time the projectile is in the barrel. To mitigate these large recoil forces, recoil buffering mechanisms spread out the counter-recoiling force over a longer time, typically ten to a hundred times longer than the duration of the forces accelerating the projectile. This results in the required counter-recoiling force being proportionally lower, and easily absorbed by the gun mount.
To apply this counter-recoiling force, modern mounted guns may employ recoil buffering comprising springs and hydraulic recoil mechanisms, similar to shock-absorbing suspension on automobiles. Early cannons used systems of ropes along with rolling or sliding friction to provide forces to slow the recoiling cannon to a stop. Recoil buffering allows the maximum counter-recoil force to be lowered so that strength limitations of the gun mount are not exceeded.
Contribution of propellant gasses
Modern cannons also employ muzzle brakes very effectively to redirect some of the propellant gasses rearward after projectile exit. This provides a counter-recoiling force to the barrel, allowing the buffering system and gun mount to be more efficiently designed at even lower weight.
Propellant gases are exploited even further in recoilless guns, where much of the high-pressure gas remaining in the barrel after projectile exit is vented rearward through a nozzle at the back of the chamber, creating a large counter-recoiling force sufficient to eliminate the need for heavy recoil-mitigating buffers on the mount (although at the cost of a reduced muzzle velocity of the projectile).
Hand-held guns
The same physics principles affecting recoil in mounted guns also applies to hand-held guns. However, the shooter's body assumes the role of gun mount, and must similarly dissipate the gun's recoiling momentum over a longer period of time than the bullet travel-time in the barrel, in order not to injure the shooter. Hands, arms and shoulders have considerable strength and elasticity for this purpose, up to certain practical limits. Nevertheless, "perceived" recoil limits vary from shooter to shooter, depending on body size, the use of recoil padding, individual pain tolerance, the weight of the firearm, and whether recoil buffering systems and muzzle devices (muzzle brake or suppressor) are employed. For this reason, establishing recoil safety standards for small arms remains challenging, in spite of the straightforward physics involved.
Physics: momentum, energy and impulse
There are two conservation laws at work when a gun is fired: conservation of momentum and conservation of energy. Recoil is explained by the law of conservation of momentum, and so it is easier to discuss it separately from energy.
Momentum is simply mass multiplied by velocity. Velocity is speed in a particular direction (not just speed): in technical terms, speed is a scalar, a magnitude only, while velocity is a vector, a magnitude together with a direction. Momentum is conserved: any change in the momentum of one object requires an equal and opposite change in the momentum of some other object. Hence the recoil: imparting momentum to the projectile requires imparting opposite momentum to the gun.
A change in the momentum of a mass requires applying a force (this is Newton's second law of motion). In a firearm the forces change rapidly, so what matters is the impulse: the change in momentum equals the impulse, that is, the force integrated over the time it acts. The rapid change of velocity (acceleration) of the gun is a shock, and is countered as if by a shock absorber.
Energy in firing a firearm comes in many forms (thermal, pressure) but for understanding recoil what matters is kinetic energy, which is half the mass multiplied by the square of the speed. For the recoiling gun, this means that for a given rearward momentum, doubling the mass halves the speed and also halves the kinetic energy of the gun, making it easier to dissipate.
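A two-line check of that statement in Python, holding the rearward momentum fixed at the value from the pistol example above and doubling the gun mass:

```python
# For a fixed recoil momentum p, kinetic energy is E = p**2 / (2*m),
# so doubling the gun mass halves both the recoil speed and the energy.
p = 2.8                              # kg*m/s, from the pistol example
for m in (0.8, 1.6):                 # original and doubled gun mass, kg
    v = p / m
    E = 0.5 * m * v ** 2             # equivalently p**2 / (2*m)
    print(m, v, E)                   # 0.8 kg: 3.5 m/s, 4.9 J; 1.6 kg: 1.75 m/s, 2.45 J
```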
Momentum
If all the masses and velocities involved are accounted for, the vector sum, magnitude and direction, of the momentum of all the bodies involved does not change; that is, momentum of the system is conserved. This conservation of momentum is why gun recoil occurs in the opposite direction of bullet projection: the mass times velocity of the projectile (gas included) in the positive direction equals the mass times velocity of the gun in the negative direction. In summation, the total momentum of the system (ammunition, gun and shooter/shooting platform) equals zero just as it did before the trigger was pulled.
From a practical engineering perspective, therefore, through the mathematical application of conservation of momentum, it is possible to calculate a first approximation of a gun's recoil momentum and kinetic energy simply based on estimates of the projectile speed (and mass) coming out the barrel. And then to properly design recoil buffering systems to safely dissipate that momentum and energy. To confirm analytical calculations and estimates, once a prototype gun is manufactured, the projectile and gun recoil energy and momentum can be directly measured using a ballistic pendulum and ballistic chronograph.
The nature of the recoil process is determined by the force of the expanding gases in the barrel upon the gun (recoil force), which is equal and opposite to the force upon the ejecta. It is also determined by the counter-recoil force applied to the gun (e.g. an operator's hand or shoulder, or a mount). The recoil force only acts during the time that the ejecta are still in the barrel of the gun. The counter-recoil force is generally applied over a longer time period and adds forward momentum to the gun equal to the backward momentum supplied by the recoil force, in order to bring the gun to a halt. There are two special cases of counter recoil force: Free-recoil, in which the time duration of the counter-recoil force is very much larger than the duration of the recoil force, and zero-recoil, in which the counter-recoil force matches the recoil force in magnitude and duration. Except for the case of zero-recoil, the counter-recoil force is smaller than the recoil force but lasts for a longer time. Since the recoil force and the counter-recoil force are not matched, the gun will move rearward, slowing down until it comes to rest. In the zero-recoil case, the two forces are matched and the gun will not move when fired. In most cases, a gun is very close to a free-recoil condition, since the recoil process generally lasts much longer than the time needed to move the ejecta down the barrel. An example of near zero-recoil would be a gun securely clamped to a massive or well-anchored table, or supported from behind by a massive wall. However, employing zero-recoil systems is often neither practical nor safe for the structure of the gun, as the recoil momentum must be absorbed directly through the very small distance of elastic deformation of the materials the gun and mount are made from, perhaps exceeding their strength limits. For example, placing the butt of a large caliber gun directly against a wall and pulling the trigger risks cracking both the gun stock and the surface of the wall.
The recoil of a firearm, whether large or small, is a result of the law of conservation of momentum. Assuming that the firearm and projectile are both at rest before firing, then their total momentum is zero. Assuming a near free-recoil condition, and neglecting the gases ejected from the barrel (an acceptable first estimate), then immediately after firing, conservation of momentum requires that the total momentum of the firearm and projectile is the same as before, namely zero. Stating this mathematically:
p_f + p_p = 0
where p_f is the momentum of the firearm and p_p is the momentum of the projectile. In other words, immediately after firing, the momentum of the firearm is equal and opposite to the momentum of the projectile.
Since momentum of a body is defined as its mass multiplied by its velocity, we can rewrite the above equation as:
m_f v_f = -m_p v_p
where:
m_f is the mass of the firearm
v_f is the velocity of the firearm immediately after firing
m_p is the mass of the projectile
v_p is the velocity of the projectile immediately after firing
A force integrated over the time period during which it acts will yield the momentum supplied by that force. The counter-recoil force must supply enough momentum to the firearm to bring it to a halt. This means that:
∫ F_cr(t) dt = m_f v_f   (the integral taken from t = 0 to t = t_cr)
where:
F_cr(t) is the counter-recoil force as a function of time
t_cr is the duration of the counter-recoil force
A similar equation can be written for the recoil force on the firearm:
∫ F_r(t) dt = m_f v_f   (the integral taken from t = 0 to t = t_r)
where:
F_r(t) is the recoil force as a function of time
t_r is the duration of the recoil force
Assuming the forces are somewhat evenly spread out over their respective durations, the condition for free-recoil is t_cr ≫ t_r, while for zero-recoil, t_cr = t_r.
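To illustrate the impulse balance numerically, the Python sketch below integrates two hypothetical half-sine force profiles carrying the same impulse: a brief, high recoil force and a much longer, lower counter-recoil force. The pulse shape and durations are assumptions made only for illustration; the point is that spreading the same momentum over a longer time lowers the peak force.

```python
import numpy as np

# Hypothetical half-sine force profiles, for illustration only: a brief,
# high recoil force and a long, low counter-recoil force carrying the
# same impulse (2.8 kg*m/s, the bullet momentum from the pistol example).
impulse = 2.8
t_recoil = 0.001     # s, roughly the bullet's time in the barrel
t_counter = 0.05     # s, buffered counter-recoil spread over ~50x longer

def half_sine_pulse(total_impulse, duration, n=10_000):
    """Time samples and force values of a half-sine pulse with the given impulse."""
    t = np.linspace(0.0, duration, n)
    peak = total_impulse * np.pi / (2.0 * duration)  # makes the integral equal total_impulse
    return t, peak * np.sin(np.pi * t / duration)

def integrate(t, f):
    """Trapezoidal integral of f over t."""
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(t)))

t1, f1 = half_sine_pulse(impulse, t_recoil)
t2, f2 = half_sine_pulse(impulse, t_counter)
print(integrate(t1, f1), f1.max())   # ~2.8 kg*m/s, peak force ~4400 N
print(integrate(t2, f2), f2.max())   # ~2.8 kg*m/s, peak force ~88 N
```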
Angular momentum
For a gun firing under free-recoil conditions, the force on the gun may not only force the gun backwards, but may also cause it to rotate about its center of mass or recoil mount. This is particularly true of older firearms, such as the classic Kentucky rifle, where the butt stock angles down significantly lower than the barrel, providing a pivot point about which the muzzle may rise during recoil. Modern firearms, such as the M16 rifle, employ stock designs that are in direct line with the barrel, in order to minimize any rotational effects. If there is an angle for the recoil parts to rotate about, the torque on the gun is given by:
τ = h F(t) = I d²θ/dt²
where h is the perpendicular distance of the center of mass of the gun below the barrel axis, F(t) is the force on the gun due to the expanding gases, equal and opposite to the force on the bullet, I is the moment of inertia of the gun about its center of mass, or its pivot point, and θ is the angle of rotation of the barrel axis "up" from its orientation at ignition (aim angle). The angular momentum of the gun is found by integrating this equation to obtain:
I dθ/dt = h m_b v_b(t)
where the equality of the momenta of the gun and bullet has been used, m_b being the mass and v_b(t) the instantaneous velocity of the bullet. The angular rotation of the gun as the bullet exits the barrel is then found by integrating again:
I θ_f = h m_b L
where θ_f is the angle above the aim angle at which the bullet leaves the barrel, t_f is the time of travel of the bullet in the barrel (because of the acceleration the time is longer than L/v_b; for uniform acceleration t_f = 2L/v_b, with v_b the muzzle velocity) and L is the distance the bullet travels from its rest position to the tip of the barrel. The angle at which the bullet leaves the barrel above the aim angle is then given by:
θ_f = h m_b L / I
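Assuming the relation reconstructed above (θ_f = h m_b L / I), the short Python sketch below evaluates the muzzle rise at bullet exit; every number is an assumed, illustrative value rather than data for any particular firearm.

```python
import math

# Muzzle rise at bullet exit, using theta_f = h * m_b * L / I as above.
# All parameters are assumed, illustrative values for a generic rifle.
h = 0.03        # m, drop of the gun's centre of mass below the barrel axis
m_b = 0.004     # kg, bullet mass (4 g)
L = 0.5         # m, distance the bullet travels inside the barrel
I = 0.35        # kg*m^2, moment of inertia of the gun about its centre of mass

theta_f = h * m_b * L / I          # radians of rotation while the bullet is in the barrel
print(math.degrees(theta_f))       # ~0.01 degrees; most perceived muzzle rise develops later
```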
Including the ejected gas
Before the projectile leaves the gun barrel, it obturates the bore and "plugs up" the expanding gas generated by the propellant combustion behind it. This means the gas is essentially contained within a closed system and acts as a neutral element in the overall momentum of the system's physics. However, when the projectile exits the barrel, this functional seal is removed and the highly energetic bore gas is suddenly free to exit the muzzle and expand in the form of a supersonic shockwave (which can often be fast enough to momentarily overtake the projectile and affect its flight dynamics), creating a phenomenon known as the muzzle blast. The forward vector of this blast creates a jet-propulsion effect that pushes back upon the barrel and creates an additional momentum on top of the backward momentum generated by the projectile before it exits the gun.
The overall recoil applied to the firearm is equal and opposite to the total forward momentum of not only the projectile, but also the ejected gas. Likewise, the recoil energy given to the firearm is affected by the ejected gas. By conservation of mass, the mass of the ejected gas will be equal to the original mass of the propellant (assuming complete burning). As a rough approximation, the ejected gas can be considered to have an effective exit velocity of α V_0, where V_0 is the muzzle velocity of the projectile and α is approximately constant. The total momentum p of the propellant and projectile will then be:
p = m_p V_0 + α m_g V_0
where m_g is the mass of the propellant charge, equal to the mass of the ejected gas, and m_p is, as before, the mass of the projectile.
This expression should be substituted into the expression for projectile momentum in order to obtain a more accurate description of the recoil process. The effective velocity may be used in the energy equation as well, but since the value of α used is generally specified for the momentum equation, the energy values obtained may be less accurate. The value of the constant α is generally taken to lie between 1.25 and 1.75. It is mostly dependent upon the type of propellant used, but may depend slightly on other things such as the ratio of the length of the barrel to its radius.
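A short Python sketch of this correction, reusing the pistol example from earlier in the article; the propellant charge mass (0.4 g) and the value α = 1.5 are assumed, illustrative figures, not measured data.

```python
# Recoil including the ejected propellant gas, p = m_p*V0 + alpha*m_g*V0.
# The charge mass and alpha below are assumed values for illustration.
m_p = 0.008        # projectile mass, kg (8 g bullet)
V0 = 350.0         # muzzle velocity, m/s
m_g = 0.0004       # propellant charge mass, kg (assumed 0.4 g)
alpha = 1.5        # effective gas velocity factor (typically 1.25 to 1.75)
m_f = 0.8          # firearm mass, kg

p_total = m_p * V0 + alpha * m_g * V0    # total forward momentum, kg*m/s
v_gun = p_total / m_f                    # free-recoil velocity of the gun, m/s
E_gun = 0.5 * m_f * v_gun ** 2           # free-recoil kinetic energy, J

print(p_total, v_gun, E_gun)   # ~3.01 kg*m/s, ~3.76 m/s, ~5.7 J
```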
Muzzle devices can reduce the recoil impulse by altering the pattern of gas expansion. For instance, muzzle brakes work primarily by diverting some of the gas ejecta towards the sides, increasing the lateral blast intensity (hence louder to the sides) but reducing the thrust from the forward projection (thus less recoil). Similarly, recoil compensators divert the gas ejecta mostly upwards to counteract the muzzle rise. Suppressors work on a different principle, not by vectoring the gas expansion laterally but by modulating the forward speed of the gas expansion. By using internal baffles, the gas is made to travel through a convoluted path before eventually being released outside at the front of the suppressor, dissipating its energy over a larger area and a longer time. This reduces both the intensity of the blast (thus lower loudness) and the recoil generated (since for the same impulse, the force is inversely proportional to the time over which it acts).
Perception of recoil
For small arms, the way in which the shooter perceives the recoil, or kick, can have a significant impact on the shooter's experience and performance. For example, a gun that is said to "kick like a mule" is going to be approached with trepidation, and the shooter may anticipate the recoil and flinch in anticipation as the shot is released. This leads to the shooter jerking the trigger, rather than pulling it smoothly, and the jerking motion is almost certain to disturb the alignment of the gun and may result in a miss. The shooter may also be physically injured by firing a weapon generating recoil in excess of what the body can safely absorb or restrain; perhaps getting hit in the eye by the rifle scope, hit in the forehead by a handgun as the elbow bends under the force, or soft tissue damage to the shoulder, wrist and hand; and these results vary for individuals. In addition, as pictured in the image, excessive recoil can create serious range safety concerns, if the shooter cannot adequately restrain the firearm in a down-range direction.
Perception of recoil is related to the deceleration the body provides against a recoiling gun, deceleration being a force that slows the velocity of the recoiling mass. Force applied over a distance is energy. The force that the body feels, therefore, is dissipating the kinetic energy of the recoiling gun mass. A heavier gun, that is a gun with more mass, will manifest lower recoil kinetic energy, and, generally, result in a lessened perception of recoil. Therefore, although determining the recoiling energy that must be dissipated through a counter-recoiling force is arrived at by conservation of momentum, kinetic energy of recoil is what is actually being restrained and dissipated. The ballistics analyst discovers this recoil kinetic energy through analysis of projectile momentum.
One of the common ways of describing the felt recoil of a particular gun-cartridge combination is as "soft" or "sharp" recoiling; soft recoil is recoil spread over a longer period of time, that is at a lower deceleration, and sharp recoil is spread over a shorter period of time, that is with a higher deceleration. Like pushing softer or harder on the brakes of a car, the driver feels less or more deceleration force being applied, over a longer or shorter distance to bring the car to a stop. However, for the human body to mechanically adjust recoil time, and hence length, to lessen felt recoil force is perhaps an impossible task. Other than employing less safe and less accurate practices, such as shooting from the hip, shoulder padding is a safe and effective mechanism that allows sharp recoiling to be lengthened into soft recoiling, as lower decelerating force is transmitted into the body over a slightly greater distance and time, and spread out over a slightly larger surface.
Keeping the above in mind, one can generally estimate the relative recoil of firearms by factoring in a small number of parameters: bullet momentum (weight times velocity; note that in this context momentum and impulse are numerically the same quantity) and the weight of the firearm. Lowering momentum lowers recoil, all else being the same. Increasing the firearm weight also lowers recoil, again all else being the same. The following are base examples calculated through the Handloads.com free online calculator, and bullet and firearm data from respective reloading manuals (of medium/common loads) and manufacturer specs:
In a Glock 22 frame, using the empty weight of , the following was obtained:
9 mm Luger: Recoil impulse of 0.78 lbf·s (3.5 N·s); Recoil velocity of ; Recoil energy of
.357 SIG: Recoil impulse of 1.06 lbf·s (4.7 N·s); Recoil velocity of ; Recoil energy of
.40 S&W: Recoil impulse of 0.88 lbf·s (3.9 N·s); Recoil velocity of ; Recoil energy of
In a Smith & Wesson .44 Magnum with 7.5-inch barrel, with an empty weight of , the following was obtained:
.44 Remington Magnum: Recoil impulse of 1.91 lbf·s (8.5 N·s); Recoil velocity of ; Recoil energy of
In a Smith & Wesson 460 7.5-inch barrel, with an empty weight of , the following was obtained:
.460 S&W Magnum: Recoil impulse of 3.14 lbf·s (14.0 N·s); Recoil velocity of ; Recoil energy of
In a Smith & Wesson 500 4.5-inch barrel, with an empty weight of , the following was obtained:
.500 S&W Magnum: Recoil impulse of 3.76 lbf·s (16.7 N·s); Recoil velocity of ; Recoil energy of
In addition to the overall mass of the gun, reciprocating parts of the gun will affect how the shooter perceives recoil. While these parts are not part of the ejecta, and do not alter the overall momentum of the system, they do involve moving masses during the operation of firing. For example, gas-operated shotguns are widely held to have a "softer" recoil than fixed breech or recoil-operated guns. (Although many semi-automatic recoil and gas-operated guns incorporate recoil buffer systems into the stock that effectively spread out peak felt recoil forces.) In a gas-operated gun, the bolt is accelerated rearwards by propellant gases during firing, which results in a forward force on the body of the gun. This is countered by a rearward force as the bolt reaches the limit of travel and moves forwards, resulting in a zero sum, but to the shooter, the recoil has been spread out over a longer period of time, resulting in the "softer" feel.
Mounted guns
A recoil system absorbs recoil energy, reducing the peak force that is conveyed to whatever the gun is mounted on. Old-fashioned cannons without a recoil system roll several meters backwards when fired; systems were used to somewhat limit this movement (ropes, friction including brakes on wheels, slopes so that the recoil would force the gun uphill, and so on), but utterly preventing any movement would just have resulted in the mount breaking. As a result, guns had to be put back into firing position and carefully aimed again after each shot, dramatically slowing the firing rate. The modern quick-firing gun was made possible by the invention of a much more efficient device: the hydro-pneumatic recoil system. First developed by Wladimir Baranovsky in 1872–5 and adopted by the Russian army, then later in France, in the 75mm field gun of 1897, it is still the main recoil-absorbing device used by large guns today.
In this system, the barrel is mounted on rails on which it can recoil to the rear, and the recoil is taken up by a cylinder which is similar in operation to an automotive gas-charged shock absorber, and is commonly visible as a cylinder shorter and smaller than the barrel mounted parallel to it. The cylinder contains a charge of compressed air that will act as a spring, as well as hydraulic oil; in operation, the barrel's energy is taken up in compressing the air as the barrel recoils backward, then is dissipated via hydraulic damping as the barrel is returned forward to the firing position under the pressure of the compressed air. The recoil impulse is thus spread out over the time in which the barrel is compressing the air, rather than over the much narrower interval of time when the projectile is being fired. This greatly reduces the peak force conveyed to the mount (or to the ground on which the gun has been placed).
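A toy numerical model of such a recoil cylinder helps make the idea concrete: the recoiling barrel is treated as a mass riding on a gas spring with hydraulic damping, and the peak reaction force on the mount is read off the simulation. Every parameter below is an assumed, illustrative value, not data for any real gun.

```python
# Toy model of a hydro-pneumatic recoil cylinder: the recoiling barrel is a
# mass on a gas "spring" with hydraulic (viscous) damping.
# All parameters are assumed, illustrative values.
m = 400.0      # kg, mass of the recoiling barrel assembly
k = 2.0e4      # N/m, effective stiffness of the compressed-air spring
c = 6.0e3      # N*s/m, hydraulic damping coefficient
v0 = 10.0      # m/s, barrel velocity just after the projectile leaves

x, v, dt = 0.0, v0, 1e-4
peak_force = 0.0
for _ in range(int(2.0 / dt)):            # simulate two seconds of motion
    force = -k * x - c * v                # spring plus damper reaction on the barrel
    peak_force = max(peak_force, abs(force))
    v += (force / m) * dt                 # simple explicit Euler integration
    x += v * dt

# Peak force passed to the mount (tens of kN here), far below the
# in-bore force that accelerates the projectile.
print(peak_force)
```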
Soft-recoil
In a soft-recoil system, the spring (or air cylinder) that returns the barrel to the forward position starts out in a nearly fully compressed state, then the gun's barrel is released free to fly forward in the moment before firing; the charge is then ignited just as the barrel reaches the fully forward position. Since the barrel is still moving forward when the charge is ignited, about half of the recoil impulse is applied to stopping the forward motion of the barrel, while the other half is, as in the usual system, taken up in recompressing the spring. A latch then catches the barrel and holds it in the starting position. This roughly halves the energy that the spring needs to absorb, and also roughly halves the peak force conveyed to the mount, as compared to the usual system. However, the need to reliably achieve ignition at a single precise instant is a major practical difficulty with this system; and unlike the usual hydro-pneumatic system, soft-recoil systems do not easily deal with hangfires or misfires. One of the early guns to use this system was the French 65 mm mle.1906; it was also used by the World War II British PIAT man-portable anti-tank weapon.
Other devices
Recoilless rifles and rocket launchers exhaust gas to the rear, balancing the recoil. They are used often as light anti-tank weapons. The Swedish-made Carl Gustav 84mm recoilless gun is such a weapon.
In machine guns following Hiram Maxim's design – e.g. the Vickers machine gun – the recoil of the barrel is used to drive the feed mechanism.
See also
Muzzle rise, a torque generated by recoil that tends to cause the muzzle to lift up and back
Power factor, a ranking system used in practical shooting competitions to reward cartridges with more recoil.
Recoil operation, the use of recoil force to cycle a weapon's action
Ricochet, a projectile that rebounds, bounces or skips off a surface, potentially backwards toward the shooter
Recoil buffer
Muzzle brake
Recoil pad
Notes
References
External links
Recoil Tutorial
Recoil Calculator and summary of equations at JBM.
Firearm terminology
SHELL model
In aviation, the SHELL model (also known as the SHEL model) is a conceptual model of human factors that helps to clarify the location and cause of human error within an aviation environment.
It is named after the initial letters of its components (Software, Hardware, Environment, Liveware) and places emphasis on the human being and human interfaces with other components of the aviation system.
The SHELL model adopts a systems perspective that suggests the human is rarely, if ever, the sole cause of an accident. The systems perspective considers a variety of contextual and task-related factors that interact with the human operator within the aviation system to affect operator performance. As a result, the SHELL model considers both active and latent failures in the aviation system.
History
The model was first developed as the SHEL model by Elwyn Edwards in 1972 and later modified into a 'building block' structure by Frank Hawkins in 1975.
Description
Each component of the SHELL model (software, hardware, environment, liveware) represents a building block of human factors studies within aviation.
The human element or worker of interest (liveware) is at the centre or hub of the SHELL model that represents the modern air transportation system. The human element is the most critical and flexible component in the system, interacting directly with other system components, namely software, hardware, environment and liveware.
However, the edges of the central human component block are varied, to represent human limitations and variations in performance. Therefore, the other system component blocks must be carefully adapted and matched to this central component to accommodate human limitations and avoid stress and breakdowns (incidents/accidents) in the aviation system. To accomplish this matching, the characteristics or general capabilities and limitations of this central human component must be understood.
Human characteristics
Physical size and shape
In the design of aviation workplaces and equipment, body measurements and movement are a vital factor. Differences occur according to ethnicity, age and gender for example. Design decisions must take into account the human dimensions and population percentage that the design is intended to satisfy.
Human size and shape are relevant in the design and location of aircraft cabin equipment, emergency equipment, seats and furnishings as well as access and space requirements for cargo compartments.
Fuel requirements
Humans require food, water and oxygen to function effectively and deficiencies can affect performance and well-being.
Information processing
Humans have limitations in information processing capabilities (such as working memory capacity, time and retrieval considerations) that can also be influenced by other factors such as motivation and stress or high workload. Aircraft display, instrument and alerting/warning system design needs to take into account the capabilities and limitations of human information processing to prevent human error.
Input characteristics
The human senses for collecting vital task and environment-related information are subject to limitations and degradation. Human senses cannot detect the whole range of sensory information available. For example, the human eye cannot see an object at night due to low light levels. This produces implications for pilot performance during night flying. In addition to sight, other senses include sound, smell, taste and touch (movement and temperature).
Output characteristics
After sensing and processing information, the output involves decisions, muscular action and communication. Design considerations include aircraft control-display movement relationship, acceptable direction of movement of controls, control resistance and coding, acceptable human forces required to operate aircraft doors, hatches and cargo equipment and speech characteristics in the design of voice communication procedures.
Environmental tolerances
People function effectively only within a narrow range of environmental conditions (tolerable for optimum human performance) and therefore their performance and well-being is affected by physical environmental factors such as temperature, vibration, noise, g-forces and time of day as well as time zone transitions, boring/stressful working environments, heights and enclosed spaces.
Components
Software
Non-physical, intangible aspects of the aviation system that govern how the aviation system operates and how information within the system is organised.
Software may be likened to the software that controls the operations of computer hardware.
Software includes rules, instructions, aviation law and regulations, policies, norms, orders, safety procedures, standard operating procedures, customs, practices, conventions, habits, symbology, supervisor commands and computer programmes.
Software can be included in a collection of documents such as the contents of charts, maps, publications, emergency operating manuals and procedural checklists.
Hardware
Physical elements of the aviation system such as aircraft (including controls, surfaces, displays, functional systems and seating), operator equipment, tools, materials, buildings, vehicles, computers, conveyor belts etc.
Environment
The context in which aircraft and aviation system resources (software, hardware, liveware) operate, made up of physical, organisational, economic, regulatory, political and social variables that may impact on the worker/operator.
Internal air transport environment relates to immediate work area and includes physical factors such as cabin/cockpit temperature, air pressure, humidity, noise, vibration and ambient light levels.
External air transport environment includes the physical environment outside the immediate work area such as weather (visibility/Turbulence), terrain, congested airspace and physical facilities and infrastructure including airports as well as broad organisational, economic, regulatory, political and social factors.
Liveware
Human element or people in the aviation system. For example, flight crew personnel who operate aircraft, cabin crew, ground crew, management and administration personnel.
The liveware component considers human performance, capabilities and limitations.
The four components of the SHELL model or aviation system do not act in isolation but instead interact with the central human component to provide areas for human factors analysis and consideration. The SHELL model indicates relationships between people and other system components and therefore provides a framework for optimising the relationship between people and their activities within the aviation system that is of primary concern to human factors. In fact, the International Civil Aviation Organisation has described human factors as a concept of people in their living and working situations; their interactions with machines (hardware), procedures (software) and the environment about them; and also their relationships with other people.
According to the SHELL model, a mismatch at the interface of the blocks/components where energy and information is interchanged can be a source of human error or system vulnerability that can lead to system failure in the form of an incident/accident. Aviation disasters tend to be characterised by mismatches at interfaces between system components, rather than catastrophic failures of individual components.
Interfaces
Liveware-Software (L-S)
Interaction between human operator and non-physical supporting systems in the workplace.
Involves designing software to match the general characteristics of human users and ensuring that the software (e.g. rules/procedures) is capable of being implemented with ease.
During training, flight crew members incorporate much of the software (e.g. procedural information) associated with flying and emergency situations into their memory in the form of knowledge and skills. However, more information is obtained by referring to manuals, checklists, maps and charts. In a physical sense these documents are regarded as hardware however in the information design of these documents adequate attention has to be paid to numerous aspects of the L-S interface.
For instance, by referring to cognitive ergonomics principles, the designer must consider currency and accuracy of information; user-friendliness of format and vocabulary; clarity of information; subdivision and indexing to facilitate user retrieval of information; presentation of numerical data; use of abbreviations, symbolic codes and other language devices; presentation of instructions using diagrams and/or sentences etc. The solutions adopted after consideration of these informational design factors play a crucial role in effective human performance at the L-S interface.
Mismatches at the L-S interface may occur through:
Insufficient/inappropriate procedures
Misinterpretation of confusing or ambiguous symbology/checklists
Confusing, misleading or cluttered documents, maps or charts
Irrational indexing of an operations manual.
A number of pilots have reported confusion in trying to maintain aircraft attitude through reference to the Head-Up-Display artificial horizon and 'pitch-ladder' symbology.
Liveware-Hardware (L-H)
Interaction between human operator and machine
Involves matching the physical features of the aircraft, cockpit or equipment with the general characteristics of human users while considering the task or job to be performed. Examples:
designing passenger and crew seats to fit the sitting characteristics of the human body
designing cockpit displays and controls to match the sensory, information processing and movement characteristics of human users while facilitating action sequencing, minimising workload (through location/layout) and including safeguards for incorrect/inadvertent operation.
Mismatches at the L-H interface may occur through:
poorly designed equipment
inappropriate or missing operational material
badly located or coded instruments and control devices
warning systems that fail in alerting, informational or guidance functions in abnormal situations etc.
The old 3-pointer aircraft altimeter encouraged errors because it was very difficult for pilots to tell what information related to which pointer.
Liveware-Environment (L-E)
Interaction between human operator and internal and external environments.
Involves adapting the environment to match human requirements. Examples:
Engineering systems to protect crews and passengers from discomfort, damage, stress and distraction caused by the physical environment.
Air conditioning systems to control aircraft cabin temperature
Sound-proofing to reduce noise
Pressurisation systems to control cabin air pressure
Protective systems to combat ozone concentrations
Using black-out curtains to obtain sleep during daylight hours as a result of transmeridian travel and shift work
Expanding infrastructure, passenger terminals and airport facilities to accommodate more people due to larger jets (e.g. Airbus A380) and the growth in air transport
Examples of mismatches at the L-E interface include:
Reduced performance and errors resulting from disturbed biological rhythms (jet lag) as a result of long-range flying and irregular work-sleep patterns
Pilot perceptual errors induced by environmental conditions such as visual illusions during aircraft approach/landing at nighttime
Flawed operator performance and errors as a result of management failure to properly address issues at the L-E interface including:
Operator stress due to changes in air transport demand and capacity during times of economic boom and economic recession.
Biased crew decision making and operator short-cuts as a consequence of economic pressure brought on by airline competition and cost-cutting measures linked with deregulation.
Inadequate or unhealthy organisational environment reflecting a flawed operating philosophy, poor employee morale or negative organisational culture.
Liveware-Liveware (L-L)
Interaction between central human operator and any other person in the aviation system during performance of tasks.
Involves interrelationships among individuals within and between groups including maintenance personnel, engineers, designers, ground crew, flight crew, cabin crew, operations personnel, air traffic controllers, passengers, instructors, students, managers and supervisors.
Human-human/group interactions can positively or negatively influence behaviour and performance including the development and implementation of behavioural norms. Therefore, the L-L interface is largely concerned with:
interpersonal relations
leadership
crew cooperation, coordination and communication
dynamics of social interactions
teamwork
cultural interactions
personality and attitude interactions.
The importance of the L-L interface and the issues involved have contributed to the development of cockpit/crew resource management (CRM) programmes in an attempt to reduce error at the interface between aviation professionals.
Examples of mismatches at the L-L interface include:
Communication errors due to misleading, ambiguous, inappropriate or poorly constructed communication between individuals. Communication errors have resulted in aviation accidents such as the double Boeing 747 disaster at Tenerife Airport in 1977.
Reduced performance and error from an imbalanced authority relationship between aircraft captain and first officer. For instance, an autocratic captain and an overly submissive first officer may cause the first officer to fail to speak up when something is wrong, or alternatively the captain may fail to listen.
The SHELL Model does not consider interfaces that are outside the scope of human factors. For instance, the hardware-hardware, hardware-environment and hardware-software interfaces are not considered as these interfaces do not involve the liveware component.
Aviation System Stability
Any change within the aviation SHELL system can have far-reaching repercussions. For example, a minor equipment change (hardware) requires an assessment of the impact of the change on operations and maintenance personnel (Liveware-Hardware) and the possibility of the need for alterations to procedures/training programmes (to optimise Liveware-Software interactions). Unless all potential effects of a change in the aviation system are properly addressed, it is possible that even a small system modification may produce undesirable consequences. Similarly, the aviation system must be continually reviewed to adjust for changes at the Liveware-Environment interface.
Uses
**Safety analysis tool**: The SHELL model can be used as a framework for collecting data about human performance and contributory component mismatches during aviation incident/accident analysis or investigation, as recommended by the International Civil Aviation Organisation. Similarly, the SHELL model can be used to understand systemic human factors relationships during operational audits with the aim of reducing error, enhancing safety and improving processes. For example, LOSA (Line Operations Safety Audit) is founded on Threat and error management (TEM), which considers SHELL interfaces. For instance, aircraft handling errors involve liveware-hardware interactions, procedural errors involve liveware-software interactions and communication errors involve liveware-liveware interactions.
**Licensing tool**: The SHELL model can be used to help clarify human performance needs, capabilities and limitations thereby enabling competencies to be defined from a safety management perspective.
**Training tool**: The SHELL model can be used to help an aviation organisation improve training interventions and the effectiveness of organisation safeguards against error.
References
External links
AviationKnowledge - Shell Model Interface Errors This AviationKnowledge page provides examples of aviation accidents where errors or mismatches at SHELL interfaces have either contributed to or caused accidents
AviationKnowledge - Shell Model Variants, You can also consult on two variants to the SHELL model:
SCHELL
SHELL-T.
AviationKnowledge - ICAO: Fundamental Human Factors Concepts This AviationKnowledge page is a synopsis of ICAO's digest number 1 and provides a good background context for the SHELL Model. ICAO's digest number 1 is accessed as CAP 719: Fundamental Human Factors Concepts that includes further information and examples of SHELL Model components and interfaces within the aviation context
AviationKnowledge - ICAO: Ergonomics This AviationKnowledge page is a synopsis of ICAO's digest number 6 and outlines information on ergonomics (the study of human-machine system design issues), human capabilities, hardware and flight deck design and the environment
AviationKnowledge - ICAO: Human Factors in Air Traffic Control: This AviationKnowledge page is a synopsis of ICAO's digest number 8 and discusses aspects of the SHELL Model with respect to ATC
CAP 718: Human Factors in Aircraft Maintenance and Inspection: This Civil Aviation Authority Publication discusses aspects of the SHELL Model with respect to aircraft maintenance and inspection
Aviation safety
Longitudinal wave
Longitudinal waves are waves in which the vibration of the medium is parallel to the direction the wave travels and displacement of the medium is in the same (or opposite) direction of the wave propagation. Mechanical longitudinal waves are also called compressional or compression waves, because they produce compression and rarefaction when travelling through a medium, and pressure waves, because they produce increases and decreases in pressure. A wave along the length of a stretched Slinky toy, where the distance between coils increases and decreases, is a good visualization. Real-world examples include sound waves (vibrations in pressure, particle displacement, and particle velocity propagated in an elastic medium) and seismic P-waves (created by earthquakes and explosions).
The other main type of wave is the transverse wave, in which the displacements of the medium are at right angles to the direction of propagation. Transverse waves, for instance, describe some bulk sound waves in solid materials (but not in fluids); these are also called "shear waves" to differentiate them from the (longitudinal) pressure waves that these materials also support.
Nomenclature
"Longitudinal waves" and "transverse waves" have been abbreviated by some authors as "L-waves" and "T-waves", respectively, for their own convenience.
While these two abbreviations have specific meanings in seismology (L-wave for Love wave or long wave) and electrocardiography (see T wave), some authors chose to use "ℓ-waves" (lowercase 'L') and "t-waves" instead, although they are not commonly found in physics writings except for some popular science books.
Sound waves
For longitudinal harmonic sound waves, the frequency and wavelength can be described by the formula
y(x, t) = y_0 cos(ω(t - x/c))
where:
y is the displacement of the point on the traveling sound wave;
x is the distance from the point to the wave's source;
t is the time elapsed;
y_0 is the amplitude of the oscillations;
c is the speed of the wave; and
ω is the angular frequency of the wave.
The quantity x/c is the time that the wave takes to travel the distance x.
The ordinary frequency f of the wave is given by
f = ω / (2π)
The wavelength can be calculated as the relation between a wave's speed and ordinary frequency:
λ = c / f
For sound waves, the amplitude of the wave is the difference between the pressure of the undisturbed air and the maximum pressure caused by the wave.
Sound's propagation speed depends on the type, temperature, and composition of the medium through which it propagates.
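As a quick numerical illustration of the harmonic relations above, the Python sketch below evaluates the displacement formula and derives the ordinary frequency and wavelength; the amplitude, angular frequency, and wave speed are assumed example values (roughly a 440 Hz tone in room-temperature air).

```python
import math

# Harmonic longitudinal wave y(x, t) = y0 * cos(omega * (t - x/c)).
# Amplitude, angular frequency, and wave speed are assumed example values.
y0 = 1e-6                   # amplitude of the oscillation, m
omega = 2 * math.pi * 440   # angular frequency, rad/s (a 440 Hz tone)
c = 343.0                   # wave speed in air at about 20 C, m/s

def displacement(x, t):
    """Displacement of the medium at distance x (m) and time t (s)."""
    return y0 * math.cos(omega * (t - x / c))

f = omega / (2 * math.pi)   # ordinary frequency, Hz  -> 440.0
lam = c / f                 # wavelength, m            -> about 0.78 m
print(f, lam, displacement(x=1.0, t=0.01))
```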
Speed of Longitudinal Waves
Isotropic medium
For isotropic solids and liquids, the speed of a longitudinal wave can be described by
v_l = √(E_l / ρ)
where
E_l is the elastic modulus, such that
E_l = K + (4/3)G
where G is the shear modulus and K is the bulk modulus;
ρ is the mass density of the medium.
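A minimal sketch of this relation in Python; the bulk modulus, shear modulus, and density below are rough, assumed values for a steel-like solid, used only to illustrate the formula.

```python
import math

# Longitudinal wave speed in an isotropic solid: v_l = sqrt((K + 4G/3) / rho).
# The material constants below are rough, assumed values for a steel-like solid.
K = 160e9      # bulk modulus, Pa
G = 79e9       # shear modulus, Pa
rho = 7850.0   # density, kg/m^3

E_l = K + 4.0 * G / 3.0          # elastic (P-wave) modulus
v_l = math.sqrt(E_l / rho)       # longitudinal wave speed
print(v_l)                       # ~5.8 km/s, the right order of magnitude for steel
```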
Attenuation of longitudinal waves
The attenuation of a wave in a medium describes the loss of energy a wave carries as it propagates throughout the medium. This is caused by the scattering of the wave at interfaces, the loss of energy due to the friction between molecules, or geometric divergence. The study of attenuation of elastic waves in materials has increased in recent years, particularly within the study of polycrystalline materials where researchers aim to "nondestructively evaluate the degree of damage of engineering components" and to "develop improved procedures for characterizing microstructures" according to a research team led by R. Bruce Thompson in a Wave Motion publication.
Attenuation in viscoelastic materials
In viscoelastic materials, the attenuation coefficients per unit length, α_L for longitudinal waves and α_T for transverse waves, must satisfy the following ratio:
α_L / α_T ≥ (4/3) (c_T / c_L)³
where c_T and c_L are the transverse and longitudinal wave speeds respectively.
Attenuation in polycrystalline materials
Polycrystalline materials are made up of various crystal grains which form the bulk material. Due to the difference in crystal structure and properties of these grains, when a wave propagating through a poly-crystal crosses a grain boundary, a scattering event occurs, causing scattering-based attenuation of the wave. Additionally, it has been shown that the ratio rule for viscoelastic materials given above applies equally successfully to polycrystalline materials.
A current prediction for modeling attenuation of waves in polycrystalline materials with elongated grains is the second-order approximation (SOA) model, which accounts for the second order of inhomogeneity, allowing for the consideration of multiple scattering in the crystal system. This model predicts that the shape of the grains in a poly-crystal has little effect on attenuation.
Pressure waves
The equations for sound in a fluid given above also apply to acoustic waves in an elastic solid. Although solids also support transverse waves (known as S-waves in seismology), longitudinal sound waves in the solid exist with a velocity and wave impedance dependent on the material's density and its rigidity, the latter of which is described (as with sound in a gas) by the material's bulk modulus.
In May 2022, NASA reported the sonification (converting astronomical data associated with pressure waves into sound) of the black hole at the center of the Perseus galaxy cluster.
Electromagnetics
Maxwell's equations lead to the prediction of electromagnetic waves in a vacuum, which are strictly transverse waves: the electric and magnetic fields of which the wave consists are perpendicular to the direction of the wave's propagation, and no medium of vibrating particles is required. However, plasma waves are longitudinal: these are not electromagnetic waves but density waves of charged particles, which can couple to the electromagnetic field.
After Heaviside's attempts to generalize Maxwell's equations, Heaviside concluded that electromagnetic waves were not to be found as longitudinal waves in "free space" or homogeneous media. Maxwell's equations, as we now understand them, retain that conclusion: in free-space or other uniform isotropic dielectrics, electro-magnetic waves are strictly transverse. However electromagnetic waves can display a longitudinal component in the electric and/or magnetic fields when traversing birefringent materials, or inhomogeneous materials especially at interfaces (surface waves for instance) such as Zenneck waves.
In the development of modern physics, Alexandru Proca (1897–1955) was known for developing relativistic quantum field equations bearing his name (Proca's equations) which apply to the massive vector spin-1 mesons. In recent decades some other theorists, such as Jean-Pierre Vigier and Bo Lehnert of the Swedish Royal Society, have used the Proca equation in an attempt to demonstrate photon mass as a longitudinal electromagnetic component of Maxwell's equations, suggesting that longitudinal electromagnetic waves could exist in a Dirac polarized vacuum. However photon rest mass is strongly doubted by almost all physicists and is incompatible with the Standard Model of physics.
See also
Transverse wave
Sound
Acoustic wave
P-wave
Plasma waves
References
Further reading
Varadan, V. K., and Vasundara V. Varadan, "Elastic wave scattering and propagation". Attenuation due to scattering of ultrasonic compressional waves in granular media – A.J. Devaney, H. Levine, and T. Plona. Ann Arbor, Mich., Ann Arbor Science, 1982.
Schaaf, John van der, Jaap C. Schouten, and Cor M. van den Bleek, "Experimental Observation of Pressure Waves in Gas-Solids Fluidized Beds". American Institute of Chemical Engineers. New York, N.Y., 1997.
Russell, Dan, "Longitudinal and Transverse Wave Motion". Acoustics Animations, Pennsylvania State University, Graduate Program in Acoustics.
Longitudinal Waves, with animations "The Physics Classroom"
Wave mechanics
Articles containing video clips
Thermodynamic state
In thermodynamics, a thermodynamic state of a system is its condition at a specific time; that is, fully identified by values of a suitable set of parameters known as state variables, state parameters or thermodynamic variables. Once such a set of values of thermodynamic variables has been specified for a system, the values of all thermodynamic properties of the system are uniquely determined. Usually, by default, a thermodynamic state is taken to be one of thermodynamic equilibrium. This means that the state is not merely the condition of the system at a specific time, but that the condition is the same, unchanging, over an indefinitely long duration of time.
Properties that Define a Thermodynamic State
Temperature (T) represents the average kinetic energy of the particles in a system. It's a measure of how hot or cold a system is.
Pressure (P) is the force exerted by the particles of a system on a unit area of the container walls.
Volume (V) refers to the space occupied by the system.
Composition defines the amount of each component present for systems with more than one component (e.g., mixtures).
Thermodynamic Path
When a system undergoes a change from one state to another, it is said to traverse a path. The path can be described by how the properties change, like isothermal (constant temperature) or isobaric (constant pressure) paths.
Thermodynamics sets up an idealized conceptual structure that can be summarized by a formal scheme of definitions and postulates. Thermodynamic states are amongst the fundamental or primitive objects or notions of the scheme, for which their existence is primary and definitive, rather than being derived or constructed from other concepts.
A thermodynamic system is not simply a physical system. Rather, in general, infinitely many different alternative physical systems comprise a given thermodynamic system, because in general a physical system has vastly many more microscopic characteristics than are mentioned in a thermodynamic description. A thermodynamic system is a macroscopic object, the microscopic details of which are not explicitly considered in its thermodynamic description. The number of state variables required to specify the thermodynamic state depends on the system, and is not always known in advance of experiment; it is usually found from experimental evidence. The number is always two or more; usually it is not more than some dozen. Though the number of state variables is fixed by experiment, there remains choice of which of them to use for a particular convenient description; a given thermodynamic system may be alternatively identified by several different choices of the set of state variables. The choice is usually made on the basis of the walls and surroundings that are relevant for the thermodynamic processes that are to be considered for the system. For example, if it is intended to consider heat transfer for the system, then a wall of the system should be permeable to heat, and that wall should connect the system to a body, in the surroundings, that has a definite time-invariant temperature.
For equilibrium thermodynamics, in a thermodynamic state of a system, its contents are in internal thermodynamic equilibrium, with zero flows of all quantities, both internal and between system and surroundings. For Planck, the primary characteristic of a thermodynamic state of a system that consists of a single phase, in the absence of an externally imposed force field, is spatial homogeneity. For non-equilibrium thermodynamics, a suitable set of identifying state variables includes some macroscopic variables, for example a non-zero spatial gradient of temperature, that indicate departure from thermodynamic equilibrium. Such non-equilibrium identifying state variables indicate that some non-zero flow may be occurring within the system or between system and surroundings.
State variables and state functions
A thermodynamic system can be identified or described in various ways. Most directly, it can be identified by a suitable set of state variables. Less directly, it can be described by a suitable set of quantities that includes state variables and state functions.
The primary or original identification of the thermodynamic state of a body of matter is by directly measurable ordinary physical quantities. For some simple purposes, for a given body of given chemical constitution, a sufficient set of such quantities is 'volume and pressure'.
Besides the directly measurable ordinary physical variables that originally identify a thermodynamic state of a system, the system is characterized by further quantities called state functions, which are also called state variables, thermodynamic variables, state quantities, or functions of state. They are uniquely determined by the thermodynamic state as it has been identified by the original state variables. There are many such state functions. Examples are internal energy, enthalpy, Helmholtz free energy, Gibbs free energy, thermodynamic temperature, and entropy. For a given body, of a given chemical constitution, when its thermodynamic state has been fully defined by its pressure and volume, then its temperature is uniquely determined. Thermodynamic temperature is a specifically thermodynamic concept, while the original directly measurable state variables are defined by ordinary physical measurements, without reference to thermodynamic concepts; for this reason, it is helpful to regard thermodynamic temperature as a state function.
A passage from a given initial thermodynamic state to a given final thermodynamic state of a thermodynamic system is known as a thermodynamic process; usually this is transfer of matter or energy between system and surroundings. In any thermodynamic process, whatever may be the intermediate conditions during the passage, the total respective change in the value of each thermodynamic state variable depends only on the initial and final states. For an idealized continuous or quasi-static process, this means that infinitesimal incremental changes in such variables are exact differentials. Together, the incremental changes throughout the process, and the initial and final states, fully determine the idealized process.
In the most commonly cited simple example, an ideal gas, the thermodynamic variables would be any three variables out of the following four: amount of substance, pressure, temperature, and volume. Thus, the thermodynamic state would range over a three-dimensional state space. The remaining variable, as well as other quantities such as the internal energy and the entropy, would be expressed as state functions of these three variables. The state functions satisfy certain universal constraints, expressed in the laws of thermodynamics, and they depend on the peculiarities of the materials that compose the concrete system.
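A minimal numerical sketch can make this concrete. The snippet below is an illustration added here (assuming a monatomic ideal gas, SI units, and an entropy reported only as a difference between two states; the function names are illustrative, not standard): it treats pressure, internal energy, and entropy change as state functions of the three chosen variables.

```python
# Sketch: state functions of a monatomic ideal gas from three chosen
# state variables (amount n, temperature T, volume V).  Entropy is
# reported relative to an arbitrary reference state.
import math

R = 8.314  # gas constant, J/(mol K)

def pressure(n, T, V):
    """Equation of state: p = n R T / V."""
    return n * R * T / V

def internal_energy(n, T):
    """U = (3/2) n R T for a monatomic ideal gas (depends only on n and T)."""
    return 1.5 * n * R * T

def entropy_change(n, T1, V1, T2, V2):
    """dS = (3/2) n R ln(T2/T1) + n R ln(V2/V1); path-independent."""
    return n * R * (1.5 * math.log(T2 / T1) + math.log(V2 / V1))

if __name__ == "__main__":
    n, T, V = 1.0, 300.0, 0.0248          # about 1 mol near room conditions
    print(pressure(n, T, V), internal_energy(n, T))
    # The same entropy change is obtained for any process between the two states:
    print(entropy_change(n, 300.0, 0.025, 600.0, 0.050))
```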
Various thermodynamic diagrams have been developed to model the transitions between thermodynamic states.
Equilibrium state
Physical systems found in nature are practically always dynamic and complex, but in many cases, macroscopic physical systems are amenable to description based on proximity to ideal conditions. One such ideal condition is that of a stable equilibrium state. Such a state is a primitive object of classical or equilibrium thermodynamics, in which it is called a thermodynamic state. Based on many observations, thermodynamics postulates that all systems that are isolated from the external environment will evolve so as to approach unique stable equilibrium states. There are a number of different types of equilibrium, corresponding to different physical variables, and a system reaches thermodynamic equilibrium when the conditions of all the relevant types of equilibrium are simultaneously satisfied. A few different types of equilibrium are listed below.
Thermal equilibrium: When the temperature throughout a system is uniform, the system is in thermal equilibrium.
Mechanical equilibrium: If at every point within a given system there is no change in pressure with time, and there is no movement of material, the system is in mechanical equilibrium.
Phase equilibrium: This occurs when the mass for each individual phase reaches a value that does not change with time.
Chemical equilibrium: In chemical equilibrium, the chemical composition of a system has settled and does not change with time.
References
Bibliography
Bailyn, M. (1994). A Survey of Thermodynamics, American Institute of Physics Press, New York.
Callen, H.B. (1960/1985). Thermodynamics and an Introduction to Thermostatistics, (1st edition 1960) 2nd edition 1985, Wiley, New York.
Carathéodory, C. (1909). Untersuchungen über die Grundlagen der Thermodynamik, Mathematische Annalen, 67: 355–386. A mostly reliable translation is to be found at Kestin, J. (1976). The Second Law of Thermodynamics, Dowden, Hutchinson & Ross, Stroudsburg PA.
Eu, B.C. (2002). Generalized Thermodynamics. The Thermodynamics of Irreversible Processes and Generalized Hydrodynamics, Kluwer Academic Publishers, Dordrecht.
Jaynes, E.T. (1965). Gibbs vs. Boltzmann entropies, Am. J. Phys., 33: 391–398.
Marsland, R. , Brown, H.R., Valente, G. (2015). Time and irreversibility in axiomatic thermodynamics, Am. J. Phys., 83(7): 628–634.
Planck, M., (1923/1927). Treatise on Thermodynamics, translated by A. Ogg, third English edition, Longmans, Green and Co., London.
Prigogine, I., Defay, R. (1950/1954). Chemical Thermodynamics, Longmans, Green & Co, London.
Tisza, L. (1966). Generalized Thermodynamics, M.I.T. Press, Cambridge MA.
Zemansky, M.W., Dittman, R.H. (1937/1981). Heat and Thermodynamics. An Intermediate Textbook, sixth edition, McGraw-Hill Book Company, New York, ISBN 0-07-072808-9.
See also
Excited state
Ground state
Stationary state
Thermodynamics
Path integral formulation
The path integral formulation is a description in quantum mechanics that generalizes the stationary action principle of classical mechanics. It replaces the classical notion of a single, unique classical trajectory for a system with a sum, or functional integral, over an infinity of quantum-mechanically possible trajectories to compute a quantum amplitude.
This formulation has proven crucial to the subsequent development of theoretical physics, because manifest Lorentz covariance (time and space components of quantities enter equations in the same way) is easier to achieve than in the operator formalism of canonical quantization. Unlike previous methods, the path integral allows one to easily change coordinates between very different canonical descriptions of the same quantum system. Another advantage is that it is in practice easier to guess the correct form of the Lagrangian of a theory, which naturally enters the path integrals (for interactions of a certain type, these are coordinate space or Feynman path integrals), than the Hamiltonian. Possible downsides of the approach include that unitarity (this is related to conservation of probability; the probabilities of all physically possible outcomes must add up to one) of the S-matrix is obscure in the formulation. The path-integral approach has proven to be equivalent to the other formalisms of quantum mechanics and quantum field theory. Thus, by deriving either approach from the other, problems associated with one or the other approach (as exemplified by Lorentz covariance or unitarity) go away.
The path integral also relates quantum and stochastic processes, and this provided the basis for the grand synthesis of the 1970s, which unified quantum field theory with the statistical field theory of a fluctuating field near a second-order phase transition. The Schrödinger equation is a diffusion equation with an imaginary diffusion constant, and the path integral is an analytic continuation of a method for summing up all possible random walks.
The path integral has impacted a wide array of sciences, including polymer physics, quantum field theory, string theory and cosmology. In physics, it is a foundation for lattice gauge theory and quantum chromodynamics. It has been called the "most powerful formula in physics", with Stephen Wolfram also declaring it to be the "fundamental mathematical construct of modern quantum mechanics and quantum field theory".
The basic idea of the path integral formulation can be traced back to Norbert Wiener, who introduced the Wiener integral for solving problems in diffusion and Brownian motion. This idea was extended to the use of the Lagrangian in quantum mechanics by Paul Dirac, whose 1933 paper gave birth to the path integral formulation. The complete method was developed in 1948 by Richard Feynman. Some preliminaries were worked out earlier in his doctoral work under the supervision of John Archibald Wheeler. The original motivation stemmed from the desire to obtain a quantum-mechanical formulation for the Wheeler–Feynman absorber theory using a Lagrangian (rather than a Hamiltonian) as a starting point.
Quantum action principle
In quantum mechanics, as in classical mechanics, the Hamiltonian is the generator of time translations. This means that the state at a slightly later time differs from the state at the current time by the result of acting with the Hamiltonian operator (multiplied by the negative imaginary unit and divided by the reduced Planck constant, $-i/\hbar$). For states with a definite energy, this is a statement of the de Broglie relation between frequency and energy, and the general relation is consistent with that plus the superposition principle.
The Hamiltonian in classical mechanics is derived from a Lagrangian, which is a more fundamental quantity in the context of special relativity. The Hamiltonian indicates how to march forward in time, but the time is different in different reference frames. The Lagrangian is a Lorentz scalar, while the Hamiltonian is the time component of a four-vector. So the Hamiltonian is different in different frames, and this type of symmetry is not apparent in the original formulation of quantum mechanics.
The Hamiltonian is a function of the position and momentum at one time, and it determines the position and momentum a little later. The Lagrangian is a function of the position now and the position a little later (or, equivalently for infinitesimal time separations, it is a function of the position and velocity). The relation between the two is by a Legendre transformation, and the condition that determines the classical equations of motion (the Euler–Lagrange equations) is that the action has an extremum.
In quantum mechanics, the Legendre transform is hard to interpret, because the motion is not over a definite trajectory. In classical mechanics, with discretization in time, the Legendre transform becomes
$$\varepsilon H = p(t)\,\bigl(q(t+\varepsilon) - q(t)\bigr) - \varepsilon L$$
and
$$p = \frac{\partial L}{\partial \dot q},$$
where the partial derivative with respect to $\dot q$ holds $q(t + \varepsilon)$ fixed. The inverse Legendre transform is
$$\varepsilon L = \varepsilon p \dot q - \varepsilon H,$$
where
$$\dot q = \frac{\partial H}{\partial p},$$
and the partial derivative now is with respect to $p$ at fixed $q$.
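As a small symbolic illustration of the Legendre transform (added here as a sketch; the quadratic Lagrangian is an assumed example, and SymPy is assumed to be available), the following passes from a Lagrangian to the corresponding Hamiltonian and back:

```python
# Sketch: Legendre transform L -> H -> L for L = m*v**2/2 - V(q), using SymPy.
import sympy as sp

q, v, p, m = sp.symbols('q v p m', positive=True)
V = sp.Function('V')

L = m * v**2 / 2 - V(q)                  # assumed example Lagrangian
p_of_v = sp.diff(L, v)                   # p = dL/dv
v_of_p = sp.solve(sp.Eq(p, p_of_v), v)[0]
H = sp.simplify(p * v_of_p - L.subs(v, v_of_p))   # H = p*v - L
print(H)                                 # -> p**2/(2*m) + V(q)

# Inverse transform: v = dH/dp, L = p*v - H
v_back = sp.diff(H, p)
L_back = sp.simplify(p * v_back - H).subs(p, p_of_v)
print(sp.simplify(L_back - L))           # -> 0, recovering the original L
```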
In quantum mechanics, the state is a superposition of different states with different values of , or different values of , and the quantities and can be interpreted as noncommuting operators. The operator is only definite on states that are indefinite with respect to . So consider two states separated in time and act with the operator corresponding to the Lagrangian:
If the multiplications implicit in this formula are reinterpreted as matrix multiplications, the first factor is
and if this is also interpreted as a matrix multiplication, the sum over all states integrates over all , and so it takes the Fourier transform in to change basis to . That is the action on the Hilbert space – change basis to at time .
Next comes
or evolve an infinitesimal time into the future.
Finally, the last factor in this interpretation is
which means change basis back to at a later time.
This is not very different from just ordinary time evolution: the factor contains all the dynamical information – it pushes the state forward in time. The first part and the last part are just Fourier transforms to change to a pure basis from an intermediate basis.
Another way of saying this is that since the Hamiltonian is naturally a function of and , exponentiating this quantity and changing basis from to at each step allows the matrix element of to be expressed as a simple function along each path. This function is the quantum analog of the classical action. This observation is due to Paul Dirac.
Dirac further noted that one could square the time-evolution operator in the representation:
and this gives the time-evolution operator between time and time . While in the representation the quantity that is being summed over the intermediate states is an obscure matrix element, in the representation it is reinterpreted as a quantity associated to the path. In the limit that one takes a large power of this operator, one reconstructs the full quantum evolution between two states, the early one with a fixed value of and the later one with a fixed value of . The result is a sum over paths with a phase, which is the quantum action.
Classical limit
Crucially, Dirac identified the effect of the classical limit on the quantum form of the action principle.
That is, in the limit of action that is large compared to the Planck constant – the classical limit – the path integral is dominated by solutions that are in the neighborhood of stationary points of the action. The classical path arises naturally in the classical limit.
Feynman's interpretation
Dirac's work did not provide a precise prescription to calculate the sum over paths, and he did not show that one could recover the Schrödinger equation or the canonical commutation relations from this rule. This was done by Feynman.
Feynman showed that Dirac's quantum action was, for most cases of interest, simply equal to the classical action, appropriately discretized. This means that the classical action is the phase acquired by quantum evolution between two fixed endpoints. He proposed to recover all of quantum mechanics from the following postulates:
The probability for an event is given by the squared modulus of a complex number called the "probability amplitude".
The probability amplitude is given by adding together the contributions of all paths in configuration space.
The contribution of a path is proportional to $e^{iS/\hbar}$, where $S$ is the action given by the time integral of the Lagrangian along the path.
In order to find the overall probability amplitude for a given process, then, one adds up, or integrates, the amplitude of the 3rd postulate over the space of all possible paths of the system in between the initial and final states, including those that are absurd by classical standards. In calculating the probability amplitude for a single particle to go from one space-time coordinate to another, it is correct to include paths in which the particle describes elaborate curlicues, curves in which the particle shoots off into outer space and flies back again, and so forth. The path integral assigns to all these amplitudes equal weight but varying phase, or argument of the complex number. Contributions from paths wildly different from the classical trajectory may be suppressed by interference (see below).
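This interference can be seen in a crude numerical experiment. The sketch below is an added illustration (with $\hbar = m = 1$, a free particle, and arbitrary choices of step count, path count, and random seed): it sums $e^{iS}$ over randomly generated zigzag paths between fixed endpoints and shows that paths near the straight classical line add nearly in phase, while wildly deviating paths largely cancel.

```python
# Sketch: coherent sum of exp(i*S) over random discretized paths of a
# free particle (hbar = m = 1).  Small deviations from the classical
# straight line add nearly in phase; large deviations mostly cancel.
import numpy as np

rng = np.random.default_rng(0)
x_a, x_b, T, n_steps, n_paths = 0.0, 1.0, 1.0, 20, 20000
dt = T / n_steps
t = np.linspace(0.0, T, n_steps + 1)
classical = x_a + (x_b - x_a) * t / T

def action(path):
    v = np.diff(path) / dt
    return np.sum(0.5 * v**2 * dt)        # S = sum (1/2) v^2 dt

for sigma in (0.02, 1.0):                 # small vs. large deviations
    total = 0.0 + 0.0j
    for _ in range(n_paths):
        path = classical.copy()
        path[1:-1] += sigma * rng.standard_normal(n_steps - 1)  # endpoints fixed
        total += np.exp(1j * action(path))
    print(f"sigma={sigma}: |sum/N| = {abs(total) / n_paths:.3f}")
```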
Feynman showed that this formulation of quantum mechanics is equivalent to the canonical approach to quantum mechanics when the Hamiltonian is at most quadratic in the momentum. An amplitude computed according to Feynman's principles will also obey the Schrödinger equation for the Hamiltonian corresponding to the given action.
The path integral formulation of quantum field theory represents the transition amplitude (corresponding to the classical correlation function) as a weighted sum of all possible histories of the system from the initial to the final state. A Feynman diagram is a graphical representation of a perturbative contribution to the transition amplitude.
Path integral in quantum mechanics
Time-slicing derivation
One common approach to deriving the path integral formula is to divide the time interval into small pieces. Once this is done, the Trotter product formula tells us that the noncommutativity of the kinetic and potential energy operators can be ignored.
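The role of the Trotter product formula can be checked directly on a small matrix model. The sketch below is an added numerical illustration (it assumes a periodic grid, a harmonic potential, units $\hbar = m = 1$, and the availability of SciPy's matrix exponential); it compares the exact evolution $e^{-iHt}$ with the time-sliced product $\bigl(e^{-iTt/n}e^{-iVt/n}\bigr)^n$ and shows the error shrinking as the number of slices grows.

```python
# Sketch: Trotter product formula on a small discretized 1-D Hamiltonian.
# H = T + V on a periodic grid; exp(-i H t) is compared with the
# time-sliced product (exp(-i T t/n) exp(-i V t/n))^n for increasing n.
import numpy as np
from scipy.linalg import expm

N, L = 64, 10.0
dx = L / N
x = np.arange(N) * dx - L / 2

# Kinetic energy via the periodic finite-difference Laplacian.
lap = (np.roll(np.eye(N), 1, axis=0) + np.roll(np.eye(N), -1, axis=0)
       - 2 * np.eye(N)) / dx**2
T_op = -0.5 * lap
V_op = np.diag(0.5 * x**2)               # assumed harmonic potential
H = T_op + V_op

t = 1.0
exact = expm(-1j * H * t)
for n in (1, 4, 16, 64):
    step = expm(-1j * T_op * t / n) @ expm(-1j * V_op * t / n)
    approx = np.linalg.matrix_power(step, n)
    print(n, np.linalg.norm(approx - exact))
# The error decreases roughly like 1/n, as the Trotter formula guarantees.
```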
For a particle in a smooth potential, the path integral is approximated by zigzag paths, which in one dimension is a product of ordinary integrals. For the motion of the particle from position $x_a$ at time $t_a$ to $x_b$ at time $t_b$, the time sequence
$$t_a = t_0 < t_1 < \cdots < t_{n-1} < t_n = t_b$$
can be divided up into $n$ smaller segments $t_j - t_{j-1}$, where $j = 1, \dots, n$, of fixed duration
$$\varepsilon = \frac{t_b - t_a}{n}.$$
This process is called time-slicing.
An approximation for the path integral can be computed as proportional to
$$\int_{-\infty}^{+\infty} dx_1 \cdots \int_{-\infty}^{+\infty} dx_{n-1}\, \exp\!\left(\frac{i}{\hbar}\,\varepsilon \sum_{j=1}^{n} L\!\left(\bar x_j,\, \frac{x_j - x_{j-1}}{\varepsilon}\right)\right),$$
where $L$ is the Lagrangian of the one-dimensional system with position variable $x(t)$ and velocity $\dot x(t)$ considered (see below), $x_j$ corresponds to the position at the $j$th time step (with $x_0 = x_a$ and $x_n = x_b$), and the time integral is approximated by a sum of $n$ terms.
In the limit $n \to \infty$, this becomes a functional integral, which, apart from a nonessential factor, is directly the product of the probability amplitudes (more precisely, since one must work with a continuous spectrum, the respective densities) to find the quantum mechanical particle at $x_a$ in the initial state and at $x_b$ in the final state.
Actually $L$ is the classical Lagrangian of the one-dimensional system considered,
$$L(x, \dot x) = \frac{m}{2}\,\dot x^2 - V(x),$$
and the abovementioned "zigzagging" corresponds to the appearance of the terms
$$L\!\left(\bar x_j,\, \frac{x_j - x_{j-1}}{\varepsilon}\right)$$
in the Riemann sum approximating the time integral, which are finally integrated over $x_1$ to $x_{n-1}$ with the integration measure $dx_1 \cdots dx_{n-1}$; here $\bar x_j$ is an arbitrary value of the interval corresponding to $j$, e.g. its center, $(x_j + x_{j-1})/2$.
Thus, in contrast to classical mechanics, not only does the stationary path contribute, but actually all virtual paths between the initial and the final point also contribute.
Path integral
In terms of the wave function in the position representation, the path integral formula reads as follows:
where denotes integration over all paths with and where is a normalization factor. Here is the action, given by
Free particle
The path integral representation gives the quantum amplitude to go from point $x$ to point $y$ as an integral over all paths. For a free-particle action (for simplicity let $m = 1$, $\hbar = 1$)
$$S = \int \frac{\dot x^2}{2}\, dt,$$
the integral can be evaluated explicitly.
To do this, it is convenient to start without the factor $i$ in the exponential, so that large deviations are suppressed by small numbers, not by cancelling oscillatory contributions. The amplitude (or kernel) reads:
Splitting the integral into time slices:
where the is interpreted as a finite collection of integrations at each integer multiple of . Each factor in the product is a Gaussian as a function of centered at with variance . The multiple integrals are a repeated convolution of this Gaussian with copies of itself at adjacent times:
where the number of convolutions is . The result is easy to evaluate by taking the Fourier transform of both sides, so that the convolutions become multiplications:
The Fourier transform of the Gaussian is another Gaussian of reciprocal variance:
and the result is
The Fourier transform gives , and it is a Gaussian again with reciprocal variance:
The proportionality constant is not really determined by the time-slicing approach, only the ratio of values for different endpoint choices is determined. The proportionality constant should be chosen to ensure that between each two time slices the time evolution is quantum-mechanically unitary, but a more illuminating way to fix the normalization is to consider the path integral as a description of a stochastic process.
The result has a probability interpretation. The sum over all paths of the exponential factor can be seen as the sum over each path of the probability of selecting that path. The probability is the product over each segment of the probability of selecting that segment, so that each segment is probabilistically independently chosen. The fact that the answer is a Gaussian spreading linearly in time is the central limit theorem, which can be interpreted as the first historical evaluation of a statistical path integral.
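The statement that the variance grows linearly with the number of slices is easy to verify numerically. The sketch below is an added illustration (grid spacing and time step are arbitrary choices): it convolves a one-slice Gaussian kernel with itself repeatedly and compares the measured variance with the number of slices times the step variance.

```python
# Sketch: repeated convolution of a Gaussian kernel with itself.
# The variance of the result grows linearly in the number of time slices.
import numpy as np

dx, eps = 0.01, 0.1                    # grid spacing and (assumed) time step
x = np.arange(-20, 20, dx)
kernel = np.exp(-x**2 / (2 * eps))     # one-slice Gaussian, variance = eps
kernel /= kernel.sum() * dx

dist = kernel.copy()
for n_slices in range(2, 6):
    dist = np.convolve(dist, kernel, mode="same") * dx
    var = np.sum(x**2 * dist) / np.sum(dist)
    print(n_slices, var, n_slices * eps)   # measured variance vs. n * eps
```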
The probability interpretation gives a natural normalization choice. The path integral should be defined so that
This condition normalizes the Gaussian and produces a kernel that obeys the diffusion equation:
For oscillatory path integrals, ones with an $i$ in the numerator, the time slicing produces convolved Gaussians, just as before. Now, however, the convolution product is marginally singular, since it requires careful limits to evaluate the oscillating integrals. To make the factors well defined, the easiest way is to add a small imaginary part to the time increment $\varepsilon$. This is closely related to Wick rotation. Then the same convolution argument as before gives the propagation kernel:
which, with the same normalization as before (not the sum-squares normalization – this function has a divergent norm), obeys a free Schrödinger equation:
This means that any superposition of $K$s will also obey the same equation, by linearity. Defining
$$\psi_T(y) = \int \psi_0(x)\, K(y - x; T)\, dx,$$
$\psi_T$ then obeys the free Schrödinger equation just as $K$ does:
$$i\,\frac{\partial \psi_T}{\partial T} = -\frac{1}{2}\,\frac{\partial^2 \psi_T}{\partial y^2}.$$
Simple harmonic oscillator
The Lagrangian for the simple harmonic oscillator is
$$L = \frac{m}{2}\,\dot x^2 - \frac{m\omega^2}{2}\,x^2.$$
Write its trajectory $x(t)$ as the classical trajectory plus some perturbation, $x(t) = x_c(t) + \delta x(t)$, and the action as $S = S_c + \delta S$. The classical trajectory can be written as
$$x_c(t) = x_i\,\frac{\sin\omega(t_f - t)}{\sin\omega(t_f - t_i)} + x_f\,\frac{\sin\omega(t - t_i)}{\sin\omega(t_f - t_i)}.$$
This trajectory yields the classical action
$$S_c = \frac{m\omega}{2\sin\omega T}\Bigl[\bigl(x_i^2 + x_f^2\bigr)\cos\omega T - 2\,x_i x_f\Bigr], \qquad T = t_f - t_i.$$
Next, expand the deviation from the classical path as a Fourier series, and calculate the contribution to the action , which gives
This means that the propagator is
for some normalization
Using the infinite-product representation of the sinc function,
the propagator can be written as
Let . One may write this propagator in terms of energy eigenstates as
Using the identities and , this amounts to
One may absorb all terms after the first into , thereby obtaining
One may finally expand in powers of : All terms in this expansion get multiplied by the factor in the front, yielding terms of the form
Comparison to the above eigenstate expansion yields the standard energy spectrum for the simple harmonic oscillator,
$$E_n = \left(n + \tfrac{1}{2}\right)\hbar\omega.$$
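This spectrum can be checked independently of the path integral. The sketch below is an added numerical cross-check (grid size and box length are arbitrary, and $\hbar = m = \omega = 1$ is assumed): it diagonalizes the finite-difference harmonic-oscillator Hamiltonian and compares the lowest eigenvalues with $(n + \tfrac{1}{2})$.

```python
# Sketch: lowest eigenvalues of the discretized harmonic oscillator,
# compared with the exact spectrum E_n = n + 1/2 (hbar = m = omega = 1).
import numpy as np

N, L = 400, 20.0
dx = L / N
x = (np.arange(N) - N / 2) * dx

lap = (np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)
       - 2 * np.eye(N)) / dx**2
H = -0.5 * lap + np.diag(0.5 * x**2)

print(np.round(np.linalg.eigvalsh(H)[:5], 4))   # ~ [0.5, 1.5, 2.5, 3.5, 4.5]
```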
Coulomb potential
Feynman's time-sliced approximation does not, however, exist for the most important quantum-mechanical path integrals of atoms, due to the singularity of the Coulomb potential at the origin. Only after replacing the time by another path-dependent pseudo-time parameter
the singularity is removed and a time-sliced approximation exists, which is exactly integrable, since it can be made harmonic by a simple coordinate transformation, as discovered in 1979 by İsmail Hakkı Duru and Hagen Kleinert. The combination of a path-dependent time transformation and a coordinate transformation is an important tool to solve many path integrals and is called generically the Duru–Kleinert transformation.
The Schrödinger equation
The path integral reproduces the Schrödinger equation for the initial and final state even when a potential is present. This is easiest to see by taking a path-integral over infinitesimally separated times.
Since the time separation is infinitesimal and the cancelling oscillations become severe for large values of , the path integral has most weight for close to . In this case, to lowest order the potential energy is constant, and only the kinetic energy contribution is nontrivial. (This separation of the kinetic and potential energy terms in the exponent is essentially the Trotter product formula.) The exponential of the action is
The first term rotates the phase of locally by an amount proportional to the potential energy. The second term is the free particle propagator, corresponding to times a diffusion process. To lowest order in they are additive; in any case one has with (1):
As mentioned, the spread in is diffusive from the free particle propagation, with an extra infinitesimal rotation in phase that slowly varies from point to point from the potential:
and this is the Schrödinger equation. The normalization of the path integral needs to be fixed in exactly the same way as in the free particle case. An arbitrary continuous potential does not affect the normalization, although singular potentials require careful treatment.
Equations of motion
Since the states obey the Schrödinger equation, the path integral must reproduce the Heisenberg equations of motion for the averages of and variables, but it is instructive to see this directly. The direct approach shows that the expectation values calculated from the path integral reproduce the usual ones of quantum mechanics.
Start by considering the path integral with some fixed initial state
Now at each separate time is a separate integration variable. So it is legitimate to change variables in the integral by shifting: where is a different shift at each time but , since the endpoints are not integrated:
The change in the integral from the shift is, to first infinitesimal order in :
which, integrating by parts in , gives:
But this was just a shift of integration variables, which doesn't change the value of the integral for any choice of . The conclusion is that this first order variation is zero for an arbitrary initial state and at any arbitrary point in time:
this is the Heisenberg equation of motion.
If the action contains terms that multiply and , at the same moment in time, the manipulations above are only heuristic, because the multiplication rules for these quantities is just as noncommuting in the path integral as it is in the operator formalism.
Stationary-phase approximation
If the variation in the action exceeds $\hbar$ by many orders of magnitude, we typically have destructive interference other than in the vicinity of those trajectories satisfying the Euler–Lagrange equation, which is now reinterpreted as the condition for constructive interference. This can be shown using the method of stationary phase applied to the propagator. As $\hbar$ decreases, the exponential in the integral oscillates rapidly in the complex domain for any change in the action. Thus, in the limit that $\hbar$ goes to zero, only points where the classical action does not vary contribute to the propagator.
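A one-dimensional toy integral already exhibits this mechanism. The sketch below is an added illustration (the quadratic phase standing in for the action, the smooth envelope, and the grid are all assumptions of the example): it evaluates $\int w(x)\,e^{iS(x)/\hbar}\,dx$ numerically and compares it with the stationary-phase estimate as the parameter playing the role of $\hbar$ shrinks.

```python
# Sketch: stationary-phase approximation for I = integral of w(x)*exp(i*S(x)/hbar),
# with S(x) = (x - 1)^2 (stationary point x0 = 1, S'' = 2) and a smooth envelope w.
import numpy as np

x = np.linspace(-30.0, 30.0, 600001)
dx = x[1] - x[0]
w = np.exp(-x**2 / 8.0)                 # envelope so the integral converges
S = (x - 1.0)**2
x0, Spp = 1.0, 2.0

for hbar in (1.0, 0.1, 0.01):
    numeric = np.sum(w * np.exp(1j * S / hbar)) * dx
    estimate = (np.sqrt(2 * np.pi * hbar / Spp) * np.exp(1j * np.pi / 4)
                * np.exp(-x0**2 / 8.0))   # w(x0) * Gaussian prefactor * phase
    print(hbar, abs(numeric - estimate) / abs(estimate))
# The relative error shrinks with hbar: only the stationary point survives.
```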
Canonical commutation relations
The formulation of the path integral does not make it clear at first sight that the quantities and do not commute. In the path integral, these are just integration variables and they have no obvious ordering. Feynman discovered that the non-commutativity is still present.
To see this, consider the simplest path integral, the Brownian walk. This is not yet quantum mechanics, so in the path-integral the action is not multiplied by $i$:
$$S = \int \frac{\dot x^2}{2}\, dt.$$
The quantity $x$ is fluctuating, and the derivative is defined as the limit of a discrete difference.
The distance that a random walk moves is proportional to $\sqrt{t}$, so that:
$$\Delta x \approx \sqrt{\varepsilon}.$$
This shows that the random walk is not differentiable, since the ratio that defines the derivative diverges with probability one.
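A direct simulation shows this scaling. The sketch below is an added illustration (step sizes, sample count, and random seed are arbitrary): the root-mean-square increment of a Brownian walk shrinks like $\sqrt{\varepsilon}$, so the difference quotient $\Delta x/\varepsilon$ blows up as the step is refined.

```python
# Sketch: rms displacement of a random walk scales like sqrt(eps), so the
# difference quotient dx/dt diverges as the time step eps is refined.
import numpy as np

rng = np.random.default_rng(1)
for eps in (1e-2, 1e-4, 1e-6):
    dx = np.sqrt(eps) * rng.standard_normal(10000)   # Brownian increments, var = eps
    print(f"eps={eps:.0e}  rms dx={np.sqrt(np.mean(dx**2)):.3e}"
          f"  rms dx/eps={np.sqrt(np.mean((dx / eps)**2)):.3e}")
# rms dx ~ sqrt(eps), while rms (dx/eps) ~ 1/sqrt(eps) -> infinity.
```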
The quantity $x\dot x$ is ambiguous, with two possible meanings:
$$x\dot x = x(t)\,\frac{x(t+\varepsilon) - x(t)}{\varepsilon} \qquad\text{or}\qquad x\dot x = x(t+\varepsilon)\,\frac{x(t+\varepsilon) - x(t)}{\varepsilon}.$$
In elementary calculus, the two are only different by an amount that goes to 0 as $\varepsilon$ goes to 0. But in this case, the difference between the two is not 0:
$$x(t+\varepsilon)\,\frac{x(t+\varepsilon) - x(t)}{\varepsilon} - x(t)\,\frac{x(t+\varepsilon) - x(t)}{\varepsilon} = \frac{\bigl(x(t+\varepsilon) - x(t)\bigr)^2}{\varepsilon}.$$
Let
$$u = \frac{\bigl(x(t+\varepsilon) - x(t)\bigr)^2}{\varepsilon}.$$
Then $u$ is a rapidly fluctuating statistical quantity, whose average value is 1, i.e. a normalized "Gaussian process". The fluctuations of such a quantity can be described by a statistical Lagrangian
and the equations of motion for $u$ derived from extremizing the corresponding action just set it equal to 1. In physics, such a quantity is "equal to 1 as an operator identity". In mathematics, it "weakly converges to 1". In either case, it is 1 in any expectation value, or when averaged over any interval, or for all practical purposes.
Defining the time order to be the operator order:
This is called the Itō lemma in stochastic calculus, and the (euclideanized) canonical commutation relations in physics.
For a general statistical action, a similar argument shows that
and in quantum mechanics, the extra imaginary unit in the action converts this to the canonical commutation relation,
$$[x, p] = i\hbar.$$
Particle in curved space
For a particle in curved space the kinetic term depends on the position, and the above time slicing cannot be applied, this being a manifestation of the notorious operator ordering problem in Schrödinger quantum mechanics. One may, however, solve this problem by transforming the time-sliced flat-space path integral to curved space using a multivalued coordinate transformation (a nonholonomic mapping).
Measure-theoretic factors
Sometimes (e.g. a particle moving in curved space) we also have measure-theoretic factors in the functional integral:
This factor is needed to restore unitarity.
For instance, if
then it means that each spatial slice is multiplied by the measure . This measure cannot be expressed as a functional multiplying the measure because they belong to entirely different classes.
Expectation values and matrix elements
Matrix elements of the kind take the form
.
This generalizes to multiple operators, for example
,
and to the general expectation value
.
Euclidean path integrals
It is very common in path integrals to perform a Wick rotation from real to imaginary times. In the setting of quantum field theory, the Wick rotation changes the geometry of space-time from Lorentzian to Euclidean; as a result, Wick-rotated path integrals are often called Euclidean path integrals.
Wick rotation and the Feynman–Kac formula
If we replace $t$ by $-it$, the time-evolution operator $e^{-itH/\hbar}$ is replaced by $e^{-tH/\hbar}$. (This change is known as a Wick rotation.) If we repeat the derivation of the path-integral formula in this setting, the weight of each path becomes $e^{-S_{\mathrm{Euclidean}}/\hbar}$ rather than $e^{iS/\hbar}$, where $S_{\mathrm{Euclidean}}$ is the Euclidean action, given by
$$S_{\mathrm{Euclidean}} = \int \left(\frac{m}{2}\,\dot x^2 + V(x)\right) dt.$$
Note the sign change between this and the normal action, where the potential energy term is negative. (The term Euclidean is from the context of quantum field theory, where the change from real to imaginary time changes the space-time geometry from Lorentzian to Euclidean.)
Now, the contribution of the kinetic energy to the path integral is as follows:
where the remaining dependence of the integrand on the path is collected into a single factor. This integral has a rigorous mathematical interpretation as integration against the Wiener measure, denoted $\mu_x$. The Wiener measure, constructed by Norbert Wiener, gives a rigorous foundation to Einstein's mathematical model of Brownian motion. The subscript $x$ indicates that the measure is supported on paths $x(s)$ with $x(0) = x$.
We then have a rigorous version of the Feynman path integral, known as the Feynman–Kac formula:
$$\bigl(e^{-tH/\hbar}\psi_0\bigr)(x) = \int \exp\!\left(-\frac{1}{\hbar}\int_0^t V\bigl(x(s)\bigr)\,ds\right)\psi_0\bigl(x(t)\bigr)\,d\mu_x,$$
where the Brownian paths entering $\mu_x$ carry diffusion constant $\hbar/2m$, and where $\psi_t = e^{-tH/\hbar}\psi_0$ satisfies the Wick-rotated version of the Schrödinger equation,
$$\hbar\,\frac{\partial \psi_t}{\partial t} = \frac{\hbar^2}{2m}\,\frac{\partial^2 \psi_t}{\partial x^2} - V(x)\,\psi_t.$$
Although the Wick-rotated Schrödinger equation does not have a direct physical meaning, interesting properties of the Schrödinger operator can be extracted by studying it.
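The Feynman–Kac average lends itself directly to Monte Carlo evaluation. The sketch below is an added illustration (it assumes $\hbar = m = 1$, the potential $V(x) = x^2/2$, a constant initial function, and paths started at the origin; for this Gaussian functional the classical Cameron–Martin result $1/\sqrt{\cosh t}$ serves as a benchmark):

```python
# Sketch: Monte Carlo estimate of the Feynman-Kac average
#   u(t) = E[ exp(-(1/2) * integral_0^t W_s^2 ds) ],  W = standard Brownian motion,
# which for this harmonic "potential" has the closed form 1/sqrt(cosh t).
import numpy as np

rng = np.random.default_rng(2)
t, n_steps, n_paths = 1.0, 200, 20000
dt = t / n_steps

increments = np.sqrt(dt) * rng.standard_normal((n_paths, n_steps))
paths = np.cumsum(increments, axis=1)              # W at times dt, 2dt, ..., t
integral_V = 0.5 * np.sum(paths**2, axis=1) * dt   # approx. of (1/2) * int W^2 ds
estimate = np.mean(np.exp(-integral_V))

print("Monte Carlo:        ", estimate)
print("Exact 1/sqrt(cosh t):", 1.0 / np.sqrt(np.cosh(t)))
```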
Much of the study of quantum field theories from the path-integral perspective, in both the mathematics and physics literatures, is done in the Euclidean setting, that is, after a Wick rotation. In particular, there are various results showing that if a Euclidean field theory with suitable properties can be constructed, one can then undo the Wick rotation to recover the physical, Lorentzian theory. On the other hand, it is much more difficult to give a meaning to path integrals (even Euclidean path integrals) in quantum field theory than in quantum mechanics.
Path integral and the partition function
The path integral is just the generalization of the integral above to all quantum mechanical problems—
$$Z = \int e^{iS[x]/\hbar}\,\mathcal{D}x,$$
where $S[x]$ is the action of the classical problem in which one investigates the path starting at time $t = 0$ and ending at time $t = T$, and $\mathcal{D}x$ denotes the integration measure over all paths. In the classical limit, $S \gg \hbar$, the path of minimum action dominates the integral, because the phase of any path away from this fluctuates rapidly and different contributions cancel.
The connection with statistical mechanics follows. Considering only paths that begin and end in the same configuration, perform the Wick rotation $t \to -i\tau$, i.e., make time imaginary, and integrate over all possible beginning–ending configurations. The Wick-rotated path integral—described in the previous subsection, with the ordinary action replaced by its "Euclidean" counterpart—now resembles the partition function of statistical mechanics defined in a canonical ensemble with inverse temperature proportional to imaginary time, $1/(k_\mathrm{B}T) = \tau/\hbar$. Strictly speaking, though, this is the partition function for a statistical field theory.
Clearly, such a deep analogy between quantum mechanics and statistical mechanics cannot be dependent on the formulation. In the canonical formulation, one sees that the unitary evolution operator of a state is given by
$$U(t) = e^{-iHt/\hbar},$$
where the state is evolved from time $t = 0$. If one makes a Wick rotation here, the amplitude to go from any state back to the same state in imaginary time $\tau$, summed over all states, is given by
$$Z = \operatorname{Tr}\bigl[e^{-H\tau/\hbar}\bigr],$$
which is precisely the partition function of statistical mechanics for the same system at the temperature quoted earlier. One aspect of this equivalence was also known to Erwin Schrödinger, who remarked that the equation named after him looked like the diffusion equation after Wick rotation. Note, however, that the Euclidean path integral is actually in the form of a classical statistical mechanics model.
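This identification can be tested on the harmonic oscillator. The sketch below is an added illustration (with $\hbar = m = \omega = 1$, an arbitrary number of time slices, and the standard lattice normalization of the path-integral measure, under which $Z \approx \det(\varepsilon M)^{-1/2}$): it evaluates the Euclidean path integral over periodic lattice paths as a Gaussian determinant and compares it with the closed form $1/(2\sinh(\beta/2))$.

```python
# Sketch: harmonic-oscillator partition function from a lattice Euclidean
# path integral over periodic paths,  Z ~ det(eps * M)^(-1/2),
# where M discretizes (-d^2/dtau^2 + omega^2), compared with 1/(2*sinh(beta/2)).
import numpy as np

def Z_lattice(beta, n_slices):
    eps = beta / n_slices
    M = np.zeros((n_slices, n_slices))
    for j in range(n_slices):
        M[j, j] = 2.0 / eps + eps                 # 2/eps + eps*omega^2 (omega = 1)
        M[j, (j + 1) % n_slices] = -1.0 / eps     # periodic boundary condition
        M[j, (j - 1) % n_slices] = -1.0 / eps
    sign, logdet = np.linalg.slogdet(eps * M)
    return np.exp(-0.5 * logdet)

for beta in (1.0, 2.0):
    print(beta, Z_lattice(beta, 1000), 1.0 / (2.0 * np.sinh(beta / 2.0)))
```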
Quantum field theory
Both the Schrödinger and Heisenberg approaches to quantum mechanics single out time and are not in the spirit of relativity. For example, the Heisenberg approach requires that scalar field operators obey the commutation relation
for two simultaneous spatial positions and , and this is not a relativistically invariant concept. The results of a calculation are covariant, but the symmetry is not apparent in intermediate stages. If naive field-theory calculations did not produce infinite answers in the continuum limit, this would not have been such a big problem – it would just have been a bad choice of coordinates. But the lack of symmetry means that the infinite quantities must be cut off, and the bad coordinates make it nearly impossible to cut off the theory without spoiling the symmetry. This makes it difficult to extract the physical predictions, which require a careful limiting procedure.
The problem of lost symmetry also appears in classical mechanics, where the Hamiltonian formulation also superficially singles out time. The Lagrangian formulation makes the relativistic invariance apparent. In the same way, the path integral is manifestly relativistic. It reproduces the Schrödinger equation, the Heisenberg equations of motion, and the canonical commutation relations and shows that they are compatible with relativity. It extends the Heisenberg-type operator algebra to operator product rules, which are new relations difficult to see in the old formalism.
Further, different choices of canonical variables lead to very different-seeming formulations of the same theory. The transformations between the variables can be very complicated, but the path integral makes them into reasonably straightforward changes of integration variables. For these reasons, the Feynman path integral has made earlier formalisms largely obsolete.
The price of a path integral representation is that the unitarity of a theory is no longer self-evident, but it can be proven by changing variables to some canonical representation. The path integral itself also deals with larger mathematical spaces than is usual, which requires more careful mathematics, not all of which has been fully worked out. The path integral historically was not immediately accepted, partly because it took many years to incorporate fermions properly. This required physicists to invent an entirely new mathematical object – the Grassmann variable – which also allowed changes of variables to be done naturally, as well as allowing constrained quantization.
The integration variables in the path integral are subtly non-commuting. The value of the product of two field operators at what looks like the same point depends on how the two points are ordered in space and time. This makes some naive identities fail.
Propagator
In relativistic theories, there is both a particle and field representation for every theory. The field representation is a sum over all field configurations, and the particle representation is a sum over different particle paths.
The nonrelativistic formulation is traditionally given in terms of particle paths, not fields. There, the path integral in the usual variables, with fixed boundary conditions, gives the probability amplitude $K(x, y; T)$ for a particle to go from point $x$ to point $y$ in time $T$.
This is called the propagator. Superposing different values of the initial position $x$ with an arbitrary initial state $\psi_0(x)$ constructs the final state:
$$\psi_T(y) = \int \psi_0(x)\,K(x, y; T)\,dx.$$
For a spatially homogeneous system, where $K$ is only a function of $y - x$, the integral is a convolution: the final state is the initial state convolved with the propagator.
For a free particle of mass $m$, the propagator can be evaluated either explicitly from the path integral or by noting that the Schrödinger equation is a diffusion equation in imaginary time, so that the solution must be a normalized Gaussian:
$$K(x, y; T) \propto \exp\!\left(\frac{i m (y - x)^2}{2\hbar T}\right).$$
Taking the Fourier transform in produces another Gaussian:
and in -space the proportionality factor here is constant in time, as will be verified in a moment. The Fourier transform in time, extending to be zero for negative times, gives Green's function, or the frequency-space propagator:
which is the reciprocal of the operator that annihilates the wavefunction in the Schrödinger equation, which wouldn't have come out right if the proportionality factor weren't constant in the -space representation.
The infinitesimal term in the denominator is a small positive number, which guarantees that the inverse Fourier transform in will be nonzero only for future times. For past times, the inverse Fourier transform contour closes toward values of where there is no singularity. This guarantees that propagates the particle into the future and is the reason for the subscript "F" on . The infinitesimal term can be interpreted as an infinitesimal rotation toward imaginary time.
It is also possible to reexpress the nonrelativistic time evolution in terms of propagators going toward the past, since the Schrödinger equation is time-reversible. The past propagator is the same as the future propagator except for the obvious difference that it vanishes in the future, and in the Gaussian is replaced by . In this case, the interpretation is that these are the quantities to convolve the final wavefunction so as to get the initial wavefunction:
Given that the only change is the sign of $E$ and $\varepsilon$, the parameter $E$ in Green's function can either be the energy if the paths are going toward the future, or the negative of the energy if the paths are going toward the past.
For a nonrelativistic theory, the time as measured along the path of a moving particle and the time as measured by an outside observer are the same. In relativity, this is no longer true. For a relativistic theory the propagator should be defined as the sum over all paths that travel between two points in a fixed proper time, as measured along the path (these paths describe the trajectory of a particle in space and in time):
The integral above is not trivial to interpret because of the square root. Fortunately, there is a heuristic trick. The sum is over the relativistic arc length of the path of an oscillating quantity, and like the nonrelativistic path integral should be interpreted as slightly rotated into imaginary time. The function can be evaluated when the sum is over paths in Euclidean space:
This describes a sum over all paths of length of the exponential of minus the length. This can be given a probability interpretation. The sum over all paths is a probability average over a path constructed step by step. The total number of steps is proportional to , and each step is less likely the longer it is. By the central limit theorem, the result of many independent steps is a Gaussian of variance proportional to :
The usual definition of the relativistic propagator only asks for the amplitude to travel from one point to another, after summing over all the possible proper times it could take:
where is a weight factor, the relative importance of paths of different proper time. By the translation symmetry in proper time, this weight can only be an exponential factor and can be absorbed into the constant :
This is the Schwinger representation. Taking a Fourier transform over the variable can be done for each value of separately, and because each separate contribution is a Gaussian, gives whose Fourier transform is another Gaussian with reciprocal width. So in -space, the propagator can be reexpressed simply:
which is the Euclidean propagator for a scalar particle. Rotating to be imaginary gives the usual relativistic propagator, up to a factor of and an ambiguity, which will be clarified below:
This expression can be interpreted in the nonrelativistic limit, where it is convenient to split it by partial fractions:
For states where one nonrelativistic particle is present, the initial wavefunction has a frequency distribution concentrated near . When convolving with the propagator, which in space just means multiplying by the propagator, the second term is suppressed and the first term is enhanced. For frequencies near , the dominant first term has the form
This is the expression for the nonrelativistic Green's function of a free Schrödinger particle.
The second term has a nonrelativistic limit also, but this limit is concentrated on frequencies that are negative. The second pole is dominated by contributions from paths where the proper time and the coordinate time are ticking in an opposite sense, which means that the second term is to be interpreted as the antiparticle. The nonrelativistic analysis shows that with this form the antiparticle still has positive energy.
The proper way to express this mathematically is that, adding a small suppression factor in proper time, the limit where of the first term must vanish, while the limit of the second term must vanish. In the Fourier transform, this means shifting the pole in slightly, so that the inverse Fourier transform will pick up a small decay factor in one of the time directions:
Without these terms, the pole contribution could not be unambiguously evaluated when taking the inverse Fourier transform of . The terms can be recombined:
which when factored, produces opposite-sign infinitesimal terms in each factor. This is the mathematically precise form of the relativistic particle propagator, free of any ambiguities. The term introduces a small imaginary part to the , which in the Minkowski version is a small exponential suppression of long paths.
So in the relativistic case, the Feynman path-integral representation of the propagator includes paths going backwards in time, which describe antiparticles. The paths that contribute to the relativistic propagator go forward and backwards in time, and the interpretation of this is that the amplitude for a free particle to travel between two points includes amplitudes for the particle to fluctuate into an antiparticle, travel back in time, then forward again.
Unlike the nonrelativistic case, it is impossible to produce a relativistic theory of local particle propagation without including antiparticles. All local differential operators have inverses that are nonzero outside the light cone, meaning that it is impossible to keep a particle from travelling faster than light. Such a particle cannot have a Green's function that is only nonzero in the future in a relativistically invariant theory.
Functionals of fields
However, the path integral formulation is also extremely important in direct application to quantum field theory, in which the "paths" or histories being considered are not the motions of a single particle, but the possible time evolutions of a field over all space. The action is referred to technically as a functional of the field: , where the field is itself a function of space and time, and the square brackets are a reminder that the action depends on all the field's values everywhere, not just some particular value. One such given function of spacetime is called a field configuration. In principle, one integrates Feynman's amplitude over the class of all possible field configurations.
Much of the formal study of QFT is devoted to the properties of the resulting functional integral, and much effort (not yet entirely successful) has been made toward making these functional integrals mathematically precise.
Such a functional integral is extremely similar to the partition function in statistical mechanics. Indeed, it is sometimes called a partition function, and the two are essentially mathematically identical except for the factor of in the exponent in Feynman's postulate 3. Analytically continuing the integral to an imaginary time variable (called a Wick rotation) makes the functional integral even more like a statistical partition function and also tames some of the mathematical difficulties of working with these integrals.
Expectation values
In quantum field theory, if the action is given by the functional $S$ of field configurations (which only depends locally on the fields), then the time-ordered vacuum expectation value of a polynomially bounded functional $F$, $\langle F \rangle$, is given by
The symbol here is a concise way to represent the infinite-dimensional integral over all possible field configurations on all of space-time. As stated above, the unadorned path integral in the denominator ensures proper normalization.
As a probability
Strictly speaking, the only question that can be asked in physics is: What fraction of states satisfying condition $A$ also satisfy condition $B$? The answer to this is a number between 0 and 1, which can be interpreted as a conditional probability, written as $P(B \mid A)$. In terms of path integration, since $P(B \mid A) = P(A \cap B)/P(A)$, this means
where the functional is the superposition of all incoming states that could lead to the states we are interested in. In particular, this could be a state corresponding to the state of the Universe just after the Big Bang, although for actual calculation this can be simplified using heuristic methods. Since this expression is a quotient of path integrals, it is naturally normalised.
Schwinger–Dyson equations
Since this formulation of quantum mechanics is analogous to classical action principle, one might expect that identities concerning the action in classical mechanics would have quantum counterparts derivable from a functional integral. This is often the case.
In the language of functional analysis, we can write the Euler–Lagrange equations as
(the left-hand side is a functional derivative; the equation means that the action is stationary under small changes in the field configuration). The quantum analogues of these equations are called the Schwinger–Dyson equations.
If the functional measure turns out to be translationally invariant (we'll assume this for the rest of this article, although this does not hold for, let's say nonlinear sigma models), and if we assume that after a Wick rotation
which now becomes
for some , it goes to zero faster than a reciprocal of any polynomial for large values of , then we can integrate by parts (after a Wick rotation, followed by a Wick rotation back) to get the following Schwinger–Dyson equations for the expectation:
for any polynomially-bounded functional . In the deWitt notation this looks like
These equations are the analog of the on-shell EL equations. The time ordering is taken before the time derivatives inside the .
If (called the source field) is an element of the dual space of the field configurations (which has at least an affine structure because of the assumption of the translational invariance for the functional measure), then the generating functional of the source fields is defined to be
Note that
or
where
Basically, if is viewed as a functional distribution (this shouldn't be taken too literally as an interpretation of QFT, unlike its Wick-rotated statistical mechanics analogue, because we have time ordering complications here!), then are its moments, and is its Fourier transform.
If is a functional of , then for an operator , is defined to be the operator that substitutes for . For example, if
and is a functional of , then
Then, from the properties of the functional integrals
we get the "master" Schwinger–Dyson equation:
or
If the functional measure is not translationally invariant, it might be possible to express it as the product , where is a functional and is a translationally invariant measure. This is true, for example, for nonlinear sigma models where the target space is diffeomorphic to . However, if the target manifold is some topologically nontrivial space, the concept of a translation does not even make any sense.
In that case, we would have to replace the in this equation by another functional
If we expand this equation as a Taylor series about $J = 0$, we get the entire set of Schwinger–Dyson equations.
Localization
The path integrals are usually thought of as being the sum of all paths through an infinite space–time. However, in local quantum field theory we would restrict everything to lie within a finite causally complete region, for example inside a double light-cone. This gives a more mathematically precise and physically rigorous definition of quantum field theory.
Ward–Takahashi identities
Now how about the on shell Noether's theorem for the classical case? Does it have a quantum analog as well? Yes, but with a caveat. The functional measure would have to be invariant under the one parameter group of symmetry transformation as well.
Let's just assume for simplicity here that the symmetry in question is local (not local in the sense of a gauge symmetry, but in the sense that the transformed value of the field at any given point under an infinitesimal transformation would only depend on the field configuration over an arbitrarily small neighborhood of the point in question). Let's also assume that the action is local in the sense that it is the integral over spacetime of a Lagrangian, and that
for some function where only depends locally on (and possibly the spacetime position).
If we don't assume any special boundary conditions, this would not be a "true" symmetry in the true sense of the term in general unless $f = 0$ or something. Here, $Q$ is a derivation that generates the one parameter group in question. We could have antiderivations as well, such as BRST and supersymmetry.
Let's also assume
for any polynomially-bounded functional . This property is called the invariance of the measure, and this does not hold in general. (See anomaly (physics) for more details.)
Then,
which implies
where the integral is over the boundary. This is the quantum analog of Noether's theorem.
Now, let's assume even further that is a local integral
where
so that
where
(this is assuming the Lagrangian only depends on and its first partial derivatives! More general Lagrangians would require a modification to this definition!). We're not insisting that is the generator of a symmetry (i.e. we are not insisting upon the gauge principle), but just that is. And we also assume the even stronger assumption that the functional measure is locally invariant:
Then, we would have
Alternatively,
The above two equations are the Ward–Takahashi identities.
Now for the case where $f = 0$, we can forget about all the boundary conditions and locality assumptions. We'd simply have
Alternatively,
Caveats
Need for regulators and renormalization
Path integrals as they are defined here require the introduction of regulators. Changing the scale of the regulator leads to the renormalization group. In fact, renormalization is the major obstruction to making path integrals well-defined.
Ordering prescription
Regardless of whether one works in configuration space or phase space, when equating the operator formalism and the path integral formulation, an ordering prescription is required to resolve the ambiguity in the correspondence between non-commutative operators and the commutative functions that appear in path integrands. For example, the operator can be translated back as either , , or depending on whether one chooses the , , or Weyl ordering prescription; conversely, can be translated to either , , or for the same respective choice of ordering prescription.
Path integral in quantum-mechanical interpretation
In one interpretation of quantum mechanics, the "sum over histories" interpretation, the path integral is taken to be fundamental, and reality is viewed as a single indistinguishable "class" of paths that all share the same events. For this interpretation, it is crucial to understand what exactly an event is. The sum-over-histories method gives identical results to canonical quantum mechanics, and Sinha and Sorkin claim the interpretation explains the Einstein–Podolsky–Rosen paradox without resorting to nonlocality.
Some advocates of interpretations of quantum mechanics emphasizing decoherence have attempted to make more rigorous the notion of extracting a classical-like "coarse-grained" history from the space of all possible histories.
Quantum gravity
Whereas in quantum mechanics the path integral formulation is fully equivalent to other formulations, it may be that it can be extended to quantum gravity, which would make it different from the Hilbert space model. Feynman had some success in this direction, and his work has been extended by Hawking and others. Approaches that use this method include causal dynamical triangulations and spinfoam models.
Quantum tunneling
Quantum tunnelling can be modeled by using the path integral formulation to determine the action of the trajectory through a potential barrier. Using the WKB approximation, the tunneling rate can be determined to be of the form
$$\Gamma = A\,e^{-S_{\mathrm{eff}}/\hbar},$$
with the effective action $S_{\mathrm{eff}}$ and pre-exponential factor $A$. This form is specifically useful in a dissipative system, in which the systems and surroundings must be modeled together. Using the Langevin equation to model Brownian motion, the path integral formulation can be used to determine an effective action and pre-exponential model to see the effect of dissipation on tunnelling. From this model, tunneling rates of macroscopic systems (at finite temperatures) can be predicted.
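As a small numerical companion, the sketch below (an added illustration; the parabolic barrier shape, its parameters, the incident energy, and units $\hbar = m = 1$ are all assumptions of the example) evaluates the WKB exponent $2\int \sqrt{2m(V - E)}\,dx/\hbar$ and the corresponding suppression factor.

```python
# Sketch: WKB tunnelling exponent for a particle of energy E incident on a
# parabolic barrier V(x) = V0 * (1 - (x/a)^2), with hbar = m = 1.
import numpy as np

V0, a, E = 5.0, 1.0, 1.0
x = np.linspace(-a, a, 100001)
V = V0 * (1.0 - (x / a)**2)
integrand = np.sqrt(np.clip(2.0 * (V - E), 0.0, None))   # only where V > E
dx = x[1] - x[0]
S_eff = 2.0 * np.sum(integrand) * dx      # 2 * integral of sqrt(2m(V - E)) dx

print("effective action S_eff:", S_eff)
print("suppression exp(-S_eff):", np.exp(-S_eff))
```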
See also
Static forces and virtual-particle exchange
Feynman checkerboard
Berezin integral
Propagators
Wheeler–Feynman absorber theory
Feynman–Kac formula
Path integrals in polymer science
Remarks
References
Bibliography
This course, designed for mathematicians, is a rigorous introduction to perturbative quantum field theory, using the language of functional integrals.
The 1942 thesis. Also includes Dirac's 1933 paper and Feynman's 1948 publication.
The historical reference written by the inventor of the path integral formulation himself and one of his students.
Highly readable textbook; introduction to relativistic QFT for particle physics.
Discusses the definition of Path Integrals for systems whose kinematical variables are the generators of a real separable, connected Lie group with irreducible, square integrable representations.
A great introduction to Path Integrals (Chapter 1) and QFT in general.
External links
Path integral on Scholarpedia
Path Integrals in Quantum Theories: A Pedagogic 1st Step
A mathematically rigorous approach to perturbative path integrals via animation on YouTube
Feynman's Infinite Quantum Paths | PBS Space Time. July 7, 2017. (Video, 15:48)
Concepts in physics
Statistical mechanics
Quantum mechanics
Quantum field theory
Differential equations
Articles containing video clips
Mathematical physics
Integrals
Microevolution
Microevolution is the change in allele frequencies that occurs over time within a population. This change is due to four different processes: mutation, selection (natural and artificial), gene flow and genetic drift. This change happens over a relatively short (in evolutionary terms) amount of time compared to the changes termed macroevolution.
Population genetics is the branch of biology that provides the mathematical structure for the study of the process of microevolution. Ecological genetics concerns itself with observing microevolution in the wild. Typically, observable instances of evolution are examples of microevolution; for example, bacterial strains that have antibiotic resistance.
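Allele-frequency change under pure genetic drift is easy to simulate with the machinery of population genetics. The sketch below is an added illustration (population size, initial frequency, generation count, and random seed are arbitrary choices): it runs a simple Wright–Fisher model in which each generation's allele count is a binomial sample from the previous generation's frequency.

```python
# Sketch: genetic drift in a Wright-Fisher population.  Each generation, the
# number of copies of allele A among 2N gene copies is a binomial draw with
# success probability equal to the current allele frequency.
import numpy as np

rng = np.random.default_rng(3)
N, p0, generations, replicates = 100, 0.5, 200, 5

for r in range(replicates):
    p = p0
    for _ in range(generations):
        count = rng.binomial(2 * N, p)   # random sampling of gametes
        p = count / (2 * N)
        if p in (0.0, 1.0):              # allele lost or fixed
            break
    print(f"replicate {r}: final frequency {p:.2f}")
```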
Microevolution provides the raw material for macroevolution.
Difference from macroevolution
Macroevolution is guided by sorting of interspecific variation ("species selection"), as opposed to sorting of intraspecific variation in microevolution. Species selection may occur as (a) effect-macroevolution, where organism-level traits (aggregate traits) affect speciation and extinction rates, and (b) strict-sense species selection, where species-level traits (e.g. geographical range) affect speciation and extinction rates. Macroevolution does not produce evolutionary novelties, but it determines their proliferation within the clades in which they evolved, and it adds species-level traits as non-organismic factors of sorting to this process.
Four processes
Mutation
Mutations are changes in the DNA sequence of a cell's genome and are caused by radiation, viruses, transposons and mutagenic chemicals, as well as errors that occur during meiosis or DNA replication. Errors are introduced particularly often in the process of DNA replication, in the polymerization of the second strand. These errors can also be induced by the organism itself, by cellular processes such as hypermutation. Mutations can affect the phenotype of an organism, especially if they occur within the protein coding sequence of a gene. Error rates are usually very low—1 error in every 10–100 million bases—due to the proofreading ability of DNA polymerases. (Without proofreading error rates are a thousandfold higher; because many viruses rely on DNA and RNA polymerases that lack proofreading ability, they experience higher mutation rates.) Processes that increase the rate of changes in DNA are called mutagenic: mutagenic chemicals promote errors in DNA replication, often by interfering with the structure of base-pairing, while UV radiation induces mutations by causing damage to the DNA structure. Chemical damage to DNA occurs naturally as well, and cells use DNA repair mechanisms to repair mismatches and breaks in DNA—nevertheless, the repair sometimes fails to return the DNA to its original sequence.
In organisms that use chromosomal crossover to exchange DNA and recombine genes, errors in alignment during meiosis can also cause mutations. Errors in crossover are especially likely when similar sequences cause partner chromosomes to adopt a mistaken alignment making some regions in genomes more prone to mutating in this way. These errors create large structural changes in DNA sequence—duplications, inversions or deletions of entire regions, or the accidental exchanging of whole parts between different chromosomes (called translocation).
Mutation can result in several different types of change in DNA sequences; these can either have no effect, alter the product of a gene, or prevent the gene from functioning. Studies in the fly Drosophila melanogaster suggest that if a mutation changes a protein produced by a gene, this will probably be harmful, with about 70 percent of these mutations having damaging effects, and the remainder being either neutral or weakly beneficial. Due to the damaging effects that mutations can have on cells, organisms have evolved mechanisms such as DNA repair to remove mutations. Therefore, the optimal mutation rate for a species is a trade-off between costs of a high mutation rate, such as deleterious mutations, and the metabolic costs of maintaining systems to reduce the mutation rate, such as DNA repair enzymes. Viruses that use RNA as their genetic material have rapid mutation rates, which can be an advantage since these viruses will evolve constantly and rapidly, and thus evade the defensive responses of e.g. the human immune system.
Mutations can involve large sections of DNA becoming duplicated, usually through genetic recombination. These duplications are a major source of raw material for evolving new genes, with tens to hundreds of genes duplicated in animal genomes every million years. Most genes belong to larger families of genes of shared ancestry. Novel genes are produced by several methods, commonly through the duplication and mutation of an ancestral gene, or by recombining parts of different genes to form new combinations with new functions.
Here, domains act as modules, each with a particular and independent function, that can be mixed together to produce genes encoding new proteins with novel properties. For example, the human eye uses four genes to make structures that sense light: three for color vision and one for night vision; all four arose from a single ancestral gene. Another advantage of duplicating a gene (or even an entire genome) is that this increases redundancy; this allows one gene in the pair to acquire a new function while the other copy performs the original function. Other types of mutation occasionally create new genes from previously noncoding DNA.
Selection
Selection is the process by which heritable traits that make it more likely for an organism to survive and successfully reproduce become more common in a population over successive generations.
It is sometimes valuable to distinguish between naturally occurring selection (natural selection) and selection that is a manifestation of choices made by humans (artificial selection). This distinction is rather diffuse; natural selection is nevertheless the dominant form of selection.
The natural genetic variation within a population of organisms means that some individuals will survive more successfully than others in their current environment. Factors which affect reproductive success are also important, an issue which Charles Darwin developed in his ideas on sexual selection.
Natural selection acts on the phenotype, or the observable characteristics of an organism, but the genetic (heritable) basis of any phenotype which gives a reproductive advantage will become more common in a population (see allele frequency). Over time, this process can result in adaptations that specialize organisms for particular ecological niches and may eventually result in speciation (the emergence of new species).
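How a reproductive advantage translates into a change in allele frequency can be made concrete with the standard one-locus, two-allele model of population genetics. The sketch below is illustrative only: the genotype fitnesses, starting frequency, and assumption of random mating with non-overlapping generations are hypothetical choices, not values taken from this article.

```python
# Deterministic allele-frequency change under selection at one locus with
# two alleles, A and a. Relative genotype fitnesses are illustrative guesses.

def next_p(p, w_AA=1.0, w_Aa=0.95, w_aa=0.90):
    """One generation of selection with random mating (Hardy-Weinberg zygotes)."""
    q = 1.0 - p
    w_bar = p * p * w_AA + 2 * p * q * w_Aa + q * q * w_aa  # mean fitness
    return (p * p * w_AA + p * q * w_Aa) / w_bar            # new frequency of A

p = 0.01  # allele A starts rare
for generation in range(200):
    p = next_p(p)
print(f"frequency of A after 200 generations: {p:.3f}")
```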
Natural selection is one of the cornerstones of modern biology. The term was introduced by Darwin in his groundbreaking 1859 book On the Origin of Species, in which natural selection was described by analogy to artificial selection, a process by which animals and plants with traits considered desirable by human breeders are systematically favored for reproduction. The concept of natural selection was originally developed in the absence of a valid theory of heredity; at the time of Darwin's writing, nothing was known of modern genetics. The union of traditional Darwinian evolution with subsequent discoveries in classical and molecular genetics is termed the modern evolutionary synthesis. Natural selection remains the primary explanation for adaptive evolution.
Genetic drift
Genetic drift is the change in the relative frequency with which a gene variant (allele) occurs in a population due to random sampling. That is, the alleles in the offspring are a random sample of those in the parents, and chance has a role in determining whether a given individual survives and reproduces. A population's allele frequency for a particular variant is the fraction (or percentage) of the population's gene copies that carry that form.
Genetic drift is an evolutionary process which leads to changes in allele frequencies over time. It may cause gene variants to disappear completely, and thereby reduce genetic variability. In contrast to natural selection, which makes gene variants more common or less common depending on their reproductive success, the changes due to genetic drift are not driven by environmental or adaptive pressures, and may be beneficial, neutral, or detrimental to reproductive success.
The effect of genetic drift is larger in small populations, and smaller in large populations. Vigorous debates have been waged among scientists over the relative importance of genetic drift compared with natural selection. Ronald Fisher held the view that genetic drift plays at most a minor role in evolution, and this remained the dominant view for several decades. In 1968 Motoo Kimura rekindled the debate with his neutral theory of molecular evolution, which claims that most of the changes in the genetic material are caused by genetic drift. The predictions of neutral theory, based on genetic drift, do not fit recent data on whole genomes well: these data suggest that the frequencies of neutral alleles change primarily due to selection at linked sites, rather than due to genetic drift by means of sampling error.
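The claim that drift matters more in small populations can be illustrated with a Wright–Fisher-style simulation, in which each new generation's allele copies are drawn at random from the previous generation. The population sizes, starting frequency, and number of runs below are arbitrary illustrative choices.

```python
import random

def drift_final_frequency(n_individuals, p0=0.5, generations=100, seed=0):
    """Wright-Fisher drift for one biallelic locus in a diploid population:
    each generation, 2N allele copies are sampled from the current frequency."""
    rng = random.Random(seed)
    two_n = 2 * n_individuals
    p = p0
    for _ in range(generations):
        copies = sum(rng.random() < p for _ in range(two_n))  # binomial sampling
        p = copies / two_n
    return p

for n in (10, 100, 1000):
    finals = [drift_final_frequency(n, seed=s) for s in range(20)]
    print(f"N={n:>5}: final frequencies range from {min(finals):.2f} to {max(finals):.2f}")
```

Small populations show final frequencies scattered all the way to fixation or loss, while large populations stay near the starting value, matching the statement above.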
Gene flow
Gene flow is the exchange of genes between populations, which are usually of the same species. Examples of gene flow within a species include the migration and then breeding of organisms, or the exchange of pollen. Gene transfer between species includes the formation of hybrid organisms and horizontal gene transfer.
Migration into or out of a population can change allele frequencies, as well as introducing genetic variation into a population. Immigration may add new genetic material to the established gene pool of a population. Conversely, emigration may remove genetic material. As barriers to reproduction between two diverging populations are required for the populations to become new species, gene flow may slow this process by spreading gene variants between the populations and thereby reducing their genetic divergence. Gene flow is hindered by mountain ranges, oceans and deserts, or even man-made structures such as the Great Wall of China, which has hindered the flow of plant genes.
Depending on how far two species have diverged since their most recent common ancestor, it may still be possible for them to produce offspring, as with horses and donkeys mating to produce mules. Such hybrids are generally infertile, due to the two different sets of chromosomes being unable to pair up during meiosis. In this case, closely related species may regularly interbreed, but hybrids will be selected against and the species will remain distinct. However, viable hybrids are occasionally formed and these new species can either have properties intermediate between their parent species, or possess a totally new phenotype. The importance of hybridization in developing new species of animals is unclear, although cases have been seen in many types of animals, with the gray tree frog being a particularly well-studied example.
Hybridization is, however, an important means of speciation in plants, since polyploidy (having more than two copies of each chromosome) is tolerated in plants more readily than in animals. Polyploidy is important in hybrids as it allows reproduction, with the two different sets of chromosomes each being able to pair with an identical partner during meiosis. Polyploid hybrids also have more genetic diversity, which allows them to avoid inbreeding depression in small populations.
Horizontal gene transfer is the transfer of genetic material from one organism to another organism that is not its offspring; this is most common among bacteria. In medicine, this contributes to the spread of antibiotic resistance, since a bacterium that acquires resistance genes can rapidly transfer them to other species. Horizontal transfer of genes from bacteria to eukaryotes such as the yeast Saccharomyces cerevisiae and the adzuki bean beetle Callosobruchus chinensis may also have occurred. An example of larger-scale transfer is provided by the eukaryotic bdelloid rotifers, which appear to have received a range of genes from bacteria, fungi, and plants. Viruses can also carry DNA between organisms, allowing transfer of genes even across biological domains. Large-scale gene transfer has also occurred between the ancestors of eukaryotic cells and prokaryotes, during the acquisition of chloroplasts and mitochondria.
Gene flow is the transfer of alleles from one population to another.
Migration into or out of a population may be responsible for a marked change in allele frequencies. Immigration may also result in the addition of new genetic variants to the established gene pool of a particular species or population.
There are a number of factors that affect the rate of gene flow between different populations. One of the most significant factors is mobility, as greater mobility of an individual tends to give it greater migratory potential. Animals tend to be more mobile than plants, although pollen and seeds may be carried great distances by animals or wind.
Maintained gene flow between two populations can also lead to a combination of the two gene pools, reducing the genetic variation between the two groups. It is for this reason that gene flow strongly acts against speciation: by recombining the gene pools of the groups, it erases the developing differences in genetic variation that would otherwise have led to full speciation and the creation of daughter species.
For example, if a species of grass grows on both sides of a highway, pollen is likely to be transported from one side to the other and vice versa. If this pollen is able to fertilise the plant where it ends up and produce viable offspring, then the alleles in the pollen have effectively been able to move from the population on one side of the highway to the other.
Origin and extended use of the term
Origin
The term microevolution was first used by botanist Robert Greenleaf Leavitt in the journal Botanical Gazette in 1909, addressing what he called the "mystery" of how formlessness gives rise to form.
...The production of form from formlessness in the egg-derived individual, the multiplication of parts and the orderly creation of diversity among them, in an actual evolution, of which anyone may ascertain the facts, but of which no one has dissipated the mystery in any significant measure. This microevolution forms an integral part of the grand evolution problem and lies at the base of it, so that we shall have to understand the minor process before we can thoroughly comprehend the more general one...
However, Leavitt was using the term to describe what we would now call developmental biology; it was not until Russian entomologist Yuri Filipchenko used the terms "macroevolution" and "microevolution" in 1927 in his German-language work, Variabilität und Variation, that it attained its modern usage. The term was later brought into the English-speaking world by Filipchenko's student Theodosius Dobzhansky in his book Genetics and the Origin of Species (1937).
Use in creationism
In young Earth creationism and baraminology a central tenet is that evolution can explain diversity in a limited number of created kinds which can interbreed (which they call "microevolution") while the formation of new "kinds" (which they call "macroevolution") is impossible. This acceptance of "microevolution" only within a "kind" is also typical of old Earth creationism.
Scientific organizations such as the American Association for the Advancement of Science describe microevolution as small scale change within species, and macroevolution as the formation of new species, but otherwise not being different from microevolution. In macroevolution, an accumulation of microevolutionary changes leads to speciation. The main difference between the two processes is that one occurs within a few generations, whilst the other takes place over thousands of years (i.e. a quantitative difference). Essentially they describe the same process; although evolution beyond the species level results in beginning and ending generations which could not interbreed, the intermediate generations could.
Opponents to creationism argue that changes in the number of chromosomes can be accounted for by intermediate stages in which a single chromosome divides in generational stages, or multiple chromosomes fuse, and cite the chromosome difference between humans and the other great apes as an example. Creationists insist that since the actual divergence between the other great apes and humans was not observed, the evidence is circumstantial.
In his authoritative textbook Evolutionary Biology, biologist Douglas Futuyma describes macroevolution and microevolution as fundamentally similar processes, differing in the timescale over which they play out rather than in kind.
Contrary to the claims of some antievolution proponents, evolution of life forms beyond the species level (i.e. speciation) has indeed been observed and documented by scientists on numerous occasions. In creation science, creationists accepted speciation as occurring within a "created kind" or "baramin", but objected to what they called "third level-macroevolution" of a new genus or higher rank in taxonomy. There is ambiguity in the ideas as to where to draw a line on "species", "created kinds", and what events and lineages fall within the rubric of microevolution or macroevolution.
See also
Punctuated equilibrium - due to gene flow, major evolutionary changes may be rare
References
External links
Microevolution (UC Berkeley)
Microevolution vs Macroevolution
Evolutionary biology concepts
Population genetics
Invariant mass
The invariant mass, rest mass, intrinsic mass, proper mass, or in the case of bound systems simply mass, is the portion of the total mass of an object or system of objects that is independent of the overall motion of the system. More precisely, it is a characteristic of the system's total energy and momentum that is the same in all frames of reference related by Lorentz transformations. If a center-of-momentum frame exists for the system, then the invariant mass of a system is equal to its total mass in that "rest frame". In other reference frames, where the system's momentum is nonzero, the total mass (a.k.a. relativistic mass) of the system is greater than the invariant mass, but the invariant mass remains unchanged.
Because of mass–energy equivalence, the rest energy of the system is simply the invariant mass times the speed of light squared. Similarly, the total energy of the system is its total (relativistic) mass times the speed of light squared.
Systems whose four-momentum is a null vector (for example, a single photon or many photons moving in exactly the same direction) have zero invariant mass and are referred to as massless. A physical object or particle moving faster than the speed of light would have space-like four-momenta (such as the hypothesized tachyon), and these do not appear to exist. Any time-like four-momentum possesses a reference frame where the momentum (3-dimensional) is zero, which is a center of momentum frame. In this case, invariant mass is positive and is referred to as the rest mass.
If objects within a system are in relative motion, then the invariant mass of the whole system will differ from the sum of the objects' rest masses. This is also equal to the total energy of the system divided by c². See mass–energy equivalence for a discussion of definitions of mass. Since the mass of systems must be measured with a weight or mass scale in a center of momentum frame in which the entire system has zero momentum, such a scale always measures the system's invariant mass. For example, a scale would measure the kinetic energy of the molecules in a bottle of gas to be part of the invariant mass of the bottle, and thus also its rest mass. The same is true for massless particles in such a system, which add invariant mass and also rest mass to systems, according to their energy.
For an isolated massive system, the center of mass of the system moves in a straight line with a steady subluminal velocity (with a velocity depending on the reference frame used to view it). Thus, an observer can always be placed to move along with it. In this frame, which is the center-of-momentum frame, the total momentum is zero, and the system as a whole may be thought of as being "at rest" if it is a bound system (like a bottle of gas). In this frame, which exists under these assumptions, the invariant mass of the system is equal to the total system energy (in the zero-momentum frame) divided by c². This total energy in the center of momentum frame is the minimum energy which the system may be observed to have, when seen by various observers from various inertial frames.
Note that for reasons above, such a rest frame does not exist for single photons, or rays of light moving in one direction. When two or more photons move in different directions, however, a center of mass frame (or "rest frame" if the system is bound) exists. Thus, the mass of a system of several photons moving in different directions is positive, which means that an invariant mass exists for this system even though it does not exist for each photon.
Sum of rest masses
The invariant mass of a system includes the mass of any kinetic energy of the system constituents that remains in the center of momentum frame, so the invariant mass of a system may be greater than sum of the invariant masses (rest masses) of its separate constituents. For example, rest mass and invariant mass are zero for individual photons even though they may add mass to the invariant mass of systems. For this reason, invariant mass is in general not an additive quantity (although there are a few rare situations where it may be, as is the case when massive particles in a system without potential or kinetic energy can be added to a total mass).
Consider the simple case of a two-body system, where object A is moving towards another object B which is initially at rest (in any particular frame of reference). The magnitude of the invariant mass of this two-body system (see definition below) is different from the sum of their rest masses (i.e. their respective masses when stationary). Even if we consider the same system from the center-of-momentum frame, where the net momentum is zero, the magnitude of the system's invariant mass is not equal to the sum of the rest masses of the particles within it.
The kinetic energy of such particles and the potential energy of the force fields increase the total energy above the sum of the particle rest masses, and both terms contribute to the invariant mass of the system. The sum of the particle kinetic energies as calculated by an observer is smallest in the center of momentum frame (again, called the "rest frame" if the system is bound).
They will often also interact through one or more of the fundamental forces, giving them a potential energy of interaction, possibly negative.
As defined in particle physics
In particle physics, the invariant mass is equal to the mass in the rest frame of the particle, and can be calculated by the particle's energy $E$ and its momentum $\mathbf{p}$ as measured in any frame, by the energy–momentum relation:
$$m_0^2 c^4 = E^2 - \|\mathbf{p}\|^2 c^2,$$
or in natural units where $c = 1$,
$$m_0^2 = E^2 - \|\mathbf{p}\|^2.$$
This invariant mass is the same in all frames of reference (see also special relativity). This equation says that the invariant mass is the pseudo-Euclidean length of the four-vector $(E, \mathbf{p})$, calculated using the relativistic version of the Pythagorean theorem which has a different sign for the space and time dimensions. This length is preserved under any Lorentz boost or rotation in four dimensions, just like the ordinary length of a vector is preserved under rotations. In quantum theory the invariant mass is a parameter in the relativistic Dirac equation for an elementary particle. The Dirac quantum operator corresponds to the particle four-momentum vector.
Since the invariant mass is determined from quantities which are conserved during a decay, the invariant mass calculated using the energy and momentum of the decay products of a single particle is equal to the mass of the particle that decayed.
The mass of a system of particles can be calculated from the general formula:
$$\left(M c^2\right)^2 = \left(\sum E\right)^2 - \left\|\sum \mathbf{p}\, c\right\|^2$$
where
$M$ is the invariant mass of the system of particles, equal to the mass of the decay particle.
$\sum E$ is the sum of the energies of the particles.
$\sum \mathbf{p}$ is the vector sum of the momenta of the particles (includes both magnitude and direction of the momenta).
The term invariant mass is also used in inelastic scattering experiments. Given an inelastic reaction with total incoming energy larger than the total detected energy (i.e. not all outgoing particles are detected in the experiment), the invariant mass (also known as the "missing mass") $W$ of the reaction is defined as follows (in natural units):
$$W^2 = \left(\sum E_{\text{in}} - \sum E_{\text{out}}\right)^2 - \left\|\sum \mathbf{p}_{\text{in}} - \sum \mathbf{p}_{\text{out}}\right\|^2.$$
If there is one dominant particle which was not detected during an experiment, a plot of the invariant mass will show a sharp peak at the mass of the missing particle.
In those cases when the momentum along one direction cannot be measured (i.e. in the case of a neutrino, whose presence is only inferred from the missing energy) the transverse mass is used.
Example: two-particle collision
In a two-particle collision (or a two-particle decay) the square of the invariant mass (in natural units) is
$$M^2 = (E_1 + E_2)^2 - \left\|\mathbf{p}_1 + \mathbf{p}_2\right\|^2 = m_1^2 + m_2^2 + 2\left(E_1 E_2 - \mathbf{p}_1 \cdot \mathbf{p}_2\right).$$
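Assuming the two-particle expression given above (natural units, c = 1), the invariant mass of a pair of detected particles can be computed directly from their energies and momentum vectors. The numbers below are invented purely for illustration.

```python
import math

def invariant_mass(p1, p2):
    """Invariant mass of a two-particle system in natural units (c = 1).
    Each particle is given as a four-momentum (E, px, py, pz)."""
    E  = p1[0] + p2[0]
    px = p1[1] + p2[1]
    py = p1[2] + p2[2]
    pz = p1[3] + p2[3]
    m_squared = E * E - (px * px + py * py + pz * pz)
    return math.sqrt(max(m_squared, 0.0))

# Two back-to-back massless particles of energy 45 (arbitrary units): each has
# zero invariant mass, but the pair has invariant mass 90.
print(invariant_mass((45.0, 0.0, 0.0, 45.0), (45.0, 0.0, 0.0, -45.0)))
```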
Massless particles
The invariant mass of a system made of two massless particles whose momenta form an angle $\theta$ has a convenient expression:
$$M^2 = 2 E_1 E_2 \left(1 - \cos\theta\right).$$
Collider experiments
In particle collider experiments, one often defines the angular position of a particle in terms of an azimuthal angle $\phi$ and pseudorapidity $\eta$. Additionally the transverse momentum, $p_T$, is usually measured. In this case, if the particles are massless or highly relativistic, then the invariant mass becomes:
$$M^2 = 2\, p_{T1}\, p_{T2} \left(\cosh(\eta_1 - \eta_2) - \cos(\phi_1 - \phi_2)\right).$$
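A quick numerical cross-check of the collider-variable formula against the explicit four-vector calculation can be done as below; the transverse momenta, pseudorapidities, and azimuthal angles are arbitrary illustrative values.

```python
import math

def mass_from_collider_vars(pt1, eta1, phi1, pt2, eta2, phi2):
    """Invariant mass of two massless particles from (pT, eta, phi), with c = 1."""
    return math.sqrt(2.0 * pt1 * pt2 * (math.cosh(eta1 - eta2) - math.cos(phi1 - phi2)))

def mass_from_four_vectors(pt1, eta1, phi1, pt2, eta2, phi2):
    """The same quantity via explicit massless four-vectors (E = |p|)."""
    def four_vector(pt, eta, phi):
        px, py, pz = pt * math.cos(phi), pt * math.sin(phi), pt * math.sinh(eta)
        return (math.hypot(pt, pz), px, py, pz)  # E = pT*cosh(eta) when m = 0
    E1, x1, y1, z1 = four_vector(pt1, eta1, phi1)
    E2, x2, y2, z2 = four_vector(pt2, eta2, phi2)
    m_squared = (E1 + E2) ** 2 - ((x1 + x2) ** 2 + (y1 + y2) ** 2 + (z1 + z2) ** 2)
    return math.sqrt(m_squared)

args = (30.0, 0.5, 0.1, 40.0, -1.2, 2.8)
print(mass_from_collider_vars(*args), mass_from_four_vectors(*args))  # identical values
```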
Rest energy
Rest energy (also called rest mass energy) is the energy associated with a particle's invariant mass.
The rest energy of a particle is defined as
$$E_0 = m_0 c^2,$$
where $c$ is the speed of light in vacuum. In general, only differences in energy have physical significance.
The concept of rest energy follows from the special theory of relativity that leads to Einstein's famous conclusion about equivalence of energy and mass. See mass–energy equivalence.
See also
Mass in special relativity
Invariant (physics)
Transverse mass
References
Citations
Theory of relativity
Mass
Energy (physics)
Physical quantities
Rayleigh–Taylor instability
The Rayleigh–Taylor instability, or RT instability (after Lord Rayleigh and G. I. Taylor), is an instability of an interface between two fluids of different densities which occurs when the lighter fluid is pushing the heavier fluid. Examples include the behavior of water suspended above oil in the gravity of Earth, mushroom clouds like those from volcanic eruptions and atmospheric nuclear explosions, supernova explosions in which expanding core gas is accelerated into denser shell gas, instabilities in plasma fusion reactors and inertial confinement fusion.
Water suspended atop oil is an everyday example of Rayleigh–Taylor instability, and it may be modeled by two completely plane-parallel layers of immiscible fluid, the denser fluid on top of the less dense one and both subject to the Earth's gravity. The equilibrium here is unstable to any perturbations or disturbances of the interface: if a parcel of heavier fluid is displaced downward with an equal volume of lighter fluid displaced upwards, the potential energy of the configuration is lower than the initial state. Thus the disturbance will grow and lead to a further release of potential energy, as the denser material moves down under the (effective) gravitational field, and the less dense material is further displaced upwards. This was the set-up as studied by Lord Rayleigh. The important insight by G. I. Taylor was his realisation that this situation is equivalent to the situation when the fluids are accelerated, with the less dense fluid accelerating into the denser fluid. This occurs deep underwater on the surface of an expanding bubble and in a nuclear explosion.
As the RT instability develops, the initial perturbations progress from a linear growth phase into a non-linear growth phase, eventually developing "plumes" flowing upwards (in the gravitational buoyancy sense) and "spikes" falling downwards. In the linear phase, the fluid movement can be closely approximated by linear equations, and the amplitude of perturbations is growing exponentially with time. In the non-linear phase, perturbation amplitude is too large for a linear approximation, and non-linear equations are required to describe fluid motions. In general, the density disparity between the fluids determines the structure of the subsequent non-linear RT instability flows (assuming other variables such as surface tension and viscosity are negligible here). The difference in the fluid densities divided by their sum is defined as the Atwood number, A. For A close to 0, RT instability flows take the form of symmetric "fingers" of fluid; for A close to 1, the much lighter fluid "below" the heavier fluid takes the form of larger bubble-like plumes.
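The Atwood number defined above is simple to evaluate. In the sketch below, the density pairs are merely illustrative (roughly a liquid over a gas, and two closely matched liquids) rather than measured values.

```python
def atwood_number(rho_heavy, rho_light):
    """Atwood number A = (rho_heavy - rho_light) / (rho_heavy + rho_light)."""
    return (rho_heavy - rho_light) / (rho_heavy + rho_light)

print(atwood_number(1000.0, 1.2))     # ~0.998: large bubble-like plumes (A close to 1)
print(atwood_number(1005.0, 1000.0))  # ~0.0025: near-symmetric fingers (A close to 0)
```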
This process is evident not only in many terrestrial examples, from salt domes to weather inversions, but also in astrophysics and electrohydrodynamics. For example, RT instability structure is evident in the Crab Nebula, in which the expanding pulsar wind nebula powered by the Crab pulsar is sweeping up ejected material from the supernova explosion 1000 years ago. The RT instability has also recently been discovered in the Sun's outer atmosphere, or solar corona, when a relatively dense solar prominence overlies a less dense plasma bubble. This latter case resembles magnetically modulated RT instabilities.
Note that the RT instability is not to be confused with the Plateau–Rayleigh instability (also known as Rayleigh instability) of a liquid jet. This instability, sometimes called the hosepipe (or firehose) instability, occurs due to surface tension, which acts to break a cylindrical jet into a stream of droplets having the same total volume but higher surface area.
Many people have witnessed the RT instability by looking at a lava lamp, although some might claim this is more accurately described as an example of Rayleigh–Bénard convection due to the active heating of the fluid layer at the bottom of the lamp.
Stages of development and eventual evolution into turbulent mixing
The evolution of the RTI follows four main stages. In the first stage, the perturbation amplitudes are small when compared to their wavelengths, the equations of motion can be linearized, resulting in exponential instability growth. In the early portion of this stage, a sinusoidal initial perturbation retains its sinusoidal shape. However, after the end of this first stage, when non-linear effects begin to appear, one observes the beginnings of the formation of the ubiquitous mushroom-shaped spikes (fluid structures of heavy fluid growing into light fluid) and bubbles (fluid structures of light fluid growing into heavy fluid). The growth of the mushroom structures continues in the second stage and can be modeled using buoyancy drag models, resulting in a growth rate that is approximately constant in time. At this point, nonlinear terms in the equations of motion can no longer be ignored. The spikes and bubbles then begin to interact with one another in the third stage. Bubble merging takes place, where the nonlinear interaction of mode coupling acts to combine smaller spikes and bubbles to produce larger ones. Also, bubble competition takes places, where spikes and bubbles of smaller wavelength that have become saturated are enveloped by larger ones that have not yet saturated. This eventually develops into a region of turbulent mixing, which is the fourth and final stage in the evolution. It is generally assumed that the mixing region that finally develops is self-similar and turbulent, provided that the Reynolds number is sufficiently large.
Linear stability analysis
The inviscid two-dimensional Rayleigh–Taylor (RT) instability provides an excellent springboard into the mathematical study of stability because of the simple nature of the base state. Consider a base state in which there is an interface, located at $z = 0$, that separates fluid media with different densities: $\rho_G$ for $z > 0$ and $\rho_L$ for $z < 0$. The gravitational acceleration is described by the vector $\mathbf{g} = (0, 0, -g)$. The velocity field and pressure field in this equilibrium state, denoted with an overbar, are given by
$$\bar{\mathbf{u}} = \mathbf{0}, \qquad \bar{p}(z) = p_0 - \rho_G g z \ \ (z > 0), \qquad \bar{p}(z) = p_0 - \rho_L g z \ \ (z < 0),$$
where the reference location for the pressure is taken to be at $z = 0$. Let this interface be slightly perturbed, so that it assumes the position $z = \eta(x, t)$. Correspondingly, the base state is also slightly perturbed. In the linear theory, we can write
$$\eta = \hat{\eta}\, e^{ikx + \sigma t},$$
where $k$ is the real wavenumber in the $x$-direction and $\sigma$ is the growth rate of the perturbation. Then the linear stability analysis based on the inviscid governing equations shows that
$$\sigma^2 = \frac{\rho_G - \rho_L}{\rho_G + \rho_L}\, g k.$$
Thus, if $\rho_G < \rho_L$, the base state is stable, while if $\rho_G > \rho_L$, it is unstable for all wavenumbers. If the interface has a surface tension $\gamma$, then the dispersion relation becomes
$$\sigma^2 = \frac{(\rho_G - \rho_L)\, g k - \gamma k^3}{\rho_G + \rho_L},$$
which indicates that the instability occurs only for a range of wavenumbers where $k^2 < (\rho_G - \rho_L) g / \gamma$; that is to say, surface tension stabilises large wavenumbers or small length scales. Then the maximum growth rate occurs at the wavenumber $k_m = \sqrt{(\rho_G - \rho_L) g / (3\gamma)}$ and its value is
$$\sigma_{\max}^2 = \frac{2}{3}\,\frac{(\rho_G - \rho_L)\, g}{\rho_G + \rho_L}\, k_m.$$
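To see what the dispersion relation implies numerically, the sketch below evaluates the inviscid growth rate over a range of wavenumbers, with and without surface tension, assuming the form of the dispersion relation reconstructed above; the densities, gravity, and surface tension are illustrative values for a heavy-over-light configuration, not data from any particular experiment.

```python
import math

def rt_growth_rate(k, rho_top, rho_bottom, g=9.81, surface_tension=0.0):
    """Inviscid RT growth rate sigma(k); returns 0 for stable wavenumbers.
    Assumes sigma^2 = [(rho_top - rho_bottom)*g*k - surface_tension*k**3]
                      / (rho_top + rho_bottom)."""
    sigma_sq = ((rho_top - rho_bottom) * g * k - surface_tension * k ** 3) \
               / (rho_top + rho_bottom)
    return math.sqrt(sigma_sq) if sigma_sq > 0.0 else 0.0

# Heavy fluid (1000) over light fluid (1): unstable at long wavelengths,
# stabilised by surface tension at short wavelengths (large k).
for k in (1.0, 10.0, 100.0, 1000.0):
    print(k, rt_growth_rate(k, rho_top=1000.0, rho_bottom=1.0, surface_tension=0.07))
```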
The perturbation introduced to the system is described by a velocity field of infinitesimally small amplitude, Because the fluid is assumed incompressible, this velocity field has the streamfunction representation
where the subscripts indicate partial derivatives. Moreover, in an initially stationary incompressible fluid, there is no vorticity, and the fluid stays irrotational, hence . In the streamfunction representation, Next, because of the translational invariance of the system in the x-direction, it is possible to make the ansatz
where is a spatial wavenumber. Thus, the problem reduces to solving the equation
The domain of the problem is the following: the fluid with label 'L' lives in the region , while the fluid with the label 'G' lives in the upper half-plane . To specify the solution fully, it is necessary to fix conditions at the boundaries and interface. This determines the wave speed c, which in turn determines the stability properties of the system.
The first of these conditions is provided by details at the boundary. The perturbation velocities should satisfy a no-flux condition, so that fluid does not leak out at the boundaries Thus, on , and on . In terms of the streamfunction, this is
The other three conditions are provided by details at the interface .
Continuity of vertical velocity: At , the vertical velocities match, . Using the stream function representation, this gives
Expanding about gives
where H.O.T. means 'higher-order terms'. This equation is the required interfacial condition.
The free-surface condition: At the free surface , the kinematic condition holds:
Linearizing, this is simply
where the velocity is linearized on to the surface . Using the normal-mode and streamfunction representations, this condition is , the second interfacial condition.
Pressure relation across the interface: For the case with surface tension, the pressure difference over the interface at is given by the Young–Laplace equation:
where σ is the surface tension and κ is the curvature of the interface, which in a linear approximation is
Thus,
However, this condition refers to the total pressure (base+perturbed), thus
(As usual, The perturbed quantities can be linearized onto the surface z=0.) Using hydrostatic balance, in the form
this becomes
The perturbed pressures are evaluated in terms of streamfunctions, using the horizontal momentum equation of the linearised Euler equations for the perturbations,
with to yield
Putting this last equation and the jump condition on together,
Substituting the second interfacial condition and using the normal-mode representation, this relation becomes
where there is no need to label (only its derivatives) because
at
Solution
Now that the model of stratified flow has been set up, the solution is at hand. The streamfunction equation with the boundary conditions has the solution
The first interfacial condition states that at , which forces The third interfacial condition states that
Plugging the solution into this equation gives the relation
The A cancels from both sides and we are left with
To understand the implications of this result in full, it is helpful to consider the case of zero surface tension. Then,
and clearly
If , and c is real. This happens when the lighter fluid sits on top;
If , and c is purely imaginary. This happens when the heavier fluid sits on top.
Now, when the heavier fluid sits on top, , and
where is the Atwood number. By taking the positive solution, we see that the solution has the form
and this is associated to the interface position η by: Now define
When the two layers of the fluid are allowed to have a relative velocity, the instability is generalized to the Kelvin–Helmholtz–Rayleigh–Taylor instability, which includes both the Kelvin–Helmholtz instability and the Rayleigh–Taylor instability as special cases. It was recently discovered that the fluid equations governing the linear dynamics of the system admit a parity-time symmetry, and the Kelvin–Helmholtz–Rayleigh–Taylor instability occurs when and only when the parity-time symmetry breaks spontaneously.
Vorticity explanation
The RT instability can be seen as the result of baroclinic torque created by the misalignment of the pressure and density gradients at the perturbed interface, as described by the two-dimensional inviscid vorticity equation, $\frac{D\omega}{Dt} = \frac{1}{\rho^2}\,\nabla\rho \times \nabla p$, where $\omega$ is the vorticity, $\rho$ the density and $p$ the pressure. In this case the dominant pressure gradient is hydrostatic, resulting from the acceleration.
When in the unstable configuration, for a particular harmonic component of the initial perturbation, the torque on the interface creates vorticity that will tend to increase the misalignment of the gradient vectors. This in turn creates additional vorticity, leading to further misalignment. This concept is depicted in the figure, where it is observed that the two counter-rotating vortices have velocity fields that sum at the peak and trough of the perturbed interface. In the stable configuration, the vorticity, and thus the induced velocity field, will be in a direction that decreases the misalignment and therefore stabilizes the system.
A much simpler explanation of the basic physics of the Rayleigh-Taylor instability can be found in Ref.20.
Late-time behaviour
The analysis in the previous section breaks down when the amplitude of the perturbation is large. The growth then becomes non-linear as the spikes and bubbles of the instability tangle and roll up into vortices. Then, as in the figure, numerical simulation of the full problem is required to describe the system.
See also
Saffman–Taylor instability
Richtmyer–Meshkov instability
Kelvin–Helmholtz instability
Mushroom cloud
Plateau–Rayleigh instability
Salt fingering
Hydrodynamic stability
Kármán vortex street
Fluid thread breakup
Rayleigh–Bénard convection
Notes
20. A. R. Piriz, O. D. Cortazar, J. J. López Cela, and N. A. Tahir, "The Rayleigh–Taylor instability", Am. J. Phys. 74, 1095 (2006).
References
Original research papers
(Lord Rayleigh's original 1883 paper is available at: https://www.irphe.fr/~clanet/otherpaperfile/articles/Rayleigh/rayleigh1883.pdf.)
External links
Java demonstration of the RT instability in fluids
Actual images and videos of RT fingers
Experiments on Rayleigh–Taylor instability at the University of Arizona
plasma Rayleigh–Taylor instability experiment at California Institute of Technology
Fluid dynamics
Fluid dynamic instabilities
Plasma instabilities
Centrifugal governor
A centrifugal governor is a specific type of governor with a feedback system that controls the speed of an engine by regulating the flow of fuel or working fluid, so as to maintain a near-constant speed. It uses the principle of proportional control.
Centrifugal governors, also known as "centrifugal regulators" and "fly-ball governors", were invented by Christiaan Huygens and used to regulate the distance and pressure between millstones in windmills in the 17th century. In 1788, James Watt adapted one to control his steam engine where it regulates the admission of steam into the cylinder(s), a development that proved so important he is sometimes called the inventor. Centrifugal governors' widest use was on steam engines during the Steam Age in the 19th century. They are also found on stationary internal combustion engines and variously fueled turbines, and in some modern striking clocks.
A simple governor does not maintain an exact speed but a speed range, since under increasing load the governor opens the throttle as the speed (RPM) decreases.
Operation
The devices shown are on steam engines. Power is supplied to the governor from the engine's output shaft by a belt or chain connected to the lower belt wheel. The governor is connected to a throttle valve that regulates the flow of working fluid (steam) supplying the prime mover. As the speed of the prime mover increases, the central spindle of the governor rotates at a faster rate, and the kinetic energy of the balls increases. This allows the two masses on the lever arms to move outwards and upwards against gravity. If the motion goes far enough, it causes the lever arms to pull down on a thrust bearing, which moves a beam linkage that reduces the aperture of the throttle valve. The rate of working fluid entering the cylinder is thus reduced and the speed of the prime mover is controlled, preventing over-speeding.
Mechanical stops may be used to limit the range of throttle motion, as seen near the masses in the image at right.
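The throttle-closing action described above amounts to proportional feedback: the further the speed rises above a set point, the further the valve closes. The sketch below is a deliberately crude, dimensionless simulation of that idea — the gain, inertia, and load values are invented for illustration and do not model any particular engine — but it reproduces the behaviour noted earlier, that a simple governor holds a speed range rather than an exact speed: the steady speed sags slightly as the load rises.

```python
def steady_speed(load_torque, set_speed=100.0, gain=0.5, inertia=5.0,
                 steps=20000, dt=0.01):
    """Crude proportional governor: throttle opens in proportion to how far the
    speed falls below the set point; engine torque is proportional to throttle."""
    speed, throttle_max = 0.0, 10.0
    for _ in range(steps):
        throttle = min(max(gain * (set_speed - speed), 0.0), throttle_max)
        engine_torque = 2.0 * throttle                 # illustrative engine map
        speed += (engine_torque - load_torque) * dt / inertia
    return speed

for load in (2.0, 6.0, 10.0):
    print(f"load {load}: steady speed ~ {steady_speed(load):.1f}")  # speed droop with load
```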
Non-gravitational regulation
A limitation of the two-arm, two-ball governor is its reliance on gravity, and that the governor must stay upright relative to the surface of the Earth for gravity to retract the balls when the governor slows down.
Governors can be built that do not use gravitational force, by using a single straight arm with weights on both ends, a center pivot attached to a spinning axle, and a spring that tries to force the weights towards the center of the spinning axle. The two weights on opposite ends of the pivot arm counterbalance any gravitational effects, but both weights use centrifugal force to work against the spring and attempt to rotate the pivot arm towards a perpendicular axis relative to the spinning axle.
Spring-retracted non-gravitational governors are commonly used in single-phase alternating current (AC) induction motors to turn off the starting field coil when the motor's rotational speed is high enough.
They are also commonly used in snowmobile and all-terrain vehicle (ATV) continuously variable transmissions (CVT), both to engage/disengage vehicle motion and to vary the transmission's pulley diameter ratio in relation to the engine revolutions per minute.
History
Centrifugal governors were invented by Christiaan Huygens and used to regulate the distance and pressure between millstones in windmills in the 17th century.
James Watt designed his first governor in 1788 following a suggestion from his business partner Matthew Boulton. It was a conical pendulum governor and one of the final series of innovations Watt had employed for steam engines. A giant statue of Watt's governor stands at Smethwick in the English West Midlands.
Uses
Centrifugal governors' widest use was on steam engines during the Steam Age in the 19th century. They are also found on stationary internal combustion engines and variously fueled turbines, and in some modern striking clocks.
Centrifugal governors are used in many modern repeating watches to limit the speed of the striking train, so the repeater does not run too quickly.
Another kind of centrifugal governor consists of a pair of masses on a spindle inside a cylinder, the masses or the cylinder being coated with pads, somewhat like a centrifugal clutch or a drum brake. This is used in a spring-loaded record player and a spring-loaded telephone dial to limit the speed.
Dynamic systems
The centrifugal governor is often used in the cognitive sciences as an example of a dynamic system, in which the representation of information cannot be clearly separated from the operations being applied to the representation. And, because the governor is a servomechanism, its analysis in a dynamic system is not trivial. In 1868, James Clerk Maxwell wrote a famous paper "On Governors" that is widely considered a classic in feedback control theory. Maxwell distinguishes moderators (a centrifugal brake) and governors which control motive power input. He considers devices by James Watt, Professor James Thomson, Fleeming Jenkin, William Thomson, Léon Foucault and Carl Wilhelm Siemens (a liquid governor).
Natural selection
In his famous 1858 paper to the Linnean Society, which led Darwin to publish On the Origin of Species, Alfred Russel Wallace used governors as a metaphor for the evolutionary principle:
The action of this principle is exactly like that of the centrifugal governor of the steam engine, which checks and corrects any irregularities almost before they become evident; and in like manner no unbalanced deficiency in the animal kingdom can ever reach any conspicuous magnitude, because it would make itself felt at the very first step, by rendering existence difficult and extinction almost sure soon to follow.
The cybernetician and anthropologist Gregory Bateson thought highly of Wallace's analogy and discussed the topic in his 1979 book Mind and Nature: A Necessary Unity, and other scholars have continued to explore the connection between natural selection and systems theory.
Culture
A centrifugal governor is part of the city seal of Manchester, New Hampshire in the U.S. and is also used on the city flag. A 2017 effort to change the design was rejected by voters.
A stylized centrifugal governor is also part of the coat of arms of the Swedish Work Environment Authority.
See also
Cataract (beam engine)
Centrifugal switch
Hit and miss engine
References
External links
British inventions
Control devices
Cybernetics
Inventions by Christiaan Huygens
Mechanical power control
Mechanisms (engineering)
Rotating machines
Scottish inventions
Steam engine governors
On shell and off shell
In physics, particularly in quantum field theory, configurations of a physical system that satisfy classical equations of motion are called on the mass shell (on shell); while those that do not are called off the mass shell (off shell).
In quantum field theory, virtual particles are termed off shell because they do not satisfy the energy–momentum relation; real exchange particles do satisfy this relation and are termed on (mass) shell. In classical mechanics for instance, in the action formulation, extremal solutions to the variational principle are on shell and the Euler–Lagrange equations give the on-shell equations. Noether's theorem regarding differentiable symmetries of physical action and conservation laws is another on-shell theorem.
Mass shell
Mass shell is a synonym for mass hyperboloid, meaning the hyperboloid in energy–momentum space describing the solutions to the equation:
$$E^2 = (pc)^2 + (m_0 c^2)^2,$$
the mass–energy equivalence formula which gives the energy $E$ in terms of the momentum $p$ and the rest mass $m_0$ of a particle. The equation for the mass shell is also often written in terms of the four-momentum; in Einstein notation with metric signature (+,−,−,−) and units where the speed of light $c = 1$, as $p^\mu p_\mu \equiv p^2 = m_0^2$. In the literature, one may also encounter $p^\mu p_\mu = -m_0^2$ if the metric signature used is (−,+,+,+).
The four-momentum of an exchanged virtual particle is $q^\mu$, with mass squared $q^2 = q^\mu q_\mu$. The four-momentum of the virtual particle is the difference between the four-momenta of the incoming and outgoing particles.
Virtual particles corresponding to internal propagators in a Feynman diagram are in general allowed to be off shell, but the amplitude for the process will diminish depending on how far off shell they are. This is because the $q^2$-dependence of the propagator is determined by the four-momenta of the incoming and outgoing particles. The propagator typically has singularities on the mass shell.
When speaking of the propagator, negative values for $E$ that satisfy the equation are thought of as being on shell, though the classical theory does not allow negative values for the energy of a particle. This is because the propagator incorporates into one expression the cases in which the particle carries energy in one direction, and in which its antiparticle carries energy in the other direction; negative and positive on-shell $E$ then simply represent opposing flows of positive energy.
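A small numerical illustration of the on-shell condition: the sketch below checks whether a four-momentum satisfies p·p = m² with signature (+,−,−,−) and c = 1, and shows that the momentum transferred by a virtual particle in a simple scattering configuration is space-like and hence off shell. The momenta are invented for the example.

```python
def minkowski_square(p):
    """p.p with metric signature (+,-,-,-); p = (E, px, py, pz), c = 1."""
    E, px, py, pz = p
    return E * E - px * px - py * py - pz * pz

def is_on_shell(p, mass, tol=1e-9):
    return abs(minkowski_square(p) - mass * mass) < tol

p_in  = (2.0, 0.0, 0.0, 3.0 ** 0.5)            # E^2 - |p|^2 = 4 - 3 = 1, mass 1
p_out = (2.0, 1.0, 0.0, 2.0 ** 0.5)            # also 4 - 1 - 2 = 1, mass 1
q = tuple(a - b for a, b in zip(p_in, p_out))  # four-momentum of the exchange

print(is_on_shell(p_in, 1.0), is_on_shell(p_out, 1.0))  # True True
print(minkowski_square(q))  # negative: space-like, off shell for any real mass
```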
Scalar field
An example comes from considering a scalar field in D-dimensional Minkowski space. Consider a Lagrangian density given by $\mathcal{L}(\phi, \partial_\mu \phi)$. The action is
$$S = \int d^D x\, \mathcal{L}(\phi, \partial_\mu \phi).$$
The Euler–Lagrange equation for this action can be found by varying the field and its derivative and setting the variation to zero, and is:
$$\partial_\mu \frac{\partial \mathcal{L}}{\partial(\partial_\mu \phi)} = \frac{\partial \mathcal{L}}{\partial \phi}.$$
Now, consider an infinitesimal spacetime translation $x^\mu \rightarrow x^\mu + \alpha \epsilon^\mu$. The Lagrangian density $\mathcal{L}$ is a scalar, and so will infinitesimally transform as $\delta \mathcal{L} = \alpha \epsilon^\mu \partial_\mu \mathcal{L}$ under the infinitesimal transformation. On the other hand, by Taylor expansion, we have in general
$$\delta \mathcal{L} = \frac{\partial \mathcal{L}}{\partial \phi}\, \delta\phi + \frac{\partial \mathcal{L}}{\partial(\partial_\nu \phi)}\, \delta(\partial_\nu \phi).$$
Substituting $\delta\phi = \alpha \epsilon^\mu \partial_\mu \phi$ and noting that $\delta(\partial_\nu \phi) = \partial_\nu(\delta\phi)$ (since the variations are independent at each point in spacetime):
$$\alpha \epsilon^\mu \partial_\mu \mathcal{L} = \alpha \epsilon^\mu \left[ \frac{\partial \mathcal{L}}{\partial \phi}\, \partial_\mu \phi + \frac{\partial \mathcal{L}}{\partial(\partial_\nu \phi)}\, \partial_\nu \partial_\mu \phi \right].$$
Since this has to hold for independent translations $\epsilon^\mu$, we may "divide" by $\alpha \epsilon^\mu$ and write:
$$\partial_\mu \mathcal{L} = \frac{\partial \mathcal{L}}{\partial \phi}\, \partial_\mu \phi + \frac{\partial \mathcal{L}}{\partial(\partial_\nu \phi)}\, \partial_\nu \partial_\mu \phi.$$
This is an example of an equation that holds off shell, since it is true for any field configuration regardless of whether it respects the equations of motion (in this case, the Euler–Lagrange equation given above). However, we can derive an on shell equation by simply substituting the Euler–Lagrange equation:
$$\partial_\mu \mathcal{L} = \partial_\nu \frac{\partial \mathcal{L}}{\partial(\partial_\nu \phi)}\, \partial_\mu \phi + \frac{\partial \mathcal{L}}{\partial(\partial_\nu \phi)}\, \partial_\nu \partial_\mu \phi.$$
We can write this as:
$$\partial_\nu \left( \frac{\partial \mathcal{L}}{\partial(\partial_\nu \phi)}\, \partial_\mu \phi - \delta^\nu_\mu\, \mathcal{L} \right) = 0.$$
And if we define the quantity in parentheses as $T^\nu{}_\mu$, we have:
$$\partial_\nu T^\nu{}_\mu = 0.$$
This is an instance of Noether's theorem. Here, the conserved quantity is the stress–energy tensor, which is only conserved on shell, that is, if the equations of motion are satisfied.
References
Quantum field theory
Length contraction
Length contraction is the phenomenon that a moving object's length is measured to be shorter than its proper length, which is the length as measured in the object's own rest frame. It is also known as Lorentz contraction or Lorentz–FitzGerald contraction (after Hendrik Lorentz and George Francis FitzGerald) and is usually only noticeable at a substantial fraction of the speed of light. Length contraction is only in the direction in which the body is travelling. For standard objects, this effect is negligible at everyday speeds, and can be ignored for all regular purposes, only becoming significant as the object approaches the speed of light relative to the observer.
History
Length contraction was postulated by George FitzGerald (1889) and Hendrik Antoon Lorentz (1892) to explain the negative outcome of the Michelson–Morley experiment and to rescue the hypothesis of the stationary aether (Lorentz–FitzGerald contraction hypothesis).
Although both FitzGerald and Lorentz alluded to the fact that electrostatic fields in motion were deformed ("Heaviside-Ellipsoid" after Oliver Heaviside, who derived this deformation from electromagnetic theory in 1888), it was considered an ad hoc hypothesis, because at this time there was no sufficient reason to assume that intermolecular forces behave the same way as electromagnetic ones. In 1897 Joseph Larmor developed a model in which all forces are considered to be of electromagnetic origin, and length contraction appeared to be a direct consequence of this model. Yet it was shown by Henri Poincaré (1905) that electromagnetic forces alone cannot explain the electron's stability. So he had to introduce another ad hoc hypothesis: non-electric binding forces (Poincaré stresses) that ensure the electron's stability, give a dynamical explanation for length contraction, and thus hide the motion of the stationary aether.
Albert Einstein (1905) is credited with removing the ad hoc character from the contraction hypothesis, by deriving this contraction from his postulates instead of experimental data. Hermann Minkowski gave the geometrical interpretation of all relativistic effects by introducing his concept of four-dimensional spacetime.
Basis in relativity
First it is necessary to carefully consider the methods for measuring the lengths of resting and moving objects. Here, "object" simply means a distance with endpoints that are always mutually at rest, i.e., that are at rest in the same inertial frame of reference. If the relative velocity between an observer (or his measuring instruments) and the observed object is zero, then the proper length of the object can simply be determined by directly superposing a measuring rod. However, if the relative velocity is greater than zero, then one can proceed as follows:
The observer installs a row of clocks that either are synchronized a) by exchanging light signals according to the Poincaré–Einstein synchronization, or b) by "slow clock transport", that is, one clock is transported along the row of clocks in the limit of vanishing transport velocity. Now, when the synchronization process is finished, the object is moved along the clock row and every clock stores the exact time when the left or the right end of the object passes by. After that, the observer only has to look at the position of a clock A that stored the time when the left end of the object was passing by, and a clock B at which the right end of the object was passing by at the same time. It's clear that distance AB is equal to length of the moving object. Using this method, the definition of simultaneity is crucial for measuring the length of moving objects.
Another method is to use a clock indicating its proper time $T_0$, which is traveling from one endpoint of the rod to the other in time $T$ as measured by clocks in the rod's rest frame. The length of the rod can be computed by multiplying its travel time by its velocity, thus $L_0 = T \cdot v$ in the rod's rest frame or $L = T_0 \cdot v$ in the clock's rest frame.
In Newtonian mechanics, simultaneity and time duration are absolute and therefore both methods lead to the equality of $L$ and $L_0$. Yet in relativity theory the constancy of light velocity in all inertial frames in connection with relativity of simultaneity and time dilation destroys this equality. In the first method an observer in one frame claims to have measured the object's endpoints simultaneously, but the observers in all other inertial frames will argue that the object's endpoints were not measured simultaneously. In the second method, times $T$ and $T_0$ are not equal due to time dilation, resulting in different lengths too.
The deviation between the measurements in all inertial frames is given by the formulas for Lorentz transformation and time dilation (see Derivation). It turns out that the proper length remains unchanged and always denotes the greatest length of an object, and the length of the same object measured in another inertial reference frame is shorter than the proper length. This contraction only occurs along the line of motion, and can be represented by the relation
$$L = \frac{L_0}{\gamma(v)}$$
where
$L$ is the length observed by an observer in motion relative to the object,
$L_0$ is the proper length (the length of the object in its rest frame),
$\gamma(v)$ is the Lorentz factor, defined as
$$\gamma(v) = \frac{1}{\sqrt{1 - v^2/c^2}}$$
where
$v$ is the relative velocity between the observer and the moving object,
$c$ is the speed of light.
Replacing the Lorentz factor in the original formula leads to the relation
$$L = L_0 \sqrt{1 - \frac{v^2}{c^2}}.$$
In this equation both $L$ and $L_0$ are measured parallel to the object's line of movement. For the observer in relative movement, the length of the object is measured by subtracting the simultaneously measured distances of both ends of the object. For more general conversions, see the Lorentz transformations. An observer at rest observing an object travelling very close to the speed of light would observe the length of the object in the direction of motion as very near zero.
Then, at a speed of 0.0447c (about 30 million mph), the contracted length is 99.9% of the length at rest; at a speed of 0.141c (about 95 million mph), the length is still 99%. As the magnitude of the velocity approaches the speed of light, the effect becomes prominent.
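The percentages quoted above follow directly from the contraction formula; a short check, with speeds written as fractions of c:

```python
import math

def contracted_fraction(beta):
    """L / L0 at relative speed v = beta * c, from L = L0 * sqrt(1 - v^2/c^2)."""
    return math.sqrt(1.0 - beta * beta)

for beta in (0.0447, 0.141, 0.5, 0.9, 0.99):
    print(f"v = {beta:.4f} c  ->  L/L0 = {contracted_fraction(beta):.4f}")
```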
Symmetry
The principle of relativity (according to which the laws of nature are invariant across inertial reference frames) requires that length contraction is symmetrical: If a rod is at rest in an inertial frame $S$, it has its proper length in $S$ and its length is contracted in $S'$. However, if a rod rests in $S'$, it has its proper length in $S'$ and its length is contracted in $S$. This can be vividly illustrated using symmetric Minkowski diagrams, because the Lorentz transformation geometrically corresponds to a rotation in four-dimensional spacetime.
Magnetic forces
Magnetic forces are caused by relativistic contraction when electrons are moving relative to atomic nuclei. The magnetic force on a moving charge next to a current-carrying wire is a result of relativistic motion between electrons and protons.
In 1820, André-Marie Ampère showed that parallel wires having currents in the same direction attract one another. In the electrons' frame of reference, the moving wire contracts slightly, causing the protons of the opposite wire to be locally denser. As the electrons in the opposite wire are moving as well, they do not contract (as much). This results in an apparent local imbalance between electrons and protons; the moving electrons in one wire are attracted to the extra protons in the other. The reverse can also be considered. To the static proton's frame of reference, the electrons are moving and contracted, resulting in the same imbalance. The electron drift velocity is relatively very slow, on the order of a meter an hour but the force between an electron and proton is so enormous that even at this very slow speed the relativistic contraction causes significant effects.
This effect also applies to magnetic particles without current, with current being replaced with electron spin.
Experimental verifications
Any observer co-moving with the observed object cannot measure the object's contraction, because he can judge himself and the object as at rest in the same inertial frame in accordance with the principle of relativity (as it was demonstrated by the Trouton–Rankine experiment). So length contraction cannot be measured in the object's rest frame, but only in a frame in which the observed object is in motion. In addition, even in such a non-co-moving frame, direct experimental confirmations of length contraction are hard to achieve, because (a) at the current state of technology, objects of considerable extension cannot be accelerated to relativistic speeds, and (b) the only objects traveling with the speed required are atomic particles, whose spatial extensions are too small to allow a direct measurement of contraction.
However, there are indirect confirmations of this effect in a non-co-moving frame:
It was the negative result of a famous experiment, that required the introduction of length contraction: the Michelson–Morley experiment (and later also the Kennedy–Thorndike experiment). In special relativity its explanation is as follows: In its rest frame the interferometer can be regarded as at rest in accordance with the relativity principle, so the propagation time of light is the same in all directions. Although in a frame in which the interferometer is in motion, the transverse beam must traverse a longer, diagonal path with respect to the non-moving frame thus making its travel time longer, the factor by which the longitudinal beam would be delayed by taking times L/(c−v) and L/(c+v) for the forward and reverse trips respectively is even longer. Therefore, in the longitudinal direction the interferometer is supposed to be contracted, in order to restore the equality of both travel times in accordance with the negative experimental result(s). Thus the two-way speed of light remains constant and the round trip propagation time along perpendicular arms of the interferometer is independent of its motion & orientation.
Given the thickness of the atmosphere as measured in Earth's reference frame, muons' extremely short lifespan shouldn't allow them to make the trip to the surface, even at the speed of light, but they do nonetheless. From the Earth reference frame, however, this is made possible only by the muon's time being slowed down by time dilation. However, in the muon's frame, the effect is explained by the atmosphere being contracted, shortening the trip.
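A rough numerical illustration (the values are assumptions chosen for illustration, not data from the source): for a muon produced about 15 km up, moving at 0.9995 c, with a proper lifetime of about 2.2 microseconds, the same gamma factor that dilates the lifetime in the Earth frame contracts the atmosphere in the muon frame:

    # Minimal sketch of the muon argument with assumed, illustrative numbers.
    import math

    c = 3.0e8               # m/s
    v = 0.9995 * c
    tau = 2.2e-6            # muon proper lifetime, s
    d_atmosphere = 15.0e3   # assumed production altitude, m

    gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)   # ~32 for these numbers

    # Earth frame: time dilation stretches the lifetime.
    print(v * tau)                 # ~660 m: distance covered without time dilation
    print(v * gamma * tau)         # ~21 km: distance covered with time dilation

    # Muon frame: the atmosphere itself is length-contracted.
    print(d_atmosphere / gamma)    # ~470 m: contracted atmospheric thickness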
Heavy ions that are spherical when at rest should assume the form of "pancakes" or flat disks when traveling nearly at the speed of light; in fact, the results obtained from particle collisions can only be explained when the increased nucleon density due to length contraction is considered.
The ionization ability of electrically charged particles with large relative velocities is higher than expected. In pre-relativistic physics the ability should decrease at high velocities, because the time in which ionizing particles in motion can interact with the electrons of other atoms or molecules is diminished; however, in relativity, the higher-than-expected ionization ability can be explained by length contraction of the Coulomb field in frames in which the ionizing particles are moving, which increases their electrical field strength normal to the line of motion.
In synchrotrons and free-electron lasers, relativistic electrons are injected into an undulator, so that synchrotron radiation is generated. In the proper frame of the electrons, the undulator is contracted, which leads to an increased radiation frequency. Additionally, to find the frequency as measured in the laboratory frame, one has to apply the relativistic Doppler effect. So, only with the aid of length contraction and the relativistic Doppler effect can the extremely small wavelength of undulator radiation be explained.
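As an order-of-magnitude sketch, the on-axis undulator wavelength can be estimated as lambda ≈ lambda_u / (2 γ²) when the undulator strength parameter is small; the beam energy and undulator period below are assumed values, not figures from the source:

    # Illustrative estimate of an undulator radiation wavelength.
    E_electron_GeV = 3.0        # assumed storage-ring energy, GeV
    m_e_GeV = 0.000511          # electron rest energy, GeV
    lambda_u = 0.02             # assumed undulator period, m (2 cm)

    gamma = E_electron_GeV / m_e_GeV              # ~5900
    wavelength = lambda_u / (2.0 * gamma ** 2)    # contraction + Doppler combined
    print(gamma, wavelength)                      # ~3e-10 m, i.e. sub-nanometre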
Reality of length contraction
In 1911 Vladimir Varićak asserted that, according to Lorentz, one sees the length contraction in an objective way, while, according to Einstein, it is "only an apparent, subjective phenomenon, caused by the manner of our clock-regulation and length-measurement". Einstein published a rebuttal rejecting this interpretation.
Einstein also argued in that paper, that length contraction is not simply the product of arbitrary definitions concerning the way clock regulations and length measurements are performed. He presented the following thought experiment: Let A'B' and A"B" be the endpoints of two rods of the same proper length L0, as measured on x' and x" respectively. Let them move in opposite directions along the x* axis, considered at rest, at the same speed with respect to it. Endpoints A'A" then meet at point A*, and B'B" meet at point B*. Einstein pointed out that length A*B* is shorter than A'B' or A"B", which can also be demonstrated by bringing one of the rods to rest with respect to that axis.
Paradoxes
Due to superficial application of the contraction formula, some paradoxes can occur. Examples are the ladder paradox and Bell's spaceship paradox. However, those paradoxes can be solved by a correct application of the relativity of simultaneity. Another famous paradox is the Ehrenfest paradox, which proves that the concept of rigid bodies is not compatible with relativity, reducing the applicability of Born rigidity, and showing that for a co-rotating observer the geometry is in fact non-Euclidean.
Visual effects
Length contraction refers to measurements of position made at simultaneous times according to a coordinate system. This could suggest that if one could take a picture of a fast moving object, that the image would show the object contracted in the direction of motion. However, such visual effects are completely different measurements, as such a photograph is taken from a distance, while length contraction can only directly be measured at the exact location of the object's endpoints. It was shown by several authors such as Roger Penrose and James Terrell that moving objects generally do not appear length contracted on a photograph. This result was popularized by Victor Weisskopf in a Physics Today article. For instance, for a small angular diameter, a moving sphere remains circular and is rotated. This kind of visual rotation effect is called Penrose-Terrell rotation.
Derivation
Length contraction can be derived in several ways:
Known moving length
In an inertial reference frame S, let x1 and x2 denote the endpoints of an object in motion. In this frame the object's length L is measured, according to the above conventions, by determining the simultaneous positions of its endpoints at t1 = t2. Meanwhile the proper length of this object, as measured in its rest frame S', can be calculated by using the Lorentz transformation. Transforming the time coordinates from S into S' results in different times, but this is not problematic, since the object is at rest in S' where it does not matter when the endpoints are measured. Therefore the transformation of the spatial coordinates suffices, which gives
x1' = γ(x1 − v t1) and x2' = γ(x2 − v t2), with the Lorentz factor γ = 1/√(1 − v²/c²).
Since t1 = t2, and by setting L = x2 − x1 and L0' = x2' − x1', the proper length in S' is given by
L0' = x2' − x1' = γ(x2 − x1) = γ L.
Therefore the object's length, measured in the frame S, is contracted by a factor γ:
L = L0'/γ.
Likewise, according to the principle of relativity, an object that is at rest in S will also be contracted in S'. By exchanging the above signs and primes symmetrically, it follows that
L0 = γ L'.
Thus an object at rest in S, when measured in S', will have the contracted length
L' = L0/γ.
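The algebra above can also be verified symbolically. The following sketch uses the sympy library (an illustrative check, not part of the original derivation) to apply the Lorentz transformation to the two simultaneously measured endpoints and recover L = L0/γ:

    # Symbolic check of the "known moving length" derivation.
    import sympy as sp

    v, c, t = sp.symbols('v c t', positive=True)
    x1, x2 = sp.symbols('x1 x2', real=True)
    gamma = 1 / sp.sqrt(1 - v**2 / c**2)

    # Lorentz transformation of the endpoint positions, measured simultaneously at time t in S.
    x1p = gamma * (x1 - v * t)
    x2p = gamma * (x2 - v * t)

    L = x2 - x1                      # length measured in S
    L0 = sp.simplify(x2p - x1p)      # proper length in the rest frame S'

    print(sp.simplify(L0 / L))       # prints the Lorentz factor, confirming L0 = gamma * L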
Known proper length
Conversely, if the object rests in S and its proper length L0 = x2 − x1 is known, the simultaneity of the measurements at the object's endpoints has to be considered in another frame S', as the object constantly changes its position there. Therefore, both spatial and temporal coordinates must be transformed:
x1' = γ(x1 − v t1), x2' = γ(x2 − v t2) and t1' = γ(t1 − v x1/c²), t2' = γ(t2 − v x2/c²).
Computing the length interval Δx' = x2' − x1', assuming simultaneous time measurement Δt' = t2' − t1' = 0, and plugging in the proper length L0 = x2 − x1, it follows:
Δx' = γ(L0 − v Δt)   (1)
Δt' = γ(Δt − v L0/c²) = 0.   (2)
Equation (2) gives
Δt = v L0/c²,
which, when plugged into (1), demonstrates that Δx' becomes the contracted length L':
L' = L0/γ.
Likewise, the same method gives a symmetric result for an object at rest in S':
L = L0'/γ.
Using time dilation
Length contraction can also be derived from time dilation, according to which the rate of a single "moving" clock (indicating its proper time T0) is lower with respect to two synchronized "resting" clocks (indicating T). Time dilation was experimentally confirmed multiple times, and is represented by the relation
T = γ T0.
Suppose a rod of proper length L0 at rest in S and a clock at rest in S' are moving along each other with speed v. Since, according to the principle of relativity, the magnitude of the relative velocity is the same in either reference frame, the respective travel times of the clock between the rod's endpoints are given by T = L0/v in S and T0' = L'/v in S', thus L0 = T v and L' = T0' v. By inserting the time dilation formula, the ratio between those lengths is
L'/L0 = T0'/T = 1/γ.
Therefore, the length measured in S' is given by
L' = L0/γ.
So since the clock's travel time across the rod is longer in S than in S' (time dilation in S), the rod's length is also longer in S than in S' (length contraction in S'). Likewise, if the clock were at rest in S and the rod in S', the above procedure would give
L = L0'/γ.
Geometrical considerations
Additional geometrical considerations show that length contraction can be regarded as a trigonometric phenomenon, with analogy to parallel slices through a cuboid before and after a rotation in E3 (see left half figure at the right). This is the Euclidean analog of boosting a cuboid in E1,2. In the latter case, however, we can interpret the boosted cuboid as the world slab of a moving plate.
Image: Left: a rotated cuboid in three-dimensional euclidean space E3. The cross section is longer in the direction of the rotation than it was before the rotation. Right: the world slab of a moving thin plate in Minkowski spacetime (with one spatial dimension suppressed) E1,2, which is a boosted cuboid. The cross section is thinner in the direction of the boost than it was before the boost. In both cases, the transverse directions are unaffected and the three planes meeting at each corner of the cuboids are mutually orthogonal (in the sense of E1,2 at right, and in the sense of E3 at left).
In special relativity, Poincaré transformations are a class of affine transformations which can be characterized as the transformations between alternative Cartesian coordinate charts on Minkowski spacetime corresponding to alternative states of inertial motion (and different choices of an origin). Lorentz transformations are Poincaré transformations which are linear transformations (they preserve the origin). Lorentz transformations play the same role in Minkowski geometry (the Lorentz group forms the isotropy group of the self-isometries of the spacetime) that is played by rotations in Euclidean geometry. Indeed, special relativity largely comes down to studying a kind of non-Euclidean (hyperbolic) trigonometry in Minkowski spacetime.
References
External links
Physics FAQ: Can You See the Lorentz–Fitzgerald Contraction? Or: Penrose-Terrell Rotation; The Barn and the Pole
Special relativity
Length
Hendrik Lorentz
Homogeneity and heterogeneity | Homogeneity and heterogeneity are concepts relating to the uniformity of a substance, process or image. A homogeneous feature is uniform in composition or character (i.e. color, shape, size, weight, height, distribution, texture, language, income, disease, temperature, radioactivity, architectural design, etc.); one that is heterogeneous is distinctly nonuniform in at least one of these qualities.
Etymology and spelling
The words homogeneous and heterogeneous come from Medieval Latin homogeneus and heterogeneus, from Ancient Greek ὁμογενής (homogenēs) and ἑτερογενής (heterogenēs), from ὁμός (homos, "same") and ἕτερος (heteros, "other, another, different") respectively, followed by γένος (genos, "kind"); -ous is an adjectival suffix.
Alternate spellings omitting the last -e- (and the associated pronunciations) are common, but mistaken: homogenous is strictly a biological/pathological term which has largely been replaced by homologous. But use of homogenous to mean homogeneous has seen a rise since 2000, enough for it to now be considered an "established variant". Similarly, heterogenous is a spelling traditionally reserved to biology and pathology, referring to the property of an object in the body having its origin outside the body.
Scaling
The concepts apply at every level of complexity: atoms, galaxies, plants, animals, humans, and other living organisms can each be described as homogeneous in some respects and heterogeneous in others.
Hence, an element may be homogeneous on a larger scale, compared to being heterogeneous on a smaller scale. This is known as an effective medium approximation.
Examples
Various disciplines understand heterogeneity, or being heterogeneous, in different ways.
Biology
Environmental heterogeneity
Environmental heterogeneity (EH) is a hypernym for different environmental factors that contribute to the diversity of species, like climate, topography, and land cover. Biodiversity is correlated with geodiversity on a global scale. Heterogeneity in geodiversity features and environmental variables are indicators of environmental heterogeneity. They drive biodiversity at local and regional scales.
The scientific literature in ecology contains a large number of different terms for environmental heterogeneity, often undefined or conflicting in their meaning; several of these terms are used as synonyms of environmental heterogeneity.
Chemistry
Homogeneous and heterogeneous mixtures
In chemistry, a heterogeneous mixture consists of either or both of 1) multiple states of matter or 2) hydrophilic and hydrophobic substances in one mixture; an example of the latter would be a mixture of water, octane, and silicone grease. Heterogeneous solids, liquids, and gases may be made homogeneous by melting, stirring, or by allowing time to pass for diffusion to distribute the molecules evenly. For example, adding dye to water will create a heterogeneous solution at first, but will become homogeneous over time. Entropy allows for heterogeneous substances to become homogeneous over time.
A heterogeneous mixture is a mixture of two or more compounds. Examples are: mixtures of sand and water or sand and iron filings, a conglomerate rock, water and oil, a salad, trail mix, and concrete (not cement). A mixture can be judged homogeneous when it is uniform throughout: once everything has settled, the liquid, gas, or object has a single color and form. Various models have been proposed to describe the concentrations in different phases; the phenomena to be considered are mass-transfer rates and reaction.
Homogeneous and heterogeneous reactions
Homogeneous reactions are chemical reactions in which the reactants and products are in the same phase, while heterogeneous reactions have reactants in two or more phases. Reactions that take place on the surface of a catalyst of a different phase are also heterogeneous. A reaction between two gases or two miscible liquids is homogeneous. A reaction between a gas and a liquid, a gas and a solid or a liquid and a solid is heterogeneous.
Geology
Earth is a heterogeneous substance in many aspects; for instance, rocks (geology) are inherently heterogeneous, usually occurring at the micro-scale and mini-scale.
Linguistics
In formal semantics, homogeneity is the phenomenon in which plural expressions imply "all" when asserted but "none" when negated. For example, the English sentence "Robin read the books" means that Robin read all the books, while "Robin didn't read the books" means that she read none of them. Neither sentence can be asserted if Robin read exactly half of the books. This is a puzzle because the negative sentence does not appear to be the classical negation of the sentence. A variety of explanations have been proposed including that natural language operates on a trivalent logic.
Information technology
With information technology, heterogeneous computing occurs in a network comprising different types of computers, potentially with vastly differing memory sizes, processing power and even basic underlying architecture.
Mathematics and statistics
In algebra, a homogeneous polynomial is one whose nonzero terms all have the same total degree; for example, x³ + 2x²y + y³ is homogeneous of degree 3.
In the study of binary relations, a homogeneous relation R is defined on a single set (R ⊆ X × X), while a heterogeneous relation concerns possibly distinct sets (R ⊆ X × Y, where X and Y may or may not be equal).
In statistical meta-analysis, study heterogeneity is when multiple studies on an effect are measuring somewhat different effects due to differences in subject population, intervention, choice of analysis, experimental design, etc.; this can cause problems in attempts to summarize the meaning of the studies.
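As a toy illustration of how such heterogeneity is quantified in practice, the following sketch computes Cochran's Q and the I² statistic for a handful of made-up study results (the effect sizes and variances are invented for the example):

    # Toy sketch: quantifying study heterogeneity with Cochran's Q and I^2.
    effects = [0.30, 0.45, 0.10, 0.60]       # hypothetical study effect sizes
    variances = [0.02, 0.03, 0.015, 0.05]    # hypothetical within-study variances

    weights = [1.0 / v for v in variances]                       # inverse-variance weights
    pooled = sum(w * y for w, y in zip(weights, effects)) / sum(weights)

    Q = sum(w * (y - pooled) ** 2 for w, y in zip(weights, effects))
    df = len(effects) - 1
    I2 = max(0.0, (Q - df) / Q) * 100.0      # percentage of variation attributed to heterogeneity

    print(round(pooled, 3), round(Q, 2), round(I2, 1))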
Medicine
In medicine and genetics, a genetic or allelic heterogeneous condition is one where the same disease or condition can be caused, or contributed to, by several factors, or in genetic terms, by varying or different genes or alleles.
In cancer research, cancer cell heterogeneity is thought to be one of the underlying reasons that make treatment of cancer difficult.
Physics
In physics, "heterogeneous" is understood to mean "having physical properties that vary within the medium".
Sociology
In sociology, "heterogeneous" may refer to a society or group that includes individuals of differing ethnicities, cultural backgrounds, sexes, or ages. Diverse is the more common synonym in the context.
See also
Complete spatial randomness
Heterologous
Epidemiology
Spatial analysis
Statistical hypothesis testing
Homogeneity blockmodeling
References
External links
Chemical reactions
Scientific terminology
Scalar potential | In mathematical physics, scalar potential describes the situation where the difference in the potential energies of an object in two different positions depends only on the positions, not upon the path taken by the object in traveling from one position to the other. It is a scalar field in three-space: a directionless value (scalar) that depends only on its location. A familiar example is potential energy due to gravity.
A scalar potential is a fundamental concept in vector analysis and physics (the adjective scalar is frequently omitted if there is no danger of confusion with vector potential). The scalar potential is an example of a scalar field. Given a vector field F, the scalar potential P is defined such that
F = −∇P = −(∂P/∂x, ∂P/∂y, ∂P/∂z),
where ∇P is the gradient of P and the second part of the equation is minus the gradient for a function of the Cartesian coordinates x, y, z. In some cases, mathematicians may use a positive sign in front of the gradient to define the potential. Because of this definition of P in terms of the gradient, the direction of F at any point is the direction of the steepest decrease of P at that point, and its magnitude is the rate of that decrease per unit length.
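A minimal numerical sketch of this definition, assuming an arbitrary smooth potential P chosen purely for illustration, approximates F = −∇P by central differences:

    # Numerical illustration of F = -grad P for an arbitrary scalar potential.
    import numpy as np

    def P(x, y, z):
        return x**2 + 2.0 * y**2 + 3.0 * z**2   # arbitrary smooth scalar potential

    def F(x, y, z, h=1e-6):
        # central-difference approximation of -grad P
        return np.array([
            -(P(x + h, y, z) - P(x - h, y, z)) / (2 * h),
            -(P(x, y + h, z) - P(x, y - h, z)) / (2 * h),
            -(P(x, y, z + h) - P(x, y, z - h)) / (2 * h),
        ])

    print(F(1.0, 1.0, 1.0))   # approximately [-2, -4, -6], pointing toward decreasing P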
In order for F to be described in terms of a scalar potential only, any of the following equivalent statements have to be true:
−∫ from a to b of F · dl = P(b) − P(a), where the integration is over a Jordan arc passing from location a to location b and P(b) is evaluated at location b.
∮ F · dl = 0, where the integral is over any simple closed path, otherwise known as a Jordan curve.
∇ × F = 0.
The first of these conditions represents the fundamental theorem of the gradient and is true for any vector field that is a gradient of a differentiable single-valued scalar field P. The second condition is a requirement of F so that it can be expressed as the gradient of a scalar function. The third condition re-expresses the second condition in terms of the curl of F using the fundamental theorem of the curl. A vector field F that satisfies these conditions is said to be irrotational (conservative).
Scalar potentials play a prominent role in many areas of physics and engineering. The gravity potential is the scalar potential associated with the gravity per unit mass, i.e., the acceleration due to the field, as a function of position. The gravity potential is the gravitational potential energy per unit mass. In electrostatics the electric potential is the scalar potential associated with the electric field, i.e., with the electrostatic force per unit charge. The electric potential is in this case the electrostatic potential energy per unit charge. In fluid dynamics, irrotational lamellar fields have a scalar potential only in the special case when the field is a Laplacian field. Certain aspects of the nuclear force can be described by a Yukawa potential. Potentials play a prominent role in the Lagrangian and Hamiltonian formulations of classical mechanics. Further, the scalar potential is a fundamental quantity in quantum mechanics.
Not every vector field has a scalar potential. Those that do are called conservative, corresponding to the notion of conservative force in physics. Examples of non-conservative forces include frictional forces, magnetic forces, and, in fluid mechanics, a solenoidal velocity field. By the Helmholtz decomposition theorem, however, all sufficiently well-behaved vector fields can be described in terms of a scalar potential and a corresponding vector potential. In electrodynamics, the electromagnetic scalar and vector potentials are known together as the electromagnetic four-potential.
Integrability conditions
If F is a conservative vector field (also called irrotational, curl-free, or potential), and its components have continuous partial derivatives, the potential of F with respect to a reference point r0 is defined in terms of the line integral
V(r) = −∫ from r0 to r of F(r') · dr',
where the integral is taken over a parametrized path from r0 to r.
The fact that the line integral depends on the path only through its terminal points r0 and r is, in essence, the path independence property of a conservative vector field. The fundamental theorem of line integrals implies that if V is defined in this way, then F = −∇V, so that V is a scalar potential of the conservative vector field F. Scalar potential is not determined by the vector field alone: indeed, the gradient of a function is unaffected if a constant is added to it. If V is defined in terms of the line integral, the ambiguity of V reflects the freedom in the choice of the reference point r0.
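The path-independence property can be demonstrated numerically. In the sketch below, the field, the endpoints, and the two paths are arbitrary choices made for illustration; integrating F = −∇P along either path returns the same potential difference P(b) − P(a):

    # Numerical check of path independence for a conservative field.
    import numpy as np

    def F(r):
        x, y = r
        return np.array([-2.0 * x, -2.0 * y])        # F = -grad P for P = x^2 + y^2

    def line_integral(F, path, n=20000):
        t = np.linspace(0.0, 1.0, n)
        pts = np.array([path(ti) for ti in t])
        dr = np.diff(pts, axis=0)
        mid = 0.5 * (pts[1:] + pts[:-1])              # midpoint rule along the path
        fvals = np.array([F(m) for m in mid])
        return float(np.sum(np.einsum('ij,ij->i', fvals, dr)))

    a, b = np.array([0.0, 0.0]), np.array([1.0, 2.0])
    straight = lambda t: a + t * (b - a)
    curved = lambda t: np.array([t, 2.0 * t**2])      # different path, same endpoints

    print(-line_integral(F, straight))   # ~5.0 = P(b) - P(a)
    print(-line_integral(F, curved))     # ~5.0 as well: independent of the path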
Altitude as gravitational potential energy
An example is the (nearly) uniform gravitational field near the Earth's surface. It has a potential energy
U = m g h,
where U is the gravitational potential energy and h is the height above the surface. This means that gravitational potential energy on a contour map is proportional to altitude. On a contour map, the two-dimensional negative gradient of the altitude is a two-dimensional vector field, whose vectors are always perpendicular to the contours and also perpendicular to the direction of gravity. But on the hilly region represented by the contour map, the three-dimensional negative gradient of U always points straight downwards in the direction of gravity, with magnitude mg. However, a ball rolling down a hill cannot move directly downwards due to the normal force of the hill's surface, which cancels out the component of gravity perpendicular to the hill's surface. The component of gravity that remains to move the ball is parallel to the surface:
F_S = −m g sin θ,
where θ is the angle of inclination, and the component of F_S perpendicular to gravity is
F_P = −m g sin θ cos θ = −(m g/2) sin 2θ.
This force F_P, parallel to the ground, is greatest when θ is 45 degrees.
Let Δh be the uniform interval of altitude between contours on the contour map, and let Δx be the distance between two contours. Then
θ = arctan(Δh/Δx),
so that
F_P = −m g · Δx Δh / (Δx² + Δh²).
However, on a contour map, the gradient is inversely proportional to Δx, which is not similar to the force F_P: altitude on a contour map is not exactly a two-dimensional potential field. The magnitudes of the forces are different, but the directions of the forces are the same on a contour map as well as on the hilly region of the Earth's surface represented by the contour map.
Pressure as buoyant potential
In fluid mechanics, a fluid in equilibrium but in the presence of a uniform gravitational field is permeated by a uniform buoyant force that cancels out the gravitational force: that is how the fluid maintains its equilibrium. This buoyant force is the negative gradient of pressure:
f_B = −∇p.
Since buoyant force points upwards, in the direction opposite to gravity, then pressure in the fluid increases downwards. Pressure in a static body of water increases proportionally to the depth below the surface of the water. The surfaces of constant pressure are planes parallel to the surface, which can be characterized as the plane of zero pressure.
If the liquid has a vertical vortex (whose axis of rotation is perpendicular to the surface), then the vortex causes a depression in the pressure field. The surface of the liquid inside the vortex is pulled downwards, as are any surfaces of equal pressure, which still remain parallel to the liquid's surface. The effect is strongest inside the vortex and decreases rapidly with distance from the vortex axis.
The buoyant force due to a fluid on a solid object immersed and surrounded by that fluid can be obtained by integrating the negative pressure gradient along the surface of the object:
Scalar potential in Euclidean space
In 3-dimensional Euclidean space R³, the scalar potential of an irrotational vector field E is given by
Φ(r) = (1/4π) ∫ over R³ of (∇' · E(r')) / |r − r'| dV(r'),
where dV(r') is an infinitesimal volume element with respect to r'. Then
E = −∇Φ.
This holds provided is continuous and vanishes asymptotically to zero towards infinity, decaying faster than and if the divergence of likewise vanishes towards infinity, decaying faster than .
Written another way, let
Γ(r) = 1/(4π |r|)
be the Newtonian potential. This is the fundamental solution of the Laplace equation, meaning that the Laplacian of Γ is equal to the negative of the Dirac delta function:
∇²Γ(r) + δ(r) = 0.
Then the scalar potential is the divergence of the convolution of E with Γ:
Φ = ∇ · (E ∗ Γ).
Indeed, convolution of an irrotational vector field with a rotationally invariant potential is also irrotational. For an irrotational vector field , it can be shown that
Hence
as required.
More generally, the formula
holds in -dimensional Euclidean space with the Newtonian potential given then by
where is the volume of the unit -ball. The proof is identical. Alternatively, integration by parts (or, more rigorously, the properties of convolution) gives
See also
Gradient theorem
Fundamental theorem of vector analysis
Equipotential (isopotential) lines and surfaces
Notes
References
External links
Potentials
Vector calculus
Potential
Synchrotron radiation | Synchrotron radiation (also known as magnetobremsstrahlung radiation) is the electromagnetic radiation emitted when relativistic charged particles are subject to an acceleration perpendicular to their velocity. It is produced artificially in some types of particle accelerators or naturally by fast electrons moving through magnetic fields. The radiation produced in this way has a characteristic polarization, and the frequencies generated can range over a large portion of the electromagnetic spectrum.
Synchrotron radiation is similar to bremsstrahlung radiation, which is emitted by a charged particle when the acceleration is parallel to the direction of motion. The general term for radiation emitted by particles in a magnetic field is gyromagnetic radiation, for which synchrotron radiation is the ultra-relativistic special case. Radiation emitted by charged particles moving non-relativistically in a magnetic field is called cyclotron emission. For particles in the mildly relativistic range (≈85% of the speed of light), the emission is termed gyro-synchrotron radiation.
In astrophysics, synchrotron emission occurs, for instance, due to ultra-relativistic motion of a charged particle around a black hole. When the source follows a circular geodesic around the black hole, the synchrotron radiation occurs for orbits close to the photon sphere, where the motion is in the ultra-relativistic regime.
History
Synchrotron radiation was first observed by technician Floyd Haber, on April 24, 1947, at the 70 MeV electron synchrotron of the General Electric research laboratory in Schenectady, New York. While this was not the first synchrotron built, it was the first with a transparent vacuum tube, allowing the radiation to be directly observed.
The circumstances of the discovery were later recounted by Herbert Pollock.
Description
A direct consequence of Maxwell's equations is that accelerated charged particles always emit electromagnetic radiation. Synchrotron radiation is the special case of charged particles moving at relativistic speed undergoing acceleration perpendicular to their direction of motion, typically in a magnetic field. In such a field, the force due to the field is always perpendicular to both the direction of motion and to the direction of field, as shown by the Lorentz force law.
The power carried by the radiation is found (in SI units) by the relativistic Larmor formula (a numerical sketch follows the list of symbols below):
P = q² a² γ⁴ / (6 π ε₀ c³) = q² c β⁴ γ⁴ / (6 π ε₀ ρ²),
where
ε₀ is the vacuum permittivity,
q is the particle charge,
a is the magnitude of the acceleration,
c is the speed of light,
γ is the Lorentz factor,
β = v/c is the particle speed divided by the speed of light,
ρ is the radius of curvature of the particle trajectory.
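Evaluating the formula above for a single electron gives a feel for the magnitudes involved; the 3 GeV beam energy and 10 m bending radius below are assumed, illustrative values:

    # Order-of-magnitude sketch: power radiated by one electron on a circular orbit.
    import math

    q = 1.602e-19          # C
    c = 2.998e8            # m/s
    eps0 = 8.854e-12       # F/m
    m_e_c2 = 0.511e6 * q   # electron rest energy, J

    E = 3.0e9 * q          # assumed electron energy: 3 GeV
    rho = 10.0             # assumed bending radius, m

    gamma = E / m_e_c2
    beta = math.sqrt(1.0 - 1.0 / gamma**2)

    P = q**2 * c * beta**4 * gamma**4 / (6.0 * math.pi * eps0 * rho**2)
    print(gamma, P)        # power in watts radiated by a single electron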
The force on the emitting electron is given by the Abraham–Lorentz–Dirac force.
When the radiation is emitted by a particle moving in a plane, the radiation is linearly polarized when observed in that plane, and circularly polarized when observed at a small angle to that plane. Considering quantum mechanics, however, this radiation is emitted in discrete packets of photons and has significant effects in accelerators, called quantum excitation. For a given acceleration, the average energy of emitted photons is proportional to γ³ and the emission rate to γ.
From accelerators
Circular accelerators will always produce gyromagnetic radiation as the particles are deflected in the magnetic field. However, the quantity and properties of the radiation are highly dependent on the nature of the acceleration taking place. For example, due to the difference in mass, the factor of γ⁴ in the formula for the emitted power means that, at the same energy, electrons radiate energy at approximately 10¹³ times the rate of protons.
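A quick check of that scaling (the fourth power arises because, at equal particle energy, γ = E/mc², so γ⁴ scales as 1/m⁴):

    # Ratio of electron to proton synchrotron power at the same energy and radius.
    m_p_over_m_e = 1836.15
    print(m_p_over_m_e ** 4)   # ~1.1e13: electrons radiate ~10^13 times more than protons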
Energy loss from synchrotron radiation in circular accelerators was originally considered a nuisance, as additional energy must be supplied to the beam in order to offset the losses. However, beginning in the 1980s, circular electron accelerators known as light sources have been constructed to deliberately produce intense beams of synchrotron radiation for research.
In astronomy
Synchrotron radiation is also generated by astronomical objects, typically where relativistic electrons spiral (and hence change velocity) through magnetic fields.
Two of its characteristics include power-law energy spectra and polarization. It is considered to be one of the most powerful tools in the study of extra-solar magnetic fields wherever relativistic charged particles are present. Most known cosmic radio sources emit synchrotron radiation. It is often used to estimate the strength of large cosmic magnetic fields as well as analyze the contents of the interstellar and intergalactic media.
History of detection
This type of radiation was first detected in a jet emitted by Messier 87 in 1956 by Geoffrey R. Burbidge, who saw it as confirmation of a prediction by Iosif S. Shklovsky in 1953. However, it had been predicted earlier (1950) by Hannes Alfvén and Nicolai Herlofson. Solar flares accelerate particles that emit in this way, as suggested by R. Giovanelli in 1948 and described by J.H. Piddington in 1952.
T. K. Breus noted that questions of priority in the history of astrophysical synchrotron radiation are complicated.
From supermassive black holes
It has been suggested that supermassive black holes produce synchrotron radiation in "jets", generated by the gravitational acceleration of ions in their polar magnetic fields. The nearest such observed jet is from the core of the galaxy Messier 87. This jet is interesting for producing the illusion of superluminal motion as observed from the frame of Earth. This phenomenon is caused because the jets are traveling very near the speed of light and at a very small angle towards the observer. Because at every point of their path the high-velocity jets are emitting light, the light they emit does not approach the observer much more quickly than the jet itself. Light emitted over hundreds of years of travel thus arrives at the observer over a much smaller time period, giving the illusion of faster than light travel, despite the fact that there is actually no violation of special relativity.
Pulsar wind nebulae
A class of astronomical sources where synchrotron emission is important is pulsar wind nebulae, also known as plerions, of which the Crab nebula and its associated pulsar are archetypal.
Pulsed gamma-ray emission from the Crab has been observed at energies up to at least 25 GeV, probably due to synchrotron emission by electrons trapped in the strong magnetic field around the pulsar.
Polarization in the Crab nebula at energies from 0.1 to 1.0 MeV illustrates this typical property of synchrotron radiation.
Interstellar and intergalactic media
Much of what is known about the magnetic environment of the interstellar medium and intergalactic medium is derived from observations of synchrotron radiation. Cosmic ray electrons moving through the medium interact with relativistic plasma and emit synchrotron radiation which is detected on Earth. The properties of the radiation allow astronomers to make inferences about the magnetic field strength and orientation in these regions. However, accurate calculations of field strength cannot be made without knowing the relativistic electron density.
In supernovae
When a star explodes in a supernova, the fastest ejecta move at semi-relativistic speeds approximately 10% the speed of light. This blast wave gyrates electrons in ambient magnetic fields and generates synchrotron emission, revealing the radius of the blast wave at the location of the emission. Synchrotron emission can also reveal the strength of the magnetic field at the front of the shock wave, as well as the circumstellar density it encounters, but strongly depends on the choice of energy partition between the magnetic field, proton kinetic energy, and electron kinetic energy. Radio synchrotron emission has allowed astronomers to shed light on mass loss and stellar winds that occur just prior to stellar death.
See also
Notes
References
Brau, Charles A. Modern Problems in Classical Electrodynamics. Oxford University Press, 2004. .
Jackson, John David. Classical Electrodynamics. John Wiley & Sons, 1999.
External links
Cosmic Magnetobremsstrahlung (synchrotron Radiation), by Ginzburg, V. L., Syrovatskii, S. I., ARAA, 1965
Developments in the Theory of Synchrotron Radiation and its Reabsorption, by Ginzburg, V. L., Syrovatskii, S. I., ARAA, 1969
Lightsources.org
BioSync – a structural biologist's resource for high energy data collection facilities
X-Ray Data Booklet
Particle physics
Synchrotron-related techniques
Electromagnetic radiation
Experimental particle physics
Fluid mechanics | Fluid mechanics is the branch of physics concerned with the mechanics of fluids (liquids, gases, and plasmas) and the forces on them.
It has applications in a wide range of disciplines, including mechanical, aerospace, civil, chemical, and biomedical engineering, as well as geophysics, oceanography, meteorology, astrophysics, and biology.
It can be divided into fluid statics, the study of fluids at rest; and fluid dynamics, the study of the effect of forces on fluid motion.
It is a branch of continuum mechanics, a subject which models matter without using the information that it is made out of atoms; that is, it models matter from a macroscopic viewpoint rather than from microscopic.
Fluid mechanics, especially fluid dynamics, is an active field of research, typically mathematically complex. Many problems are partly or wholly unsolved and are best addressed by numerical methods, typically using computers. A modern discipline, called computational fluid dynamics (CFD), is devoted to this approach. Particle image velocimetry, an experimental method for visualizing and analyzing fluid flow, also takes advantage of the highly visual nature of fluid flow.
History
The study of fluid mechanics goes back at least to the days of ancient Greece, when Archimedes investigated fluid statics and buoyancy and formulated his famous law now known as Archimedes' principle, which was published in his work On Floating Bodies—generally considered to be the first major work on fluid mechanics. Iranian scholar Abu Rayhan Biruni and later Al-Khazini applied experimental scientific methods to fluid mechanics. Rapid advancement in fluid mechanics began with Leonardo da Vinci (observations and experiments), Evangelista Torricelli (invented the barometer), Isaac Newton (investigated viscosity) and Blaise Pascal (researched hydrostatics, formulated Pascal's law), and was continued by Daniel Bernoulli with the introduction of mathematical fluid dynamics in Hydrodynamica (1739).
Inviscid flow was further analyzed by various mathematicians (Jean le Rond d'Alembert, Joseph Louis Lagrange, Pierre-Simon Laplace, Siméon Denis Poisson) and viscous flow was explored by a multitude of engineers including Jean Léonard Marie Poiseuille and Gotthilf Hagen. Further mathematical justification was provided by Claude-Louis Navier and George Gabriel Stokes in the Navier–Stokes equations, and boundary layers were investigated (Ludwig Prandtl, Theodore von Kármán), while various scientists such as Osborne Reynolds, Andrey Kolmogorov, and Geoffrey Ingram Taylor advanced the understanding of fluid viscosity and turbulence.
Main branches
Fluid statics
Fluid statics or hydrostatics is the branch of fluid mechanics that studies fluids at rest. It embraces the study of the conditions under which fluids are at rest in stable equilibrium; and is contrasted with fluid dynamics, the study of fluids in motion. Hydrostatics offers physical explanations for many phenomena of everyday life, such as why atmospheric pressure changes with altitude, why wood and oil float on water, and why the surface of water is always level whatever the shape of its container. Hydrostatics is fundamental to hydraulics, the engineering of equipment for storing, transporting and using fluids. It is also relevant to some aspects of geophysics and astrophysics (for example, in understanding plate tectonics and anomalies in the Earth's gravitational field), to meteorology, to medicine (in the context of blood pressure), and many other fields.
Fluid dynamics
Fluid dynamics is a subdiscipline of fluid mechanics that deals with fluid flow—the science of liquids and gases in motion. Fluid dynamics offers a systematic structure—which underlies these practical disciplines—that embraces empirical and semi-empirical laws derived from flow measurement and used to solve practical problems. The solution to a fluid dynamics problem typically involves calculating various properties of the fluid, such as velocity, pressure, density, and temperature, as functions of space and time. It has several subdisciplines itself, including aerodynamics (the study of air and other gases in motion) and hydrodynamics (the study of liquids in motion). Fluid dynamics has a wide range of applications, including calculating forces and movements on aircraft, determining the mass flow rate of petroleum through pipelines, predicting evolving weather patterns, understanding nebulae in interstellar space and modeling explosions. Some fluid-dynamical principles are used in traffic engineering and crowd dynamics.
Relationship to continuum mechanics
Fluid mechanics is a subdiscipline of continuum mechanics, which also encompasses solid mechanics.
In a mechanical view, a fluid is a substance that does not support shear stress; that is why a fluid at rest has the shape of its containing vessel. A fluid at rest has no shear stress.
Assumptions
The assumptions inherent to a fluid mechanical treatment of a physical system can be expressed in terms of mathematical equations. Fundamentally, every fluid mechanical system is assumed to obey:
Conservation of mass
Conservation of energy
Conservation of momentum
The continuum assumption
For example, the assumption that mass is conserved means that for any fixed control volume (for example, a spherical volume)—enclosed by a control surface—the rate of change of the mass contained in that volume is equal to the rate at which mass is passing through the surface from outside to inside, minus the rate at which mass is passing from inside to outside. This can be expressed as an equation in integral form over the control volume.
The continuum assumption is an idealization of continuum mechanics under which fluids can be treated as continuous, even though, on a microscopic scale, they are composed of molecules. Under the continuum assumption, macroscopic (observed/measurable) properties such as density, pressure, temperature, and bulk velocity are taken to be well-defined at "infinitesimal" volume elements that are small in comparison to the characteristic length scale of the system, but large in comparison to the molecular length scale. Fluid properties can vary continuously from one volume element to another and are average values of the molecular properties. The continuum hypothesis can lead to inaccurate results in applications like supersonic flows or molecular flows on the nano scale. Problems for which the continuum hypothesis fails can be solved using statistical mechanics. To determine whether or not the continuum hypothesis applies, the Knudsen number, defined as the ratio of the molecular mean free path to the characteristic length scale, is evaluated. Problems with Knudsen numbers below 0.1 can be evaluated using the continuum hypothesis, but a molecular approach (statistical mechanics) must be applied to find the fluid motion for larger Knudsen numbers.
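A minimal sketch of this criterion, using an approximate textbook mean free path for air at sea level and a few assumed characteristic lengths:

    # Deciding whether the continuum hypothesis applies via the Knudsen number.
    def knudsen(mean_free_path, length_scale):
        return mean_free_path / length_scale

    # Air at sea level has a mean free path of roughly 68 nm (approximate value).
    for L in (1.0, 1e-3, 1e-6, 1e-7):          # characteristic lengths in metres
        Kn = knudsen(68e-9, L)
        regime = "continuum (Navier-Stokes)" if Kn < 0.1 else "molecular / statistical treatment"
        print(f"L = {L:g} m  ->  Kn = {Kn:.3g}  ->  {regime}")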
Navier–Stokes equations
The Navier–Stokes equations (named after Claude-Louis Navier and George Gabriel Stokes) are differential equations that describe the force balance at a given point within a fluid. For an incompressible fluid with vector velocity field u, the Navier–Stokes equations are
∂u/∂t + (u · ∇)u = −(1/ρ)∇p + ν ∇²u, together with the incompressibility condition ∇ · u = 0.
These differential equations are the analogues for deformable materials of Newton's equations of motion for particles: the Navier–Stokes equations describe changes in momentum (force) in response to pressure p and viscosity, parameterized by the kinematic viscosity ν. Occasionally, body forces, such as the gravitational force or the Lorentz force, are added to the equations.
Solutions of the Navier–Stokes equations for a given physical problem must be sought with the help of calculus. In practical terms, only the simplest cases can be solved exactly in this way. These cases generally involve non-turbulent, steady flow in which the Reynolds number is small. For more complex cases, especially those involving turbulence, such as global weather systems, aerodynamics, hydrodynamics and many more, solutions of the Navier–Stokes equations can currently only be found with the help of computers. This branch of science is called computational fluid dynamics.
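To give a flavor of such numerical solutions, the following small sketch integrates only the viscous-diffusion term of the equations, ∂u/∂t = ν ∂²u/∂x², with an explicit finite-difference scheme on a 1-D grid; all parameters are arbitrary illustrative choices, not a production CFD setup:

    # Tiny CFD-style sketch: 1-D momentum diffusion by explicit finite differences.
    import numpy as np

    nu = 0.1                      # kinematic viscosity
    nx, nt = 41, 500              # grid points and time steps
    dx = 2.0 / (nx - 1)
    dt = 0.2 * dx**2 / nu         # chosen to satisfy the explicit stability limit

    u = np.ones(nx)
    u[int(0.5 / dx):int(1.0 / dx) + 1] = 2.0   # initial "hat" velocity profile

    for _ in range(nt):
        u[1:-1] = u[1:-1] + nu * dt / dx**2 * (u[2:] - 2 * u[1:-1] + u[:-2])

    print(u.round(3))             # the sharp profile diffuses toward a smooth distribution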
Inviscid and viscous fluids
An inviscid fluid has no viscosity, . In practice, an inviscid flow is an idealization, one that facilitates mathematical treatment. In fact, purely inviscid flows are only known to be realized in the case of superfluidity. Otherwise, fluids are generally viscous, a property that is often most important within a boundary layer near a solid surface, where the flow must match onto the no-slip condition at the solid. In some cases, the mathematics of a fluid mechanical system can be treated by assuming that the fluid outside of boundary layers is inviscid, and then matching its solution onto that for a thin laminar boundary layer.
For fluid flow over a porous boundary, the fluid velocity can be discontinuous between the free fluid and the fluid in the porous media (this is related to the Beavers and Joseph condition). Further, it is useful at low subsonic speeds to assume that gas is incompressible—that is, the density of the gas does not change even though the speed and static pressure change.
Newtonian versus non-Newtonian fluids
A Newtonian fluid (named after Isaac Newton) is defined to be a fluid whose shear stress is linearly proportional to the velocity gradient in the direction perpendicular to the plane of shear. This definition means that, regardless of the forces acting on a fluid, it continues to flow. For example, water is a Newtonian fluid, because it continues to display fluid properties no matter how much it is stirred or mixed. A slightly less rigorous definition is that the drag of a small object being moved slowly through the fluid is proportional to the force applied to the object. (Compare friction.) Important fluids, like water as well as most gases, behave, to good approximation, as Newtonian fluids under normal conditions on Earth.
By contrast, stirring a non-Newtonian fluid can leave a "hole" behind. This will gradually fill up over time—this behavior is seen in materials such as pudding, oobleck, or sand (although sand isn't strictly a fluid). Alternatively, stirring a non-Newtonian fluid can cause the viscosity to decrease, so the fluid appears "thinner" (this is seen in non-drip paints). There are many types of non-Newtonian fluids, as they are defined to be something that fails to obey a particular property—for example, most fluids with long molecular chains can react in a non-Newtonian manner.
Equations for a Newtonian fluid
The constant of proportionality between the viscous stress tensor and the velocity gradient is known as the viscosity. A simple equation to describe incompressible Newtonian fluid behavior is
τ = μ (du/dy),
where
τ is the shear stress exerted by the fluid ("drag"),
μ is the fluid viscosity, a constant of proportionality, and
du/dy is the velocity gradient perpendicular to the direction of shear.
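A minimal numerical sketch of this relation, using an approximate viscosity for water and assumed flow values:

    # Shear stress tau = mu * du/dy for a simple shear flow (illustrative numbers).
    mu_water = 1.0e-3        # dynamic viscosity of water at ~20 C, Pa*s (approximate)
    du = 0.5                 # velocity difference across the gap, m/s
    dy = 1.0e-3              # gap width, m

    tau = mu_water * du / dy
    print(tau)               # shear stress in pascals (~0.5 Pa)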
For a Newtonian fluid, the viscosity, by definition, depends only on temperature, not on the forces acting upon it. If the fluid is incompressible the equation governing the viscous stress (in Cartesian coordinates) is
τ_ij = μ (∂u_i/∂x_j + ∂u_j/∂x_i),
where
τ_ij is the shear stress on the i-th face of a fluid element in the j-th direction,
u_i is the velocity in the i-th direction, and
x_j is the j-th direction coordinate.
If the fluid is not incompressible the general form for the viscous stress in a Newtonian fluid is
τ_ij = μ (∂u_i/∂x_j + ∂u_j/∂x_i) + λ δ_ij ∇ · u,
where λ is the second viscosity coefficient (or bulk viscosity) and δ_ij is the Kronecker delta. If a fluid does not obey this relation, it is termed a non-Newtonian fluid, of which there are several types. Non-Newtonian fluids can be plastic, Bingham plastic, pseudoplastic, dilatant, thixotropic, rheopectic, or viscoelastic.
In some applications, another rough broad division among fluids is made: ideal and non-ideal fluids. An ideal fluid is non-viscous and offers no resistance whatsoever to a shearing force. An ideal fluid really does not exist, but in some calculations the assumption is justifiable. One example of this is the flow far from solid surfaces. In many cases, the viscous effects are concentrated near the solid boundaries (such as in boundary layers), while in regions of the flow field far away from the boundaries the viscous effects can be neglected and the fluid there is treated as if it were inviscid (ideal flow). When the viscosity is neglected, the term containing the viscous stress tensor in the Navier–Stokes equation vanishes. The equation reduced in this form is called the Euler equation.
See also
Transport phenomena
Aerodynamics
Applied mechanics
Bernoulli's principle
Communicating vessels
Computational fluid dynamics
Compressor map
Secondary flow
Different types of boundary conditions in fluid dynamics
Fluid–structure interaction
Immersed boundary method
Stochastic Eulerian Lagrangian method
Stokesian dynamics
Smoothed-particle hydrodynamics
References
Further reading
External links
Free Fluid Mechanics books
Annual Review of Fluid Mechanics. .
CFDWiki – the Computational Fluid Dynamics reference wiki.
Educational Particle Image Velocimetry – resources and demonstrations
Civil engineering
Non-exercise activity thermogenesis | Non-exercise activity thermogenesis, also known as non-exercise physical activity (NEPA), is energy expenditure during activities that are not part of a structured exercise program. NEAT includes physical activity at the workplace, hobbies, standing instead of sitting, walking around, climbing stairs, doing chores, and fidgeting. Besides differences in body composition, it represents most of the variation in energy expenditure across individuals and populations, accounting from 6-10 percent to as much as 50 percent of energy expenditure in highly active individuals.
Relationship with obesity
NEAT is the main component of activity-related energy expenditure in obese individuals, as most do not do any physical exercise. NEAT is also lower in obese individuals than the general population.
NEAT may be reduced in individuals who have lost weight, which some hypothesize contributes to difficulties in achieving and sustaining weight loss.
In Western countries, occupations have shifted from physical labor to sedentary work, which results in a loss of energy expenditure. Strenuous physical labor can require 1,500 or more additional calories per day compared with desk work.
Relationship with exercise
It is debated whether there is a significant reduction in NEAT after beginning a structured exercise program.
Health benefits
Lack of NEAT is posited as an explanation for health harms for prolonged sitting.
Measurement
Accelerometers and questionnaires can be used to estimate NEAT.
References
Human physiology
Metabolism
Facilitated diffusion | Facilitated diffusion (also known as facilitated transport or passive-mediated transport) is the process of spontaneous passive transport (as opposed to active transport) of molecules or ions across a biological membrane via specific transmembrane integral proteins. Being passive, facilitated transport does not directly require chemical energy from ATP hydrolysis in the transport step itself; rather, molecules and ions move down their concentration gradient according to the principles of diffusion.
Facilitated diffusion differs from simple diffusion in several ways:
The transport relies on molecular binding between the cargo and the membrane-embedded channel or carrier protein.
The rate of facilitated diffusion is saturable with respect to the concentration difference between the two phases, unlike free diffusion, which is linear in the concentration difference (see the sketch after this list).
The temperature dependence of facilitated transport is substantially different due to the presence of an activated binding event, as compared to free diffusion where the dependence on temperature is mild.
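As noted in the list above, a saturable rate is one practical signature of facilitated transport. The sketch below contrasts a Michaelis-Menten-like saturable carrier model (a common modeling choice, assumed here for illustration rather than taken from the source) with simple diffusion, which stays linear in the concentration difference:

    # Carrier-mediated (saturable) transport versus simple diffusion; parameters are arbitrary.
    def facilitated_rate(dc, v_max=10.0, k_m=2.0):
        # Michaelis-Menten-like saturation with respect to the concentration difference dc
        return v_max * dc / (k_m + dc)

    def simple_diffusion_rate(dc, permeability=1.0):
        return permeability * dc

    for dc in (0.5, 2.0, 10.0, 100.0):
        print(dc, round(facilitated_rate(dc), 2), simple_diffusion_rate(dc))
    # As dc grows, the carrier-mediated rate levels off near v_max while free diffusion keeps rising.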
Polar molecules and large ions dissolved in water cannot diffuse freely across the plasma membrane due to the hydrophobic nature of the fatty acid tails of the phospholipids that comprise the lipid bilayer. Only small, non-polar molecules, such as oxygen and carbon dioxide, can diffuse easily across the membrane. Hence, small polar molecules are transported by proteins in the form of transmembrane channels. These channels are gated, meaning that they open and close, and thus deregulate the flow of ions or small polar molecules across membranes, sometimes against the osmotic gradient. Larger molecules are transported by transmembrane carrier proteins, such as permeases, that change their conformation as the molecules are carried across (e.g. glucose or amino acids).
Non-polar molecules, such as retinol or lipids, are poorly soluble in water. They are transported through aqueous compartments of cells or through extracellular space by water-soluble carriers (e.g. retinol binding protein). The metabolites are not altered because no energy is required for facilitated diffusion. Only permease changes its shape in order to transport metabolites. The form of transport through a cell membrane in which a metabolite is modified is called group translocation transportation.
Glucose, sodium ions, and chloride ions are just a few examples of molecules and ions that must efficiently cross the plasma membrane but to which the lipid bilayer of the membrane is virtually impermeable. Their transport must therefore be "facilitated" by proteins that span the membrane and provide an alternative route or bypass mechanism. Some examples of proteins that mediate this process are glucose transporters, organic cation transport proteins, urea transporter, monocarboxylate transporter 8 and monocarboxylate transporter 10.
In vivo model of facilitated diffusion
Many physical and biochemical processes are regulated by diffusion. Facilitated diffusion is one form of diffusion and it is important in several metabolic processes. Facilitated diffusion is the main mechanism behind the binding of Transcription Factors (TFs) to designated target sites on the DNA molecule. The in vitro model, which is a very well known method of facilitated diffusion, that takes place outside of a living cell, explains the 3-dimensional pattern of diffusion in the cytosol and the 1-dimensional diffusion along the DNA contour. After carrying out extensive research on processes occurring out of the cell, this mechanism was generally accepted but there was a need to verify that this mechanism could take place in vivo or inside of living cells. Bauer & Metzler (2013) therefore carried out an experiment using a bacterial genome in which they investigated the average time for TF – DNA binding to occur. After analyzing the process for the time it takes for TF's to diffuse across the contour and cytoplasm of the bacteria's DNA, it was concluded that in vitro and in vivo are similar in that the association and dissociation rates of TF's to and from the DNA are similar in both. Also, on the DNA contour, the motion is slower and target sites are easy to localize while in the cytoplasm, the motion is faster but the TF's are not sensitive to their targets and so binding is restricted.
Intracellular facilitated diffusion
Single-molecule imaging is an imaging technique which provides an ideal resolution necessary for the study of the Transcription factor binding mechanism in living cells. In prokaryotic bacteria cells such as E. coli, facilitated diffusion is required in order for regulatory proteins to locate and bind to target sites on DNA base pairs. There are 2 main steps involved: the protein binds to a non-specific site on the DNA and then it diffuses along the DNA chain until it locates a target site, a process referred to as sliding. According to Brackley et al. (2013), during the process of protein sliding, the protein searches the entire length of the DNA chain using 3-D and 1-D diffusion patterns. During 3-D diffusion, the high incidence of Crowder proteins creates an osmotic pressure which brings searcher proteins (e.g. Lac Repressor) closer to the DNA to increase their attraction and enable them to bind, as well as steric effect which exclude the Crowder proteins from this region (Lac operator region). Blocker proteins participate in 1-D diffusion only i.e. bind to and diffuse along the DNA contour and not in the cytosol.
Facilitated diffusion of proteins on Chromatin
The in vivo model mentioned above clearly explains 3-D and 1-D diffusion along the DNA strand and the binding of proteins to target sites on the chain. Just like prokaryotic cells, in eukaryotes, facilitated diffusion occurs in the nucleoplasm on chromatin filaments, accounted for by the switching dynamics of a protein when it is either bound to a chromatin thread or when freely diffusing in the nucleoplasm. In addition, given that the chromatin molecule is fragmented, its fractal properties need to be considered. After calculating the search time for a target protein, alternating between the 3-D and 1-D diffusion phases on the chromatin fractal structure, it was deduced that facilitated diffusion in eukaryotes precipitates the searching process and minimizes the searching time by increasing the DNA-protein affinity.
For oxygen
The oxygen affinity with hemoglobin on red blood cell surfaces enhances this bonding ability. In a system of facilitated diffusion of oxygen, there is a tight relationship between the ligand which is oxygen and the carrier which is either hemoglobin or myoglobin. This mechanism of facilitated diffusion of oxygen by hemoglobin or myoglobin was discovered and initiated by Wittenberg and Scholander. They carried out experiments to test for the steady-state of diffusion of oxygen at various pressures. Oxygen-facilitated diffusion occurs in a homogeneous environment where oxygen pressure can be relatively controlled.
For oxygen diffusion to occur, there must be a full saturation pressure (more) on one side of the membrane and full reduced pressure (less) on the other side of the membrane i.e. one side of the membrane must be of higher concentration. During facilitated diffusion, hemoglobin increases the rate of constant diffusion of oxygen and facilitated diffusion occurs when oxyhemoglobin molecule is randomly displaced.
For carbon monoxide
Facilitated diffusion of carbon monoxide is similar to that of oxygen. Carbon monoxide also combines with hemoglobin and myoglobin, but carbon monoxide has a dissociation rate about 100 times lower than that of oxygen. Its affinity for myoglobin is 40 times higher, and 250 times higher for hemoglobin, compared to oxygen.
For glucose
Since glucose is a large molecule, its diffusion across a membrane is difficult. Hence, it diffuses across membranes through facilitated diffusion, down the concentration gradient. The carrier protein at the membrane binds to the glucose and alters its shape such that glucose can easily be transported. Movement of glucose into the cell can be rapid or slow depending on the number of membrane-spanning proteins. Glucose can also be transported against the concentration gradient by a sodium-dependent glucose symporter, which provides a driving force for other glucose molecules in the cells. Facilitated diffusion helps in the release of accumulated glucose into the extracellular space adjacent to the blood capillary.
See also
Transmembrane channels
Major facilitator superfamily
References
External links
Facilitated Diffusion - Description and Animation
Facilitated Diffusion- Definition and Supplement
Diffusion
Transport proteins
Chemical kinetics | Chemical kinetics, also known as reaction kinetics, is the branch of physical chemistry that is concerned with understanding the rates of chemical reactions. It is different from chemical thermodynamics, which deals with the direction in which a reaction occurs but in itself tells nothing about its rate. Chemical kinetics includes investigations of how experimental conditions influence the speed of a chemical reaction and yield information about the reaction's mechanism and transition states, as well as the construction of mathematical models that also can describe the characteristics of a chemical reaction.
History
The pioneering work in chemical kinetics was done by the German chemist Ludwig Wilhelmy in 1850. He experimentally studied the rate of inversion of sucrose and used the integrated rate law to determine the kinetics of this reaction. His work was noticed 34 years later by Wilhelm Ostwald. After Wilhelmy, Peter Waage and Cato Guldberg published in 1864 the law of mass action, which states that the speed of a chemical reaction is proportional to the quantity of the reacting substances.
Van 't Hoff studied chemical dynamics and in 1884 published his famous "Études de dynamique chimique". In 1901 he was awarded the first Nobel Prize in Chemistry "in recognition of the extraordinary services he has rendered by the discovery of the laws of chemical dynamics and osmotic pressure in solutions". After van 't Hoff, chemical kinetics deals with the experimental determination of reaction rates from which rate laws and rate constants are derived. Relatively simple rate laws exist for zero order reactions (for which reaction rates are independent of concentration), first order reactions, and second order reactions, and can be derived for others. Elementary reactions follow the law of mass action, but the rate law of stepwise reactions has to be derived by combining the rate laws of the various elementary steps, and can become rather complex. In consecutive reactions, the rate-determining step often determines the kinetics. In consecutive first order reactions, a steady state approximation can simplify the rate law. The activation energy for a reaction is experimentally determined through the Arrhenius equation and the Eyring equation. The main factors that influence the reaction rate include: the physical state of the reactants, the concentrations of the reactants, the temperature at which the reaction occurs, and whether or not any catalysts are present in the reaction.
Gorban and Yablonsky have suggested that the history of chemical dynamics can be divided into three eras. The first is the van 't Hoff wave searching for the general laws of chemical reactions and relating kinetics to thermodynamics. The second may be called the Semenov-Hinshelwood wave with emphasis on reaction mechanisms, especially for chain reactions. The third is associated with Aris and the detailed mathematical description of chemical reaction networks.
Factors affecting reaction rate
Nature of the reactants
The reaction rate varies depending upon what substances are reacting. Acid/base reactions, the formation of salts, and ion exchange are usually fast reactions. When covalent bond formation takes place between the molecules and when large molecules are formed, the reactions tend to be slower.
The nature and strength of bonds in reactant molecules greatly influence the rate of their transformation into products.
Physical state
The physical state (solid, liquid, or gas) of a reactant is also an important factor of the rate of change. When reactants are in the same phase, as in aqueous solution, thermal motion brings them into contact. However, when they are in separate phases, the reaction is limited to the interface between the reactants. Reaction can occur only at their area of contact; in the case of a liquid and a gas, at the surface of the liquid. Vigorous shaking and stirring may be needed to bring the reaction to completion. This means that the more finely divided a solid or liquid reactant, the greater its surface area per unit volume and the more contact it has with the other reactant, thus the faster the reaction. To make an analogy, for example, when one starts a fire, one uses wood chips and small branches — one does not start with large logs right away. In organic chemistry, on water reactions are the exception to the rule that homogeneous reactions take place faster than heterogeneous reactions (those reactions in which solute and solvent do not mix properly).
Surface area of solid state
In a solid, only those particles that are at the surface can be involved in a reaction. Crushing a solid into smaller parts means that more particles are present at the surface, and the frequency of collisions between these and reactant particles increases, and so reaction occurs more rapidly. For example, Sherbet (powder) is a mixture of very fine powder of malic acid (a weak organic acid) and sodium hydrogen carbonate. On contact with the saliva in the mouth, these chemicals quickly dissolve and react, releasing carbon dioxide and providing for the fizzy sensation. Also, fireworks manufacturers modify the surface area of solid reactants to control the rate at which the fuels in fireworks are oxidised, using this to create diverse effects. For example, finely divided aluminium confined in a shell explodes violently. If larger pieces of aluminium are used, the reaction is slower and sparks are seen as pieces of burning metal are ejected.
Concentration
The reactions are due to collisions of reactant species. The frequency with which the molecules or ions collide depends upon their concentrations. The more crowded the molecules are, the more likely they are to collide and react with one another. Thus, an increase in the concentrations of the reactants will usually result in the corresponding increase in the reaction rate, while a decrease in the concentrations will usually have a reverse effect. For example, combustion will occur more rapidly in pure oxygen than in air (21% oxygen).
The rate equation shows the detailed dependence of the reaction rate on the concentrations of reactants and other species present. The mathematical forms depend on the reaction mechanism. The actual rate equation for a given reaction is determined experimentally and provides information about the reaction mechanism. The mathematical expression of the rate equation is often given by

r = k(T) · [X_1]^(m_1) · [X_2]^(m_2) · …

Here k(T) is the reaction rate constant, [X_i] is the molar concentration of reactant i and m_i is the partial order of reaction for this reactant. The partial order for a reactant can only be determined experimentally and is often not indicated by its stoichiometric coefficient.
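As a minimal illustration (the rate constant, concentrations and partial orders below are arbitrary placeholders, not data for any specific reaction), such a rate law can be evaluated directly:

```python
# Hedged sketch: evaluating an empirical rate law r = k * prod(c_i ** m_i).
# The rate constant, concentrations and partial orders are illustrative placeholders.

def reaction_rate(k, concentrations, orders):
    """Rate law r = k * c1**m1 * c2**m2 * ... for one set of conditions."""
    r = k
    for c, m in zip(concentrations, orders):
        r *= c ** m
    return r

# Example: a hypothetical second-order reaction, first order in each of two reactants.
k = 0.35                  # L mol^-1 s^-1 (assumed)
conc = [0.10, 0.20]       # mol/L (assumed)
orders = [1, 1]
print(reaction_rate(k, conc, orders))   # rate in mol L^-1 s^-1
```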
Temperature
Temperature usually has a major effect on the rate of a chemical reaction. Molecules at a higher temperature have more thermal energy. Although collision frequency is greater at higher temperatures, this alone contributes only a very small proportion to the increase in rate of reaction. Much more important is the fact that the proportion of reactant molecules with sufficient energy to react (energy greater than activation energy: E > Ea) is significantly higher and is explained in detail by the Maxwell–Boltzmann distribution of molecular energies.
The effect of temperature on the reaction rate constant usually obeys the Arrhenius equation k = A exp(−Ea/(RT)), where A is the pre-exponential factor or A-factor, Ea is the activation energy, R is the molar gas constant and T is the absolute temperature.
At a given temperature, the chemical rate of a reaction depends on the value of the A-factor, the magnitude of the activation energy, and the concentrations of the reactants. Usually, rapid reactions require relatively small activation energies.
The 'rule of thumb' that the rate of chemical reactions doubles for every 10 °C temperature rise is a common misconception. This may have been generalized from the special case of biological systems, where the α (temperature coefficient) is often between 1.5 and 2.5.
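A small numerical illustration of this point (the pre-exponential factor and activation energies below are arbitrary assumed values): the Arrhenius equation predicts that the factor by which k grows over a 10 °C rise depends on the activation energy, so a universal doubling rule cannot hold.

```python
import math

R = 8.314  # J mol^-1 K^-1, molar gas constant

def arrhenius_k(A, Ea, T):
    """Rate constant from the Arrhenius equation k = A * exp(-Ea / (R * T))."""
    return A * math.exp(-Ea / (R * T))

A = 1.0e13          # s^-1, assumed pre-exponential factor
T = 298.15          # K
for Ea_kJ in (30.0, 50.0, 100.0):       # assumed activation energies in kJ/mol
    Ea = Ea_kJ * 1000.0
    ratio = arrhenius_k(A, Ea, T + 10.0) / arrhenius_k(A, Ea, T)
    print(f"Ea = {Ea_kJ:5.1f} kJ/mol -> k(T+10 K)/k(T) = {ratio:.2f}")
```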
The kinetics of rapid reactions can be studied with the temperature jump method. This involves using a sharp rise in temperature and observing the relaxation time of the return to equilibrium. A particularly useful form of temperature jump apparatus is a shock tube, which can rapidly increase a gas's temperature by more than 1000 degrees.
Catalysts
A catalyst is a substance that alters the rate of a chemical reaction but remains chemically unchanged afterwards. The catalyst increases the rate of the reaction by providing a new reaction mechanism with a lower activation energy. In autocatalysis a reaction product is itself a catalyst for that reaction, leading to positive feedback. Proteins that act as catalysts in biochemical reactions are called enzymes. Michaelis–Menten kinetics describe the rate of enzyme-mediated reactions. A catalyst does not affect the position of the equilibrium, as the catalyst speeds up the backward and forward reactions equally.
In certain organic molecules, specific substituents can have an influence on reaction rate in neighbouring group participation.
Pressure
Increasing the pressure in a gaseous reaction will increase the number of collisions between reactants, increasing the rate of reaction. This is because the activity of a gas is directly proportional to the partial pressure of the gas. This is similar to the effect of increasing the concentration of a solution.
In addition to this straightforward mass-action effect, the rate coefficients themselves can change due to pressure. The rate coefficients and products of many high-temperature gas-phase reactions change if an inert gas is added to the mixture; variations on this effect are called fall-off and chemical activation. These phenomena are due to exothermic or endothermic reactions occurring faster than heat transfer, causing the reacting molecules to have non-thermal energy distributions (non-Boltzmann distribution). Increasing the pressure increases the heat transfer rate between the reacting molecules and the rest of the system, reducing this effect.
Condensed-phase rate coefficients can also be affected by pressure, although rather high pressures are required for a measurable effect because ions and molecules are not very compressible. This effect is often studied using diamond anvils.
A reaction's kinetics can also be studied with a pressure jump approach. This involves making fast changes in pressure and observing the relaxation time of the return to equilibrium.
Absorption of light
The activation energy for a chemical reaction can be provided when one reactant molecule absorbs light of suitable wavelength and is promoted to an excited state. The study of reactions initiated by light is photochemistry, one prominent example being photosynthesis.
Experimental methods
The experimental determination of reaction rates involves measuring how the concentrations of reactants or products change over time. For example, the concentration of a reactant can be measured by spectrophotometry at a wavelength where no other reactant or product in the system absorbs light.
For reactions which take at least several minutes, it is possible to start the observations after the reactants have been mixed at the temperature of interest.
Fast reactions
For faster reactions, the time required to mix the reactants and bring them to a specified temperature may be comparable or longer than the half-life of the reaction. Special methods to start fast reactions without slow mixing step include
Stopped flow methods, which can reduce the mixing time to the order of a millisecond. Stopped flow methods have limitations: for example, the time needed to mix gases or solutions must be taken into account, and the technique is not suitable if the half-life is less than about a hundredth of a second.
Chemical relaxation methods such as temperature jump and pressure jump, in which a pre-mixed system initially at equilibrium is perturbed by rapid heating or depressurization so that it is no longer at equilibrium, and the relaxation back to equilibrium is observed. For example, this method has been used to study the neutralization reaction H3O+ + OH− → 2 H2O, which has a half-life of 1 μs or less under ordinary conditions.
Flash photolysis, in which a laser pulse produces highly excited species such as free radicals, whose reactions are then studied.
Equilibrium
While chemical kinetics is concerned with the rate of a chemical reaction, thermodynamics determines the extent to which reactions occur. In a reversible reaction, chemical equilibrium is reached when the rates of the forward and reverse reactions are equal (the principle of dynamic equilibrium) and the concentrations of the reactants and products no longer change. This is demonstrated by, for example, the Haber–Bosch process for combining nitrogen and hydrogen to produce ammonia. Chemical clock reactions such as the Belousov–Zhabotinsky reaction demonstrate that component concentrations can oscillate for a long time before finally attaining the equilibrium.
Free energy
In general terms, the free energy change (ΔG) of a reaction determines whether a chemical change will take place, but kinetics describes how fast the reaction is. A reaction can be very exothermic and have a very positive entropy change but will not happen in practice if the reaction is too slow. If a reactant can produce two products, the thermodynamically most stable one will form in general, except in special circumstances when the reaction is said to be under kinetic reaction control. The Curtin–Hammett principle applies when determining the product ratio for two reactants interconverting rapidly, each going to a distinct product. It is possible to make predictions about reaction rate constants for a reaction from free-energy relationships.
The kinetic isotope effect is the difference in the rate of a chemical reaction when an atom in one of the reactants is replaced by one of its isotopes.
Chemical kinetics provides information on residence time and heat transfer in a chemical reactor in chemical engineering and on the molar mass distribution in polymer chemistry. It also provides information in corrosion engineering.
Applications and models
The mathematical models that describe chemical reaction kinetics provide chemists and chemical engineers with tools to better understand and describe chemical processes such as food decomposition, microorganism growth, stratospheric ozone decomposition, and the chemistry of biological systems. These models can also be used in the design or modification of chemical reactors to optimize product yield, more efficiently separate products, and eliminate environmentally harmful by-products. When performing catalytic cracking of heavy hydrocarbons into gasoline and light gas, for example, kinetic models can be used to find the temperature and pressure at which the highest yield of heavy hydrocarbons into gasoline will occur.
Chemical kinetics is frequently validated and explored through modeling in specialized packages by means of ordinary differential equation (ODE) solving and curve-fitting.
Numerical methods
In some cases, equations are unsolvable analytically, but can be solved using numerical methods if data values are given. There are two different ways to do this, by either using software programmes or mathematical methods such as the Euler method. Examples of software for chemical kinetics are i) Tenua, a Java app which simulates chemical reactions numerically and allows comparison of the simulation to real data, ii) Python coding for calculations and estimates and iii) the Kintecus software compiler to model, regress, fit and optimize reactions.
Numerical integration: for a first-order reaction A → B
The differential equation of the reactant A is:

d[A]/dt = −k[A]

It can also be expressed as

d[A]/dt = f(t, [A])

which is the same as the standard form y′ = f(t, y) used by general-purpose numerical solvers, here with f(t, [A]) = −k[A].
To solve the differential equations with Euler and Runge-Kutta methods we need to have the initial values.
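A minimal sketch of such a numerical solution (the rate constant, initial concentration and step size are arbitrary assumed values), using the explicit Euler method and comparing against the exact solution [A](t) = [A]_0 exp(−kt):

```python
import math

# Hedged sketch: explicit Euler integration of d[A]/dt = -k[A] for A -> B,
# compared against the exact solution [A](t) = [A]0 * exp(-k t).
# k, [A]0, the step size h and the end time are arbitrary illustrative values.

k = 0.5        # s^-1
A0 = 1.0       # mol/L, initial value required by the method
h = 0.1        # s, time step
t_end = 5.0    # s

A = A0
n_steps = int(t_end / h)
for _ in range(n_steps):
    A += h * (-k * A)      # Euler update: A_{n+1} = A_n + h * f(t_n, A_n)

exact = A0 * math.exp(-k * t_end)
print(f"Euler: [A]({t_end} s) = {A:.4f} mol/L")
print(f"Exact: [A]({t_end} s) = {exact:.4f} mol/L")
```

Shrinking the step size h (or switching to a Runge–Kutta method) brings the numerical result closer to the exact curve.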
See also
Autocatalytic reactions and order creation
Corrosion engineering
Detonation
Electrochemical kinetics
Flame speed
Heterogeneous catalysis
Intrinsic low-dimensional manifold
MLAB chemical kinetics modeling package
Nonthermal surface reaction
PottersWheel Matlab toolbox to fit chemical rate constants to experimental data
Reaction progress kinetic analysis
References
External links
Chemistry applets
University of Waterloo
Chemical Kinetics of Gas Phase Reactions
Kinpy: Python code generator for solving kinetic equations
Reaction rate law and reaction profile - a question of temperature, concentration, solvent and catalyst - how fast will a reaction proceed (Video by SciFox on TIB AV-Portal)
Jacobus Henricus van 't Hoff
Astrophysics for People in a Hurry
Astrophysics for People in a Hurry is a 2017 popular science book by Neil deGrasse Tyson, centering around a number of basic questions about the universe. Published on May 2, 2017, by W. W. Norton & Company, the book is a collection of Tyson's essays that appeared in Natural History magazine at various times from 1997 to 2007.
Contents
Neil deGrasse Tyson's Astrophysics for People in a Hurry is a popular introduction to the main concepts and issues of modern astrophysics. The author explains the origin and structure of the Universe, the force of gravity, light, dark matter and dark energy, our place in the Cosmos, and how we try to understand its laws. The book is written in a simple and lively language, using vivid analogies. It is intended for a wide range of readers who want to get a general idea of astrophysics without complex formulas and details. The book consists of 12 short chapters, based on essays published in Natural History magazine.
Sales
The book debuted at #1 on The New York Times Non-Fiction Best Seller list when it first appeared in May, 2017. It sold 48,416 copies in its first week, making it the second-most-purchased overall in the U.S. for that week (behind the children's fiction novel The Dark Prophecy). A year later, it remained in the top five and had sold in excess of one million copies.
Reception
In Kirkus Reviews, the reviewer praised Tyson's "down-to-earth wit" and stated that the book "shows once again [Tyson's] masterly skills at explaining complex scientific concepts in a lucid, readable fashion."
The book's accessible language is noted in a review in BBC Sky at Night magazine. The reviewer suggests that the reader who spends their time on Tyson's work will have a good understanding of "every part of our known Universe, how it came to be and what still keeps physicists up at night".
Tyson was nominated for the Grammy Award for Best Spoken Word Album.
References
Books by Neil deGrasse Tyson
Astronomy books
Cosmology books
2017 non-fiction books
Popular physics books
W. W. Norton & Company books
Boltzmann distribution
In statistical mechanics and mathematics, a Boltzmann distribution (also called Gibbs distribution) is a probability distribution or probability measure that gives the probability that a system will be in a certain state as a function of that state's energy and the temperature of the system. The distribution is expressed in the form:

p_i ∝ exp(−ε_i / (kT))
where p_i is the probability of the system being in state i, exp is the exponential function, ε_i is the energy of that state, and the constant kT of the distribution is the product of the Boltzmann constant k and the thermodynamic temperature T. The symbol ∝ denotes proportionality (see below for the proportionality constant).
The term system here has a wide meaning; it can range from a collection of 'sufficient number' of atoms or a single atom to a macroscopic system such as a natural gas storage tank. Therefore, the Boltzmann distribution can be used to solve a wide variety of problems. The distribution shows that states with lower energy will always have a higher probability of being occupied.
The ratio of probabilities of two states is known as the Boltzmann factor and characteristically only depends on the states' energy difference:

p_i / p_j = exp((ε_j − ε_i) / (kT))
The Boltzmann distribution is named after Ludwig Boltzmann who first formulated it in 1868 during his studies of the statistical mechanics of gases in thermal equilibrium. Boltzmann's statistical work is borne out in his paper “On the Relationship between the Second Fundamental Theorem of the Mechanical Theory of Heat and Probability Calculations Regarding the Conditions for Thermal Equilibrium"
The distribution was later investigated extensively, in its modern generic form, by Josiah Willard Gibbs in 1902.
The Boltzmann distribution should not be confused with the Maxwell–Boltzmann distribution or Maxwell-Boltzmann statistics. The Boltzmann distribution gives the probability that a system will be in a certain state as a function of that state's energy, while the Maxwell-Boltzmann distributions give the probabilities of particle speeds or energies in ideal gases. The distribution of energies in a one-dimensional gas however, does follow the Boltzmann distribution.
The distribution
The Boltzmann distribution is a probability distribution that gives the probability of a certain state as a function of that state's energy and the temperature of the system to which the distribution is applied. It is given as

p_i = exp(−ε_i / (kT)) / Q = exp(−ε_i / (kT)) / Σ_{j=1…M} exp(−ε_j / (kT))

where:
exp is the exponential function,
p_i is the probability of state i,
ε_i is the energy of state i,
k is the Boltzmann constant,
T is the absolute temperature of the system,
M is the number of all states accessible to the system of interest,
Q (denoted by some authors by Z) is the normalization denominator, which is the canonical partition function Q = Σ_{j=1…M} exp(−ε_j / (kT)). It results from the constraint that the probabilities of all accessible states must add up to 1.
Using Lagrange multipliers, one can prove that the Boltzmann distribution is the distribution that maximizes the entropy

S(p_1, p_2, …, p_M) = −Σ_{i=1…M} p_i log p_i

subject to the normalization constraint Σ p_i = 1 and the constraint that Σ p_i ε_i equals a particular mean energy value, except for two special cases. (These special cases occur when the mean value is either the minimum or maximum of the energies ε_i. In these cases, the entropy maximizing distribution is a limit of Boltzmann distributions where T approaches zero from above or below, respectively.)
The partition function can be calculated if we know the energies of the states accessible to the system of interest. For atoms the partition function values can be found in the NIST Atomic Spectra Database.
The distribution shows that states with lower energy will always have a higher probability of being occupied than states with higher energy. It can also give us the quantitative relationship between the probabilities of two states being occupied. The ratio of probabilities for states i and j is given as

p_i / p_j = exp((ε_j − ε_i) / (kT))

where:
p_i is the probability of state i,
p_j the probability of state j,
ε_i is the energy of state i,
ε_j is the energy of state j.
The corresponding ratio of populations of energy levels must also take their degeneracies into account.
The Boltzmann distribution is often used to describe the distribution of particles, such as atoms or molecules, over bound states accessible to them. If we have a system consisting of many particles, the probability of a particle being in state i is practically the probability that, if we pick a random particle from that system and check what state it is in, we will find it is in state i. This probability is equal to the number of particles in state i divided by the total number of particles in the system, that is, the fraction of particles that occupy state i:

p_i = N_i / N

where N_i is the number of particles in state i and N is the total number of particles in the system. We may use the Boltzmann distribution to find this probability, which is, as we have seen, equal to the fraction of particles that are in state i. So the equation that gives the fraction of particles in state i as a function of the energy of that state is

N_i / N = exp(−ε_i / (kT)) / Σ_{j=1…M} exp(−ε_j / (kT))
This equation is of great importance to spectroscopy. In spectroscopy we observe a spectral line of atoms or molecules undergoing transitions from one state to another. In order for this to be possible, there must be some particles in the first state to undergo the transition. We may find that this condition is fulfilled by finding the fraction of particles in the first state. If it is negligible, the transition is very likely not observed at the temperature for which the calculation was done. In general, a larger fraction of molecules in the first state means a higher number of transitions to the second state. This gives a stronger spectral line. However, there are other factors that influence the intensity of a spectral line, such as whether it is caused by an allowed or a forbidden transition.
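A minimal numerical sketch of these population fractions (the energy levels and temperatures below are arbitrary assumed values, not spectroscopic data):

```python
import math

K_B = 1.380649e-23   # J/K, Boltzmann constant

def boltzmann_fractions(energies_joule, temperature):
    """Fraction of particles in each state, N_i/N = exp(-e_i/(kT)) / Q."""
    weights = [math.exp(-e / (K_B * temperature)) for e in energies_joule]
    Q = sum(weights)                      # canonical partition function
    return [w / Q for w in weights]

# Hypothetical three-level system with energies 0, 1e-20 and 2e-20 J (assumed values).
levels = [0.0, 1.0e-20, 2.0e-20]
for T in (100.0, 300.0, 1000.0):
    fractions = boltzmann_fractions(levels, T)
    print(T, "K:", [round(f, 4) for f in fractions])
```

At low temperature nearly all particles sit in the lowest state; raising the temperature moves a larger fraction into the excited states, which is what strengthens the corresponding spectral lines.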
The softmax function commonly used in machine learning is related to the Boltzmann distribution:

(p_1, …, p_M) = softmax(−ε_1/(kT), …, −ε_M/(kT))
Generalized Boltzmann distribution
Distribution of the form
is called a generalized Boltzmann distribution by some authors.
The Boltzmann distribution is a special case of the generalized Boltzmann distribution. The generalized Boltzmann distribution is used in statistical mechanics to describe canonical ensemble, grand canonical ensemble and isothermal–isobaric ensemble. The generalized Boltzmann distribution is usually derived from the principle of maximum entropy, but there are other derivations.
The generalized Boltzmann distribution has the following properties:
It is the only distribution for which the entropy as defined by Gibbs entropy formula matches with the entropy as defined in classical thermodynamics.
It is the only distribution that is mathematically consistent with the fundamental thermodynamic relation where state functions are described by ensemble average.
In statistical mechanics
The Boltzmann distribution appears in statistical mechanics when considering closed systems of fixed composition that are in thermal equilibrium (equilibrium with respect to energy exchange). The most general case is the probability distribution for the canonical ensemble. Some special cases (derivable from the canonical ensemble) show the Boltzmann distribution in different aspects:
Canonical ensemble (general case)
The canonical ensemble gives the probabilities of the various possible states of a closed system of fixed volume, in thermal equilibrium with a heat bath. The canonical ensemble has a state probability distribution with the Boltzmann form.
Statistical frequencies of subsystems' states (in a non-interacting collection)
When the system of interest is a collection of many non-interacting copies of a smaller subsystem, it is sometimes useful to find the statistical frequency of a given subsystem state, among the collection. The canonical ensemble has the property of separability when applied to such a collection: as long as the non-interacting subsystems have fixed composition, then each subsystem's state is independent of the others and is also characterized by a canonical ensemble. As a result, the expected statistical frequency distribution of subsystem states has the Boltzmann form.
Maxwell–Boltzmann statistics of classical gases (systems of non-interacting particles)
In particle systems, many particles share the same space and regularly change places with each other; the single-particle state space they occupy is a shared space. Maxwell–Boltzmann statistics give the expected number of particles found in a given single-particle state, in a classical gas of non-interacting particles at equilibrium. This expected number distribution has the Boltzmann form.
Although these cases have strong similarities, it is helpful to distinguish them as they generalize in different ways when the crucial assumptions are changed:
When a system is in thermodynamic equilibrium with respect to both energy exchange and particle exchange, the requirement of fixed composition is relaxed and a grand canonical ensemble is obtained rather than canonical ensemble. On the other hand, if both composition and energy are fixed, then a microcanonical ensemble applies instead.
If the subsystems within a collection do interact with each other, then the expected frequencies of subsystem states no longer follow a Boltzmann distribution, and even may not have an analytical solution. The canonical ensemble can however still be applied to the collective states of the entire system considered as a whole, provided the entire system is in thermal equilibrium.
With quantum gases of non-interacting particles in equilibrium, the number of particles found in a given single-particle state does not follow Maxwell–Boltzmann statistics, and there is no simple closed form expression for quantum gases in the canonical ensemble. In the grand canonical ensemble the state-filling statistics of quantum gases are described by Fermi–Dirac statistics or Bose–Einstein statistics, depending on whether the particles are fermions or bosons, respectively.
In mathematics
In more general mathematical settings, the Boltzmann distribution is also known as the Gibbs measure.
In statistics and machine learning, it is called a log-linear model.
In deep learning, the Boltzmann distribution is used in the sampling distribution of stochastic neural networks such as the Boltzmann machine, restricted Boltzmann machine, energy-based models and deep Boltzmann machine. In deep learning, the Boltzmann machine is considered to be one of the unsupervised learning models. Because the difficulty of implementing a Boltzmann machine in real-time applications becomes critical as the number of nodes is increased, a different type of architecture, the restricted Boltzmann machine, was introduced.
In economics
The Boltzmann distribution can be introduced to allocate permits in emissions trading. The new allocation method using the Boltzmann distribution can describe the most probable, natural, and unbiased distribution of emissions permits among multiple countries.
The Boltzmann distribution has the same form as the multinomial logit model. As a discrete choice model, this is very well known in economics since Daniel McFadden made the connection to random utility maximization.
See also
Bose–Einstein statistics
Fermi–Dirac statistics
Negative temperature
Softmax function
References
Statistical mechanics
Distribution
Mass in special relativity
The word "mass" has two meanings in special relativity: invariant mass (also called rest mass) is an invariant quantity which is the same for all observers in all reference frames, while the relativistic mass is dependent on the velocity of the observer. According to the concept of mass–energy equivalence, invariant mass is equivalent to rest energy, while relativistic mass is equivalent to relativistic energy (also called total energy).
The term "relativistic mass" tends not to be used in particle and nuclear physics and is often avoided by writers on special relativity, in favor of referring to the body's relativistic energy. In contrast, "invariant mass" is usually preferred over rest energy. The measurable inertia and the warping of spacetime by a body in a given frame of reference is determined by its relativistic mass, not merely its invariant mass. For example, photons have zero rest mass but contribute to the inertia (and weight in a gravitational field) of any system containing them.
The concept is generalized in mass in general relativity.
Rest mass
The term mass in special relativity usually refers to the rest mass of the object, which is the Newtonian mass as measured by an observer moving along with the object. The invariant mass is another name for the rest mass of single particles. The more general invariant mass (calculated with a more complicated formula) loosely corresponds to the "rest mass" of a "system". Thus, invariant mass is a natural unit of mass used for systems which are being viewed from their center of momentum frame (COM frame), as when any closed system (for example a bottle of hot gas) is weighed, which requires that the measurement be taken in the center of momentum frame where the system has no net momentum. Under such circumstances the invariant mass is equal to the relativistic mass (discussed below), which is the total energy of the system divided by c2 (the speed of light squared).
The concept of invariant mass does not require bound systems of particles, however. As such, it may also be applied to systems of unbound particles in high-speed relative motion. Because of this, it is often employed in particle physics for systems which consist of widely separated high-energy particles. If such systems were derived from a single particle, then the calculation of the invariant mass of such systems, which is a never-changing quantity, will provide the rest mass of the parent particle (because it is conserved over time).
It is often convenient in calculation that the invariant mass of a system is the total energy of the system (divided by ) in the COM frame (where, by definition, the momentum of the system is zero). However, since the invariant mass of any system is also the same quantity in all inertial frames, it is a quantity often calculated from the total energy in the COM frame, then used to calculate system energies and momenta in other frames where the momenta are not zero, and the system total energy will necessarily be a different quantity than in the COM frame. As with energy and momentum, the invariant mass of a system cannot be destroyed or changed, and it is thus conserved, so long as the system is closed to all influences. (The technical term is isolated system meaning that an idealized boundary is drawn around the system, and no mass/energy is allowed across it.)
Relativistic mass
The relativistic mass is the sum total quantity of energy in a body or system (divided by c²). Thus, the mass in the formula

E = m_rel c²

is the relativistic mass. For a particle of non-zero rest mass m₀ moving at a speed v relative to the observer, one finds

m_rel = m₀ / sqrt(1 − v²/c²).
In the center of momentum frame, v = 0 and the relativistic mass equals the rest mass. In other frames, the relativistic mass (of a body or system of bodies) includes a contribution from the "net" kinetic energy of the body (the kinetic energy of the center of mass of the body), and is larger the faster the body moves. Thus, unlike the invariant mass, the relativistic mass depends on the observer's frame of reference. However, for given single frames of reference and for isolated systems, the relativistic mass is also a conserved quantity.
The relativistic mass is also the proportionality factor between velocity and momentum,

p = m_rel v.

Newton's second law remains valid in the form

f = d(m_rel v)/dt.
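A brief numerical sketch of these relations (the speeds are chosen arbitrarily; the electron rest mass is the standard CODATA value):

```python
import math

C = 299_792_458.0  # m/s, speed of light

def lorentz_gamma(v):
    """Lorentz factor gamma = 1 / sqrt(1 - v^2/c^2)."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

# Illustrative values for an electron; the chosen speeds are arbitrary.
m0 = 9.1093837e-31  # kg, electron rest mass
for beta in (0.1, 0.5, 0.9, 0.99):
    v = beta * C
    gamma = lorentz_gamma(v)
    m_rel = gamma * m0          # relativistic mass
    p = m_rel * v               # momentum p = gamma * m0 * v
    print(f"v = {beta:.2f} c : gamma = {gamma:6.3f}, m_rel/m0 = {gamma:6.3f}, p = {p:.3e} kg m/s")
```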
When a body emits light of frequency ν and wavelength λ = c/ν as a photon of energy E = hν, the mass of the body decreases by E/c² = hν/c², which some interpret as the relativistic mass of the emitted photon, since it also fulfills E = mc². Although some authors present relativistic mass as a fundamental concept of the theory, it has been argued that this is wrong as the fundamentals of the theory relate to space–time. There is disagreement over whether the concept is pedagogically useful. It explains simply and quantitatively why a body subject to a constant acceleration cannot reach the speed of light, and why the mass of a system emitting a photon decreases. In relativistic quantum chemistry, relativistic mass is used to explain electron orbital contraction in heavy elements.
The notion of mass as a property of an object from Newtonian mechanics does not bear a precise relationship to the concept in relativity.
Relativistic mass is not referenced in nuclear and particle physics, and a survey of introductory textbooks in 2005 showed that only 5 of 24 texts used the concept, although it is still prevalent in popularizations.
If a stationary box contains many particles, its weight increases in its rest frame the faster the particles are moving. Any energy in the box (including the kinetic energy of the particles) adds to the mass, so that the relative motion of the particles contributes to the mass of the box. But if the box itself is moving (its center of mass is moving), there remains the question of whether the kinetic energy of the overall motion should be included in the mass of the system. The invariant mass is calculated excluding the kinetic energy of the system as a whole (calculated using the single velocity of the box, which is to say the velocity of the box's center of mass), while the relativistic mass is calculated including invariant mass plus the kinetic energy of the system which is calculated from the velocity of the center of mass.
Relativistic vs. rest mass
Relativistic mass and rest mass are both traditional concepts in physics, but the relativistic mass corresponds to the total energy. The relativistic mass is the mass of the system as it would be measured on a scale, but in some cases (such as the box above) this fact remains true only because the system on average must be at rest to be weighed (it must have zero net momentum, which is to say, the measurement is in its center of momentum frame). For example, if an electron in a cyclotron is moving in circles with a relativistic velocity, the mass of the cyclotron+electron system is increased by the relativistic mass of the electron, not by the electron's rest mass. But the same is also true of any closed system, such as an electron-and-box, if the electron bounces at high speed inside the box. It is only the lack of total momentum in the system (the system momenta sum to zero) which allows the kinetic energy of the electron to be "weighed". If the electron is stopped and weighed, or the scale were somehow sent after it, it would not be moving with respect to the scale, and again the relativistic and rest masses would be the same for the single electron (and would be smaller). In general, relativistic and rest masses are equal only in systems which have no net momentum and the system center of mass is at rest; otherwise they may be different.
The invariant mass is proportional to the value of the total energy in one reference frame, the frame where the object as a whole is at rest (as defined below in terms of center of mass). This is why the invariant mass is the same as the rest mass for single particles. However, the invariant mass also represents the measured mass when the center of mass is at rest for systems of many particles. This special frame where this occurs is also called the center of momentum frame, and is defined as the inertial frame in which the center of mass of the object is at rest (another way of stating this is that it is the frame in which the momenta of the system's parts add to zero). For compound objects (made of many smaller objects, some of which may be moving) and sets of unbound objects (some of which may also be moving), only the center of mass of the system is required to be at rest, for the object's relativistic mass to be equal to its rest mass.
A so-called massless particle (such as a photon, or a theoretical graviton) moves at the speed of light in every frame of reference. In this case there is no transformation that will bring the particle to rest. The total energy of such particles becomes smaller and smaller in frames which move faster and faster in the same direction. As such, they have no rest mass, because they can never be measured in a frame where they are at rest. This property of having no rest mass is what causes these particles to be termed "massless". However, even massless particles have a relativistic mass, which varies with their observed energy in various frames of reference.
Invariant mass
The invariant mass is the ratio of four-momentum (the four-dimensional generalization of classical momentum) to four-velocity:

P^μ = m U^μ

and is also the ratio of four-acceleration to four-force when the rest mass is constant. The four-dimensional form of Newton's second law is:

F^μ = m A^μ
Relativistic energy–momentum equation
The relativistic expressions for E and p obey the relativistic energy–momentum relation:

E² − (pc)² = (mc²)²

where m is the rest mass, or the invariant mass for systems, and E is the total energy.

The equation is also valid for photons, which have m = 0:

E² − (pc)² = 0

and therefore

E = pc

A photon's momentum is a function of its energy, but it is not proportional to the velocity, which is always c.
For an object at rest, the momentum p is zero, therefore

E = mc²
Note that the formula is true only for particles or systems with zero momentum.
The rest mass is only proportional to the total energy in the rest frame of the object.
When the object is moving, the total energy is given by

E = sqrt((mc²)² + (pc)²)
To find the form of the momentum and energy as a function of velocity, it can be noted that the four-velocity, which is proportional to (c, v), is the only four-vector associated with the particle's motion, so that if there is a conserved four-momentum (E, pc), it must be proportional to this vector. This allows expressing the ratio of energy to momentum as

pc = E v/c,

resulting in a relation between E and v:

E² = (mc²)² + E² v²/c²

This results in

E = mc² / sqrt(1 − v²/c²)

and

p = mv / sqrt(1 − v²/c²)

these expressions can be written as

E₀ = mc², E = γ mc², and p = γ mv,

where the factor

γ = 1 / sqrt(1 − v²/c²).
When working in units where c = 1, known as the natural unit system, all the relativistic equations are simplified and the quantities energy, momentum, and mass have the same natural dimension:

m² = E² − p²

The equation is often written this way because the difference E² − p² is the relativistic length of the energy–momentum four-vector, a length which is associated with rest mass or invariant mass in systems. Where m > 0 and p = 0, this equation again expresses the mass–energy equivalence E = m.
The mass of composite systems
The rest mass of a composite system is not the sum of the rest masses of the parts, unless all the parts are at rest. The total mass of a composite system includes the kinetic energy and field energy in the system.
The total energy of a composite system can be determined by adding together the sum of the energies of its components. The total momentum of the system, a vector quantity, can also be computed by adding together the momenta of all its components. Given the total energy E and the length (magnitude) p of the total momentum vector, the invariant mass is given by:

m = sqrt(E² − (pc)²) / c²

In the system of natural units where c = 1, for systems of particles (whether bound or unbound) the total system invariant mass is given equivalently by the following:

m² = (Σ E)² − |Σ p|²
Where, again, the particle momenta are first summed as vectors, and then the square of their resulting total magnitude (Euclidean norm) is used. This results in a scalar number, which is subtracted from the scalar value of the square of the total energy.
For such a system, in the special center of momentum frame where momenta sum to zero, again the system mass (called the invariant mass) corresponds to the total system energy or, in units where , is identical to it. This invariant mass for a system remains the same quantity in any inertial frame, although the system total energy and total momentum are functions of the particular inertial frame which is chosen, and will vary in such a way between inertial frames as to keep the invariant mass the same for all observers. Invariant mass thus functions for systems of particles in the same capacity as "rest mass" does for single particles.
Note that the invariant mass of an isolated system (i.e., one closed to both mass and energy) is also independent of observer or inertial frame, and is a constant, conserved quantity for isolated systems and single observers, even during chemical and nuclear reactions. The concept of invariant mass is widely used in particle physics, because the invariant mass of a particle's decay products is equal to its rest mass. This is used to make measurements of the mass of particles like the Z boson or the top quark.
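A minimal numerical sketch of this procedure (the particle energies and momenta below are invented for illustration, in units where c = 1, and are not real measurement data):

```python
import math

def invariant_mass(particles):
    """Invariant mass of a system in units where c = 1:
    M = sqrt((sum E)^2 - |sum p|^2), with each particle given as (E, px, py, pz)."""
    E = sum(p[0] for p in particles)
    px = sum(p[1] for p in particles)
    py = sum(p[2] for p in particles)
    pz = sum(p[3] for p in particles)
    return math.sqrt(E ** 2 - (px ** 2 + py ** 2 + pz ** 2))

# Two massless particles (E = |p|) flying apart back to back; all numbers in GeV, invented.
photon_1 = (45.0, 0.0, 0.0, 45.0)
photon_2 = (45.0, 0.0, 0.0, -45.0)
print(invariant_mass([photon_1, photon_2]))   # 90.0 GeV, even though each photon is massless
```

The result is the same in any inertial frame, which is why summing measured decay products this way recovers the rest mass of the parent particle.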
Conservation versus invariance of mass in special relativity
Total energy is an additive conserved quantity (for single observers) in systems and in reactions between particles, but rest mass (in the sense of being a sum of particle rest masses) may not be conserved through an event in which rest masses of particles are converted to other types of energy, such as kinetic energy. Finding the sum of individual particle rest masses would require multiple observers, one for each particle rest inertial frame, and these observers ignore individual particle kinetic energy. Conservation laws require a single observer and a single inertial frame.
In general, for isolated systems and single observers, relativistic mass is conserved (each observer sees it constant over time), but is not invariant (that is, different observers see different values). Invariant mass, however, is both conserved and invariant (all single observers see the same value, which does not change over time).
The relativistic mass corresponds to the energy, so conservation of energy automatically means that relativistic mass is conserved for any given observer and inertial frame. However, this quantity, like the total energy of a particle, is not invariant. This means that, even though it is conserved for any observer during a reaction, its absolute value will change with the frame of the observer, and for different observers in different frames.
By contrast, the rest mass and invariant masses of systems and particles are both conserved and invariant. For example: A closed container of gas (closed to energy as well) has a system "rest mass" in the sense that it can be weighed on a resting scale, even while it contains moving components. This mass is the invariant mass, which is equal to the total relativistic energy of the container (including the kinetic energy of the gas) only when it is measured in the center of momentum frame. Just as is the case for single particles, the calculated "rest mass" of such a container of gas does not change when it is in motion, although its "relativistic mass" does change.
The container may even be subjected to a force which gives it an overall velocity, or else (equivalently) it may be viewed from an inertial frame in which it has an overall velocity (that is, technically, a frame in which its center of mass has a velocity). In this case, its total relativistic mass and energy increase. However, in such a situation, although the container's total relativistic energy and total momentum increase, these energy and momentum increases subtract out in the invariant mass definition, so that the moving container's invariant mass will be calculated as the same value as if it were measured at rest, on a scale.
Closed (meaning totally isolated) systems
All conservation laws in special relativity (for energy, mass, and momentum) require isolated systems, meaning systems that are totally isolated, with no mass–energy allowed in or out, over time. If a system is isolated, then both total energy and total momentum in the system are conserved over time for any observer in any single inertial frame, though their absolute values will vary, according to different observers in different inertial frames. The invariant mass of the system is also conserved, but does not change with different observers. This is also the familiar situation with single particles: all observers calculate the same particle rest mass (a special case of the invariant mass) no matter how they move (what inertial frame they choose), but different observers see different total energies and momenta for the same particle.
Conservation of invariant mass also requires the system to be enclosed so that no heat and radiation (and thus invariant mass) can escape. As in the example above, a physically enclosed or bound system does not need to be completely isolated from external forces for its mass to remain constant, because for bound systems these merely act to change the inertial frame of the system or the observer. Though such actions may change the total energy or momentum of the bound system, these two changes cancel, so that there is no change in the system's invariant mass. This is just the same result as with single particles: their calculated rest mass also remains constant no matter how fast they move, or how fast an observer sees them move.
On the other hand, for systems which are unbound, the "closure" of the system may be enforced by an idealized surface, inasmuch as no mass–energy can be allowed into or out of the test-volume over time, if conservation of system invariant mass is to hold during that time. If a force is allowed to act on (do work on) only one part of such an unbound system, this is equivalent to allowing energy into or out of the system, and the condition of "closure" to mass–energy (total isolation) is violated. In this case, conservation of invariant mass of the system also will no longer hold. Such a loss of rest mass in systems when energy is removed, according to ΔM = ΔE/c², where ΔE is the energy removed and ΔM is the change in rest mass, reflects changes of mass associated with movement of energy, not "conversion" of mass to energy.
The system invariant mass vs. the individual rest masses of parts of the system
Again, in special relativity, the rest mass of a system is not required to be equal to the sum of the rest masses of the parts (a situation which would be analogous to gross mass-conservation in chemistry). For example, a massive particle can decay into photons which individually have no mass, but which (as a system) preserve the invariant mass of the particle which produced them. Also a box of moving non-interacting particles (e.g., photons, or an ideal gas) will have a larger invariant mass than the sum of the rest masses of the particles which compose it. This is because the total energy of all particles and fields in a system must be summed, and this quantity, as seen in the center of momentum frame, and divided by , is the system's invariant mass.
In special relativity, mass is not "converted" to energy, for all types of energy still retain their associated mass. Neither energy nor invariant mass can be destroyed in special relativity, and each is separately conserved over time in closed systems. Thus, a system's invariant mass may change only because invariant mass is allowed to escape, perhaps as light or heat. Thus, when reactions (whether chemical or nuclear) release energy in the form of heat and light, if the heat and light is not allowed to escape (the system is closed and isolated), the energy will continue to contribute to the system rest mass, and the system mass will not change. Only if the energy is released to the environment will the mass be lost; this is because the associated mass has been allowed out of the system, where it contributes to the mass of the surroundings.
History of the relativistic mass concept
Transverse and longitudinal mass
Concepts that were similar to what nowadays is called "relativistic mass" were already developed before the advent of special relativity. For example, it was recognized by J. J. Thomson in 1881 that a charged body is harder to set in motion than an uncharged body, which was worked out in more detail by Oliver Heaviside (1889) and George Frederick Charles Searle (1897). So the electrostatic energy behaves as having some sort of electromagnetic mass m_em = (4/3) E_em/c², which can increase the normal mechanical mass of the bodies.
Then, it was pointed out by Thomson and Searle that this electromagnetic mass also increases with velocity. This was further elaborated by Hendrik Lorentz (1899, 1904) in the framework of Lorentz ether theory. He defined mass as the ratio of force to acceleration, not as the ratio of momentum to velocity, so he needed to distinguish between the mass m_L = γ³m parallel to the direction of motion and the mass m_T = γm perpendicular to the direction of motion (where γ = 1/sqrt(1 − v²/c²) is the Lorentz factor, v is the relative velocity between the ether and the object, and c is the speed of light). Only when the force is perpendicular to the velocity is Lorentz's mass equal to what is now called "relativistic mass". Max Abraham (1902) called m_L the longitudinal mass and m_T the transverse mass (although Abraham used more complicated expressions than Lorentz's relativistic ones). So, according to Lorentz's theory no body can reach the speed of light because the mass becomes infinitely large at this velocity.
Albert Einstein also initially used the concepts of longitudinal and transverse mass in his 1905 electrodynamics paper (equivalent to those of Lorentz, but with a different transverse mass due to an unfortunate force definition, which was later corrected), and in another paper in 1906. However, he later abandoned velocity-dependent mass concepts (see quote at the end of the next section).
The precise relativistic expression (which is equivalent to Lorentz's) relating force and acceleration for a particle with non-zero rest mass m moving in the x direction with velocity v and associated Lorentz factor γ is

f_x = γ³ m a_x,
f_y = γ m a_y,
f_z = γ m a_z.
Relativistic mass
In special relativity, an object that has nonzero rest mass cannot travel at the speed of light. As the object approaches the speed of light, the object's energy and momentum increase without bound.
In the first years after 1905, following Lorentz and Einstein, the terms longitudinal and transverse mass were still in use. However, those expressions were replaced by the concept of relativistic mass, an expression which was first defined by Gilbert N. Lewis and Richard C. Tolman in 1909. They defined the total energy and mass of a body as

m_rel = E / c²,

and of a body at rest

m₀ = E₀ / c²,

with the ratio

m_rel / m₀ = γ.
Tolman in 1912 further elaborated on this concept, and stated: "the expression m₀(1 − v²/c²)^(−1/2) is best suited for the mass of a moving body."
In 1934, Tolman argued that the relativistic mass formula m_rel = E/c² holds for all particles, including those moving at the speed of light, while the formula m_rel = m₀(1 − v²/c²)^(−1/2) applies only to a slower-than-light particle (a particle with a nonzero rest mass). Tolman remarked on this relation that "We have, moreover, of course the experimental verification of the expression in the case of moving electrons ... We shall hence have no hesitation in accepting the expression as correct in general for the mass of a moving particle."
When the relative velocity is zero, γ is simply equal to 1, and the relativistic mass is reduced to the rest mass, as one can see from the expressions above. As the velocity increases toward the speed of light c, the denominator of the right side approaches zero, and consequently γ approaches infinity. While Newton's second law remains valid in the form

f = d(m_rel v)/dt,

the derived form f = m_rel a is not valid because m_rel in d(m_rel v) is generally not a constant (see the section above on transverse and longitudinal mass).
Even though Einstein initially used the expressions "longitudinal" and "transverse" mass in two papers (see previous section), in his first paper on mass–energy equivalence (1905) he treated m as what would now be called the rest mass. Einstein never derived an equation for "relativistic mass", and in later years he expressed his dislike of the idea:
Popular science and textbooks
The concept of relativistic mass is widely used in popular science writing and in high school and undergraduate textbooks. Authors such as Okun and A. B. Arons have argued against this as archaic and confusing, and not in accord with modern relativistic theory.
Arons wrote:
For many years it was conventional to enter the discussion of dynamics through derivation of the relativistic mass, that is the mass–velocity relation, and this is probably still the dominant mode in textbooks. More recently, however, it has been increasingly recognized that relativistic mass is a troublesome and dubious concept. [See, for example, Okun (1989).]... The sound and rigorous approach to relativistic dynamics is through direct development of that expression for momentum that ensures conservation of momentum in all frames: p = mv / sqrt(1 − v²/c²), rather than through relativistic mass.
C. Alder takes a similarly dismissive stance on mass in relativity. Writing on the subject, he says that "its introduction into the theory of special relativity was much in the way of a historical accident", noting the widespread knowledge of E = mc² and how the public's interpretation of the equation has largely informed how it is taught in higher education. He instead supposes that the difference between rest and relativistic mass should be explicitly taught, so that students know why mass should be thought of as invariant "in most discussions of inertia".
Many contemporary authors such as Taylor and Wheeler avoid using the concept of relativistic mass altogether:
While spacetime has the unbounded geometry of Minkowski space, the velocity-space is bounded by c and has the geometry of hyperbolic geometry, where relativistic mass plays a role analogous to that of Newtonian mass in the barycentric coordinates of Euclidean geometry. The connection of velocity to hyperbolic geometry enables the 3-velocity-dependent relativistic mass to be related to the 4-velocity Minkowski formalism.
See also
Tests of relativistic energy and momentum
References
External links
Usenet Physics FAQ
"Does mass change with velocity?" by Philip Gibbs et al., 2002, retrieved August 10, 2006
"What is the mass of a photon?" by Matt Austern et al., 1998, retrieved June 27, 2007
Mass as a Variable Quantity
Special relativity
Mass
Quasistatic process
In thermodynamics, a quasi-static process, also known as a quasi-equilibrium process (from Latin quasi, meaning ‘as if’), is a thermodynamic process that happens slowly enough for the system to remain in internal physical (but not necessarily chemical) thermodynamic equilibrium. An example of this is quasi-static expansion of a mixture of hydrogen and oxygen gas, where the volume of the system changes so slowly that the pressure remains uniform throughout the system at each instant of time during the process. Such an idealized process is a succession of physical equilibrium states, characterized by infinite slowness.
Only in a quasi-static thermodynamic process can we exactly define intensive quantities (such as pressure, temperature, specific volume, specific entropy) of the system at any instant during the whole process; otherwise, since no internal equilibrium is established, different parts of the system would have different values of these quantities, so a single value per quantity may not be sufficient to represent the whole system. In other words, when an equation for a change in a state function contains P or T, it implies a quasi-static process.
Relation to reversible process
While all reversible processes are quasi-static, most authors do not require a general quasi-static process to maintain equilibrium between system and surroundings and avoid dissipation, which are defining characteristics of a reversible process. For example, quasi-static compression of a system by a piston subject to friction is irreversible; although the system is always in internal thermal equilibrium, the friction ensures the generation of dissipative entropy, which goes against the definition of reversibility. Any engineer would remember to include friction when calculating the dissipative entropy generation.
An example of a quasi-static process that is not idealizable as reversible is slow heat transfer between two bodies at two finitely different temperatures, where the heat transfer rate is controlled by a poorly conductive partition between the two bodies. In this case, no matter how slowly the process takes place, the state of the composite system consisting of the two bodies is far from equilibrium, since thermal equilibrium for this composite system requires that the two bodies be at the same temperature. Nevertheless, the entropy change for each body can be calculated using the Clausius equality for reversible heat transfer.
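A minimal numerical sketch of that last point (the temperatures, the heat quantity, and the assumption that both bodies are large enough for their temperatures to stay effectively constant are illustrative assumptions, not data from the text):

```python
# Entropy change when a small quantity of heat Q leaks from a hot body to a cold
# body through a poorly conducting partition (Clausius equality applied to each body).
T_hot, T_cold = 400.0, 300.0   # kelvin (illustrative)
Q = 100.0                      # joules transferred; small enough that T_hot, T_cold barely change

dS_hot = -Q / T_hot            # hot body loses entropy
dS_cold = +Q / T_cold          # cold body gains entropy
dS_total = dS_hot + dS_cold    # > 0: the composite process is irreversible

print(dS_hot, dS_cold, dS_total)   # -0.25, 0.333..., 0.083... J/K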
PV-work in various quasi-static processes
Constant pressure: Isobaric processes, W = P(V2 − V1)
Constant volume: Isochoric processes, W = 0
Constant temperature: Isothermal processes, where the pressure P varies with the volume V via PV = nRT = constant, so W = nRT ln(V2/V1)
Polytropic processes, W = (P1V1 − P2V2)/(n − 1) (a short numerical sketch of these work terms is given below)
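A minimal sketch of these work terms for an ideal gas (the numerical values and the sign convention W = ∫P dV, i.e. work done by the gas, are assumptions made here for illustration):

```python
import math

R = 8.314  # J/(mol*K)

def work_isobaric(P, V1, V2):
    return P * (V2 - V1)

def work_isochoric():
    return 0.0

def work_isothermal(n, T, V1, V2):
    # P varies with V as P = nRT/V, so W = nRT * ln(V2/V1)
    return n * R * T * math.log(V2 / V1)

def work_polytropic(P1, V1, P2, V2, k):
    # P * V**k = constant, valid for k != 1
    return (P1 * V1 - P2 * V2) / (k - 1)

# Example: 1 mol of ideal gas expanding quasi-statically from 1 L to 2 L at 300 K
print(work_isothermal(n=1.0, T=300.0, V1=1e-3, V2=2e-3))  # ~1729 J
```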
See also
Entropy
Reversible process (thermodynamics)
References
Thermodynamic processes
Statistical mechanics
Stone–von Neumann theorem
In mathematics and in theoretical physics, the Stone–von Neumann theorem refers to any one of a number of different formulations of the uniqueness of the canonical commutation relations between position and momentum operators. It is named after Marshall Stone and John von Neumann.
Representation issues of the commutation relations
In quantum mechanics, physical observables are represented mathematically by linear operators on Hilbert spaces.
For a single particle moving on the real line ℝ, there are two important observables: position and momentum. In the Schrödinger representation quantum description of such a particle, the position operator Q and momentum operator P are respectively given by
[Q ψ](x) = x ψ(x),  [P ψ](x) = −iħ ψ′(x)
on the domain of infinitely differentiable functions of compact support on ℝ. Assume ħ to be a fixed non-zero real number—in quantum theory ħ is the reduced Planck constant, which carries units of action (energy times time).
The operators Q, P satisfy the canonical commutation relation Lie algebra,
[Q, P] = QP − PQ = iħ I.
Already in his classic book, Hermann Weyl observed that this commutation law was impossible to satisfy for linear operators Q, P acting on finite-dimensional spaces unless ħ vanishes. This is apparent from taking the trace over both sides of the latter equation and using the relation Tr(QP) = Tr(PQ); the left-hand side is zero, the right-hand side is non-zero. Further analysis shows that any two self-adjoint operators satisfying the above commutation relation cannot be both bounded (in fact, a theorem of Wielandt shows the relation cannot be satisfied by elements of any normed algebra). For notational convenience, the nonvanishing square root of ħ may be absorbed into the normalization of Q and P, so that, effectively, it is replaced by 1. We assume this normalization in what follows.
The idea of the Stone–von Neumann theorem is that any two irreducible representations of the canonical commutation relations are unitarily equivalent. Since, however, the operators involved are necessarily unbounded (as noted above), there are tricky domain issues that allow for counter-examples. To obtain a rigorous result, one must require that the operators satisfy the exponentiated form of the canonical commutation relations, known as the Weyl relations. The exponentiated operators are bounded and unitary. Although, as noted below, these relations are formally equivalent to the standard canonical commutation relations, this equivalence is not rigorous, because (again) of the unbounded nature of the operators. (There is also a discrete analog of the Weyl relations, which can hold in a finite-dimensional space, namely Sylvester's clock and shift matrices in the finite Heisenberg group, discussed below.)
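The finite-dimensional analogue mentioned above can be checked directly. A minimal sketch with Sylvester's clock and shift matrices (the dimension d and the particular convention Z X = ω X Z are choices of this illustration):

```python
import numpy as np

d = 5                                   # any integer d >= 2
omega = np.exp(2j * np.pi / d)          # primitive d-th root of unity

Z = np.diag(omega ** np.arange(d))      # "clock" matrix
X = np.roll(np.eye(d), 1, axis=0)       # "shift" matrix: X e_j = e_{j+1 mod d}

# Discrete Weyl relation: Z and X commute only up to a central phase
assert np.allclose(Z @ X, omega * (X @ Z))

# The exact CCR is impossible in finite dimensions: the trace of a commutator is 0,
# so [Q, P] can never equal a nonzero multiple of the identity.
print(np.trace(Z @ X - X @ Z))          # ~0
```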
Uniqueness of representation
One would like to classify representations of the canonical commutation relation by two self-adjoint operators acting on separable Hilbert spaces, up to unitary equivalence. By Stone's theorem, there is a one-to-one correspondence between self-adjoint operators and (strongly continuous) one-parameter unitary groups.
Let and be two self-adjoint operators satisfying the canonical commutation relation, , and and two real parameters. Introduce and , the corresponding unitary groups given by functional calculus. (For the explicit operators and defined above, these are multiplication by and pullback by translation .) A formal computation (using a special case of the Baker–Campbell–Hausdorff formula) readily yields
Conversely, given two one-parameter unitary groups and satisfying the braiding relation
formally differentiating at 0 shows that the two infinitesimal generators satisfy the above canonical commutation relation. This braiding formulation of the canonical commutation relations (CCR) for one-parameter unitary groups is called the Weyl form of the CCR.
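In one common sign convention (with the normalization ħ = 1 described earlier; the letters U and V for the two one-parameter groups are notational choices of this sketch, not taken from the text), the Weyl form of the CCR reads:

```latex
% Weyl relations for U(t) = e^{itP}, V(s) = e^{isQ}, assuming [Q, P] = iI (hbar = 1):
U(t)\,V(s) \;=\; e^{ist}\, V(s)\, U(t), \qquad s, t \in \mathbb{R}.
```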
It is important to note that the preceding derivation is purely formal. Since the operators involved are unbounded, technical issues prevent application of the Baker–Campbell–Hausdorff formula without additional domain assumptions. Indeed, there exist operators satisfying the canonical commutation relation but not the Weyl relations. Nevertheless, in "good" cases, we expect that operators satisfying the canonical commutation relation will also satisfy the Weyl relations.
The problem thus becomes classifying two jointly irreducible one-parameter unitary groups and which satisfy the Weyl relation on separable Hilbert spaces. The answer is the content of the Stone–von Neumann theorem: all such pairs of one-parameter unitary groups are unitarily equivalent. In other words, for any two such and acting jointly irreducibly on a Hilbert space , there is a unitary operator so that
where and are the explicit position and momentum operators from earlier. When is in this equation, so, then, in the -representation, it is evident that is unitarily equivalent to , and the spectrum of must range along the entire real line. The analog argument holds for .
There is also a straightforward extension of the Stone–von Neumann theorem to degrees of freedom.
Historically, this result was significant, because it was a key step in proving that Heisenberg's matrix mechanics, which presents quantum mechanical observables and dynamics in terms of infinite matrices, is unitarily equivalent to Schrödinger's wave mechanical formulation (see Schrödinger picture),
Representation theory formulation
In terms of representation theory, the Stone–von Neumann theorem classifies certain unitary representations of the Heisenberg group. This is discussed in more detail in the Heisenberg group section, below.
Informally stated, with certain technical assumptions, every representation of the Heisenberg group is equivalent to the position operators and momentum operators on . Alternatively, that they are all equivalent to the Weyl algebra (or CCR algebra) on a symplectic space of dimension .
More formally, there is a unique (up to scale) non-trivial central strongly continuous unitary representation.
This was later generalized by Mackey theory – and was the motivation for the introduction of the Heisenberg group in quantum physics.
In detail:
The continuous Heisenberg group is a central extension of the abelian Lie group by a copy of ,
the corresponding Heisenberg algebra is a central extension of the abelian Lie algebra (with trivial bracket) by a copy of ,
the discrete Heisenberg group is a central extension of the free abelian group by a copy of , and
the discrete Heisenberg group modulo is a central extension of the free abelian -group by a copy of .
In all cases, if one has a representation , where is an algebra and the center maps to zero, then one simply has a representation of the corresponding abelian group or algebra, which is Fourier theory.
If the center does not map to zero, one has a more interesting theory, particularly if one restricts oneself to central representations.
Concretely, by a central representation one means a representation such that the center of the Heisenberg group maps into the center of the algebra: for example, if one is studying matrix representations or representations by operators on a Hilbert space, then the center of the matrix algebra or the operator algebra is the scalar matrices. Thus the representation of the center of the Heisenberg group is determined by a scale value, called the quantization value (in physics terms, the Planck constant), and if this goes to zero, one gets a representation of the abelian group (in physics terms, this is the classical limit).
More formally, the group algebra of the Heisenberg group over its field of scalars K, written , has center , so rather than simply thinking of the group algebra as an algebra over the field , one may think of it as an algebra over the commutative algebra . As the center of a matrix algebra or operator algebra is the scalar matrices, a -structure on the matrix algebra is a choice of scalar matrix – a choice of scale. Given such a choice of scale, a central representation of the Heisenberg group is a map of -algebras , which is the formal way of saying that it sends the center to a chosen scale.
Then the Stone–von Neumann theorem is that, given the standard quantum mechanical scale (effectively, the value of ħ), every strongly continuous unitary representation is unitarily equivalent to the standard representation with position and momentum.
Reformulation via Fourier transform
Let be a locally compact abelian group and be the Pontryagin dual of . The Fourier–Plancherel transform defined by
extends to a C*-isomorphism from the group C*-algebra of and , i.e. the spectrum of is precisely . When is the real line , this is Stone's theorem characterizing one-parameter unitary groups. The theorem of Stone–von Neumann can also be restated using similar language.
The group acts on the *-algebra by right translation : for in and in ,
Under the isomorphism given above, this action becomes the natural action of on :
So a covariant representation corresponding to the *-crossed product
is a unitary representation of and of such that
It is a general fact that covariant representations are in one-to-one correspondence with *-representation of the corresponding crossed product. On the other hand, all irreducible representations of
are unitarily equivalent to the , the compact operators on . Therefore, all pairs are unitarily equivalent. Specializing to the case where yields the Stone–von Neumann theorem.
Heisenberg group
The above canonical commutation relations for Q and P are identical to the commutation relations that specify the Lie algebra of the general Heisenberg group H2n+1 for a positive integer n. This is the Lie group of (n + 2) × (n + 2) square matrices of the form
In fact, using the Heisenberg group, one can reformulate the Stone von Neumann theorem in the language of representation theory.
Note that the center of consists of matrices . However, this center is not the identity operator in Heisenberg's original CCRs. The Heisenberg group Lie algebra generators, e.g. for , are
and the central generator is not the identity.
All these representations are unitarily inequivalent; and any irreducible representation which is not trivial on the center of is unitarily equivalent to exactly one of these.
Note that is a unitary operator because it is the composition of two operators which are easily seen to be unitary: the translation to the left by and multiplication by a function of absolute value 1. To show is multiplicative is a straightforward calculation. The hard part of the theorem is showing the uniqueness; this claim, nevertheless, follows easily from the Stone–von Neumann theorem as stated above. We will sketch below a proof of the corresponding Stone–von Neumann theorem for certain finite Heisenberg groups.
In particular, irreducible representations , of the Heisenberg group which are non-trivial on the center of are unitarily equivalent if and only if for any in the center of .
One representation of the Heisenberg group which is important in number theory and the theory of modular forms is the theta representation, so named because the Jacobi theta function is invariant under the action of the discrete subgroup of the Heisenberg group.
Relation to the Fourier transform
For any non-zero , the mapping
is an automorphism of which is the identity on the center of . In particular, the representations and are unitarily equivalent. This means that there is a unitary operator on such that, for any in ,
Moreover, by irreducibility of the representations , it follows that up to a scalar, such an operator is unique (cf. Schur's lemma). Since is unitary, this scalar multiple is uniquely determined and hence such an operator is unique.
This means that, ignoring the factor of in the definition of the Fourier transform,
This theorem has the immediate implication that the Fourier transform is unitary, also known as the Plancherel theorem. Moreover,
From this fact the Fourier inversion formula easily follows.
Example: Segal–Bargmann space
The Segal–Bargmann space is the space of holomorphic functions on ℂn that are square-integrable with respect to a Gaussian measure. Fock observed in the 1920s that the operators
aj = ∂/∂zj,  aj* = multiplication by zj,
acting on holomorphic functions, satisfy the same commutation relations as the usual annihilation and creation operators, namely, [aj, ak*] = δjk.
In 1961, Bargmann showed that is actually the adjoint of with respect to the inner product coming from the Gaussian measure. By taking appropriate linear combinations of and , one can then obtain "position" and "momentum" operators satisfying the canonical commutation relations. It is not hard to show that the exponentials of these operators satisfy the Weyl relations and that the exponentiated operators act irreducibly. The Stone–von Neumann theorem therefore applies and implies the existence of a unitary map from to the Segal–Bargmann space that intertwines the usual annihilation and creation operators with the operators and . This unitary map is the Segal–Bargmann transform.
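A minimal symbolic check of Fock's observation in one variable (using ∂/∂z for the annihilation operator and multiplication by z for the creation operator, as described above; the function name f is arbitrary):

```python
import sympy as sp

z = sp.symbols('z')
f = sp.Function('f')(z)

a_of = lambda g: sp.diff(g, z)    # annihilation: a acts as d/dz
adag_of = lambda g: z * g         # creation:     a* acts as multiplication by z

# Commutator [a, a*] applied to a generic holomorphic f(z)
commutator_f = a_of(adag_of(f)) - adag_of(a_of(f))
print(sp.simplify(commutator_f))  # prints f(z), i.e. [a, a*] = 1
```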
Representations of finite Heisenberg groups
The Heisenberg group is defined for any commutative ring. In this section let us specialize to the field Z/pZ for p a prime. This field has the property that there is an embedding of Z/pZ as an additive group into the circle group T. Note that the Heisenberg group over Z/pZ is finite, with cardinality p³. For the finite Heisenberg group one can give a simple proof of the Stone–von Neumann theorem using simple properties of character functions of representations. These properties follow from the orthogonality relations for characters of representations of finite groups.
For any non-zero in define the representation on the finite-dimensional inner product space by
It follows that
By the orthogonality relations for characters of representations of finite groups this fact implies the corresponding Stone–von Neumann theorem for Heisenberg groups , particularly:
Irreducibility of
Pairwise inequivalence of all the representations .
Actually, all irreducible representations of on which the center acts nontrivially arise in this way.
Generalizations
The Stone–von Neumann theorem admits numerous generalizations. Much of the early work of George Mackey was directed at obtaining a formulation of the theory of induced representations developed originally by Frobenius for finite groups to the context of unitary representations of locally compact topological groups.
See also
Oscillator representation
Wigner–Weyl transform
CCR and CAR algebras (for bosons and fermions respectively)
Segal–Bargmann space
Moyal product
Weyl algebra
Stone's theorem on one-parameter unitary groups
Hille–Yosida theorem
C0-semigroup
Notes
References
Rosenberg, Jonathan (2004) "A Selective History of the Stone–von Neumann Theorem" Contemporary Mathematics 365. American Mathematical Society.
Summers, Stephen J. (2001). "On the Stone–von Neumann Uniqueness Theorem and Its Ramifications." In John von Neumann and the foundations of quantum physics, pp. 135-152. Springer, Dordrecht, 2001, online.
Functional analysis
Mathematical quantization
Theorems in functional analysis
Theorems in mathematical physics
John von Neumann
Euler's pump and turbine equation
The Euler pump and turbine equations are the most fundamental equations in the field of turbomachinery. These equations govern the power, efficiencies and other factors that contribute to the design of turbomachines. With the help of these equations the head developed by a pump and the head utilised by a turbine can be easily determined. As the name suggests these equations were formulated by Leonhard Euler in the eighteenth century. These equations can be derived from the moment of momentum equation when applied for a pump or a turbine.
Conservation of angular momentum
A consequence of Newton's second law of mechanics is the conservation of the angular momentum (or the “moment of momentum”) which is fundamental to all turbomachines. Accordingly, the change of the angular momentum is equal to the sum of the external moments. The variation of angular momentum at inlet and outlet, an external torque and friction moments due to shear stresses act on an impeller or a diffuser.
Since no pressure forces are created on cylindrical surfaces in the circumferential direction, it is possible to write:
(1.13)
Velocity triangles
The colored triangles formed by the velocity vectors u, c and w are called velocity triangles and are helpful in explaining how pumps work.
c1 and c2 are the absolute velocities of the fluid at the inlet and outlet respectively.
w1 and w2 are the relative velocities of the fluid with respect to the blade at the inlet and outlet respectively.
u1 and u2 are the velocities of the blade at the inlet and outlet respectively.
ω is the angular velocity.
Figures 'a' and 'b' show impellers with backward and forward-curved vanes respectively.
Euler's pump equation
Based on Eq.(1.13), Euler developed the equation for the pressure head created by an impeller:
Yth = g·Ht = u2·cu2 − u1·cu1   (1)
Yth = (c2² − c1²)/2 + (u2² − u1²)/2 + (w1² − w2²)/2   (2)
Yth : theoretical specific supply; Ht : theoretical head pressure; g: gravitational acceleration; cu1 and cu2 are the tangential components of the absolute velocity at the inlet and outlet.
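A minimal numerical sketch of Eq. (1) (all blade radii, the rotational speed and the tangential flow components below are illustrative values, not data from the text):

```python
import math

g = 9.81  # m/s^2

def euler_head(omega, r1, r2, cu1, cu2):
    """Theoretical head Ht from Euler's pump equation:
    g*Ht = u2*cu2 - u1*cu1, with blade speeds u = omega * r."""
    u1, u2 = omega * r1, omega * r2
    return (u2 * cu2 - u1 * cu1) / g

# Example: impeller at 1500 rpm, inlet/outlet radii 5 cm / 15 cm,
# no inlet swirl (cu1 = 0), outlet tangential velocity component 12 m/s
omega = 1500 * 2 * math.pi / 60
print(round(euler_head(omega, 0.05, 0.15, 0.0, 12.0), 1), "m")  # ~28.8 m
```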
For the case of a Pelton turbine the static component of the head is zero, hence the equation reduces to Yth = (c1² − c2²)/2.
Usage
Euler’s pump and turbine equations can be used to predict the effect that changing the impeller geometry has on the head. Qualitative estimations can be made from the impeller geometry about the performance of the turbine/pump.
This equation can be written as rothalpy invariance:
where the rothalpy I is constant across the rotor blade.
See also
Euler equations (fluid dynamics)
List of topics named after Leonhard Euler
Rothalpy
References
Turbines
Pumps
Gas compressors
Ventilation fans
Fluid dynamics
Leonhard Euler
Thermodynamic square
The thermodynamic square (also known as the thermodynamic wheel, Guggenheim scheme or Born square) is a mnemonic diagram attributed to Max Born and used to help determine thermodynamic relations. Born presented the thermodynamic square in a 1929 lecture. The symmetry of thermodynamics appears in a paper by F.O. Koenig. The corners represent common conjugate variables while the sides represent thermodynamic potentials. The placement and relation among the variables serves as a key to recall the relations they constitute.
A mnemonic used by students to remember the Maxwell relations (in thermodynamics) is "Good Physicists Have Studied Under Very Fine Teachers", which helps them remember the order of the variables in the square, in clockwise direction. Another mnemonic used here is "Valid Facts and Theoretical Understanding Generate Solutions to Hard Problems", which gives the letter in the normal left-to-right writing direction. Both times A has to be identified with F, another common symbol for Helmholtz free energy. To prevent the need for this switch the following mnemonic is also widely used:"Good Physicists Have Studied Under Very Ambitious Teachers"; another one is Good Physicists Have SUVAT, in reference to the equations of motion. One other useful variation of the mnemonic when the symbol E is used for internal energy instead of U is the following: "Some Hard Problems Go To Finish Very Easy".
Use
Derivatives of thermodynamic potentials
The thermodynamic square is mostly used to compute the derivative of any thermodynamic potential of interest. Suppose for example one desires to compute the derivative of the internal energy . The following procedure should be considered:
Place oneself in the thermodynamic potential of interest, namely U, F, H, or G. In our example, that would be U.
The two opposite corners of the potential of interest represent the coefficients of the overall result. If the coefficient lies on the left hand side of the square, a negative sign should be added. In our example, an intermediate result would be dU = −P[⋅] + T[⋅], with the differentials still to be filled in.
In the opposite corner of each coefficient, you will find the associated differential. In our example, the opposite corner to −P would be V (volume) and the opposite corner to T would be S (entropy). In our example, an interim result would be: dU = −P dV + T dS. Notice that the sign convention will affect only the coefficients, not the differentials.
Finally, always add μ dN, where μ denotes the chemical potential. Therefore, we would have: dU = −P dV + T dS + μ dN.
The Gibbs–Duhem equation can be derived by using this technique. Notice though that the final addition of the differential of the chemical potential has to be generalized.
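Carrying out the same procedure for each of the four sides of the square gives the standard differentials, collected here for reference (N is the particle number and μ the chemical potential; this summary is not reproduced from the original text):

```latex
dU = T\,dS - P\,dV + \mu\,dN \\
dF = -S\,dT - P\,dV + \mu\,dN \\
dH = T\,dS + V\,dP + \mu\,dN \\
dG = -S\,dT + V\,dP + \mu\,dN
```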
Maxwell relations
The thermodynamic square can also be used to find the first-order derivatives in the common Maxwell relations. The following procedure should be considered:
Looking at the four corners of the square and make a shape with the quantities of interest.
Read the shape in two different ways by seeing it as L and ⅃. The L will give one side of the relation and the ⅃ will give the other. Note that the partial derivative is taken along the vertical stem of L (and ⅃) while the last corner is held constant.
Use L to find .
Similarly, use ⅃ to find . Again, notice that the sign convention affects only the variable held constant in the partial derivative, not the differentials.
Finally, use above equations to get the Maxwell relation: .
By rotating the shape (randomly, for example by 90 degrees counterclockwise into a shape) other relations such as:
can be found.
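For reference, reading the square this way in all four orientations yields the four common Maxwell relations (a standard summary, stated here in the usual notation rather than reproduced from the original text):

```latex
\left(\frac{\partial T}{\partial V}\right)_S = -\left(\frac{\partial P}{\partial S}\right)_V, \qquad
\left(\frac{\partial T}{\partial P}\right)_S = \left(\frac{\partial V}{\partial S}\right)_P, \\
\left(\frac{\partial S}{\partial V}\right)_T = \left(\frac{\partial P}{\partial T}\right)_V, \qquad
\left(\frac{\partial S}{\partial P}\right)_T = -\left(\frac{\partial V}{\partial T}\right)_P.
```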
Natural variables of thermodynamic potentials
Finally, the potential at the center of each side is a natural function of the variables at the corners of that side. So, U is a natural function of S and V, and G is a natural function of T and P.
Further reading
Bejan, Adrian. Advanced Engineering Thermodynamics, John Wiley & Sons, 3rd ed., 2006, p. 231 ("star diagram").
References
Science mnemonics
Thermodynamics
Statcoulomb
The franklin (Fr), statcoulomb (statC), or electrostatic unit of charge (esu) is the unit of measurement for electrical charge used in the centimetre–gram–second electrostatic units variant (CGS-ESU) and Gaussian systems of units. It is a derived unit given by
1 statC = 1 dyn1/2⋅cm = 1 g1/2⋅cm3/2⋅s−1.
That is, it is defined so that the proportionality constant in Coulomb's law, when the law is expressed in CGS-ESU quantities, is a dimensionless quantity equal to 1.
It can be converted to the corresponding SI quantity using
The International System of Units uses the coulomb (C) as its unit of electric charge. The conversion between the units coulomb and the statcoulomb depends on the context. The most common contexts are:
For electric charge: 1 C ≘ 2997924580 statC ≈ 3.00×109 statC
For electric flux (ΦD): 1 C ≘ 4π × 2997924580 statC ≈ 3.77×1010 statC
The symbol "≘" ('corresponds to') is used instead of "=" because the two sides are not interchangeable, as discussed below. The numerical part of the conversion factor, 2997924580, is very close to 10 times the numeric value of the speed of light when expressed in the unit metre/second, with a small uncertainty. In the context of electric flux, the SI and CGS units for an electric displacement field (D) are related by:
1 C/m² ≘ 4π × 2997924580 × 10−4 statC/cm²
due to the relation between the metre and the centimetre. The coulomb is an extremely large charge rarely encountered in electrostatics, while the statcoulomb is closer to everyday charges.
Definition and relation to CGS base units
The statcoulomb is defined such that if two stationary spherically symmetric objects each carry a charge of 1 statC and are 1 cm apart, the force of mutual electrical repulsion will be 1 dyne. This repulsion is governed by Coulomb's law, which in the CGS-Gaussian system states:
F = q1q2/r²
where F is the force, q1 and q2 are the two charges, and r is the distance between the charges. Performing dimensional analysis on Coulomb's law, the dimension of electrical charge in CGS must be [mass]1/2 [length]3/2 [time]−1. (This statement is not true in the International System of Quantities upon which the SI is based; see below.) We can be more specific in light of the definition above: Substituting F = 1 dyn, q1 = q2 = 1 statC, and r = 1 cm, we get:
1 statC = 1 dyn1/2⋅cm = 1 g1/2⋅cm3/2⋅s−1
as expected.
Dimensional relation between statcoulomb and coulomb
General incompatibility
Coulomb's law in the Gaussian unit system and the SI are respectively:
F = q1q2/r²  (Gaussian),  F = q1q2/(4πε0r²)  (SI)
Since ε0, the vacuum permittivity, is not dimensionless, the coulomb is not dimensionally equivalent to [mass]1/2 [length]3/2 [time]−1, unlike the statcoulomb. In fact, it is impossible to express the coulomb in terms of mass, length, and time alone.
Consequently, a conversion equation like "1 C = n statC" is misleading: the units on the two sides are not consistent. One cannot freely switch between coulombs and statcoulombs within a formula or equation, as one would freely switch between centimetres and metres. One can, however, find a correspondence between coulombs and statcoulombs in different contexts. As described below, "1 C ≘ 2997924580 statC" when describing the charge of objects. In other words, if a physical object has a charge of 1 C, it also has a charge of 2997924580 statC. Likewise, "1 C ≘ 4π × 2997924580 statC" when describing an electric displacement field flux.
As a unit of charge
The statcoulomb is defined as follows: If two stationary objects each carry a charge of 1 statC and are 1 cm apart in vacuum, they will electrically repel each other with a force of 1 dyne. From this definition, it is straightforward to find an equivalent charge in coulombs. Using the SI equation
F = q²/(4πε0r²)
and setting F = 1 dyn = 10−5 N and r = 1 cm = 10−2 m, and then solving for q, the result is q ≈ 3.34×10−10 C.
Therefore, an object with a CGS charge of 1 statC has a charge of approximately 3.34×10−10 C.
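A minimal numerical check of this correspondence (the CODATA value of ε0 and the variable names are assumptions of this sketch):

```python
import math

eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
F = 1e-5                  # 1 dyne expressed in newtons
r = 1e-2                  # 1 cm expressed in metres

# Solve F = q^2 / (4*pi*eps0*r^2) for q: the SI charge corresponding to 1 statC
q = math.sqrt(4 * math.pi * eps0 * F * r**2)
print(q)        # ~3.336e-10 C, i.e. 1 statC corresponds to about 3.34e-10 C
print(1 / q)    # ~2.998e9,   i.e. 1 C corresponds to about 3.00e9 statC
```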
As a unit of electric displacement field or flux
An electric flux (specifically, a flux of the electric displacement field ) has units of charge: statC in CGS and coulombs in SI. The conversion factor can be derived from Gauss's law:
where
Therefore, the conversion factor for flux and the conversion factor for charge differ by a ratio of 4π:
Notes
Units of electrical charge
Centimetre–gram–second system of units
Anthropogenic cloud
A homogenitus, anthropogenic or artificial cloud is a cloud induced by human activity. Although most clouds covering the sky have a purely natural origin, since the beginning of the Industrial Revolution, the use of fossil fuels and water vapor and other gases emitted by nuclear, thermal and geothermal power plants yield significant alterations of the local weather conditions. These new atmospheric conditions can thus enhance cloud formation.
Various methods have been proposed for creating and utilizing this weather phenomenon. Experiments have also been carried out for various studies. For example, Russian scientists have been studying artificial clouds for more than 50 years. But by far the greatest number of anthropogenic clouds are airplane contrails (condensation trails) and rocket trails.
Anthropogenesis
Three conditions are needed to form an anthropogenic cloud:
The air must be near saturation of its water vapor,
The air must be cooled to the dew point temperature with respect to water (or ice) to condensate (or sublimate) part of the water vapor,
The air must contain condensation nuclei, small solid particles, where condensation/sublimation starts.
The current use of fossil fuels enhances any of these three conditions. First, fossil fuel combustion generates water vapor. Additionally, this combustion also generates the formation of small solid particles that can act as condensation nuclei. Finally, all the combustion processes emit energy that enhance vertical upward movements.
Despite all the processes involving the combustion of fossil fuels, only some human activities, such as, thermal power plants, commercial aircraft or chemical industries modify enough the atmospheric conditions to produce clouds that can use the qualifier homogenitus due to its anthropic origin.
Cloud classification
The International Cloud Atlas published by the World Meteorological Organization compiles the proposal made by Luke Howard at the beginning of the 19th century, and all the subsequent modifications. Each cloud has a name in Latin, and clouds are classified according to their genus, species, and variety:
There are 10 genera (plural of genus) (e.g. cumulus, stratus, etc...).
There is a number of species for these genera that describe the form, the dimensions, internal structure, and type of vertical movement (e.g. stratus nebulosus for stratus covering the whole sky). Species are mutually exclusive.
Species can further be divided into varieties that describe their transparence or their arrangement (e.g. stratus nebulosus opacus for thick stratus covering the whole sky).
Further terms can be added to describe the origin of the cloud. Homogenitus is a suffix that signifies that a cloud originates from human activity. For instance, Cumulus originated by human activity is called Cumulus homogenitus and abbreviated as CUh. If a homogenitus cloud of one genus changes to another genus type, it is termed a homomutatus cloud.
Generating process
The international cloud classification divides the different genera into three main groups of clouds according to their altitude:
High clouds
Middle clouds
Low clouds
Homogenitus clouds can be generated by different sources in the high and low levels.
High homogenitus
Despite the fact that the three genera of high clouds, Cirrus, Cirrocumulus and Cirrostratus, form at the top of the troposphere, far from the earth surface, they may have an anthropogenic origin. In this case, the process that causes their formation is almost always the same: commercial and military aircraft flight. Exhaust products from the combustion of the kerosene (or sometimes gasoline) expelled by engines provide water vapor to this region of the troposphere.
In addition, the strong contrast between the cold air of the high troposphere layers and the warm and moist air ejected by aircraft engines causes rapid deposition of water vapor, forming small ice crystals. This process is also enhanced by the presence of abundant nuclei of condensation produced as a result of combustion. These clouds are commonly known as condensation trails (contrails), and are initially linear cirrus clouds that could be called Cirrus homogenitus (Cih). The large temperature difference between the air exhausted and the ambient air generates small-scale convection processes, which favor the evolution of the condensation trails to Cirrocumulus homogenitus (Cch).
Depending on the atmospheric conditions at the upper part of the troposphere, where the plane is flying, these high clouds rapidly disappear or persist. When the air is dry and stable, the water rapidly evaporates inside the contrails and can only be observed up to several hundreds of meters from the plane. On the other hand, if humidity is high enough, there exists an ice oversaturation, and the homogenitus gets wider and can persist for hours. In the latter case, depending on the wind conditions, Cch may evolve to Cirrus homogenitus (Cih) or Cirrostratus homogenitus (Csh). The existence and persistence of these three types of high anthropogenic clouds may indicate the approximation of air stability. In some cases, when there is a large density of air traffic, these high homogenitus may inhibit the formation of natural high clouds, because the contrails capture most of the water vapor.
Low homogenitus
The lowest part of the atmosphere is the region most influenced by human activity, through the emission of water vapor, warm air, and condensation nuclei. When the atmosphere is stable, the additional contribution of warm and moist air from emissions enhances fog formation or produces layers of Stratus homogenitus (Sth). If the air is not stable, this warm and moist air emitted by human activities creates a convective movement that can reach the lifted condensation level, producing an anthropogenic cumulus cloud, or Cumulus homogenitus (Cuh). This type of clouds may be also observed over the polluted air covering some cities and industrial areas under high-pressure conditions.
Stratocumulus homogenitus (Sch) are anthropogenic clouds that may be formed by the evolution of Sth in a slightly unstable atmosphere or of Cuh in a stable atmosphere.
Finally, the large, towering Cumulonimbus (Cb) presents such a great vertical development that only in some particular cases can they be created by anthropic causes. For instance, large fires may cause the formation of flammagenitus clouds, which can evolve to Cumulonimbus flammagenitus (CbFg, or CbFgh if anthropogenic); very large explosions, such as nuclear explosions, produce mushroom clouds, a distinctive subtype of cumulonimbus flammagenitus.
Experiments
Anthropogenic clouds can be generated in the laboratory or in situ to study their properties or to use them for other purposes. A cloud chamber is a sealed environment containing a supersaturated vapor of water or alcohol. When a charged particle (for example, an alpha or beta particle) interacts with the mixture, the fluid is ionized. The resulting ions act as condensation nuclei, around which a mist will form (because the mixture is on the point of condensation). Cloud seeding, a form of weather modification, is the attempt to change the amount or type of precipitation that falls from clouds, by dispersing substances into the air that serve as cloud condensation or ice nuclei, which alter the microphysical processes within the cloud. The usual intent is to increase precipitation (rain or snow), but hail and fog suppression are also widely practiced at airports.
Numerous experiments have been done with those two methods in the troposphere. At higher altitudes, NASA studied inducing noctilucent clouds in 1960 and 2009. In 1984 satellites from three nations took part in an artificial cloud experiment as part of a study of solar winds and comets. In 1969, a European satellite released and ignited barium and copper oxide at an altitude of 43,000 miles in space to create a 2,000 mile mauve and green plume visible for 22 minutes. It was part of a study of magnetic and electric fields.
Plans to create artificial clouds over soccer tournaments in the Middle East were suggested in 2011 as a way to help shade and cool down Qatar's 2022 FIFA World Cup.
Influence on climate
There are many studies dealing with the importance and effects of high anthropic clouds (Penner, 1999; Minnis et al., 1999, 2003–2004; Marquart et al., 2002–2003; Stuber and Forster, 2006, 2007), but not about anthropic clouds in general. For the particular case of Cih due to contrails, the IPCC estimates a positive radiative forcing of around 0.01 Wm−2.
When annotating the weather data, using the suffix that indicates the cloud origin allows differentiating these clouds from the ones with natural origin. Once this notation is established, after several years of observations, the influence of homogenitus on earth climate will be clearly analyzed.
See also
Contrail
Chemtrail conspiracy theory
Environmental impact of aviation
Global dimming
References
Bibliography
Howard, L. 1804: On the modification of clouds and the principles of their production, suspension and destruction: being the substance of an essay read before the Askesian Society in session 1802–03. J. Taylor. London.
IPCC 2007 AR4 WGI WGIII.
Marquart, S, and B. Mayer, 2002: Towards a reliable GCM estimation on contrail radiative forcing. Geophys. Res. Lett., 29, 1179, doi:10.1029/2001GL014075.
Marquart S., Ponater M., Mager F., and Sausen R., 2003: Future Development of contrail Cover, Optical Depth, and Radiative Forcing: Impacts of Increasing Air Traffic and Climate Change. Journal of climatology, 16, 2890–2904
Mazon J, Costa M, Pino D, Lorente J, 2012: Clouds caused by human activities. Weather, 67, 11, 302–306.
Meteorological glossary of American meteorological Society: http://glossary.ametsoc.org/?p=1&query=pyrocumulus&submit=Search
Minnis P., Kirk J. and Nordeen L., Weaver S., 2003. Contrail Frequency over the United States from Surface Observations. American Meteorology Society, 16, 3447–3462
Minnis, P., J. Ayers, R. Palikonda, and D. Phan, 2004: Contrails, cirrus trends, and climate. J. Climate, 14, 555–561.
Norris, J. R., 1999: On trends and possible artifacts in global ocean cloud cover between 1952 and 1995. J. Climate, 12, 1864–1870.
Penner, J., D. Lister, D. Griggs, D. Dokken, and M. McFarland, 1999: Special Report on Aviation and the Global Atmosphere. Cambridge University Press, 373 pp.
Stuber, N., and P. Forster, 2007: The impact of diurnal variations of air traffic on contrail radiative forcing. Atmos. Chem. Phys., 7, 3153–3162.
Stuber, N., and P. Forster, G. Rädel, and K. Shine, 2006: The importance of the diurnal and annual cycle of air traffic for contrail radiative forcing. Nature, 441, 864–867.
World Meteorological Organization (1975). International Cloud Atlas: Manual on the observation of clouds and other meteors. WMO-No. 407. I (text). Geneva: World Meteorological Organization. .
World Meteorological Organization (1987). International Cloud Atlas: Manual on the observation of clouds and other meteors. WMO-No. 407. II (plates). Geneva: World Meteorological Organization. pp. 196. .
Cloud types
Weather modification
Power transmission
Power transmission is the movement of energy from its place of generation to a location where it is applied to perform useful work.
Power is defined formally as units of energy per unit time. In SI units:
1 W = 1 J/s (one watt is one joule per second).
Since the development of technology, transmission and storage systems have been of immense interest to technologists and technology users.
Electrical power
With the widespread establishment of electrical grids, power transmission is usually associated most with electric power transmission. Alternating current is normally preferred as its voltage may be easily stepped up by a transformer in order to minimize resistive loss in the conductors used to transmit power over great distances; another set of transformers is required to step it back down to safer or more usable voltage levels at destination.
Power transmission is usually performed with overhead lines as this is the most economical way to do so. Underground transmission by high-voltage cables is chosen in crowded urban areas and in high-voltage direct-current (HVDC) submarine connections.
Power might also be transmitted by changing electromagnetic fields or by radio waves; microwave energy may be carried efficiently over short distances by a waveguide or in free space via wireless power transfer.
Mechanical power
Electrical power transmission has replaced mechanical power transmission in all but the very shortest distances.
From the 16th century through the Industrial Revolution to the end of the 19th century, mechanical power transmission was the norm. The oldest long-distance power transmission technology involved systems of push-rods or jerker lines (stängenkunst or feldstängen) connecting waterwheels to distant mine-drainage and brine-well pumps. A surviving example from 1780 exists at Bad Kösen that transmits power approximately 200 meters from a waterwheel to a salt well, and from there, an additional 150 meters to a brine evaporator. This technology survived into the 21st century in a handful of oilfields in the US, transmitting power from a central pumping engine to the numerous pump-jacks in the oil field.
Mechanical power may be transmitted directly using a solid structure such as a driveshaft; transmission gears can adjust the amount of torque or force vs. speed in much the same way an electrical transformer adjusts voltage vs current. Factories were fitted with overhead line shafts providing rotary power. Short line-shaft systems were described by Agricola, connecting a waterwheel to numerous ore-processing machines. While the machines described by Agricola used geared connections from the shafts to the machinery, by the 19th century, drivebelts would become the norm for linking individual machines to the line shafts. One mid 19th century factory had 1,948 feet of line shafting with 541 pulleys.
Hydraulic systems use liquid under pressure to transmit power; canals and hydroelectric power generation facilities harness natural water power to lift ships or generate electricity. Pumping water or pushing mass uphill (for example, with windmill pumps) is one possible means of energy storage. London had a hydraulic network powered by five pumping stations operated by the London Hydraulic Power Company, with a total power of 5 MW.
Pneumatic systems use gasses under pressure to transmit power; compressed air is commonly used to operate pneumatic tools in factories and repair garages. A pneumatic wrench (for instance) is used to remove and install automotive tires far more quickly than could be done with standard manual hand tools. A pneumatic system was proposed by proponents of Edison's direct current as the basis of the power grid. Compressed air generated at Niagara Falls would drive far away generators of DC power. The war of the currents ended with alternating current (AC) as the only means of long distance power transmission.
Thermal power
Thermal power can be transported in pipelines containing a high heat capacity fluid such as oil or water as used in district heating systems, or by physically transporting material items, such as bottle cars, or in the ice trade.
Chemicals and fuels
While not technically power transmission, energy is commonly transported by shipping chemical or nuclear fuels. Possible artificial fuels include radioactive isotopes, wood alcohol, grain alcohol, methane, synthetic gas, hydrogen gas (H2), cryogenic gas, and liquefied natural gas (LNG).
See also
Distributed generation
List of energy storage power plants
References
Electric power transmission
Vacuum permittivity
Vacuum permittivity, commonly denoted ε0 (pronounced "epsilon nought" or "epsilon zero"), is the value of the absolute dielectric permittivity of classical vacuum. It may also be referred to as the permittivity of free space, the electric constant, or the distributed capacitance of the vacuum. It is an ideal (baseline) physical constant. Its CODATA value is:
ε0 ≈ 8.8541878128(13)×10−12 F⋅m−1 (farads per metre).
It is a measure of how dense of an electric field is "permitted" to form in response to electric charges and relates the units for electric charge to mechanical quantities such as length and force. For example, the force between two separated electric charges with spherical symmetry (in the vacuum of classical electromagnetism) is given by Coulomb's law:
FC = q1q2/(4πε0r²)
Here, q1 and q2 are the charges, r is the distance between their centres, and the value of the constant fraction 1/(4πε0) is approximately 8.988×109 N⋅m2⋅C−2. Likewise, ε0 appears in Maxwell's equations, which describe the properties of electric and magnetic fields and electromagnetic radiation, and relate them to their sources. In electrical engineering, ε0 itself is used as a unit to quantify the permittivity of various dielectric materials.
Value
The value of ε0 is defined by the formula
ε0 = 1/(μ0c²)
where c is the defined value for the speed of light in classical vacuum in SI units, and μ0 is the parameter that international standards organizations refer to as the magnetic constant (also called vacuum permeability or the permeability of free space). Since μ0 has an approximate value 4π × 10−7 H/m, and c has the defined value 299792458 m⋅s−1, it follows that ε0 can be expressed numerically as
ε0 ≈ 8.854 × 10−12 F⋅m−1.
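A minimal numerical sketch of that relation (treating μ0 as exactly 4π × 10−7 H/m, the pre-2019 convention, is an assumption made here for illustration):

```python
import math

c = 299_792_458.0          # speed of light, m/s (exact by definition)
mu0 = 4 * math.pi * 1e-7   # vacuum permeability, H/m (exact before the 2019 SI revision)

eps0 = 1.0 / (mu0 * c**2)  # epsilon_0 = 1 / (mu_0 * c^2)
print(eps0)                          # ~8.854187817e-12 F/m
print(1.0 / (4 * math.pi * eps0))    # Coulomb constant, ~8.988e9 N*m^2/C^2
```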
The historical origins of the electric constant ε0, and its value, are explained in more detail below.
Revision of the SI
The ampere was redefined by defining the elementary charge as an exact number of coulombs as from 20 May 2019, with the effect that the vacuum electric permittivity no longer has an exactly determined value in SI units. The value of the electron charge became a numerically defined quantity, not measured, making μ0 a measured quantity. Consequently, ε0 is not exact. As before, it is defined by the equation ε0 = 1/(μ0c²), and is thus determined by the value of μ0, the magnetic vacuum permeability, which in turn is determined by the experimentally determined dimensionless fine-structure constant α:
ε0 = e²/(2αhc)
with e being the elementary charge, h being the Planck constant, and c being the speed of light in vacuum, each with exactly defined values. The relative uncertainty in the value of ε0 is therefore the same as that for the dimensionless fine-structure constant α.
Terminology
Historically, the parameter ε0 has been known by many different names. The terms "vacuum permittivity" or its variants, such as "permittivity in/of vacuum", "permittivity of empty space", or "permittivity of free space" are widespread. Standards organizations also use "electric constant" as a term for this quantity.
Another historical synonym was "dielectric constant of vacuum", as "dielectric constant" was sometimes used in the past for the absolute permittivity. However, in modern usage "dielectric constant" typically refers exclusively to a relative permittivity ε/ε0 and even this usage is considered "obsolete" by some standards bodies in favor of relative static permittivity. Hence, the term "dielectric constant of vacuum" for the electric constant ε0 is considered obsolete by most modern authors, although occasional examples of continuing usage can be found.
As for notation, the constant can be denoted by either ε0 or 0, using either of the common glyphs for the letter epsilon.
Historical origin of the parameter ε0
As indicated above, the parameter ε0 is a measurement-system constant. Its presence in the equations now used to define electromagnetic quantities is the result of the so-called "rationalization" process described below. But the method of allocating a value to it is a consequence of the result that Maxwell's equations predict that, in free space, electromagnetic waves move with the speed of light. Understanding why ε0 has the value it does requires a brief understanding of the history.
Rationalization of units
The experiments of Coulomb and others showed that the force F between two, equal, point-like "amounts" of electricity that are situated a distance r apart in free space, should be given by a formula that has the form
where Q is a quantity that represents the amount of electricity present at each of the two points, and ke depends on the units. If one is starting with no constraints, then the value of ke may be chosen arbitrarily. For each different choice of ke there is a different "interpretation" of Q: to avoid confusion, each different "interpretation" has to be allocated a distinctive name and symbol.
In one of the systems of equations and units agreed in the late 19th century, called the "centimetre–gram–second electrostatic system of units" (the cgs esu system), the constant ke was taken equal to 1, and a quantity now called "Gaussian electric charge" qs was defined by the resulting equation
The unit of Gaussian charge, the statcoulomb, is such that two units, at a distance of 1 centimetre apart, repel each other with a force equal to the cgs unit of force, the dyne. Thus, the unit of Gaussian charge can also be written 1 dyne1/2⋅cm. "Gaussian electric charge" is not the same mathematical quantity as modern (MKS and subsequently the SI) electric charge and is not measured in coulombs.
The idea subsequently developed that it would be better, in situations of spherical geometry, to include a factor 4π in equations like Coulomb's law, and write it in the form:
This idea is called "rationalization". The quantities qs′ and ke′ are not the same as those in the older convention. Putting ke′ = 1 generates a unit of electricity of different size, but it still has the same dimensions as the cgs esu system.
The next step was to treat the quantity representing "amount of electricity" as a fundamental quantity in its own right, denoted by the symbol q, and to write Coulomb's law in its modern form:
The system of equations thus generated is known as the rationalized metre–kilogram–second (RMKS) equation system, or "metre–kilogram–second–ampere (MKSA)" equation system. The new quantity q is given the name "RMKS electric charge", or (nowadays) just "electric charge". The quantity qs used in the old cgs esu system is related to the new quantity q by qs = q/√(4πε0).
In the 2019 revision of the SI, the elementary charge is fixed at 1.602176634×10−19 C and the value of the vacuum permittivity must be determined experimentally.
Determination of a value for ε0
One now adds the requirement that one wants force to be measured in newtons, distance in metres, and charge to be measured in the engineers' practical unit, the coulomb, which is defined as the charge accumulated when a current of 1 ampere flows for one second. This shows that the parameter ε0 should be allocated the unit C2⋅N−1⋅m−2 (or an equivalent unit – in practice, farad per metre).
In order to establish the numerical value of ε0, one makes use of the fact that if one uses the rationalized forms of Coulomb's law and Ampère's force law (and other ideas) to develop Maxwell's equations, then the relationship stated above is found to exist between ε0, μ0 and c0. In principle, one has a choice of deciding whether to make the coulomb or the ampere the fundamental unit of electricity and magnetism. The decision was taken internationally to use the ampere. This means that the value of ε0 is determined by the values of c0 and μ0, as stated above. For a brief explanation of how the value of μ0 is decided, see Vacuum permeability.
Permittivity of real media
By convention, the electric constant ε0 appears in the relationship that defines the electric displacement field D in terms of the electric field E and classical electrical polarization density P of the medium. In general, this relationship has the form:
D = ε0E + P
For a linear dielectric, P is assumed to be proportional to E, but a delayed response is permitted and a spatially non-local response, so one has:
In the event that nonlocality and delay of response are not important, the result is:
D = εE = εrε0E
where ε is the permittivity and εr the relative static permittivity. In the vacuum of classical electromagnetism, the polarization P = 0, so D = ε0E and εr = 1.
See also
Casimir effect
Coulomb's law
Electromagnetic wave equation
ISO 31-5
Mathematical descriptions of the electromagnetic field
Relative permittivity
Sinusoidal plane-wave solutions of the electromagnetic wave equation
Wave impedance
Vacuum permeability
Notes
Electromagnetism
Fundamental constants
Mean squared displacement
In statistical mechanics, the mean squared displacement (MSD, also mean square displacement, average squared displacement, or mean square fluctuation) is a measure of the deviation of the position of a particle with respect to a reference position over time. It is the most common measure of the spatial extent of random motion, and can be thought of as measuring the portion of the system "explored" by the random walker. In the realm of biophysics and environmental engineering, the Mean Squared Displacement is measured over time to determine if a particle is spreading slowly due to diffusion, or if an advective force is also contributing. Another relevant concept, the variance-related diameter (VRD, which is twice the square root of MSD), is also used in studying the transportation and mixing phenomena in the realm of environmental engineering. It prominently appears in the Debye–Waller factor (describing vibrations within the solid state) and in the Langevin equation (describing diffusion of a Brownian particle).
The MSD at time t is defined as an ensemble average:
MSD ≡ ⟨|x(t) − x(0)|²⟩ = (1/N) Σi=1..N |x(i)(t) − x(i)(0)|²
where N is the number of particles to be averaged, vector x(i)(0) is the reference position of the i-th particle, and vector x(i)(t) is the position of the i-th particle at time t.
Derivation of the MSD for a Brownian particle in 1D
The probability density function (PDF) for a particle in one dimension is found by solving the one-dimensional diffusion equation. (This equation states that the position probability density diffuses out over time - this is the method used by Einstein to describe a Brownian particle. Another method to describe the motion of a Brownian particle was described by Langevin, now known for its namesake as the Langevin equation.)
given the initial condition P(x, t = 0 | x0) = δ(x − x0); where x is the position of the particle at some given time, x0 is the tagged particle's initial position, and D is the diffusion constant with the S.I. units m²s−1 (an indirect measure of the particle's speed). The bar in the argument of the instantaneous probability refers to the conditional probability. The diffusion equation states that the speed at which the probability for finding the particle at x is position dependent.
The differential equation above takes the form of the 1D heat equation. The one-dimensional PDF below is the Green's function of the heat equation (also known as the heat kernel in mathematics):
P(x, t | x0) = (1/√(4πDt)) exp(−(x − x0)²/(4Dt))
This states that the probability of finding the particle at x is Gaussian, and the width of the Gaussian is time dependent. More specifically the full width at half maximum (FWHM) (technically/pedantically, this is actually the full duration at half maximum as the independent variable is time) scales like √t.
Using the PDF one is able to derive the average of a given function, , at time :
where the average is taken over all space (or any applicable variable).
The Mean squared displacement is defined as
expanding out the ensemble average
dropping the explicit time dependence notation for clarity. To find the MSD, one can take one of two paths: one can explicitly calculate and , then plug the result back into the definition of the MSD; or one could find the moment-generating function, an extremely useful, and general function when dealing with probability densities. The moment-generating function describes the moment of the PDF. The first moment of the displacement PDF shown above is simply the mean: . The second moment is given as .
So then, to find the moment-generating function it is convenient to introduce the characteristic function:
one can expand out the exponential in the above equation to give
By taking the natural log of the characteristic function, a new function is produced, the cumulant generating function,
where is the cumulant of . The first two cumulants are related to the first two moments, , via and where the second cumulant is the so-called variance, . With these definitions accounted for one can investigate the moments of the Brownian particle PDF,
by completing the square and knowing the total area under a Gaussian one arrives at
Taking the natural log, and comparing powers of to the cumulant generating function, the first cumulant is
which is as expected, namely that the mean position is the Gaussian centre. The second cumulant is
the factor 2 comes from the factorial factor in the denominator of the cumulant generating function. From this, the second moment is calculated,
Plugging the results for the first and second moments back, one finds the MSD, ⟨(x − x0)²⟩ = 2Dt.
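A minimal Monte-Carlo sketch of this result (the step count, diffusion constant and time step are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
D, dt = 1.0, 0.01            # diffusion constant and time step (illustrative)
n_particles, n_steps = 10_000, 500

# 1D Brownian paths: each increment is Gaussian with variance 2*D*dt
steps = rng.normal(0.0, np.sqrt(2 * D * dt), size=(n_particles, n_steps))
x = np.cumsum(steps, axis=1)                  # positions relative to x0 = 0

msd = np.mean(x**2, axis=0)                   # ensemble average at each time
t = dt * np.arange(1, n_steps + 1)
print(msd[-1], 2 * D * t[-1])                 # both ~10, confirming MSD ~ 2*D*t
```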
Derivation for n dimensions
For a Brownian particle in higher-dimensional Euclidean space, its position is represented by a vector x = (x1, x2, …, xn), where the Cartesian coordinates x1, x2, …, xn are statistically independent.
The n-variable probability distribution function is the product of the fundamental solutions in each variable; i.e.,
P(x, t | x0) = P(x1, t) P(x2, t) ⋯ P(xn, t) = (4πDt)^(−n/2) exp(−|x − x0|² / (4Dt)).
The mean squared displacement is defined as
MSD ≡ ⟨|x − x0|²⟩ = ⟨(x1 − x1,0)² + (x2 − x2,0)² + ⋯ + (xn − xn,0)²⟩.
Since all the coordinates are independent, their deviations from the reference position are also independent. Therefore,
MSD = ⟨(x1 − x1,0)²⟩ + ⟨(x2 − x2,0)²⟩ + ⋯ + ⟨(xn − xn,0)²⟩.
For each coordinate, following the same derivation as in the 1D scenario above, one obtains the MSD in that dimension as 2Dt. Hence, the final result of the mean squared displacement in n-dimensional Brownian motion is:
MSD = 2nDt.
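To make the 2nDt result tangible, here is a small simulation sketch that checks the n-dimensional scaling numerically (illustrative only; the per-step variance 2DΔt and all parameter values are assumptions of the example):

```python
import numpy as np

def simulate_msd(n_dim, D, dt, n_steps, n_particles, seed=0):
    """Simulate free Brownian motion and return the ensemble MSD at the final time."""
    rng = np.random.default_rng(seed)
    # Each Cartesian increment is Gaussian with variance 2*D*dt, independently per axis.
    steps = rng.normal(scale=np.sqrt(2 * D * dt),
                       size=(n_particles, n_steps, n_dim))
    displacement = steps.sum(axis=1)               # x(t) - x(0)
    return np.mean(np.sum(displacement ** 2, axis=1))

D, dt, n_steps = 1.0, 0.01, 1000                   # so t = 10
t = dt * n_steps
for n in (1, 2, 3):
    msd = simulate_msd(n, D, dt, n_steps, n_particles=20000)
    print(f"n={n}: simulated MSD = {msd:.2f}, theory 2nDt = {2 * n * D * t:.2f}")
```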
Definition of MSD for time lags
In the measurements of single particle tracking (SPT), displacements can be defined for different time intervals between positions (also called time lags or lag times). SPT yields the trajectory r(t) = [x(t), y(t)], representing a particle undergoing two-dimensional diffusion.
Assuming that the trajectory of a single particle is measured at time points iΔt, where i = 1, …, N and Δt is any fixed time step, then there are many non-trivial forward displacements between pairs of time points i < j (the cases i = j are not considered), which correspond to time intervals (or time lags) τ = (j − i)Δt. Hence, there are many distinct displacements for small time lags and very few for large time lags, and the MSD can be defined as an average quantity over time lags:
MSD(τ = nΔt) = (1 / (N − n)) Σ_{i=1}^{N−n} |r((i + n)Δt) − r(iΔt)|².
Similarly, for a continuous time series r(t) measured over a total duration T:
MSD(τ) = (1 / (T − τ)) ∫_0^{T−τ} |r(t + τ) − r(t)|² dt.
It is clear that choosing a large T (or N) and a small τ improves statistical performance. This technique allows one to estimate the behaviour of whole ensembles by measuring just a single trajectory, but note that it is only valid for systems with ergodicity, like classical Brownian motion (BM), fractional Brownian motion (fBM), and the continuous-time random walk (CTRW) with a limited distribution of waiting times; in these cases the time-averaged MSD defined above equals the ensemble-averaged MSD. However, for non-ergodic systems, like the CTRW with an unlimited distribution of waiting times, a waiting time can go to infinity at some point; in this case the time-averaged MSD strongly depends on T, and the time and ensemble averages no longer equal each other. In order to get better asymptotics, one introduces the ensemble average of the time-averaged MSD:
⟨MSD(τ)⟩ = (1 / N) Σ_{i=1}^{N} MSD_i(τ).
Here the angle brackets denote averaging over the N measured trajectories (the ensemble).
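A common way to implement the time-lag average in single-particle-tracking analysis is sketched below (the trajectory array and its shape are assumptions of this example, not a prescribed format):

```python
import numpy as np

def time_averaged_msd(trajectory, max_lag=None):
    """Time-averaged MSD of one trajectory stored as an array of shape (N_times, n_dims).

    For each lag n, averages |r(i+n) - r(i)|^2 over all N - n start points.
    """
    n_times = len(trajectory)
    if max_lag is None:
        max_lag = n_times // 4                     # large lags have poor statistics
    msd = np.empty(max_lag)
    for n in range(1, max_lag + 1):
        disp = trajectory[n:] - trajectory[:-n]
        msd[n - 1] = np.mean(np.sum(disp ** 2, axis=1))
    return msd

def ensemble_of_time_averages(trajectories, max_lag):
    """Ensemble average of the time-averaged MSD over a list of trajectories."""
    return np.mean([time_averaged_msd(tr, max_lag) for tr in trajectories], axis=0)
```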
Also, one can easily derive the autocorrelation function from the MSD:
⟨x(t1) x(t2)⟩ = (1/2) [MSD(t1) + MSD(t2) − MSD(|t1 − t2|)],
where ⟨x(t1) x(t2)⟩ is the so-called autocorrelation function for the position of the particles.
MSD in experiments
Experimental methods to determine MSDs include neutron scattering and photon correlation spectroscopy.
The linear relationship between the MSD and time t allows for graphical methods to determine the diffusivity constant D. This is especially useful for rough calculations of the diffusivity in environmental systems. In some atmospheric dispersion models, the relationship between MSD and time t is not linear. Instead, a series of power laws empirically representing the variation of the square root of the MSD with downwind distance is commonly used in studying the dispersion phenomenon.
See also
Root-mean-square deviation of atomic positions: the average is taken over a group of particles at a single time, whereas the MSD is taken for a single particle over an interval of time
Mean squared error
References
Statistical mechanics
Statistical deviation and dispersion
Motion (physics)
Electrodynamic tether
Electrodynamic tethers (EDTs) are long conducting wires, such as one deployed from a tether satellite, which can operate on electromagnetic principles as generators, by converting their kinetic energy to electrical energy, or as motors, converting electrical energy to kinetic energy. Electric potential is generated across a conductive tether by its motion through a planet's magnetic field.
A number of missions have demonstrated electrodynamic tethers in space, most notably the TSS-1, TSS-1R, and Plasma Motor Generator (PMG) experiments.
Tether propulsion
As part of a tether propulsion system, craft can use long, strong conductors (though not all tethers are conductive) to change the orbits of spacecraft. This has the potential to make space travel significantly cheaper. When direct current flows through the tether, the planet's magnetic field exerts a Lorentz force on the current-carrying wire, and the tether in turn exerts a force on the vehicle. It can be used either to accelerate or brake an orbiting spacecraft.
In 2012 Star Technology and Research was awarded a $1.9 million contract to qualify a tether propulsion system for orbital debris removal.
Uses for ED tethers
Over the years, numerous applications for electrodynamic tethers have been identified for potential use in industry, government, and scientific exploration. The table below is a summary of some of the potential applications proposed thus far. Some of these applications are general concepts, while others are well-defined systems. Many of these concepts overlap into other areas; however, they are simply placed under the most appropriate heading for the purposes of this table. All of the applications mentioned in the table are elaborated upon in the Tethers Handbook. Three fundamental concepts that tethers possess are gravity gradients, momentum exchange, and electrodynamics. Potential tether applications can be seen below:
ISS reboost
EDT has been proposed to maintain the ISS orbit and save the expense of chemical propellant reboosts. It could improve the quality and duration of microgravity conditions.
Electrodynamic tether fundamentals
The choice of the metal conductor to be used in an electrodynamic tether is determined by a variety of factors. Primary factors usually include high electrical conductivity, and low density. Secondary factors, depending on the application, include cost, strength, and melting point.
An electromotive force (EMF) is generated across a tether element as it moves relative to a magnetic field. The EMF is given by Faraday's law of induction:
V_emf = ∫_0^L (v × B) · dL,
where v is the velocity of the tether element relative to the plasma and magnetic field, B is the magnetic field vector, and the integral runs along the tether length L. Without loss of generality, it is assumed the tether system is in Earth orbit and it moves relative to Earth's magnetic field. Similarly, if current flows in the tether element, a force can be generated in accordance with the Lorentz force equation
F = ∫_0^L I dL × B.
In self-powered mode (deorbit mode), this EMF can be used by the tether system to drive the current through the tether and other electrical loads (e.g. resistors, batteries), emit electrons at the emitting end, or collect electrons at the opposite end. In boost mode, on-board power supplies must overcome this motional EMF to drive current in the opposite direction, thus creating a force in the opposite direction, as seen in the figure below, and boosting the system.
Take, for example, the NASA Propulsive Small Expendable Deployer System (ProSEDS) mission, as seen in the figure above. At 300 km altitude, the Earth's magnetic field, in the north-south direction, is approximately 0.18–0.32 gauss up to ~40° inclination, and the orbital velocity with respect to the local plasma is about 7500 m/s. This results in a V_emf range of 35–250 V/km along the 5 km length of tether. This EMF dictates the potential difference across the bare tether, which controls where electrons are collected and/or repelled. Here, the ProSEDS de-boost tether system is configured to enable electron collection to the positively biased, higher-altitude section of the bare tether, with electrons returned to the ionosphere at the lower-altitude end. This flow of electrons through the length of the tether in the presence of the Earth's magnetic field creates a force that produces a drag thrust that helps de-orbit the system, as given by the above equation.
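A rough numerical check of these figures can be written in a few lines (a sketch assuming perpendicular geometry between the velocity and the field; the exact ProSEDS geometry is not reproduced here):

```python
# Motional EMF per unit length: E = |v x B|, maximal when v and B are perpendicular.
v_orbital = 7500.0                 # m/s, velocity relative to the local plasma
B_low, B_high = 0.18e-4, 0.32e-4   # tesla (0.18-0.32 gauss)
L_tether = 5000.0                  # m

for B in (B_low, B_high):
    emf_per_km = v_orbital * B * 1000.0
    print(f"B = {B*1e4:.2f} G -> about {emf_per_km:.0f} V/km, "
          f"{v_orbital * B * L_tether:.0f} V over 5 km")

# Perpendicular geometry gives roughly 135-240 V/km; smaller angles between v and B
# reduce this toward the lower end of the 35-250 V/km range quoted above.
```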
The boost mode is similar to the de-orbit mode, except for the fact that a high-voltage power supply (HVPS) is also inserted in series with the tether system between the tether and the higher positive potential end. The power supply voltage must be greater than the EMF and of the opposite polarity. This drives the current in the opposite direction, which in turn causes the higher-altitude end to be negatively charged, while the lower-altitude end is positively charged (assuming a standard east-to-west orbit around Earth).
To further emphasize the de-boosting phenomenon, a schematic sketch of a bare tether system with no insulation (all bare) can be seen in the figure below.
The top of the diagram, point A, represents the electron collection end. The bottom of the tether, point C, is the electron emission end. Similarly, the diagram labels the potential differences from the respective tether ends to the plasma, as well as the potential at any point along the tether with respect to the plasma. Finally, point B is the point at which the potential of the tether is equal to that of the plasma. The location of point B will vary depending on the equilibrium state of the tether, which is determined by the solution of Kirchhoff's voltage law (KVL)
and Kirchhoff's current law (KCL)
along the tether. Here the three current terms describe, respectively, the current gained from point A to B, the current lost from point B to C, and the current lost at point C.
Since the current is continuously changing along the bare length of the tether, the potential loss due to the resistive nature of the wire must be accounted for incrementally: along an infinitesimal section of tether, the resistance multiplied by the current traveling across that section is the resistive potential loss.
After evaluating KVL and KCL for the system, the results will yield a current and potential profile along the tether, as seen in the figure above. This diagram shows that, from point A of the tether down to point B, there is a positive potential bias, which increases the collected current. Below that point, the potential bias becomes negative and the collection of ion current begins. Since it takes a much greater potential difference to collect an equivalent amount of ion current (for a given area), the total current in the tether is reduced by a smaller amount. Then, at point C, the remaining current in the system is drawn through the resistive load, emitted from an electron emissive device, and finally passed across the plasma sheath. The KVL voltage loop is then closed in the ionosphere, where the potential difference is effectively zero.
Due to the nature of bare EDTs, it is often not optimal to have the entire tether bare. In order to maximize the thrusting capability of the system, a significant portion of the tether should be insulated. This insulation amount depends on a number of effects, some of which are plasma density, the tether length and width, the orbiting velocity, and the Earth's magnetic flux density.
Tethers as generators
An electrodynamic tether is attached to an object, the tether being oriented at an angle to the local vertical between the object and a planet with a magnetic field. The tether's far end can be left bare, making electrical contact with the ionosphere. When the tether intersects the planet's magnetic field, it generates a current, and thereby converts some of the orbiting body's kinetic energy to electrical energy. Functionally, electrons flow from the space plasma into the conductive tether, are passed through a resistive load in a control unit, and are emitted into the space plasma by an electron emitter as free electrons. As a result of this process, an electrodynamic force acts on the tether and attached object, slowing their orbital motion. In a loose sense, the process can be likened to a conventional windmill: the drag force of a resistive medium (air or, in this case, the magnetosphere) is used to convert the kinetic energy of relative motion (wind, or the satellite's momentum) into electricity. In principle, compact high-current tether power generators are possible and, with basic hardware, tens, hundreds, and thousands of kilowatts appear to be attainable.
Voltage and current
NASA has conducted several experiments with Plasma Motor Generator (PMG) tethers in space. An early experiment used a 500-meter conducting tether. In 1996, NASA conducted an experiment with a 20,000-meter conducting tether. When the tether was fully deployed during this test, the orbiting tether generated a potential of 3,500 volts. This conducting single-line tether was severed after five hours of deployment. It is believed that the failure was caused by an electric arc generated by the conductive tether's movement through the Earth's magnetic field.
When a tether is moved at a velocity (v) at right angles to the Earth's magnetic field (B), an electric field is observed in the tether's frame of reference. This can be stated as:
E = v × B, with magnitude E = vB
The direction of the electric field (E) is at right angles to both the tether's velocity (v) and magnetic field (B). If the tether is a conductor, then the electric field leads to the displacement of charges along the tether. Note that the velocity used in this equation is the orbital velocity of the tether. The rate of rotation of the Earth, or of its core, is not relevant. In this regard, see also homopolar generator.
Voltage across conductor
With a long conducting wire of length L, an electric field E is generated in the wire. It produces a voltage V between the opposite ends of the wire. This can be expressed as:
V = E · L = E L cos τ = v B L cos τ,
where the angle τ is between the length vector (L) of the tether and the electric field vector (E), assumed to be in the vertical direction at right angles to the velocity vector (v) in plane, and the magnetic field vector (B) is out of the plane.
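For example, the ~3,500 V figure quoted above for the 20 km tether is roughly what V = vBL cos τ gives for typical low-Earth-orbit values (the field strength and angle below are assumptions chosen for illustration, not measured mission parameters):

```python
import math

v = 7500.0            # m/s, orbital velocity relative to the plasma (assumed)
B = 2.3e-5            # T, representative low-Earth-orbit field strength (assumed)
L = 20000.0           # m, deployed tether length
tau = math.radians(0) # angle between L and E; 0 for a fully aligned tether (assumed)

V = v * B * L * math.cos(tau)
print(f"V = {V:.0f} volts")   # ~3450 V, comparable to the ~3,500 V observed
```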
Current in conductor
An electrodynamic tether can be described as a type of thermodynamically "open system". Electrodynamic tether circuits cannot be completed by simply using another wire, since another tether will develop a similar voltage. Fortunately, the Earth's magnetosphere is not "empty", and, in near-Earth regions (especially near the Earth's atmosphere) there exist highly electrically conductive plasmas which are kept partially ionized by solar radiation or other radiant energy. The electron and ion density varies according to various factors, such as the location, altitude, season, sunspot cycle, and contamination levels. It is known that a positively charged bare conductor can readily remove free electrons out of the plasma. Thus, to complete the electrical circuit, a sufficiently large area of uninsulated conductor is needed at the upper, positively charged end of the tether, thereby permitting current to flow through the tether.
However, it is more difficult for the opposite (negative) end of the tether to eject free electrons or to collect positive ions from the plasma. It is plausible that, by using a very large collection area at one end of the tether, enough ions can be collected to permit significant current through the plasma. This was demonstrated during the Shuttle orbiter's TSS-1R mission, when the shuttle itself was used as a large plasma contactor to provide over an ampere of current. Improved methods include creating an electron emitter, such as a thermionic cathode, plasma cathode, plasma contactor, or field electron emission device. Since both ends of the tether are "open" to the surrounding plasma, electrons can flow out of one end of the tether while a corresponding flow of electrons enters the other end. In this fashion, the voltage that is electromagnetically induced within the tether can cause current to flow through the surrounding space environment, completing an electrical circuit through what appears to be, at first glance, an open circuit.
Tether current
The amount of current (I) flowing through a tether depends on various factors. One of these is the circuit's total resistance (R). The circuit's resistance consists of three components:
the effective resistance of the plasma,
the resistance of the tether, and
a control variable resistor.
In addition, a parasitic load is needed. The load on the current may take the form of a charging device which, in turn, charges reserve power sources such as batteries. The batteries in return will be used to control power and communication circuits, as well as drive the electron-emitting devices at the negative end of the tether. As such, the tether can be completely self-powered, apart from the initial charge in the batteries needed to provide electrical power for the deployment and startup procedure.
The charging battery load can be viewed as a resistor which absorbs power, but stores this for later use (instead of immediately dissipating heat). It is included as part of the "control resistor". The charging battery load is not treated as a "base resistance" though, as the charging circuit can be turned off at any time. When off, the operations can be continued without interruption using the power stored in the batteries.
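Treating the tether, the plasma contact, and the control/charging loads as a simple series circuit gives a first-order estimate of the current (a sketch only; all resistance and voltage values here are illustrative assumptions):

```python
# Simple series-circuit estimate: I = V_emf / (R_plasma + R_tether + R_control + R_load)
V_emf = 3450.0        # volts, from the motional EMF estimate above
R_plasma = 500.0      # ohms, effective plasma/sheath resistance (assumed)
R_tether = 200.0      # ohms, ohmic resistance of the wire (assumed)
R_control = 300.0     # ohms, control variable resistor setting (assumed)
R_load = 400.0        # ohms, charging/parasitic load (assumed)

I = V_emf / (R_plasma + R_tether + R_control + R_load)
P_load = I ** 2 * R_load
print(f"I = {I:.2f} A, power delivered to the load = {P_load:.0f} W")
```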
Current collection / emission for an EDT system: theory and technology
Understanding electron and ion current collection to and from the surrounding ambient plasma is critical for most EDT systems. Any exposed conducting section of the EDT system can passively collect electron or ion current, depending on the electric potential of the spacecraft body with respect to the ambient plasma ('passive' collection is distinguished from 'active' emission, which uses pre-stored energy in order to achieve the desired effect). In addition, the geometry of the conducting body plays an important role in the size of the sheath and thus the total collection capability. As a result, there are a number of theories for the varying collection techniques.
The primary passive processes that control the electron and ion collection on an EDT system are thermal current collection, ion ram collection effects, electron photoemission, and possibly secondary electron and ion emission. In addition, the collection along a thin bare tether is described using orbital motion limited (OML) theory as well as theoretical derivations from this model depending on the physical size with respect to the plasma Debye length. These processes take place all along the exposed conducting material of the entire system. Environmental and orbital parameters can significantly influence the amount of collected current. Some important parameters include plasma density, electron and ion temperature, ion molecular weight, magnetic field strength, and orbital velocity relative to the surrounding plasma.
Then there are active collection and emission techniques involved in an EDT system. This occurs through devices such as hollow cathode plasma contactors, thermionic cathodes, and field emitter arrays. The physical design of each of these structures as well as the current emission capabilities are thoroughly discussed.
Bare conductive tethers
The concept of current collection to a bare conducting tether was first formalized by Sanmartin and Martinez-Sanchez. They note that the most area efficient current collecting cylindrical surface is one that has an effective radius less than ~1 Debye length where current collection physics is known as orbital motion limited (OML) in a collisionless plasma. As the effective radius of the bare conductive tether increases past this point then there are predictable reductions in collection efficiency compared to OML theory. In addition to this theory (which has been derived for a non-flowing plasma), current collection in space occurs in a flowing plasma, which introduces another collection effect. These issues are explored in greater detail below.
Orbit motion limited (OML) theory
The electron Debye length is defined as the characteristic shielding distance in a plasma, and is described by the equation
λ_De = √(ε0 k_B T_e / (n_e e²)),
where ε0 is the vacuum permittivity, k_B T_e is the electron temperature (in energy units), n_e is the electron number density, and e is the elementary charge.
This distance, where all electric fields in the plasma resulting from the conductive body have fallen off by 1/e, can be calculated. OML theory is defined with the assumption that the electron Debye length is equal to or larger than the size of the object and the plasma is not flowing. The OML regime occurs when the sheath becomes sufficiently thick such that orbital effects become important in particle collection. This theory accounts for and conserves particle energy and angular momentum. As a result, not all particles that are incident onto the surface of the thick sheath are collected. The voltage of the collecting structure with respect to the ambient plasma, as well as the ambient plasma density and temperature, determines the size of the sheath. This accelerating (or decelerating) voltage combined with the energy and momentum of the incoming particles determines the amount of current collected across the plasma sheath.
The orbital-motion-limit regime is attained when the cylinder radius is small enough that all incoming particle trajectories that are collected terminate on the cylinder's surface and are connected to the background plasma, regardless of their initial angular momentum (i.e., none are connected to another location on the probe's surface). Since, in a quasi-neutral collisionless plasma, the distribution function is conserved along particle orbits, having all "directions of arrival" populated corresponds to an upper limit on the collected current per unit area (not total current).
In an EDT system, the best performance for a given tether mass is obtained with a tether diameter chosen to be smaller than an electron Debye length for typical ionospheric ambient conditions (typical ionospheric conditions in the 200 to 2000 km altitude range have a T_e ranging from 0.1 eV to 0.35 eV and an n_e ranging from 10^10 m^-3 to 10^12 m^-3), so it is therefore within the OML regime. Tether geometries outside this dimension have been addressed. OML collection will be used as a baseline when comparing the current collection results for various sample tether geometries and sizes.
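Using the Debye-length expression above with the quoted ionospheric ranges shows why millimetre-scale tether radii fall inside the OML regime (a sketch; only the end points of the quoted ranges are evaluated):

```python
import math

EPS0 = 8.854e-12      # F/m, vacuum permittivity
QE = 1.602e-19        # C, elementary charge (also J per eV)

def debye_length(Te_eV, ne):
    """Electron Debye length: sqrt(eps0 * k_B * T_e / (n_e * e^2)), with T_e given in eV."""
    return math.sqrt(EPS0 * Te_eV * QE / (ne * QE ** 2))

for Te_eV, ne in [(0.1, 1e12), (0.35, 1e10)]:
    print(f"T_e = {Te_eV} eV, n_e = {ne:.0e} m^-3 -> "
          f"lambda_De = {debye_length(Te_eV, ne) * 1000:.1f} mm")
# Roughly 2 mm to 40 mm, so a wire of sub-millimetre radius stays below ~1 Debye length.
```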
In 1962 Gerald H. Rosen derived the equation that is now known as the OML theory of dust charging. According to Robert Merlino of the University of Iowa, Rosen seems to have arrived at the equation 30 years before anyone else.
Deviations from OML theory in a non-flowing plasma
For a variety of practical reasons, current collection to a bare EDT does not always satisfy the assumption of OML collection theory. Understanding how the predicted performance deviates from theory is important for these conditions. Two commonly proposed geometries for an EDT involve the use of a cylindrical wire and a flat tape. As long as the cylindrical tether is less than one Debye length in radius, it will collect according to the OML theory. However, once the width exceeds this distance, then the collection increasingly deviates from this theory. If the tether geometry is a flat tape, then an approximation can be used to convert the normalized tape width to an equivalent cylinder radius. This was first done by Sanmartin and Estes and more recently using the 2-Dimensional Kinetic Plasma Solver (KiPS 2-D) by Choiniere et al.
Flowing plasma effect
There is, at present, no closed-form solution to account for the effects of plasma flow relative to the bare tether. However, numerical simulation capability has recently been developed by Choiniere et al. using KiPS-2D, which can simulate flowing cases for simple geometries at high bias potentials. This flowing plasma analysis as it applies to EDTs has been discussed. This phenomenon is presently being investigated through recent work, and is not fully understood.
Endbody collection
This section discusses the plasma physics theory that explains passive current collection to a large conductive body which will be applied at the end of an ED tether. When the size of the sheath is much smaller than the radius of the collecting body, then, depending on the polarity of the difference between the potential of the tether and that of the ambient plasma, (V − Vp), it is assumed that all of the incoming electrons or ions that enter the plasma sheath are collected by the conductive body. This 'thin sheath' theory involving non-flowing plasmas is discussed first, and then the modifications to this theory for flowing plasma are presented. Other current collection mechanisms will then be discussed. All of the theory presented is used towards developing a current collection model to account for all conditions encountered during an EDT mission.
Passive collection theory
In a non-flowing quasi-neutral plasma with no magnetic field, it can be assumed that a spherical conducting object will collect equally in all directions. The electron and ion collection at the end-body is governed by the thermal collection process, which is given by the thermal electron and ion currents, I_the and I_thi.
Flowing plasma electron collection mode
The next step in developing a more realistic model for current collection is to include the magnetic field effects and plasma flow effects. Assuming a collisionless plasma, electrons and ions gyrate around magnetic field lines as they travel between the poles around the Earth due to magnetic mirroring forces and gradient-curvature drift. They gyrate at a particular radius and frequency depending upon their mass, the magnetic field strength, and their energy. These factors must be considered in current collection models.
Flowing plasma ion collection model
When the conducting body is negatively biased with respect to the plasma and traveling above the ion thermal velocity, there are additional collection mechanisms at work. For typical Low Earth Orbits (LEOs), between 200 km and 2000 km, the velocities in an inertial reference frame range from 7.8 km/s to 6.9 km/s for a circular orbit and the atmospheric molecular weights range from 25.0 amu (O+, O2+, & NO+) to 1.2 amu (mostly H+), respectively. Assuming that the electron and ion temperatures range from ~0.1 eV to 0.35 eV, the resulting ion velocity ranges from 875 m/s to 4.0 km/s from 200 km to 2000 km altitude, respectively. The electrons are traveling at approximately 188 km/s throughout LEO. This means that the orbiting body is traveling faster than the ions and slower than the electrons, or at a mesosonic speed. This results in a unique phenomenon whereby the orbiting body 'rams' through the surrounding ions in the plasma creating a beam like effect in the reference frame of the orbiting body.
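The quoted speeds follow from the most-probable thermal speed √(2 k_B T / m); a small check, assuming that convention and the mean ion mass given above, is shown below:

```python
import math

QE = 1.602e-19        # J per eV
AMU = 1.661e-27       # kg
ME = 9.109e-31        # kg

def most_probable_speed(T_eV, mass_kg):
    """Most-probable thermal speed sqrt(2 * k_B * T / m), with T given in eV."""
    return math.sqrt(2 * T_eV * QE / mass_kg)

print(f"electrons at 0.1 eV: {most_probable_speed(0.1, ME) / 1e3:.0f} km/s")     # ~188 km/s
print(f"25 amu ions at 0.1 eV: {most_probable_speed(0.1, 25.0 * AMU):.0f} m/s")  # ~875 m/s
# Both values bracket the 6.9-7.8 km/s orbital speed: the tether moves mesosonically,
# faster than the ions but much slower than the electrons.
```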
Porous endbodies
Porous endbodies have been proposed as a way to reduce the drag of a collecting endbody while ideally maintaining a similar current collection. They are often modeled as solid endbodies, except that they are a small percentage of the solid sphere's surface area. This is, however, an extreme oversimplification of the concept. Much has to be learned about the interactions between the sheath structure, the geometry of the mesh, the size of the endbody, and its relation to current collection. This technology also has the potential to resolve a number of issues concerning EDTs. Diminishing returns with collection current and drag area have set a limit that porous tethers might be able to overcome. Work has been accomplished on current collection using porous spheres by Stone et al. and Khazanov et al.
It has been shown that the maximum current collected by a grid sphere compared to the mass and drag reduction can be estimated. The drag per unit of collected current for a grid sphere with a transparency of 80 to 90% is approximately 1.2 – 1.4 times smaller than that of a solid sphere of the same radius. The reduction in mass per unit volume, for this same comparison, is 2.4 – 2.8 times.
Other current collection methods
In addition to the electron thermal collection, other processes that could influence the current collection in an EDT system are photoemission, secondary electron emission, and secondary ion emission. These effects pertain to all conducting surfaces on an EDT system, not just the end-body.
Space charge limits across plasma sheaths
In any application where electrons are emitted across a vacuum gap, there is a maximum allowable current for a given bias due to the self-repulsion of the electron beam. This classical 1-D space charge limit (SCL) is derived for charged particles of zero initial energy, and is termed the Child-Langmuir law. This limit depends on the emission surface area, the potential difference across the plasma gap, and the distance of that gap. Further discussion of this topic can be found in the literature.
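For a planar gap the classical Child-Langmuir limit, J = (4 ε0 / 9) √(2e/m_e) V^(3/2) / d², can be evaluated directly (a sketch; the 100 V bias and 1 mm gap are assumed example values, not mission parameters):

```python
import math

EPS0 = 8.854e-12      # F/m
QE = 1.602e-19        # C
ME = 9.109e-31        # kg

def child_langmuir_j(V, d):
    """Space-charge-limited current density (A/m^2) across a planar gap of width d (m)."""
    return (4.0 * EPS0 / 9.0) * math.sqrt(2.0 * QE / ME) * V ** 1.5 / d ** 2

V_bias, gap = 100.0, 1e-3
print(f"J_SCL = {child_langmuir_j(V_bias, gap):.0f} A/m^2")   # roughly 2.3e3 A/m^2
```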
Electron emitters
There are three active electron emission technologies usually considered for EDT applications: hollow cathode plasma contactors (HCPCs), thermionic cathodes (TCs), and field emission cathodes (FEC), often in the form of field emitter arrays (FEAs). System level configurations will be presented for each device, as well as the relative costs, benefits, and validation.
Thermionic cathode (TC)
Thermionic emission is the flow of electrons from a heated charged metal or metal oxide surface, caused by thermal vibrational energy overcoming the work function (the electrostatic forces holding electrons to the surface). The thermionic emission current density, J, rises rapidly with increasing temperature, releasing a significant number of electrons into the vacuum near the surface. The quantitative relation is given in the equation
J = A_R T² exp(−φ / (k_B T)).
This equation is called the Richardson-Dushman or Richardson equation (φ is approximately 4.54 eV and A_R ~120 A/cm² for tungsten).
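With the tungsten values quoted above, the strong temperature dependence is easy to see (a minimal sketch; the temperatures chosen are illustrative):

```python
import math

KB_EV = 8.617e-5      # Boltzmann constant in eV/K

def richardson_j(T, phi_eV=4.54, A_R=120.0):
    """Thermionic current density in A/cm^2: J = A_R * T^2 * exp(-phi / (k_B * T))."""
    return A_R * T ** 2 * math.exp(-phi_eV / (KB_EV * T))

for T in (1500.0, 2000.0, 2500.0):
    print(f"T = {T:.0f} K -> J = {richardson_j(T):.3g} A/cm^2")
# Rises from ~1e-7 A/cm^2 at 1500 K to ~0.5 A/cm^2 at 2500 K.
```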
Once the electrons are thermionically emitted from the TC surface, they require an acceleration potential to cross a gap, or in this case, the plasma sheath. Electrons can attain the necessary energy to escape the SCL of the plasma sheath if an accelerating grid, or electron gun, is used. The equation
ΔV_tc = (I_t / (ρ η))^(2/3)
shows what potential is needed across the grid in order to emit a certain current entering the device.
Here, η is the electron gun assembly (EGA) efficiency (~0.97 in TSS-1), ρ is the perveance of the EGA (7.2 micropervs in TSS-1), ΔV_tc is the voltage across the accelerating grid of the EGA, and I_t is the emitted current. The perveance defines the space-charge-limited current that can be emitted from a device. The figure below displays commercial examples of thermionic emitters and electron guns produced at Heatwave Labs Inc.
TC electron emission will occur in one of two different regimes: temperature-limited or space-charge-limited current flow. For temperature-limited flow, every electron that obtains enough energy to escape from the cathode surface is emitted, assuming the acceleration potential of the electron gun is large enough. In this case, the emission current is regulated by the thermionic emission process, given by the Richardson-Dushman equation. In SCL electron current flow, there are so many electrons emitted from the cathode that not all of them are accelerated enough by the electron gun to escape the space charge. In this case, the electron gun acceleration potential limits the emission current. The chart below displays the temperature-limited currents and SCL effects. As the beam energy of the electrons is increased, the total escaping electrons can be seen to increase. The curves that become horizontal are temperature-limited cases.
Field emission cathode (FEC)
In field electron emission, electrons tunnel through a potential barrier, rather than escaping over it as in thermionic emission or photoemission. For a metal at low temperature, the process can be understood in terms of the figure below. The metal can be considered a potential box, filled with electrons to the Fermi level (which lies below the vacuum level by several electron volts). The vacuum level represents the potential energy of an electron at rest outside the metal in the absence of an external field. In the presence of a strong electric field, the potential outside the metal will be deformed along the line AB, so that a triangular barrier is formed, through which electrons can tunnel. Electrons are extracted from the conduction band with a current density given by the Fowler-Nordheim equation
J = A_FN E_FN² exp(−B_FN / E_FN).
A_FN and B_FN are constants determined by measurements of the FEA, with units of A/V² and V/m, respectively. E_FN is the electric field that exists between the electron emissive tip and the positively biased structure drawing the electrons out. Typical constants for Spindt-type cathodes include: A_FN = 3.14 × 10^-8 A/V² and B_FN = 771 V/m (Stanford Research Institute data sheet). An accelerating structure is typically placed in close proximity to the emitting material, as in the figure below. Close (micrometer-scale) proximity between the emitter and gate, combined with natural or artificial focusing structures, efficiently provides the high field strengths required for emission with relatively low applied voltage and power.
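As a numerical illustration of the exponential sensitivity of this type of emission, the sketch below evaluates the Fowler-Nordheim form against an applied gate voltage; interpreting the quoted Spindt-type coefficients as gate-voltage-form constants is an assumption of this example, not a statement from the data sheet:

```python
import math

A_FN = 3.14e-8        # A/V^2, quoted Spindt-type coefficient
B_FN = 771.0          # interpreted here as the voltage-form constant (assumption)

def spindt_tip_current(V_gate):
    """Per-tip Fowler-Nordheim-type current, I = A_FN * V^2 * exp(-B_FN / V)."""
    return A_FN * V_gate ** 2 * math.exp(-B_FN / V_gate)

for V in (60.0, 80.0, 100.0):
    print(f"V_gate = {V:.0f} V -> I = {spindt_tip_current(V) * 1e6:.4f} uA per tip")
# A modest change in gate voltage changes the emitted current by orders of magnitude.
```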
A carbon nanotube field-emission cathode was successfully tested on the KITE Electrodynamic tether experiment on the Japanese H-II Transfer Vehicle.
Field emission cathodes are often in the form of Field Emitter Arrays (FEAs), such as the cathode design by Spindt et al. The figure below displays close up visual images of a Spindt emitter.
A variety of materials have been developed for field emitter arrays, ranging from silicon to semiconductor fabricated molybdenum tips with integrated gates to a plate of randomly distributed carbon nanotubes with a separate gate structure suspended above. The advantages of field emission technologies over alternative electron emission methods are:
No requirement for a consumable (gas) and no resulting safety considerations for handling a pressurized vessel
A low-power capability
Having moderate power impacts due to space-charge limits in the emission of the electrons into the surrounding plasma.
One major issue to consider for field emitters is the effect of contamination. In order to achieve electron emission at low voltages, field emitter array tips are built at micrometer-scale sizes. Their performance depends on the precise construction of these small structures. They are also dependent on being constructed with a material possessing a low work function. These factors can render the device extremely sensitive to contamination, especially from hydrocarbons and other large, easily polymerized molecules. Techniques for avoiding, eliminating, or operating in the presence of contamination in ground testing and ionospheric (e.g. spacecraft outgassing) environments are critical. Research at the University of Michigan and elsewhere has focused on this outgassing issue. Protective enclosures, electron cleaning, robust coatings, and other design features are being developed as potential solutions. FEAs used for space applications still require the demonstration of long-term stability, repeatability, and reliability of operation at gate potentials appropriate to the space applications.
Hollow cathode
Hollow cathodes emit a dense cloud of plasma by first ionizing a gas. This creates a high density plasma plume which makes contact with the surrounding plasma. The region between the high density plume and the surrounding plasma is termed a double sheath or double layer. This double layer is essentially two adjacent layers of charge. The first layer is a positive layer at the edge of the high potential plasma (the contactor plasma cloud). The second layer is a negative layer at the edge of the low potential plasma (the ambient plasma). Further investigation of the double layer phenomenon has been conducted by several people. One type of hollow cathode consists of a metal tube lined with a sintered barium oxide impregnated tungsten insert, capped at one end by a plate with a small orifice, as shown in the below figure. Electrons are emitted from the barium oxide impregnated insert by thermionic emission. A noble gas flows into the insert region of the HC and is partially ionized by the emitted electrons that are accelerated by an electric field near the orifice (Xenon is a common gas used for HCs as it has a low specific ionization energy (ionization potential per unit mass). For EDT purposes, a lower mass would be more beneficial because the total system mass would be less. This gas is just used for charge exchange and not propulsion.). Many of the ionized xenon atoms are accelerated into the walls where their energy maintains the thermionic emission temperature. The ionized xenon also exits out of the orifice. Electrons are accelerated from the insert region, through the orifice to the keeper, which is always at a more positive bias.
In electron emission mode, the ambient plasma is positively biased with respect to the keeper. In the contactor plasma, the electron density is approximately equal to the ion density. The higher energy electrons stream through the slowly expanding ion cloud, while the lower energy electrons are trapped within the cloud by the keeper potential. The high electron velocities lead to electron currents much greater than xenon ion currents. Below the electron emission saturation limit the contactor acts as a bipolar emissive probe. Each outgoing ion generated by an electron allows a number of electrons to be emitted. This number is approximately equal to the square root of the ratio of the ion mass to the electron mass.
It can be seen in the chart below what a typical I-V curve looks like for a hollow cathode in electron emission mode. Given a certain keeper geometry (the ring in the figure above that the electrons exit through), ion flow rate, and Vp, the I-V profile can be determined.
The operation of the HC in the electron collection mode is called the plasma contacting (or ignited) operating mode. The “ignited mode” is so termed because it indicates that multi-ampere current levels can be achieved by using the voltage drop at the plasma contactor. This accelerates space plasma electrons which ionize neutral expellant flow from the contactor. If electron collection currents are high and/or ambient electron densities are low, the sheath at which electron current collection is sustained simply expands or shrinks until the required current is collected.
In addition, the geometry affects the emission of the plasma from the HC, as seen in the figure below. Here it can be seen that, depending on the diameter and thickness of the keeper and its distance with respect to the orifice, the total emission percentage can be affected.
Plasma collection and emission summary
All of the electron emission and collection techniques can be summarized in the table following. For each method there is a description as to whether the electrons or ions in the system increased or decreased based on the potential of the spacecraft with respect to the plasma. Electrons (e−) and ions (ions+) indicate whether the number of electrons or ions is being increased (↑) or reduced (↓). Also, for each method some special conditions apply (see the respective sections in this article for further clarification of when and where they apply).
{| class="wikitable"
|-
! Passive e− and ion emission/collection
! V − Vp < 0
! V − Vp > 0
|-
| Bare tether: OML
| ions+ ↑
| e− ↑
|-
| Ram collection
| ions+ ↑
| 0
|-
| Thermal collection
| ions+ ↑
| e− ↑
|-
| Photoemission
| e− ↓
| e− ↓,~0
|-
| Secondary electron emission
| e− ↓
| e− ↓
|-
| Secondary ion emission
| ions+ ↓,~0
| 0
|-
| Retardation regime
| e− ↑
| ions+ ↑, ~0
|-
! Active e− and ion emission
|colspan="2"| Potential does not matter
|-
| Thermionic emission
|colspan="2"| e− ↓
|-
| Field emitter arrays
|colspan="2"| e− ↓
|-
| Hollow cathodes
| e− ↓
| e− ↑
|}
For use in EDT system modeling, each of the passive electron collection and emission theory models has been verified by reproducing previously published equations and results. These include orbital-motion-limited theory, ram collection, thermal collection, photoemission, secondary electron emission, and secondary ion emission.
Electrodynamic tether system fundamentals
In order to integrate all the most recent electron emitters, collectors, and theory into a single model, the EDT system must first be defined and derived. Once this is accomplished it will be possible to apply this theory toward determining optimizations of system attributes.
There are a number of derivations that solve for the potentials and currents involved in an EDT system numerically. The derivation and numerical methodology of a full EDT system that includes a bare tether section, an insulated conducting tether section, electron (and ion) endbody emitters, and passive electron collection is described. This is followed by the simplified, all-insulated tether model. Special EDT phenomena and verification of the EDT system model using experimental mission data will then be discussed.
Bare tether system derivation
An important note concerning an EDT derivation pertains to the celestial body which the tether system orbits. For practicality, Earth will be used as the body that is orbited; however, this theory applies to any celestial body with an ionosphere and a magnetic field.
The coordinates are the first thing that must be identified. For the purposes of this derivation, the x- and y-axes are defined as the east-west and north-south directions with respect to the Earth's surface, respectively. The z-axis is defined as up-down from the Earth's center, as seen in the figure below. The parameters – magnetic field B, tether length L, and the orbital velocity v_orb – are vectors that can be expressed in terms of this coordinate system, as in the following equations:
B = B_x x̂ + B_y ŷ + B_z ẑ (the magnetic field vector),
L = L_x x̂ + L_y ŷ + L_z ẑ (the tether position vector), and
v_orb = v_x x̂ + v_y ŷ + v_z ẑ (the orbital velocity vector).
The components of the magnetic field can be obtained directly from the International Geomagnetic Reference Field (IGRF) model. This model is compiled from a collaborative effort between magnetic field modelers and the institutes involved in collecting and disseminating magnetic field data from satellites and from observatories and surveys around the world. For this derivation, it is assumed that the magnetic field lines are all the same angle throughout the length of the tether, and that the tether is rigid.
Realistically, the transverse electrodynamic forces cause the tether to bow and to swing away from the local vertical. Gravity gradient forces then produce a restoring force that pulls the tether back towards the local vertical; however, this results in a pendulum-like motion (gravity gradient forces also result in pendulous motions without ED forces). The B direction changes as the tether orbits the Earth, and thus the direction and magnitude of the ED forces also change. This pendulum motion can develop into complex librations in both the in-plane and out-of-plane directions. Then, due to coupling between the in-plane motion and longitudinal elastic oscillations, as well as coupling between in-plane and out-of-plane motions, an electrodynamic tether operated at a constant current can continually add energy to the libration motions. This effect then has a chance to cause the libration amplitudes to grow and eventually cause wild oscillations, such as the 'skip-rope effect', but that is beyond the scope of this derivation. In a non-rotating EDT system (a rotating system is called Momentum Exchange Electrodynamic Reboost [MXER]), the tether is predominantly in the z-direction due to the natural gravity gradient alignment with the Earth.
Derivations
The following derivation will describe the exact solution to the system accounting for all vector quantities involved, and then a second solution with the nominal condition where the magnetic field, the orbital velocity, and the tether orientation are all perpendicular to one another. The final solution of the nominal case is solved for in terms of just the electron density, n_e, the tether resistance per unit length, R_t, and the power of the high voltage power supply, P_hvps.
The figure below describes a typical EDT system in a series bias grounded gate configuration (further description of the various types of configurations analyzed has been presented) with a blow-up of an infinitesimal section of bare tether. This figure is symmetrically set up so either end can be used as the anode. This tether system is symmetrical because rotating tether systems will need to use both ends as anodes and cathodes at some point in their rotation. The V_hvps will only be used in the cathode end of the EDT system, and is turned off otherwise.
The in-plane and out-of-plane directions are determined by the orbital velocity vector of the system. An in-plane force is in the direction of travel. It will add energy to or remove energy from the orbit, thereby raising or lowering the altitude by changing the orbit into an elliptical one. An out-of-plane force is in the direction perpendicular to the plane of travel, which causes a change in inclination. This will be explained in the following section.
To calculate the in-plane and out-of-plane directions, the components of the velocity and magnetic field vectors must be obtained and the force values calculated. The component of the force in the direction of travel will serve to enhance the orbit-raising capabilities, while the out-of-plane component of thrust will alter the inclination. In the figure below, the magnetic field vector is solely in the north (or y-axis) direction, and the resulting forces on an orbit, with some inclination, can be seen. An orbit with no inclination would have all the thrust in the in-plane direction.
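A minimal sketch of this decomposition is shown below (the current, field, and tether vectors are made-up example values for illustration, not mission data):

```python
import numpy as np

# Coordinate convention from the text: x = east-west, y = north-south, z = up-down.
I = 1.0                                   # A, assumed tether current
L = np.array([0.0, 0.0, 5000.0])          # m, tether taken along the local vertical
B = np.array([0.0, 3.0e-5, 0.0])          # T, field taken purely northward, as in the example
v_orb = np.array([7500.0, 0.0, 0.0])      # m/s, velocity defining the in-plane direction

F = I * np.cross(L, B)                    # Lorentz force on the current-carrying tether
v_hat = v_orb / np.linalg.norm(v_orb)
F_in_plane = np.dot(F, v_hat)             # component along the direction of travel
F_out_of_plane = F - F_in_plane * v_hat   # the remainder changes the inclination

print(F, F_in_plane, np.linalg.norm(F_out_of_plane))
# With B purely northward and the tether vertical, the force is entirely in-plane,
# consistent with the zero-inclination case described above.
```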
There has been work conducted to stabilize the librations of the tether system to prevent misalignment of the tether with the gravity gradient. The below figure displays the drag effects an EDT system will encounter for a typical orbit. The in-plane angle, α_ip, and out-of-plane angle, α_op, can be reduced by increasing the endmass of the system, or by employing feedback technology. Any deviations in the gravity alignment must be understood, and accounted for in the system design.
Interstellar travel
An application of the EDT system has been considered and researched for interstellar travel by using the local interstellar medium of the Local Bubble. It has been found to be feasible to use the EDT system to supply on-board power given a crew of 50 with a requirement of 12 kilowatts per person. Energy generation is achieved at the expense of kinetic energy of the spacecraft. In reverse, the EDT system could be used for acceleration. However, this has been found to be ineffective. Thrustless turning using the EDT system is possible, to allow for course correction and rendezvous in interstellar space. It will not, however, allow rapid thrustless circling to allow a starship to re-enter a power beam or make numerous solar passes, due to an extremely large turning radius of 3.7×10^13 km (~3.7 light years).
See also
STARS-II
HTV-6
Tether propulsion
Earth's magnetic field
Tether satellite
Atmospheric electricity
STS-75
Magnetic sail
Electric sail
Spacecraft propulsion
References
General information
Cosmo, M.L., and Lorenzini, E.C., "Tethers in Space Handbook," NASA Marshall Space Flight Center, 1997, pp. 1–274.
Mariani, F., Candidi, M., Orsini, S., "Current Flow Through High-Voltage Sheaths Observed by the TEMAG Experiment During TSS-1R," Geophysical Research Letters, Vol. 25, No. 4, 1998, pp. 425–428.
Citations
Further reading
Dobrowolny, M. (1979). Wave and particle phenomena induced by an electrodynamic tether. SAO special report, 388. Cambridge, Mass: Smithsonian Institution Astrophysical Observatory.
Williamson, P. R. (1986). High voltage characteristics of the electrodynamic tether and the generation of power and propulsion final report. [NASA contractor report], NASA CR-178949. Washington, DC: National Aeronautics and Space Administration.
External links
Related patents
, "Space station and system for operating same".
, "Ionospheric battery".
, "Satellite connected by means of a long tether to a powered spacecraft ".
, "Electrodynamic Tether And Method of Use".
Publications
Cosmo, M. L., and E. C. Lorenzini, "Tethers in Space Handbook" (3rd ed). Prepared for NASA/MSFC by Smithsonian Astrophysical Observatory, Cambridge, MA, December 1997. (PDF)
Other articles
"Electrodynamic Tethers ". Tethers.com.
"Shuttle Electrodynamic Tether System (SETS)".
Enrico Lorenzini and Juan Sanmartín, "Electrodynamic Tethers in Space; By exploiting fundamental physical laws, tethers may provide low-cost electrical power, drag, thrust, and artificial gravity for spaceflight". Scientific American, August 2004.
"Tethers". Astronomy Study Guide, BookRags.
David P. Stern, "The Space Tether Experiment". 25 November 2001.
Spacecraft propulsion
Spacecraft components
Electrodynamics
Magnetic propulsion devices
Electrical generators
Cognitive inertia
Cognitive inertia is the tendency for a particular orientation in how an individual thinks about an issue, belief, or strategy to resist change. Clinical and neuroscientific literature often defines it as a lack of motivation to generate distinct cognitive processes needed to attend to a problem or issue. The physics term inertia emphasizes the rigidity and resistance to change in the method of cognitive processing that has been used for a significant amount of time. Commonly confused with belief perseverance, cognitive inertia is the perseverance of how one interprets information, not the perseverance of the belief itself.
Cognitive inertia has been causally implicated in disregarding impending threats to one's health or environment, enduring political values and deficits in task switching. Interest in the phenomenon was primarily taken up by economic and industrial psychologists to explain resistance to change in brand loyalty, group brainstorming, and business strategies. In the clinical setting, cognitive inertia has been used as a diagnostic tool for neurodegenerative diseases, depression, and anxiety. Critics have stated that the term oversimplifies resistant thought processes and suggests a more integrative approach that involves motivation, emotion, and developmental factors.
History and methods
Early history
The idea of cognitive inertia has its roots in philosophical epistemology. Early allusions to a reduction of cognitive inertia can be found in the Socratic dialogues written by Plato. Socrates builds his argument by using the detractor's beliefs as the premise of his argument's conclusions. In doing so, Socrates reveals the detractor's fallacy of thought, inducing the detractor to change their mind or face the reality that their thought processes are contradictory. Ways to combat persistence of cognitive style are also seen in Aristotle's syllogistic method which employs logical consistency of the premises to convince an individual of the conclusion's validity.
At the beginning of the twentieth century, two of the earliest experimental psychologists, Müller and Pilzecker, defined perseveration of thought to be "the tendency of ideas, after once having entered consciousness, to rise freely again in consciousness". Müller described perseveration by illustrating his own inability to inhibit old cognitive strategies with a syllable-switching task, while his wife easily switched from one strategy to the next. One of the earliest personality researchers, W. Lankes, more broadly defined perseveration as "being confined to the cognitive side" and possibly "counteracted by strong will". These early ideas of perseveration were the precursor to how the term cognitive inertia would be used to study certain symptoms in patients with neurodegenerative disorders, rumination and depression.
Cognitive psychology
Originally proposed by William J. McGuire in 1960, the theory of cognitive inertia was built upon emergent theories in social psychology and cognitive psychology that centered around cognitive consistency, including Fritz Heider's balance theory and Leon Festinger's cognitive dissonance. McGuire used the term cognitive inertia to account for an initial resistance to change how an idea was processed after new information, that conflicted with the idea, had been acquired.
In McGuire's initial study involving cognitive inertia, participants gave their opinions of how probable they believed various topics to be. A week later, they returned to read messages related to the topics they had given their opinions on. The messages were presented as factual and were targeted to change the participants' belief in how probable the topics were. Immediately after reading the messages, and one week later, the participants were again assessed on how probable they believed the topics to be. Discomforted by the inconsistency of the related information from the messages and their initial ratings on the topics, McGuire believed the participants would be motivated to shift their probability ratings to be more consistent with the factual messages. However, the participants' opinions did not immediately shift toward the information presented in the messages. Instead, a shift towards consistency of thought on the information from the messages and topics grew stronger as time passed, often referred to as "seepage" of information. The lack of change was reasoned to be due to persistence in the individual's existing thought processes which inhibited their ability to re-evaluate their initial opinion properly, or as McGuire called it, cognitive inertia.
Probabilistic model
Although cognitive inertia was related to many of the consistency theories at the time of its conception, McGuire used a unique method of probability theory and logic to support his hypotheses on change and persistence in cognition. Utilizing a syllogistic framework, McGuire proposed that if three issues (a, b and c) were so interrelated that an individual's opinion were in complete support of issues a and b, then it would follow that their opinion on issue c would be supported as a logical conclusion. Furthermore, McGuire proposed that if an individual's belief in the probability (p) of the supporting issues (a or b) was changed, then not only would the explicitly stated issue (c) change, but a related implicit issue (d) could be changed as well. More formally:
This formula was used by McGuire to show that the effect of a persuasive message on a related, but unmentioned, topic (d) took time to sink in. The assumption was that topic d was predicated on issues a and b, similar to issue c, so if the individual agreed with issue c then so too should they agree with issue d. However, in McGuire's initial study, immediate measurement on issue d, after agreement on issues a, b and c, showed a shift of only half the amount that would be expected for logical consistency. Follow-up a week later showed that opinion on issue d had shifted enough to be logically consistent with issues a, b, and c, which not only supported the theory of cognitive consistency, but also the initial hurdle of cognitive inertia.
The model was based on probability to account for the idea that individuals do not necessarily assume every issue is 100% likely to happen, but instead there is a likelihood of an issue occurring and the individual's opinion on that likelihood will rest on the likelihood of other interrelated issues.
Examples
Public health
Historical
Group (cognitive) inertia, how a subset of individuals view and process an issue, can have detrimental effects on how emergent and existing issues are handled. In an effort to describe the almost lackadaisical attitude from a large majority of U.S. citizens toward the insurgence of the Spanish flu in 1918, historian Tom Dicke has proposed that cognitive inertia explains why many individuals did not take the flu seriously. At the time, most U.S. citizens were familiar with the seasonal flu. They viewed it as an irritation that was often easy to treat, infected few, and passed quickly with few complications and hardly ever a death. However, this way of thinking about the flu was detrimental to the need for preparation, prevention, and treatment of the Spanish flu due to its quick spread and virulent form until it was much too late, and it became one of the most deadly pandemics in history.
Contemporary
In the more modern period, there is an emerging position that anthropogenic climate change denial is a kind of cognitive inertia. Despite the evidence provided by scientific discovery, there are still those – including nations – who deny its incidence in favor of existing patterns of development.
Geography
To better understand how individuals store and integrate new knowledge with existing knowledge, Friedman and Brown tested participants on where they believed countries and cities to be located latitudinally and then, after giving them the correct information, tested them again on different cities and countries. The majority of participants were able to use the correct information to update their cognitive understanding of geographical locations and place the new locations closer to their correct latitudinal location, which supported the idea that new knowledge affects not only the direct information but also related information. However, there was a small effect of cognitive inertia as some areas were unaffected by the correct information, which the researchers suggested was due to a lack of knowledge linkage in the correct information and new locations presented.
Group membership
Politics
The persistence of political group membership and ideology is suggested to be due to the inertia of how the individual has perceived the grouping of ideas over time. The individual may accept that something counter to their perspective is true, but it may not be enough to tip the balance of how they process the entirety of the subject.
Governmental organizations are often resistant to change, or glacially slow to change in step with social and technological transformation. Even when evidence of malfunction is clear, institutional inertia can persist. Political scientist Francis Fukuyama has asserted that humans imbue the rules they enact and follow with intrinsic value, especially in the larger societal institutions that create order and stability. Despite rapid social change and increasing institutional problems, the value placed on an institution and its rules can mask how well the institution is actually functioning and how it could be improved. The inability to change an institutional mindset is consistent with the theory of punctuated equilibrium: long periods of deleterious governmental policy punctuated by moments of civil unrest. After decades of economic decline, the United Kingdom's referendum to leave the EU was seen as an example of dramatic movement after a long period of governmental inertia.
Interpersonal roles
The unwavering views of the roles people play in our lives have been suggested as a form of cognitive inertia. When asked how they would feel about a classmate marrying their mother or father, many students said they could not view their classmate as a step-father/mother. Some students went so far as to say that the hypothetical relationship felt like incest.
Role inertia has also been implicated in marriage and the likelihood of divorce. Research on couples who cohabit before marriage shows they are more likely to get divorced than those who do not. The effect is most pronounced in the subset of couples who cohabit without first being transparent about future expectations of marriage. Over time, cognitive role inertia takes over, and the couple marries without fully processing the decision, often with one or both of the partners not fully committed to the idea. The lack of deliberative processing of existing problems and levels of commitment in the relationship can lead to increased stress, arguments, dissatisfaction, and divorce.
In business
Cognitive inertia is regularly referenced in business and management to refer to consumers' continued use of products, a lack of novel ideas in group brainstorming sessions, and lack of change in competitive strategies.
Brand loyalty
Gaining and retaining customers is essential to whether a business succeeds early on. To assess a service, a product, or the likelihood of customer retention, many companies invite their customers to complete satisfaction surveys after purchasing a product or service. However, unless the survey is completed immediately after the point of purchase, the customer's response is often based on an existing mindset about the company rather than the actual quality of the experience. Unless the experience is extremely negative or positive, cognitive inertia in how the customer feels about the company is not disrupted, even when the product or service is substandard. Such satisfaction surveys can therefore lack the information businesses need to improve a service or product and survive against the competition.
Brainstorming
Cognitive inertia plays a role in why a lack of ideas is generated during group brainstorming sessions. Individuals in a group will often follow an idea trajectory, in which they continue to narrow in on ideas based on the very first idea proposed in the brainstorming session. This idea trajectory inhibits the creation of new ideas central to the group's initial formation.
In an effort to combat cognitive inertia in group brainstorming, researchers had business students either use a single-dialogue or multiple-dialogue approach to brainstorming. In the single dialogue version, the business students all listed their ideas. They created a dialogue around the list, whereas, in the multi-dialogue version, ideas were placed in subgroups that individuals could choose to enter and talk about and then freely move to another subgroup. The multi-dialogue approach was able to combat cognitive inertia by allowing different ideas to be generated in sub-groups simultaneously and each time an individual switched to a different sub-group, they had to change how they were processing the ideas, which led to more novel and high-quality ideas.
Competitive strategies
Adapting cognitive strategies to changing business climates is often integral to whether a business succeeds or fails during economic stress. In the late 1980s in the UK, real estate agents' cognitive competitive strategies did not shift with signs of an increasingly depressed real estate market, despite their ability to acknowledge the signs of decline. This cognitive inertia at the individual and corporate level has been proposed as a reason why companies do not adopt new strategies to counter ongoing decline in the business or to take advantage of new potential. General Mills' continued operation of mills long after they were no longer necessary is an example of a company refusing to change its mindset about how it should operate.
More famously, cognitive inertia in upper management at Polaroid was proposed as one of the main contributing factors to the company's outdated competitive strategy. Management held firmly to the belief that consumers wanted high-quality physical copies of their photos, which was where the company made its money. Despite Polaroid's extensive research and development into the digital market, its inability to refocus its strategy on hardware sales instead of film eventually led to its collapse.
Scenario planning has been suggested as one way to combat cognitive inertia when making strategic decisions to improve business. Individuals develop different strategies and outline how each scenario could play out, considering the different directions it could take. Scenario planning allows diverse ideas to be heard and each scenario to be explored in breadth, which can help counteract reliance on existing methods and the assumption that alternatives are unrealistic.
Management
In a recent review of company archetypes that lead to corporate failure, Habersang, Küberling, Reihlen, and Seckler defined "the laggard" as a company that rests on its laurels, believing past success and recognition will shield it from failure. Instead of adapting to changes in the market, "the laggard" assumes that the same strategies that won the company success in the past will do so in the future. This lag in changing how the company is thought about can lead to rigidity in company identity, as with Polaroid, conflict over how to adapt when sales plummet, and resource rigidity. In the case of Kodak, instead of reallocating money to a new product or service strategy, the company cut production costs and imitated competitors, both of which led to poorer-quality products and eventually bankruptcy.
A review of 27 firms integrating the use of big data analytics found cognitive inertia to hamper the widespread implementation, with managers from sectors that did not focus on digital technology seeing the change as unnecessary and cost prohibitive.
Managers with high cognitive flexibility, who can change the type of cognitive processing based on the situation at hand, are often the most successful in solving novel problems and keeping up with changing circumstances. Interestingly, shifts in mental models (disrupting cognitive inertia) during a company crisis frequently begin at the lower group level, with leaders coming to a consensus with the rest of the workforce on how to process and deal with the crisis, rather than vice versa. It is proposed that leaders can be blinded by their authority and too easily disregard those on the front line of the problem, causing them to reject potentially valuable ideas.
Applications
Therapy
An inability to change how one thinks about a situation has been implicated as one of the causes of depression. Rumination, or the perseverance of negative thoughts, is often correlated with the severity of depression and anxiety. Individuals with high levels of rumination test low on scales of cognitive flexibility and have trouble shifting how they think about a problem or issue even when presented with facts that counter their thinking process.
In a review paper that outlined strategies that are effective for combating depression, the Socratic method was suggested to overcome cognitive inertia. By presenting the patient's incoherent beliefs close together and evaluating with the patient their thought processes behind those beliefs, the therapist is able to help them understand things from a different perspective.
Clinical diagnostics
In nosological literature relating to the symptom or disorder of apathy, clinicians have used cognitive inertia as one of the three main criteria for diagnosis. The description of cognitive inertia differs from its use in cognitive and industrial psychology in that lack of motivation plays a key role. As a clinical diagnostic criterion, Thant and Yager described it as "impaired abilities to elaborate and sustain goals and plans of actions, to shift mental sets, and to use working memory". This definition of apathy is frequently applied to onset of apathy due to neurodegenerative disorders such as Alzheimer's and Parkinson's disease but has also been applied to individuals who have gone through extreme trauma or abuse.
Neural anatomy and correlates
Cortical
Cognitive inertia has been linked to decreased use of executive function, primarily in the prefrontal cortex, which aids in the flexibility of cognitive processes when switching tasks. Delayed response on the implicit associations task (IAT) and Stroop task have been related to an inability to combat cognitive inertia, as participants struggle to switch from one cognitive rule to the next to get the questions right.
Before taking part in an electronic brainstorming session, participants were primed with pictures that motivated achievement to combat cognitive inertia. In the achievement-primed condition, subjects were able to produce more novel, high-quality ideas. They used more right frontal cortical areas related to decision-making and creativity.
Cognitive inertia is a critical dimension of clinical apathy, described as a lack of motivation to elaborate plans for goal-directed behavior or automated processing. Parkinson's patients whose apathy was measured using the cognitive inertia dimension showed less executive function control than Parkinson's patients without apathy, possibly suggesting more damage to the frontal cortex. Additionally, greater damage to the basal ganglia in Parkinson's, Huntington's and other neurodegenerative disorders has been found in patients exhibiting cognitive inertia in relation to apathy, compared to those who do not exhibit apathy. Patients with lesions to the dorsolateral prefrontal cortex have shown reduced motivation to change cognitive strategies and how they view situations, similar to individuals who experience apathy and cognitive inertia after severe or long-term trauma.
Functional connectivity
Nursing home patients who have dementia have been found to have larger reductions in functional brain connectivity, primarily in the corpus callosum, important for communication between hemispheres. Cognitive inertia in neurodegenerative patients has also been associated with a decrease in the connection of the dorsolateral prefrontal cortex and posterior parietal area with subcortical areas, including the anterior cingulate cortex and basal ganglia. Both findings are suggested to decrease motivation to change one's thought processes or create new goal-directed behavior.
Alternative theories
Some researchers have refuted the cognitive perspective of cognitive inertia and suggest a more holistic approach that considers the motivations, emotions, and attitudes that fortify the existing frame of reference.
Alternative paradigms
Motivated reasoning
The theory of motivated reasoning is proposed to be driven by the individual's motivation to think a certain way, often to avoid thinking negatively about oneself. The individual's own cognitive and emotional biases are commonly used to justify a thought, belief, or behavior. Unlike cognitive inertia, where an individual's orientation in processing information remains unchanged either due to new information not being fully absorbed or being blocked by a cognitive bias, motivated reasoning may change the orientation or keep it the same depending on whether that orientation benefits the individual.
In an extensive online study, participant opinions were acquired after two readings about various political issues to assess the role of cognitive inertia. The participants gave their opinions after the first reading and were then assigned a second reading with new information; after being assigned to read more information on the issue that either confirmed or disconfirmed their initial opinion, the majority of participants' opinions did not change. When asked about the information in the second reading, those who did not change their opinion evaluated the information that supported their initial opinion as stronger than information that disconfirmed their initial opinion. The persistence in how the participants viewed the incoming information was based on their motivation to be correct in their initial opinion, not the persistence of an existing cognitive perspective.
Socio-cognitive inflexibility
From a social psychology perspective, individuals continually shape beliefs and attitudes about the world based on interaction with others. What information the individual attends to is based on prior experience and knowledge of the world. Cognitive inertia is seen not just as a malfunction in updating how information is being processed but as the assumptions about the world and how it works can impede cognitive flexibility. The persistence of the idea of the nuclear family has been proposed as a socio-cognitive inertia. Despite the changing trends in family structure, including multi-generational, single-parent, blended, and same-sex parent families, the normative idea of a family has centered around the mid-twentieth century idea of a nuclear family (i.e., mother, father, and children). Various social influences are proposed to maintain the inertia of this viewpoint, including media portrayals, the persistence of working-class gender roles, unchanged domestic roles despite working mothers, and familial pressure to conform.
The phenomenon of cognitive inertia in brainstorming groups has been argued to be due to other psychological effects, such as fear of disagreeing with an authority figure in the group, fear of new ideas being rejected, and a minority of group members accounting for the majority of the talking. Internet-based brainstorming groups have been found to produce more high-quality ideas because the format overcomes the problem of speaking up and the fear of idea rejection.
Magnetomotive force

In physics, the magnetomotive force (abbreviated mmf or MMF, symbol $\mathcal{F}$) is a quantity appearing in the equation for the magnetic flux in a magnetic circuit, Hopkinson's law. It is the property of certain substances or phenomena that give rise to magnetic fields:
$$\mathcal{F} = \Phi \mathcal{R},$$
where $\Phi$ is the magnetic flux and $\mathcal{R}$ is the reluctance of the circuit. It can be seen that the magnetomotive force plays a role in this equation analogous to the voltage $V$ in Ohm's law, $V = IR$, since it is the cause of magnetic flux in a magnetic circuit:
$$\mathcal{F} = NI,$$
where $N$ is the number of turns in a coil and $I$ is the electric current through the coil.
$$\mathcal{F} = \Phi \mathcal{R},$$
where $\Phi$ is the magnetic flux and $\mathcal{R}$ is the magnetic reluctance.
$$\mathcal{F} = HL,$$
where $H$ is the magnetizing force (the strength of the magnetizing field) and $L$ is the mean length of a solenoid or the circumference of a toroid.
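As a quick numerical sketch (the coil and core values below are made up for illustration and are not from any cited source), the flux produced in a simple magnetic circuit follows from $\mathcal{F} = NI$ together with Hopkinson's law:

```python
# Hypothetical magnetic circuit: a 500-turn coil carrying 2 A on a core whose
# reluctance is 4.0e6 A/Wb. All numbers are illustrative only.
turns = 500
current_a = 2.0
reluctance_a_per_wb = 4.0e6

mmf = turns * current_a              # F = N * I, in amperes (ampere-turns)
flux_wb = mmf / reluctance_a_per_wb  # Hopkinson's law: Phi = F / R

print(f"mmf  = {mmf:.0f} A (ampere-turns)")
print(f"flux = {flux_wb:.2e} Wb")
```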
Units
The SI unit of mmf is the ampere, the same as the unit of current (analogously the units of emf and voltage are both the volt). Informally, and frequently, this unit is stated as the ampere-turn to avoid confusion with current. This was the unit name in the MKS system. Occasionally, the cgs system unit of the gilbert may also be encountered.
History
The term magnetomotive force was coined by Henry Augustus Rowland in 1880. Rowland intended this to indicate a direct analogy with electromotive force. The idea of a magnetic analogy to electromotive force can be found much earlier in the work of Michael Faraday (1791–1867) and it is hinted at by James Clerk Maxwell (1831–1879). However, Rowland coined the term and was the first to make explicit an Ohm's law for magnetic circuits in 1873.
Ohm's law for magnetic circuits is sometimes referred to as Hopkinson's law rather than Rowland's law as some authors attribute the law to John Hopkinson instead of Rowland. According to a review of magnetic circuit analysis methods this is an incorrect attribution originating from an 1885 paper by Hopkinson. Furthermore, Hopkinson actually cites Rowland's 1873 paper in this work.
References
Bibliography
Cited sources
Hon, Giora; Goldstein, Bernard R, "Symmetry and asymmetry in electrodynamics from Rowland to Einstein", Studies in History and Philosophy of Modern Physics, vol. 37, iss. 4, pp. 635–660, Elsevier December 2006.
Hopkinson, John, "Magnetisation of iron", Philosophical Transactions of the Royal Society, vol. 176, pp. 455–469, 1885.
Lambert, Mathieu; Mahseredjian, Jean; Martínez-Duró, Manuel; Sirois, Frédéric, "Magnetic circuits within electric circuits: critical review of existing methods and new mutator implementations", IEEE Transactions on Power Delivery, vol. 30, iss. 6, pp. 2427–2434, December 2015.
Rowland, Henry A, "On magnetic permeability and the maximum magnetism of iron, steel, and nickel", Philosophical Magazine, series 4, vol. 46, no. 304, pp. 140–159, August 1873.
Rowland, Henry A, "On the general equations of electro-magnetic action, with application to a new theory of magnetic attractions, and to the theory of the magnetic rotation of the plane of polarization of light" (part 2), American Journal of Mathematics, vol. 3, nos. 1–2, pp. 89–113, March 1880.
Schmidt, Robert Munnig; Schitter, Georg, "Electromechanical actuators", ch. 5 in Schmidt, Robert Munnig; Schitter, Georg; Rankers, Adrian; van Eijk, Jan, The Design of High Performance Mechatronics, IOS Press, 2014.
Thompson, Silvanus Phillips, The Electromagnet and Electromagnetic Mechanism, Cambridge University Press, 2011 (first published 1891).
Smith, R.J. (1966), Circuits, Devices and Systems, Chapter 15, Wiley International Edition, New York. Library of Congress Catalog Card No. 66-17612
Waygood, Adrian, An Introduction to Electrical Science, Routledge, 2013.
General references
The Penguin Dictionary of Physics, 1977
A Textbook of Electrical Technology, 2008
Mie scattering

In electromagnetism, the Mie solution to Maxwell's equations (also known as the Lorenz–Mie solution, the Lorenz–Mie–Debye solution or Mie scattering) describes the scattering of an electromagnetic plane wave by a homogeneous sphere. The solution takes the form of an infinite series of spherical multipole partial waves. It is named after German physicist Gustav Mie.
The term Mie solution is also used for solutions of Maxwell's equations for scattering by stratified spheres or by infinite cylinders, or other geometries where one can write separate equations for the radial and angular dependence of solutions. The term Mie theory is sometimes used for this collection of solutions and methods; it does not refer to an independent physical theory or law. More broadly, the "Mie scattering" formulas are most useful in situations where the size of the scattering particles is comparable to the wavelength of the light, rather than much smaller or much larger.
Mie scattering (sometimes referred to as non-molecular scattering or aerosol particle scattering) takes place in the lower portion of the atmosphere, where many essentially spherical particles with diameters approximately equal to the wavelength of the incident ray may be present. Mie scattering theory has no upper size limitation, and converges to the limit of geometric optics for large particles.
Introduction
A modern formulation of the Mie solution to the scattering problem on a sphere can be found in many books, e.g., J. A. Stratton's Electromagnetic Theory. In this formulation, the incident plane wave, as well as the scattered field, is expanded into radiating vector spherical harmonics, while the internal field is expanded into regular vector spherical harmonics. By enforcing the boundary condition on the spherical surface, the expansion coefficients of the scattered field can be computed.
For particles much larger or much smaller than the wavelength of the scattered light there are simple and accurate approximations that suffice to describe the behavior of the system. But for objects whose size is within a few orders of magnitude of the wavelength, e.g., water droplets in the atmosphere, latex particles in paint, droplets in emulsions, including milk, and biological cells and cellular components, a more detailed approach is necessary.
The Mie solution is named after its developer, German physicist Gustav Mie. Danish physicist Ludvig Lorenz and others independently developed the theory of electromagnetic plane wave scattering by a dielectric sphere.
The formalism allows the calculation of the electric and magnetic fields inside and outside a spherical object and is generally used to calculate either how much light is scattered (the total optical cross section), or where it goes (the form factor). The notable features of these results are the Mie resonances, sizes that scatter particularly strongly or weakly. This is in contrast to Rayleigh scattering for small particles and Rayleigh–Gans–Debye scattering (after Lord Rayleigh, Richard Gans and Peter Debye) for large particles. The existence of resonances and other features of Mie scattering makes it a particularly useful formalism when using scattered light to measure particle size.
Approximations
Rayleigh approximation (scattering)
Rayleigh scattering describes the elastic scattering of light by spheres that are much smaller than the wavelength of light. The intensity I of the scattered radiation is given by
$$I = I_0 \left(\frac{1+\cos^2\theta}{2R^2}\right)\left(\frac{2\pi}{\lambda}\right)^4\left(\frac{n^2-1}{n^2+2}\right)^2\left(\frac{d}{2}\right)^6,$$
where I0 is the light intensity before the interaction with the particle, R is the distance between the particle and the observer, θ is the scattering angle, λ is the wavelength of light under consideration, n is the refractive index of the particle, and d is the diameter of the particle.
It can be seen from the above equation that Rayleigh scattering is strongly dependent upon the size of the particle and the wavelengths. The intensity of the Rayleigh scattered radiation increases rapidly as the ratio of particle size to wavelength increases. Furthermore, the intensity of Rayleigh scattered radiation is identical in the forward and reverse directions.
The Rayleigh scattering model breaks down when the particle size becomes larger than around 10% of the wavelength of the incident radiation. In the case of particles with dimensions greater than this, Mie's scattering model can be used to find the intensity of the scattered radiation. The intensity of Mie scattered radiation is given by the summation of an infinite series of terms rather than by a simple mathematical expression. It can be shown, however, that scattering in this range of particle sizes differs from Rayleigh scattering in several respects: it is roughly independent of wavelength and it is larger in the forward direction than in the reverse direction. The greater the particle size, the more of the light is scattered in the forward direction.
The blue colour of the sky results from Rayleigh scattering, as the size of the gas particles in the atmosphere is much smaller than the wavelength of visible light. Rayleigh scattering is much greater for blue light than for other colours due to its shorter wavelength. As sunlight passes through the atmosphere, its blue component is Rayleigh scattered strongly by atmospheric gases but the longer wavelength (e.g. red/yellow) components are not. The sunlight arriving directly from the Sun therefore appears to be slightly yellow, while the light scattered through rest of the sky appears blue. During sunrises and sunsets, the effect of Rayleigh scattering on the spectrum of the transmitted light is much greater due to the greater distance the light rays have to travel through the high-density air near the Earth's surface.
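As a small numerical sketch of this wavelength dependence (the two wavelengths are chosen only for illustration), the $\lambda^{-4}$ factor in the intensity formula above implies that blue light is scattered several times more strongly than red light by the same small particle:

```python
# Ratio of Rayleigh-scattered intensities at two wavelengths for the same
# particle and geometry: everything cancels except the 1/lambda^4 factor.
blue_nm, red_nm = 450.0, 650.0
ratio = (red_nm / blue_nm) ** 4
print(f"I({blue_nm:.0f} nm) / I({red_nm:.0f} nm) ~ {ratio:.1f}")  # roughly 4x
```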
In contrast, the water droplets that make up clouds are of a comparable size to the wavelengths in visible light, and the scattering is described by Mie's model rather than that of Rayleigh. Here, all wavelengths of visible light are scattered approximately identically, and the clouds therefore appear to be white or grey.
Rayleigh–Gans approximation
The Rayleigh–Gans approximation is an approximate solution to light scattering when the relative refractive index of the particle is close to that of the environment, and its size is much smaller in comparison to the wavelength of light divided by |n − 1|, where n is the refractive index:
$$|n - 1| \ll 1, \qquad k L\,|n - 1| \ll 1,$$
where $k$ is the wavevector of the light, and $L$ refers to the linear dimension of the particle. The former condition is often referred to as optically soft, and the approximation holds for particles of arbitrary shape.
Anomalous diffraction approximation of van de Hulst
The anomalous diffraction approximation is valid for large (compared to wavelength) and optically soft spheres; soft in the context of optics implies that the refractive index of the particle (m) differs only slightly from the refractive index of the environment, and the particle subjects the wave to only a small phase shift. The extinction efficiency in this approximation is given by
$$Q = 2 - \frac{4}{p}\sin p + \frac{4}{p^2}\left(1 - \cos p\right),$$
where Q is the efficiency factor, defined as the ratio of the extinction cross-section to the geometrical cross-section πa².
The term p = 4πa(n − 1)/λ has as its physical meaning the phase delay of the wave passing through the centre of the sphere, where a is the sphere radius, n is the ratio of refractive indices inside and outside of the sphere, and λ the wavelength of the light.
This result was first described by van de Hulst (1957).
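A minimal sketch of evaluating this extinction efficiency as a function of the phase delay p (the droplet radius, refractive index and wavelength below are assumed values for illustration):

```python
import numpy as np

def q_ext_adt(p):
    """van de Hulst anomalous-diffraction extinction efficiency for phase delay p."""
    return 2.0 - (4.0 / p) * np.sin(p) + (4.0 / p**2) * (1.0 - np.cos(p))

# Example: a 2 um radius droplet with n = 1.33 illuminated at 0.55 um
a_um, n, wavelength_um = 2.0, 1.33, 0.55
p = 4.0 * np.pi * a_um * (n - 1.0) / wavelength_um
print(f"p = {p:.2f}, Q_ext = {q_ext_adt(p):.2f}")
```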
Mathematics
The scattering by a spherical nanoparticle is solved exactly regardless of the particle size. We consider scattering by a plane wave propagating along the z-axis polarized along the x-axis. Dielectric and magnetic permeabilities of the particle are $\varepsilon_1$ and $\mu_1$, and those of the environment are $\varepsilon$ and $\mu$.
In order to solve the scattering problem, we first write the solutions of the vector Helmholtz equation in spherical coordinates, since the fields inside and outside the particle must satisfy it:
$$\nabla^2\mathbf{E} + k^2\mathbf{E} = 0, \qquad \nabla^2\mathbf{H} + k^2\mathbf{H} = 0.$$
In addition to the Helmholtz equation, the fields must be divergence-free, $\nabla\cdot\mathbf{E} = 0$ and $\nabla\cdot\mathbf{H} = 0$, and must satisfy Maxwell's curl equations relating $\mathbf{E}$ and $\mathbf{H}$.
Vector spherical harmonics possess all the necessary properties. They are introduced as two families, the magnetic harmonics (TE) and the electric harmonics (TM), which are built from the associated Legendre polynomials $P_n^m(\cos\theta)$ and any of the spherical Bessel functions $z_n(kr)$.
Next, we expand the incident plane wave in vector spherical harmonics:
Here the superscript indicates that the radial parts of these functions are spherical Bessel functions of the first kind.
The expansion coefficients are obtained by taking integrals of the form
In this case, all coefficients with $m \neq 1$ are zero, since the integral over the azimuthal angle in the numerator vanishes.
Then the following conditions are imposed:
Interface conditions on the boundary between the sphere and the environment (which allow us to relate the expansion coefficients of the incident, internal, and scattered fields)
The condition that the solution is bounded at the origin (therefore, in the radial part of the generating functions , spherical Bessel functions of the first kind are selected for the internal field),
For a scattered field, the asymptotics at infinity corresponds to a diverging spherical wave (in connection with this, for the scattered field in the radial part of the generating functions spherical Hankel functions of the first kind are chosen).
Scattered fields are written in terms of a vector harmonic expansion as
Here the superscript indicates that the radial parts of the functions are spherical Hankel functions of the first kind (harmonics built on Hankel functions of the second kind would describe incoming rather than outgoing waves).
Internal fields:
Here $k$ is the wave vector outside the particle, $k_1$ is the wave vector in the particle material, and $n$ and $n_1$ are the refractive indices of the medium and the particle, respectively.
After applying the interface conditions, we obtain expressions for the coefficients:
where
with $a$ being the radius of the sphere.
$j_n$ and $h_n^{(1)}$ represent the spherical Bessel and Hankel functions of the first kind, respectively.
Scattering and extinction cross-sections
Values commonly calculated using Mie theory include efficiency coefficients for extinction $Q_\text{ext}$, scattering $Q_\text{sca}$, and absorption $Q_\text{abs}$. These efficiency coefficients are ratios of the cross section of the respective process, $\sigma_i$, to the particle projected area, $\pi a^2$, where a is the particle radius.
According to the definition of extinction,
$$\sigma_\text{ext} = \sigma_\text{sca} + \sigma_\text{abs} \qquad \text{and} \qquad Q_\text{ext} = Q_\text{sca} + Q_\text{abs}.$$
The scattering and extinction coefficients can be represented as the infinite series:
$$Q_\text{sca} = \frac{2}{x^2}\sum_{n=1}^{\infty}(2n+1)\left(|a_n|^2 + |b_n|^2\right), \qquad Q_\text{ext} = \frac{2}{x^2}\sum_{n=1}^{\infty}(2n+1)\,\operatorname{Re}(a_n + b_n),$$
where $x = ka$ is the size parameter and $a_n$, $b_n$ are the Mie scattering coefficients.
The contributions in these sums, indexed by n, correspond to the orders of a multipole expansion, with $n = 1$ being the dipole term, $n = 2$ the quadrupole term, and so forth.
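The series above can be evaluated numerically. The sketch below is not the derivation given in this article: it uses the logarithmic-derivative recurrence popularized by Bohren and Huffman (a convention adopted here as an assumption), and the function name and example values are ours.

```python
import numpy as np
from scipy.special import spherical_jn, spherical_yn

def mie_efficiencies(m, x):
    """Q_ext, Q_sca, Q_abs of a homogeneous sphere.

    m : complex relative refractive index (particle / surrounding medium)
    x : size parameter, 2*pi*a/lambda measured in the surrounding medium
    """
    N = int(np.ceil(x + 4.0 * x ** (1.0 / 3.0) + 2.0))  # common series truncation
    order = np.arange(0, N + 1)

    # Riccati-Bessel functions psi_n(x) = x j_n(x) and xi_n(x) = x (j_n + i y_n)(x)
    psi = x * spherical_jn(order, x)
    xi = psi + 1j * x * spherical_yn(order, x)

    # Logarithmic derivative D_n(mx) by downward recurrence (numerically stable)
    mx = m * x
    n_start = N + 15
    D = np.zeros(n_start + 1, dtype=complex)
    for k in range(n_start, 0, -1):
        D[k - 1] = k / mx - 1.0 / (D[k] + k / mx)

    n = np.arange(1, N + 1)
    Dn = D[1:N + 1]
    fa = Dn / m + n / x
    fb = Dn * m + n / x
    a = (fa * psi[1:] - psi[:-1]) / (fa * xi[1:] - xi[:-1])  # electric multipoles
    b = (fb * psi[1:] - psi[:-1]) / (fb * xi[1:] - xi[:-1])  # magnetic multipoles

    q_sca = (2.0 / x**2) * np.sum((2 * n + 1) * (np.abs(a)**2 + np.abs(b)**2))
    q_ext = (2.0 / x**2) * np.sum((2 * n + 1) * (a + b).real)
    return q_ext, q_sca, q_ext - q_sca

# Example: a water droplet (m ~ 1.33) with radius 0.5 um in light of 0.55 um
print(mie_efficiencies(1.33 + 0.0j, 2 * np.pi * 0.5 / 0.55))
```

For a non-absorbing sphere such as this example, the returned absorption efficiency comes out near zero, as expected.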
Application to larger particles
If the size of the particle is equal to several wavelengths in the material, then the scattered fields have some features. Further, the form of the electric field is key, since the magnetic field is obtained from it by taking the curl.
All Mie coefficients depend on the frequency and have maxima when the denominator is close to zero (exact equality to zero is achieved for complex frequencies). In this case, it is possible that the contribution of one specific harmonic dominates in scattering. Then at large distances from the particle, the radiation pattern of the scattered field will be similar to the corresponding radiation pattern of the angular part of the vector spherical harmonics. The harmonics with coefficient $a_1$ correspond to electric dipoles (if the contribution of this harmonic dominates in the expansion of the electric field, then the field is similar to the electric dipole field), $b_1$ corresponds to the field of a magnetic dipole, $a_2$ and $b_2$ to electric and magnetic quadrupoles, $a_3$ and $b_3$ to octupoles, and so on. The maxima of the scattering coefficients (as well as the change of their phase to $\pi/2$) are called multipole resonances, and zeros can be called anapoles.
The dependence of the scattering cross-section on the wavelength and the contribution of specific resonances strongly depends on the particle material. For example, for a gold particle with a radius of 100 nm, the contribution of the electric dipole to scattering predominates in the optical range, while for a silicon particle there are pronounced magnetic dipole and quadrupole resonances. For metal particles, the peak visible in the scattering cross-section is also called localized plasmon resonance.
In the limit of small particles or long wavelengths, the electric dipole contribution dominates in the scattering cross-section.
Other directions of the incident plane wave
In the case of an x-polarized plane wave incident along the z-axis, the decompositions of all fields contain only harmonics with m = 1, but for an arbitrary incident wave this is not the case. For a rotated plane wave, the expansion coefficients can be obtained, for example, using the fact that during rotation, vector spherical harmonics are transformed through each other by Wigner D-matrices.
In this case, the scattered field will be decomposed by all possible harmonics:
Then the scattering cross section will be expressed in terms of the coefficients as follows:
Kerker effect
The Kerker effect is a phenomenon in scattering directionality, which occurs when different multipole responses are presented and not negligible.
In 1983, in the work of Kerker, Wang and Giles, the direction of scattering by particles with $\mu \neq 1$ was investigated. In particular, it was shown that for hypothetical particles with $\mu = \varepsilon$ backward scattering is completely suppressed. This can be seen as an extension to a spherical surface of Giles' and Wild's results for reflection at a planar surface with equal refractive indices, where reflection and transmission are constant and independent of the angle of incidence.
In addition, scattering cross sections in the forward and backward directions are simply expressed in terms of Mie coefficients:
For certain combinations of coefficients, the expressions above can be minimized.
So, for example, when terms with $n > 1$ can be neglected (dipole approximation), $a_1 = b_1$ corresponds to the minimum in backscattering (the magnetic and electric dipoles are equal in magnitude and in phase; this is also called the first Kerker or zero-backward intensity condition), and $a_1 = -b_1$ corresponds to the minimum in forward scattering, also called the second Kerker condition (or near-zero forward intensity condition). From the optical theorem, it is shown that exactly zero forward scattering is not possible for a passive particle. For the exact solution of the problem, it is necessary to take into account the contributions of all multipoles. The sum of the electric and magnetic dipoles forms a Huygens source.
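As a sketch of the dipole-approximation statement above (the coefficient values are invented, and overall prefactors common to both directions are dropped), the forward and backward scattered intensities behave as $|a_1 + b_1|^2$ and $|a_1 - b_1|^2$ respectively:

```python
# Dipole-only Kerker sketch: relative forward/backward scattered intensities.
def fwd_back(a1, b1):
    return abs(a1 + b1) ** 2, abs(a1 - b1) ** 2

# a1 = b1  -> first Kerker condition, backscattering vanishes
print(fwd_back(0.3 + 0.1j, 0.3 + 0.1j))
# a1 = -b1 -> second Kerker condition, the dipole part of forward scattering vanishes
print(fwd_back(0.3 + 0.1j, -0.3 - 0.1j))
```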
For dielectric particles, maximum forward scattering is observed at wavelengths longer than the wavelength of magnetic dipole resonance, and maximum backward scattering at shorter ones.
Later, other varieties of the effect were found. For example, the transverse Kerker effect, with nearly complete simultaneous
suppression of both forward and backward scattered fields (side-scattering patterns), optomechanical Kerker effect, in acoustic scattering, and also found in plants.
Dyadic Green's function of a sphere
Green's function is a solution to the following equation:
$$\nabla\times\nabla\times\hat{G}(\mathbf{r},\mathbf{r}') - k^2\hat{G}(\mathbf{r},\mathbf{r}') = \hat{I}\,\delta(\mathbf{r} - \mathbf{r}'),$$
where $\hat{I}$ is the $3\times 3$ identity matrix and $k$ takes the value of the wave number inside the sphere for points within it and the value outside the sphere otherwise. Since all fields are vectorial, the Green function is a 3-by-3 matrix and is called a dyadic. If a polarization $\mathbf{P}(\mathbf{r})$ is induced in the system, the fields can be written as an integral of the Green's function over that polarization.
In the same way as the fields, the Green's function can be decomposed into vector spherical harmonics.
Dyadic Green's function of free space:
In the presence of a sphere, the Green's function is also decomposed into vector spherical harmonics. Its appearance depends on the environment in which the points and are located.
When both points are outside the sphere:
where the coefficients are :
When both points are inside the sphere :
Coefficients:
Source is inside the sphere and observation point is outside:
coefficients:
Source is outside the sphere and observation point is inside :
coefficients:
Computational codes
Mie solutions are implemented in a number of programs written in different computer languages such as Fortran, MATLAB, and Mathematica. These implementations sum the series to a finite number of terms, and provide as output the scattering phase function, the extinction, scattering, and absorption efficiencies, and other parameters such as asymmetry parameters or radiation torque. Current usage of the term "Mie solution" indicates a series approximation to a solution of Maxwell's equations. There are several known objects that allow such a solution: spheres, concentric spheres, infinite cylinders, clusters of spheres and clusters of cylinders. There are also known series solutions for scattering by ellipsoidal particles. A list of codes implementing these specialized solutions is provided in the following:
Codes for electromagnetic scattering by spheres – solutions for a single sphere, coated spheres, multilayer sphere, and cluster of spheres;
Codes for electromagnetic scattering by cylinders – solutions for a single cylinder, multilayer cylinders, and cluster of cylinders.
A generalization that allows a treatment of more generally shaped particles is the T-matrix method, which also relies on a series approximation to solutions of Maxwell's equations.
See also external links for other codes and calculators.
Applications
Mie theory is very important in meteorological optics, where diameter-to-wavelength ratios of the order of unity and larger are characteristic for many problems regarding haze and cloud scattering. A further application is in the characterization of particles by optical scattering measurements. The Mie solution is also important for understanding the appearance of common materials like milk, biological tissue and latex paint.
Atmospheric science
Mie scattering occurs when the diameters of atmospheric particulates are similar to or larger than the wavelengths of the light. Dust, pollen, smoke and microscopic water droplets that form clouds are common causes of Mie scattering. Mie scattering occurs mostly in the lower portions of the atmosphere, where larger particles are more abundant, and dominates in cloudy conditions.
Cancer detection and screening
Mie theory has been used to determine whether scattered light from tissue corresponds to healthy or cancerous cell nuclei using angle-resolved low-coherence interferometry.
Clinical laboratory analysis
Mie theory is a central principle in the application of nephelometric based assays, widely used in medicine to measure various plasma proteins. A wide array of plasma proteins can be detected and quantified by nephelometry.
Magnetic particles
A number of unusual electromagnetic scattering effects occur for magnetic spheres. When the relative permittivity equals the permeability, the back-scatter gain is zero. Also, the scattered radiation is polarized in the same sense as the incident radiation. In the small-particle (or long-wavelength) limit, conditions can occur for zero forward scatter, for complete polarization of scattered radiation in other directions, and for asymmetry of forward scatter to backscatter. The special case in the small-particle limit provides interesting special instances of complete polarization and forward-scatter-to-backscatter asymmetry.
Metamaterial
Mie theory has been used to design metamaterials. They usually consist of three-dimensional composites of metal or non-metallic inclusions periodically or randomly embedded in a low-permittivity matrix. In such a scheme, the negative constitutive parameters are designed to appear around the Mie resonances of the inclusions: the negative effective permittivity is designed around the resonance of the Mie electric dipole scattering coefficient, whereas negative effective permeability is designed around the resonance of the Mie magnetic dipole scattering coefficient, and a doubly negative material (DNG) is designed around the overlap of the resonances of the Mie electric and magnetic dipole scattering coefficients. The particles usually have one of the following combinations:
one set of magnetodielectric particles with values of relative permittivity and permeability much greater than one and close to each other;
two different dielectric particles with equal permittivity but different size;
two different dielectric particles with equal size but different permittivity.
In theory, the particles analyzed by Mie theory are commonly spherical but, in practice, particles are usually fabricated as cubes or cylinders for ease of fabrication. To meet the criteria of homogenization, which may be stated in the form that the lattice constant is much smaller than the operating wavelength, the relative permittivity of the dielectric particles should be much greater than 1, e.g. to achieve negative effective permittivity (permeability).
Particle sizing
Mie theory is often applied in laser diffraction analysis to inspect the particle sizing effect. While early computers in the 1970s were only able to compute diffraction data with the more simple Fraunhofer approximation, Mie is widely used since the 1990s and officially recommended for particles below 50 micrometers in guideline ISO 13320:2009.
Mie theory has been used in the detection of oil concentration in polluted water.
Mie scattering is the primary method of sizing single sonoluminescing bubbles of air in water and is valid for cavities in materials, as well as particles in materials, as long as the surrounding material is essentially non-absorbing.
Parasitology
It has also been used to study the structure of Plasmodium falciparum, a particularly pathogenic form of malaria.
Extensions
In 1986, P. A. Bobbert and J. Vlieger extended the Mie model to calculate scattering by a sphere in a homogeneous medium placed on flat surface: the Bobbert–Vlieger (BV) model. Like the Mie model, the extended model can be applied to spheres with a radius nearly the wavelength of the incident light. The model has been implemented in C++ source code.
Recent developments are related to scattering by ellipsoid.
These contemporary studies build on the well-known research of Rayleigh.
See also
Codes for electromagnetic scattering by spheres
Computational electromagnetics
Light scattering by particles
List of atmospheric radiative transfer codes
Optical properties of water and ice
References
Further reading
External links
SCATTERLIB and scattport.org are collections of light scattering codes with implementations of Mie solutions in Fortran, C++, IDL, Pascal, Mathematica, and Mathcad
JMIE (2D C++ code to calculate the analytical fields around an infinite cylinder, developed by Jeffrey M. McMahon)
ScatLab. Mie scattering software for Windows.
STRATIFY MATLAB code of scattering from multilayered spheres in cases where the source is a point dipole and a plane wave. Description in arXiv:2006.06512
Scattnlay, an open-source C++ Mie solution package with Python and JavaScript wrappers. Provides far-field and near-field simulation results for multilayered spheres.
Online Mie scattering calculator provides simulation of scattering properties (including multipole decomposition) and near-field maps for bulk, core-shell, and multilayer spheres. Material parameters include all nk-data files from refractiveindex.info website. The source code is part of Scattnlay project.
Online Mie solution calculator is available, with documentation in German and English.
Online Mie scattering calculator produces beautiful graphs over a range of parameters.
phpMie Online Mie scattering calculator written on PHP.
Mie resonance mediated light diffusion and random lasing.
Mie solution for spherical particles.
PyMieScatt, a Mie solution package written in Python.
pyMieForAll, an open-source C++ Mie solution package with Python wrapper.
Cherenkov radiation

Cherenkov radiation (also known as Čerenkov radiation) is electromagnetic radiation emitted when a charged particle (such as an electron) passes through a dielectric medium (such as distilled water) at a speed greater than the phase velocity (speed of propagation of a wavefront in a medium) of light in that medium. A classic example of Cherenkov radiation is the characteristic blue glow of an underwater nuclear reactor. Its cause is similar to the cause of a sonic boom, the sharp sound heard when faster-than-sound movement occurs. The phenomenon is named after Soviet physicist Pavel Cherenkov.
History
The radiation is named after the Soviet scientist Pavel Cherenkov, the 1958 Nobel Prize winner, who was the first to detect it experimentally under the supervision of Sergey Vavilov at the Lebedev Institute in 1934. Therefore, it is also known as Vavilov–Cherenkov radiation. Cherenkov saw a faint bluish light around a radioactive preparation in water during experiments. His doctorate thesis was on luminescence of uranium salt solutions that were excited by gamma rays instead of less energetic visible light, as done commonly. He discovered the anisotropy of the radiation and came to the conclusion that the bluish glow was not a fluorescent phenomenon.
A theory of this effect was later developed in 1937 within the framework of Einstein's special relativity theory by Cherenkov's colleagues Igor Tamm and Ilya Frank, who also shared the 1958 Nobel Prize.
Cherenkov radiation as conical wavefronts had been theoretically predicted by the English polymath Oliver Heaviside in papers published between 1888 and 1889 and by Arnold Sommerfeld in 1904, but both predictions had been quickly dismissed after relativity theory ruled out superluminal particles, and the idea was not revisited until the 1970s. Marie Curie observed a pale blue light in a highly concentrated radium solution in 1910, but did not investigate its source. In 1926, the French radiotherapist Lucien Mallet described the luminous radiation of radium irradiating water as having a continuous spectrum.
In 2019, a team of researchers from Dartmouth's and Dartmouth-Hitchcock's Norris Cotton Cancer Center discovered Cherenkov light being generated in the vitreous humor of patients undergoing radiotherapy. The light was observed using a camera imaging system called a CDose, which is specially designed to view light emissions from biological systems. For decades, patients had reported phenomena such as "flashes of bright or blue light" when receiving radiation treatments for brain cancer, but the effects had never been experimentally observed.
Physical origin
Basics
While the speed of light in vacuum is a universal constant, the speed in a material may be significantly less, as it is perceived to be slowed by the medium. For example, in water it is only 0.75c. Matter can accelerate to a velocity higher than this (although still less than c, the speed of light in vacuum) during nuclear reactions and in particle accelerators. Cherenkov radiation results when a charged particle, most commonly an electron, travels through a dielectric (can be polarized electrically) medium with a speed greater than light's speed in that medium.
The effect can be intuitively described in the following way. From classical physics, it is known that accelerating charged particles emit EM waves, and via Huygens' principle these waves will form spherical wavefronts which propagate with the phase velocity of that medium (i.e. the speed of light in that medium, given by $c/n$ for refractive index $n$). When any charged particle passes through a medium, the particles of the medium will polarize around it in response. The charged particle excites the molecules in the polarizable medium, and on returning to their ground state, the molecules re-emit the energy given to them to achieve excitation as photons. These photons form the spherical wavefronts which can be seen originating from the moving particle. If $v_p < c/n$, that is, if the velocity of the charged particle is less than the speed of light in the medium, then the polarization field which forms around the moving particle is usually symmetric. The corresponding emitted wavefronts may be bunched up, but they do not coincide or cross, and there are therefore no interference effects to consider. In the reverse situation, i.e. $v_p > c/n$, the polarization field is asymmetric along the direction of motion of the particle, as the particles of the medium do not have enough time to recover to their "normal" randomized states. This results in overlapping waveforms, and constructive interference leads to an observed cone-like light signal at a characteristic angle: Cherenkov light.
A common analogy is the sonic boom of a supersonic aircraft. The sound waves generated by the aircraft travel at the speed of sound, which is slower than the aircraft, and cannot propagate forward from the aircraft, instead forming a conical shock front. In a similar way, a charged particle can generate a "shock wave" of visible light as it travels through an insulator.
The velocity that must be exceeded is the phase velocity of light rather than the group velocity of light. The phase velocity can be altered dramatically by using a periodic medium, and in that case one can even achieve Cherenkov radiation with no minimum particle velocity, a phenomenon known as the Smith–Purcell effect. In a more complex periodic medium, such as a photonic crystal, one can also obtain a variety of other anomalous Cherenkov effects, such as radiation in a backwards direction (see below) whereas ordinary Cherenkov radiation forms an acute angle with the particle velocity.
In their original work on the theoretical foundations of Cherenkov radiation, Tamm and Frank wrote, "This peculiar radiation can evidently not be explained by any common mechanism such as the interaction of the fast electron with individual atom or as radiative scattering of electrons on atomic nuclei. On the other hand, the phenomenon can be explained both qualitatively and quantitatively if one takes into account the fact that an electron moving in a medium does radiate light even if it is moving uniformly provided that its velocity is greater than the velocity of light in the medium."
Emission angle
In the figure showing the geometry, the particle (red arrow) travels in a medium with speed $v_p$ such that
$$\frac{c}{n} < v_p < c,$$
where $c$ is the speed of light in vacuum, and $n$ is the refractive index of the medium. If the medium is water, the condition is $0.75c < v_p < c$, since $n \approx 1.33$ for water at 20 °C.
We define the ratio between the speed of the particle and the speed of light as
$$\beta = \frac{v_p}{c}.$$
The emitted light waves (denoted by blue arrows) travel at speed
$$v_\text{em} = \frac{c}{n}.$$
The left corner of the triangle represents the location of the superluminal particle at some initial moment. The right corner of the triangle is the location of the particle at some later time t. In the given time t, the particle travels the distance
$$x_p = v_p t = \beta\, c\, t,$$
whereas the emitted electromagnetic waves are constricted to travel the distance
$$x_\text{em} = v_\text{em}\, t = \frac{c}{n}\, t.$$
So the emission angle results in
$$\cos\theta = \frac{1}{n\beta}.$$
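A minimal numerical sketch of this relation (the particle speeds below are illustrative):

```python
import math

def cherenkov_angle_deg(beta, n):
    """Cherenkov emission angle from cos(theta) = 1/(n*beta); None below threshold."""
    if n * beta <= 1.0:
        return None
    return math.degrees(math.acos(1.0 / (n * beta)))

n_water = 1.33
for beta in (0.70, 0.80, 0.999):
    angle = cherenkov_angle_deg(beta, n_water)
    print(beta, "no emission" if angle is None else f"theta ~ {angle:.1f} deg")
```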
Arbitrary emission angle
Cherenkov radiation can also be radiated in an arbitrary direction using properly engineered one-dimensional metamaterials. The latter are designed to introduce a gradient of phase retardation along the trajectory of the fast-travelling particle, reversing or steering Cherenkov emission at arbitrary angles given by a generalized form of the Cherenkov relation.
Note that since this ratio is independent of time, one can take arbitrary times and achieve similar triangles. The angle stays the same, meaning that subsequent waves generated between the initial time and final time t will form similar triangles with coinciding right endpoints to the one shown.
Reverse Cherenkov effect
A reverse Cherenkov effect can be experienced using materials called negative-index metamaterials (materials with a subwavelength microstructure that gives them an effective "average" property very different from their constituent materials, in this case having negative permittivity and negative permeability). This means that, when a charged particle (usually electrons) passes through a medium at a speed greater than the phase velocity of light in that medium, that particle emits trailing radiation from its progress through the medium rather than in front of it (as is the case in normal materials with, both permittivity and permeability positive). One can also obtain such reverse-cone Cherenkov radiation in non-metamaterial periodic media where the periodic structure is on the same scale as the wavelength, so it cannot be treated as an effectively homogeneous metamaterial.
In vacuum
The Cherenkov effect can occur in vacuum. In a slow-wave structure, like in a traveling-wave tube (TWT), the phase velocity decreases and the velocity of charged particles can exceed the phase velocity while remaining lower than $c$. In such a system, this effect can be derived from conservation of energy and momentum, where the momentum of a photon should be $p = \hbar\beta$ ($\beta$ is the phase constant) rather than the de Broglie relation $p = \hbar k$. This type of radiation (VCR) is used to generate high-power microwaves.
Collective Cherenkov
Radiation with the same properties as typical Cherenkov radiation can be created by structures of electric current that travel faster than light. By manipulating density profiles in plasma acceleration setups, charged structures of up to nanocoulombs are created that may travel faster than the speed of light and emit optical shocks at the Cherenkov angle. The electrons themselves remain subluminal, so the electrons that compose the structure at one time are different from the electrons in the structure at a later time.
Characteristics
The frequency spectrum of Cherenkov radiation by a particle is given by the Frank–Tamm formula:
$$\frac{d^2 E}{dx\, d\omega} = \frac{q^2}{4\pi}\,\mu(\omega)\,\omega \left(1 - \frac{c^2}{v^2\, n^2(\omega)}\right), \qquad v > \frac{c}{n(\omega)}.$$
The Frank–Tamm formula describes the amount of energy emitted from Cherenkov radiation per unit length traveled and per unit frequency $\omega$. Here $\mu(\omega)$ is the permeability and $n(\omega)$ is the index of refraction of the material the charged particle moves through, $q$ is the electric charge of the particle, $v$ is the speed of the particle, and $c$ is the speed of light in vacuum.
Unlike fluorescence or emission spectra that have characteristic spectral peaks, Cherenkov radiation is continuous. Around the visible spectrum, the relative intensity per unit frequency is approximately proportional to the frequency. That is, higher frequencies (shorter wavelengths) are more intense in Cherenkov radiation. This is why visible Cherenkov radiation is observed to be brilliant blue. In fact, most Cherenkov radiation is in the ultraviolet spectrum—it is only with sufficiently accelerated charges that it even becomes visible; the sensitivity of the human eye peaks at green, and is very low in the violet portion of the spectrum.
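Integrating the spectrum over a wavelength band gives the photon yield per unit path length. The sketch below uses the photon-number form of the Frank–Tamm result, $d^2N/(dx\,d\lambda) = (2\pi\alpha z^2/\lambda^2)\,(1 - 1/(\beta^2 n^2))$; the choice of water, $\beta \approx 1$ and the 400–700 nm band is illustrative.

```python
import math

alpha = 1.0 / 137.036          # fine-structure constant
z, beta, n = 1.0, 0.999, 1.33  # unit charge, nearly light speed, water
lam1, lam2 = 400e-9, 700e-9    # visible band, in metres

sin2_theta = 1.0 - 1.0 / (beta ** 2 * n ** 2)  # sin^2 of the Cherenkov angle
photons_per_m = 2.0 * math.pi * alpha * z ** 2 * sin2_theta * (1.0 / lam1 - 1.0 / lam2)
print(f"~{photons_per_m / 100:.0f} visible photons per cm of track")
```

This gives on the order of a couple of hundred visible photons per centimetre of track in water, which is why the glow, while faint, is readily detected by photomultipliers.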
There is a cut-off frequency above which the equation can no longer be satisfied. The refractive index varies with frequency (and hence with wavelength) in such a way that the intensity cannot continue to increase at ever shorter wavelengths, even for very relativistic particles (where v/c is close to 1). At X-ray frequencies, the refractive index becomes less than 1 (note that in media, the phase velocity may exceed c without violating relativity) and hence no X-ray emission (or shorter wavelength emissions such as gamma rays) would be observed. However, X-rays can be generated at special frequencies just below the frequencies corresponding to core electronic transitions in a material, as the index of refraction is often greater than 1 just below a resonant frequency (see Kramers–Kronig relation and Anomalous dispersion).
As in sonic booms and bow shocks, the angle of the shock cone is directly related to the velocity of the disruption. The Cherenkov angle is zero at the threshold velocity for the emission of Cherenkov radiation. The angle takes on a maximum as the particle speed approaches the speed of light. Hence, observed angles of incidence can be used to compute the direction and speed of a Cherenkov radiation-producing charge.
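As a sketch of that inversion (the measured angle and radiator index are made-up numbers), the particle speed follows directly from the emission angle:

```python
import math

n = 1.33                     # water radiator (illustrative)
theta_deg = 41.0             # measured Cherenkov angle (illustrative)
beta = 1.0 / (n * math.cos(math.radians(theta_deg)))
print(f"beta ~ {beta:.3f}")  # close to 1: a highly relativistic particle
```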
Cherenkov radiation can be generated in the eye by charged particles hitting the vitreous humour, giving the impression of flashes, as in cosmic ray visual phenomena and possibly some observations of criticality accidents.
Uses
Detection of labelled biomolecules
Cherenkov radiation is widely used to facilitate the detection of small amounts and low concentrations of biomolecules. Radioactive atoms such as phosphorus-32 are readily introduced into biomolecules by enzymatic and synthetic means and subsequently may be easily detected in small quantities for the purpose of elucidating biological pathways and in characterizing the interaction of biological molecules such as affinity constants and dissociation rates.
Medical imaging of radioisotopes and external beam radiotherapy
More recently, Cherenkov light has been used to image substances in the body. These discoveries have led to intense interest in using this light signal to quantify and/or detect radiation in the body, either from internal sources such as injected radiopharmaceuticals or from external beam radiotherapy in oncology. Radioisotopes such as the positron emitters 18F and 13N or the beta emitters 32P and 90Y have measurable Cherenkov emission, and the isotopes 18F and 131I have been imaged in humans to demonstrate diagnostic value.
External beam radiation therapy has been shown to induce a substantial amount of Cherenkov light in the tissue being treated, due to electron beams or photon beams with energy in the 6 MV to 18 MV ranges. The secondary electrons induced by these high energy x-rays result in the Cherenkov light emission, where the detected signal can be imaged at the entry and exit surfaces of the tissue. The Cherenkov light emitted from patient's tissue during radiation therapy is a very low light level signal but can be detected by specially designed cameras that synchronize their acquisition to the linear accelerator pulses. The ability to see this signal shows the shape of the radiation beam as it is incident upon the tissue in real time.
Nuclear reactors
Cherenkov radiation is used to detect high-energy charged particles. In open pool reactors, beta particles (high-energy electrons) are released as the fission products decay. The glow continues after the chain reaction stops, dimming as the shorter-lived products decay. Similarly, Cherenkov radiation can characterize the remaining radioactivity of spent fuel rods. This phenomenon is used to verify the presence of spent nuclear fuel in spent fuel pools for nuclear safeguards purposes.
Astrophysics experiments
When a high-energy (TeV) gamma photon or cosmic ray interacts with the Earth's atmosphere, it may produce an electron–positron pair with enormous velocities. The Cherenkov radiation emitted in the atmosphere by these charged particles is used to determine the direction and energy of the cosmic ray or gamma ray, for example in the Imaging Atmospheric Cherenkov Technique (IACT) used by experiments such as VERITAS, H.E.S.S., and MAGIC. Cherenkov radiation emitted in tanks filled with water by those charged particles reaching Earth is used for the same goal by the Extensive Air Shower experiment HAWC, the Pierre Auger Observatory and other projects. Similar methods are used in very large neutrino detectors, such as the Super-Kamiokande, the Sudbury Neutrino Observatory (SNO) and IceCube. Other projects have operated in the past applying related techniques, such as STACEE, a former solar tower refurbished to work as a non-imaging Cherenkov observatory, located in New Mexico.
Astrophysics observatories using the Cherenkov technique to measure air showers are key to determining the properties of astronomical objects that emit very-high-energy gamma rays, such as supernova remnants and blazars.
Particle physics experiments
Cherenkov radiation is commonly used in experimental particle physics for particle identification. One could measure (or put limits on) the velocity of an electrically charged elementary particle by the properties of the Cherenkov light it emits in a certain medium. If the momentum of the particle is measured independently, one could compute the mass of the particle by its momentum and velocity (see four-momentum), and hence identify the particle.
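As a rough illustration of that step, the sketch below (with assumed, illustrative numbers rather than values from any particular experiment) recovers the invariant mass from an independently measured momentum and the velocity β = v/c inferred from the Cherenkov light, using p = γmβc:

    # Mass from momentum and velocity (illustrative sketch; momentum in GeV/c,
    # mass returned in GeV/c^2).
    import math

    def mass_from_p_beta(p, beta):
        # From p = gamma*m*beta*c it follows that m*c^2 = p*c*sqrt(1/beta^2 - 1).
        return p * math.sqrt(1.0 / beta**2 - 1.0)

    # A 1 GeV/c track whose Cherenkov measurement implies beta ~ 0.9904
    # reconstructs to ~0.14 GeV/c^2, close to the charged pion mass.
    print(mass_from_p_beta(1.0, 0.9904))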
The simplest type of particle identification device based on a Cherenkov radiation technique is the threshold counter, which answers whether the velocity of a charged particle is lower or higher than a certain value (v = c/n, where c is the speed of light and n is the refractive index of the medium) by looking at whether this particle emits Cherenkov light in a certain medium. Knowing the particle momentum, one can separate particles lighter than a certain threshold from those heavier than the threshold.
The most advanced type of a detector is the RICH, or ring-imaging Cherenkov detector, developed in the 1980s. In a RICH detector, a cone of Cherenkov light is produced when a high-speed charged particle traverses a suitable medium, often called the radiator. This light cone is detected on a position-sensitive planar photon detector, which allows reconstructing a ring or disc whose radius is a measure of the Cherenkov emission angle. Both focusing and proximity-focusing detectors are in use. In a focusing RICH detector, the photons are collected by a spherical mirror and focused onto the photon detector placed at the focal plane. The result is a circle with a radius independent of the emission point along the particle track. This scheme is suitable for low refractive index radiators—i.e. gases—due to the larger radiator length needed to create enough photons. In the more compact proximity-focusing design, a thin radiator volume emits a cone of Cherenkov light which traverses a small distance—the proximity gap—and is detected on the photon detector plane. The image is a ring of light whose radius is defined by the Cherenkov emission angle and the proximity gap. The ring thickness is determined by the thickness of the radiator. An example of a proximity gap RICH detector is the High Momentum Particle Identification Detector (HMPID), a detector for ALICE (A Large Ion Collider Experiment), one of the experiments at the LHC (Large Hadron Collider) at CERN.
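As a sketch of the proximity-focusing geometry just described, and under the simplifying assumptions of a thin radiator and no refraction correction at the radiator exit, the ring radius is roughly the proximity gap times tan θ_c, so the particle's β can be estimated from the measured radius (all numbers below are assumed, illustrative values):

    # Estimate beta from the ring radius of a proximity-focusing RICH.
    # Simplified geometry: radius ~ gap * tan(theta_c); refraction neglected.
    import math

    def beta_from_ring(radius_cm, gap_cm, n):
        theta_c = math.atan(radius_cm / gap_cm)   # Cherenkov emission angle
        return 1.0 / (n * math.cos(theta_c))      # invert cos(theta_c) = 1/(n*beta)

    # Assumed radiator index n = 1.30, 10 cm gap, 8 cm measured ring radius.
    print(beta_from_ring(8.0, 10.0, 1.30))        # ~0.985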
See also
Askaryan radiation, similar radiation produced by fast uncharged particles
Blue noise
Bremsstrahlung, radiation produced when charged particles are decelerated by other charged particles
Faster-than-light, about conjectural propagation of information or matter faster than the speed of light
Frank–Tamm formula, giving the spectrum of Cherenkov radiation
Light echo
List of light sources
Non-radiation condition
Radioluminescence
Tachyon
Transition radiation
Citations
Sources
External links
Physical phenomena
Particle physics
Special relativity
Experimental particle physics
Light sources
Alice in Wonderland syndrome

Alice in Wonderland syndrome (AIWS), also known as Todd's syndrome or dysmetropsia, is a neurological disorder that distorts perception. People with this syndrome may experience distortions in their visual perception of objects, such as appearing smaller (micropsia) or larger (macropsia), or appearing to be closer (pelopsia) or farther (teleopsia) than they are. Distortion may also occur for senses other than vision.
The cause of Alice in Wonderland syndrome is currently not known, but it has often been associated with migraines, head trauma, or viral encephalitis caused by Epstein–Barr virus infection. It is also theorized that AIWS can be caused by abnormal amounts of electrical activity, resulting in abnormal blood flow in the parts of the brain that process visual perception and texture.
Although there are cases of Alice in Wonderland Syndrome in both adolescents and adults, it is most commonly seen in children.
Classification
The classification is not universally agreed upon in the literature; however, some authors distinguish true Alice in Wonderland syndrome based solely on symptoms related to alterations in a person's body image. In contrast, they use the term "Alice in Wonderland-like syndrome" to encompass symptoms associated with changes in the perception of vision, time, hearing, touch, or other external perceptions.
Because the clinical features seen in Alice in Wonderland syndrome are classified by type, the table below illustrates these features and symptoms by type, with Type C having a combination of Type A and Type B symptoms.
Signs and symptoms
With over 60 associated symptoms, AIWS affects the sense of vision, sensation, touch, and hearing, as well as the perception of one's body image. Migraines, nausea, dizziness, and agitation are also commonly associated symptoms with Alice in Wonderland syndrome. Less frequent symptoms also include: loss of limb control and coordination, memory loss, lingering touch and sound sensations, and emotional instability. Alice in Wonderland syndrome is often associated with distortion of sensory perception, which involves visual, somatosensory, and non-visual symptoms. AIWS is characterized by the individual being able to recognize the distortion in the perception of their own body and is episodic. AIWS episodes vary in length from person to person. Episodes typically last from a few minutes to an hour, and each episode may vary in experience.
Visual distortions
Individuals with AIWS can experience illusions of expansion, reduction, or distortion of their body image, such as microsomatognosia (feeling that their own body or body parts are shrinking), or macrosomatognosia (feeling that their body or body parts are growing taller or larger). These changes in perception are collectively known as metamorphopsias, or Lilliputian hallucinations, which refer to objects appearing either smaller or larger than reality. People with certain neurological diseases may also experience similar visual hallucinations.
Within the category of Lilliputian hallucinations, people may experience either micropsia or macropsia. Micropsia is an abnormal visual condition, usually occurring in the context of visual hallucination, in which the affected person sees objects as being smaller than they are in reality. Macropsia is a condition where the individual sees everything larger than it is. These visual distortions are sometimes classified as "Alice in Wonderland-like syndrome" instead of true Alice in Wonderland syndrome but are often still classified as Alice in Wonderland syndrome by health professionals and researchers since the distinction is not official. Other distortions include teleopsia (objects are perceived further than they actually are) and pelopsia (objects are perceived closer than they actually are).
Depersonalization/derealization
Along with size, mass, and shape distortions of the body, those with Alice in Wonderland syndrome often experience a feeling of disconnection from one's own body, feelings, thoughts, and environment known as depersonalization-derealization disorder. Depersonalization is a term specifically used to express a true detachment from their personal self and identity. It is described as being an observer completely outside of their own actions and behaviors. Derealization is seen as "dreamlike, empty, lifeless, or visually distorted." Drug and alcohol use can exacerbate this symptom into psychosis.
Hearing and time distortions
Individuals experiencing Alice in Wonderland syndrome can also often experience paranoia as a result of disturbances in sound perception. These disturbances can include the amplification of soft sounds or the misinterpretation of common sounds. Other auditory changes include distortion in pitch and tone and hearing indistinguishable and strange voices, noises, or music.
A person affected by AIWS may also lose a sense of time, a problem similar to the lack of spatial perspective brought on by visual distortion. This condition is known as tachysensia. For those with tachysensia, time may seem to pass very slowly, similar to an LSD experience, and the lack of time and space perspective can also lead to a distorted sense of velocity. For example, an object could be moving extremely slowly in reality, but to a person experiencing time distortions, it could seem that the object was sprinting uncontrollably along a moving walkway, leading to severe, overwhelming disorientation. Having symptoms of tachysensia is correlated with various underlying conditions, including substance use, migraine, epilepsy, head trauma, and encephalitis. Regardless of an individual's disease diagnosis, tachysensia is often included as a symptom associated with Alice in Wonderland Syndrome since it is classified as a perceptual distortion. Therefore, a person can be described as having Alice in Wonderland syndrome even if that person is experiencing tachysensia due to an underlying condition.
Causes
Because AlWS is not commonly diagnosed and documented, it is difficult to estimate what the main causes are. The cause of over half of the documented cases of Alice in Wonderland syndrome is unknown. Complete and partial forms of the AIWS exist in a range of other disorders, including epilepsy, intoxicants, infectious states, fevers, and brain lesions. Furthermore, the syndrome is commonly associated with migraines, as well as excessive screen use in dark spaces and the use of psychoactive drugs. It can also be the initial symptom of the Epstein–Barr virus (see mononucleosis), and a relationship between the syndrome and mononucleosis has been suggested. Within this suggested relationship, Epstein–Barr virus appears to be the most common cause in children, while for adults it is more commonly associated with migraines.
Infectious diseases
A 2021 review found that infectious diseases are the most common cause of Alice in Wonderland syndrome, especially in pediatric patients. The infectious agents implicated include Epstein–Barr virus, varicella zoster virus, influenza, Zika virus, coxsackievirus, the protozoan Plasmodium falciparum, and the bacteria Mycoplasma pneumoniae and Streptococcus pyogenes. The association is most commonly seen with the Epstein–Barr virus. However, the pathogenesis is not well understood beyond these reviews. In some instances, Alice in Wonderland syndrome has been reported in association with influenza A infection.
Cerebral hypotheses
Alice in Wonderland syndrome can be caused by abnormal amounts of electrical activity resulting in abnormal blood flow in the parts of the brain that process visual perception and texture. Nuclear medical techniques using technetium, performed on individuals during episodes of Alice in Wonderland syndrome, have demonstrated that Alice in Wonderland Syndrome is associated with reduced cerebral perfusion in various cortical regions (frontal, parietal, temporal and occipital), both in combination and in isolation. One hypothesis is that any condition resulting in a decrease in perfusion of the visual pathways or visual control centers of the brain may be responsible for the syndrome. For example, one study used single photon emission computed tomography to demonstrate reduced cerebral perfusion in the temporal lobe in people with Alice in Wonderland syndrome.
Other theories suggest the syndrome is a result of non-specific cortical dysfunction (e.g. from encephalitis, epilepsy, decreased cerebral blood flow), or reduced blood flow to other areas of the brain. Other theories suggest that distorted body image perceptions stem from within the parietal lobe. This has been demonstrated by the production of body image disturbances through electrical stimulation of the posterior parietal cortex. Other researchers suggest that metamorphopsias, or visual distortions, may be a result of reduced perfusion of the non-dominant posterior parietal lobe during migraine episodes.
Throughout all the neuroimaging studies, several cortical regions (including the temporoparietal junction within the parietal lobe, and the visual pathway, specifically the occipital lobe) are associated with the development of Alice in Wonderland syndrome symptoms.[1]
Migraines
1 in 10 people who experience migraines have symptoms of Alice in Wonderland syndrome. The role of migraines in Alice in Wonderland syndrome is still not understood, but both vascular and electrical theories have been suggested. For example, visual distortions may be a result of transient, localized ischemia in areas of the visual pathway during migraine attacks. In addition, a spreading wave of depolarization of cells (particularly glial cells) in the cerebral cortex during migraine attacks can eventually activate the trigeminal nerve's regulation of the vascular system. The intense cranial pain during migraines is due to the connection of the trigeminal nerve with the thalamus and thalamic projections onto the sensory cortex. Alice in Wonderland syndrome symptoms can precede, accompany, or replace the typical migraine symptoms. Typical migraines (aura, visual derangements, hemicrania headache, nausea, and vomiting) are both a cause and an associated symptom of Alice in Wonderland Syndrome. Alice in Wonderland Syndrome is associated with macrosomatognosia which can mostly be experienced during migraine auras.
Genetic and environmental influence
While no genetic locus or loci associated with Alice in Wonderland syndrome have been identified, observations suggest that a genetic component may exist, but the evidence so far is inconclusive. There is also an established genetic component for migraines, which may be considered a possible cause and influence for hereditary Alice in Wonderland syndrome. Though the syndrome is most frequently described in children and adolescents, observational studies have found that many parents of children experiencing Alice in Wonderland syndrome have also experienced similar symptoms themselves, though often unrecognized. Family history may then be a potential risk factor for Alice in Wonderland syndrome.
One example of an environmental influence on the incidence of Alice in Wonderland syndrome is the use and toxicity of the drug topiramate. An association between tyramine use and Alice in Wonderland syndrome has also been reported, but current evidence is inconclusive. Further research is required to establish the genetic and environmental influences on Alice in Wonderland syndrome.
The neuronal effect of cortical spreading depression (CSD) on TPO-C may demonstrate the link between migraines and Alice in Wonderland Syndrome. As children experience Alice in Wonderland Syndrome more than adults, it is hypothesized that structural differences in the brain between children and adults may play a role in the development of this syndrome.
Diagnosis
Alice in Wonderland syndrome is not part of any major classifications like the ICD-10 and the DSM-5. Since there are no established diagnostic criteria for Alice in Wonderland syndrome, and because Alice in Wonderland syndrome is a disturbance of perception rather than a specific physiological condition, there is likely to be a large degree of variability in the diagnostic process and thus it can be poorly diagnosed. Often, the diagnosis can be presumed when other causes have been ruled out. Additionally, Alice in Wonderland syndrome can be presumed if the patient presents symptoms along with migraines and complains of onset during the day (although it can also occur at night). Ideally, a definite diagnosis requires a thorough physical examination, proper history taking from episodes and occurrences, and a concrete understanding of the signs and symptoms of Alice in Wonderland syndrome for differential diagnosis. A person experiencing Alice in Wonderland syndrome may be reluctant to describe their symptoms out of fear of being labeled with a psychiatric disorder, which can contribute to the difficulty in diagnosing Alice in Wonderland syndrome. In addition, younger individuals may struggle to describe their unusual symptoms, and thus, one recommended approach is to encourage children to draw their visual illusions during episodes. Cases that are suspected should warrant tests and exams such as blood tests, ECG, brain MRI, and other antibody tests for viral antibody detection. Differential diagnosis requires three levels of conceptualization. Symptoms need to be distinguished from other disorders that involve hallucinations and illusions. It is usually easy to rule out psychosis as those with Alice in Wonderland syndrome are typically aware that their hallucinations and distorted perceptions are not 'real'. Once these symptoms are distinguished and identified, the most likely cause needs to be established. Finally, the diagnosed condition needs to be evaluated to see if the condition is responsible for the symptoms that the individual is presenting. Given the wide variety of metamorphopsias and other distortions, it is not uncommon for Alice in Wonderland syndrome to be misdiagnosed or confused with other etiologies.
Anatomical relation
An area of the brain that is important to the development of Alice in Wonderland syndrome is the temporal-parietal-occipital carrefour (TPO-C), the meeting point of the temporooccipital, parietooccipital, and temporoparietal junctions in the brain. The TPO-C region is also crucial as it is the location where somatosensory and visual information are interpreted by the brain to generate any internal or external manifestations. Thus, modifications to these regions of the brain may trigger Alice in Wonderland syndrome and body schema disorders simultaneously.
Depending on which portion of the brain is damaged, the symptoms of Alice in Wonderland syndrome may differ. For example, it has been reported that injury to the anterior portion of the brain is more likely to be correlated to more complex and a wider range of symptoms, whereas damage to the occipital region has mainly been associated with only simple visual disturbances.
Prognosis
The symptoms of Alice in Wonderland syndrome themselves are not physically harmful to the person experiencing them. Since there is no established treatment for Alice in Wonderland syndrome, prognosis varies between patients and is based on whether an underlying cause has been identified. In many cases, the intensity of the episodes and symptoms declines. Since it is predominantly a benign condition, treatment is not always required. Knowledge of the prognosis of Alice in Wonderland syndrome is limited by the disorder's low prevalence. Because of this, symptoms require careful evaluation and observation by healthcare professionals.
Some cases involve recurring symptoms, in which other medical conditions have to be ruled out before diagnosing AIWS. If Alice in Wonderland syndrome is caused by underlying conditions, symptoms typically occur during the course of the underlying disease and can last from a few days to months. In most cases, symptoms may disappear either spontaneously, with the treatment of underlying causes, or after reassurances that symptoms are momentary and harmless. In some cases, individuals experience only a few episodes of symptoms. In other cases, symptoms may repeat over several episodes before resolution. In rare cases, symptoms continue to manifest years after the initial experience, sometimes with the development of new visual disorders or migraines. In these cases, medication can be introduced to counteract some of these distortions and manifestations. However, medications may also have inducing effects.
Treatment
At present (2024), Alice in Wonderland Syndrome has no standardized treatment plan. Tests including electroencephalogram (EEG) and magnetic resonance imaging (MRI) are used to view brain activity to examine possible brain injury or deficits. Since symptoms of Alice in Wonderland syndrome often disappear, either spontaneously on their own, or with the treatment of the underlying disease, most clinical and non-clinical Alice in Wonderland Syndrome cases are considered to be benign. In cases of Alice in Wonderland syndrome caused by underlying chronic disease, however, symptoms tend to reappear during the active phase of the underlying cause (e.g., migraine, epilepsy). If treatment of Alice in Wonderland Syndrome is determined necessary and useful, it should be focused on treating the suspected underlying disease. Treatment of these underlying conditions mostly involves prescription medications such as antiepileptics, migraine prophylaxis, antivirals, or antibiotics. Antipsychotics are rarely used in treating Alice in Wonderland Syndrome symptoms due to their minimal effectiveness. There are also rare cases in which these prescription medications, specifically antipsychotics, may worsen psychosis and psychotic symptoms due to the severity of distortions.
In 2011, a patient was examined for verbal auditory hallucinations (VAHs), and functional MRI (fMRI) was employed to localize cerebral activity during self-reported VAHs. Repetitive transcranial magnetic stimulation (rTMS) was applied to the patient's Brodmann area 40, which is involved in meaning and phonology, at a frequency of 1 Hz at the T3P3 position. After the second week of treatment, the VAHs and sensory distortions no longer affected the patient, who went through a full remission. Follow-up appointments were conducted with no signs of any symptoms. By month 8, the symptoms had returned; a second course of treatment again led to complete remission.
Migraine prophylaxis
Treatment methods revolving around migraine prophylaxis include medications and following a low-tyramine diet. Drugs that may be used to prevent migraines include anticonvulsants, antidepressants, calcium channel blockers, and beta blockers. Other treatments that have been explored for migraines include repetitive transcranial magnetic stimulation (rTMS). However, further research is needed to establish the effectiveness of this treatment regime.
Epidemiology
The lack of established diagnostic criteria or large-scale epidemiological studies, low awareness of the syndrome, and the unstandardized diagnosis criteria and definition for Alice in Wonderland syndrome mean that the exact prevalence of the syndrome is currently unknown. One study on 3,224 adolescents in Japan demonstrated the occurrence of macropsia and micropsia to be 6.5% in boys and 7.3% in girls, suggesting that the symptoms of Alice in Wonderland syndrome may not be particularly rare. This also seems to suggest a difference in the male-to-female ratio of people with Alice in Wonderland syndrome. However, according to other studies, it appears that the male/female ratio is dependent on the age range being observed. Studies showed that younger males (age range of 5 to 14 years) were 2.69 times more likely to experience Alice in Wonderland syndrome than girls of the same age, while there were no significant differences between students of 13 to 15 years of age. Conversely, female students (16- to 18-year-olds) showed a significantly greater prevalence.
Alice in Wonderland syndrome is more frequently seen in children and young adults. The average age of the start of Alice in Wonderland syndrome is six years old, but it is typical for some people to experience the syndrome from childhood up to their late twenties. Because many parents who have Alice in Wonderland syndrome report their children having it as well, the condition is thought possibly to be hereditary. Some parents report not realizing they have experienced Alice in Wonderland syndrome symptoms until after their children have been diagnosed, further indicating that many cases of Alice in Wonderland syndrome likely go unrecognized and under-reported.
Research is still being expanded upon and developed on this syndrome in a multitude of different regions and specialties. Future studies are encouraged to include global collaborative efforts that may help improve understanding of Alice in Wonderland syndrome and its epidemiology.
History
The syndrome is sometimes called Todd's syndrome, in reference to a description of the condition in 1955 by Dr. John Todd (1914–1987), a British consultant psychiatrist at High Royds Hospital at Menston in West Yorkshire ('AIWS had been described by the American neurologist Caro Lippman in 1952, but Todd's report was the most influential'). Todd discovered that several of his patients experienced severe headaches causing them to see and perceive objects as greatly out of proportion. In addition, they had an altered sense of time and touch, as well as distorted perceptions of their own body. Despite having migraine headaches, none of these patients had brain tumors, damaged eyesight, or mental illness that could have accounted for these and similar symptoms. They were all able to think lucidly and could distinguish hallucinations from reality; however, their perceptions were distorted.
Dr. Todd speculated that author Lewis Carroll had used his own migraine experiences as a source of inspiration for his famous 1865 novel Alice's Adventures in Wonderland. Carroll's diary reveals that, in 1856, he consulted William Bowman, an ophthalmologist, about the visual manifestations of the migraines he regularly experienced. In Carroll's diaries, he often wrote of a "bilious headache" that came coupled with severe nausea and vomiting. In 1885, he wrote that he had "experienced, for the second time, that odd optical affection of seeing moving fortifications, followed by a headache". Carroll wrote two books about Alice, the heroine after which the syndrome is named. In the story, Alice experiences several strange feelings that overlap with the characteristics of the syndrome, such as slowing time perception. In chapter two of Alice's Adventures in Wonderland (1865), Alice's body shrinks after drinking from a bottle labeled "DRINK ME", after which she consumes a cake that makes her so large that she almost touches the ceiling. These features of the story describe the macropsia and micropsia that are so characteristic of this condition.
These symptoms have been reported before in scientific literature, including World War I and II soldiers with occipital lesions, so Todd understood that he was not the first person to discover this phenomenon. Additionally, as early as 1933, other researchers such as Coleman and Lippman had compared these symptoms to the story of Alice in Wonderland. Caro Lippman was the first to hypothesize that the bodily changes that Alice encounters mimicked those of Lewis Carroll's migraine symptoms. Others suggest that Carroll may have familiarized himself with these distorted perceptions through his knowledge of hallucinogenic mushrooms. It has been suggested that Carroll would have been aware of mycologist Mordecai Cubitt Cooke's description of the intoxicating effects of the fungus Amanita muscaria (commonly known as the fly agaric or fly amanita), in his books The Seven Sisters of Sleep and A Plain and Easy Account of British Fungi.
Notable cases
In 2018 it was suggested that the Italian artist and writer Giorgio de Chirico may have suffered from the syndrome.
Society and culture
Gulliver's Travels
Alice in Wonderland syndrome's symptom of micropsia has also been related to Jonathan Swift's novel Gulliver's Travels. It has been referred to as "Lilliput sight" and "Lilliputian hallucination", a term coined by British physician Raoul Leroy in 1909.
Alice in Wonderland
Alice in Wonderland syndrome was named after Lewis Carroll's 19th-century novel Alice's Adventures in Wonderland. In the story, Alice, the titular character, experiences numerous situations similar to those of micropsia and macropsia. The thorough descriptions of metamorphosis clearly described in the novel were the first of their kind to depict the bodily distortions associated with the condition. There is some speculation that Carroll may have written the story using his own direct experience with episodes of micropsia resulting from the numerous migraines he was known to experience. It has also been suggested that Carroll may have had temporal lobe epilepsy.
House
The condition is diagnosed in the season 8 episode "Risky Business".
Secret Garden
In episode ten of the Korean drama Secret Garden, the leading man, Kim Joo Won, suspects that he is suffering from Alice in Wonderland syndrome.
Doctors
In April 2020, a case of Alice in Wonderland syndrome was covered in an episode of the BBC daytime soap opera Doctors, when patient Hazel Gilmore (Alex Jarrett) experienced it.
See also
Charles Bonnet syndrome
Cortical homunculus
Red Queen hypothesis
References
External links
Neurological disorders
Psychopathological syndromes
Epstein–Barr virus–associated diseases
Symptoms and signs of mental disorders
Hallucinations
Alice's Adventures in Wonderland
Kutta–Joukowski theorem

The Kutta–Joukowski theorem is a fundamental theorem in aerodynamics used for the calculation of lift of an airfoil (and any two-dimensional body including circular cylinders) translating in a uniform fluid at a constant speed so large that the flow seen in the body-fixed frame is steady and unseparated. The theorem relates the lift generated by an airfoil to the speed of the airfoil through the fluid, the density of the fluid and the circulation around the airfoil. The circulation is defined as the line integral around a closed loop enclosing the airfoil of the component of the velocity of the fluid tangent to the loop. It is named after Martin Kutta and Nikolai Zhukovsky (or Joukowski) who first developed its key ideas in the early 20th century. The Kutta–Joukowski theorem is an inviscid theory, but it is a good approximation for real viscous flow in typical aerodynamic applications.
Kutta–Joukowski theorem relates lift to circulation much like the Magnus effect relates side force (called Magnus force) to rotation. However, the circulation here is not induced by rotation of the airfoil. The fluid flow in the presence of the airfoil can be considered to be the superposition of a translational flow and a rotating flow. This rotating flow is induced by the effects of camber, angle of attack and the sharp trailing edge of the airfoil. It should not be confused with a vortex like a tornado encircling the airfoil. At a large distance from the airfoil, the rotating flow may be regarded as induced by a line vortex (with the rotating line perpendicular to the two-dimensional plane). In the derivation of the Kutta–Joukowski theorem the airfoil is usually mapped onto a circular cylinder. In many textbooks, the theorem is proved for a circular cylinder and the Joukowski airfoil, but it holds true for general airfoils.
Lift force formula
The theorem applies to two-dimensional flow around a fixed airfoil (or any shape of infinite span). The lift per unit span $L'$ of the airfoil is given by

$$L' = \rho_\infty V_\infty \Gamma,$$

where $\rho_\infty$ and $V_\infty$ are the fluid density and the fluid velocity far upstream of the airfoil, and $\Gamma$ is the circulation defined as the line integral

$$\Gamma = \oint_C V \cos\theta \, ds$$

around a closed contour $C$ enclosing the airfoil and followed in the negative (clockwise) direction. As explained below, this path must be in a region of potential flow and not in the boundary layer of the cylinder. The integrand $V \cos\theta$ is the component of the local fluid velocity in the direction tangent to the curve $C$, and $ds$ is an infinitesimal length on the curve $C$. This equation is a form of the Kutta–Joukowski theorem.
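A minimal numerical sketch of this formula is given below. The circulation is supplied by the thin-airfoil (flat-plate) estimate Γ ≈ π c V α, which is an assumption used here only to produce a plausible value and is not part of the theorem itself:

    # Lift per unit span from the Kutta-Joukowski theorem, L' = rho * V * Gamma.
    # Gamma is estimated from thin-airfoil theory for a flat plate (assumption).
    import math

    rho = 1.225                 # air density, kg/m^3 (sea level, assumed)
    V = 50.0                    # free-stream speed, m/s (assumed)
    chord = 1.0                 # chord, m (assumed)
    alpha = math.radians(5.0)   # angle of attack

    gamma = math.pi * chord * V * alpha      # thin-airfoil circulation estimate, m^2/s
    lift_per_span = rho * V * gamma          # Kutta-Joukowski lift per unit span, N/m
    cl = lift_per_span / (0.5 * rho * V**2 * chord)

    print(f"Gamma = {gamma:.2f} m^2/s, L' = {lift_per_span:.0f} N/m")
    print(f"Cl = {cl:.3f}  (thin-airfoil prediction 2*pi*alpha = {2 * math.pi * alpha:.3f})")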
Kuethe and Schetzer state the Kutta–Joukowski theorem as follows:
The force per unit length acting on a right cylinder of any cross section whatsoever is equal to $\rho_\infty V_\infty \Gamma$ and is perpendicular to the direction of $V_\infty$.
Circulation and the Kutta condition
A lift-producing airfoil either has camber or operates at a positive angle of attack, the angle between the chord line and the fluid flow far upstream of the airfoil. Moreover, the airfoil must have a sharp trailing edge.
Any real fluid is viscous, which implies that the fluid velocity vanishes on the airfoil. Prandtl showed that for large Reynolds number, defined as $\mathrm{Re} = \rho_\infty V_\infty c_A / \mu_\infty$ (where $c_A$ is the airfoil chord and $\mu_\infty$ the fluid viscosity), and small angle of attack, the flow around a thin airfoil is composed of a narrow viscous region called the boundary layer near the body and an inviscid flow region outside. In applying the Kutta–Joukowski theorem, the loop must be chosen outside this boundary layer. (For example, the circulation calculated using the loop corresponding to the surface of the airfoil would be zero for a viscous fluid.)
The sharp trailing edge requirement corresponds physically to a flow in which the fluid moving along the lower and upper surfaces of the airfoil meet smoothly, with no fluid moving around the trailing edge of the airfoil. This is known as the Kutta condition.
Kutta and Joukowski showed that for computing the pressure and lift of a thin airfoil for flow at large Reynolds number and small angle of attack, the flow can be assumed inviscid in the entire region outside the airfoil provided the Kutta condition is imposed. This is known as the potential flow theory and works remarkably well in practice.
Derivation
Two derivations are presented below. The first is a heuristic argument, based on physical insight. The second is a formal and technical one, requiring basic vector analysis and complex analysis.
Heuristic argument
For a heuristic argument, consider a thin airfoil of chord $c$ and infinite span, moving through air of density $\rho$. Let the airfoil be inclined to the oncoming flow to produce an air speed $V + v$ on one side of the airfoil, and an air speed $V - v$ on the other side. The circulation is then

$$\Gamma = (V + v)c - (V - v)c = 2vc.$$

The difference in pressure $\Delta P$ between the two sides of the airfoil can be found by applying Bernoulli's equation:

$$\frac{\rho}{2}(V + v)^2 + P_\text{upper} = \frac{\rho}{2}(V - v)^2 + P_\text{lower}, \qquad \Delta P = P_\text{lower} - P_\text{upper} = 2 \rho V v,$$

so the downward force on the air, per unit span, is

$$\Delta P \cdot c = 2 \rho V v c = \rho V \Gamma,$$

and the upward force (lift) on the airfoil is

$$L' = \rho V \Gamma.$$
A differential version of this theorem applies on each element of the plate and is the basis of thin-airfoil theory.
Formal derivation
Lift forces for more complex situations
The lift predicted by the Kutta-Joukowski theorem within the framework of inviscid potential flow theory is quite accurate, even for real viscous flow, provided the flow is steady and unseparated.
In deriving the Kutta–Joukowski theorem, the assumption of irrotational flow was used. When there are free vortices outside of the body, as may be the case for a large number of unsteady flows, the flow is rotational. When the flow is rotational, more complicated theories should be used to derive the lift forces. Below are several important examples.
Impulsively started flow at small angle of attack
For an impulsively started flow, such as that obtained by suddenly accelerating an airfoil or setting an angle of attack, there is a vortex sheet continuously shed at the trailing edge and the lift force is unsteady or time-dependent. For small-angle-of-attack starting flow, the vortex sheet follows a planar path, and the curve of the lift coefficient as a function of time is given by the Wagner function. In this case the initial lift is one half of the final lift given by the Kutta–Joukowski formula. The lift attains 90% of its steady state value when the wing has traveled a distance of about seven chord lengths.
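For illustration, a commonly used two-exponential approximation of the Wagner function attributed to R. T. Jones can be evaluated as below; the fit coefficients and the use of distance travelled in semichords as the argument are assumptions of this sketch rather than statements from the text, but they reproduce the one-half initial lift and the roughly 90% recovery after about seven chord lengths:

    # Approximate Wagner function (Jones's two-exponential fit, an assumption here).
    # s is the distance travelled in semichords, s = 2*V*t/c.
    import math

    def wagner_approx(s):
        return 1.0 - 0.165 * math.exp(-0.0455 * s) - 0.335 * math.exp(-0.3 * s)

    print(wagner_approx(0.0))    # ~0.5  : initial lift is half the steady-state value
    print(wagner_approx(14.0))   # ~0.91 : about 90% after roughly seven chord lengths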
Impulsively started flow at large angle of attack
When the angle of attack is high enough, the trailing edge vortex sheet is initially in a spiral shape and the lift is singular (infinitely large) at the initial time. The lift drops for a very short time period before the usually assumed monotonically increasing lift curve is reached.
Starting flow at large angle of attack for wings with sharp leading edges
If, as for a flat plate, the leading edge is also sharp, then vortices also shed at the leading edge and the role of leading edge vortices is two-fold: 1) they are lift increasing when they are still close to the leading edge, so that they elevate the Wagner lift curve, and 2) they are detrimental to lift when they are convected to the trailing edge, inducing a new trailing edge vortex spiral moving in the lift decreasing direction. For this type of flow a vortex force line (VFL) map can be used to understand the effect of the different vortices in a variety of situations (including more situations than starting flow) and may be used to improve vortex control to enhance or reduce the lift. The vortex force line map is a two dimensional map on which vortex force lines are displayed. For a vortex at any point in the flow, its lift contribution is proportional to its speed, its circulation and the cosine of the angle between the streamline and the vortex force line. Hence the vortex force line map clearly shows whether a given vortex is lift producing or lift detrimental.
Lagally theorem
When a (mass) source is fixed outside the body, a force correction due to this source can be expressed as the product of the strength of the outside source and the induced velocity at this source by all the causes except this source. This is known as the Lagally theorem. For two-dimensional inviscid flow, the classical Kutta–Joukowski theorem predicts a zero drag. When, however, there is a vortex outside the body, there is a vortex-induced drag, in a form similar to the induced lift.
Generalized Lagally theorem
For free vortices and other bodies outside one body without bound vorticity and without vortex production, a generalized Lagally theorem holds, with which the forces are expressed as the products of strength of inner singularities (image vortices, sources and doublets inside each body) and the induced velocity at these singularities by all causes except those inside this body. The contribution due to each inner singularity sums up to give the total force. The motion of outside singularities also contributes to forces, and the force component due to this contribution is proportional to the speed of the singularity.
Individual force of each body for multiple-body rotational flow
When in addition to multiple free vortices and multiple bodies, there are bound vortices and vortex production on the body surface, the generalized Lagally theorem still holds, but a force due to vortex production exists. This vortex production force is proportional to the vortex production rate and the distance between the vortex pair in production. With this approach, an explicit and algebraic force formula, taking into account of all causes (inner singularities, outside vortices and bodies, motion of all singularities and bodies, and vortex production) holds individually for each body with the role of other bodies represented by additional singularities. Hence a force decomposition according to bodies is possible.
General three-dimensional viscous flow
For general three-dimensional, viscous and unsteady flow, force formulas are expressed in integral forms. The volume integration of certain flow quantities, such as vorticity moments, is related to forces. Various forms of the integral approach are now available for unbounded domains and for artificially truncated domains. The Kutta–Joukowski theorem can be recovered from these approaches when applied to a two-dimensional airfoil and when the flow is steady and unseparated.
Lifting line theory for wings, wing-tip vortices and induced drag
A wing has a finite span, and the circulation at any section of the wing varies with the spanwise direction. This variation is compensated by the release of streamwise vortices, called trailing vortices, due to conservation of vorticity or Kelvin's theorem of circulation conservation. These streamwise vortices merge into two counter-rotating strong spirals separated by a distance close to the wingspan, and their cores may be visible if the relative humidity is high. Treating the trailing vortices as a series of semi-infinite straight line vortices leads to the well-known lifting line theory. By this theory, the wing has a lift force smaller than that predicted by a purely two-dimensional theory using the Kutta–Joukowski theorem. This is due to the upstream effects of the trailing vortices' added downwash on the angle of attack of the wing. This reduces the wing's effective angle of attack, decreasing the amount of lift produced at a given angle of attack and requiring a higher angle of attack to recover this lost lift. At this new higher angle of attack, drag has also increased. Induced drag effectively reduces the slope of the lift curve of a 2-D airfoil and increases the angle of attack at which the maximum lift coefficient $C_{L_\max}$ occurs (while also decreasing the value of $C_{L_\max}$).
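The standard lifting-line results for an elliptically loaded wing make this concrete. The sketch below (aspect ratio, angle of attack and span efficiency are assumed values) shows the reduced lift-curve slope, the induced angle of attack and the induced drag coefficient:

    # Classical lifting-line estimates for an elliptically loaded finite wing.
    # All numerical inputs are assumed, illustrative values.
    import math

    a0 = 2.0 * math.pi     # 2-D lift-curve slope per radian (thin-airfoil value)
    AR = 8.0               # wing aspect ratio
    e = 1.0                # span efficiency factor (1 for elliptic loading)
    alpha = math.radians(5.0)

    a3d = a0 / (1.0 + a0 / (math.pi * e * AR))   # reduced finite-wing lift slope
    CL = a3d * alpha
    alpha_i = CL / (math.pi * e * AR)            # induced (downwash) angle
    CDi = CL**2 / (math.pi * e * AR)             # induced drag coefficient

    print(f"lift slope: {a3d:.2f}/rad (vs {a0:.2f}/rad for infinite span)")
    print(f"CL = {CL:.3f}, induced angle = {math.degrees(alpha_i):.2f} deg, CDi = {CDi:.4f}")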
See also
Horseshoe vortex
References
Bibliography
Milne-Thomson, L.M. (1973) Theoretical Aerodynamics, Dover Publications Inc, New York
Aircraft aerodynamics
Eponymous theorems of physics
Fluid dynamics
Physics theorems
Aircraft wing design
Hamilton's principle

In physics, Hamilton's principle is William Rowan Hamilton's formulation of the principle of stationary action. It states that the dynamics of a physical system are determined by a variational problem for a functional based on a single function, the Lagrangian, which may contain all physical information concerning the system and the forces acting on it. The variational problem is equivalent to and allows for the derivation of the differential equations of motion of the physical system. Although formulated originally for classical mechanics, Hamilton's principle also applies to classical fields such as the electromagnetic and gravitational fields, and plays an important role in quantum mechanics, quantum field theory and criticality theories.
Mathematical formulation
Hamilton's principle states that the true evolution $\mathbf{q}(t)$ of a system described by $N$ generalized coordinates $\mathbf{q} = (q_1, q_2, \ldots, q_N)$ between two specified states $\mathbf{q}_1 = \mathbf{q}(t_1)$ and $\mathbf{q}_2 = \mathbf{q}(t_2)$ at two specified times $t_1$ and $t_2$ is a stationary point (a point where the variation is zero) of the action functional

$$\mathcal{S}[\mathbf{q}] = \int_{t_1}^{t_2} L(\mathbf{q}(t), \dot{\mathbf{q}}(t), t)\, dt,$$

where $L(\mathbf{q}, \dot{\mathbf{q}}, t)$ is the Lagrangian function for the system. In other words, any first-order perturbation of the true evolution results in (at most) second-order changes in $\mathcal{S}$. The action $\mathcal{S}$ is a functional, i.e., something that takes as its input a function and returns a single number, a scalar. In terms of functional analysis, Hamilton's principle states that the true evolution of a physical system is a solution of the functional equation

$$\frac{\delta \mathcal{S}}{\delta \mathbf{q}(t)} = 0.$$
That is, the system takes a path in configuration space for which the action is stationary, with fixed boundary conditions at the beginning and the end of the path.
Euler–Lagrange equations derived from the action integral
Requiring that the true trajectory $\mathbf{q}(t)$ be a stationary point of the action functional $\mathcal{S}$ is equivalent to a set of differential equations for $\mathbf{q}(t)$ (the Euler–Lagrange equations), which may be derived as follows.

Let $\mathbf{q}(t)$ represent the true evolution of the system between two specified states $\mathbf{q}_1 = \mathbf{q}(t_1)$ and $\mathbf{q}_2 = \mathbf{q}(t_2)$ at two specified times $t_1$ and $t_2$, and let $\boldsymbol{\varepsilon}(t)$ be a small perturbation that is zero at the endpoints of the trajectory:

$$\boldsymbol{\varepsilon}(t_1) = \boldsymbol{\varepsilon}(t_2) = 0.$$

To first order in the perturbation $\boldsymbol{\varepsilon}(t)$, the change in the action functional would be

$$\delta \mathcal{S} = \int_{t_1}^{t_2} \left[ L(\mathbf{q} + \boldsymbol{\varepsilon}, \dot{\mathbf{q}} + \dot{\boldsymbol{\varepsilon}}, t) - L(\mathbf{q}, \dot{\mathbf{q}}, t) \right] dt = \int_{t_1}^{t_2} \left( \boldsymbol{\varepsilon} \cdot \frac{\partial L}{\partial \mathbf{q}} + \dot{\boldsymbol{\varepsilon}} \cdot \frac{\partial L}{\partial \dot{\mathbf{q}}} \right) dt,$$

where we have expanded the Lagrangian L to first order in the perturbation $\boldsymbol{\varepsilon}(t)$.

Applying integration by parts to the last term results in

$$\delta \mathcal{S} = \left[ \boldsymbol{\varepsilon} \cdot \frac{\partial L}{\partial \dot{\mathbf{q}}} \right]_{t_1}^{t_2} + \int_{t_1}^{t_2} \left( \boldsymbol{\varepsilon} \cdot \frac{\partial L}{\partial \mathbf{q}} - \boldsymbol{\varepsilon} \cdot \frac{d}{dt} \frac{\partial L}{\partial \dot{\mathbf{q}}} \right) dt.$$

The boundary conditions $\boldsymbol{\varepsilon}(t_1) = \boldsymbol{\varepsilon}(t_2) = 0$ cause the first term to vanish:

$$\delta \mathcal{S} = \int_{t_1}^{t_2} \boldsymbol{\varepsilon} \cdot \left( \frac{\partial L}{\partial \mathbf{q}} - \frac{d}{dt} \frac{\partial L}{\partial \dot{\mathbf{q}}} \right) dt.$$

Hamilton's principle requires that this first-order change $\delta \mathcal{S}$ is zero for all possible perturbations $\boldsymbol{\varepsilon}(t)$, i.e., the true path is a stationary point of the action functional (either a minimum, maximum or saddle point). This requirement can be satisfied if and only if

$$\frac{\partial L}{\partial \mathbf{q}} - \frac{d}{dt} \frac{\partial L}{\partial \dot{\mathbf{q}}} = 0.$$

These equations are called the Euler–Lagrange equations for the variational problem.
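As an illustration of turning a Lagrangian into its Euler–Lagrange equation mechanically, the sketch below applies SymPy's euler_equations helper to a one-dimensional harmonic oscillator; the choice of system and the symbol names are assumptions of this example, not part of the text:

    # Derive the Euler-Lagrange equation for L = (1/2) m qdot^2 - (1/2) k q^2.
    import sympy as sp
    from sympy.calculus.euler import euler_equations

    t, m, k = sp.symbols('t m k', positive=True)
    q = sp.Function('q')

    L = sp.Rational(1, 2) * m * sp.diff(q(t), t)**2 - sp.Rational(1, 2) * k * q(t)**2
    print(euler_equations(L, [q(t)], [t]))
    # The result is equivalent to m*q''(t) + k*q(t) = 0.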
Canonical momenta and constants of motion
The conjugate momentum $p_k$ for a generalized coordinate $q_k$ is defined by the equation

$$p_k = \frac{\partial L}{\partial \dot{q}_k}.$$

An important special case of the Euler–Lagrange equation occurs when L does not contain a generalized coordinate $q_k$ explicitly,

$$\frac{\partial L}{\partial q_k} = 0 \quad\Rightarrow\quad \frac{d}{dt} \frac{\partial L}{\partial \dot{q}_k} = \frac{dp_k}{dt} = 0,$$

that is, the conjugate momentum is a constant of the motion.

In such cases, the coordinate $q_k$ is called a cyclic coordinate. For example, if we use polar coordinates $t$, $r$, $\phi$ to describe the planar motion of a particle, and if L does not depend on $\phi$, the conjugate momentum is the conserved angular momentum.
Example: Free particle in polar coordinates
Trivial examples help to appreciate the use of the action principle via the Euler–Lagrange equations. A free particle (mass m and velocity v) in Euclidean space moves in a straight line. Using the Euler–Lagrange equations, this can be shown in polar coordinates as follows. In the absence of a potential, the Lagrangian is simply equal to the kinetic energy

$$L = \frac{1}{2} m v^2 = \frac{1}{2} m \left( \dot{x}^2 + \dot{y}^2 \right)$$

in orthonormal (x,y) coordinates, where the dot represents differentiation with respect to the curve parameter (usually the time, t). Therefore, upon application of the Euler–Lagrange equations,

$$\frac{d}{dt} \frac{\partial L}{\partial \dot{x}} - \frac{\partial L}{\partial x} = 0 \quad\Rightarrow\quad m \ddot{x} = 0.$$

And likewise for y. Thus the Euler–Lagrange formulation can be used to derive Newton's laws.
In polar coordinates $(r, \phi)$ the kinetic energy and hence the Lagrangian becomes

$$L = \frac{1}{2} m \left( \dot{r}^2 + r^2 \dot{\phi}^2 \right).$$

The radial $r$ and $\phi$ components of the Euler–Lagrange equations become, respectively

$$\frac{d}{dt} \left( m \dot{r} \right) - m r \dot{\phi}^2 = 0, \qquad \frac{d}{dt} \left( m r^2 \dot{\phi} \right) = 0,$$

remembering that r is also dependent on time and the product rule is needed to compute the total time derivative $\frac{d}{dt}\left( m r^2 \dot{\phi} \right)$.
The solution of these two equations is given by

$$r = \sqrt{(a t + b)^2 + c^2}, \qquad \phi = \tan^{-1} \left( \frac{a t + b}{c} \right) + d,$$

for a set of constants a, b, c, d determined by initial conditions.

Thus, indeed, the solution is a straight line given in polar coordinates: a is the velocity, c is the distance of the closest approach to the origin, and d is the angle of motion.
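The polar-coordinate equations quoted above can also be checked symbolically. The sketch below (an illustrative aside using SymPy, not part of the original text) derives both Euler–Lagrange equations from the free-particle Lagrangian; the second is equivalent to the statement that m r² φ̇ stays constant:

    # Euler-Lagrange equations for a free particle in polar coordinates.
    import sympy as sp
    from sympy.calculus.euler import euler_equations

    t, m = sp.symbols('t m', positive=True)
    r = sp.Function('r')
    phi = sp.Function('phi')

    L = sp.Rational(1, 2) * m * (sp.diff(r(t), t)**2 + r(t)**2 * sp.diff(phi(t), t)**2)

    for eq in euler_equations(L, [r(t), phi(t)], [t]):
        print(sp.simplify(eq))
    # The two printed equations are equivalent to  m*r'' = m*r*phi'^2  and
    # d/dt(m*r^2*phi') = 0, i.e. the radial equation and conservation of angular momentum.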
Applied to deformable bodies
Hamilton's principle is an important variational principle in elastodynamics. As opposed to a system composed of rigid bodies, deformable bodies have an infinite number of degrees of freedom and occupy continuous regions of space; consequently, the state of the system is described by using continuous functions of space and time. The extended Hamilton Principle for such bodies is given by

$$\int_{t_1}^{t_2} \left[ \delta W_e + \delta T - \delta U \right] dt = 0,$$

where $T$ is the kinetic energy, $U$ is the elastic energy, $W_e$ is the work done by external loads on the body, and $t_1$, $t_2$ the initial and final times. If the system is conservative, the work done by external forces may be derived from a scalar potential $V$. In this case,

$$\int_{t_1}^{t_2} \delta \left[ T - (U + V) \right] dt = 0.$$

This is called Hamilton's principle and it is invariant under coordinate transformations.
Comparison with Maupertuis' principle
Hamilton's principle and Maupertuis' principle are occasionally confused and both have been called the principle of least action. They differ in three important ways:
their definition of the action... Maupertuis' principle uses an integral over the generalized coordinates known as the abbreviated action or reduced action, $\mathcal{S}_0 = \int \mathbf{p} \cdot d\mathbf{q}$, where p = (p1, p2, ..., pN) are the conjugate momenta defined above. By contrast, Hamilton's principle uses $\mathcal{S}$, the integral of the Lagrangian over time.
the solution that they determine... Hamilton's principle determines the trajectory q(t) as a function of time, whereas Maupertuis' principle determines only the shape of the trajectory in the generalized coordinates. For example, Maupertuis' principle determines the shape of the ellipse on which a particle moves under the influence of an inverse-square central force such as gravity, but does not describe per se how the particle moves along that trajectory. (However, this time parameterization may be determined from the trajectory itself in subsequent calculations using the conservation of energy). By contrast, Hamilton's principle directly specifies the motion along the ellipse as a function of time.
...and the constraints on the variation. Maupertuis' principle requires that the two endpoint states q1 and q2 be given and that energy be conserved along every trajectory (same energy for each trajectory). This forces the endpoint times to be varied as well. By contrast, Hamilton's principle does not require the conservation of energy, but does require that the endpoint times t1 and t2 be specified as well as the endpoint states q1 and q2.
Action principle for fields
Classical field theory
The action principle can be extended to obtain the equations of motion for fields, such as the electromagnetic field or gravity.
The Einstein equation utilizes the Einstein–Hilbert action as constrained by a variational principle.
The path of a body in a gravitational field (i.e. free fall in space time, a so-called geodesic) can be found using the action principle.
Quantum mechanics and quantum field theory
In quantum mechanics, the system does not follow a single path whose action is stationary, but the behavior of the system depends on all imaginable paths and the value of their action. The action corresponding to the various paths is used to calculate the path integral, which gives the probability amplitudes of the various outcomes.
Although equivalent in classical mechanics with Newton's laws, the action principle is better suited for generalizations and plays an important role in modern physics. Indeed, this principle is one of the great generalizations in physical science. In particular, it is fully appreciated and best understood within quantum mechanics. Richard Feynman's path integral formulation of quantum mechanics is based on a stationary-action principle, using path integrals. Maxwell's equations can be derived as conditions of stationary action.
See also
Analytical mechanics
Configuration space
Hamilton–Jacobi equation
Phase space
Geodesics as Hamiltonian flows
References
W.R. Hamilton, "On a General Method in Dynamics.", Philosophical Transactions of the Royal Society Part II (1834) pp. 247–308; Part I (1835) pp. 95–144. (From the collection Sir William Rowan Hamilton (1805–1865): Mathematical Papers edited by David R. Wilkins, School of Mathematics, Trinity College, Dublin 2, Ireland. (2000); also reviewed as On a General Method in Dynamics)
Goldstein H. (1980) Classical Mechanics, 2nd ed., Addison Wesley, pp. 35–69.
Landau LD and Lifshitz EM (1976) Mechanics, 3rd. ed., Pergamon Press. (hardcover) and (softcover), pp. 2–4.
Arnold VI. (1989) Mathematical Methods of Classical Mechanics, 2nd ed., Springer Verlag, pp. 59–61.
Cassel, Kevin W.: Variational Methods with Applications in Science and Engineering, Cambridge University Press, 2013.
Bedford A.: Hamilton's Principle in Continuum Mechanics. Pitman, 1985. Springer 2001, ISBN 978-3-030-90305-3 ISBN 978-3-030-90306-0 (eBook), https://doi.org/10.1007/978-3-030-90306-0
Lagrangian mechanics
Calculus of variations
Principles
William Rowan Hamilton
Gravity

In physics, gravity is a fundamental interaction primarily observed as mutual attraction between all things that have mass. Gravity is, by far, the weakest of the four fundamental interactions, approximately 10^38 times weaker than the strong interaction, 10^36 times weaker than the electromagnetic force and 10^29 times weaker than the weak interaction. As a result, it has no significant influence at the level of subatomic particles. However, gravity is the most significant interaction between objects at the macroscopic scale, and it determines the motion of planets, stars, galaxies, and even light.
On Earth, gravity gives weight to physical objects, and the Moon's gravity is responsible for sublunar tides in the oceans. The corresponding antipodal tide is caused by the inertia of the Earth and Moon orbiting one another. Gravity also has many important biological functions, helping to guide the growth of plants through the process of gravitropism and influencing the circulation of fluids in multicellular organisms.
The gravitational attraction between the original gaseous matter in the universe caused it to coalesce and form stars which eventually condensed into galaxies, so gravity is responsible for many of the large-scale structures in the universe. Gravity has an infinite range, although its effects become weaker as objects get farther away.
Gravity is most accurately described by the general theory of relativity, proposed by Albert Einstein in 1915, which describes gravity not as a force, but as the curvature of spacetime, caused by the uneven distribution of mass, and causing masses to move along geodesic lines. The most extreme example of this curvature of spacetime is a black hole, from which nothing—not even light—can escape once past the black hole's event horizon. However, for most applications, gravity is well approximated by Newton's law of universal gravitation, which describes gravity as a force causing any two bodies to be attracted toward each other, with magnitude proportional to the product of their masses and inversely proportional to the square of the distance between them.
Current models of particle physics imply that the earliest instance of gravity in the universe, possibly in the form of quantum gravity, supergravity or a gravitational singularity, along with ordinary space and time, developed during the Planck epoch (up to 10^−43 seconds after the birth of the universe), possibly from a primeval state, such as a false vacuum, quantum vacuum or virtual particle, in a currently unknown manner. Scientists are currently working to develop a theory of gravity consistent with quantum mechanics, a quantum gravity theory, which would allow gravity to be united in a common mathematical framework (a theory of everything) with the other three fundamental interactions of physics.
Definitions
Gravitation, also known as gravitational attraction, is the mutual attraction between all masses in the universe. Gravity is the gravitational attraction at the surface of a planet or other celestial body; gravity may also include, in addition to gravitation, the centrifugal force resulting from the planet's rotation.
History
Ancient world
The nature and mechanism of gravity were explored by a wide range of ancient scholars. In Greece, Aristotle believed that objects fell towards the Earth because the Earth was the center of the Universe and attracted all of the mass in the Universe towards it. He also thought that the speed of a falling object should increase with its weight, a conclusion that was later shown to be false. While Aristotle's view was widely accepted throughout Ancient Greece, there were other thinkers such as Plutarch who correctly predicted that the attraction of gravity was not unique to the Earth.
Although he did not understand gravity as a force, the ancient Greek philosopher Archimedes discovered the center of gravity of a triangle. He postulated that if two equal weights did not have the same center of gravity, the center of gravity of the two weights together would be in the middle of the line that joins their centers of gravity. Two centuries later, the Roman engineer and architect Vitruvius contended in his De architectura that gravity is not dependent on a substance's weight but rather on its "nature".
In the 6th century CE, the Byzantine Alexandrian scholar John Philoponus proposed the theory of impetus, which modifies Aristotle's theory that "continuation of motion depends on continued action of a force" by incorporating a causative force that diminishes over time.
In the seventh century CE, the Indian mathematician and astronomer Brahmagupta proposed the idea that gravity is an attractive force that draws objects to the Earth and used the term gurutvākarṣaṇ to describe it.
In the ancient Middle East, gravity was a topic of fierce debate. The Persian intellectual Al-Biruni believed that the force of gravity was not unique to the Earth, and he correctly assumed that other heavenly bodies should exert a gravitational attraction as well. In contrast, Al-Khazini held the same position as Aristotle that all matter in the Universe is attracted to the center of the Earth.
Scientific revolution
In the mid-16th century, various European scientists experimentally disproved the Aristotelian notion that heavier objects fall at a faster rate. In particular, the Spanish Dominican priest Domingo de Soto wrote in 1551 that bodies in free fall uniformly accelerate. De Soto may have been influenced by earlier experiments conducted by other Dominican priests in Italy, including those by Benedetto Varchi, Francesco Beato, Luca Ghini, and Giovan Bellaso which contradicted Aristotle's teachings on the fall of bodies.
The mid-16th century Italian physicist Giambattista Benedetti published papers claiming that, due to specific gravity, objects made of the same material but with different masses would fall at the same speed. With the 1586 Delft tower experiment, the Flemish physicist Simon Stevin observed that two cannonballs of differing sizes and weights fell at the same rate when dropped from a tower. In the late 16th century, Galileo Galilei's careful measurements of balls rolling down inclines allowed him to firmly establish that gravitational acceleration is the same for all objects. Galileo postulated that air resistance is the reason that objects with a low density and high surface area fall more slowly in an atmosphere.
In 1604, Galileo correctly hypothesized that the distance of a falling object is proportional to the square of the time elapsed. This was later confirmed by Italian scientists Jesuits Grimaldi and Riccioli between 1640 and 1650. They also calculated the magnitude of the Earth's gravity by measuring the oscillations of a pendulum.
Newton's theory of gravitation
In 1665, Robert Hooke published his Micrographia, in which he hypothesised that the Moon must have its own gravity. In 1666, he added two further principles: that all bodies move in straight lines until deflected by some force and that the attractive force is stronger for closer bodies. He set out these ideas in a communication to the Royal Society in 1666.
Hooke's 1674 Gresham lecture, An Attempt to prove the Annual Motion of the Earth, explained that gravitation applied to "all celestial bodies".
In 1684, Newton sent a manuscript to Edmond Halley titled De motu corporum in gyrum ('On the motion of bodies in an orbit'), which provided a physical justification for Kepler's laws of planetary motion. Halley was impressed by the manuscript and urged Newton to expand on it, and a few years later Newton published a groundbreaking book called Philosophiæ Naturalis Principia Mathematica (Mathematical Principles of Natural Philosophy). In this book, Newton described gravitation as a universal force, and claimed that "the forces which keep the planets in their orbs must [be] reciprocally as the squares of their distances from the centers about which they revolve." This statement was later condensed into the following inverse-square law:
$F = G \frac{m_1 m_2}{r^2}$
where $F$ is the force, $m_1$ and $m_2$ are the masses of the objects interacting, $r$ is the distance between the centers of the masses and $G$ is the gravitational constant.
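As a rough numerical illustration of this inverse-square law, the sketch below (Python, with approximate reference values for the Earth and Moon supplied for illustration, not taken from this article) evaluates the attractive force between the two bodies:

    # Newton's law of universal gravitation: F = G * m1 * m2 / r**2
    G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
    m_earth = 5.972e24   # mass of the Earth, kg (approximate)
    m_moon = 7.342e22    # mass of the Moon, kg (approximate)
    r = 3.844e8          # mean Earth-Moon distance, m (approximate)

    F = G * m_earth * m_moon / r**2
    print(F)             # roughly 2e20 newtons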
Newton's Principia was well received by the scientific community, and his law of gravitation quickly spread across the European world. More than a century later, in 1821, his theory of gravitation rose to even greater prominence when it was used to predict the existence of Neptune. In that year, the French astronomer Alexis Bouvard used this theory to create a table modeling the orbit of Uranus, which was shown to differ significantly from the planet's actual trajectory. In order to explain this discrepancy, many astronomers speculated that there might be a large object beyond the orbit of Uranus which was disrupting its orbit. In 1846, the astronomers John Couch Adams and Urbain Le Verrier independently used Newton's law to predict Neptune's location in the night sky, and the planet was discovered there within a day.
General relativity
Eventually, astronomers noticed an anomaly in the orbit of the planet Mercury which could not be explained by Newton's theory: the perihelion of the orbit was precessing by about 42.98 arcseconds per century more than Newtonian theory predicted. The most obvious explanation for this discrepancy was an as-yet-undiscovered celestial body, such as a planet orbiting the Sun even closer than Mercury, but all efforts to find such a body turned out to be fruitless. In 1915, Albert Einstein developed a theory of general relativity which was able to accurately model Mercury's orbit.
In general relativity, the effects of gravitation are ascribed to spacetime curvature instead of a force. Einstein began to toy with this idea in the form of the equivalence principle, a discovery which he later described as "the happiest thought of my life." In this theory, free fall is considered to be equivalent to inertial motion, meaning that free-falling inertial objects are accelerated relative to non-inertial observers on the ground. In contrast to Newtonian physics, Einstein believed that it was possible for this acceleration to occur without any force being applied to the object.
Einstein proposed that spacetime is curved by matter, and that free-falling objects are moving along locally straight paths in curved spacetime. These straight paths are called geodesics. As in Newton's first law of motion, Einstein believed that a force applied to an object would cause it to deviate from a geodesic. For instance, people standing on the surface of the Earth are prevented from following a geodesic path because the mechanical resistance of the Earth exerts an upward force on them. This explains why moving along the geodesics in spacetime is considered inertial.
Einstein's description of gravity was quickly accepted by the majority of physicists, as it was able to explain a wide variety of previously baffling experimental results. In the coming years, a wide range of experiments provided additional support for the idea of general relativity. Today, Einstein's theory of relativity is used for all gravitational calculations where absolute precision is desired, although Newton's inverse-square law is accurate enough for virtually all ordinary calculations.
Modern research
In modern physics, general relativity remains the framework for the understanding of gravity. Physicists continue to work to find solutions to the Einstein field equations that form the basis of general relativity and continue to test the theory, finding excellent agreement in all cases.
Einstein field equations
The Einstein field equations are a system of 10 partial differential equations which describe how matter affects the curvature of spacetime. The system is often expressed in the form
$G_{\mu\nu} + \Lambda g_{\mu\nu} = \kappa T_{\mu\nu}$
where $G_{\mu\nu}$ is the Einstein tensor, $g_{\mu\nu}$ is the metric tensor, $T_{\mu\nu}$ is the stress–energy tensor, $\Lambda$ is the cosmological constant, $G$ is the Newtonian constant of gravitation and $c$ is the speed of light. The constant $\kappa = 8\pi G/c^4$ is referred to as the Einstein gravitational constant.
A major area of research is the discovery of exact solutions to the Einstein field equations. Solving these equations amounts to calculating a precise value for the metric tensor (which defines the curvature and geometry of spacetime) under certain physical conditions. There is no formal definition for what constitutes such solutions, but most scientists agree that they should be expressible using elementary functions or linear differential equations. Some of the most notable solutions of the equations include:
The Schwarzschild solution, which describes spacetime surrounding a spherically symmetric non-rotating uncharged massive object. For compact enough objects, this solution describes a black hole with a central singularity. At points far away from the central mass, the accelerations predicted by the Schwarzschild solution are practically identical to those predicted by Newton's theory of gravity.
The Reissner–Nordström solution, which analyzes a non-rotating spherically symmetric object with charge and was independently discovered by several different researchers between 1916 and 1921. In some cases, this solution can predict the existence of black holes with double event horizons.
The Kerr solution, which generalizes the Schwarzschild solution to rotating massive objects. Because of the difficulty of incorporating the effects of rotation into the Einstein field equations, this solution was not discovered until 1963.
The Kerr–Newman solution for charged, rotating massive objects. This solution was derived in 1964, using the same technique of complex coordinate transformation that was used for the Kerr solution.
The cosmological Friedmann–Lemaître–Robertson–Walker solution, discovered in 1922 by Alexander Friedmann and then confirmed in 1927 by Georges Lemaître. This solution was revolutionary for predicting the expansion of the Universe, which was confirmed seven years later after a series of measurements by Edwin Hubble. It even showed that general relativity was incompatible with a static universe, and Einstein later conceded that he had been wrong to design his field equations to account for a Universe that was not expanding.
Today, there remain many important situations in which the Einstein field equations have not been solved. Chief among these is the two-body problem, which concerns the geometry of spacetime around two mutually interacting massive objects, such as the Sun and the Earth, or the two stars in a binary star system. The situation gets even more complicated when considering the interactions of three or more massive bodies (the "n-body problem"), and some scientists suspect that the Einstein field equations will never be solved in this context. However, it is still possible to construct an approximate solution to the field equations in the n-body problem by using the technique of post-Newtonian expansion. In general, the extreme nonlinearity of the Einstein field equations makes it difficult to solve them in all but the most specific cases.
Gravity and quantum mechanics
Despite its success in predicting the effects of gravity at large scales, general relativity is ultimately incompatible with quantum mechanics. This is because general relativity describes gravity as a smooth, continuous distortion of spacetime, while quantum mechanics holds that all forces arise from the exchange of discrete particles known as quanta. This contradiction is especially vexing to physicists because the other three fundamental forces (strong force, weak force and electromagnetism) were reconciled with a quantum framework decades ago. As a result, modern researchers have begun to search for a theory that could unite both gravity and quantum mechanics under a more general framework.
One path is to describe gravity in the framework of quantum field theory, which has been successful in accurately describing the other fundamental interactions. The electromagnetic force arises from an exchange of virtual photons; in the QFT description of gravity, there is an analogous exchange of virtual gravitons. This description reproduces general relativity in the classical limit. However, this approach fails at short distances of the order of the Planck length, where a more complete theory of quantum gravity (or a new approach to quantum mechanics) is required.
Tests of general relativity
Testing the predictions of general relativity has historically been difficult, because they are almost identical to the predictions of Newtonian gravity for small energies and masses. Still, since its development, an ongoing series of experimental results have provided support for the theory:
In 1919, the British astrophysicist Arthur Eddington was able to confirm the predicted gravitational lensing of light during that year's solar eclipse. Eddington measured starlight deflections twice those predicted by Newtonian corpuscular theory, in accordance with the predictions of general relativity. Although Eddington's analysis was later disputed, this experiment made Einstein famous almost overnight and caused general relativity to become widely accepted in the scientific community.
In 1959, American physicists Robert Pound and Glen Rebka performed an experiment in which they used gamma rays to confirm the prediction of gravitational time dilation. By sending the rays down a 74-foot tower and measuring their frequency at the bottom, the scientists confirmed that light is blueshifted as it moves towards a source of gravity, and correspondingly redshifted as it moves away from one. The observed shift also supported the idea that time runs more slowly in the presence of a gravitational field.
The time delay of light passing close to a massive object was first identified by Irwin I. Shapiro in 1964 in interplanetary spacecraft signals.
In 1971, scientists discovered the first black hole ever identified, in the constellation Cygnus. The black hole was detected because it was emitting bursts of x-rays as it consumed a smaller star, and it came to be known as Cygnus X-1. This discovery confirmed yet another prediction of general relativity, because Einstein's equations implied that light could not escape from a sufficiently large and compact object.
General relativity states that gravity acts on light and matter equally, meaning that a sufficiently massive object could warp light around it and create a gravitational lens. This phenomenon was first confirmed by observation in 1979 using the 2.1 meter telescope at Kitt Peak National Observatory in Arizona, which saw two mirror images of the same quasar whose light had been bent around the galaxy YGKOW G1.
Frame dragging, the idea that a rotating massive object should twist spacetime around it, was confirmed by Gravity Probe B results in 2011.
In 2015, the LIGO observatory detected faint gravitational waves, the existence of which had been predicted by general relativity. Scientists believe that the waves emanated from a black hole merger that occurred about 1.3 billion light-years away.
Specifics
Earth's gravity
Every planetary body (including the Earth) is surrounded by its own gravitational field, which can be conceptualized with Newtonian physics as exerting an attractive force on all objects. Assuming a spherically symmetrical planet, the strength of this field at any given point above the surface is proportional to the planetary body's mass and inversely proportional to the square of the distance from the center of the body.
The strength of the gravitational field is numerically equal to the acceleration of objects under its influence. The rate of acceleration of falling objects near the Earth's surface varies very slightly depending on latitude, surface features such as mountains and ridges, and perhaps unusually high or low sub-surface densities. For purposes of weights and measures, a standard gravity value is defined by the International Bureau of Weights and Measures, under the International System of Units (SI).
The force of gravity on Earth is the resultant (vector sum) of two forces: (a) The gravitational attraction in accordance with Newton's universal law of gravitation, and (b) the centrifugal force, which results from the choice of an earthbound, rotating frame of reference. The force of gravity is weakest at the equator because of the centrifugal force caused by the Earth's rotation and because points on the equator are furthest from the center of the Earth. The force of gravity varies with latitude and increases from about 9.780 m/s2 at the Equator to about 9.832 m/s2 at the poles.
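A minimal sketch of the two contributions just described, evaluated at the equator with rough reference values (the constants below are approximations supplied for illustration, not figures from this article; the Earth's oblateness is ignored, so the result only roughly matches the quoted 9.780 m/s2):

    import math

    G = 6.674e-11          # m^3 kg^-1 s^-2
    M = 5.972e24           # mass of the Earth, kg (approximate)
    R_eq = 6.378e6         # equatorial radius, m (approximate)
    T_sidereal = 86164.0   # rotation period of the Earth, s

    g_attraction = G * M / R_eq**2       # Newtonian gravitational attraction
    omega = 2 * math.pi / T_sidereal
    a_centrifugal = omega**2 * R_eq      # centrifugal term at the equator

    print(g_attraction)                  # about 9.80 m/s^2
    print(a_centrifugal)                 # about 0.034 m/s^2
    print(g_attraction - a_centrifugal)  # net gravity, a little below 9.78 m/s^2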
Gravitational radiation
General relativity predicts that energy can be transported out of a system through gravitational radiation. The first indirect evidence for gravitational radiation came through measurements of the Hulse–Taylor binary, discovered in 1974. This system consists of a pulsar and neutron star in orbit around one another. Its orbital period has decreased since its initial discovery due to a loss of energy, which is consistent with the amount of energy loss expected from gravitational radiation. This research was awarded the Nobel Prize in Physics in 1993.
The first direct evidence for gravitational radiation was measured on 14 September 2015 by the LIGO detectors. The gravitational waves emitted during the collision of two black holes 1.3 billion light years from Earth were measured. This observation confirms the theoretical predictions of Einstein and others that such waves exist. It also opens the way for practical observation and understanding of the nature of gravity and events in the Universe including the Big Bang. Neutron star and black hole formation also create detectable amounts of gravitational radiation. This research was awarded the Nobel Prize in Physics in 2017.
Speed of gravity
In December 2012, a research team in China announced that it had produced measurements of the phase lag of Earth tides during full and new moons which seem to prove that the speed of gravity is equal to the speed of light. This means that if the Sun suddenly disappeared, the Earth would keep orbiting the vacant point normally for 8 minutes, which is the time light takes to travel that distance. The team's findings were released in Science Bulletin in February 2013.
In October 2017, the LIGO and Virgo detectors received gravitational wave signals within 2 seconds of gamma ray satellites and optical telescopes seeing signals from the same direction. This confirmed that the speed of gravitational waves was the same as the speed of light.
Anomalies and discrepancies
There are some observations that are not adequately accounted for, which may point to the need for better theories of gravity or perhaps be explained in other ways.
Extra-fast stars: Stars in galaxies follow a distribution of velocities where stars on the outskirts are moving faster than they should according to the observed distributions of normal matter. Galaxies within galaxy clusters show a similar pattern. Dark matter, which would interact through gravitation but not electromagnetically, would account for the discrepancy. Various modifications to Newtonian dynamics have also been proposed.
Accelerated expansion: The expansion of the universe seems to be speeding up. Dark energy has been proposed to explain this.
Flyby anomaly: Various spacecraft have experienced greater acceleration than expected during gravity assist maneuvers. The Pioneer anomaly has been shown to be explained by thermal recoil due to heat radiated anisotropically from one side of the spacecraft.
Alternative theories
Historical alternative theories
Aristotelian theory of gravity
Le Sage's theory of gravitation (1784), also called Le Sage gravity, originally proposed by Nicolas Fatio de Duillier and further elaborated by Georges-Louis Le Sage, based on a fluid-based explanation where a light gas fills the entire Universe.
Ritz's theory of gravitation, Ann. Chem. Phys. 13, 145, (1908) pp. 267–271, Weber–Gauss electrodynamics applied to gravitation. Classical advancement of perihelia.
Nordström's theory of gravitation (1912, 1913), an early competitor of general relativity.
Kaluza–Klein theory (1921)
Whitehead's theory of gravitation (1922), another early competitor of general relativity.
Modern alternative theories
Brans–Dicke theory of gravity (1961)
Induced gravity (1967), a proposal by Andrei Sakharov according to which general relativity might arise from quantum field theories of matter
String theory (late 1960s)
ƒ(R) gravity (1970)
Horndeski theory (1974)
Supergravity (1976)
In the modified Newtonian dynamics (MOND) (1981), Mordehai Milgrom proposes a modification of Newton's second law of motion for small accelerations
The self-creation cosmology theory of gravity (1982) by G.A. Barber in which the Brans–Dicke theory is modified to allow mass creation
Loop quantum gravity (1988) by Carlo Rovelli, Lee Smolin, and Abhay Ashtekar
Nonsymmetric gravitational theory (NGT) (1994) by John Moffat
Tensor–vector–scalar gravity (TeVeS) (2004), a relativistic modification of MOND by Jacob Bekenstein
Chameleon theory (2004) by Justin Khoury and Amanda Weltman.
Pressuron theory (2013) by Olivier Minazzoli and Aurélien Hees.
Conformal gravity
Gravity as an entropic force, gravity arising as an emergent phenomenon from the thermodynamic concept of entropy.
In the superfluid vacuum theory the gravity and curved spacetime arise as a collective excitation mode of non-relativistic background superfluid.
Massive gravity, a theory where gravitons and gravitational waves have a non-zero mass
External links
The Feynman Lectures on Physics Vol. I Ch. 7: The Theory of Gravitation
The three Rs
The three Rs are three basic skills taught in schools: reading, writing and arithmetic (the "R's", pronounced in the English alphabet "ARs", refer to "Reading, wRiting (where the W is unnecessary), and ARithmetic"). The phrase appears to have been coined at the beginning of the 19th century.
The term has also been used to name other triples (see Other uses).
Origin and meaning
The skills themselves are alluded to in St. Augustine's Confessions: 'learning to read, and write, and do arithmetic'.
The phrase is sometimes attributed to a speech given by Sir William Curtis circa 1807, but this attribution is disputed. An extended modern version of the three Rs consists of the "functional skills of literacy, numeracy and ICT".
The educationalist Louis P. Bénézet preferred "to read", "to reason", "to recite", adding, "by reciting I did not mean giving back, verbatim, the words of the teacher or of the textbook. I meant speaking the English language."
Other uses
More recent meanings of "the three Rs" are:
In the subject of CNC code generation by Edgecam Workflow: Rapid, Reliable, and Repeatable
In the subject of sustainability: Reduce, Reuse, and Recycle
In the subject of American politics and the New Deal: Relief, Recovery, and Reform
In animal welfare principles in research (see The Three Rs for animals). The Three Rs principle stands for Reduction, Refinement, and Replacement. It promotes the use of alternative methods whenever possible, reducing the number of animals used, refining the experimental techniques to minimize harm, and replacing animals with non-animal models when feasible
See also
Standards based education reform
Traditional education
Trivium (education)
Boltzmann machine
A Boltzmann machine (also called a Sherrington–Kirkpatrick model with external field or a stochastic Ising model), named after Ludwig Boltzmann, is a spin-glass model with an external field, i.e., a Sherrington–Kirkpatrick model, and is thus a stochastic Ising model. It is a statistical physics technique applied in the context of cognitive science. It is also classified as a Markov random field.
Boltzmann machines are theoretically intriguing because of the locality and Hebbian nature of their training algorithm (being trained by Hebb's rule), and because of their parallelism and the resemblance of their dynamics to simple physical processes. Boltzmann machines with unconstrained connectivity have not been proven useful for practical problems in machine learning or inference, but if the connectivity is properly constrained, the learning can be made efficient enough to be useful for practical problems.
They are named after the Boltzmann distribution in statistical mechanics, which is used in their sampling function. They were heavily popularized and promoted by Geoffrey Hinton, Terry Sejnowski and Yann LeCun in cognitive sciences communities, particularly in machine learning, as part of "energy-based models" (EBM), because Hamiltonians of spin glasses as energy are used as a starting point to define the learning task.
Structure
A Boltzmann machine, like a Sherrington–Kirkpatrick model, is a network of units with a total "energy" (Hamiltonian) defined for the overall network. Its units produce binary results. Boltzmann machine weights are stochastic. The global energy $E$ in a Boltzmann machine is identical in form to that of Hopfield networks and Ising models:
$E = -\left(\sum_{i<j} w_{ij}\,s_i\,s_j + \sum_i \theta_i\,s_i\right)$
Where:
$w_{ij}$ is the connection strength between unit $j$ and unit $i$.
$s_i$ is the state, $s_i \in \{0,1\}$, of unit $i$.
$\theta_i$ is the bias of unit $i$ in the global energy function. ($-\theta_i$ is the activation threshold for the unit.)
Often the weights are represented as a symmetric matrix with zeros along the diagonal.
Unit state probability
The difference in the global energy that results from a single unit $i$ equaling 0 (off) versus 1 (on), written $\Delta E_i$, assuming a symmetric matrix of weights, is given by:
$\Delta E_i = \sum_j w_{ij}\,s_j + \theta_i$
This can be expressed as the difference of energies of two states:
$\Delta E_i = E_{i=\text{off}} - E_{i=\text{on}}$
Substituting the energy of each state with its relative probability according to the Boltzmann factor (the property of a Boltzmann distribution that the energy of a state is proportional to the negative log probability of that state) gives:
$\Delta E_i = -k_B T\ln(p_{i=\text{off}}) + k_B T\ln(p_{i=\text{on}})$
where $k_B$ is the Boltzmann constant and is absorbed into the artificial notion of temperature $T$. We then rearrange terms and consider that the probabilities of the unit being on and off must sum to one:
$\frac{\Delta E_i}{T} = \ln(p_{i=\text{on}}) - \ln(p_{i=\text{off}}) = \ln\!\left(\frac{p_{i=\text{on}}}{1 - p_{i=\text{on}}}\right)$
Solving for $p_{i=\text{on}}$, the probability that the $i$-th unit is on, gives:
$p_{i=\text{on}} = \frac{1}{1 + \exp\!\left(-\frac{\Delta E_i}{T}\right)}$
where the scalar $T$ is referred to as the temperature of the system. This relation is the source of the logistic function found in probability expressions in variants of the Boltzmann machine.
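A minimal sketch of this update rule (Python/NumPy; the weight matrix, biases and temperature below are made-up illustrative values, not taken from any reference implementation). Each step picks a unit at random, computes its energy gap and turns the unit on with the logistic probability derived above:

    import numpy as np

    rng = np.random.default_rng(0)

    # Symmetric weights with zero diagonal, biases and temperature (illustrative values).
    W = np.array([[0.0, 0.5, -0.3],
                  [0.5, 0.0, 0.8],
                  [-0.3, 0.8, 0.0]])
    theta = np.array([0.1, -0.2, 0.05])
    T = 1.0
    s = rng.integers(0, 2, size=3).astype(float)   # random initial binary state

    def energy(s):
        # E = -(sum_{i<j} w_ij s_i s_j + sum_i theta_i s_i); the 0.5 undoes double counting
        return -(0.5 * s @ W @ s + theta @ s)

    def gibbs_step(s):
        # Resample one randomly chosen unit from its conditional distribution.
        i = rng.integers(len(s))
        delta_E = W[i] @ s + theta[i]              # energy gap for unit i
        p_on = 1.0 / (1.0 + np.exp(-delta_E / T))
        s[i] = 1.0 if rng.random() < p_on else 0.0
        return s

    for _ in range(1000):        # run toward thermal equilibrium at temperature T
        s = gibbs_step(s)
    print(s, energy(s))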
Equilibrium state
The network runs by repeatedly choosing a unit and resetting its state. After running for long enough at a certain temperature, the probability of a global state of the network depends only upon that global state's energy, according to a Boltzmann distribution, and not on the initial state from which the process was started. This means that log-probabilities of global states become linear in their energies. This relationship is true when the machine is "at thermal equilibrium", meaning that the probability distribution of global states has converged. If the network is run beginning from a high temperature, and the temperature is gradually decreased until thermal equilibrium is reached at a lower temperature, it may converge to a distribution where the energy level fluctuates around the global minimum. This process is called simulated annealing.
To train the network so that it will converge to a global state according to an external distribution over these states, the weights must be set so that the global states with the highest probabilities get the lowest energies. This is done by training.
Training
The units in the Boltzmann machine are divided into 'visible' units, V, and 'hidden' units, H. The visible units are those that receive information from the 'environment', i.e. the training set is a set of binary vectors over the set V. The distribution over the training set is denoted $P^{+}(V)$.
The distribution over global states converges as the Boltzmann machine reaches thermal equilibrium. We denote this distribution, after we marginalize it over the hidden units, as $P^{-}(V)$.
Our goal is to approximate the "real" distribution $P^{+}(V)$ using the $P^{-}(V)$ produced by the machine. The similarity of the two distributions is measured by the Kullback–Leibler divergence, $G$:
$G = \sum_{v} P^{+}(v)\ln\!\left(\frac{P^{+}(v)}{P^{-}(v)}\right)$
where the sum is over all the possible states of $V$. $G$ is a function of the weights, since they determine the energy of a state, and the energy determines $P^{-}(v)$, as promised by the Boltzmann distribution. A gradient descent algorithm over $G$ changes a given weight, $w_{ij}$, by subtracting the partial derivative of $G$ with respect to the weight.
Boltzmann machine training involves two alternating phases. One is the "positive" phase where the visible units' states are clamped to a particular binary state vector sampled from the training set (according to $P^{+}$). The other is the "negative" phase where the network is allowed to run freely, i.e. only the input nodes have their state determined by external data, but the output nodes are allowed to float. The gradient with respect to a given weight, $w_{ij}$, is given by the equation:
$\frac{\partial G}{\partial w_{ij}} = -\frac{1}{R}\left[p_{ij}^{+} - p_{ij}^{-}\right]$
where:
$p_{ij}^{+}$ is the probability that units i and j are both on when the machine is at equilibrium on the positive phase.
$p_{ij}^{-}$ is the probability that units i and j are both on when the machine is at equilibrium on the negative phase.
$R$ denotes the learning rate
This result follows from the fact that at thermal equilibrium the probability of any global state when the network is free-running is given by the Boltzmann distribution.
This learning rule is biologically plausible because the only information needed to change the weights is provided by "local" information. That is, the connection (synapse, biologically) does not need information about anything other than the two neurons it connects. This is more biologically realistic than the information needed by a connection in many other neural network training algorithms, such as backpropagation.
The training of a Boltzmann machine does not use the EM algorithm, which is heavily used in machine learning. Minimizing the KL-divergence is equivalent to maximizing the log-likelihood of the data. Therefore, the training procedure performs gradient ascent on the log-likelihood of the observed data. This is in contrast to the EM algorithm, where the posterior distribution of the hidden nodes must be calculated before the maximization of the expected value of the complete data likelihood during the M-step.
Training the biases is similar, but uses only single node activity:
$\frac{\partial G}{\partial \theta_{i}} = -\frac{1}{R}\left[p_{i}^{+} - p_{i}^{-}\right]$
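A minimal sketch of the resulting update, assuming the pairwise and single-unit "on" statistics from the two phases have already been estimated by sampling at equilibrium (all arrays below are hypothetical placeholder values):

    import numpy as np

    R = 0.1  # learning rate

    # Co-activation probabilities estimated in the clamped ("positive") and
    # free-running ("negative") phases; placeholder numbers for illustration.
    p_pos = np.array([[0.0, 0.6], [0.6, 0.0]])
    p_neg = np.array([[0.0, 0.4], [0.4, 0.0]])
    p_i_pos = np.array([0.7, 0.5])    # single-unit on-probabilities, positive phase
    p_i_neg = np.array([0.6, 0.45])   # single-unit on-probabilities, negative phase

    W = np.zeros((2, 2))
    theta = np.zeros(2)
    W += R * (p_pos - p_neg)          # climb the log-likelihood (descend G)
    theta += R * (p_i_pos - p_i_neg)
    print(W, theta)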
Problems
Theoretically the Boltzmann machine is a rather general computational medium. For instance, if trained on photographs, the machine would theoretically model the distribution of photographs, and could use that model to, for example, complete a partial photograph.
Unfortunately, Boltzmann machines experience a serious practical problem, namely that it seems to stop learning correctly when the machine is scaled up to anything larger than a trivial size. This is due to important effects, specifically:
the time required to collect equilibrium statistics grows exponentially with the machine's size, and with the magnitude of the connection strengths
connection strengths are more plastic when the connected units have activation probabilities intermediate between zero and one, leading to a so-called variance trap. The net effect is that noise causes the connection strengths to follow a random walk until the activities saturate.
Types
Restricted Boltzmann machine
Although learning is impractical in general Boltzmann machines, it can be made quite efficient in a restricted Boltzmann machine (RBM), which does not allow intralayer connections, i.e. there are no visible–visible or hidden–hidden connections. After training one RBM, the activities of its hidden units can be treated as data for training a higher-level RBM. This method of stacking RBMs makes it possible to train many layers of hidden units efficiently and is one of the most common deep learning strategies. As each new layer is added the generative model improves.
An extension to the restricted Boltzmann machine allows using real valued data rather than binary data.
One example of a practical RBM application is in speech recognition.
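A minimal sketch of a restricted Boltzmann machine trained with one step of contrastive divergence (CD-1), a widely used approximation of the two-phase rule above (Python/NumPy; the layer sizes, learning rate and random binary data are made up for illustration):

    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    n_visible, n_hidden, lr = 6, 3, 0.1                  # made-up sizes and learning rate
    W = 0.01 * rng.standard_normal((n_visible, n_hidden))
    a = np.zeros(n_visible)                              # visible biases
    b = np.zeros(n_hidden)                               # hidden biases
    data = rng.integers(0, 2, size=(100, n_visible)).astype(float)   # toy binary "training set"

    for epoch in range(50):
        for v0 in data:
            # Positive phase: hidden probabilities with the visible units clamped to data.
            ph0 = sigmoid(v0 @ W + b)
            h0 = (rng.random(n_hidden) < ph0).astype(float)
            # Negative phase (CD-1): one step of block Gibbs sampling, i.e. a reconstruction.
            pv1 = sigmoid(h0 @ W.T + a)
            v1 = (rng.random(n_visible) < pv1).astype(float)
            ph1 = sigmoid(v1 @ W + b)
            # Updates: data-driven statistics minus reconstruction-driven statistics.
            W += lr * (np.outer(v0, ph0) - np.outer(v1, ph1))
            a += lr * (v0 - v1)
            b += lr * (ph0 - ph1)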
Deep Boltzmann machine
A deep Boltzmann machine (DBM) is a type of binary pairwise Markov random field (undirected probabilistic graphical model) with multiple layers of hidden random variables. It is a network of symmetrically coupled stochastic binary units. It comprises a set of visible units $\nu$ and a series of layers of hidden units $h^{(1)}, h^{(2)}, h^{(3)}$. No connection links units of the same layer (like RBM). For the DBM, the probability assigned to a visible vector $\nu$ is
$p(\nu) = \frac{1}{Z}\sum_{h} \exp\!\left(\sum_{ij} W^{(1)}_{ij}\,\nu_i\, h^{(1)}_j + \sum_{jl} W^{(2)}_{jl}\, h^{(1)}_j\, h^{(2)}_l + \sum_{lm} W^{(3)}_{lm}\, h^{(2)}_l\, h^{(3)}_m\right)$
where $h = \{h^{(1)}, h^{(2)}, h^{(3)}\}$ are the set of hidden units, and $\theta = \{W^{(1)}, W^{(2)}, W^{(3)}\}$ are the model parameters, representing visible–hidden and hidden–hidden interactions. In a DBN only the top two layers form a restricted Boltzmann machine (which is an undirected graphical model), while lower layers form a directed generative model. In a DBM all layers are symmetric and undirected.
Like DBNs, DBMs can learn complex and abstract internal representations of the input in tasks such as object or speech recognition, using limited, labeled data to fine-tune the representations built using a large set of unlabeled sensory input data. However, unlike DBNs and deep convolutional neural networks, they pursue the inference and training procedure in both directions, bottom-up and top-down, which allow the DBM to better unveil the representations of the input structures.
However, the slow speed of DBMs limits their performance and functionality. Because exact maximum likelihood learning is intractable for DBMs, only approximate maximum likelihood learning is possible. Another option is to use mean-field inference to estimate data-dependent expectations and approximate the expected sufficient statistics by using Markov chain Monte Carlo (MCMC). This approximate inference, which must be done for each test input, is about 25 to 50 times slower than a single bottom-up pass in DBMs. This makes joint optimization impractical for large data sets, and restricts the use of DBMs for tasks such as feature representation.
Multimodal deep Boltzmann machine
Spike-and-slab RBMs
The need for deep learning with real-valued inputs, as in Gaussian RBMs, led to the spike-and-slab RBM (ssRBM), which models continuous-valued inputs with binary latent variables. Similar to basic RBMs and its variants, a spike-and-slab RBM is a bipartite graph, while like GRBMs, the visible units (input) are real-valued. The difference is in the hidden layer, where each hidden unit has a binary spike variable and a real-valued slab variable. A spike is a discrete probability mass at zero, while a slab is a density over continuous domain; their mixture forms a prior.
An extension of ssRBM called μ-ssRBM provides extra modeling capacity using additional terms in the energy function. One of these terms enables the model to form a conditional distribution of the spike variables by marginalizing out the slab variables given an observation.
In mathematics
In more general mathematical setting, the Boltzmann distribution is also known as the Gibbs measure. In statistics and machine learning it is called a log-linear model. In deep learning the Boltzmann distribution is used in the sampling distribution of stochastic neural networks such as the Boltzmann machine.
History
The Boltzmann machine is based on the Sherrington–Kirkpatrick spin glass model by David Sherrington and Scott Kirkpatrick. The seminal publication by John Hopfield (1982) applied methods of statistical mechanics, mainly the recently developed (1970s) theory of spin glasses, to study associative memory (later named the "Hopfield network").
The original contribution in applying such energy-based models in cognitive science appeared in papers by Geoffrey Hinton and Terry Sejnowski. In a 1995 interview, Hinton stated that in February or March 1983, he was going to give a talk on simulated annealing in Hopfield networks, so he had to design a learning algorithm for the talk, resulting in the Boltzmann machine learning algorithm.
The idea of applying the Ising model with annealed Gibbs sampling was used in Douglas Hofstadter's Copycat project (1984).
The explicit analogy drawn with statistical mechanics in the Boltzmann machine formulation led to the use of terminology borrowed from physics (e.g., "energy"), which became standard in the field. The widespread adoption of this terminology may have been encouraged by the fact that its use led to the adoption of a variety of concepts and methods from statistical mechanics. The various proposals to use simulated annealing for inference were apparently independent.
Similar ideas (with a change of sign in the energy function) are found in Paul Smolensky's "Harmony Theory". Ising models can be generalized to Markov random fields, which find widespread application in linguistics, robotics, computer vision and artificial intelligence.
In 2024, Hopfield and Hinton were awarded Nobel Prize in Physics for their foundational contributions to machine learning, such as the Boltzmann machine.
See also
Restricted Boltzmann machine
Helmholtz machine
Markov random field (MRF)
Ising model (Lenz–Ising model)
Hopfield network
A learning rule that uses conditional "local" information can be derived from the reversed form of the divergence, $G' = \sum_{v} P^{-}(v)\ln\!\left(\frac{P^{-}(v)}{P^{+}(v)}\right)$.
External links
Scholarpedia article by Hinton about Boltzmann machines
Talk at Google by Geoffrey Hinton
Reduced mass
In physics, reduced mass is a measure of the effective inertial mass of a system with two or more particles when the particles are interacting with each other. Reduced mass allows the two-body problem to be solved as if it were a one-body problem. Note, however, that the mass determining the gravitational force is not reduced. In the computation, one mass can be replaced with the reduced mass, if this is compensated by replacing the other mass with the sum of both masses. The reduced mass is frequently denoted by $\mu$ (mu), although the standard gravitational parameter is also denoted by $\mu$ (as are a number of other physical quantities). It has the dimensions of mass, and SI unit kg.
Reduced mass is particularly useful in classical mechanics.
Equation
Given two bodies, one with mass m1 and the other with mass m2, the equivalent one-body problem, with the position of one body with respect to the other as the unknown, is that of a single body of mass
$\mu = \cfrac{1}{\cfrac{1}{m_1} + \cfrac{1}{m_2}} = \cfrac{m_1 m_2}{m_1 + m_2},$
where the force on this mass is given by the force between the two bodies.
Properties
The reduced mass is always less than or equal to the mass of each body:
$\mu \leq m_1, \quad \mu \leq m_2$
and has the reciprocal additive property:
$\frac{1}{\mu} = \frac{1}{m_1} + \frac{1}{m_2}$
which by re-arrangement is equivalent to half of the harmonic mean.
In the special case that $m_1 = m_2 = m$:
$\mu = \frac{m}{2}$
If $m_1 \gg m_2$, then $\mu \approx m_2$.
Derivation
The equation can be derived as follows.
Newtonian mechanics
Using Newton's second law, the force exerted by a body (particle 2) on another body (particle 1) is:
$\mathbf{F}_{12} = m_1 \mathbf{a}_1$
The force exerted by particle 1 on particle 2 is:
$\mathbf{F}_{21} = m_2 \mathbf{a}_2$
According to Newton's third law, the force that particle 2 exerts on particle 1 is equal and opposite to the force that particle 1 exerts on particle 2:
$\mathbf{F}_{12} = -\mathbf{F}_{21}$
Therefore:
$m_1 \mathbf{a}_1 = -m_2 \mathbf{a}_2 \quad\Rightarrow\quad \mathbf{a}_2 = -\frac{m_1}{m_2}\,\mathbf{a}_1$
The relative acceleration $\mathbf{a}_{\mathrm{rel}}$ between the two bodies is given by:
$\mathbf{a}_{\mathrm{rel}} := \mathbf{a}_1 - \mathbf{a}_2 = \left(1 + \frac{m_1}{m_2}\right)\mathbf{a}_1 = \frac{m_1 + m_2}{m_1 m_2}\, m_1 \mathbf{a}_1 = \frac{\mathbf{F}_{12}}{\mu}$
Note that (since the derivative is a linear operator) the relative acceleration is equal to the acceleration of the separation between the two particles.
This simplifies the description of the system to one force (since $\mathbf{F}_{12} = -\mathbf{F}_{21}$), one coordinate $\mathbf{x}_{\mathrm{rel}} := \mathbf{x}_1 - \mathbf{x}_2$, and one mass $\mu$. Thus we have reduced our problem to a single degree of freedom, and we can conclude that particle 1 moves with respect to the position of particle 2 as a single particle of mass equal to the reduced mass, $\mu$.
Lagrangian mechanics
Alternatively, a Lagrangian description of the two-body problem gives a Lagrangian of
$\mathcal{L} = \tfrac{1}{2} m_1 \dot{\mathbf{r}}_1^2 + \tfrac{1}{2} m_2 \dot{\mathbf{r}}_2^2 - V(|\mathbf{r}_1 - \mathbf{r}_2|)$
where $\mathbf{r}_i$ is the position vector of mass $m_i$ (of particle $i$). The potential energy V is a function as it is only dependent on the absolute distance between the particles. If we define
$\mathbf{r} = \mathbf{r}_1 - \mathbf{r}_2$
and let the centre of mass coincide with our origin in this reference frame, i.e.
$m_1 \mathbf{r}_1 + m_2 \mathbf{r}_2 = 0,$
then
$\mathbf{r}_1 = \frac{m_2 \mathbf{r}}{m_1 + m_2}, \quad \mathbf{r}_2 = -\frac{m_1 \mathbf{r}}{m_1 + m_2}$
Then substituting above gives a new Lagrangian
$\mathcal{L} = \tfrac{1}{2}\mu \dot{\mathbf{r}}^2 - V(r)$
where
$\mu = \frac{m_1 m_2}{m_1 + m_2}$
is the reduced mass. Thus we have reduced the two-body problem to that of one body.
Applications
Reduced mass can be used in a multitude of two-body problems, where classical mechanics is applicable.
Moment of inertia of two point masses in a line
In a system with two point masses $m_1$ and $m_2$ such that they are co-linear, the two distances $r_1$ and $r_2$ to the rotation axis may be found with
$r_1 = R\,\frac{m_2}{m_1 + m_2}, \quad r_2 = R\,\frac{m_1}{m_1 + m_2}$
where $R$ is the sum of both distances $R = r_1 + r_2$.
This holds for a rotation around the center of mass.
The moment of inertia around this axis can be then simplified to
$I = m_1 r_1^2 + m_2 r_2^2 = \mu R^2$
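A minimal numerical check of this identity, using arbitrary illustrative masses and separation:

    m1, m2, R = 2.0, 3.0, 1.5       # kg, kg, m (arbitrary illustrative values)
    mu = m1 * m2 / (m1 + m2)        # reduced mass

    r1 = R * m2 / (m1 + m2)         # distances from the centre of mass
    r2 = R * m1 / (m1 + m2)

    I_direct = m1 * r1**2 + m2 * r2**2
    I_reduced = mu * R**2
    print(I_direct, I_reduced)      # the two values agree: I = mu * R^2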
Collisions of particles
In a collision with a coefficient of restitution e, the change in kinetic energy can be written as
$\Delta K = \tfrac{1}{2}\,\mu\, v_{\mathrm{rel}}^2\left(e^2 - 1\right),$
where vrel is the relative velocity of the bodies before collision.
For typical applications in nuclear physics, where one particle's mass is much larger than the other's, the reduced mass can be approximated as the smaller mass of the system. The limit of the reduced mass formula as one mass goes to infinity is the smaller mass, thus this approximation is used to ease calculations, especially when the larger particle's exact mass is not known.
Motion of two massive bodies under their gravitational attraction
In the case of the gravitational potential energy
$V(|\mathbf{r}_1 - \mathbf{r}_2|) = -\frac{G m_1 m_2}{|\mathbf{r}_1 - \mathbf{r}_2|}$
we find that the position of the first body with respect to the second is governed by the same differential equation as the position of a body with the reduced mass orbiting a body with a mass equal to the sum of the two masses, because
$F = \frac{G m_1 m_2}{r^2} = \frac{G\,(m_1 + m_2)\,\mu}{r^2}$
Non-relativistic quantum mechanics
Consider the electron (mass me) and proton (mass mp) in the hydrogen atom. They orbit each other about a common centre of mass, a two body problem. To analyze the motion of the electron, a one-body problem, the reduced mass replaces the electron mass
$m_e \rightarrow \mu = \frac{m_e m_p}{m_e + m_p}$
This idea is used to set up the Schrödinger equation for the hydrogen atom.
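A minimal sketch of this replacement for hydrogen, using approximate particle masses (illustrative reference values, not taken from this article):

    m_e = 9.10938e-31   # electron mass, kg (approximate)
    m_p = 1.67262e-27   # proton mass, kg (approximate)

    mu = m_e * m_p / (m_e + m_p)   # reduced mass of the electron-proton system
    print(mu)                      # about 9.104e-31 kg
    print(mu / m_e)                # about 0.9995: slightly less than the electron mass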
See also
Parallel (operator) - the general operation, of which reduced mass is just one case
Center-of-momentum frame
Momentum conservation
Harmonic oscillator
Chirp mass, a relativistic equivalent used in the post-Newtonian expansion
External links
Reduced Mass on HyperPhysics
Darcy–Weisbach equation
In fluid dynamics, the Darcy–Weisbach equation is an empirical equation that relates the head loss, or pressure loss, due to friction along a given length of pipe to the average velocity of the fluid flow for an incompressible fluid. The equation is named after Henry Darcy and Julius Weisbach. Currently, there is no formula more accurate or universally applicable than the Darcy–Weisbach equation supplemented by the Moody diagram or Colebrook equation.
The Darcy–Weisbach equation contains a dimensionless friction factor, known as the Darcy friction factor. This is also variously called the Darcy–Weisbach friction factor, friction factor, resistance coefficient, or flow coefficient.
Historical background
The Darcy–Weisbach equation, combined with the Moody chart for calculating head losses in pipes, is traditionally attributed to Henry Darcy, Julius Weisbach, and Lewis Ferry Moody. However, the development of these formulas and charts also involved other scientists and engineers. Generally, Bernoulli's equation would provide the head losses, but only in terms of quantities not known a priori, such as pressure. Therefore, empirical relationships were sought to correlate the head loss with quantities like pipe diameter and fluid velocity.
Julius Weisbach was certainly not the first to introduce a formula correlating the length and diameter of a pipe to the square of the fluid velocity. Antoine Chézy (1718-1798), in fact, had published a formula in 1770 that, although referring to open channels (i.e., not under pressure), was formally identical to the one Weisbach would later introduce, provided it was reformulated in terms of the hydraulic radius. However, Chézy's formula was lost until 1800, when Gaspard de Prony (a former student of his) published an account describing his results. It is likely that Weisbach was aware of Chézy's formula through Prony's publications.
Weisbach's formula was proposed in 1845 in the form we still use today:
$h_f = f\,\frac{L}{D}\,\frac{v^2}{2g}$
where:
$h_f$: head loss.
$L$: length of the pipe.
$D$: diameter of the pipe.
$v$: velocity of the fluid.
$g$: acceleration due to gravity.
However, the friction factor $f$ was expressed by Weisbach through the following empirical formula:
$f = \alpha + \frac{\beta}{\sqrt{v}}$
with $\alpha$ and $\beta$ depending on the diameter and the type of pipe wall.
Weisbach's work was published in the United States of America in 1848 and soon became well known there. In contrast, it did not initially gain much traction in France, where Prony equation, which had a polynomial form in terms of velocity (often approximated by the square of the velocity), continued to be used. Beyond the historical developments, Weisbach's formula had the objective merit of adhering to dimensional analysis, resulting in a dimensionless friction factor f. The complexity of f, dependent on the mechanics of the boundary layer and the flow regime (laminar, transitional, or turbulent), tended to obscure its dependence on the quantities in Weisbach's formula, leading many researchers to derive irrational and dimensionally inconsistent empirical formulas. It was understood not long after Weisbach's work that the friction factor f depended on the flow regime and was independent of the Reynolds number (and thus the velocity) only in the case of rough pipes in a turbulent flow regime (Prandtl-von Kármán equation).
Pressure-loss equation
In a cylindrical pipe of uniform diameter $D$, flowing full, the pressure loss $\Delta p$ due to viscous effects is proportional to length $L$ and can be characterized by the Darcy–Weisbach equation:
$\frac{\Delta p}{L} = f_D \cdot \frac{\rho}{2} \cdot \frac{\langle v\rangle^2}{D_H}$
where the pressure loss per unit length $\frac{\Delta p}{L}$ (SI units: Pa/m) is a function of:
$\rho$, the density of the fluid (kg/m3);
$D_H$, the hydraulic diameter of the pipe (for a pipe of circular section, this equals $D$; otherwise $D_H = 4A/P$ for a pipe of cross-sectional area $A$ and perimeter $P$) (m);
$\langle v\rangle$, the mean flow velocity, experimentally measured as the volumetric flow rate $Q$ per unit cross-sectional wetted area (m/s);
$f_D$, the Darcy friction factor (also called flow coefficient $\lambda$).
For laminar flow in a circular pipe of diameter $D$, the friction factor is inversely proportional to the Reynolds number alone ($f_D = 64/\mathrm{Re}$), which itself can be expressed in terms of easily measured or published physical quantities (see section below). Making this substitution the Darcy–Weisbach equation is rewritten as
$\frac{\Delta p}{L} = \frac{128}{\pi} \cdot \frac{\mu Q}{D^4}$
where
$\mu$ is the dynamic viscosity of the fluid (Pa·s = N·s/m2 = kg/(m·s));
$Q$ is the volumetric flow rate, used here to measure flow instead of mean velocity according to $Q = \frac{\pi}{4} D^2 \langle v\rangle$ (m3/s).
Note that this laminar form of Darcy–Weisbach is equivalent to the Hagen–Poiseuille equation, which is analytically derived from the Navier–Stokes equations.
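A minimal sketch of this equivalence for a slow, water-like flow (Python; the fluid properties and pipe dimensions are made-up values chosen so that the flow is actually laminar):

    import math

    mu = 1.0e-3    # dynamic viscosity, Pa*s (water-like)
    rho = 1000.0   # density, kg/m^3
    D = 0.02       # pipe diameter, m
    L = 10.0       # pipe length, m
    Q = 1.0e-6     # volumetric flow rate, m^3/s

    v = Q / (math.pi * D**2 / 4)             # mean flow velocity
    Re = rho * v * D / mu
    assert Re < 2000                         # confirm the flow is laminar

    f = 64.0 / Re                            # laminar Darcy friction factor
    dp_darcy = f * (L / D) * rho * v**2 / 2  # Darcy-Weisbach pressure drop, Pa
    dp_hp = 128 * mu * L * Q / (math.pi * D**4)   # Hagen-Poiseuille pressure drop, Pa
    print(Re, dp_darcy, dp_hp)               # the two pressure drops coincide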
Head-loss formula
The head loss $\Delta h$ (or $h_f$) expresses the pressure loss due to friction in terms of the equivalent height of a column of the working fluid, so the pressure drop is
$\Delta p = \rho g\,\Delta h$
where:
$\Delta h$ = the head loss due to pipe friction over the given length of pipe (SI units: m);
$g$ = the local acceleration due to gravity (m/s2).
It is useful to present head loss per length of pipe (dimensionless):
$S = \frac{\Delta h}{L}$
where $L$ is the pipe length (m).
Therefore, the Darcy–Weisbach equation can also be written in terms of head loss:
$\Delta h = f_D \cdot \frac{L}{D} \cdot \frac{\langle v\rangle^2}{2g}$
In terms of volumetric flow
The relationship between mean flow velocity $\langle v\rangle$ and volumetric flow rate $Q$ is
$Q = A\,\langle v\rangle$
where:
$Q$ = the volumetric flow (m3/s),
$A$ = the cross-sectional wetted area (m2).
In a full-flowing, circular pipe of diameter $D$,
$A = \frac{\pi}{4} D^2$
Then the Darcy–Weisbach equation in terms of $Q$ is
$\Delta h = f_D \cdot \frac{8}{\pi^2 g} \cdot \frac{L\,Q^2}{D^5}$
Shear-stress form
The mean wall shear stress $\tau$ in a pipe or open channel is expressed in terms of the Darcy–Weisbach friction factor as
$\tau = \frac{1}{8}\, f_D\, \rho\, \langle v\rangle^2$
The wall shear stress has the SI unit of pascals (Pa).
Darcy friction factor
The friction factor $f_D$ is not a constant: it depends on such things as the characteristics of the pipe (diameter $D$ and roughness height $\varepsilon$), the characteristics of the fluid (its kinematic viscosity $\nu$), and the velocity of the fluid flow $\langle v\rangle$. It has been measured to high accuracy within certain flow regimes and may be evaluated by the use of various empirical relations, or it may be read from published charts. These charts are often referred to as Moody diagrams, after L. F. Moody, and hence the factor itself is sometimes erroneously called the Moody friction factor. It is also sometimes called the Blasius friction factor, after the approximate formula he proposed.
Figure 1 shows the value of $f_D$ as measured by experimenters for many different fluids, over a wide range of Reynolds numbers, and for pipes of various roughness heights. There are three broad regimes of fluid flow encountered in these data: laminar, critical, and turbulent.
Laminar regime
For laminar (smooth) flows, it is a consequence of Poiseuille's law (which stems from an exact classical solution for the fluid flow) that
$f_D = \frac{64}{\mathrm{Re}}$
where $\mathrm{Re}$ is the Reynolds number
$\mathrm{Re} = \frac{\rho\,\langle v\rangle\, D}{\mu} = \frac{\langle v\rangle\, D}{\nu}$
and where $\mu$ is the viscosity of the fluid and
$\nu = \frac{\mu}{\rho}$
is known as the kinematic viscosity. In this expression for Reynolds number, the characteristic length is taken to be the hydraulic diameter of the pipe, which, for a cylindrical pipe flowing full, equals the inside diameter. In Figures 1 and 2 of friction factor versus Reynolds number, the regime $\mathrm{Re} < 2000$ demonstrates laminar flow; the friction factor is well represented by the above equation.
In effect, the friction loss in the laminar regime is more accurately characterized as being proportional to flow velocity, rather than proportional to the square of that velocity: one could regard the Darcy–Weisbach equation as not truly applicable in the laminar flow regime.
In laminar flow, friction loss arises from the transfer of momentum from the fluid in the center of the flow to the pipe wall via the viscosity of the fluid; no vortices are present in the flow. Note that the friction loss is insensitive to the pipe roughness height : the flow velocity in the neighborhood of the pipe wall is zero.
Critical regime
For Reynolds numbers in the range $2000 < \mathrm{Re} < 4000$, the flow is unsteady (varies grossly with time) and varies from one section of the pipe to another (is not "fully developed"). The flow involves the incipient formation of vortices; it is not well understood.
Turbulent regime
For Reynolds number greater than 4000, the flow is turbulent; the resistance to flow follows the Darcy–Weisbach equation: it is proportional to the square of the mean flow velocity. Over a domain of many orders of magnitude of $\mathrm{Re}$, the friction factor varies less than one order of magnitude. Within the turbulent flow regime, the nature of the flow can be further divided into a regime where the pipe wall is effectively smooth, and one where its roughness height is salient.
Smooth-pipe regime
When the pipe surface is smooth (the "smooth pipe" curve in Figure 2), the friction factor's variation with Re can be modeled by the Kármán–Prandtl resistance equation for turbulent flow in smooth pipes with the parameters suitably adjusted
$\frac{1}{\sqrt{f}} = 1.930 \log_{10}\!\left(\mathrm{Re}\sqrt{f}\right) - 0.537$
The numbers 1.930 and 0.537 are phenomenological; these specific values provide a fairly good fit to the data. The product $\mathrm{Re}\sqrt{f}$ (called the "friction Reynolds number") can be considered, like the Reynolds number, to be a (dimensionless) parameter of the flow: at fixed values of $\mathrm{Re}\sqrt{f}$, the friction factor is also fixed.
In the Kármán–Prandtl resistance equation, $f$ can be expressed in closed form as an analytic function of $\mathrm{Re}$ through the use of the Lambert W function.
In this flow regime, many small vortices are responsible for the transfer of momentum between the bulk of the fluid to the pipe wall. As the friction Reynolds number increases, the profile of the fluid velocity approaches the wall asymptotically, thereby transferring more momentum to the pipe wall, as modeled in Blasius boundary layer theory.
Rough-pipe regime
When the pipe surface's roughness height $\varepsilon$ is significant (typically at high Reynolds number), the friction factor departs from the smooth pipe curve, ultimately approaching an asymptotic value ("rough pipe" regime). In this regime, the resistance to flow varies according to the square of the mean flow velocity and is insensitive to Reynolds number. Here, it is useful to employ yet another dimensionless parameter of the flow, the roughness Reynolds number
$R_* = \frac{\varepsilon}{D}\,\mathrm{Re}\,\sqrt{\frac{f}{8}}$
where the roughness height $\varepsilon$ is scaled to the pipe diameter $D$.
It is illustrative to plot the roughness function $B$ against $R_*$.
Figure 3 shows $B$ versus $R_*$ for the rough pipe data of Nikuradse, Shockling, and Langelandsvik.
In this view, the data at different roughness ratio $\varepsilon/D$ fall together when plotted against $R_*$, demonstrating scaling in the variable $R_*$. The following features are present:
When the roughness height $\varepsilon$ is zero, $B$ is identically zero: flow is always in the smooth pipe regime. The data for these points lie to the left extreme of the abscissa and are not within the frame of the graph.
At small values of $R_*$, the data lie on the smooth-pipe line; flow is in the smooth pipe regime.
At large values of $R_*$, the data asymptotically approach a horizontal line; they are independent of the Reynolds number.
The intermediate range of $R_*$ constitutes a transition from one behavior to the other. The data depart from the smooth-pipe line very slowly, reach a maximum, then fall to a constant value.
Afzal's fit to these data in the transition from smooth pipe flow to rough pipe flow employs an exponential expression in $R_*$ that ensures proper behavior through the transition from the smooth pipe regime to the rough pipe regime. This function shares the same values for its leading term in common with the Kármán–Prandtl resistance equation, plus one parameter 0.305 or 0.34 to fit the asymptotic behavior for large $R_*$, along with one further parameter, 11, to govern the transition from smooth to rough flow. It is exhibited in Figure 3.
The friction factor for another analogous roughness takes the same form; it shares the same values for its leading term in common with the Kármán–Prandtl resistance equation, plus one parameter 0.305 or 0.34 to fit the asymptotic behavior for large $R_*$, along with one further parameter, 26, to govern the transition from smooth to rough flow.
The Colebrook–White relation fits the friction factor with a function of the form
$\frac{1}{\sqrt{f}} = -2\log_{10}\!\left(\frac{\varepsilon/D}{3.7} + \frac{2.51}{\mathrm{Re}\sqrt{f}}\right)$
This relation has the correct behavior at extreme values of $R_*$, as shown by the labeled curve in Figure 3: when $R_*$ is small, it is consistent with smooth pipe flow, when large, it is consistent with rough pipe flow. However its performance in the transitional domain overestimates the friction factor by a substantial margin. Colebrook acknowledges the discrepancy with Nikuradze's data but argues that his relation is consistent with the measurements on commercial pipes. Indeed, such pipes are very different from those carefully prepared by Nikuradse: their surfaces are characterized by many different roughness heights and random spatial distribution of roughness points, while those of Nikuradse have surfaces with uniform roughness height, with the points extremely closely packed.
Calculating the friction factor from its parametrization
For turbulent flow, methods for finding the friction factor include using a diagram, such as the Moody chart, or solving equations such as the Colebrook–White equation (upon which the Moody chart is based), or the Swamee–Jain equation. While the Colebrook–White relation is, in the general case, an iterative method, the Swamee–Jain equation allows to be found directly for full flow in a circular pipe.
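A minimal sketch of both routes for a turbulent, full-flowing circular pipe (Python; the pipe geometry, roughness and velocity below are illustrative values). The Colebrook–White relation is solved by simple fixed-point iteration, the Swamee–Jain approximation is evaluated directly, and the resulting friction factor is used in the head-loss form of the Darcy–Weisbach equation:

    import math

    g = 9.81        # m/s^2
    nu = 1.0e-6     # kinematic viscosity of water, m^2/s (approximate)
    D = 0.1         # pipe diameter, m (illustrative)
    L = 50.0        # pipe length, m
    eps = 1.5e-4    # roughness height, m (illustrative)
    v = 2.0         # mean flow velocity, m/s

    Re = v * D / nu                     # turbulent here (Re ~ 2e5)

    def colebrook(Re, rel_rough, iters=20):
        f = 0.02                        # initial guess
        for _ in range(iters):
            rhs = -2.0 * math.log10(rel_rough / 3.7 + 2.51 / (Re * math.sqrt(f)))
            f = 1.0 / rhs**2            # fixed-point update
        return f

    def swamee_jain(Re, rel_rough):
        return 0.25 / math.log10(rel_rough / 3.7 + 5.74 / Re**0.9) ** 2

    f_cw = colebrook(Re, eps / D)
    f_sj = swamee_jain(Re, eps / D)
    h_f = f_cw * (L / D) * v**2 / (2 * g)   # head loss over the pipe, m
    print(f_cw, f_sj, h_f)                  # the two friction factors agree closely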
Direct calculation when friction loss is known
In typical engineering applications, there will be a set of given or known quantities. The acceleration of gravity $g$ and the kinematic viscosity of the fluid $\nu$ are known, as are the diameter of the pipe $D$ and its roughness height $\varepsilon$. If as well the head loss per unit length $S$ is a known quantity, then the friction factor $f_D$ can be calculated directly from the chosen fitting function. Solving the Darcy–Weisbach equation for $\sqrt{f_D}$,
$\sqrt{f_D} = \frac{\sqrt{2gDS}}{\langle v\rangle}$
we can now express $\mathrm{Re}\sqrt{f_D}$:
$\mathrm{Re}\sqrt{f_D} = \frac{\sqrt{2gD^3S}}{\nu}$
Expressing the roughness Reynolds number $R_*$,
$R_* = \frac{\varepsilon}{D}\,\mathrm{Re}\,\sqrt{\frac{f_D}{8}} = \frac{\varepsilon\,\sqrt{gDS}}{2\nu}$
we have the two parameters needed to substitute into the Colebrook–White relation, or any other function, for the friction factor $f_D$, the flow velocity $\langle v\rangle$, and the volumetric flow rate $Q$.
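A minimal sketch of this non-iterative route (Python; the values of $S$, $D$, $\varepsilon$ and $\nu$ are illustrative). Because $\mathrm{Re}\sqrt{f_D}$ is fixed by the known quantities, the Colebrook–White relation gives $1/\sqrt{f_D}$ in a single evaluation, after which the velocity and flow rate follow:

    import math

    g = 9.81        # m/s^2
    nu = 1.0e-6     # kinematic viscosity, m^2/s
    D = 0.1         # pipe diameter, m
    eps = 1.5e-4    # roughness height, m
    S = 0.01        # head loss per unit length (dimensionless), illustrative

    Re_sqrt_f = math.sqrt(2 * g * D**3 * S) / nu      # known without the velocity
    inv_sqrt_f = -2.0 * math.log10(eps / (3.7 * D) + 2.51 / Re_sqrt_f)
    f = 1.0 / inv_sqrt_f**2                           # friction factor, no iteration needed

    v = math.sqrt(2 * g * D * S / f)                  # mean flow velocity, m/s
    Q = v * math.pi * D**2 / 4                        # volumetric flow rate, m^3/s
    print(f, v, Q)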
Confusion with the Fanning friction factor
The Darcy–Weisbach friction factor is 4 times larger than the Fanning friction factor , so attention must be paid to note which one of these is meant in any "friction factor" chart or equation being used. Of the two, the Darcy–Weisbach factor is more commonly used by civil and mechanical engineers, and the Fanning factor by chemical engineers, but care should be taken to identify the correct factor regardless of the source of the chart or formula.
Note that $f_D = 4 f_F$ (where $f_F$ is the Fanning friction factor).
Most charts or tables indicate the type of friction factor, or at least provide the formula for the friction factor with laminar flow. If the formula for laminar flow is $f_F = \frac{16}{\mathrm{Re}}$, it is the Fanning factor $f_F$, and if the formula for laminar flow is $f_D = \frac{64}{\mathrm{Re}}$, it is the Darcy–Weisbach factor $f_D$.
Which friction factor is plotted in a Moody diagram may be determined by inspection if the publisher did not include the formula described above:
Observe the value of the friction factor for laminar flow at a Reynolds number of 1000.
If the value of the friction factor is 0.064, then the Darcy friction factor is plotted in the Moody diagram. Note that the nonzero digits in 0.064 are the numerator in the formula for the laminar Darcy friction factor: $f_D = \frac{64}{\mathrm{Re}}$.
If the value of the friction factor is 0.016, then the Fanning friction factor is plotted in the Moody diagram. Note that the nonzero digits in 0.016 are the numerator in the formula for the laminar Fanning friction factor: $f_F = \frac{16}{\mathrm{Re}}$.
The procedure above is similar for any available Reynolds number that is an integer power of ten. It is not necessary to remember the value 1000 for this procedure—only that an integer power of ten is of interest for this purpose.
History
Historically this equation arose as a variant on the Prony equation; this variant was developed by Henry Darcy of France, and further refined into the form used today by Julius Weisbach of Saxony in 1845. Initially, data on the variation of $f_D$ with velocity was lacking, so the Darcy–Weisbach equation was outperformed at first by the empirical Prony equation in many cases. In later years it was eschewed in many special-case situations in favor of a variety of empirical equations valid only for certain flow regimes, notably the Hazen–Williams equation or the Manning equation, most of which were significantly easier to use in calculations. However, since the advent of the calculator, ease of calculation is no longer a major issue, and so the Darcy–Weisbach equation's generality has made it the preferred one.
Derivation by dimensional analysis
Away from the ends of the pipe, the characteristics of the flow are independent of the position along the pipe. The key quantities are then the pressure drop along the pipe per unit length, Δp/L, and the volumetric flow rate Q. The flow rate can be converted to a mean flow velocity ⟨v⟩ by dividing by the wetted area of the flow (which equals the cross-sectional area of the pipe if the pipe is full of fluid).
Pressure has dimensions of energy per unit volume; therefore, the pressure drop between two points must be proportional to the dynamic pressure q = ρ⟨v⟩²/2. We also know that the pressure drop must be proportional to the length of the pipe between the two points, since the pressure drop per unit length is constant. To turn the relationship into a dimensionless proportionality coefficient, we can divide by the hydraulic diameter of the pipe, DH, which is also constant along the pipe. Therefore,
Δp/L = fD · q/DH.
The proportionality coefficient fD is the dimensionless "Darcy friction factor" or "flow coefficient". This dimensionless coefficient will be a combination of geometric factors, the Reynolds number and (outside the laminar regime) the relative roughness of the pipe (the ratio ε/DH of the roughness height to the hydraulic diameter).
Note that the dynamic pressure is not the kinetic energy of the fluid per unit volume, for the following reasons. Even in the case of laminar flow, where all the flow lines are parallel to the length of the pipe, the velocity of the fluid on the inner surface of the pipe is zero due to viscosity, and the velocity in the center of the pipe must therefore be larger than the average velocity obtained by dividing the volumetric flow rate by the wet area. The average kinetic energy then involves the root mean-square velocity, which always exceeds the mean velocity. In the case of turbulent flow, the fluid acquires random velocity components in all directions, including perpendicular to the length of the pipe, and thus turbulence contributes to the kinetic energy per unit volume but not to the average lengthwise velocity of the fluid.
Practical application
In a hydraulic engineering application, it is typical for the volumetric flow within a pipe (that is, its productivity) and the head loss per unit length (the concomitant power consumption) to be the critically important factors. The practical consequence is that, for a fixed volumetric flow rate Q, the head loss decreases with the inverse fifth power of the pipe diameter D. Doubling the diameter of a pipe of a given schedule (say, ANSI schedule 40) roughly doubles the amount of material required per unit length and thus its installed cost. Meanwhile, the head loss is decreased by a factor of 32 (about a 97% reduction). Thus the energy consumed in moving a given volumetric flow of the fluid is cut down dramatically for a modest increase in capital cost.
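The factor of 32 quoted above follows from treating the friction factor as roughly constant, so that head loss per unit length scales as Q²/D⁵ at fixed flow rate; a one-line check:

```python
ratio = 1 / 2**5                              # head-loss ratio when the diameter is doubled
print(ratio, f"{1 - ratio:.0%} reduction")    # 0.03125, "97% reduction"
```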
Advantages
The Darcy–Weisbach equation's accuracy and universal applicability make it the ideal formula for flow in pipes. The advantages of the equation are as follows:
It is based on fundamentals.
It is dimensionally consistent.
It is useful for any fluid, including oil, gas, brine, and sludges.
It can be derived analytically in the laminar flow region.
It is useful in the transition region between laminar flow and fully developed turbulent flow.
The friction factor variation is well documented.
See also
Bernoulli's principle
Darcy friction factor formulae
Euler number
Friction loss
Hazen–Williams equation
Hagen–Poiseuille equation
Water pipe
Notes
References
Further reading
External links
The History of the Darcy–Weisbach Equation
Darcy–Weisbach equation calculator
Pipe pressure drop calculator for single phase flows.
Pipe pressure drop calculator for two phase flows.
Open source pipe pressure drop calculator.
Web application with pressure drop calculations for pipes and ducts
ThermoTurb – A web application for thermal and turbulent flow analysis
Dimensionless numbers of fluid mechanics
Eponymous equations of physics
Equations of fluid dynamics
Piping
Grand potential
The grand potential or Landau potential or Landau free energy is a quantity used in statistical mechanics, especially for irreversible processes in open systems.
The grand potential is the characteristic state function for the grand canonical ensemble.
Definition
Grand potential is defined by
ΦG = U − TS − μN,
where U is the internal energy, T is the temperature of the system, S is the entropy, μ is the chemical potential, and N is the number of particles in the system.
The change in the grand potential is given by
dΦG = −S dT − P dV − N dμ,
where P is pressure and V is volume, using the fundamental thermodynamic relation (combined first and second thermodynamic laws)
dU = T dS − P dV + μ dN.
When the system is in thermodynamic equilibrium, ΦG is a minimum. This can be seen by considering that dΦG is zero if the volume is fixed and the temperature and chemical potential have stopped evolving.
Landau free energy
Some authors refer to the grand potential as the Landau free energy or Landau potential and write its definition as:
Ω = F − μN = U − TS − μN,
named after Russian physicist Lev Landau; depending on system stipulations, it may be a synonym for the grand potential. For homogeneous systems, one obtains Ω = −PV.
Homogeneous systems (vs. inhomogeneous systems)
In the case of a scale-invariant type of system (where a system of volume V has exactly the same set of microstates as a system of volume λV), when the system expands new particles and energy will flow in from the reservoir to fill the new volume with a homogeneous extension of the original system.
The pressure, then, must be constant with respect to changes in volume:
(∂P/∂V)T,μ = 0,
and all extensive quantities (particle number, energy, entropy, potentials, ...) must grow linearly with volume, e.g. the particle number scales as N ∝ V at fixed T and μ.
In this case we simply have ΦG = −PV, as well as the familiar relationship G = μN for the Gibbs free energy.
The value of ΦG can be understood as the work that can be extracted from the system by shrinking it down to nothing (putting all the particles and energy back into the reservoir). The fact that ΦG is negative implies that the extraction of particles from the system to the reservoir requires energy input.
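As a concrete numerical check of the homogeneous-system relation, the sketch below evaluates ΦG = U − TS − μN for a classical monatomic ideal gas, using the Sackur–Tetrode entropy and the corresponding chemical potential, and compares it with −PV. The particle mass and state point are arbitrary illustrative choices, not values from the text.

```python
import numpy as np

k_B = 1.380649e-23   # Boltzmann constant, J/K
h = 6.62607015e-34   # Planck constant, J*s

m = 6.6e-27          # particle mass (roughly helium-4), kg
T = 300.0            # temperature, K
V = 1.0e-3           # volume, m^3
N = 1.0e22           # particle number

lam = h / np.sqrt(2.0 * np.pi * m * k_B * T)       # thermal de Broglie wavelength
n = N / V                                          # number density

U = 1.5 * N * k_B * T                              # internal energy
S = N * k_B * (np.log(1.0 / (n * lam**3)) + 2.5)   # Sackur-Tetrode entropy
mu = k_B * T * np.log(n * lam**3)                  # chemical potential

Phi_G = U - T * S - mu * N
P = n * k_B * T                                    # ideal-gas pressure

print(Phi_G, -P * V)   # the two values agree, i.e. Phi_G = -PV
```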
Such homogeneous scaling does not exist in many systems. For example, when analyzing the ensemble of electrons in a single molecule or even a piece of metal floating in space, doubling the volume of the space does not double the number of electrons in the material.
The problem here is that, although electrons and energy are exchanged with a reservoir, the material host is not allowed to change.
Generally in small systems, or systems with long-range interactions (those outside the thermodynamic limit), ΦG ≠ −PV.
See also
Gibbs energy
Helmholtz energy
References
External links
Grand Potential (Manchester University)
Thermodynamics
Lev Landau
Positive energy theorem
The positive energy theorem (also known as the positive mass theorem) refers to a collection of foundational results in general relativity and differential geometry. Its standard form, broadly speaking, asserts that the gravitational energy of an isolated system is nonnegative, and can only be zero when the system has no gravitating objects. Although these statements are often thought of as being primarily physical in nature, they can be formalized as mathematical theorems which can be proven using techniques of differential geometry, partial differential equations, and geometric measure theory.
Richard Schoen and Shing-Tung Yau, in 1979 and 1981, were the first to give proofs of the positive mass theorem. Edward Witten, in 1981, gave the outlines of an alternative proof, which were later filled in rigorously by mathematicians. Witten and Yau were awarded the Fields Medal in mathematics in part for their work on this topic.
An imprecise formulation of the Schoen-Yau / Witten positive energy theorem states the following:
Given an asymptotically flat initial data set which is complete and satisfies the dominant energy condition, the ADM energy of each of its ends is nonnegative; moreover, if the ADM energy of some end is zero, then the initial data set has Minkowski space as a development.
The meaning of these terms is discussed below. There are alternative and non-equivalent formulations for different notions of energy-momentum and for different classes of initial data sets. Not all of these formulations have been rigorously proven, and it is currently an open problem whether the above formulation holds for initial data sets of arbitrary dimension.
Historical overview
The original proof of the theorem for ADM mass was provided by Richard Schoen and Shing-Tung Yau in 1979 using variational methods and minimal surfaces. Edward Witten gave another proof in 1981 based on the use of spinors, inspired by positive energy theorems in the context of supergravity. An extension of the theorem for the Bondi mass was given by Ludvigsen and James Vickers, Gary Horowitz and Malcolm Perry, and Schoen and Yau.
Gary Gibbons, Stephen Hawking, Horowitz and Perry proved extensions of the theorem to asymptotically anti-de Sitter spacetimes and to Einstein–Maxwell theory. The mass of an asymptotically anti-de Sitter spacetime is non-negative and only equal to zero for anti-de Sitter spacetime. In Einstein–Maxwell theory, for a spacetime with electric charge Q and magnetic charge P, the mass of the spacetime satisfies (in Gaussian units)
M ≥ √(Q² + P²),
with equality for the Majumdar–Papapetrou extremal black hole solutions.
Initial data sets
An initial data set consists of a Riemannian manifold and a symmetric 2-tensor field on . One says that an initial data set :
is time-symmetric if is zero
is maximal if
satisfies the dominant energy condition if
where denotes the scalar curvature of .
Note that a time-symmetric initial data set satisfies the dominant energy condition if and only if the scalar curvature of is nonnegative. One says that a Lorentzian manifold is a development of an initial data set if there is a (necessarily spacelike) hypersurface embedding of into , together with a continuous unit normal vector field, such that the induced metric is and the second fundamental form with respect to the given unit normal is .
This definition is motivated from Lorentzian geometry. Given a Lorentzian manifold of dimension and a spacelike immersion from a connected -dimensional manifold into which has a trivial normal bundle, one may consider the induced Riemannian metric as well as the second fundamental form of with respect to either of the two choices of continuous unit normal vector field along . The triple is an initial data set. According to the Gauss-Codazzi equations, one has
where denotes the Einstein tensor of and denotes the continuous unit normal vector field along used to define . So the dominant energy condition as given above is, in this Lorentzian context, identical to the assertion that , when viewed as a vector field along , is timelike or null and is oriented in the same direction as .
The ends of asymptotically flat initial data sets
In the literature there are several different notions of "asymptotically flat" which are not mutually equivalent. Usually it is defined in terms of weighted Hölder spaces or weighted Sobolev spaces.
However, there are some features which are common to virtually all approaches. One considers an initial data set which may or may not have a boundary; let denote its dimension. One requires that there is a compact subset of such that each connected component of the complement is diffeomorphic to the complement of a closed ball in Euclidean space . Such connected components are called the ends of .
Formal statements
Schoen and Yau (1979)
Let be a time-symmetric initial data set satisfying the dominant energy condition. Suppose that is an oriented three-dimensional smooth Riemannian manifold-with-boundary, and that each boundary component has positive mean curvature. Suppose that it has one end, and it is asymptotically Schwarzschild in the following sense:
Schoen and Yau's theorem asserts that must be nonnegative. If, in addition, the functions and are bounded for any then must be positive unless the boundary of is empty and is isometric to with its standard Riemannian metric.
Note that the conditions on are asserting that , together with some of its derivatives, are small when is large. Since is measuring the defect between in the coordinates and the standard representation of the slice of the Schwarzschild metric, these conditions are a quantification of the term "asymptotically Schwarzschild". This can be interpreted in a purely mathematical sense as a strong form of "asymptotically flat", where the coefficient of the part of the expansion of the metric is declared to be a constant multiple of the Euclidean metric, as opposed to a general symmetric 2-tensor.
Note also that Schoen and Yau's theorem, as stated above, is actually (despite appearances) a strong form of the "multiple ends" case. If is a complete Riemannian manifold with multiple ends, then the above result applies to any single end, provided that there is a positive mean curvature sphere in every other end. This is guaranteed, for instance, if each end is asymptotically flat in the above sense; one can choose a large coordinate sphere as a boundary, and remove the corresponding remainder of each end until one has a Riemannian manifold-with-boundary with a single end.
Schoen and Yau (1981)
Let be an initial data set satisfying the dominant energy condition. Suppose that is an oriented three-dimensional smooth complete Riemannian manifold (without boundary); suppose that it has finitely many ends, each of which is asymptotically flat in the following sense.
Suppose that is an open precompact subset such that has finitely many connected components and for each there is a diffeomorphism such that the symmetric 2-tensor satisfies the following conditions:
and are bounded for all
Also suppose that
and are bounded for any
and for any
is bounded.
The conclusion is that the ADM energy of each defined as
is nonnegative. Furthermore, supposing in addition that
and are bounded for any
the assumption that for some implies that , that is diffeomorphic to , and that Minkowski space is a development of the initial data set .
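For orientation, the following numerical sketch evaluates the ADM energy referred to above as a large-radius flux integral, assuming the standard surface-integral expression E = (1/16π) lim ∮ (∂_i g_ij − ∂_j g_ii) n^j dA in asymptotically flat coordinates (the displayed definition is not reproduced in this text), on the time-symmetric Schwarzschild slice in isotropic coordinates and geometric units. It is only an illustration of the quantity, not part of Schoen and Yau's or Witten's arguments.

```python
import numpy as np

m = 1.0   # Schwarzschild mass parameter of the slice (geometric units)

def adm_energy(r):
    """Flux integral at coordinate radius r for g_ij = (1 + m/(2r))^4 delta_ij.

    By spherical symmetry the integrand sum_i (d_i g_ij - d_j g_ii) n^j
    reduces to -2 * d/dr of psi^4 with psi = 1 + m/(2r).
    """
    psi = 1.0 + m / (2.0 * r)
    dpsi4_dr = 4.0 * psi**3 * (-m / (2.0 * r**2))
    integrand = -2.0 * dpsi4_dr
    return integrand * 4.0 * np.pi * r**2 / (16.0 * np.pi)

for r in (1.0e2, 1.0e4, 1.0e6):
    print(r, adm_energy(r))   # approaches m = 1 as r grows, and is positive
```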
Witten (1981)
Let be an oriented three-dimensional smooth complete Riemannian manifold (without boundary). Let be a smooth symmetric 2-tensor on such that
Suppose that is an open precompact subset such that has finitely many connected components and for each there is a diffeomorphism such that the symmetric 2-tensor satisfies the following conditions:
and are bounded for all
and are bounded for all
For each define the ADM energy and linear momentum by
For each consider this as a vector in Minkowski space. Witten's conclusion is that for each it is necessarily a future-pointing non-spacelike vector. If this vector is zero for any then is diffeomorphic to and the maximal globally hyperbolic development of the initial data set has zero curvature.
Extensions and remarks
According to the above statements, Witten's conclusion is stronger than Schoen and Yau's. However, a third paper by Schoen and Yau shows that their 1981 result implies Witten's, retaining only the extra assumption that and are bounded for any It also must be noted that Schoen and Yau's 1981 result relies on their
1979 result, which is proved by contradiction; therefore their extension of their 1981 result is also by contradiction. By contrast, Witten's proof is logically direct, exhibiting the ADM energy directly as a nonnegative quantity. Furthermore, Witten's proof in the case can be extended without much effort to higher-dimensional manifolds, under the topological condition that the manifold admits a spin structure. Schoen and Yau's 1979 result and proof can be extended to the case of any dimension less than eight. More recently, Witten's result, using Schoen and Yau (1981)'s methods, has been extended to the same context. In summary: following Schoen and Yau's methods, the positive energy theorem has been proven in dimension less than eight, while following Witten, it has been proven in any dimension but with a restriction to the setting of spin manifolds.
As of April 2017, Schoen and Yau have released a preprint which proves the general higher-dimensional case in the special case without any restriction on dimension or topology. However, it has not yet (as of May 2020) appeared in an academic journal.
Applications
In 1984 Schoen used the positive mass theorem in his work which completed the solution of the Yamabe problem.
The positive mass theorem was used in Hubert Bray's proof of the Riemannian Penrose inequality.
References
Textbooks
Choquet-Bruhat, Yvonne. General relativity and the Einstein equations. Oxford Mathematical Monographs. Oxford University Press, Oxford, 2009. xxvi+785 pp.
Wald, Robert M. General relativity. University of Chicago Press, Chicago, IL, 1984. xiii+491 pp.
Mathematical methods in general relativity
Theorems in general relativity
Orthogonality
In mathematics, orthogonality is the generalization of the geometric notion of perpendicularity. Whereas perpendicular is typically followed by to when relating two lines to one another (e.g., "line A is perpendicular to line B"), orthogonal is commonly used without to (e.g., "orthogonal lines A and B").
Orthogonality is also used with various meanings that are often weakly related or not related at all with the mathematical meanings.
Etymology
The word comes from the Ancient Greek ὀρθός (orthós), meaning "upright", and γωνία (gōnía), meaning "angle".
The Ancient Greek ὀρθογώνιον (orthogṓnion) and Classical Latin orthogonium originally denoted a rectangle. Later, they came to mean a right triangle. In the 12th century, the post-classical Latin word orthogonalis came to mean a right angle or something related to a right angle.
Mathematics
Physics
Optics
In optics, polarization states are said to be orthogonal when they propagate independently of each other, as in vertical and horizontal linear polarization or right- and left-handed circular polarization.
Special relativity
In special relativity, a time axis determined by a rapidity of motion is hyperbolic-orthogonal to a space axis of simultaneous events, also determined by the rapidity. The theory features relativity of simultaneity.
Hyperbolic orthogonality
Quantum mechanics
In quantum mechanics, a sufficient (but not necessary) condition that two eigenstates of a Hermitian operator, ψm and ψn, are orthogonal is that they correspond to different eigenvalues. This means, in Dirac notation, that ⟨ψm|ψn⟩ = 0 if ψm and ψn correspond to different eigenvalues. This follows from the fact that Schrödinger's equation is a Sturm–Liouville equation (in Schrödinger's formulation) or that observables are given by Hermitian operators (in Heisenberg's formulation).
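A small numerical illustration of this statement, using an arbitrary randomly generated Hermitian matrix as the operator (an assumption made purely for the example):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = (A + A.conj().T) / 2                 # a Hermitian operator

eigvals, eigvecs = np.linalg.eigh(H)     # columns of eigvecs are eigenstates
overlaps = eigvecs.conj().T @ eigvecs    # matrix of inner products <m|n>

print(np.round(eigvals, 3))              # generically distinct eigenvalues
print(np.allclose(overlaps, np.eye(4)))  # True: the eigenstates are orthonormal
```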
Art
In art, the perspective (imaginary) lines pointing to the vanishing point are referred to as "orthogonal lines". The term "orthogonal line" often has a quite different meaning in the literature of modern art criticism. Many works by painters such as Piet Mondrian and Burgoyne Diller are noted for their exclusive use of "orthogonal lines" — not, however, with reference to perspective, but rather referring to lines that are straight and exclusively horizontal or vertical, forming right angles where they intersect. For example, an essay at the web site of the Thyssen-Bornemisza Museum states that "Mondrian ... dedicated his entire oeuvre to the investigation of the balance between orthogonal lines and primary colours."
Computer science
Orthogonality in programming language design is the ability to use various language features in arbitrary combinations with consistent results. This usage was introduced by Van Wijngaarden in the design of Algol 68:
The number of independent primitive concepts has been minimized in order that the language be easy to describe, to learn, and to implement. On the other hand, these concepts have been applied “orthogonally” in order to maximize the expressive power of the language while trying to avoid deleterious superfluities.
Orthogonality is a system design property which guarantees that modifying the technical effect produced by a component of a system neither creates nor propagates side effects to other components of the system. Typically this is achieved through the separation of concerns and encapsulation, and it is essential for feasible and compact designs of complex systems. The emergent behavior of a system consisting of components should be controlled strictly by formal definitions of its logic and not by side effects resulting from poor integration, i.e., non-orthogonal design of modules and interfaces. Orthogonality reduces testing and development time because it is easier to verify designs that neither cause side effects nor depend on them.
Orthogonal instruction set
An instruction set is said to be orthogonal if it lacks redundancy (i.e., there is only a single instruction that can be used to accomplish a given task) and is designed such that instructions can use any register in any addressing mode. This terminology results from considering an instruction as a vector whose components are the instruction fields. One field identifies the registers to be operated upon and another specifies the addressing mode. An orthogonal instruction set uniquely encodes all combinations of registers and addressing modes.
Telecommunications
In telecommunications, multiple access schemes are orthogonal when an ideal receiver can completely reject arbitrarily strong unwanted signals from the desired signal using different basis functions. One such scheme is time-division multiple access (TDMA), where the orthogonal basis functions are nonoverlapping rectangular pulses ("time slots").
Orthogonal frequency-division multiplexing
Another scheme is orthogonal frequency-division multiplexing (OFDM), which refers to the use, by a single transmitter, of a set of frequency multiplexed signals with the exact minimum frequency spacing needed to make them orthogonal so that they do not interfere with each other. Well known examples include the a, g, and n versions of 802.11 Wi-Fi; WiMAX; ITU-T G.hn; DVB-T, the terrestrial digital TV broadcast system used in most of the world outside North America; and DMT (Discrete Multi Tone), the standard form of ADSL.
In OFDM, the subcarrier frequencies are chosen so that the subcarriers are orthogonal to each other, meaning that crosstalk between the subchannels is eliminated and intercarrier guard bands are not required. This greatly simplifies the design of both the transmitter and the receiver. In conventional FDM, a separate filter for each subchannel is required.
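A minimal numerical sketch of this orthogonality, correlating complex-exponential subcarriers spaced by exactly 1/T over one symbol period; the period and subcarrier indices are illustrative:

```python
import numpy as np

T = 1.0                                     # symbol period
t = np.linspace(0.0, T, 1000, endpoint=False)

def subcarrier(k):
    return np.exp(2j * np.pi * k * t / T)   # k-th subcarrier, spacing 1/T

def correlation(k, l):
    return np.mean(subcarrier(k) * np.conj(subcarrier(l)))

print(abs(correlation(3, 3)))   # ~1: a subcarrier correlates with itself
print(abs(correlation(3, 4)))   # ~0: adjacent subcarriers do not interfere
print(abs(correlation(3, 7)))   # ~0: nor do more widely spaced ones
```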
Statistics, econometrics, and economics
When performing statistical analysis, independent variables that affect a particular dependent variable are said to be orthogonal if they are uncorrelated, since the covariance forms an inner product. In this case the same results are obtained for the effect of any of the independent variables upon the dependent variable, regardless of whether one models the effects of the variables individually with simple regression or simultaneously with multiple regression. If correlation is present, the factors are not orthogonal and different results are obtained by the two methods. This usage arises from the fact that if centered by subtracting the expected value (the mean), uncorrelated variables are orthogonal in the geometric sense discussed above, both as observed data (i.e., vectors) and as random variables (i.e., density functions).
One econometric formalism that is alternative to the maximum likelihood framework, the Generalized Method of Moments, relies on orthogonality conditions. In particular, the Ordinary Least Squares estimator may be easily derived from an orthogonality condition between the explanatory variables and model residuals.
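The claim about simple versus multiple regression can be checked with a short simulation; the data-generating process below is an arbitrary illustration, not a model from the text:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

x1 = rng.normal(size=n)
x2_orth = rng.normal(size=n)                     # uncorrelated with x1
x2_corr = 0.8 * x1 + 0.6 * rng.normal(size=n)    # correlated with x1

def slopes(x1, x2, y):
    """Simple-regression slopes vs. multiple-regression slopes."""
    simple = (np.cov(x1, y)[0, 1] / np.var(x1),
              np.cov(x2, y)[0, 1] / np.var(x2))
    X = np.column_stack([np.ones_like(x1), x1, x2])
    multiple = np.linalg.lstsq(X, y, rcond=None)[0][1:]
    return np.round(simple, 2), np.round(multiple, 2)

y_orth = 2.0 * x1 + 3.0 * x2_orth + rng.normal(size=n)
y_corr = 2.0 * x1 + 3.0 * x2_corr + rng.normal(size=n)

print(slopes(x1, x2_orth, y_orth))   # both methods give about (2, 3)
print(slopes(x1, x2_corr, y_corr))   # the simple slopes no longer match (2, 3)
```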
Taxonomy
In taxonomy, an orthogonal classification is one in which no item is a member of more than one group, that is, the classifications are mutually exclusive.
Chemistry and biochemistry
In chemistry and biochemistry, an orthogonal interaction occurs when there are two pairs of substances and each substance can interact with their respective partner, but does not interact with either substance of the other pair. For example, DNA has two orthogonal pairs: cytosine and guanine form a base-pair, and adenine and thymine form another base-pair, but other base-pair combinations are strongly disfavored. As a chemical example, tetrazine reacts with transcyclooctene and azide reacts with cyclooctyne without any cross-reaction, so these are mutually orthogonal reactions, and so, can be performed simultaneously and selectively.
Organic synthesis
In organic synthesis, orthogonal protection is a strategy allowing the deprotection of functional groups independently of each other.
Bioorthogonal chemistry
Supramolecular chemistry
In supramolecular chemistry the notion of orthogonality refers to the possibility of two or more supramolecular, often non-covalent, interactions being compatible; reversibly forming without interference from the other.
Analytical chemistry
In analytical chemistry, analyses are "orthogonal" if they make a measurement or identification in completely different ways, thus increasing the reliability of the measurement. Orthogonal testing thus can be viewed as "cross-checking" of results, and the "cross" notion corresponds to the etymologic origin of orthogonality. Orthogonal testing is often required as a part of a new drug application.
System reliability
In the field of system reliability, orthogonal redundancy is the form of redundancy in which the backup device or method is completely different from the error-prone device or method. The failure mode of an orthogonally redundant backup device or method does not intersect with, and is completely different from, the failure mode of the device or method that needs redundancy to safeguard the total system against catastrophic failure.
Neuroscience
In neuroscience, a sensory map in the brain which has overlapping stimulus coding (e.g. location and quality) is called an orthogonal map.
Philosophy
In philosophy, two topics, authors, or pieces of writing are said to be "orthogonal" to each other when they do not substantively cover what could be considered potentially overlapping or competing claims. Thus, texts in philosophy can either support and complement one another, they can offer competing explanations or systems, or they can be orthogonal to each other in cases where the scope, content, and purpose of the pieces of writing are entirely unrelated.
Gaming
In board games such as chess which feature a grid of squares, 'orthogonal' is used to mean "in the same row/'rank' or column/'file'". This is the counterpart to squares which are "diagonally adjacent". In the ancient Chinese board game Go a player can capture the stones of an opponent by occupying all orthogonally adjacent points.
Other examples
Stereo vinyl records encode both the left and right stereo channels in a single groove. The V-shaped groove in the vinyl has walls that are 90 degrees to each other, with variations in each wall separately encoding one of the two analogue channels that make up the stereo signal. The cartridge senses the motion of the stylus following the groove in two orthogonal directions: 45 degrees from vertical to either side. A pure horizontal motion corresponds to a mono signal, equivalent to a stereo signal in which both channels carry identical (in-phase) signals.
See also
Orthogonal ligand-protein pair
Up tack
References
Kinetic energy penetrator
A kinetic energy penetrator (KEP), also known as long-rod penetrator (LRP), is a type of ammunition designed to penetrate vehicle armour using a flechette-like, high-sectional density projectile. Like a bullet or kinetic energy weapon, this type of ammunition does not contain explosive payloads and uses purely kinetic energy to penetrate the target. Modern KEP munitions are typically of the armour-piercing fin-stabilized discarding sabot (APFSDS) type.
History
Early cannons fired kinetic energy ammunition, initially consisting of heavy balls of worked stone and later of dense metals. From the beginning, combining high muzzle energy with projectile weight and hardness have been the foremost factors in the design of such weapons. Similarly, the foremost purpose of such weapons has generally been to defeat protective shells of armored vehicles or other defensive structures, whether it is stone walls, sailship timbers, or modern tank armour. Kinetic energy ammunition, in its various forms, has consistently been the choice for those weapons due to the highly focused terminal ballistics.
The development of the modern KE penetrator combines two aspects of artillery design, high muzzle velocity and concentrated force. High muzzle velocity is achieved by using a projectile with a low mass and large base area in the gun barrel. Firing a small-diameter projectile wrapped in a lightweight outer shell, called a sabot, raises the muzzle velocity. Once the shell clears the barrel, the sabot is no longer needed and falls off in pieces. This leaves the projectile traveling at high velocity with a smaller cross-sectional area and reduced aerodynamic drag during the flight to the target (see external ballistics and terminal ballistics). Germany developed modern sabots under the name "treibspiegel" ("thrust mirror") to give extra altitude to its anti-aircraft guns during the Second World War. Before this, primitive wooden sabots had been used for centuries in the form of a wooden plug attached to or breech loaded before cannonballs in the barrel, placed between the propellant charge and the projectile. The name "sabot" (pronounced in English usage) is the French word for clog (a wooden shoe traditionally worn in some European countries).
Concentration of force into a smaller area was initially attained by replacing the single metal (usually steel) shot with a composite shot using two metals, a heavy core (based on tungsten) inside a lighter metal outer shell. These designs were known as armour-piercing composite rigid (APCR) by the British, high-velocity armor-piercing (HVAP) by the US, and hartkern (hard core) by the Germans. On impact, the core had a much more concentrated effect than plain metal shot of the same weight and size. The air resistance and other effects were the same as for the shell of identical size. High-velocity armor-piercing (HVAP) rounds were primarily used by tank destroyers in the US Army and were relatively uncommon as the tungsten core was expensive and prioritized for other applications.
Between 1941 and 1943, the British combined the two techniques in the armour-piercing discarding sabot (APDS) round. The sabot replaced the outer metal shell of the APCR. While in the gun, the shot had a large base area to get maximum acceleration from the propelling charge but once outside, the sabot fell away to reveal a heavy shot with a small cross-sectional area. APDS rounds served as the primary kinetic energy weapon of most tanks during the early-Cold War period, though they suffered the primary drawback of inaccuracy. This was resolved with the introduction of the armour-piercing fin-stabilized discarding sabot (APFSDS) round during the 1970s, which added stabilising fins to the penetrator, greatly increasing accuracy.
Design
The principle of the kinetic energy penetrator is that it uses its kinetic energy, which is a function of its mass and velocity, to force its way through armor. If the armor is defeated, the heat and spalling (particle spray) generated by the penetrator going through the armor, and the pressure wave that develops, ideally destroys the target.
The modern kinetic energy weapon maximizes the stress (kinetic energy divided by impact area) delivered to the target by:
maximizing the mass – that is, using the densest metals practical, which is one of the reasons depleted uranium or tungsten carbide is often used – and muzzle velocity of the projectile, as kinetic energy scales with the mass m and the square of the velocity v of the projectile
minimizing the width, since if the projectile does not tumble, it will hit the target face first. As most modern projectiles have circular cross-sectional areas, their impact area will scale with the square of the radius r (the impact area being πr²)
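For a sense of scale, the back-of-the-envelope sketch below computes the kinetic energy and the energy per unit frontal area for a hypothetical tungsten-alloy long rod; every number here is an illustrative assumption, not data for any real projectile.

```python
import math

rho = 17_600.0          # density of a tungsten heavy alloy, kg/m^3 (assumed)
length = 0.60           # rod length, m (assumed)
radius = 0.012          # rod radius, m (assumed)
velocity = 1_700.0      # impact velocity, m/s (assumed)

area = math.pi * radius**2          # frontal (impact) area, pi * r^2
mass = rho * area * length          # rod mass
energy = 0.5 * mass * velocity**2   # kinetic energy, J

print(f"mass   = {mass:.1f} kg")
print(f"energy = {energy / 1e6:.1f} MJ")
print(f"energy per frontal area = {energy / area / 1e9:.1f} GJ/m^2")
```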
The penetrator length plays a large role in determining the ultimate depth of penetration. Generally, a penetrator is incapable of penetrating deeper than its own length, as the sheer stress of impact and perforation ablates it. This has led to the current designs which resemble a long metal arrow.
For monobloc penetrators made of a single material, a perforation formula devised by Wili Odermatt and W. Lanz can calculate the penetration depth of an APFSDS round.
In 1982, an analytical investigation drawing from concepts of gas dynamics and experiments on target penetration led to the conclusion on the efficiency of impactors that penetration is deeper using unconventional three-dimensional shapes.
The opposite method of KE-penetrators uses chemical energy penetrators. Two types of such shells are in use: high-explosive anti-tank (HEAT) and high-explosive squash head (HESH). They have been widely used against armour in the past and still have a role but are less effective against modern composite armour, such as Chobham as used on main battle tanks today. Main battle tanks usually use KE-penetrators, while HEAT is mainly found in missile systems that are shoulder-launched or vehicle-mounted, and HESH is usually favored for fortification demolition.
See also
Compact Kinetic Energy Missile
Earthquake bomb
Flechette
Hellfire R9X
Impact depth
Kinetic bombardment
MGM-166 LOSAT
Röchling shell
Notes
References
Anti-tank rounds
Projectiles
Ammunition
Collision
Euclidean geometry
Euclidean geometry is a mathematical system attributed to ancient Greek mathematician Euclid, which he described in his textbook on geometry, Elements. Euclid's approach consists in assuming a small set of intuitively appealing axioms (postulates) and deducing many other propositions (theorems) from these. Although many of Euclid's results had been stated earlier, Euclid was the first to organize these propositions into a logical system in which each result is proved from axioms and previously proved theorems.
The Elements begins with plane geometry, still taught in secondary school (high school) as the first axiomatic system and the first examples of mathematical proofs. It goes on to the solid geometry of three dimensions. Much of the Elements states results of what are now called algebra and number theory, explained in geometrical language.
For more than two thousand years, the adjective "Euclidean" was unnecessary because
Euclid's axioms seemed so intuitively obvious (with the possible exception of the parallel postulate) that theorems proved from them were deemed absolutely true, and thus no other sorts of geometry were possible. Today, however, many other self-consistent non-Euclidean geometries are known, the first ones having been discovered in the early 19th century. An implication of Albert Einstein's theory of general relativity is that physical space itself is not Euclidean, and Euclidean space is a good approximation for it only over short distances (relative to the strength of the gravitational field).
Euclidean geometry is an example of synthetic geometry, in that it proceeds logically from axioms describing basic properties of geometric objects such as points and lines, to propositions about those objects. This is in contrast to analytic geometry, introduced almost 2,000 years later by René Descartes, which uses coordinates to express geometric properties by means of algebraic formulas.
The Elements
The Elements is mainly a systematization of earlier knowledge of geometry. Its improvement over earlier treatments was rapidly recognized, with the result that there was little interest in preserving the earlier ones, and they are now nearly all lost.
There are 13 books in the Elements:
Books I–IV and VI discuss plane geometry. Many results about plane figures are proved, for example, "In any triangle, two angles taken together in any manner are less than two right angles." (Book I proposition 17) and the Pythagorean theorem "In right-angled triangles the square on the side subtending the right angle is equal to the squares on the sides containing the right angle." (Book I, proposition 47)
Books V and VII–X deal with number theory, with numbers treated geometrically as lengths of line segments or areas of surface regions. Notions such as prime numbers and rational and irrational numbers are introduced. It is proved that there are infinitely many prime numbers.
Books XI–XIII concern solid geometry. A typical result is the 1:3 ratio between the volume of a cone and a cylinder with the same height and base. The platonic solids are constructed.
Axioms
Euclidean geometry is an axiomatic system, in which all theorems ("true statements") are derived from a small number of simple axioms. Until the advent of non-Euclidean geometry, these axioms were considered to be obviously true in the physical world, so that all the theorems would be equally true. However, Euclid's reasoning from assumptions to conclusions remains valid independently from the physical reality.
Near the beginning of the first book of the Elements, Euclid gives five postulates (axioms) for plane geometry, stated in terms of constructions (as translated by Thomas Heath):
Let the following be postulated:
To draw a straight line from any point to any point.
To produce (extend) a finite straight line continuously in a straight line.
To describe a circle with any centre and distance (radius).
That all right angles are equal to one another.
[The parallel postulate]: That, if a straight line falling on two straight lines make the interior angles on the same side less than two right angles, the two straight lines, if produced indefinitely, meet on that side on which the angles are less than two right angles.
Although Euclid explicitly only asserts the existence of the constructed objects, in his reasoning he also implicitly assumes them to be unique.
The Elements also include the following five "common notions":
Things that are equal to the same thing are also equal to one another (the transitive property of a Euclidean relation).
If equals are added to equals, then the wholes are equal (Addition property of equality).
If equals are subtracted from equals, then the differences are equal (subtraction property of equality).
Things that coincide with one another are equal to one another (reflexive property).
The whole is greater than the part.
Modern scholars agree that Euclid's postulates do not provide the complete logical foundation that Euclid required for his presentation. Modern treatments use more extensive and complete sets of axioms.
Parallel postulate
To the ancients, the parallel postulate seemed less obvious than the others. They aspired to create a system of absolutely certain propositions, and to them, it seemed as if the parallel line postulate required proof from simpler statements. It is now known that such a proof is impossible since one can construct consistent systems of geometry (obeying the other axioms) in which the parallel postulate is true, and others in which it is false. Euclid himself seems to have considered it as being qualitatively different from the others, as evidenced by the organization of the Elements: his first 28 propositions are those that can be proved without it.
Many alternative axioms can be formulated which are logically equivalent to the parallel postulate (in the context of the other axioms). For example, Playfair's axiom states:
In a plane, through a point not on a given straight line, at most one line can be drawn that never meets the given line.
The "at most" clause is all that is needed since it can be proved from the remaining axioms that at least one parallel line exists.
Methods of proof
Euclidean Geometry is constructive. Postulates 1, 2, 3, and 5 assert the existence and uniqueness of certain geometric figures, and these assertions are of a constructive nature: that is, we are not only told that certain things exist, but are also given methods for creating them with no more than a compass and an unmarked straightedge. In this sense, Euclidean geometry is more concrete than many modern axiomatic systems such as set theory, which often assert the existence of objects without saying how to construct them, or even assert the existence of objects that cannot be constructed within the theory. Strictly speaking, the lines on paper are models of the objects defined within the formal system, rather than instances of those objects. For example, a Euclidean straight line has no width, but any real drawn line will have. Though nearly all modern mathematicians consider nonconstructive proofs just as sound as constructive ones, they are often considered less elegant, intuitive, or practically useful. Euclid's constructive proofs often supplanted fallacious nonconstructive ones, e.g. some Pythagorean proofs that assumed all numbers are rational, usually requiring a statement such as "Find the greatest common measure of ..."
Euclid often used proof by contradiction.
Notation and terminology
Naming of points and figures
Points are customarily named using capital letters of the alphabet. Other figures, such as lines, triangles, or circles, are named by listing a sufficient number of points to pick them out unambiguously from the relevant figure, e.g., triangle ABC would typically be a triangle with vertices at points A, B, and C.
Complementary and supplementary angles
Angles whose sum is a right angle are called complementary. Complementary angles are formed when a ray shares the same vertex and is pointed in a direction that is in between the two original rays that form the right angle. The number of rays in between the two original rays is infinite.
Angles whose sum is a straight angle are supplementary. Supplementary angles are formed when a ray shares the same vertex and is pointed in a direction that is in between the two original rays that form the straight angle (180 degree angle). The number of rays in between the two original rays is infinite.
Modern versions of Euclid's notation
In modern terminology, angles would normally be measured in degrees or radians.
Modern school textbooks often define separate figures called lines (infinite), rays (semi-infinite), and line segments (of finite length). Euclid, rather than discussing a ray as an object that extends to infinity in one direction, would normally use locutions such as "if the line is extended to a sufficient length", although he occasionally referred to "infinite lines". A "line" for Euclid could be either straight or curved, and he used the more specific term "straight line" when necessary.
Some important or well known results
Pons asinorum
The pons asinorum (bridge of asses) states that in isosceles triangles the angles at the base equal one another, and, if the equal straight lines are produced further, then the angles under the base equal one another. Its name may be attributed to its frequent role as the first real test in the Elements of the intelligence of the reader and as a bridge to the harder propositions that followed. It might also be so named because of the geometrical figure's resemblance to a steep bridge that only a sure-footed donkey could cross.
Congruence of triangles
Triangles are congruent if they have all three sides equal (SSS), two sides and the angle between them equal (SAS), or two angles and a side equal (ASA) (Book I, propositions 4, 8, and 26). Triangles with three equal angles (AAA) are similar, but not necessarily congruent. Also, triangles with two equal sides and an adjacent angle are not necessarily equal or congruent.
Triangle angle sum
The sum of the angles of a triangle is equal to a straight angle (180 degrees). This causes an equilateral triangle to have three interior angles of 60 degrees. Also, it causes every triangle to have at least two acute angles and up to one obtuse or right angle.
Pythagorean theorem
The celebrated Pythagorean theorem (book I, proposition 47) states that in any right triangle, the area of the square whose side is the hypotenuse (the side opposite the right angle) is equal to the sum of the areas of the squares whose sides are the two legs (the two sides that meet at a right angle).
Thales' theorem
Thales' theorem, named after Thales of Miletus states that if A, B, and C are points on a circle where the line AC is a diameter of the circle, then the angle ABC is a right angle. Cantor supposed that Thales proved his theorem by means of Euclid Book I, Prop. 32 after the manner of Euclid Book III, Prop. 31.
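A quick numerical check of Thales' theorem, placing A and C at the ends of a diameter and sampling arbitrary third points B on the circle:

```python
import numpy as np

radius = 2.0
A = np.array([-radius, 0.0])
C = np.array([radius, 0.0])

for theta in (0.3, 1.1, 2.5):                 # arbitrary points on the circle
    B = radius * np.array([np.cos(theta), np.sin(theta)])
    u, v = A - B, C - B
    cos_angle = u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
    angle = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))
    print(angle)                              # 90.0 each time
```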
Scaling of area and volume
In modern terminology, the area of a plane figure is proportional to the square of any of its linear dimensions, A ∝ L², and the volume of a solid to the cube, V ∝ L³. Euclid proved these results in various special cases such as the area of a circle and the volume of a parallelepipedal solid. Euclid determined some, but not all, of the relevant constants of proportionality. For instance, it was his successor Archimedes who proved that a sphere has 2/3 the volume of the circumscribing cylinder.
System of measurement and arithmetic
Euclidean geometry has two fundamental types of measurements: angle and distance. The angle scale is absolute, and Euclid uses the right angle as his basic unit, so that, for example, a 45-degree angle would be referred to as half of a right angle. The distance scale is relative; one arbitrarily picks a line segment with a certain nonzero length as the unit, and other distances are expressed in relation to it. Addition of distances is represented by a construction in which one line segment is copied onto the end of another line segment to extend its length, and similarly for subtraction.
Measurements of area and volume are derived from distances. For example, a rectangle with a width of 3 and a length of 4 has an area that represents the product, 12. Because this geometrical interpretation of multiplication was limited to three dimensions, there was no direct way of interpreting the product of four or more numbers, and Euclid avoided such products, although they are implied, for example in the proof of book IX, proposition 20.
Euclid refers to a pair of lines, or a pair of planar or solid figures, as "equal" (ἴσος) if their lengths, areas, or volumes are equal respectively, and similarly for angles. The stronger term "congruent" refers to the idea that an entire figure is the same size and shape as another figure. Alternatively, two figures are congruent if one can be moved on top of the other so that it matches up with it exactly. (Flipping it over is allowed.) Thus, for example, a 2x6 rectangle and a 3x4 rectangle are equal but not congruent, and the letter R is congruent to its mirror image. Figures that would be congruent except for their differing sizes are referred to as similar. Corresponding angles in a pair of similar shapes are equal and corresponding sides are in proportion to each other.
In engineering
Design and Analysis
Stress Analysis: Euclidean geometry is pivotal in determining stress distribution in mechanical components, which is essential for ensuring structural integrity and durability.
Gear Design: Gear - The design of gears, a crucial element in many mechanical systems, relies heavily on Euclidean geometry to ensure proper tooth shape and engagement for efficient power transmission.
Heat Exchanger Design: Heat exchanger - In thermal engineering, Euclidean geometry is used to design heat exchangers, where the geometric configuration greatly influences thermal efficiency. See shell-and-tube heat exchangers and plate heat exchangers for more details.
Lens Design: Lens - In optical engineering, Euclidean geometry is critical in the design of lenses, where precise geometric shapes determine the focusing properties. Geometric optics analyzes the focusing of light by lenses and mirrors.
Dynamics
Vibration Analysis: Vibration - Euclidean geometry is essential in analyzing and understanding the vibrations in mechanical systems, aiding in the design of systems that can withstand or utilize these vibrations effectively.
Wing Design: Aircraft Wing Design - The application of Euclidean geometry in aerodynamics is evident in aircraft wing design, airfoils, and hydrofoils where geometric shape directly impacts lift and drag characteristics.
Satellite Orbits: Euclidean geometry helps in calculating and predicting the orbits of satellites, essential for successful space missions and satellite operations. Also see astrodynamics, celestial mechanics, and elliptic orbit.
CAD Systems
3D Modeling: In CAD (computer-aided design) systems, Euclidean geometry is fundamental for creating accurate 3D models of mechanical parts. These models are crucial for visualizing and testing designs before manufacturing.
Design and Manufacturing: Much of CAM (computer-aided manufacturing) relies on Euclidean geometry. The design geometry in CAD/CAM typically consists of shapes bounded by planes, cylinders, cones, tori, and other similar Euclidean forms. Today, CAD/CAM is essential in the design of a wide range of products, from cars and airplanes to ships and smartphones.
Evolution of Drafting Practices: Historically, advanced Euclidean geometry, including theorems like Pascal's theorem and Brianchon's theorem, was integral to drafting practices. However, with the advent of modern CAD systems, such in-depth knowledge of these theorems is less necessary in contemporary design and manufacturing processes.
Circuit Design
PCB Layouts: Printed Circuit Board (PCB) Design utilizes Euclidean geometry for the efficient placement and routing of components, ensuring functionality while optimizing space. Efficient layout of electronic components on PCBs is critical for minimizing signal interference and optimizing circuit performance.
Electromagnetic and Fluid Flow Fields
Antenna Design: Euclidean geometry helps in designing antennas, where the spatial arrangement and dimensions directly affect antenna and array performance in transmitting and receiving electromagnetic waves.
Field Theory: Complex Potential Flow - In the study of inviscid flow fields and electromagnetic fields, Euclidean geometry aids in visualizing and solving potential flow problems. This is essential for understanding fluid velocity field and electromagnetic field interactions in three-dimensional space. The relationship of which is characterized by an irrotational solenoidal field or a conservative vector field.
Controls
Control System Analysis: Control Systems - The application of Euclidean geometry in control theory helps in the analysis and design of control systems, particularly in understanding and optimizing system stability and response.
Calculation Tools: Jacobian - Euclidean geometry is integral in using Jacobian matrices for transformations and control systems in both mechanical and electrical engineering fields, providing insights into system behavior and properties. The Jacobian serves as a linearized design matrix in statistical regression and curve fitting; see non-linear least squares. The Jacobian is also used in random matrices, moment, statistics, and diagnostics.
Other general applications
Because of Euclidean geometry's fundamental status in mathematics, it is impractical to give more than a representative sampling of applications here.
As suggested by the etymology of the word, one of the earliest reasons for interest in and also one of the most common current uses of geometry is surveying. In addition it has been used in classical mechanics and the cognitive and computational approaches to visual perception of objects. Certain practical results from Euclidean geometry (such as the right-angle property of the 3-4-5 triangle) were used long before they were proved formally. The fundamental types of measurements in Euclidean geometry are distances and angles, both of which can be measured directly by a surveyor. Historically, distances were often measured by chains, such as Gunter's chain, and angles using graduated circles and, later, the theodolite.
An application of Euclidean solid geometry is the determination of packing arrangements, such as the problem of finding the most efficient packing of spheres in n dimensions. This problem has applications in error detection and correction.
Geometry is used extensively in architecture.
Geometry can be used to design origami. Some classical construction problems of geometry are impossible using compass and straightedge, but can be solved using origami.
Later history
Archimedes and Apollonius
Archimedes, a colorful figure about whom many historical anecdotes are recorded, is remembered along with Euclid as one of the greatest of ancient mathematicians. Although the foundations of his work were put in place by Euclid, his work, unlike Euclid's, is believed to have been entirely original. He proved equations for the volumes and areas of various figures in two and three dimensions, and enunciated the Archimedean property of finite numbers.
Apollonius of Perga is mainly known for his investigation of conic sections.
17th century: Descartes
René Descartes (1596–1650) developed analytic geometry, an alternative method for formalizing geometry which focused on turning geometry into algebra.
In this approach, a point on a plane is represented by its Cartesian (x, y) coordinates, a line is represented by its equation, and so on.
In Euclid's original approach, the Pythagorean theorem follows from Euclid's axioms. In the Cartesian approach, the axioms are the axioms of algebra, and the equation expressing the Pythagorean theorem is then a definition of one of the terms in Euclid's axioms, which are now considered theorems.
The equation
|PQ| = √((qx − px)² + (qy − py)²)
defining the distance between two points P = (px, py) and Q = (qx, qy) is then known as the Euclidean metric, and other metrics define non-Euclidean geometries.
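A direct transcription of this metric into code (the function name is an arbitrary choice):

```python
import math

def euclidean_distance(p, q):
    """Distance between P = (px, py) and Q = (qx, qy) under the Euclidean metric."""
    px, py = p
    qx, qy = q
    return math.sqrt((qx - px) ** 2 + (qy - py) ** 2)

print(euclidean_distance((0, 0), (3, 4)))   # 5.0, the classic 3-4-5 right triangle
```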
In terms of analytic geometry, the restriction of classical geometry to compass and straightedge constructions means a restriction to first- and second-order equations, e.g., y = 2x + 1 (a line), or x² + y² = 7 (a circle).
Also in the 17th century, Girard Desargues, motivated by the theory of perspective, introduced the concept of idealized points, lines, and planes at infinity. The result can be considered as a type of generalized geometry, projective geometry, but it can also be used to produce proofs in ordinary Euclidean geometry in which the number of special cases is reduced.
18th century
Geometers of the 18th century struggled to define the boundaries of the Euclidean system. Many tried in vain to prove the fifth postulate from the first four. By 1763, at least 28 different proofs had been published, but all were found incorrect.
Leading up to this period, geometers also tried to determine what constructions could be accomplished in Euclidean geometry. For example, the problem of trisecting an angle with a compass and straightedge is one that naturally occurs within the theory, since the axioms refer to constructive operations that can be carried out with those tools. However, centuries of efforts failed to find a solution to this problem, until Pierre Wantzel published a proof in 1837 that such a construction was impossible. Other constructions that were proved impossible include doubling the cube and squaring the circle. In the case of doubling the cube, the impossibility of the construction originates from the fact that the compass and straightedge method involve equations whose order is an integral power of two, while doubling a cube requires the solution of a third-order equation.
Euler discussed a generalization of Euclidean geometry called affine geometry, which retains the fifth postulate unmodified while weakening postulates three and four in a way that eliminates the notions of angle (whence right triangles become meaningless) and of equality of length of line segments in general (whence circles become meaningless) while retaining the notions of parallelism as an equivalence relation between lines, and equality of length of parallel line segments (so line segments continue to have a midpoint).
19th century
In the early 19th century, Carnot and Möbius systematically developed the use of signed angles and line segments as a way of simplifying and unifying results.
Higher dimensions
In the 1840s William Rowan Hamilton developed the quaternions, and John T. Graves and Arthur Cayley the octonions. These are normed algebras which extend the complex numbers. Later it was understood that the quaternions are also a Euclidean geometric system with four real Cartesian coordinates. Cayley used quaternions to study rotations in 4-dimensional Euclidean space.
At mid-century Ludwig Schläfli developed the general concept of Euclidean space, extending Euclidean geometry to higher dimensions. He defined polyschemes, later called polytopes, which are the higher-dimensional analogues of polygons and polyhedra. He developed their theory and discovered all the regular polytopes, i.e. the n-dimensional analogues of regular polygons and Platonic solids. He found there are six regular convex polytopes in dimension four, and three in all higher dimensions.
Schläfli performed this work in relative obscurity and it was published in full only posthumously in 1901. It had little influence until it was rediscovered and fully documented in 1948 by H.S.M. Coxeter.
In 1878 William Kingdon Clifford introduced what is now termed geometric algebra, unifying Hamilton's quaternions with Hermann Grassmann's algebra and revealing the geometric nature of these systems, especially in four dimensions. The operations of geometric algebra have the effect of mirroring, rotating, translating, and mapping the geometric objects that are being modeled to new positions. The Clifford torus on the surface of the 3-sphere is the simplest and most symmetric flat embedding of the Cartesian product of two circles (in the same sense that the surface of a cylinder is "flat").
Non-Euclidean geometry
The century's most influential development in geometry occurred when, around 1830, János Bolyai and Nikolai Ivanovich Lobachevsky separately published work on non-Euclidean geometry, in which the parallel postulate is not valid. Since non-Euclidean geometry is provably relatively consistent with Euclidean geometry, the parallel postulate cannot be proved from the other postulates.
In the 19th century, it was also realized that Euclid's ten axioms and common notions do not suffice to prove all of the theorems stated in the Elements. For example, Euclid assumed implicitly that any line contains at least two points, but this assumption cannot be proved from the other axioms, and therefore must be an axiom itself. The very first geometric proof in the Elements is that any line segment is part of a triangle; Euclid constructs this in the usual way, by drawing circles around both endpoints and taking their intersection as the third vertex. His axioms, however, do not guarantee that the circles actually intersect, because they do not assert the geometrical property of continuity, which in Cartesian terms is equivalent to the completeness property of the real numbers. Starting with Moritz Pasch in 1882, many improved axiomatic systems for geometry have been proposed, the best known being those of Hilbert, George Birkhoff, and Tarski.
20th century and relativity
Einstein's theory of special relativity involves a four-dimensional space-time, the Minkowski space, which is non-Euclidean. This shows that non-Euclidean geometries, which had been introduced a few years earlier for showing that the parallel postulate cannot be proved, are also useful for describing the physical world.
However, the three-dimensional "space part" of the Minkowski space remains the space of Euclidean geometry. This is not the case with general relativity, for which the geometry of the space part of space-time is not Euclidean geometry. For example, if a triangle is constructed out of three rays of light, then in general the interior angles do not add up to 180 degrees due to gravity. A relatively weak gravitational field, such as the Earth's or the Sun's, is represented by a metric that is approximately, but not exactly, Euclidean. Until the 20th century, there was no technology capable of detecting these deviations in rays of light from Euclidean geometry, but Einstein predicted that such deviations would exist. They were later verified by observations such as the slight bending of starlight by the Sun during a solar eclipse in 1919, and such considerations are now an integral part of the software that runs the GPS system.
As a description of the structure of space
Euclid believed that his axioms were self-evident statements about physical reality. Euclid's proofs depend upon assumptions perhaps not obvious in Euclid's fundamental axioms, in particular that certain movements of figures do not change their geometrical properties such as the lengths of sides and interior angles, the so-called Euclidean motions, which include translations, reflections and rotations of figures. Taken as a physical description of space, postulate 2 (extending a line) asserts that space does not have holes or boundaries; postulate 4 (equality of right angles) says that space is isotropic and figures may be moved to any location while maintaining congruence; and postulate 5 (the parallel postulate) that space is flat (has no intrinsic curvature).
As discussed above, Albert Einstein's theory of relativity significantly modifies this view.
The ambiguous character of the axioms as originally formulated by Euclid makes it possible for different commentators to disagree about some of their other implications for the structure of space, such as whether or not it is infinite (see below) and what its topology is. Modern, more rigorous reformulations of the system typically aim for a cleaner separation of these issues. Interpreting Euclid's axioms in the spirit of this more modern approach, axioms 1–4 are consistent with either infinite or finite space (as in elliptic geometry), and all five axioms are consistent with a variety of topologies (e.g., a plane, a cylinder, or a torus for two-dimensional Euclidean geometry).
Treatment of infinity
Infinite objects
Euclid sometimes distinguished explicitly between "finite lines" (e.g., Postulate 2) and "infinite lines" (book I, proposition 12). However, he typically did not make such distinctions unless they were necessary. The postulates do not explicitly refer to infinite lines, although for example some commentators interpret postulate 3, existence of a circle with any radius, as implying that space is infinite.
The notion of infinitesimal quantities had previously been discussed extensively by the Eleatic School, but nobody had been able to put them on a firm logical basis, with paradoxes such as Zeno's paradox occurring that had not been resolved to universal satisfaction. Euclid used the method of exhaustion rather than infinitesimals.
Later ancient commentators, such as Proclus (410–485 CE), treated many questions about infinity as issues demanding proof and, e.g., Proclus claimed to prove the infinite divisibility of a line, based on a proof by contradiction in which he considered the cases of even and odd numbers of points constituting it.
At the turn of the 20th century, Otto Stolz, Paul du Bois-Reymond, Giuseppe Veronese, and others produced controversial work on non-Archimedean models of Euclidean geometry, in which the distance between two points may be infinite or infinitesimal, in the Newton–Leibniz sense. Fifty years later, Abraham Robinson provided a rigorous logical foundation for Veronese's work.
Infinite processes
Ancient geometers may have considered the parallel postulate – that two parallel lines do not ever intersect – less certain than the others because it makes a statement about infinitely remote regions of space, and so cannot be physically verified.
The modern formulation of proof by induction was not developed until the 17th century, but some later commentators consider it implicit in some of Euclid's proofs, e.g., the proof of the infinitude of primes.
Supposed paradoxes involving infinite series, such as Zeno's paradox, predated Euclid. Euclid avoided such discussions, giving, for example, the expression for the partial sums of the geometric series in IX.35 without commenting on the possibility of letting the number of terms become infinite.
Logical basis
Classical logic
Euclid frequently used the method of proof by contradiction, and therefore the traditional presentation of Euclidean geometry assumes classical logic, in which every proposition is either true or false, i.e., for any proposition P, the proposition "P or not P" is automatically true.
Modern standards of rigor
Placing Euclidean geometry on a solid axiomatic basis was a preoccupation of mathematicians for centuries. The role of primitive notions, or undefined concepts, was clearly put forward by Alessandro Padoa of the Peano delegation at the 1900 Paris conference.
That is, mathematics is context-independent knowledge within a hierarchical framework, a point also made by Bertrand Russell.
Such foundational approaches range between foundationalism and formalism.
Axiomatic formulations
Euclid's axioms: In his dissertation to Trinity College, Cambridge, Bertrand Russell summarized the changing role of Euclid's geometry in the minds of philosophers up to that time. It was a conflict between certain knowledge, independent of experiment, and empiricism, requiring experimental input. This issue became clear as it was discovered that the parallel postulate was not necessarily valid and its applicability was an empirical matter, deciding whether the applicable geometry was Euclidean or non-Euclidean.
Hilbert's axioms: Hilbert's axioms had the goal of identifying a simple and complete set of independent axioms from which the most important geometric theorems could be deduced. The outstanding objectives were to make Euclidean geometry rigorous (avoiding hidden assumptions) and to make clear the ramifications of the parallel postulate.
Birkhoff's axioms: Birkhoff proposed four postulates for Euclidean geometry that can be confirmed experimentally with scale and protractor. This system relies heavily on the properties of the real numbers. The notions of angle and distance become primitive concepts.
Tarski's axioms: Alfred Tarski (1902–1983) and his students defined elementary Euclidean geometry as the geometry that can be expressed in first-order logic and does not depend on set theory for its logical basis, in contrast to Hilbert's axioms, which involve point sets. Tarski proved that his axiomatic formulation of elementary Euclidean geometry is consistent and complete in a certain sense: there is an algorithm that can show every proposition to be either true or false. (This does not violate Gödel's theorem, because Euclidean geometry cannot describe a sufficient amount of arithmetic for the theorem to apply.) This is equivalent to the decidability of real closed fields, of which elementary Euclidean geometry is a model.
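As an illustration of this kind of decidability (a modern sketch, not part of Tarski's own work, assuming the z3-solver Python package), an SMT solver can settle sentences of real arithmetic; the snippet below checks a universally quantified inequality by showing that its negation has no real solution.

```python
from z3 import Reals, Solver, unsat

# Claim (a theorem of the first-order theory of the reals):
#   for all real x, y:  x^2 + y^2 >= 2*x*y.
# We decide it by asking whether the negation is satisfiable.
x, y = Reals('x y')
s = Solver()
s.add(x * x + y * y < 2 * x * y)   # negation of the claim

if s.check() == unsat:
    print("No counterexample exists: the claim is a theorem.")
else:
    print("Counterexample found:", s.model())
```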
See also
Absolute geometry
Analytic geometry
Birkhoff's axioms
Cartesian coordinate system
Hilbert's axioms
Incidence geometry
List of interactive geometry software
Metric space
Non-Euclidean geometry
Ordered geometry
Parallel postulate
Type theory
Classical theorems
Angle bisector theorem
Butterfly theorem
Ceva's theorem
Heron's formula
Menelaus' theorem
Nine-point circle
Pythagorean theorem
Notes
References
In 3 vols.: vol. 1, vol. 2, vol. 3. Heath's authoritative translation of Euclid's Elements, plus his extensive historical research and detailed commentary throughout the text.
External links
Kiran Kedlaya, Geometry Unbound (a treatment using analytic geometry; PDF format, GFDL licensed)
Greek inventions
Rarefaction
Rarefaction is the reduction of an item's density, the opposite of compression. Like compression, which can travel in waves (sound waves, for instance), rarefaction waves also exist in nature. A common rarefaction wave is the area of low relative pressure following a shock wave.
Rarefaction waves expand with time (much like sea waves spread out as they reach a beach); in most cases rarefaction waves keep the same overall profile ('shape') at all times throughout the wave's movement: it is a self-similar expansion. Each part of the wave travels at the local speed of sound, in the local medium. This expansion behaviour contrasts with that of pressure increases, which get narrower with time until they steepen into shock waves.
Physical examples
A natural example of rarefaction occurs in the layers of Earth's atmosphere. Because the atmosphere has mass, most atmospheric matter is nearer to the Earth due to the Earth's gravitation. Therefore, air at higher layers of the atmosphere is less dense, or rarefied, relative to air at lower layers. Thus, rarefaction can refer either to a reduction in density over space at a single point of time, or a reduction of density over time for one particular area.
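A rough numerical sketch of this atmospheric rarefaction, assuming an isothermal atmosphere with a scale height of about 8.5 km (a common textbook simplification, not an exact model):

```python
import math

# Isothermal (exponential) model of atmospheric density:
#   rho(h) = rho0 * exp(-h / H)
rho0 = 1.225   # sea-level air density, kg/m^3
H = 8500.0     # approximate scale height, m

for h in (0, 2000, 5000, 10000, 20000):
    rho = rho0 * math.exp(-h / H)
    print(f"altitude {h:>6} m: density ~ {rho:.3f} kg/m^3")
```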
Rarefaction can be easily observed by compressing a spring and releasing it.
In manufacturing
Modern construction of guitars is an example of using rarefaction in manufacturing. By forcing the reduction of density (loss of oils and other impurities) in the cellular structure of the soundboard, a rarefied guitar top produces a tonal decompression affecting the sound of the instrument, mimicking aged wood.
See also
Longitudinal wave
P-wave
Prandtl–Meyer expansion fan
Citations
Sound
Acoustics
Waves
Conservation equations
Poincaré group
The Poincaré group, named after Henri Poincaré (1905), was first defined by Hermann Minkowski (1908) as the isometry group of Minkowski spacetime. It is a ten-dimensional non-abelian Lie group that is of importance as a model in our understanding of the most basic fundamentals of physics.
Overview
The Poincaré group consists of all coordinate transformations of Minkowski space that do not change the spacetime interval between events. For example, if everything were postponed by two hours, including the two events and the path you took to go from one to the other, then the time interval between the events recorded by a stopwatch that you carried with you would be the same. Or if everything were shifted five kilometres to the west, or turned 60 degrees to the right, you would also see no change in the interval. It turns out that the proper length of an object is also unaffected by such a shift.
In total, there are ten degrees of freedom for such transformations. They may be thought of as translation through time or space (four degrees, one per dimension); reflection through a plane (three degrees, the freedom in orientation of this plane); or a "boost" in any of the three spatial directions (three degrees). Composition of transformations is the operation of the Poincaré group, with rotations being produced as the composition of an even number of reflections.
In classical physics, the Galilean group is a comparable ten-parameter group that acts on absolute time and space. Instead of boosts, it features shear mappings to relate co-moving frames of reference.
In general relativity, i.e. under the effects of gravity, Poincaré symmetry applies only locally. A treatment of symmetries in general relativity is not in the scope of this article.
Poincaré symmetry
Poincaré symmetry is the full symmetry of special relativity. It includes:
translations (displacements) in time and space, forming the abelian Lie group of spacetime translations (P);
rotations in space, forming the non-abelian Lie group of three-dimensional rotations (J);
boosts, transformations connecting two uniformly moving bodies (K).
The last two symmetries, J and K, together make the Lorentz group (see also Lorentz invariance); the semi-direct product of the spacetime translations group and the Lorentz group then produce the Poincaré group. Objects that are invariant under this group are then said to possess Poincaré invariance or relativistic invariance.
The 10 generators (in four spacetime dimensions) associated with the Poincaré symmetry imply, by Noether's theorem, 10 conservation laws:
1 for the energy – associated with translations through time
3 for the momentum – associated with translations through spatial dimensions
3 for the angular momentum – associated with rotations between spatial dimensions
3 for a quantity involving the velocity of the center of mass – associated with hyperbolic rotations between each spatial dimension and time
Poincaré group
The Poincaré group is the group of Minkowski spacetime isometries. It is a ten-dimensional noncompact Lie group. The four-dimensional abelian group of spacetime translations is a normal subgroup, while the six-dimensional Lorentz group is also a subgroup, the stabilizer of the origin. The Poincaré group itself is the minimal subgroup of the affine group which includes all translations and Lorentz transformations. More precisely, it is a semidirect product of the spacetime translations group and the Lorentz group,
$$\mathbf{R}^{1,3} \rtimes \mathrm{O}(1,3),$$
with group multiplication
$$(\alpha, f) \cdot (\beta, g) = (\alpha + f \cdot \beta,\; f \cdot g).$$
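A minimal numerical sketch of this semidirect-product structure (illustrative only; the rapidity, translation vectors and test events below are arbitrary choices): an element is stored as a pair (Λ, a) of a Lorentz matrix and a translation vector, pairs compose by the multiplication law above, and the action x ↦ Λx + a preserves the Minkowski interval.

```python
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])   # Minkowski metric, signature (+, -, -, -)

def boost_x(rapidity):
    """Lorentz boost along x as a 4x4 matrix acting on (t, x, y, z)."""
    ch, sh = np.cosh(rapidity), np.sinh(rapidity)
    L = np.eye(4)
    L[0, 0] = L[1, 1] = ch
    L[0, 1] = L[1, 0] = sh
    return L

def compose(g1, g2):
    """Semidirect-product multiplication: (L1, a1)(L2, a2) = (L1 L2, a1 + L1 a2)."""
    L1, a1 = g1
    L2, a2 = g2
    return (L1 @ L2, a1 + L1 @ a2)

def act(g, x):
    """Poincaré transformation of an event: x -> L x + a."""
    L, a = g
    return L @ x + a

def interval(x, y):
    d = x - y
    return d @ eta @ d

g1 = (boost_x(0.3), np.array([1.0, 2.0, 0.0, -1.0]))
g2 = (boost_x(-0.7), np.array([0.0, 0.5, 3.0, 0.0]))
x = np.array([1.0, 0.2, -0.4, 0.8])
y = np.array([2.5, 1.0, 0.0, -0.3])

g12 = compose(g1, g2)
# Acting with g1 after g2 is the same as acting with the composed element,
# and the spacetime interval between the two events is unchanged.
assert np.allclose(act(g1, act(g2, x)), act(g12, x))
print(interval(x, y), interval(act(g12, x), act(g12, y)))
```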
Another way of putting this is that the Poincaré group is a group extension of the Lorentz group by a vector representation of it; it is sometimes dubbed, informally, the inhomogeneous Lorentz group. In turn, it can also be obtained as a group contraction of the de Sitter group SO(4, 1), as the de Sitter radius goes to infinity.
Its positive energy unitary irreducible representations are indexed by mass (nonnegative number) and spin (integer or half integer) and are associated with particles in quantum mechanics (see Wigner's classification).
In accordance with the Erlangen program, the geometry of Minkowski space is defined by the Poincaré group: Minkowski space is considered as a homogeneous space for the group.
In quantum field theory, the universal cover of the Poincaré group
$$\mathbf{R}^{1,3} \rtimes \mathrm{SL}(2,\mathbb{C}),$$
which may be identified with the double cover
$$\mathbf{R}^{1,3} \rtimes \mathrm{Spin}(1,3),$$
is more important, because representations of $\mathrm{SO}(1,3)$ are not able to describe fields with spin 1/2; i.e. fermions. Here $\mathrm{SL}(2,\mathbb{C})$ is the group of 2 × 2 complex matrices with unit determinant, isomorphic to the Lorentz-signature spin group $\mathrm{Spin}(1,3)$.
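The two-to-one nature of this cover can be made concrete numerically (a sketch with an arbitrarily chosen SL(2, C) element): writing an event as the Hermitian matrix X = tI + xσ₁ + yσ₂ + zσ₃, whose determinant is the interval t² − x² − y² − z², any A with det A = 1 acts by X ↦ AXA†; this preserves the determinant, and A and −A give the same Lorentz transformation.

```python
import numpy as np

# Identity plus the three Pauli matrices.
sigma = [np.eye(2, dtype=complex),
         np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]

def to_matrix(v):
    """Map a 4-vector (t, x, y, z) to the Hermitian matrix t*I + x*s1 + y*s2 + z*s3."""
    return sum(c * s for c, s in zip(v, sigma))

def to_vector(X):
    """Recover the components using tr(sigma_i sigma_j) = 2*delta_ij."""
    return np.array([np.trace(s @ X).real / 2 for s in sigma])

A = np.array([[1.0, 0.3 + 0.2j],
              [0.0, 1.0]])        # an arbitrary SL(2, C) element: det A = 1

v = np.array([2.0, 0.5, -1.0, 0.3])
X = to_matrix(v)

v_plus = to_vector(A @ X @ A.conj().T)
v_minus = to_vector((-A) @ X @ (-A).conj().T)

eta = np.diag([1.0, -1.0, -1.0, -1.0])
print(np.isclose(v @ eta @ v, v_plus @ eta @ v_plus))   # interval preserved
print(np.allclose(v_plus, v_minus))                     # A and -A act identically
```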
Poincaré algebra
The Poincaré algebra is the Lie algebra of the Poincaré group. It is a Lie algebra extension of the Lie algebra of the Lorentz group. More specifically, the proper, orthochronous part of the Lorentz subgroup (its identity component), $\mathrm{SO}^{+}(1,3)$, is connected to the identity and is thus provided by the exponentiation of this Lie algebra. In component form, the Poincaré algebra is given by the commutation relations:
$$[P_\mu, P_\nu] = 0,$$
$$\tfrac{1}{i}\,[M_{\mu\nu}, P_\rho] = \eta_{\mu\rho}\,P_\nu - \eta_{\nu\rho}\,P_\mu,$$
$$\tfrac{1}{i}\,[M_{\mu\nu}, M_{\rho\sigma}] = \eta_{\mu\rho}\,M_{\nu\sigma} - \eta_{\mu\sigma}\,M_{\nu\rho} - \eta_{\nu\rho}\,M_{\mu\sigma} + \eta_{\nu\sigma}\,M_{\mu\rho},$$
where $P_\mu$ is the generator of translations, $M_{\mu\nu}$ is the generator of Lorentz transformations, and $\eta_{\mu\nu}$ is the Minkowski metric (see Sign convention).
The bottom commutation relation is the ("homogeneous") Lorentz group, consisting of rotations, $J_i = \tfrac{1}{2}\epsilon_{imn} M^{mn}$, and boosts, $K_i = M_{i0}$. In this notation, the entire Poincaré algebra is expressible in noncovariant (but more practical) language as
$$[J_m, P_n] = i \epsilon_{mnk} P_k, \qquad [J_m, P_0] = 0,$$
$$[K_m, P_n] = i \delta_{mn} P_0, \qquad [K_m, P_0] = i P_m,$$
$$[J_m, J_n] = i \epsilon_{mnk} J_k, \qquad [J_m, K_n] = i \epsilon_{mnk} K_k, \qquad [K_m, K_n] = -i \epsilon_{mnk} J_k,$$
where the bottom line commutator of two boosts is often referred to as a "Wigner rotation". The simplification permits reduction of the Lorentz subalgebra to $\mathfrak{su}(2) \oplus \mathfrak{su}(2)$ and efficient treatment of its associated representations. In terms of the physical parameters (energy, momentum, angular momentum, and the boost generators), the same relations hold with the appropriate factors of $\hbar$ and $c$ restored.
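A small numerical check of that boost-boost commutator (a sketch using real 4x4 matrix generators in one self-consistent convention, so overall signs and factors of i may differ from other presentations): the commutator of the x- and y-boost generators is a rotation generator about z.

```python
import numpy as np

# Real 4x4 generators acting on column vectors (t, x, y, z):
# boosts mix time with one spatial axis, rotations mix two spatial axes.
K_x = np.zeros((4, 4)); K_x[0, 1] = K_x[1, 0] = 1.0
K_y = np.zeros((4, 4)); K_y[0, 2] = K_y[2, 0] = 1.0
J_z = np.zeros((4, 4)); J_z[1, 2], J_z[2, 1] = -1.0, 1.0

def comm(a, b):
    return a @ b - b @ a

# The commutator of two boost generators is (minus) a rotation generator,
# which is the algebraic statement behind the "Wigner rotation".
print(np.allclose(comm(K_x, K_y), -J_z))   # True in this convention
```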
The Casimir invariants of this algebra are $P_\mu P^\mu$ and $W_\mu W^\mu$, where $W_\mu$ is the Pauli–Lubanski pseudovector; they serve as labels for the representations of the group.
The Poincaré group is the full symmetry group of any relativistic field theory. As a result, all elementary particles fall in representations of this group. These are usually specified by the four-momentum squared of each particle (i.e. its mass squared) and the intrinsic quantum numbers $J^{PC}$, where $J$ is the spin quantum number, $P$ is the parity and $C$ is the charge-conjugation quantum number. In practice, charge conjugation and parity are violated by many quantum field theories; where this occurs, $P$ and $C$ are forfeited. Since CPT symmetry is invariant in quantum field theory, a time-reversal quantum number may be constructed from those given.
As a topological space, the group has four connected components: the component of the identity; the time reversed component; the spatial inversion component; and the component which is both time-reversed and spatially inverted.
Other dimensions
The definitions above can be generalized to arbitrary dimensions in a straightforward manner. The $n$-dimensional Poincaré group is analogously defined by the semi-direct product
$$\mathbf{R}^{1,n-1} \rtimes \mathrm{O}(1, n-1)$$
with the analogous multiplication
$$(\alpha, f) \cdot (\beta, g) = (\alpha + f \cdot \beta,\; f \cdot g).$$
The Lie algebra retains its form, with indices $\mu$ and $\nu$ now taking values between $0$ and $n-1$. The alternative representation in terms of $J_i$ and $K_i$ has no analogue in higher dimensions.
See also
Euclidean group
Galilean group
Representation theory of the Poincaré group
Wigner's classification
Symmetry in quantum mechanics
Pauli–Lubanski pseudovector
Particle physics and representation theory
Continuous spin particle
super-Poincaré algebra
Notes
References
Lie groups
Group
Quantum field theory
Theory of relativity
Symmetry | 0.770076 | 0.993368 | 0.764969 |
Luminiferous aether
Luminiferous aether or ether (luminiferous meaning 'light-bearing') was the postulated medium for the propagation of light. It was invoked to explain the ability of the apparently wave-based light to propagate through empty space (a vacuum), something that waves should not be able to do. The assumption of a spatial plenum (space completely filled with matter) of luminiferous aether, rather than a spatial vacuum, provided the theoretical medium that was required by wave theories of light.
The aether hypothesis was the topic of considerable debate throughout its history, as it required the existence of an invisible and infinite material with no interaction with physical objects. As the nature of light was explored, especially in the 19th century, the physical qualities required of an aether became increasingly contradictory. By the late 19th century, the existence of the aether was being questioned, although there was no physical theory to replace it.
The negative outcome of the Michelson–Morley experiment (1887) suggested that the aether did not exist, a finding that was confirmed in subsequent experiments through the 1920s. This led to considerable theoretical work to explain the propagation of light without an aether. A major breakthrough was the special theory of relativity, which could explain why the experiment failed to see aether, but was more broadly interpreted to suggest that it was not needed. The Michelson–Morley experiment, along with the blackbody radiator and photoelectric effect, was a key experiment in the development of modern physics, which includes both relativity and quantum theory, the latter of which explains the particle-like nature of light.
The history of light and aether
Particles vs. waves
In the 17th century, Robert Boyle was a proponent of an aether hypothesis. According to Boyle, the aether consists of subtle particles, one sort of which explains the absence of vacuum and the mechanical interactions between bodies, and the other sort of which explains phenomena such as magnetism (and possibly gravity) that are, otherwise, inexplicable on the basis of purely mechanical interactions of macroscopic bodies, "though in the ether of the ancients there was nothing taken notice of but a diffused and very subtle substance; yet we are at present content to allow that there is always in the air a swarm of streams moving in a determinate course between the north pole and the south".
Christiaan Huygens's Treatise on Light (1690) hypothesized that light is a wave propagating through an aether. He and Isaac Newton could only envision light waves as being longitudinal, propagating like sound and other mechanical waves in fluids. However, longitudinal waves necessarily have only one form for a given propagation direction, rather than two polarizations like a transverse wave. Thus, longitudinal waves can not explain birefringence, in which two polarizations of light are refracted differently by a crystal. In addition, Newton rejected light as waves in a medium because such a medium would have to extend everywhere in space, and would thereby "disturb and retard the Motions of those great Bodies" (the planets and comets) and thus "as it is of no use, and hinders the Operation of Nature, and makes her languish, so there is no evidence for its Existence, and therefore it ought to be rejected".
Isaac Newton contended that light is made up of numerous small particles. This can explain such features as light's ability to travel in straight lines and reflect off surfaces. Newton imagined light particles as non-spherical "corpuscles", with different "sides" that give rise to birefringence. But the particle theory of light can not satisfactorily explain refraction and diffraction. To explain refraction, Newton's Third Book of Opticks (1st ed. 1704, 4th ed. 1730) postulated an "aethereal medium" transmitting vibrations faster than light, by which light, when overtaken, is put into "Fits of easy Reflexion and easy Transmission", which caused refraction and diffraction. Newton believed that these vibrations were related to heat radiation:
Is not the Heat of the warm Room convey'd through the vacuum by the Vibrations of a much subtiler Medium than Air, which after the Air was drawn out remained in the Vacuum? And is not this Medium the same with that Medium by which Light is refracted and reflected, and by whose Vibrations Light communicates Heat to Bodies, and is put into Fits of easy Reflexion and easy Transmission?
In contrast to the modern understanding that heat radiation and light are both electromagnetic radiation, Newton viewed heat and light as two different phenomena. He believed heat vibrations to be excited "when a Ray of Light falls upon the Surface of any pellucid Body". He wrote, "I do not know what this Aether is", but that if it consists of particles then they must be exceedingly smaller than those of Air, or even than those of Light: The exceeding smallness of its Particles may contribute to the greatness of the force by which those Particles may recede from one another, and thereby make that Medium exceedingly more rare and elastic than Air, and by consequence exceedingly less able to resist the motions of Projectiles, and exceedingly more able to press upon gross Bodies, by endeavoring to expand itself.
Bradley suggests particles
In 1720, James Bradley carried out a series of experiments attempting to measure stellar parallax by taking measurements of stars at different times of the year. As the Earth moves around the Sun, the apparent angle to a given distant spot changes. By measuring those angles the distance to the star can be calculated based on the known orbital circumference of the Earth around the Sun. He failed to detect any parallax, thereby placing a lower limit on the distance to stars.
During these experiments, Bradley also discovered a related effect; the apparent positions of the stars did change over the year, but not as expected. Instead of the apparent angle being maximized when the Earth was at either end of its orbit with respect to the star, the angle was maximized when the Earth was at its fastest sideways velocity with respect to the star. This effect is now known as stellar aberration.
Bradley explained this effect in the context of Newton's corpuscular theory of light, by showing that the aberration angle was given by simple vector addition of the Earth's orbital velocity and the velocity of the corpuscles of light, just as vertically falling raindrops strike a moving object at an angle. Knowing the Earth's velocity and the aberration angle enabled him to estimate the speed of light.
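The arithmetic behind the estimate is simple (a sketch using round modern values, not Bradley's own numbers): the aberration angle satisfies tan θ ≈ v/c, so the Earth's orbital speed of about 30 km/s gives an angle of roughly 20 arcseconds, and running the relation in reverse turns a measured angle into an estimate of c.

```python
import math

v_earth = 29.8e3   # Earth's orbital speed, m/s (modern value)
c = 2.998e8        # speed of light, m/s (modern value)

theta = math.atan(v_earth / c)            # aberration angle in radians
arcsec = math.degrees(theta) * 3600
print(f"predicted aberration: {arcsec:.1f} arcseconds")

# Inverting the relation, as Bradley effectively did:
theta_obs = math.radians(20.5 / 3600)     # an observed angle of about 20.5 arcseconds
print(f"implied speed of light: {v_earth / math.tan(theta_obs):.3e} m/s")
```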
Explaining stellar aberration in the context of an aether-based theory of light was regarded as more problematic. As the aberration relied on relative velocities, and the measured velocity was dependent on the motion of the Earth, the aether had to be remaining stationary with respect to the star as the Earth moved through it. This meant that the Earth could travel through the aether, a physical medium, with no apparent effect – precisely the problem that led Newton to reject a wave model in the first place.
Wave-theory triumphs
A century later, Thomas Young and Augustin-Jean Fresnel revived the wave theory of light when they pointed out that light could be a transverse wave rather than a longitudinal wave; the polarization of a transverse wave (like Newton's "sides" of light) could explain birefringence, and in the wake of a series of experiments on diffraction the particle model of Newton was finally abandoned. Physicists assumed, moreover, that, like mechanical waves, light waves required a medium for propagation, and thus required Huygens's idea of an aether "gas" permeating all space.
However, a transverse wave apparently required the propagating medium to behave as a solid, as opposed to a fluid. The idea of a solid that did not interact with other matter seemed a bit odd, and Augustin-Louis Cauchy suggested that perhaps there was some sort of "dragging", or "entrainment", but this made the aberration measurements difficult to understand. He also suggested that the absence of longitudinal waves implied that the aether had negative compressibility. George Green pointed out that such a fluid would be unstable. George Gabriel Stokes became a champion of the entrainment interpretation, developing a model in which the aether might, like pine pitch, be dilatant (fluid at slow speeds and rigid at fast speeds). Thus the Earth could move through it fairly freely, but it would be rigid enough to support light.
Electromagnetism
In 1856, Wilhelm Eduard Weber and Rudolf Kohlrausch measured the numerical value of the ratio of the electrostatic unit of charge to the electromagnetic unit of charge. They found that the ratio between the electrostatic unit of charge and the electromagnetic unit of charge is the speed of light c. The following year, Gustav Kirchhoff wrote a paper in which he showed that the speed of a signal along an electric wire was equal to the speed of light. These are the first recorded historical links between the speed of light and electromagnetic phenomena.
James Clerk Maxwell began working on Michael Faraday's lines of force. In his 1861 paper On Physical Lines of Force he modelled these magnetic lines of force using a sea of molecular vortices that he considered to be partly made of aether and partly made of ordinary matter. He derived expressions for the dielectric constant and the magnetic permeability in terms of the transverse elasticity and the density of this elastic medium. He then equated the ratio of the dielectric constant to the magnetic permeability with a suitably adapted version of Weber and Kohlrausch's result of 1856, and he substituted this result into Newton's equation for the speed of sound. On obtaining a value that was close to the speed of light as measured by Hippolyte Fizeau, Maxwell concluded that light consists in undulations of the same medium that is the cause of electric and magnetic phenomena.
Maxwell had, however, expressed some uncertainties surrounding the precise nature of his molecular vortices and so he began to embark on a purely dynamical approach to the problem. He wrote another paper in 1864, entitled "A Dynamical Theory of the Electromagnetic Field", in which the details of the luminiferous medium were less explicit. Although Maxwell did not explicitly mention the sea of molecular vortices, his derivation of Ampère's circuital law was carried over from the 1861 paper and he used a dynamical approach involving rotational motion within the electromagnetic field which he likened to the action of flywheels. Using this approach to justify the electromotive force equation (the precursor of the Lorentz force equation), he derived a wave equation from a set of eight equations which appeared in the paper and which included the electromotive force equation and Ampère's circuital law. Maxwell once again used the experimental results of Weber and Kohlrausch to show that this wave equation represented an electromagnetic wave that propagates at the speed of light, hence supporting the view that light is a form of electromagnetic radiation.
In 1887–1889, Heinrich Hertz experimentally demonstrated that electromagnetic waves are identical to light waves. This unification of electromagnetic waves and optics indicated that there was a single luminiferous aether instead of many different kinds of aether media.
The apparent need for a propagation medium for such Hertzian waves (later called radio waves) can be seen by the fact that they consist of orthogonal electric (E) and magnetic (B or H) waves. The E waves consist of undulating dipolar electric fields, and all such dipoles appeared to require separated and opposite electric charges. Electric charge is an inextricable property of matter, so it appeared that some form of matter was required to provide the alternating current that would seem to have to exist at any point along the propagation path of the wave. Propagation of waves in a true vacuum would imply the existence of electric fields without associated electric charge, or of electric charge without associated matter. Albeit compatible with Maxwell's equations, electromagnetic induction of electric fields could not be demonstrated in vacuum, because all methods of detecting electric fields required electrically charged matter.
In addition, Maxwell's equations required that all electromagnetic waves in vacuum propagate at a fixed speed, c. As this can only occur in one reference frame in Newtonian physics (see Galilean relativity), the aether was hypothesized as the absolute and unique frame of reference in which Maxwell's equations hold. That is, the aether must be "still" universally, otherwise c would vary along with any variations that might occur in its supportive medium. Maxwell himself proposed several mechanical models of aether based on wheels and gears, and George Francis FitzGerald even constructed a working model of one of them. These models had to agree with the fact that the electromagnetic waves are transverse but never longitudinal.
Problems
By this point the mechanical qualities of the aether had become more and more magical: it had to be a fluid in order to fill space, but one that was millions of times more rigid than steel in order to support the high frequencies of light waves. It also had to be massless and without viscosity, otherwise it would visibly affect the orbits of planets. Additionally it appeared it had to be completely transparent, non-dispersive, incompressible, and continuous at a very small scale. Maxwell wrote in Encyclopædia Britannica:
Aethers were invented for the planets to swim in, to constitute electric atmospheres and magnetic effluvia, to convey sensations from one part of our bodies to another, and so on, until all space had been filled three or four times over with aethers. ... The only aether which has survived is that which was invented by Huygens to explain the propagation of light.
By the early 20th century, aether theory was in trouble. A series of increasingly complex experiments had been carried out in the late 19th century to try to detect the motion of the Earth through the aether, and had failed to do so. A range of proposed aether-dragging theories could explain the null result but these were more complex, and tended to use arbitrary-looking coefficients and physical assumptions. Lorentz and FitzGerald offered within the framework of Lorentz ether theory a more elegant solution to how the motion of an absolute aether could be undetectable (length contraction), but if their equations were correct, the new special theory of relativity (1905) could generate the same mathematics without referring to an aether at all. Aether fell to Occam's Razor.
Relative motion between the Earth and aether
Aether drag
The two most important models, which were aimed to describe the relative motion of the Earth and aether, were Augustin-Jean Fresnel's (1818) model of the (nearly) stationary aether including a partial aether drag determined by Fresnel's dragging coefficient, and George Gabriel Stokes' (1844) model of complete aether drag. The latter theory was not considered as correct, since it was not compatible with the aberration of light, and the auxiliary hypotheses developed to explain this problem were not convincing. Also, subsequent experiments such as the Sagnac effect (1913) showed that this model is untenable. However, the most important experiment supporting Fresnel's theory was Fizeau's 1851 experimental confirmation of Fresnel's 1818 prediction that a medium with refractive index n moving with a velocity v would increase the speed of light travelling through the medium in the same direction as v from c/n to:
$$\frac{c}{n} + v\left(1 - \frac{1}{n^2}\right).$$
That is, movement adds only a fraction of the medium's velocity to the light (predicted by Fresnel in order to make Snell's law work in all frames of reference, consistent with stellar aberration). This was initially interpreted to mean that the medium drags the aether along, with a portion of the medium's velocity, but that understanding became very problematic after Wilhelm Veltmann demonstrated that the index n in Fresnel's formula depended upon the wavelength of light, so that the aether could not be moving at a wavelength-independent speed. This implied that there must be a separate aether for each of the infinitely many frequencies.
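A quick numerical sketch of the size of this effect for water (n ≈ 1.33; the flow speed below is an illustrative value of a few metres per second, comparable to Fizeau-type experiments):

```python
n = 1.33       # refractive index of water
c = 2.998e8    # speed of light in vacuum, m/s
v = 7.0        # speed of the water flow, m/s (illustrative)

drag_coefficient = 1 - 1 / n**2            # Fresnel's dragging coefficient, ~0.435
u_rest = c / n                             # light speed in water at rest
u_moving = u_rest + v * drag_coefficient   # light speed in co-moving water

print(f"dragging coefficient: {drag_coefficient:.3f}")
print(f"speed gained from the flow: {u_moving - u_rest:.2f} m/s of a possible {v} m/s")
```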
Negative aether-drift experiments
The key difficulty with Fresnel's aether hypothesis arose from the juxtaposition of the two well-established theories of Newtonian dynamics and Maxwell's electromagnetism. Under a Galilean transformation the equations of Newtonian dynamics are invariant, whereas those of electromagnetism are not. Basically this means that while physics should remain the same in non-accelerated experiments, light would not follow the same rules because it is travelling in the universal "aether frame". Some effect caused by this difference should be detectable.
A simple example concerns the model on which aether was originally built: sound. The speed of propagation for mechanical waves, the speed of sound, is defined by the mechanical properties of the medium. Sound travels 4.3 times faster in water than in air. This explains why a person hearing an explosion underwater and quickly surfacing can hear it again as the slower travelling sound arrives through the air. Similarly, a traveller on an airliner can still carry on a conversation with another traveller because the sound of words is travelling along with the air inside the aircraft. This effect is basic to all Newtonian dynamics, which says that everything from sound to the trajectory of a thrown baseball should all remain the same in the aircraft flying (at least at a constant speed) as if still sitting on the ground. This is the basis of the Galilean transformation, and the concept of frame of reference.
But the same was not supposed to be true for light, since Maxwell's mathematics demanded a single universal speed for the propagation of light, based, not on local conditions, but on two measured properties, the permittivity and permeability of free space, that were assumed to be the same throughout the universe. If these numbers did change, there should be noticeable effects in the sky; stars in different directions would have different colours, for instance.
Thus at any point there should be one special coordinate system, "at rest relative to the aether". Maxwell noted in the late 1870s that detecting motion relative to this aether should be easy enough—light travelling along with the motion of the Earth would have a different speed than light travelling backward, as they would both be moving against the unmoving aether. Even if the aether had an overall universal flow, changes in position during the day/night cycle, or over the span of seasons, should allow the drift to be detected.
First-order experiments
Although the aether is almost stationary according to Fresnel, his theory predicts a positive outcome of aether drift experiments only to second order in v/c, because Fresnel's dragging coefficient would cause a negative outcome of all optical experiments capable of measuring effects to first order in v/c. This was confirmed by the following first-order experiments, all of which gave negative results. The following list is based on the description of Wilhelm Wien (1898), with changes and additional experiments according to the descriptions of Edmund Taylor Whittaker (1910) and Jakob Laub (1910):
The experiment of François Arago (1810), to confirm whether refraction, and thus the aberration of light, is influenced by Earth's motion. Similar experiments were conducted by George Biddell Airy (1871) by means of a telescope filled with water, and Éleuthère Mascart (1872).
The experiment of Fizeau (1860), to find whether the rotation of the polarization plane through glass columns is changed by Earth's motion. He obtained a positive result, but Lorentz could show that the results have been contradictory. DeWitt Bristol Brace (1905) and Strasser (1907) repeated the experiment with improved accuracy, and obtained negative results.
The experiment of Martin Hoek (1868). This experiment is a more precise variation of the Fizeau experiment (1851). Two light rays were sent in opposite directions – one of them traverses a path filled with resting water, the other one follows a path through air. In agreement with Fresnel's dragging coefficient, he obtained a negative result.
The experiment of Wilhelm Klinkerfues (1870) investigated whether an influence of Earth's motion on the absorption line of sodium exists. He obtained a positive result, but this was shown to be an experimental error, because a repetition of the experiment by Haga (1901) gave a negative result.
The experiment of Ketteler (1872), in which two rays of an interferometer were sent in opposite directions through two mutually inclined tubes filled with water. No change of the interference fringes occurred. Later, Mascart (1872) showed that the interference fringes of polarized light in calcite remained uninfluenced as well.
The experiment of Éleuthère Mascart (1872) to find a change of rotation of the polarization plane in quartz. No change of rotation was found when the light rays had the direction of Earth's motion and then the opposite direction. Lord Rayleigh conducted similar experiments with improved accuracy, and obtained a negative result as well.
Besides those optical experiments, also electrodynamic first-order experiments were conducted, which should have led to positive results according to Fresnel. However, Hendrik Antoon Lorentz (1895) modified Fresnel's theory and showed that those experiments can be explained by a stationary aether as well:
The experiment of Wilhelm Röntgen (1888), to find whether a charged capacitor produces magnetic forces due to Earth's motion.
The experiment of Theodor des Coudres (1889), to find whether the inductive effect of two wire rolls upon a third one is influenced by the direction of Earth's motion. Lorentz showed that this effect is cancelled to first order by the electrostatic charge (produced by Earth's motion) upon the conductors.
The experiment of Königsberger (1905). The plates of a capacitor are located in the field of a strong electromagnet. Due to Earth's motion, the plates should have become charged. No such effect was observed.
The experiment of Frederick Thomas Trouton (1902). A capacitor was brought parallel to Earth's motion, and it was assumed that momentum is produced when the capacitor is charged. The negative result can be explained by Lorentz's theory, according to which the electromagnetic momentum compensates the momentum due to Earth's motion. Lorentz could also show, that the sensitivity of the apparatus was much too low to observe such an effect.
Second-order experiments
While the first-order experiments could be explained by a modified stationary aether, more precise second-order experiments were expected to give positive results. However, no such results could be found.
The famous Michelson–Morley experiment compared the source light with itself after being sent in different directions and looked for changes in phase in a manner that could be measured with extremely high accuracy. In this experiment, their goal was to determine the velocity of the Earth through the aether. The publication of their result in 1887, the null result, was the first clear demonstration that something was seriously wrong with the aether hypothesis (Michelson's first experiment in 1881 was not entirely conclusive). In this case the MM experiment yielded a shift of the fringing pattern of about 0.01 of a fringe, corresponding to a small velocity. However, it was incompatible with the expected aether wind effect due to the Earth's (seasonally varying) velocity which would have required a shift of 0.4 of a fringe, and the error was small enough that the value may have indeed been zero. Therefore, the null hypothesis, the hypothesis that there was no aether wind, could not be rejected. More modern experiments have since reduced the possible value to a number very close to zero, about 10⁻¹⁷.
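The expected 0.4-fringe figure follows from simple arithmetic (a sketch using the usual textbook estimate for the apparatus: an effective arm length of about 11 m, light near 500 nm, and an aether-wind speed equal to the Earth's orbital speed):

```python
L = 11.0       # effective optical path length of each arm, m
lam = 500e-9   # wavelength of the light, m
v = 3.0e4      # assumed aether-wind speed (Earth's orbital speed), m/s
c = 3.0e8      # speed of light, m/s

# Classical prediction for the fringe shift when the apparatus is rotated by 90 degrees:
#   dN ~ (2 * L / lam) * (v / c)**2
dN = (2 * L / lam) * (v / c) ** 2
print(f"expected fringe shift: {dN:.2f} fringes")   # ~0.44, versus ~0.01 observed
```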
A series of experiments using similar but increasingly sophisticated apparatuses all returned the null result as well. Conceptually different experiments that also attempted to detect the motion of the aether were the Trouton–Noble experiment (1903), whose objective was to detect torsion effects caused by electrostatic fields, and the experiments of Rayleigh and Brace (1902, 1904), to detect double refraction in various media. However, all of them obtained a null result, like Michelson–Morley (MM) previously did.
These "aether-wind" experiments led to a flurry of efforts to "save" aether by assigning to it ever more complex properties, and only a few scientists, like Emil Cohn or Alfred Bucherer, considered the possibility of the abandonment of the aether hypothesis. Of particular interest was the possibility of "aether entrainment" or "aether drag", which would lower the magnitude of the measurement, perhaps enough to explain the results of the Michelson–Morley experiment. However, as noted earlier, aether dragging already had problems of its own, notably aberration. In addition, the interference experiments of Lodge (1893, 1897) and Ludwig Zehnder (1895), aimed to show whether the aether is dragged by various, rotating masses, showed no aether drag. A more precise measurement was made in the Hammar experiment (1935), which ran a complete MM experiment with one of the "legs" placed between two massive lead blocks. If the aether was dragged by mass then this experiment would have been able to detect the drag caused by the lead, but again the null result was achieved. The theory was again modified, this time to suggest that the entrainment only worked for very large masses or those masses with large magnetic fields. This too was shown to be incorrect by the Michelson–Gale–Pearson experiment, which detected the Sagnac effect due to Earth's rotation (see Aether drag hypothesis).
Another completely different attempt to save "absolute" aether was made in the Lorentz–FitzGerald contraction hypothesis, which posited that everything was affected by travel through the aether. In this theory, the reason that the Michelson–Morley experiment "failed" was that the apparatus contracted in length in the direction of travel. That is, the light was being affected in the "natural" manner by its travel through the aether as predicted, but so was the apparatus itself, cancelling out any difference when measured. FitzGerald had inferred this hypothesis from a paper by Oliver Heaviside. Without referral to an aether, this physical interpretation of relativistic effects was shared by Kennedy and Thorndike in 1932 as they concluded that the interferometer's arm contracts and also the frequency of its light source "very nearly" varies in the way required by relativity.
Similarly, the Sagnac effect, observed by G. Sagnac in 1913, was immediately seen to be fully consistent with special relativity. In fact, the Michelson–Gale–Pearson experiment in 1925 was proposed specifically as a test to confirm the relativity theory, although it was also recognized that such tests, which merely measure absolute rotation, are also consistent with non-relativistic theories.
During the 1920s, the experiments pioneered by Michelson were repeated by Dayton Miller, who publicly proclaimed positive results on several occasions, although they were not large enough to be consistent with any known aether theory. However, other researchers were unable to duplicate Miller's claimed results. Over the years the experimental accuracy of such measurements has been raised by many orders of magnitude, and no trace of any violations of Lorentz invariance has been seen. (A later re-analysis of Miller's results concluded that he had underestimated the variations due to temperature.)
Since the Miller experiment and its unclear results there have been many more experimental attempts to detect the aether. Many experimenters have claimed positive results. These results have not gained much attention from mainstream science, since they contradict a large quantity of high-precision measurements, all the results of which were consistent with special relativity.
Lorentz aether theory
Between 1892 and 1904, Hendrik Lorentz developed an electron–aether theory, in which he avoided making assumptions about the aether. In his model the aether is completely motionless, and by that he meant that it could not be set in motion in the neighborhood of ponderable matter. Contrary to earlier electron models, the electromagnetic field of the aether appears as a mediator between the electrons, and changes in this field cannot propagate faster than the speed of light. A fundamental concept of Lorentz's theory in 1895 was the "theorem of corresponding states" for terms of order v/c. This theorem states that an observer moving relative to the aether makes the same observations as a resting observer, after a suitable change of variables. Lorentz noticed that it was necessary to change the space-time variables when changing frames and introduced concepts like physical length contraction (1892) to explain the Michelson–Morley experiment, and the mathematical concept of local time (1895) to explain the aberration of light and the Fizeau experiment. This resulted in the formulation of the so-called Lorentz transformation by Joseph Larmor (1897, 1900) and Lorentz (1899, 1904), whereby (it was noted by Larmor) the complete formulation of local time is accompanied by some sort of time dilation of electrons moving in the aether. As Lorentz later noted (1921, 1928), he considered the time indicated by clocks resting in the aether as "true" time, while local time was seen by him as a heuristic working hypothesis and a mathematical artifice. Therefore, Lorentz's theorem is seen by modern authors as being a mathematical transformation from a "real" system resting in the aether into a "fictitious" system in motion.
The work of Lorentz was mathematically perfected by Henri Poincaré, who formulated on many occasions the Principle of Relativity and tried to harmonize it with electrodynamics. He declared simultaneity only a convenient convention which depends on the speed of light, whereby the constancy of the speed of light would be a useful postulate for making the laws of nature as simple as possible. In 1900 and 1904 he physically interpreted Lorentz's local time as the result of clock synchronization by light signals. In June and July 1905 he declared the relativity principle a general law of nature, including gravitation. He corrected some mistakes of Lorentz and proved the Lorentz covariance of the electromagnetic equations. However, he used the notion of an aether as a perfectly undetectable medium and distinguished between apparent and real time, so most historians of science argue that he failed to invent special relativity.
End of aether
Special relativity
Aether theory was dealt another blow when the Galilean transformation and Newtonian dynamics were both modified by Albert Einstein's special theory of relativity, giving the mathematics of Lorentzian electrodynamics a new, "non-aether" context. Unlike most major shifts in scientific thought, special relativity was adopted by the scientific community remarkably quickly, consistent with Einstein's later comment that the laws of physics described by the Special Theory were "ripe for discovery" in 1905. Max Planck's early advocacy of the special theory, along with the elegant formulation given to it by Hermann Minkowski, contributed much to the rapid acceptance of special relativity among working scientists.
Einstein based his theory on Lorentz's earlier work. Instead of suggesting that the mechanical properties of objects changed with their constant-velocity motion through an undetectable aether, Einstein proposed to deduce the characteristics that any successful theory must possess in order to be consistent with the most basic and firmly established principles, independent of the existence of a hypothetical aether. He found that the Lorentz transformation must transcend its connection with Maxwell's equations, and must represent the fundamental relations between the space and time coordinates of inertial frames of reference. In this way he demonstrated that the laws of physics remained invariant as they had with the Galilean transformation, but that light was now invariant as well.
With the development of the special theory of relativity, the need to account for a single universal frame of reference had disappeared – and acceptance of the 19th-century theory of a luminiferous aether disappeared with it. For Einstein, the Lorentz transformation implied a conceptual change: that the concept of position in space or time was not absolute, but could differ depending on the observer's location and velocity.
Moreover, in another paper published the same month in 1905, Einstein made several observations on a then-thorny problem, the photoelectric effect. In this work he demonstrated that light can be considered as particles that have a "wave-like nature". Particles obviously do not need a medium to travel, and thus, neither did light. This was the first step that would lead to the full development of quantum mechanics, in which the wave-like nature and the particle-like nature of light are both considered as valid descriptions of light. A summary of Einstein's thinking about the aether hypothesis, relativity and light quanta may be found in his 1909 (originally German) lecture "The Development of Our Views on the Composition and Essence of Radiation".
Lorentz on his side continued to use the aether hypothesis. In his lectures of around 1911, he pointed out that what "the theory of relativity has to say ... can be carried out independently of what one thinks of the aether and the time". He commented that "whether there is an aether or not, electromagnetic fields certainly exist, and so also does the energy of the electrical oscillations" so that, "if we do not like the name of 'aether', we must use another word as a peg to hang all these things upon". He concluded that "one cannot deny the bearer of these concepts a certain substantiality".
Nevertheless, in 1920, Einstein gave an address at Leiden University in which he commented "More careful reflection teaches us however, that the special theory of relativity does not compel us to deny ether. We may assume the existence of an ether; only we must give up ascribing a definite state of motion to it, i.e. we must by abstraction take from it the last mechanical characteristic which Lorentz had still left it. We shall see later that this point of view, the conceivability of which I shall at once endeavour to make more intelligible by a somewhat halting comparison, is justified by the results of the general theory of relativity". He concluded his address by saying that "according to the general theory of relativity space is endowed with physical qualities; in this sense, therefore, there exists an ether. According to the general theory of relativity space without ether is unthinkable."
Other models
In later years there have been a few individuals who advocated a neo-Lorentzian approach to physics, which is Lorentzian in the sense of positing an absolute true state of rest that is undetectable and which plays no role in the predictions of the theory. (No violations of Lorentz covariance have ever been detected, despite strenuous efforts.) Hence these theories resemble the 19th century aether theories in name only. For example, the founder of quantum field theory, Paul Dirac, stated in 1951 in an article in Nature, titled "Is there an Aether?" that "we are rather forced to have an aether". However, Dirac never formulated a complete theory, and so his speculations found no acceptance by the scientific community.
Einstein's views on the aether
When Einstein was still a student in the Zurich Polytechnic in 1900, he was very interested in the idea of aether. His initial proposal for a research thesis was to do an experiment to measure how fast the Earth was moving through the aether. "The velocity of a wave is proportional to the square root of the elastic forces which cause [its] propagation, and inversely proportional to the mass of the aether moved by these forces."
In 1916, after Einstein completed his foundational work on general relativity, Lorentz wrote a letter to him in which he speculated that within general relativity the aether was re-introduced. In his response Einstein wrote that one can actually speak about a "new aether", but one may not speak of motion in relation to that aether. This was further elaborated by Einstein in some semi-popular articles (1918, 1920, 1924, 1930).
In 1918, Einstein publicly alluded to that new definition for the first time. Then, in the early 1920s, in a lecture which he was invited to give at Lorentz's university in Leiden, Einstein sought to reconcile the theory of relativity with Lorentzian aether. In this lecture Einstein stressed that special relativity took away the last mechanical property of the aether: immobility. However, he continued that special relativity does not necessarily rule out the aether, because the latter can be used to give physical reality to acceleration and rotation. This concept was fully elaborated within general relativity, in which physical properties (which are partially determined by matter) are attributed to space, but no substance or state of motion can be attributed to that "aether" (by which he meant curved space-time).
In another paper of 1924, named "Concerning the Aether", Einstein argued that Newton's absolute space, in which acceleration is absolute, is the "Aether of Mechanics". And within the electromagnetic theory of Maxwell and Lorentz one can speak of the "Aether of Electrodynamics", in which the aether possesses an absolute state of motion. As regards special relativity, also in this theory acceleration is absolute as in Newton's mechanics. However, the difference from the electromagnetic aether of Maxwell and Lorentz lies in the fact that "because it was no longer possible to speak, in any absolute sense, of simultaneous states at different locations in the aether, the aether became, as it were, four-dimensional since there was no objective way of ordering its states by time alone". Now the "aether of special relativity" is still "absolute", because matter is affected by the properties of the aether, but the aether is not affected by the presence of matter. This asymmetry was solved within general relativity. Einstein explained that the "aether of general relativity" is not absolute, because matter is influenced by the aether, just as matter influences the structure of the aether.
The only similarity of this relativistic aether concept with the classical aether models lies in the presence of physical properties in space, which can be identified through geodesics. As historians such as John Stachel argue, Einstein's views on the "new aether" are not in conflict with his abandonment of the aether in 1905. As Einstein himself pointed out, no "substance" and no state of motion can be attributed to that new aether. Einstein's use of the word "aether" found little support in the scientific community, and played no role in the continuing development of modern physics.
Aether concepts
Aether theories
Aether (classical element)
Aether drag hypothesis
Astral light
See also
Dirac sea
Etheric plane
Galactic year
History of special relativity
Le Sage's theory of gravitation
One-way speed of light
Preferred frame
Superseded scientific theories
Virtual particle
Welteislehre
References
Footnotes
Citations
Primary sources
Experiments
Secondary sources
External links
Harry Bateman (1915) The Structure of the Aether, Bulletin of the American Mathematical Society 21(6):299–309.
The Aether of Space – Lord Rayleigh's address
ScienceWeek Theoretical Physics: On the Aether and Broken Symmetry
The New Student's Reference Work/Ether
Aether theories
Convection
Convection is single or multiphase fluid flow that occurs spontaneously due to the combined effects of material property heterogeneity and body forces on a fluid, most commonly density and gravity (see buoyancy). When the cause of the convection is unspecified, convection due to the effects of thermal expansion and buoyancy can be assumed. Convection may also take place in soft solids or mixtures where particles can flow.
Convective flow may be transient (such as when a multiphase mixture of oil and water separates) or steady state (see convection cell). The convection may be due to gravitational, electromagnetic or fictitious body forces. Heat transfer by natural convection plays a role in the structure of Earth's atmosphere, its oceans, and its mantle. Discrete convective cells in the atmosphere can be identified by clouds, with stronger convection resulting in thunderstorms. Natural convection also plays a role in stellar physics. Convection is often categorised or described by the main effect causing the convective flow; for example, thermal convection.
Convection cannot take place in most solids because neither bulk current flows nor significant diffusion of matter can take place.
Granular convection is a similar phenomenon in granular material instead of fluids.
Advection is the transport of a substance or quantity by the bulk velocity of a fluid, independent of any thermal gradients.
Convective heat transfer is the intentional use of convection as a method for heat transfer. Convection is a process in which heat is carried from place to place by the bulk movement of a fluid, whether liquid or gas.
History
In the 1830s, in The Bridgewater Treatises, the term convection is attested in a scientific sense. In treatise VIII by William Prout, in the book on chemistry, it says:
[...] This motion of heat takes place in three ways, which a common fire-place very well illustrates. If, for instance, we place a thermometer directly before a fire, it soon begins to rise, indicating an increase of temperature. In this case the heat has made its way through the space between the fire and the thermometer, by the process termed radiation. If we place a second thermometer in contact with any part of the grate, and away from the direct influence of the fire, we shall find that this thermometer also denotes an increase of temperature; but here the heat must have travelled through the metal of the grate, by what is termed conduction. Lastly, a third thermometer placed in the chimney, away from the direct influence of the fire, will also indicate a considerable increase of temperature; in this case a portion of the air, passing through and near the fire, has become heated, and has carried up the chimney the temperature acquired from the fire. There is at present no single term in our language employed to denote this third mode of the propagation of heat; but we venture to propose for that purpose, the term convection, [in footnote: [Latin] Convectio, a carrying or conveying] which not only expresses the leading fact, but also accords very well with the two other terms.
Later, in the same treatise VIII, in the book on meteorology, the concept of convection is also applied to "the process by which heat is communicated through water".
Terminology
Today, the word convection has different but related usages in different scientific or engineering contexts or applications.
In fluid mechanics, convection has a broader sense: it refers to the motion of fluid driven by density (or other property) difference.
In thermodynamics, convection often refers to heat transfer by convection, where the prefixed variant natural convection is used to distinguish the fluid-mechanics concept of convection (covered in this article) from convective heat transfer.
Some phenomena which result in an effect superficially similar to that of a convective cell may also be (inaccurately) referred to as a form of convection; for example, thermo-capillary convection and granular convection.
Mechanisms
Convection may happen in fluids at all scales larger than a few atoms. There are a variety of circumstances in which the forces required for convection arise, leading to different types of convection, described below. In broad terms, convection arises because of body forces acting within the fluid, such as gravity.
Natural convection
Natural convection is a flow whose motion is caused by some parts of a fluid being heavier than other parts. In most cases this leads to natural circulation: the ability of a fluid in a system to circulate continuously under gravity, with transfer of heat energy.
The driving force for natural convection is gravity. In a column of fluid, pressure increases with depth from the weight of the overlying fluid. The pressure at the bottom of a submerged object then exceeds that at the top, resulting in a net upward buoyancy force equal to the weight of the displaced fluid. Objects of higher density than that of the displaced fluid then sink. For example, regions of warmer low-density air rise, while those of colder high-density air sink. This creates a circulating flow: convection.
Gravity drives natural convection. Without gravity, convection does not occur, so there is no convection in free-fall (inertial) environments, such as that of the orbiting International Space Station. Natural convection can occur when there are hot and cold regions of either air or water, because both water and air become less dense as they are heated. But, for example, in the world's oceans it also occurs due to salt water being heavier than fresh water, so a layer of salt water on top of a layer of fresher water will also cause convection.
Natural convection has attracted a great deal of attention from researchers because of its presence both in nature and in engineering applications. In nature, convection cells formed from air rising above sunlight-warmed land or water are a major feature of all weather systems. Convection is also seen in the rising plume of hot air from fire, in plate tectonics, in oceanic currents (thermohaline circulation) and in sea-wind formation (where upward convection is also modified by Coriolis forces). In engineering applications, convection is commonly seen in the formation of microstructures during the cooling of molten metals, in fluid flow around shrouded heat-dissipation fins, and in solar ponds. A very common industrial application of natural convection is free air cooling without the aid of fans: this can happen on scales from small (computer chips) to large-scale process equipment.
Natural convection will be more likely and more rapid with a greater variation in density between the two fluids, a larger acceleration due to gravity that drives the convection or a larger distance through the convecting medium. Natural convection will be less likely and less rapid with more rapid diffusion (thereby diffusing away the thermal gradient that is causing the convection) or a more viscous (sticky) fluid.
The onset of natural convection can be determined by the Rayleigh number (Ra).
Differences in buoyancy within a fluid can arise for reasons other than temperature variations, in which case the fluid motion is called gravitational convection (see below). However, no type of buoyant convection, including natural convection, occurs in microgravity environments. All require the presence of an environment which experiences g-force (proper acceleration).
The difference of density in the fluid is the key driving mechanism. If the differences of density are caused by heat, this force is called the "thermal head" or "thermal driving head." A fluid system designed for natural circulation will have a heat source and a heat sink. Each of these is in contact with some of the fluid in the system, but not all of it. The heat source is positioned lower than the heat sink.
Most fluids expand when heated, becoming less dense, and contract when cooled, becoming denser. At the heat source of a system of natural circulation, the heated fluid becomes lighter than the fluid surrounding it, and thus rises. At the heat sink, the nearby fluid becomes denser as it cools, and is drawn downward by gravity. Together, these effects create a flow of fluid from the heat source to the heat sink and back again.
Gravitational or buoyant convection
Gravitational convection is a type of natural convection induced by buoyancy variations resulting from material properties other than temperature. Typically this is caused by a variable composition of the fluid. If the varying property is a concentration gradient, it is known as solutal convection. For example, gravitational convection can be seen in the diffusion of a source of dry salt downward into wet soil due to the buoyancy of fresh water in saline.
Variable salinity in water and variable water content in air masses are frequent causes of convection in the oceans and atmosphere which do not involve heat, or else involve additional compositional density factors other than the density changes from thermal expansion (see thermohaline circulation). Similarly, variable composition within the Earth's interior which has not yet achieved maximal stability and minimal energy (in other words, with densest parts deepest) continues to cause a fraction of the convection of fluid rock and molten metal within the Earth's interior (see below).
Gravitational convection, like natural thermal convection, also requires a g-force environment in order to occur.
Solid-state convection in ice
Ice convection on Pluto is believed to occur in a soft mixture of nitrogen ice and carbon monoxide ice. It has also been proposed for Europa, and other bodies in the outer Solar System.
Thermomagnetic convection
Thermomagnetic convection can occur when an external magnetic field is imposed on a ferrofluid with varying magnetic susceptibility. In the presence of a temperature gradient this results in a nonuniform magnetic body force, which leads to fluid movement. A ferrofluid is a liquid which becomes strongly magnetized in the presence of a magnetic field.
Combustion
In a zero-gravity environment, there can be no buoyancy forces, and thus no convection possible, so flames in many circumstances without gravity smother in their own waste gases. Thermal expansion and chemical reactions resulting in expansion and contraction of gases allow some ventilation of the flame, as waste gases are displaced by cool, fresh, oxygen-rich gas that moves in to take up the low-pressure zones created when flame-exhaust water vapour condenses.
Examples and applications
Systems of natural circulation include tornadoes and other weather systems, ocean currents, and household ventilation. Some solar water heaters use natural circulation. The Gulf Stream circulates as a result of the evaporation of water. In this process, the water increases in salinity and density. In the North Atlantic Ocean, the water becomes so dense that it begins to sink down.
Convection occurs on a large scale in atmospheres, oceans, planetary mantles, and it provides the mechanism of heat transfer for a large fraction of the outermost interiors of the Sun and all stars. Fluid movement during convection may be invisibly slow, or it may be obvious and rapid, as in a hurricane. On astronomical scales, convection of gas and dust is thought to occur in the accretion disks of black holes, at speeds which may closely approach that of light.
Demonstration experiments
Thermal convection in liquids can be demonstrated by placing a heat source (for example, a Bunsen burner) at the side of a container with a liquid. Adding a dye to the water (such as food colouring) will enable visualisation of the flow.
Another common experiment to demonstrate thermal convection in liquids involves submerging open containers of hot and cold liquid coloured with dye into a large container of the same liquid without dye at an intermediate temperature (for example, a jar of hot tap water coloured red, a jar of water chilled in a fridge coloured blue, lowered into a clear tank of water at room temperature).
A third approach is to use two identical jars, one filled with hot water dyed one colour, and cold water of another colour. One jar is then temporarily sealed (for example, with a piece of card), inverted and placed on top of the other. When the card is removed, if the jar containing the warmer liquid is placed on top no convection will occur. If the jar containing colder liquid is placed on top, a convection current will form spontaneously.
Convection in gases can be demonstrated using a candle in a sealed space with an inlet and exhaust port. The heat from the candle will cause a strong convection current which can be demonstrated with a flow indicator, such as smoke from another candle, being released near the inlet and exhaust areas respectively.
Double diffusive convection
Convection cells
A convection cell, also known as a Bénard cell, is a characteristic fluid flow pattern in many convection systems. A rising body of fluid typically loses heat because it encounters a colder surface. In liquid, this occurs because it exchanges heat with colder liquid through direct exchange. In the example of the Earth's atmosphere, this occurs because it radiates heat. Because of this heat loss the fluid becomes denser than the fluid underneath it, which is still rising. Since it cannot descend through the rising fluid, it moves to one side. At some distance, its downward force overcomes the rising force beneath it, and the fluid begins to descend. As it descends, it warms again and the cycle repeats itself. Additionally, convection cells can arise due to density variations resulting from differences in the composition of electrolytes.
Atmospheric convection
Atmospheric circulation
Atmospheric circulation is the large-scale movement of air, and is a means by which thermal energy is distributed on the surface of the Earth, together with the much slower (lagged) ocean circulation system. The large-scale structure of the atmospheric circulation varies from year to year, but the basic climatological structure remains fairly constant.
Latitudinal circulation occurs because incident solar radiation per unit area is highest at the heat equator, and decreases as the latitude increases, reaching minima at the poles. It consists of two primary convection cells, the Hadley cell and the polar vortex, with the Hadley cell experiencing stronger convection due to the release of latent heat energy by condensation of water vapor at higher altitudes during cloud formation.
Longitudinal circulation, on the other hand, comes about because the ocean has a higher specific heat capacity than land (and also a higher thermal conductivity, allowing heat to penetrate further beneath the surface) and thereby absorbs and releases more heat, but its temperature changes less than that of land. This brings the sea breeze, air cooled by the water, ashore in the day, and carries the land breeze, air cooled by contact with the ground, out to sea during the night. Longitudinal circulation consists of two cells, the Walker circulation and El Niño / Southern Oscillation.
Weather
Some more localized phenomena than global atmospheric movement are also due to convection, including wind and some of the hydrologic cycle. For example, a foehn wind is a down-slope wind which occurs on the downwind side of a mountain range. It results from the adiabatic warming of air which has dropped most of its moisture on windward slopes. Because of the different adiabatic lapse rates of moist and dry air, the air on the leeward slopes becomes warmer than at the same height on the windward slopes.
A thermal column (or thermal) is a vertical section of rising air in the lower altitudes of the Earth's atmosphere. Thermals are created by the uneven heating of the Earth's surface from solar radiation. The Sun warms the ground, which in turn warms the air directly above it. The warmer air expands, becoming less dense than the surrounding air mass, and creating a thermal low. The mass of lighter air rises, and as it does, it cools by expansion at lower air pressures. It stops rising when it has cooled to the same temperature as the surrounding air. Associated with a thermal is a downward flow surrounding the thermal column. The downward moving exterior is caused by colder air being displaced at the top of the thermal. Another convection-driven weather effect is the sea breeze.
Warm air has a lower density than cool air, so warm air rises within cooler air, similar to hot air balloons. Clouds form as relatively warmer air carrying moisture rises within cooler air. As the moist air rises, it cools, causing some of the water vapor in the rising packet of air to condense. When the moisture condenses, it releases energy known as latent heat of condensation which allows the rising packet of air to cool less than its surrounding air, continuing the cloud's ascension. If enough instability is present in the atmosphere, this process will continue long enough for cumulonimbus clouds to form, which support lightning and thunder. Generally, thunderstorms require three conditions to form: moisture, an unstable airmass, and a lifting force (heat).
All thunderstorms, regardless of type, go through three stages: the developing stage, the mature stage, and the dissipation stage. The average thunderstorm has a diameter of roughly 24 km (15 mi). Depending on the conditions present in the atmosphere, these three stages take an average of 30 minutes to go through.
Oceanic circulation
Solar radiation affects the oceans: warm water from the Equator tends to circulate toward the poles, while cold polar water heads towards the Equator. The surface currents are initially dictated by surface wind conditions. The trade winds blow westward in the tropics, and the westerlies blow eastward at mid-latitudes. This wind pattern applies a stress to the subtropical ocean surface with negative curl across the Northern Hemisphere, and the reverse across the Southern Hemisphere. The resulting Sverdrup transport is equatorward. Because of conservation of potential vorticity caused by the poleward-moving winds on the subtropical ridge's western periphery and the increased relative vorticity of poleward moving water, transport is balanced by a narrow, accelerating poleward current, which flows along the western boundary of the ocean basin, outweighing the effects of friction with the cold western boundary current which originates from high latitudes. The overall process, known as western intensification, causes currents on the western boundary of an ocean basin to be stronger than those on the eastern boundary.
As it travels poleward, warm water transported by strong warm-water currents undergoes evaporative cooling. The cooling is wind driven: wind moving over water cools the water and also causes evaporation, leaving a saltier brine. In this process, the water becomes saltier and denser, and decreases in temperature. Once sea ice forms, salts are left out of the ice, a process known as brine exclusion. These two processes produce water that is denser and colder. The water across the northern Atlantic Ocean becomes so dense that it begins to sink down through less salty and less dense water. (This open-ocean convection is not unlike that of a lava lamp.) This downdraft of heavy, cold and dense water becomes a part of the North Atlantic Deep Water, a south-going stream.
Mantle convection
Mantle convection is the slow creeping motion of Earth's rocky mantle caused by convection currents carrying heat from the interior of the Earth to the surface. It is one of three driving forces that cause tectonic plates to move around the Earth's surface.
The Earth's surface is divided into a number of tectonic plates that are continuously being created and consumed at their opposite plate boundaries. Creation (accretion) occurs as mantle is added to the growing edges of a plate. This hot added material cools down by conduction and convection of heat. At the consumption edges of the plate, the material has thermally contracted to become dense, and it sinks under its own weight in the process of subduction at an ocean trench. This subducted material sinks to some depth in the Earth's interior where it is prevented from sinking further. The subducted oceanic crust triggers volcanism.
Convection within Earth's mantle is the driving force for plate tectonics. Mantle convection is the result of a thermal gradient: the lower mantle is hotter than the upper mantle, and is therefore less dense. This sets up two primary types of instabilities. In the first type, plumes rise from the lower mantle, and corresponding unstable regions of lithosphere drip back into the mantle. In the second type, subducting oceanic plates (which largely constitute the upper thermal boundary layer of the mantle) plunge back into the mantle and move downwards towards the core-mantle boundary. Mantle convection occurs at rates of centimeters per year, and it takes on the order of hundreds of millions of years to complete a cycle of convection.
Neutrino flux measurements from the Earth's core (see KamLAND) show the source of about two-thirds of the heat in the inner core is the radioactive decay of potassium-40 (40K), uranium and thorium. This has allowed plate tectonics on Earth to continue far longer than it would have if it were simply driven by heat left over from Earth's formation; or with heat produced from gravitational potential energy, as a result of physical rearrangement of denser portions of the Earth's interior toward the center of the planet (that is, a type of prolonged falling and settling).
Stack effect
The Stack effect or chimney effect is the movement of air into and out of buildings, chimneys, flue gas stacks, or other containers due to buoyancy. Buoyancy occurs due to a difference in indoor-to-outdoor air density resulting from temperature and moisture differences. The greater the thermal difference and the height of the structure, the greater the buoyancy force, and thus the stack effect. The stack effect helps drive natural ventilation and infiltration. Some cooling towers operate on this principle; similarly the solar updraft tower is a proposed device to generate electricity based on the stack effect.
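The draft pressure driving the stack effect can be estimated from the density difference between the indoor and outdoor air columns. The sketch below is illustrative only and not part of the original article; the function names, the ideal-gas treatment of dry air, and the example temperatures and height are assumptions.

```python
# Illustrative estimate of the stack-effect draft pressure (a sketch, not a design formula).
# Assumes dry air behaving as an ideal gas at a common ambient pressure.

G = 9.81              # gravitational acceleration, m/s^2
R_SPECIFIC = 287.05   # specific gas constant of dry air, J/(kg*K)
P_AMBIENT = 101325.0  # ambient pressure, Pa

def air_density(temperature_k, pressure_pa=P_AMBIENT):
    """Density of dry air from the ideal gas law, kg/m^3."""
    return pressure_pa / (R_SPECIFIC * temperature_k)

def stack_pressure(height_m, t_inside_k, t_outside_k):
    """Approximate pressure difference driving the stack effect, in Pa.

    Colder (denser) outdoor air pushes warmer (lighter) indoor air up a column
    of the given height; the driving pressure is the difference in hydrostatic
    head between the two columns.
    """
    rho_out = air_density(t_outside_k)
    rho_in = air_density(t_inside_k)
    return G * height_m * (rho_out - rho_in)

if __name__ == "__main__":
    # Assumed example: a 30 m tall chimney, 20 degC inside, 0 degC outside
    dp = stack_pressure(30.0, 293.15, 273.15)
    print(f"Draft pressure: {dp:.1f} Pa")
```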
Stellar physics
The convection zone of a star is the range of radii in which energy is transported outward from the core region primarily by convection rather than radiation. This occurs at radii which are sufficiently opaque that convection is more efficient than radiation at transporting energy.
Granules on the photosphere of the Sun are the visible tops of convection cells in the photosphere, caused by convection of plasma in the photosphere. The rising part of the granules is located in the center where the plasma is hotter. The outer edge of the granules is darker due to the cooler descending plasma. A typical granule has a diameter on the order of 1,000 kilometers and each lasts 8 to 20 minutes before dissipating. Below the photosphere is a layer of much larger "supergranules" up to 30,000 kilometers in diameter, with lifespans of up to 24 hours.
Water convection at freezing temperatures
Water is a fluid that does not obey the Boussinesq approximation. This is because its density varies nonlinearly with temperature, which causes its thermal expansion coefficient to change sign near freezing temperatures. The density of water reaches a maximum at 4 °C and decreases as the temperature deviates. This phenomenon is investigated by experiment and numerical methods. Water is initially stagnant at 10 °C within a square cavity. It is differentially heated between the two vertical walls, where the left and right walls are held at 10 °C and 0 °C, respectively. The density anomaly manifests in its flow pattern. As the water is cooled at the right wall, the density increases, which accelerates the flow downward. As the flow develops and the water cools further, the decrease in density causes a recirculation current at the bottom right corner of the cavity.
Another case of this phenomenon is the event of super-cooling, where the water is cooled to below freezing temperatures but does not immediately begin to freeze. Under the same conditions as before, the flow is developed. Afterward, the temperature of the right wall is decreased to −10 °C. This causes the water at that wall to become supercooled, create a counter-clockwise flow, and initially overpower the warm current. This plume is caused by a delay in the nucleation of the ice. Once ice begins to form, the flow returns to a similar pattern as before and the solidification propagates gradually until the flow is redeveloped.
Nuclear reactors
In a nuclear reactor, natural circulation can be a design criterion. It is achieved by reducing turbulence and friction in the fluid flow (that is, minimizing head loss), and by providing a way to remove any inoperative pumps from the fluid path. Also, the reactor (as the heat source) must be physically lower than the steam generators or turbines (the heat sink). In this way, natural circulation will ensure that the fluid will continue to flow as long as the reactor is hotter than the heat sink, even when power cannot be supplied to the pumps. Notable examples are the S5G and S8G United States Naval reactors, which were designed to operate at a significant fraction of full power under natural circulation, quieting those propulsion plants. The S6G reactor cannot operate at power under natural circulation, but can use it to maintain emergency cooling while shut down.
By the nature of natural circulation, fluids do not typically move very fast, but this is not necessarily bad, as high flow rates are not essential to safe and effective reactor operation. In modern design nuclear reactors, flow reversal is almost impossible. All nuclear reactors, even ones designed to primarily use natural circulation as the main method of fluid circulation, have pumps that can circulate the fluid in the case that natural circulation is not sufficient.
Mathematical models of convection
A number of dimensionless terms have been derived to describe and predict convection, including the Archimedes number, Grashof number, Richardson number, and the Rayleigh number.
In cases of mixed convection (natural and forced occurring together) one would often like to know how much of the convection is due to external constraints, such as the fluid velocity in the pump, and how much is due to natural convection occurring in the system.
The relative magnitudes of the Grashof number and the square of the Reynolds number determine which form of convection dominates. If $\mathrm{Gr}/\mathrm{Re}^2 \gg 1$, forced convection may be neglected, whereas if $\mathrm{Gr}/\mathrm{Re}^2 \ll 1$, natural convection may be neglected. If the ratio $\mathrm{Gr}/\mathrm{Re}^2$, known as the Richardson number, is approximately one, then both forced and natural convection need to be taken into account.
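As a rough illustration of the criterion just described, the sketch below compares Gr with Re² to classify the regime. The cutoff values and function name are illustrative assumptions, not values from the article.

```python
def convection_regime(grashof, reynolds, lower=0.1, upper=10.0):
    """Classify mixed convection by the Richardson number Ri = Gr / Re^2.

    The cutoffs `lower` and `upper` are illustrative assumptions:
    Ri << 1  -> forced convection dominates (natural convection negligible)
    Ri >> 1  -> natural convection dominates (forced convection negligible)
    Ri ~ 1   -> both mechanisms matter.
    """
    ri = grashof / reynolds**2
    if ri < lower:
        return ri, "forced convection dominates"
    if ri > upper:
        return ri, "natural convection dominates"
    return ri, "mixed convection: account for both"

print(convection_regime(grashof=1e9, reynolds=1e4))  # Ri = 10: borderline, treat as mixed/natural
print(convection_regime(grashof=1e6, reynolds=1e4))  # Ri = 0.01: forced convection dominates
```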
Onset
The onset of natural convection is determined by the Rayleigh number (Ra). This dimensionless number is given by

$$\mathrm{Ra} = \frac{\Delta\rho \, g \, L^3}{D \, \mu}$$

where
$\Delta\rho$ is the difference in density between the two parcels of material that are mixing,
$g$ is the local gravitational acceleration,
$L$ is the characteristic length-scale of convection: the depth of the boiling pot, for example,
$D$ is the diffusivity of the characteristic that is causing the convection, and
$\mu$ is the dynamic viscosity.
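A minimal numerical sketch of the definition above follows; the example property values (for water in a shallow pot) are assumptions chosen only for illustration.

```python
def rayleigh_number(delta_rho, g, length, diffusivity, dynamic_viscosity):
    """Rayleigh number Ra = (delta_rho * g * L^3) / (D * mu), as defined above."""
    return delta_rho * g * length**3 / (diffusivity * dynamic_viscosity)

# Assumed, order-of-magnitude values for water heated in a 0.1 m deep pot:
ra = rayleigh_number(
    delta_rho=0.5,              # kg/m^3, density difference between hot and cold parcels
    g=9.81,                     # m/s^2, local gravitational acceleration
    length=0.1,                 # m, depth of the pot
    diffusivity=1.4e-7,         # m^2/s, thermal diffusivity of water
    dynamic_viscosity=1.0e-3,   # Pa*s, dynamic viscosity of water
)
# Well above the classic critical value (~1708 for Rayleigh-Benard between rigid plates),
# so convection would be expected to set in.
print(f"Ra ~ {ra:.2e}")
```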
Natural convection will be more likely and/or more rapid with a greater variation in density between the two fluids, a larger acceleration due to gravity that drives the convection, and/or a larger distance through the convecting medium. Convection will be less likely and/or less rapid with more rapid diffusion (thereby diffusing away the gradient that is causing the convection) and/or a more viscous (sticky) fluid.
For thermal convection due to heating from below, as described in the boiling pot above, the equation is modified for thermal expansion and thermal diffusivity. Density variations due to thermal expansion are given by:

$$\Delta\rho = \rho_0 \, \beta \, \Delta T$$

where
$\rho_0$ is the reference density, typically picked to be the average density of the medium,
$\beta$ is the coefficient of thermal expansion, and
$\Delta T$ is the temperature difference across the medium.

The general diffusivity, $D$, is redefined as a thermal diffusivity, $\alpha$.
Inserting these substitutions produces a Rayleigh number that can be used to predict thermal convection.
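Carrying out these substitutions explicitly (a short sketch of the step just described, using only the symbols defined above; the kinematic viscosity $\nu = \mu/\rho_0$ is introduced here for the familiar final form):

```latex
% Substituting \Delta\rho = \rho_0 \beta \Delta T and D = \alpha into Ra:
\mathrm{Ra}
  = \frac{\Delta\rho \, g \, L^{3}}{D \,\mu}
  = \frac{\rho_{0}\,\beta\,\Delta T\, g\, L^{3}}{\alpha\,\mu}
  = \frac{g\,\beta\,\Delta T\, L^{3}}{\nu\,\alpha},
\qquad \nu = \frac{\mu}{\rho_{0}} \ \text{(kinematic viscosity)}.
```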
Turbulence
The tendency of a particular naturally convective system towards turbulence relies on the Grashof number (Gr).
In very sticky, viscous fluids (large ν), fluid motion is restricted, and natural convection will be non-turbulent.
Following the treatment of the previous subsection, the typical fluid velocity is of the order of $\sqrt{g \beta \Delta T \, L}$, up to a numerical factor depending on the geometry of the system. Therefore, the Grashof number can be thought of as a Reynolds number with the velocity of natural convection replacing the velocity in the Reynolds number's formula. However, in practice, when referring to the Reynolds number, it is understood that one is considering forced convection, and the velocity is taken as the velocity dictated by external constraints (see below).
Behavior
The Grashof number can be formulated for natural convection occurring due to a concentration gradient, sometimes termed thermo-solutal convection. In this case, a concentration of hot fluid diffuses into a cold fluid, in much the same way that ink poured into a container of water diffuses to dye the entire space. Then:

$$\mathrm{Gr} = \frac{g \, \beta_c \, \Delta C \, L^3}{\nu^2},$$

where $\beta_c$ is the solutal (concentration) expansion coefficient, $\Delta C$ is the characteristic concentration difference, $L$ is the characteristic length and $\nu$ is the kinematic viscosity.
Natural convection is highly dependent on the geometry of the hot surface, various correlations exist in order to determine the heat transfer coefficient.
A general correlation that applies for a variety of geometries is
The value of f4(Pr) is calculated using the following formula
Nu is the Nusselt number; the values of Nu0 and the characteristic length used to calculate Re depend on the geometry.
Natural convection from a vertical plate
One example of natural convection is heat transfer from an isothermal vertical plate immersed in a fluid, causing the fluid to move parallel to the plate. This will occur in any system wherein the density of the moving fluid varies with position. These phenomena will only be of significance when the moving fluid is minimally affected by forced convection.
When the flow of fluid is a result of heating, the following correlations can be used, assuming the fluid is an ideal diatomic gas, is adjacent to a vertical plate at constant temperature, and the flow of the fluid is completely laminar.

$$\mathrm{Nu}_m = 0.478 \, \mathrm{Gr}^{0.25}$$

Mean Nusselt number: $\mathrm{Nu}_m = h_m L / k$
where
$h_m$ = mean heat-transfer coefficient applicable between the lower edge of the plate and any point at a distance L (W/(m²·K))
L = height of the vertical surface (m)
k = thermal conductivity (W/(m·K))
Grashof number:

$$\mathrm{Gr} = \frac{g \, L^3 \, (t_s - t_\infty)}{v^2 \, T}$$
where
g = gravitational acceleration (m/s2)
L = distance above the lower edge (m)
ts = temperature of the wall (K)
t∞ = fluid temperature outside the thermal boundary layer (K)
v = kinematic viscosity of the fluid (m2/s)
T = absolute temperature (K)
When the flow is turbulent different correlations involving the Rayleigh Number (a function of both the Grashof number and the Prandtl number) must be used.
Note that the above equation differs from the usual expression for the Grashof number because the value $\beta$ has been replaced by its approximation $1/T$, which applies for ideal gases only (a reasonable approximation for air at ambient pressure).
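The correlation above can be turned into a short calculation. The sketch below assumes the laminar, ideal-gas conditions stated in the text; the example plate height, temperatures and air properties are illustrative assumptions, and the film-temperature choice for evaluating properties is also an assumption.

```python
def grashof_ideal_gas(g, length, t_s, t_inf, nu, t_abs):
    """Gr = g * L^3 * (t_s - t_inf) / (nu^2 * T), with beta approximated by 1/T (ideal gas)."""
    return g * length**3 * (t_s - t_inf) / (nu**2 * t_abs)

def mean_nusselt_laminar(gr):
    """Mean Nusselt number from the correlation Nu_m = 0.478 * Gr^0.25 (laminar flow only)."""
    return 0.478 * gr**0.25

# Assumed example: a 0.5 m tall plate at 60 degC in still air at 20 degC
g, L = 9.81, 0.5
t_s, t_inf = 333.15, 293.15      # wall and far-field temperatures, K
t_film = 0.5 * (t_s + t_inf)     # film temperature used for properties (an assumption)
nu = 1.6e-5                      # m^2/s, kinematic viscosity of air (approximate)
k = 0.027                        # W/(m*K), thermal conductivity of air (approximate)

gr = grashof_ideal_gas(g, L, t_s, t_inf, nu, t_film)
nu_m = mean_nusselt_laminar(gr)
h_m = nu_m * k / L               # from Nu_m = h_m * L / k
print(f"Gr ~ {gr:.2e}, Nu_m ~ {nu_m:.1f}, h_m ~ {h_m:.1f} W/(m^2*K)")
```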
Pattern formation
Convection, especially Rayleigh–Bénard convection, where the convecting fluid is contained by two rigid horizontal plates, is a convenient example of a pattern-forming system.
When heat is fed into the system from one direction (usually below), at small values it merely diffuses (conducts) from below upward, without causing fluid flow. As the heat flow is increased, above a critical value of the Rayleigh number, the system undergoes a bifurcation from the stable conducting state to the convecting state, where bulk motion of the fluid due to heat begins. If fluid parameters other than density do not depend significantly on temperature, the flow profile is symmetric, with the same volume of fluid rising as falling. This is known as Boussinesq convection.
As the temperature difference between the top and bottom of the fluid becomes higher, significant differences in fluid parameters other than density may develop in the fluid due to temperature. An example of such a parameter is viscosity, which may begin to significantly vary horizontally across layers of fluid. This breaks the symmetry of the system, and generally changes the pattern of up- and down-moving fluid from stripes to hexagons, as seen at right. Such hexagons are one example of a convection cell.
As the Rayleigh number is increased even further above the value where convection cells first appear, the system may undergo other bifurcations, and other more complex patterns, such as spirals, may begin to appear.
See also
References
External links
Fluid mechanics
Physical phenomena
Electromagnetic wave equation
The electromagnetic wave equation is a second-order partial differential equation that describes the propagation of electromagnetic waves through a medium or in a vacuum. It is a three-dimensional form of the wave equation. The homogeneous form of the equation, written in terms of either the electric field $\mathbf{E}$ or the magnetic field $\mathbf{B}$, takes the form:

$$\left(v_{ph}^2 \nabla^2 - \frac{\partial^2}{\partial t^2}\right) \mathbf{E} = \mathbf{0}, \qquad \left(v_{ph}^2 \nabla^2 - \frac{\partial^2}{\partial t^2}\right) \mathbf{B} = \mathbf{0},$$
where
$v_{ph} = \dfrac{1}{\sqrt{\mu\varepsilon}}$ is the speed of light (i.e. phase velocity) in a medium with permeability $\mu$ and permittivity $\varepsilon$, and $\nabla^2$ is the Laplace operator. In a vacuum, $v_{ph} = c_0 = 299{,}792{,}458$ m/s, a fundamental physical constant. The electromagnetic wave equation derives from Maxwell's equations. In most older literature, $\mathbf{B}$ is called the magnetic flux density or magnetic induction. The wave equations imply that any electromagnetic wave must be a transverse wave, where the electric field $\mathbf{E}$ and the magnetic field $\mathbf{B}$ are both perpendicular to the direction of wave propagation.
The origin of the electromagnetic wave equation
In his 1865 paper titled A Dynamical Theory of the Electromagnetic Field, James Clerk Maxwell utilized the correction to Ampère's circuital law that he had made in part III of his 1861 paper On Physical Lines of Force. In Part VI of his 1864 paper titled Electromagnetic Theory of Light, Maxwell combined displacement current with some of the other equations of electromagnetism and he obtained a wave equation with a speed equal to the speed of light. He commented:
The agreement of the results seems to show that light and magnetism are affections of the same substance, and that light is an electromagnetic disturbance propagated through the field according to electromagnetic laws.
Maxwell's derivation of the electromagnetic wave equation has been replaced in modern physics education by a much less cumbersome method involving combining the corrected version of Ampère's circuital law with Faraday's law of induction.
To obtain the electromagnetic wave equation in a vacuum using the modern method, we begin with the modern 'Heaviside' form of Maxwell's equations. In a vacuum- and charge-free space, these equations are:

$$\nabla \cdot \mathbf{E} = 0, \qquad \nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t},$$
$$\nabla \cdot \mathbf{B} = 0, \qquad \nabla \times \mathbf{B} = \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}.$$
These are the general Maxwell's equations specialized to the case with charge and current both set to zero.
Taking the curl of the curl equations gives:

$$\nabla \times \left(\nabla \times \mathbf{E}\right) = -\frac{\partial}{\partial t}\left(\nabla \times \mathbf{B}\right) = -\mu_0 \varepsilon_0 \frac{\partial^2 \mathbf{E}}{\partial t^2},$$
$$\nabla \times \left(\nabla \times \mathbf{B}\right) = \mu_0 \varepsilon_0 \frac{\partial}{\partial t}\left(\nabla \times \mathbf{E}\right) = -\mu_0 \varepsilon_0 \frac{\partial^2 \mathbf{B}}{\partial t^2}.$$

We can use the vector identity

$$\nabla \times \left(\nabla \times \mathbf{V}\right) = \nabla \left(\nabla \cdot \mathbf{V}\right) - \nabla^2 \mathbf{V},$$

where $\mathbf{V}$ is any vector function of space. And

$$\nabla^2 \mathbf{V} = \nabla \cdot \left(\nabla \mathbf{V}\right),$$

where $\nabla \mathbf{V}$ is a dyadic which when operated on by the divergence operator $\nabla \cdot$ yields a vector. Since

$$\nabla \cdot \mathbf{E} = 0, \qquad \nabla \cdot \mathbf{B} = 0,$$

then the first term on the right in the identity vanishes and we obtain the wave equations:

$$\frac{1}{c_0^2} \frac{\partial^2 \mathbf{E}}{\partial t^2} - \nabla^2 \mathbf{E} = 0, \qquad \frac{1}{c_0^2} \frac{\partial^2 \mathbf{B}}{\partial t^2} - \nabla^2 \mathbf{B} = 0,$$

where

$$c_0 = \frac{1}{\sqrt{\mu_0 \varepsilon_0}} = 299{,}792{,}458 \ \text{m/s}$$

is the speed of light in free space.
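A one-line numerical check of this last relation (a sketch; the constants are the standard SI values, rounded):

```python
import math

mu_0 = 4 * math.pi * 1e-7      # vacuum permeability, H/m (the classical defined value, accurate to ~1e-10)
epsilon_0 = 8.8541878128e-12   # vacuum permittivity, F/m

c_0 = 1.0 / math.sqrt(mu_0 * epsilon_0)
print(f"c_0 = {c_0:,.0f} m/s")  # approximately 299,792,458 m/s, the speed of light in free space
```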
Covariant form of the homogeneous wave equation
These relativistic equations can be written in contravariant form as

$$\Box A^{\mu} = 0,$$

where the electromagnetic four-potential is

$$A^{\mu} = \left(\frac{\varphi}{c}, \mathbf{A}\right)$$

with the Lorenz gauge condition:

$$\partial_{\mu} A^{\mu} = 0,$$

and where

$$\Box = \nabla^2 - \frac{1}{c^2} \frac{\partial^2}{\partial t^2}$$

is the d'Alembert operator.
Homogeneous wave equation in curved spacetime
The electromagnetic wave equation is modified in two ways: the derivative is replaced with the covariant derivative and a new term that depends on the curvature appears:

$$- {A^{\alpha ; \beta}}_{; \beta} + {R^{\alpha}}_{\beta} A^{\beta} = 0,$$

where ${R^{\alpha}}_{\beta}$ is the Ricci curvature tensor and the semicolon indicates covariant differentiation.

The generalization of the Lorenz gauge condition in curved spacetime is assumed:

$${A^{\mu}}_{; \mu} = 0.$$
Inhomogeneous electromagnetic wave equation
Localized time-varying charge and current densities can act as sources of electromagnetic waves in a vacuum. Maxwell's equations can be written in the form of a wave equation with sources. The addition of sources to the wave equations makes the partial differential equations inhomogeneous.
Solutions to the homogeneous electromagnetic wave equation
The general solution to the electromagnetic wave equation is a linear superposition of waves of the form

$$\mathbf{E}(\mathbf{r}, t) = g(\phi(\mathbf{r}, t)) = g(\omega t - \mathbf{k} \cdot \mathbf{r}), \qquad \mathbf{B}(\mathbf{r}, t) = g(\phi(\mathbf{r}, t)) = g(\omega t - \mathbf{k} \cdot \mathbf{r}),$$

for virtually any well-behaved function $g$ of dimensionless argument $\phi$, where $\omega$ is the angular frequency (in radians per second), and $\mathbf{k}$ is the wave vector (in radians per meter).
Although the function can be and often is a monochromatic sine wave, it does not have to be sinusoidal, or even periodic. In practice, cannot have infinite periodicity because any real electromagnetic wave must always have a finite extent in time and space. As a result, and based on the theory of Fourier decomposition, a real wave must consist of the superposition of an infinite set of sinusoidal frequencies.
In addition, for a valid solution, the wave vector and the angular frequency are not independent; they must adhere to the dispersion relation:

$$k = |\mathbf{k}| = \frac{\omega}{c} = \frac{2\pi}{\lambda},$$

where $k$ is the wavenumber and $\lambda$ is the wavelength. The variable $c$ can only be used in this equation when the electromagnetic wave is in a vacuum.
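A small sketch applying the vacuum dispersion relation; the chosen frequency is an arbitrary example, not a value from the article.

```python
import math

C_0 = 299_792_458.0  # speed of light in vacuum, m/s

def vacuum_wave_numbers(frequency_hz):
    """Return (angular frequency, wavenumber, wavelength) for an EM wave in vacuum."""
    omega = 2 * math.pi * frequency_hz   # rad/s
    k = omega / C_0                      # rad/m, from the dispersion relation k = omega / c
    wavelength = 2 * math.pi / k         # m, equivalently c / f
    return omega, k, wavelength

omega, k, lam = vacuum_wave_numbers(100e6)   # a 100 MHz radio wave (assumed example)
print(f"omega = {omega:.3e} rad/s, k = {k:.3f} rad/m, wavelength = {lam:.2f} m")
```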
Monochromatic, sinusoidal steady-state
The simplest set of solutions to the wave equation result from assuming sinusoidal waveforms of a single frequency in separable form:

$$\mathbf{E}(\mathbf{r}, t) = \Re\left\{ \mathbf{E}(\mathbf{r}) \, e^{i\omega t} \right\}, \qquad \mathbf{B}(\mathbf{r}, t) = \Re\left\{ \mathbf{B}(\mathbf{r}) \, e^{i\omega t} \right\},$$

where
$i$ is the imaginary unit,
$\omega = 2\pi f$ is the angular frequency in radians per second,
$f$ is the frequency in hertz, and
$e^{i\omega t} = \cos(\omega t) + i \sin(\omega t)$ is Euler's formula.
Plane wave solutions
Consider a plane defined by a unit normal vector

$$\mathbf{n} = \frac{\mathbf{k}}{k}.$$

Then planar traveling wave solutions of the wave equations are

$$\mathbf{E}(\mathbf{r}, t) = \mathbf{E}_0 \, g(\omega t - \mathbf{k} \cdot \mathbf{r}), \qquad \mathbf{B}(\mathbf{r}, t) = \mathbf{B}_0 \, g(\omega t - \mathbf{k} \cdot \mathbf{r}),$$

where $\mathbf{r}$ is the position vector (in meters).

These solutions represent planar waves traveling in the direction of the normal vector $\mathbf{n}$. If we define the $z$ direction as the direction of $\mathbf{n}$, and the $x$ direction as the direction of $\mathbf{E}$, then by Faraday's law the magnetic field lies in the $y$ direction and is related to the electric field by the relation

$$\mathbf{B} = \frac{1}{c} \, \mathbf{n} \times \mathbf{E}.$$
Because the divergence of the electric and magnetic fields is zero, there are no fields in the direction of propagation.
This solution is the linearly polarized solution of the wave equations. There are also circularly polarized solutions in which the fields rotate about the normal vector.
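The transversality and the E–B relation described above can be checked numerically. The sketch below constructs a linearly polarized plane wave propagating along z; all numeric values are assumptions chosen only for illustration.

```python
import numpy as np

C_0 = 299_792_458.0  # speed of light in vacuum, m/s

def plane_wave_fields(e0, k_vec, omega, r, t):
    """E and B of a linearly polarized plane wave E = E0 * cos(omega*t - k.r).

    e0    : electric field amplitude vector (V/m), assumed perpendicular to k_vec
    k_vec : wave vector (rad/m)
    Returns (E, B) at position r and time t, using B = (n x E) / c with n = k/|k|.
    """
    n_hat = k_vec / np.linalg.norm(k_vec)
    phase = omega * t - np.dot(k_vec, r)
    E = np.asarray(e0, dtype=float) * np.cos(phase)
    B = np.cross(n_hat, E) / C_0
    return E, B

# Assumed example: a 1 V/m wave polarized along x, travelling along z at 100 MHz
omega = 2 * np.pi * 100e6
k_vec = np.array([0.0, 0.0, omega / C_0])
E, B = plane_wave_fields([1.0, 0.0, 0.0], k_vec, omega, r=np.zeros(3), t=1e-9)

print(np.dot(E, k_vec), np.dot(B, k_vec))          # both ~0: the fields are transverse
print(np.dot(E, B))                                 # ~0: E and B are mutually perpendicular
print(np.linalg.norm(B) * C_0, np.linalg.norm(E))   # |B| * c equals |E|
```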
Spectral decomposition
Because of the linearity of Maxwell's equations in a vacuum, solutions can be decomposed into a superposition of sinusoids. This is the basis for the Fourier transform method for the solution of differential equations. The sinusoidal solution to the electromagnetic wave equation takes the form

$$\mathbf{E}(\mathbf{r}, t) = \mathbf{E}_0 \cos(\omega t - \mathbf{k} \cdot \mathbf{r} + \phi_0), \qquad \mathbf{B}(\mathbf{r}, t) = \mathbf{B}_0 \cos(\omega t - \mathbf{k} \cdot \mathbf{r} + \phi_0),$$

where
$t$ is time (in seconds),
$\omega$ is the angular frequency (in radians per second),
$\mathbf{k}$ is the wave vector (in radians per meter), and
$\phi_0$ is the phase angle (in radians).
The wave vector is related to the angular frequency by

$$k = |\mathbf{k}| = \frac{\omega}{c} = \frac{2\pi}{\lambda},$$

where $k$ is the wavenumber and $\lambda$ is the wavelength.
The electromagnetic spectrum is a plot of the field magnitudes (or energies) as a function of wavelength.
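As a toy illustration of decomposing a field into sinusoids, the sketch below Fourier-analyses a short sampled pulse; the signal parameters (carrier, envelope, sampling rate) are arbitrary assumptions, not data from the article.

```python
import numpy as np

# Sample a Gaussian-modulated sinusoidal pulse (a stand-in for a finite wave train).
fs = 1e10                       # sampling rate, Hz (assumed)
t = np.arange(0, 2e-7, 1 / fs)  # 200 ns record
carrier = 1e9                   # 1 GHz carrier frequency (assumed)
pulse = np.exp(-((t - 1e-7) / 2e-8) ** 2) * np.cos(2 * np.pi * carrier * t)

# Discrete Fourier transform: the pulse is a superposition of many sinusoidal frequencies.
spectrum = np.fft.rfft(pulse)
freqs = np.fft.rfftfreq(len(pulse), d=1 / fs)

peak = freqs[np.argmax(np.abs(spectrum))]
print(f"Spectral peak near {peak / 1e9:.2f} GHz")  # close to the 1 GHz carrier, with finite bandwidth
```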
Multipole expansion
Assuming monochromatic fields varying in time as $e^{-i\omega t}$, if one uses Maxwell's equations to eliminate $\mathbf{E}$, the electromagnetic wave equation reduces to the Helmholtz equation for $\mathbf{B}$:

$$\left(\nabla^2 + k^2\right) \mathbf{B} = 0,$$

with $k = \omega/c$ as given above. Alternatively, one can eliminate $\mathbf{B}$ in favor of $\mathbf{E}$ to obtain:

$$\left(\nabla^2 + k^2\right) \mathbf{E} = 0.$$
A generic electromagnetic field with frequency can be written as a sum of solutions to these two equations. The three-dimensional solutions of the Helmholtz Equation can be expressed as expansions in spherical harmonics with coefficients proportional to the spherical Bessel functions. However, applying this expansion to each vector component of or will give solutions that are not generically divergence-free, and therefore require additional restrictions on the coefficients.
The multipole expansion circumvents this difficulty by expanding not $\mathbf{E}$ or $\mathbf{B}$, but $\mathbf{r} \cdot \mathbf{E}$ or $\mathbf{r} \cdot \mathbf{B}$ into spherical harmonics. These expansions still solve the original Helmholtz equations for $\mathbf{E}$ and $\mathbf{B}$ because for a divergence-free field $\mathbf{F}$, $\nabla^2 (\mathbf{r} \cdot \mathbf{F}) = \mathbf{r} \cdot \left(\nabla^2 \mathbf{F}\right)$. The resulting expressions for a generic electromagnetic field are:
where and are the electric multipole fields of order (l, m), and and are the corresponding magnetic multipole fields, and and are the coefficients of the expansion. The multipole fields are given by
where are the spherical Hankel functions, and are determined by boundary conditions, and
are vector spherical harmonics normalized so that
The multipole expansion of the electromagnetic field finds application in a number of problems involving spherical symmetry, for example antenna radiation patterns or nuclear gamma decay. In these applications, one is often interested in the power radiated in the far field. In these regions, the $\mathbf{E}$ and $\mathbf{B}$ fields asymptotically approach
The angular distribution of the time-averaged radiated power is then given by
See also
Theory and experiment
Maxwell's equations
Wave equation
Partial differential equation
Computational electromagnetics
Electromagnetic radiation
Charge conservation
Light
Electromagnetic spectrum
Optics
Special relativity
General relativity
Inhomogeneous electromagnetic wave equation
Photon polarization
Larmor power formula
Applications
Rainbow
Cosmic microwave background
Laser
Laser fusion
Photography
X-ray
X-ray crystallography
Radar
Radio wave
Optical computing
Microwave
Holography
Microscope
Telescope
Gravitational lens
Black-body radiation
Biographies
André-Marie Ampère
Albert Einstein
Michael Faraday
Heinrich Hertz
Oliver Heaviside
James Clerk Maxwell
Hendrik Lorentz
Notes
Further reading
Electromagnetism
Journal articles
Maxwell, James Clerk, "A Dynamical Theory of the Electromagnetic Field", Philosophical Transactions of the Royal Society of London 155, 459-512 (1865). (This article accompanied a December 8, 1864 presentation by Maxwell to the Royal Society.)
Undergraduate-level textbooks
Edward M. Purcell, Electricity and Magnetism (McGraw-Hill, New York, 1985).
Hermann A. Haus and James R. Melcher, Electromagnetic Fields and Energy (Prentice-Hall, 1989).
Banesh Hoffmann, Relativity and Its Roots (Freeman, New York, 1983).
David H. Staelin, Ann W. Morgenthaler, and Jin Au Kong, Electromagnetic Waves (Prentice-Hall, 1994).
Charles F. Stevens, The Six Core Theories of Modern Physics (MIT Press, 1995).
Markus Zahn, Electromagnetic Field Theory: A Problem Solving Approach (John Wiley & Sons, 1979).
Graduate-level textbooks
Landau, L. D., The Classical Theory of Fields (Course of Theoretical Physics: Volume 2) (Butterworth-Heinemann: Oxford, 1987).
Charles W. Misner, Kip S. Thorne, John Archibald Wheeler, Gravitation (1970), W.H. Freeman, New York. (Provides a treatment of Maxwell's equations in terms of differential forms.)
Vector calculus
P. C. Matthews, Vector Calculus (Springer, 1998).
H. M. Schey, Div, Grad, Curl, and All That: An Informal Text on Vector Calculus, 4th edition (W. W. Norton & Company, 2005).
Electrodynamics
Electromagnetic radiation
Electromagnetism
Hyperbolic partial differential equations
Mathematical physics
Equations of physics
Armature (electrical)
In electrical engineering, the armature is the winding (or set of windings) of an electric machine which carries alternating current. The armature windings conduct AC even on DC machines, due to the commutator action (which periodically reverses current direction) or due to electronic commutation, as in brushless DC motors. The armature can be on either the rotor (rotating part) or the stator (stationary part), depending on the type of electric machine.
The armature windings interact with the magnetic field (magnetic flux) in the air-gap; the magnetic field is generated either by permanent magnets, or electromagnets formed by a conducting coil.
The armature must carry current, so it is always a conductor or a conductive coil, oriented normal to both the field and to the direction of motion, torque (rotating machine), or force (linear machine). The armature's role is twofold. The first is to carry current across the field, thus creating shaft torque in a rotating machine or force in a linear machine. The second role is to generate an electromotive force (EMF).
In the armature, an electromotive force is created by the relative motion of the armature and the field. When the machine or motor is used as a motor, this EMF opposes the armature current, and the armature converts electrical power to mechanical power in the form of torque, and transfers it via the shaft. When the machine is used as a generator, the armature EMF drives the armature current, and the shaft's movement is converted to electrical power. In an induction generator, generated power is drawn from the stator.
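The two roles just described, carrying current and generating an EMF, can be illustrated with the elementary DC-machine circuit relation between supply voltage, back-EMF and armature resistance. The sketch below and its numbers are assumptions for illustration, not data from the article.

```python
def motor_power_balance(supply_voltage, back_emf, armature_resistance):
    """Split electrical input into converted mechanical power and armature copper loss.

    Assumes the elementary DC machine model: when motoring, I_a = (V - E) / R_a,
    where E is the back-EMF induced in the armature winding by its motion through the field.
    """
    i_a = (supply_voltage - back_emf) / armature_resistance
    p_input = supply_voltage * i_a           # electrical power drawn, W
    p_converted = back_emf * i_a             # power converted to mechanical form (shaft torque x speed), W
    p_copper_loss = i_a**2 * armature_resistance  # resistive loss in the armature winding, W
    return i_a, p_input, p_converted, p_copper_loss

# Assumed example: 120 V supply, 110 V back-EMF, 0.5 ohm armature resistance
i_a, p_in, p_mech, p_loss = motor_power_balance(120.0, 110.0, 0.5)
print(f"I_a = {i_a:.1f} A, input = {p_in:.0f} W, converted = {p_mech:.0f} W, loss = {p_loss:.0f} W")
```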
A growler is used to check the armature for short and open circuits and leakages to ground.
Terminology
The word armature was first used in its electrical sense, i.e. as the keeper of a magnet, in the mid-19th century.
The parts of an alternator or related equipment can be expressed in either mechanical terms or electrical terms. Although distinctly separate these two sets of terminology are frequently used interchangeably or in combinations that include one mechanical term and one electrical term. This may cause confusion when working with compound machines like brushless alternators, or in conversation among people who are accustomed to work with differently configured machinery.
In most generators, the field magnet is rotating, and is part of the rotor, while the armature is stationary, and is part of the stator. Both motors and generators can be built either with a stationary armature and a rotating field or a rotating armature and a stationary field. The pole piece of a permanent magnet or electromagnet and the moving, iron part of a solenoid, especially if the latter acts as a switch or relay, may also be referred to as armatures.
Armature reaction in a DC machine
In a DC machine, two sources of magnetic fluxes are present; 'armature flux' and 'main field flux'. The effect of armature flux on the main field flux is called "armature reaction". The armature reaction changes the distribution of the magnetic field, which affects the operation of the machine. The effects of the armature flux can be offset by adding a compensating winding to the main poles, or in some machines adding intermediate magnetic poles, connected in the armature circuit.
Armature reaction is essential in amplidyne rotating amplifiers.
Armature reaction drop is the effect of a magnetic field on the distribution of the flux under main poles of a generator.
Since an armature is wound with coils of wire, a magnetic field is set up in the armature whenever a current flows in the coils. This field is at right angles to the generator field and is called cross magnetization of the armature. The effect of the armature field is to distort the generator field and shift the neutral plane. The neutral plane is the position where the armature windings are moving parallel to the magnetic flux lines; that is why an axis lying in this plane is called the magnetic neutral axis (MNA). This effect is known as armature reaction and is proportional to the current flowing in the armature coils.
The geometrical neutral axis (GNA) is the axis that bisects the angle between the centre line of adjacent poles. The magnetic neutral axis (MNA) is the axis drawn perpendicular to the mean direction of the flux passing through the centre of the armature. No e.m.f. is produced in the armature conductors along this axis because then they cut no flux. When no current is there in the armature conductors, the MNA coincides with GNA.
The brushes of a generator must be set in the neutral plane; that is, they must contact segments of the commutator that are connected to armature coils having no induced emf. If the brushes were contacting commutator segments outside the neutral plane, they would short-circuit "live" coils and cause arcing and loss of power.
Without armature reaction, the magnetic neutral axis (MNA) would coincide with geometrical neutral axis (GNA). Armature reaction causes the neutral plane to shift in the direction of rotation, and if the brushes are in the neutral plane at no load, that is, when no armature current is flowing, they will not be in the neutral plane when armature current is flowing. For this reason it is desirable to incorporate a corrective system into the generator design.
There are two principal methods by which the effect of armature reaction is overcome. The first method is to shift the position of the brushes so that they are in the neutral plane when the generator is producing its normal load current. In the other method, special field poles, called interpoles, are installed in the generator to counteract the effect of armature reaction.
The brush-setting method is satisfactory in installations in which the generator operates under a fairly constant load. If the load varies to a marked degree, the neutral plane will shift proportionately, and the brushes will not be in the correct position at all times. The brush-setting method is the most common means of correcting for armature reaction in small generators (those producing approximately 1,000 W or less). Larger generators require the use of interpoles.
Winding circuits
Coils of the winding are distributed over the entire surface of the air gap, which may be the rotor or the stator of the machine. In a "lap" winding, there are as many current paths between the brush (or line) connections as there are poles in the field winding. In a "wave" winding, there are only two paths, and there are as many coils in series as half the number of poles. So, for a given rating of machine, a wave winding is more suitable for high voltages and low currents, while a lap winding suits large currents at lower voltages.
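The difference in parallel paths can be made concrete with a tiny helper. The simplex-winding rule used here (paths equal to the number of poles for lap, two for wave) follows the paragraph above; the example pole count and armature current are invented for illustration.

```python
def parallel_paths(winding, poles):
    """Number of parallel armature current paths for a simplex winding."""
    if winding == "lap":
        return poles   # as many paths as field poles
    if winding == "wave":
        return 2       # always two paths, regardless of pole count
    raise ValueError("winding must be 'lap' or 'wave'")

def current_per_path(total_armature_current, winding, poles):
    """Current carried by each parallel path of the winding."""
    return total_armature_current / parallel_paths(winding, poles)

# Assumed example: a 6-pole machine carrying 120 A of total armature current.
# The lap winding spreads the current over more paths (lower current per conductor),
# while the wave winding puts more coils in series per path (higher voltage per path).
for w in ("lap", "wave"):
    print(w, parallel_paths(w, 6), "paths,", current_per_path(120, w, 6), "A per path")
```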
Windings are held in slots in the rotor or armature covered by stator magnets. The exact distribution of the windings and selection of the number of slots per pole of the field greatly influences the design of the machine and its performance, affecting such factors as commutation in a DC machine or the waveform of an AC machine.
Winding materials
Armature wiring is made from copper or aluminum. Copper armature wiring enhances electrical efficiencies due to its higher electrical conductivity. Aluminum armature wiring is lighter and less expensive than copper.
See also
Balancing machine
Commutator
References
External links
Example Diagram of an Armature Coil and data used to specify armature coil parameters
How to Check a Motor Armature for Damaged Windings
Electromagnetic components
Electric motors
Euler–Lagrange equation
In the calculus of variations and classical mechanics, the Euler–Lagrange equations are a system of second-order ordinary differential equations whose solutions are stationary points of the given action functional. The equations were discovered in the 1750s by Swiss mathematician Leonhard Euler and Italian mathematician Joseph-Louis Lagrange.
Because a differentiable functional is stationary at its local extrema, the Euler–Lagrange equation is useful for solving optimization problems in which, given some functional, one seeks the function minimizing or maximizing it. This is analogous to Fermat's theorem in calculus, stating that at any point where a differentiable function attains a local extremum its derivative is zero.
In Lagrangian mechanics, according to Hamilton's principle of stationary action, the evolution of a physical system is described by the solutions to the Euler equation for the action of the system. In this context Euler equations are usually called Lagrange equations. In classical mechanics, it is equivalent to Newton's laws of motion; indeed, the Euler-Lagrange equations will produce the same equations as Newton's Laws. This is particularly useful when analyzing systems whose force vectors are particularly complicated. It has the advantage that it takes the same form in any system of generalized coordinates, and it is better suited to generalizations. In classical field theory there is an analogous equation to calculate the dynamics of a field.
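To make the equivalence with Newton's second law concrete, the following sketch (using the SymPy library) derives the Euler–Lagrange equation for a particle on a line with L = ½m q̇² − V(q) and recovers m q̈ = −V′(q). The symbol names and the abstract potential V are placeholders assumed for illustration.

```python
import sympy as sp

t, m = sp.symbols('t m')
q = sp.Function('q')      # the path q(t)
V = sp.Function('V')      # arbitrary potential energy, an assumed placeholder

x, v = sp.symbols('x v')  # stand-ins for q(t) and dq/dt when taking partial derivatives
L = sp.Rational(1, 2) * m * v**2 - V(x)   # Lagrangian L(x, v) = 1/2 m v^2 - V(x)

# Partial derivatives of L, then substitute the actual path q(t) back in
on_path = {x: q(t), v: sp.diff(q(t), t)}
dL_dv = sp.diff(L, v).subs(on_path)
dL_dx = sp.diff(L, x).subs(on_path)

# Euler-Lagrange equation: d/dt (dL/dv) - dL/dx = 0
el = sp.Eq(sp.diff(dL_dv, t) - dL_dx, 0)
print(el)   # an equation equivalent to m*q''(t) = -V'(q(t)), i.e. Newton's second law
```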
History
The Euler–Lagrange equation was developed in the 1750s by Euler and Lagrange in connection with their studies of the tautochrone problem. This is the problem of determining a curve on which a weighted particle will fall to a fixed point in a fixed amount of time, independent of the starting point.
Lagrange solved this problem in 1755 and sent the solution to Euler. Both further developed Lagrange's method and applied it to mechanics, which led to the formulation of Lagrangian mechanics. Their correspondence ultimately led to the calculus of variations, a term coined by Euler himself in 1766.
Statement
Let $(X, L)$ be a real dynamical system with $n$ degrees of freedom. Here $X$ is the configuration space and $L = L(t, \boldsymbol{q}, \boldsymbol{v})$ the Lagrangian, i.e. a smooth real-valued function such that $\boldsymbol{q} \in X$, and $\boldsymbol{v}$ is an $n$-dimensional "vector of speed". (For those familiar with differential geometry, $X$ is a smooth manifold, and $L \colon \mathbb{R}_t \times TX \to \mathbb{R}$, where $TX$ is the tangent bundle of $X$.)

Let $\mathcal{P}(a, b, \boldsymbol{x}_a, \boldsymbol{x}_b)$ be the set of smooth paths $\boldsymbol{q} \colon [a, b] \to X$ for which $\boldsymbol{q}(a) = \boldsymbol{x}_a$ and $\boldsymbol{q}(b) = \boldsymbol{x}_b$.

The action functional $S \colon \mathcal{P}(a, b, \boldsymbol{x}_a, \boldsymbol{x}_b) \to \mathbb{R}$ is defined via

$$S[\boldsymbol{q}] = \int_a^b L\big(t, \boldsymbol{q}(t), \dot{\boldsymbol{q}}(t)\big)\, dt.$$

A path $\boldsymbol{q} \in \mathcal{P}(a, b, \boldsymbol{x}_a, \boldsymbol{x}_b)$ is a stationary point of $S$ if and only if

$$\frac{\partial L}{\partial q^i}\big(t, \boldsymbol{q}(t), \dot{\boldsymbol{q}}(t)\big) - \frac{d}{dt} \frac{\partial L}{\partial \dot{q}^i}\big(t, \boldsymbol{q}(t), \dot{\boldsymbol{q}}(t)\big) = 0, \qquad i = 1, \dots, n.$$

Here, $\dot{\boldsymbol{q}}(t)$ is the time derivative of $\boldsymbol{q}(t).$ When we say stationary point, we mean a stationary point of $S$ with respect to any small perturbation in $\boldsymbol{q}$. See proofs below for more rigorous detail.
Example
A standard example is finding the real-valued function y(x) on the interval [a, b], such that y(a) = c and y(b) = d, for which the path length along the curve traced by y is as short as possible.
The path length along the curve is
$$ s = \int_a^b \sqrt{1 + y'(x)^2}\, dx, $$
the integrand function being $L(x, y, y') = \sqrt{1 + y'(x)^2}$.
The partial derivatives of L are:
$$ \frac{\partial L(x, y, y')}{\partial y'} = \frac{y'(x)}{\sqrt{1 + y'(x)^2}} \quad \text{and} \quad \frac{\partial L(x, y, y')}{\partial y} = 0. $$
By substituting these into the Euler–Lagrange equation, we obtain
$$ \frac{d}{dx} \frac{y'(x)}{\sqrt{1 + y'(x)^2}} = 0, $$
that is, $y'/\sqrt{1 + y'^2}$ — and hence $y'$ itself — must be constant; the function $y$ therefore has a constant first derivative, and thus its graph is a straight line.
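The same computation can be checked symbolically. The sketch below uses SymPy, with a placeholder symbol u standing in for y′ so that the partial derivative ∂L/∂y′ can be taken before the actual derivative is substituted back in (the variable names are arbitrary choices for this illustration):

```python
import sympy as sp

x, u = sp.symbols('x u')
y = sp.Function('y')

# Arc-length integrand L(x, y, y') = sqrt(1 + y'^2), with u standing in for y'
L = sp.sqrt(1 + u**2)

dL_dy = sp.Integer(0)                             # L does not depend on y itself
dL_dyp = sp.diff(L, u).subs(u, y(x).diff(x))      # dL/dy' evaluated along the path

# Euler–Lagrange expression: dL/dy - d/dx (dL/dy')
EL = dL_dy - sp.diff(dL_dyp, x)
print(sp.simplify(EL))
# Output is proportional to -y''(x) / (1 + y'(x)**2)**(3/2),
# so EL = 0 forces y'' = 0: a straight line.
```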
Generalizations
Single function of single variable with higher derivatives
The stationary values of the functional
$$ I[f] = \int_{x_0}^{x_1} \mathcal{L}(x, f, f', f'', \dots, f^{(k)})\, dx; \qquad f' := \frac{df}{dx}, \ \ f^{(k)} := \frac{d^k f}{dx^k} $$
can be obtained from the Euler–Lagrange equation
$$ \frac{\partial \mathcal{L}}{\partial f} - \frac{d}{dx}\left(\frac{\partial \mathcal{L}}{\partial f'}\right) + \frac{d^2}{dx^2}\left(\frac{\partial \mathcal{L}}{\partial f''}\right) - \dots + (-1)^k \frac{d^k}{dx^k}\left(\frac{\partial \mathcal{L}}{\partial f^{(k)}}\right) = 0 $$
under fixed boundary conditions for the function itself as well as for the first $k-1$ derivatives (i.e. for all $f^{(i)}$, $i \in \{0, \dots, k-1\}$). The endpoint values of the highest derivative $f^{(k)}$ remain flexible.
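As a concrete instance of this higher-derivative form, a Lagrangian that depends only on the second derivative, such as the assumed example L = (f'')^2, reduces the equation above to the fourth-order condition f'''' = 0, whose solutions are cubic polynomials. A small SymPy sketch (again with a placeholder symbol standing in for the derivative argument; the Lagrangian is an illustrative choice, not one taken from the article):

```python
import sympy as sp

x, u2 = sp.symbols('x u2')
f = sp.Function('f')

# Illustrative Lagrangian depending only on the second derivative: L = (f'')^2
L = u2**2

# Euler–Lagrange terms: dL/df - d/dx(dL/df') + d^2/dx^2(dL/df'')
term_f = sp.Integer(0)                                        # L has no explicit f
term_f1 = sp.Integer(0)                                       # ...and no f'
term_f2 = sp.diff(sp.diff(L, u2).subs(u2, f(x).diff(x, 2)), x, 2)

print(term_f - term_f1 + term_f2)   # 2*Derivative(f(x), (x, 4)); hence f'''' = 0
```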
Several functions of single variable with single derivative
If the problem involves finding several functions $(f_1, f_2, \dots, f_m)$ of a single independent variable $x$ that define an extremum of the functional
$$ I[f_1, f_2, \dots, f_m] = \int_{x_0}^{x_1} \mathcal{L}(x, f_1, f_2, \dots, f_m, f_1', f_2', \dots, f_m')\, dx $$
then the corresponding Euler–Lagrange equations are
$$ \frac{\partial \mathcal{L}}{\partial f_i} - \frac{d}{dx}\left(\frac{\partial \mathcal{L}}{\partial f_i'}\right) = 0, \qquad i = 1, 2, \dots, m. $$
Single function of several variables with single derivative
A multi-dimensional generalization comes from considering a function on $n$ variables. If $\Omega$ is some surface, then
$$ I[f] = \int_\Omega \mathcal{L}(x_1, \dots, x_n, f, f_{x_1}, \dots, f_{x_n})\, d\mathbf{x} $$
is extremized only if $f$ satisfies the partial differential equation
$$ \frac{\partial \mathcal{L}}{\partial f} - \sum_{i=1}^n \frac{\partial}{\partial x_i}\left(\frac{\partial \mathcal{L}}{\partial f_{x_i}}\right) = 0. $$
When $n = 2$ and the functional $I$ is the energy functional, this leads to the soap-film minimal surface problem.
Several functions of several variables with single derivative
If there are several unknown functions $f_1, \dots, f_m$ to be determined and several variables $x_1, \dots, x_n$ such that
$$ I[f_1, \dots, f_m] = \int_\Omega \mathcal{L}(x_1, \dots, x_n, f_1, \dots, f_m, f_{1,x_1}, \dots, f_{1,x_n}, \dots, f_{m,x_1}, \dots, f_{m,x_n})\, d\mathbf{x} $$
the system of Euler–Lagrange equations is
$$ \frac{\partial \mathcal{L}}{\partial f_j} - \sum_{i=1}^n \frac{\partial}{\partial x_i}\left(\frac{\partial \mathcal{L}}{\partial f_{j,x_i}}\right) = 0, \qquad j = 1, 2, \dots, m. $$
Single function of two variables with higher derivatives
If there is a single unknown function $f$ to be determined that is dependent on two variables $x_1$ and $x_2$ and if the functional depends on higher derivatives of $f$ up to $n$-th order such that
$$ I[f] = \int_\Omega \mathcal{L}(x_1, x_2, f, f_{,1}, f_{,2}, f_{,11}, f_{,12}, f_{,22}, \dots, f_{,22\dots 2})\, d\mathbf{x} $$
then the Euler–Lagrange equation is
$$ \frac{\partial \mathcal{L}}{\partial f} - \frac{\partial}{\partial x_1}\left(\frac{\partial \mathcal{L}}{\partial f_{,1}}\right) - \frac{\partial}{\partial x_2}\left(\frac{\partial \mathcal{L}}{\partial f_{,2}}\right) + \frac{\partial^2}{\partial x_1^2}\left(\frac{\partial \mathcal{L}}{\partial f_{,11}}\right) + \frac{\partial^2}{\partial x_1 \partial x_2}\left(\frac{\partial \mathcal{L}}{\partial f_{,12}}\right) + \frac{\partial^2}{\partial x_2^2}\left(\frac{\partial \mathcal{L}}{\partial f_{,22}}\right) - \dots + (-1)^n \frac{\partial^n}{\partial x_2^n}\left(\frac{\partial \mathcal{L}}{\partial f_{,22\dots 2}}\right) = 0 $$
which can be represented shortly as:
$$ \frac{\partial \mathcal{L}}{\partial f} + \sum_{j=1}^n \sum_{\mu_1 \leq \dots \leq \mu_j} (-1)^j \frac{\partial^j}{\partial x_{\mu_1} \dots \partial x_{\mu_j}}\left(\frac{\partial \mathcal{L}}{\partial f_{,\mu_1 \dots \mu_j}}\right) = 0 $$
wherein $\mu_1, \dots, \mu_j$ are indices that span the number of variables, that is, here they go from 1 to 2. Here summation over the $\mu_1, \dots, \mu_j$ indices is only over $\mu_1 \leq \mu_2 \leq \dots \leq \mu_j$ in order to avoid counting the same partial derivative multiple times, for example $f_{,12} = f_{,21}$ appears only once in the previous equation.
Several functions of several variables with higher derivatives
If there are p unknown functions fi to be determined that are dependent on m variables x1 ... xm and if the functional depends on higher derivatives of the fi up to n-th order such that
where are indices that span the number of variables, that is they go from 1 to m. Then the Euler–Lagrange equation is
where the summation over the is avoiding counting the same derivative several times, just as in the previous subsection. This can be expressed more compactly as
Generalization to manifolds
Let be a smooth manifold, and let denote the space of smooth functions . Then, for functionals of the form
where is the Lagrangian, the statement is equivalent to the statement that, for all , each coordinate frame trivialization of a neighborhood of yields the following equations:
Euler-Lagrange equations can also be written in a coordinate-free form as
where is the canonical momenta 1-form corresponding to the Lagrangian . The vector field generating time translations is denoted by and the Lie derivative is denoted by . One can use local charts in which and and use coordinate expressions for the Lie derivative to see equivalence with coordinate expressions of the Euler Lagrange equation. The coordinate free form is particularly suitable for geometrical interpretation of the Euler Lagrange equations.
See also
Lagrangian mechanics
Hamiltonian mechanics
Analytical mechanics
Beltrami identity
Functional derivative
Notes
References
Roubicek, T.: Calculus of Variations. Chap. 17 in: Mathematical Tools for Physicists (ed. M. Grinfeld), J. Wiley, Weinheim, 2014, pp. 551–588.
Eponymous equations of mathematics
Eponymous equations of physics
Ordinary differential equations
Partial differential equations
Calculus of variations
Articles containing proofs
Leonhard Euler | 0.766534 | 0.997823 | 0.764866 |
Maggie Aderin-Pocock | Dame Margaret Ebunoluwa Aderin-Pocock (; born 9 March 1968) is a British space scientist and science educator. She is an honorary research associate of University College London's Department of Physics and Astronomy, and has been the chancellor of the University of Leicester since 1 March 2023. Since February 2014, she has co-presented the long-running astronomy television programme The Sky at Night with Chris Lintott. In 2020 she was awarded the Institute of Physics William Thomson, Lord Kelvin Medal and Prize for her public engagement in physics. She is the first black woman to win a gold medal in the Physics News Award and she served as the president of the British Science Association from 2021 to 2022.
Early life and education
Margaret Ebunoluwa Aderin was born in London on 9 March 1968 to Nigerian parents, Caroline Philips and Justus Adebayo Aderin, and was raised in Camden, London. Her middle name Ebunoluwa comes from the Yoruba words "ebun" meaning "gift" and "Oluwa" meaning "God"; it is a variant form of the name "Oluwabunmi" or "Olubunmi", meaning "gift of God" in Yoruba. She attended La Sainte Union Convent School in North London. She is dyslexic. As a child, when she told a teacher she wanted to be an astronaut, it was suggested she try nursing, "because that's scientific, too". She gained A-Levels in mathematics, physics, chemistry, and biology.
She studied at Imperial College London, graduated with a BSc in physics in 1990, and completed her PhD in mechanical engineering under the supervision of Hugh Spikes in 1994. Her research investigated the development of an ultra-thin film measurement system using spectroscopy and interferometry to the 2.5 nm level. This involved improving the optical performance and the mechanical design of the system, as well as the development of control and image processing software. Other techniques at the time could only operate to the micron level with much poorer resolution. This development work resulted in the instrument being sold by an Imperial College spin-off company, PCS Instruments.
Career and research
Aderin-Pocock has worked on many projects in private industry, academia, and government. From 1996 to 1999 she worked at the Defence Evaluation and Research Agency, a branch of the Ministry of Defence. Initially, she was a systems scientist on aircraft missile warning systems; from 1997 to 1999, she was a project manager developing hand-held instruments to detect landmines. In 1999, Aderin-Pocock returned to Imperial College on a fellowship from the Science and Technology Facilities Council to work with the group developing a high-resolution spectrograph for the Gemini telescope in Chile. The high spectral resolution of the instrument allowed studies of stellar populations, the interstellar medium, and some physical phenomena in stars with small masses.
She worked on and managed the observation instruments for the Aeolus satellite, which measured wind speeds to help the investigation of climate change. She is a pioneering figure in communicating science to the public, especially school children. Her company, Science Innovation Ltd, engages children and adults through its "Tours of the Universe", a programme that explains the science of space.
Aderin-Pocock is committed to inspiring new generations of astronauts, engineers, and scientists. She has spoken to approximately 25,000 children, many from inner-city schools, explaining how and why she became a scientist, challenging perceptions about careers, class, and gender. She helps encourage scientific endeavours of young people by being a judge at the National Science + Engineering Competition. The finals of this competition are held at The Big Bang Fair in March each year, and reward young people who have achieved excellence in a science, technology, engineering, or mathematics project.
Aderin-Pocock was the scientific consultant for the 2009 mini-series Paradox, and also appeared on Doctor Who Confidential. In February 2011, she presented Do We Really Need the Moon? on BBC Two. She presented In Orbit: How Satellites Rule Our World on BBC Two on 26 March 2012.
As well as presenting The Sky at Night with Chris Lintott, Aderin-Pocock has presented Stargazing on CBeebies with Chris Jarvis, and Out of This World on CBBC with her daughter Lauren. She has also appeared on Would I Lie to You?, Dara O Briain's Go 8 Bit, Richard Osman's House of Games, and QI.
Since 2006, Aderin-Pocock has served as a research fellow at UCL Department of Science and Technology Studies, supported by a Science in Society fellowship 2010–13 funded by Science and Technology Facilities Council (STFC). She previously held two other fellowships related to science communication, including science and society fellowships 2006–08 Particle Physics and Astronomy Research Council (PPARC) and 2008–10 (STFC). In 2006, she was one of six "Women of Outstanding Achievement" winners with GetSET Women.
In 2014, the pseudonymously written Ephraim Hardcastle diary column in the Daily Mail claimed that Aderin-Pocock (along with Hiranya Peiris) had been selected to discuss results from the Background Imaging of Cosmic Extragalactic Polarization 2 (BICEP-2) experiment on Newsnight because of her gender and ethnicity. The comments were condemned by mainstream media, the Royal Astronomical Society, and Aderin-Pocock and Peiris's university, University College London. The Daily Mail withdrew its claim within days, acknowledging that the women were chosen because they are highly qualified in their fields.
She is an honorary research associate of University College London's Department of Physics and Astronomy.
In 2020–21 she served as a commissioner on the UK Government's Commission on Race and Ethnic Disparities (CRED). The commission's controversial report concluded that the "claim the country is still institutionally racist is not borne out by the evidence", but experts complained that the report misrepresented evidence, and that recommendations from business leaders were ignored. After the report was published, Aderin-Pocock stated that it "was not denying institutional racism existed but said the commission had not discovered evidence of it in the areas it had looked".
Since December 2021, Aderin-Pocock has been a question-setter for the Channel 4 game show I Literally Just Told You.
Honours and awards
Aderin-Pocock was appointed a Member of the Order of the British Empire in the 2009 New Year Honours for services to science education, and was elevated to Dame Commander of the Order of the British Empire (DBE) in the 2024 New Year Honours for services to science education and diversity.
2005 – Awarded "Certificate of Excellence" by the Champions Club UK
2009 – Honorary Doctor of Science, Staffordshire University for contributions to the field of science education
2011 – Winner of the "New Talent" award from the WFTV (Women in Film and Television)
2012 – UK Powerlist, listed as one of the UK top 100 most influential black people
2013 – UK Power List, listed as one of the UK top 10 most influential black people
2013 – Yale University Centre for Dyslexia "Out of the box thinking award"
2014 – Honorary Doctor of Science, University of Bath
2016 – Powerlist Ranked sixth most influential Black Briton
2017 – Honorary Doctor of Science, Loughborough University
2018 – Honorary Doctor of Science, University of Leicester
2020 – Institute of Physics William Thomson, Lord Kelvin Medal and Prize for her public engagement in physics
She was appointed as a vice-president of the Royal Central School of Speech and Drama in 2022.
In 2023, Mattel created a Barbie doll of Aderin-Pocock to celebrate International Women's Day.
Personal life
Aderin-Pocock discussed her life on BBC Radio 4's Desert Island Discs in March 2010, and has been the subject of numerous biographical articles on women in science.
She married Martin Pocock in 2002. They have one daughter, Lauren, born in 2010, and live in Guildford, Surrey.
Publications
Aderin-Pocock, Maggie. "The Story of the Solar System: A Visual Journey" Publisher: BBC Books, Sept 2024, ,
Aderin-Pocock, Maggie. "Dr. Maggie's Grand Tour of the Solar System" Publisher: Buster Books, Sept 2019, ,
Aderin-Pocock, Maggie. "The Knowledge: Stargazing" Publisher: Quadrille Publishing Ltd, 10 September 2015, ,
Aderin, M. "Space Instrumentation: Physics and Astronomy in Harmony?" Paper presented at the Engineering and Physics – Synergy for Success, 5 October 2006, UK.
Barlow, M. J., A. S. Hales, P. J. Storey, X. W. Liu, Y. G. Tsamis, and M. E. Aderin. "Bhros High Spectral Resolution Observations of Pn Forbidden and Recombination Line Profiles." Proceedings of the International Astronomical Union 2, no. Symposium S234 (2006): 367–68.
Aderin, M. E. "Bhros Installation and System Performance." Paper presented at the Ground-based Instrumentation for Astronomy, 21–25 June 2004, USA.
Aderin, M., I. Crawford, P. D'Arrigo, and A. Charalambous. "High Resolution Optical Spectrograph (Hros): A Summary of Progress." Paper presented at the Conference on Optical and IR Telescope Instrumentation and Detectors, 27–31 March 2000, Munich, Germany.
Aderin, M. E., and I. A. Burch. "Countermine: Hand Held and Vehicle Mounted Mine Detection." Paper presented at the Second International Conference on Detection of Abandoned Land Mines, 12–14 October 1998, London, UK.
Aderin, Margaret Ebunoluwa. "Interferometric Studies of Very Thin Lubricant Films in Concentrated Contacts."
Cann, P. M., M. Aderin, G. J. Johnston, and H. A. Spikes. "An Investigation into the Orientation of Lubricant Molecules in EHD Contacts." In Wear Particles: From the Cradle to the Grave, edited by D. Dowson, G. Dalmaz, T. H. C. Childs, C. M. Taylor and M. Godet. 209–18: Elsevier Science Publishers, 1992.
References
External links
Academic webpage
1968 births
Living people
Alumni of Imperial College London
British space scientists
Dames Commander of the Order of the British Empire
People from Islington (district)
Television presenters from Guildford
English people of Yoruba descent
Academics of University College London
Black British women academics
British women academics
Black British academics
English women scientists
Scientists with dyslexia
British scientists with disabilities
20th-century British women scientists
20th-century English scientists
21st-century British women scientists
21st-century English scientists
21st-century English women scientists | 0.767406 | 0.996659 | 0.764842 |
Unified field theory | In physics, a unified field theory (UFT) is a type of field theory that allows all that is usually thought of as fundamental forces and elementary particles to be written in terms of a pair of physical and virtual fields. According to modern discoveries in physics, forces are not transmitted directly between interacting objects but instead are described and interpreted by intermediary entities called fields.
However, a duality of the fields is combined into a single physical field. For over a century, unified field theory has remained an open line of research. The term was coined by Albert Einstein, who attempted to unify his general theory of relativity with electromagnetism. The "Theory of Everything" and Grand Unified Theory are closely related to unified field theory, but differ by not requiring the basis of nature to be fields, and often by attempting to explain physical constants of nature. Earlier attempts based on classical physics are described in the article on classical unified field theories.
The pursuit of a unified field theory has led to a great deal of progress in theoretical physics, and progress continues.
Introduction
Forces
All four of the known fundamental forces are mediated by fields, which in the Standard Model of particle physics result from the exchange of gauge bosons. Specifically, the four fundamental interactions to be unified are:
Strong interaction: the interaction responsible for holding quarks together to form hadrons, and holding neutrons and also protons together to form atomic nuclei. The exchange particle that mediates this force is the gluon.
Electromagnetic interaction: the familiar interaction that acts on electrically charged particles. The photon is the exchange particle for this force.
Weak interaction: a short-range interaction responsible for some forms of radioactivity, that acts on electrons, neutrinos, and quarks. It is mediated by the W and Z bosons.
Gravitational interaction: a long-range attractive interaction that acts on all particles. The postulated exchange particle has been named the graviton.
Modern unified field theory attempts to bring these four forces and matter together into a single framework.
History
Classic theory
The first successful classical unified field theory was developed by James Clerk Maxwell. In 1820, Hans Christian Ørsted discovered that electric currents exerted forces on magnets, while in 1831, Michael Faraday made the observation that time-varying magnetic fields could induce electric currents. Until then, electricity and magnetism had been thought of as unrelated phenomena. In 1864, Maxwell published his famous paper on a dynamical theory of the electromagnetic field. This was the first example of a theory that was able to encompass previously separate field theories (namely electricity and magnetism) to provide a unifying theory of electromagnetism. By 1905, Albert Einstein had used the constancy of the speed of light in Maxwell's theory to unify our notions of space and time into an entity we now call spacetime. In 1915, he expanded this theory of special relativity to a description of gravity, general relativity, using a field to describe the curving geometry of four-dimensional (4D) spacetime.
In the years following the creation of the general theory, a large number of physicists and mathematicians enthusiastically participated in the attempt to unify the then-known fundamental interactions. Given later developments in this domain, of particular interest are the theories of Hermann Weyl of 1919, who introduced the concept of an (electromagnetic) gauge field in a classical field theory and, two years later, that of Theodor Kaluza, who extended General Relativity to five dimensions. Continuing in this latter direction, Oscar Klein proposed in 1926 that the fourth spatial dimension be curled up into a small, unobserved circle. In Kaluza–Klein theory, the gravitational curvature of the extra spatial direction behaves as an additional force similar to electromagnetism. These and other models of electromagnetism and gravity were pursued by Albert Einstein in his attempts at a classical unified field theory. By 1930 Einstein had already considered the Einstein-Maxwell–Dirac System [Dongen]. This system is (heuristically) the super-classical [Varadarajan] limit of (the not mathematically well-defined) quantum electrodynamics. One can extend this system to include the weak and strong nuclear forces to get the Einstein–Yang-Mills–Dirac System. The French physicist Marie-Antoinette Tonnelat published a paper in the early 1940s on the standard commutation relations for the quantized spin-2 field. She continued this work in collaboration with Erwin Schrödinger after World War II. In the 1960s Mendel Sachs proposed a generally covariant field theory that did not require recourse to renormalization or perturbation theory. In 1965, Tonnelat published a book on the state of research on unified field theories.
Modern progress
In 1963, American physicist Sheldon Glashow proposed that the weak nuclear force, electricity, and magnetism could arise from a partially unified electroweak theory. In 1967, the Pakistani physicist Abdus Salam and the American physicist Steven Weinberg independently revised Glashow's theory by having the masses for the W particle and Z particle arise through spontaneous symmetry breaking with the Higgs mechanism. This unified theory modelled the electroweak interaction as a force mediated by four particles: the photon for the electromagnetic aspect, a neutral Z particle, and two charged W particles for the weak aspect. As a result of the spontaneous symmetry breaking, the weak force becomes short-range and the W and Z bosons acquire masses of about 80.4 GeV/c² and 91.2 GeV/c², respectively. Their theory was first given experimental support by the discovery of weak neutral currents in 1973. In 1983, the Z and W bosons were first produced at CERN by Carlo Rubbia's team. For their insights, Glashow, Salam, and Weinberg were awarded the Nobel Prize in Physics in 1979. Carlo Rubbia and Simon van der Meer received the Prize in 1984.
After Gerardus 't Hooft showed the Glashow–Weinberg–Salam electroweak interactions to be mathematically consistent, the electroweak theory became a template for further attempts at unifying forces. In 1974, Sheldon Glashow and Howard Georgi proposed unifying the strong and electroweak interactions into the Georgi–Glashow model, the first Grand Unified Theory, which would have observable effects for energies much above 100 GeV.
Since then there have been several proposals for Grand Unified Theories, e.g. the Pati–Salam model, although none is currently universally accepted. A major problem for experimental tests of such theories is the energy scale involved, which is well beyond the reach of current accelerators. Grand Unified Theories make predictions for the relative strengths of the strong, weak, and electromagnetic forces, and in 1991 LEP determined that supersymmetric theories have the correct ratio of couplings for a Georgi–Glashow Grand Unified Theory.
Many Grand Unified Theories (but not Pati–Salam) predict that the proton can decay, and if this were to be seen, details of the decay products could give hints at more aspects of the Grand Unified Theory. It is at present unknown if the proton can decay, although experiments have determined a lower bound of 10³⁵ years for its lifetime.
Current status
Theoretical physicists have not yet formulated a widely accepted, consistent theory that combines general relativity and quantum mechanics to form a theory of everything. Trying to combine the graviton with the strong and electroweak interactions leads to fundamental difficulties and the resulting theory is not renormalizable. The incompatibility of the two theories remains an outstanding problem in the field of physics.
See also
Sheldon Glashow
Unification (physics)
References
Further reading
Jeroen van Dongen Einstein's Unification, Cambridge University Press (July 26, 2010)
Varadarajan, V.S. Supersymmetry for Mathematicians: An Introduction (Courant Lecture Notes), American Mathematical Society (July 2004)
External links
On the History of Unified Field Theories, by Hubert F. M. Goenner
Particle physics
Theories of gravity
Unsolved problems in physics | 0.767528 | 0.996494 | 0.764837 |
Physical cosmology | Physical cosmology is a branch of cosmology concerned with the study of cosmological models. A cosmological model, or simply cosmology, provides a description of the largest-scale structures and dynamics of the universe and allows study of fundamental questions about its origin, structure, evolution, and ultimate fate. Cosmology as a science originated with the Copernican principle, which implies that celestial bodies obey identical physical laws to those on Earth, and Newtonian mechanics, which first allowed those physical laws to be understood.
Physical cosmology, as it is now understood, began in 1915 with the development of Albert Einstein's general theory of relativity, followed by major observational discoveries in the 1920s: first, Edwin Hubble discovered that the universe contains a huge number of external galaxies beyond the Milky Way; then, work by Vesto Slipher and others showed that the universe is expanding. These advances made it possible to speculate about the origin of the universe, and allowed the establishment of the Big Bang theory, by Georges Lemaître, as the leading cosmological model. A few researchers still advocate a handful of alternative cosmologies; however, most cosmologists agree that the Big Bang theory best explains the observations.
Dramatic advances in observational cosmology since the 1990s, including the cosmic microwave background, distant supernovae and galaxy redshift surveys, have led to the development of a standard model of cosmology. This model requires the universe to contain large amounts of dark matter and dark energy whose nature is currently not well understood, but the model gives detailed predictions that are in excellent agreement with many diverse observations.
Cosmology draws heavily on the work of many disparate areas of research in theoretical and applied physics. Areas relevant to cosmology include particle physics experiments and theory, theoretical and observational astrophysics, general relativity, quantum mechanics, and plasma physics.
Subject history
Modern cosmology developed along tandem tracks of theory and observation. In 1916, Albert Einstein published his theory of general relativity, which provided a unified description of gravity as a geometric property of space and time. At the time, Einstein believed in a static universe, but found that his original formulation of the theory did not permit it. This is because masses distributed throughout the universe gravitationally attract, and move toward each other over time. However, he realized that his equations permitted the introduction of a constant term which could counteract the attractive force of gravity on the cosmic scale. Einstein published his first paper on relativistic cosmology in 1917, in which he added this cosmological constant to his field equations in order to force them to model a static universe. The Einstein model describes a static universe; space is finite and unbounded (analogous to the surface of a sphere, which has a finite area but no edges). However, this so-called Einstein model is unstable to small perturbations—it will eventually start to expand or contract. It was later realized that Einstein's model was just one of a larger set of possibilities, all of which were consistent with general relativity and the cosmological principle. The cosmological solutions of general relativity were found by Alexander Friedmann in the early 1920s. His equations describe the Friedmann–Lemaître–Robertson–Walker universe, which may expand or contract, and whose geometry may be open, flat, or closed.
In the 1910s, Vesto Slipher (and later Carl Wilhelm Wirtz) interpreted the red shift of spiral nebulae as a Doppler shift that indicated they were receding from Earth. However, it is difficult to determine the distance to astronomical objects. One way is to compare the physical size of an object to its angular size, but a physical size must be assumed in order to do this. Another method is to measure the brightness of an object and assume an intrinsic luminosity, from which the distance may be determined using the inverse-square law. Due to the difficulty of using these methods, they did not realize that the nebulae were actually galaxies outside our own Milky Way, nor did they speculate about the cosmological implications. In 1927, the Belgian Roman Catholic priest Georges Lemaître independently derived the Friedmann–Lemaître–Robertson–Walker equations and proposed, on the basis of the recession of spiral nebulae, that the universe began with the "explosion" of a "primeval atom"—which was later called the Big Bang. In 1929, Edwin Hubble provided an observational basis for Lemaître's theory. Hubble showed that the spiral nebulae were galaxies by determining their distances using measurements of the brightness of Cepheid variable stars. He discovered a relationship between the redshift of a galaxy and its distance. He interpreted this as evidence that the galaxies are receding from Earth in every direction at speeds proportional to their distance from Earth. This fact is now known as Hubble's law, though the numerical factor Hubble found relating recessional velocity and distance was off by a factor of ten, due to not knowing about the types of Cepheid variables.
Given the cosmological principle, Hubble's law suggested that the universe was expanding. Two primary explanations were proposed for the expansion. One was Lemaître's Big Bang theory, advocated and developed by George Gamow. The other explanation was Fred Hoyle's steady state model in which new matter is created as the galaxies move away from each other. In this model, the universe is roughly the same at any point in time.
For a number of years, support for these theories was evenly divided. However, the observational evidence began to support the idea that the universe evolved from a hot dense state. The discovery of the cosmic microwave background in 1965 lent strong support to the Big Bang model, and since the precise measurements of the cosmic microwave background by the Cosmic Background Explorer in the early 1990s, few cosmologists have seriously proposed other theories of the origin and evolution of the cosmos. One consequence of this is that in standard general relativity, the universe began with a singularity, as demonstrated by Roger Penrose and Stephen Hawking in the 1960s.
An alternative view to extend the Big Bang model, suggesting the universe had no beginning or singularity and the age of the universe is infinite, has been presented.
In September 2023, astrophysicists questioned the overall current view of the universe, in the form of the Standard Model of Cosmology, based on the latest James Webb Space Telescope studies.
Energy of the cosmos
The lightest chemical elements, primarily hydrogen and helium, were created during the Big Bang through the process of nucleosynthesis. In a sequence of stellar nucleosynthesis reactions, smaller atomic nuclei are then combined into larger atomic nuclei, ultimately forming stable iron group elements such as iron and nickel, which have the highest nuclear binding energies. The net result is that energy continues to be released long after the Big Bang. Such reactions of nuclear particles can lead to sudden energy releases from cataclysmic variable stars such as novae. Gravitational collapse of matter into black holes also powers the most energetic processes, generally seen in the nuclear regions of galaxies, forming quasars and active galaxies.
Cosmologists cannot explain all cosmic phenomena exactly, such as those related to the accelerating expansion of the universe, using conventional forms of energy. Instead, cosmologists propose a new form of energy called dark energy that permeates all space. One hypothesis is that dark energy is just the vacuum energy, a component of empty space that is associated with the virtual particles that exist due to the uncertainty principle.
There is no clear way to define the total energy in the universe using the most widely accepted theory of gravity, general relativity. Therefore, it remains controversial whether the total energy is conserved in an expanding universe. For instance, each photon that travels through intergalactic space loses energy due to the redshift effect. This energy is not transferred to any other system, so it seems to be permanently lost. On the other hand, some cosmologists insist that energy is conserved in some sense, in keeping with the law of conservation of energy.
Different forms of energy may dominate the cosmos—relativistic particles which are referred to as radiation, or non-relativistic particles referred to as matter. Relativistic particles are particles whose rest mass is zero or negligible compared to their kinetic energy, and so move at the speed of light or very close to it; non-relativistic particles have much higher rest mass than their energy and so move much slower than the speed of light.
As the universe expands, both matter and radiation become diluted. However, the energy densities of radiation and matter dilute at different rates. As a particular volume expands, mass-energy density is changed only by the increase in volume, but the energy density of radiation is changed both by the increase in volume and by the increase in the wavelength of the photons that make it up. Thus the energy of radiation becomes a smaller part of the universe's total energy than that of matter as it expands. The very early universe is said to have been 'radiation dominated' and radiation controlled the deceleration of expansion. Later, as the average energy per photon becomes roughly 10 eV and lower, matter dictates the rate of deceleration and the universe is said to be 'matter dominated'. The intermediate case is not treated well analytically. As the expansion of the universe continues, matter dilutes even further and the cosmological constant becomes dominant, leading to an acceleration in the universe's expansion.
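The different dilution rates can be made concrete with a short numerical sketch. It uses the standard scalings — matter density ∝ a⁻³, radiation density ∝ a⁻⁴, and a constant cosmological-constant density — together with illustrative present-day density fractions (roughly Ω_m ≈ 0.3, Ω_r ≈ 9×10⁻⁵, Ω_Λ ≈ 0.7; approximate values assumed here, not quoted from this article):

```python
# Which component dominates the energy budget at a given scale factor a (a = 1 today)?
# Present-day density fractions are approximate, illustrative values.
omega_m, omega_r, omega_lambda = 0.3, 9e-5, 0.7

def densities(a):
    """Energy densities relative to today's critical density, as functions of scale factor a."""
    return {
        "radiation": omega_r / a**4,   # diluted by volume AND by photon redshift
        "matter":    omega_m / a**3,   # diluted by volume only
        "lambda":    omega_lambda,     # constant energy density
    }

for a in (1e-5, 1e-4, 1e-2, 0.5, 1.0, 2.0):
    rho = densities(a)
    dominant = max(rho, key=rho.get)
    print(f"a = {a:>7}: dominated by {dominant}")
# Radiation dominates for a below roughly 3e-4, matter in between,
# and the cosmological constant at late times.
```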
History of the universe
The history of the universe is a central issue in cosmology. The history of the universe is divided into different periods called epochs, according to the dominant forces and processes in each period. The standard cosmological model is known as the Lambda-CDM model.
Equations of motion
Within the standard cosmological model, the equations of motion governing the universe as a whole are derived from general relativity with a small, positive cosmological constant. The solution is an expanding universe; due to this expansion, the radiation and matter in the universe cool and become diluted. At first, the expansion is slowed down by gravitation attracting the radiation and matter in the universe. However, as these become diluted, the cosmological constant becomes more dominant and the expansion of the universe starts to accelerate rather than decelerate. In our universe this happened billions of years ago.
Particle physics in cosmology
During the earliest moments of the universe, the average energy density was very high, making knowledge of particle physics critical to understanding this environment. Hence, scattering processes and decay of unstable elementary particles are important for cosmological models of this period.
As a rule of thumb, a scattering or a decay process is cosmologically important in a certain epoch if the time scale describing that process is smaller than, or comparable to, the time scale of the expansion of the universe. The time scale that describes the expansion of the universe is 1/H, with H being the Hubble parameter, which varies with time. The expansion timescale is roughly equal to the age of the universe at each point in time.
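As a back-of-the-envelope illustration (a sketch using an assumed present-day Hubble parameter of about 70 km/s/Mpc, not a value quoted by this article), the expansion time scale 1/H works out to roughly 14 billion years today, close to the measured age of the universe:

```python
# Rough Hubble time 1/H0 for an assumed H0 of ~70 km/s/Mpc (illustrative value).
km_per_mpc = 3.0857e19          # kilometres in one megaparsec
seconds_per_year = 3.156e7

H0 = 70.0 / km_per_mpc          # Hubble parameter in 1/s
hubble_time_years = 1.0 / H0 / seconds_per_year

print(f"1/H0 ~ {hubble_time_years / 1e9:.1f} billion years")   # ~14 billion years
```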
Timeline of the Big Bang
Observations suggest that the universe began around 13.8 billion years ago. Since then, the evolution of the universe has passed through three phases. The very early universe, which is still poorly understood, was the split second in which the universe was so hot that particles had energies higher than those currently accessible in particle accelerators on Earth. Therefore, while the basic features of this epoch have been worked out in the Big Bang theory, the details are largely based on educated guesses.
Following this, in the early universe, the evolution of the universe proceeded according to known high energy physics. This is when the first protons, electrons and neutrons formed, then nuclei and finally atoms. With the formation of neutral hydrogen, the cosmic microwave background was emitted. Finally, the epoch of structure formation began, when matter started to aggregate into the first stars and quasars, and ultimately galaxies, clusters of galaxies and superclusters formed. The future of the universe is not yet firmly known, but according to the ΛCDM model it will continue expanding forever.
Areas of study
Below, some of the most active areas of inquiry in cosmology are described, in roughly chronological order. This does not include all of the Big Bang cosmology, which is presented in Timeline of the Big Bang.
Very early universe
The early, hot universe appears to be well explained by the Big Bang from roughly 10⁻³³ seconds onwards, but there are several problems. One is that there is no compelling reason, using current particle physics, for the universe to be flat, homogeneous, and isotropic (see the cosmological principle). Moreover, grand unified theories of particle physics suggest that there should be magnetic monopoles in the universe, which have not been found. These problems are resolved by a brief period of cosmic inflation, which drives the universe to flatness, smooths out anisotropies and inhomogeneities to the observed level, and exponentially dilutes the monopoles. The physical model behind cosmic inflation is extremely simple, but it has not yet been confirmed by particle physics, and there are difficult problems reconciling inflation and quantum field theory. Some cosmologists think that string theory and brane cosmology will provide an alternative to inflation.
Another major problem in cosmology is what caused the universe to contain far more matter than antimatter. Cosmologists can observationally deduce that the universe is not split into regions of matter and antimatter. If it were, there would be X-rays and gamma rays produced as a result of annihilation, but this is not observed. Therefore, some process in the early universe must have created a small excess of matter over antimatter, and this (currently not understood) process is called baryogenesis. Three required conditions for baryogenesis were derived by Andrei Sakharov in 1967, and requires a violation of the particle physics symmetry, called CP-symmetry, between matter and antimatter. However, particle accelerators measure too small a violation of CP-symmetry to account for the baryon asymmetry. Cosmologists and particle physicists look for additional violations of the CP-symmetry in the early universe that might account for the baryon asymmetry.
Both the problems of baryogenesis and cosmic inflation are very closely related to particle physics, and their resolution might come from high energy theory and experiment, rather than through observations of the universe.
Big Bang Theory
Big Bang nucleosynthesis is the theory of the formation of the elements in the early universe. It finished when the universe was about three minutes old and its temperature dropped below that at which nuclear fusion could occur. Big Bang nucleosynthesis had a brief period during which it could operate, so only the very lightest elements were produced. Starting from hydrogen ions (protons), it principally produced deuterium, helium-4, and lithium. Other elements were produced in only trace abundances. The basic theory of nucleosynthesis was developed in 1948 by George Gamow, Ralph Asher Alpher, and Robert Herman. It was used for many years as a probe of physics at the time of the Big Bang, as the theory of Big Bang nucleosynthesis connects the abundances of primordial light elements with the features of the early universe. Specifically, it can be used to test the equivalence principle, to probe dark matter, and test neutrino physics. Some cosmologists have proposed that Big Bang nucleosynthesis suggests there is a fourth "sterile" species of neutrino.
Standard model of Big Bang cosmology
The ΛCDM (Lambda cold dark matter) or Lambda-CDM model is a parametrization of the Big Bang cosmological model in which the universe contains a cosmological constant, denoted by Lambda (Greek Λ), associated with dark energy, and cold dark matter (abbreviated CDM). It is frequently referred to as the standard model of Big Bang cosmology.
Cosmic microwave background
The cosmic microwave background is radiation left over from decoupling after the epoch of recombination when neutral atoms first formed. At this point, radiation produced in the Big Bang stopped Thomson scattering from charged ions. The radiation, first observed in 1965 by Arno Penzias and Robert Woodrow Wilson, has a perfect thermal black-body spectrum. It has a temperature of 2.7 kelvins today and is isotropic to one part in 10⁵. Cosmological perturbation theory, which describes the evolution of slight inhomogeneities in the early universe, has allowed cosmologists to precisely calculate the angular power spectrum of the radiation, and it has been measured by recent satellite experiments (COBE and WMAP) and many ground- and balloon-based experiments (such as the Degree Angular Scale Interferometer, the Cosmic Background Imager, and Boomerang). One of the goals of these efforts is to measure the basic parameters of the Lambda-CDM model with increasing accuracy, as well as to test the predictions of the Big Bang model and look for new physics. The results of measurements made by WMAP, for example, have placed limits on the neutrino masses.
Newer experiments, such as QUIET and the Atacama Cosmology Telescope, are trying to measure the polarization of the cosmic microwave background. These measurements are expected to provide further confirmation of the theory as well as information about cosmic inflation, and the so-called secondary anisotropies, such as the Sunyaev-Zel'dovich effect and Sachs-Wolfe effect, which are caused by interaction between galaxies and clusters with the cosmic microwave background.
On 17 March 2014, astronomers of the BICEP2 Collaboration announced the apparent detection of B-mode polarization of the CMB, considered to be evidence of primordial gravitational waves that are predicted by the theory of inflation to occur during the earliest phase of the Big Bang. However, later that year the Planck collaboration provided a more accurate measurement of cosmic dust, concluding that the B-mode signal from dust is the same strength as that reported from BICEP2. On 30 January 2015, a joint analysis of BICEP2 and Planck data was published and the European Space Agency announced that the signal can be entirely attributed to interstellar dust in the Milky Way.
Formation and evolution of large-scale structure
Understanding the formation and evolution of the largest and earliest structures (i.e., quasars, galaxies, clusters and superclusters) is one of the largest efforts in cosmology. Cosmologists study a model of hierarchical structure formation in which structures form from the bottom up, with smaller objects forming first, while the largest objects, such as superclusters, are still assembling. One way to study structure in the universe is to survey the visible galaxies, in order to construct a three-dimensional picture of the galaxies in the universe and measure the matter power spectrum. This is the approach of the Sloan Digital Sky Survey and the 2dF Galaxy Redshift Survey.
Another tool for understanding structure formation is simulations, which cosmologists use to study the gravitational aggregation of matter in the universe, as it clusters into filaments, superclusters and voids. Most simulations contain only non-baryonic cold dark matter, which should suffice to understand the universe on the largest scales, as there is much more dark matter in the universe than visible, baryonic matter. More advanced simulations are starting to include baryons and study the formation of individual galaxies. Cosmologists study these simulations to see if they agree with the galaxy surveys, and to understand any discrepancy.
Other, complementary observations to measure the distribution of matter in the distant universe and to probe reionization include:
The Lyman-alpha forest, which allows cosmologists to measure the distribution of neutral atomic hydrogen gas in the early universe, by measuring the absorption of light from distant quasars by the gas.
The 21-centimeter absorption line of neutral atomic hydrogen also provides a sensitive test of cosmology.
Weak lensing, the distortion of a distant image by gravitational lensing due to dark matter.
These will help cosmologists settle the question of when and how structure formed in the universe.
Dark matter
Evidence from Big Bang nucleosynthesis, the cosmic microwave background, structure formation, and galaxy rotation curves suggests that about 23% of the mass of the universe consists of non-baryonic dark matter, whereas only 4% consists of visible, baryonic matter. The gravitational effects of dark matter are well understood, as it behaves like a cold, non-radiative fluid that forms haloes around galaxies. Dark matter has never been detected in the laboratory, and the particle physics nature of dark matter remains completely unknown. Without observational constraints, there are a number of candidates, such as a stable supersymmetric particle, a weakly interacting massive particle, a gravitationally-interacting massive particle, an axion, and a massive compact halo object. Alternatives to the dark matter hypothesis include a modification of gravity at small accelerations (MOND) or an effect from brane cosmology. TeVeS is a version of MOND that can explain gravitational lensing.
Dark energy
If the universe is flat, there must be an additional component making up 73% (in addition to the 23% dark matter and 4% baryons) of the energy density of the universe. This is called dark energy. In order not to interfere with Big Bang nucleosynthesis and the cosmic microwave background, it must not cluster in haloes like baryons and dark matter. There is strong observational evidence for dark energy, as the total energy density of the universe is known through constraints on the flatness of the universe, but the amount of clustering matter is tightly measured, and is much less than this. The case for dark energy was strengthened in 1999, when measurements demonstrated that the expansion of the universe has begun to gradually accelerate.
Apart from its density and its clustering properties, nothing is known about dark energy. Quantum field theory predicts a cosmological constant (CC) much like dark energy, but 120 orders of magnitude larger than that observed. Steven Weinberg and a number of string theorists (see string landscape) have invoked the 'weak anthropic principle': i.e. the reason that physicists observe a universe with such a small cosmological constant is that no physicists (or any life) could exist in a universe with a larger cosmological constant. Many cosmologists find this an unsatisfying explanation: perhaps because while the weak anthropic principle is self-evident (given that living observers exist, there must be at least one universe with a cosmological constant (CC) which allows for life to exist) it does not attempt to explain the context of that universe. For example, the weak anthropic principle alone does not distinguish between:
Only one universe will ever exist and there is some underlying principle that constrains the CC to the value we observe.
Only one universe will ever exist and although there is no underlying principle fixing the CC, we got lucky.
Lots of universes exist (simultaneously or serially) with a range of CC values, and of course ours is one of the life-supporting ones.
Other possible explanations for dark energy include quintessence or a modification of gravity on the largest scales. The effect on cosmology of the dark energy that these models describe is given by the dark energy's equation of state, which varies depending upon the theory. The nature of dark energy is one of the most challenging problems in cosmology.
A better understanding of dark energy is likely to solve the problem of the ultimate fate of the universe. In the current cosmological epoch, the accelerated expansion due to dark energy is preventing structures larger than superclusters from forming. It is not known whether the acceleration will continue indefinitely, perhaps even increasing until a big rip, or whether it will eventually reverse, lead to a Big Freeze, or follow some other scenario.
Gravitational waves
Gravitational waves are ripples in the curvature of spacetime that propagate as waves at the speed of light, generated in certain gravitational interactions that propagate outward from their source. Gravitational-wave astronomy is an emerging branch of observational astronomy which aims to use gravitational waves to collect observational data about sources of detectable gravitational waves such as binary star systems composed of white dwarfs, neutron stars, and black holes; and events such as supernovae, and the formation of the early universe shortly after the Big Bang.
In 2016, the LIGO Scientific Collaboration and Virgo Collaboration teams announced that they had made the first observation of gravitational waves, originating from a pair of merging black holes using the Advanced LIGO detectors. On 15 June 2016, a second detection of gravitational waves from coalescing black holes was announced. Besides LIGO, many other gravitational-wave observatories (detectors) are under construction.
Other areas of inquiry
Cosmologists also study:
Whether primordial black holes were formed in our universe, and what happened to them.
Detection of cosmic rays with energies above the GZK cutoff, and whether it signals a failure of special relativity at high energies.
The equivalence principle, whether or not Einstein's general theory of relativity is the correct theory of gravitation, and if the fundamental laws of physics are the same everywhere in the universe.
Biophysical cosmology: a type of physical cosmology that studies life as an inherent part of physical cosmology. It stresses that life is inherent to the universe and therefore frequent.
See also
Accretion
Hubble's law
Illustris project
List of cosmologists
Physical ontology
Quantum cosmology
String cosmology
Universal Rotation Curve
References
Further reading
Popular
Textbooks
Introductory cosmology and general relativity without the full tensor apparatus, deferred until the last part of the book.
Modern introduction to cosmology covering the homogeneous and inhomogeneous universe as well as inflation and the CMB.
An introductory text, released slightly before the WMAP results.
For undergraduates; mathematically gentle with a strong historical focus.
An introductory astronomy text.
The classic reference for researchers.
Cosmology without general relativity.
An introduction to cosmology with a thorough discussion of inflation.
Discusses the formation of large-scale structures in detail.
An introduction including more on general relativity and quantum field theory than most.
Strong historical focus.
The classic work on large-scale structure and correlation functions.
A standard reference for the mathematical formalism.
External links
From groups
Cambridge Cosmology – from Cambridge University (public home page)
Cosmology 101 – from the NASA WMAP group
Center for Cosmological Physics. University of Chicago, Chicago, Illinois
Origins, Nova Online – Provided by PBS
From individuals
Gale, George, "Cosmology: Methodological Debates in the 1930s and 1940s", The Stanford Encyclopedia of Philosophy, Edward N. Zalta (ed.)
Madore, Barry F., "Level 5 : A Knowledgebase for Extragalactic Astronomy and Cosmology". Caltech and Carnegie. Pasadena, California.
Tyler, Pat, and Newman, Phil, "Beyond Einstein". Laboratory for High Energy Astrophysics (LHEA) NASA Goddard Space Flight Center.
Wright, Ned. "Cosmology tutorial and FAQ". Division of Astronomy & Astrophysics, UCLA.
Philosophy of physics
Philosophy of time
Astronomical sub-disciplines
Astrophysics | 0.768853 | 0.994736 | 0.764806 |
Ecological pyramid | An ecological pyramid (also trophic pyramid, Eltonian pyramid, energy pyramid, or sometimes food pyramid) is a graphical representation designed to show the biomass or bioproductivity at each trophic level in an ecosystem.
A pyramid of energy shows how much energy is retained in the form of new biomass from each trophic level, while a pyramid of biomass shows how much biomass (the amount of living or organic matter present in an organism) is present in the organisms. There is also a pyramid of numbers representing the number of individual organisms at each trophic level. Pyramids of energy are normally upright, but other pyramids can be inverted (such as a pyramid of biomass for a marine region) or take other shapes (a spindle-shaped pyramid).
Ecological pyramids begin with producers on the bottom (such as plants) and proceed through the various trophic levels (such as herbivores that eat plants, then carnivores that eat flesh, then omnivores that eat both plants and flesh, and so on). The highest level is the top of the food chain.
The energy content of biomass can be measured with a bomb calorimeter.
Pyramid of energy
A pyramid of energy or pyramid of productivity shows the production or turnover (the rate at which energy or mass is transferred from one trophic level to the next) of biomass at each trophic level. Instead of showing a single snapshot in time, productivity pyramids show the flow of energy through the food chain. Typical units are grams per square meter per year or calories per square meter per year. As with the others, this graph shows producers at the bottom and higher trophic levels on top.
When an ecosystem is healthy, this graph produces a standard ecological pyramid. This is because, in order for the ecosystem to sustain itself, there must be more energy at lower trophic levels than there is at higher trophic levels. This allows organisms on the lower levels to not only maintain a stable population, but also to transfer energy up the pyramid. The exception to this generalization is when portions of a food web are supported by inputs of resources from outside the local community. In small, forested streams, for example, the volume of higher levels is greater than could be supported by the local primary production.
Energy usually enters ecosystems from the Sun. The primary producers at the base of the pyramid use solar radiation to power photosynthesis which produces food. However most wavelengths in solar radiation cannot be used for photosynthesis, so they are reflected back into space or absorbed elsewhere and converted to heat. Only 1 to 2 percent of the energy from the sun is absorbed by photosynthetic processes and converted into food. When energy is transferred to higher trophic levels, on average only about 10% is used at each level to build biomass, becoming stored energy. The rest goes to metabolic processes such as growth, respiration, and reproduction.
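A short numerical sketch of these figures (the incoming energy, the 2% capture fraction, and the 10% transfer efficiency are illustrative round numbers in line with the rough values quoted above):

```python
# Energy retained as new biomass at successive trophic levels, using rough
# efficiencies: ~2% of sunlight fixed by producers, ~10% passed up per level.
sunlight = 1_000_000.0          # arbitrary units of solar energy reaching the producers
producer_efficiency = 0.02      # upper end of the 1-2% photosynthetic capture
transfer_efficiency = 0.10      # ~10% transferred per trophic level

energy = sunlight * producer_efficiency
for level in ["producers", "herbivores", "first carnivores", "top carnivores"]:
    print(f"{level:>16}: {energy:>10.1f}")
    energy *= transfer_efficiency
# Each step up the pyramid retains an order of magnitude less energy,
# which is why pyramids of energy are never inverted.
```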
Advantages of the pyramid of energy as a representation:
It takes account of the rate of production over a period of time.
Two species of comparable biomass may have very different life spans. Thus, a direct comparison of their total biomasses is misleading, but their productivity is directly comparable.
The relative energy chain within an ecosystem can be compared using pyramids of energy; also different ecosystems can be compared.
There are no inverted pyramids.
The input of solar energy can be added.
Disadvantages of the pyramid of energy as a representation:
The rate of biomass production of an organism is required, which involves measuring growth and reproduction through time.
There is still the difficulty of assigning the organisms to a specific trophic level. As well as the organisms in the food chains there is the problem of assigning the decomposers and detritivores to a particular level.
Pyramid of biomass
A pyramid of biomass shows the relationship between biomass and trophic level by quantifying the biomass present at each trophic level of an ecological community at a particular time. It is a graphical representation of biomass (total amount of living or organic matter in an ecosystem) present in unit area in different trophic levels. Typical units are grams per square meter, or calories per square meter.
The pyramid of biomass may be "inverted". For example, in a pond ecosystem, the standing crop of phytoplankton, the major producers, at any given point will be lower than the mass of the heterotrophs, such as fish and insects. This is because the phytoplankton reproduce very quickly but have much shorter individual lives.
Pyramid of numbers
A pyramid of numbers shows graphically the population, or abundance, in terms of the number of individual organisms involved at each level in a food chain. This shows the number of organisms in each trophic level without any consideration for their individual sizes or biomass. The pyramid is not necessarily upright. For example, it will be inverted if beetles are feeding from the output of forest trees, or parasites are feeding on large host animals.
History
The concept of a pyramid of numbers ("Eltonian pyramid") was developed by Charles Elton (1927). Later, it would also be expressed in terms of biomass by Bodenheimer (1938). The idea of the pyramid of productivity or energy relies on the works of G. Evelyn Hutchinson and Raymond Lindeman (1942).
See also
Trophic cascade
References
Bibliography
Odum, E.P. (1971). Fundamentals of Ecology (3rd ed.). Philadelphia: W.B. Saunders Company.
External links
Food Chains
Ecology
Food chains
Stopping power (particle radiation)
In nuclear and materials physics, stopping power is the retarding force acting on charged particles, typically alpha and beta particles, due to interaction with matter, resulting in loss of particle kinetic energy.
Stopping power is also interpreted as the rate at which a material absorbs the kinetic energy of a charged particle. Its application is important in a wide range of areas such as radiation protection, ion implantation and nuclear medicine.
Definition and Bragg curve
Both charged and uncharged particles lose energy while passing through matter. Positive ions are considered in most cases below.
The stopping power depends on the type and energy of the radiation and on the properties of the material it passes. Since the production of an ion pair (usually a positive ion and a (negative) electron) requires a fixed amount of energy (for example, 33.97 eV in dry air), the number of ionizations per path length is proportional to the stopping power. The stopping power of the material is numerically equal to the loss of energy E per unit path length x:
S(E) = −dE/dx.
The minus sign makes S positive.
The force usually increases toward the end of range and reaches a maximum, the Bragg peak, shortly before the energy drops to zero. The curve that describes the force as a function of the material depth is called the Bragg curve. This is of great practical importance for radiation therapy.
The equation above defines the linear stopping power which in the international system is expressed in N but is usually indicated in other units like MeV/mm or similar. If a substance is compared in gaseous and solid form, then the linear stopping powers of the two states are very different just because of the different density. One therefore often divides the force by the density of the material to obtain the mass stopping power which in the international system is expressed in m4/s2 but is usually found in units like MeV/(mg/cm2) or similar. The mass stopping power then depends only very little on the density of the material.
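As a small worked example of the conversion just described (dividing the linear stopping power by the density), the following sketch uses purely illustrative numbers rather than tabulated stopping-power data:

```python
# Converting linear stopping power (MeV/mm) to mass stopping power (MeV/(mg/cm^2))
# by dividing by the material density. The numbers are illustrative only.
linear_stopping = 0.23   # MeV/mm, hypothetical value for some ion/material pair
density = 2.70           # g/cm^3, e.g. a light metal

# MeV/mm -> MeV/cm (factor 10), divide by g/cm^3 to get MeV/(g/cm^2),
# then divide by 1000 to quote it per mg/cm^2 as in the text above.
mass_stopping = linear_stopping * 10.0 / density / 1000.0
print(f"Mass stopping power: {mass_stopping:.2e} MeV/(mg/cm^2)")
```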
The picture shows how the stopping power of 5.49 MeV alpha particles increases while the particle traverses air, until it reaches the maximum. This particular energy corresponds to that of the alpha particle radiation from naturally radioactive gas radon (222Rn) which is present in the air in minute amounts.
The mean range can be calculated by integrating the reciprocal stopping power over energy:
Δx = ∫₀^E0 dE / S(E),
where:
E0 is the initial kinetic energy of the particle,
Δx is the "continuous slowing down approximation (CSDA)" range, and
S(E) is the linear stopping power.
The deposited energy can be obtained by integrating the stopping power over the entire path length of the ion while it moves in the material.
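The range integral can be evaluated numerically once S(E) is known. The sketch below integrates the reciprocal of a made-up, smooth S(E); the functional form is only a placeholder for real tabulated data, and the 5.49 MeV starting energy is chosen to echo the alpha-particle example above.

```python
# Minimal sketch: CSDA range as the integral of dE / S(E) from 0 to E0.
# S(E) is a hypothetical smooth function standing in for tabulated data.
from scipy.integrate import quad

def stopping_power(E_MeV):
    """Hypothetical linear stopping power in MeV/mm as a function of energy in MeV."""
    E = max(E_MeV, 1e-9)            # guard against E = 0 in the power law
    return 0.15 * E ** -0.8 + 0.02  # illustrative shape only

E0 = 5.49  # initial kinetic energy in MeV (e.g. the 222Rn alpha particle)

range_mm, _ = quad(lambda E: 1.0 / stopping_power(E), 0.0, E0)
print(f"CSDA range for E0 = {E0} MeV: {range_mm:.1f} mm (with the toy S(E))")
```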
Electronic, nuclear and radiative stopping
Electronic stopping refers to the slowing down of a projectile ion due to the inelastic collisions between bound electrons in the medium and the ion moving through it. The term inelastic is used to signify that energy is lost during the process (the collisions may result both in excitations of bound electrons of the medium, and in excitations of the electron cloud of the ion as well). Linear electronic stopping power is identical to unrestricted linear energy transfer.
Instead of energy transfer, some models consider the electronic stopping power as momentum transfer between electron gas and energetic ion. This is consistent with the result of Bethe in the high energy range.
Since the number of collisions an ion experiences with electrons is large, and since the charge state of the ion while traversing the medium may change frequently, it is very difficult to describe all possible interactions for all possible ion charge states. Instead, the electronic stopping power is often given as a simple function of energy which is an average taken over all energy loss processes for different charge states. It can be determined theoretically to an accuracy of a few % in the energy range above several hundred keV per nucleon, the best-known treatment being the Bethe formula. At energies lower than about 100 keV per nucleon, it becomes more difficult to determine the electronic stopping using analytical models. Recently, real-time time-dependent density functional theory has been successfully used to accurately determine the electronic stopping for various ion-target systems over a wide range of energies, including the low-energy regime.
Graphical presentations of experimental values of the electronic stopping power for many ions in many substances have been given by Paul. The accuracy of various stopping tables has been determined using statistical comparisons.
Nuclear stopping power refers to the elastic collisions between the projectile ion and atoms in the sample (the established designation "nuclear" may be confusing since nuclear stopping is not due to nuclear forces, but it is meant to note that this type of stopping involves the interaction of the ion with the nuclei in the target). If one knows the form of the repulsive potential energy between two atoms (see below), it is possible to calculate the nuclear stopping power Sn(E). In the stopping power figure shown above for aluminium ions in aluminium, nuclear stopping is negligible except at the lowest energy. Nuclear stopping increases when the mass of the ion increases. In the figure shown on the right, nuclear stopping is larger than electronic stopping at low energy. For very light ions slowing down in heavy materials, the nuclear stopping is weaker than the electronic at all energies.
Especially in the field of radiation damage in detectors, the term "non-ionizing energy loss" (NIEL) is used as a term opposite to the linear energy transfer (LET). Since, by definition, nuclear stopping power does not involve electronic excitations, NIEL and nuclear stopping can be considered to be the same quantity in the absence of nuclear reactions.
The total non-relativistic stopping power is therefore the sum of two terms: S = Se + Sn. Several semi-empirical stopping power formulas have been devised. The model given by Ziegler, Biersack and Littmark (the so-called "ZBL" stopping, see the next section), implemented in different versions of the TRIM/SRIM codes, is used most often today.
Radiative stopping power, which is due to the emission of bremsstrahlung in the electric fields of the particles in the material traversed, must be considered at extremely high ion energies. For electron projectiles, radiative stopping is always important. At high ion energies, there may also be energy losses due to nuclear reactions, but such processes are not normally described by stopping power.
Close to the surface of a solid target material, both nuclear and electronic stopping may lead to sputtering.
The slowing-down process in solids
In the beginning of the slowing-down process at high energies, the ion is slowed mainly by electronic stopping, and it moves almost in a straight path. When the ion has slowed sufficiently, the collisions with nuclei (the nuclear stopping) become more and more probable, finally dominating the slowing down. When atoms of the solid receive significant recoil energies when struck by the ion, they will be removed from their lattice positions, and produce a cascade of further collisions in the material. These
collision cascades are the main cause of damage production during ion implantation in metals and semiconductors.
When the energies of all atoms in the system have fallen below the threshold displacement energy, the production of new damage ceases, and the concept of nuclear stopping is no longer meaningful.
The total amount of energy deposited by the nuclear collisions to atoms in the materials is called the nuclear deposited energy.
The inset in the figure shows a typical range distribution of ions deposited in the solid. The case shown here might, for instance, be the slowing down of a 1 MeV silicon ion in silicon. The mean range for a 1 MeV ion is typically in the micrometer range.
Repulsive interatomic potentials
At very small distances between the nuclei the repulsive interaction can be regarded as essentially Coulombic. At greater distances, the electron clouds screen the nuclei from each other. Thus the repulsive potential can be described by multiplying the Coulombic repulsion between nuclei with a screening function φ(r/a),
V(r) = (Z1 Z2 e² / (4πε0 r)) φ(r/a),
where φ(r/a) → 1 when r → 0. Here Z1 and Z2 are the charges of the interacting nuclei, and r the distance between them; a is the so-called screening parameter.
A large number of different repulsive potentials and screening functions have been proposed over the years, some determined semi-empirically, others from theoretical calculations. A much used repulsive potential is the one given by Ziegler, Biersack and Littmark, the so-called ZBL repulsive potential. It has been constructed by fitting a universal screening function to theoretically obtained potentials calculated for a large variety of atom pairs. The ZBL screening parameter and function have the forms
au = 0.8854 a0 / (Z1^0.23 + Z2^0.23)
and
φ(x) = 0.1818 e^(−3.2x) + 0.5099 e^(−0.9423x) + 0.2802 e^(−0.4029x) + 0.02817 e^(−0.2016x),
where x = r/au, and a0 is the Bohr atomic radius = 0.529 Å.
The standard deviation of the fit of the universal ZBL repulsive potential to the theoretically calculated pair-specific potentials is 18% above 2 eV.
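For orientation, a small Python sketch of the ZBL screened Coulomb potential is given below. The screening length and the four-term screening function use the commonly quoted universal ZBL coefficients; treat them as assumptions to be checked against the original ZBL reference before serious use.

```python
# Sketch of the ZBL universal repulsive potential V(r) between two atoms.
# Coefficients are the commonly quoted ZBL values and should be verified
# against the original reference before serious use.
import numpy as np

A0 = 0.529          # Bohr radius in angstrom
E2 = 14.399645      # e^2 / (4*pi*eps0) in eV*angstrom

def screening_length(Z1, Z2):
    """ZBL universal screening length a_u in angstrom."""
    return 0.8854 * A0 / (Z1**0.23 + Z2**0.23)

def screening_function(x):
    """ZBL universal screening function phi(x), with x = r / a_u."""
    return (0.1818 * np.exp(-3.2 * x)
            + 0.5099 * np.exp(-0.9423 * x)
            + 0.2802 * np.exp(-0.4029 * x)
            + 0.02817 * np.exp(-0.2016 * x))

def zbl_potential(r_angstrom, Z1, Z2):
    """Screened Coulomb repulsion between the two nuclei, in eV."""
    a_u = screening_length(Z1, Z2)
    return Z1 * Z2 * E2 / r_angstrom * screening_function(r_angstrom / a_u)

# Example: two silicon atoms (Z = 14) at 0.5 angstrom separation.
print(f"V = {zbl_potential(0.5, 14, 14):.1f} eV")
```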
Even more accurate repulsive potentials can be obtained from self-consistent total energy calculations using density-functional theory and the local-density approximation
(LDA) for electronic exchange and correlation.
Channeling
In crystalline materials the ion may in some instances get "channeled", i.e., get focused into a channel between crystal planes where it experiences almost no collisions with nuclei. Also, the electronic stopping power may be weaker in the channel. Thus the nuclear and electronic stopping depend not only on the material type and density but also on its microscopic structure and cross-section.
Computer simulations of ion slowing down
Computer simulation methods to calculate the motion of ions in a medium have been developed since the 1960s, and are now the dominant way of treating stopping power theoretically. The basic idea in them is to follow the movement of the ion in the medium by simulating the collisions with nuclei in the medium. The electronic stopping power is usually taken into account as a frictional force slowing down the ion.
Conventional methods used to calculate ion ranges are based on the binary collision approximation (BCA). In these methods the movement of ions in the implanted sample is treated as a succession of individual collisions between the recoil ion and atoms in the sample. For each individual collision the classical scattering integral is solved by numerical integration.
The impact parameter p in the scattering integral is determined either from a stochastic distribution or in a way that takes into account the crystal structure of the sample. The former method is suitable only in simulations of implantation into amorphous materials, as it does not account for channeling.
The best known BCA simulation program is TRIM/SRIM (acronym for TRansport of Ions in Matter, in more recent versions called Stopping and Range of Ions in Matter), which is based on the ZBL electronic stopping and interatomic potential. It has a very easy-to-use user interface, and has default parameters for all ions in all materials up to an ion energy of 1 GeV, which has made it immensely popular. However, it doesn't take account of the crystal structure, which severely limits its usefulness in many cases. Several BCA programs overcome this difficulty; some fairly well known are MARLOWE, BCCRYS and crystal-TRIM.
Although the BCA methods have been successfully used in describing many physical processes, they have some obstacles for describing the slowing down process of energetic ions realistically. The basic assumption that collisions are binary results in severe problems when multiple interactions need to be taken into account. Also, in simulating crystalline materials the selection process of the next colliding lattice atom and the impact parameter p always involve several parameters which may not have perfectly well defined values, which may affect the results by 10–20% even for quite reasonable-seeming choices of the parameter values. The best reliability in BCA is obtained by including multiple collisions in the calculations, which is not easy to do correctly. However, at least MARLOWE does this.
A fundamentally more straightforward way to model multiple atomic collisions is provided by molecular dynamics (MD) simulations, in which the time evolution of a system of atoms is calculated by solving the equations of motion numerically. Special MD methods have been devised in which the number of interactions and atoms involved in MD simulations have been reduced in order to make them efficient enough for calculating ion ranges. MD simulations thus automatically describe the nuclear stopping power. The electronic stopping power can be readily included in molecular dynamics simulations, either as a frictional force
or in a more advanced manner by also following the heating of the electronic systems and coupling the electronic and atomic degrees of freedom.
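A deliberately minimal illustration of the "electronic stopping as friction" idea mentioned above: a single ion is integrated forward in time with only a velocity-proportional drag force. The ion mass, initial energy, friction coefficient and time step are all illustrative choices, and nuclear collisions are ignored entirely.

```python
# Minimal sketch of treating electronic stopping as a frictional force in an
# MD-style integration. All parameter values are illustrative assumptions.
import numpy as np

amu = 1.66054e-27      # kg
e = 1.602176634e-19    # J per eV

mass = 28.0 * amu      # e.g. a silicon ion (assumed)
energy_eV = 1.0e4      # 10 keV initial kinetic energy (assumed)
friction = 5.0e-13     # kg/s, so that F_e = -friction * v (illustrative value)

v = np.sqrt(2.0 * energy_eV * e / mass)   # initial speed, m/s
x, dt = 0.0, 1.0e-17                      # position (m) and time step (s)

while energy_eV > 1.0:                    # stop once the ion is nearly at rest
    a = -friction * v / mass              # electronic stopping as friction only
    v += a * dt
    x += v * dt
    energy_eV = 0.5 * mass * v * v / e

print(f"Toy penetration depth (electronic friction only): {x * 1e9:.1f} nm")
```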
Minimum ionizing particle
Beyond the maximum, stopping power decreases approximately like 1/v2 with increasing particle velocity v, but after a minimum, it increases again. A minimum ionizing particle (MIP) is a particle whose mean energy loss rate through matter is close to the minimum. In many practical cases, relativistic particles (e.g., cosmic-ray muons) are minimum ionizing particles.
An important property of all minimum ionizing particles is that βγ ≳ 3–4 is approximately true, where β and γ are the usual relativistic kinematic quantities. Moreover, all MIPs have almost the same mass stopping power in a given material, roughly 1–2 MeV/(g/cm2).
See also
Radiation length
Attenuation length
Collision cascade
Radiation material science
References
Further reading
(Lindhard 1963) J. Lindhard, M. Scharff, and H. E. Schiøtt. Range concepts and heavy ion ranges. Mat. Fys. Medd. Dan. Vid. Selsk., 33(14):1, 1963.
(Smith 1997) R. Smith (ed.), Atomic & ion collisions in solids and at surfaces: theory, simulation and applications, Cambridge University Press, Cambridge, UK, 1997.
External links
Stopping power and energy loss straggling calculations in solids by MELF-GOS model
A Web-based module for Range and Stopping Power in Nucleonica
Passage of charged particles through matter
Stopping-Power and Range Tables for Electrons, Protons, and Helium Ions
Stopping Power: Graphs and Data
Penetration of charged particles through matter; lecture notes by E. Bonderup
Condensed matter physics
Materials science
Nuclear physics
Radiation
Physical paradox
A physical paradox is an apparent contradiction in physical descriptions of the universe. While many physical paradoxes have accepted resolutions, others defy resolution and may indicate flaws in theory. In physics as in all of science, contradictions and paradoxes are generally assumed to be artifacts of error and incompleteness because reality is assumed to be completely consistent, although this is itself a philosophical assumption. When, as in fields such as quantum physics and relativity theory, existing assumptions about reality have been shown to break down, this has usually been dealt with by changing our understanding of reality to a new one which remains self-consistent in the presence of the new evidence.
Paradoxes relating to false assumptions
Certain physical paradoxes defy common sense predictions about physical situations. In some cases, this is the result of modern physics correctly describing the natural world in circumstances which are far outside of everyday experience. For example, special relativity has traditionally yielded two common paradoxes: the twin paradox and the ladder paradox. Both of these paradoxes involve thought experiments which defy traditional common sense assumptions about time and space. In particular, the effects of time dilation and length contraction are used in both of these paradoxes to create situations which seemingly contradict each other. It turns out that the fundamental postulate of special relativity that the speed of light is invariant in all frames of reference requires that concepts such as simultaneity and absolute time are not applicable when comparing radically different frames of reference.
Another paradox associated with relativity is Supplee's paradox which seems to describe two reference frames that are irreconcilable. In this case, the problem is assumed to be well-posed in special relativity, but because the effect is dependent on objects and fluids with mass, the effects of general relativity need to be taken into account. Taking the correct assumptions, the resolution is actually a way of restating the equivalence principle.
Babinet's paradox is that, contrary to naïve expectations, the amount of radiation removed from a beam by an object in the diffraction limit corresponds to twice its geometric cross-sectional area. This is because there are two separate processes which remove radiation from the beam in equal amounts: absorption and diffraction.
Similarly, there exists a set of physical paradoxes that directly rely on one or more assumptions that are incorrect. The Gibbs paradox of statistical mechanics yields an apparent contradiction when calculating the entropy of mixing. If the assumption that the particles in an ideal gas are indistinguishable is not appropriately taken into account, the calculated entropy is not an extensive variable as it should be.
Olbers' paradox shows that an infinite universe with a uniform distribution of stars necessarily leads to a sky that is as bright as a star. The observed dark night sky can alternatively be resolved by stating that one of the two assumptions is incorrect. This paradox was sometimes used to argue that a homogeneous and isotropic universe as required by the cosmological principle was necessarily finite in extent, but it turns out that there are ways to relax the assumptions in other ways that admit alternative resolutions.
The Mpemba paradox is that, under certain conditions, hot water will freeze faster than cold water even though it must pass through the same temperature as the cold water during the freezing process. This is a seeming violation of Newton's law of cooling, but in reality it is due to non-linear effects that influence the freezing process. The assumption that only the temperature of the water will affect freezing is not correct.
Paradoxes relating to unphysical mathematical idealizations
A common paradox occurs with mathematical idealizations such as point sources which describe physical phenomena well at distant or global scales but break down at the point itself. These paradoxes are sometimes seen as relating to Zeno's paradoxes which all deal with the physical manifestations of mathematical properties of continuity, infinitesimals, and infinities often associated with space and time. For example, the electric field associated with a point charge is infinite at the location of the point charge. A consequence of this apparent paradox is that the electric field of a point-charge can only be described in a limiting sense by a carefully constructed Dirac delta function. This mathematically inelegant but physically useful concept allows for the efficient calculation of the associated physical conditions while conveniently sidestepping the philosophical issue of what actually occurs at the infinitesimally-defined point: a question that physics is as yet unable to answer. Fortunately, a consistent theory of quantum electrodynamics removes the need for infinitesimal point charges altogether.
A similar situation occurs in general relativity with the gravitational singularity associated with the Schwarzschild solution that describes the geometry of a black hole. The curvature of spacetime at the singularity is infinite which is another way of stating that the theory does not describe the physical conditions at this point. It is hoped that the solution to this paradox will be found with a consistent theory of quantum gravity, something which has thus far remained elusive. A consequence of this paradox is that the associated singularity that occurred at the supposed starting point of the universe (see Big Bang) is not adequately described by physics. Before a theoretical extrapolation of a singularity can occur, quantum mechanical effects become important during the Planck era. Without a consistent theory, there can be no meaningful statement about the physical conditions associated with the universe before this point.
Another paradox due to mathematical idealization is D'Alembert's paradox of fluid mechanics. When the forces associated with two-dimensional, incompressible, irrotational, inviscid steady flow across a body are calculated, there is no drag. This is in contradiction with observations of such flows, but as it turns out a fluid that rigorously satisfies all the conditions is a physical impossibility. The mathematical model breaks down at the surface of the body, and new solutions involving boundary layers have to be considered to correctly model the drag effects.
Quantum mechanical paradoxes
A significant set of physical paradoxes are associated with the privileged position of the observer in quantum mechanics.
Two of these are:
the EPR paradox and
the Schrödinger's cat paradox,
These thought experiments use principles from quantum mechanics to derive conclusions that are seemingly contradictory.
In the case of Schrödinger's cat this takes the form of a seeming absurdity.
A cat is placed in a box sealed off from observation with a quantum mechanical switch designed to kill the cat when appropriately deployed. While in the box, the cat is described as being in a quantum superposition of "dead" and "alive" states, though opening the box effectively collapses the cat's wave function to one of the two conditions.
In the case of the EPR paradox, quantum entanglement appears to allow information to be transmitted faster than the speed of light, which would be physically impossible and would violate special relativity. Related to the EPR paradox is the phenomenon of quantum pseudo-telepathy, in which parties who are prevented from communicating do manage to accomplish tasks that seem to require direct contact.
These paradoxes arise when quantum mechanics is interpreted incorrectly. For example, quantum mechanics makes no claim to represent "a cat". Quantum mechanics represents probabilities for the occurrence of specific events; it can predict the probability of the cat being alive when the box is opened. Likewise, the EPR paradox is a consequence of reasoning about two distinct "particles".
Speculative theories of quantum gravity that combine general relativity with quantum mechanics have their own associated paradoxes that are generally accepted to be artifacts of the lack of a consistent physical model that unites the two formulations. One such paradox is the black hole information paradox which points out that information associated with a particle that falls into a black hole is not conserved when the theoretical Hawking radiation causes the black hole to evaporate.
Causality paradoxes
A set of similar paradoxes occurs within the area of physics involving arrow of time and causality. One of these, the grandfather paradox, deals with the peculiar nature of causality in closed time-like loops. In its most crude conception, the paradox involves a person traveling back in time and murdering an ancestor who hadn't yet had a chance to procreate. The speculative nature of time travel to the past means that there is no agreed upon resolution to the paradox, nor is it even clear that there are physically possible solutions to the Einstein equations that would allow for the conditions required for the paradox to be met. Nevertheless, there are two common explanations for possible resolutions for this paradox that take on similar flavor for the explanations of quantum mechanical paradoxes. In the so-called self-consistent solution, reality is constructed in such a way as to deterministically prevent such paradoxes from occurring. This idea makes many free will advocates uncomfortable, though it is very satisfying to many philosophical naturalists. Alternatively, the many worlds idealization or the concept of parallel universes is sometimes conjectured to allow for a continual fracturing of possible worldlines into many different alternative realities. This would mean that any person who traveled back in time would necessarily enter a different parallel universe that would have a different history from the point of the time travel forward.
Another paradox associated with the causality and the one-way nature of time is Loschmidt's paradox which poses the question how can microprocesses that are time-reversible produce a time-irreversible increase in entropy. A partial resolution to this paradox is rigorously provided for by the fluctuation theorem which relies on carefully keeping track of time averaged quantities to show that from a statistical mechanics point of view, entropy is far more likely to increase than to decrease. However, if no assumptions about initial boundary conditions are made, the fluctuation theorem should apply equally well in reverse, predicting that a system currently in a low-entropy state is more likely to have been at a higher-entropy state in the past, in contradiction with what would usually be seen in a reversed film of a nonequilibrium state going to equilibrium. Thus, the overall asymmetry in thermodynamics which is at the heart of Loschmidt's paradox is still not resolved by the fluctuation theorem. Most physicists believe that the thermodynamic arrow of time can only be explained by appealing to low entropy conditions shortly after the Big Bang, although the explanation for the low entropy of the Big Bang itself is still debated.
Observational paradoxes
A further set of physical paradoxes are based on sets of observations that fail to be adequately explained by current physical models. These may simply be indications of the incompleteness of current theories. It is recognized that unification has not been accomplished yet which may hint at fundamental problems with the current scientific paradigms. Whether this is the harbinger of a scientific revolution yet to come or whether these observations will yield to future refinements or be found to be erroneous is yet to be determined. A brief list of these yet inadequately explained observations includes observations implying the existence of dark matter, observations implying the existence of dark energy, the observed matter-antimatter asymmetry, the GZK paradox, the heat death paradox, and the Fermi paradox.
See also
List of paradoxes
References
External links
Usenet Physics FAQ by John Baez
Time travel and modern physics
Philosophy of physics
Thought experiments in physics
KT (energy)
kT (also written as kBT) is the product of the Boltzmann constant, k (or kB), and the temperature, T. This product is used in physics as a scale factor for energy values in molecular-scale systems (sometimes it is used as a unit of energy), as the rates and frequencies of many processes and phenomena depend not on their energy alone, but on the ratio of that energy and kT, that is, on E/kT (see Arrhenius equation, Boltzmann factor). For a system in equilibrium in canonical ensemble, the probability of the system being in state with energy E is proportional to e^(−E/kT).
More fundamentally, kT is the amount of heat required to increase the thermodynamic entropy of a system by k.
In physical chemistry, as kT often appears in the denominator of fractions (usually because of the Boltzmann distribution), sometimes β = 1/kT is used instead of kT, turning e^(−E/kT) into e^(−βE).
RT
RT is the product of the molar gas constant, R, and the temperature, T. This product is used in physics and chemistry as a scaling factor for energy values in macroscopic scale (sometimes it is used as a pseudo-unit of energy), as many processes and phenomena depend not on the energy alone, but on the ratio of energy and RT, i.e. . The SI units for RT are joules per mole (J/mol).
It differs from kT only by a factor of the Avogadro constant, NA. Its dimension is energy, or ML2T−2, expressed in SI units as joules (J):
kT = RT/NA.
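A quick numerical check of these quantities at an assumed room temperature of 298.15 K, including the relation kT = RT/NA:

```python
# Evaluate kT, beta = 1/kT and RT at an illustrative room temperature,
# and verify that kT = RT / N_A.
k_B = 1.380649e-23      # Boltzmann constant, J/K
R = 8.314462618         # molar gas constant, J/(mol*K)
N_A = 6.02214076e23     # Avogadro constant, 1/mol
eV = 1.602176634e-19    # J per electronvolt

T = 298.15              # K (assumed room temperature)

kT = k_B * T
RT = R * T
beta = 1.0 / kT

print(f"kT       = {kT:.3e} J = {kT / eV:.4f} eV")
print(f"beta     = {beta:.3e} 1/J")
print(f"RT       = {RT:.1f} J/mol")
print(f"RT / N_A = {RT / N_A:.3e} J (should equal kT)")
```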
References
Thermodynamics
Statistical mechanics
Hypertonia
Hypertonia is a term sometimes used synonymously with spasticity and rigidity in the literature surrounding damage to the central nervous system, namely upper motor neuron lesions. Impaired ability of damaged motor neurons to regulate descending pathways gives rise to disordered spinal reflexes, increased excitability of muscle spindles, and decreased synaptic inhibition. These consequences result in abnormally increased muscle tone of symptomatic muscles. Some authors suggest that the current definition for spasticity, the velocity-dependent over-activity of the stretch reflex, is not sufficient as it fails to take into account patients exhibiting increased muscle tone in the absence of stretch reflex over-activity. They instead suggest that "reversible hypertonia" is more appropriate and represents a treatable condition that is responsive to various therapy modalities like drug or physical therapy.
Presentation
Symptoms associated with central nervous systems disorders are classified into positive and negative categories. Positive symptoms include those that increase muscle activity through hyper-excitability of the stretch reflex (i.e., rigidity and spasticity) where negative symptoms include those of insufficient muscle activity (i.e. weakness) and reduced motor function. Often the two classifications are thought to be separate entities of a disorder; however, some authors propose that they may be closely related.
Pathophysiology
Hypertonia is caused by upper motor neuron lesions which may result from injury, disease, or conditions that involve damage to the central nervous system. The lack of or decrease in upper motor neuron function leads to loss of inhibition with resultant hyperactivity of lower motor neurons. Different patterns of muscle weakness or hyperactivity can occur based on the location of the lesion, causing a multitude of neurological symptoms, including spasticity, rigidity, or dystonia.
Spastic hypertonia involves uncontrollable muscle spasms, stiffening or straightening out of muscles, shock-like contractions of all or part of a group of muscles, and abnormal muscle tone. It is seen in disorders such as cerebral palsy, stroke, and spinal cord injury.
Rigidity is a severe state of hypertonia where muscle resistance occurs throughout the entire range of motion of the affected joint independent of velocity. It is frequently associated with lesions of the basal ganglia. Individuals with rigidity present with stiffness, decreased range of motion and loss of motor control. Rigidity is a nonselective increase in the tone of agonist and antagonist without velocity dependence, and the increased tone remains uniform throughout the range of movement. On the contrary, spasticity is a velocity-dependent increase in tone resulting from the hyper excitability of stretch reflexes. It primarily involves the antigravity muscles – flexors of the upper limb and extensors of the lower limb. During the passive stretch, a brief “free interval” is appreciated in spasticity but not in rigidity because the resting muscle is electromyographically silent in spasticity. In contrast, in rigidity, the resting muscle shows firing.
Dystonic hypertonia refers to muscle resistance to passive stretching (in which a therapist gently stretches the inactive contracted muscle to a comfortable length at very low speeds of movement) and a tendency of a limb to return to a fixed involuntary (and sometimes abnormal) posture following movement.
Management
Therapeutic interventions are best individualized to particular patients. Basic principles of treatment for hypertonia are to avoid noxious stimuli and provide frequent range of motion exercise.
Physical interventions
Physiotherapy has been shown to be effective in controlling hypertonia through the use of stretching aimed to reduce motor neuron excitability. The aim of a physical therapy session could be to inhibit excessive tone as far as possible, give the patient a sensation of normal position and movement, and to facilitate normal movement patterns. While static stretch has been the classical means to increase range of motion, PNF stretching has been used in many clinical settings to effectively reduce muscle spasticity.
Icing and other topical anesthetics may decrease the reflexive activity for short period of time in order to facilitate motor function. Inhibitory pressure (applying firm pressure over muscle tendon) and promoting body heat retention and rhythmic rotation (slow repeated rotation of affected body part to stimulate relaxation) have also been proposed as potential methods to decrease hypertonia. Aside from static stretch casting, splinting techniques are extremely valuable to extend joint range of motion lost to hypertonicity. A more unconventional method for limiting tone is to deploy quick repeated passive movements to an involved joint in cyclical fashion; this has also been demonstrated to show results on persons without physical disabilities. For a more permanent state of improvement, exercise and patient education is imperative. Isokinetic, aerobic, and strength training exercises should be performed as prescribed by a physiotherapist, and stressful situations that may cause increased tone should be minimized or avoided.
Pharmaceutical interventions
Baclofen, diazepam and dantrolene remain the three most commonly used pharmacologic agents in the treatment of spastic hypertonia. Baclofen is generally the drug of choice for spinal cord types of spasticity, while sodium dantrolene is the only agent which acts directly on muscle tissue. Tizanidine is also available. Phenytoin with chlorpromazine may be potentially useful if sedation does not limit their use. Ketazolam, not yet available in the United States, may be a significant addition to the pharmacologic set of options. Intrathecal administration of antispastic medications allows for high concentrations of drug near the site of action, which limits side effects.
See also
Dystonia
Hypotonia
Paratonia
Spasticity
Clasp-knife response
References
External links
Cerebral palsy types
Symptoms and signs: Nervous system
Muscular disorders
Pediatrics
Spiral Dynamics
Spiral Dynamics (SD) is a model of the evolutionary development of individuals, organizations, and societies. It was initially developed by Don Edward Beck and Christopher Cowan based on the emergent cyclical theory of Clare W. Graves, combined with memetics. A later collaboration between Beck and Ken Wilber produced Spiral Dynamics Integral (SDi). Several variations of Spiral Dynamics continue to exist, both independently and incorporated into or drawing on Wilber's Integral theory. Spiral Dynamics has applications in management theory and business ethics, and as an example of applied memetics. However, it lacks mainstream academic support.
Overview
Spiral Dynamics describes how value systems and worldviews emerge from the interaction of "life conditions" and the mind's capacities. The emphasis on life conditions as essential to the progression through value systems is unusual among similar theories, and leads to the view that no level is inherently positive or negative, but rather is a response to the local environment, social circumstances, place and time. Through these value systems, groups and cultures structure their societies and individuals integrate within them. Each distinct set of values is developed as a response to solving the problems of the previous system. Changes between states may occur incrementally (first order change) or in a sudden breakthrough (second order change). The value systems develop in a specific order, and the most important question when considering the value system being expressed in a particular behavior is why the behavior occurs.
Overview of the levels
Development of the theory
University of North Texas (UNT) professor Don Beck sought out Union College psychology professor Clare W. Graves after reading about his work in The Futurist. They met in person in 1975, and Beck, soon joined by UNT faculty member Chris Cowan, worked closely with Graves until his death in 1986. Beck made over 60 trips to South Africa during the 1980s and 1990s, applying Graves's emergent cyclical theory in various projects. This experience, along with others Beck and Cowan had applying the theory in North America, motivated the development of Spiral Dynamics.
Beck and Cowan first published their extension and adaptation of Graves's emergent cyclical theory in Spiral Dynamics: Mastering Values, Leadership, and Change (Exploring the New Science of Memetics) (1996). They introduced a simple color-coding for the eight value systems identified by Graves (and a predicted ninth) which is better known than Graves's letter pair identifiers. Additionally, Beck and Cowan integrated ideas from the field of memetics as created by Dawkins and further developed by Csikszentmihalyi, identifying memetic attractors for each of Graves's levels. These attractors, which they called "VMemes", are said to bind memes into cohesive packages which structure the world views of both individuals and societies.
Diversification of views
While Spiral Dynamics began as a single formulation and extension of Graves's work, a series of disagreements and shifting collaborations have produced three distinct approaches. By 2010, these had settled as Christopher Cowan and Natasha Todorovic advocating their trademarked "SPIRAL DYNAMICS®" as fundamentally the same as Graves's emergent cyclical theory, Don Beck advocating Spiral Dynamics Integral (SDi) with a community of practice around various chapters of his Centers for Human Emergence, and Ken Wilber subordinating SDi to his similarly but-not-identically colored Integral AQAL "altitudes", with a greater focus on spirituality.
This state of affairs has led to practitioners noting the "lineage" of their approach in publications.
Timeline
The following timeline shows the development of the various Spiral Dynamics factions and the major figures involved in them, as well as the initial work done by Graves. Splits and changes between factions are based on publications or public announcements, or approximated to the nearest year based on well-documented events.
Vertical bars indicate notable publications, which are listed along with a few other significant events after the timeline.
Bolded years indicate publications that appear as vertical bars in the chart above:
1966: Graves: first major publication (in the Harvard Business Review)
1970: Graves: peer reviewed publication in Journal of Humanistic Psychology
1974: Graves: article in The Futurist (Beck first becomes aware of Graves's theory; Cowan a bit later)
1977: Graves abandons manuscript of what would later become The Never Ending Quest
1979: Beck and Cowan found National Values Center, Inc. (NVC)
1981: Beck and Cowan resign from UNT to work with Graves; Beck begins applying theory in South Africa
1986: Death of Clare Graves
1995: Wilber: Sex, Ecology, Spirituality (introduces quadrant model, first mention of Graves's ECLET)
1996: Beck and Cowan: Spiral Dynamics: Mastering Values, Leadership, and Change
1998: Cowan and Todorovic form NVC Consulting (NVCC) as an "outgrowth" of NVC
1998: Cowan files for "Spiral Dynamics" service mark, registered to NVC
1999: Beck (against SD as service mark) and Cowan (against Wilber's Integral theory) cease collaborating
1999: Wilber: The Collected Works of Ken Wilber, Vol. 4: Integral Psychology (first Spiral Dynamics reference)
2000: Cowan and Todorovic: "Spiral Dynamics: The Layers of Human Values in Strategy" in Strategy & Leadership (peer reviewed)
2000: Wilber: A Theory of Everything (integrates SD with AQAL, defines MGM: "Mean Green Meme")
2000: Wilber founds the Integral Institute with Beck as a founding associate around this time
2002: Beck: "SDi: Spiral Dynamics in the Integral Age" (launches SDi as a brand)
2002: Todorovic: "The Mean Green Hypothesis: Fact or Fiction?" (refutes MGM)
2002: Graves; William R. Lee (annot.); Cowan and Todorovic (eds.): Levels of Human Existence, transcription of Graves's 1971 three-day seminar
2004: Beck founds the Center for Human Emergence (CHE)
2005: Beck, Elza S. Maalouf and Said E. Dawlabani found the Center for Human Emergence Middle East
2005: Graves; Cowan and Todorovic (eds.): The Never Ending Quest
2005: Beck and Wilber cease collaborating around this time, disagreeing on Wilber's changes to SDi
2006: Wilber: Integral Spirituality (adds altitudes colored to align with both SDi and chakras)
2009: NVC dissolved as business entity, original SD service mark (officially registered to NVC) canceled
2010: Cowan and Todorovic re-file for SD service mark and trademark, registered to NVC Consulting
2015: Death of Chris Cowan
2017: Wilber: Religion of Tomorrow (further elaborates on the altitude concept and coloring)
2018: Beck et al.: Spiral Dynamics in Action
2022: Death of Don Beck
Cowan and Todorovic's "Spiral Dynamics"
Chris Cowan's decision to trademark "Spiral Dynamics" in the US and form a consulting business with Natasha Todorovic contributed to the split between Beck and him in 1999. Cowan and Todorovic subsequently published an article on Spiral Dynamics in the peer-reviewed journal Strategy & Leadership, edited and published Graves's unfinished manuscript, and generally took the position that the distinction between Spiral Dynamics and Graves's ECLET is primarily one of terminology. Holding this view, they opposed interpretations seen as "heterodox."
In particular, Cowan and Todorovic's view of Spiral Dynamics stands in opposition to that of Ken Wilber. Wilber biographer Frank Visser describes Cowan as a "strong" critic of Wilber and his Integral theory, particularly the concept of a "Mean Green Meme." Todorovic produced a paper arguing that research refutes the existence of the "Mean Green Meme" as Beck and particularly Wilber described it.
Beck's "Spiral Dynamics integral" (SDi)
By early 2000, Don Beck was corresponding with integral philosopher Ken Wilber about Spiral Dynamics and using a "4Q/8L" diagram combining Wilber's four quadrants with the eight known levels of Spiral Dynamics. Beck officially announced SDi as launching on January 1, 2002, aligning Spiral Dynamics with integral theory and additionally citing the influence of John Petersen of the Arlington Institute and Ichak Adizes. By 2006, Wilber had introduced a slightly different color sequence for his AQAL "altitudes", diverging from Beck's SDi and relegating it to the values line, which is one of many lines within AQAL.
Later influences on SDi include the work of Muzafer Sherif and Carolyn Sherif in the fields of realistic conflict and social judgement, specifically their Assimilation Contrast Effect model and Robber's Cave study.
SD/SDi and Ken Wilber's Integral Theory
Ken Wilber briefly referenced Graves in his 1986 book (with Jack Engler and Daniel P. Brown) Transformations of Consciousness, and again in 1995's Sex, Ecology, Spirituality which also introduced his four quadrants model. However, it was not until the "Integral Psychology" section of 1999's Collected Works: Volume 4 that he integrated Gravesian theory, now in the form of Spiral Dynamics. Beck and Wilber began discussing their ideas with each other around this time.
AQAL "altitudes"
By 2006, Wilber was using SDi only for the values line, one of many lines in his All Quadrants, All Levels/Lines (AQAL) framework. In the book Integral Spirituality published that year, he introduced the concept of "altitudes" as an overall "content-free" system to correlate developmental stages across all of the theories on all of the lines integrated by AQAL.
The altitudes used a set of colors that were ordered according to the rainbow, which Wilber explained was necessary to align with color energies in the tantric tradition. This left only Red, Orange, Green, and Turquoise in place, changing all of the other colors to greater or lesser degrees. Furthermore, where Spiral Dynamics theorizes that the 2nd tier would have six stages repeating the themes of the six stages of the 1st tier, in the altitude system the 2nd tier contains only two levels (corresponding to the first two SD 2nd tier levels) followed by a 3rd tier of four spiritually-oriented levels inspired by the work of Sri Aurobindo. Beck and Cowan each consider this 3rd tier to be non-Gravesian.
Wilber critic Frank Visser notes that while Wilber gives a correspondence of his altitude colors to chakras, his correspondence does not actually match any traditional system for coloring chakras, despite Wilber's assertion that using the wrong colors would "backfire badly when any actual energies were used." He goes on to note that Wilber's criticism of the SD colors as "inadequate" ignores that they were not intended to correlate with any system such as chakras. In this context, Visser expresses sympathy for Beck and Cowan's dismay over what Visser describes as "vandalism" regarding the color scheme, concluding that the altitude colors are an "awkward hybrid" of the SD and rainbow/chakra color systems, both lacking the expressiveness of the former and failing to accurately correlate with the latter.
Criticism and limitations
As an extension of Graves's theory, most criticisms of that theory apply to Spiral Dynamics as well. Likewise, to the extent that Spiral Dynamics Integral incorporates Ken Wilber's integral theory, criticism of that theory, and the lack of mainstream academic support for it are also relevant.
In addition, there have been criticisms of various aspects of SD and/or SDi that are specific to those extensions. Nicholas Reitter, writing in the Journal of Conscious Evolution, observes:
On the other hand, the SD authors seem also to have magnified some of the weaknesses in Graves' approach. The occasional messianism, unevenness of presentation and constant business-orientation of Graves' (2005) manuscript is transmuted in the SD authors' book (Beck and Cowan 1996) into a sometimes- bewildering array of references to world history, pop culture and other topics, often made in helter-skelter fashion.
Spiral Dynamics has been criticized by some as appearing to be like a cult, with undue prominence given to the business and intellectual property concerns of its leading advocates.
Metamodernists Daniel Görtz and Emil Friis, writing as Hanzi Freinacht, who created a multi-part system combining aspects of SD with other developmental measurements, dismissed the Turquoise level, saying that while there will eventually be another level, it does not currently exist. They argue that attempts to build Turquoise communities are likely to lead to the development of "abusive cults".
Psychologist Keith Rice, discussing his application of SDi in individual psychotherapy, notes that it encounters limitations in accounting for temperament and the unconscious. However, regarding SDi's "low profile among academics," he notes that it can easily be matched to more well-known models "such as Maslow, Loevinger, Kohlberg, Adorno, etc.," in order to establish trust with clients.
Influence and applications
Spiral Dynamics has influenced management theory, which was the primary focus of the 1996 Spiral Dynamics book. John Mackey and Rajendra Sisodia write that the vision and values of conscious capitalism as they articulate it are consistent with the "2nd tier" VMEMES of Spiral Dynamics. Rica Viljoen's case study of economic development in Ghana demonstrates how understanding the Purple VMEME allows for organizational storytelling that connects with diverse (non-Western) worldviews.
Spiral Dynamics has also been noted as an example of applied memetics. In his chapter, "'Meme Wars': A Brief Overview of Memetics and Some Essential Context" in the peer-reviewed book Memetics and Evolutionary Economics, Michael P. Schlaile includes Spiral Dynamics in the "organizational memetics" section of his list of "enlightening examples of applied memetics." Schlaile also notes Said Dawlabani's SDi-based "MEMEnomics" as an alternative to his own "economemetics" in his chapter examining memetics and economics in the same book. Elza Maalouf argues that SDi provides a "memetic" interpretation of non-Western cultures that Western NGOs often lack, focusing attention on the "indigenous content" of the culture's value system.
One of the main applications of Spiral Dynamics is to inform more nuanced and holistic systems change strategies. Just like categories in any other framework, the various levels can be seen as memetic lenses through which to look at the world, helping those leading change take a bird's eye view of the diverse perspectives on a single topic. At best, Spiral Dynamics can help synthesize these perspectives, recognize the strength in having a diversity of worldviews, and inform interventions that take into consideration the needs and values of individuals at every level of the spiral.
Spiral Dynamics continues to influence integral philosophy and spirituality, and the developmental branch of metamodern philosophy. Both integralists and metamodernists connect their philosophies to SD's Yellow VMEME. Integralism also identifies with Turquoise and eventually added further stages not found in SD or SDi, while metamodernism dismisses Turquoise as nonexistent.
SDi has also been referenced in the fields of education, urban planning, and cultural analysis.
Notes
Works cited
(Note on page ii: "This study was approved by Indiana University Institutional Review Board (IRB)." Note also that a previous report was published as: Nasser, Ilham (June 2020). "Mapping the Terrain of Education 2018–2019: A Summary Report". Journal of Education in Muslim Societies. Indiana University Press. 1 (2): 3–21. doi:10.2979/jems.1.2.08, but is not freely downloadable.)
Developmental psychology
Zitterbewegung
In physics, the zitterbewegung is the theoretical prediction of a rapid oscillatory motion of elementary particles that obey relativistic wave equations. This prediction was first discussed by Gregory Breit in 1928 and later by Erwin Schrödinger in 1930 as a result of analysis of the wave packet solutions of the Dirac equation for relativistic electrons in free space, in which an interference between positive and negative energy states produces an apparent fluctuation (up to the speed of light) of the position of an electron around the median, with an angular frequency of 2mc²/ħ, or approximately 1.6×10^21 radians per second.
This apparent oscillatory motion is often interpreted as an artifact of using the Dirac equation in a single particle description and disappears when using quantum field theory. For the hydrogen atom, the zitterbewegung is related to the Darwin term, a small correction of the energy level of the s-orbitals.
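The quoted angular frequency follows directly from the electron rest energy; a short numerical check:

```python
# Zitterbewegung angular frequency 2*m*c^2/hbar for the electron.
hbar = 1.054571817e-34     # J*s
m_e = 9.1093837015e-31     # kg
c = 2.99792458e8           # m/s

omega = 2.0 * m_e * c**2 / hbar
print(f"omega = {omega:.2e} rad/s")   # roughly 1.6e21 rad/s
```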
Theory
Free spin-1/2 fermion
The time-dependent Dirac equation is written as
H ψ(x, t) = iħ ∂ψ(x, t)/∂t ,
where ħ is the reduced Planck constant, ψ is the wave function (bispinor) of a spin-1/2 fermionic particle, and H is the Dirac Hamiltonian of a free particle:
H = βmc² + c α·p ,
where m is the mass of the particle, c is the speed of light, p is the momentum operator, and β and αj are matrices related to the gamma matrices γμ, as β = γ0 and αj = γ0γj.
In the Heisenberg picture, the time dependence of an arbitrary observable Q obeys the equation
dQ(t)/dt = (i/ħ)[H, Q(t)].
In particular, the time-dependence of the position operator is given by
dxk(t)/dt = (i/ħ)[H, xk(t)] = c αk(t) ,
where xk(t) is the position operator at time t.
The above equation shows that the operator cαk can be interpreted as the k-th component of a "velocity operator".
Note that this implies that
(dxk(t)/dt)² = c² ,
as if the "root mean square speed" in every direction of space is the speed of light.
To add time-dependence to αk, one implements the Heisenberg picture, which says
αk(t) = e^(iHt/ħ) αk e^(−iHt/ħ) .
The time-dependence of the velocity operator is given by
dαk(t)/dt = (i/ħ)[H, αk(t)] = (2i/ħ)(c pk − αk(t) H) ,
where pk is time-independent, since the momentum operator commutes with H.
Now, because both H and pk are time-independent, the above equation can easily be integrated twice to find the explicit time-dependence of the position operator.
First:
αk(t) = c pk H⁻¹ + (αk(0) − c pk H⁻¹) e^(−2iHt/ħ) ,
and finally
xk(t) = xk(0) + c² pk H⁻¹ t + (iħc/2)(αk(0) − c pk H⁻¹) H⁻¹ (e^(−2iHt/ħ) − 1) .
The resulting expression consists of an initial position, a motion proportional to time, and an oscillation term with an amplitude on the order of the reduced Compton wavelength. That oscillation term is the so-called zitterbewegung.
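The oscillation can be made concrete numerically. The sketch below fixes a single momentum component p, builds the 4×4 free Dirac Hamiltonian in the standard representation, and evolves a state that mixes positive- and negative-energy components; the expectation value of the velocity operator cα3 then oscillates at roughly 2E/ħ. The momentum value and the initial spinor are illustrative choices.

```python
# Numerical illustration of zitterbewegung for a single momentum mode.
# H = c*p*alpha_3 + beta*m*c^2 acting on 4-component spinors (Dirac representation).
import numpy as np
from scipy.linalg import expm

hbar = 1.054571817e-34   # J*s
c = 2.99792458e8         # m/s
m = 9.1093837015e-31     # kg (electron)
p = 1.0e-22              # fixed momentum component, kg*m/s (illustrative)

sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)
Z2 = np.zeros((2, 2), dtype=complex)
alpha3 = np.block([[Z2, sigma_z], [sigma_z, Z2]])
beta = np.block([[I2, Z2], [Z2, -I2]])

H = c * p * alpha3 + m * c**2 * beta
E = np.sqrt((c * p)**2 + (m * c**2)**2)          # magnitude of the energy eigenvalues

# A state mixing positive- and negative-energy components: equal upper/lower spin-up parts.
psi0 = np.array([1.0, 0.0, 1.0, 0.0], dtype=complex) / np.sqrt(2.0)

period = np.pi * hbar / E                        # zitterbewegung period, 2*pi / (2E/hbar)
for t in np.linspace(0.0, period, 6):
    psi_t = expm(-1j * H * t / hbar) @ psi0
    v3 = np.real(np.conj(psi_t) @ (c * alpha3 @ psi_t))   # <v_3(t)> = <c*alpha_3>
    print(f"t = {t:.3e} s   <v_3> = {v3:+.3e} m/s")
```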
Interpretation
In quantum mechanics, the zitterbewegung term vanishes on taking expectation values for wave-packets that are made up entirely of positive- (or entirely of negative-) energy waves. The standard relativistic velocity can be recovered by taking a Foldy–Wouthuysen transformation, when the positive and negative components are decoupled. Thus, we arrive at the interpretation of the zitterbewegung as being caused by interference between positive- and negative-energy wave components.
In quantum electrodynamics (QED) the negative-energy states are replaced by positron states, and the zitterbewegung is understood as the result of interaction of the electron with spontaneously forming and annihilating electron-positron pairs.
More recently, it has been noted that in the case of free particles it could just be an artifact of the simplified theory. Zitterbewegung appears to be due to the "small components" of the Dirac 4-spinor, i.e. to a little bit of antiparticle mixed into the particle wavefunction for nonrelativistic motion. It does not appear in the correctly second-quantized theory, or rather, it is resolved by using Feynman propagators and doing QED. Nevertheless, it is an interesting way to understand certain QED effects heuristically from the single-particle picture.
Zigzag picture of fermions
An alternative perspective of the physical meaning of zitterbewegung was provided by Roger Penrose, by observing that the Dirac equation can be reformulated by splitting the four-component Dirac spinor into a pair of massless left-handed and right-handed two-component spinors (or zig and zag components), where each is the source term in the other's equation of motion, with a coupling constant proportional to the original particle's rest mass m, as
i σ^μ ∂μ ψR = (mc/ħ) ψL  and  i σ̄^μ ∂μ ψL = (mc/ħ) ψR .
The original massive Dirac particle can then be viewed as being composed of two massless components, each of which continually converts itself to the other. Since the components are massless they move at the speed of light, and their spin is constrained to be about the direction of motion, but each has opposite helicity: and since the spin remains constant, the direction of the velocity reverses, leading to the characteristic zigzag or zitterbewegung motion.
Experimental simulation
Zitterbewegung of a free relativistic particle has never been observed directly, although some authors believe they have found evidence in favor of its existence. It has also been simulated in atomic systems that provide analogues of a free Dirac particle. The first such example, in 2010, placed a trapped ion in an environment such that the non-relativistic Schrödinger equation for the ion had the same mathematical form as the Dirac equation (although the physical situation is different). Zitterbewegung-like oscillations of ultracold atoms in optical lattices were predicted in 2008. In 2013, zitterbewegung was simulated in a Bose–Einstein condensate of 50,000 atoms of 87Rb confined in an optical trap.
An optical analogue of zitterbewegung was demonstrated in a quantum cellular automaton implemented with orbital angular momentum states of light.
Other proposals for condensed-matter analogues include semiconductor nanostructures, graphene and topological insulators.
See also
Casimir effect
Lamb shift
References
Further reading
External links
Zitterbewegung in New Scientist
Quantum field theory
Arrhenius equation
In physical chemistry, the Arrhenius equation is a formula for the temperature dependence of reaction rates. The equation was proposed by Svante Arrhenius in 1889, based on the work of Dutch chemist Jacobus Henricus van 't Hoff who had noted in 1884 that the van 't Hoff equation for the temperature dependence of equilibrium constants suggests such a formula for the rates of both forward and reverse reactions. This equation has a vast and important application in determining the rate of chemical reactions and for calculation of energy of activation. Arrhenius provided a physical justification and interpretation for the formula. Currently, it is best seen as an empirical relationship. It can be used to model the temperature variation of diffusion coefficients, population of crystal vacancies, creep rates, and many other thermally induced processes and reactions. The Eyring equation, developed in 1935, also expresses the relationship between rate and energy.
Equation
The Arrhenius equation describes the exponential dependence of the rate constant k of a chemical reaction on the absolute temperature T as

k = A exp(−Ea / (RT))
where
k is the rate constant (frequency of collisions resulting in a reaction),
T is the absolute temperature,
A is the pre-exponential factor or Arrhenius factor or frequency factor. Arrhenius originally considered A to be a temperature-independent constant for each chemical reaction. However more recent treatments include some temperature dependence – see below.
Ea is the molar activation energy for the reaction,
R is the universal gas constant.
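As a numerical illustration, the sketch below evaluates k = A exp(−Ea/(RT)) for a hypothetical first-order reaction; the values of A and Ea are assumptions chosen only for the example, not data for any particular reaction.

```python
import math

R = 8.314  # universal gas constant, J/(mol*K)

def arrhenius(A, Ea, T):
    """Rate constant k = A * exp(-Ea / (R*T)).

    A  : pre-exponential factor (same units as k, e.g. 1/s for first order)
    Ea : molar activation energy, J/mol
    T  : absolute temperature, K
    """
    return A * math.exp(-Ea / (R * T))

# Hypothetical example values (assumptions for illustration only):
A = 1.0e13       # 1/s
Ea = 75_000.0    # J/mol (75 kJ/mol)

for T in (298.15, 308.15, 318.15):
    print(f"T = {T:.2f} K  ->  k = {arrhenius(A, Ea, T):.3e} 1/s")
```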
Alternatively, the equation may be expressed as

k = A exp(−Ea / (kB T))
where
Ea is the activation energy for the reaction (in the same unit as kBT),
kB is the Boltzmann constant.
The only difference is the unit of Ea: the former form uses energy per mole, which is common in chemistry, while the latter form uses energy per molecule directly, which is common in physics.
The different units are accounted for by using either the gas constant, R, or the Boltzmann constant, kB, as the multiplier of the temperature T.
The units of the pre-exponential factor A are identical to those of the rate constant and will vary depending on the order of the reaction. If the reaction is first order, it has the unit s−1, and for that reason A is often called the frequency factor or attempt frequency of the reaction. Most simply, k is the number of collisions that result in a reaction per second, A is the number of collisions (leading to a reaction or not) per second occurring with the proper orientation to react, and exp(−Ea / (RT)) is the probability that any given collision will result in a reaction. It can be seen that either increasing the temperature or decreasing the activation energy (for example through the use of catalysts) will result in an increased rate of reaction.
Given the small temperature range of kinetic studies, it is reasonable to approximate the activation energy as being independent of the temperature. Similarly, under a wide range of practical conditions, the weak temperature dependence of the pre-exponential factor is negligible compared to the temperature dependence of the factor exp(−Ea / (RT)), except in the case of "barrierless" diffusion-limited reactions, in which case the pre-exponential factor is dominant and is directly observable.
With this equation it can be roughly estimated that the rate of reaction increases by a factor of about 2 to 3 for every 10 °C rise in temperature, for common values of activation energy and temperature range.
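This rule of thumb can be checked directly: since the pre-exponential factor cancels, the ratio of rate constants at two temperatures is exp[(Ea/R)(1/T1 − 1/T2)]. A minimal sketch, taking activation energies of roughly 50–100 kJ/mol as an assumed typical range:

```python
import math

R = 8.314  # J/(mol*K)

def rate_ratio(Ea, T1, T2):
    """Ratio k(T2)/k(T1) predicted by the Arrhenius equation (A cancels out)."""
    return math.exp((Ea / R) * (1.0 / T1 - 1.0 / T2))

T1, T2 = 298.15, 308.15  # a 10 degree C rise near room temperature
for Ea_kJ in (50, 75, 100):  # assumed typical activation energies, kJ/mol
    ratio = rate_ratio(Ea_kJ * 1000.0, T1, T2)
    print(f"Ea = {Ea_kJ:3d} kJ/mol  ->  rate increases by a factor of {ratio:.2f}")
```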
The factor exp(−Ea / (RT)) denotes the fraction of molecules with energy greater than or equal to Ea.
Derivation
Van 't Hoff argued that the temperature T of a reaction and the standard equilibrium constant K exhibit the relation:

d(ln K)/dT = ΔU⁰ / (RT²)     (1)

where ΔU⁰ denotes the apposite standard internal energy change value.
Let kf and kb respectively denote the forward and backward reaction rates of the reaction of interest; then

K = kf / kb     (2)

an equation from which eq. (3) below naturally follows.
Substituting the expression for K from eq. (2) in eq. (1), we obtain

d(ln kf)/dT − d(ln kb)/dT = ΔU⁰ / (RT²)     (3)

The preceding equation can be broken down into the following two equations:

d(ln kf)/dT = constant + Ef / (RT²)     (4)

and

d(ln kb)/dT = constant + Eb / (RT²)     (5)

where Ef and Eb are the activation energies associated with the forward and backward reactions respectively, with Ef − Eb = ΔU⁰.
Experimental findings suggest that the constants in eq. (4) and eq. (5) can be treated as being equal to zero, so that

d(ln kf)/dT = Ef / (RT²)  and  d(ln kb)/dT = Eb / (RT²)

Integrating these equations and taking the exponential yields the results kf = Af exp(−Ef / (RT)) and kb = Ab exp(−Eb / (RT)), where each pre-exponential factor Af or Ab is mathematically the exponential of the constant of integration for the respective indefinite integral in question.
Arrhenius plot
Taking the natural logarithm of the Arrhenius equation yields:

ln k = ln A − Ea / (RT)

Rearranging yields:

ln k = (−Ea / R)(1/T) + ln A

This has the same form as an equation for a straight line:

y = mx + c

where x is the reciprocal of T.
So, when a reaction has a rate constant obeying the Arrhenius equation, a plot of ln k versus T^(−1) gives a straight line, whose slope and intercept can be used to determine Ea and A respectively. This procedure is common in experimental chemical kinetics. The activation energy is simply obtained by multiplying by (−R) the slope of the straight line drawn from a plot of ln k versus (1/T):

Ea = −R · d(ln k) / d(1/T)
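The fitting procedure can be carried out numerically: regress ln k against 1/T, then read Ea from the slope and A from the intercept. The sketch below uses synthetic rate constants generated from assumed values of A and Ea, so the fit simply recovers the assumed inputs.

```python
import numpy as np

R = 8.314  # J/(mol*K)

# Synthetic "measurements" generated from assumed A and Ea (illustration only)
A_true, Ea_true = 2.0e10, 60_000.0          # 1/s, J/mol
T = np.array([290.0, 300.0, 310.0, 320.0, 330.0])
k = A_true * np.exp(-Ea_true / (R * T))

# Arrhenius plot: ln k versus 1/T is a straight line with slope -Ea/R
slope, intercept = np.polyfit(1.0 / T, np.log(k), 1)

Ea_fit = -slope * R          # activation energy from the slope
A_fit = np.exp(intercept)    # pre-exponential factor from the intercept
print(f"Ea = {Ea_fit/1000:.1f} kJ/mol, A = {A_fit:.2e} 1/s")
```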
Modified Arrhenius equation
The modified Arrhenius equation makes explicit the temperature dependence of the pre-exponential factor. The modified equation is usually of the form

k = A T^n exp(−Ea / (RT))

The original Arrhenius expression above corresponds to n = 0. Fitted rate constants typically lie in the range −1 < n < 1. Theoretical analyses yield various predictions for n. It has been pointed out that "it is not feasible to establish, on the basis of temperature studies of the rate constant, whether the predicted T^(1/2) dependence of the pre-exponential factor is observed experimentally". However, if additional evidence is available, from theory and/or from experiment (such as density dependence), there is no obstacle to incisive tests of the Arrhenius law.
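To see how weak the extra T^n factor is over a typical experimental range, the sketch below compares k(T) = A T^n exp(−Ea/(RT)) for n = 0 and n = 1/2 between 300 K and 400 K; the parameter values are assumptions chosen only for illustration.

```python
import math

R = 8.314  # J/(mol*K)

def modified_arrhenius(A, n, Ea, T):
    """Modified Arrhenius form k = A * T**n * exp(-Ea/(R*T))."""
    return A * T**n * math.exp(-Ea / (R * T))

Ea = 80_000.0  # J/mol, assumed for illustration
for n in (0.0, 0.5):
    k300 = modified_arrhenius(1.0, n, Ea, 300.0)
    k400 = modified_arrhenius(1.0, n, Ea, 400.0)
    # The exponential factor changes k by roughly a factor of 3000 over this
    # range, while T**0.5 contributes only sqrt(400/300) ~ 1.15.
    print(f"n = {n}: k(400 K)/k(300 K) = {k400/k300:.1f}")
```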
Another common modification is the stretched exponential form

k = A exp[ −(Ea / (RT))^β ]
where β is a dimensionless number of order 1. This is typically regarded as a purely empirical correction or fudge factor to make the model fit the data, but can have theoretical meaning, for example showing the presence of a range of activation energies or in special cases like the Mott variable range hopping.
Theoretical interpretation of the equation
Arrhenius's concept of activation energy
Arrhenius argued that for reactants to transform into products, they must first acquire a minimum amount of energy, called the activation energy Ea. At an absolute temperature T, the fraction of molecules that have a kinetic energy greater than Ea can be calculated from statistical mechanics. The concept of activation energy explains the exponential nature of the relationship, and in one way or another, it is present in all kinetic theories.
The calculations for reaction rate constants involve an energy averaging over a Maxwell–Boltzmann distribution with Ea as lower bound and so are often of the type of incomplete gamma functions, which turn out to be proportional to exp(−Ea / (kB T)).
Collision theory
One approach is the collision theory of chemical reactions, developed by Max Trautz and William Lewis in the years 1916–18. In this theory, molecules are supposed to react if they collide with a relative kinetic energy along their line of centers that exceeds Ea. The number of binary collisions between two unlike molecules per second per unit volume is found to be
where NA is the Avogadro constant, dAB is the average diameter of A and B, T is the temperature which is multiplied by the Boltzmann constant kB to convert to energy, and μAB is the reduced mass.
The rate constant is then calculated as k = zAB exp(−Ea / (RT)), so that the collision theory predicts that the pre-exponential factor is equal to the collision number zAB. However, for many reactions this agrees poorly with experiment, so the rate constant is written instead as k = ρ zAB exp(−Ea / (RT)). Here ρ is an empirical steric factor, often much less than 1.00, which is interpreted as the fraction of sufficiently energetic collisions in which the two molecules have the correct mutual orientation to react.
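As a rough numerical illustration, kinetic theory gives the collision number as zAB = NA dAB² sqrt(8π kB T / μAB), i.e. NA times the collision cross-section times the mean relative speed. The sketch below evaluates it for two small gas-phase molecules; the collision diameter and masses are order-of-magnitude assumptions, not data for a specific reaction.

```python
import math

NA = 6.02214076e23   # Avogadro constant, 1/mol
kB = 1.380649e-23    # Boltzmann constant, J/K

def collision_number(d_AB, mu_AB, T):
    """Collision-theory frequency factor z_AB = NA * d_AB**2 * sqrt(8*pi*kB*T/mu_AB).

    d_AB  : average collision diameter, m
    mu_AB : reduced mass, kg
    T     : absolute temperature, K
    Returns z_AB in m^3 mol^-1 s^-1.
    """
    return NA * d_AB**2 * math.sqrt(8.0 * math.pi * kB * T / mu_AB)

# Assumed, order-of-magnitude inputs for two small molecules:
d_AB = 4.0e-10                 # ~0.4 nm collision diameter
m = 30.0 * 1.66054e-27         # ~30 u molecular mass, kg
mu_AB = m * m / (m + m)        # reduced mass of two equal masses

z = collision_number(d_AB, mu_AB, T=300.0)
print(f"z_AB ~ {z:.2e} m^3/(mol*s) = {z*1000:.2e} L/(mol*s)")
```

The result, on the order of 10^11 L/(mol·s), is the typical magnitude of collision-theory pre-exponential factors, which is why steric factors much less than one are needed to match many measured values of A.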
Transition state theory
The Eyring equation, another Arrhenius-like expression, appears in the "transition state theory" of chemical reactions, formulated by Eugene Wigner, Henry Eyring, Michael Polanyi and M. G. Evans in the 1930s. The Eyring equation can be written:

k = (kB T / h) exp(−ΔG‡ / (RT)) = (kB T / h) exp(ΔS‡ / R) exp(−ΔH‡ / (RT))

where ΔG‡ is the Gibbs energy of activation, ΔS‡ is the entropy of activation, ΔH‡ is the enthalpy of activation, kB is the Boltzmann constant, and h is the Planck constant.
At first sight this looks like an exponential multiplied by a factor that is linear in temperature. However, free energy is itself a temperature dependent quantity. The free energy of activation is the difference of an enthalpy term and an entropy term multiplied by the absolute temperature. The pre-exponential factor depends primarily on the entropy of activation. The overall expression again takes the form of an Arrhenius exponential (of enthalpy rather than energy) multiplied by a slowly varying function of T. The precise form of the temperature dependence depends upon the reaction, and can be calculated using formulas from statistical mechanics involving the partition functions of the reactants and of the activated complex.
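A small sketch of the Eyring form given above, evaluated over a modest temperature range; the activation enthalpy and entropy are assumptions chosen only to illustrate how slowly the kB·T/h prefactor varies compared with the exponential term.

```python
import math

kB = 1.380649e-23   # Boltzmann constant, J/K
h = 6.62607015e-34  # Planck constant, J*s
R = 8.314           # gas constant, J/(mol*K)

def eyring(dH, dS, T):
    """Eyring rate constant k = (kB*T/h) * exp(dS/R) * exp(-dH/(R*T)).

    dH : enthalpy of activation, J/mol
    dS : entropy of activation, J/(mol*K)
    T  : absolute temperature, K
    """
    return (kB * T / h) * math.exp(dS / R) * math.exp(-dH / (R * T))

# Assumed activation parameters (illustration only):
dH = 70_000.0   # J/mol
dS = -50.0      # J/(mol*K)

for T in (280.0, 300.0, 320.0):
    # kB*T/h varies only ~14% over this range; the exponential term dominates.
    print(f"T = {T} K: kB*T/h = {kB*T/h:.2e} 1/s, k = {eyring(dH, dS, T):.2e} 1/s")
```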
Limitations of the idea of Arrhenius activation energy
Both the Arrhenius activation energy and the rate constant k are experimentally determined, and represent macroscopic reaction-specific parameters that are not simply related to threshold energies and the success of individual collisions at the molecular level. Consider a particular collision (an elementary reaction) between molecules A and B. The collision angle, the relative translational energy, the internal (particularly vibrational) energy will all determine the chance that the collision will produce a product molecule AB. Macroscopic measurements of E and k are the result of many individual collisions with differing collision parameters. To probe reaction rates at molecular level, experiments are conducted under near-collisional conditions and this subject is often called molecular reaction dynamics.
Another situation where the explanation of the Arrhenius equation parameters falls short is in heterogeneous catalysis, especially for reactions that show Langmuir-Hinshelwood kinetics. Clearly, molecules on surfaces do not "collide" directly, and a simple molecular cross-section does not apply here. Instead, the pre-exponential factor reflects the travel across the surface towards the active site.
There are deviations from the Arrhenius law during the glass transition in all classes of glass-forming matter. The Arrhenius law predicts that the motion of the structural units (atoms, molecules, ions, etc.) should slow down at a slower rate through the glass transition than is experimentally observed. In other words, the structural units slow down at a faster rate than is predicted by the Arrhenius law. This observation is made reasonable assuming that the units must overcome an energy barrier by means of a thermal activation energy. The thermal energy must be high enough to allow for translational motion of the units which leads to viscous flow of the material.
See also
Accelerated aging
Eyring equation
Q10 (temperature coefficient)
Van 't Hoff equation
Clausius–Clapeyron relation
Gibbs–Helmholtz equation
Cherry blossom front – predicted using the Arrhenius equation
References
Bibliography
External links
Carbon Dioxide solubility in Polyethylene – Using Arrhenius equation for calculating species solubility in polymers
Chemical kinetics
Eponymous equations of physics
Statistical mechanics
Orgone

Orgone is a pseudoscientific concept variously described as an esoteric energy or hypothetical universal life force. Originally proposed in the 1930s by Wilhelm Reich, and developed by Reich's student Charles Kelley after Reich's death in 1957, orgone was conceived as the anti-entropic principle of the universe, a creative substratum in all of nature comparable to Mesmer's animal magnetism (1779), to the Odic force (1845) of Carl Reichenbach and to Henri Bergson's élan vital (1907). Orgone was seen as a massless, omnipresent substance, similar to luminiferous aether, but more closely associated with living energy than with inert matter. It could allegedly coalesce to create organization on all scales, from the smallest microscopic units—called "bions" in orgone theory—to macroscopic structures like organisms, clouds, or even galaxies.
Reich argued that deficits or constrictions in bodily orgone were at the root of many diseases, most prominently cancer, much as deficits or constrictions in the libido could produce neuroses in Freudian theory. Reich founded the Orgone Institute ca. 1942 to pursue research into orgone energy after he immigrated to the US in 1939; he used it to publish literature and distribute material relating to the topic for over a decade. Reich designed special "orgone energy accumulators"—devices ostensibly collecting orgone energy from the environment—to enable the study of orgone energy and to be applied medically to improve general health and vitality. Ultimately, the U.S. Food and Drug Administration (FDA) obtained a federal injunction barring the interstate distribution of orgone-related materials because Reich and his associates were making false and misleading claims. A judge later ruled to jail Reich and ordered the banning and destruction of all orgone-related materials at the institute after an associate of Reich violated the injunction. Reich denied the assertion that orgone accumulators could improve sexual health by providing orgastic potency.
The National Center for Complementary and Integrative Health lists orgone as a type of "putative energy". After Reich's death, research into the concept of orgone passed to some of his students, such as Kelley, and later to a new generation of scientists in Germany keen to discover an empirical basis for the orgone hypothesis (the first positive results of which were provided in 1989 by Stefan Muschenich).
There is no empirical support for the concept of orgone in medicine or the physical sciences, and research into the concept concluded with the end of the institute. Founded in 1982, the Institute for Orgonomic Science in New York is dedicated to the continuation of Reich's work; it both publishes a digital journal on it and collects corresponding works.
History
The concept of orgone belongs to Reich's later work after he immigrated to the US. Reich's early work was based on the Freudian concept of the libido, though influenced by sociological understandings with which Freud disagreed but which were to some degree followed by other prominent theorists such as Herbert Marcuse and Carl Jung. While Freud had focused on a solipsistic conception of mind in which unconscious and inherently selfish primal drives (primarily the sexual drive, or libido) were suppressed or sublimated by internal representations (cathexes) of parental figures (the superego), for Reich libido was a life-affirming force repressed by society directly. For example, in one of his better-known analyses, Reich observes a workers' political rally, noting that participants were careful not to violate signs that prohibited walking on the grass; Reich saw this as the state co-opting unconscious responses to parental authority as a means of controlling behavior. He was expelled from the Institute of Psycho-analysis because of these disagreements over the nature of the libido and his increasingly political stance. He was forced to leave Germany soon after Hitler came to power.
Reich took an increasingly bioenergetic view of libido, perhaps influenced by his tutor Paul Kammerer and another biologist, Otto Heinrich Warburg. In the early 20th century, when molecular biology was in its infancy, developmental biology in particular still presented mysteries that made the idea of a specific life energy respectable, as was articulated by theorists such as Hans Driesch. As a psycho-analyst, Reich aligned such theories with the Freudian libido, while as a materialist, he believed such a life force must be susceptible to physical experiments.
He wrote in his best-known book, The Function of the Orgasm: "Between 1919 and 1921, I became familiar with Driesch's 'Philosophie des Organischen' and his 'Ordnungslehre'… Driesch's contention seemed incontestable to me. He argued that, in the sphere of the life function, the whole could be developed from a part, whereas a machine could not be made from a screw… However, I couldn't quite accept the transcendentalism of the life principle. Seventeen years later I was able to resolve the contradiction on the basis of a formula pertaining to the function of energy. Driesch's theory was always present in my mind when I thought about vitalism. The vague feeling I had about the irrational nature of his assumption turned out to be justified in the end. He landed among the spiritualists."
The concept of orgone resulted from this work in the psycho-physiology of libido. After Reich migrated to the US, he began to speculate about biological development and evolution and then branched into much broader speculations about the nature of the universe. This led him to the conception of "bions," self-luminescent sub-cellular vesicles that he believed were observable in decaying materials and presumably present universally. Initially, he thought of bions as electrodynamic or radioactive entities, as had the Russian biologist Alexander Gurwitsch, but later concluded that he had discovered an entirely unknown but measurable force, which he then named "orgone", a pseudo-Greek formation probably from org- "impulse, excitement" as in org-asm, plus -one as in ozone (the Greek neutral participle, virtually , gen.: ).
For Reich, neurosis became a physical manifestation he called "body armor"—deeply seated tensions and inhibitions in the physical body that were not separated from any mental effects that might be observed. He developed a therapeutic approach he called vegetotherapy that was aimed at opening and releasing this body armor so that free instinctive reflexes—which he considered a token of psychic well-being—could take over.
Evaluation
Orgone was closely associated with sexuality: Reich, following Freud, saw nascent sexuality as the primary energetic force of life. The term itself was chosen to share a root with the word orgasm, which both Reich and Freud took as a fundamental expression of psychological health. This focus on sexuality, while acceptable in the clinical perspective of Viennese psychoanalytic circles, scandalized the conservative American public even as it appealed to countercultural figures like William S. Burroughs and Jack Kerouac.
In some cases, Reich's experimental techniques do not appear to have been very careful, or to have taken precautions to remove experimental bias. Reich was concerned with obtaining experimental verification from other scientists. Albert Einstein agreed to participate, but thought Reich's research lacked scientific detachment and experimental rigor, and concluded that the effect was simply due to the temperature gradient inside the room. "Through these experiments I regard the matter as completely solved," he wrote to Reich on 7 February 1941. Upon further correspondence from Reich, Einstein replied that he could not devote any additional time to the matter and asked that his name not be misused for advertising purposes.
Orgone and its related concepts were quickly denounced in the post-World War II American press. Reich and his students were seen as a "cult of sex and anarchy," at least in part because orgone was linked with the title of his book The Function of the Orgasm, and this led to numerous investigations as a communist and denunciation under a wide variety of other pretexts. The psychoanalytical community of the time saw his approach to healing diseases as quackery of the worst sort. In 1954, the U.S. Food and Drug Administration obtained an injunction to prevent Reich from making medical claims relating to orgone, which prevented him from shipping "orgone devices" across state lines, among other stipulations. Reich resisted the order to cease interstate distribution of orgone and was jailed, and the FDA destroyed Reich's books, research materials, and devices at his institute relating to orgone.
Some psychotherapists and psychologists practicing various kinds of Body Psychotherapy and Somatic Psychology have continued to use Reich's proposed emotional-release methods and character-analysis ideas.
Film influence
Dušan Makavejev opened his 1971 satirical film W.R.: Mysteries of the Organism with documentary coverage of Reich and his development of orgone accumulators, combining this with other imagery and a fictional sub-plot in a collage mocking sexual and political authorities. Scenes include one of only "ten or fifteen orgone boxes left in the country" at that time.
See also
Alexander Gurwitsch
Animal magnetism of Franz Anton Mesmer
Energy (spiritual)
Energy medicine
Fringe science
Integratron
List of ineffective cancer treatments
Odic force of Carl Reichenbach
Rupert Sheldrake
Scientific skepticism
Thetan
Vitalism
Vril
References
External links
Quackwatch article
Body psychotherapy
Energy (esotericism)
Orgonomy
Pseudoscience
Vitalism
Sexology
Cursorial

A cursorial organism is one that is adapted specifically to run. An animal can be considered cursorial if it has the ability to run fast (e.g. cheetah) or if it can keep a constant speed for a long distance (high endurance). "Cursorial" is often used to categorize a certain locomotor mode, which is helpful for biologists who examine behaviors of different animals and the way they move in their environment. Cursorial adaptations can be identified by morphological characteristics (e.g. loss of lateral digits as in ungulate species), physiological characteristics, maximum speed, and how often running is used in life. There is much debate over how to define a cursorial animal specifically. The most accepted definitions include that a cursorial organism could be considered adapted to long-distance running at high speeds or has the ability to accelerate quickly over short distances. Among vertebrates, animals under 1 kg of mass are rarely considered cursorial, and cursorial behaviors and morphology are thought to only occur at relatively large body masses in mammals. There are a few mammals that have been termed "micro-cursors" that are less than 1 kg in mass and have the ability to run faster than other small animals of similar sizes.
Some species of spiders are also considered cursorial, as they walk much of the day, looking for prey.
Cursorial adaptations
Terrestrial vertebrates
Adaptations for cursorial locomotion in terrestrial vertebrates include:
Increased stride length by:
Increased limb bone length
Adoption of digitigrade or unguligrade stance
Loss of clavicle in mammals, which allows the scapula to move forwards and backwards with the limb and thereby increase stride length.
Increased spinal flexion during galloping
Decreased distal limb weight (in order to minimize moment of inertia):
Increase in mass of proximal muscles with decrease in mass of distal muscles
Increase in length of distal limb bones (the manus and pes) rather than proximal ones (the brachium or thigh).
Longer tendons in distal limb
Decreased ability to move limbs outside of the sagittal plane, which increases stability.
Reduction or loss of digits.
Loss of ability to pronate and supinate the forearm (more specialized cursors)
Hooves, hoof-like claws, or blunt claws for traction (as opposed to sharp claws for prey-capture or climbing)
Typically, cursors will have long, slender limbs mostly due to the elongation of distal limb proportions (metatarsals/metacarpals) and loss or reduction of lateral digits with a digitigrade or unguligrade foot posture. These characters are understood to decrease weight in the distal portions of the limb which allows the individual to swing the limb faster (minimizing the moment of inertia). This gives the individual the ability to move their legs fast and is assumed to contribute to the ability to produce higher speeds. A larger concentration of muscles at the pectoral and pelvic girdles, with less muscle and more tendons as you move distally down the limb, is the typical configuration for quadrupedal cursors (e.g. cheetah, greyhound, horse). All ungulates are considered cursorial based on these criteria, but in fact there are some ungulates that do not habitually run. Elongation of the limbs does increase stride length, which has been suggested to be more correlated with larger home ranges and foraging patterns in ungulates. Stride length can also be lengthened by the mobility of the shoulder girdle. Some cursorial mammals have a reduced or absent clavicle, which allows the scapula to slide forward across the ribcage.
Cursorial animals tend to have increased elastic storage in their epaxial muscles, which allows them to store elastic energy while the spine flexes and extends in the dorso-ventral plane. Furthermore, limbs in cursorially adapted mammals will tend to stay in the dorso-ventral (or sagittal) plane to increase stability when moving forward at high speeds, but this hinders the amount of lateral flexibility that limbs can have. Some felids are special in that they can pronate and supinate their forearms and run fast, but this is not the case in most other quadrupedal cursors. Ungulates and canids have restricted motion in their limbs and therefore could be considered more specialized for cursorial locomotion. Several rodents are also considered cursorial (e.g. the mara, capybara, and agouti) and have similar characters to other cursorial mammals such as reduced digits, more muscles in the proximal portion than distal portion of the limb, and straight, sagittally oriented limbs. Some rodents are bipedal and can hop quickly to move around, which is called ricochetal or saltatorial instead of cursorial.
There are also bipedal cursors. Humans are bipedal and considered to be built for endurance running. Several species of birds are also cursorial, mainly those that have attained larger body sizes (ostrich, greater rhea, emu). Most of the stride length in birds comes from movements below the knee joint, because the femur is situated horizontally and the knee joint sits more towards the front of the body, placing the feet below the center of mass. Different birds will increase their speed in one of two ways: by increasing the frequency of footfalls or increasing the stride length. Several studies have also found that many theropod dinosaurs (specifically coelurosaurs) were also cursorial to an extent.
Spiders
Spiders maintain balance when walking, so that legs 1 and 3 on one side and 2 and 4 on the other side are moving, while the other four legs are on the surface. To run faster, spiders increase their stride frequency.
Cursorial taxa
Several notable taxa are cursorial, including some mammals (such as wolverines and wolves, ungulates, agoutis, and kangaroos), as well as some dinosaurs (such as theropods, including birds like the ostrich). Several extinct archosaurs were also cursorial, including the crocodylomorphs Pristichampsus, Hesperosuchus, and several genera within Notosuchia.
Jumping spiders and other non-web based spiders generally walk throughout the day, so that they maximize their chances of a catch, and web-based spiders run away if threatened.
Many Blattodea have very sensitive cursorial legs, that can be so specialized they run away at the puff of wind, such as the American cockroach.
In evolutionary theory
The presumed cursorial nature of theropod dinosaurs is an important part of the ground-up theory of the evolution of bird flight (also called the Cursorial theory), a theory that contrasts with the idea that birds' pre-flight ancestors were arboreal species and puts forth that the flight apparatus may have been adapted to improve hunting by lengthening leaps and improving maneuverability.
See also
Arboreal
Fossorial
Cursorial hunting
References
Evolutionarily significant biological phenomena
Animal locomotion
Fine-tuned universe

The characterization of the universe as finely tuned is intended to explain why the known constants of nature, such as the electron charge, the gravitational constant, and the like, have their measured values rather than some other arbitrary values. According to the "fine-tuned universe" hypothesis, if these constants' values were too different from what they are, "life as we know it" could not exist. In practice, this hypothesis is formulated in terms of dimensionless physical constants.
History
In 1913, the chemist Lawrence Joseph Henderson wrote The Fitness of the Environment, one of the first books to explore fine tuning in the universe. Henderson discusses the importance of water and the environment to living things, pointing out that life as it exists on Earth depends entirely on Earth's very specific environmental conditions, especially the prevalence and properties of water.
In 1961, physicist Robert H. Dicke claimed that certain forces in physics, such as gravity and electromagnetism, must be perfectly fine-tuned for life to exist in the universe. Fred Hoyle also argued for a fine-tuned universe in his 1983 book The Intelligent Universe. Hoyle wrote: "The list of anthropic properties, apparent accidents of a non-biological nature without which carbon-based and hence human life could not exist, is large and impressive".
Belief in the fine-tuned universe led to the expectation that the Large Hadron Collider would produce evidence of physics beyond the Standard Model, such as supersymmetry, but by 2012 it had not produced evidence for supersymmetry at the energy scales it was able to probe.
Motivation
Physicist Paul Davies said: "There is now broad agreement among physicists and cosmologists that the Universe is in several respects 'fine-tuned' for life. But the conclusion is not so much that the Universe is fine-tuned for life; rather it is fine-tuned for the building blocks and environments that life requires". He also said that anthropic' reasoning fails to distinguish between minimally biophilic universes, in which life is permitted, but only marginally possible, and optimally biophilic universes, in which life flourishes because biogenesis occurs frequently". Among scientists who find the evidence persuasive, a variety of natural explanations have been proposed, such as the existence of multiple universes introducing a survivorship bias under the anthropic principle.
The premise of the fine-tuned universe assertion is that a small change in several of the physical constants would make the universe radically different. Stephen Hawking observed: "The laws of science, as we know them at present, contain many fundamental numbers, like the size of the electric charge of the electron and the ratio of the masses of the proton and the electron. ... The remarkable fact is that the values of these numbers seem to have been very finely adjusted to make possible the development of life".
For example, if the strong nuclear force were 2% stronger than it is (i.e. if the coupling constant representing its strength were 2% larger) while the other constants were left unchanged, diprotons would be stable; according to Davies, hydrogen would fuse into them instead of deuterium and helium. This would drastically alter the physics of stars, and presumably preclude the existence of life similar to what we observe on Earth. The diproton's existence would short-circuit the slow fusion of hydrogen into deuterium. Hydrogen would fuse so easily that it is likely that all the universe's hydrogen would be consumed in the first few minutes after the Big Bang. This "diproton argument" is disputed by other physicists, who calculate that as long as the increase in strength is less than 50%, stellar fusion could occur despite the existence of stable diprotons.
The precise formulation of the idea is made difficult by the fact that it is not yet known how many independent physical constants there are. The standard model of particle physics has 25 freely adjustable parameters and general relativity has one more, the cosmological constant, which is known to be nonzero but profoundly small in value. Because physicists have not developed an empirically successful theory of quantum gravity, there is no known way to combine quantum mechanics, on which the standard model depends, and general relativity.
Without knowledge of this more complete theory suspected to underlie the standard model, it is impossible to definitively count the number of truly independent physical constants. In some candidate theories, the number of independent physical constants may be as small as one. For example, the cosmological constant may be a fundamental constant but attempts have also been made to calculate it from other constants, and according to the author of one such calculation, "the small value of the cosmological constant is telling us that a remarkably precise and totally unexpected relation exists among all the parameters of the Standard Model of particle physics, the bare cosmological constant and unknown physics".
Examples
Martin Rees formulates the fine-tuning of the universe in terms of the following six dimensionless physical constants.
N, the ratio of the electromagnetic force to the gravitational force between a pair of protons, is approximately 10^36. According to Rees, if it were significantly smaller, only a small and short-lived universe could exist. If it were large enough, protons would repel one another so violently that larger atoms would never be generated.
Epsilon (ε), a measure of the nuclear efficiency of fusion from hydrogen to helium, is 0.007: when four nucleons fuse into helium, 0.007 (0.7%) of their mass is converted to energy. The value of ε is in part determined by the strength of the strong nuclear force. If ε were 0.006, a proton could not bond to a neutron, and only hydrogen could exist, and complex chemistry would be impossible. According to Rees, if it were above 0.008, no hydrogen would exist, as all the hydrogen would have been fused shortly after the Big Bang. Other physicists disagree, calculating that substantial hydrogen remains as long as the strong force coupling constant increases by less than about 50%.
Omega (Ω), commonly known as the density parameter, is the relative importance of gravity and expansion energy in the universe. It is the ratio of the mass density of the universe to the "critical density" and is approximately 1. If gravity were too strong compared with dark energy and the initial cosmic expansion rate, the universe would have collapsed before life could have evolved. If gravity were too weak, no stars would have formed.
Lambda (Λ), commonly known as the cosmological constant, describes the ratio of the density of dark energy to the critical energy density of the universe, given certain reasonable assumptions such as that dark energy density is a constant. In terms of Planck units, and as a natural dimensionless value, Λ is on the order of 10^−122. This is so small that it has no significant effect on cosmic structures that are smaller than a billion light-years across. A slightly larger value of the cosmological constant would have caused space to expand rapidly enough that stars and other astronomical structures would not be able to form.
Q, the ratio of the gravitational energy required to pull a large galaxy apart to the energy equivalent of its mass, is around 10^−5. If it is too small, no stars can form. If it is too large, no stars can survive because the universe is too violent, according to Rees.
D, the number of spatial dimensions in spacetime, is 3. Rees claims that life could not exist if there were 2 or 4 spatial dimensions. Rees argues this does not preclude the existence of ten-dimensional strings.
Max Tegmark argued that if there is more than one time dimension, then physical systems' behavior could not be predicted reliably from knowledge of the relevant partial differential equations. In such a universe, intelligent life capable of manipulating technology could not emerge. Moreover, protons and electrons would be unstable and could decay into particles having greater mass than themselves. This is not a problem if the particles have a sufficiently low temperature.
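As a check on the first of Rees's numbers, N can be computed from tabulated constants: the electrostatic force between two protons is e²/(4πε₀r²) and the gravitational force is G·m_p²/r², so their ratio is independent of the separation r. A minimal sketch using standard constant values:

```python
# Ratio of electromagnetic to gravitational force between two protons.
e    = 1.602176634e-19    # elementary charge, C
eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
G    = 6.67430e-11        # gravitational constant, m^3 kg^-1 s^-2
m_p  = 1.67262192e-27     # proton mass, kg
pi   = 3.141592653589793

coulomb = e**2 / (4.0 * pi * eps0)   # numerator of Coulomb's law (times r^2)
gravity = G * m_p**2                 # numerator of Newton's law (times r^2)

N = coulomb / gravity
print(f"N = F_em / F_grav = {N:.2e}")   # roughly 1.2e36, i.e. of order 10^36
```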
Carbon and oxygen
An older example is the Hoyle state, the third-lowest energy state of the carbon-12 nucleus, with an energy of 7.656 MeV above the ground level. According to one calculation, if the state's energy level were lower than 7.3 or greater than 7.9 MeV, insufficient carbon would exist to support life. To explain the universe's abundance of carbon, the Hoyle state must be further tuned to a value between 7.596 and 7.716 MeV. A similar calculation, focusing on the underlying fundamental constants that give rise to various energy levels, concludes that the strong force must be tuned to a precision of at least 0.5%, and the electromagnetic force to a precision of at least 4%, to prevent either carbon production or oxygen production from dropping significantly.
Explanations
Some explanations of fine-tuning are naturalistic. First, the fine-tuning might be an illusion: more fundamental physics may explain the apparent fine-tuning in physical parameters in our current understanding by constraining the values those parameters are likely to take. As Lawrence Krauss put it, "certain quantities have seemed inexplicable and fine-tuned, and once we understand them, they don't seem to be so fine-tuned. We have to have some historical perspective". Some argue it is possible that a final fundamental theory of everything will explain the underlying causes of the apparent fine-tuning in every parameter.
Still, as modern cosmology developed, various hypotheses not presuming hidden order have been proposed. One is a multiverse, where fundamental physical constants are postulated to have different values outside of our own universe. On this hypothesis, separate parts of reality would have wildly different characteristics. In such scenarios, the appearance of fine-tuning is explained as a consequence of the weak anthropic principle and selection bias, specifically survivorship bias. Only those universes with fundamental constants hospitable to life, such as on Earth, could contain life forms capable of observing the universe and contemplating the question of fine-tuning in the first place. Zhi-Wei Wang and Samuel L. Braunstein argue that the apparent fine-tuning of fundamental constants could be due to our lack of understanding of these constants.
Multiverse
If the universe is just one of many and possibly infinite universes, each with different physical phenomena and constants, it is unsurprising that there is a universe hospitable to intelligent life. Some versions of the multiverse hypothesis therefore provide a simple explanation for any fine-tuning, while the analysis of Wang and Braunstein challenges the view that our universe is unique in its ability to support life.
The multiverse idea has led to considerable research into the anthropic principle and has been of particular interest to particle physicists because theories of everything do apparently generate large numbers of universes in which the physical constants vary widely. Although there is no evidence for the existence of a multiverse, some versions of the theory make predictions of which some researchers studying M-theory and gravity leaks hope to see some evidence soon. According to Laura Mersini-Houghton, the WMAP cold spot could provide testable empirical evidence of a parallel universe. Variants of this approach include Lee Smolin's notion of cosmological natural selection, the ekpyrotic universe, and the bubble universe theory.
It has been suggested that invoking the multiverse to explain fine-tuning is a form of the inverse gambler's fallacy.
Top-down cosmology
Stephen Hawking and Thomas Hertog proposed that the universe's initial conditions consisted of a superposition of many possible initial conditions, only a small fraction of which contributed to the conditions seen today. According to their theory, the universe's "fine-tuned" physical constants are inevitable, because the universe "selects" only those histories that led to the present conditions. In this way, top-down cosmology provides an anthropic explanation for why this universe allows matter and life without invoking the multiverse.
Carbon chauvinism
Some forms of fine-tuning arguments about the formation of life assume that only carbon-based life forms are possible, an assumption sometimes called carbon chauvinism. Conceptually, alternative biochemistry or other forms of life are possible.
Alien design
One hypothesis is that extra-universal aliens designed the universe. Some believe this would solve the problem of how a designer or design team capable of fine-tuning the universe could come to exist. Cosmologist Alan Guth believes humans will in time be able to generate new universes. By implication, previous intelligent entities may have generated our universe. This idea leads to the possibility that the extra-universal designer/designers are themselves the product of an evolutionary process in their own universe, which must therefore itself be able to sustain life. It also raises the question of where that universe came from, leading to an infinite regress. John Gribbin's Designer Universe theory suggests that an advanced civilization could have deliberately made the universe in another part of the multiverse, and that this civilization may have caused the Big Bang.
Simulation hypothesis
The simulation hypothesis holds that the universe is fine-tuned simply because the more technologically advanced simulation operator(s) programmed it that way.
No improbability
Graham Priest, Mark Colyvan, Jay L. Garfield, and others have argued against the presupposition that "the laws of physics or the boundary conditions of the universe could have been other than they are".
Religious apologetics
Some scientists, theologians, and philosophers, as well as certain religious groups, argue that providence or creation are responsible for fine-tuning. Christian philosopher Alvin Plantinga argues that random chance, applied to a single and sole universe, only raises the question as to why this universe could be so "lucky" as to have precise conditions that support life at least at some place (the Earth) and time (within millions of years of the present).
William Lane Craig, a philosopher and Christian apologist, cites this fine-tuning of the universe as evidence for the existence of God or some form of intelligence capable of manipulating (or designing) the basic physics that governs the universe. Philosopher and theologian Richard Swinburne reaches the design conclusion using Bayesian probability. Scientist and theologian Alister McGrath observed that the fine-tuning of carbon is even responsible for nature's ability to tune itself to any degree.
The entire biological evolutionary process depends upon the unusual chemistry of carbon, which allows it to bond to itself, as well as other elements, creating highly complex molecules that are stable over prevailing terrestrial temperatures, and are capable of conveying genetic information (especially DNA). [...] Whereas it might be argued that nature creates its own fine-tuning, this can only be done if the primordial constituents of the universe are such that an evolutionary process can be initiated. The unique chemistry of carbon is the ultimate foundation of the capacity of nature to tune itself.
Theoretical physicist and Anglican priest John Polkinghorne stated: "Anthropic fine tuning is too remarkable to be dismissed as just a happy accident". Theologian and philosopher Andrew Loke argues that there are only five possible categories of hypotheses concerning fine-tuning and order: (i) chance, (ii) regularity, (iii) combinations of regularity and chance, (iv) uncaused, and (v) design, and that only design gives an exclusively logical explanation of order in the universe. He argues that the Kalam Cosmological Argument strengthens the teleological argument by answering the question "Who designed the Designer?". Creationist Hugh Ross advances a number of fine-tuning hypotheses. One is the existence of what Ross calls "vital poisons", which are elemental nutrients that are harmful in large quantities but essential for animal life in smaller quantities.
Robin Collins argues that the universe is fine-tuned for scientific discoverability, and that this fine-tuning cannot be explained by the multiverse hypothesis. According to Collins, the universe's laws, fundamental parameters, and initial conditions must be just right for the universe to be as discoverable as ours. According to Collins, examples of fine-tuning for discoverability include:
The fine-structure constant is fine-tuned for energy usage. If it were stronger, there would be no practical way to harness energy. If it were weaker, fire would burn through wood too quickly and energy usage would be impractical.
The baryon-to-photon ratio allowed for the discovery of the big bang via the cosmic microwave background.
Many things in particle physics are within a narrow range required for discoverability, such as the mass of the Higgs boson.
See also
Fine-tuning (disambiguation)
God of the gaps
References
Further reading
John D. Barrow (2003). The Constants of Nature, Pantheon Books,
Bernard Carr, ed. (2007). Universe or Multiverse? Cambridge University Press.
Mark Colyvan, Jay L. Garfield, Graham Priest (2005). "Problems with the Argument from Fine Tuning". Synthese 145: 325–38.
Paul Davies (1982). The Accidental Universe, Cambridge University Press,
Paul Davies (2007). Cosmic Jackpot: Why Our Universe Is Just Right for Life, Houghton Mifflin Harcourt, . Reprinted as: The Goldilocks Enigma: Why Is the Universe Just Right for Life?, 2008, Mariner Books, .
Geraint F. Lewis and Luke A. Barnes (2016). A Fortunate Universe: Life in a finely tuned cosmos, Cambridge University Press.
Alister McGrath (2009). A Fine-Tuned Universe: The Quest for God in Science and Theology, Westminster John Knox Press, .
Timothy J. McGrew, Lydia McGrew, Eric Vestrup (2001). "Probabilities and the Fine-Tuning Argument: A Sceptical View". Mind 110: 1027–37.
Simon Conway Morris (2003). Life's Solution: Inevitable Humans in a Lonely Universe. Cambridge Univ. Press.
Martin Rees (1999). Just Six Numbers, HarperCollins Publishers, .
Victor J. Stenger (2011). The Fallacy of Fine-Tuning: Why the Universe Is Not Designed for Us. Prometheus Books. .
Peter Ward and Donald Brownlee (2000). Rare Earth: Why Complex Life is Uncommon in the Universe. Springer Verlag.
Jeffrey Koperski (2015). The Physics of Theism: God, Physics, and the Philosophy of Science, John Wiley & Sons
External links
Defense of fine-tuning
Anil Ananthaswamy: Is the Universe Fine-tuned for Life?
Francis Collins, Why I'm a man of science-and faith. National Geographic article.
Custom Universe, Documentary of fine-tuning with scientific experts.
Hugh Ross: Evidence for the Fine Tuning of the Universe
Interview with Charles Townes discussing science and religion.
Criticism of fine tuning
Bibliography of online Links to criticisms of the Fine-Tuning Argument. Secular Web.
Victor Stenger:
"A Case Against the Fine-Tuning of the Cosmos"
"Does the Cosmos Show Evidence of Purpose?"
"Is the Universe fine-tuned for us?"
Elliott Sober, "The Design Argument." An earlier version appeared in the Blackwell Companion to the Philosophy of Religion (2004).
Anthropic principle
Astronomical hypotheses
Fermi paradox
Intelligent design
Philosophical arguments
Physical cosmology
Entropy and life

Research concerning the relationship between the thermodynamic quantity entropy and both the origin and evolution of life began around the turn of the 20th century. In 1910 American historian Henry Adams printed and distributed to university libraries and history professors the small volume A Letter to American Teachers of History proposing a theory of history based on the second law of thermodynamics and on the principle of entropy.
The 1944 book What is Life? by Nobel-laureate physicist Erwin Schrödinger stimulated further research in the field. In his book, Schrödinger originally stated that life feeds on negative entropy, or negentropy as it is sometimes called, but in a later edition corrected himself in response to complaints and stated that the true source is free energy. More recent work has restricted the discussion to Gibbs free energy because biological processes on Earth normally occur at a constant temperature and pressure, such as in the atmosphere or at the bottom of the ocean, but not across both over short periods of time for individual organisms. The quantitative application of entropy balances and Gibbs energy considerations to individual cells is one of the underlying principles of growth and metabolism.
Ideas about the relationship between entropy and living organisms have inspired hypotheses and speculations in many contexts, including psychology, information theory, the origin of life, and the possibility of extraterrestrial life.
Early views
In 1863 Rudolf Clausius published his noted memoir On the Concentration of Rays of Heat and Light, and on the Limits of Its Action, wherein he outlined a preliminary relationship, based on his own work and that of William Thomson (Lord Kelvin), between living processes and his newly developed concept of entropy. Building on this, one of the first to speculate on a possible thermodynamic perspective of organic evolution was the Austrian physicist Ludwig Boltzmann. In 1875, building on the works of Clausius and Kelvin, Boltzmann reasoned:
In 1876 American civil engineer Richard Sears McCulloh, in his Treatise on the Mechanical Theory of Heat and its Application to the Steam-Engine, which was an early thermodynamics textbook, states, after speaking about the laws of the physical world, that "there are none that are established on a firmer basis than the two general propositions of Joule and Carnot; which constitute the fundamental laws of our subject." McCulloh then goes on to show that these two laws may be combined in a single expression as follows:

dS = δQ / T

where
S is the entropy,
δQ is a differential amount of heat passed into a thermodynamic system, and
T is the absolute temperature.
McCulloh then declares that the applications of these two laws, i.e. what are currently known as the first law of thermodynamics and the second law of thermodynamics, are innumerable:
McCulloh gives a few of what he calls the "more interesting examples" of the application of these laws in extent and utility. His first example is physiology, wherein he states that "the body of an animal, not less than a steamer, or a locomotive, is truly a heat engine, and the consumption of food in the one is precisely analogous to the burning of fuel in the other; in both, the chemical process is the same: that called combustion." He then incorporates a discussion of Antoine Lavoisier's theory of respiration with cycles of digestion, excretion, and perspiration, but then contradicts Lavoisier with recent findings, such as internal heat generated by friction, according to the new theory of heat, which, according to McCulloh, states that the "heat of the body generally and uniformly is diffused instead of being concentrated in the chest". McCulloh then gives an example of the second law, where he states that friction, especially in the smaller blood vessels, must develop heat. Undoubtedly, some fraction of the heat generated by animals is produced in this way. He then asks: "but whence the expenditure of energy causing that friction, and which must be itself accounted for?"
To answer this question he turns to the mechanical theory of heat and goes on to loosely outline how the heart is what he calls a "force-pump", which receives blood and sends it to every part of the body, as discovered by William Harvey, and which "acts like the piston of an engine and is dependent upon and consequently due to the cycle of nutrition and excretion which sustains physical or organic life". It is likely that McCulloh modeled parts of this argument on that of the famous Carnot cycle. In conclusion, he summarizes his first and second law argument as such:
Negative entropy
In the 1944 book What is Life?, Austrian physicist Erwin Schrödinger, who in 1933 had won the Nobel Prize in Physics, theorized that life – contrary to the general tendency dictated by the second law of thermodynamics, which states that the entropy of an isolated system tends to increase – decreases or keeps constant its entropy by feeding on negative entropy. The problem of organization in living systems increasing despite the second law is known as the Schrödinger paradox. In his note to Chapter 6 of What is Life?, however, Schrödinger remarks on his usage of the term negative entropy:
This, Schrödinger argues, is what differentiates life from other forms of the organization of matter. In this direction, although life's dynamics may be argued to go against the tendency of the second law, life does not in any way conflict with or invalidate this law, because the principle that entropy can only increase or remain constant applies only to a closed system which is adiabatically isolated, meaning no heat can enter or leave, and the physical and chemical processes which make life possible do not occur in adiabatic isolation, i.e. living systems are open systems. Whenever a system can exchange either heat or matter with its environment, an entropy decrease of that system is entirely compatible with the second law.
Schrödinger asked the question: "How does the living organism avoid decay?" The obvious answer is: "By eating, drinking, breathing and (in the case of plants) assimilating." While energy from nutrients is necessary to sustain an organism's order, Schrödinger also presciently postulated the existence of other molecules equally necessary for creating the order observed in living organisms: "An organism's astonishing gift of concentrating a stream of order on itself and thus escaping the decay into atomic chaos – of drinking orderliness from a suitable environment – seems to be connected with the presence of the aperiodic solids..." We now know that this "aperiodic" crystal is DNA, and that its irregular arrangement is a form of information. "The DNA in the cell nucleus contains the master copy of the software, in duplicate. This software seems to control by specifying an algorithm, or set of instructions, for creating and maintaining the entire organism containing the cell."
DNA and other macromolecules determine an organism's life cycle: birth, growth, maturity, decline, and death. Nutrition is necessary but not sufficient to account for growth in size, as genetics is the governing factor. At some point, virtually all organisms normally decline and die even while remaining in environments that contain sufficient nutrients to sustain life. The controlling factor must be internal and not nutrients or sunlight acting as causal exogenous variables. Organisms inherit the ability to create unique and complex biological structures; it is unlikely for those capabilities to be reinvented or to be taught to each generation. Therefore, DNA must be operative as the prime cause in this characteristic as well. Applying Boltzmann's perspective of the second law, the change of state from a more probable, less ordered, and higher entropy arrangement to one of less probability, more order, and lower entropy (as is seen in biological ordering) calls for a function like that known of DNA. DNA's apparent information-processing function provides a resolution of the Schrödinger paradox posed by life and the entropy requirement of the second law.
Gibbs free energy and biological evolution
In recent years, the thermodynamic interpretation of evolution in relation to entropy has begun to use the concept of the Gibbs free energy, rather than entropy. This is because biological processes on Earth take place at roughly constant temperature and pressure, a situation in which the Gibbs free energy is an especially useful way to express the second law of thermodynamics. The Gibbs free energy is given by:
G = H − TS

where
G is the Gibbs free energy,
H is the enthalpy passed into a thermodynamic system,
T is the absolute temperature of the system, and
S is the entropy,
and exergy and Gibbs free energy are equivalent if the environment and system temperature are equivalent. Otherwise, Gibbs free energy will be less than the exergy (for systems with temperatures above ambient). The minimization of the Gibbs free energy is a form of the principle of minimum energy (minimum 'free' energy or exergy), which follows from the entropy maximization principle for closed systems. Moreover, the Gibbs free energy equation, in modified form, can be used for open systems, including situations where chemical potential terms are included in the energy balance equation. In a popular 1982 textbook, Principles of Biochemistry, noted American biochemist Albert Lehninger argued that the order produced within cells as they grow and divide is more than compensated for by the disorder they create in their surroundings in the course of growth and division. In short, according to Lehninger, "Living organisms preserve their internal order by taking from their surroundings free energy, in the form of nutrients or sunlight, and returning to their surroundings an equal amount of energy as heat and entropy."
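As a worked example of the relation above, at constant temperature and pressure the change in Gibbs free energy for a process is ΔG = ΔH − TΔS, and a process with ΔG < 0 can proceed spontaneously. The sketch below evaluates this for an assumed exothermic, order-increasing process; the ΔH and ΔS values are placeholders for illustration, not measurements.

```python
def gibbs_change(dH, dS, T):
    """Change in Gibbs free energy dG = dH - T*dS at constant T and pressure.

    dH : enthalpy change, J/mol
    dS : entropy change, J/(mol*K)
    T  : absolute temperature, K
    A process with dG < 0 can proceed spontaneously.
    """
    return dH - T * dS

# Assumed values (illustration only): an exothermic step that lowers the
# system's entropy, e.g. an ordering process that exports heat to its surroundings.
dH = -40_000.0   # J/mol
dS = -100.0      # J/(mol*K)

for T in (298.15, 310.15, 500.0):
    dG = gibbs_change(dH, dS, T)
    label = "spontaneous" if dG < 0 else "non-spontaneous"
    print(f"T = {T:6.1f} K: dG = {dG/1000:7.1f} kJ/mol ({label})")
```

The example shows how the temperature weights the entropy term: the same ordering step that is favorable near room temperature becomes unfavorable at sufficiently high temperature.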
Similarly, in his 2003 book Information Theory and Evolution, the chemist John Avery presents the phenomenon of life, including its origin and evolution, as well as human cultural evolution, against the background of thermodynamics, statistical mechanics, and information theory. The (apparent) paradox between the second law of thermodynamics and the high degree of order and complexity produced by living systems, according to Avery, has its resolution "in the information content of the Gibbs free energy that enters the biosphere from outside sources." Assuming evolution drives organisms towards higher information content, it is postulated by Gregory Chaitin that life has properties of high mutual information, and by Tamvakis that life can be quantified using mutual information density metrics, a generalisation of the concept of Biodiversity.
In a study titled "Natural selection for least action" published in the Proceedings of the Royal Society A., Ville Kaila and Arto Annila of the University of Helsinki describe how the process of natural selection responsible for such local increase in order may be mathematically derived directly from the expression of the second law equation for connected non-equilibrium open systems. The second law of thermodynamics can be written as an equation of motion to describe evolution, showing how natural selection and the principle of least action can be connected by expressing natural selection in terms of chemical thermodynamics. In this view, evolution explores possible paths to level differences in energy densities and so increase entropy most rapidly. Thus, an organism serves as an energy transfer mechanism, and beneficial mutations allow successive organisms to transfer more energy within their environment.
Counteracting the second law tendency
Second-law analysis is valuable in scientific and engineering analysis in that it provides a number of benefits over energy analysis alone, including the basis for determining energy quality (or exergy content), understanding fundamental physical phenomena, improving performance evaluation and optimization, or in furthering our understanding of living systems.
The second law describes a universal tendency towards disorder and uniformity, or internal and external equilibrium. This means that real, non-ideal processes cause entropy production. Entropy can also be transferred to or from a system by the flow or transfer of matter and energy. As a result, entropy production does not necessarily cause the entropy of the system to increase. In fact, the entropy or disorder in a system can spontaneously decrease, as when an aircraft gas turbine engine cools down after shutdown, or when water in a cup left outside in sub-freezing winter temperatures freezes. In the latter case, a relatively disordered liquid cools and spontaneously freezes into a crystallized structure of reduced disorder as the molecules 'stick' together. Although the entropy of the system decreases, the system approaches uniformity with, or becomes more thermodynamically similar to, its surroundings. This is a category III process, referring to the four possible combinations of entropy (S) going up or down and uniformity (Y) between the system and its environment going up or down.
The second law can be conceptually stated as follows: Matter and energy have the tendency to reach a state of uniformity or internal and external equilibrium, a state of maximum disorder (entropy). Real non-equilibrium processes always produce entropy, causing increased disorder in the universe, while idealized reversible processes produce no entropy, and no process is known to exist that destroys entropy. The tendency of a system to approach uniformity may be counteracted, and the system may become more ordered or complex, by the combination of two things: a work or exergy source and some form of instruction or intelligence. Here 'exergy' is the thermal, mechanical, electric or chemical work potential of an energy source or flow, and 'instruction or intelligence' is understood in the context of, or characterized by, the set of processes that fall within category IV.
Consider as an example of a category IV process, robotic manufacturing and assembly of vehicles in a factory. The robotic machinery requires electrical work input and instructions, but when completed, the manufactured products have less uniformity with their surroundings, or more complexity (higher order) relative to the raw materials they were made from. Thus, system entropy or disorder decreases while the tendency towards uniformity between the system and its environment is counteracted. In this example, the instructions, as well as the source of work may be internal or external to the system, and they may or may not cross the system boundary. To illustrate, the instructions may be pre-coded and the electrical work may be stored in an energy storage system on-site. Alternatively, the control of the machinery may be by remote operation over a communications network, while the electric work is supplied to the factory from the local electric grid. In addition, humans may directly play, in whole or in part, the role that the robotic machinery plays in manufacturing. In this case, instructions may be involved, but intelligence is either directly responsible, or indirectly responsible, for the direction or application of work in such a way as to counteract the tendency towards disorder and uniformity.
As another example, consider the refrigeration of water in a warm environment. Due to refrigeration, heat is extracted or forced to flow from the water. As a result, the temperature and entropy of the water decreases, and the system moves further away from uniformity with its warm surroundings. The important point is that refrigeration not only requires a source of work, it requires designed equipment, as well as pre-coded or direct operational intelligence or instructions to achieve the desired refrigeration effect.
Observation is the basis for the understanding that category IV processes require both a source of exergy as well as a source or form of intelligence or instruction. With respect to living systems, sunlight provides the source of exergy for virtually all life on Earth, i.e. sunlight directly (for flora) or indirectly in food (for fauna). Note that the work potential or exergy of sunlight, with a certain spectral and directional distribution, will have a specific value that can be expressed as a percentage of the energy flow or exergy content. Like the Earth as a whole, living things use this energy, converting the energy to other forms (the first law), while producing entropy (the second law), and thereby degrading the exergy or quality of the energy. Sustaining life, or the growth of a seed, for example, requires continual arranging of atoms and molecules into elaborate assemblies required to duplicate living cells. This assembly in living organisms decreases uniformity and disorder, counteracting the universal tendency towards disorder and uniformity described by the second law. In addition to a high quality energy source, counteracting this tendency requires a form of instruction or intelligence, which is contained primarily in the DNA/RNA.
In the absence of instruction or intelligence, high quality energy is not enough on its own to produce complex assemblies, such as a house. As an example of category I in contrast to IV, a second tornado, despite carrying a great deal of energy or exergy, will never re-construct a town destroyed by a previous tornado; instead it increases disorder and uniformity (category I), the very tendency described by the second law. A related line of reasoning asks whether, however improbable, life could have come about undirected from non-living matter over billions of years and trillions of chances, in the absence of any intelligence. Related questions include: can humans with a supply of food (exergy) live without DNA/RNA, can a house supplied with electricity be built in the forest without humans or a source of instruction or programming, and can a fridge run with electricity but without its functioning computer control boards?
The second law guarantees that if we build a house it will, over time, tend to fall apart or move towards a state of disorder. On the other hand, if while walking through a forest we discover a house, we likely conclude that somebody built it, rather than concluding that the order came about randomly. We know that living systems, such as the structure and function of a living cell or the process of protein assembly and folding, are exceedingly complex. Could life have come about without being directed by a source of intelligence, eventually resulting in such things as the human brain and its intelligence, computers, cities, the quality of love, and the creation of music and fine art? The second law tendency towards disorder and uniformity, and the distinction of category IV processes as counteracting this natural tendency, offers valuable insight to consider in the search for answers to these questions.
Entropy of individual cells
Entropy balancing
An entropy balance for an open system, or the change in entropy over time for a system at steady state, can be written as:

dS/dt = Σ_k (Q̇_k/T_k) + Σ_in ṅ s̄ − Σ_out ṅ s̄ + Ṡ_prod (= 0 at steady state)

Assuming a steady state system, roughly stable pressure-temperature conditions, and exchange through cell surfaces only, this expression can be rewritten to express the entropy balance for an individual cell as:

Q̇/T + Σ_B ṅ_B s̄_B + ṅ_X s̄_X + Ṡ_prod = 0

where
Q̇ = heat exchange with the environment
s̄_B = partial molar entropy of metabolite B
s̄_X = partial molar entropy of structures resulting from growth
Ṡ_prod = rate of entropy production
and the ṅ terms indicate rates of exchange with the environment.
This equation can be adapted to describe the entropy balance of a cell, which is useful in reconciling the spontaneity of cell growth with the intuition that the development of complex structures must overall decrease entropy within the cell. From the second law, Ṡ_prod ≥ 0; due to the internal organization resulting from growth, the structural term ṅ_X s̄_X will be small. Metabolic processes force the sum of the remaining two terms to be less than zero through either a large rate of heat transfer or the export of high entropy waste products. Both mechanisms prevent excess entropy from building up inside the growing cell; the latter is what Schrödinger described as feeding on negative entropy, or "negentropy".
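As a concrete illustration of this bookkeeping, the sketch below checks the steady-state entropy budget of a hypothetical growing cell. Every numerical value is an invented placeholder chosen only to show how the terms fit together; none is a measured quantity.

```python
# Minimal sketch of the steady-state cell entropy balance discussed above.
# All numbers are illustrative assumptions, not experimental data.

T_env = 310.0             # K, temperature at which heat is exchanged with the surroundings

S_prod = 8.0e-15          # J/(K*s), internal entropy production (>= 0 by the second law)
dS_structures = -5.0e-16  # J/(K*s), entropy locked into newly built, ordered structures (small)

Q_export = 1.8e-12        # J/s, heat exported to the environment
S_export_heat = Q_export / T_env                          # entropy carried out by heat
S_export_waste = S_prod + dS_structures - S_export_heat   # entropy that must leave with waste

print(f"entropy out via heat : {S_export_heat:.2e} J/(K*s)")
print(f"entropy out via waste: {S_export_waste:.2e} J/(K*s)")
# At steady state the cell's own entropy stays constant, so whatever is produced
# internally (minus the small amount stored in new structure) must be exported
# either as heat or as high-entropy waste products -- the two mechanisms named above.
```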
Implications for metabolism
In fact it is possible for this "negentropy" contribution to be large enough that growth is fully endothermic, or actually removes heat from the environment. This type of metabolism, in which acetate, methanol, or a number of other hydrocarbon compounds are converted to methane (a high entropy gas), is known as acetoclastic methanogenesis; one example is the metabolism of the anaerobic archaebacterium Methanosarcina barkeri. At the opposite extreme is the metabolism of the anaerobic thermophilic archaebacterium Methanobacterium thermoautotrophicum, for which the heat exported into the environment through CO2 fixation is high (~3730 kJ/C-mol).
Generally, in metabolic processes, spontaneous catabolic processes that break down biomolecules provide the energy to drive non-spontaneous anabolic reactions that build organized biomass from high entropy reactants. Therefore, biomass yield is determined by the balance between coupled catabolic and anabolic processes, where the relationship between these processes can be described by:

ΔG_tot = ΔG_cat + Y ΔG_ana

where
ΔG_tot = total reaction driving force (overall molar Gibbs energy)
Y = biomass produced (yield)
ΔG_cat = Gibbs energy of catabolic reactions (negative)
ΔG_ana = Gibbs energy of anabolic reactions (positive)
Organisms must maintain some optimal balance between ΔG_cat and ΔG_ana to both avoid thermodynamic equilibrium, at which biomass production would be theoretically maximized but metabolism would proceed at an infinitely slow rate, and the opposite limiting case, at which growth is highly favorable but biomass yields are prohibitively low. This relationship is best described in general terms and will vary widely from organism to organism. Because the terms corresponding to catabolic and anabolic contributions would be roughly balanced in the former scenario, that case represents the maximum amount of organized matter that can be produced in accordance with the second law of thermodynamics for a very generalized metabolic system.
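A minimal sketch of this trade-off, using the bookkeeping relation reconstructed above (sign and normalization conventions differ between authors) and purely illustrative numbers:

```python
# Trade-off between yield and driving force, using dG_tot = dG_cat + Y * dG_ana
# as written above. All values are illustrative assumptions.

dG_cat = -500.0   # kJ per mol substrate, released by catabolism (negative)
dG_ana = +100.0   # kJ per C-mol biomass, demanded by anabolism (positive)

for Y in (0.5, 1.0, 2.0, 3.0, 4.0, 4.9):   # C-mol biomass per mol substrate
    dG_tot = dG_cat + Y * dG_ana
    print(f"Y = {Y:3.1f} C-mol/mol  ->  dG_tot = {dG_tot:+7.1f} kJ/mol substrate")

# As Y rises, dG_tot approaches zero: the yield is maximal but the overall
# driving force (and hence the metabolic rate) vanishes. Small Y keeps growth
# strongly favourable but wastes most of the catabolic energy as heat.
```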
Entropy and the origin of life
The second law of thermodynamics applied to the origin of life is a far more complicated issue than the further development of life, since there is no "standard model" of how the first biological lifeforms emerged, only a number of competing hypotheses. The problem is discussed within the context of abiogenesis, implying gradual pre-Darwinian chemical evolution.
Relationship to prebiotic chemistry
In 1924 Alexander Oparin suggested that sufficient energy for generating early life forms from non-living molecules was provided in a "primordial soup". The laws of thermodynamics impose some constraints on the earliest life-sustaining reactions that would have emerged and evolved from such a mixture. Essentially, to remain consistent with the second law of thermodynamics, self organizing systems that are characterized by lower entropy values than equilibrium must dissipate energy so as to increase entropy in the external environment. One consequence of this is that low entropy or high chemical potential chemical intermediates cannot build up to very high levels if the reaction leading to their formation is not coupled to another chemical reaction that releases energy. These reactions often take the form of redox couples, which must have been provided by the environment at the time of the origin of life. In today's biology, many of these reactions require catalysts (or enzymes) to proceed, which frequently contain transition metals. This means identifying both redox couples and metals that are readily available in a given candidate environment for abiogenesis is an important aspect of prebiotic chemistry.
The idea that processes that can occur naturally in the environment and act to locally decrease entropy must be identified has been applied in examinations of phosphate's role in the origin of life, where the relevant setting for abiogenesis is an early Earth lake environment. One such process is the ability of phosphate to concentrate reactants selectively due to its localized negative charge.
In the context of the alkaline hydrothermal vent (AHV) hypothesis for the origin of life, a framing of lifeforms as "entropy generators" has been suggested in an attempt to develop a framework for abiogenesis under alkaline deep sea conditions. Assuming life develops rapidly under certain conditions, experiments may be able to recreate the first metabolic pathway, as it would be the most energetically favorable and therefore likely to occur. In this case, iron sulfide compounds may have acted as the first catalysts. Therefore, within the larger framing of life as free energy converters, it would eventually be beneficial to characterize quantities such as entropy production and proton gradient dissipation rates quantitatively for origin of life relevant systems (particularly AHVs).
Other theories
The evolution of order, manifested as biological complexity, in living systems and the generation of order in certain non-living systems was proposed to obey a common fundamental principal called "the Darwinian dynamic". The Darwinian dynamic was formulated by first considering how microscopic order is generated in relatively simple non-biological systems that are far from thermodynamic equilibrium (e.g. tornadoes, hurricanes). Consideration was then extended to short, replicating RNA molecules assumed to be similar to the earliest forms of life in the RNA world. It was shown that the underlying order-generating processes in the non-biological systems and in replicating RNA are basically similar. This approach helps clarify the relationship of thermodynamics to evolution as well as the empirical content of Darwin's theory.
In 2009 physicist Karo Michaelian published a thermodynamic dissipation theory for the origin of life in which the fundamental molecules of life; nucleic acids, amino acids, carbohydrates (sugars), and lipids are considered to have been originally produced as microscopic dissipative structures (through Prigogine's dissipative structuring) as pigments at the ocean surface to absorb and dissipate into heat the UVC flux of solar light arriving at Earth's surface during the Archean, just as do organic pigments in the visible region today. These UVC pigments were formed through photochemical dissipative structuring from more common and simpler precursor molecules like HCN and H2O under the UVC flux of solar light. The thermodynamic function of the original pigments (fundamental molecules of life) was to increase the entropy production of the incipient biosphere under the solar photon flux and this, in fact, remains as the most important thermodynamic function of the biosphere today, but now mainly in the visible region where photon intensities are higher and biosynthetic pathways are more complex, allowing pigments to be synthesized from lower energy visible light instead of UVC light which no longer reaches Earth's surface.
Jeremy England developed a hypothesis of the physics of the origins of life, that he calls 'dissipation-driven adaptation'. The hypothesis holds that random groups of molecules can self-organize to more efficiently absorb and dissipate heat from the environment. His hypothesis states that such self-organizing systems are an inherent part of the physical world.
Other types of entropy and their use in defining life
Like a thermodynamic system, an information system has an analogous concept to entropy called information entropy. Here, entropy is a measure of the increase or decrease in the novelty of information. Path flows of novel information show a familiar pattern: they tend to increase or decrease the number of possible outcomes in the same way that measures of thermodynamic entropy increase or decrease the state space. Like thermodynamic entropy, information entropy uses a logarithmic scale: H = −Σ P(x) log P(x), where P(x) is the probability of outcome x. Reductions in information entropy are associated with a smaller number of possible outcomes in the information system.
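For readers who want the formula in executable form, here is a minimal Shannon-entropy calculation (in bits); the example distributions are arbitrary.

```python
# Shannon entropy H = -sum P(x) log2 P(x) of a discrete distribution.
import math

def shannon_entropy(probabilities):
    """Entropy in bits, ignoring zero-probability outcomes."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

uniform = [0.25, 0.25, 0.25, 0.25]   # four equally likely outcomes
skewed  = [0.97, 0.01, 0.01, 0.01]   # nearly deterministic

print(shannon_entropy(uniform))  # 2.0 bits: maximal uncertainty for four outcomes
print(shannon_entropy(skewed))   # ~0.24 bits: far fewer effectively possible outcomes
```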
In 1984 Brooks and Wiley introduced the concept of species entropy as a measure of the sum of entropy reduction within species populations in relation to free energy in the environment. Brooks-Wiley entropy looks at three categories of entropy changes: information, cohesion and metabolism. Information entropy here measures the efficiency of the genetic information in recording all the potential combinations of heredity which are present. Cohesion entropy looks at the sexual linkages within a population. Metabolic entropy is the familiar chemical entropy used to compare the population to its ecosystem. The sum of these three is a measure of nonequilibrium entropy that drives evolution at the population level.
A 2022 article by Helman in Acta Biotheoretica suggests identifying a divergence measure of these three types of entropies: thermodynamic entropy, information entropy and species entropy. Where these three are overdetermined, there will be a formal freedom that arises similar to how chirality arises from a minimum number of dimensions. Once there are at least four points for atoms, for example, in a molecule that has a central atom, left and right enantiomers are possible. By analogy, once a threshold of overdetermination in entropy is reached in living systems, there will be an internal state space that allows for ordering of systems operations. That internal ordering process is a threshold for distinguishing living from nonliving systems.
Entropy and the search for extraterrestrial life
In 1964 James Lovelock was among a group of scientists requested by NASA to make a theoretical life-detection system to look for life on Mars during the upcoming Viking missions. A significant challenge was determining how to construct a test that would reveal the presence of extraterrestrial life with significant differences from biology as we know it. In considering this problem, Lovelock asked two questions: "How can we be sure that the Martian way of life, if any, will reveal itself to tests based on Earth's life style?", as well as the more challenging underlying question: "What is life, and how should it be recognized?"
Because these ideas conflicted with more traditional approaches that assume biological signatures on other planets would look much like they do on Earth, in discussing this issue with some of his colleagues at the Jet Propulsion Laboratory, he was asked what he would do to look for life on Mars instead. To this, Lovelock replied "I'd look for an entropy reduction, since this must be a general characteristic of life." This idea was perhaps better phrased as a search for sustained chemical disequilibria associated with low entropy states resulting from biological processes, and through further collaboration developed into the hypothesis that biosignatures would be detectable through examining atmospheric compositions. Lovelock determined through studying the atmosphere of Earth that this metric would indeed have the potential to reveal the presence of life. This had the consequence of indicating that Mars was most likely lifeless, as its atmosphere lacks any such anomalous signature.
This work has been extended recently as a basis for biosignature detection in exoplanetary atmospheres. Essentially, the detection of multiple gases that are not typically in stable equilibrium with one another in a planetary atmosphere may indicate biotic production of one or more of them, in a way that does not require assumptions about the exact biochemical reactions extraterrestrial life might use or the specific products that would result. A terrestrial example is the coexistence of methane and oxygen, both of which would eventually deplete if not for continuous biogenic production. The amount of disequilibrium can be described by differencing observed and equilibrium state Gibbs energies for a given atmosphere composition; it can be shown that this quantity has been directly affected by the presence of life throughout Earth's history. Imaging of exoplanets by future ground and space based telescopes will provide observational constraints on exoplanet atmosphere compositions, to which this approach could be applied.
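As a rough illustration of why a methane–oxygen pair is such a strong disequilibrium signal, the sketch below evaluates the standard Gibbs energy of methane oxidation from approximate textbook Gibbs energies of formation at 25 °C; it is a back-of-the-envelope estimate, not a rigorous atmospheric equilibrium calculation.

```python
# Standard Gibbs energy of CH4 + 2 O2 -> CO2 + 2 H2O(g), from approximate
# standard Gibbs energies of formation at 25 C (kJ/mol, textbook values).

dGf = {"CH4": -50.5, "O2": 0.0, "CO2": -394.4, "H2O": -228.6}

dG_rxn = (dGf["CO2"] + 2 * dGf["H2O"]) - (dGf["CH4"] + 2 * dGf["O2"])
print(f"dG_rxn ~ {dG_rxn:.1f} kJ per mol CH4")   # about -800 kJ/mol

# Such a strongly negative dG means the two gases cannot persist together at
# equilibrium; their sustained coexistence in an atmosphere therefore points
# to continuous replenishment, on Earth largely biological.
```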
But there is a caveat related to the potential for chemical disequilibria to serve as an anti-biosignature depending on the context. In fact, there was probably a strong chemical disequilibrium present on early Earth before the origin of life due to a combination of the products of sustained volcanic outgassing and oceanic water vapor. In this case, the disequilibrium was the result of a lack of organisms present to metabolize the resulting compounds. This imbalance would actually be decreased by the presence of chemotrophic life, which would remove these atmospheric gases and create more thermodynamic equilibrium prior to the advent of photosynthetic ecosystems.
In 2013 Azua-Bustos and Vega argued that, disregarding the types of lifeforms that might be envisioned both on Earth and elsewhere in the Universe, all should share in common the attribute of decreasing their internal entropy at the expense of free energy obtained from their surroundings. As entropy allows the quantification of the degree of disorder in a system, any envisioned lifeform must have a higher degree of order than its immediate supporting environment. These authors showed that by using fractal mathematics analysis alone, they could readily quantify the degree of structural complexity difference (and thus entropy) of living processes as distinct entities separate from their similar abiotic surroundings. This approach may allow the future detection of unknown forms of life both in the Solar System and on recently discovered exoplanets based on nothing more than entropy differentials of complementary datasets (morphology, coloration, temperature, pH, isotopic composition, etc.).
Entropy in psychology
The notion of entropy as disorder has been transferred from thermodynamics to psychology by Polish psychiatrist Antoni Kępiński, who admitted being inspired by Erwin Schrödinger. In his theoretical framework devised to explain mental disorders (the information metabolism theory), the difference between living organisms and other systems was explained as the ability to maintain order. Contrary to inanimate matter, organisms maintain the particular order of their bodily structures and inner worlds which they impose onto their surroundings and forward to new generations. The life of an organism or the species ceases as soon as it loses that ability. Maintenance of that order requires continual exchange of information between the organism and its surroundings. In higher organisms, information is acquired mainly through sensory receptors and metabolised in the nervous system. The result is action – some form of motion, for example locomotion, speech, internal motion of organs, secretion of hormones, etc. The reactions of one organism become an informational signal to other organisms. Information metabolism, which allows living systems to maintain the order, is possible only if a hierarchy of value exists, as the signals coming to the organism must be structured. In humans that hierarchy has three levels, i.e. biological, emotional, and sociocultural. Kępiński explained how various mental disorders are caused by distortions of that hierarchy, and that the return to mental health is possible through its restoration.
The idea was continued by Struzik, who proposed that Kępiński's information metabolism theory may be seen as an extension of Léon Brillouin's negentropy principle of information. In 2011, the notion of "psychological entropy" was reintroduced to psychologists by Hirsh et al. Similarly to Kępiński, these authors noted that uncertainty management is a critical ability for any organism. Uncertainty, arising due to the conflict between competing perceptual and behavioral affordances, is experienced subjectively as anxiety. Hirsh and his collaborators proposed that both the perceptual and behavioral domains may be conceptualized as probability distributions and that the amount of uncertainty associated with a given perceptual or behavioral experience can be quantified in terms of Claude Shannon's entropy formula.
Objections
Because entropy is well defined only for equilibrium systems, objections have been raised to the extension of the second law and of entropy to biological systems, especially as it pertains to its use to support or discredit the theory of evolution. Living systems, and indeed many other systems and processes in the universe, operate far from equilibrium.
However, entropy is well defined much more broadly based on the probabilities of a system's states, whether or not the system is a dynamic one (for which equilibrium could be relevant). Even in those physical systems where equilibrium could be relevant, (1) living systems cannot persist in isolation, and (2) the second principle of thermodynamics does not require that free energy be transformed into entropy along the shortest path: living organisms absorb energy from sunlight or from energy-rich chemical compounds and finally return part of such energy to the environment as entropy (generally in the form of heat and low free-energy compounds such as water and carbon dioxide).
The Belgian scientist Ilya Prigogine contributed to this line of study throughout his research and attempted to resolve those conceptual limits, winning the Nobel Prize in 1977. One of his major contributions was the concept of the dissipative system, which describes the thermodynamics of open systems in non-equilibrium states.
See also
Abiogenesis
Adaptive system
Complex systems
Dissipative system
Ecological entropy – a measure of biodiversity in the study of biological ecology
Ectropy – a measure of the tendency of a dynamical system to do useful work and grow more organized
Entropy (order and disorder)
Extropy – a metaphorical term defining the extent of a living or organizational system's intelligence, functional order, vitality, energy, life, experience, and capacity and drive for improvement and growth
Negentropy – a shorthand colloquial phrase for negative entropy
Self-organization - In non-equilibrium thermodynamics, entropy and dissipative structures are connected to self-organization phenomenon (patterning, orderliness). Life systems and its subsystems are dissipative structures with some degree of self-organization.
References
Further reading
Schneider, E. and Sagan, D. (2005). Into the Cool: Energy Flow, Thermodynamics, and Life. University of Chicago Press, Chicago.
La Cerra, P. (2003). The First Law of Psychology is the Second Law of Thermodynamics: The Energetic Evolutionary Model of the Mind and the Generation of Human Psychological Phenomena, Human Nature Review 3: 440–447.
Moroz, A. (2011). The Common Extremalities in Biology and Physics. Elsevier Insights, NY.
John R. Woodward (2010). Artificial life, the second law of thermodynamics, and Kolmogorov Complexity. 2010 IEEE International Conference on Progress in Informatics and Computing. Vol. 2, pages 1266–1269. IEEE.
François Roddier (2012). The Thermodynamics of evolution. Paroles Editions.
External links
Thermodynamic Evolution of the Universe pi.physik.uni-bonn.de/~cristinz
Thermodynamic entropy
Biological evolution
Biophysics
Modern searches for Lorentz violation
Modern searches for Lorentz violation are scientific studies that look for deviations from Lorentz invariance or symmetry, a set of fundamental frameworks that underpin modern science and fundamental physics in particular. These studies try to determine whether violations or exceptions might exist for well-known physical laws such as special relativity and CPT symmetry, as predicted by some variations of quantum gravity, string theory, and some alternatives to general relativity.
Lorentz violations concern the fundamental predictions of special relativity, such as the principle of relativity, the constancy of the speed of light in all inertial frames of reference, and time dilation, as well as the predictions of the standard model of particle physics. To assess and predict possible violations, test theories of special relativity and effective field theories (EFT) such as the Standard-Model Extension (SME) have been invented. These models introduce Lorentz and CPT violations through spontaneous symmetry breaking caused by hypothetical background fields, resulting in some sort of preferred frame effects. This could lead, for instance, to modifications of the dispersion relation, causing differences between the maximal attainable speed of matter and the speed of light.
Both terrestrial and astronomical experiments have been carried out, and new experimental techniques have been introduced. No Lorentz violations have been measured thus far, and exceptions in which positive results were reported have been refuted or lack further confirmations. For discussions of many experiments, see Mattingly (2005). For a detailed list of results of recent experimental searches, see Kostelecký and Russell (2008–2013). For a recent overview and history of Lorentz violating models, see Liberati (2013).
Assessing Lorentz invariance violations
Early models assessing the possibility of slight deviations from Lorentz invariance have been published between the 1960s and the 1990s. In addition, a series of test theories of special relativity and effective field theories (EFT) for the evaluation and assessment of many experiments have been developed, including:
The parameterized post-Newtonian formalism is widely used as a test theory for general relativity and alternatives to general relativity, and can also be used to describe Lorentz violating preferred frame effects.
The Robertson-Mansouri-Sexl framework (RMS) contains three parameters, indicating deviations in the speed of light with respect to a preferred frame of reference.
The c² framework (a special case of the more general THεμ framework) introduces a modified dispersion relation and describes Lorentz violations in terms of a discrepancy between the speed of light and the maximal attainable speed of matter, in the presence of a preferred frame.
Doubly special relativity (DSR) preserves the Planck length as an invariant minimum length-scale, yet without having a preferred reference frame.
Very special relativity describes space-time symmetries that are certain proper subgroups of the Poincaré group. It was shown that special relativity is only consistent with this scheme in the context of quantum field theory or CP conservation.
Noncommutative geometry (in connection with Noncommutative quantum field theory or the Noncommutative standard model) might lead to Lorentz violations.
Lorentz violations are also discussed in relation to Alternatives to general relativity such as Loop quantum gravity, Emergent gravity, Einstein aether theory, Hořava–Lifshitz gravity.
However, the Standard-Model Extension (SME) in which Lorentz violating effects are introduced by spontaneous symmetry breaking, is used for most modern analyses of experimental results. It was introduced by Kostelecký and colleagues in 1997 and the following years, containing all possible Lorentz and CPT violating coefficients not violating gauge symmetry. It includes not only special relativity, but the standard model and general relativity as well. Models whose parameters can be related to SME and thus can be seen as special cases of it, include the older RMS and c2 models, the Coleman-Glashow model confining the SME coefficients to dimension 4 operators and rotation invariance, and the Gambini-Pullin model or the Myers-Pospelov model corresponding to dimension 5 or higher operators of SME.
Speed of light
Terrestrial
Many terrestrial experiments have been conducted, mostly with optical resonators or in particle accelerators, by which deviations from the isotropy of the speed of light are tested. Anisotropy parameters are given, for instance, by the Robertson-Mansouri-Sexl test theory (RMS). This allows for distinction between the relevant orientation and velocity dependent parameters. In modern variants of the Michelson–Morley experiment, the dependence of light speed on the orientation of the apparatus and the relation of longitudinal and transverse lengths of bodies in motion is analyzed. Also modern variants of the Kennedy–Thorndike experiment, by which the dependence of light speed on the velocity of the apparatus and the relation of time dilation and length contraction is analyzed, have been conducted; the most recently reached limit for Kennedy–Thorndike tests is 7 × 10^−12. The current precision, by which an anisotropy of the speed of light can be excluded, is at the 10^−17 level. This is related to the relative velocity between the Solar System and the rest frame of the cosmic microwave background radiation of ~368 km/s (see also Resonator Michelson–Morley experiments).
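For a sense of the scales involved, the sketch below estimates the orientation-dependent signal in a simplified RMS-type parameterization, δc/c ≈ B·(v/c)²·sin²θ, with B the Michelson–Morley parameter combination (1/2 − β + δ); the chosen value of B is purely hypothetical and sign conventions vary between presentations.

```python
# Order-of-magnitude estimate of the orientation-dependent light-speed signal
# searched for in resonator Michelson-Morley experiments, using a simplified
# RMS-type form dc/c ~ B * (v/c)^2 * sin^2(theta). B is a hypothetical value.

c = 299_792_458.0        # m/s
v = 368_000.0            # m/s, velocity relative to the CMB rest frame
B = 1.0e-10              # assumed Michelson-Morley combination (1/2 - beta + delta)

boost = (v / c) ** 2
print(f"(v/c)^2    = {boost:.2e}")           # ~1.5e-6
print(f"max |dc/c| = {abs(B) * boost:.1e}")  # amplitude at sin^2(theta) = 1

# The (v/c)^2 suppression is why cavity experiments must resolve fractional
# frequency shifts far below 1e-15 to constrain B at this level.
```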
In addition, the Standard-Model Extension (SME) can be used to obtain a larger number of isotropy coefficients in the photon sector. It uses the even- and odd-parity coefficients (3×3 matrices) κ̃_e− and κ̃_o+, and κ̃_tr. They can be interpreted as follows: κ̃_e− represents anisotropic shifts in the two-way (forward and backwards) speed of light, κ̃_o+ represents anisotropic differences in the one-way speed of counterpropagating beams along an axis, and κ̃_tr represents isotropic (orientation-independent) shifts in the one-way phase velocity of light. It was shown that such variations in the speed of light can be removed by suitable coordinate transformations and field redefinitions, though the corresponding Lorentz violations cannot be removed, because such redefinitions only transfer those violations from the photon sector to the matter sector of SME. While ordinary symmetric optical resonators are suitable for testing even-parity effects and provide only tiny constraints on odd-parity effects, asymmetric resonators have also been built for the detection of odd-parity effects. For additional coefficients in the photon sector leading to birefringence of light in vacuum, which cannot be redefined away like the other photon effects, see the section on vacuum birefringence below.
Another type of test of the related one-way light speed isotropy in combination with the electron sector of the SME was conducted by Bocquet et al. (2010). They searched for fluctuations in the 3-momentum of photons during Earth's rotation, by measuring the Compton scattering of ultrarelativistic electrons on monochromatic laser photons in the frame of the cosmic microwave background radiation, as originally suggested by Vahe Gurzadyan and Amur Margarian (for details on that 'Compton Edge' method and its analysis, see the literature).
Solar System
Besides terrestrial tests also astrometric tests using Lunar Laser Ranging (LLR), i.e. sending laser signals from Earth to Moon and back, have been conducted. They are ordinarily used to test general relativity and are evaluated using the Parameterized post-Newtonian formalism. However, since these measurements are based on the assumption that the speed of light is constant, they can also be used as tests of special relativity by analyzing potential distance and orbit oscillations. For instance, Zoltán Lajos Bay and White (1981) demonstrated the empirical foundations of the Lorentz group and thus special relativity by analyzing the planetary radar and LLR data.
In addition to the terrestrial Kennedy–Thorndike experiments mentioned above, Müller & Soffel (1995) and Müller et al. (1999) tested the RMS velocity dependence parameter by searching for anomalous distance oscillations using LLR. Since time dilation is already confirmed to high precision, a positive result would prove that light speed depends on the observer's velocity and length contraction is direction dependent (like in the other Kennedy–Thorndike experiments). However, no anomalous distance oscillations have been observed, with a limit on the RMS velocity dependence parameter comparable to that of Hils and Hall (1990).
Vacuum dispersion
Another effect often discussed in connection with quantum gravity (QG) is the possibility of dispersion of light in vacuum (i.e. the dependence of light speed on photon energy), due to Lorentz-violating dispersion relations. This effect should be strong at energy levels comparable to, or beyond, the Planck energy of ~1.22 × 10^19 GeV, while being extraordinarily weak at energies accessible in the laboratory or observed in astrophysical objects. In an attempt to observe a weak dependence of speed on energy, light from distant astrophysical sources such as gamma ray bursts and distant galaxies has been examined in many experiments. Especially the Fermi-LAT group was able to show that no energy dependence and thus no observable Lorentz violation occurs in the photon sector even beyond the Planck energy, which excludes a large class of Lorentz-violating quantum gravity models.
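To see why such sources are so sensitive, the sketch below estimates the arrival-time spread for a linear, Planck-suppressed energy dependence of the photon speed, Δt ≈ ξ·(ΔE/E_QG)·(D/c); cosmological expansion is neglected and the source distance and photon energies are illustrative assumptions.

```python
# Rough photon arrival-time delay for a linear, Planck-suppressed dispersion,
# dt ~ xi * (dE / E_QG) * (D / c). Distance and energies are assumed values;
# cosmological expansion is ignored.

c = 299_792_458.0            # m/s
E_QG = 1.22e19               # GeV, taken here to be the Planck energy
xi = 1.0                     # dimensionless model parameter, assumed O(1)

D = 7.0e9 * 9.461e15         # m, an assumed source distance of 7 billion light-years
dE = 30.0                    # GeV, energy difference between the compared photons

dt = xi * (dE / E_QG) * (D / c)
print(f"predicted delay ~ {dt:.2f} s")   # a few tenths of a second

# Timing GeV photons from gamma-ray bursts to well below a second over such
# distances is what lets Fermi-LAT push the linear Lorentz-violation scale
# beyond the Planck energy.
```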
Vacuum birefringence
Lorentz violating dispersion relations due to the presence of an anisotropic space might also lead to vacuum birefringence and parity violations. For instance, the polarization plane of photons might rotate due to velocity differences between left- and right-handed photons. In particular, gamma ray bursts, galactic radiation, and the cosmic microwave background radiation are examined. Constraints are given on the SME coefficients for Lorentz violation of mass dimension 3 and 5; the latter corresponds to the dimension-five operator in the EFT of Myers and Pospelov, suppressed by the Planck mass.
Maximal attainable speed
Threshold constraints
Lorentz violations could lead to differences between the speed of light and the limiting or maximal attainable speed (MAS) of any particle, whereas in special relativity the speeds should be the same. One possibility is to investigate otherwise forbidden effects at threshold energy in connection with particles having a charge structure (protons, electrons, neutrinos). This is because the dispersion relation is assumed to be modified in Lorentz violating EFT models such as SME. Depending on which of these particles travels faster or slower than the speed of light, effects such as the following can occur:
Photon decay at superluminal speed. These (hypothetical) high-energy photons would quickly decay into other particles, which means that high energy light cannot propagate over long distances. So the mere existence of high energy light from astronomic sources constrains possible deviations from the limiting velocity.
Vacuum Cherenkov radiation at superluminal speed of any particle (protons, electrons, neutrinos) having a charge structure. In this case, emission of Bremsstrahlung can occur until the particle falls below threshold and subluminal speed is reached again. This is similar to the known Cherenkov radiation in media, in which particles travel faster than the phase velocity of light in that medium. Deviations from the limiting velocity can be constrained by observing high energy particles of distant astronomic sources that reach Earth (a numerical sketch of the threshold condition follows this list).
The rate of synchrotron radiation could be modified, if the limiting velocity between charged particles and photons is different.
The Greisen–Zatsepin–Kuzmin limit could be evaded by Lorentz violating effects. However, recent measurements indicate that this limit really exists.
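For the vacuum Cherenkov case, a leading-order Coleman–Glashow-type estimate puts the emission threshold at E_th ≈ mc²/√(2δ) for a particle whose maximal attainable speed exceeds the speed of light by a small fraction δ. The sketch below evaluates this threshold for an assumed δ.

```python
# Leading-order vacuum Cherenkov threshold E_th ~ m c^2 / sqrt(2 * delta),
# where delta is the fractional excess of the particle's maximal attainable
# speed over the speed of light. delta below is an assumed illustrative value.
import math

def cherenkov_threshold_eV(rest_energy_eV, delta):
    return rest_energy_eV / math.sqrt(2.0 * delta)

m_e = 0.511e6      # eV, electron rest energy
m_p = 938.272e6    # eV, proton rest energy
delta = 1.0e-20    # hypothetical fractional speed excess

for name, m in (("electron", m_e), ("proton", m_p)):
    print(f"{name}: threshold ~ {cherenkov_threshold_eV(m, delta):.2e} eV")

# Observing cosmic-ray protons at ~1e20 eV that evidently have not radiated
# their energy away pushes delta below roughly (m c^2 / E)^2 / 2.
```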
Since astronomic measurements also contain additional assumptions – like the unknown conditions at the emission or along the path traversed by the particles, or the nature of the particles – terrestrial measurements provide results of greater clarity, even though the bounds are wider; such bounds describe maximal deviations between the speed of light and the limiting velocity of matter.
Clock comparison and spin coupling
By this kind of spectroscopy experiments – sometimes called Hughes–Drever experiments as well – violations of Lorentz invariance in the interactions of protons and neutrons are tested by studying the energy levels of those nucleons in order to find anisotropies in their frequencies ("clocks"). Using spin-polarized torsion balances, also anisotropies with respect to electrons can be examined. Methods used mostly focus on vector spin interactions and tensor interactions, and are often described in CPT odd/even SME terms (in particular parameters of bμ and cμν). Such experiments are currently the most sensitive terrestrial ones, because the precision by which Lorentz violations can be excluded lies at the 10^−33 GeV level.
These tests can be used to constrain deviations between the maximal attainable speed of matter and the speed of light, in particular with respect to the parameters of cμν that are also used in the evaluations of the threshold effects mentioned above.
Time dilation
The classic time dilation experiments such as the Ives–Stilwell experiment, the Moessbauer rotor experiments, and the time dilation of moving particles, have been enhanced by modernized equipment. For example, the Doppler shift of lithium ions traveling at high speeds is evaluated by using saturated spectroscopy in heavy ion storage rings. For more information, see Modern Ives–Stilwell experiments.
The current precision with which time dilation is measured (using the RMS test theory) is at the ~10^−8 level. It was shown that Ives–Stilwell type experiments are also sensitive to the isotropic light speed coefficient of the SME, as introduced above. Chou et al. (2010) even managed to measure a frequency shift of ~10^−16 due to time dilation, namely at everyday speeds such as 36 km/h.
CPT and antimatter tests
Another fundamental symmetry of nature is CPT symmetry. It was shown that CPT violations lead to Lorentz violations in quantum field theory (even though there are nonlocal exceptions). CPT symmetry requires, for instance, the equality of mass, and equality of decay rates between matter and antimatter.
Modern tests by which CPT symmetry has been confirmed are mainly conducted in the neutral meson sector. In large particle accelerators, direct measurements of mass differences between top- and antitop-quarks have been conducted as well.
Using SME, also additional consequences of CPT violation in the neutral meson sector can be formulated. Other SME related CPT tests have been performed as well:
Using Penning traps in which individual charged particles and their counterparts are trapped, Gabrielse et al. (1999) examined cyclotron frequencies in proton-antiproton measurements, and couldn't find any deviation down to 9·10^−11.
Hans Dehmelt et al. tested the anomaly frequency, which plays a fundamental role in the measurement of the electron's gyromagnetic ratio. They searched for sidereal variations, and differences between electrons and positrons as well. Eventually they found no deviations, thereby establishing bounds of 10^−24 GeV.
Hughes et al. (2001) examined muons for sidereal signals in their spectrum, and found no Lorentz violation down to 10^−23 GeV.
The "Muon g-2" collaboration of the Brookhaven National Laboratory searched for deviations in the anomaly frequency of muons and anti-muons, and for sidereal variations under consideration of Earth's orientation. Also here, no Lorentz violations could be found, with a precision of 10−24 GeV.
Other particles and interactions
Third generation particles have been examined for potential Lorentz violations using SME. For instance, Altschul (2007) placed upper limits on Lorentz violation of the tau of 10^−8, by searching for anomalous absorption of high energy astrophysical radiation. In the BaBar experiment (2007), the D0 experiment (2015), and the LHCb experiment (2016), searches have been made for sidereal variations during Earth's rotation using B mesons (thus bottom quarks) and their antiparticles. No Lorentz- or CPT-violating signals were found, with upper limits in the range 10^−15 to 10^−14 GeV.
Also top quark pairs have been examined in the D0 experiment (2012). They showed that the cross section production of these pairs doesn't depend on sidereal time during Earth's rotation.
Lorentz violation bounds on Bhabha scattering have been given by Charneski et al. (2012). They showed that differential cross sections for the vector and axial couplings in QED become direction dependent in the presence of Lorentz violation. They found no indication of such an effect and placed upper limits on the corresponding Lorentz-violating couplings.
Gravitation
The influence of Lorentz violation on gravitational fields and thus general relativity was analyzed as well. The standard framework for such investigations is the Parameterized post-Newtonian formalism (PPN), in which Lorentz violating preferred frame effects are described by the parameters (see the PPN article on observational bounds on these parameters). Lorentz violations are also discussed in relation to Alternatives to general relativity such as Loop quantum gravity, Emergent gravity, Einstein aether theory or Hořava–Lifshitz gravity.
Also SME is suitable to analyze Lorentz violations in the gravitational sector. Bailey and Kostelecky (2006) constrained Lorentz violations down to by analyzing the perihelion shifts of Mercury and Earth, and down to in relation to solar spin precession. Battat et al. (2007) examined Lunar Laser Ranging data and found no oscillatory perturbations in the lunar orbit. Their strongest SME bound excluding Lorentz violation was . Iorio (2012) obtained bounds at the level by examining Keplerian orbital elements of a test particle acted upon by Lorentz-violating gravitomagnetic accelerations. Xie (2012) analyzed the advance of periastron of binary pulsars, setting limits on Lorentz violation at the level.
Neutrino tests
Neutrino oscillations
Although neutrino oscillations have been experimentally confirmed, the theoretical foundations are still controversial, as it can be seen in the discussion related to sterile neutrinos. This makes predictions of possible Lorentz violations very complicated. It is generally assumed that neutrino oscillations require a certain finite mass. However, oscillations could also occur as a consequence of Lorentz violations, so there are speculations as to how much those violations contribute to the mass of the neutrinos.
Additionally, a series of investigations have been published in which a sidereal dependence of the occurrence of neutrino oscillations was tested, which could arise if there were a preferred background field. This, possible CPT violations, and other coefficients of Lorentz violation in the framework of SME have been tested, and bounds at various levels in GeV have been achieved for the validity of Lorentz invariance.
Neutrino speed
Since the discovery of neutrino oscillations, it is assumed that their speed is slightly below the speed of light. Direct velocity measurements have indicated upper limits on relative speed differences between light and neutrinos; see measurements of neutrino speed.
Also indirect constraints on neutrino velocity, on the basis of effective field theories such as SME, can be achieved by searching for threshold effects such as Vacuum Cherenkov radiation. For example, neutrinos should exhibit Bremsstrahlung in the form of electron-positron pair production. Another possibility in the same framework is the investigation of the decay of pions into muons and neutrinos. Superluminal neutrinos would considerably delay those decay processes. The absence of those effects indicate tight limits for velocity differences between light and neutrinos.
Velocity differences between neutrino flavors can be constrained as well. A comparison between muon- and electron-neutrinos by Coleman & Glashow (1998) gave a negative result, with bounds <6.
Reports of alleged Lorentz violations
Open reports
LSND, MiniBooNE
In 2001, the LSND experiment observed a 3.8σ excess of antineutrino interactions in neutrino oscillations, which contradicts the standard model. First results of the more recent MiniBooNE experiment appeared to exclude this data above an energy scale of 450 MeV, but they had checked neutrino interactions, not antineutrino ones. In 2008, however, they reported an excess of electron-like neutrino events between 200 and 475 MeV. And in 2010, when carried out with antineutrinos (as in LSND), the result was in agreement with the LSND result, that is, an excess at the energy scale from 450 to 1250 MeV was observed. Whether those anomalies can be explained by sterile neutrinos, or whether they indicate Lorentz violations, is still discussed and subject to further theoretical and experimental researches.
Solved reports
In 2011 the OPERA Collaboration published (in a non-peer reviewed arXiv preprint) the results of neutrino measurements, according to which neutrinos were traveling slightly faster than light. The neutrinos apparently arrived early by ~60 ns. The significance was 6σ, clearly beyond the 5σ limit necessary for a significant result. However, in 2012 it was found that this result was due to measurement errors, and the corrected result was consistent with the speed of light; see Faster-than-light neutrino anomaly.
In 2010, MINOS reported differences between the disappearance (and thus the masses) of neutrinos and antineutrinos at the 2.3 sigma level. This would violate CPT symmetry and Lorentz symmetry. However, in 2011 MINOS updated their antineutrino results; after evaluating additional data, they reported that the difference is not as great as initially thought. In 2012, they published a paper in which they reported that the difference is now removed.
In 2007, the MAGIC Collaboration published a paper in which they claimed a possible energy dependence of the speed of photons from the galaxy Markarian 501. They admitted that a possible energy-dependent emission effect could have caused this result as well.
However, the MAGIC result was superseded by the substantially more precise measurements of the Fermi-LAT group, which couldn't find any effect even beyond the Planck energy. For details, see section Dispersion.
In 1997, Nodland & Ralston claimed to have found a rotation of the polarization plane of light coming from distant radio galaxies. This would indicate an anisotropy of space.
This attracted some interest in the media. However, some criticisms immediately appeared, disputing the interpretation of the data and pointing to errors in the publication.
More recent studies have not found any evidence for this effect (see section on Birefringence).
See also
Tests of special relativity
Phenomenological quantum gravity
References
External links
Kostelecký: Background information on Lorentz and CPT violation
Roberts, Schleif (2006); Relativity FAQ: What is the experimental basis of special relativity?
Physics experiments
Tests of special relativity
Thermalisation
In physics, thermalisation (or thermalization) is the process of physical bodies reaching thermal equilibrium through mutual interaction. In general, the natural tendency of a system is towards a state of equipartition of energy and uniform temperature that maximizes the system's entropy. Thermalisation, thermal equilibrium, and temperature are therefore important fundamental concepts within statistical physics, statistical mechanics, and thermodynamics, all of which are a basis for many other specific fields of scientific understanding and engineering application.
Examples of thermalisation include:
the achievement of equilibrium in a plasma.
the process undergone by high-energy neutrons as they lose energy by collision with a moderator.
the process of heat or phonon emission by charge carriers in a solar cell, after a photon that exceeds the semiconductor band gap energy is absorbed.
A foundational assumption of most introductory textbooks treating quantum statistical mechanics is that systems go to thermal equilibrium (thermalisation). The process of thermalisation erases local memory of the initial conditions. The eigenstate thermalisation hypothesis is a hypothesis about when quantum states will undergo thermalisation and why.
Not all quantum states undergo thermalisation. Some states have been discovered which do not (see below), and their reasons for not reaching thermal equilibrium are unclear.
Theoretical description
The process of equilibration can be described using the H-theorem or the relaxation theorem, see also entropy production.
Systems resisting thermalisation
Classical systems
Broadly speaking, classical systems with non-chaotic behavior will not thermalise. Systems with many interacting constituents are generally expected to be chaotic, but this assumption sometimes fails. A notable counterexample is the Fermi–Pasta–Ulam–Tsingou problem, which displays unexpected recurrence and will only thermalise over very long time scales; a numerical sketch is given below. Non-chaotic systems which are perturbed by weak non-linearities will not thermalise for a set of initial conditions with non-zero volume in phase space, as stated by the KAM theorem, although the size of this set decreases exponentially with the number of degrees of freedom. Many-body integrable systems, which have an extensive number of conserved quantities, will not thermalise in the usual sense, but will equilibrate according to a generalized Gibbs ensemble.
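The FPUT behaviour is straightforward to reproduce numerically. The sketch below integrates the FPUT-α chain (fixed ends, unit masses) started in its lowest normal mode and tracks the energy of the first few modes; the chain length, coupling strength and time step are arbitrary choices.

```python
# Leapfrog (velocity Verlet) integration of the FPUT-alpha chain, started in
# its lowest normal mode. Parameters are illustrative choices.
import math

N, alpha = 32, 0.25            # moving masses, nonlinearity strength
dt, steps = 0.1, 100_000       # time step and number of integration steps

x = [math.sin(math.pi * (i + 1) / (N + 1)) for i in range(N)]  # lowest mode
v = [0.0] * N

def accel(x):
    a = []
    for i in range(N):
        xl = x[i - 1] if i > 0 else 0.0          # fixed walls
        xr = x[i + 1] if i < N - 1 else 0.0
        lin = xr - 2 * x[i] + xl
        nonlin = alpha * ((xr - x[i]) ** 2 - (x[i] - xl) ** 2)
        a.append(lin + nonlin)
    return a

def mode_energy(x, v, k):
    s = math.sqrt(2.0 / (N + 1))
    A = s * sum(x[i] * math.sin(math.pi * k * (i + 1) / (N + 1)) for i in range(N))
    Ad = s * sum(v[i] * math.sin(math.pi * k * (i + 1) / (N + 1)) for i in range(N))
    w = 2.0 * math.sin(math.pi * k / (2 * (N + 1)))
    return 0.5 * (Ad ** 2 + (w * A) ** 2)

a = accel(x)
for step in range(steps):
    v = [vi + 0.5 * dt * ai for vi, ai in zip(v, a)]
    x = [xi + dt * vi for xi, vi in zip(x, v)]
    a = accel(x)
    v = [vi + 0.5 * dt * ai for vi, ai in zip(v, a)]
    if step % 10_000 == 0:
        e = [mode_energy(x, v, k) for k in (1, 2, 3)]
        print(f"t={step * dt:7.0f}  E1={e[0]:.4f}  E2={e[1]:.4f}  E3={e[2]:.4f}")

# Rather than equipartitioning, the energy sloshes among the lowest few modes
# and (on time scales of order 1e4 here) returns almost entirely to mode 1 --
# the FPUT recurrence that delays thermalisation.
```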
Quantum systems
Some such phenomena that resist the tendency to thermalise include (see, e.g., quantum scars):
Conventional quantum scars, which refer to eigenstates with enhanced probability density along unstable periodic orbits much higher than one would intuitively predict from classical mechanics.
Perturbation-induced quantum scarring: despite the similarity in appearance to conventional scarring, these scars have a novel underlying mechanism stemming from the combined effect of nearly-degenerate states and spatially localized perturbations, and they can be employed to propagate quantum wave packets in a disordered quantum dot with high fidelity.
Many-body quantum scars.
Many-body localisation (MBL), quantum many-body systems retaining memory of their initial condition in local observables for arbitrary amounts of time.
Other systems that resist thermalisation and are better understood are quantum integrable systems and systems with dynamical symmetries.
References
Thermodynamics
Gibbs free energy
In thermodynamics, the Gibbs free energy (or Gibbs energy as the recommended name; symbol G) is a thermodynamic potential that can be used to calculate the maximum amount of work, other than pressure-volume work, that may be performed by a thermodynamically closed system at constant temperature and pressure. It also provides a necessary condition for processes such as chemical reactions that may occur under these conditions. The Gibbs free energy is expressed as

G = U + pV − TS = H − TS

where:
U is the internal energy of the system
H is the enthalpy of the system
S is the entropy of the system
T is the temperature of the system
V is the volume of the system
p is the pressure of the system (which must be equal to that of the surroundings for mechanical equilibrium).
The Gibbs free energy change (ΔG = ΔH − TΔS, measured in joules in SI) is the maximum amount of non-volume expansion work that can be extracted from a closed system (one that can exchange heat and work with its surroundings, but not matter) at fixed temperature and pressure. This maximum can be attained only in a completely reversible process. When a system transforms reversibly from an initial state to a final state under these conditions, the decrease in Gibbs free energy equals the work done by the system on its surroundings, minus the work of the pressure forces.
The Gibbs energy is the thermodynamic potential that is minimized when a system reaches chemical equilibrium at constant pressure and temperature when not driven by an applied electrolytic voltage. Its derivative with respect to the reaction coordinate of the system then vanishes at the equilibrium point. As such, a reduction in is necessary for a reaction to be spontaneous under these conditions.
The concept of Gibbs free energy, originally called available energy, was developed in the 1870s by the American scientist Josiah Willard Gibbs. In 1873, Gibbs described this "available energy" as
The initial state of the body, according to Gibbs, is supposed to be such that "the body can be made to pass from it to states of dissipated energy by reversible processes". In his 1876 magnum opus On the Equilibrium of Heterogeneous Substances, a graphical analysis of multi-phase chemical systems, he engaged his thoughts on chemical free energy in full.
If the reactants and products are all in their thermodynamic standard states, then the defining equation is written as ΔG° = ΔH° − TΔS°, where H is enthalpy, T is absolute temperature, and S is entropy.
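As a quick numerical illustration of ΔG° = ΔH° − TΔS°, the following Python sketch evaluates the sign of ΔG° at a few temperatures. The reaction values used are illustrative assumptions (roughly the order of magnitude of a simple exothermic gas-phase reaction), not data taken from this article.

def gibbs_change(delta_H, delta_S, T):
    """Gibbs free energy change (J/mol) at temperature T (K),
    from the enthalpy change delta_H (J/mol) and entropy change delta_S (J/(mol*K))."""
    return delta_H - T * delta_S

delta_H = -92.0e3   # J/mol, assumed exothermic reaction
delta_S = -199.0    # J/(mol*K), assumed entropy decrease
for T in (298.15, 500.0, 800.0):
    dG = gibbs_change(delta_H, delta_S, T)
    verdict = "spontaneous as written" if dG < 0 else "non-spontaneous as written"
    print(f"T = {T:6.1f} K  ->  dG = {dG/1000:8.2f} kJ/mol  ({verdict})")

With both ΔH° and ΔS° negative, the sign of ΔG° flips from negative to positive as the temperature rises past ΔH°/ΔS°, which is the behaviour discussed in the Overview below.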
Overview
According to the second law of thermodynamics, for systems reacting at fixed temperature and pressure without input of non-pressure-volume (pV) work, there is a general natural tendency to achieve a minimum of the Gibbs free energy.
A quantitative measure of the favorability of a given reaction under these conditions is the change ΔG (sometimes written "delta G" or "dG") in Gibbs free energy that is (or would be) caused by the reaction. As a necessary condition for the reaction to occur at constant temperature and pressure, ΔG must be smaller than the non-pressure-volume (non-pV, e.g. electrical) work, which is often equal to zero (then ΔG must be negative). ΔG equals the maximum amount of non-pV work that can be performed as a result of the chemical reaction for the case of a reversible process. If analysis indicates a positive ΔG for a reaction, then energy — in the form of electrical or other non-pV work — would have to be added to the reacting system for ΔG to be smaller than the non-pV work and make it possible for the reaction to occur.
One can think of ∆G as the amount of "free" or "useful" energy available to do non-pV work at constant temperature and pressure. The equation can be also seen from the perspective of the system taken together with its surroundings (the rest of the universe). First, one assumes that the given reaction at constant temperature and pressure is the only one that is occurring. Then the entropy released or absorbed by the system equals the entropy that the environment must absorb or release, respectively. The reaction will only be allowed if the total entropy change of the universe is zero or positive. This is reflected in a negative ΔG, and the reaction is called an exergonic process.
If two chemical reactions are coupled, then an otherwise endergonic reaction (one with positive ΔG) can be made to happen. The input of heat into an inherently endergonic reaction, such as the elimination of cyclohexanol to cyclohexene, can be seen as coupling an unfavorable reaction (elimination) to a favorable one (burning of coal or other provision of heat) such that the total entropy change of the universe is greater than or equal to zero, making the total Gibbs free energy change of the coupled reactions negative.
In traditional use, the term "free" was included in "Gibbs free energy" to mean "available in the form of useful work". The characterization becomes more precise if we add the qualification that it is the energy available for non-pressure-volume work. (An analogous, but slightly different, meaning of "free" applies in conjunction with the Helmholtz free energy, for systems at constant temperature). However, an increasing number of books and journal articles do not include the attachment "free", referring to G as simply "Gibbs energy". This is the result of a 1988 IUPAC meeting to set unified terminologies for the international scientific community, in which the removal of the adjective "free" was recommended. This standard, however, has not yet been universally adopted.
The name "free enthalpy" was also used for G in the past.
History
The quantity called "free energy" is a more advanced and accurate replacement for the outdated term affinity, which was used by chemists in the earlier years of physical chemistry to describe the force that caused chemical reactions.
In 1873, Josiah Willard Gibbs published A Method of Geometrical Representation of the Thermodynamic Properties of Substances by Means of Surfaces, in which he sketched the principles of his new equation that was able to predict or estimate the tendencies of various natural processes to ensue when bodies or systems are brought into contact. By studying the interactions of homogeneous substances in contact, i.e., bodies composed of part solid, part liquid, and part vapor, and by using a three-dimensional volume-entropy-internal energy graph, Gibbs was able to determine three states of equilibrium, i.e., "necessarily stable", "neutral", and "unstable", and whether or not changes would ensue. Further, Gibbs stated:
In this description, as used by Gibbs, ε refers to the internal energy of the body, η refers to the entropy of the body, and ν is the volume of the body...
Thereafter, in 1882, the German scientist Hermann von Helmholtz characterized the affinity as the largest quantity of work which can be gained when the reaction is carried out in a reversible manner, e.g., electrical work in a reversible cell. The maximum work is thus regarded as the diminution of the free, or available, energy of the system (Gibbs free energy G at T = constant, P = constant or Helmholtz free energy F at T = constant, V = constant), whilst the heat given out is usually a measure of the diminution of the total energy of the system (internal energy). Thus, G or F is the amount of energy "free" for work under the given conditions.
Until this point, the general view had been such that: "all chemical reactions drive the system to a state of equilibrium in which the affinities of the reactions vanish". Over the next 60 years, the term affinity came to be replaced with the term free energy. According to chemistry historian Henry Leicester, the influential 1923 textbook Thermodynamics and the Free Energy of Chemical Substances by Gilbert N. Lewis and Merle Randall led to the replacement of the term "affinity" by the term "free energy" in much of the English-speaking world.
Definitions
The Gibbs free energy is defined as G(p,T) = U + pV − TS,
which is the same as G(p,T) = H − TS,
where:
U is the internal energy (SI unit: joule),
p is pressure (SI unit: pascal),
V is volume (SI unit: m3),
T is the temperature (SI unit: kelvin),
S is the entropy (SI unit: joule per kelvin),
H is the enthalpy (SI unit: joule).
The expression for the infinitesimal reversible change in the Gibbs free energy as a function of its "natural variables" p and T, for an open system, subjected to the operation of external forces (for instance, electrical or magnetic) Xi, which cause the external parameters of the system ai to change by an amount dai, can be derived from the first law for reversible processes; the result is dG = −S dT + V dp + Σi μi dNi + Σi Xi dai,
where:
μi is the chemical potential of the i-th chemical component (SI unit: joules per particle or joules per mole).
Ni is the number of particles (or number of moles) composing the i-th chemical component.
This is one form of the Gibbs fundamental equation. In the infinitesimal expression, the term involving the chemical potential accounts for changes in Gibbs free energy resulting from an influx or outflux of particles. In other words, it holds for an open system or for a closed, chemically reacting system where the Ni are changing. For a closed, non-reacting system, this term may be dropped.
Any number of extra terms may be added, depending on the particular system being considered. Aside from mechanical work, a system may, in addition, perform numerous other types of work. For example, in the infinitesimal expression, the contractile work energy associated with a thermodynamic system that is a contractile fiber that shortens by an amount −dl under a force f would result in a term f dl being added. If a quantity of charge −de is acquired by a system at an electrical potential Ψ, the electrical work associated with this is −Ψ de, which would be included in the infinitesimal expression. Other work terms are added on per system requirements.
Each quantity in the equations above can be divided by the amount of substance, measured in moles, to form molar Gibbs free energy. The Gibbs free energy is one of the most important thermodynamic functions for the characterization of a system. It is a factor in determining outcomes such as the voltage of an electrochemical cell, and the equilibrium constant for a reversible reaction. In isothermal, isobaric systems, Gibbs free energy can be thought of as a "dynamic" quantity, in that it is a representative measure of the competing effects of the enthalpic and entropic driving forces involved in a thermodynamic process.
The temperature dependence of the Gibbs energy for an ideal gas is given by the Gibbs–Helmholtz equation, and its pressure dependence is given by G/N = G°/N + kT ln(p/p°),
or more conveniently as its chemical potential: μ = μ° + RT ln(p/p°).
In non-ideal systems, fugacity comes into play.
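The ideal-gas pressure dependence above is easy to evaluate numerically. The sketch below is a minimal illustration; the temperature and pressures are arbitrary choices, and the standard pressure is taken as 1 bar. It computes the shift μ − μ° = RT ln(p/p°).

import math

R = 8.314462618  # gas constant, J/(mol*K)

def mu_shift(T, p, p_standard=1.0e5):
    """Ideal-gas chemical potential relative to the standard state:
    mu - mu_standard = R*T*ln(p / p_standard)."""
    return R * T * math.log(p / p_standard)

T = 298.15  # K
for p in (0.5e5, 1.0e5, 2.0e5, 10.0e5):
    print(f"p = {p/1e5:5.1f} bar  ->  mu - mu_std = {mu_shift(T, p)/1000:7.3f} kJ/mol")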
Derivation
The Gibbs free energy total differential with respect to natural variables may be derived by Legendre transforms of the internal energy.
The definition of G from above is G = U + pV − TS.
Taking the total differential, we have dG = dU + p dV + V dp − T dS − S dT.
Replacing dU with the result from the first law for reversible processes, dU = T dS − p dV + Σi μi dNi, gives dG = V dp − S dT + Σi μi dNi.
The natural variables of G are then p, T, and {Ni}.
Homogeneous systems
Because S, V, and Ni are extensive variables, an Euler relation allows easy integration of dU: U = TS − pV + Σi μi Ni.
Because some of the natural variables of G are intensive, dG may not be integrated using Euler relations as is the case with internal energy. However, simply substituting the above integrated result for U into the definition of G gives a standard expression for G: G = Σi μi Ni.
This result shows that the chemical potential of a substance is its (partial) mol(ecul)ar Gibbs free energy. It applies to homogeneous, macroscopic systems, but not to all thermodynamic systems.
Gibbs free energy of reactions
The system under consideration is held at constant temperature and pressure, and is closed (no matter can come in or out). The Gibbs energy of any system is G = U + pV − TS, and an infinitesimal change in G, at constant temperature and pressure, yields dG = dU + p dV − T dS.
By the first law of thermodynamics, a change in the internal energy U is given by dU = δQ + δW,
where δQ is energy added as heat, and δW is energy added as work. The work done on the system may be written as δW = −p dV + δW', where −p dV is the mechanical work of compression/expansion done on or by the system and δW' is all other forms of work, which may include electrical, magnetic, etc. Then dU = δQ − p dV + δW',
and the infinitesimal change in G is dG = δQ − T dS + δW'.
The second law of thermodynamics states that for a closed system at constant temperature (in a heat bath), δQ ≤ T dS, and so it follows that dG ≤ δW'.
Assuming that only mechanical work is done, this simplifies to dG ≤ 0.
This means that for such a system when not in equilibrium, the Gibbs energy will always be decreasing, and in equilibrium, the infinitesimal change dG will be zero. In particular, this will be true if the system is experiencing any number of internal chemical reactions on its path to equilibrium.
In electrochemical thermodynamics
When electric charge dQele is passed between the electrodes of an electrochemical cell generating an emf Ɛ, an electrical work term appears in the expression for the change in Gibbs energy: dG = −S dT + V dp + Ɛ dQele,
where S is the entropy, V is the system volume, p is its pressure and T is its absolute temperature.
The combination (Ɛ, Qele) is an example of a conjugate pair of variables. At constant pressure the above equation produces a Maxwell relation that links the change in open cell voltage with temperature T (a measurable quantity) to the change in entropy S when charge is passed isothermally and isobarically. The latter is closely related to the reaction entropy of the electrochemical reaction that lends the battery its power. This Maxwell relation is: (∂Ɛ/∂T)p,Q = −(∂S/∂Qele)T,p.
If a mole of ions goes into solution (for example, in a Daniell cell, as discussed below) the charge through the external circuit is ΔQele = −n0F0,
where n0 is the number of electrons/ion, and F0 is the Faraday constant and the minus sign indicates discharge of the cell. Assuming constant pressure and volume, the thermodynamic properties of the cell are related strictly to the behavior of its emf by ΔH = −n0F0(Ɛ − T dƐ/dT),
where ΔH is the enthalpy of reaction. The quantities on the right are all directly measurable.
Useful identities to derive the Nernst equation
During a reversible electrochemical reaction at constant temperature and pressure, the following equations involving the Gibbs free energy hold:
ΔrG = ΔrG° + RT ln Qr (see chemical equilibrium),
ΔrG° = −RT ln Keq (for a system at chemical equilibrium),
ΔrG = welec (for a reversible electrochemical process at constant temperature and pressure),
welec = −nFEcell (definition of Ecell),
and rearranging gives nFEcell = nFE°cell − RT ln Qr,
or Ecell = E°cell − (RT/nF) ln Qr, which relates the cell potential resulting from the reaction to the equilibrium constant and reaction quotient for that reaction (the Nernst equation),
where
ΔrG, Gibbs free energy change per mole of reaction,
ΔrG°, Gibbs free energy change per mole of reaction for unmixed reactants and products at standard conditions (i.e. 298 K, 100 kPa, 1 M of each reactant and product),
R, gas constant,
T, absolute temperature,
ln, natural logarithm,
Qr, reaction quotient (unitless),
Keq, equilibrium constant (unitless),
welec, electrical work in a reversible process (chemistry sign convention),
n, number of moles of electrons transferred in the reaction,
F, Faraday constant (charge per mole of electrons),
Ecell, cell potential,
E°cell, standard cell potential.
Moreover, we also have Keq = exp(−ΔrG°/RT),
which relates the equilibrium constant with the Gibbs free energy. This implies that at equilibrium Qr = Keq
and ΔrG = 0.
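The identities above translate directly into a small calculation. The sketch below evaluates the Nernst equation and the associated Gibbs energy change; the standard potential and electron count are illustrative values loosely based on a Daniell-type cell, i.e., assumptions rather than numbers from the text.

import math

R = 8.314462618   # gas constant, J/(mol*K)
F = 96485.33212   # Faraday constant, C/mol

def nernst(E_standard, n, Q, T=298.15):
    """Cell potential from E = E_standard - (R*T)/(n*F) * ln(Q)."""
    return E_standard - (R * T) / (n * F) * math.log(Q)

def delta_G(E, n):
    """Gibbs energy change per mole of reaction, dG = -n*F*E (J/mol)."""
    return -n * F * E

E0, n = 1.10, 2   # assumed standard potential (V) and electrons transferred
print(f"equilibrium constant K = {math.exp(n * F * E0 / (R * 298.15)):.3e}")
for Q in (1e-3, 1.0, 1e3):
    E = nernst(E0, n, Q)
    print(f"Q = {Q:8.0e}  ->  E = {E:6.3f} V,  dG = {delta_G(E, n)/1000:8.1f} kJ/mol")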
Standard Gibbs energy change of formation
The standard Gibbs free energy of formation of a compound is the change of Gibbs free energy that accompanies the formation of 1 mole of that substance from its component elements, in their standard states (the most stable form of the element at 25 °C and 100 kPa). Its symbol is ΔfG˚.
All elements in their standard states (diatomic oxygen gas, graphite, etc.) have standard Gibbs free energy change of formation equal to zero, as there is no change involved.
ΔfG = ΔfG˚ + RT ln Qf,
where Qf is the reaction quotient.
At equilibrium, ΔfG = 0, and Qf = K, so the equation becomes
ΔfG˚ = −RT ln K,
where K is the equilibrium constant of the formation reaction of the substance from the elements in their standard states.
Graphical interpretation by Gibbs
Gibbs free energy was originally defined graphically. In 1873, American scientist Willard Gibbs published his first thermodynamics paper, "Graphical Methods in the Thermodynamics of Fluids", in which Gibbs used the two coordinates of the entropy and volume to represent the state of the body. In his second follow-up paper, "A Method of Geometrical Representation of the Thermodynamic Properties of Substances by Means of Surfaces", published later that year, Gibbs added in the third coordinate of the energy of the body, defined on three figures. In 1874, Scottish physicist James Clerk Maxwell used Gibbs' figures to make a 3D energy-entropy-volume thermodynamic surface of a fictitious water-like substance. Thus, in order to understand the concept of Gibbs free energy, it may help to understand its interpretation by Gibbs as section AB on his figure 3, and as Maxwell sculpted that section on his 3D surface figure.
See also
Bioenergetics
Calphad (CALculation of PHAse Diagrams)
Critical point (thermodynamics)
Electron equivalent
Enthalpy-entropy compensation
Free entropy
Gibbs–Helmholtz equation
Grand potential
Non-random two-liquid model (NRTL model) – Gibbs energy of excess and mixing calculation and activity coefficients
Spinodal – Spinodal Curves (Hessian matrix)
Standard molar entropy
Thermodynamic free energy
UNIQUAC model – Gibbs energy of excess and mixing calculation and activity coefficients
Notes and references
External links
IUPAC definition (Gibbs energy)
Gibbs Free Energy – Georgia State University
Physical quantities
State functions
Thermodynamic free energy
Hodgkin–Huxley model
The Hodgkin–Huxley model, or conductance-based model, is a mathematical model that describes how action potentials in neurons are initiated and propagated. It is a set of nonlinear differential equations that approximates the electrical characteristics of excitable cells such as neurons and muscle cells. It is a continuous-time dynamical system.
Alan Hodgkin and Andrew Huxley described the model in 1952 to explain the ionic mechanisms underlying the initiation and propagation of action potentials in the squid giant axon. They received the 1963 Nobel Prize in Physiology or Medicine for this work.
Basic components
The typical Hodgkin–Huxley model treats each component of an excitable cell as an electrical element (as shown in the figure). The lipid bilayer is represented as a capacitance (Cm). Voltage-gated ion channels are represented by electrical conductances (gn, where n is the specific ion channel) that depend on both voltage and time. Leak channels are represented by linear conductances (gL). The electrochemical gradients driving the flow of ions are represented by voltage sources (En) whose voltages are determined by the ratio of the intra- and extracellular concentrations of the ionic species of interest. Finally, ion pumps are represented by current sources (Ip). The membrane potential is denoted by Vm.
Mathematically, the current flowing through the lipid bilayer is written as Ic = Cm dVm/dt,
and the current through a given ion channel is the product of that channel's conductance and the driving potential for the specific ion, Ii = gi (Vm − Vi),
where Vi is the reversal potential of the specific ion channel.
Thus, for a cell with sodium and potassium channels, the total current through the membrane is given by: I = Cm dVm/dt + gK (Vm − VK) + gNa (Vm − VNa) + gl (Vm − Vl),
where I is the total membrane current per unit area, Cm is the membrane capacitance per unit area, gK and gNa are the potassium and sodium conductances per unit area, respectively, VK and VNa are the potassium and sodium reversal potentials, respectively, and gl and Vl are the leak conductance per unit area and leak reversal potential, respectively. The time dependent elements of this equation are Vm, gNa, and gK, where the last two conductances depend explicitly on the membrane voltage (Vm) as well.
Ionic current characterization
In voltage-gated ion channels, the channel conductance is a function of both time and voltage (gn(t, V) in the figure), while in leak channels it is a constant (gL in the figure). The current generated by ion pumps is dependent on the ionic species specific to that pump. The following sections will describe these formulations in more detail.
Voltage-gated ion channels
Using a series of voltage clamp experiments and by varying extracellular sodium and potassium concentrations, Hodgkin and Huxley developed a model in which the properties of an excitable cell are described by a set of four ordinary differential equations. Together with the equation for the total current mentioned above, these are:
I = Cm dVm/dt + ḡK n^4 (Vm − VK) + ḡNa m^3 h (Vm − VNa) + ḡl (Vm − Vl),
dn/dt = αn(Vm)(1 − n) − βn(Vm) n,
dm/dt = αm(Vm)(1 − m) − βm(Vm) m,
dh/dt = αh(Vm)(1 − h) − βh(Vm) h,
where I is the current per unit area, and αi and βi are rate constants for the i-th ion channel, which depend on voltage but not time. ḡn is the maximal value of the conductance. n, m, and h are dimensionless probabilities between 0 and 1 that are associated with potassium channel subunit activation, sodium channel subunit activation, and sodium channel subunit inactivation, respectively. For instance, given that potassium channels in squid giant axon are made up of four subunits which all need to be in the open state for the channel to allow the passage of potassium ions, the n needs to be raised to the fourth power. For p = (n, m, h), αp and βp take the form αp(Vm) = p∞(Vm)/τp and βp(Vm) = (1 − p∞(Vm))/τp.
p∞ and (1 − p∞) are the steady state values for activation and inactivation, respectively, and are usually represented by Boltzmann equations as functions of Vm. In the original paper by Hodgkin and Huxley, the functions α and β are given by
αn(V) = 0.01(V + 10)/(exp((V + 10)/10) − 1), βn(V) = 0.125 exp(V/80),
αm(V) = 0.1(V + 25)/(exp((V + 25)/10) − 1), βm(V) = 4 exp(V/18),
αh(V) = 0.07 exp(V/20), βh(V) = 1/(exp((V + 30)/10) + 1),
where V denotes the negative depolarization in mV.
In many current software programs, Hodgkin–Huxley type models generalize α and β to more general voltage-dependent rate functions whose parameters are fitted to experimental data.
In order to characterize voltage-gated channels, the equations can be fitted to voltage clamp data. For a derivation of the Hodgkin–Huxley equations under voltage-clamp, see the references. Briefly, when the membrane potential is held at a constant value (i.e., with a voltage clamp), for each value of the membrane potential the nonlinear gating equations reduce to linear equations with solutions of the form m(t) = m∞ − (m∞ − m0) exp(−t/τm), h(t) = h∞ − (h∞ − h0) exp(−t/τh), n(t) = n∞ − (n∞ − n0) exp(−t/τn).
Thus, for every value of membrane potential the sodium and potassium currents can be described by INa(t) = ḡNa m(t)^3 h(t) (Vm − VNa) and IK(t) = ḡK n(t)^4 (Vm − VK).
In order to arrive at the complete solution for a propagated action potential, one must write the current term I on the left-hand side of the first differential equation in terms of V, so that the equation becomes an equation for voltage alone. The relation between I and V can be derived from cable theory and is given by I = (a/2R) ∂²V/∂x²,
where a is the radius of the axon, R is the specific resistance of the axoplasm, and x is the position along the nerve fiber. Substitution of this expression for I transforms the original set of equations into a set of partial differential equations, because the voltage becomes a function of both x and t.
The Levenberg–Marquardt algorithm is often used to fit these equations to voltage-clamp data.
While the original experiments involved only sodium and potassium channels, the Hodgkin–Huxley model can also be extended to account for other species of ion channels.
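To show how the equations above fit together in practice, here is a minimal forward-Euler integration of the four state variables. It uses a common modern-convention parametrization (membrane potential in mV with rest near −65 mV) rather than the original 1952 negative-depolarization convention quoted above; the conductances, reversal potentials, and rate functions below are standard textbook values stated here as assumptions, not quantities taken from this article.

import numpy as np

# Membrane and channel parameters (assumed, modern-convention values).
C_m = 1.0                                # membrane capacitance, uF/cm^2
g_Na, g_K, g_L = 120.0, 36.0, 0.3        # maximal conductances, mS/cm^2
E_Na, E_K, E_L = 50.0, -77.0, -54.387    # reversal potentials, mV

# Rate functions alpha(V), beta(V) in 1/ms, with V in mV.
def a_n(V): return 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
def b_n(V): return 0.125 * np.exp(-(V + 65.0) / 80.0)
def a_m(V): return 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
def b_m(V): return 4.0 * np.exp(-(V + 65.0) / 18.0)
def a_h(V): return 0.07 * np.exp(-(V + 65.0) / 20.0)
def b_h(V): return 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))

def simulate(I_ext=10.0, t_max=50.0, dt=0.01):
    """Forward-Euler integration of (V, n, m, h) under a constant
    injected current I_ext (uA/cm^2); returns the voltage trace in mV."""
    V = -65.0
    n = a_n(V) / (a_n(V) + b_n(V))   # start the gates at their steady-state values
    m = a_m(V) / (a_m(V) + b_m(V))
    h = a_h(V) / (a_h(V) + b_h(V))
    trace = []
    for _ in range(int(t_max / dt)):
        I_Na = g_Na * m**3 * h * (V - E_Na)
        I_K  = g_K  * n**4     * (V - E_K)
        I_L  = g_L             * (V - E_L)
        dV = (I_ext - I_Na - I_K - I_L) / C_m
        n += dt * (a_n(V) * (1.0 - n) - b_n(V) * n)
        m += dt * (a_m(V) * (1.0 - m) - b_m(V) * m)
        h += dt * (a_h(V) * (1.0 - h) - b_h(V) * h)
        V += dt * dV
        trace.append(V)
    return np.array(trace)

V_trace = simulate()
print(f"peak membrane potential: {V_trace.max():.1f} mV")

With the assumed parameters and a sustained 10 uA/cm^2 stimulus, the trace shows repetitive spiking; a smaller time step or a higher-order integrator gives more accurate spike shapes.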
Leak channels
Leak channels account for the natural permeability of the membrane to ions and take the form of the equation for voltage-gated channels, where the conductance is a constant. Thus, the leak current due to passive leak ion channels in the Hodgkin–Huxley formalism is Il = gl (Vm − Vl).
Pumps and exchangers
The membrane potential depends upon the maintenance of ionic concentration gradients across it. The maintenance of these concentration gradients requires active transport of ionic species. The sodium-potassium and sodium-calcium exchangers are the best known of these. Some of the basic properties of the Na/Ca exchanger have already been well-established: the stoichiometry of exchange is 3 Na+: 1 Ca2+ and the exchanger is electrogenic and voltage-sensitive. The Na/K exchanger has also been described in detail, with a 3 Na+: 2 K+ stoichiometry.
Mathematical properties
The Hodgkin–Huxley model can be thought of as a differential equation system with four state variables, Vm(t), n(t), m(t), and h(t), that change with respect to time t. The system is difficult to study because it is a nonlinear system, cannot be solved analytically, and therefore has no closed-form solution. However, there are many numerical methods available to analyze the system. Certain properties and general behaviors, such as limit cycles, can be proven to exist.
Center manifold
Because there are four state variables, visualizing the path in phase space can be difficult. Usually two variables are chosen, the voltage Vm(t) and the potassium gating variable n(t), allowing one to visualize the limit cycle. However, one must be careful because this is an ad-hoc method of visualizing the 4-dimensional system. This does not prove the existence of the limit cycle.
A better projection can be constructed from a careful analysis of the Jacobian of the system, evaluated at the equilibrium point. Specifically, the eigenvalues of the Jacobian are indicative of the center manifold's existence. Likewise, the eigenvectors of the Jacobian reveal the center manifold's orientation. The Hodgkin–Huxley model has two negative eigenvalues and two complex eigenvalues with slightly positive real parts. The eigenvectors associated with the two negative eigenvalues will reduce to zero as time t increases. The remaining two complex eigenvectors define the center manifold. In other words, the 4-dimensional system collapses onto a 2-dimensional plane. Any solution starting off the center manifold will decay towards the center manifold. Furthermore, the limit cycle is contained on the center manifold.
Bifurcations
If the injected current is used as a bifurcation parameter, then the Hodgkin–Huxley model undergoes a Hopf bifurcation. As with most neuronal models, increasing the injected current will increase the firing rate of the neuron. One consequence of the Hopf bifurcation is that there is a minimum firing rate. This means that either the neuron is not firing at all (corresponding to zero frequency), or firing at the minimum firing rate. Because of the all-or-none principle, there is no smooth increase in action potential amplitude, but rather there is a sudden "jump" in amplitude. The resulting transition is known as a canard.
Improvements and alternative models
The Hodgkin–Huxley model is regarded as one of the great achievements of 20th-century biophysics. Nevertheless, modern Hodgkin–Huxley-type models have been extended in several important ways:
Additional ion channel populations have been incorporated based on experimental data.
The Hodgkin–Huxley model has been modified to incorporate transition state theory and produce thermodynamic Hodgkin–Huxley models.
Models often incorporate highly complex geometries of dendrites and axons, often based on microscopy data.
Conductance-based models similar to Hodgkin–Huxley model incorporate the knowledge about cell types defined by single cell transcriptomics.
Stochastic models of ion-channel behavior, leading to stochastic hybrid systems.
The Poisson–Nernst–Planck (PNP) model is based on a mean-field approximation of ion interactions and continuum descriptions of concentration and electrostatic potential.
Several simplified neuronal models have also been developed (such as the FitzHugh–Nagumo model), facilitating efficient large-scale simulation of groups of neurons, as well as mathematical insight into dynamics of action potential generation.
See also
Anode break excitation
Autowave
Neural circuit
GHK flux equation
Goldman equation
Memristor
Neural accommodation
Reaction–diffusion
Theta model
Rulkov map
Chialvo map
References
Further reading
External links
Interactive Javascript simulation of the HH model Runs in any HTML5 – capable browser. Allows for changing the parameters of the model and current injection.
Interactive Java applet of the HH model Parameters of the model can be changed as well as excitation parameters and phase space plottings of all the variables is possible.
Direct link to Hodgkin–Huxley model and a Description in BioModels Database
Neural Impulses: The Action Potential In Action by Garrett Neske, The Wolfram Demonstrations Project
Interactive Hodgkin–Huxley model by Shimon Marom, The Wolfram Demonstrations Project
ModelDB A computational neuroscience source code database containing 4 versions (in different simulators) of the original Hodgkin–Huxley model and hundreds of models that apply the Hodgkin–Huxley model to other channels in many electrically excitable cell types.
Several articles about the stochastic version of the model and its link with the original one.
Nonlinear systems
Electrophysiology
Ion channels
Computational neuroscience
Eight disciplines problem solving
Eight Disciplines Methodology (8D) is a method or model developed at Ford Motor Company used to approach and to resolve problems, typically employed by quality engineers or other professionals. Focused on product and process improvement, its purpose is to identify, correct, and eliminate recurring problems. It establishes a permanent corrective action based on statistical analysis of the problem and on the origin of the problem by determining the root causes. Although it originally comprised eight stages, or 'disciplines', it was later augmented by an initial planning stage. 8D follows the logic of the PDCA cycle. The disciplines are:
D0: Preparation and Emergency Response Actions: Plan for solving the problem and determine the prerequisites. Provide emergency response actions.
D1: Use a Team: Establish a team of people with product/process knowledge. Teammates provide new perspectives and different ideas when it comes to problem solving.
D2: Describe the Problem: Specify the problem by identifying in quantifiable terms the who, what, where, when, why, how, and how many (5W2H) for the problem.
D3: Develop Interim Containment Plan: Define and implement containment actions to isolate the problem from any customer.
D4: Determine and Verify Root Causes and Escape Points: Identify all applicable causes that could explain why the problem has occurred. Also identify why the problem was not noticed at the time it occurred. All causes shall be verified or proved. One can use five whys or Ishikawa diagrams to map causes against the effect or problem identified.
D5: Verify Permanent Corrections (PCs) for Problem that will resolve the problem for the customer: Using pre-production programs, quantitatively confirm that the selected correction will resolve the problem. (Verify that the correction will actually solve the problem).
D6: Define and Implement Corrective Actions: Define and implement the best corrective actions. Also, validate corrective actions with empirical evidence of improvement.
D7: Prevent Recurrence / System Problems: Modify the management systems, operation systems, practices, and procedures to prevent recurrence of this and similar problems.
D8: Congratulate the Main Contributors to your Team: Recognize the collective efforts of the team. The team needs to be formally thanked by the organization.
8Ds has become a standard in the automotive, assembly, and other industries that require a thorough structured problem-solving process using a team approach.
Ford Motor Company's team-oriented problem solving
The executives of the Powertrain Organization (transmissions, chassis, engines) wanted a methodology where teams (design engineering, manufacturing engineering, and production) could work on recurring chronic problems. In 1986, the assignment was given to develop a manual and a subsequent course that would achieve a new approach to solving identified engineering design and manufacturing problems. The manual for this methodology was documented and defined in Team Oriented Problem Solving (TOPS), first published in 1987. The manual and subsequent course material were piloted at Ford World Headquarters in Dearborn, Michigan. Ford refers to their current variant as G8D (Global 8D). The Ford 8Ds manual is extensive and covers chapter by chapter how to go about addressing, quantifying, and resolving engineering issues. It begins with a cross-functional team and concludes with a successful demonstrated resolution of the problem. Containment actions may or may not be needed based on where the problem occurred in the life cycle of the product.
Usage
Many disciplines are typically involved in the "8Ds" methodology. The tools used can be found in textbooks and reference materials used by quality assurance professionals. For example, an "Is/Is Not" worksheet is a common tool employed at D2, and Ishikawa, or "fishbone," diagrams and "5-why analysis" are common tools employed at step D4.
In the late 1990s, Ford developed a revised version of the 8D process that they call "Global 8D" (G8D), which is the current global standard for Ford and many other companies in the automotive supply chain. The major revisions to the process are as follows:
Addition of a D0 (D-Zero) step as a gateway to the process. At D0, the team documents the symptoms that initiated the effort along with any emergency response actions (ERAs) that were taken before formal initiation of the G8D. D0 also incorporates standard assessing questions meant to determine whether a full G8D is required. The assessing questions are meant to ensure that in a world of limited problem-solving resources, the efforts required for a full team-based problem-solving effort are limited to those problems that warrant these resources.
Addition of the notion of escape points to D4 through D6. An 'escape point' is the earliest control point in the control system following the root cause of a problem that should have detected that problem but failed to do so. The idea here is to consider not only the root cause, but also what went wrong with the control system in allowing this problem to escape. Global 8D requires the team to identify and verify an escape point at D4. Then, through D5 and D6, the process requires the team to choose, verify, implement, and validate permanent corrective actions to address the escape point.
Recently, the 8D process has been employed significantly outside the auto industry. As part of lean initiatives and continuous-improvement processes it is employed extensively in the food manufacturing, health care, and high-tech manufacturing industries.
Benefits
The benefits of the 8D methodology include effective approaches to finding a root cause, developing proper actions to eliminate root causes, and implementing the permanent corrective action. The 8D methodology also helps to explore the control systems that allowed the problem to escape. The Escape Point is studied for the purpose of improving the ability of the Control System to detect the failure or cause when and if it should occur again.
Finally the Prevention Loop explores the systems that permitted the condition that allowed the Failure and Cause Mechanism to exist in the first place.
Prerequisites
The methodology requires training in the 8D problem-solving process as well as appropriate data collection and analysis tools such as Pareto charts, fishbone diagrams, and process maps.
Problem solving tools
The following tools can be used within 8D:
Ishikawa diagrams also known as cause-and-effect or fishbone diagrams
Pareto charts or Pareto diagrams
5 Whys
5W and 2H (who, what, where, when, why, how, how many or how much)
Statistical process control
Scatter plots
Design of experiments
Check sheet
Histograms
FMEA
Flowcharts or process maps
Background of common corrective actions to dispose of nonconforming items
The 8D methodology was first described in a Ford manual in 1987. The manual describes the eight-step methodology to address chronic product and process problems. The 8Ds included several concepts of effective problem solving, including taking corrective actions and containing nonconforming items. These two steps have been very common in most manufacturing facilities, including government and military installations. In 1974, the U.S. Department of Defense (DOD) released “MIL-STD 1520 Corrective Action and Disposition System for Nonconforming Material”. This 13-page standard defines establishing some corrective actions and then taking containment actions on nonconforming material or items. It is focused on inspection for defects and disposing of them. The standard was officially cancelled in 1995, but its basic ideas of corrective action and containment of defectives were also common to Ford Motor Company, a major supplier to the government in World War II. Corrective actions and containment of poor quality parts were part of the manual and course for the automotive industry and are well known to many companies. Ford's 60-page manual covers details associated with each step in their 8D problem-solving manual and the actions to take to deal with identified problems.
Military usage
The exact history of the 8D method remains disputed, as many publications and websites state that it originates from the US military. Indeed, MIL-STD-1520C outlines a set of requirements for contractors on how they should organize themselves with respect to non-conforming materials. Developed in 1974 and cancelled in February 1995 as part of the Perry memo, it is best compared to the ISO 9001 standard that currently exists, as it expresses a similar philosophy. The aforementioned military standard does outline some aspects that appear in the 8D method; however, it does not provide the same structure that the 8D methodology offers. Taking into account the fact that the Ford Motor Company played an instrumental role in producing army vehicles during the Second World War and in the decades after, it could very well be the case that MIL-STD-1520C stood as a model for today's 8D method.
Relationship between 8D and FMEA
FMEA (failure mode and effect analysis) is a tool generally used in the planning of product or process design. The relationships between 8D and FMEA are outlined below:
The problem statements and descriptions are sometimes linked between both documents. An 8D can utilize pre-brainstormed information from a FMEA to assist in looking for potential problems.
Possible causes in a FMEA can immediately be used to jump start 8D Fishbone or Ishikawa diagrams. Brainstorming information that is already known is not a good use of time or resources.
Data and brainstorming collected during an 8D can be placed into a FMEA for future planning of new product or process quality. This allows a FMEA to consider actual failures, occurring as failure modes and causes, becoming more effective and complete.
The design or process controls in a FMEA can be used in verifying the root cause and Permanent Corrective Action in an 8D.
The FMEA and 8D should reconcile each failure and cause by cross documenting failure modes, problem statements and possible causes. Each FMEA can be used as a database of possible causes of failure as an 8D is developed.
See also
Complaint system
Corrective and preventive action
Failure mode and effects analysis
Fault tree analysis
Quality management system (QMS)
Eight dimensions of quality
Problem solving
References
External links
8-D Problem Solving Overview from the Ford Motor Company
Laurie Rambaud (2011), 8D Structured Problem Solving: A Guide to Creating High Quality 8D Reports, PHRED Solutions, Second Edition 978-0979055317
Society of Manufacturing Engineers: SME,
Chris S.P. Visser (2017), 8D Problem solving explained – Turning operational failures into knowledge to drive your strategic and competitive advantages,
Quality
Problem solving methods
Inexact differential
An inexact differential or imperfect differential is a differential whose integral is path dependent. It is most often used in thermodynamics to express changes in path-dependent quantities such as heat and work, but is defined more generally within mathematics as a type of differential form. In contrast, an integral of an exact differential is always path independent since the integral acts to invert the differential operator. Consequently, a quantity with an inexact differential cannot be expressed as a function of only the variables within the differential. That is, its value cannot be inferred just by looking at the initial and final states of a given system. Inexact differentials are primarily used in calculations involving heat and work because they are path functions, not state functions.
Definition
An inexact differential is a differential for which the integral over some two paths with the same end points is different. Specifically, there exist integrable paths γ1 and γ2 with the same start and end points such that the integral of the differential along γ1 differs from the integral along γ2.
In this case, the two integrals are denoted separately (for example, with subscripts indicating the path) to make explicit the path dependence of the change of the quantity under consideration.
More generally, an inexact differential is a differential form which is not an exact differential, i.e., a form that cannot be written as the differential df of any function f.
The fundamental theorem of calculus for line integrals requires path independence in order to express the values of a given vector field in terms of the partial derivatives of another function that is the multivariate analogue of the antiderivative. This is because there can be no unique representation of an antiderivative for inexact differentials since their variation is inconsistent along different paths. This stipulation of path independence is a necessary addendum to the fundamental theorem of calculus because in one-dimensional calculus there is only one path in between two points defined by a function.
Notation
Thermodynamics
Instead of the differential symbol d, the symbol δ is used, a convention which originated in the 19th-century work of German mathematician Carl Gottfried Neumann, indicating that Q (heat) and W (work) are path-dependent, while U (internal energy) is not.
Statistical Mechanics
Within statistical mechanics, inexact differentials are often denoted with a bar through the differential operator, đ. In LaTeX the command "\rlap{\textrm{d}}{\bar{\phantom{w}}}" is an approximation, or simply "\dj" gives the đ character, which needs the T1 encoding.
Mathematics
Within mathematics, inexact differentials are usually just referred to more generally as differential forms, and are written using the ordinary notation for such forms.
Examples
Total distance
When you walk from a point A to a point B along a line (without changing direction), your net displacement and total distance covered are both equal to the length L of that line. If you then return to point A (without changing direction), then your net displacement is zero while your total distance covered is 2L. This example captures the essential idea behind the inexact differential in one dimension. Note that if we allowed ourselves to change directions, then we could take a step forward and then backward at any point in time in going from A to B and in so doing increase the overall distance covered to an arbitrarily large number while keeping the net displacement constant.
Reworking the above with differentials and taking the path to be along the x-axis, the net displacement differential is dx, an exact differential with antiderivative x. On the other hand, the total distance differential is |dx|, which does not have an antiderivative. The path taken is x(t), where there exists a time t1 such that x(t) is strictly increasing before t1 and strictly decreasing afterward. Then dx is positive before t1 and negative afterward, yielding the integrals ∫ dx = 0 and ∫ |dx| = 2L,
exactly the results we expected from the verbal argument before.
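A short numerical check of this one-dimensional example (a minimal sketch; the out-and-back path below is an arbitrary choice with L = 1):

import numpy as np

t = np.linspace(0.0, 2.0, 20001)
x = np.where(t <= 1.0, t, 2.0 - t)   # walk out to x = 1, then back to x = 0
dx = np.diff(x)

net_displacement = dx.sum()          # integral of dx   -> x(end) - x(start) = 0
total_distance = np.abs(dx).sum()    # integral of |dx| -> 2L = 2

print(f"net displacement ~ {net_displacement:.4f}")
print(f"total distance   ~ {total_distance:.4f}")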
First law of thermodynamics
Inexact differentials show up explicitly in the first law of thermodynamics, dU = δQ − δW,
where U is the energy, δQ is the differential change in heat and δW is the differential change in work. Based on the constants of the thermodynamic system, we are able to parameterize the average energy in several different ways. E.g., in the first stage of the Carnot cycle a gas is heated by a reservoir, giving us an isothermal expansion of that gas. Some differential amount of heat δQ enters the gas. During the second stage, the gas is allowed to expand adiabatically, outputting some differential amount of work δW. The third stage is similar to the first stage, except the heat is lost by contact with a cold reservoir, while the fourth stage is like the second except work is done onto the system by the surroundings to compress the gas. Because the overall changes in heat and work are different over different parts of the cycle, there is a nonzero net change in the heat and work, indicating that the differentials δQ and δW must be inexact differentials.
Internal energy U is a state function, meaning its change can be inferred just by comparing two different states of the system (independently of its transition path), which we can therefore indicate with U1 and U2.
Since we can go from state 1 to state 2 either by providing heat or work, such a change of state does not uniquely identify the amount of work done to the system or heat transferred, but only the change in internal energy ΔU = U2 − U1.
Heat and work
A fire requires heat, fuel, and an oxidizing agent. The energy required to overcome the activation energy barrier for combustion is transferred as heat into the system, resulting in changes to the system's internal energy. In a process, the energy input to start a fire may comprise both work and heat, such as when one rubs tinder (work) and experiences friction (heat) to start a fire. The ensuing combustion is highly exothermic, which releases heat. The overall change in internal energy does not reveal the mode of energy transfer and quantifies only the net work and heat. The difference between initial and final states of the system's internal energy does not account for the extent of the energy interactions transpired. Therefore, internal energy is a state function (i.e. exact differential), while heat and work are path functions (i.e. inexact differentials) because integration must account for the path taken.
Integrating factors
It is sometimes possible to convert an inexact differential into an exact one by means of an integrating factor.
The most common example of this in thermodynamics is the definition of entropy: dS = δQrev / T.
In this case, δQ is an inexact differential, because its effect on the state of the system can be compensated by δW.
However, when divided by the absolute temperature and when the exchange occurs at reversible conditions (therefore the rev subscript), it produces an exact differential: the entropy is also a state function.
Example
Consider the inexact differential form y dx + 2x dy. This must be inexact by considering going to the point (1, 1). If we first increase x and then increase y, then that corresponds to first integrating over x and then over y. Integrating over x first contributes 0 (since y = 0 along that leg) and then integrating over y contributes 2. Thus, along the first path we get a value of 2. However, along the second path we get a value of 1. We can make an exact differential by multiplying it by y, yielding y² dx + 2xy dy = d(xy²). And so y² dx + 2xy dy is an exact differential.
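The path dependence can be verified numerically. The sketch below integrates the form y dx + 2x dy (the form assumed in the example above) from (0, 0) to (1, 1) along the two axis-aligned paths, and then repeats the calculation for the rescaled form y² dx + 2xy dy, which is exact and therefore path independent.

import numpy as np

def line_integral(P, Q, points):
    """Trapezoidal approximation of the line integral of P dx + Q dy
    along a polyline given as a list of (x, y) points."""
    total = 0.0
    for (x0, y0), (x1, y1) in zip(points[:-1], points[1:]):
        total += 0.5 * (P(x0, y0) + P(x1, y1)) * (x1 - x0)
        total += 0.5 * (Q(x0, y0) + Q(x1, y1)) * (y1 - y0)
    return total

leg = np.linspace(0.0, 1.0, 2001)
x_then_y = [(x, 0.0) for x in leg] + [(1.0, y) for y in leg]
y_then_x = [(0.0, y) for y in leg] + [(x, 1.0) for x in leg]

# Inexact form: y dx + 2x dy
P, Q = (lambda x, y: y), (lambda x, y: 2.0 * x)
print("y dx + 2x dy,    x first:", round(line_integral(P, Q, x_then_y), 6))   # 2.0
print("y dx + 2x dy,    y first:", round(line_integral(P, Q, y_then_x), 6))   # 1.0

# Exact form after multiplying by the integrating factor y: y^2 dx + 2xy dy = d(x y^2)
P2, Q2 = (lambda x, y: y**2), (lambda x, y: 2.0 * x * y)
print("y^2 dx + 2xy dy, x first:", round(line_integral(P2, Q2, x_then_y), 6))  # 1.0
print("y^2 dx + 2xy dy, y first:", round(line_integral(P2, Q2, y_then_x), 6))  # 1.0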
See also
Closed and exact differential forms for a higher-level treatment
Differential (mathematics)
Exact differential
Exact differential equation
Integrating factor for solving non-exact differential equations by making them exact
Conservative vector field
References
External links
Inexact Differential – from Wolfram MathWorld
Exact and Inexact Differentials – University of Arizona
Exact and Inexact Differentials – University of Texas
Exact Differential – from Wolfram MathWorld
Thermodynamics
Multivariable calculus
Proportional navigation
Proportional navigation (also known as PN or Pro-Nav) is a guidance law (analogous to proportional control) used in some form or another by most homing air target missiles. It is based on the fact that two vehicles are on a collision course when their direct line-of-sight does not change direction as the range closes. PN dictates that the missile velocity vector should rotate at a rate proportional to the rotation rate of the line of sight (line-of-sight rate or LOS-rate), and in the same direction. This can be written as a = N V dλ/dt,
where a is the acceleration perpendicular to the missile's instantaneous velocity vector, N is the proportionality constant generally having an integer value 3–5 (dimensionless), dλ/dt is the line-of-sight rate, and V is the closing velocity.
Since the line of sight is not in general co-linear with the missile velocity vector, the applied acceleration does not necessarily preserve the missile kinetic energy. In practice, in the absence of engine throttling capability, this type of control may not be possible.
Proportional navigation can also be achieved using an acceleration normal to the instantaneous velocity difference: a = −N Ω × Vr,
where Ω is the rotation vector of the line of sight: Ω = (R × Vr) / (R · R),
and Vr is the target velocity relative to the missile and R is the range vector from missile to target. This acceleration depends explicitly on the velocity difference vector, which may be difficult to obtain in practice. By contrast, in the expressions that follow, dependence is only on the change of the line of sight and the magnitude of the closing velocity. If acceleration normal to the instantaneous line of sight is desired (as in the initial description), then the following expression is valid: a = N V Ω × (R / |R|).
If energy-conserving control is required (as is the case when only using control surfaces), the following acceleration, which is orthogonal to the missile velocity Vm, may be used: a = N V Ω × (Vm / |Vm|).
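A minimal planar simulation sketch of the basic law (commanded acceleration N × closing velocity × LOS rate, applied perpendicular to the missile velocity). The scenario numbers, the navigation constant N = 4, and the simple finite-difference LOS-rate estimate are all illustrative assumptions.

import numpy as np

dt, N = 0.01, 4.0
missile_pos = np.array([0.0, 0.0]); missile_vel = np.array([300.0, 0.0])       # m, m/s
target_pos = np.array([4000.0, 2000.0]); target_vel = np.array([-150.0, 0.0])

prev_los = None
for step in range(int(60.0 / dt)):
    rel_pos = target_pos - missile_pos
    rel_vel = target_vel - missile_vel
    rng = np.linalg.norm(rel_pos)
    if rng < 10.0:
        print(f"intercept after {step * dt:.2f} s (range {rng:.1f} m)")
        break
    los = np.arctan2(rel_pos[1], rel_pos[0])                 # line-of-sight angle
    los_rate = 0.0 if prev_los is None else (los - prev_los) / dt
    prev_los = los
    closing_speed = -np.dot(rel_pos, rel_vel) / rng          # positive when closing
    a_cmd = N * closing_speed * los_rate                     # lateral acceleration command
    heading = missile_vel / np.linalg.norm(missile_vel)
    normal = np.array([-heading[1], heading[0]])             # 90 degrees left of heading
    missile_vel = missile_vel + a_cmd * normal * dt          # applied normal to velocity, so speed is roughly preserved
    missile_pos = missile_pos + missile_vel * dt
    target_pos = target_pos + target_vel * dt
else:
    print("no intercept within the simulated time window")

Plotting missile_pos over time (not shown) would display the characteristic trajectory curving onto a collision course as the LOS rate is driven toward zero.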
A rather simple hardware implementation of this guidance law can be found in early AIM-9 Sidewinder missiles. These missiles use a rapidly rotating parabolic mirror as a seeker. Simple electronics detect the directional error the seeker has with its target (an IR source), and apply a moment to this gimballed mirror to keep it pointed at the target. Since the mirror is in fact a gyroscope it will keep pointing at the same direction if no external force or moment is applied, regardless of the movements of the missile. The voltage applied to the mirror while keeping it locked on the target is then also used (although amplified) to deflect the control surfaces that steer the missile, thereby making missile velocity vector rotation proportional to line of sight rotation. Although this does not result in a rotation rate that is always exactly proportional to the LOS-rate (which would require a constant airspeed), this implementation is equally effective.
The basis of proportional navigation was first discovered at sea, and was used by navigators on ships to avoid collisions. Commonly referred to as Constant Bearing Decreasing Range (CBDR), the concept continues to prove very useful for conning officers (the person in control of navigating the vessel at any point in time), because CBDR will result in a collision or near miss if action is not taken by one of the two vessels involved. Simply altering course until a change in bearing (obtained by compass sighting) occurs will provide some assurance of collision avoidance, though it is obviously not foolproof: the conning officer of the vessel having made the course change must continually monitor the bearing lest the other vessel does the same. A significant course change, rather than a modest alteration, is prudent. International Regulations for Preventing Collisions at Sea dictate which vessel must give way, but they, of course, provide no guarantee that action will be taken by that vessel.
See also
Motion camouflage
Bibliography
Yanushevsky, Rafael. Modern Missile Guidance. CRC Press, 2007. .
References
Navigation
Missile guidance
Spontaneous process
In thermodynamics, a spontaneous process is a process which occurs without any external input to the system. A more technical definition is the time-evolution of a system in which it releases free energy and it moves to a lower, more thermodynamically stable energy state (closer to thermodynamic equilibrium). The sign convention for free energy change follows the general convention for thermodynamic measurements, in which a release of free energy from the system corresponds to a negative change in the free energy of the system and a positive change in the free energy of the surroundings.
Depending on the nature of the process, the free energy is determined differently. For example, the Gibbs free energy change is used when considering processes that occur under constant pressure and temperature conditions, whereas the Helmholtz free energy change is used when considering processes that occur under constant volume and temperature conditions. The value and even the sign of both free energy changes can depend upon the temperature and pressure or volume.
Because spontaneous processes are characterized by a decrease in the system's free energy, they do not need to be driven by an outside source of energy.
For cases involving an isolated system where no energy is exchanged with the surroundings, spontaneous processes are characterized by an increase in entropy.
A spontaneous reaction is a chemical reaction which is a spontaneous process under the conditions of interest.
Overview
In general, the spontaneity of a process only determines whether or not a process can occur and makes no indication as to whether or not the process will occur. In other words, spontaneity is a necessary, but not sufficient, condition for a process to actually occur. Furthermore, spontaneity makes no implication as to the speed at which the spontaneous process may occur - just because a process is spontaneous does not mean it will happen quickly (or at all).
As an example, the conversion of a diamond into graphite is a spontaneous process at room temperature and pressure. Despite being spontaneous, this process is not observed, since the energy required to break the strong carbon–carbon bonds is larger than the release in free energy. In other words, even though the conversion of diamond into graphite is thermodynamically feasible and spontaneous even at room temperature, the high activation energy of the reaction makes it kinetically hindered, so the conversion effectively does not occur.
Using free energy to determine spontaneity
For a process that occurs at constant temperature and pressure, spontaneity can be determined using the change in Gibbs free energy, which is given by: ΔG = ΔH − TΔS,
where the sign of ΔG depends on the signs of the changes in enthalpy (ΔH) and entropy (ΔS). If these two signs are the same (both positive or both negative), then the sign of ΔG will change from positive to negative (or vice versa) at the temperature T = ΔH/ΔS.
In cases where ΔG is:
negative, the process is spontaneous and may proceed in the forward direction as written.
positive, the process is non-spontaneous as written, but it may proceed spontaneously in the reverse direction.
zero, the process is at equilibrium, with no net change taking place over time.
This set of rules can be used to determine four distinct cases by examining the signs of the ΔS and ΔH.
When ΔS > 0 and ΔH < 0, the process is always spontaneous as written.
When ΔS < 0 and ΔH > 0, the process is never spontaneous, but the reverse process is always spontaneous.
When ΔS > 0 and ΔH > 0, the process will be spontaneous at high temperatures and non-spontaneous at low temperatures.
When ΔS < 0 and ΔH < 0, the process will be spontaneous at low temperatures and non-spontaneous at high temperatures.
For the latter two cases, the temperature at which the spontaneity changes will be determined by the relative magnitudes of ΔS and ΔH.
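A small sketch of this classification, using assumed (illustrative) values of ΔH and ΔS for the temperature-dependent case ΔH > 0, ΔS > 0:

def spontaneity(delta_H, delta_S, T):
    """Classify a constant-T, constant-p process from dG = dH - T*dS
    (delta_H in J/mol, delta_S in J/(mol*K), T in K)."""
    dG = delta_H - T * delta_S
    if dG < 0:
        return "spontaneous as written"
    if dG > 0:
        return "non-spontaneous (reverse direction is spontaneous)"
    return "at equilibrium"

dH, dS = 40.0e3, 110.0   # assumed: endothermic process with an entropy increase
print(f"crossover temperature T = dH/dS = {dH/dS:.1f} K")
for T in (300.0, 350.0, 400.0, 450.0):
    print(f"T = {T:5.1f} K: {spontaneity(dH, dS, T)}")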
Using entropy to determine spontaneity
When using the entropy change of a process to assess spontaneity, it is important to carefully consider the definition of the system and surroundings. The second law of thermodynamics states that a process involving an isolated system will be spontaneous if the entropy of the system increases over time. For open or closed systems, however, the statement must be modified to say that the total entropy of the combined system and surroundings must increase, or ΔS_total = ΔS_sys + ΔS_surr > 0.
This criterion can then be used to explain how it is possible for the entropy of an open or closed system to decrease during a spontaneous process. A decrease in system entropy can only occur spontaneously if the entropy change of the surroundings is both positive in sign and has a larger magnitude than the entropy change of the system: ΔS_surr > 0
and |ΔS_surr| > |ΔS_sys|.
In many processes, the increase in entropy of the surroundings is accomplished via heat transfer from the system to the surroundings (i.e. an exothermic process).
See also
Endergonic reaction – reactions which are not spontaneous at standard temperature, pressure, and concentrations.
Diffusion – a spontaneous phenomenon that minimizes Gibbs free energy.
References
Thermodynamics
Chemical thermodynamics
Chemical processes
Traction (mechanics)
Traction, traction force or tractive force is a force used to generate motion between a body and a tangential surface, through the use of either dry friction or shear force.
It has important applications in vehicles, as in tractive effort.
Traction can also refer to the maximum tractive force between a body and a surface, as limited by available friction; when this is the case, traction is often expressed as the ratio of the maximum tractive force to the normal force and is termed the coefficient of traction (similar to coefficient of friction). It is the force which makes an object move over the surface by overcoming all the resisting forces such as friction, normal loads (the load acting on the tires along the negative z-axis), air resistance, rolling resistance, etc.
Definitions
Traction can be defined as:
In vehicle dynamics, tractive force is closely related to the terms tractive effort and drawbar pull, though all three terms have different definitions.
Coefficient of traction
The coefficient of traction is defined as the usable force for traction divided by the weight on the running gear (wheels, tracks etc.) i.e.:
usable traction = coefficient of traction × normal force.
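A minimal sketch of this relation; the coefficients and loads below are rough illustrative assumptions, not measured values.

G_ACCEL = 9.81   # m/s^2

def usable_traction(coefficient, normal_force):
    """Maximum tractive force (N) before slip: coefficient x normal force."""
    return coefficient * normal_force

axle_load_kg = 900.0        # assumed mass carried by the driven wheels
required_force = 5500.0     # assumed tractive force needed, N

for surface, coeff in (("dry asphalt", 0.9), ("wet asphalt", 0.5), ("ice", 0.1)):
    available = usable_traction(coeff, axle_load_kg * G_ACCEL)
    status = "grips" if available >= required_force else "slips"
    print(f"{surface:12s}: available {available:7.0f} N -> wheel {status}")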
Factors affecting coefficient of traction
Traction between two surfaces depends on several factors:
Material composition of each surface.
Macroscopic and microscopic shape (texture; macrotexture and microtexture)
Normal force pressing contact surfaces together.
Contaminants at the material boundary including lubricants and adhesives.
Relative motion of tractive surfaces - a sliding object (one in kinetic friction) has less traction than a non-sliding object (one in static friction).
Direction of traction relative to some coordinate system - e.g., the available traction of a tire often differs between cornering, accelerating, and braking.
For low-friction surfaces, such as off-road or ice, traction can be increased by using traction devices that partially penetrate the surface; these devices use the shear strength of the underlying surface rather than relying solely on dry friction (e.g., aggressive off-road tread or snow chains).
Traction coefficient in engineering design
In the design of wheeled or tracked vehicles, high traction between wheel and ground is more desirable than low traction, as it allows for higher acceleration (including cornering and braking) without wheel slippage. One notable exception is in the motorsport technique of drifting, in which rear-wheel traction is purposely lost during high speed cornering.
Other designs dramatically increase surface area to provide more traction than wheels can, for example in continuous track and half-track vehicles. A tank or similar tracked vehicle uses tracks to reduce the pressure on the areas of contact. A 70-ton M1A2 would sink to the point of high centering if it used round tires. The tracks spread the 70 tons over a much larger area of contact than tires would and allow the tank to travel over much softer land.
In some applications, there is a complicated set of trade-offs in choosing materials. For example, soft rubbers often provide better traction but also wear faster and have higher losses when flexed—thus reducing efficiency. Choices in material selection may have a dramatic effect. For example: tires used for track racing cars may have a life of 200 km, while those used on heavy trucks may have a life approaching 100,000 km. The truck tires have less traction and also thicker rubber.
Traction also varies with contaminants. A layer of water in the contact patch can cause a substantial loss of traction. This is one reason for grooves and siping of automotive tires.
The traction of trucks, agricultural tractors, wheeled military vehicles, etc. when driving on soft and/or slippery ground has been found to improve significantly by use of Tire Pressure Control Systems (TPCS). A TPCS makes it possible to reduce and later restore the tire pressure during continuous vehicle operation. Increasing traction by use of a TPCS also reduces tire wear and ride vibration.
See also
Anti-lock braking system
Equilibrium tide
Friction
Force (physics)
Karl A. Grosch
Rail adhesion
Road slipperiness
Sandbox (locomotive)
Tribology
Weight transfer
References
Force
Vehicle technology
Mechanics | 0.77574 | 0.985556 | 0.764535 |
Compton scattering | Compton scattering (or the Compton effect) is the quantum theory of the scattering of high-frequency photons following an interaction with a charged particle, usually an electron. Specifically, when such a photon hits an electron, it can release loosely bound electrons from the outer valence shells of atoms or molecules.
The effect was discovered in 1923 by Arthur Holly Compton while researching the scattering of X-rays by light elements, and earned him the Nobel Prize for Physics in 1927. The Compton effect departed significantly from the then-dominant classical theories, requiring both special relativity and quantum mechanics to explain the interaction between high-frequency photons and charged particles.
Photons can interact with matter at the atomic level (e.g. photoelectric effect and Rayleigh scattering), at the nucleus, or with just an electron. Pair production and the Compton effect occur at the level of the electron. When a high frequency photon scatters due to an interaction with a charged particle, there is a decrease in the energy of the photon and thus, an increase in its wavelength. This tradeoff between wavelength and energy in response to the collision is the Compton effect. Because of conservation of energy, the lost energy from the photon is transferred to the recoiling particle (such an electron would be called a "Compton Recoil electron").
This implies that if the recoiling particle initially carried more energy than the photon, the reverse would occur. This is known as inverse Compton scattering, in which the scattered photon increases in energy.
Introduction
In Compton's original experiment (see Fig. 1), the energy of the X-ray photon (≈ 17 keV) was significantly larger than the binding energy of the atomic electron, so the electrons could be treated as being free after scattering. The amount by which the light's wavelength changes is called the Compton shift. Although Compton scattering off the nucleus exists, Compton scattering usually refers to the interaction involving only the electrons of an atom. The Compton effect was observed by Arthur Holly Compton in 1923 at Washington University in St. Louis and further verified by his graduate student Y. H. Woo in the years following. Compton was awarded the 1927 Nobel Prize in Physics for the discovery.
The effect is significant because it demonstrates that light cannot be explained purely as a wave phenomenon. Thomson scattering, the classical theory of an electromagnetic wave scattered by charged particles, cannot explain shifts in wavelength at low intensity: classically, light of sufficient intensity for the electric field to accelerate a charged particle to a relativistic speed will cause radiation-pressure recoil and an associated Doppler shift of the scattered light, but the effect would become arbitrarily small at sufficiently low light intensities regardless of wavelength. Thus, if we are to explain low-intensity Compton scattering, light must behave as if it consists of particles. Alternatively, if the assumption that the electron can be treated as free is invalid, the electron behaves as if it had an effectively infinite mass equal to the nuclear mass (see, e.g., the comment below on elastic scattering of X-rays arising from that effect). Compton's experiment convinced physicists that light can be treated as a stream of particle-like objects (quanta called photons), whose energy is proportional to the light wave's frequency.
As shown in Fig. 2, the interaction between an electron and a photon results in the electron being given part of the energy (making it recoil), and a photon of the remaining energy being emitted in a different direction from the original, so that the overall momentum of the system is also conserved. If the scattered photon still has enough energy, the process may be repeated. In this scenario, the electron is treated as free or loosely bound. Experimental verification of momentum conservation in individual Compton scattering processes by Bothe and Geiger as well as by Compton and Simon has been important in disproving the BKS theory.
Compton scattering is commonly described as inelastic scattering. This is because, unlike the more common Thomson scattering that happens at the low-energy limit, the energy of the scattered photon in Compton scattering is less than the energy of the incident photon. As the electron is typically weakly bound to the atom, the scattering can be viewed either from the perspective of an electron in a potential well, or as an atom with a small ionization energy. In the former perspective, energy of the incident photon is transferred to the recoil particle, but only as kinetic energy. The electron gains no internal energy, the respective masses remain the same, the mark of an elastic collision. From this perspective, Compton scattering could be considered elastic because the internal state of the electron does not change during the scattering process. In the latter perspective, the atom's state is changed, constituting an inelastic collision. Whether Compton scattering is considered elastic or inelastic depends on which perspective is being used, as well as the context.
Compton scattering is one of four competing processes when photons interact with matter. At energies of a few eV to a few keV, corresponding to visible light through soft X-rays, a photon can be completely absorbed and its energy can eject an electron from its host atom, a process known as the photoelectric effect. High-energy photons of 1.022 MeV and above may bombard the nucleus and cause an electron and a positron to be formed, a process called pair production; even-higher-energy photons (beyond a threshold energy of at least about 1.7 MeV, depending on the nuclei involved) can eject a nucleon or alpha particle from the nucleus in a process called photodisintegration. Compton scattering is the most important interaction in the intervening energy region, at photon energies greater than those typical of the photoelectric effect but less than the pair-production threshold.
Description of the phenomenon
By the early 20th century, research into the interaction of X-rays with matter was well under way. It was observed that when X-rays of a known wavelength interact with atoms, the X-rays are scattered through an angle θ and emerge at a different wavelength related to θ. Although classical electromagnetism predicted that the wavelength of scattered rays should be equal to the initial wavelength, multiple experiments had found that the wavelength of the scattered rays was longer (corresponding to lower energy) than the initial wavelength.
In 1923, Compton published a paper in the Physical Review that explained the X-ray shift by attributing particle-like momentum to light quanta (Albert Einstein had proposed light quanta in 1905 in explaining the photo-electric effect, but Compton did not build on Einstein's work). The energy of light quanta depends only on the frequency of the light. In his paper, Compton derived the mathematical relationship between the shift in wavelength and the scattering angle of the X-rays by assuming that each scattered X-ray photon interacted with only one electron. His paper concludes by reporting on experiments which verified his derived relation:
λ′ − λ = (h / (mₑc)) (1 − cos θ),
where
λ is the initial wavelength,
λ′ is the wavelength after scattering,
h is the Planck constant,
mₑ is the electron rest mass,
c is the speed of light, and
θ is the scattering angle.
The quantity h/(mₑc) is known as the Compton wavelength of the electron; it is equal to 2.43×10⁻¹² m. The wavelength shift λ′ − λ is at least zero (for θ = 0°) and at most twice the Compton wavelength of the electron (for θ = 180°).
Compton found that some X-rays experienced no wavelength shift despite being scattered through large angles; in each of these cases the photon failed to eject an electron. Thus the magnitude of the shift is related not to the Compton wavelength of the electron, but to the Compton wavelength of the entire atom, which can be upwards of 10000 times smaller. This is known as "coherent" scattering off the entire atom since the atom remains intact, gaining no internal excitation.
In Compton's original experiments the wavelength shift given above was the directly measurable observable. In modern experiments it is conventional to measure the energies, not the wavelengths, of the scattered photons. For a given incident energy Eγ = hc/λ, the outgoing final-state photon energy Eγ′ is given by
Eγ′ = Eγ / (1 + (Eγ / mₑc²)(1 − cos θ)).
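The two relations above can be evaluated directly. The Python sketch below uses standard values of h, mₑ and c; the ≈17 keV photon energy echoes Compton's original experiment, and the script is an illustration rather than anything taken from the source.

```python
import math

# Sketch: Compton wavelength shift λ' − λ and scattered-photon energy E' as a
# function of the scattering angle θ. Constants are standard values; the 17 keV
# photon is an illustrative choice matching Compton's original X-ray energy scale.

h   = 6.62607015e-34      # Planck constant, J·s
m_e = 9.1093837015e-31    # electron rest mass, kg
c   = 299792458.0         # speed of light, m/s
eV  = 1.602176634e-19     # joules per electronvolt

compton_wavelength = h / (m_e * c)             # ≈ 2.43e-12 m

def wavelength_shift(theta_rad):
    """λ' − λ = (h / mₑc)(1 − cos θ); independent of the incident wavelength."""
    return compton_wavelength * (1.0 - math.cos(theta_rad))

def scattered_energy(E_gamma, theta_rad):
    """E' = E / (1 + (E / mₑc²)(1 − cos θ)), for E in joules."""
    return E_gamma / (1.0 + (E_gamma / (m_e * c**2)) * (1.0 - math.cos(theta_rad)))

E = 17e3 * eV                                  # ≈ 17 keV incident photon
for deg in (0, 90, 180):
    th = math.radians(deg)
    E_out = scattered_energy(E, th)
    recoil = (E - E_out) / eV                  # electron recoil kinetic energy, eV
    print(deg, wavelength_shift(th), E_out / eV, recoil)
```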
Derivation of the scattering formula
A photon γ with wavelength λ collides with an electron e in an atom, which is treated as being at rest. The collision causes the electron to recoil, and a new photon γ′ with wavelength λ′ emerges at angle θ from the photon's incoming path. Let e′ denote the electron after the collision. Compton allowed for the possibility that the interaction would sometimes accelerate the electron to speeds sufficiently close to the velocity of light as to require the application of Einstein's special relativity theory to properly describe its energy and momentum.
At the conclusion of Compton's 1923 paper, he reported results of experiments confirming the predictions of his scattering formula, thus supporting the assumption that photons carry momentum as well as quantized energy. At the start of his derivation, he had postulated an expression for the momentum of a photon from equating Einstein's already established mass–energy relationship E = mc² to the quantized photon energy E = hf, which Einstein had separately postulated. If hf = mc², the equivalent photon mass must be hf/c². The photon's momentum is then simply this effective mass times the photon's frame-invariant velocity c. For a photon, its momentum p = hf/c, and thus hf can be substituted for pc for all photon momentum terms which arise in the course of the derivation below. The derivation which appears in Compton's paper is more terse, but follows the same logic in the same sequence as the following derivation.
The conservation of energy merely equates the sum of energies before and after scattering:
Eγ + Ee = Eγ′ + Ee′.
Compton postulated that photons carry momentum; thus from the conservation of momentum, the momenta of the particles should be similarly related by
pγ = pγ′ + pe′,
in which the initial electron momentum pe is omitted on the assumption that it is effectively zero.
The photon energies are related to the frequencies by
Eγ = hf and Eγ′ = hf′,
where h is the Planck constant.
Before the scattering event, the electron is treated as sufficiently close to being at rest that its total energy consists entirely of the mass–energy equivalence of its (rest) mass mₑ:
Ee = mₑc².
After scattering, the possibility that the electron might be accelerated to a significant fraction of the speed of light requires that its total energy be represented using the relativistic energy–momentum relation
Ee′ = √((pe′c)² + (mₑc²)²).
Substituting these quantities into the expression for the conservation of energy gives
hf + mₑc² = hf′ + √((pe′c)² + (mₑc²)²).
This expression can be used to find the magnitude of the momentum of the scattered electron,
(pe′c)² = (hf − hf′ + mₑc²)² − (mₑc²)².  (1)
Note that this magnitude of the momentum gained by the electron (formerly zero) exceeds the energy/c lost by the photon,
pe′c > hf − hf′.
Equation (1) relates the various energies associated with the collision. The electron's momentum change involves a relativistic change in the energy of the electron, so it is not simply related to the change in energy occurring in classical physics. The change of the magnitude of the momentum of the photon is not just related to the change of its energy; it also involves a change in direction.
Solving the conservation of momentum expression for the scattered electron's momentum gives
pe′ = pγ − pγ′.
Making use of the scalar product yields the square of its magnitude,
pe′² = pγ² + pγ′² − 2 pγ pγ′ cos θ.
In anticipation of pγc being replaced with hf, multiply both sides by c²:
(pe′c)² = (pγc)² + (pγ′c)² − 2 c² pγ pγ′ cos θ.
After replacing the photon momentum terms with hf/c, we get a second expression for the magnitude of the momentum of the scattered electron,
(pe′c)² = (hf)² + (hf′)² − 2 (hf)(hf′) cos θ.
Equating the alternate expressions for this momentum gives
(hf − hf′ + mₑc²)² − (mₑc²)² = (hf)² + (hf′)² − 2 (hf)(hf′) cos θ,
which, after evaluating the square and canceling and rearranging terms, further yields
2 hf mₑc² − 2 hf′ mₑc² = 2 (hf)(hf′)(1 − cos θ).
Dividing both sides by 2 h f f′ mₑ c yields
c/f′ − c/f = (h / (mₑc)) (1 − cos θ).
Finally, since fλ = f′λ′ = c,
λ′ − λ = (h / (mₑc)) (1 − cos θ).
It can further be seen that the angle φ of the outgoing electron with the direction of the incoming photon is specified by
cot φ = (1 + hf / (mₑc²)) tan(θ/2).
Applications
Compton scattering
Compton scattering is of prime importance to radiobiology, as it is the most probable interaction of gamma rays and high energy X-rays with atoms in living beings and is applied in radiation therapy.
Compton scattering is an important effect in gamma spectroscopy which gives rise to the Compton edge, as it is possible for the gamma rays to scatter out of the detectors used. Compton suppression is used to detect stray scatter gamma rays to counteract this effect.
Magnetic Compton scattering
Magnetic Compton scattering is an extension of the previously mentioned technique which involves the magnetisation of a crystal sample hit with high energy, circularly polarised photons. By measuring the scattered photons' energy and reversing the magnetisation of the sample, two different Compton profiles are generated (one for spin-up momenta and one for spin-down momenta). Taking the difference between these two profiles gives the magnetic Compton profile (MCP), given by
Jmag(pz) = (1/μ) ∫∫ (n↑(p) − n↓(p)) dpx dpy
– a one-dimensional projection of the electron spin density,
where μ is the number of spin-unpaired electrons in the system, and n↑(p) and n↓(p) are the three-dimensional electron momentum distributions for the majority-spin and minority-spin electrons respectively.
Since this scattering process is incoherent (there is no phase relationship between the scattered photons), the MCP is representative of the bulk properties of the sample and is a probe of the ground state. This means that the MCP is ideal for comparison with theoretical techniques such as density functional theory.
The area under the MCP is directly proportional to the spin moment of the system and so, when combined with total moment measurements methods (such as SQUID magnetometry), can be used to isolate both the spin and orbital contributions to the total moment of a system.
The shape of the MCP also yields insight into the origin of the magnetism in the system.
Inverse Compton scattering
Inverse Compton scattering is important in astrophysics. In X-ray astronomy, the accretion disk surrounding a black hole is presumed to produce a thermal spectrum. The lower energy photons produced from this spectrum are scattered to higher energies by relativistic electrons in the surrounding corona. This is surmised to cause the power law component in the X-ray spectra (0.2–10 keV) of accreting black holes.
The effect is also observed when photons from the cosmic microwave background (CMB) move through the hot gas surrounding a galaxy cluster. The CMB photons are scattered to higher energies by the electrons in this gas, resulting in the Sunyaev–Zel'dovich effect. Observations of the Sunyaev–Zel'dovich effect provide a nearly redshift-independent means of detecting galaxy clusters.
Some synchrotron radiation facilities scatter laser light off the stored electron beam.
This Compton backscattering produces high energy photons in the MeV to GeV range subsequently used for nuclear physics experiments.
Non-linear inverse Compton scattering
Non-linear inverse Compton scattering (NICS) is the scattering of multiple low-energy photons, given by an intense electromagnetic field, in a high-energy photon (X-ray or gamma ray) during the interaction with a charged particle, such as an electron. It is also called non-linear Compton scattering and multiphoton Compton scattering. It is the non-linear version of inverse Compton scattering in which the conditions for multiphoton absorption by the charged particle are reached due to a very intense electromagnetic field, for example the one produced by a laser.
Non-linear inverse Compton scattering is an interesting phenomenon for all applications requiring high-energy photons since NICS is capable of producing photons with energy comparable to the charged particle rest energy and higher. As a consequence NICS photons can be used to trigger other phenomena such as pair production, Compton scattering, nuclear reactions, and can be used to probe non-linear quantum effects and non-linear QED.
See also
References
Further reading
(the original 1923 paper on the APS website)
Stuewer, Roger H. (1975), The Compton Effect: Turning Point in Physics (New York: Science History Publications)
External links
Compton Scattering – Georgia State University
Compton Scattering Data – Georgia State University
Derivation of Compton shift equation
Astrophysics
Observational astronomy
Atomic physics
Foundational quantum physics
Quantum electrodynamics
X-ray scattering | 0.766159 | 0.997865 | 0.764523 |
John William Strutt, 3rd Baron Rayleigh | John William Strutt, 3rd Baron Rayleigh, (; 12 November 1842 – 30 June 1919) was an English mathematician and physicist who made extensive contributions to science. He spent all of his academic career at the University of Cambridge. Among many honours, he received the 1904 Nobel Prize in Physics "for his investigations of the densities of the most important gases and for his discovery of argon in connection with these studies." He served as president of the Royal Society from 1905 to 1908 and as chancellor of the University of Cambridge from 1908 to 1919.
Rayleigh provided the first theoretical treatment of the elastic scattering of light by particles much smaller than the light's wavelength, a phenomenon now known as "Rayleigh scattering", which notably explains why the sky is blue. He studied and described transverse surface waves in solids, now known as "Rayleigh waves". He contributed extensively to fluid dynamics, with concepts such as the Rayleigh number (a dimensionless number associated with natural convection), Rayleigh flow, the Rayleigh–Taylor instability, and Rayleigh's criterion for the stability of Taylor–Couette flow. He also formulated the circulation theory of aerodynamic lift. In optics, Rayleigh proposed a well-known criterion for angular resolution. His derivation of the Rayleigh–Jeans law for classical black-body radiation later played an important role in the birth of quantum mechanics (see ultraviolet catastrophe). Rayleigh's textbook The Theory of Sound (1877) is still used today by acousticians and engineers. He introduced the Rayleigh test for circular non-uniformity, which the Rayleigh plot visualizes.
Early life and education
Strutt was born on 12 November 1842 at Langford Grove, Maypole Road in Maldon, Essex. In his early years he suffered from frailty and poor health. He attended Eton College and Harrow School (each for only a short period), before going on to the University of Cambridge in 1861 where he studied mathematics at Trinity College, Cambridge. He obtained a Bachelor of Arts degree (Senior Wrangler and 1st Smith's Prize) in 1865, and a Master of Arts in 1868. He was subsequently elected to a fellowship of Trinity. He held the post until his marriage to Evelyn Balfour, daughter of James Maitland Balfour, in 1871. He had three sons with her. In 1873, on the death of his father, John Strutt, 2nd Baron Rayleigh, he inherited the Barony of Rayleigh. Rayleigh was elected fellow of the Royal Society on 12 June 1873.
Career
Strutt was the second Cavendish Professor of Physics at the University of Cambridge (following James Clerk Maxwell), from 1879 to 1884. He first described dynamic soaring by seabirds in 1883, in the British journal Nature. From 1887 to 1905 he was professor of Natural Philosophy at the Royal Institution.
Around 1900 Rayleigh developed the duplex (combination of two) theory of human sound localisation using two binaural cues, interaural phase difference (IPD) and interaural level difference (ILD) (based on analysis of a spherical head with no external pinnae). The theory posits that we use two primary cues for sound lateralisation: the difference in the phases of sinusoidal components of the sound and the difference in amplitude (level) between the two ears.
He received the degree of Doctor mathematicae (honoris causa) from the Royal Frederick University on 6 September 1902, when they celebrated the centennial of the birth of mathematician Niels Henrik Abel.
In 1904 he was awarded the Nobel Prize for Physics "for his investigations of the densities of the most important gases and for his discovery of argon in connection with these studies".
During the First World War, he was president of the government's Advisory Committee for Aeronautics, which was located at the National Physical Laboratory, and chaired by Richard Glazebrook.
In 1919, Rayleigh served as president of the Society for Psychical Research. An advocate of keeping simplicity and theory central to the scientific method, Rayleigh argued for the principle of similitude.
Rayleigh served as president of the Royal Society from 1905 to 1908. From time to time he participated in the House of Lords; however, he spoke up only if politics attempted to become involved in science.
Personal life and death
Rayleigh married Evelyn Georgiana Mary (née Balfour). He died on 30 June 1919, at his home in Witham, Essex. He was succeeded, as the 4th Lord Rayleigh, by his son Robert John Strutt, another well-known physicist. Lord Rayleigh was buried in the graveyard of All Saints' Church in Terling in Essex.
Religious views
Rayleigh was an Anglican. Though he did not write about the relationship of science and religion, he retained a personal interest in spiritual matters. When his scientific papers were to be published in a collection by the Cambridge University Press, Strutt wanted to include a quotation from the Bible, but he was discouraged from doing so, as he later reported:
Still, he had his wish and the quotation was printed in the five-volume collection of scientific papers. In a letter to a family member, he wrote about his rejection of materialism and spoke of Jesus Christ as a moral teacher:
He held an interest in parapsychology and was an early member of the Society for Psychical Research (SPR). He was not convinced of spiritualism but remained open to the possibility of supernatural phenomena. Rayleigh was the president of the SPR in 1919. He gave a presidential address in the year of his death but did not come to any definite conclusions.
Honours and awards
The lunar crater Rayleigh as well as the Martian crater Rayleigh were named in his honour. The asteroid 22740 Rayleigh was named after him on 1 June 2007. A type of surface waves are known as Rayleigh waves, and the elastic scattering of electromagnetic waves is called Rayleigh scattering. The rayl, a unit of specific acoustic impedance, is also named for him. Rayleigh was also awarded with (in chronological order):
Smith's Prize (1864)
Royal Medal (1882)
Member of the American Philosophical Society (1886)
Matteucci Medal (1894)
Member of the Royal Swedish Academy of Sciences (1897)
Copley Medal (1899)
Nobel Prize in Physics (1904)
Elliott Cresson Medal (1913)
Rumford Medal (1914)
Lord Rayleigh was among the original recipients of the Order of Merit (OM) in the 1902 Coronation Honours list published on 26 June 1902, and received the order from King Edward VII at Buckingham Palace on 8 August 1902.
Sir William Ramsay, his co-worker in the investigation to discover argon, described Rayleigh as "the greatest man alive" while speaking to Lady Ramsay during his last illness.
H. M. Hyndman said of Rayleigh that "no man ever showed less consciousness of great genius".
In honour of Lord Rayleigh, the Institute of Acoustics sponsors the Rayleigh Medal (established in 1970) and the Institute of Physics sponsors the John William Strutt, Lord Rayleigh Medal and Prize (established in 2008).
Many of the papers that he wrote on lubrication are now recognized as early classical contributions to the field of tribology. For these contributions, he was named as one of the 23 "Men of Tribology" by Duncan Dowson.
There is a memorial to him by Derwent Wood in St Andrew's Chapel at Westminster Abbey.
Bibliography
The Theory of Sound vol. I (London: Macmillan, 1877, 1894) (alternative link: Bibliothèque Nationale de France) OR (Cambridge: University Press, reissued 2011)
The Theory of Sound vol. II (London: Macmillan, 1878, 1896) (alternative link: Bibliothèque Nationale de France) OR (Cambridge: University Press, reissued 2011)
Scientific papers (Vol. 1: 1869–1881) (Cambridge: University Press, 1899–1920, reissued by the publisher 2011)
Scientific papers (Vol. 2: 1881–1887) (Cambridge: University Press, 1899–1920, reissued by the publisher 2011)
Scientific papers (Vol. 3: 1887–1892) (Cambridge: University Press, 1899–1920, reissued by the publisher 2011)
Scientific papers (Vol. 4: 1892–1901) (Cambridge: University Press, 1899–1920, reissued by the publisher 2011)
Scientific papers (Vol. 5: 1902–1910) (Cambridge: University Press, 1899–1920, reissued by the publisher 2011)
Scientific papers (Vol. 6: 1911–1919) (Cambridge: University Press, 1899–1920, reissued by the publisher 2011)
See also
References
Further reading
Life of John William Strutt: Third Baron Rayleigh, O.M., F.R.S., (1924) Longmans, Green & Co.
A biography written by his son, Robert Strutt, 4th Baron Rayleigh
External links
About John William Strutt
Lord Rayleigh – the Last of the Great Victorian Polymaths, GEC Review, Volume 7, No. 3, 1992
1842 births
1919 deaths
20th-century British physicists
Acousticians
Alumni of Trinity College, Cambridge
Barons in the Peerage of the United Kingdom
British Nobel laureates
Chancellors of the University of Cambridge
De Morgan Medallists
Discoverers of chemical elements
English Anglicans
Experimental physicists
Optical physicists
Fluid dynamicists
Lord-lieutenants of Essex
Members of the Order of Merit
Nobel laureates in Physics
Fellows of the Royal Society
Fellows of the American Academy of Arts and Sciences
Foreign associates of the National Academy of Sciences
Members of the Royal Swedish Academy of Sciences
Members of the Royal Netherlands Academy of Arts and Sciences
Members of the Bavarian Academy of Sciences
Members of the Prussian Academy of Sciences
Members of the Hungarian Academy of Sciences
Members of the French Academy of Sciences
British parapsychologists
People educated at Eton College
People educated at Harrow School
People from Maldon, Essex
Presidents of the Physical Society
Presidents of the Royal Society
Recipients of the Copley Medal
Recipients of the Pour le Mérite (civil class)
Royal Medal winners
Senior Wranglers
John
Members of the Privy Council of the United Kingdom
Burials in Essex
Linear algebraists
Tribologists
Recipients of the Matteucci Medal
Members of the American Philosophical Society
Cavendish Professors of Physics
Members of the Royal Society of Sciences in Uppsala
Scientists of the National Physical Laboratory (United Kingdom) | 0.771839 | 0.990517 | 0.764519 |
Corpuscularianism | Corpuscularianism, also known as corpuscularism, is a set of theories that explain natural transformations as a result of the interaction of particles (minima naturalia, partes exiles, partes parvae, particulae, and semina). It differs from atomism in that corpuscles are usually endowed with a property of their own and are further divisible, while atoms are neither. Although often associated with the emergence of early modern mechanical philosophy, and especially with the names of Thomas Hobbes, René Descartes, Pierre Gassendi, Robert Boyle, Isaac Newton, and John Locke, corpuscularian theories can be found throughout the history of Western philosophy.
Overview
Corpuscles vs. atoms
Corpuscularianism is similar to the theory of atomism, except that where atoms were supposed to be indivisible, corpuscles could in principle be divided. In this manner, for example, it was theorized that mercury could penetrate into metals and modify their inner structure, a step on the way towards the production of gold by transmutation.
Perceived vs. real properties
Corpuscularianism was associated by its leading proponents with the idea that some of the apparent properties of objects are artifacts of the perceiving mind, that is, "secondary" qualities as distinguished from "primary" qualities. Corpuscles were thought to be unobservable and to have a very limited number of basic properties, such as size, shape, and motion.
Thomas Hobbes
The philosopher Thomas Hobbes used corpuscularianism to justify his political theories in Leviathan. It was used by Newton in his development of the corpuscular theory of light, while Boyle used it to develop his mechanical corpuscular philosophy, which laid the foundations for the Chemical Revolution.
Robert Boyle
Corpuscularianism remained a dominant theory for centuries and was blended with alchemy by early scientists such as Robert Boyle and Isaac Newton in the 17th century. In his work The Sceptical Chymist (1661), Boyle abandoned the Aristotelian ideas of the classical elements—earth, water, air, and fire—in favor of corpuscularianism. In his later work, The Origin of Forms and Qualities (1666), Boyle used corpuscularianism to explain all of the major Aristotelian concepts, marking a departure from traditional Aristotelianism.
Light corpuscles
Alchemical corpuscularianism
William R. Newman traces the origins to the fourth book of Aristotle's Meteorology. The "dry" and "moist" exhalations of Aristotle became the alchemical 'sulfur' and 'mercury' of the eighth-century Islamic alchemist, Jābir ibn Hayyān (died c. 806–816). Pseudo-Geber's Summa perfectionis contains an alchemical theory in which unified sulfur and mercury corpuscles, differing in purity, size, and relative proportions, form the basis of a much more complicated process.
Importance to the development of modern scientific theory
Several of the principles which corpuscularianism proposed became tenets of modern chemistry.
The idea that compounds can have secondary properties that differ from the properties of the elements which are combined to make them became the basis of molecular chemistry.
The idea that the same elements can be predictably combined in different ratios using different methods to create compounds with radically different properties became the basis of stoichiometry, crystallography, and established studies of chemical synthesis.
The ability of chemical processes to alter the composition of an object without significantly altering its form is the basis of fossil theory via mineralization and the understanding of numerous metallurgical, biological, and geological processes.
See also
Atomic theory
Atomism
Classical element
History of chemistry
References
Bibliography
Further reading
Atomism
History of chemistry
13th century in science
Metaphysical theories
Particles | 0.779632 | 0.98061 | 0.764515 |
OpenAI o1 | OpenAI o1 is a generative pre-trained transformer released by OpenAI in September 2024. o1 spends time "thinking" before it answers, making it more effective at complex reasoning tasks, science and programming.
History
Background
According to leaked information, o1 was formerly known within OpenAI as "Q*", and later as "Strawberry". The codename "Q*" first surfaced in November 2023, around the time of Sam Altman's ousting and subsequent reinstatement, with rumors suggesting that this experimental model had shown promising results on mathematical benchmarks. In July 2024, Reuters reported that OpenAI was developing a generative pre-trained transformer known as "Strawberry".
Release
"o1-preview" and "o1-mini" were released on September 12, 2024, for ChatGPT Plus and Team users. GitHub started testing the integration of o1-preview in its Copilot service the same day.
OpenAI noted that o1 is the first of a series of "reasoning" models, and that it was planning to add access to o1-mini to all ChatGPT free users. o1-preview's API is several times more expensive than GPT-4o.
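For orientation, the sketch below shows how o1-preview could be called through the same chat-completions interface as earlier models, assuming the official openai Python package and a configured API key; the prompt is an invented example, and this is a sketch rather than an official usage recipe.

```python
# Sketch: calling o1-preview via the OpenAI Python SDK (assumes the `openai`
# package is installed and the OPENAI_API_KEY environment variable is set).
# The prompt is an invented example; at release, o1 models accepted plain user
# messages, and the hidden chain of thought was billed but not returned.

from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="o1-preview",
    messages=[
        {"role": "user", "content": "Prove that the square root of 2 is irrational."}
    ],
)

print(response.choices[0].message.content)  # final answer only; the chain of thought is not exposed
```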
Capabilities
According to OpenAI, o1 has been trained using a new optimization algorithm and a dataset specifically tailored to it. The training leverages reinforcement learning. OpenAI described o1 as a complement to GPT-4o rather than a successor.
o1 spends additional time thinking (generating a chain of thought) before generating an answer, which makes it more effective for complex reasoning tasks, particularly in science and mathematics. Compared to previous models, o1 has been trained to generate long "chains of thought" before returning a final answer. According to Mira Murati, this ability to think before responding represents a new, additional paradigm, which is improving model outputs by spending more computing power when generating the answer, whereas the model scaling paradigm improves outputs by increasing the model size, training data and training compute power. OpenAI's test results suggest a correlation between accuracy and the logarithm of the amount of compute spent thinking before answering.
o1-preview performed approximately at a PhD level on benchmark tests related to physics, chemistry, and biology. On the American Invitational Mathematics Examination, it solved 83% (12.5/15) of the problems, compared to 13% (1.8/15) for GPT-4o. It also ranked in the 89th percentile in Codeforces coding competitions. o1-mini is faster and 80% cheaper than o1-preview. It is particularly suitable for programming and STEM-related tasks, but does not have the same "broad world knowledge" as o1-preview.
OpenAI noted that o1's reasoning capabilities make it better at adhering to safety rules provided in the prompt's context window. OpenAI reported that during a test, one instance of o1-preview exploited a misconfiguration to succeed at a task that should have been infeasible due to a bug. OpenAI also granted early access to the UK and US AI Safety Institutes for research, evaluation, and testing. Dan Hendrycks wrote that "The model already outperforms PhD scientists most of the time on answering questions related to bioweapons." He suggested that these concerning capabilities will continue to increase.
Limitations
o1 usually requires more computing time and power than other GPT models by OpenAI, because it generates long chains of thought before making the final response.
According to OpenAI, o1 may "fake alignment", that is, generate a response that is contrary to accuracy and its own chain of thought, in about 0.38% of cases.
OpenAI forbids users from trying to reveal o1's chain of thought, which is hidden by design and not trained to comply with the company's policies. Prompts are monitored, and users who intentionally or accidentally violate this are warned and may lose their access to o1. OpenAI cites AI safety and competitive advantage as reasons for the restriction, which has been described as a loss of transparency by developers who work with large language models (LLMs).
In October 2024, researchers at Apple submitted a preprint reporting that LLMs such as o1 may be replicating reasoning steps from their training data. By changing the numbers and names used in a math problem or simply running the same problem again, LLMs would perform somewhat worse than their best benchmark results. Adding extraneous but logically inconsequential information to the problems caused a much greater drop in performance, from −17.5% for o1-preview, −29.1% for o1-mini, to −65.7% for the worst model tested.
References
OpenAI
ChatGPT
Artificial intelligence | 0.769816 | 0.99307 | 0.764481 |