Field (physics)
In science, a field is a physical quantity, represented by a scalar, vector, or tensor, that has a value for each point in space and time. A weather map, with the surface temperature described by assigning a number to each point on the map, is an example of a scalar field. A surface wind map, assigning an arrow to each point on a map that describes the wind speed and direction at that point, is an example of a vector field, i.e. a 1-dimensional (rank-1) tensor field. Field theories, mathematical descriptions of how field values change in space and time, are ubiquitous in physics. For instance, the electric field is another rank-1 tensor field, while electrodynamics can be formulated in terms of two interacting vector fields at each point in spacetime, or as a single-rank 2-tensor field. In the modern framework of the quantum field theory, even without referring to a test particle, a field occupies space, contains energy, and its presence precludes a classical "true vacuum". This has led physicists to consider electromagnetic fields to be a physical entity, making the field concept a supporting paradigm of the edifice of modern physics. Richard Feynman said, "The fact that the electromagnetic field can possess momentum and energy makes it very real, and [...] a particle makes a field, and a field acts on another particle, and the field has such familiar properties as energy content and momentum, just as particles can have." In practice, the strength of most fields diminishes with distance, eventually becoming undetectable. For instance the strength of many relevant classical fields, such as the gravitational field in Newton's theory of gravity or the electrostatic field in classical electromagnetism, is inversely proportional to the square of the distance from the source (i.e. they follow Gauss's law). A field can be classified as a scalar field, a vector field, a spinor field or a tensor field according to whether the represented physical quantity is a scalar, a vector, a spinor, or a tensor, respectively. A field has a consistent tensorial character wherever it is defined: i.e. a field cannot be a scalar field somewhere and a vector field somewhere else. For example, the Newtonian gravitational field is a vector field: specifying its value at a point in spacetime requires three numbers, the components of the gravitational field vector at that point. Moreover, within each category (scalar, vector, tensor), a field can be either a classical field or a quantum field, depending on whether it is characterized by numbers or quantum operators respectively. In this theory an equivalent representation of field is a field particle, for instance a boson. History To Isaac Newton, his law of universal gravitation simply expressed the gravitational force that acted between any pair of massive objects. When looking at the motion of many bodies all interacting with each other, such as the planets in the Solar System, dealing with the force between each pair of bodies separately rapidly becomes computationally inconvenient. In the eighteenth century, a new quantity was devised to simplify the bookkeeping of all these gravitational forces. This quantity, the gravitational field, gave at each point in space the total gravitational acceleration which would be felt by a small object at that point. 
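To make the bookkeeping idea concrete, the Python sketch below (an illustration added for this discussion, with made-up masses and positions rather than data from the article) sums the inverse-square contributions of several point masses to give the total gravitational acceleration at a chosen point, which is exactly the value the gravitational field assigns to that point.

```python
# Illustrative sketch: the gravitational field as "bookkeeping" for many bodies.
# Masses and positions are arbitrary example values, not physical data.
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

bodies = [
    {"mass": 5.0e24, "pos": (0.0, 0.0, 0.0)},    # hypothetical body 1
    {"mass": 7.0e22, "pos": (4.0e8, 0.0, 0.0)},  # hypothetical body 2
]

def gravitational_field(point, bodies):
    """Total gravitational acceleration (m/s^2) at `point`, summed over all bodies."""
    gx = gy = gz = 0.0
    for b in bodies:
        dx = b["pos"][0] - point[0]   # vector from the field point to the body
        dy = b["pos"][1] - point[1]
        dz = b["pos"][2] - point[2]
        r2 = dx * dx + dy * dy + dz * dz
        r = r2 ** 0.5
        a = G * b["mass"] / r2        # magnitude of this body's contribution
        gx += a * dx / r              # direction: toward the attracting body
        gy += a * dy / r
        gz += a * dz / r
    return (gx, gy, gz)

# Evaluating the field once per point replaces pairwise force bookkeeping.
print(gravitational_field((2.0e8, 1.0e8, 0.0), bodies))
```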
This did not change the physics in any way: it did not matter if all the gravitational forces on an object were calculated individually and then added together, or if all the contributions were first added together as a gravitational field and then applied to an object. The development of the independent concept of a field truly began in the nineteenth century with the development of the theory of electromagnetism. In the early stages, André-Marie Ampère and Charles-Augustin de Coulomb could manage with Newton-style laws that expressed the forces between pairs of electric charges or electric currents. However, it became much more natural to take the field approach and express these laws in terms of electric and magnetic fields; in 1845 Michael Faraday became the first to coin the term "magnetic field". And Lord Kelvin provided a formal definition for a field in 1851. The independent nature of the field became more apparent with James Clerk Maxwell's discovery that waves in these fields, called electromagnetic waves, propagated at a finite speed. Consequently, the forces on charges and currents no longer just depended on the positions and velocities of other charges and currents at the same time, but also on their positions and velocities in the past. Maxwell, at first, did not adopt the modern concept of a field as a fundamental quantity that could independently exist. Instead, he supposed that the electromagnetic field expressed the deformation of some underlying medium—the luminiferous aether—much like the tension in a rubber membrane. If that were the case, the observed velocity of the electromagnetic waves should depend upon the velocity of the observer with respect to the aether. Despite much effort, no experimental evidence of such an effect was ever found; the situation was resolved by the introduction of the special theory of relativity by Albert Einstein in 1905. This theory changed the way the viewpoints of moving observers were related to each other. They became related to each other in such a way that velocity of electromagnetic waves in Maxwell's theory would be the same for all observers. By doing away with the need for a background medium, this development opened the way for physicists to start thinking about fields as truly independent entities. In the late 1920s, the new rules of quantum mechanics were first applied to the electromagnetic field. In 1927, Paul Dirac used quantum fields to successfully explain how the decay of an atom to a lower quantum state led to the spontaneous emission of a photon, the quantum of the electromagnetic field. This was soon followed by the realization (following the work of Pascual Jordan, Eugene Wigner, Werner Heisenberg, and Wolfgang Pauli) that all particles, including electrons and protons, could be understood as the quanta of some quantum field, elevating fields to the status of the most fundamental objects in nature. That said, John Wheeler and Richard Feynman seriously considered Newton's pre-field concept of action at a distance (although they set it aside because of the ongoing utility of the field concept for research in general relativity and quantum electrodynamics). Classical fields There are several examples of classical fields. Classical field theories remain useful wherever quantum properties do not arise, and can be active areas of research. Elasticity of materials, fluid dynamics and Maxwell's equations are cases in point. Some of the simplest physical fields are vector force fields. 
Historically, the first time that fields were taken seriously was with Faraday's lines of force when describing the electric field. The gravitational field was then similarly described. Newtonian gravitation A classical field theory describing gravity is Newtonian gravitation, which describes the gravitational force as a mutual interaction between two masses. Any body with mass M is associated with a gravitational field g which describes its influence on other bodies with mass. The gravitational field of M at a point r in space corresponds to the ratio between force F that M exerts on a small or negligible test mass m located at r and the test mass itself: Stipulating that m is much smaller than M ensures that the presence of m has a negligible influence on the behavior of M. According to Newton's law of universal gravitation, F(r) is given by where is a unit vector lying along the line joining M and m and pointing from M to m. Therefore, the gravitational field of M is The experimental observation that inertial mass and gravitational mass are equal to an unprecedented level of accuracy leads to the identity that gravitational field strength is identical to the acceleration experienced by a particle. This is the starting point of the equivalence principle, which leads to general relativity. Because the gravitational force F is conservative, the gravitational field g can be rewritten in terms of the gradient of a scalar function, the gravitational potential Φ(r): Electromagnetism Michael Faraday first realized the importance of a field as a physical quantity, during his investigations into magnetism. He realized that electric and magnetic fields are not only fields of force which dictate the motion of particles, but also have an independent physical reality because they carry energy. These ideas eventually led to the creation, by James Clerk Maxwell, of the first unified field theory in physics with the introduction of equations for the electromagnetic field. The modern version of these equations is called Maxwell's equations. Electrostatics A charged test particle with charge q experiences a force F based solely on its charge. We can similarly describe the electric field E so that . Using this and Coulomb's law tells us that the electric field due to a single charged particle is The electric field is conservative, and hence can be described by a scalar potential, V(r): Magnetostatics A steady current I flowing along a path ℓ will create a field B, that exerts a force on nearby moving charged particles that is quantitatively different from the electric field force described above. The force exerted by I on a nearby charge q with velocity v is where B(r) is the magnetic field, which is determined from I by the Biot–Savart law: The magnetic field is not conservative in general, and hence cannot usually be written in terms of a scalar potential. However, it can be written in terms of a vector potential, A(r): Electrodynamics In general, in the presence of both a charge density ρ(r, t) and current density J(r, t), there will be both an electric and a magnetic field, and both will vary in time. They are determined by Maxwell's equations, a set of differential equations which directly relate E and B to ρ and J. Alternatively, one can describe the system in terms of its scalar and vector potentials V and A. 
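For reference, the standard textbook forms of the expressions invoked above in the Newtonian gravitation, electrostatics and magnetostatics subsections are restated here; notation follows the text, with r̂ the unit vector pointing from the source toward the test point, Q the source charge, q the test charge, Φ and V the gravitational and electric scalar potentials, and A the magnetic vector potential:

$$\mathbf{g}(\mathbf{r}) = \frac{\mathbf{F}(\mathbf{r})}{m},\qquad \mathbf{F}(\mathbf{r}) = -\frac{GMm}{r^{2}}\,\hat{\mathbf{r}},\qquad \mathbf{g}(\mathbf{r}) = -\frac{GM}{r^{2}}\,\hat{\mathbf{r}},\qquad \mathbf{g} = -\nabla\Phi,$$

$$\mathbf{F} = q\mathbf{E},\qquad \mathbf{E}(\mathbf{r}) = \frac{1}{4\pi\varepsilon_{0}}\frac{Q}{r^{2}}\,\hat{\mathbf{r}},\qquad \mathbf{E} = -\nabla V,$$

$$\mathbf{F} = q\,\mathbf{v}\times\mathbf{B},\qquad \mathbf{B}(\mathbf{r}) = \frac{\mu_{0}I}{4\pi}\int\frac{\mathrm{d}\boldsymbol{\ell}\times\hat{\mathbf{r}}}{r^{2}},\qquad \mathbf{B} = \nabla\times\mathbf{A},$$

and in electrodynamics proper the fields are obtained from the scalar and vector potentials as

$$\mathbf{E} = -\nabla V - \frac{\partial\mathbf{A}}{\partial t},\qquad \mathbf{B} = \nabla\times\mathbf{A}.$$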
A set of integral equations known as retarded potentials allow one to calculate V and A from ρ and J, and from there the electric and magnetic fields are determined via the relations At the end of the 19th century, the electromagnetic field was understood as a collection of two vector fields in space. Nowadays, one recognizes this as a single antisymmetric 2nd-rank tensor field in spacetime. Gravitation in general relativity Einstein's theory of gravity, called general relativity, is another example of a field theory. Here the principal field is the metric tensor, a symmetric 2nd-rank tensor field in spacetime. This replaces Newton's law of universal gravitation. Waves as fields Waves can be constructed as physical fields, due to their finite propagation speed and causal nature when a simplified physical model of an isolated closed system is set . They are also subject to the inverse-square law. For electromagnetic waves, there are optical fields, and terms such as near- and far-field limits for diffraction. In practice though, the field theories of optics are superseded by the electromagnetic field theory of Maxwell Gravity waves are waves in the surface of water, defined by a height field. Fluid dynamics Fluid dynamics has fields of pressure, density, and flow rate that are connected by conservation laws for energy and momentum. The mass continuity equation is a continuity equation, representing the conservation of mass and the Navier–Stokes equations represent the conservation of momentum in the fluid, found from Newton's laws applied to the fluid, if the density , pressure , deviatoric stress tensor of the fluid, as well as external body forces b, are all given. The flow velocity u is the vector field to solve for. Elasticity Linear elasticity is defined in terms of constitutive equations between tensor fields, where are the components of the 3x3 Cauchy stress tensor, the components of the 3x3 infinitesimal strain and is the elasticity tensor, a fourth-rank tensor with 81 components (usually 21 independent components). Thermodynamics and transport equations Assuming that the temperature T is an intensive quantity, i.e., a single-valued, continuous and differentiable function of three-dimensional space (a scalar field), i.e., that , then the temperature gradient is a vector field defined as . In thermal conduction, the temperature field appears in Fourier's law, where q is the heat flux field and k the thermal conductivity. Temperature and pressure gradients are also important for meteorology. Quantum fields It is now believed that quantum mechanics should underlie all physical phenomena, so that a classical field theory should, at least in principle, permit a recasting in quantum mechanical terms; success yields the corresponding quantum field theory. For example, quantizing classical electrodynamics gives quantum electrodynamics. Quantum electrodynamics is arguably the most successful scientific theory; experimental data confirm its predictions to a higher precision (to more significant digits) than any other theory. The two other fundamental quantum field theories are quantum chromodynamics and the electroweak theory. In quantum chromodynamics, the color field lines are coupled at short distances by gluons, which are polarized by the field and line up with it. This effect increases within a short distance (around 1 fm from the vicinity of the quarks) making the color force increase within a short distance, confining the quarks within hadrons. 
As the field lines are pulled together tightly by gluons, they do not "bow" outwards as much as an electric field between electric charges. These three quantum field theories can all be derived as special cases of the so-called standard model of particle physics. General relativity, the Einsteinian field theory of gravity, has yet to be successfully quantized. However an extension, thermal field theory, deals with quantum field theory at finite temperatures, something seldom considered in quantum field theory. In BRST theory one deals with odd fields, e.g. Faddeev–Popov ghosts. There are different descriptions of odd classical fields both on graded manifolds and supermanifolds. As above with classical fields, it is possible to approach their quantum counterparts from a purely mathematical view using similar techniques as before. The equations governing the quantum fields are in fact PDEs (specifically, relativistic wave equations (RWEs)). Thus one can speak of Yang–Mills, Dirac, Klein–Gordon and Schrödinger fields as being solutions to their respective equations. A possible problem is that these RWEs can deal with complicated mathematical objects with exotic algebraic properties (e.g. spinors are not tensors, so may need calculus for spinor fields), but these in theory can still be subjected to analytical methods given appropriate mathematical generalization. Field theory Field theory usually refers to a construction of the dynamics of a field, i.e. a specification of how a field changes with time or with respect to other independent physical variables on which the field depends. Usually this is done by writing a Lagrangian or a Hamiltonian of the field, and treating it as a classical or quantum mechanical system with an infinite number of degrees of freedom. The resulting field theories are referred to as classical or quantum field theories. The dynamics of a classical field are usually specified by the Lagrangian density in terms of the field components; the dynamics can be obtained by using the action principle. It is possible to construct simple fields without any prior knowledge of physics using only mathematics from multivariable calculus, potential theory and partial differential equations (PDEs). For example, scalar PDEs might consider quantities such as amplitude, density and pressure fields for the wave equation and fluid dynamics; temperature/concentration fields for the heat/diffusion equations. Outside of physics proper (e.g., radiometry and computer graphics), there are even light fields. All these previous examples are scalar fields. Similarly for vectors, there are vector PDEs for displacement, velocity and vorticity fields in (applied mathematical) fluid dynamics, but vector calculus may now be needed in addition, being calculus for vector fields (as are these three quantities, and those for vector PDEs in general). More generally problems in continuum mechanics may involve for example, directional elasticity (from which comes the term tensor, derived from the Latin word for stretch), complex fluid flows or anisotropic diffusion, which are framed as matrix-tensor PDEs, and then require matrices or tensor fields, hence matrix or tensor calculus. The scalars (and hence the vectors, matrices and tensors) can be real or complex as both are fields in the abstract-algebraic/ring-theoretic sense. In a general setting, classical fields are described by sections of fiber bundles and their dynamics is formulated in the terms of jet manifolds (covariant classical field theory). 
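As a concrete instance of specifying dynamics by a Lagrangian density and the action principle, as described above, the free real scalar field is the standard textbook example (stated here purely for illustration, in units with ħ = c = 1):

$$\mathcal{L} = \tfrac{1}{2}\,\partial_{\mu}\varphi\,\partial^{\mu}\varphi - \tfrac{1}{2}m^{2}\varphi^{2}, \qquad \partial_{\mu}\frac{\partial\mathcal{L}}{\partial(\partial_{\mu}\varphi)} - \frac{\partial\mathcal{L}}{\partial\varphi} = 0 \;\;\Longrightarrow\;\; \left(\partial_{\mu}\partial^{\mu} + m^{2}\right)\varphi = 0,$$

which is the Klein–Gordon equation, one of the relativistic wave equations mentioned above.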
In modern physics, the most often studied fields are those that model the four fundamental forces which one day may lead to the Unified Field Theory. Symmetries of fields A convenient way of classifying a field (classical or quantum) is by the symmetries it possesses. Physical symmetries are usually of two types: Spacetime symmetries Fields are often classified by their behaviour under transformations of spacetime. The terms used in this classification are: scalar fields (such as temperature) whose values are given by a single variable at each point of space. This value does not change under transformations of space. vector fields (such as the magnitude and direction of the force at each point in a magnetic field) which are specified by attaching a vector to each point of space. The components of this vector transform between themselves contravariantly under rotations in space. Similarly, a dual (or co-) vector field attaches a dual vector to each point of space, and the components of each dual vector transform covariantly. tensor fields, (such as the stress tensor of a crystal) specified by a tensor at each point of space. Under rotations in space, the components of the tensor transform in a more general way which depends on the number of covariant indices and contravariant indices. spinor fields (such as the Dirac spinor) arise in quantum field theory to describe particles with spin which transform like vectors except for one of their components; in other words, when one rotates a vector field 360 degrees around a specific axis, the vector field turns to itself; however, spinors would turn to their negatives in the same case. Internal symmetries Fields may have internal symmetries in addition to spacetime symmetries. In many situations, one needs fields which are a list of spacetime scalars: (φ1, φ2, ... φN). For example, in weather prediction these may be temperature, pressure, humidity, etc. In particle physics, the color symmetry of the interaction of quarks is an example of an internal symmetry, that of the strong interaction. Other examples are isospin, weak isospin, strangeness and any other flavour symmetry. If there is a symmetry of the problem, not involving spacetime, under which these components transform into each other, then this set of symmetries is called an internal symmetry. One may also make a classification of the charges of the fields under internal symmetries. Statistical field theory Statistical field theory attempts to extend the field-theoretic paradigm toward many-body systems and statistical mechanics. As above, it can be approached by the usual infinite number of degrees of freedom argument. Much like statistical mechanics has some overlap between quantum and classical mechanics, statistical field theory has links to both quantum and classical field theories, especially the former with which it shares many methods. One important example is mean field theory. Continuous random fields Classical fields as above, such as the electromagnetic field, are usually infinitely differentiable functions, but they are in any case almost always twice differentiable. In contrast, generalized functions are not continuous. When dealing carefully with classical fields at finite temperature, the mathematical methods of continuous random fields are used, because thermally fluctuating classical fields are nowhere differentiable. Random fields are indexed sets of random variables; a continuous random field is a random field that has a set of functions as its index set. 
In particular, it is often mathematically convenient to take a continuous random field to have a Schwartz space of functions as its index set, in which case the continuous random field is a tempered distribution. We can think about a continuous random field, in a (very) rough way, as an ordinary function that is almost everywhere, but such that when we take a weighted average of all the infinities over any finite region, we get a finite result. The infinities are not well-defined; but the finite values can be associated with the functions used as the weight functions to get the finite values, and that can be well-defined. We can define a continuous random field well enough as a linear map from a space of functions into the real numbers. See also Conformal field theory Covariant Hamiltonian field theory Field strength Lagrangian and Eulerian specification of a field Scalar field theory Velocity field Notes References Further reading Landau, Lev D. and Lifshitz, Evgeny M. (1971). Classical Theory of Fields (3rd ed.). London: Pergamon. . Vol. 2 of the Course of Theoretical Physics. External links Particle and Polymer Field Theories Mathematical physics Physical quantities
Derek Muller
Derek Alexander Muller (born 9 November 1982) is a South African–Australian science communicator and media personality, best known for his YouTube channel Veritasium, which has over 16 million subscribers and 2.8 billion views as of October 2024. Early life and education Muller was born to South African parents in Traralgon, Victoria, Australia. His family moved to Vancouver, British Columbia, Canada, when he was 18 months old. In 2000, Muller graduated from West Vancouver Secondary School. In 2004, Muller graduated from Queen's University in Kingston, Ontario, with a Bachelor of Applied Science in Engineering Physics. Muller moved to Australia to study film-making; however, he instead enrolled in a Ph.D. in physics education research at the University of Sydney, which he completed in 2008 with a thesis titled Designing Effective Multimedia for Physics Education. Career Muller has been listed as a team member of the ABC's television program Catalyst since 2008. During his Ph.D. program, he taught at a tutoring company, where he became the full-time Science Head after completing his Ph.D. in 2008. He quit the job at the end of 2010. In 2011, Muller created his YouTube channel "Veritasium" (see section below), which became his main source of livelihood within a few years. Since 2011, Muller has continued to appear on Catalyst, reporting scientific stories from around the globe, and on Australian television network Ten as the 'Why Guy' on the Breakfast program. In May 2012, he gave a TEDxSydney talk on the subject of his thesis. He presented the documentary Uranium – Twisting the Dragon's Tail, which aired in July–August 2015 on several public television stations around the world and won the Eureka Prize for Science Journalism. On 21 September 2015, Muller hosted the Google Science Fair Awards Celebration for that year. Muller has also won the Australian Department of Innovation Nanotechnology Film Competition and the 2013 Australian Webstream Award for "Best Educational & Lifestyle Series". Starting in April 2017, he appeared as a correspondent on the Netflix series Bill Nye Saves the World. Muller presented the film Vitamania: The Sense and Nonsense of Vitamins, a documentary by Genepool Productions, released in August 2018. The film answers questions about vitamins and the use of dietary vitamin supplements. Muller's works have been featured in Scientific American, Wired, Gizmodo, and io9. Veritasium and other YouTube channels In January 2011, Muller created the educational science channel Veritasium on YouTube, the focus of which is "addressing counter-intuitive concepts in science, usually beginning by discussing ideas with members of the public". The videos range in style from interviews with experts, such as 2011 Physics Nobel Laureate Brian Schmidt, to science experiments, dramatisations, songs, and, as a hallmark of the channel, interviews with the public to uncover misconceptions about science. The name Veritasium is a combination of the Latin word for truth, Veritas, and the suffix common to many elements, -ium. This creates Veritasium, an "element of truth", a play on the popular phrase and a reference to chemical elements. In its logo, which has been a registered trade mark since 2016, the number "42.0" resembles an element on the periodic table. The number was chosen because it is "The Answer to the Ultimate Question of Life, The Universe, and Everything" in Douglas Adams' famous novel The Hitchhiker's Guide to the Galaxy. 
In July 2012, Muller created a second YouTube channel, 2veritasium. Muller used the new platform to produce editorial videos that discuss such topics as filmmaking, showcasing behind-the-scenes footage, and for viewer reactions to popular Veritasium videos. In 2017, Muller began uploading videos on his newest channel, Sciencium, which is dedicated to videos on recent and historical discoveries in science. In 2021, Muller hosted Pindrop, a YouTube Original series exploring unusual places around the world, as seen from Google Earth. Only one episode exploring potash evaporation ponds in Utah was released before YouTube cancelled all original production in 2022. Reception Veritasium videos have received critical acclaim. Two early successful Veritasium videos demonstrate the physics of a falling Slinky toy. At 2012 Science Online, the video "Mission Possible: Graphene" won the Cyberscreen Science Film Festival and was therefore featured on Scientific American as the video of the week. A video debunking the common misconception that the moon is closer than it is, was picked up by CBS News. After a video was posted in which Muller is shown driving a wind-powered car, equipped with a huge spinning propeller, faster than the wind, UCLA physics professor Alexander Kusenko disagreed with the claim that sailing downwind faster than wind was possible within the laws of physics, and made a $10,000 bet with Muller that he could not demonstrate that the apparent greater speed was not due to other, incidental factors. Muller took up the bet, and the signing of a wager agreement by the parties was witnessed by Bill Nye and Neil deGrasse Tyson. In a subsequent video, Muller demonstrated the effect with a model cart under conditions ruling out extraneous effects, but Muller did admit he could have done a better job at explaining the phenomenon in the first video. Kusenko conceded the bet of $10,000, which was then donated to charity. Personal life and family After Derek Muller's parents, Anthony and Shirley, married in South Africa, they moved to Vancouver, British Columbia, Canada, where his two sisters were born (Kirstie and Marilouise). The family moved to Australia, where he was born, after his father got a job in Traralgon at a pulp and paper mill. When he was 18 months old, the family moved back to Vancouver. After Muller moved to Los Angeles, United States, he met Raquel Nuno, a planetary science Ph.D. student whom he married. They have three children (2021). Footnotes References External links Australian video bloggers Online edutainment Living people 1982 births Australian emigrants to Canada People from Traralgon YouTubers from Vancouver Queen's University at Kingston alumni University of Sydney alumni Australian Internet celebrities Education-related YouTube channels Science-related YouTube channels Canadian people of South African descent Australian people of South African descent Australian YouTubers Canadian YouTubers Canadian video bloggers
Relativistic speed
Relativistic speed refers to a speed at which relativistic effects become significant for the desired accuracy of measurement of the phenomenon being observed. Relativistic effects are the discrepancies between values calculated by models that do and do not take relativity into account. Related terms are velocity, rapidity, and celerity (proper velocity). Speed is a scalar, the magnitude of the velocity vector, which in relativity is the four-velocity and in three-dimensional Euclidean space a three-velocity. Speed is empirically measured as an average speed, although devices in common use can estimate speed over very small intervals and closely approximate instantaneous speed. Non-relativistic sources of discrepancy include cosine error, which occurs in speed-detection devices when only one scalar component of the three-velocity is measured, and the Doppler effect, which may affect observations of wavelength and frequency. Relativistic effects are highly non-linear and are insignificant for everyday purposes because the Newtonian model closely approximates the relativistic model. In special relativity the Lorentz factor is a measure of the time dilation, length contraction and relativistic mass increase of a moving object. See also Lorentz factor Relative velocity Relativistic beaming Relativistic jet Relativistic mass Relativistic particle Relativistic plasma Relativistic wave equations Special relativity Ultrarelativistic limit References Speed Velocity
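Whether a speed counts as "relativistic" depends on the accuracy required, as noted above. The Python sketch below, added here as an illustration, evaluates the Lorentz factor γ = 1/√(1 − v²/c²) at several speeds to show where the deviation from the Newtonian value γ = 1 crosses a chosen tolerance; the 1% threshold is an assumption for illustration, not a standard definition.

```python
import math

def lorentz_factor(v, c=299_792_458.0):
    """Lorentz factor gamma = 1 / sqrt(1 - v^2/c^2) for speed v in m/s."""
    beta = v / c
    return 1.0 / math.sqrt(1.0 - beta ** 2)

# Around beta ~ 0.14 the deviation from gamma = 1 passes an (assumed) 1% tolerance.
for beta in (0.01, 0.1, 0.14, 0.5, 0.9):
    gamma = lorentz_factor(beta * 299_792_458.0)
    print(f"v = {beta:4.2f} c  ->  gamma = {gamma:.5f}  (deviation {gamma - 1:.3%})")
```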
Volcanism
Volcanism, vulcanism, volcanicity, or volcanic activity is the phenomenon where solids, liquids, gases, and their mixtures erupt to the surface of a solid-surface astronomical body such as a planet or a moon. It is caused by the presence of a heat source, usually internally generated, inside the body; the heat is generated by various processes, such as radioactive decay or tidal heating. This heat partially melts solid material in the body or turns material into gas. The mobilized material rises through the body's interior and may break through the solid surface. Cause of volcanism For volcanism to occur, the temperature of the mantle must have risen to about half its melting point. At this point, the mantle's viscosity will have dropped to about 10²¹ pascal-seconds. When large-scale melting occurs, the viscosity rapidly falls to 10³ pascal-seconds or even less, increasing the heat transport rate a million-fold. The occurrence of volcanism is partially due to the fact that melted material tends to be more mobile and less dense than the material from which it was produced, which can cause it to rise to the surface. Heat source There are multiple ways to generate the heat needed for volcanism. Volcanism on outer solar system moons is powered mainly by tidal heating. Tidal heating is caused by the deformation of a body's shape due to mutual gravitational attraction, which generates heat. Earth experiences tidal heating from the Moon, deforming by up to 1 metre (3 feet), but this does not make up a major portion of Earth's total heat. During a planet's formation, it would have experienced heating from impacts from planetesimals, which would have dwarfed even the asteroid impact that caused the extinction of the dinosaurs. This heating could trigger differentiation, further heating the planet. The larger a body is, the slower it loses heat. In larger bodies, for example Earth, this heat, known as primordial heat, still makes up much of the body's internal heat, but the Moon, which is smaller than Earth, has lost most of this heat. Another heat source is radiogenic heat, caused by radioactive decay. The decay of aluminium-26 would have significantly heated planetary embryos, but due to its short half-life (less than a million years), any traces of it have long since vanished. There are small traces of unstable isotopes in common minerals, and all the terrestrial planets and the Moon experience some of this heating. The icy bodies of the outer solar system experience much less of this heat because they tend not to be very dense and not to have much silicate material (radioactive elements concentrate in silicates). On Neptune's moon Triton, and possibly on Mars, cryogeyser activity takes place; in that case the source of heat is external (heat from the Sun) rather than internal. Melting methods Decompression melting Decompression melting happens when solid material from deep beneath the body rises upwards. Pressure decreases as the material rises, and so does the melting point. So, a rock that is solid at a given pressure and temperature can become liquid if the pressure, and thus the melting point, decreases even if the temperature stays constant. However, in the case of water, increasing pressure decreases the melting point until a pressure of 0.208 GPa is reached, after which the melting point increases with pressure. Flux melting Flux melting occurs when the melting point is lowered by the addition of volatiles, for example, water or carbon dioxide. 
Like decompression melting, it is not caused by an increase in temperature, but rather by a decrease in melting point. Formation of cryomagma reservoirs Cryovolcanism, instead of originating in a uniform subsurface ocean, may instead take place from discrete liquid reservoirs. The first way these can form is a plume of warm ice welling up and then sinking back down, forming a convection current. A model developed to investigate the effects of this on Europa found that energy from tidal heating became focused in these plumes, allowing melting to occur in these shallow depths as the plume spreads laterally (horizontally). The next is a switch from vertical to horizontal propagation of a fluid filled crack. Another mechanism is heating of ice from release of stress through lateral motion of fractures in the ice shell penetrating it from the surface, and even heating from large impacts can create such reservoirs. Ascent of melts Diapirs When material of a planetary body begins to melt, the melting first occurs in small pockets in certain high energy locations, for example grain boundary intersections and where different crystals react to form eutectic liquid, that initially remain isolated from one another, trapped inside rock. If the contact angle of the melted material allows the melt to wet crystal faces and run along grain boundaries, the melted material will accumulate into larger quantities. On the other hand, if the angle is greater than about 60 degrees, much more melt must form before it can separate from its parental rock. Studies of rocks on Earth suggest that melt in hot rocks quickly collects into pockets and veins that are much larger than the grain size, in contrast to the model of rigid melt percolation. Melt, instead of uniformly flowing out of source rock, flows out through rivulets which join to create larger veins. Under the influence of buoyancy, the melt rises. Diapirs may also form in non-silicate bodies, playing a similar role in moving warm material towards the surface. Dikes A dike is a vertical fluid-filled crack, from a mechanical standpoint it is a water filled crevasse turned upside down. As magma rises into the vertical crack, the low density of the magma compared to the wall rock means that the pressure falls less rapidly than in the surrounding denser rock. If the average pressure of the magma and the surrounding rock are equal, the pressure in the dike exceeds that of the enclosing rock at the top of the dike, and the pressure of the rock is greater than that of the dike at its bottom. So the magma thus pushes the crack upwards at its top, but the crack is squeezed closed at its bottom due to an elastic reaction (similar to the bulge next to a person sitting down on a springy sofa). Eventually, the tail gets so narrow it nearly pinches off, and no more new magma will rise into the crack. The crack continues to ascend as an independent pod of magma. Standpipe model This model of volcanic eruption posits that magma rises through a rigid open channel, in the lithosphere and settles at the level of hydrostatic equilibrium. Despite how it explains observations well (which newer models cannot), such as an apparent concordance of the elevation of volcanoes near each other, it cannot be correct and is now discredited, because the lithosphere thickness derived from it is too large for the assumption of a rigid open channel to hold. 
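The pressure argument in the dike description above can be made quantitative with a back-of-the-envelope estimate: pressure in a static column falls as ρgh, so a magma column that is less dense than the surrounding rock becomes overpressured at its top relative to the wall rock. The Python sketch below illustrates this with assumed, purely illustrative densities and dike height; none of these numbers come from the article.

```python
# Back-of-the-envelope dike-top overpressure (illustrative values only).
g = 9.81            # gravitational acceleration, m/s^2
rho_rock = 2900.0   # assumed wall-rock density, kg/m^3
rho_magma = 2600.0  # assumed magma density, kg/m^3
height = 2000.0     # assumed vertical extent of the dike, m

# If the magma and wall-rock pressures match at the bottom of the dike,
# the slower pressure drop in the lighter magma column leaves the top
# overpressured by (rho_rock - rho_magma) * g * height.
overpressure_top = (rho_rock - rho_magma) * g * height
print(f"Magma overpressure at dike top: {overpressure_top / 1e6:.1f} MPa")
```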
Cryovolcanic melt ascent Unlike silicate volcanism, where melt can rise by its own buoyancy until it reaches the shallow crust, in cryovolcanism, the water (cryomagmas tend to be water based) is denser than the ice above it. One way to allow cryomagma to reach the surface is to make the water buoyant, by making the water less dense, either through the presence of other compounds that reverse negative buoyancy, or with the addition of exsolved gas bubbles in the cryomagma that were previously dissolved into it (that makes the cryomagma less dense), or with the presence of a densifying agent in the ice shell. Another is to pressurise the fluid to overcome negative buoyancy and make it reach the surface. When the ice shell above a subsurface ocean thickens, it can pressurise the entire ocean (in cryovolcanism, frozen water or brine is less dense than in liquid form). When a reservoir of liquid partially freezes, the remaining liquid is pressurised in the same way. For a crack in the ice shell to propagate upwards, the fluid in it must have positive buoyancy or external stresses must be strong enough to break through the ice. External stresses could include those from tides or from overpressure due to freezing as explained above. There is yet another possible mechanism for ascent of cryovolcanic melts. If a fracture with water in it reaches an ocean or subsurface fluid reservoir, the water would rise to its level of hydrostatic equilibrium, at about nine-tenths of the way to the surface. Tides which induce compression and tension in the ice shell may pump the water farther up. A 1988 article proposed a possibility for fractures propagating upwards from the subsurface ocean of Jupiter's moon Europa. It proposed that a fracture propagating upwards would possess a low pressure zone at its tip, allowing volatiles dissolved within the water to exsolve into gas. The elastic nature of the ice shell would likely prevent the fracture reaching the surface, and the crack would instead pinch off, enclosing the gas and liquid. The gas would increase buoyancy and could allow the crack to reach the surface. Even impacts can create conditions that allow for enhanced ascent of magma. An impact may remove the top few kilometres of crust, and pressure differences caused by the difference in height between the basin and the height of the surrounding terrain could allow eruption of magma which otherwise would have stayed beneath the surface. A 2011 article showed that there would be zones of enhanced magma ascent at the margins of an impact basin. Not all of these mechanisms, and maybe even none, operate on a given body. Types of volcanism Silicate volcanism Silicate volcanism occurs where silicate materials are erupted. Silicate lava flows, like those found on Earth, solidify at about 1000 degrees Celsius. Mud volcanoes A mud volcano is formed when fluids and gases under pressure erupt to the surface, bringing mud with them. This pressure can be caused by the weight of overlying sediments over the fluid which pushes down on the fluid, preventing it from escaping, by fluid being trapped in the sediment, migrating from deeper sediment into other sediment or being made from chemical reactions in the sediment. They often erupt quietly, but sometimes they erupt flammable gases like methane. Cryovolcanism Cryovolcanism is the eruption of volatiles into an environment below their freezing point. 
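The "about nine-tenths of the way to the surface" figure quoted above follows from hydrostatic balance between a water column and the less dense ice shell it rises through. The short sketch below reproduces the estimate using nominal densities for ice and liquid water; the shell thickness is an assumed value used only for illustration.

```python
# Hydrostatic rise of water in a fracture through an ice shell (illustrative).
rho_ice = 917.0            # nominal density of ice, kg/m^3
rho_water = 1000.0         # nominal density of liquid water, kg/m^3
shell_thickness_km = 20.0  # assumed ice-shell thickness, km

# The water column balances the weight of the ice shell when it has risen
# a fraction rho_ice / rho_water of the way to the surface.
rise_fraction = rho_ice / rho_water
print(f"Water rises to {rise_fraction:.0%} of the shell thickness "
      f"({rise_fraction * shell_thickness_km:.1f} km of {shell_thickness_km:.0f} km); "
      "the remaining distance must be covered by tidal pumping, exsolved gas "
      "or ocean overpressure.")
```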
The processes behind it are different from those of silicate volcanism because the cryomagma (which is usually water-based) is normally denser than its surroundings, meaning it cannot rise by its own buoyancy. Sulfur Sulfur lavas behave differently from silicate ones. First, sulfur has a low melting point of about 120 degrees Celsius. Also, after cooling down to about 175 degrees Celsius the lava rapidly loses viscosity, unlike silicate lavas such as those found on Earth. Lava types When magma erupts onto a planet's surface, it is termed lava. Viscous lavas form short, stubby glass-rich flows. These usually have a wavy solidified surface texture. More fluid lavas have solidified surface textures that volcanologists classify into four types. Pillow lava forms when a trigger, often lava making contact with water, causes a lava flow to cool rapidly. This splinters the surface of the lava, and the magma then collects into sacks that often pile up in front of the flow, forming a structure called a pillow. A’a lava has a rough, spiny surface made of clasts of lava called clinkers. Block lava is another type of lava, with less jagged fragments than in a’a lava. Pahoehoe lava is by far the most common lava type, both on Earth and probably on the other terrestrial planets. It has a smooth surface, with mounds, hollows and folds. Gentle/explosive activity A volcanic eruption could just be a simple outpouring of material onto the surface of a planet, but eruptions usually involve a complex mixture of solids, liquids and gases which behave in equally complex ways. Some types of explosive eruptions can release about a quarter as much energy as an equivalent mass of TNT. Causes of explosive activity Exsolution of volatiles Volcanic eruptions on Earth have been consistently observed to progress from erupting gas-rich material to gas-depleted material, although an eruption may alternate between erupting gas-rich and gas-depleted material multiple times. This can be explained by the enrichment of magma at the top of a dike by gas, which is released when the dike breaches the surface, followed by magma from lower down that did not get enriched with gas. The reason the dissolved gas in the magma separates from it when the magma nears the surface is the effect of temperature and pressure on gas solubility. Pressure increases gas solubility, and if a liquid with dissolved gas in it depressurises, the gas will tend to exsolve (or separate) from the liquid. An example of this is what happens when a bottle of carbonated drink is quickly opened: when the seal is opened, pressure decreases and bubbles of carbon dioxide gas appear throughout the liquid. Fluid magmas erupt quietly. Any gas that has exsolved from the magma easily escapes even before it reaches the surface. However, in viscous magmas, gases remain trapped in the magma even after they have exsolved, forming bubbles inside the magma. These bubbles enlarge as the magma nears the surface due to the dropping pressure, and the magma expands substantially in volume. This gives volcanoes erupting such material a tendency to ‘explode’, although instead of the pressure increase associated with an explosion, pressure always decreases in a volcanic eruption. Generally, explosive cryovolcanism is driven by exsolution of volatiles that were previously dissolved in the cryomagma, similar to what happens in explosive silicate volcanism as seen on Earth, which is what is mainly covered below. 
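The bubble growth described above can be illustrated with the ideal-gas relation: at roughly constant temperature a bubble's volume scales inversely with pressure, so gas exsolved at depth expands enormously as the magma approaches the surface. A minimal sketch follows, assuming ideal-gas behaviour and a simple lithostatic pressure profile; the depths and densities are illustrative numbers, not values from the article.

```python
# Isothermal expansion of an exsolved gas bubble during magma ascent
# (illustrative; assumes ideal-gas behaviour and a lithostatic pressure profile).
g = 9.81
rho_magma = 2500.0   # assumed magma/crust density, kg/m^3
p_surface = 1.0e5    # atmospheric pressure, Pa

def pressure_at_depth(depth_m):
    """Approximate lithostatic pressure at a given depth."""
    return p_surface + rho_magma * g * depth_m

v0 = 1.0             # bubble volume at depth, arbitrary units
depth0 = 3000.0      # assumed depth where the bubble exsolves, m

for depth in (3000.0, 1000.0, 100.0, 0.0):
    # Boyle's law at constant temperature: p0 * V0 = p * V
    v = v0 * pressure_at_depth(depth0) / pressure_at_depth(depth)
    print(f"depth {depth:6.0f} m: relative bubble volume = {v:7.1f}")
```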
Physics of a volatile-driven explosive eruption Silica-rich magmas cool beneath the surface before they erupt. As they do this, bubbles exsolve from the magma. As the magma nears the surface, the bubbles and thus the magma increase in volume. The resulting pressure eventually breaks through the surface, and the release of pressure causes more gas to exsolve, doing so explosively. The gas may expand at hundreds of metres per second, expanding upward and outward. As the eruption progresses, a chain reaction causes the magma to be ejected at higher and higher speeds. Volcanic ash formation The violently expanding gas disperses and breaks up magma, forming a colloid of gas and magma called volcanic ash. The cooling of the gas in the ash as it expands chills the magma fragments, often forming tiny glass shards recognisable as portions of the walls of former liquid bubbles. In more fluid magmas the bubble walls may have time to reform into spherical liquid droplets. The final state of the colloids depends strongly on the ratio of liquid to gas. Gas-poor magmas end up cooling into rocks with small cavities, becoming vesicular lava. Gas-rich magmas cool to form rocks with cavities that nearly touch, with an average density less than that of water, forming pumice. Meanwhile, other material can be accelerated with the gas, becoming volcanic bombs. These can travel with so much energy that large ones can create craters when they hit the ground. Pyroclastic flows A colloid of volcanic gas and magma can form as a density current called a pyroclastic flow. This occurs when erupted material falls back to the surface. The colloid is somewhat fluidised by the gas, allowing it to spread. Pyroclastic flows can often climb over obstacles, and devastate human life. Pyroclastic flows are a common feature at explosive volcanoes on Earth. Pyroclastic flows have been found on Venus, for example at the Dione Regio volcanoes. Phreatic eruption A phreatic eruption can occur when hot water under pressure is depressurised. Depressurisation reduces the boiling point of the water, so when depressurised the water suddenly boils. Or it may happen when groundwater is suddenly heated, flashing to steam suddenly. When water turns into steam in a phreatic eruption, it expands at supersonic speeds, up to 1,700 times its original volume. This can be enough to shatter solid rock, and hurl rock fragments hundreds of metres. Phreatomagmatic eruption A phreatomagmatic eruption occurs when hot magma makes contact with water, creating an explosion. Clathrate hydrates One mechanism for explosive cryovolcanism is cryomagma making contact with clathrate hydrates. Clathrate hydrates, if exposed to warm temperatures, readily decompose. A 1982 article pointed out the possibility that the production of pressurised gas upon destabilisation of clathrate hydrates making contact with warm rising magma could produce an explosion that breaks through the surface, resulting in explosive cryovolcanism. Water vapor in a vacuum If a fracture reaches the surface of an icy body and the column of rising water is exposed to the near-vacuum of the surface of most icy bodies, it will immediately start to boil, because its vapor pressure is much more than the ambient pressure. Not only that, but any volatiles in the water will exsolve. The combination of these processes will release droplets and vapor, which can rise up the fracture, creating a plume. This is thought to be partially responsible for Enceladus's ice plumes. 
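The "up to 1,700 times its original volume" figure for water flashing to steam can be checked with an ideal-gas estimate of the specific volume of steam at atmospheric pressure compared with that of liquid water. The sketch below does the arithmetic using standard constants; it is an illustrative check, not a derivation from the article.

```python
# Rough check of the ~1700x expansion of water flashing to steam at 1 atm.
R = 8.314          # gas constant, J mol^-1 K^-1
M = 0.018015       # molar mass of water, kg/mol
T = 373.15         # boiling point at 1 atm, K
P = 101_325.0      # atmospheric pressure, Pa

v_steam = R * T / (P * M)   # ideal-gas specific volume of steam, m^3/kg
v_liquid = 1.0 / 1000.0     # specific volume of liquid water, ~0.001 m^3/kg

print(f"steam specific volume ~ {v_steam:.2f} m^3/kg")
print(f"expansion ratio       ~ {v_steam / v_liquid:.0f}x")
```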
Occurrence Earth On Earth, volcanoes are most often found where tectonic plates are diverging or converging, and because most of Earth's plate boundaries are underwater, most volcanoes are found underwater. For example, a mid-ocean ridge, such as the Mid-Atlantic Ridge, has volcanoes caused by divergent tectonic plates whereas the Pacific Ring of Fire has volcanoes caused by convergent tectonic plates. Volcanoes can also form where there is stretching and thinning of the crust's plates, such as in the East African Rift and the Wells Gray-Clearwater volcanic field and Rio Grande rift in North America. Volcanism away from plate boundaries has been postulated to arise from upwelling diapirs from the core–mantle boundary, deep within Earth. This results in hotspot volcanism, of which the Hawaiian hotspot is an example. Volcanoes are usually not created where two tectonic plates slide past one another. In 1912–1952, in the Northern Hemisphere, studies show that within this time, winters were warmer due to no massive eruptions that had taken place. These studies demonstrate how these eruptions can cause changes within the Earth's atmosphere. Large eruptions can affect atmospheric temperature as ash and droplets of sulfuric acid obscure the Sun and cool Earth's troposphere. Historically, large volcanic eruptions have been followed by volcanic winters which have caused catastrophic famines. Moon Earth's Moon has no large volcanoes and no current volcanic activity, although recent evidence suggests it may still possess a partially molten core. However, the Moon does have many volcanic features such as maria (the darker patches seen on the Moon), rilles and domes. Venus The planet Venus has a surface that is 90% basalt, indicating that volcanism played a major role in shaping its surface. The planet may have had a major global resurfacing event about 500 million years ago, from what scientists can tell from the density of impact craters on the surface. Lava flows are widespread and forms of volcanism not present on Earth occur as well. Changes in the planet's atmosphere and observations of lightning have been attributed to ongoing volcanic eruptions, although there is no confirmation of whether or not Venus is still volcanically active. However, radar sounding by the Magellan probe revealed evidence for comparatively recent volcanic activity at Venus's highest volcano Maat Mons, in the form of ash flows near the summit and on the northern flank. However, the interpretation of the flows as ash flows has been questioned. Mars There are several extinct volcanoes on Mars, four of which are vast shield volcanoes far bigger than any on Earth. They include Arsia Mons, Ascraeus Mons, Hecates Tholus, Olympus Mons, and Pavonis Mons. These volcanoes have been extinct for many millions of years, but the European Mars Express spacecraft has found evidence that volcanic activity may have occurred on Mars in the recent past as well. Moons of Jupiter Io Jupiter's moon Io is the most volcanically active object in the Solar System because of tidal interaction with Jupiter. It is covered with volcanoes that erupt sulfur, sulfur dioxide and silicate rock, and as a result, Io is constantly being resurfaced. There are only two planets in the solar system where volcanoes can be easily seen due to their high activity, Earth and Io. Its lavas are the hottest known anywhere in the Solar System, with temperatures exceeding 1,800 K (1,500 °C). 
In February 2001, the largest recorded volcanic eruptions in the Solar System occurred on Io. Europa Europa, the smallest of Jupiter's Galilean moons, also appears to have an active volcanic system, except that its volcanic activity is entirely in the form of water, which freezes into ice on the frigid surface. This process is known as cryovolcanism, and is apparently most common on the moons of the outer planets of the Solar System. Moons of Saturn and Neptune In 1989, the Voyager 2 spacecraft observed cryovolcanoes (ice volcanoes) on Triton, a moon of Neptune, and in 2005 the Cassini–Huygens probe photographed fountains of frozen particles erupting from Enceladus, a moon of Saturn. The ejecta may be composed of water, liquid nitrogen, ammonia, dust, or methane compounds. Cassini–Huygens also found evidence of a methane-spewing cryovolcano on the Saturnian moon Titan, which is believed to be a significant source of the methane found in its atmosphere. It is theorized that cryovolcanism may also be present on the Kuiper Belt Object Quaoar. Exoplanets A 2010 study of the exoplanet COROT-7b, which was detected by transit in 2009, suggested that tidal heating from the host star very close to the planet and neighboring planets could generate intense volcanic activity similar to that found on Io. See also 29P/Schwassmann–Wachmann 4 Vesta Bimodal volcanism Extraterrestrial liquid water Fumarole Gas laws Geology of Ceres Geology of Mercury Geology of Pluto Geyser Glaciovolcanism Hotspot Hydrothermal vent Igneous rock Intraplate volcanism Lava planet Magma ocean Magmatism Mantle plume Plate tectonics Prediction of volcanic activity Seafloor spreading Volcanic arc Volcanic rock Volcanism on Io Volcanism on Mars Volcanism on the Moon Volcanism on Venus Volcanology References External links Further reading Volcanic Diversity throughout the Solar System Cosmic-solar radiation as the cause of earthquakes and volcanic eruptions Melting behaviours of the candidate materials for planetary models Explosive volcanic eruptions triggered by cosmic rays: Volcano as a bubble chamber Thermodynamics of gas and steam-blast eruptions Prerequisites for explosive cryovolcanism on dwarf planet-class Kuiper belt objects Phreatomagmatic and Related Eruption Styles
Normal mode
A normal mode of a dynamical system is a pattern of motion in which all parts of the system move sinusoidally with the same frequency and with a fixed phase relation. The free motion described by the normal modes takes place at fixed frequencies. These fixed frequencies of the normal modes of a system are known as its natural frequencies or resonant frequencies. A physical object, such as a building, bridge, or molecule, has a set of normal modes and their natural frequencies that depend on its structure, materials and boundary conditions. The most general motion of a linear system is a superposition of its normal modes. The modes are normal in the sense that they can move independently, that is to say that an excitation of one mode will never cause motion of a different mode. In mathematical terms, normal modes are orthogonal to each other. General definitions Mode In the wave theory of physics and engineering, a mode in a dynamical system is a standing wave state of excitation, in which all the components of the system will be affected sinusoidally at a fixed frequency associated with that mode. Because no real system can perfectly fit under the standing wave framework, the mode concept is taken as a general characterization of specific states of oscillation, thus treating the dynamic system in a linear fashion, in which linear superposition of states can be performed. Typical examples include: In a mechanical dynamical system, a vibrating rope is the most clear example of a mode, in which the rope is the medium, the stress on the rope is the excitation, and the displacement of the rope with respect to its static state is the modal variable. In an acoustic dynamical system, a single sound pitch is a mode, in which the air is the medium, the sound pressure in the air is the excitation, and the displacement of the air molecules is the modal variable. In a structural dynamical system, a high tall building oscillating under its most flexural axis is a mode, in which all the material of the building -under the proper numerical simplifications- is the medium, the seismic/wind/environmental solicitations are the excitations and the displacements are the modal variable. In an electrical dynamical system, a resonant cavity made of thin metal walls, enclosing a hollow space, for a particle accelerator is a pure standing wave system, and thus an example of a mode, in which the hollow space of the cavity is the medium, the RF source (a Klystron or another RF source) is the excitation and the electromagnetic field is the modal variable. When relating to music, normal modes of vibrating instruments (strings, air pipes, drums, etc.) are called "overtones". The concept of normal modes also finds application in other dynamical systems, such as optics, quantum mechanics, atmospheric dynamics and molecular dynamics. Most dynamical systems can be excited in several modes, possibly simultaneously. Each mode is characterized by one or several frequencies, according to the modal variable field. For example, a vibrating rope in 2D space is defined by a single-frequency (1D axial displacement), but a vibrating rope in 3D space is defined by two frequencies (2D axial displacement). For a given amplitude on the modal variable, each mode will store a specific amount of energy because of the sinusoidal excitation. 
The normal or dominant mode of a system with multiple modes will be the mode storing the minimum amount of energy for a given amplitude of the modal variable, or, equivalently, for a given stored amount of energy, the dominant mode will be the mode imposing the maximum amplitude of the modal variable. Mode numbers A mode of vibration is characterized by a modal frequency and a mode shape. It is numbered according to the number of half waves in the vibration. For example, if a vibrating beam with both ends pinned displayed a mode shape of half of a sine wave (one peak on the vibrating beam) it would be vibrating in mode 1. If it had a full sine wave (one peak and one trough) it would be vibrating in mode 2. In a system with two or more dimensions, such as the pictured disk, each dimension is given a mode number. Using polar coordinates, we have a radial coordinate and an angular coordinate. If one measured from the center outward along the radial coordinate one would encounter a full wave, so the mode number in the radial direction is 2. The other direction is trickier, because only half of the disk is considered due to the anti-symmetric (also called skew-symmetry) nature of a disk's vibration in the angular direction. Thus, measuring 180° along the angular direction you would encounter a half wave, so the mode number in the angular direction is 1. So the mode number of the system is 2–1 or 1–2, depending on which coordinate is considered the "first" and which is considered the "second" coordinate (so it is important to always indicate which mode number matches with each coordinate direction). In linear systems each mode is entirely independent of all other modes. In general all modes have different frequencies (with lower modes having lower frequencies) and different mode shapes. Nodes In a one-dimensional system at a given mode the vibration will have nodes, or places where the displacement is always zero. These nodes correspond to points in the mode shape where the mode shape is zero. Since the vibration of a system is given by the mode shape multiplied by a time function, the displacement of the node points remain zero at all times. When expanded to a two dimensional system, these nodes become lines where the displacement is always zero. If you watch the animation above you will see two circles (one about halfway between the edge and center, and the other on the edge itself) and a straight line bisecting the disk, where the displacement is close to zero. In an idealized system these lines equal zero exactly, as shown to the right. In mechanical systems In the analysis of conservative systems with small displacements from equilibrium, important in acoustics, molecular spectra, and electrical circuits, the system can be transformed to new coordinates called normal coordinates. Each normal coordinate corresponds to a single vibrational frequency of the system and the corresponding motion of the system is called the normal mode of vibration. Coupled oscillators Consider two equal bodies (not affected by gravity), each of mass , attached to three springs, each with spring constant . They are attached in the following manner, forming a system that is physically symmetric: where the edge points are fixed and cannot move. Let denote the horizontal displacement of the left mass, and denote the displacement of the right mass. 
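For reference, the standard textbook solution of this two-mass, three-spring system, written in the notation just introduced (a restatement whose conventions may differ slightly from the derivation narrated below), is:

$$m\ddot{x}_1 = -k x_1 + k\,(x_2 - x_1), \qquad m\ddot{x}_2 = -k\,(x_2 - x_1) - k x_2.$$

Substituting the oscillatory ansatz $x_j(t) = A_j e^{i\omega t}$ and collecting terms gives the matrix condition

$$\begin{pmatrix} 2k - m\omega^{2} & -k \\ -k & 2k - m\omega^{2} \end{pmatrix}\begin{pmatrix} A_1 \\ A_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix},$$

whose determinant vanishes, $(2k - m\omega^{2})^{2} - k^{2} = 0$, at the two normal-mode frequencies

$$\omega_1 = \sqrt{\frac{k}{m}} \quad (A_1 = A_2,\ \text{in-phase motion}), \qquad \omega_2 = \sqrt{\frac{3k}{m}} \quad (A_1 = -A_2,\ \text{opposite motion}).$$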
Denoting the acceleration of each mass (the second derivative of its displacement with respect to time) by $\ddot{x}$, the equations of motion are:
$$m \ddot{x}_1 = -k x_1 + k (x_2 - x_1) = -2k x_1 + k x_2$$
$$m \ddot{x}_2 = -k x_2 + k (x_1 - x_2) = k x_1 - 2k x_2$$
Since we expect oscillatory motion of a normal mode (where $\omega$ is the same for both masses), we try:
$$x_1(t) = A_1 e^{i \omega t}, \qquad x_2(t) = A_2 e^{i \omega t}$$
Substituting these into the equations of motion gives us:
$$-\omega^2 m A_1 e^{i \omega t} = -2k A_1 e^{i \omega t} + k A_2 e^{i \omega t}$$
$$-\omega^2 m A_2 e^{i \omega t} = k A_1 e^{i \omega t} - 2k A_2 e^{i \omega t}$$
Omitting the exponential factor (because it is common to all terms) and simplifying yields:
$$(\omega^2 m - 2k) A_1 + k A_2 = 0$$
$$k A_1 + (\omega^2 m - 2k) A_2 = 0$$
And in matrix representation:
$$\begin{pmatrix} \omega^2 m - 2k & k \\ k & \omega^2 m - 2k \end{pmatrix} \begin{pmatrix} A_1 \\ A_2 \end{pmatrix} = 0$$
If the matrix on the left is invertible, the unique solution is the trivial solution $(A_1, A_2) = (0, 0)$. The non-trivial solutions are to be found for those values of $\omega$ for which the matrix on the left is singular, i.e. is not invertible. It follows that the determinant of the matrix must be equal to 0, so:
$$(\omega^2 m - 2k)^2 - k^2 = 0$$
Solving for $\omega$, the two positive solutions are:
$$\omega_1 = \sqrt{\frac{k}{m}}, \qquad \omega_2 = \sqrt{\frac{3k}{m}}$$
Substituting $\omega_1$ into the matrix and solving for $(A_1, A_2)$ yields $(1, 1)$. Substituting $\omega_2$ results in $(1, -1)$. (These vectors are eigenvectors, and the frequencies are eigenvalues.) The first normal mode is:
$$\vec{\eta}_1 = \begin{pmatrix} x_1(t) \\ x_2(t) \end{pmatrix} = c_1 \begin{pmatrix} 1 \\ 1 \end{pmatrix} \cos(\omega_1 t + \varphi_1)$$
which corresponds to both masses moving in the same direction at the same time. This mode is called antisymmetric. The second normal mode is:
$$\vec{\eta}_2 = \begin{pmatrix} x_1(t) \\ x_2(t) \end{pmatrix} = c_2 \begin{pmatrix} 1 \\ -1 \end{pmatrix} \cos(\omega_2 t + \varphi_2)$$
This corresponds to the masses moving in opposite directions, while the center of mass remains stationary. This mode is called symmetric. The general solution is a superposition of the normal modes,
$$x_1(t) = c_1 \cos(\omega_1 t + \varphi_1) + c_2 \cos(\omega_2 t + \varphi_2), \qquad x_2(t) = c_1 \cos(\omega_1 t + \varphi_1) - c_2 \cos(\omega_2 t + \varphi_2),$$
where $c_1$, $c_2$, $\varphi_1$, and $\varphi_2$ are determined by the initial conditions of the problem. The process demonstrated here can be generalized and formulated using the formalism of Lagrangian mechanics or Hamiltonian mechanics.

Standing waves

A standing wave is a continuous form of normal mode. In a standing wave, all the space elements (i.e. coordinates) are oscillating at the same frequency and in phase (reaching the equilibrium point together), but each has a different amplitude. The general form of a standing wave is:
$$\Psi(t, x) = f(x) \left( A \cos(\omega t) + B \sin(\omega t) \right)$$
where $f(x)$ represents the dependence of amplitude on location and the cosine/sine are the oscillations in time. Physically, standing waves are formed by the interference (superposition) of waves and their reflections (although one may also say the opposite: that a moving wave is a superposition of standing waves). The geometric shape of the medium determines the interference pattern and thus the form of the standing wave. This space-dependence is called a normal mode. Usually, for problems with continuous dependence on position there is no single or finite number of normal modes, but there are infinitely many normal modes. If the problem is bounded (i.e. it is defined on a finite section of space) there are countably many normal modes (usually numbered $n = 1, 2, 3, \ldots$). If the problem is not bounded, there is a continuous spectrum of normal modes.

Elastic solids

In any solid at any temperature, the primary particles (e.g. atoms or molecules) are not stationary, but rather vibrate about mean positions. In insulators the capacity of the solid to store thermal energy is due almost entirely to these vibrations. Many physical properties of the solid (e.g. modulus of elasticity) can be predicted given knowledge of the frequencies with which the particles vibrate. The simplest assumption (by Einstein) is that all the particles oscillate about their mean positions with the same natural frequency $\nu$. This is equivalent to the assumption that all atoms vibrate independently with a frequency $\nu$. Einstein also assumed that the allowed energy states of these oscillations are harmonics, or integral multiples of $h\nu$. The spectrum of waveforms can be described mathematically using a Fourier series of sinusoidal density fluctuations (or thermal phonons). 
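Under Einstein's assumptions, the mean thermal energy of one quantized oscillator follows the standard Planck form (the same expression invoked for a crystal mode later in this section). The following is a minimal numerical sketch, not part of the original text; the oscillator frequency is an assumed, purely illustrative value.

```python
import numpy as np

h  = 6.626e-34      # Planck constant, J s (standard value)
kB = 1.381e-23      # Boltzmann constant, J/K (standard value)

def mean_energy(nu, T):
    """Mean energy of a quantized harmonic oscillator of frequency nu at temperature T:
       E = h*nu/2 + h*nu / (exp(h*nu/(kB*T)) - 1); the first term is the zero-point energy."""
    x = h * nu / (kB * T)
    return h * nu / 2 + h * nu / np.expm1(x)

nu = 2.0e12         # assumed oscillator frequency in Hz (roughly lattice-vibration scale)
for T in (10.0, 100.0, 1000.0):
    print(f"T = {T:6.0f} K   E = {mean_energy(nu, T):.3e} J   kB*T = {kB * T:.3e} J")
# At high temperature E approaches kB*T (the classical equipartition value);
# at low temperature it freezes out toward the zero-point energy h*nu/2.
```

The printout shows the two limits referred to below: the energy tends to the zero-point value $h\nu/2$ at low temperature and approaches the classical value $k_B T$ at high temperature.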
Debye subsequently recognized that each oscillator is intimately coupled to its neighboring oscillators at all times. Thus, by replacing Einstein's identical uncoupled oscillators with the same number of coupled oscillators, Debye correlated the elastic vibrations of a one-dimensional solid with the number of mathematically special modes of vibration of a stretched string (see figure). The pure tone of lowest pitch or frequency is referred to as the fundamental and the multiples of that frequency are called its harmonic overtones. He assigned to one of the oscillators the frequency of the fundamental vibration of the whole block of solid. He assigned to the remaining oscillators the frequencies of the harmonics of that fundamental, with the highest of all these frequencies being limited by the motion of the smallest primary unit. The normal modes of vibration of a crystal are in general superpositions of many overtones, each with an appropriate amplitude and phase. Longer wavelength (low frequency) phonons are exactly those acoustical vibrations which are considered in the theory of sound. Both longitudinal and transverse waves can be propagated through a solid, while, in general, only longitudinal waves are supported by fluids. In the longitudinal mode, the displacement of particles from their positions of equilibrium coincides with the propagation direction of the wave. Mechanical longitudinal waves have been also referred to as . For transverse modes, individual particles move perpendicular to the propagation of the wave. According to quantum theory, the mean energy of a normal vibrational mode of a crystalline solid with characteristic frequency is: The term represents the "zero-point energy", or the energy which an oscillator will have at absolute zero. tends to the classic value at high temperatures By knowing the thermodynamic formula, the entropy per normal mode is: The free energy is: which, for , tends to: In order to calculate the internal energy and the specific heat, we must know the number of normal vibrational modes a frequency between the values and . Allow this number to be . Since the total number of normal modes is , the function is given by: The integration is performed over all frequencies of the crystal. Then the internal energy will be given by: In quantum mechanics Bound states in quantum mechanics are analogous to modes. The waves in quantum systems are oscillations in probability amplitude rather than material displacement. The frequency of oscillation, , relates to the mode energy by where is the Planck constant. Thus a system like an atom consists of a linear combination of modes of definite energy. These energies are characteristic of the particular atom. The (complex) square of the probability amplitude at a point in space gives the probability of measuring an electron at that location. The spatial distribution of this probability is characteristic of the atom. In seismology Normal modes are generated in the Earth from long wavelength seismic waves from large earthquakes interfering to form standing waves. For an elastic, isotropic, homogeneous sphere, spheroidal, toroidal and radial (or breathing) modes arise. Spheroidal modes only involve P and SV waves (like Rayleigh waves) and depend on overtone number and angular order but have degeneracy of azimuthal order . Increasing concentrates fundamental branch closer to surface and at large this tends to Rayleigh waves. Toroidal modes only involve SH waves (like Love waves) and do not exist in fluid outer core. 
Radial modes are just a subset of spheroidal modes with angular order $l = 0$. The degeneracy does not exist on Earth, as it is broken by rotation, ellipticity and 3D heterogeneous velocity and density structure. It may be assumed that each mode can be isolated (the self-coupling approximation), or that many modes close in frequency resonate (the cross-coupling approximation). Self-coupling changes only the phase velocity and not the number of waves around a great circle, resulting in a stretching or shrinking of the standing-wave pattern. Modal cross-coupling occurs due to the rotation of the Earth, aspherical elastic structure, or Earth's ellipticity, and leads to a mixing of fundamental spheroidal and toroidal modes. See also: Antiresonance, Critical speed, Harmonic oscillator, Harmonic series (music), Infrared spectroscopy, Leaky mode, Mechanical resonance, Modal analysis, Mode (electromagnetism), Quasinormal mode, Sturm–Liouville theory, Torsional vibration, Vibrations of a circular membrane. External links: Harvard lecture notes on normal modes.
Tennis racket theorem
The tennis racket theorem or intermediate axis theorem, is a kinetic phenomenon of classical mechanics which describes the movement of a rigid body with three distinct principal moments of inertia. It has also dubbed the Dzhanibekov effect, after Soviet cosmonaut Vladimir Dzhanibekov, who noticed one of the theorem's logical consequences whilst in space in 1985. The effect was known for at least 150 years prior, having been described by Louis Poinsot in 1834 and included in standard physics textbooks such as Classical Mechanics by Herbert Goldstein throughout the 20th century. The theorem describes the following effect: rotation of an object around its first and third principal axes is stable, whereas rotation around its second principal axis (or intermediate axis) is not. This can be demonstrated by the following experiment: hold a tennis racket at its handle, with its face being horizontal, and throw it in the air such that it performs a full rotation around its horizontal axis perpendicular to the handle (ê2 in the diagram), and then catch the handle. In almost all cases, during that rotation the face will also have completed a half rotation, so that the other face is now up. By contrast, it is easy to throw the racket so that it will rotate around the handle axis (ê1) without accompanying half-rotation around another axis; it is also possible to make it rotate around the vertical axis perpendicular to the handle (ê3) without any accompanying half-rotation. The experiment can be performed with any object that has three different moments of inertia, for instance with a book, remote control, or smartphone. The effect occurs whenever the axis of rotation differs only slightly from the object's second principal axis; air resistance or gravity are not necessary. Theory The tennis racket theorem can be qualitatively analysed with the help of Euler's equations. Under torque–free conditions, they take the following form: Here denote the object's principal moments of inertia, and we assume . The angular velocities around the object's three principal axes are and their time derivatives are denoted by . Stable rotation around the first and third principal axis Consider the situation when the object is rotating around axis with moment of inertia . To determine the nature of equilibrium, assume small initial angular velocities along the other two axes. As a result, according to equation (1), is very small. Therefore, the time dependence of may be neglected. Now, differentiating equation (2) and substituting from equation (3), because and . Note that is being opposed and so rotation around this axis is stable for the object. Similar reasoning gives that rotation around axis with moment of inertia is also stable. Unstable rotation around the second principal axis Now apply the same analysis to axis with moment of inertia This time is very small. Therefore, the time dependence of may be neglected. Now, differentiating equation (1) and substituting from equation (3), Note that is not opposed (and therefore will grow) and so rotation around the second axis is unstable. Therefore, even a small disturbance, in the form of a very small initial value of or , causes the object to 'flip'. Matrix analysis If the object is mostly rotating along its third axis, so , we can assume does not vary much, and write the equations of motion as a matrix equation:which has zero trace and positive determinant, implying the motion of is a stable rotation around the origin—a neutral equilibrium point. 
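The stability argument above can also be checked by direct numerical integration of Euler's equations. The sketch below is only an illustration: the moments of inertia (ordered $I_1 > I_2 > I_3$) and the initial spin are arbitrary assumed values, not taken from the text. Starting almost exactly on the intermediate axis, the $\omega_2$ component repeatedly reverses sign, which is the flipping motion of the thrown racket, while the kinetic energy and the squared angular momentum stay constant.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Torque-free Euler equations for principal moments of inertia I1 > I2 > I3.
I1, I2, I3 = 3.0, 2.0, 1.0          # assumed illustrative values

def euler(t, w):
    w1, w2, w3 = w
    return [(I2 - I3) * w2 * w3 / I1,
            (I3 - I1) * w3 * w1 / I2,
            (I1 - I2) * w1 * w2 / I3]

# Spin almost entirely about the intermediate (second) axis, with a tiny perturbation.
w0 = [1e-3, 1.0, 1e-3]
sol = solve_ivp(euler, (0.0, 100.0), w0, rtol=1e-9, atol=1e-12, dense_output=True)

t = np.linspace(0.0, 100.0, 2001)
w = sol.sol(t)
# omega_2 repeatedly reverses sign: the intermediate-axis "flip".
flips = np.count_nonzero(np.diff(np.sign(w[1])) != 0)
print("sign changes of omega_2:", flips)

# Sanity check: kinetic energy and |L|^2 are conserved by the dynamics,
# so any variation seen here is numerical integration error.
E  = 0.5 * (I1 * w[0]**2 + I2 * w[1]**2 + I3 * w[2]**2)
L2 = (I1 * w[0])**2 + (I2 * w[1])**2 + (I3 * w[2])**2
print("relative drift in E, L^2:", np.ptp(E) / E[0], np.ptp(L2) / L2[0])
```

Repeating the experiment with the spin placed near the first or third axis instead leaves the other two components small, consistent with the stability result derived above.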
Similarly, the point is a neutral equilibrium point, but is a saddle point. Geometric analysis During motion, both the energy and angular momentum-squared are conserved, thus we have two conserved quantities:and so for any initial condition , the trajectory of must stay on the intersection curve between two ellipsoids defined by This is shown on the animation to the left. By inspecting Euler's equations, we see that implies that two components of are zero—that is, the object is exactly spinning around one of the principal axes. In all other situations, must remain in motion. By Euler's equations, if is a solution, then so is for any constant . In particular, the motion of the body in free space (obtained by integrating ) is exactly the same, just completed faster by a ratio of . Consequently, we can analyze the geometry of motion with a fixed value of , and vary on the fixed ellipsoid of constant squared angular momentum. As varies, the value of also varies—thus giving us a varying ellipsoid of constant energy. This is shown in the animation as a fixed orange ellipsoid and increasing blue ellipsoid. For concreteness, consider , then the angular momentum ellipsoid's major axes are in ratios of , and the energy ellipsoid's major axes are in ratios of . Thus the angular momentum ellipsoid is both flatter and sharper, as visible in the animation. In general, the angular momentum ellipsoid is always more "exaggerated" than the energy ellipsoid. Now inscribe on a fixed ellipsoid of its intersection curves with the ellipsoid of , as increases from zero to infinity. We can see that the curves evolve as follows: For small energy, there is no intersection, since we need a minimum of energy to stay on the angular momentum ellipsoid. The energy ellipsoid first intersects the momentum ellipsoid when , at the points . This is when the body rotates around its axis with the largest moment of inertia. They intersect at two cycles around the points . Since each cycle contains no point at which , the motion of must be a periodic motion around each cycle. They intersect at two "diagonal" curves that intersects at the points , when . If starts anywhere on the diagonal curves, it would approach one of the points, distance exponentially decreasing, but never actually reach the point. In other words, we have 4 heteroclinic orbits between the two saddle points. They intersect at two cycles around the points . Since each cycle contains no point at which , the motion of must be a periodic motion around each cycle. The energy ellipsoid last intersects the momentum ellipsoid when , at the points . This is when the body rotates around its axis with the smallest moment of inertia. The tennis racket effect occurs when is very close to a saddle point. The body would linger near the saddle point, then rapidly move to the other saddle point, near , linger again for a long time, and so on. The motion repeats with period . The above analysis is all done in the perspective of an observer which is rotating with the body. An observer watching the body's motion in free space would see its angular momentum vector conserved, while both its angular velocity vector and its moment of inertia undergoing complicated motions in space. At the beginning, the observer would see both mostly aligned with the second major axis of . After a while, the body performs a complicated motion and ends up with , and again both are mostly aligned with the second major axis of . 
Consequently, there are two possibilities: either the rigid body's second major axis is in the same direction, or it has reversed direction. If it is still in the same direction, then viewed in the rigid body's reference frame are also mostly in the same direction. However, we have just seen that and are near opposite saddle points . Contradiction. Qualitatively, then, this is what an observer watching in free space would observe: The body rotates around its second major axis for a while. The body rapidly undergoes a complicated motion, until its second major axis has reversed direction. The body rotates around its second major axis again for a while. Repeat. This can be easily seen in the video demonstration in microgravity. With dissipation When the body is not exactly rigid, but can flex and bend or contain liquid that sloshes around, it can dissipate energy through its internal degrees of freedom. In this case, the body still has constant angular momentum, but its energy would decrease, until it reaches the minimal point. As analyzed geometrically above, this happens when the body's angular velocity is exactly aligned with its axis of maximal moment of inertia. This happened to Explorer 1, the first satellite launched by the United States in 1958. The elongated body of the spacecraft had been designed to spin about its long (least-inertia) axis but refused to do so, and instead started precessing due to energy dissipation from flexible structural elements. In general, celestial bodies large or small would converge to a constant rotation around its axis of maximal moment of inertia. Whenever a celestial body is found in a complex rotational state, it is either due to a recent impact or tidal interaction, or is a fragment of a recently disrupted progenitor. See also References External links on Mir International Space Station Louis Poinsot, Théorie nouvelle de la rotation des corps, Paris, Bachelier, 1834, 170 p. : historically, the first mathematical description of this effect. - intuitive video explanation by Matt Parker The "Dzhanibekov effect" - an exercise in mechanics or fiction? Explain mathematically a video from a space station, The Bizarre Behavior of Rotating Bodies, Veritasium Classical mechanics Physics theorems Juggling
Fermi problem
A Fermi problem (or Fermi quiz, Fermi question, Fermi estimate), also known as an order-of-magnitude problem (or order-of-magnitude estimate, order estimation), is an estimation problem in physics or engineering education, designed to teach dimensional analysis or approximation of extreme scientific calculations. Fermi problems are usually back-of-the-envelope calculations. The estimation technique is named after physicist Enrico Fermi as he was known for his ability to make good approximate calculations with little or no actual data. Fermi problems typically involve making justified guesses about quantities and their variance or lower and upper bounds. In some cases, order-of-magnitude estimates can also be derived using dimensional analysis. Historical background An example is Enrico Fermi's estimate of the strength of the atomic bomb that detonated at the Trinity test, based on the distance traveled by pieces of paper he dropped from his hand during the blast. Fermi's estimate of 10 kilotons of TNT was well within an order of magnitude of the now-accepted value of 21 kilotons. Examples Fermi questions are often extreme in nature, and cannot usually be solved using common mathematical or scientific information. Example questions given by the official Fermi Competition: Possibly the most famous Fermi Question is the Drake equation, which seeks to estimate the number of intelligent civilizations in the galaxy. The basic question of why, if there were a significant number of such civilizations, human civilization has never encountered any others is called the Fermi paradox. Advantages and scope Scientists often look for Fermi estimates of the answer to a problem before turning to more sophisticated methods to calculate a precise answer. This provides a useful check on the results. While the estimate is almost certainly incorrect, it is also a simple calculation that allows for easy error checking, and to find faulty assumptions if the figure produced is far beyond what we might reasonably expect. By contrast, precise calculations can be extremely complex but with the expectation that the answer they produce is correct. The far larger number of factors and operations involved can obscure a very significant error, either in mathematical process or in the assumptions the equation is based on, but the result may still be assumed to be right because it has been derived from a precise formula that is expected to yield good results. Without a reasonable frame of reference to work from it is seldom clear if a result is acceptably precise or is many degrees of magnitude (tens or hundreds of times) too big or too small. The Fermi estimation gives a quick, simple way to obtain this frame of reference for what might reasonably be expected to be the answer. As long as the initial assumptions in the estimate are reasonable quantities, the result obtained will give an answer within the same scale as the correct result, and if not gives a base for understanding why this is the case. For example, suppose a person was asked to determine the number of piano tuners in Chicago. If their initial estimate told them there should be a hundred or so, but the precise answer tells them there are many thousands, then they know they need to find out why there is this divergence from the expected result. First looking for errors, then for factors the estimation did not take account of – does Chicago have a number of music schools or other places with a disproportionately high ratio of pianos to people? 
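The Chicago piano-tuner estimate mentioned above can be written out explicitly. Every input in the sketch below is an assumption chosen only for illustration (the population, household size, piano ownership, tuning frequency, and tuner workload are rough guesses, not sourced figures); the point is the structure of the calculation, not the particular numbers.

```python
import math

# All inputs are rough, labeled guesses -- the essence of a Fermi estimate.
population            = 9_000_000   # people in the Chicago area (guess)
people_per_household  = 2           # (guess)
households_with_piano = 1 / 20      # fraction of households owning a piano (guess)
tunings_per_year      = 1           # tunings per piano per year (guess)
tunings_per_day       = 4           # one tuner, ~2 h per tuning incl. travel (guess)
workdays_per_year     = 250         # (guess)

pianos           = population / people_per_household * households_with_piano
tunings_demanded = pianos * tunings_per_year
tuner_capacity   = tunings_per_day * workdays_per_year
tuners           = tunings_demanded / tuner_capacity
print(f"estimated piano tuners: {tuners:.0f}")   # of order a few hundred

# If each of the 6 guessed factors is off by up to about 2x (a standard deviation of
# a factor of 2 on a log scale), the combined estimate is typically off by roughly
# 2**sqrt(6) ~ 5-6x, not the worst case 2**6 = 64x, because errors partially cancel.
print("typical combined error factor:", 2 ** math.sqrt(6))
```

Changing any single guess by a factor of two moves the answer by the same factor, which is exactly the kind of sensitivity the next paragraph quantifies.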
Whether close or very far from the observed results, the context the estimation provides gives useful information both about the process of calculation and about the assumptions that have been used to look at the problem. Fermi estimates are also useful in approaching problems where the optimal choice of calculation method depends on the expected size of the answer. For instance, a Fermi estimate might indicate whether the internal stresses of a structure are low enough that it can be accurately described by linear elasticity, or whether the estimate already bears a significant relationship in scale to some other value, for example, whether a structure will be over-engineered to withstand loads several times greater than the estimate. Although Fermi calculations are often not accurate, as there may be many problems with their assumptions, this sort of analysis does tell one what to look for to get a better answer. For the above example, one might try to find a better estimate of the number of pianos tuned by a piano tuner in a typical day, or look up an accurate number for the population of Chicago. It also gives a rough estimate that may be good enough for some purposes: if a person wants to start a store in Chicago that sells piano tuning equipment, and calculates that they need 10,000 potential customers to stay in business, they can reasonably assume that the above estimate is far enough below 10,000 that they should consider a different business plan (and, with a little more work, they could compute a rough upper bound on the number of piano tuners by considering the most extreme reasonable values that could appear in each of their assumptions).

Explanation

Fermi estimates generally work because the estimates of the individual terms are often close to correct, and overestimates and underestimates help cancel each other out. That is, if there is no consistent bias, a Fermi calculation that involves the multiplication of several estimated factors (such as the number of piano tuners in Chicago) will probably be more accurate than might be first supposed. In detail, multiplying estimates corresponds to adding their logarithms; thus one obtains a sort of Wiener process or random walk on the logarithmic scale, which diffuses as $\sqrt{n}$ (in the number of terms n). In discrete terms, the number of overestimates minus underestimates will have a binomial distribution. In continuous terms, if one makes a Fermi estimate of n steps, each with standard deviation σ units on the log scale from the actual value, then the overall estimate will have standard deviation $\sigma\sqrt{n}$, since the standard deviation of a sum scales as $\sqrt{n}$ in the number of summands. For instance, if one makes a 9-step Fermi estimate, at each step overestimating or underestimating the correct number by a factor of 2 (i.e. with a standard deviation of a factor of 2 on the log scale), then after 9 steps the standard error will have grown by a logarithmic factor of $\sqrt{9} = 3$, so a factor of $2^3 = 8$. Thus one will expect to be within 1/8 to 8 times the correct value – within an order of magnitude, and much less than the worst case of erring by a factor of $2^9 = 512$ (about 2.71 orders of magnitude). If one has a shorter chain or estimates more accurately, the overall estimate will be correspondingly better.

See also: Guesstimate, Dead reckoning, Handwaving, Heuristic, Order of approximation, Stein's example, Spherical cow

Further reading

The following books contain many examples of Fermi problems with solutions: John Harte, Consider a Spherical Cow: A Course in Environmental Problem Solving, University Science Books, 1988. 
John Harte, Consider a Cylindrical Cow: More Adventures in Environmental Problem Solving University Science Books. 2001. . Clifford Swartz, Back-of-the-Envelope Physics Johns Hopkins University Press. 2003. . . Lawrence Weinstein & John A. Adam, Guesstimation: Solving the World's Problems on the Back of a Cocktail Napkin Princeton University Press. 2008. . . A textbook on Fermi problems. Aaron Santos, How Many Licks?: Or, How to Estimate Damn Near Anything. Running Press. 2009. . . Sanjoy Mahajan, Street-Fighting Mathematics: The Art of Educated Guessing and Opportunistic Problem Solving MIT Press. 2010. . . Göran Grimvall, Quantify! A Crash Course in Smart Thinking Johns Hopkins University Press. 2010. . . Lawrence Weinstein, Guesstimation 2.0: Solving Today's Problems on the Back of a Napkin Princeton University Press. 2012. . Sanjoy Mahajan, The Art of Insight in Science and Engineering MIT Press. 2014. . Dmitry Budker, Alexander O. Sushkov, Physics on your feet. Berkeley Graduate Exam Questions Oxford University Press. 2015. . Rob Eastaway, Maths on the Back of an Envelope: Clever ways to (roughly) calculate anything HarperCollins. 2019. . External links The University of Maryland Physics Education Group maintains a collection of Fermi problems. Fermi Questions: A Guide for Teachers, Students, and Event Supervisors by Lloyd Abrams. "What if? Paint the Earth" from the book What if? Serious Scientific Answers to Absurd Hypothetical Questions by Randall Munroe. An example of a Fermi Problem relating to total gasoline consumed by cars since the invention of cars and comparison to the output of the energy released by the sun. "Introduction to Fermi estimates" by Nuño Sempere, which has a proof sketch of why Fermi-style decompositions produce better estimates. "How should mathematics be taught to non-mathematicians?" by Timothy Gowers. There are or have been a number of university-level courses devoted to estimation and the solution of Fermi problems. The materials for these courses are a good source for additional Fermi problem examples and material about solution strategies: 6.055J / 2.038J The Art of Approximation in Science and Engineering taught by Sanjoy Mahajan at the Massachusetts Institute of Technology (MIT). Physics on the Back of an Envelope taught by Lawrence Weinstein at Old Dominion University. Order of Magnitude Physics taught by Sterl Phinney at the California Institute of Technology. Order of Magnitude Estimation taught by Patrick Chuang at the University of California, Santa Cruz. Order of Magnitude Problem Solving taught by Linda Strubbe at the University of Toronto. Order of Magnitude Physics taught by Eugene Chiang at the University of California, Berkeley. Chapter 2: Discoveries on the Back of an Envelope from Frontiers of Science: Scientific Habits of Mind taught by David Helfand at Columbia University. Physics education Dimensional analysis Problem
Three-body problem
In physics, specifically classical mechanics, the three-body problem is to take the initial positions and velocities (or momenta) of three point masses that orbit each other in space and calculate their subsequent trajectories using Newton's laws of motion and Newton's law of universal gravitation. Unlike the two-body problem, the three-body problem has no general closed-form solution, meaning there is no equation that always solves it. When three bodies orbit each other, the resulting dynamical system is chaotic for most initial conditions. Because there are no solvable equations for most three-body systems, the only way to predict the motions of the bodies is to estimate them using numerical methods. The three-body problem is a special case of the -body problem. Historically, the first specific three-body problem to receive extended study was the one involving the Earth, the Moon, and the Sun. In an extended modern sense, a three-body problem is any problem in classical mechanics or quantum mechanics that models the motion of three particles. Mathematical description The mathematical statement of the three-body problem can be given in terms of the Newtonian equations of motion for vector positions of three gravitationally interacting bodies with masses : where is the gravitational constant. As astronomer Juhan Frank describes, "These three second-order vector differential equations are equivalent to 18 first order scalar differential equations." As June Barrow-Green notes with regard to an alternative presentation, if represent three particles with masses , distances = , and coordinates (i,j = 1,2,3) in an inertial coordinate system ... the problem is described by nine second-order differntial equations. The problem can also be stated equivalently in the Hamiltonian formalism, in which case it is described by a set of 18 first-order differential equations, one for each component of the positions and momenta : where is the Hamiltonian: In this case, is simply the total energy of the system, gravitational plus kinetic. Restricted three-body problem In the restricted three-body problem formulation, in the description of Barrow-Green,two... bodies revolve around their centre of mass in circular orbits under the influence of their mutual gravitational attraction, and... form a two body system... [whose] motion is known. A third body (generally known as a planetoid), assumed massless with respect to the other two, moves in the plane defined by the two revolving bodies and, while being gravitationally influenced by them, exerts no influence of its own. Per Barrow-Green, "[t]he problem is then to ascertain the motion of the third body." That is to say, this two-body motion is taken to consist of circular orbits around the center of mass, and the planetoid is assumed to move in the plane defined by the circular orbits. (That is, it is useful to consider the effective potential.) With respect to a rotating reference frame, the two co-orbiting bodies are stationary, and the third can be stationary as well at the Lagrangian points, or move around them, for instance on a horseshoe orbit. The restricted three-body problem is easier to analyze theoretically than the full problem. It is of practical interest as well since it accurately describes many real-world problems, the most important example being the Earth–Moon–Sun system. For these reasons, it has occupied an important role in the historical development of the three-body problem. Mathematically, the problem is stated as follows. 
Let be the masses of the two massive bodies, with (planar) coordinates and , and let be the coordinates of the planetoid. For simplicity, choose units such that the distance between the two massive bodies, as well as the gravitational constant, are both equal to . Then, the motion of the planetoid is given by: where . In this form the equations of motion carry an explicit time dependence through the coordinates ; however, this time dependence can be removed through a transformation to a rotating reference frame, which simplifies any subsequent analysis. Solutions General solution There is no general closed-form solution to the three-body problem. In other words, it does not have a general solution that can be expressed in terms of a finite number of standard mathematical operations. Moreover, the motion of three bodies is generally non-repeating, except in special cases. However, in 1912 the Finnish mathematician Karl Fritiof Sundman proved that there exists an analytic solution to the three-body problem in the form of a Puiseux series, specifically a power series in terms of powers of . This series converges for all real , except for initial conditions corresponding to zero angular momentum. In practice, the latter restriction is insignificant since initial conditions with zero angular momentum are rare, having Lebesgue measure zero. An important issue in proving this result is the fact that the radius of convergence for this series is determined by the distance to the nearest singularity. Therefore, it is necessary to study the possible singularities of the three-body problems. As is briefly discussed below, the only singularities in the three-body problem are binary collisions (collisions between two particles at an instant) and triple collisions (collisions between three particles at an instant). Collisions of any number are somewhat improbable, since it has been shown that they correspond to a set of initial conditions of measure zero. But there is no criterion known to be put on the initial state in order to avoid collisions for the corresponding solution. So Sundman's strategy consisted of the following steps: Using an appropriate change of variables to continue analyzing the solution beyond the binary collision, in a process known as regularization. Proving that triple collisions only occur when the angular momentum vanishes. By restricting the initial data to , he removed all real singularities from the transformed equations for the three-body problem. Showing that if , then not only can there be no triple collision, but the system is strictly bounded away from a triple collision. This implies, by Cauchy's existence theorem for differential equations, that there are no complex singularities in a strip (depending on the value of ) in the complex plane centered around the real axis (related to the Cauchy–Kovalevskaya theorem). Find a conformal transformation that maps this strip into the unit disc. For example, if (the new variable after the regularization) and if , then this map is given by This finishes the proof of Sundman's theorem. The corresponding series converges extremely slowly. That is, obtaining a value of meaningful precision requires so many terms that this solution is of little practical use. Indeed, in 1930, David Beloriszky calculated that if Sundman's series were to be used for astronomical observations, then the computations would involve at least 10 terms. 
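Because the series solution is impractical, trajectories are in practice obtained numerically. The sketch below is a minimal illustration (units with G = 1, equal masses, and made-up initial conditions; it is not a reproduction of any specific study): it integrates the 18 first-order equations of the full problem and monitors the total energy as a basic accuracy check.

```python
import numpy as np
from scipy.integrate import solve_ivp

G = 1.0
m = np.array([1.0, 1.0, 1.0])            # assumed equal masses, arbitrary units

def rhs(t, y):
    """y = [r1, r2, r3, v1, v2, v3] flattened: 18 first-order scalar ODEs."""
    r = y[:9].reshape(3, 3)
    v = y[9:].reshape(3, 3)
    a = np.zeros_like(r)
    for i in range(3):
        for j in range(3):
            if i != j:
                d = r[j] - r[i]
                a[i] += G * m[j] * d / np.linalg.norm(d) ** 3
    return np.concatenate([v.ravel(), a.ravel()])

def energy(y):
    r = y[:9].reshape(3, 3)
    v = y[9:].reshape(3, 3)
    kin = 0.5 * np.sum(m * np.sum(v**2, axis=1))
    pot = sum(-G * m[i] * m[j] / np.linalg.norm(r[i] - r[j])
              for i in range(3) for j in range(i + 1, 3))
    return kin + pot

# Made-up initial positions and velocities (chaotic for most such choices).
r0 = np.array([[-1.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.5, 0.0]])
v0 = np.array([[0.0, -0.4, 0.0], [0.0, 0.4, 0.0], [0.3, 0.0, 0.0]])
y0 = np.concatenate([r0.ravel(), v0.ravel()])

sol = solve_ivp(rhs, (0.0, 10.0), y0, rtol=1e-10, atol=1e-12)
drift = abs(energy(sol.y[:, -1]) - energy(y0)) / abs(energy(y0))
print("relative energy drift:", drift)
```

In practice one also monitors conserved quantities such as the total energy and angular momentum; rapid drift signals that the step-size control is failing, for example near a close encounter.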
Special-case solutions In 1767, Leonhard Euler found three families of periodic solutions in which the three masses are collinear at each instant. In 1772, Lagrange found a family of solutions in which the three masses form an equilateral triangle at each instant. Together with Euler's collinear solutions, these solutions form the central configurations for the three-body problem. These solutions are valid for any mass ratios, and the masses move on Keplerian ellipses. These four families are the only known solutions for which there are explicit analytic formulae. In the special case of the circular restricted three-body problem, these solutions, viewed in a frame rotating with the primaries, become points called Lagrangian points and labeled L1, L2, L3, L4, and L5, with L4 and L5 being symmetric instances of Lagrange's solution. In work summarized in 1892–1899, Henri Poincaré established the existence of an infinite number of periodic solutions to the restricted three-body problem, together with techniques for continuing these solutions into the general three-body problem. In 1893, Meissel stated what is now called the Pythagorean three-body problem: three masses in the ratio 3:4:5 are placed at rest at the vertices of a 3:4:5 right triangle, with the heaviest body at the right angle and the lightest at the smaller acute angle. Burrau further investigated this problem in 1913. In 1967 Victor Szebehely and C. Frederick Peters established eventual escape of the lightest body for this problem using numerical integration, while at the same time finding a nearby periodic solution. In the 1970s, Michel Hénon and Roger A. Broucke each found a set of solutions that form part of the same family of solutions: the Broucke–Hénon–Hadjidemetriou family. In this family, the three objects all have the same mass and can exhibit both retrograde and direct forms. In some of Broucke's solutions, two of the bodies follow the same path. In 1993, physicist Cris Moore at the Santa Fe Institute found a zero angular momentum solution with three equal masses moving around a figure-eight shape. In 2000, mathematicians Alain Chenciner and Richard Montgomery proved its formal existence. The solution has been shown numerically to be stable for small perturbations of the mass and orbital parameters, which makes it possible for such orbits to be observed in the physical universe. But it has been argued that this is unlikely since the domain of stability is small. For instance, the probability of a binary–binary scattering event resulting in a figure-8 orbit has been estimated to be a small fraction of a percent. In 2013, physicists Milovan Šuvakov and Veljko Dmitrašinović at the Institute of Physics in Belgrade discovered 13 new families of solutions for the equal-mass zero-angular-momentum three-body problem. In 2015, physicist Ana Hudomal discovered 14 new families of solutions for the equal-mass zero-angular-momentum three-body problem. In 2017, researchers Xiaoming Li and Shijun Liao found 669 new periodic orbits of the equal-mass zero-angular-momentum three-body problem. This was followed in 2018 by an additional 1,223 new solutions for a zero-angular-momentum system of unequal masses. In 2018, Li and Liao reported 234 solutions to the unequal-mass "free-fall" three-body problem. The free-fall formulation starts with all three bodies at rest. Because of this, the masses in a free-fall configuration do not orbit in a closed "loop", but travel forward and backward along an open "track". 
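Lagrange's equilateral solution mentioned above is straightforward to reproduce numerically. The sketch below uses illustrative units (G = 1, three equal unit masses, an assumed side length L = 1): it places the bodies on an equilateral triangle, gives each the circular-orbit angular velocity $\omega = \sqrt{G(m_1+m_2+m_3)/L^3}$, and checks that the mutual distances remain constant.

```python
import numpy as np
from scipy.integrate import solve_ivp

G, m, L = 1.0, 1.0, 1.0                 # assumed illustrative values
omega = np.sqrt(G * 3 * m / L**3)       # angular velocity of the equilateral solution

# Vertices of an equilateral triangle of side L, centred on the barycentre.
R = L / np.sqrt(3.0)
angles = np.array([0.0, 2 * np.pi / 3, 4 * np.pi / 3])
r0 = R * np.column_stack([np.cos(angles), np.sin(angles)])
v0 = omega * np.column_stack([-r0[:, 1], r0[:, 0]])   # tangential velocities for circular motion

def rhs(t, y):
    r = y[:6].reshape(3, 2)
    v = y[6:].reshape(3, 2)
    a = np.zeros_like(r)
    for i in range(3):
        for j in range(3):
            if i != j:
                d = r[j] - r[i]
                a[i] += G * m * d / np.linalg.norm(d) ** 3
    return np.concatenate([v.ravel(), a.ravel()])

sol = solve_ivp(rhs, (0.0, 10.0), np.concatenate([r0.ravel(), v0.ravel()]),
                rtol=1e-10, atol=1e-12)
r = sol.y[:6, -1].reshape(3, 2)
dists = [np.linalg.norm(r[i] - r[j]) for i, j in [(0, 1), (1, 2), (0, 2)]]
# Over a few orbital periods the distances stay very close to L = 1. Note that for
# equal masses the equilateral configuration is linearly unstable, so over much
# longer integrations numerical noise will eventually destroy it.
print("mutual distances after integration:", dists)
```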
In 2023, Ivan Hristov, Radoslava Hristova, Dmitrašinović and Kiyotaka Tanikawa published a search for "periodic free-fall orbits" three-body problem, limited to the equal-mass case, and found 12,409 distinct solutions. Numerical approaches Using a computer, the problem may be solved to arbitrarily high precision using numerical integration although high precision requires a large amount of CPU time. There have been attempts of creating computer programs that numerically solve the three-body problem (and by extension, the n-body problem) involving both electromagnetic and gravitational interactions, and incorporating modern theories of physics such as special relativity. In addition, using the theory of random walks, an approximate probability of different outcomes may be computed. History The gravitational problem of three bodies in its traditional sense dates in substance from 1687, when Isaac Newton published his Philosophiæ Naturalis Principia Mathematica, in which Newton attempted to figure out if any long term stability is possible especially for such a system like that of our Earth, the Moon, and the Sun. Guided by major Renaissance astronomers Nicolaus Copernicus, Tycho Brahe and Johannes Kepler he introduced later generations to the beginning of the gravitational three-body problem. In Proposition 66 of Book 1 of the Principia, and its 22 Corollaries, Newton took the first steps in the definition and study of the problem of the movements of three massive bodies subject to their mutually perturbing gravitational attractions. In Propositions 25 to 35 of Book 3, Newton also took the first steps in applying his results of Proposition 66 to the lunar theory, the motion of the Moon under the gravitational influence of Earth and the Sun. Later, this problem was also applied to other planets' interactions with the Earth and the Sun. The physical problem was first addressed by Amerigo Vespucci and subsequently by Galileo Galilei, as well as Simon Stevin, but they did not realize what they contributed. Though Galileo determined that the speed of fall of all bodies changes uniformly and in the same way, he did not apply it to planetary motions. Whereas in 1499, Vespucci used knowledge of the position of the Moon to determine his position in Brazil. It became of technical importance in the 1720s, as an accurate solution would be applicable to navigation, specifically for the determination of longitude at sea, solved in practice by John Harrison's invention of the marine chronometer. However the accuracy of the lunar theory was low, due to the perturbing effect of the Sun and planets on the motion of the Moon around Earth. Jean le Rond d'Alembert and Alexis Clairaut, who developed a longstanding rivalry, both attempted to analyze the problem in some degree of generality; they submitted their competing first analyses to the Académie Royale des Sciences in 1747. It was in connection with their research, in Paris during the 1740s, that the name "three-body problem" began to be commonly used. An account published in 1761 by Jean le Rond d'Alembert indicates that the name was first used in 1747. From the end of the 19th century to early 20th century, the approach to solve the three-body problem with the usage of short-range attractive two-body forces was developed by scientists, which offered P.F. Bedaque, H.-W. Hammer and U. 
van Kolck an idea to renormalize the short-range three-body problem, providing scientists a rare example of a renormalization group limit cycle at the beginning of the 21st century. George William Hill worked on the restricted problem in the late 19th century with an application of motion of Venus and Mercury. At the beginning of the 20th century, Karl Sundman approached the problem mathematically and systematically by providing a functional theoretical proof to the problem valid for all values of time. It was the first time scientists theoretically solved the three-body problem. However, because there was not a qualitative enough solution of this system, and it was too slow for scientists to practically apply it, this solution still left some issues unresolved. In the 1970s, implication to three-body from two-body forces had been discovered by V. Efimov, which was named the Efimov effect. In 2017, Shijun Liao and Xiaoming Li applied a new strategy of numerical simulation for chaotic systems called the clean numerical simulation (CNS), with the use of a national supercomputer, to successfully gain 695 families of periodic solutions of the three-body system with equal mass. In 2019, Breen et al. announced a fast neural network solver for the three-body problem, trained using a numerical integrator. In September 2023, several possible solutions have been found to the problem according to reports. Other problems involving three bodies The term "three-body problem" is sometimes used in the more general sense to refer to any physical problem involving the interaction of three bodies. A quantum-mechanical analogue of the gravitational three-body problem in classical mechanics is the helium atom, in which a helium nucleus and two electrons interact according to the inverse-square Coulomb interaction. Like the gravitational three-body problem, the helium atom cannot be solved exactly. In both classical and quantum mechanics, however, there exist nontrivial interaction laws besides the inverse-square force that do lead to exact analytic three-body solutions. One such model consists of a combination of harmonic attraction and a repulsive inverse-cube force. This model is considered nontrivial since it is associated with a set of nonlinear differential equations containing singularities (compared with, e.g., harmonic interactions alone, which lead to an easily solved system of linear differential equations). In these two respects it is analogous to (insoluble) models having Coulomb interactions, and as a result has been suggested as a tool for intuitively understanding physical systems like the helium atom. Within the point vortex model, the motion of vortices in a two-dimensional ideal fluid is described by equations of motion that contain only first-order time derivatives. I.e. in contrast to Newtonian mechanics, it is the velocity and not the acceleration that is determined by their relative positions. As a consequence, the three-vortex problem is still integrable, while at least four vortices are required to obtain chaotic behavior. One can draw parallels between the motion of a passive tracer particle in the velocity field of three vortices and the restricted three-body problem of Newtonian mechanics. The gravitational three-body problem has also been studied using general relativity. Physically, a relativistic treatment becomes necessary in systems with very strong gravitational fields, such as near the event horizon of a black hole. 
However, the relativistic problem is considerably more difficult than in Newtonian mechanics, and sophisticated numerical techniques are required. Even the full two-body problem (i.e. for arbitrary ratio of masses) does not have a rigorous analytic solution in general relativity. -body problem The three-body problem is a special case of the -body problem, which describes how objects move under one of the physical forces, such as gravity. These problems have a global analytical solution in the form of a convergent power series, as was proven by Karl F. Sundman for and by Qiudong Wang for (see -body problem for details). However, the Sundman and Wang series converge so slowly that they are useless for practical purposes; therefore, it is currently necessary to approximate solutions by numerical analysis in the form of numerical integration or, for some cases, classical trigonometric series approximations (see -body simulation). Atomic systems, e.g. atoms, ions, and molecules, can be treated in terms of the quantum -body problem. Among classical physical systems, the -body problem usually refers to a galaxy or to a cluster of galaxies; planetary systems, such as stars, planets, and their satellites, can also be treated as -body systems. Some applications are conveniently treated by perturbation theory, in which the system is considered as a two-body problem plus additional forces causing deviations from a hypothetical unperturbed two-body trajectory. See also Few-body systems Galaxy formation and evolution Gravity assist Lagrange point Low-energy transfer Michael Minovitch -body simulation Symplectic integrator Sitnikov problem Two-body problem Synodic reference frame Triple star system The Three-Body Problem (novel) 3 Body Problem (TV series) References Further reading External links Physicists Discover a Whopping 13 New Solutions to Three-Body Problem (Science) 3body simulator – an example of a computer program that solves the three-body problem numerically Chaotic maps Classical mechanics Dynamical systems Mathematical physics Orbits Equations of astronomy
Xeelee Sequence
The Xeelee Sequence (; ) is a series of hard science fiction novels, novellas, and short stories written by British science fiction author Stephen Baxter. The series spans billions of years of fictional history, centering on humanity's future expansion into the universe, its intergalactic war with an enigmatic and supremely powerful Kardashev Type V alien civilization called the Xeelee (eldritch symbiotes composed of spacetime defects, Bose-Einstein condensates, and baryonic matter), and the Xeelee's own cosmos-spanning war with dark matter entities called Photino Birds. The series features many other species and civilizations that play a prominent role, including the Squeem (a species of group-mind aquatics), the Qax (beings whose biology is based on the complex interactions of convection cells), and the Silver Ghosts (colonies of symbiotic organisms encased in reflective skins). Several stories in the Sequence also deal with humans and posthumans living in extreme conditions, such as at the heart of a neutron star (Flux), in a separate universe with considerably stronger gravity (Raft), and within eusocial hive societies (Coalescent). The Xeelee Sequence deals with many concepts stemming from the fringe of theoretical physics and futurology, such as artificial wormholes, time travel, exotic-matter physics, naked singularities, closed timelike curves, multiple universes, hyperadvanced computing and artificial intelligence, faster-than-light travel, spacetime engineering, quantum wave function beings, and the upper echelons of the Kardashev scale. Thematically, the series deals heavily with certain existential and social philosophical issues, such as striving for survival and relevance in a harsh and unknowable universe, the effects of war and militarism on society, and the effects that come from a long and unpredictable future for humanity with strange technologies. As of August 2018, the series is composed of 9 novels and 53 short pieces (short stories and novellas, with most collected in 3 anthologies), all of which fit into a fictional timeline stretching from the Big Bang's singularity of the past to the eventual heat death of the universe and Timelike Infinitys singularity of the future. An omnibus edition of the first four Xeelee novels (Raft, Timelike Infinity, Flux, and Ring), entitled Xeelee: An Omnibus, was released in January 2010. In August 2016, the entire series of all novels and stories (up to that date) was released as one volume in e-book format entitled Xeelee Sequence: The Complete Series. Baxter's Destiny's Children series is part of the Xeelee Sequence. Conception Baxter first conceived of the Xeelee while hobby writing a short story in the summer of 1986 (eventually published in Interzone as "The Xeelee Flower" the following year). He incorporated powerful off-stage aliens to explain the story's titular artifact, and in pondering the backstory began to flesh out the basics of what would later become the main players and setting of the Sequence: a universe full of intelligent species that live in the shadow of the incomprehensible and god-like Xeelee. Plot overview The overarching plot of the Xeelee Sequence involves an intergalactic war between humanity and the Xeelee, and a cosmic war between the Xeelee and the Photino Birds, with the latter two being alien species that originated in the early universe. 
The technologically advanced Xeelee primarily inhabit supermassive black holes, manipulating their event horizons to create preferable living environments, construction materials, tools, and computing devices. The Photino Birds are a dark matter-based species that live in the gravity wells of stars, who are likely not aware of baryonic life forms due to dark matter's weak interactions with normal matter. Due to the inevitable risk of their habitats being destroyed by supernovae and other consequences of stellar evolution, the Photino Birds work to halt nuclear fusion in the cores of stars, prematurely aging them into stable white dwarfs. The resulting dwarfs provide them with suitable habitats for billions of times longer than other types of stars could, but at the expense of other forms of life on nearby planets. The Photino Birds' activities also effectively stop the formation of new black holes due to a lack of Type II supernovae, threatening the existence of the Xeelee and their cosmic projects. After overcoming a series of brutal occupations by extraterrestrial civilizations, humanity expands into the galaxy with an extremely xenophobic and militaristic outlook, with aims to exterminate other species they encounter. Humans eventually become the second-most advanced and widespread civilization in the Milky Way galaxy, after the Xeelee. Unaware of the Photino–Xeelee war and the existential ramifications of what is at stake, humanity come to the (unwarranted) conclusion that the Xeelee are a sinister and destructive threat to their hegemony and security. Through a bitter war of attrition, humans end up containing the Xeelee to the galactic core. Both humans and the Xeelee gain strategic intelligence by using time travel as a war tactic, through the use of closed timelike curves, resulting in a stalemate for thousands of years. Eventually, humanity develops defensive, movable pocket universes to compartmentalize and process information, and an exotic weapon able to damage the ecological stability of the core's supermassive black hole. Minutes after the first successful strike, the Xeelee withdraw from the galaxy, effectively ceding the Milky Way to fully human control. Humanity continued to advance technologically for a hundred thousand years afterwards, then attacked the Xeelee across the Local Group of galaxies. However, despite having annoyed the Xeelee enough to give up activities in the Milky Way, humans, having become an extremely powerful Type III civilization themselves at this point, prove only to be a minor distraction to the Xeelee on the whole, being ultimately unable to meaningfully challenge their dominance across the universe. Although the Xeelee are masters of space and time capable of influencing their own evolution, they are ultimately unsuccessful in stopping the Photino Birds. They instead utilize cosmic strings to build an enormous ring-like structure (which comes to be known as Bolder's Ring, or simply the Ring) to permit easy travel to other universes, allowing them and other species to escape the Photino Birds' destruction of the universe. The Xeelee, despite their unapproachable aloofness and transcendent superiority, appear to be compassionate and charitable toward the younger and less advanced species that inhabit the universe, demonstrating this by doing such things as constructing a specially made universe suited to the Silver Ghosts, who humans had nearly driven to extinction. 
Humans are likewise shown compassion by them and allowed to use the Ring to escape, despite their relentless long war against the Xeelee. Books Xeelee Sequence main novels Destiny's Children sub-series novels Collections of short stories and novellas Currently uncollected stories Chronology and reading order The novels in chronological order (as opposed to publication order) are given below. Some of the novels contain elements occurring at different points in the timeline. The story anthologies (Vacuum Diagrams, Resplendent, and Xeelee: Endurance) each contain stories taking place across the entire chronology. In 2009, Baxter posted a detailed chronology of the Xeelee Sequence explaining the proper chronological reading order of all the novels, novellas, and short stories up to that year. The timeline was updated in September 2015. When asked directly for a suggested reading order, the author wrote: "I hope that all the books and indeed the stories can be read stand-alone. I'm not a great fan of books that end with cliff-hangers. So you could go in anywhere. One way would be to start with Vacuum Diagrams, a collection that sets out the overall story of the universe. Then Timelike Infinity and Ring which tell the story of Michael Poole, then Raft and Flux which are really incidents against the wider background, and finally Destiny's Children." Reception Science fiction author Paul J. McAuley has praised Baxter and the series, saying: See also Stephen Baxter (author) Hard science fiction Great Attractor Kardashev scale Notes Baxter cites the pronunciation "ch-ee-lee" in Xeelee: Vengeance. It is unclear why, given the history of the author himself pronouncing it as "zee-lee", but one possible reason is that it reflects how the name came to be pronounced in-universe due to language change, especially considering Baxter's prior references to glottochronology in the series. Winner of the BSFA Award for Best Short Fiction, 2004 References External links Stephen Baxter's official website. The complete (as of September 2015) timeline for the Xeelee Sequence of novels and stories, hosted on Baxter's official website. Book series introduced in 1991 Fictional universes Novels about extraterrestrial life Stephen Baxter series Transhumanism in fiction Fiction about artificial intelligence Fiction about wormholes Fiction about consciousness transfer Fiction about immortality Fiction about the Solar System Fiction about time travel Fiction set in the 7th millennium or beyond Quantum fiction novels Dystopian novels Cosmic horror
Escape velocity
In celestial mechanics, escape velocity or escape speed is the minimum speed needed for an object to escape from contact with or orbit of a primary body, assuming: Ballistic trajectory - no other forces are acting on the object, including propulsion and friction No other gravity-producing objects exist Although the term escape velocity is common, it is more accurately described as a speed than a velocity because it is independent of direction. Because gravitational force between two objects depends on their combined mass, the escape speed also depends on mass. For artificial satellites and small natural objects, the mass of the object makes a negligible contribution to the combined mass, and so is often ignored. Escape speed varies with distance from the center of the primary body, as does the velocity of an object traveling under the gravitational influence of the primary. If an object is in a circular or elliptical orbit, its speed is always less than the escape speed at its current distance. In contrast if it is on a hyperbolic trajectory its speed will always be higher than the escape speed at its current distance. (It will slow down as it gets to greater distance, but do so asymptotically approaching a positive speed.) An object on a parabolic trajectory will always be traveling exactly the escape speed at its current distance. It has precisely balanced positive kinetic energy and negative gravitational potential energy; it will always be slowing down, asymptotically approaching zero speed, but never quite stop. Escape velocity calculations are typically used to determine whether an object will remain in the gravitational sphere of influence of a given body. For example, in solar system exploration it is useful to know whether a probe will continue to orbit the Earth or escape to a heliocentric orbit. It is also useful to know how much a probe will need to slow down in order to be gravitationally captured by its destination body. Rockets do not have to reach escape velocity in a single maneuver, and objects can also use a gravity assist to siphon kinetic energy away from large bodies. Precise trajectory calculations require taking into account small forces like atmospheric drag, radiation pressure, and solar wind. A rocket under continuous or intermittent thrust (or an object climbing a space elevator) can attain escape at any non-zero speed, but the minimum amount of energy required to do so is always the same. Calculation Escape speed at a distance d from the center of a spherically symmetric primary body (such as a star or a planet) with mass M is given by the formula where: G is the universal gravitational constant g = GM/d2 is the local gravitational acceleration (or the surface gravity, when d = r). The value GM is called the standard gravitational parameter, or μ, and is often known more accurately than either G or M separately. When given an initial speed greater than the escape speed the object will asymptotically approach the hyperbolic excess speed satisfying the equation: For example, with the definitional value for standard gravity of , the escape velocity is . Energy required For an object of mass the energy required to escape the Earth's gravitational field is GMm / r, a function of the object's mass (where r is radius of the Earth, nominally 6,371 kilometres (3,959 mi), G is the gravitational constant, and M is the mass of the Earth, ). 
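As a numerical illustration of the formula above (using standard textbook constants; treat the exact values as assumptions of this sketch), the escape speed from the Earth's surface and from a 200 km altitude, and the hyperbolic excess speed for a given launch speed, can be computed directly.

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2 (standard value)
M = 5.972e24         # mass of the Earth, kg (standard value)
R = 6.371e6          # mean radius of the Earth, m

def v_escape(d):
    """Escape speed at distance d from the centre of a body of mass M."""
    return math.sqrt(2 * G * M / d)

print(f"surface escape speed: {v_escape(R) / 1000:.2f} km/s")                      # ~11.19 km/s
print(f"escape speed at 200 km altitude: {v_escape(R + 200e3) / 1000:.2f} km/s")   # ~11.01 km/s

# Hyperbolic excess speed: v_inf^2 = v^2 - v_esc^2 for a launch speed v > v_esc.
v = 12.0e3
v_inf = math.sqrt(v**2 - v_escape(R)**2)
print(f"hyperbolic excess speed for a 12 km/s launch: {v_inf / 1000:.2f} km/s")
```

The two printed escape speeds are consistent with the figures quoted for the surface and for a low Earth orbit in the text below.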
A related quantity is the specific orbital energy which is essentially the sum of the kinetic and potential energy divided by the mass. An object has reached escape velocity when the specific orbital energy is greater than or equal to zero. Conservation of energy The existence of escape velocity can be thought of as a consequence of conservation of energy and an energy field of finite depth. For an object with a given total energy, which is moving subject to conservative forces (such as a static gravity field) it is only possible for the object to reach combinations of locations and speeds which have that total energy; places which have a higher potential energy than this cannot be reached at all. Adding speed (kinetic energy) to an object expands the region of locations it can reach, until, with enough energy, everywhere to infinity becomes accessible. The formula for escape velocity can be derived from the principle of conservation of energy. For the sake of simplicity, unless stated otherwise, we assume that an object will escape the gravitational field of a uniform spherical planet by moving away from it and that the only significant force acting on the moving object is the planet's gravity. Imagine that a spaceship of mass m is initially at a distance r from the center of mass of the planet, whose mass is M, and its initial speed is equal to its escape velocity, . At its final state, it will be an infinite distance away from the planet, and its speed will be negligibly small. Kinetic energy K and gravitational potential energy Ug are the only types of energy that we will deal with (we will ignore the drag of the atmosphere), so by the conservation of energy, We can set Kfinal = 0 because final velocity is arbitrarily small, and = 0 because final gravitational potential energy is defined to be zero a long distance away from a planet, so Relativistic The same result is obtained by a relativistic calculation, in which case the variable r represents the radial coordinate or reduced circumference of the Schwarzschild metric. Scenarios From the surface of a body An alternative expression for the escape velocity particularly useful at the surface on the body is: where r is the distance between the center of the body and the point at which escape velocity is being calculated and g is the gravitational acceleration at that distance (i.e., the surface gravity). For a body with a spherically symmetric distribution of mass, the escape velocity from the surface is proportional to the radius assuming constant density, and proportional to the square root of the average density ρ. where This escape velocity is relative to a non-rotating frame of reference, not relative to the moving surface of the planet or moon, as explained below. From a rotating body The escape velocity relative to the surface of a rotating body depends on direction in which the escaping body travels. For example, as the Earth's rotational velocity is 465 m/s at the equator, a rocket launched tangentially from the Earth's equator to the east requires an initial velocity of about 10.735 km/s relative to the moving surface at the point of launch to escape whereas a rocket launched tangentially from the Earth's equator to the west requires an initial velocity of about 11.665 km/s relative to that moving surface. The surface velocity decreases with the cosine of the geographic latitude, so space launch facilities are often located as close to the equator as feasible, e.g. 
the American Cape Canaveral (latitude 28°28′ N) and the French Guiana Space Centre (latitude 5°14′ N). Practical considerations In most situations it is impractical to achieve escape velocity almost instantly, because of the acceleration implied, and also because if there is an atmosphere, the hypersonic speeds involved (on Earth a speed of 11.2 km/s, or 40,320 km/h) would cause most objects to burn up due to aerodynamic heating or be torn apart by atmospheric drag. For an actual escape orbit, a spacecraft will accelerate steadily out of the atmosphere until it reaches the escape velocity appropriate for its altitude (which will be less than on the surface). In many cases, the spacecraft may be first placed in a parking orbit (e.g. a low Earth orbit at 160–2,000 km) and then accelerated to the escape velocity at that altitude, which will be slightly lower (about 11.0 km/s at a low Earth orbit of 200 km). The required additional change in speed, however, is far less because the spacecraft already has a significant orbital speed (in low Earth orbit speed is approximately 7.8 km/s, or 28,080 km/h). From an orbiting body The escape velocity at a given height is times the speed in a circular orbit at the same height, (compare this with the velocity equation in circular orbit). This corresponds to the fact that the potential energy with respect to infinity of an object in such an orbit is minus two times its kinetic energy, while to escape the sum of potential and kinetic energy needs to be at least zero. The velocity corresponding to the circular orbit is sometimes called the first cosmic velocity, whereas in this context the escape velocity is referred to as the second cosmic velocity. For a body in an elliptical orbit wishing to accelerate to an escape orbit the required speed will vary, and will be greatest at periapsis when the body is closest to the central body. However, the orbital speed of the body will also be at its highest at this point, and the change in velocity required will be at its lowest, as explained by the Oberth effect. Barycentric escape velocity Escape velocity can either be measured as relative to the other, central body or relative to center of mass or barycenter of the system of bodies. Thus for systems of two bodies, the term escape velocity can be ambiguous, but it is usually intended to mean the barycentric escape velocity of the less massive body. Escape velocity usually refers to the escape velocity of zero mass test particles. For zero mass test particles we have that the 'relative to the other' and the 'barycentric' escape velocities are the same, namely . But when we can't neglect the smaller mass (say ) we arrive at slightly different formulas. Because the system has to obey the law of conservation of momentum we see that both the larger and the smaller mass must be accelerated in the gravitational field. Relative to the center of mass the velocity of the larger mass ( , for planet) can be expressed in terms of the velocity of the smaller mass (, for rocket). We get . The 'barycentric' escape velocity now becomes : while the 'relative to the other' escape velocity becomes : . 
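The √2 relationship between circular-orbit speed and escape speed described under "From an orbiting body" above can be checked numerically. The sketch below assumes a hypothetical 200 km parking orbit and ignores the rotating-surface and barycentric corrections discussed in the surrounding text.

```python
import math

G = 6.674e-11           # m^3 kg^-1 s^-2 (standard reference value, for illustration)
M_EARTH = 5.972e24      # kg
R_EARTH = 6.371e6       # m

def circular_speed(M: float, r: float) -> float:
    """Speed of a circular orbit of radius r: sqrt(GM/r)."""
    return math.sqrt(G * M / r)

def escape_speed(M: float, r: float) -> float:
    """Escape speed at radius r: sqrt(2GM/r), i.e. sqrt(2) times the circular speed."""
    return math.sqrt(2 * G * M / r)

if __name__ == "__main__":
    r = R_EARTH + 200e3                      # illustrative 200 km parking orbit
    v_circ = circular_speed(M_EARTH, r)
    v_esc = escape_speed(M_EARTH, r)
    print(f"circular speed : {v_circ / 1000:.2f} km/s")   # ~7.79 km/s
    print(f"escape speed   : {v_esc / 1000:.2f} km/s")    # ~11.01 km/s
    print(f"ratio          : {v_esc / v_circ:.4f}")       # sqrt(2) = 1.4142
    print(f"delta-v needed : {(v_esc - v_circ) / 1000:.2f} km/s")  # ~3.2 km/s
```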
Height of lower-velocity trajectories Ignoring all factors other than the gravitational force between the body and the object, an object projected vertically at speed from the surface of a spherical body with escape velocity and radius will attain a maximum height satisfying the equation which, solving for h results in where is the ratio of the original speed to the escape velocity Unlike escape velocity, the direction (vertically up) is important to achieve maximum height. Trajectory If an object attains exactly escape velocity, but is not directed straight away from the planet, then it will follow a curved path or trajectory. Although this trajectory does not form a closed shape, it can be referred to as an orbit. Assuming that gravity is the only significant force in the system, this object's speed at any point in the trajectory will be equal to the escape velocity at that point due to the conservation of energy, its total energy must always be 0, which implies that it always has escape velocity; see the derivation above. The shape of the trajectory will be a parabola whose focus is located at the center of mass of the planet. An actual escape requires a course with a trajectory that does not intersect with the planet, or its atmosphere, since this would cause the object to crash. When moving away from the source, this path is called an escape orbit. Escape orbits are known as C3 = 0 orbits. C3 is the characteristic energy, = −GM/2a, where a is the semi-major axis, which is infinite for parabolic trajectories. If the body has a velocity greater than escape velocity then its path will form a hyperbolic trajectory and it will have an excess hyperbolic velocity, equivalent to the extra energy the body has. A relatively small extra delta-v above that needed to accelerate to the escape speed can result in a relatively large speed at infinity. Some orbital manoeuvres make use of this fact. For example, at a place where escape speed is 11.2 km/s, the addition of 0.4 km/s yields a hyperbolic excess speed of 3.02 km/s: If a body in circular orbit (or at the periapsis of an elliptical orbit) accelerates along its direction of travel to escape velocity, the point of acceleration will form the periapsis of the escape trajectory. The eventual direction of travel will be at 90 degrees to the direction at the point of acceleration. If the body accelerates to beyond escape velocity the eventual direction of travel will be at a smaller angle, and indicated by one of the asymptotes of the hyperbolic trajectory it is now taking. This means the timing of the acceleration is critical if the intention is to escape in a particular direction. If the speed at periapsis is , then the eccentricity of the trajectory is given by: This is valid for elliptical, parabolic, and hyperbolic trajectories. If the trajectory is hyperbolic or parabolic, it will asymptotically approach an angle from the direction at periapsis, with The speed will asymptotically approach List of escape velocities In this table, the left-hand half gives the escape velocity from the visible surface (which may be gaseous as with Jupiter for example), relative to the centre of the planet or moon (that is, not relative to its moving surface). In the right-hand half, Ve refers to the speed relative to the central body (for example the sun), whereas Vte is the speed (at the visible surface of the smaller body) relative to the smaller body (planet or moon). 
The last two columns will depend precisely on where in orbit escape velocity is reached, as the orbits are not exactly circular (particularly Mercury and Pluto). Deriving escape velocity using calculus Let G be the gravitational constant and let M be the mass of the earth (or other gravitating body) and m be the mass of the escaping body or projectile. At a distance r from the centre of gravitation the body feels an attractive force F = GMm/r². The work needed to move the body over a small distance dr against this force is therefore given by dW = F dr = (GMm/r²) dr. The total work needed to move the body from the surface r₀ of the gravitating body to infinity is then W = ∫_{r₀}^{∞} (GMm/r²) dr = GMm/r₀. In order to do this work to reach infinity, the body's minimal kinetic energy at departure must match this work, so the escape velocity v₀ satisfies ½mv₀² = GMm/r₀, which results in v₀ = √(2GM/r₀). See also Black hole – an object with an escape velocity greater than the speed of light Characteristic energy (C3) Delta-v budget – speed needed to perform maneuvers. Gravitational slingshot – a technique for changing trajectory Gravity well List of artificial objects in heliocentric orbit List of artificial objects leaving the Solar System Newton's cannonball Oberth effect – burning propellant deep in a gravity field gives higher change in kinetic energy Two-body problem Notes References External links Escape velocity calculator Web-based numerical escape velocity calculator
Froude number
In continuum mechanics, the Froude number (, after William Froude, ) is a dimensionless number defined as the ratio of the flow inertia to the external force field (the latter in many applications simply due to gravity). The Froude number is based on the speed–length ratio which he defined as: where is the local flow velocity (in m/s), is the local gravity field (in m/s2), and is a characteristic length (in m). The Froude number has some analogy with the Mach number. In theoretical fluid dynamics the Froude number is not frequently considered since usually the equations are considered in the high Froude limit of negligible external field, leading to homogeneous equations that preserve the mathematical aspects. For example, homogeneous Euler equations are conservation equations. However, in naval architecture the Froude number is a significant figure used to determine the resistance of a partially submerged object moving through water. Origins In open channel flows, introduced first the ratio of the flow velocity to the square root of the gravity acceleration times the flow depth. When the ratio was less than unity, the flow behaved like a fluvial motion (i.e., subcritical flow), and like a torrential flow motion when the ratio was greater than unity. Quantifying resistance of floating objects is generally credited to William Froude, who used a series of scale models to measure the resistance each model offered when towed at a given speed. The naval constructor Frederic Reech had put forward the concept much earlier in 1852 for testing ships and propellers but Froude was unaware of it. Speed–length ratio was originally defined by Froude in his Law of Comparison in 1868 in dimensional terms as: where: = flow speed = length of waterline The term was converted into non-dimensional terms and was given Froude's name in recognition of the work he did. In France, it is sometimes called Reech–Froude number after Frederic Reech. Definition and main application To show how the Froude number is linked to general continuum mechanics and not only to hydrodynamics we start from the Cauchy momentum equation in its dimensionless (nondimensional) form. Cauchy momentum equation In order to make the equations dimensionless, a characteristic length r0, and a characteristic velocity u0, need to be defined. These should be chosen such that the dimensionless variables are all of order one. The following dimensionless variables are thus obtained: Substitution of these inverse relations in the Euler momentum equations, and definition of the Froude number: and the Euler number: the equations are finally expressed (with the material derivative and now omitting the indexes): Cauchy-type equations in the high Froude limit (corresponding to negligible external field) are named free equations. On the other hand, in the low Euler limit (corresponding to negligible stress) general Cauchy momentum equation becomes an inhomogeneous Burgers equation (here we make explicit the material derivative): This is an inhomogeneous pure advection equation, as much as the Stokes equation is a pure diffusion equation. Euler momentum equation Euler momentum equation is a Cauchy momentum equation with the Pascal law being the stress constitutive relation: in nondimensional Lagrangian form is: Free Euler equations are conservative. The limit of high Froude numbers (low external field) is thus notable and can be studied with perturbation theory. 
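As a rough illustration of the speed–length ratio u/√(gL) defined above and of the subcritical/supercritical distinction noted in the Origins paragraph, the following Python sketch classifies two made-up flows; the numerical inputs are arbitrary examples, not values from the text.

```python
import math

def froude_number(u: float, L: float, g: float = 9.81) -> float:
    """Froude number Fr = u / sqrt(g * L) for flow speed u and characteristic length L."""
    return u / math.sqrt(g * L)

def regime(fr: float) -> str:
    """Classify an open-channel flow by its Froude number."""
    if fr < 1.0:
        return "subcritical (fluvial)"
    if fr > 1.0:
        return "supercritical (torrential)"
    return "critical"

if __name__ == "__main__":
    # Illustrative cases: a river reach 2 m deep at 1 m/s, and a 100 m hull at 10 m/s.
    for u, L, label in [(1.0, 2.0, "river reach"), (10.0, 100.0, "hull at waterline length")]:
        fr = froude_number(u, L)
        print(f"{label}: Fr = {fr:.3f} -> {regime(fr)}")
```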
Incompressible Navier–Stokes momentum equation Incompressible Navier–Stokes momentum equation is a Cauchy momentum equation with the Pascal law and Stokes's law being the stress constitutive relations: in nondimensional convective form it is: where is the Reynolds number. Free Navier–Stokes equations are dissipative (non conservative). Other applications Ship hydrodynamics In marine hydrodynamic applications, the Froude number is usually referenced with the notation and is defined as: where is the relative flow velocity between the sea and ship, is in particular the acceleration due to gravity, and is the length of the ship at the water line level, or in some notations. It is an important parameter with respect to the ship's drag, or resistance, especially in terms of wave making resistance. In the case of planing craft, where the waterline length is too speed-dependent to be meaningful, the Froude number is best defined as displacement Froude number and the reference length is taken as the cubic root of the volumetric displacement of the hull: Shallow water waves For shallow water waves, such as tsunamis and hydraulic jumps, the characteristic velocity is the average flow velocity, averaged over the cross-section perpendicular to the flow direction. The wave velocity, termed celerity , is equal to the square root of gravitational acceleration , times cross-sectional area , divided by free-surface width : so the Froude number in shallow water is: For rectangular cross-sections with uniform depth , the Froude number can be simplified to: For the flow is called a subcritical flow, further for the flow is characterised as supercritical flow. When the flow is denoted as critical flow. Wind engineering When considering wind effects on dynamically sensitive structures such as suspension bridges it is sometimes necessary to simulate the combined effect of the vibrating mass of the structure with the fluctuating force of the wind. In such cases, the Froude number should be respected. Similarly, when simulating hot smoke plumes combined with natural wind, Froude number scaling is necessary to maintain the correct balance between buoyancy forces and the momentum of the wind. Allometry The Froude number has also been applied in allometry to studying the locomotion of terrestrial animals, including antelope and dinosaurs. Extended Froude number Geophysical mass flows such as avalanches and debris flows take place on inclined slopes which then merge into gentle and flat run-out zones. So, these flows are associated with the elevation of the topographic slopes that induce the gravity potential energy together with the pressure potential energy during the flow. Therefore, the classical Froude number should include this additional effect. For such a situation, Froude number needs to be re-defined. The extended Froude number is defined as the ratio between the kinetic and the potential energy: where is the mean flow velocity, , ( is the earth pressure coefficient, is the slope), , is the channel downslope position and is the distance from the point of the mass release along the channel to the point where the flow hits the horizontal reference datum; and are the pressure potential and gravity potential energies, respectively. In the classical definition of the shallow-water or granular flow Froude number, the potential energy associated with the surface elevation, , is not considered. The extended Froude number differs substantially from the classical Froude number for higher surface elevations. 
The term emerges from the change of the geometry of the moving mass along the slope. Dimensional analysis suggests that for shallow flows , while and are both of order unity. If the mass is shallow with a virtually bed-parallel free-surface, then can be disregarded. In this situation, if the gravity potential is not taken into account, then is unbounded even though the kinetic energy is bounded. So, formally considering the additional contribution due to the gravitational potential energy, the singularity in Fr is removed. Stirred tanks In the study of stirred tanks, the Froude number governs the formation of surface vortices. Since the impeller tip velocity is (circular motion), where is the impeller frequency (usually in rpm) and is the impeller radius (in engineering the diameter is much more frequently employed), the Froude number then takes the following form: The Froude number finds also a similar application in powder mixers. It will indeed be used to determine in which mixing regime the blender is working. If Fr<1, the particles are just stirred, but if Fr>1, centrifugal forces applied to the powder overcome gravity and the bed of particles becomes fluidized, at least in some part of the blender, promoting mixing Densimetric Froude number When used in the context of the Boussinesq approximation the densimetric Froude number is defined as where is the reduced gravity: The densimetric Froude number is usually preferred by modellers who wish to nondimensionalize a speed preference to the Richardson number which is more commonly encountered when considering stratified shear layers. For example, the leading edge of a gravity current moves with a front Froude number of about unity. Walking Froude number The Froude number may be used to study trends in animal gait patterns. In analyses of the dynamics of legged locomotion, a walking limb is often modeled as an inverted pendulum, where the center of mass goes through a circular arc centered at the foot. The Froude number is the ratio of the centripetal force around the center of motion, the foot, and the weight of the animal walking: where is the mass, is the characteristic length, is the acceleration due to gravity and is the velocity. The characteristic length may be chosen to suit the study at hand. For instance, some studies have used the vertical distance of the hip joint from the ground, while others have used total leg length. The Froude number may also be calculated from the stride frequency as follows: If total leg length is used as the characteristic length, then the theoretical maximum speed of walking has a Froude number of 1.0 since any higher value would result in takeoff and the foot missing the ground. The typical transition speed from bipedal walking to running occurs with . R. M. Alexander found that animals of different sizes and masses travelling at different speeds, but with the same Froude number, consistently exhibit similar gaits. This study found that animals typically switch from an amble to a symmetric running gait (e.g., a trot or pace) around a Froude number of 1.0. A preference for asymmetric gaits (e.g., a canter, transverse gallop, rotary gallop, bound, or pronk) was observed at Froude numbers between 2.0 and 3.0. Usage The Froude number is used to compare the wave making resistance between bodies of various sizes and shapes. In free-surface flow, the nature of the flow (supercritical or subcritical) depends upon whether the Froude number is greater than or less than unity. 
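The walking Froude number discussed above, Fr = v²/(gL), is easy to evaluate directly; the sketch below uses an assumed leg length of 0.9 m and illustrative walking speeds (the specific numbers are examples, not values from the text).

```python
import math

def walking_froude(v: float, leg_length: float, g: float = 9.81) -> float:
    """Walking Froude number v**2 / (g * L), with L the leg (or hip-height) length."""
    return v ** 2 / (g * leg_length)

def max_walking_speed(leg_length: float, g: float = 9.81) -> float:
    """Speed at which Fr = 1, the theoretical ceiling for inverted-pendulum walking."""
    return math.sqrt(g * leg_length)

if __name__ == "__main__":
    L = 0.9                        # assumed human leg length, metres
    for v in (1.4, 2.0, 3.0):      # typical walk, brisk walk, slow run (m/s)
        print(f"v = {v:.1f} m/s -> Fr = {walking_froude(v, L):.2f}")
    print(f"Fr = 1 walking ceiling for L = {L} m: {max_walking_speed(L):.2f} m/s")
```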
One can easily see the line of "critical" flow in a kitchen or bathroom sink. Leave it unplugged and let the faucet run. Near the place where the stream of water hits the sink, the flow is supercritical. It 'hugs' the surface and moves quickly. On the outer edge of the flow pattern the flow is subcritical. This flow is thicker and moves more slowly. The boundary between the two areas is called a "hydraulic jump". The jump starts where the flow is just critical and Froude number is equal to 1.0. The Froude number has been used to study trends in animal locomotion in order to better understand why animals use different gait patterns as well as to form hypotheses about the gaits of extinct species. In addition, particle bed behavior can be quantified by Froude number (Fr) in order to establish the optimum operating window. See also Notes References External links https://web.archive.org/web/20070927085042/http://www.qub.ac.uk/waves/fastferry/reference/MCA457.pdf
Rigid body
In physics, a rigid body, also known as a rigid object, is a solid body in which deformation is zero or negligible. The distance between any two given points on a rigid body remains constant in time regardless of external forces or moments exerted on it. A rigid body is usually considered as a continuous distribution of mass. In the study of special relativity, a perfectly rigid body does not exist; and objects can only be assumed to be rigid if they are not moving near the speed of light. In quantum mechanics, a rigid body is usually thought of as a collection of point masses. For instance, molecules (consisting of the point masses: electrons and nuclei) are often seen as rigid bodies (see classification of molecules as rigid rotors). Kinematics Linear and angular position The position of a rigid body is the position of all the particles of which it is composed. To simplify the description of this position, we exploit the property that the body is rigid, namely that all its particles maintain the same distance relative to each other. If the body is rigid, it is sufficient to describe the position of at least three non-collinear particles. This makes it possible to reconstruct the position of all the other particles, provided that their time-invariant position relative to the three selected particles is known. However, typically a different, mathematically more convenient, but equivalent approach is used. The position of the whole body is represented by: the linear position or position of the body, namely the position of one of the particles of the body, specifically chosen as a reference point (typically coinciding with the center of mass or centroid of the body), together with the angular position (also known as orientation, or attitude) of the body. Thus, the position of a rigid body has two components: linear and angular, respectively. The same is true for other kinematic and kinetic quantities describing the motion of a rigid body, such as linear and angular velocity, acceleration, momentum, impulse, and kinetic energy. The linear position can be represented by a vector with its tail at an arbitrary reference point in space (the origin of a chosen coordinate system) and its tip at an arbitrary point of interest on the rigid body, typically coinciding with its center of mass or centroid. This reference point may define the origin of a coordinate system fixed to the body. There are several ways to numerically describe the orientation of a rigid body, including a set of three Euler angles, a quaternion, or a direction cosine matrix (also referred to as a rotation matrix). All these methods actually define the orientation of a basis set (or coordinate system) which has a fixed orientation relative to the body (i.e. rotates together with the body), relative to another basis set (or coordinate system), from which the motion of the rigid body is observed. For instance, a basis set with fixed orientation relative to an airplane can be defined as a set of three orthogonal unit vectors b1, b2, b3, such that b1 is parallel to the chord line of the wing and directed forward, b2 is normal to the plane of symmetry and directed rightward, and b3 is given by the cross product . In general, when a rigid body moves, both its position and orientation vary with time. In the kinematic sense, these changes are referred to as translation and rotation, respectively. 
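As a sketch of the orientation descriptions above, the following Python/NumPy code builds a direction cosine (rotation) matrix from a set of Euler angles. The z-y-x (yaw-pitch-roll) sequence and the numerical angles are illustrative choices, since the text does not fix a convention.

```python
import numpy as np

def rotation_matrix_zyx(yaw: float, pitch: float, roll: float) -> np.ndarray:
    """Direction cosine matrix for an intrinsic z-y-x (yaw-pitch-roll) Euler sequence.

    The columns are the body basis vectors b1, b2, b3 expressed in the reference frame.
    """
    cz, sz = np.cos(yaw), np.sin(yaw)
    cy, sy = np.cos(pitch), np.sin(pitch)
    cx, sx = np.cos(roll), np.sin(roll)
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    return Rz @ Ry @ Rx

if __name__ == "__main__":
    R = rotation_matrix_zyx(np.radians(30), np.radians(10), np.radians(5))
    # A rotation matrix is orthogonal with determinant +1:
    print(np.allclose(R.T @ R, np.eye(3)), np.isclose(np.linalg.det(R), 1.0))
    # A point fixed in the body, mapped into the reference frame with a translation:
    r_body = np.array([1.0, 0.0, 0.0])
    origin = np.array([10.0, 5.0, 0.0])
    print(origin + R @ r_body)
```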
Indeed, the position of a rigid body can be viewed as a hypothetic translation and rotation (roto-translation) of the body starting from a hypothetic reference position (not necessarily coinciding with a position actually taken by the body during its motion). Linear and angular velocity Velocity (also called linear velocity) and angular velocity are measured with respect to a frame of reference. The linear velocity of a rigid body is a vector quantity, equal to the time rate of change of its linear position. Thus, it is the velocity of a reference point fixed to the body. During purely translational motion (motion with no rotation), all points on a rigid body move with the same velocity. However, when motion involves rotation, the instantaneous velocity of any two points on the body will generally not be the same. Two points of a rotating body will have the same instantaneous velocity only if they happen to lie on an axis parallel to the instantaneous axis of rotation. Angular velocity is a vector quantity that describes the angular speed at which the orientation of the rigid body is changing and the instantaneous axis about which it is rotating (the existence of this instantaneous axis is guaranteed by the Euler's rotation theorem). All points on a rigid body experience the same angular velocity at all times. During purely rotational motion, all points on the body change position except for those lying on the instantaneous axis of rotation. The relationship between orientation and angular velocity is not directly analogous to the relationship between position and velocity. Angular velocity is not the time rate of change of orientation, because there is no such concept as an orientation vector that can be differentiated to obtain the angular velocity. Kinematical equations Addition theorem for angular velocity The angular velocity of a rigid body B in a reference frame N is equal to the sum of the angular velocity of a rigid body D in N and the angular velocity of B with respect to D: In this case, rigid bodies and reference frames are indistinguishable and completely interchangeable. Addition theorem for position For any set of three points P, Q, and R, the position vector from P to R is the sum of the position vector from P to Q and the position vector from Q to R: The norm of a position vector is the spatial distance. Here the coordinates of all three vectors must be expressed in coordinate frames with the same orientation. Mathematical definition of velocity The velocity of point P in reference frame N is defined as the time derivative in N of the position vector from O to P: where O is any arbitrary point fixed in reference frame N, and the N to the left of the d/dt operator indicates that the derivative is taken in reference frame N. The result is independent of the selection of O so long as O is fixed in N. Mathematical definition of acceleration The acceleration of point P in reference frame N is defined as the time derivative in N of its velocity: Velocity of two points fixed on a rigid body For two points P and Q that are fixed on a rigid body B, where B has an angular velocity in the reference frame N, the velocity of Q in N can be expressed as a function of the velocity of P in N: where is the position vector from P to Q., with coordinates expressed in N (or a frame with the same orientation as N.) This relation can be derived from the temporal invariance of the norm distance between P and Q. 
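The relation for two points fixed on a rigid body, v_Q = v_P + ω × r_PQ, can be written directly in code; a minimal NumPy sketch with arbitrary example values follows.

```python
import numpy as np

def velocity_of_point(v_P: np.ndarray, omega: np.ndarray, r_PQ: np.ndarray) -> np.ndarray:
    """Velocity of point Q fixed on a rigid body:
        v_Q = v_P + omega x r_PQ
    given the velocity of another body-fixed point P, the body's angular velocity,
    and the position vector from P to Q, all expressed in the same frame orientation.
    """
    return v_P + np.cross(omega, r_PQ)

if __name__ == "__main__":
    v_P = np.array([1.0, 0.0, 0.0])      # P translating along +x at 1 m/s
    omega = np.array([0.0, 0.0, 2.0])    # body spinning at 2 rad/s about +z
    print(velocity_of_point(v_P, omega, np.array([0.0, -0.5, 0.0])))  # [2. 0. 0.]
    # This second point happens to lie on the instantaneous axis of rotation,
    # so its velocity vanishes:
    print(velocity_of_point(v_P, omega, np.array([0.0, 0.5, 0.0])))   # [0. 0. 0.]
```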
Acceleration of two points fixed on a rigid body By differentiating the equation for the Velocity of two points fixed on a rigid body in N with respect to time, the acceleration in reference frame N of a point Q fixed on a rigid body B can be expressed as where is the angular acceleration of B in the reference frame N. Angular velocity and acceleration of two points fixed on a rigid body As mentioned above, all points on a rigid body B have the same angular velocity in a fixed reference frame N, and thus the same angular acceleration Velocity of one point moving on a rigid body If the point R is moving in the rigid body B while B moves in reference frame N, then the velocity of R in N is where Q is the point fixed in B that is instantaneously coincident with R at the instant of interest. This relation is often combined with the relation for the Velocity of two points fixed on a rigid body. Acceleration of one point moving on a rigid body The acceleration in reference frame N of the point R moving in body B while B is moving in frame N is given by where Q is the point fixed in B that instantaneously coincident with R at the instant of interest. This equation is often combined with Acceleration of two points fixed on a rigid body. Other quantities If C is the origin of a local coordinate system L, attached to the body, the spatial or twist acceleration of a rigid body is defined as the spatial acceleration of C (as opposed to material acceleration above): where represents the position of the point/particle with respect to the reference point of the body in terms of the local coordinate system L (the rigidity of the body means that this does not depend on time) is the orientation matrix, an orthogonal matrix with determinant 1, representing the orientation (angular position) of the local coordinate system L, with respect to the arbitrary reference orientation of another coordinate system G. Think of this matrix as three orthogonal unit vectors, one in each column, which define the orientation of the axes of L with respect to G. represents the angular velocity of the rigid body represents the total velocity of the point/particle represents the total acceleration of the point/particle represents the angular acceleration of the rigid body represents the spatial acceleration of the point/particle represents the spatial acceleration of the rigid body (i.e. the spatial acceleration of the origin of L). In 2D, the angular velocity is a scalar, and matrix A(t) simply represents a rotation in the xy-plane by an angle which is the integral of the angular velocity over time. Vehicles, walking people, etc., usually rotate according to changes in the direction of the velocity: they move forward with respect to their own orientation. Then, if the body follows a closed orbit in a plane, the angular velocity integrated over a time interval in which the orbit is completed once, is an integer times 360°. This integer is the winding number with respect to the origin of the velocity. Compare the amount of rotation associated with the vertices of a polygon. Kinetics Any point that is rigidly connected to the body can be used as reference point (origin of coordinate system L) to describe the linear motion of the body (the linear position, velocity and acceleration vectors depend on the choice). 
However, depending on the application, a convenient choice may be: the center of mass of the whole system, which generally has the simplest motion for a body moving freely in space; a point such that the translational motion is zero or simplified, e.g. on an axle or hinge, at the center of a ball and socket joint, etc. When the center of mass is used as reference point: The (linear) momentum is independent of the rotational motion. At any time it is equal to the total mass of the rigid body times the translational velocity. The angular momentum with respect to the center of mass is the same as without translation: at any time it is equal to the inertia tensor times the angular velocity. When the angular velocity is expressed with respect to a coordinate system coinciding with the principal axes of the body, each component of the angular momentum is a product of a moment of inertia (a principal value of the inertia tensor) times the corresponding component of the angular velocity; the torque is the inertia tensor times the angular acceleration. Possible motions in the absence of external forces are translation with constant velocity, steady rotation about a fixed principal axis, and also torque-free precession. The net external force on the rigid body is always equal to the total mass times the translational acceleration (i.e., Newton's second law holds for the translational motion, even when the net external torque is nonzero, and/or the body rotates). The total kinetic energy is simply the sum of translational and rotational energy. Geometry Two rigid bodies are said to be different (not copies) if there is no proper rotation from one to the other. A rigid body is called chiral if its mirror image is different in that sense, i.e., if it has either no symmetry or its symmetry group contains only proper rotations. In the opposite case an object is called achiral: the mirror image is a copy, not a different object. Such an object may have a symmetry plane, but not necessarily: there may also be a plane of reflection with respect to which the image of the object is a rotated version. The latter applies for S2n, of which the case n = 1 is inversion symmetry. For a (rigid) rectangular transparent sheet, inversion symmetry corresponds to having on one side an image without rotational symmetry and on the other side an image such that what shines through is the image at the top side, upside down. We can distinguish two cases: the sheet surface with the image is not symmetric - in this case the two sides are different, but the mirror image of the object is the same, after a rotation by 180° about the axis perpendicular to the mirror plane. the sheet surface with the image has a symmetry axis - in this case the two sides are the same, and the mirror image of the object is also the same, again after a rotation by 180° about the axis perpendicular to the mirror plane. A sheet with a through and through image is achiral. We can distinguish again two cases: the sheet surface with the image has no symmetry axis - the two sides are different the sheet surface with the image has a symmetry axis - the two sides are the same Configuration space The configuration space of a rigid body with one point fixed (i.e., a body with zero translational motion) is given by the underlying manifold of the rotation group SO(3). 
The configuration space of a nonfixed (with non-zero translational motion) rigid body is E+(3), the subgroup of direct isometries of the Euclidean group in three dimensions (combinations of translations and rotations). See also Angular velocity Axes conventions Differential rotation Rigid body dynamics Infinitesimal rotations Euler's equations (rigid body dynamics) Euler's laws Born rigidity Rigid rotor Rigid transformation Geometric Mechanics Classical Mechanics (Goldstein) Notes References This reference effectively combines screw theory with rigid body dynamics for robotic applications. The author also chooses to use spatial accelerations extensively in place of material accelerations as they simplify the equations and allow for compact notation. JPL DARTS page has a section on spatial operator algebra, as well as an extensive list of references. Prof. Dr. Dennis M. Kochmann, Dynamics Lecture Notes, ETH Zurich.
Mathematical formulation of quantum mechanics
The mathematical formulations of quantum mechanics are those mathematical formalisms that permit a rigorous description of quantum mechanics. This mathematical formalism uses mainly a part of functional analysis, especially Hilbert spaces, which are a kind of linear space. Such are distinguished from mathematical formalisms for physics theories developed prior to the early 1900s by the use of abstract mathematical structures, such as infinite-dimensional Hilbert spaces (L2 space mainly), and operators on these spaces. In brief, values of physical observables such as energy and momentum were no longer considered as values of functions on phase space, but as eigenvalues; more precisely as spectral values of linear operators in Hilbert space. These formulations of quantum mechanics continue to be used today. At the heart of the description are ideas of quantum state and quantum observables, which are radically different from those used in previous models of physical reality. While the mathematics permits calculation of many quantities that can be measured experimentally, there is a definite theoretical limit to values that can be simultaneously measured. This limitation was first elucidated by Heisenberg through a thought experiment, and is represented mathematically in the new formalism by the non-commutativity of operators representing quantum observables. Prior to the development of quantum mechanics as a separate theory, the mathematics used in physics consisted mainly of formal mathematical analysis, beginning with calculus, and increasing in complexity up to differential geometry and partial differential equations. Probability theory was used in statistical mechanics. Geometric intuition played a strong role in the first two and, accordingly, theories of relativity were formulated entirely in terms of differential geometric concepts. The phenomenology of quantum physics arose roughly between 1895 and 1915, and for the 10 to 15 years before the development of quantum mechanics (around 1925) physicists continued to think of quantum theory within the confines of what is now called classical physics, and in particular within the same mathematical structures. The most sophisticated example of this is the Sommerfeld–Wilson–Ishiwara quantization rule, which was formulated entirely on the classical phase space. History of the formalism The "old quantum theory" and the need for new mathematics In the 1890s, Planck was able to derive the blackbody spectrum, which was later used to avoid the classical ultraviolet catastrophe by making the unorthodox assumption that, in the interaction of electromagnetic radiation with matter, energy could only be exchanged in discrete units which he called quanta. Planck postulated a direct proportionality between the frequency of radiation and the quantum of energy at that frequency. The proportionality constant, , is now called the Planck constant in his honor. In 1905, Einstein explained certain features of the photoelectric effect by assuming that Planck's energy quanta were actual particles, which were later dubbed photons. All of these developments were phenomenological and challenged the theoretical physics of the time. Bohr and Sommerfeld went on to modify classical mechanics in an attempt to deduce the Bohr model from first principles. They proposed that, of all closed classical orbits traced by a mechanical system in its phase space, only the ones that enclosed an area which was a multiple of the Planck constant were actually allowed. 
The most sophisticated version of this formalism was the so-called Sommerfeld–Wilson–Ishiwara quantization. Although the Bohr model of the hydrogen atom could be explained in this way, the spectrum of the helium atom (classically an unsolvable 3-body problem) could not be predicted. The mathematical status of quantum theory remained uncertain for some time. In 1923, de Broglie proposed that wave–particle duality applied not only to photons but to electrons and every other physical system. The situation changed rapidly in the years 1925–1930, when working mathematical foundations were found through the groundbreaking work of Erwin Schrödinger, Werner Heisenberg, Max Born, Pascual Jordan, and the foundational work of John von Neumann, Hermann Weyl and Paul Dirac, and it became possible to unify several different approaches in terms of a fresh set of ideas. The physical interpretation of the theory was also clarified in these years after Werner Heisenberg discovered the uncertainty relations and Niels Bohr introduced the idea of complementarity. The "new quantum theory" Werner Heisenberg's matrix mechanics was the first successful attempt at replicating the observed quantization of atomic spectra. Later in the same year, Schrödinger created his wave mechanics. Schrödinger's formalism was considered easier to understand, visualize and calculate as it led to differential equations, which physicists were already familiar with solving. Within a year, it was shown that the two theories were equivalent. Schrödinger himself initially did not understand the fundamental probabilistic nature of quantum mechanics, as he thought that the absolute square of the wave function of an electron should be interpreted as the charge density of an object smeared out over an extended, possibly infinite, volume of space. It was Max Born who introduced the interpretation of the absolute square of the wave function as the probability distribution of the position of a pointlike object. Born's idea was soon taken over by Niels Bohr in Copenhagen who then became the "father" of the Copenhagen interpretation of quantum mechanics. Schrödinger's wave function can be seen to be closely related to the classical Hamilton–Jacobi equation. The correspondence to classical mechanics was even more explicit, although somewhat more formal, in Heisenberg's matrix mechanics. In his PhD thesis project, Paul Dirac discovered that the equation for the operators in the Heisenberg representation, as it is now called, closely translates to classical equations for the dynamics of certain quantities in the Hamiltonian formalism of classical mechanics, when one expresses them through Poisson brackets, a procedure now known as canonical quantization. Already before Schrödinger, the young postdoctoral fellow Werner Heisenberg invented his matrix mechanics, which was the first correct quantum mechanics – the essential breakthrough. Heisenberg's matrix mechanics formulation was based on algebras of infinite matrices, a very radical formulation in light of the mathematics of classical physics, although he started from the index-terminology of the experimentalists of that time, not even aware that his "index-schemes" were matrices, as Born soon pointed out to him. In fact, in these early years, linear algebra was not generally popular with physicists in its present form. 
Although Schrödinger himself after a year proved the equivalence of his wave-mechanics and Heisenberg's matrix mechanics, the reconciliation of the two approaches and their modern abstraction as motions in Hilbert space is generally attributed to Paul Dirac, who wrote a lucid account in his 1930 classic The Principles of Quantum Mechanics. He is the third, and possibly most important, pillar of that field (he soon was the only one to have discovered a relativistic generalization of the theory). In his above-mentioned account, he introduced the bra–ket notation, together with an abstract formulation in terms of the Hilbert space used in functional analysis; he showed that Schrödinger's and Heisenberg's approaches were two different representations of the same theory, and found a third, most general one, which represented the dynamics of the system. His work was particularly fruitful in many types of generalizations of the field. The first complete mathematical formulation of this approach, known as the Dirac–von Neumann axioms, is generally credited to John von Neumann's 1932 book Mathematical Foundations of Quantum Mechanics, although Hermann Weyl had already referred to Hilbert spaces (which he called unitary spaces) in his 1927 classic paper and book. It was developed in parallel with a new approach to the mathematical spectral theory based on linear operators rather than the quadratic forms that were David Hilbert's approach a generation earlier. Though theories of quantum mechanics continue to evolve to this day, there is a basic framework for the mathematical formulation of quantum mechanics which underlies most approaches and can be traced back to the mathematical work of John von Neumann. In other words, discussions about interpretation of the theory, and extensions to it, are now mostly conducted on the basis of shared assumptions about the mathematical foundations. Later developments The application of the new quantum theory to electromagnetism resulted in quantum field theory, which was developed starting around 1930. Quantum field theory has driven the development of more sophisticated formulations of quantum mechanics, of which the ones presented here are simple special cases. Path integral formulation Phase-space formulation of quantum mechanics & geometric quantization quantum field theory in curved spacetime axiomatic, algebraic and constructive quantum field theory C*-algebra formalism Generalized statistical model of quantum mechanics A related topic is the relationship to classical mechanics. Any new physical theory is supposed to reduce to successful old theories in some approximation. For quantum mechanics, this translates into the need to study the so-called classical limit of quantum mechanics. Also, as Bohr emphasized, human cognitive abilities and language are inextricably linked to the classical realm, and so classical descriptions are intuitively more accessible than quantum ones. In particular, quantization, namely the construction of a quantum theory whose classical limit is a given and known classical theory, becomes an important area of quantum physics in itself. Finally, some of the originators of quantum theory (notably Einstein and Schrödinger) were unhappy with what they thought were the philosophical implications of quantum mechanics. In particular, Einstein took the position that quantum mechanics must be incomplete, which motivated research into so-called hidden-variable theories. 
The issue of hidden variables has become in part an experimental issue with the help of quantum optics. Postulates of quantum mechanics A physical system is generally described by three basic ingredients: states; observables; and dynamics (or law of time evolution) or, more generally, a group of physical symmetries. A classical description can be given in a fairly direct way by a phase space model of mechanics: states are points in a phase space formulated by symplectic manifold, observables are real-valued functions on it, time evolution is given by a one-parameter group of symplectic transformations of the phase space, and physical symmetries are realized by symplectic transformations. A quantum description normally consists of a Hilbert space of states, observables are self-adjoint operators on the space of states, time evolution is given by a one-parameter group of unitary transformations on the Hilbert space of states, and physical symmetries are realized by unitary transformations. (It is possible, to map this Hilbert-space picture to a phase space formulation, invertibly. See below.) The following summary of the mathematical framework of quantum mechanics can be partly traced back to the Dirac–von Neumann axioms. Description of the state of a system Each isolated physical system is associated with a (topologically) separable complex Hilbert space with inner product . Separability is a mathematically convenient hypothesis, with the physical interpretation that the state is uniquely determined by countably many observations. Quantum states can be identified with equivalence classes in , where two vectors (of length 1) represent the same state if they differ only by a phase factor. As such, quantum states form a ray in projective Hilbert space, not a vector. Many textbooks fail to make this distinction, which could be partly a result of the fact that the Schrödinger equation itself involves Hilbert-space "vectors", with the result that the imprecise use of "state vector" rather than ray is very difficult to avoid. Accompanying Postulate I is the composite system postulate: In the presence of quantum entanglement, the quantum state of the composite system cannot be factored as a tensor product of states of its local constituents; Instead, it is expressed as a sum, or superposition, of tensor products of states of component subsystems. A subsystem in an entangled composite system generally cannot be described by a state vector (or a ray), but instead is described by a density operator; Such quantum state is known as a mixed state. The density operator of a mixed state is a trace class, nonnegative (positive semi-definite) self-adjoint operator normalized to be of trace 1. In turn, any density operator of a mixed state can be represented as a subsystem of a larger composite system in a pure state (see purification theorem). In the absence of quantum entanglement, the quantum state of the composite system is called a separable state. The density matrix of a bipartite system in a separable state can be expressed as , where . If there is only a single non-zero , then the state can be expressed just as and is called simply separable or product state. Measurement on a system Description of physical quantities Physical observables are represented by Hermitian matrices on . Since these operators are Hermitian, their eigenvalues are always real, and represent the possible outcomes/results from measuring the corresponding observable. 
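A small numerical sketch of the composite-system postulate above: a two-qubit Bell state cannot be factored into a product of local states, and tracing out one qubit leaves a density operator describing a mixed state. The example state and the NumPy-based partial trace are illustrative choices, not part of the original text.

```python
import numpy as np

# Single-qubit computational basis states.
ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

# Bell state (|00> + |11>)/sqrt(2): a sum of tensor products that cannot be factored.
bell = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)
rho = np.outer(bell, bell.conj())   # density operator of the pure composite state

# Partial trace over the second qubit: reshape to indices (i1, i2, j1, j2) and sum i2 = j2.
rho_A = np.einsum("ikjk->ij", rho.reshape(2, 2, 2, 2))

print(np.round(rho_A, 3))                        # 0.5 * identity matrix
print("trace  =", np.trace(rho_A).real)          # 1.0  (normalized, trace-class)
print("purity =", np.trace(rho_A @ rho_A).real)  # 0.5 < 1, so the subsystem is mixed
```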
If the spectrum of the observable is discrete, then the possible results are quantized. Results of measurement By spectral theory, we can associate a probability measure to the values of in any state . We can also show that the possible values of the observable in any state must belong to the spectrum of . The expectation value (in the sense of probability theory) of the observable for the system in state represented by the unit vector ∈ H is . If we represent the state in the basis formed by the eigenvectors of , then the square of the modulus of the component attached to a given eigenvector is the probability of observing its corresponding eigenvalue. For a mixed state , the expected value of in the state is , and the probability of obtaining an eigenvalue in a discrete, nondegenerate spectrum of the corresponding observable is given by . If the eigenvalue has degenerate, orthonormal eigenvectors , then the projection operator onto the eigensubspace can be defined as the identity operator in the eigensubspace: and then . Postulates II.a and II.b are collectively known as the Born rule of quantum mechanics. Effect of measurement on the state When a measurement is performed, only one result is obtained (according to some interpretations of quantum mechanics). This is modeled mathematically as the processing of additional information from the measurement, confining the probabilities of an immediate second measurement of the same observable. In the case of a discrete, non-degenerate spectrum, two sequential measurements of the same observable will always give the same value assuming the second immediately follows the first. Therefore, the state vector must change as a result of measurement, and collapse onto the eigensubspace associated with the eigenvalue measured. For a mixed state , after obtaining an eigenvalue in a discrete, nondegenerate spectrum of the corresponding observable , the updated state is given by . If the eigenvalue has degenerate, orthonormal eigenvectors , then the projection operator onto the eigensubspace is . Postulates II.c is sometimes called the "state update rule" or "collapse rule"; Together with the Born rule (Postulates II.a and II.b), they form a complete representation of measurements, and are sometimes collectively called the measurement postulate(s). Note that the projection-valued measures (PVM) described in the measurement postulate(s) can be generalized to positive operator-valued measures (POVM), which is the most general kind of measurement in quantum mechanics. A POVM can be understood as the effect on a component subsystem when a PVM is performed on a larger, composite system (see Naimark's dilation theorem). Time evolution of a system Though it is possible to derive the Schrödinger equation, which describes how a state vector evolves in time, most texts assert the equation as a postulate. Common derivations include using the de Broglie hypothesis or path integrals. Equivalently, the time evolution postulate can be stated as: For a closed system in a mixed state , the time evolution is . The evolution of an open quantum system can be described by quantum operations (in an operator sum formalism) and quantum instruments, and generally does not have to be unitary. Other implications of the postulates Physical symmetries act on the Hilbert space of quantum states unitarily or antiunitarily due to Wigner's theorem (supersymmetry is another matter entirely). 
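The Born rule and state-update rule stated above can be illustrated for a single qubit; in the sketch below the observable (Pauli-Z) and the state vector are arbitrary example choices.

```python
import numpy as np

# Observable: Pauli-Z, a Hermitian operator whose eigenvalues (+1, -1) are the
# possible measurement outcomes.
A = np.array([[1.0, 0.0],
              [0.0, -1.0]])

# Normalized state vector, chosen arbitrarily for illustration.
psi = np.array([np.sqrt(0.2), np.sqrt(0.8)])

evals, evecs = np.linalg.eigh(A)

# Born rule: the probability of outcome a_i is |<e_i|psi>|^2.
probs = np.abs(evecs.conj().T @ psi) ** 2
print(dict(zip(evals, probs)))          # approximately {-1.0: 0.8, 1.0: 0.2}
print("<A> =", psi.conj() @ A @ psi)    # -0.6, equal to sum(a_i * p_i)

# State-update ("collapse") rule: suppose the outcome +1 is obtained; project onto
# its eigenvector and renormalize.
P_plus = np.outer(evecs[:, 1], evecs[:, 1].conj())
psi_after = P_plus @ psi
psi_after /= np.linalg.norm(psi_after)
print(psi_after)                        # [1., 0.] up to a phase
```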
Density operators are those that are in the closure of the convex hull of the one-dimensional orthogonal projectors. Conversely, one-dimensional orthogonal projectors are extreme points of the set of density operators. Physicists also call one-dimensional orthogonal projectors pure states and other density operators mixed states. One can in this formalism state Heisenberg's uncertainty principle and prove it as a theorem, although the exact historical sequence of events, concerning who derived what and under which framework, is the subject of historical investigations outside the scope of this article. Recent research has shown that the composite system postulate (tensor product postulate) can be derived from the state postulate (Postulate I) and the measurement postulates (Postulates II); Moreover, it has also been shown that the measurement postulates (Postulates II) can be derived from "unitary quantum mechanics", which includes only the state postulate (Postulate I), the composite system postulate (tensor product postulate) and the unitary evolution postulate (Postulate III). Furthermore, to the postulates of quantum mechanics one should also add basic statements on the properties of spin and Pauli's exclusion principle, see below. Spin In addition to their other properties, all particles possess a quantity called spin, an intrinsic angular momentum. Despite the name, particles do not literally spin around an axis, and quantum mechanical spin has no correspondence in classical physics. In the position representation, a spinless wavefunction has position and time as continuous variables, . For spin wavefunctions the spin is an additional discrete variable: , where takes the values; That is, the state of a single particle with spin is represented by a -component spinor of complex-valued wave functions. Two classes of particles with very different behaviour are bosons which have integer spin, and fermions possessing half-integer spin. Symmetrization postulate In quantum mechanics, two particles can be distinguished from one another using two methods. By performing a measurement of intrinsic properties of each particle, particles of different types can be distinguished. Otherwise, if the particles are identical, their trajectories can be tracked which distinguishes the particles based on the locality of each particle. While the second method is permitted in classical mechanics, (i.e. all classical particles are treated with distinguishability), the same cannot be said for quantum mechanical particles since the process is infeasible due to the fundamental uncertainty principles that govern small scales. Hence the requirement of indistinguishability of quantum particles is presented by the symmetrization postulate. The postulate is applicable to a system of bosons or fermions, for example, in predicting the spectra of helium atom. The postulate, explained in the following sections, can be stated as follows: Exceptions can occur when the particles are constrained to two spatial dimensions where existence of particles known as anyons are possible which are said to have a continuum of statistical properties spanning the range between fermions and bosons. The connection between behaviour of identical particles and their spin is given by spin statistics theorem. It can be shown that two particles localized in different regions of space can still be represented using a symmetrized/antisymmetrized wavefunction and that independent treatment of these wavefunctions gives the same result. 
Hence the symmetrization postulate is applicable in the general case of a system of identical particles. Exchange degeneracy In a system of identical particles, let P be the exchange operator, which acts on the wavefunction as: If a physical system of identical particles is given, the wavefunction of all the particles can be known from observation, but it cannot be labelled particle by particle. Thus the exchanged wavefunction above represents the same physical state as the original state, which implies that the wavefunction is not unique. This is known as exchange degeneracy. More generally, consider a linear combination of such states, . For the best representation of the physical system, we expect this to be an eigenvector of P, since the exchange operator is not expected to give completely different vectors in projective Hilbert space. Since , the possible eigenvalues of P are +1 and −1. The states of an identical-particle system are represented as symmetric for the +1 eigenvalue or antisymmetric for the −1 eigenvalue as follows: The explicit symmetric/antisymmetric form of is constructed using a symmetrizer or antisymmetrizer operator. Particles that form symmetric states are called bosons and those that form antisymmetric states are called fermions. The relation of spin to this classification is given by the spin–statistics theorem, which shows that integer-spin particles are bosons and half-integer-spin particles are fermions. Pauli exclusion principle The property of spin relates to another basic property concerning systems of identical particles: the Pauli exclusion principle, which is a consequence of the following permutation behaviour of an -particle wave function; again in the position representation one must postulate that for the transposition of any two of the particles one always should have i.e., on transposition of the arguments of any two particles the wavefunction should reproduce itself, apart from a prefactor which is for bosons but for fermions. Electrons are fermions with ; quanta of light are bosons with . Due to the form of the anti-symmetrized wavefunction, if the wavefunction of each particle is completely determined by a set of quantum numbers, then two fermions cannot share the same set of quantum numbers, since the resulting function cannot be anti-symmetrized (i.e. the above formula gives zero). The same cannot be said of bosons, since their wavefunction is: where is the number of particles with the same wavefunction. Exceptions to the symmetrization postulate In nonrelativistic quantum mechanics all particles are either bosons or fermions; in relativistic quantum theories "supersymmetric" theories also exist, where a particle is a linear combination of a bosonic and a fermionic part. Only in dimension can one construct entities where is replaced by an arbitrary complex number with magnitude 1, called anyons. In relativistic quantum mechanics, the spin–statistics theorem proves, under a certain set of assumptions, that integer-spin particles are classified as bosons and half-integer-spin particles as fermions. Anyons, which form neither symmetric nor antisymmetric states, are said to have fractional spin. Although spin and the Pauli principle can only be derived from relativistic generalizations of quantum mechanics, the properties mentioned in the last two paragraphs belong to the basic postulates already in the non-relativistic limit. In particular, many important properties in natural science, e.g. the periodic system of chemistry, are consequences of these two properties. 
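The exchange-operator discussion above can be made concrete with a small linear-algebra sketch; the single-particle dimension and the random states are arbitrary choices for illustration. It builds the exchange operator P on a two-particle space, verifies that its eigenvalues are ±1, forms the symmetrized (bosonic) and antisymmetrized (fermionic) combinations, and shows that antisymmetrizing two copies of the same single-particle state gives zero, which is the content of the Pauli exclusion principle.

```python
import numpy as np

d = 3  # dimension of the single-particle Hilbert space (arbitrary)

# Exchange operator P on the two-particle space: P(|a> (x) |b>) = |b> (x) |a>.
P = np.zeros((d * d, d * d))
for i in range(d):
    for j in range(d):
        P[j * d + i, i * d + j] = 1.0

rng = np.random.default_rng(0)
a = rng.normal(size=d); a /= np.linalg.norm(a)
b = rng.normal(size=d); b /= np.linalg.norm(b)

ab = np.kron(a, b)          # |a> (x) |b>
ba = np.kron(b, a)          # |b> (x) |a>
assert np.allclose(P @ ab, ba)

# P^2 = I, so its eigenvalues are +1 (symmetric) and -1 (antisymmetric).
assert np.allclose(P @ P, np.eye(d * d))

sym = (ab + ba) / np.linalg.norm(ab + ba)    # bosonic combination
anti = (ab - ba) / np.linalg.norm(ab - ba)   # fermionic combination
print("P sym  = +1 * sym :", np.allclose(P @ sym, sym))
print("P anti = -1 * anti:", np.allclose(P @ anti, -anti))

# Pauli exclusion: antisymmetrizing two copies of the same state vanishes.
aa = np.kron(a, a)
print("norm of antisymmetrized |a,a> =", np.linalg.norm(aa - P @ aa))
```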
Mathematical structure of quantum mechanics Pictures of dynamics Summary: Representations The original form of the Schrödinger equation depends on choosing a particular representation of Heisenberg's canonical commutation relations. The Stone–von Neumann theorem dictates that all irreducible representations of the finite-dimensional Heisenberg commutation relations are unitarily equivalent. A systematic understanding of its consequences has led to the phase space formulation of quantum mechanics, which works in full phase space instead of Hilbert space, so then with a more intuitive link to the classical limit thereof. This picture also simplifies considerations of quantization, the deformation extension from classical to quantum mechanics. The quantum harmonic oscillator is an exactly solvable system where the different representations are easily compared. There, apart from the Heisenberg, or Schrödinger (position or momentum), or phase-space representations, one also encounters the Fock (number) representation and the Segal–Bargmann (Fock-space or coherent state) representation (named after Irving Segal and Valentine Bargmann). All four are unitarily equivalent. Time as an operator The framework presented so far singles out time as the parameter that everything depends on. It is possible to formulate mechanics in such a way that time becomes itself an observable associated with a self-adjoint operator. At the classical level, it is possible to arbitrarily parameterize the trajectories of particles in terms of an unphysical parameter , and in that case the time t becomes an additional generalized coordinate of the physical system. At the quantum level, translations in would be generated by a "Hamiltonian" , where E is the energy operator and is the "ordinary" Hamiltonian. However, since s is an unphysical parameter, physical states must be left invariant by "s-evolution", and so the physical state space is the kernel of (this requires the use of a rigged Hilbert space and a renormalization of the norm). This is related to the quantization of constrained systems and quantization of gauge theories. It is also possible to formulate a quantum theory of "events" where time becomes an observable. Problem of measurement The picture given in the preceding paragraphs is sufficient for description of a completely isolated system. However, it fails to account for one of the main differences between quantum mechanics and classical mechanics, that is, the effects of measurement. The von Neumann description of quantum measurement of an observable , when the system is prepared in a pure state is the following (note, however, that von Neumann's description dates back to the 1930s and is based on experiments as performed during that time – more specifically the Compton–Simon experiment; it is not applicable to most present-day measurements within the quantum domain): Let have spectral resolution where is the resolution of the identity (also called projection-valued measure) associated with . Then the probability of the measurement outcome lying in an interval of is . In other words, the probability is obtained by integrating the characteristic function of against the countably additive measure If the measured value is contained in , then immediately after the measurement, the system will be in the (generally non-normalized) state . If the measured value does not lie in , replace by its complement for the above state. 
For example, suppose the state space is the -dimensional complex Hilbert space and is a Hermitian matrix with eigenvalues , with corresponding eigenvectors . The projection-valued measure associated with , , is then where is a Borel set containing only the single eigenvalue . If the system is prepared in state Then the probability of a measurement returning the value can be calculated by integrating the spectral measure over . This gives trivially The characteristic property of the von Neumann measurement scheme is that repeating the same measurement will give the same results. This is also called the projection postulate. A more general formulation replaces the projection-valued measure with a positive-operator valued measure (POVM). To illustrate, take again the finite-dimensional case. Here we would replace the rank-1 projections by a finite set of positive operators whose sum is still the identity operator as before (the resolution of identity). Just as a set of possible outcomes is associated to a projection-valued measure, the same can be said for a POVM. Suppose the measurement outcome is . Instead of collapsing to the (unnormalized) state after the measurement, the system now will be in the state Since the operators need not be mutually orthogonal projections, the projection postulate of von Neumann no longer holds. The same formulation applies to general mixed states. In von Neumann's approach, the state transformation due to measurement is distinct from that due to time evolution in several ways. For example, time evolution is deterministic and unitary whereas measurement is non-deterministic and non-unitary. However, since both types of state transformation take one quantum state to another, this difference was viewed by many as unsatisfactory. The POVM formalism views measurement as one among many other quantum operations, which are described by completely positive maps which do not increase the trace. In any case it seems that the above-mentioned problems can only be resolved if the time evolution included not only the quantum system, but also, and essentially, the classical measurement apparatus (see above). List of mathematical tools Part of the folklore of the subject concerns the mathematical physics textbook Methods of Mathematical Physics put together by Richard Courant from David Hilbert's Göttingen University courses. The story is told (by mathematicians) that physicists had dismissed the material as not interesting in the current research areas, until the advent of Schrödinger's equation. At that point it was realised that the mathematics of the new quantum mechanics was already laid out in it. It is also said that Heisenberg had consulted Hilbert about his matrix mechanics, and Hilbert observed that his own experience with infinite-dimensional matrices had derived from differential equations, advice which Heisenberg ignored, missing the opportunity to unify the theory as Weyl and Dirac did a few years later. Whatever the basis of the anecdotes, the mathematics of the theory was conventional at the time, whereas the physics was radically new. 
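To complement the projective (von Neumann) scheme and the POVM generalization described above, here is a small sketch of a standard three-outcome qubit POVM built from "trine" states; the specific states and the probe state are arbitrary illustrative choices. The elements are positive and sum to the identity, but they are not orthogonal projectors, so the projection postulate does not apply to them.

```python
import numpy as np

# Three "trine" qubit states, 120 degrees apart (a standard textbook construction).
angles = [0.0, 2 * np.pi / 3, 4 * np.pi / 3]
trine = [np.array([np.cos(t / 2), np.sin(t / 2)]) for t in angles]

# POVM elements E_k = (2/3)|phi_k><phi_k|: positive, sum to the identity,
# but not mutually orthogonal projectors.
E = [(2.0 / 3.0) * np.outer(v, v.conj()) for v in trine]
print("elements sum to identity:", np.allclose(sum(E), np.eye(2)))
print("E_0 is NOT a projector  :", not np.allclose(E[0] @ E[0], E[0]))

# Outcome probabilities for a pure state psi: p_k = <psi|E_k|psi>.
psi = np.array([np.cos(0.3), np.sin(0.3)])
probs = [np.vdot(psi, Ek @ psi).real for Ek in E]
print("probabilities:", np.round(probs, 4), "sum =", round(sum(probs), 10))

# One possible state update uses operators M_k with M_k^dagger M_k = E_k.
# Here M_k = sqrt(2/3)|phi_k><phi_k| works because |phi_k><phi_k| is a projector.
k = int(np.argmax(probs))
M = np.sqrt(2.0 / 3.0) * np.outer(trine[k], trine[k].conj())
post = M @ psi
post /= np.linalg.norm(post)
print("post-measurement state for outcome", k, ":", np.round(post, 4))
```

The state update shown is only one possible choice: a POVM by itself fixes the outcome probabilities but not the post-measurement states.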
The main tools include:
linear algebra: complex numbers, eigenvectors, eigenvalues
functional analysis: Hilbert spaces, linear operators, spectral theory
differential equations: partial differential equations, separation of variables, ordinary differential equations, Sturm–Liouville theory, eigenfunctions
harmonic analysis: Fourier transforms
See also List of mathematical topics in quantum theory Symmetry in quantum mechanics
Redshift
In physics, a redshift is an increase in the wavelength, and corresponding decrease in the frequency and photon energy, of electromagnetic radiation (such as light). The opposite change, a decrease in wavelength and increase in frequency and energy, is known as a blueshift, or negative redshift. The terms derive from the colours red and blue which form the extremes of the visible light spectrum. The main causes of electromagnetic redshift in astronomy and cosmology are the relative motions of radiation sources, which give rise to the relativistic Doppler effect, and gravitational potentials, which gravitationally redshift escaping radiation. All sufficiently distant light sources show cosmological redshift corresponding to recession speeds proportional to their distances from Earth, a fact known as Hubble's law that implies the universe is expanding. All redshifts can be understood under the umbrella of frame transformation laws. Gravitational waves, which also travel at the speed of light, are subject to the same redshift phenomena. The value of a redshift is often denoted by the letter , corresponding to the fractional change in wavelength (positive for redshifts, negative for blueshifts), and by the wavelength ratio (which is greater than 1 for redshifts and less than 1 for blueshifts). Examples of strong redshifting are a gamma ray perceived as an X-ray, or initially visible light perceived as radio waves. Subtler redshifts are seen in the spectroscopic observations of astronomical objects, and are used in terrestrial technologies such as Doppler radar and radar guns. Other physical processes exist that can lead to a shift in the frequency of electromagnetic radiation, including scattering and optical effects; however, the resulting changes are distinguishable from (astronomical) redshift and are not generally referred to as such (see section on physical optics and radiative transfer). History The history of the subject began in the 19th century, with the development of classical wave mechanics and the exploration of phenomena which are associated with the Doppler effect. The effect is named after the Austrian mathematician, Christian Doppler, who offered the first known physical explanation for the phenomenon in 1842. In 1845, the hypothesis was tested and confirmed for sound waves by the Dutch scientist Christophorus Buys Ballot. Doppler correctly predicted that the phenomenon would apply to all waves and, in particular, suggested that the varying colors of stars could be attributed to their motion with respect to the Earth. Before this was verified, it was found that stellar colors were primarily due to a star's temperature, not motion. Only later was Doppler vindicated by verified redshift observations. The Doppler redshift was first described by French physicist Hippolyte Fizeau in 1848, who noted the shift in spectral lines seen in stars as being due to the Doppler effect. The effect is sometimes called the "Doppler–Fizeau effect". In 1868, British astronomer William Huggins was the first to determine the velocity of a star moving away from the Earth by the method. In 1871, optical redshift was confirmed when the phenomenon was observed in Fraunhofer lines, using solar rotation, about 0.1 Å in the red. In 1887, Vogel and Scheiner discovered the "annual Doppler effect", the yearly change in the Doppler shift of stars located near the ecliptic, due to the orbital velocity of the Earth. 
In 1901, Aristarkh Belopolsky verified optical redshift in the laboratory using a system of rotating mirrors. Arthur Eddington used the term "red-shift" as early as 1923, although the word does not appear unhyphenated until about 1934, when Willem de Sitter used it. Beginning with observations in 1912, Vesto Slipher discovered that most spiral galaxies, then mostly thought to be spiral nebulae, had considerable redshifts. Slipher first reported on his measurement in the inaugural volume of the Lowell Observatory Bulletin. Three years later, he wrote a review in the journal Popular Astronomy. In it he stated that "the early discovery that the great Andromeda spiral had the quite exceptional velocity of –300 km(/s) showed the means then available, capable of investigating not only the spectra of the spirals but their velocities as well." Slipher reported the velocities for 15 spiral nebulae spread across the entire celestial sphere, all but three having observable "positive" (that is recessional) velocities. Subsequently, Edwin Hubble discovered an approximate relationship between the redshifts of such "nebulae", and the distances to them, with the formulation of his eponymous Hubble's law. Milton Humason worked on those observations with Hubble. These observations corroborated Alexander Friedmann's 1922 work, in which he derived the Friedmann–Lemaître equations. They are now considered to be strong evidence for an expanding universe and the Big Bang theory. Measurement, characterization, and interpretation The spectrum of light that comes from a source (see idealized spectrum illustration top-right) can be measured. To determine the redshift, one searches for features in the spectrum such as absorption lines, emission lines, or other variations in light intensity. If found, these features can be compared with known features in the spectrum of various chemical compounds found in experiments where that compound is located on Earth. A very common atomic element in space is hydrogen. The spectrum of originally featureless light shone through hydrogen will show a signature spectrum specific to hydrogen that has features at regular intervals. If restricted to absorption lines it would look similar to the illustration (top right). If the same pattern of intervals is seen in an observed spectrum from a distant source but occurring at shifted wavelengths, it can be identified as hydrogen too. If the same spectral line is identified in both spectra—but at different wavelengths—then the redshift can be calculated using the table below. Determining the redshift of an object in this way requires a frequency or wavelength range. In order to calculate the redshift, one has to know the wavelength of the emitted light in the rest frame of the source: in other words, the wavelength that would be measured by an observer located adjacent to and comoving with the source. Since in astronomical applications this measurement cannot be done directly, because that would require traveling to the distant star of interest, the method using spectral lines described here is used instead. Redshifts cannot be calculated by looking at unidentified features whose rest-frame frequency is unknown, or with a spectrum that is featureless or white noise (random fluctuations in a spectrum). Redshift (and blueshift) may be characterized by the relative difference between the observed and emitted wavelengths (or frequency) of an object. In astronomy, it is customary to refer to this change using a dimensionless quantity called . 
If represents wavelength and represents frequency (note, where is the speed of light), then is defined by the equations: After is measured, the distinction between redshift and blueshift is simply a matter of whether is positive or negative. For example, Doppler effect blueshifts are associated with objects approaching (moving closer to) the observer with the light shifting to greater energies. Conversely, Doppler effect redshifts are associated with objects receding (moving away) from the observer with the light shifting to lower energies. Likewise, gravitational blueshifts are associated with light emitted from a source residing within a weaker gravitational field as observed from within a stronger gravitational field, while gravitational redshifting implies the opposite conditions. Redshift formulae In general relativity one can derive several important special-case formulae for redshift in certain special spacetime geometries, as summarized in the following table. In all cases the magnitude of the shift (the value of ) is independent of the wavelength. Doppler effect If a source of the light is moving away from an observer, then redshift occurs; if the source moves towards the observer, then blueshift occurs. This is true for all electromagnetic waves and is explained by the Doppler effect. Consequently, this type of redshift is called the Doppler redshift. If the source moves away from the observer with velocity , which is much less than the speed of light, the redshift is given by     (since ) where is the speed of light. In the classical Doppler effect, the frequency of the source is not modified, but the recessional motion causes the illusion of a lower frequency. A more complete treatment of the Doppler redshift requires considering relativistic effects associated with motion of sources close to the speed of light. A complete derivation of the effect can be found in the article on the relativistic Doppler effect. In brief, objects moving close to the speed of light will experience deviations from the above formula due to the time dilation of special relativity which can be corrected for by introducing the Lorentz factor into the classical Doppler formula as follows (for motion solely in the line of sight): This phenomenon was first observed in a 1938 experiment performed by Herbert E. Ives and G.R. Stilwell, called the Ives–Stilwell experiment. Since the Lorentz factor is dependent only on the magnitude of the velocity, this causes the redshift associated with the relativistic correction to be independent of the orientation of the source movement. In contrast, the classical part of the formula is dependent on the projection of the movement of the source into the line-of-sight which yields different results for different orientations. If is the angle between the direction of relative motion and the direction of emission in the observer's frame (zero angle is directly away from the observer), the full form for the relativistic Doppler effect becomes: and for motion solely in the line of sight, this equation reduces to: For the special case that the light is moving at right angle to the direction of relative motion in the observer's frame, the relativistic redshift is known as the transverse redshift, and a redshift: is measured, even though the object is not moving away from the observer. 
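The definitions and Doppler formulas above lend themselves to a short numerical sketch; the observed wavelength and the velocities used below are arbitrary illustrative values (the rest wavelength is the standard H-alpha value). It computes z from a shifted spectral line, then compares the low-velocity approximation z ≈ v/c with the full relativistic line-of-sight formula and the purely transverse redshift 1 + z = γ.

```python
import math

C = 299_792.458  # speed of light in km/s

# Redshift from a shifted line: z = (lambda_observed - lambda_rest) / lambda_rest.
lambda_rest, lambda_obs = 656.281, 721.9   # H-alpha rest value; made-up observation (nm)
z_line = (lambda_obs - lambda_rest) / lambda_rest
print(f"z from the shifted line: {z_line:.4f}  (wavelength ratio {1 + z_line:.4f})")

def z_classical(v):
    """Low-velocity approximation for recession at speed v (km/s)."""
    return v / C

def z_relativistic_los(v):
    """Relativistic Doppler redshift for motion directly away from the observer."""
    beta = v / C
    return math.sqrt((1 + beta) / (1 - beta)) - 1

def z_transverse(v):
    """Transverse redshift: 1 + z equals the Lorentz factor gamma."""
    beta = v / C
    return 1 / math.sqrt(1 - beta**2) - 1

for v in (300.0, 30_000.0, 150_000.0):   # km/s
    print(f"v = {v:>9.0f} km/s  classical z = {z_classical(v):.5f}  "
          f"relativistic z = {z_relativistic_los(v):.5f}  "
          f"transverse z = {z_transverse(v):.5f}")
```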
Even when the source is moving towards the observer, if there is a transverse component to the motion then there is some speed at which the dilation just cancels the expected blueshift and at higher speed the approaching source will be redshifted. Expansion of space In the earlier part of the twentieth century, Slipher, Wirtz and others made the first measurements of the redshifts and blueshifts of galaxies beyond the Milky Way. They initially interpreted these redshifts and blueshifts as being due to random motions, but later Lemaître (1927) and Hubble (1929), using previous data, discovered a roughly linear correlation between the increasing redshifts of, and distances to, galaxies. Lemaître realized that these observations could be explained by a mechanism of producing redshifts seen in Friedmann's solutions to Einstein's equations of general relativity. The correlation between redshifts and distances arises in all expanding models. This cosmological redshift is commonly attributed to stretching of the wavelengths of photons propagating through the expanding space. This interpretation can be misleading, however; expanding space is only a choice of coordinates and thus cannot have physical consequences. The cosmological redshift is more naturally interpreted as a Doppler shift arising due to the recession of distant objects. The observational consequences of this effect can be derived using the equations from general relativity that describe a homogeneous and isotropic universe. The cosmological redshift can thus be written as a function of , the time-dependent cosmic scale factor: In an expanding universe such as the one we inhabit, the scale factor is monotonically increasing as time passes, thus, is positive and distant galaxies appear redshifted. Using a model of the expansion of the universe, redshift can be related to the age of an observed object, the so-called cosmic time–redshift relation. Denote a density ratio as : with the critical density demarcating a universe that eventually crunches from one that simply expands. This density is about three hydrogen atoms per cubic meter of space. At large redshifts, , one finds: where is the present-day Hubble constant, and is the redshift. There are several websites for calculating various times and distances from redshift, as the precise calculations require numerical integrals for most values of the parameters. Distinguishing between cosmological and local effects For cosmological redshifts of additional Doppler redshifts and blueshifts due to the peculiar motions of the galaxies relative to one another cause a wide scatter from the standard Hubble Law. The resulting situation can be illustrated by the Expanding Rubber Sheet Universe, a common cosmological analogy used to describe the expansion of space. If two objects are represented by ball bearings and spacetime by a stretching rubber sheet, the Doppler effect is caused by rolling the balls across the sheet to create peculiar motion. The cosmological redshift occurs when the ball bearings are stuck to the sheet and the sheet is stretched. The redshifts of galaxies include both a component related to recessional velocity from expansion of the universe, and a component related to peculiar motion (Doppler shift). The redshift due to expansion of the universe depends upon the recessional velocity in a fashion determined by the cosmological model chosen to describe the expansion of the universe, which is very different from how Doppler redshift depends upon local velocity. 
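The relation between cosmological redshift and the scale factor, and the cosmic time–redshift relation mentioned above, can be illustrated with a short numerical sketch. The cosmological parameters below are round, commonly quoted values chosen only for illustration, and radiation is neglected, so the value at very high redshift is only approximate.

```python
import math

H0 = 70.0                       # Hubble constant, km/s/Mpc (illustrative value)
OMEGA_M, OMEGA_L = 0.3, 0.7     # flat universe: matter plus cosmological constant
KM_PER_MPC = 3.0857e19
H0_PER_GYR = H0 / KM_PER_MPC * 3.156e16  # H0 converted to 1/Gyr

def redshift_from_scale_factor(a_emit, a_obs=1.0):
    """1 + z is the ratio of the scale factor at observation to that at emission."""
    return a_obs / a_emit - 1.0

def age_at_redshift(z, steps=100_000):
    """Age of the universe (Gyr) at redshift z, from integrating da / (a H(a)).
    Radiation is neglected, so very high z is only a rough estimate."""
    a_max = 1.0 / (1.0 + z)
    total, da = 0.0, a_max / steps
    for i in range(steps):
        a = (i + 0.5) * da
        H = H0_PER_GYR * math.sqrt(OMEGA_M / a**3 + OMEGA_L)
        total += da / (a * H)
    return total

print("z for light emitted when a = 0.5:", redshift_from_scale_factor(0.5))
for z in (0.0, 1.0, 6.0, 1100.0):
    print(f"age at z = {z:>6}: {age_at_redshift(z):8.4f} Gyr")
```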
Describing the cosmological expansion origin of redshift, cosmologist Edward Robert Harrison said, "Light leaves a galaxy, which is stationary in its local region of space, and is eventually received by observers who are stationary in their own local region of space. Between the galaxy and the observer, light travels through vast regions of expanding space. As a result, all wavelengths of the light are stretched by the expansion of space. It is as simple as that..." Steven Weinberg clarified, "The increase of wavelength from emission to absorption of light does not depend on the rate of change of [here is the Robertson–Walker scale factor] at the times of emission or absorption, but on the increase of in the whole period from emission to absorption." If the universe were contracting instead of expanding, we would see distant galaxies blueshifted by an amount proportional to their distance instead of redshifted. Gravitational redshift In the theory of general relativity, there is time dilation within a gravitational well. This is known as the gravitational redshift or Einstein Shift. The theoretical derivation of this effect follows from the Schwarzschild solution of the Einstein equations which yields the following formula for redshift associated with a photon traveling in the gravitational field of an uncharged, nonrotating, spherically symmetric mass: where is the gravitational constant, is the mass of the object creating the gravitational field, is the radial coordinate of the source (which is analogous to the classical distance from the center of the object, but is actually a Schwarzschild coordinate), and is the speed of light. This gravitational redshift result can be derived from the assumptions of special relativity and the equivalence principle; the full theory of general relativity is not required. The effect is very small but measurable on Earth using the Mössbauer effect and was first observed in the Pound–Rebka experiment. However, it is significant near a black hole, and as an object approaches the event horizon the red shift becomes infinite. It is also the dominant cause of large angular-scale temperature fluctuations in the cosmic microwave background radiation (see Sachs–Wolfe effect). Observations in astronomy The redshift observed in astronomy can be measured because the emission and absorption spectra for atoms are distinctive and well known, calibrated from spectroscopic experiments in laboratories on Earth. When the redshift of various absorption and emission lines from a single astronomical object is measured, is found to be remarkably constant. Although distant objects may be slightly blurred and lines broadened, it is by no more than can be explained by thermal or mechanical motion of the source. For these reasons and others, the consensus among astronomers is that the redshifts they observe are due to some combination of the three established forms of Doppler-like redshifts. Alternative hypotheses and explanations for redshift such as tired light are not generally considered plausible. Spectroscopy, as a measurement, is considerably more difficult than simple photometry, which measures the brightness of astronomical objects through certain filters. When photometric data is all that is available (for example, the Hubble Deep Field and the Hubble Ultra Deep Field), astronomers rely on a technique for measuring photometric redshifts. 
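The Schwarzschild gravitational redshift described above, 1 + z = 1/sqrt(1 − 2GM/(rc²)) for a photon received far from the mass, can be evaluated directly; the masses and radii below are standard round values used only for illustration.

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # solar mass, kg

def gravitational_redshift(mass_kg, r_m):
    """z for a photon emitted at Schwarzschild radial coordinate r and received
    far away, for a non-rotating, uncharged, spherically symmetric mass."""
    rs = 2 * G * mass_kg / C**2          # Schwarzschild radius
    if r_m <= rs:
        raise ValueError("emission point is at or inside the event horizon")
    return 1.0 / math.sqrt(1.0 - rs / r_m) - 1.0

# Sun's surface: a tiny shift of roughly two parts per million.
print("Sun          :", gravitational_redshift(M_SUN, 6.957e8))

# Neutron star (1.4 solar masses, 12 km radius): a large shift.
print("neutron star :", gravitational_redshift(1.4 * M_SUN, 1.2e4))

# Approaching the horizon of a 10-solar-mass black hole: z grows without bound.
m_bh = 10 * M_SUN
rs = 2 * G * m_bh / C**2
for factor in (2.0, 1.1, 1.01, 1.001):
    print(f"r = {factor:.3f} rs -> z = {gravitational_redshift(m_bh, factor * rs):.2f}")
```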
Due to the broad wavelength ranges in photometric filters and the necessary assumptions about the nature of the spectrum at the light-source, errors for these sorts of measurements can range up to , and are much less reliable than spectroscopic determinations. However, photometry does at least allow a qualitative characterization of a redshift. For example, if a Sun-like spectrum had a redshift of , it would be brightest in the infrared(1000nm) rather than at the blue-green(500nm) color associated with the peak of its blackbody spectrum, and the light intensity will be reduced in the filter by a factor of four, . Both the photon count rate and the photon energy are redshifted. (See K correction for more details on the photometric consequences of redshift.) Local observations In nearby objects (within our Milky Way galaxy) observed redshifts are almost always related to the line-of-sight velocities associated with the objects being observed. Observations of such redshifts and blueshifts have enabled astronomers to measure velocities and parametrize the masses of the orbiting stars in spectroscopic binaries, a method first employed in 1868 by British astronomer William Huggins. Similarly, small redshifts and blueshifts detected in the spectroscopic measurements of individual stars are one way astronomers have been able to diagnose and measure the presence and characteristics of planetary systems around other stars and have even made very detailed differential measurements of redshifts during planetary transits to determine precise orbital parameters. Finely detailed measurements of redshifts are used in helioseismology to determine the precise movements of the photosphere of the Sun. Redshifts have also been used to make the first measurements of the rotation rates of planets, velocities of interstellar clouds, the rotation of galaxies, and the dynamics of accretion onto neutron stars and black holes which exhibit both Doppler and gravitational redshifts. The temperatures of various emitting and absorbing objects can be obtained by measuring Doppler broadening—effectively redshifts and blueshifts over a single emission or absorption line. By measuring the broadening and shifts of the 21-centimeter hydrogen line in different directions, astronomers have been able to measure the recessional velocities of interstellar gas, which in turn reveals the rotation curve of our Milky Way. Similar measurements have been performed on other galaxies, such as Andromeda. As a diagnostic tool, redshift measurements are one of the most important spectroscopic measurements made in astronomy. Extragalactic observations The most distant objects exhibit larger redshifts corresponding to the Hubble flow of the universe. The largest-observed redshift, corresponding to the greatest distance and furthest back in time, is that of the cosmic microwave background radiation; the numerical value of its redshift is about ( corresponds to present time), and it shows the state of the universe about 13.8 billion years ago, and 379,000 years after the initial moments of the Big Bang. The luminous point-like cores of quasars were the first "high-redshift" objects discovered before the improvement of telescopes allowed for the discovery of other high-redshift galaxies. For galaxies more distant than the Local Group and the nearby Virgo Cluster, but within a thousand megaparsecs or so, the redshift is approximately proportional to the galaxy's distance. 
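The approximate proportionality just described is often used in reverse: for small z, the recession velocity is v ≈ cz and the distance follows from Hubble's law. The sketch below uses an illustrative value of the Hubble constant and is valid only for z much less than 1.

```python
C = 299_792.458   # km/s
H0 = 70.0         # km/s per Mpc (illustrative value of the Hubble constant)

def crude_distance_mpc(z):
    """Hubble-law distance estimate d = cz / H0, valid only for z << 1."""
    return C * z / H0

for z in (0.003, 0.01, 0.05):
    d = crude_distance_mpc(z)
    print(f"z = {z:<5}  recession ~ {C * z:8.0f} km/s  distance ~ {d:7.1f} Mpc "
          f"(~{d * 3.262:7.1f} million light-years)")
```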
This correlation was first observed by Edwin Hubble and has come to be known as Hubble's law. Vesto Slipher was the first to discover galactic redshifts, in about 1912, while Hubble correlated Slipher's measurements with distances he measured by other means to formulate his Law. In the widely accepted cosmological model based on general relativity, redshift is mainly a result of the expansion of space: this means that the farther away a galaxy is from us, the more the space has expanded in the time since the light left that galaxy, so the more the light has been stretched, the more redshifted the light is, and so the faster it appears to be moving away from us. Hubble's law follows in part from the Copernican principle. Because it is usually not known how luminous objects are, measuring the redshift is easier than more direct distance measurements, so redshift is sometimes in practice converted to a crude distance measurement using Hubble's law. Gravitational interactions of galaxies with each other and clusters cause a significant scatter in the normal plot of the Hubble diagram. The peculiar velocities associated with galaxies superimpose a rough trace of the mass of virialized objects in the universe. This effect leads to such phenomena as nearby galaxies (such as the Andromeda Galaxy) exhibiting blueshifts as we fall towards a common barycenter, and redshift maps of clusters showing a fingers of god effect due to the scatter of peculiar velocities in a roughly spherical distribution. This added component gives cosmologists a chance to measure the masses of objects independent of the mass-to-light ratio (the ratio of a galaxy's mass in solar masses to its brightness in solar luminosities), an important tool for measuring dark matter. The Hubble law's linear relationship between distance and redshift assumes that the rate of expansion of the universe is constant. However, when the universe was much younger, the expansion rate, and thus the Hubble "constant", was larger than it is today. For more distant galaxies, then, whose light has been travelling to us for much longer times, the approximation of constant expansion rate fails, and the Hubble law becomes a non-linear integral relationship and dependent on the history of the expansion rate since the emission of the light from the galaxy in question. Observations of the redshift-distance relationship can be used, then, to determine the expansion history of the universe and thus the matter and energy content. While it was long believed that the expansion rate has been continuously decreasing since the Big Bang, observations beginning in 1988 of the redshift-distance relationship using Type Ia supernovae have suggested that in comparatively recent times the expansion rate of the universe has begun to accelerate. Highest redshifts Currently, the objects with the highest known redshifts are galaxies and the objects producing gamma ray bursts. The most reliable redshifts are from spectroscopic data, and the highest-confirmed spectroscopic redshift of a galaxy is that of JADES-GS-z14-0 with a redshift of , corresponding to 290 million years after the Big Bang. The previous record was held by GN-z11, with a redshift of , corresponding to 400 million years after the Big Bang, and by UDFy-38135539 at a redshift of , corresponding to 600 million years after the Big Bang. Slightly less reliable are Lyman-break redshifts, the highest of which is the lensed galaxy A1689-zD1 at a redshift and the next highest being . 
The most distant-observed gamma-ray burst with a spectroscopic redshift measurement was GRB 090423, which had a redshift of . The most distant-known quasar, ULAS J1342+0928, is at . The highest-known redshift radio galaxy (TGSS1530) is at a redshift and the highest-known redshift molecular material is the detection of emission from the CO molecule from the quasar SDSS J1148+5251 at . Extremely red objects (EROs) are astronomical sources of radiation that radiate energy in the red and near infrared part of the electromagnetic spectrum. These may be starburst galaxies that have a high redshift accompanied by reddening from intervening dust, or they could be highly redshifted elliptical galaxies with an older (and therefore redder) stellar population. Objects that are even redder than EROs are termed hyper extremely red objects (HEROs). The cosmic microwave background has a redshift of , corresponding to an age of approximately 379,000 years after the Big Bang and a proper distance of more than 46 billion light-years. The yet-to-be-observed first light from the oldest Population III stars, not long after atoms first formed and the CMB ceased to be absorbed almost completely, may have redshifts in the range of . Other high-redshift events predicted by physics but not presently observable are the cosmic neutrino background from about two seconds after the Big Bang (and a redshift in excess of ) and the cosmic gravitational wave background emitted directly from inflation at a redshift in excess of . In June 2015, astronomers reported evidence for Population III stars in the Cosmos Redshift 7 galaxy at . Such stars are likely to have existed in the very early universe (i.e., at high redshift), and may have started the production of chemical elements heavier than hydrogen that are needed for the later formation of planets and life as we know it. Redshift surveys With advent of automated telescopes and improvements in spectroscopes, a number of collaborations have been made to map the universe in redshift space. By combining redshift with angular position data, a redshift survey maps the 3D distribution of matter within a field of the sky. These observations are used to measure properties of the large-scale structure of the universe. The Great Wall, a vast supercluster of galaxies over 500 million light-years wide, provides a dramatic example of a large-scale structure that redshift surveys can detect. The first redshift survey was the CfA Redshift Survey, started in 1977 with the initial data collection completed in 1982. More recently, the 2dF Galaxy Redshift Survey determined the large-scale structure of one section of the universe, measuring redshifts for over 220,000 galaxies; data collection was completed in 2002, and the final data set was released 30 June 2003. The Sloan Digital Sky Survey (SDSS), is ongoing as of 2013 and aims to measure the redshifts of around 3 million objects. SDSS has recorded redshifts for galaxies as high as 0.8, and has been involved in the detection of quasars beyond . The DEEP2 Redshift Survey uses the Keck telescopes with the new "DEIMOS" spectrograph; a follow-up to the pilot program DEEP1, DEEP2 is designed to measure faint galaxies with redshifts 0.7 and above, and it is therefore planned to provide a high-redshift complement to SDSS and 2dF. 
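A small numerical aside on the record redshifts discussed above (the redshift values are approximate published figures, and the rest wavelength and CMB temperature are standard values): both the observed wavelength and the CMB temperature scale with 1 + z, which is why such galaxies are observed in the infrared and why the CMB traces the recombination era.

```python
LYMAN_ALPHA_NM = 121.567   # rest-frame Lyman-alpha wavelength, nm
T_CMB_NOW = 2.725          # present-day CMB temperature, kelvin

# Observed wavelength scales as (1 + z): record galaxies land in the infrared.
for name, z in [("GN-z11", 10.6), ("JADES-GS-z14-0", 14.3)]:
    print(f"{name:15s} z = {z:5.2f}  Lyman-alpha observed near "
          f"{LYMAN_ALPHA_NM * (1 + z) / 1000:.2f} micrometres")

# The CMB temperature also scales as (1 + z); at z ~ 1089 the gas was hot
# enough to be ionized, which is why the CMB originates from that epoch.
z_cmb = 1089
print(f"CMB temperature at z = {z_cmb}: {T_CMB_NOW * (1 + z_cmb):.0f} K")
```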
Effects from physical optics or radiative transfer The interactions and phenomena summarized in the subjects of radiative transfer and physical optics can result in shifts in the wavelength and frequency of electromagnetic radiation. In such cases, the shifts correspond to a physical energy transfer to matter or other photons rather than being by a transformation between reference frames. Such shifts can be from such physical phenomena as coherence effects or the scattering of electromagnetic radiation whether from charged elementary particles, from particulates, or from fluctuations of the index of refraction in a dielectric medium as occurs in the radio phenomenon of radio whistlers. While such phenomena are sometimes referred to as "redshifts" and "blueshifts", in astrophysics light-matter interactions that result in energy shifts in the radiation field are generally referred to as "reddening" rather than "redshifting" which, as a term, is normally reserved for the effects discussed above. In many circumstances scattering causes radiation to redden because entropy results in the predominance of many low-energy photons over few high-energy ones (while conserving total energy). Except possibly under carefully controlled conditions, scattering does not produce the same relative change in wavelength across the whole spectrum; that is, any calculated is generally a function of wavelength. Furthermore, scattering from random media generally occurs at many angles, and is a function of the scattering angle. If multiple scattering occurs, or the scattering particles have relative motion, then there is generally distortion of spectral lines as well. In interstellar astronomy, visible spectra can appear redder due to scattering processes in a phenomenon referred to as interstellar reddening—similarly Rayleigh scattering causes the atmospheric reddening of the Sun seen in the sunrise or sunset and causes the rest of the sky to have a blue color. This phenomenon is distinct from redshifting because the spectroscopic lines are not shifted to other wavelengths in reddened objects and there is an additional dimming and distortion associated with the phenomenon due to photons being scattered in and out of the line of sight. Blueshift The opposite of a redshift is a blueshift. A blueshift is any decrease in wavelength (increase in energy), with a corresponding increase in frequency, of an electromagnetic wave. In visible light, this shifts a color towards the blue end of the spectrum. Doppler blueshift Doppler blueshift is caused by movement of a source towards the observer. The term applies to any decrease in wavelength and increase in frequency caused by relative motion, even outside the visible spectrum. Only objects moving at near-relativistic speeds toward the observer are noticeably bluer to the naked eye, but the wavelength of any reflected or emitted photon or other particle is shortened in the direction of travel. Doppler blueshift is used in astronomy to determine relative motion: The Andromeda Galaxy is moving toward our own Milky Way galaxy within the Local Group; thus, when observed from Earth, its light is undergoing a blueshift. Components of a binary star system will be blueshifted when moving towards Earth When observing spiral galaxies, the side spinning toward us will have a slight blueshift relative to the side spinning away from us (see Tully–Fisher relation). 
Blazars are known to propel relativistic jets toward us, emitting synchrotron radiation and bremsstrahlung that appears blueshifted. Nearby stars such as Barnard's Star are moving toward us, resulting in a very small blueshift. Doppler blueshift of distant objects with a high z can be subtracted from the much larger cosmological redshift to determine relative motion in the expanding universe. Gravitational blueshift Unlike the relative Doppler blueshift, caused by movement of a source towards the observer and thus dependent on the received angle of the photon, gravitational blueshift is absolute and does not depend on the received angle of the photon: It is a natural consequence of conservation of energy and mass–energy equivalence, and was confirmed experimentally in 1959 with the Pound–Rebka experiment. Gravitational blueshift contributes to cosmic microwave background (CMB) anisotropy via the Sachs–Wolfe effect: when a gravitational well evolves while a photon is passing, the amount of blueshift on approach will differ from the amount of gravitational redshift as it leaves the region. Blue outliers There are faraway active galaxies that show a blueshift in their [O III] emission lines. One of the largest blueshifts is found in the narrow-line quasar, PG 1543+489, which has a relative velocity of -1150 km/s. These types of galaxies are called "blue outliers". Cosmological blueshift In a hypothetical universe undergoing a runaway Big Crunch contraction, a cosmological blueshift would be observed, with galaxies further away being increasingly blueshifted—the exact opposite of the actually observed cosmological redshift in the present expanding universe. See also Gravitational potential Relativistic Doppler effect References Sources Articles Odenwald, S. & Fienberg, RT. 1993; "Galaxy Redshifts Reconsidered" in Sky & Telescope Feb. 2003; pp31–35 (This article is useful further reading in distinguishing between the 3 types of redshift and their causes.) Lineweaver, Charles H. and Tamara M. Davis, "Misconceptions about the Big Bang", Scientific American, March 2005. (This article is useful for explaining the cosmological redshift mechanism as well as clearing up misconceptions regarding the physics of the expansion of space.) Books See also physical cosmology textbooks for applications of the cosmological and gravitational redshifts. External links Ned Wright's Cosmology tutorial Cosmic reference guide entry on redshift Mike Luciuk's Astronomical Redshift tutorial Animated GIF of Cosmological Redshift by Wayne Hu Astronomical spectroscopy Doppler effects Effects of gravity Physical cosmology Physical quantities Concepts in astronomy
Klein–Gordon equation
The Klein–Gordon equation (Klein–Fock–Gordon equation or sometimes Klein–Gordon–Fock equation) is a relativistic wave equation, related to the Schrödinger equation. It is second-order in space and time and manifestly Lorentz-covariant. It is a differential equation version of the relativistic energy–momentum relation . Statement The Klein–Gordon equation can be written in different ways. The equation itself usually refers to the position space form, where it can be written in terms of separated space and time components or by combining them into a four-vector . By Fourier transforming the field into momentum space, the solution is usually written in terms of a superposition of plane waves whose energy and momentum obey the energy–momentum dispersion relation from special relativity. Here, the Klein–Gordon equation is given for both of the two common metric signature conventions . Here, is the wave operator and is the Laplace operator. The speed of light and the Planck constant tend to clutter the equations, so the equation is often expressed in natural units where . Unlike the Schrödinger equation, the Klein–Gordon equation admits two values of for each : one positive and one negative. Only by separating out the positive and negative frequency parts does one obtain an equation describing a relativistic wavefunction. For the time-independent case, the Klein–Gordon equation becomes which is formally the same as the homogeneous screened Poisson equation. In addition, the Klein–Gordon equation can also be represented as: where the momentum operator is given as . Relevance The equation is to be understood first as a classical continuous scalar field equation that can be quantized. The quantization process then introduces a quantum field whose quanta are spinless particles. Its theoretical relevance is similar to that of the Dirac equation. The equation's solutions include a scalar or pseudoscalar field. In particle physics, electromagnetic interactions can be incorporated, forming the topic of scalar electrodynamics, but the practical utility for particles like pions is limited. There is a second version of the equation for a complex scalar field that is theoretically important, being the equation of the Higgs boson. In condensed matter physics it can be used for many approximations of quasi-particles without spin. The equation can be put into the form of a Schrödinger equation. In this form it is expressed as two coupled differential equations, each of first order in time. The solutions have two components, reflecting the charge degree of freedom in relativity. It admits a conserved quantity, but this is not positive definite. The wave function cannot therefore be interpreted as a probability amplitude. The conserved quantity is instead interpreted as electric charge, and the norm squared of the wave function is interpreted as a charge density. The equation describes all spinless particles with positive, negative, and zero charge. Any solution of the free Dirac equation is, for each of its four components, a solution of the free Klein–Gordon equation. Although it was historically invented as a single-particle equation, the Klein–Gordon equation cannot form the basis of a consistent relativistic one-particle quantum theory, since any relativistic theory implies creation and annihilation of particles beyond a certain energy threshold. Solution for free particle Here, the Klein–Gordon equation in natural units, , with the metric signature is solved by Fourier transformation. 
Inserting the Fourier transformation and using the orthogonality of the complex exponentials gives the dispersion relation. This restricts the momenta to those that lie on shell, giving positive and negative energy solutions. For a new set of constants , the solution then becomes: It is common to handle the positive and negative energy solutions by separating out the negative energies and working only with positive : In the last step, was renamed. Now we can perform the -integration, picking up the positive frequency part from the delta function only: This is commonly taken as a general solution to the free Klein–Gordon equation. Note that because the initial Fourier transformation contained Lorentz invariant quantities like only, the last expression is also a Lorentz invariant solution to the Klein–Gordon equation. If one does not require Lorentz invariance, one can absorb the -factor into the coefficients and . History The equation was named after the physicists Oskar Klein and Walter Gordon, who in 1926 proposed that it describes relativistic electrons. Vladimir Fock also discovered the equation independently in 1926, slightly after Klein's work: Klein's paper was received on 28 April 1926, Fock's paper on 30 July 1926, and Gordon's paper on 29 September 1926. Other authors making similar claims in that same year include Johann Kudar, Théophile de Donder and Frans-H. van den Dungen, and Louis de Broglie. Although it turned out that modeling the electron's spin required the Dirac equation, the Klein–Gordon equation correctly describes spinless relativistic composite particles, like the pion. On 4 July 2012, the European Organization for Nuclear Research (CERN) announced the discovery of the Higgs boson. Since the Higgs boson is a spin-zero particle, it is the first observed ostensibly elementary particle to be described by the Klein–Gordon equation. Further experimentation and analysis are required to discern whether the Higgs boson observed is that of the Standard Model or a more exotic, possibly composite, form. The Klein–Gordon equation was first considered as a quantum wave equation by Erwin Schrödinger in his search for an equation describing de Broglie waves. The equation is found in his notebooks from late 1925, and he appears to have prepared a manuscript applying it to the hydrogen atom. Yet, because it fails to take into account the electron's spin, the equation predicts the hydrogen atom's fine structure incorrectly, including overestimating the overall magnitude of the splitting pattern by a factor of for the -th energy level. The relativistic spectrum of the Dirac equation is, however, easily recovered if the orbital-momentum quantum number is replaced by the total angular-momentum quantum number . In January 1926, Schrödinger submitted for publication instead his equation, a non-relativistic approximation that predicts the Bohr energy levels of hydrogen without fine structure. In 1926, soon after the Schrödinger equation was introduced, Vladimir Fock wrote an article about its generalization for the case of magnetic fields, where forces were dependent on velocity, and independently derived this equation. Both Klein and Fock used Kaluza and Klein's method. Fock also determined the gauge theory for the wave equation. The Klein–Gordon equation for a free particle has a simple plane-wave solution. 
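The plane-wave statement just made can be checked numerically; the sketch below (the mass, wavenumber, and step sizes are arbitrary choices, working in natural units with c = ħ = 1) verifies by finite differences that a plane wave whose frequency satisfies the dispersion relation ω² = k² + m² solves ∂t²φ − ∂x²φ + m²φ = 0, for either sign of the frequency.

```python
import numpy as np

m, k = 1.0, 2.0                      # mass and wavenumber in natural units
omega = np.sqrt(k**2 + m**2)         # the dispersion relation fixes |omega|

def kg_residual(omega_sign, t, x, eps=1e-4):
    """Finite-difference evaluation of (d_tt - d_xx + m^2) phi for a plane wave
    phi = exp(-i(omega t - k x)); the result should vanish up to O(eps^2) error."""
    w = omega_sign * omega
    phi = lambda t_, x_: np.exp(-1j * (w * t_ - k * x_))
    d_tt = (phi(t + eps, x) - 2 * phi(t, x) + phi(t - eps, x)) / eps**2
    d_xx = (phi(t, x + eps) - 2 * phi(t, x) + phi(t, x - eps)) / eps**2
    return d_tt - d_xx + m**2 * phi(t, x)

for sign in (+1, -1):                # both energy signs solve the equation
    r = kg_residual(sign, t=0.3, x=0.7)
    print(f"omega = {sign * omega:+.4f}: residual magnitude = {abs(r):.2e}")
```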
Derivation The non-relativistic equation for the energy of a free particle is By quantizing this, we get the non-relativistic Schrödinger equation for a free particle: where is the momentum operator ( being the del operator), and is the energy operator. The Schrödinger equation suffers from not being relativistically invariant, meaning that it is inconsistent with special relativity. It is natural to try to use the identity from special relativity describing the energy: Then, just inserting the quantum-mechanical operators for momentum and energy yields the equation The square root of a differential operator can be defined with the help of Fourier transformations, but due to the asymmetry of space and time derivatives, Dirac found it impossible to include external electromagnetic fields in a relativistically invariant way. So he looked for another equation that can be modified in order to describe the action of electromagnetic forces. In addition, this equation, as it stands, is nonlocal (see also Introduction to nonlocal equations). Klein and Gordon instead began with the square of the above identity, i.e. which, when quantized, gives which simplifies to Rearranging terms yields Since all reference to imaginary numbers has been eliminated from this equation, it can be applied to fields that are real-valued, as well as those that have complex values. Rewriting the first two terms using the inverse of the Minkowski metric , and writing the Einstein summation convention explicitly we get Thus the Klein–Gordon equation can be written in a covariant notation. This often means an abbreviation in the form of where and This operator is called the wave operator. Today this form is interpreted as the relativistic field equation for spin-0 particles. Furthermore, any component of any solution to the free Dirac equation (for a spin-1/2 particle) is automatically a solution to the free Klein–Gordon equation. This generalizes to particles of any spin due to the Bargmann–Wigner equations. Furthermore, in quantum field theory, every component of every quantum field must satisfy the free Klein–Gordon equation, making the equation a generic expression of quantum fields. Klein–Gordon equation in a potential The Klein–Gordon equation can be generalized to describe a field in some potential as Then the Klein–Gordon equation is the case . Another common choice of potential which arises in interacting theories is the potential for a real scalar field Higgs sector The pure Higgs boson sector of the Standard model is modelled by a Klein–Gordon field with a potential, denoted for this section. The Standard model is a gauge theory and so while the field transforms trivially under the Lorentz group, it transforms as a -valued vector under the action of the part of the gauge group. Therefore while it is a vector field , it is still referred to as a scalar field, as scalar describes its transformation (formally, representation) under the Lorentz group. This is also discussed below in the scalar chromodynamics section. The Higgs field is modelled by a potential , which can be viewed as a generalization of the potential, but has an important difference: it has a circle of minima. This observation is an important one in the theory of spontaneous symmetry breaking in the Standard model. Conserved U(1) current The Klein–Gordon equation (and action) for a complex field admits a symmetry. That is, under the transformations the Klein–Gordon equation is invariant, as is the action (see below). 
By Noether's theorem for fields, corresponding to this symmetry there is a current defined as which satisfies the conservation equation The form of the conserved current can be derived systematically by applying Noether's theorem to the symmetry. We will not do so here, but simply verify that this current is conserved. From the Klein–Gordon equation for a complex field of mass , written in covariant notation and mostly plus signature, and its complex conjugate Multiplying by the left respectively by and (and omitting for brevity the explicit dependence), Subtracting the former from the latter, we obtain or in index notation, Applying this to the derivative of the current one finds This symmetry is a global symmetry, but it can also be gauged to create a local or gauge symmetry: see below scalar QED. The name of gauge symmetry is somewhat misleading: it is really a redundancy, while the global symmetry is a genuine symmetry. Lagrangian formulation The Klein–Gordon equation can also be derived by a variational method, arising as the Euler–Lagrange equation of the action In natural units, with signature mostly minus, the actions take the simple form for a real scalar field of mass , and for a complex scalar field of mass . Applying the formula for the stress–energy tensor to the Lagrangian density (the quantity inside the integral), we can derive the stress–energy tensor of the scalar field. It is and in natural units, By integration of the time–time component over all space, one may show that both the positive- and negative-frequency plane-wave solutions can be physically associated with particles with positive energy. This is not the case for the Dirac equation and its energy–momentum tensor. The stress energy tensor is the set of conserved currents corresponding to the invariance of the Klein–Gordon equation under space-time translations . Therefore each component is conserved, that is, (this holds only on-shell, that is, when the Klein–Gordon equations are satisfied). It follows that the integral of over space is a conserved quantity for each . These have the physical interpretation of total energy for and total momentum for with . Non-relativistic limit Classical field Taking the non-relativistic limit of a classical Klein–Gordon field begins with the ansatz factoring the oscillatory rest mass energy term, Defining the kinetic energy , in the non-relativistic limit , and hence Applying this yields the non-relativistic limit of the second time derivative of , Substituting into the free Klein–Gordon equation, , yields which (by dividing out the exponential and subtracting the mass term) simplifies to This is a classical Schrödinger field. Quantum field The analogous limit of a quantum Klein–Gordon field is complicated by the non-commutativity of the field operator. In the limit , the creation and annihilation operators decouple and behave as independent quantum Schrödinger fields. Scalar electrodynamics There is a way to make the complex Klein–Gordon field interact with electromagnetism in a gauge-invariant way. We can replace the (partial) derivative with the gauge-covariant derivative. Under a local gauge transformation, the fields transform as where is a function of spacetime, thus making it a local transformation, as opposed to a constant over all of spacetime, which would be a global transformation. A subtle point is that global transformations can arise as local ones, when the function is taken to be a constant function. 
A well-formulated theory should be invariant under such transformations. Precisely, this means that the equations of motion and action (see below) are invariant. To achieve this, ordinary derivatives \partial_\mu must be replaced by gauge-covariant derivatives D_\mu, defined as

D_\mu\psi = (\partial_\mu - i e A_\mu)\,\psi, \qquad D_\mu\bar\psi = (\partial_\mu + i e A_\mu)\,\bar\psi,

where the 4-potential or gauge field A_\mu transforms under a gauge transformation as

A_\mu \mapsto A_\mu + \frac{1}{e}\,\partial_\mu\theta.

With these definitions, the covariant derivative transforms as

D_\mu\psi \mapsto e^{i\theta(x)}\,D_\mu\psi.

In natural units, the Klein–Gordon equation therefore becomes

D_\mu D^\mu\psi + M^2\psi = 0.

Since an ungauged \mathrm{U}(1) symmetry is only present in complex Klein–Gordon theory, this coupling and promotion to a gauged \mathrm{U}(1) symmetry is compatible only with complex Klein–Gordon theory and not real Klein–Gordon theory. In natural units and mostly minus signature we have the scalar QED action

S = \int \left(-\tfrac{1}{4}F^{\mu\nu}F_{\mu\nu} + \overline{D^\mu\psi}\,D_\mu\psi - M^2\,\bar\psi\psi\right)\mathrm{d}^4 x,

where F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu is known as the Maxwell tensor, field strength or curvature depending on viewpoint. This theory is often known as scalar quantum electrodynamics or scalar QED, although all aspects we've discussed here are classical.

Scalar chromodynamics

It is possible to extend this to a non-abelian gauge theory with a gauge group G, where we couple the scalar Klein–Gordon action to a Yang–Mills Lagrangian. Here, the field is actually vector-valued, but is still described as a scalar field: the scalar describes its transformation under space-time transformations, but not its transformation under the action of the gauge group.

For concreteness we fix G to be \mathrm{SU}(N), the special unitary group for some N \geq 2. Under a gauge transformation U(x), which can be described as a function U : \mathbb{R}^{1,3}\to\mathrm{SU}(N), the scalar field \psi transforms as a vector,

\psi(x) \mapsto U(x)\,\psi(x), \qquad \bar\psi(x) \mapsto \bar\psi(x)\,U^\dagger(x).

The covariant derivative is

D_\mu\psi = \partial_\mu\psi - i g\,A_\mu\psi, \qquad D_\mu\bar\psi = \partial_\mu\bar\psi + i g\,\bar\psi\,A_\mu^\dagger,

where the gauge field or connection transforms as

A_\mu \mapsto U A_\mu U^{-1} + \frac{i}{g}\,U\,\partial_\mu U^{-1}.

This field can be seen as a matrix-valued field which acts on the vector space \mathbb{C}^N. Finally, defining the chromomagnetic field strength or curvature,

F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu - i g\,[A_\mu, A_\nu],

we can define the action

S = \int \left(-\tfrac{1}{2}\operatorname{tr}\!\left(F^{\mu\nu}F_{\mu\nu}\right) + \overline{D^\mu\psi}\,D_\mu\psi - M^2\,\bar\psi\psi\right)\mathrm{d}^4 x.

Klein–Gordon on curved spacetime

In general relativity, we include the effect of gravity by replacing partial derivatives with covariant derivatives, and the Klein–Gordon equation becomes (in the mostly pluses signature)

0 = -g^{\mu\nu}\nabla_\mu\nabla_\nu\psi + \frac{m^2 c^2}{\hbar^2}\psi = -g^{\mu\nu}\partial_\mu\partial_\nu\psi + g^{\mu\nu}\Gamma^{\sigma}{}_{\mu\nu}\,\partial_\sigma\psi + \frac{m^2 c^2}{\hbar^2}\psi,

or equivalently,

\frac{-1}{\sqrt{-g}}\,\partial_\mu\!\left(\sqrt{-g}\,g^{\mu\nu}\,\partial_\nu\psi\right) + \frac{m^2 c^2}{\hbar^2}\psi = 0,

where g^{\mu\nu} is the inverse of the metric tensor that is the gravitational potential field, g is the determinant of the metric tensor, \nabla_\mu is the covariant derivative, and \Gamma^{\sigma}{}_{\mu\nu} is the Christoffel symbol that is the gravitational force field. With natural units this becomes

-\nabla^\mu\nabla_\mu\psi + m^2\psi = 0.

This also admits an action formulation on a spacetime (Lorentzian) manifold M. Using abstract index notation and in mostly plus signature this is

S = \int_M \left(-\tfrac{1}{2}\,g^{ab}\,\nabla_a\psi\,\nabla_b\psi - \tfrac{1}{2}\,m^2\psi^2\right)\sqrt{-g}\;\mathrm{d}^4 x,

or equivalently

S = \int_M \left(-\tfrac{1}{2}\,\nabla^a\psi\,\nabla_a\psi - \tfrac{1}{2}\,m^2\psi^2\right)\sqrt{-g}\;\mathrm{d}^4 x.

See also
Quantum field theory
Quartic interaction
Relativistic wave equations
Dirac equation (spin 1/2)
Proca action (spin 1)
Rarita–Schwinger equation (spin 3/2)
Scalar field theory
Sine–Gordon equation

Remarks

Notes

References

External links
Linear Klein–Gordon Equation at EqWorld: The World of Mathematical Equations.
Nonlinear Klein–Gordon Equation at EqWorld: The World of Mathematical Equations.
Introduction to nonlocal equations.

Partial differential equations Special relativity Waves Quantum field theory Equations of physics Mathematical physics
Adenosine diphosphate
Adenosine diphosphate (ADP), also known as adenosine pyrophosphate (APP), is an important organic compound in metabolism and is essential to the flow of energy in living cells. ADP consists of three important structural components: a sugar backbone attached to adenine and two phosphate groups bonded to the 5' carbon atom of ribose. The diphosphate group of ADP is attached to the 5' carbon of the sugar backbone, while the adenine attaches to the 1' carbon. ADP can be interconverted to adenosine triphosphate (ATP) and adenosine monophosphate (AMP). ATP contains one more phosphate group than does ADP; AMP contains one fewer. Energy transfer used by all living things is a result of dephosphorylation of ATP by enzymes known as ATPases. The cleavage of a phosphate group from ATP couples energy to metabolic reactions and leaves ADP as a by-product. ATP is continually re-formed from the lower-energy species ADP and AMP. The biosynthesis of ATP is achieved through processes such as substrate-level phosphorylation, oxidative phosphorylation, and photophosphorylation, all of which facilitate the addition of a phosphate group to ADP.

Bioenergetics

ADP cycling supplies the energy needed to do work in a biological system, the thermodynamic process of transferring energy from one source to another. There are two types of energy: potential energy and kinetic energy. Potential energy can be thought of as stored energy, or usable energy that is available to do work; kinetic energy is the energy of an object as a result of its motion. The significance of ATP is in its ability to store potential energy within its phosphate bonds. The energy stored in these bonds can then be transferred to do work. For example, the transfer of energy from ATP to the protein myosin causes a conformational change when it connects to actin during muscle contraction. Multiple reactions between myosin and actin are needed to produce one muscle contraction, and therefore large amounts of ATP must be available to produce each contraction. For this reason, biological processes have evolved efficient ways to replenish the potential energy of ATP from ADP.

Breaking one of ATP's phosphoanhydride (phosphate) bonds releases approximately 30.5 kilojoules per mole of ATP (7.3 kcal/mol). ADP can be converted back to ATP by releasing the chemical energy available in food; in humans, this is constantly performed via aerobic respiration in the mitochondria. Plants use photosynthetic pathways to convert and store energy from sunlight, which also drives the conversion of ADP to ATP. Animals use the energy released in the breakdown of glucose and other molecules to convert ADP to ATP, which can then be used to fuel necessary growth and cell maintenance.

Cellular respiration

Catabolism

The ten-step catabolic pathway of glycolysis is the initial phase of free-energy release in the breakdown of glucose and can be split into two phases, the preparatory phase and the payoff phase. ADP and phosphate are needed as precursors to synthesize ATP in the payoff reactions of the TCA cycle and the oxidative phosphorylation mechanism. During the payoff phase of glycolysis, the enzymes phosphoglycerate kinase and pyruvate kinase facilitate the addition of a phosphate group to ADP by way of substrate-level phosphorylation.

Glycolysis

Glycolysis is performed by all living organisms and consists of 10 steps.
The net reaction for the overall process of glycolysis is:

Glucose + 2 NAD+ + 2 Pi + 2 ADP → 2 pyruvate + 2 ATP + 2 NADH + 2 H2O

Steps 1 and 3 require the input of energy derived from the hydrolysis of ATP to ADP and Pi (inorganic phosphate), whereas steps 7 and 10 require the input of ADP, each yielding ATP. The enzymes necessary to break down glucose are found in the cytoplasm, the viscous fluid that fills living cells, where the glycolytic reactions take place.

Citric acid cycle

The citric acid cycle, also known as the Krebs cycle or the TCA (tricarboxylic acid) cycle, is an 8-step process that takes the pyruvate generated by glycolysis and generates 4 NADH, FADH2, and GTP, which is further converted to ATP. ADP is used only in step 5, where GTP is generated by succinyl-CoA synthetase and then converted to ATP (GTP + ADP → GDP + ATP).

Oxidative phosphorylation

Oxidative phosphorylation produces 26 of the 30 equivalents of ATP generated in cellular respiration by transferring electrons from NADH or FADH2 to O2 through electron carriers. The energy released when electrons are passed from higher-energy NADH or FADH2 to the lower-energy O2 is used to phosphorylate ADP and once again generate ATP. It is this energy coupling and phosphorylation of ADP to ATP that gives the electron transport chain the name oxidative phosphorylation.

Mitochondrial ATP synthase complex

During the initial phases of glycolysis and the TCA cycle, cofactors such as NAD+ donate and accept electrons that aid in the electron transport chain's ability to produce a proton gradient across the inner mitochondrial membrane. The ATP synthase complex sits within the inner mitochondrial membrane (the FO portion) and protrudes into the matrix (the F1 portion). The energy derived from the chemical gradient is then used to synthesize ATP by coupling inorganic phosphate to ADP in the active site of the ATP synthase enzyme; the equation for this can be written as ADP + Pi → ATP.

Blood platelet activation

Under normal conditions, small disk-shaped platelets circulate in the blood freely and without interaction with one another. ADP is stored in dense bodies inside blood platelets and is released upon platelet activation. ADP interacts with a family of ADP receptors found on platelets (P2Y1, P2Y12, and P2X1), which leads to platelet activation. P2Y1 receptors initiate platelet aggregation and shape change as a result of interactions with ADP. P2Y12 receptors further amplify the response to ADP and bring about the completion of aggregation. ADP in the blood is converted to adenosine by the action of ecto-ADPases, inhibiting further platelet activation via adenosine receptors.

See also
Nucleoside
Nucleotide
DNA
RNA
Oligonucleotide
Apyrase
Phosphate
Adenosine diphosphate ribose

References

Adenosine receptor agonists Neurotransmitters Nucleotides Cellular respiration Purines Purinergic signalling Pyrophosphate esters
Earth-centered inertial
Earth-centered inertial (ECI) coordinate frames have their origins at the center of mass of Earth and are fixed with respect to the stars. The "I" in "ECI" stands for inertial (i.e. "not accelerating"), in contrast to the "Earth-centered – Earth-fixed" (ECEF) frames, which remain fixed with respect to Earth's surface as it rotates and therefore rotate with respect to the stars. For objects in space, the equations of motion that describe orbital motion are simpler in a non-rotating frame such as ECI. The ECI frame is also useful for specifying the direction toward celestial objects. To represent the positions and velocities of terrestrial objects, it is convenient instead to use ECEF coordinates or latitude, longitude, and altitude.

In a nutshell:
ECI: inertial, not rotating with respect to the stars; useful for describing the motion of celestial bodies and spacecraft.
ECEF: not inertial, accelerated, rotating with respect to the stars; useful for describing the motion of objects on Earth's surface.

The extent to which an ECI frame is actually inertial is limited by the non-uniformity of the surrounding gravitational field. For example, the Moon's gravitational influence on a satellite in high Earth orbit is significantly different from its influence on Earth, so observers in an ECI frame would have to account for this acceleration difference in their laws of motion. The closer the observed object is to the ECI origin, the less significant the effect of the gravitational disparity is.

Coordinate system definitions

It is convenient to define the orientation of an ECI frame using the Earth's orbit plane and the orientation of the Earth's rotational axis in space. The Earth's orbit plane is called the ecliptic, and it does not coincide with the Earth's equatorial plane. The angle between the Earth's equatorial plane and the ecliptic, ε, is called the obliquity of the ecliptic, and ε ≈ 23.4°. An equinox occurs when the Earth is at a position in its orbit such that a vector from the Earth toward the Sun points to where the ecliptic intersects the celestial equator. The equinox which occurs near the first day of spring (with respect to the Northern Hemisphere) is called the vernal equinox. The vernal equinox can be used as a principal direction for ECI frames. The Sun lies in the direction of the vernal equinox around 21 March. The fundamental plane for ECI frames is usually either the equatorial plane or the ecliptic. The location of an object in space can be defined in terms of right ascension and declination, which are measured from the vernal equinox and the celestial equator. Right ascension and declination are spherical coordinates analogous to longitude and latitude, respectively. Locations of objects in space can also be represented using Cartesian coordinates in an ECI frame.

The gravitational attraction of the Sun and Moon on the Earth's equatorial bulge causes the rotational axis of the Earth to precess in space, similar to the action of a spinning top; this is called precession. Nutation is the smaller-amplitude, shorter-period (≤ 18.6 years) wobble that is superposed on the precessional motion of the celestial pole. It is due to shorter-period fluctuations in the strength of the torque exerted on Earth's equatorial bulge by the Sun, Moon, and planets. When the short-term periodic oscillations of this motion are averaged out, they are considered "mean" as opposed to "true" values. Thus, the vernal equinox, the equatorial plane of the Earth, and the ecliptic plane vary according to date and are specified for a particular epoch.
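As a concrete illustration of the relationship between right ascension/declination and Cartesian ECI coordinates described above, here is a minimal Python sketch. The function name and the use of NumPy are illustrative choices, not part of any standard library for ECI work, and the result is a unit direction vector only, ignoring distance:

import numpy as np

def radec_to_eci_unit_vector(ra_deg: float, dec_deg: float) -> np.ndarray:
    """Convert right ascension and declination (degrees) to a unit
    direction vector in an equator-based ECI frame such as J2000.

    Convention assumed here: x toward the vernal equinox, z toward the
    celestial North Pole, y completing the right-handed set.
    """
    ra = np.radians(ra_deg)
    dec = np.radians(dec_deg)
    return np.array([
        np.cos(dec) * np.cos(ra),   # x: vernal equinox direction
        np.cos(dec) * np.sin(ra),   # y: 90 degrees east along the celestial equator
        np.sin(dec),                # z: celestial North Pole
    ])

# Example: the vernal equinox direction itself (RA = 0, Dec = 0)
print(radec_to_eci_unit_vector(0.0, 0.0))  # -> [1. 0. 0.]

A full implementation would additionally apply the precession and nutation models mentioned above when relating directions expressed for different epochs.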
Models representing the ever-changing orientation of the Earth in space are available from the International Earth Rotation and Reference Systems Service. Examples include:

J2000: One commonly used ECI frame is defined with the Earth's Mean Equator and Mean Equinox (MEME) at 12:00 Terrestrial Time on 1 January 2000. It can be referred to as J2K, J2000 or EME2000. The x-axis is aligned with the mean vernal equinox. The z-axis is aligned with the Earth's rotation axis (or equivalently, the celestial North Pole) as it was at that time. The y-axis is rotated by 90° East about the celestial equator.

M50: This frame is similar to J2000, but is defined with the mean equator and equinox at the beginning of the Besselian year 1950, which is B1950.0 = JDE 2433282.423357 = 1950 January 0.9235 TT = 1949 December 31 22:09:50.4 TT.

GCRF: the Geocentric Celestial Reference Frame is the Earth-centered counterpart of the International Celestial Reference Frame.

MOD: a Mean of Date (MOD) frame is defined using the mean equator and equinox on a particular date.

TEME: the ECI frame used for the NORAD two-line elements is sometimes called true equator, mean equinox (TEME), although it does not use the conventional mean equinox.

See also
Earth's axial tilt
Geocentric Celestial Reference System
Orbital state vectors

References

Astronomical coordinate systems
Structural equation modeling
Structural equation modeling (SEM) is a diverse set of methods used by scientists doing both observational and experimental research. SEM is used mostly in the social and behavioral sciences, but it is also used in epidemiology, business, and other fields. A definition of SEM is difficult without reference to technical language, but a good starting place is the name itself.

SEM involves a model representing how various aspects of some phenomenon are thought to causally connect to one another. Structural equation models often contain postulated causal connections among some latent variables (variables thought to exist but which can't be directly observed, such as an attitude, intelligence, or mental illness). Additional causal connections link those latent variables to observed variables whose values appear in a data set. The causal connections are represented using equations, but the postulated structuring can also be presented using diagrams containing arrows, as in Figures 1 and 2. The causal structures imply that specific patterns should appear among the values of the observed variables. This makes it possible to use the connections between the observed variables' values to estimate the magnitudes of the postulated effects, and to test whether or not the observed data are consistent with the requirements of the hypothesized causal structures.

The boundary between what is and is not a structural equation model is not always clear, but SE models often contain postulated causal connections among a set of latent variables and causal connections linking those latent variables to variables that can be observed and whose values are available in some data set. Variations among the styles of latent causal connections, variations among the observed variables measuring the latent variables, and variations in the statistical estimation strategies result in the SEM toolkit including confirmatory factor analysis, confirmatory composite analysis, path analysis, multi-group modeling, longitudinal modeling, partial least squares path modeling, latent growth modeling and hierarchical or multilevel modeling.

SEM researchers use computer programs to estimate the strength and sign of the coefficients corresponding to the modeled structural connections, for example the numbers connected to the arrows in Figure 1. Because a postulated model such as Figure 1 may not correspond to the worldly forces controlling the observed data measurements, the programs also provide model tests and diagnostic clues suggesting which indicators, or which model components, might introduce inconsistency between the model and observed data. Criticisms of SEM methods include disregard of available model tests, problems in the model's specification, a tendency to accept models without considering external validity, and potential philosophical biases.

A great advantage of SEM is that all of these measurements and tests occur simultaneously in one statistical estimation procedure, where all the model coefficients are calculated using all information from the observed variables. This means the estimates are more accurate than if a researcher were to calculate each part of the model separately.
History

Structural equation modeling (SEM) began differentiating itself from correlation and regression when Sewall Wright provided explicit causal interpretations for a set of regression-style equations based on a solid understanding of the physical and physiological mechanisms producing direct and indirect effects among his observed variables. The equations were estimated like ordinary regression equations, but the substantive context for the measured variables permitted clear causal, not merely predictive, understandings. O. D. Duncan introduced SEM to the social sciences in his 1975 book, and SEM blossomed in the late 1970s and 1980s when increasing computing power permitted practical model estimation. In 1987 Hayduk provided the first book-length introduction to structural equation modeling with latent variables, and this was soon followed by Bollen's popular text (1989).

Different yet mathematically related modeling approaches developed in psychology, sociology, and economics. Early Cowles Commission work on simultaneous equations estimation centered on Koopmans and Hood's (1953) algorithms from transport economics and optimal routing, with maximum likelihood estimation and closed-form algebraic calculations, as iterative solution search techniques were limited in the days before computers. The convergence of two of these developmental streams (factor analysis from psychology, and path analysis from sociology via Duncan) produced the current core of SEM. One of several programs Karl Jöreskog developed at Educational Testing Service, LISREL embedded latent variables (which psychologists knew as the latent factors from factor analysis) within path-analysis-style equations (which sociologists inherited from Wright and Duncan). The factor-structured portion of the model incorporated measurement errors, which permitted measurement-error adjustment, though not necessarily error-free estimation, of effects connecting different postulated latent variables.

Traces of the historical convergence of the factor analytic and path analytic traditions persist as the distinction between the measurement and structural portions of models, and as continuing disagreements over model testing and over whether measurement should precede or accompany structural estimates. Viewing factor analysis as a data-reduction technique deemphasizes testing, which contrasts with path analytic appreciation for testing postulated causal connections – where the test result might signal model misspecification. The friction between the factor analytic and path analytic traditions continues to surface in the literature.

Wright's path analysis influenced Hermann Wold, Wold's student Karl Jöreskog, and Jöreskog's student Claes Fornell, but SEM never gained a large following among U.S. econometricians, possibly due to fundamental differences in modeling objectives and typical data structures. The prolonged separation of SEM's economic branch led to procedural and terminological differences, though deep mathematical and statistical connections remain. The economic version of SEM can be seen in SEMNET discussions of endogeneity, and in the heat produced as Judea Pearl's approach to causality via directed acyclic graphs (DAGs) rubs against economic approaches to modeling. Discussions comparing and contrasting various SEM approaches are available, but disciplinary differences in data structures and the concerns motivating economic models make reunion unlikely.
Pearl extended SEM from linear to nonparametric models, and proposed causal and counterfactual interpretations of the equations. Nonparametric SEMs permit estimating total, direct and indirect effects without making any commitment to linearity of effects or assumptions about the distributions of the error terms.

SEM analyses are popular in the social sciences because computer programs make it possible to estimate complicated causal structures, but the complexity of the models introduces substantial variability in the quality of the results. Some, but not all, results are obtained without the "inconvenience" of understanding experimental design, statistical control, the consequences of sample size, and other features contributing to good research design.

General steps and considerations

The following considerations apply to the construction and assessment of many structural equation models.

Model specification

Building or specifying a model requires attending to: the set of variables to be employed, what is known about the variables, what is presumed or hypothesized about the variables' causal connections and disconnections, what the researcher seeks to learn from the modeling, and the cases for which values of the variables will be available (kids? workers? companies? countries? cells? accidents? cults?). Structural equation models attempt to mirror the worldly forces operative for causally homogeneous cases – namely cases enmeshed in the same worldly causal structures but whose values on the causes differ and who therefore possess different values on the outcome variables. Causal homogeneity can be facilitated by case selection, or by segregating cases in a multi-group model.

A model's specification is not complete until the researcher specifies: which effects and/or correlations/covariances are to be included and estimated, which effects and other coefficients are forbidden or presumed unnecessary, and which coefficients will be given fixed/unchanging values (e.g. to provide measurement scales for latent variables as in Figure 2).

The latent level of a model is composed of endogenous and exogenous variables. The endogenous latent variables are the true-score variables postulated as receiving effects from at least one other modeled variable. Each endogenous variable is modeled as the dependent variable in a regression-style equation. The exogenous latent variables are background variables postulated as causing one or more of the endogenous variables and are modeled like the predictor variables in regression-style equations. Causal connections among the exogenous variables are not explicitly modeled but are usually acknowledged by modeling the exogenous variables as freely correlating with one another. The model may include intervening variables – variables receiving effects from some variables but also sending effects to other variables. As in regression, each endogenous variable is assigned a residual or error variable encapsulating the effects of unavailable and usually unknown causes. Each latent variable, whether exogenous or endogenous, is thought of as containing the cases' true scores on that variable, and these true scores causally contribute valid/genuine variations into one or more of the observed/reported indicator variables.

The LISREL program assigned Greek names to the elements in a set of matrices to keep track of the various model components.
These names became relatively standard notation, though the notation has been extended and altered to accommodate a variety of statistical considerations. Texts and programs that "simplify" model specification via diagrams, or via equations permitting user-selected variable names, re-convert the user's model into some standard matrix-algebra form in the background. The "simplifications" are achieved by implicitly introducing default program "assumptions" about model features with which users supposedly need not concern themselves. Unfortunately, these default assumptions easily obscure model components, leaving unrecognized issues lurking within the model's structure and underlying matrices.

Two main components of models are distinguished in SEM: the structural model showing potential causal dependencies between endogenous and exogenous latent variables, and the measurement model showing the causal connections between the latent variables and the indicators. Exploratory and confirmatory factor analysis models, for example, focus on the causal measurement connections, while path models more closely correspond to SEM's latent structural connections.

Modelers specify each coefficient in a model as being free to be estimated, or fixed at some value. The free coefficients may be postulated effects the researcher wishes to test, background correlations among the exogenous variables, or the variances of the residual or error variables providing additional variations in the endogenous latent variables. The fixed coefficients may be values like the 1.0 values in Figure 2 that provide scales for the latent variables, or values of 0.0 which assert causal disconnections, such as the assertion of no direct effects (no arrows) pointing from Academic Achievement to any of the four scales in Figure 1. SEM programs provide estimates and tests of the free coefficients, while the fixed coefficients contribute importantly to testing the overall model structure. Various kinds of constraints between coefficients can also be used. The model specification depends on what is known from the literature, the researcher's experience with the modeled indicator variables, and the features being investigated by using the specific model structure.

There is a limit to how many coefficients can be estimated in a model. If there are fewer data points than the number of estimated coefficients, the resulting model is said to be "unidentified" and no coefficient estimates can be obtained. Reciprocal effects, and other causal loops, may also interfere with estimation.

Estimation of free model coefficients

Model coefficients fixed at zero, 1.0, or other values do not require estimation because they already have specified values. Estimated values for free model coefficients are obtained by maximizing fit to, or minimizing difference from, the data, relative to what the data's features would be if the free model coefficients took on the estimated values. The model's implications for what the data should look like for a specific set of coefficient values depend on: a) the coefficients' locations in the model (e.g. which variables are connected/disconnected), b) the nature of the connections between the variables (covariances or effects, with effects often assumed to be linear), c) the nature of the error or residual variables (often assumed to be independent of, or causally disconnected from, many variables), and d) the measurement scales appropriate for the variables (interval-level measurement is often assumed).
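As a small worked illustration of how a model implies data features (an invented example, not taken from the original text): suppose a latent variable L with variance Var(L) causes two indicators, x = \lambda_1 L + e_1 and y = \lambda_2 L + e_2, with error terms uncorrelated with L and with each other. Then the model-implied covariance between the indicators is

\operatorname{Cov}(x, y) = \lambda_1\,\lambda_2\,\operatorname{Var}(L),

so specific values of the loadings and the latent variance commit the model to a specific covariance between x and y, and estimation searches for the coefficient values whose implied covariances best match the observed ones.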
A stronger effect connecting two latent variables implies the indicators of those latents should be more strongly correlated. Hence, a reasonable estimate of a latent's effect will be whatever value best matches the correlations between the indicators of the corresponding latent variables – namely the estimate-value maximizing the match with the data, or minimizing the differences from the data. With maximum likelihood estimation, the numerical values of all the free model coefficients are individually adjusted (progressively increased or decreased from initial start values) until they maximize the likelihood of observing the sample data – whether the data are the variables' covariances/correlations, or the cases' actual values on the indicator variables. Ordinary least squares estimates are the coefficient values that minimize the squared differences between the data and what the data would look like if the model were correctly specified, namely if all the model's estimated features corresponded to real worldly features.

The appropriate statistical feature to maximize or minimize to obtain estimates depends on the variables' levels of measurement (estimation is generally easier with interval-level measurements than with nominal or ordinal measures), and on where a specific variable appears in the model (e.g. endogenous dichotomous variables create more estimation difficulties than exogenous dichotomous variables). Most SEM programs provide several options for what is to be maximized or minimized to obtain estimates of the model's coefficients. The choices often include maximum likelihood estimation (MLE), full information maximum likelihood (FIML), ordinary least squares (OLS), weighted least squares (WLS), diagonally weighted least squares (DWLS), and two-stage least squares.

One common problem is that a coefficient's estimated value may be underidentified because it is insufficiently constrained by the model and data. No unique best-estimate exists unless the model and data together sufficiently constrain or restrict a coefficient's value. For example, the magnitude of a single data correlation between two variables is insufficient to provide estimates of a reciprocal pair of modeled effects between those variables. The correlation might be accounted for by one of the reciprocal effects being stronger than the other effect, or the other effect being stronger than the one, or by effects of equal magnitude. Underidentified effect estimates can be rendered identified by introducing additional model and/or data constraints. For example, reciprocal effects can be rendered identified by constraining one effect estimate to be double, triple, or equivalent to the other effect estimate, but the resultant estimates will only be trustworthy if the additional model constraint corresponds to the world's structure. Data on a third variable that directly causes only one of a pair of reciprocally causally connected variables can also assist identification. Constraining a third variable to not directly cause one of the reciprocally-causal variables breaks the symmetry otherwise plaguing the reciprocal effect estimates, because that third variable must be more strongly correlated with the variable it causes directly than with the variable at the "other" end of the reciprocal, which it impacts only indirectly.
Notice that this again presumes the properness of the model's causal specification – namely, that there really is a direct effect leading from the third variable to the variable at this end of the reciprocal effects, and no direct effect on the variable at the "other end" of the reciprocally connected pair of variables. Theoretical demands for null/zero effects provide helpful constraints assisting estimation, though theories often fail to clearly report which effects are allegedly nonexistent.

Model assessment

Model assessment depends on the theory, the data, the model, and the estimation strategy. Hence model assessments consider:

whether the data contain reasonable measurements of appropriate variables,
whether the modeled cases are causally homogeneous (it makes no sense to estimate one model if the data cases reflect two or more different causal networks),
whether the model appropriately represents the theory or features of interest (models are unpersuasive if they omit features required by a theory, or contain coefficients inconsistent with that theory),
whether the estimates are statistically justifiable (substantive assessments may be devastated by violating assumptions, by using an inappropriate estimator, and/or by encountering non-convergence of iterative estimators),
the substantive reasonableness of the estimates (negative variances, and correlations exceeding 1.0 or −1.0, are impossible; statistically possible estimates that are inconsistent with theory may also challenge theory, and our understanding),
the remaining consistency, or inconsistency, between the model and data (the estimation process minimizes the differences between the model and data, but important and informative differences may remain).

Research claiming to test or "investigate" a theory requires attending to beyond-chance model-data inconsistency. Estimation adjusts the model's free coefficients to provide the best possible fit to the data. The output from SEM programs includes a matrix reporting the relationships among the observed variables that would be observed if the estimated model effects actually controlled the observed variables' values. The "fit" of a model reports match or mismatch between the model-implied relationships (often covariances) and the corresponding observed relationships among the variables. Large and significant differences between the data and the model's implications signal problems. The probability accompanying a χ² (chi-squared) test is the probability that the data could arise by random sampling variations if the estimated model constituted the real underlying population forces. A small probability reports it would be unlikely for the current data to have arisen if the modeled structure constituted the real population causal forces – with the remaining differences attributed to random sampling variations.

If a model remains inconsistent with the data despite selecting optimal coefficient estimates, an honest research response reports and attends to this evidence (often a significant model test). Beyond-chance model-data inconsistency challenges both the coefficient estimates and the model's capacity for adjudicating the model's structure, irrespective of whether the inconsistency originates in problematic data, inappropriate statistical estimation, or incorrect model specification. Coefficient estimates in data-inconsistent ("failing") models are interpretable, as reports of how the world would appear to someone believing a model that conflicts with the available data.
The estimates in data-inconsistent models do not necessarily become "obviously wrong" by becoming statistically strange, or wrongly signed according to theory. The estimates may even closely match a theory's requirements, but the remaining data inconsistency renders the match between the estimates and theory unable to provide succor. Failing models remain interpretable, but only as interpretations that conflict with available evidence.

Replication is unlikely to detect misspecified models which inappropriately fit the data. If the replicate data is within random variations of the original data, the same incorrect coefficient placements that provided inappropriate fit to the original data will likely also inappropriately fit the replicate data. Replication helps detect issues such as data mistakes (made by different research groups), but is especially weak at detecting misspecifications after exploratory model modification – as when confirmatory factor analysis (CFA) is applied to a random second half of data following exploratory factor analysis (EFA) of the first half of the data.

A modification index is an estimate of how much a model's fit to the data would "improve" (but not necessarily how much the model's structure would improve) if a specific currently-fixed model coefficient were freed for estimation. Researchers confronting data-inconsistent models can easily free coefficients the modification indices report as likely to produce substantial improvements in fit. This simultaneously introduces a substantial risk of moving from a causally-wrong-and-failing model to a causally-wrong-but-fitting model, because improved data fit does not provide assurance that the freed coefficients are substantively reasonable or world matching. The original model may contain causal misspecifications such as incorrectly directed effects, or incorrect assumptions about unavailable variables, and such problems cannot be corrected by adding coefficients to the current model. Consequently, such models remain misspecified despite the closer fit provided by additional coefficients. Fitting yet worldly-inconsistent models are especially likely to arise if a researcher committed to a particular model (for example a factor model having a desired number of factors) gets an initially-failing model to fit by inserting measurement error covariances "suggested" by modification indices. MacCallum (1986) demonstrated that "even under favorable conditions, models arising from specification searches must be viewed with caution." Model misspecification may sometimes be corrected by insertion of coefficients suggested by the modification indices, but many more corrective possibilities are raised by employing a few indicators of similar-yet-importantly-different latent variables.

"Accepting" failing models as "close enough" is also not a reasonable alternative. A cautionary instance was provided by Browne, MacCallum, Kim, Anderson, and Glaser, who addressed the mathematics behind why the χ² test can have (though it does not always have) considerable power to detect model misspecification. The probability accompanying a χ² test is the probability that the data could arise by random sampling variations if the current model, with its optimal estimates, constituted the real underlying population forces. A small probability reports it would be unlikely for the current data to have arisen if the current model structure constituted the real population causal forces – with the remaining differences attributed to random sampling variations.
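As a purely numerical illustration of the χ² machinery just described, here is a minimal Python sketch. It is not any particular SEM package's implementation; the function and variable names are invented for this example, and some programs use N rather than N − 1 in these formulas:

import numpy as np

def ml_fit_statistics(S, Sigma, n_obs, df):
    """Maximum-likelihood discrepancy, chi-square and RMSEA for a SEM.

    S      : observed sample covariance matrix (p x p)
    Sigma  : model-implied covariance matrix at the ML estimates (p x p)
    n_obs  : number of cases N
    df     : model degrees of freedom (covariance moments minus free parameters)
    """
    p = S.shape[0]
    # ML discrepancy function: F = ln|Sigma| - ln|S| + tr(S Sigma^{-1}) - p
    _, logdet_sigma = np.linalg.slogdet(Sigma)
    _, logdet_s = np.linalg.slogdet(S)
    f_ml = logdet_sigma - logdet_s + np.trace(S @ np.linalg.inv(Sigma)) - p
    chi_square = (n_obs - 1) * f_ml
    # RMSEA: per-degree-of-freedom misfit beyond chance, floored at zero
    rmsea = np.sqrt(max(chi_square - df, 0.0) / (df * (n_obs - 1)))
    return chi_square, rmsea

# Toy example: two observed variables with correlation 0.3, and a model
# that (wrongly) implies they are uncorrelated; df = 1 for this toy model.
S = np.array([[1.0, 0.3], [0.3, 1.0]])
Sigma = np.array([[1.0, 0.0], [0.0, 1.0]])
chi2, rmsea = ml_fit_statistics(S, Sigma, n_obs=200, df=1)
print(round(chi2, 2), round(rmsea, 3))

This shows only the computational core; actual SEM software also estimates Sigma from the model's free coefficients and handles missing data, mean structures, and alternative estimators.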
Browne, MacCallum, Kim, Anderson, and Glaser presented a factor model they viewed as acceptable despite the model being significantly inconsistent with their data according to χ². The fallaciousness of their claim that close fit should be treated as good enough was demonstrated by Hayduk, Pazkerka-Robinson, Cummings, Levers and Beres, who demonstrated a fitting model for Browne et al.'s own data by incorporating an experimental feature Browne et al. overlooked. The fault was not in the math of the indices or in the over-sensitivity of χ² testing. The fault was in Browne, MacCallum, and the other authors forgetting, neglecting, or overlooking that the amount of ill fit cannot be trusted to correspond to the nature, location, or seriousness of problems in a model's specification.

Many researchers tried to justify switching to fit indices, rather than testing their models, by claiming that χ² increases (and hence its probability decreases) with increasing sample size (N). There are two mistakes in discounting χ² on this basis. First, for proper models, χ² does not increase with increasing N, so if χ² increases with N that itself is a sign that something is detectably problematic. And second, for models that are detectably misspecified, χ²'s increase with N provides the good news of increasing statistical power to detect model misspecification (namely, power to avoid a Type II error). Some kinds of important misspecifications cannot be detected by χ², so any amount of ill fit beyond what might be reasonably produced by random variations warrants report and consideration. The χ² model test, possibly adjusted, is the strongest available structural equation model test.

Numerous fit indices quantify how closely a model fits the data, but all fit indices suffer from the logical difficulty that the size or amount of ill fit is not trustably coordinated with the severity or nature of the issues producing the data inconsistency. Models with different causal structures which fit the data identically well have been called equivalent models. Such models are data-fit-equivalent though not causally equivalent, so at least one of the so-called equivalent models must be inconsistent with the world's structure. If there is a perfect 1.0 correlation between X and Y and we model this as X causes Y, there will be perfect fit and zero residual error. But the model may not match the world because Y may actually cause X, or both X and Y may be responding to a common cause Z, or the world may contain a mixture of these effects (e.g. a common cause plus an effect of Y on X), or other causal structures. The perfect fit does not tell us the model's structure corresponds to the world's structure, and this in turn implies that getting closer to perfect fit does not necessarily correspond to getting closer to the world's structure – maybe it does, maybe it doesn't. This makes it incorrect for a researcher to claim that even perfect model fit implies the model is correctly causally specified. For even moderately complex models, precisely equivalently-fitting models are rare. Models almost-fitting the data, according to any index, unavoidably introduce additional potentially-important yet unknown model misspecifications. These models constitute a greater research impediment. This logical weakness renders all fit indices "unhelpful" whenever a structural equation model is significantly inconsistent with the data, but several forces continue to propagate fit-index use.
For example, Dag Sörbom reported that when someone asked Karl Jöreskog, the developer of the first structural equation modeling program, "Why have you then added GFI to your LISREL program?", Jöreskog replied "Well, users threaten us saying they would stop using LISREL if it always produces such large chi-squares. So we had to invent something to make people happy. GFI serves that purpose." The evidence of model-data inconsistency was too statistically solid to be dislodged or discarded, but people could at least be provided a way to distract from the "disturbing" evidence. Career profits can still be accrued by developing additional indices, reporting investigations of index behavior, and publishing models intentionally burying evidence of model-data inconsistency under an MDI (a mound of distracting indices).

There seems to be no general justification for why a researcher should "accept" a causally wrong model, rather than attempting to correct detected misspecifications. And some portions of the literature seem not to have noticed that "accepting a model" (on the basis of "satisfying" an index value) suffers from an intensified version of the criticism applied to "acceptance" of a null hypothesis. Introductory statistics texts usually recommend replacing the term "accept" with "failed to reject the null hypothesis" to acknowledge the possibility of Type II error. A Type III error arises from "accepting" a model hypothesis when the current data are sufficient to reject the model.

Whether or not researchers are committed to seeking the world's structure is a fundamental concern. Displacing test evidence of model-data inconsistency by hiding it behind index claims of acceptable fit introduces the discipline-wide cost of diverting attention away from whatever the discipline might have done to attain a structurally-improved understanding of the discipline's substance. The discipline ends up paying a real cost for index-based displacement of evidence of model misspecification. The frictions created by disagreements over the necessity of correcting model misspecifications will likely increase with increasing use of non-factor-structured models, and with use of fewer, more-precise indicators of similar yet importantly-different latent variables.

The considerations relevant to using fit indices include checking:

whether data concerns have been addressed (to ensure data mistakes are not driving model-data inconsistency);
whether criterion values for the index have been investigated for models structured like the researcher's model (e.g. index criteria based on factor-structured models are only appropriate if the researcher's model actually is factor structured);
whether the kinds of potential misspecifications in the current model correspond to the kinds of misspecifications on which the index criteria are based (e.g. criteria based on simulation of omitted factor loadings may not be appropriate for misspecification resulting from failure to include appropriate control variables);
whether the researcher knowingly agrees to disregard evidence pointing to the kinds of misspecifications on which the index criteria were based
(if the index criterion is based on simulating a missing factor loading or two, using that criterion acknowledges the researcher's willingness to accept a model missing a factor loading or two);
whether the latest, not outdated, index criteria are being used (because the criteria for some indices tightened over time);
whether satisfying criterion values on pairs of indices is required (e.g. Hu and Bentler report that some common indices function inappropriately unless they are assessed together);
whether a model χ² test is, or is not, available (a χ² value, degrees of freedom, and probability will be available for models reporting indices based on χ²); and
whether the researcher has considered both alpha (Type I) and beta (Type II) errors in making their index-based decisions (e.g. if the model is significantly data-inconsistent, the "tolerable" amount of inconsistency is likely to differ in medical, business, social and psychological contexts).

Some of the more commonly used fit statistics include:

Chi-square (χ²): A fundamental test of fit used in the calculation of many other fit measures. It is a function of the discrepancy between the observed covariance matrix and the model-implied covariance matrix. Chi-square increases with sample size only if the model is detectably misspecified.

Akaike information criterion (AIC): An index of relative model fit. The preferred model is the one with the lowest AIC value,

AIC = 2k - 2\ln(\hat{L}),

where k is the number of parameters in the statistical model, and \hat{L} is the maximized value of the likelihood of the model.

Root Mean Square Error of Approximation (RMSEA): A fit index where a value of zero indicates the best fit. Guidelines for determining a "close fit" using RMSEA are highly contested.

Standardized Root Mean Squared Residual (SRMR): A popular absolute fit indicator. Hu and Bentler (1999) suggested .08 or smaller as a guideline for good fit.

Comparative Fit Index (CFI): In examining baseline comparisons, the CFI depends in large part on the average size of the correlations in the data. If the average correlation between variables is not high, then the CFI will not be very high. A CFI value of .95 or higher is desirable.

References documenting these, and other, features for some common indices are available for the RMSEA (Root Mean Square Error of Approximation), the SRMR (Standardized Root Mean Squared Residual), the CFI (Comparative Fit Index), and the TLI (Tucker-Lewis Index). Additional indices such as the AIC (Akaike Information Criterion) can be found in most SEM introductions. For each measure of fit, a decision as to what represents a good-enough fit between the model and the data reflects the researcher's modeling objective (perhaps challenging someone else's model, or improving measurement); whether or not the model is to be claimed as having been "tested"; and whether the researcher is comfortable "disregarding" evidence of the index-documented degree of ill fit.

Sample size, power, and estimation

Researchers agree samples should be large enough to provide stable coefficient estimates and reasonable testing power, but there is no general consensus regarding specific required sample sizes, or even how to determine appropriate sample sizes. Recommendations have been based on the number of coefficients to be estimated, the number of modeled variables, and Monte Carlo simulations addressing specific model coefficients.
Sample size recommendations based on the ratio of the number of indicators to latents are factor oriented and do not apply to models employing single indicators having fixed nonzero measurement error variances. Overall, for moderate sized models without statistically difficult-to-estimate coefficients, the required sample sizes (N's) seem roughly comparable to the N's required for a regression employing all the indicators. The larger the sample size, the greater the likelihood of including cases that are not causally homogeneous. Consequently, increasing N to improve the likelihood of being able to report a desired coefficient as statistically significant simultaneously increases the risk of model misspecification, and the power to detect the misspecification. Researchers seeking to learn from their modeling (including potentially learning their model requires adjustment or replacement) will strive for as large a sample size as permitted by funding and by their assessment of likely population-based causal heterogeneity/homogeneity. If the available N is huge, modeling sub-sets of cases can control for variables that might otherwise disrupt causal homogeneity. Researchers fearing they might have to report their model's deficiencies are torn between wanting a larger N to provide sufficient power to detect structural coefficients of interest, while avoiding the power capable of signaling model-data inconsistency. The huge variation in model structures and data characteristics suggests adequate sample sizes might be usefully located by considering other researchers' experiences (both good and bad) with models of comparable size and complexity that have been estimated with similar data.

Interpretation

Causal interpretations of SE models are the clearest and most understandable, but those interpretations will be fallacious/wrong if the model's structure does not correspond to the world's causal structure. Consequently, interpretation should address the overall status and structure of the model, not merely the model's estimated coefficients. Whether a model fits the data, and/or how a model came to fit the data, are paramount for interpretation. Data fit obtained by exploring, or by following successive modification indices, does not guarantee the model is wrong but raises serious doubts, because these approaches are prone to incorrectly modeling data features. For example, exploring to see how many factors are required preempts finding the data are not factor structured, especially if the factor model has been "persuaded" to fit via inclusion of measurement error covariances. Data's ability to speak against a postulated model is progressively eroded with each unwarranted inclusion of a "modification index suggested" effect or error covariance. It becomes exceedingly difficult to recover a proper model if the initial/base model contains several misspecifications.

Direct-effect estimates are interpreted in parallel to the interpretation of coefficients in regression equations, but with causal commitment. Each unit increase in a causal variable's value is viewed as producing a change of the estimated magnitude in the dependent variable's value, given control or adjustment for all the other operative/modeled causal mechanisms. Indirect effects are interpreted similarly, with the magnitude of a specific indirect effect equaling the product of the series of direct effects comprising that indirect effect.
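A tiny worked example of the product rule just described, using illustrative numbers only: if a → b has an estimated direct effect of 0.5 and b → c has a direct effect of 0.4, the indirect effect of a on c transmitted through b is

0.5 \times 0.4 = 0.20,

meaning a one-unit increase in a is expected to produce a 0.20-unit change in c via its effect on b, over and above any direct a → c effect included in the model.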
The units involved are the real scales of observed variables' values, and the assigned scale values for latent variables. A specified/fixed 1.0 effect of a latent on a specific indicator coordinates that indicator's scale with the latent variable's scale. The presumption that the remainder of the model remains constant or unchanging may require discounting indirect effects that might, in the real world, be simultaneously prompted by a real unit increase. And the unit increase itself might be inconsistent with what is possible in the real world, because there may be no known way to change the causal variable's value. If a model adjusts for measurement errors, the adjustment permits interpreting latent-level effects as referring to variations in true scores.

SEM interpretations depart most radically from regression interpretations when a network of causal coefficients connects the latent variables, because regressions do not contain estimates of indirect effects. SEM interpretations should convey the consequences of the patterns of indirect effects that carry effects from background variables through intervening variables to the downstream dependent variables. SEM interpretations encourage understanding how multiple worldly causal pathways can work in coordination, or independently, or even counteract one another. Direct effects may be counteracted (or reinforced) by indirect effects, or have their correlational implications counteracted (or reinforced) by the effects of common causes. The meaning and interpretation of specific estimates should be contextualized in the full model.

SE model interpretation should connect specific model causal segments to their variance and covariance implications. A single direct effect reports that the variance in the independent variable produces a specific amount of variation in the dependent variable's values, but the causal details of precisely what makes this happen remain unspecified, because a single effect coefficient does not contain sub-components available for integration into a structured story of how that effect arises. A more fine-grained SE model incorporating variables intervening between the cause and effect would be required to provide features constituting a story about how any one effect functions. Until such a model arrives, each estimated direct effect retains a tinge of the unknown, thereby invoking the essence of a theory. A parallel essential unknownness would accompany each estimated coefficient in even the more fine-grained model, so the sense of fundamental mystery is never fully eradicated from SE models.

Even if each modeled effect is unknown beyond the identity of the variables involved and the estimated magnitude of the effect, the structures linking multiple modeled effects provide opportunities to express how things function to coordinate the observed variables – thereby providing useful interpretation possibilities. For example, a common cause contributes to the covariance or correlation between two effected variables: if the value of the cause goes up, the values of both effects should also go up (assuming positive effects), even if we do not know the full story underlying each cause. (A correlation is the covariance between two variables that have both been standardized to have variance 1.0.)
Another interpretive contribution might be made by expressing how two causal variables can both explain variance in a dependent variable, as well as how covariance between two such causes can increase or decrease explained variance in the dependent variable. That is, interpretation may involve explaining how a pattern of effects and covariances can contribute to decreasing a dependent variable's variance. Understanding causal implications implicitly connects to understanding "controlling", and potentially explaining why some variables, but not others, should be controlled. As models become more complex these fundamental components can combine in non-intuitive ways, such as explaining how there can be no correlation (zero covariance) between two variables despite the variables being connected by a direct non-zero causal effect.

The statistical insignificance of an effect estimate indicates the estimate could rather easily arise as a random sampling variation around a null/zero effect, so interpreting the estimate as a real effect becomes equivocal. As in regression, the proportion of each dependent variable's variance explained by variations in the modeled causes is provided by R2, though the Blocked-Error R2 should be used if the dependent variable is involved in reciprocal or looped effects, or if it has an error variable correlated with any predictor's error variable.

The caution appearing in the Model Assessment section warrants repeating. Interpretation should be possible whether a model is or is not consistent with the data. The estimates report how the world would appear to someone believing the model – even if that belief is unfounded because the model happens to be wrong. Interpretation should acknowledge that the model coefficients may or may not correspond to "parameters" – because the model's coefficients may not have corresponding worldly structural features.

Adding new latent variables entering or exiting the original model at a few clear causal locations/variables contributes to detecting model misspecifications which could otherwise ruin coefficient interpretations. The correlations between the new latent's indicators and all the original indicators contribute to testing the original model's structure, because the few new and focused effect coefficients must work in coordination with the model's original direct and indirect effects to coordinate the new indicators with the original indicators. If the original model's structure was problematic, the sparse new causal connections will be insufficient to coordinate the new indicators with the original indicators, thereby signaling the inappropriateness of the original model's coefficients through model-data inconsistency. The correlational constraints grounded in null/zero effect coefficients, and in coefficients assigned fixed nonzero values, contribute to both model testing and coefficient estimation, and hence deserve acknowledgment as the scaffolding supporting the estimates and their interpretation.

Interpretations become progressively more complex for models containing interactions, nonlinearities, multiple groups, multiple levels, and categorical variables. Effects touching causal loops, reciprocal effects, or correlated residuals also require slightly revised interpretations. Careful interpretation of both failing and fitting models can provide research advancement.
To be dependable, the model should investigate academically informative causal structures, fit applicable data with understandable estimates, and not include vacuous coefficients. Dependable fitting models are rarer than failing models or models inappropriately bludgeoned into fitting, but appropriately-fitting models are possible. The multiple ways of conceptualizing PLS models complicate their interpretation. Many of the above comments are applicable if a PLS modeler adopts a realist perspective by striving to ensure their modeled indicators combine in a way that matches some existing but unavailable latent variable. Non-causal PLS models, such as those focusing primarily on R2 or out-of-sample predictive power, change the interpretation criteria by diminishing concern for whether or not the model's coefficients have worldly counterparts. The fundamental features differentiating the five PLS modeling perspectives discussed by Rigdon, Sarstedt and Ringle point to differences in PLS modelers' objectives, and corresponding differences in model features warranting interpretation. Caution should be taken when making claims of causality even when experiments or time-ordered investigations have been undertaken. The term causal model must be understood to mean "a model that conveys causal assumptions", not necessarily a model that produces validated causal conclusions – maybe it does, maybe it does not. Collecting data at multiple time points and using an experimental or quasi-experimental design can help rule out certain rival hypotheses, but even a randomized experiment cannot fully rule out threats to causal claims. No research design can fully guarantee causal structures.

Controversies and movements
Structural equation modeling is fraught with controversies. Researchers from the factor analytic tradition commonly attempt to reduce sets of multiple indicators to fewer, more manageable, scales or factor-scores for later use in path-structured models. This constitutes a stepwise process with the initial measurement step providing scales or factor-scores which are to be used later in a path-structured model. This stepwise approach seems obvious but actually confronts severe underlying deficiencies. The segmentation into steps interferes with thorough checking of whether the scales or factor-scores validly represent the indicators, and/or validly report on latent level effects. A structural equation model simultaneously incorporating both the measurement and latent-level structures not only checks whether the latent factors appropriately coordinate the indicators, it also checks whether each latent simultaneously and appropriately coordinates its own indicators with the indicators of theorized causes and/or consequences of that latent. If a latent is unable to do both these styles of coordination, the validity of that latent is questioned, and any scale or factor-scores purporting to measure that latent are questioned as well. The disagreements swirled around respect for, or disrespect of, evidence challenging the validity of postulated latent factors. The simmering, sometimes boiling, discussions resulted in a special issue of the journal Structural Equation Modeling focused on a target article by Hayduk and Glaser followed by several comments and a rejoinder, all made freely available, thanks to the efforts of George Marcoulides.
These discussions fueled disagreement over whether or not structural equation models should be tested for consistency with the data, and model testing became the next focus of discussions. Scholars having path-modeling histories tended to defend careful model testing while those with factor-histories tended to defend fit-indexing rather than fit-testing. These discussions led to a target article in Personality and Individual Differences by Paul Barrett who said: “In fact, I would now recommend banning ALL such indices from ever appearing in any paper as indicative of model “acceptability” or “degree of misfit”.” (page 821). Barrett’s article was also accompanied by commentary from both perspectives. The controversy over model testing declined as clear reporting of significant model-data inconsistency becomes mandatory. Scientists do not get to ignore, or fail to report, evidence just because they do not like what the evidence reports. The requirement of attending to evidence pointing toward model mis-specification underpins more recent concern for addressing “endogeneity” – a style of model mis-specification that interferes with estimation due to lack of independence of error/residual variables. In general, the controversy over the causal nature of structural equation models, including factor-models, has also been declining. Stan Mulaik, a factor-analysis stalwart, has acknowledged the causal basis of factor models. The comments by Bollen and Pearl regarding myths about causality in the context of SEM reinforced the centrality of causal thinking in the context of SEM. A briefer controversy focused on competing models. Comparing competing models can be very helpful but there are fundamental issues that cannot be resolved by creating two models and retaining the better fitting model. The statistical sophistication of presentations like Levy and Hancock (2007), for example, makes it easy to overlook that a researcher might begin with one terrible model and one atrocious model, and end by retaining the structurally terrible model because some index reports it as better fitting than the atrocious model. It is unfortunate that even otherwise strong SEM texts like Kline (2016) remain disturbingly weak in their presentation of model testing. Overall, the contributions that can be made by structural equation modeling depend on careful and detailed model assessment, even if a failing model happens to be the best available. An additional controversy that touched the fringes of the previous controversies awaits ignition. Factor models and theory-embedded factor structures having multiple indicators tend to fail, and dropping weak indicators tends to reduce the model-data inconsistency. Reducing the number of indicators leads to concern for, and controversy over, the minimum number of indicators required to support a latent variable in a structural equation model. Researchers tied to factor tradition can be persuaded to reduce the number of indicators to three per latent variable, but three or even two indicators may still be inconsistent with a proposed underlying factor common cause. Hayduk and Littvay (2012) discussed how to think about, defend, and adjust for measurement error, when using only a single indicator for each modeled latent variable. Single indicators have been used effectively in SE models for a long time, but controversy remains only as far away as a reviewer who has considered measurement from only the factor analytic perspective. 
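One widely taught way to implement the single-indicator measurement-error adjustment mentioned above (a generic textbook-style sketch, not necessarily the exact procedure recommended by Hayduk and Littvay) is to fix, rather than estimate, the indicator's loading and error variance:

\[
x = 1.0\,\xi + \varepsilon, \qquad \operatorname{Var}(\varepsilon) \ \text{fixed at} \ (1 - \rho_{xx})\operatorname{Var}(x),
\]

where ρxx is a reliability estimate for the single indicator x obtained from prior evidence. Fixing these two values leaves the latent ξ identified even though it has only one indicator, which is what makes single-indicator latents usable in SE models.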
Though declining, traces of these controversies are scattered throughout the SEM literature, and you can easily incite disagreement by asking: What should be done with models that are significantly inconsistent with the data? Or: Does model simplicity override respect for evidence of data inconsistency? Or: What weight should be given to indexes which show close, or not-so-close, data fit for some models? Or: Should we be especially lenient toward, and "reward", parsimonious models that are inconsistent with the data? Or: Given that the RMSEA condones disregarding some real ill fit for each model degree of freedom, doesn't that mean that people testing models with null hypotheses of non-zero RMSEA are doing deficient model testing? Considerable statistical sophistication is required to cogently address such questions, though responses will likely center on the non-technical matter of whether or not researchers are required to report and respect evidence.

Extensions, modeling alternatives, and statistical kin
Categorical dependent variables
Categorical intervening variables
Copulas
Deep Path Modelling
Exploratory Structural Equation Modeling
Fusion validity models
Item response theory models
Latent class models
Latent growth modeling
Link functions
Longitudinal models
Measurement invariance models
Mixture model
Multilevel models, hierarchical models (e.g. people nested in groups)
Multiple group modelling with or without constraints between groups (genders, cultures, test forms, languages, etc.)
Multi-method multi-trait models
Random intercepts models
Structural Equation Model Trees
Structural Equation Multidimensional scaling

Software
Structural equation modeling programs differ widely in their capabilities and user requirements.
Woodward–Hoffmann rules
The Woodward–Hoffmann rules (or the pericyclic selection rules) are a set of rules devised by Robert Burns Woodward and Roald Hoffmann to rationalize or predict certain aspects of the stereochemistry and activation energy of pericyclic reactions, an important class of reactions in organic chemistry. The rules originate in certain symmetries of the molecule's orbital structure that any molecular Hamiltonian conserves. Consequently, any symmetry-violating reaction must couple extensively to the environment; this imposes an energy barrier on its occurrence, and such reactions are called symmetry-forbidden. Their opposites are symmetry-allowed. Although the symmetry-imposed barrier is often formidable (up to ca. 5 eV or 480 kJ/mol in the case of a forbidden [2+2] cycloaddition), the prohibition is not absolute, and symmetry-forbidden reactions can still take place if other factors (e.g. strain release) favor the reaction. Likewise, a symmetry-allowed reaction may be preempted by an insurmountable energetic barrier resulting from factors unrelated to orbital symmetry. All known cases only violate the rules superficially; instead, different parts of the mechanism become asynchronous, and each step conforms to the rules.

Background and terminology
A pericyclic reaction is an organic reaction that proceeds via a single concerted and cyclic transition state, the geometry of which allows for the continuous overlap of a cycle of (π and/or σ) orbitals. The terms conrotatory and disrotatory describe the relative sense of bond rotation involved in electrocyclic ring-opening and -closing reactions. In a disrotatory process, the breaking or forming bond's two ends rotate in opposing directions (one clockwise, one counterclockwise); in a conrotatory process, they rotate in the same direction (both clockwise or both counterclockwise). Eventually, it was recognized that thermally-promoted pericyclic reactions in general obey a single set of generalized selection rules, depending on the electron count and topology of the orbital interactions. The key concept of orbital topology or faciality was introduced to unify several classes of pericyclic reactions under a single conceptual framework. In short, a set of contiguous atoms and their associated orbitals that react as one unit in a pericyclic reaction is known as a component, and each component is said to be antarafacial or suprafacial depending on whether the orbital lobes that interact during the reaction are on the opposite or same side of the nodal plane, respectively. (The older terms conrotatory and disrotatory, which are applicable to electrocyclic ring opening and closing only, are subsumed by the terms antarafacial and suprafacial, respectively, under this more general classification system.)

History
Woodward and Hoffmann developed the pericyclic selection rules after performing extensive orbital-overlap calculations. At the time, Woodward wanted to know whether certain electrocyclic reactions might help synthesize vitamin B12. Chemists knew that such reactions exhibited striking stereospecificity, but could not predict which stereoisomer a reaction might select. In 1965, Woodward and Hoffmann realized that a simple set of rules explained the observed stereospecificity at the ends of open-chain conjugated polyenes when heated or irradiated.
In their original publication, they summarized the experimental evidence and molecular orbital analysis as follows: In an open-chain system containing 4n π electrons, the orbital symmetry of the highest occupied molecular orbital is such that a bonding interaction between the ends must involve overlap between orbital envelopes on opposite faces of the system, and this can only be achieved in a conrotatory process. In open systems containing (4n + 2) π electrons, terminal bonding interaction within ground-state molecules requires overlap of orbital envelopes on the same face of the system, attainable only by disrotatory displacements. In a photochemical reaction an electron in the HOMO of the reactant is promoted to an excited state, leading to a reversal of terminal symmetry relationships and stereospecificity. In 1969, they would use correlation diagrams to state a generalized pericyclic selection rule equivalent to that now attached to their name: a pericyclic reaction is allowed if the sum of the number of suprafacial 4q + 2 components and the number of antarafacial 4r components is odd. In the intervening four years, Howard Zimmerman and Michael J. S. Dewar proposed an equally general conceptual framework: the Möbius-Hückel concept, or aromatic transition state theory. In the Dewar-Zimmerman approach, the orbital overlap topology (Hückel or Möbius) and electron count (4n + 2 or 4n) result in either an aromatic or antiaromatic transition state. Meanwhile, Kenichi Fukui analyzed the frontier orbitals of such systems. A process in which the HOMO-LUMO interaction is constructive (results in a net bonding interaction) is favorable and considered symmetry-allowed, while a process in which the HOMO-LUMO interaction is non-constructive (results in bonding and antibonding interactions that cancel) is disfavorable and considered symmetry-forbidden. Though conceptually distinct, aromatic transition state theory (Zimmerman and Dewar), frontier molecular orbital theory (Fukui), and orbital symmetry conservation (Woodward and Hoffmann) all make identical predictions. The Woodward–Hoffmann rules exemplify molecular orbital theory's power, and indeed helped demonstrate that useful chemical results could arise from orbital analysis. The discovery would earn Hoffmann and Fukui the 1981 Nobel Prize in Chemistry. By that time, Woodward had died, and so was ineligible for the prize.

Illustrative examples
The interconversion of model cyclobutene and butadiene derivatives under thermal (heating) and photochemical (ultraviolet irradiation) conditions is illustrative. The Woodward–Hoffmann rules apply to either direction of a pericyclic process. Due to the inherent ring strain of cyclobutene derivatives, the equilibrium between the cyclobutene and the 1,3-butadiene lies far to the right. Hence, under thermal conditions, the ring opening of the cyclobutene to the 1,3-butadiene is strongly favored by thermodynamics. On the other hand, under irradiation by ultraviolet light, a photostationary state is reached, a composition which depends on both the absorbance and the quantum yield of the forward and reverse reactions at a particular wavelength. Due to the different degrees of conjugation of 1,3-butadienes and cyclobutenes, only the 1,3-butadiene will have a significant absorbance at higher wavelengths, assuming the absence of other chromophores. Hence, irradiation of the 1,3-butadiene at such a wavelength can result in high conversion to the cyclobutene.
Thermolysis of trans-1,2,3,4-tetramethyl-1-cyclobutene (1) afforded only one geometric isomer, (E,E)-3,4-dimethyl-2,4-hexadiene (2); the (Z,Z) and the (E,Z) geometric isomers were not detected in the product mixture. Similarly, thermolysis of cis-1,2,3,4-tetramethyl-1-cyclobutene (3) afforded only (E,Z) isomer 4. In both ring opening reactions, the carbons on the ends of the breaking σ-bond rotate in the same direction. On the other hand, the opposite stereochemical course was followed under photochemical activation: When the related compound (E,E)-2,4-hexadiene (5) was exposed to light, cis-3,4-dimethyl-1-cyclobutene (6) was formed exclusively as a result of electrocyclic ring closure. This requires the ends of the π-system to rotate in opposite directions to form the new σ-bond. Thermolysis of 6 follows the same stereochemical course as 3: electrocyclic ring opening leads to the formation of (E,Z)-2,4-hexadiene (7) and not 5. The Woodward-Hoffmann rules explain these results through orbital overlap: In the case of a photochemically driven electrocyclic ring-closure of buta-1,3-diene, electronic promotion causes Ψ3 to become the HOMO and the reaction mechanism must be disrotatory. Conversely, in the electrocyclic ring-closure of the substituted hexa-1,3,5-triene pictured below, the reaction proceeds through a disrotatory mechanism.

Rule
The Woodward–Hoffmann rules can be stated succinctly as a single sentence: a ground-state pericyclic process involving N electron pairs and A antarafacial components is symmetry-allowed if and only if N + A is odd. A ground-state pericyclic process is brought about by addition of thermal energy (i.e., heating the system, symbolized by Δ). In contrast, an excited-state pericyclic process takes place if a reactant is promoted to an electronically excited state by activation with ultraviolet light (i.e., irradiating the system, symbolized by hν). It is important to recognize, however, that the operative mechanism of a formally pericyclic reaction taking place under photochemical irradiation is generally not as simple or clearcut as this dichotomy suggests. Several modes of electronic excitation are usually possible, and electronically excited molecules may undergo intersystem crossing, radiationless decay, or relax to an unfavorable equilibrium geometry before the excited-state pericyclic process can take place. Thus, many apparent pericyclic reactions that take place under irradiation are actually thought to be stepwise processes involving diradical intermediates. Nevertheless, it is frequently observed that the pericyclic selection rules become reversed when switching from thermal to photochemical activation. This can be rationalized by considering the correlation of the first electronic excited states of the reactants and products. Although more of a useful heuristic than a rule, a corresponding generalized selection principle for photochemical pericyclic reactions can be stated: A pericyclic process involving N electron pairs and A antarafacial components is often favored under photochemical conditions if N + A is even. Pericyclic reactions involving an odd number of electrons are also known. With respect to application of the generalized pericyclic selection rule, these systems can generally be treated as though one more electron were involved.
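A brief worked illustration of the parity bookkeeping in this rule, using reactions already described in this article:

\[
\text{Diels–Alder cycloaddition } [\pi 4_\mathrm{s} + \pi 2_\mathrm{s}]: \quad N = \tfrac{4 + 2}{2} = 3, \;\; A = 0 \;\Rightarrow\; N + A = 3 \ (\text{odd}) \;\Rightarrow\; \text{thermally allowed.}
\]
\[
\text{Suprafacial–suprafacial } [\pi 2_\mathrm{s} + \pi 2_\mathrm{s}] \text{ cycloaddition}: \quad N = 2, \;\; A = 0 \;\Rightarrow\; N + A = 2 \ (\text{even}) \;\Rightarrow\; \text{thermally forbidden.}
\]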
In the language of aromatic transition state theory, the Woodward–Hoffmann rules can be restated as follows: A pericyclic transition state involving (4n + 2) electrons with Hückel topology or 4n electrons with Möbius topology is aromatic and allowed, while a pericyclic transition state involving 4n-electrons with Hückel topology or (4n + 2)-electrons with Möbius topology is antiaromatic and forbidden. Correlation diagrams Longuet-Higgins and E. W. Abrahamson showed that the Woodward–Hoffmann rules can best be derived by examining the correlation diagram of a given reaction. A symmetry element is a point of reference (usually a plane or a line) about which an object is symmetric with respect to a symmetry operation. If a symmetry element is present throughout the reaction mechanism (reactant, transition state, and product), it is called a conserved symmetry element. Then, throughout the reaction, the symmetry of molecular orbitals with respect to this element must be conserved. That is, molecular orbitals that are symmetric with respect to the symmetry element in the starting material must be correlated to (transform into) orbitals symmetric with respect to that element in the product. Conversely, the same statement holds for antisymmetry with respect to a conserved symmetry element. A molecular orbital correlation diagram correlates molecular orbitals of the starting materials and the product based upon conservation of symmetry. From a molecular orbital correlation diagram one can construct an electronic state correlation diagram that correlates electronic states (i.e. ground state, and excited states) of the reactants with electronic states of the products. Correlation diagrams can then be used to predict the height of transition state barriers. Although orbital "symmetry" is used as a tool for sketching orbital and state correlation diagrams, the absolute presence or absence of a symmetry element is not critical for the determination of whether a reaction is allowed or forbidden. That is, the introduction of a simple substituent that formally disrupts a symmetry plane or axis (e.g., a methyl group) does not generally affect the assessment of whether a reaction is allowed or forbidden. Instead, the symmetry present in an unsubstituted analog is used to simplify the construction of orbital correlation diagrams and avoid the need to perform calculations. Only the phase relationships between orbitals are important when judging whether a reaction is "symmetry"-allowed or forbidden. Moreover, orbital correlations can still be made, even if there are no conserved symmetry elements (e.g., 1,5-sigmatropic shifts and ene reactions). For this reason, the Woodward–Hoffmann, Fukui, and Dewar–Zimmerman analyses are equally broad in their applicability, though a certain approach may be easier or more intuitive to apply than another, depending on the reaction one wishes to analyze. Electrocyclic reactions Considering the electrocyclic ring closure of the substituted 1,3-butadiene, the reaction can proceed through either a conrotatory or a disrotatory reaction mechanism. As shown to the left, in the conrotatory transition state there is a C2 axis of symmetry and in the disrotatory transition state there is a σ mirror plane of symmetry. In order to correlate orbitals of the starting material and product, one must determine whether the molecular orbitals are symmetric or antisymmetric with respect to these symmetry elements. 
The π-system molecular orbitals of butadiene are shown to the right along with the symmetry element with respect to which each is symmetric; each is antisymmetric with respect to the other element. For example, Ψ2 of 1,3-butadiene is symmetric with respect to 180° rotation about the C2 axis, and antisymmetric with respect to reflection in the mirror plane. Ψ1 and Ψ3 are symmetric with respect to the mirror plane as the sign of the p-orbital lobes is preserved under the symmetry transformation. Similarly, Ψ1 and Ψ3 are antisymmetric with respect to the C2 axis as the rotation inverts the sign of the p-orbital lobes uniformly. Conversely Ψ2 and Ψ4 are symmetric with respect to the C2 axis and antisymmetric with respect to the σ mirror plane. The same analysis can be carried out for the molecular orbitals of cyclobutene. The result of both symmetry operations on each of the MOs is shown to the left. As the σ and σ* orbitals lie entirely in the plane containing C2 perpendicular to σ, they are uniformly symmetric and antisymmetric (respectively) to both symmetry elements. On the other hand, π is symmetric with respect to reflection and antisymmetric with respect to rotation, while π* is antisymmetric with respect to reflection and symmetric with respect to rotation. Correlation lines are drawn to connect molecular orbitals in the starting material and the product that have the same symmetry with respect to the conserved symmetry element. In the case of the conrotatory 4 electron electrocyclic ring closure of 1,3-butadiene, the lowest molecular orbital Ψ1 is antisymmetric (A) with respect to the C2 axis. So this molecular orbital is correlated with the π orbital of cyclobutene, the lowest energy orbital that is also (A) with respect to the C2 axis. Similarly, Ψ2, which is symmetric (S) with respect to the C2 axis, is correlated with σ of cyclobutene. The final two correlations are between the antisymmetric (A) molecular orbitals Ψ3 and σ*, and the symmetric (S) molecular orbitals Ψ4 and π*. Similarly, there exists a correlation diagram for a disrotatory mechanism. In this mechanism, the symmetry element that persists throughout the entire mechanism is the σ mirror plane of reflection. Here the lowest energy MO Ψ1 of 1,3-butadiene is symmetric with respect to the reflection plane, and as such correlates with the symmetric σ MO of cyclobutene. Similarly the higher energy pair of symmetric molecular orbitals Ψ3 and π correlate. As for the antisymmetric molecular orbitals, the lower energy pair Ψ2 and π* form a correlation pair, as do Ψ4 and σ*. Evaluating the two mechanisms, the conrotatory mechanism is predicted to have a lower barrier because it transforms the electrons from ground-state orbitals of the reactants (Ψ1 and Ψ2) into ground-state orbitals of the product (σ and π). Conversely, the disrotatory mechanism forces the conversion of the Ψ1 orbital into the σ orbital, and the Ψ2 orbital into the π* orbital. Thus the two electrons in the ground-state Ψ2 orbital are transferred to an excited antibonding orbital, creating a doubly excited electronic state of the cyclobutene. This would lead to a significantly higher transition state barrier to reaction. However, as reactions do not take place between disjointed molecular orbitals, but electronic states, the final analysis involves state correlation diagrams. A state correlation diagram correlates the overall symmetry of electronic states in the starting material and product.
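The symmetry assignments just described can be collected in a compact table (this is only a recap of the labels stated above, with S = symmetric and A = antisymmetric with respect to the indicated element):

\begin{tabular}{lcc}
\hline
Orbital & $C_2$ axis (conrotatory) & $\sigma$ plane (disrotatory) \\
\hline
$\Psi_1$ & A & S \\
$\Psi_2$ & S & A \\
$\Psi_3$ & A & S \\
$\Psi_4$ & S & A \\
$\sigma$ & S & S \\
$\pi$ & A & S \\
$\pi^*$ & S & A \\
$\sigma^*$ & A & A \\
\hline
\end{tabular}

Reading down a column and pairing orbitals of like symmetry in order of increasing energy reproduces the correlations listed in the text.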
The ground state of 1,3-butadiene, as shown above, has 2 electrons in Ψ1 and 2 electrons in Ψ2, so it is represented as Ψ1²Ψ2². The overall symmetry of the state is the product of the symmetries of each filled orbital, with multiplicity for doubly populated orbitals. Thus, as Ψ1 is antisymmetric with respect to the C2 axis and Ψ2 is symmetric, the total state is represented by A²S². To see why this particular product is overall S, note that S can be represented as (+1) and A as (−1). This derives from the fact that the signs of the lobes of the p-orbitals are multiplied by (+1) if they are symmetric with respect to a symmetry transformation (i.e. unaltered) and multiplied by (−1) if they are antisymmetric with respect to a symmetry transformation (i.e. inverted). Thus A²S² = (−1)²(+1)² = +1 = S. The first excited state (ES-1) is formed from promoting an electron from the HOMO to the LUMO, and thus is represented as Ψ1²Ψ2Ψ3. As Ψ1 is A, Ψ2 is S, and Ψ3 is A, the symmetry of this state is given by A²SA = A. Now considering the electronic states of the product, cyclobutene, the ground state is given by σ²π², which has symmetry S²A² = S. The first excited state (ES-1′) is again formed from a promotion of an electron from the HOMO to the LUMO, so in this case it is represented as σ²ππ*. The symmetry of this state is S²AS = A. The ground state Ψ1²Ψ2² of 1,3-butadiene correlates with the ground state σ²π² of cyclobutene as demonstrated in the MO correlation diagram above. Ψ1 correlates with π and Ψ2 correlates with σ. Thus the orbitals making up Ψ1²Ψ2² must transform into the orbitals making up σ²π² under a conrotatory mechanism. However, the state ES-1 does not correlate with the state ES-1′, as the molecular orbitals do not transform into each other under the symmetry requirement seen in the molecular orbital correlation diagram. Instead, as Ψ1 correlates with π, Ψ2 correlates with σ, and Ψ3 correlates with σ*, the state Ψ1²Ψ2Ψ3 attempts to transform into π²σσ*, which is a different excited state. So ES-1 attempts to correlate with ES-2′ = σπ²σ*, which is higher in energy than ES-1′. Similarly, ES-1′ = σ²ππ* attempts to correlate with ES-2 = Ψ1Ψ2²Ψ4. These correlations cannot actually take place due to the quantum-mechanical rule known as the avoided crossing rule, which says that energetic configurations of the same symmetry cannot cross on an energy level correlation diagram. In short, this is caused by mixing of states of the same symmetry when they are brought close enough in energy. So instead a high energetic barrier arises from the forced transformation of ES-1 into ES-1′. In the diagram below the symmetry-preferred correlations are shown as dashed lines and the bold curved lines indicate the actual correlations with the high energetic barrier. The same analysis can be applied to the disrotatory mechanism to create the following state correlation diagram. Thus if the molecule is in the ground state it will proceed through the conrotatory mechanism (i.e. under thermal control) to avoid an electronic barrier. However, if the molecule is in the first excited state (i.e. under photochemical control), the electronic barrier is present in the conrotatory mechanism and the reaction will proceed through the disrotatory mechanism. These are not completely distinct as both the conrotatory and disrotatory mechanisms lie on the same potential surface.
Thus a more correct statement is that as a ground state molecule explores the potential energy surface, it is more likely to attain the activation barrier required to undergo the conrotatory mechanism.

Cycloaddition reactions
The Woodward–Hoffmann rules can also explain bimolecular cycloaddition reactions through correlation diagrams. A [πp + πq] cycloaddition brings together two components, one with p π-electrons, and the other with q π-electrons. Cycloaddition reactions are further characterized as suprafacial (s) or antarafacial (a) with respect to each of the π components. (See below "General formulation" for a detailed description of the generalization of WH notation to all pericyclic processes.)

[2+2] Cycloadditions
For ordinary alkenes, [2+2] cycloadditions are only observed under photochemical activation. The rationale for the non-observation of thermal [2+2] cycloadditions begins with the analysis of the four possible stereochemical consequences for the [2+2] cycloaddition: [π2s + π2s], [π2a + π2s], [π2s + π2a], [π2a + π2a]. The geometrically most plausible [π2s + π2s] mode is forbidden under thermal conditions, while the [π2a + π2s] and [π2s + π2a] approaches are allowed from the point of view of symmetry but are rare due to an unfavorable strain and steric profile. Consider the [π2s + π2s] cycloaddition. This mechanism leads to retention of stereochemistry in the product, as illustrated to the right. Two symmetry elements are present in the starting materials, transition state, and product: σ1 and σ2. σ1 is the mirror plane between the components perpendicular to the p-orbitals; σ2 splits the molecules in half perpendicular to the σ-bonds. These are both local-symmetry elements in the case that the components are not identical. To determine symmetry and antisymmetry with respect to σ1 and σ2, the starting material molecular orbitals must be considered in tandem. The figure to the right shows the molecular orbital correlation diagram for the [π2s + π2s] cycloaddition. The two π and π* molecular orbitals of the starting materials are characterized by their symmetry with respect to first σ1 and then σ2. Similarly, the σ and σ* molecular orbitals of the product are characterized by their symmetry. In the correlation diagram, molecular orbital transformations over the course of the reaction must conserve the symmetry of the molecular orbitals. Thus πSS correlates with σSS, πAS correlates with σ*AS, π*SA correlates with σSA, and finally π*AA correlates with σ*AA. Due to conservation of orbital symmetry, the bonding orbital πAS is forced to correlate with the antibonding orbital σ*AS. Thus a high barrier is predicted. This is made precise in the state correlation diagram below. The ground state in the starting materials is the electronic state where πSS and πAS are both doubly populated – i.e. the state (SS)²(AS)². As such, this state attempts to correlate with the electronic state in the product where both σSS and σ*AS are doubly populated – i.e. the state (SS)²(AS)². However, this state is neither the ground state (SS)²(SA)² of cyclobutane, nor the first excited state ES-1′ = (SS)²(SA)(AS), where an electron is promoted from the HOMO to the LUMO.

[4+2] cycloadditions
A [4+2] cycloaddition is exemplified by the Diels-Alder reaction. The simplest case is the reaction of 1,3-butadiene with ethylene to form cyclohexene. One symmetry element is conserved in this transformation – the mirror plane through the center of the reactants as shown to the left.
The molecular orbitals of the reactants are the set {Ψ1, Ψ2, Ψ3, Ψ4} of molecular orbitals of 1,3-butadiene shown above, along with π and π* of ethylene. Ψ1 is symmetric, Ψ2 is antisymmetric, Ψ3 is symmetric, and Ψ4 is antisymmetric with respect to the mirror plane. Similarly π is symmetric and π* is antisymmetric with respect to the mirror plane. The molecular orbitals of the product are the symmetric and antisymmetric combinations of the two newly formed σ and σ* bonds and the π and π* orbitals as shown below. Correlating the pairs of orbitals in the starting materials and product of the same symmetry and increasing energy gives the correlation diagram to the right. Because this transforms the ground-state bonding molecular orbitals of the starting materials into the ground-state bonding orbitals of the product in a symmetry-conserving manner, this reaction is predicted not to have the large energetic barrier present in the ground state [2+2] reaction above. To make the analysis precise, one can construct the state correlation diagram for the general [4+2]-cycloaddition. As before, the ground state is the electronic state depicted in the molecular orbital correlation diagram to the right. This can be described as Ψ1²π²Ψ2², of total symmetry S²S²A² = S. This correlates with the ground state of the cyclohexene, σS²σA²π², which is also of total symmetry S²A²S² = S. As such this ground state reaction is not predicted to have a high symmetry-imposed barrier. One can also construct the excited-state correlations as is done above. Here, there is a high energetic barrier to a photo-induced Diels-Alder reaction under a suprafacial-suprafacial bond topology due to the avoided crossing shown below.

Group transfer reactions
The symmetry-imposed barrier heights of group transfer reactions can also be analyzed using correlation diagrams. A model reaction is the transfer of a pair of hydrogen atoms from ethane to perdeuterioethylene shown to the right. The only conserved symmetry element in this reaction is the mirror plane through the center of the molecules as shown to the left. The molecular orbitals of the system are constructed as symmetric and antisymmetric combinations of σ and σ* C–H bonds in ethane and π and π* bonds in the deutero-substituted ethene. Thus the lowest energy MO is the symmetric sum of the two C–H σ-bonds (σS), followed by the antisymmetric sum (σA). The two highest energy MOs are formed from linear combinations of the σCH antibonds – highest is the antisymmetric σ*A, preceded by the symmetric σ*S at slightly lower energy. In the middle of the energetic scale are the two remaining MOs that are the πCC and π*CC of ethene. The full molecular orbital correlation diagram is constructed by matching pairs of symmetric and antisymmetric MOs of increasing total energy, as explained above. As can be seen in the adjacent diagram, as the bonding orbitals of the reactants exactly correlate with the bonding orbitals of the products, this reaction is not predicted to have a high electronic symmetry-imposed barrier.

Selection rules
Using correlation diagrams one can derive selection rules for the following generalized classes of pericyclic reactions. Each of these particular classes is further generalized in the generalized Woodward–Hoffmann rules. The more inclusive bond topology descriptors antarafacial and suprafacial subsume the terms conrotatory and disrotatory, respectively.
Antarafacial refers to bond making or breaking through the opposite face of a π system, p orbital, or σ bond, while suprafacial refers to the process occurring through the same face. A suprafacial transformation at a chiral center preserves stereochemistry, whereas an antarafacial transformation reverses stereochemistry.

Electrocyclic reactions
The selection rule for electrocyclization reactions is given in the original statement of the Woodward–Hoffmann rules. If a generalized electrocyclic ring closure occurs in a polyene of 4n π-electrons, then it is conrotatory under thermal conditions and disrotatory under photochemical conditions. Conversely, in a polyene of 4n + 2 π-electrons, an electrocyclic ring closure is disrotatory under thermal conditions and conrotatory under photochemical conditions. This result can either be derived via an FMO analysis based upon the sign of the p orbital lobes of the HOMO of the polyene or with correlation diagrams. Taking the first approach: in the ground state, if a polyene has 4n electrons, the outer p-orbitals of the HOMO that form the σ bond in the electrocyclized product are of opposite signs. Thus a constructive overlap is only produced under a conrotatory or antarafacial process. Conversely, for a polyene with 4n + 2 electrons, the outer p-orbitals of the ground state HOMO are of the same sign. Thus constructive orbital overlap occurs with a disrotatory or suprafacial process. Additionally, the correlation diagram for any 4n electrocyclic reaction will resemble the diagram for the 4 electron cyclization of 1,3-butadiene, while the correlation diagram for any 4n + 2 electron electrocyclic reaction will resemble the correlation diagram for the 6 electron cyclization of 1,3,5-hexatriene. This is summarized in the following table:

Sigmatropic rearrangement reactions
A general sigmatropic rearrangement can be classified as order [i,j], meaning that a σ bond originally between atoms denoted 1 and 1', adjacent to one or more π systems, is shifted to between atoms i and j. Thus it migrates (i − 1), (j − 1) atoms away from its original position. A formal symmetry analysis via correlation diagrams is of no use in the study of sigmatropic rearrangements as there are, in general, only symmetry elements present in the transition state. Except in special cases (e.g. [3,3]-rearrangements), there are no symmetry elements that are conserved as the reaction coordinate is traversed. Nevertheless, orbital correlations between starting materials and products can still be analyzed, and correlations of starting material orbitals with high energy product orbitals will, as usual, result in "symmetry-forbidden" processes. However, an FMO based approach (or the Dewar-Zimmerman analysis) is more straightforward to apply. One of the most prevalent classes of sigmatropic shifts is classified as [1,j], where j is odd. That means one terminus of the σ-bond migrates (j − 1) bonds away across a π-system while the other terminus does not migrate. It is a reaction involving j + 1 electrons: j − 1 from the π-system and 2 from the σ-bond. Using FMO analysis, [1,j]-sigmatropic rearrangements are allowed if the transition state has constructive overlap between the migrating group and the accepting p orbital of the HOMO. In [1,j]-sigmatropic rearrangements, if j + 1 = 4n, then supra/antara is thermally allowed, and if j + 1 = 4n + 2, then supra/supra or antara/antara is thermally allowed.
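A quick worked check of this electron-count bookkeeping for the two most commonly cited hydrogen shifts (these conclusions are restated in the examples later in this article):

\[
[1,5]\text{-H shift}: \quad j + 1 = 6 = 4(1) + 2 \;\Rightarrow\; \text{supra/supra thermally allowed.}
\]
\[
[1,3]\text{-H shift}: \quad j + 1 = 4 = 4(1) \;\Rightarrow\; \text{only the supra/antara pathway is thermally allowed, and the antarafacial geometry is difficult to attain.}
\]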
The other prevalent class of sigmatropic rearrangements is the [3,3] shift, notably the Cope and Claisen rearrangements. Here, the constructive interactions must be between the HOMOs of the two allyl radical fragments in the transition state. The ground state HOMO Ψ2 of the allyl fragment is shown below. As the terminal p-orbitals are of opposite sign, this reaction can either take place in a supra/supra topology, or an antara/antara topology. The selection rules for an [i,j]-sigmatropic rearrangement are as follows:
For supra/supra or antara/antara [i,j]-sigmatropic shifts, if i + j = 4n + 2 they are thermally allowed and if i + j = 4n they are photochemically allowed.
For supra/antara [i,j]-sigmatropic shifts, if i + j = 4n they are thermally allowed, and if i + j = 4n + 2 they are photochemically allowed.
This is summarized in the following table:

Cycloaddition reactions
A general [p+q]-cycloaddition is a concerted addition reaction between two components, one with p π-electrons, and one with q π-electrons. This reaction is symmetry allowed under the following conditions:
For a supra/supra or antara/antara cycloaddition, it is thermally allowed if p + q = 4n + 2 and photochemically allowed if p + q = 4n.
For a supra/antara cycloaddition, it is thermally allowed if p + q = 4n and photochemically allowed if p + q = 4n + 2.
This is summarized in the following table:

Group transfer reactions
A general double group transfer reaction which is synchronous can be represented as an interaction between a component with p π electrons and a component with q π electrons as shown. Then the selection rules are the same as for the generalized cycloaddition reactions. That is:
For supra/supra or antara/antara double group transfers, if p + q = 4n + 2 it is thermally allowed, and if p + q = 4n it is photochemically allowed.
For supra/antara double group transfers, if p + q = 4n it is thermally allowed, and if p + q = 4n + 2 it is photochemically allowed.
This is summarized in the following table:
The case of q = 0 corresponds to the thermal elimination of the "transferred" R groups. There is evidence that the pyrolytic eliminations of dihydrogen and ethane from 1,4-cyclohexadiene and 3,3,6,6-tetramethyl-1,4-cyclohexadiene, respectively, represent examples of this type of pericyclic process. The ene reaction is often classified as a type of group transfer process, even though it does not involve the transfer of two σ-bonded groups. Rather, only one σ-bond is transferred while a second σ-bond is formed from a broken π-bond. As an all suprafacial process involving 6 electrons, it is symmetry-allowed under thermal conditions. The Woodward-Hoffmann symbol for the ene reaction is [π2s + π2s + σ2s] (see below).

General formulation
Though the Woodward–Hoffmann rules were first stated in terms of electrocyclic processes, they were eventually generalized to all pericyclic reactions, as the similarity and patterns in the above selection rules should indicate. In the generalized Woodward–Hoffmann rules, everything is characterized in terms of antarafacial and suprafacial bond topologies. The terms conrotatory and disrotatory are sufficient for describing the relative sense of bond rotation in electrocyclic ring closing or opening reactions, as illustrated on the right. However, they are unsuitable for describing the topologies of bond forming and breaking taking place in a general pericyclic reaction.
As described in detail below, in the general formulation of the Woodward–Hoffmann rules, the bond rotation terms conrotatory and disrotatory are subsumed by the bond topology (or faciality) terms antarafacial and suprafacial, respectively. These descriptors can be used to characterize the topology of the bond forming and breaking that takes place in any pericyclic process.

Woodward-Hoffmann notation
A component is any part of a molecule or molecules that functions as a unit in a pericyclic reaction. A component consists of one or more atoms and any of the following types of associated orbitals:
An isolated p- or spx-orbital (unfilled or filled, symbol ω)
A conjugated π system (symbol π)
A σ bond (symbol σ)
The electron count of a component is the number of electrons in the orbital(s) of the component:
The electron count of an unfilled ω orbital (i.e., an empty p orbital) is 0, while that of a filled ω orbital (i.e., a lone pair) is 2.
The electron count of a conjugated π system with n double bonds is 2n (or 2n + 2, if a (formal) lone pair from a heteroatom or carbanion is conjugated thereto).
The electron count of a σ bond is 2.
The bond topology of a component can be suprafacial or antarafacial:
The relationship is suprafacial (symbol: s) when the interactions with the π system or p orbital occur on the same side of the nodal plane (think syn). For a σ bond, it corresponds to interactions occurring on the two "interior" lobes or two "exterior" lobes of the bond.
The relationship is antarafacial (symbol: a) when the interactions with the π system or p orbital occur on opposite sides of the nodal plane (think anti). For a σ bond, it corresponds to interactions occurring on one "interior" lobe and one "exterior" lobe of the bond.
Using this notation, all pericyclic reactions can be assigned a descriptor, consisting of a series of symbols of the form σNs/a, πNs/a, or ωNs/a, connected by + signs and enclosed in brackets, describing, in order, the type of orbital(s), number of electrons, and bond topology involved for each component. Some illustrative examples follow:
The Diels-Alder reaction (a (4+2)-cycloaddition) is [π4s + π2s].
The 1,3-dipolar cycloaddition of ozone and an olefin in the first step of ozonolysis (a (3+2)-cycloaddition) is [π4s + π2s].
The cheletropic addition of sulfur dioxide to 1,3-butadiene (a (4+1)-cheletropic addition) is [ω0a + π4s] + [ω2s + π4s].
The Cope rearrangement (a [3,3]-sigmatropic shift) is [π2s + σ2s + π2s] or [π2a + σ2s + π2a].
The [1,3]-alkyl migration with inversion at carbon discovered by Berson (a [1,3]-sigmatropic shift) is [σ2a + π2s].
The conrotatory electrocyclic ring closing of 1,3-butadiene (a 4π-electrocyclization) is [π4a].
The conrotatory electrocyclic ring opening of cyclobutene (a reverse 4π-electrocyclization) is [σ2a + π2s] or [σ2s + π2a].
The disrotatory electrocyclic ring closing of 1,3-cyclooctadien-5-ide anion (a 6π-electrocyclization) is [π6s].
A Wagner-Meerwein shift of a carbocation (a [1,2]-sigmatropic shift) is [ω0s + σ2s].
Antarafacial and suprafacial are associated with (conrotation or inversion) and (disrotation or retention), respectively. Note that a single descriptor may correspond to two pericyclic processes that are chemically distinct, that a reaction and its microscopic reverse are often described with two different descriptors, and that a single process may have more than one correct descriptor. One can verify, using the pericyclic selection rule given below, that all of these reactions are allowed processes.
Original statement
Using this notation, Woodward and Hoffmann state in their 1969 review the general formulation for all pericyclic reactions as follows: A ground-state pericyclic change is symmetry-allowed when the total number of (4q+2)s and (4r)a components is odd. Here, (4q + 2)s and (4r)a refer to suprafacial (4q + 2)-electron and antarafacial (4r)-electron components, respectively. Moreover, this criterion should be interpreted as both sufficient (stated above) and necessary (not explicitly stated above; see: if and only if).

Derivation of an alternative statement
Alternatively, the general statement can be formulated in terms of the total number of electrons using simple rules of divisibility by a straightforward analysis of two cases. First, consider the case where the total number of electrons is 4n + 2: 4n + 2 = a(4q + 2)s + b(4p + 2)a + c(4t)s + d(4r)a, where a, b, c, and d are coefficients indicating the number of each type of component. This equation implies that one of, but not both, a or b is odd, for if a and b are both even or both odd, then the sum of the four terms is 0 (mod 4). The generalized statement of the Woodward–Hoffmann rules states that a + d is odd if the reaction is allowed. Now, if a is even, then this implies that d is odd. Since b is odd in this case, the number of antarafacial components, b + d, is even. Likewise, if a is odd, then d is even. Since b is even in this case, the number of antarafacial components, b + d, is again even. Thus, regardless of the initial assumption of parity for a and b, the number of antarafacial components is even when the electron count is 4n + 2 and the reaction is allowed. Contrariwise, in the forbidden case, where a + d is even, b + d is odd. In the case where the total number of electrons is 4n, similar arguments (omitted here) lead to the conclusion that the number of antarafacial components b + d must be odd in the allowed case and even in the forbidden case. Finally, to complete the argument, and show that this new criterion is truly equivalent to the original criterion, one needs to argue the converse statements as well, namely, that the number of antarafacial components b + d and the electron count (4n + 2 or 4n) implies the parity of a + d that is given by the Woodward–Hoffmann rules (odd for allowed, even for forbidden). Another round of (somewhat tedious) case analyses will easily show this to be the case. The pericyclic selection rule states: A pericyclic process involving 4n+2 or 4n electrons is thermally allowed if and only if the number of antarafacial components involved is even or odd, respectively. In this formulation, the electron count refers to the entire reacting system, rather than to individual components, as enumerated in Woodward and Hoffmann's original statement. In practice, an even or odd number of antarafacial components usually means zero or one antarafacial components, respectively, as transition states involving two or more antarafacial components are typically disfavored by strain. As exceptions, certain intramolecular reactions may be geometrically constrained in such a way that enforces an antarafacial trajectory for multiple components. In addition, in some cases, e.g., the Cope rearrangement, the same (not necessarily strained) transition state geometry can be considered to contain two supra or two antara π components, depending on how one draws the connections between orbital lobes. (This ambiguity is a consequence of the convention that overlap of either both interior or both exterior lobes of a σ component can be considered to be suprafacial.)
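Because the component-counting form of the rule is purely mechanical, it can be expressed as a short program. The following is a minimal Python sketch (an illustration written for this article, not published code) that tallies suprafacial (4q + 2) components and antarafacial (4r) components from a list of (electron count, topology) pairs and applies the parity test; the example descriptors checked at the end are ones already listed above.

def thermally_allowed(components):
    """Generalized Woodward-Hoffmann parity test.

    components: iterable of (electron_count, topology) pairs, where
    topology is 's' for suprafacial or 'a' for antarafacial.
    The process is symmetry-allowed in the ground state when the number
    of suprafacial (4q + 2)-electron components plus the number of
    antarafacial (4r)-electron components is odd.
    """
    count = 0
    for electrons, topology in components:
        is_4q_plus_2 = (electrons % 4 == 2)
        if is_4q_plus_2 and topology == 's':
            count += 1
        elif not is_4q_plus_2 and topology == 'a':
            count += 1
    return count % 2 == 1

# Diels-Alder, [pi4s + pi2s]: thermally allowed.
print(thermally_allowed([(4, 's'), (2, 's')]))   # True
# Suprafacial-suprafacial [pi2s + pi2s] cycloaddition: thermally forbidden.
print(thermally_allowed([(2, 's'), (2, 's')]))   # False
# Conrotatory ring closing of 1,3-butadiene, [pi4a]: thermally allowed.
print(thermally_allowed([(4, 'a')]))             # True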
This alternative formulation makes the equivalence of the Woodward–Hoffmann rules to the Dewar–Zimmerman analysis (see below) clear. An even total number of phase inversions is equivalent to an even number of antarafacial components and corresponds to Hückel topology, requiring 4n + 2 electrons for aromaticity, while an odd total number of phase inversions is equivalent to an odd number of antarafacial components and corresponds to Möbius topology, requiring 4n electrons for aromaticity. To summarize aromatic transition state theory: Thermal pericyclic reactions proceed via (4n + 2)-electron Hückel or (4n)-electron Möbius transition states. As a mnemonic, the above formulation can be further restated as the following: A ground-state pericyclic process involving N electron pairs and A antarafacial components is symmetry-allowed if and only if N + A is odd.

Alternative proof of equivalence
The equivalence of the two formulations can also be seen by a simple parity argument without appeal to case analysis. Proposition. The following formulations of the Woodward–Hoffmann rules are equivalent: (A) For a pericyclic reaction, if the sum of the number of suprafacial 4q + 2 components and antarafacial 4r components is odd then it is thermally allowed; otherwise the reaction is thermally forbidden. (B) For a pericyclic reaction, if the total number of antarafacial components of a (4n + 2)-electron reaction is even or the total number of antarafacial components of a 4n-electron reaction is odd then it is thermally allowed; otherwise the reaction is thermally forbidden. Proof of equivalence: Index the components of a k-component pericyclic reaction by i = 1, ..., k, and assign to component i, on the basis of its Woodward-Hoffmann symbol (orbital type, electron count, and topology), the pair of parity symbols (f(i), g(i)) according to the following rules: f(i) = 1 if the component's electron count is of the form 4q + 2 and f(i) = 0 if it is of the form 4r; g(i) = 0 if the component is suprafacial and g(i) = 1 if it is antarafacial. We then have a mathematically equivalent restatement of (A): (A') A collection of symbols is thermally allowed if and only if the number of symbols with f(i) + g(i) odd is odd. Since the total electron count is 4n + 2 or 4n precisely when Σf(i) (the number of (4q + 2)-electron components) is odd or even, respectively, while Σg(i) gives the number of antarafacial components, we can also restate (B): (B') A collection of symbols is thermally allowed if and only if exactly one of Σf(i) or Σg(i) is odd. It suffices to show that (A') and (B') are equivalent. Exactly one of Σf(i) or Σg(i) is odd if and only if Σf(i) + Σg(i) = Σ(f(i) + g(i)) is odd. If f(i) + g(i) is even, then f(i) + g(i) ≡ 0 (mod 2); hence, omission of symbols with f(i) + g(i) even from a collection will not change the parity of Σ(f(i) + g(i)). On the other hand, when f(i) + g(i) is odd, f(i) + g(i) ≡ 1 (mod 2), and summing over such components simply enumerates the number of components with f(i) + g(i) odd. Therefore, Σ(f(i) + g(i)) and the number of symbols in a collection with f(i) + g(i) odd have the same parity. Since formulations (A') and (B') are equivalent, so are (A) and (B), as claimed. □ To give a concrete example, a hypothetical reaction with the descriptor [π6s + π4a + π2a] would be assigned the collection {(1, 0, 1), (0, 1, 2), (1, 1, 3)}, where each triple lists (f(i), g(i), i). There are two components, (1, 0, 1) and (0, 1, 2), with f(i) + g(i) odd, so the reaction is not allowed by (A'). Likewise, Σf(i) = 2 and Σg(i) = 2 are both even, so (B') yields the same conclusion (as it must): the reaction is not allowed.

Examples
This formulation for a 2 component reaction is equivalent to the selection rules for [p + q]-cycloaddition reactions shown in the following table: If the total number of electrons is 4n + 2, then one is in the bottom row of the table. The reaction is thermally allowed if it is suprafacial with respect to both components or antarafacial with respect to both components.
That is to say the number of antarafacial components is even (it is 0 or 2). Similarly if the total number of electrons is 4n, then one is in the top row of the table. This is thermally allowed if it is suprafacial with respect to one component and antarafacial with respect to the other. Thus the total number of antarafacial components is always odd as it is always 1. The following are some common ground state (i.e. thermal) reaction classes analyzed in light of the generalized Woodward–Hoffmann rules. [2+2] Cycloaddition A [2+2]-cycloaddition is a 4 electron process that brings together two components. Thus, by the above general WH rules, it is only allowed if the reaction is antarafacial with respect to exactly one component. This is the same conclusion reached with correlation diagrams in the section above. A rare but stereochemically unambiguous example of a [π2s + π2a]-cycloaddition is shown on the right. The strain and steric properties of the trans double bond enables this generally kinetically unfavorable process. cis, trans-1,5-Cyclooctadiene is also believed to undergo dimerization via this mode. Ketenes are a large class of reactants favoring [2 + 2] cycloaddition with olefins. The MO analysis of ketene cycloaddition is rendered complicated and ambiguous by the simultaneous but independent interaction of the orthogonal orbitals of the ketene but may involve a [π2s + π2a] interaction as well. [4+2] Cycloaddition The synchronous 6π-electron Diels-Alder reaction is a [π4s + π2s]-cycloaddition (i.e. suprafacial with respect to both components), as exemplified by the reaction to the right. Thus as the total number of antarafacial components is 0, which is even, the reaction is symmetry-allowed. This prediction agrees with experiment as the Diels-Alder reaction is a rather facile pericyclic reaction. 4n Electrocyclic Reaction A 4n electron electrocyclic ring opening reaction can be considered to have 2 components – the π-system and the breaking σ-bond. With respect to the π-system, the reaction is suprafacial. However, with a conrotatory mechanism, as shown in the figure above, the reaction is antarafacial with respect to the σ-bond. Conversely with a disrotatory mechanism it is suprafacial with respect to the breaking σ-bond. By the above rules, for a 4n electron pericyclic reaction of 2 components, there must be one antarafacial component. Thus the reaction must proceed through a conrotatory mechanism. This agrees with the result derived in the correlation diagrams above. 4n + 2 electrocyclic reaction A 4n + 2 electrocyclic ring opening reaction is also a 2-component pericyclic reaction which is suprafacial with respect to the π-system. Thus, in order for the reaction to be allowed, the number of antarafacial components must be 0, i.e. it must be suprafacial with respect to the breaking σ-bond as well. Thus a disrotatory mechanism is symmetry-allowed. [1,j]-sigmatropic rearrangement A [1,j]-sigmatropic rearrangement is also a two component pericyclic reaction: one component is the π-system, the other component is the migrating group. The simplest case is a [1,j]-hydride shift across a π-system where j is odd. In this case, as the hydrogen has only a spherically symmetric s orbital, the reaction must be suprafacial with respect to the hydrogen. The total number of electrons involved is (j + 1) as there are (j − 1)/2 π-bond plus the σ bond involved in the reaction. If j = 4n − 1 then it must be antarafacial, and if j = 4n + 1, then it must be suprafacial. 
This agrees with experiment that [1,3]-hydride shifts are generally not observed as the symmetry-allowed antarafacial process is not feasible, but [1,5]-hydride shifts are quite facile. For a [1,j]-alkyl shift, where the reaction can be antarafacial (i.e. invert stereochemistry) with respect to the carbon center, the same rules apply. If j = 4n − 1 then the reaction is symmetry-allowed if it is either antarafacial with respect to the π-system, or inverts stereochemistry at the carbon. If j = 4n + 1 then the reaction is symmetry-allowed if it is suprafacial with respect to the π-system and retains stereochemistry at the carbon center. On the right is one of the first examples of a [1,3]-sigmatropic shift to be discovered, reported by Berson in 1967. In order to allow for inversion of configuration, as the σ bond breaks, the C(H)(D) moiety twists around at the transition state, with the hybridization of the carbon approximating sp2, so that the remaining unhybridized p orbital maintains overlap with both carbons 1 and 3. Equivalence of other theoretical models Dewar–Zimmerman analysis The generalized Woodward–Hoffmann rules, first given in 1969, are equivalent to an earlier general approach, the Möbius-Hückel concept of Zimmerman, which was first stated in 1966 and is also known as aromatic transition state theory. As its central tenet, aromatic transition state theory holds that 'allowed' pericyclic reactions proceed via transition states with aromatic character, while 'forbidden' pericyclic reactions would encounter transition states that are antiaromatic in nature. In the Dewar-Zimmerman analysis, one is concerned with the topology of the transition state of the pericyclic reaction. If the transition state involves 4n electrons, the Möbius topology is aromatic and the Hückel topology is antiaromatic, while if the transition state involves 4n + 2 electrons, the Hückel topology is aromatic and the Möbius topology is antiaromatic. The parity of the number of phase inversions (described in detail below) in the transition state determines its topology. A Möbius topology involves an odd number of phase inversions whereas a Hückel topology involves an even number of phase inversions. In connection with Woodward–Hoffmann terminology, the number of antarafacial components and the number of phase inversions always have the same parity. Consequently, an odd number of antarafacial components gives Möbius topology, while an even number gives Hückel topology. Thus, to restate the results of aromatic transition state theory in the language of Woodward and Hoffmann, a 4n-electron reaction is thermally allowed if and only if it has an odd number of antarafacial components (i.e., Möbius topology); a (4n + 2)-electron reaction is thermally allowed if and only if it has an even number of antarafacial components (i.e., Hückel topology). Procedure for Dewar-Zimmerman analysis (examples shown on the right): Step 1. Shade in all basis orbitals that are part of the pericyclic system. The shading can be arbitrary. In particular the shading does not need to reflect the phasing of the polyene MOs; each basis orbital simply need to have two oppositely phased lobes in the case of p or spx hybrid orbitals, or a single phase in the case of an s orbital. Step 2. Draw connections between the lobes of basis orbitals that are geometrically well-disposed to interact at the transition state. The connections to be made depend on the transition state topology. 
(For example, in the figure, different connections are shown in the cases of con- and disrotatory electrocyclization.) Step 3. Count the number of connections that occur between lobes of opposite shading: each of these connections constitutes a phase inversion. If the number of phase inversions is even, the transition state is Hückel, while if the number of phase inversions is odd, the transition state is Möbius. Step 4. Conclude that the pericyclic reaction is allowed if the electron count is 4n + 2 and the transition state is Hückel, or if the electron count is 4n and the transition state is Möbius; otherwise, conclude that the pericyclic reaction is forbidden. Importantly, any scheme of assigning relative phases to the basis orbitals is acceptable, as inverting the phase of any single orbital adds 0 or ±2 phase inversions to the total, an even number, so that the parity of the number of inversions (number of inversions modulo 2) is unchanged. Reinterpretation with conceptual density functional theory Recently, the Woodward–Hoffmann rules have been reinterpreted using conceptual density functional theory (DFT). The key to the analysis is the dual descriptor function Δf(r), proposed by Christophe Morell, André Grand and Alejandro Toro-Labbé: the second derivative of the electron density ρ(r) with respect to the number of electrons N, Δf(r) = ∂²ρ(r)/∂N². This response function is important because the reaction of two components A and B involving a transfer of electrons will depend on the responsiveness of the electron density to electron donation or acceptance, i.e. the derivative of the Fukui function f(r) with respect to N. In fact, from a simplistic viewpoint, the dual descriptor function gives a readout on the electrophilicity or nucleophilicity of the various regions of the molecule. For Δf(r) > 0, the region is electrophilic, and for Δf(r) < 0, the region is nucleophilic. Using the frontier molecular orbital assumption and a finite difference approximation of the Fukui function, one may write the dual descriptor as Δf(r) ≈ ρ_LUMO(r) − ρ_HOMO(r). This makes intuitive sense: if a region is better at accepting electrons than donating them, then the LUMO must dominate and the dual descriptor function will be positive; conversely, if a region is better at donating electrons, then the HOMO term will dominate and the descriptor will be negative. Notice that although the concepts of phase and orbitals are replaced simply by the notion of electron density, this function still takes both positive and negative values. The Woodward–Hoffmann rules are reinterpreted using this formulation by matching favorable interactions between regions of electron density for which the dual descriptor has opposite signs. This is equivalent to maximizing predicted favorable interactions and minimizing repulsive interactions. For the case of a [4+2] cycloaddition, a simplified schematic of the reactants with the dual descriptor function colored (red = positive, blue = negative) is shown in the optimal supra/supra configuration to the left. This method correctly predicts the WH rules for the major classes of pericyclic reactions. Exceptions In Chapter 12 of The Conservation of Orbital Symmetry, entitled "Violations," Woodward and Hoffmann famously stated: "There are none! Nor can violations be expected of so fundamental a principle of maximum bonding." This pronouncement notwithstanding, it is important to recognize that the Woodward–Hoffmann rules are used to predict relative barrier heights, and thus likely reaction mechanisms, and that they only take into account barriers due to conservation of orbital symmetry.
Thus it is not guaranteed that a WH symmetry-allowed reaction actually takes place in a facile manner. Conversely, it is possible, upon enough energetic input, to achieve an anti-Woodward–Hoffmann product. This is especially prevalent in sterically constrained systems, where the WH product has an added steric barrier to overcome. For example, in the electrocyclic ring-opening of the dimethylbicyclo[3.2.0]heptene derivative (1), a conrotatory mechanism is not possible because of the resulting angle strain, and the reaction proceeds slowly through a disrotatory mechanism at 400 °C to give a cycloheptadiene product. Violations may also be observed in cases with very strong thermodynamic driving forces. The decomposition of dioxetane-1,2-dione to two molecules of carbon dioxide, famous for its role in the luminescence of glowsticks, has been scrutinized computationally. In the absence of fluorescers, the reaction is now believed to proceed in a concerted (though asynchronous) fashion, via a retro-[2+2]-cycloaddition that formally violates the Woodward–Hoffmann rules. Similarly, a recent paper describes how mechanical stress can be used to reshape chemical reaction pathways to lead to products that apparently violate the Woodward–Hoffmann rules. In this work, the authors used ultrasound irradiation to induce a mechanical stress on link-functionalized polymers attached syn or anti on the cyclobutene ring. Computational studies predict that the mechanical force, resulting from friction of the polymers, induces bond lengthening along the reaction coordinate of the conrotatory mechanism in the anti-bisubstituted cyclobutene, and along the reaction coordinate of the disrotatory mechanism in the syn-bisubstituted cyclobutene. Thus in the syn-bisubstituted cyclobutene, the anti-WH product is predicted to be formed. This computational prediction was backed up by experiment on the system below. Link-functionalized polymers were conjugated to cis-benzocyclobutene in both syn and anti conformations. As predicted, both substrates gave the same (Z,Z) product, as determined by quenching by a stereospecific Diels–Alder reaction with the substituted maleimide. In particular, the syn-substituted substrate gave the anti-WH product, presumably because mechanical stretching along the coordinate of the disrotatory pathway lowered its barrier enough to bias that mechanism. Controversy It has been stated that Elias James Corey, also a Nobel Prize winner, feels he is responsible for the ideas that laid the foundation for this research, and that Woodward unfairly neglected to credit him in the discovery. In a 2004 memoir published in the Journal of Organic Chemistry, Corey makes his claim to priority of the idea: "On May 4, 1964, I suggested to my colleague R. B. Woodward a simple explanation involving the symmetry of the perturbed (HOMO) molecular orbitals for the stereoselective cyclobutene to 1,3-butadiene and 1,3,5-hexatriene to cyclohexadiene conversions that provided the basis for the further development of these ideas into what became known as the Woodward–Hoffmann rules". Corey, then 35, was working into the evening on Monday, May 4, as he and the other driven chemists often did. At about 8:30 p.m., he dropped by Woodward's office, and Woodward posed a question about how to predict the type of ring a chain of atoms would form. After some discussion, Corey proposed that the configuration of electrons governed the course of the reaction.
Woodward insisted the solution would not work, but Corey left drawings in the office, sure that he was on to something. "I felt that this was going to be a really interesting development and was looking forward to some sort of joint undertaking," he wrote. But the next day, Woodward flew into Corey's office as he and a colleague were leaving for lunch and presented Corey's idea as his own – and then left. Corey was stunned. In a 2004 rebuttal published in the Angewandte Chemie, Roald Hoffmann denied the claim: he quotes Woodward from a lecture given in 1966 saying: "I REMEMBER very clearly—and it still surprises me somewhat—that the crucial flash of enlightenment came to me in algebraic, rather than in pictorial or geometric form. Out of the blue, it occurred to me that the coefficients of the terminal terms in the mathematical expression representing the highest occupied molecular orbital of butadiene were of opposite sign, while those of the corresponding expression for hexatriene possessed the same sign. From here it was but a short step to the geometric, and more obviously chemically relevant, view that in the internal cyclisation of a diene, the top face of one terminal atom should attack the bottom face of the other, while in the triene case, the formation of a new bond should involve the top (or pari passu, the bottom) faces of both terminal atoms." In addition, Hoffmann points out that in two publications from 1963 and 1965, Corey described a total synthesis of the compound dihydrocostunolide. Although they describe an electrocyclic reaction, Corey has nothing to offer with respect to explaining the stereospecificity of the synthesis. This photochemical reaction involving 6 = 4×1 + 2 electrons is now recognized as conrotatory. See also Woodward's rules for calculating UV absorptions Torquoselectivity Notes References Journal Articles Understanding the Woodward–Hoffmann Rules by Using Changes in Electron Density Eponymous chemical rules Physical organic chemistry Cycloadditions
0.777872
0.987695
0.7683
Tinbergen's four questions
Tinbergen's four questions, named after 20th century biologist Nikolaas Tinbergen, are complementary categories of explanations for animal behaviour. These are also commonly referred to as levels of analysis. It suggests that an integrative understanding of behaviour must include ultimate (evolutionary) explanations, in particular: behavioural adaptive functions phylogenetic history; and the proximate explanations underlying physiological mechanisms ontogenetic/developmental history. Four categories of questions and explanations When asked about the purpose of sight in humans and animals, even elementary-school children can answer that animals have vision to help them find food and avoid danger (function/adaptation). Biologists have three additional explanations: sight is caused by a particular series of evolutionary steps (phylogeny), the mechanics of the eye (mechanism/causation), and even the process of an individual's development (ontogeny). This schema constitutes a basic framework of the overlapping behavioural fields of ethology, behavioural ecology, comparative psychology, sociobiology, evolutionary psychology, and anthropology. Julian Huxley identified the first three questions. Niko Tinbergen gave only the fourth question, as Huxley's questions failed to distinguish between survival value and evolutionary history; Tinbergen's fourth question helped resolve this problem. Evolutionary (ultimate) explanations First question: Function (adaptation) Darwin's theory of evolution by natural selection is the only scientific explanation for why an animal's behaviour is usually well adapted for survival and reproduction in its environment. However, claiming that a particular mechanism is well suited to the present environment is different from claiming that this mechanism was selected for in the past due to its history of being adaptive. The literature conceptualizes the relationship between function and evolution in two ways. On the one hand, function and evolution are often presented as separate and distinct explanations of behaviour. On the other hand, the common definition of adaptation is a central concept in evolution: a trait that was functional to the reproductive success of the organism and that is thus now present due to being selected for; that is, function and evolution are inseparable. However, a trait can have a current function that is adaptive without being an adaptation in this sense, if for instance the environment has changed. Imagine an environment in which having a small body suddenly conferred benefit on an organism when previously body size had had no effect on survival. A small body's function in the environment would then be adaptive, but it would not become an adaptation until enough generations had passed in which small bodies were advantageous to reproduction for small bodies to be selected for. Given this, it is best to understand that presently functional traits might not all have been produced by natural selection. The term "function" is preferable to "adaptation", because adaptation is often construed as implying that it was selected for due to past function. This corresponds to Aristotle's final cause. Second question: Phylogeny (evolution) Evolution captures both the history of an organism via its phylogeny, and the history of natural selection working on function to produce adaptations. There are several reasons why natural selection may fail to achieve optimal design (Mayr 2001:140–143; Buss et al. 1998). 
One entails random processes such as mutation and environmental events acting on small populations. Another entails the constraints resulting from early evolutionary development. Each organism harbors traits, both anatomical and behavioural, of previous phylogenetic stages, since many traits are retained as species evolve. Reconstructing the phylogeny of a species often makes it possible to understand the "uniqueness" of recent characteristics: Earlier phylogenetic stages and (pre-) conditions which persist often also determine the form of more modern characteristics. For instance, the vertebrate eye (including the human eye) has a blind spot, whereas octopus eyes do not. In those two lineages, the eye was originally constructed one way or the other. Once the vertebrate eye was constructed, there were no intermediate forms that were both adaptive and would have enabled it to evolve without a blind spot. It corresponds to Aristotle's formal cause. Proximate explanations Third question: Mechanism (causation) Some prominent classes of Proximate causal mechanisms include: The brain: For example, Broca's area, a small section of the human brain, has a critical role in linguistic capability. Hormones: Chemicals used to communicate among cells of an individual organism. Testosterone, for instance, stimulates aggressive behaviour in a number of species. Pheromones: Chemicals used to communicate among members of the same species. Some species (e.g., dogs and some moths) use pheromones to attract mates. In examining living organisms, biologists are confronted with diverse levels of complexity (e.g. chemical, physiological, psychological, social). They therefore investigate causal and functional relations within and between these levels. A biochemist might examine, for instance, the influence of social and ecological conditions on the release of certain neurotransmitters and hormones, and the effects of such releases on behaviour, e.g. stress during birth has a tocolytic (contraction-suppressing) effect. However, awareness of neurotransmitters and the structure of neurons is not by itself enough to understand higher levels of neuroanatomic structure or behaviour: "The whole is more than the sum of its parts." All levels must be considered as being equally important: cf. transdisciplinarity, Nicolai Hartmann's "Laws about the Levels of Complexity." It corresponds to Aristotle's efficient cause. Fourth question: Ontogeny (development) Ontogeny is the process of development of an individual organism from the zygote through the embryo to the adult form. In the latter half of the twentieth century, social scientists debated whether human behaviour was the product of nature (genes) or nurture (environment in the developmental period, including culture). An example of interaction (as distinct from the sum of the components) involves familiarity from childhood. In a number of species, individuals prefer to associate with familiar individuals but prefer to mate with unfamiliar ones (Alcock 2001:85–89, Incest taboo, Incest). By inference, genes affecting living together interact with the environment differently from genes affecting mating behaviour. A simple example of interaction involves plants: Some plants grow toward the light (phototropism) and some away from gravity (gravitropism). Many forms of developmental learning have a critical period, for instance, for imprinting among geese and language acquisition among humans. In such cases, genes determine the timing of the environmental impact. 
A related concept is labeled "biased learning" (Alcock 2001:101–103) and "prepared learning" (Wilson, 1998:86–87). For instance, after eating food that subsequently made them sick, rats are predisposed to associate that food with smell, not sound (Alcock 2001:101–103). Many primate species learn to fear snakes with little experience (Wilson, 1998:86–87). See developmental biology and developmental psychology. It corresponds to Aristotle's material cause. Causal relationships The figure shows the causal relationships among the categories of explanations. The left-hand side represents the evolutionary explanations at the species level; the right-hand side represents the proximate explanations at the individual level. In the middle are those processes' end products—genes (i.e., genome) and behaviour, both of which can be analyzed at both levels. Evolution, which is determined by both function and phylogeny, results in the genes of a population. The genes of an individual interact with its developmental environment, resulting in mechanisms, such as a nervous system. A mechanism (which is also an end-product in its own right) interacts with the individual's immediate environment, resulting in its behaviour. Here we return to the population level. Over many generations, the success of the species' behaviour in its ancestral environment—or more technically, the environment of evolutionary adaptedness (EEA) may result in evolution as measured by a change in its genes. In sum, there are two processes—one at the population level and one at the individual level—which are influenced by environments in three time periods. Examples Vision Four ways of explaining visual perception: Function: To find food and avoid danger. Phylogeny: The vertebrate eye initially developed with a blind spot, but the lack of adaptive intermediate forms prevented the loss of the blind spot. Mechanism: The lens of the eye focuses light on the retina. Development: Neurons need the stimulation of light to wire the eye to the brain (Moore, 2001:98–99). Westermarck effect Four ways of explaining the Westermarck effect, the lack of sexual interest in one's siblings (Wilson, 1998:189–196): Function: To discourage inbreeding, which decreases the number of viable offspring. Phylogeny: Found in a number of mammalian species, suggesting initial evolution tens of millions of years ago. Mechanism: Little is known about the neuromechanism. Ontogeny: Results from familiarity with another individual early in life, especially in the first 30 months for humans. The effect is manifested in nonrelatives raised together, for instance, in kibbutzs. Romantic love Four ways of explaining romantic love have been used to provide a comprehensive biological definition (Bode & Kushnick, 2021): Function: Mate choice, courtship, sex, pair-bonding. Phylogeny: Evolved by co-opting mother-infant bonding mechanisms sometime in the recent evolutionary history of humans. Mechanisms: Social, psychological mate choice, genetic, neurobiological, and endocrinological mechanisms cause romantic love. Ontogeny: Romantic love can first manifest in childhood, manifests with all its characteristics following puberty, but can manifest across the lifespan. Sleep Sleep has been described using Tinbergen's four questions as a framework (Bode & Kuula, 2021): Function: Energy restoration, metabolic regulation, thermoregulation, boosting immune system, detoxification, brain maturation, circuit reorganization, synaptic optimization, avoiding danger. 
Phylogeny: Sleep exists in invertebrates, lower vertebrates, and higher vertebrates. NREM and REM sleep exist in eutheria, marsupialiformes, and also evolved in birds. Mechanisms: Mechanisms regulate wakefulness, sleep onset, and sleep. Specific mechanisms involve neurotransmitters, genes, neural structures, and the circadian rhythm. Ontogeny: Sleep manifests differently in babies, infants, children, adolescents, adults, and older adults. Differences include the stages of sleep, sleep duration, and sex differences. Use of the four-question schema as "periodic table" Konrad Lorenz, Julian Huxley and Niko Tinbergen were familiar with both conceptual categories (i.e. the central questions of biological research: 1. - 4. and the levels of inquiry: a. - g.), the tabulation was made by Gerhard Medicus. The tabulated schema is used as the central organizing device in many animal behaviour, ethology, behavioural ecology and evolutionary psychology textbooks (e.g., Alcock, 2001). One advantage of this organizational system, what might be called the "periodic table of life sciences," is that it highlights gaps in knowledge, analogous to the role played by the periodic table of elements in the early years of chemistry. This "biopsychosocial" framework clarifies and classifies the associations between the various levels of the natural and social sciences, and it helps to integrate the social and natural sciences into a "tree of knowledge" (see also Nicolai Hartmann's "Laws about the Levels of Complexity"). Especially for the social sciences, this model helps to provide an integrative, foundational model for interdisciplinary collaboration, teaching and research (see The Four Central Questions of Biological Research Using Ethology as an Example – PDF). References Sources Alcock, John (2001) Animal Behaviour: An Evolutionary Approach, Sinauer, 7th edition. . Buss, David M., Martie G. Haselton, Todd K. Shackelford, et al. (1998) "Adaptations, Exaptations, and Spandrels," American Psychologist, 53:533–548. http://www.sscnet.ucla.edu/comm/haselton/webdocs/spandrels.html Buss, David M. (2004) Evolutionary Psychology: The New Science of the Mind, Pearson Education, 2nd edition. . Cartwright, John (2000) Evolution and Human Behaviour, MIT Press, . Krebs, John R., Davies N.B. (1993) An Introduction to Behavioural Ecology, Blackwell Publishing, . Lorenz, Konrad (1937) Biologische Fragestellungen in der Tierpsychologie (I.e. Biological Questions in Animal Psychology). Zeitschrift für Tierpsychologie, 1: 24–32. Mayr, Ernst (2001) What Evolution Is, Basic Books. . Gerhard Medicus (2017, chapter 1). Being Human – Bridging the Gap between the Sciences of Body and Mind, Berlin VWB Medicus, Gerhard (2017) Being Human – Bridging the Gap between the Sciences of Body and Mind. Berlin: VWB 2015, Nesse, Randolph M (2013) "Tinbergen's Four Questions, Organized," Trends in Ecology and Evolution, 28:681-682. Moore, David S. (2001) The Dependent Gene: The Fallacy of 'Nature vs. Nurture''', Henry Holt. . Pinker, Steven (1994) The Language Instinct: How the Mind Creates Language, Harper Perennial. . Tinbergen, Niko (1963) "On Aims and Methods of Ethology," Zeitschrift für Tierpsychologie, 20: 410–433. Wilson, Edward O. (1998) Consilience: The Unity of Knowledge'', Vintage Books. . 
External links Diagrams The Four Areas of Biology pdf The Four Areas and Levels of Inquiry pdf Tinbergen's four questions within the "Fundamental Theory of Human Sciences" ppt Tinbergen's Four Questions, organized pdf Derivative works On aims and methods of cognitive ethology (pdf) by Jamieson and Bekoff. Behavioral ecology Ethology Evolutionary psychology Sociobiology
0.78164
0.982909
0.768281
Potential of mean force
When examining a system computationally one may be interested in knowing how the free energy changes as a function of some inter- or intramolecular coordinate (such as the distance between two atoms or a torsional angle). The free energy surface along the chosen coordinate is referred to as the potential of mean force (PMF). If the system of interest is in a solvent, then the PMF also incorporates the solvent effects. General description The PMF can be obtained in Monte Carlo or molecular dynamics simulations to examine how a system's energy changes as a function of some specific reaction coordinate parameter. For example, it may examine how the system's energy changes as a function of the distance between two residues, or as a protein is pulled through a lipid bilayer. The coordinate can be a geometrical coordinate or a more general energetic (solvent) coordinate. PMF calculations are often used in conjunction with umbrella sampling, because an unbiased simulation typically fails to sample the relevant range of the chosen coordinate adequately as it proceeds. Mathematical description The potential of mean force of a system with N particles is by construction the potential that gives the average force over all the configurations of all the n+1...N particles acting on a particle j at any fixed configuration keeping fixed a set of particles 1...n: −∇_j w^(n) = ⟨−∇_j U⟩_{n+1...N} for j = 1, 2, ..., n. Above, −∇_j w^(n) is the averaged force, i.e. "mean force", on particle j, and w^(n) is the so-called potential of mean force. For n = 2, w^(2)(r) is the average work needed to bring the two particles from infinite separation to a distance r. It is also related to the radial distribution function of the system, g(r), by g(r) = exp(−β w^(2)(r)), that is, w^(2)(r) = −k_B T ln g(r), where β = 1/(k_B T). Application The potential of mean force is usually applied in the Boltzmann inversion method as a first guess for the effective pair interaction potential that ought to reproduce the correct radial distribution function in a mesoscopic simulation. Lemkul et al. have used steered molecular dynamics simulations to calculate the potential of mean force to assess the stability of Alzheimer's amyloid protofibrils. Gosai et al. have also used umbrella sampling simulations to show that the potential of mean force between thrombin and its aptamer (a protein–ligand complex) decreases under the effect of electric fields. See also Statistical potential Free energy perturbation Potential energy surface References Further reading McQuarrie, D. A. Statistical Mechanics. Chandler, D. (1987). Introduction to Modern Statistical Mechanics. Oxford University Press. External links Potential of Mean force Physical chemistry
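As a small numerical illustration of the relation w(r) = −k_B T ln g(r) given above, the following Python sketch performs a Boltzmann inversion on a model radial distribution function; the Gaussian-peaked g(r) used here is a toy example, not data from the studies cited.

import numpy as np

# Boltzmann inversion of a (toy) radial distribution function g(r):
# w(r) = -k_B * T * ln g(r).
k_B = 1.380649e-23                      # J/K
T = 300.0                               # K

r = np.linspace(0.30, 1.50, 200)        # separation in nm (illustrative range)
g = 1.0 + 1.5 * np.exp(-((r - 0.45) / 0.05) ** 2)   # toy g(r) with one peak

w = -k_B * T * np.log(g)                # potential of mean force in joules

i = int(np.argmin(w))
print(f"PMF minimum of {w[i] / (k_B * T):.2f} k_B T at r = {r[i]:.2f} nm")

The minimum of the resulting w(r) sits at the first peak of g(r), which is the qualitative behaviour expected of an effective pair potential obtained this way.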
0.792144
0.969862
0.768271
Quantum tunnelling
In physics, quantum tunnelling, barrier penetration, or simply tunnelling is a quantum mechanical phenomenon in which an object such as an electron or atom passes through a potential energy barrier that, according to classical mechanics, should not be passable due to the object not having sufficient energy to pass or surmount the barrier. Tunneling is a consequence of the wave nature of matter, where the quantum wave function describes the state of a particle or other physical system, and wave equations such as the Schrödinger equation describe their behavior. The probability of transmission of a wave packet through a barrier decreases exponentially with the barrier height, the barrier width, and the tunneling particle's mass, so tunneling is seen most prominently in low-mass particles such as electrons or protons tunneling through microscopically narrow barriers. Tunneling is readily detectable with barriers of thickness about 1–3 nm or smaller for electrons, and about 0.1 nm or smaller for heavier particles such as protons or hydrogen atoms. Some sources describe the mere penetration of a wave function into the barrier, without transmission on the other side, as a tunneling effect, such as in tunneling into the walls of a finite potential well. Tunneling plays an essential role in physical phenomena such as nuclear fusion and alpha radioactive decay of atomic nuclei. Tunneling applications include the tunnel diode, quantum computing, flash memory, and the scanning tunneling microscope. Tunneling limits the minimum size of devices used in microelectronics because electrons tunnel readily through insulating layers and transistors that are thinner than about 1 nm. The effect was predicted in the early 20th century. Its acceptance as a general physical phenomenon came mid-century. Introduction to the concept Quantum tunnelling falls under the domain of quantum mechanics. To understand the phenomenon, particles attempting to travel across a potential barrier can be compared to a ball trying to roll over a hill. Quantum mechanics and classical mechanics differ in their treatment of this scenario. Classical mechanics predicts that particles that do not have enough energy to classically surmount a barrier cannot reach the other side. Thus, a ball without sufficient energy to surmount the hill would roll back down. In quantum mechanics, a particle can, with a small probability, tunnel to the other side, thus crossing the barrier. The reason for this difference comes from treating matter as having properties of waves and particles. Tunnelling problem The wave function of a physical system of particles specifies everything that can be known about the system. Therefore, problems in quantum mechanics analyze the system's wave function. Using mathematical formulations, such as the Schrödinger equation, the time evolution of a known wave function can be deduced. The square of the absolute value of this wave function is directly related to the probability distribution of the particle positions, which describes the probability that the particles would be measured at those positions. As shown in the animation, a wave packet impinges on the barrier, most of it is reflected and some is transmitted through the barrier. The wave packet becomes more de-localized: it is now on both sides of the barrier and lower in maximum amplitude, but equal in integrated square-magnitude, meaning that the probability the particle is somewhere remains unity. 
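The exponential sensitivity described above can be made quantitative with the standard textbook transmission coefficient for a particle of energy E incident on a one-dimensional rectangular barrier of height V0 > E and width L. The Python sketch below evaluates it for an electron and a proton; the barrier parameters are illustrative choices, not values taken from the text.

import numpy as np

# Exact transmission coefficient for a 1-D rectangular barrier with E < V0:
# T = 1 / (1 + V0^2 * sinh^2(kappa*L) / (4*E*(V0 - E))),
# where kappa = sqrt(2*m*(V0 - E)) / hbar.
hbar = 1.054571817e-34     # J*s
eV = 1.602176634e-19       # J
m_e = 9.1093837015e-31     # kg, electron mass
m_p = 1.67262192369e-27    # kg, proton mass

def transmission(E, V0, L, m):
    kappa = np.sqrt(2.0 * m * (V0 - E)) / hbar
    return 1.0 / (1.0 + (V0 ** 2 * np.sinh(kappa * L) ** 2) / (4.0 * E * (V0 - E)))

E, V0 = 1.0 * eV, 2.0 * eV               # illustrative particle and barrier energies
for L_nm in (0.1, 0.3, 0.5):             # barrier widths in nm
    L = L_nm * 1e-9
    print(f"L = {L_nm} nm: electron T = {transmission(E, V0, L, m_e):.1e}, "
          f"proton T = {transmission(E, V0, L, m_p):.1e}")

Even over these sub-nanometre widths the proton's transmission falls tens of orders of magnitude below the electron's, consistent with the thickness scales quoted above.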
The wider the barrier and the higher the barrier energy, the lower the probability of tunneling. Some models of a tunneling barrier, such as the rectangular barriers shown, can be analysed and solved algebraically. Most problems do not have an algebraic solution, so numerical solutions are used. "Semiclassical methods" offer approximate solutions that are easier to compute, such as the WKB approximation. History The Schrödinger equation was published in 1926. The first person to apply the Schrödinger equation to a problem that involved tunneling between two classically allowed regions through a potential barrier was Friedrich Hund in a series of articles published in 1927. He studied the solutions of a double-well potential and discussed molecular spectra. Leonid Mandelstam and Mikhail Leontovich discovered tunneling independently and published their results in 1928. In 1927, Lothar Nordheim, assisted by Ralph Fowler, published a paper that discussed thermionic emission and reflection of electrons from metals. He assumed a surface potential barrier that confines the electrons within the metal and showed that the electrons have a finite probability of tunneling through or reflecting from the surface barrier when their energies are close to the barrier energy. Classically, the electron would either transmit or reflect with 100% certainty, depending on its energy. In 1928 J. Robert Oppenheimer published two papers on field emission, i.e. the emission of electrons induced by strong electric fields. Nordheim and Fowler simplified Oppenheimer's derivation and found values for the emitted currents and work functions that agreed with experiments. A great success of the tunnelling theory was the mathematical explanation for alpha decay, which was developed in 1928 by George Gamow and independently by Ronald Gurney and Edward Condon. The latter researchers simultaneously solved the Schrödinger equation for a model nuclear potential and derived a relationship between the half-life of the particle and the energy of emission that depended directly on the mathematical probability of tunneling. All three researchers were familiar with the works on field emission, and Gamow was aware of Mandelstam and Leontovich's findings. In the early days of quantum theory, the term tunnel effect was not used, and the effect was instead referred to as penetration of, or leaking through, a barrier. The German term wellenmechanische Tunneleffekt was used in 1931 by Walter Schottky. The English term tunnel effect entered the language in 1932 when it was used by Yakov Frenkel in his textbook. In 1957 Leo Esaki demonstrated tunneling of electrons over a few nanometer wide barrier in a semiconductor structure and developed a diode based on tunnel effect. In 1960, following Esaki's work, Ivar Giaever showed experimentally that tunnelling also took place in superconductors. The tunnelling spectrum gave direct evidence of the superconducting energy gap. In 1962, Brian Josephson predicted the tunneling of superconducting Cooper pairs. Esaki, Giaever and Josephson shared the 1973 Nobel Prize in Physics for their works on quantum tunneling in solids. In 1981, Gerd Binnig and Heinrich Rohrer developed a new type of microscope, called scanning tunneling microscope, which is based on tunnelling and is used for imaging surfaces at the atomic level. Binnig and Rohrer were awarded the Nobel Prize in Physics in 1986 for their discovery. Applications Tunnelling is the cause of some important macroscopic physical phenomena. 
Solid-state physics Electronics Tunnelling is a source of current leakage in very-large-scale integration (VLSI) electronics and results in a substantial power drain and heating effects that plague such devices. It is considered the lower limit on how microelectronic device elements can be made. Tunnelling is a fundamental technique used to program the floating gates of flash memory. Cold emission Cold emission of electrons is relevant to semiconductors and superconductor physics. It is similar to thermionic emission, where electrons randomly jump from the surface of a metal to follow a voltage bias because they statistically end up with more energy than the barrier, through random collisions with other particles. When the electric field is very large, the barrier becomes thin enough for electrons to tunnel out of the atomic state, leading to a current that varies approximately exponentially with the electric field. These materials are important for flash memory, vacuum tubes, and some electron microscopes. Tunnel junction A simple barrier can be created by separating two conductors with a very thin insulator. These are tunnel junctions, the study of which requires understanding quantum tunnelling. Josephson junctions take advantage of quantum tunnelling and superconductivity to create the Josephson effect. This has applications in precision measurements of voltages and magnetic fields, as well as the multijunction solar cell. Tunnel diode Diodes are electrical semiconductor devices that allow electric current flow in one direction more than the other. The device depends on a depletion layer between N-type and P-type semiconductors to serve its purpose. When these are heavily doped the depletion layer can be thin enough for tunnelling. When a small forward bias is applied, the current due to tunnelling is significant. This has a maximum at the point where the voltage bias is such that the energy level of the p and n conduction bands are the same. As the voltage bias is increased, the two conduction bands no longer line up and the diode acts typically. Because the tunnelling current drops off rapidly, tunnel diodes can be created that have a range of voltages for which current decreases as voltage increases. This peculiar property is used in some applications, such as high speed devices where the characteristic tunnelling probability changes as rapidly as the bias voltage. The resonant tunnelling diode makes use of quantum tunnelling in a very different manner to achieve a similar result. This diode has a resonant voltage for which a current favors a particular voltage, achieved by placing two thin layers with a high energy conductance band near each other. This creates a quantum potential well that has a discrete lowest energy level. When this energy level is higher than that of the electrons, no tunnelling occurs and the diode is in reverse bias. Once the two voltage energies align, the electrons flow like an open wire. As the voltage further increases, tunnelling becomes improbable and the diode acts like a normal diode again before a second energy level becomes noticeable. Tunnel field-effect transistors A European research project demonstrated field effect transistors in which the gate (channel) is controlled via quantum tunnelling rather than by thermal injection, reducing gate voltage from ≈1 volt to 0.2 volts and reducing power consumption by up to 100×. If these transistors can be scaled up into VLSI chips, they would improve the performance per power of integrated circuits. 
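A rough sense of why such junctions and floating-gate devices are so sensitive to layer thickness comes from the simplest one-dimensional picture, in which the tunnelling probability through an insulating film of thickness d falls off roughly as exp(−2κd). The sketch below assumes an effective barrier height of 3 eV, an illustrative value rather than a measured device parameter.

import numpy as np

# Rough exponential thickness dependence of tunnelling through a thin insulator:
# probability ~ exp(-2*kappa*d), with kappa = sqrt(2*m*phi)/hbar for barrier height phi.
hbar = 1.054571817e-34   # J*s
eV = 1.602176634e-19     # J
m_e = 9.1093837015e-31   # kg

phi = 3.0 * eV                                  # assumed effective barrier height
kappa = np.sqrt(2.0 * m_e * phi) / hbar         # inverse decay length, 1/m

for d_nm in (0.5, 1.0, 1.5, 2.0):
    d = d_nm * 1e-9
    print(f"d = {d_nm} nm: relative tunnelling probability ~ {np.exp(-2.0 * kappa * d):.1e}")

In this simple picture each additional half-nanometre of insulator suppresses the tunnelling probability by several orders of magnitude, which is why leakage becomes significant only for the thinnest layers.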
Conductivity of crystalline solids While the Drude-Lorentz model of electrical conductivity makes excellent predictions about the nature of electrons conducting in metals, it can be furthered by using quantum tunnelling to explain the nature of the electron's collisions. When a free electron wave packet encounters a long array of uniformly spaced barriers, the reflected part of the wave packet interferes uniformly with the transmitted one between all barriers so that 100% transmission becomes possible. The theory predicts that if positively charged nuclei form a perfectly rectangular array, electrons will tunnel through the metal as free electrons, leading to extremely high conductance, and that impurities in the metal will disrupt it. Scanning tunneling microscope The scanning tunnelling microscope (STM), invented by Gerd Binnig and Heinrich Rohrer, may allow imaging of individual atoms on the surface of a material. It operates by taking advantage of the relationship between quantum tunnelling with distance. When the tip of the STM's needle is brought close to a conduction surface that has a voltage bias, measuring the current of electrons that are tunnelling between the needle and the surface reveals the distance between the needle and the surface. By using piezoelectric rods that change in size when voltage is applied, the height of the tip can be adjusted to keep the tunnelling current constant. The time-varying voltages that are applied to these rods can be recorded and used to image the surface of the conductor. STMs are accurate to 0.001 nm, or about 1% of atomic diameter. Nuclear physics Nuclear fusion Quantum tunnelling is an essential phenomenon for nuclear fusion. The temperature in stellar cores is generally insufficient to allow atomic nuclei to overcome the Coulomb barrier and achieve thermonuclear fusion. Quantum tunnelling increases the probability of penetrating this barrier. Though this probability is still low, the extremely large number of nuclei in the core of a star is sufficient to sustain a steady fusion reaction. Radioactive decay Radioactive decay is the process of emission of particles and energy from the unstable nucleus of an atom to form a stable product. This is done via the tunnelling of a particle out of the nucleus (an electron tunneling into the nucleus is electron capture). This was the first application of quantum tunnelling. Radioactive decay is a relevant issue for astrobiology as this consequence of quantum tunnelling creates a constant energy source over a large time interval for environments outside the circumstellar habitable zone where insolation would not be possible (subsurface oceans) or effective. Quantum tunnelling may be one of the mechanisms of hypothetical proton decay. Chemistry Energetically forbidden reactions Chemical reactions in the interstellar medium occur at extremely low energies. Probably the most fundamental ion-molecule reaction involves hydrogen ions with hydrogen molecules. The quantum mechanical tunnelling rate for the same reaction using the hydrogen isotope deuterium, D- + H2 → H- + HD, has been measured experimentally in an ion trap. The deuterium was placed in an ion trap and cooled. The trap was then filled with hydrogen. At the temperatures used in the experiment, the energy barrier for reaction would not allow the reaction to succeed with classical dynamics alone. Quantum tunneling allowed reactions to happen in rare collisions. 
It was calculated from the experimental data that the reaction succeeds in roughly one in every hundred billion collisions. Kinetic isotope effect In chemical kinetics, the substitution of a light isotope of an element with a heavier one typically results in a slower reaction rate. This is generally attributed to differences in the zero-point vibrational energies for chemical bonds containing the lighter and heavier isotopes and is generally modeled using transition state theory. However, in certain cases, large isotopic effects are observed that cannot be accounted for by a semi-classical treatment, and quantum tunnelling is required. R. P. Bell developed a modified treatment of Arrhenius kinetics that is commonly used to model this phenomenon. Astrochemistry in interstellar clouds By including quantum tunnelling, the astrochemical syntheses of various molecules in interstellar clouds can be explained, such as the synthesis of molecular hydrogen, water (ice) and the prebiotically important formaldehyde. Tunnelling of molecular hydrogen has been observed in the lab. Quantum biology Quantum tunnelling is among the central non-trivial quantum effects in quantum biology. Here it is important both as electron tunnelling and proton tunnelling. Electron tunnelling is a key factor in many biochemical redox reactions (photosynthesis, cellular respiration) as well as enzymatic catalysis. Proton tunnelling is a key factor in spontaneous DNA mutation. Spontaneous mutation occurs when normal DNA replication takes place after a particularly significant proton has tunnelled. A hydrogen bond joins DNA base pairs, and the proton in such a bond experiences a double-well potential whose two wells are separated by a potential energy barrier. It is believed that the double-well potential is asymmetric, with one well deeper than the other, such that the proton normally rests in the deeper well. For a mutation to occur, the proton must have tunnelled into the shallower well. The proton's movement from its regular position is called a tautomeric transition. If DNA replication takes place in this state, the base pairing rule for DNA may be jeopardised, causing a mutation. Per-Olov Löwdin was the first to develop this theory of spontaneous mutation within the double helix. Other instances of quantum tunnelling-induced mutations in biology are believed to be a cause of ageing and cancer. Mathematical discussion Schrödinger equation The time-independent Schrödinger equation for one particle in one dimension can be written as −(ħ²/2m) d²Ψ(x)/dx² + V(x)Ψ(x) = EΨ(x), or equivalently d²Ψ(x)/dx² = (2m/ħ²)[V(x) − E]Ψ(x) = (2m/ħ²) M(x) Ψ(x), where ħ is the reduced Planck constant, m is the particle mass, x represents distance measured in the direction of motion of the particle, Ψ is the Schrödinger wave function, V is the potential energy of the particle (measured relative to any convenient reference level), E is the energy of the particle that is associated with motion in the x-axis (measured relative to V), and M(x) is a quantity defined by V(x) − E, which has no accepted name in physics. The solutions of the Schrödinger equation take different forms for different values of x, depending on whether M(x) is positive or negative. When M(x) is constant and negative, then the Schrödinger equation can be written in the form d²Ψ(x)/dx² = −k²Ψ(x), with k² = −(2m/ħ²)M. The solutions of this equation represent travelling waves, with phase-constant +k or −k. Alternatively, if M(x) is constant and positive, then the Schrödinger equation can be written in the form d²Ψ(x)/dx² = κ²Ψ(x), with κ² = (2m/ħ²)M. The solutions of this equation are rising and falling exponentials in the form of evanescent waves.
When M(x) varies with position, the same difference in behaviour occurs, depending on whether M(x) is negative or positive. It follows that the sign of M(x) determines the nature of the medium, with negative M(x) corresponding to medium A and positive M(x) corresponding to medium B. It thus follows that evanescent wave coupling can occur if a region of positive M(x) is sandwiched between two regions of negative M(x), hence creating a potential barrier. The mathematics of dealing with the situation where M(x) varies with x is difficult, except in special cases that usually do not correspond to physical reality. A full mathematical treatment appears in the 1965 monograph by Fröman and Fröman. Their ideas have not been incorporated into physics textbooks, but their corrections have little quantitative effect. WKB approximation The wave function is expressed as the exponential of a function, Ψ(x) = e^{Φ(x)}, so that Φ''(x) + [Φ'(x)]² = (2m/ħ²)[V(x) − E]. Φ'(x) is then separated into real and imaginary parts, Φ'(x) = A(x) + iB(x), where A(x) and B(x) are real-valued functions. Substituting the second equation into the first and using the fact that the right-hand side is real results in the pair of equations A'(x) + A(x)² − B(x)² = (2m/ħ²)[V(x) − E] and B'(x) + 2A(x)B(x) = 0. To solve this equation using the semiclassical approximation, each function must be expanded as a power series in ħ. From the equations, the power series must start with at least an order of 1/ħ to satisfy the real part of the equation; for a good classical limit, starting with the highest power of the Planck constant possible is preferable, which leads to A(x) = (1/ħ) Σ_{k≥0} ħ^k A_k(x) and B(x) = (1/ħ) Σ_{k≥0} ħ^k B_k(x), with the following constraints on the lowest-order terms: A_0(x)² − B_0(x)² = 2m[V(x) − E] and A_0(x)B_0(x) = 0. At this point two extreme cases can be considered. Case 1 If the amplitude varies slowly as compared to the phase, then A_0(x) = 0 and B_0(x) = ±√(2m[E − V(x)]), which corresponds to classical motion. Resolving the next order of expansion yields Ψ(x) ≈ C (2m[E − V(x)])^(−1/4) exp[±(i/ħ) ∫ √(2m[E − V(x)]) dx]. Case 2 If the phase varies slowly as compared to the amplitude, then B_0(x) = 0 and A_0(x) = ±√(2m[V(x) − E]), which corresponds to tunneling. Resolving the next order of the expansion yields Ψ(x) ≈ C (2m[V(x) − E])^(−1/4) exp[±(1/ħ) ∫ √(2m[V(x) − E]) dx]. In both cases it is apparent from the denominator that both these approximate solutions are bad near the classical turning points, where E = V(x). Away from the potential hill, the particle acts similar to a free and oscillating wave; beneath the potential hill, the particle undergoes exponential changes in amplitude. By considering the behaviour at these limits and classical turning points a global solution can be made. To start, a classical turning point x_1 is chosen and (2m/ħ²)[V(x) − E] is expanded in a power series about x_1. Keeping only the first-order term ensures linearity: (2m/ħ²)[V(x) − E] ≈ v_1 (x − x_1). Using this approximation, the equation near x_1 becomes d²Ψ/dx² = v_1 (x − x_1) Ψ, a differential equation of Airy type. This can be solved using Airy functions as solutions. Taking these solutions for all classical turning points, a global solution can be formed that links the limiting solutions. Given the two coefficients on one side of a classical turning point, the two coefficients on the other side of a classical turning point can be determined by using this local solution to connect them. Hence, the Airy function solutions will asymptote into sine, cosine and exponential functions in the proper limits, and the resulting connection formulas relate the coefficients of the oscillatory solution on one side of a turning point to those of the exponential solution on the other side. With the coefficients found, the global solution can be found. Therefore, to leading order, the transmission coefficient for a particle tunneling through a single potential barrier is T(E) ≈ exp[−(2/ħ) ∫_{x_1}^{x_2} √(2m[V(x) − E]) dx], where x_1 and x_2 are the two classical turning points for the potential barrier. For a rectangular barrier of height V_0 and width L, this expression simplifies to T(E) ≈ e^{−2κL}, with κ = √(2m[V_0 − E])/ħ. Faster than light Some physicists have claimed that it is possible for spin-zero particles to travel faster than the speed of light when tunnelling.
This appears to violate the principle of causality, since a frame of reference then exists in which the particle arrives before it has left. In 1998, Francis E. Low briefly reviewed the phenomenon of zero-time tunnelling. More recently, experimental tunnelling time data for phonons, photons, and electrons were published by Günter Nimtz. Another experiment, overseen by A. M. Steinberg, seems to indicate that particles could tunnel at apparent speeds faster than light. Other physicists, such as Herbert Winful, disputed these claims. Winful argued that the wave packet of a tunnelling particle propagates locally, so a particle cannot tunnel through the barrier non-locally. Winful also argued that the experiments that are purported to show non-local propagation have been misinterpreted. In particular, the group velocity of a wave packet does not measure its speed, but is related to the amount of time the wave packet is stored in the barrier. Moreover, if quantum tunneling is modeled with the relativistic Dirac equation, well-established mathematical theorems imply that the process is completely subluminal. Dynamical tunneling The concept of quantum tunneling can be extended to situations where there exists a quantum transport between regions that are classically not connected even if there is no associated potential barrier. This phenomenon is known as dynamical tunnelling. Tunnelling in phase space The concept of dynamical tunnelling is particularly suited to address the problem of quantum tunnelling in high dimensions (d > 1). In the case of an integrable system, where bounded classical trajectories are confined onto tori in phase space, tunnelling can be understood as the quantum transport between semi-classical states built on two distinct but symmetric tori. Chaos-assisted tunnelling In real life, most systems are not integrable and display various degrees of chaos. Classical dynamics is then said to be mixed and the system phase space is typically composed of islands of regular orbits surrounded by a large sea of chaotic orbits. The existence of the chaotic sea, where transport is classically allowed, between the two symmetric tori then assists the quantum tunnelling between them. This phenomenon is referred to as chaos-assisted tunnelling and is characterized by sharp resonances of the tunnelling rate when varying any system parameter. Resonance-assisted tunnelling When ħ is small compared to the size of the regular islands, the fine structure of the classical phase space plays a key role in tunnelling. In particular the two symmetric tori are coupled "via a succession of classically forbidden transitions across nonlinear resonances" surrounding the two islands. Related phenomena Several phenomena have the same behavior as quantum tunnelling. Two examples are evanescent wave coupling (the application of Maxwell's wave equation to light) and the application of the non-dispersive wave equation from acoustics to "waves on strings". These effects are modeled similarly to the rectangular potential barrier. In these cases, there is one transmission medium through which the wave propagates that is the same or nearly the same throughout, and a second medium through which the wave travels differently. This can be described as a thin region of medium B between two regions of medium A.
The analysis of a rectangular barrier by means of the Schrödinger equation can be adapted to these other effects provided that the wave equation has travelling wave solutions in medium A but real exponential solutions in medium B. In optics, medium A is a vacuum while medium B is glass. In acoustics, medium A may be a liquid or gas and medium B a solid. For both cases, medium A is a region of space where the particle's total energy is greater than its potential energy and medium B is the potential barrier. These systems have an incoming wave and resultant reflected and transmitted waves. There can be more media and barriers, and the barriers need not be discrete; approximations are useful in such cases. A classical wave-particle association was originally analyzed as analogous to quantum tunneling, but subsequent analysis found a fluid-dynamics cause related to the vertical momentum imparted to particles near the barrier. See also Dielectric barrier discharge Field electron emission Holstein–Herring method Proton tunneling Quantum cloning Superconducting tunnel junction Tunnel diode Tunnel junction White hole References Further reading External links Animation, applications and research linked to tunnel effect and other quantum phenomena (Université Paris Sud) Animated illustration of quantum tunneling Animated illustration of quantum tunneling in a RTD device Interactive Solution of Schrodinger Tunnel Equation Particle physics Quantum mechanics Solid state engineering
0.769565
0.998319
0.768271
Energy condition
In relativistic classical field theories of gravitation, particularly general relativity, an energy condition is a generalization of the statement "the energy density of a region of space cannot be negative" in a relativistically phrased mathematical formulation. There are multiple possible alternative ways to express such a condition so that it can be applied to the matter content of the theory. The hope is then that any reasonable matter theory will satisfy this condition, or at least will preserve the condition if it is satisfied by the starting conditions. Energy conditions are not physical constraints, but are rather mathematically imposed boundary conditions that attempt to capture a belief that "energy should be positive". Many energy conditions are known to not correspond to physical reality—for example, the observable effects of dark energy are well known to violate the strong energy condition. In general relativity, energy conditions are often used (and required) in proofs of various important theorems about black holes, such as the no-hair theorem or the laws of black hole thermodynamics. Motivation In general relativity and allied theories, the distribution of the mass, momentum, and stress due to matter and to any non-gravitational fields is described by the energy–momentum tensor (or matter tensor) T^{ab}. However, the Einstein field equation in itself does not specify what kinds of states of matter or non-gravitational fields are admissible in a spacetime model. This is both a strength, since a good general theory of gravitation should be maximally independent of any assumptions concerning non-gravitational physics, and a weakness, because without some further criterion the Einstein field equation admits putative solutions with properties most physicists regard as unphysical, i.e. too weird to resemble anything in the real universe even approximately. The energy conditions represent such criteria. Roughly speaking, they crudely describe properties common to all (or almost all) states of matter and all non-gravitational fields that are well-established in physics while being sufficiently strong to rule out many unphysical "solutions" of the Einstein field equation. Mathematically speaking, the most apparent distinguishing feature of the energy conditions is that they are essentially restrictions on the eigenvalues and eigenvectors of the matter tensor. A more subtle but no less important feature is that they are imposed eventwise, at the level of tangent spaces. Therefore, they have no hope of ruling out objectionable global features, such as closed timelike curves. Some observable quantities In order to understand the statements of the various energy conditions, one must be familiar with the physical interpretation of some scalar and vector quantities constructed from arbitrary timelike or null vectors and the matter tensor. First, a unit timelike vector field X^a can be interpreted as defining the world lines of some family of (possibly noninertial) ideal observers. Then the scalar field ρ = T_{ab} X^a X^b can be interpreted as the total mass–energy density (matter plus field energy of any non-gravitational fields) measured by the observer from our family (at each event on his world line). Similarly, the vector field with components −T^a_b X^b represents (after a projection) the momentum measured by our observers. Second, given an arbitrary null vector field k^a, the scalar field T_{ab} k^a k^b can be considered a kind of limiting case of the mass–energy density.
Third, in the case of general relativity, given an arbitrary timelike vector field , again interpreted as describing the motion of a family of ideal observers, the Raychaudhuri scalar is the scalar field obtained by taking the trace of the tidal tensor corresponding to those observers at each event: This quantity plays a crucial role in Raychaudhuri's equation. Then from Einstein field equation we immediately obtain where is the trace of the matter tensor. Mathematical statement There are several alternative energy conditions in common use: Null energy condition The null energy condition stipulates that for every future-pointing null vector field , Each of these has an averaged version, in which the properties noted above are to hold only on average along the flowlines of the appropriate vector fields. Otherwise, the Casimir effect leads to exceptions. For example, the averaged null energy condition states that for every flowline (integral curve) of the null vector field we must have Weak energy condition The weak energy condition stipulates that for every timelike vector field the matter density observed by the corresponding observers is always non-negative: Dominant energy condition The dominant energy condition stipulates that, in addition to the weak energy condition holding true, for every future-pointing causal vector field (either timelike or null) the vector field must be a future-pointing causal vector. That is, mass–energy can never be observed to be flowing faster than light. Strong energy condition The strong energy condition stipulates that for every timelike vector field , the trace of the tidal tensor measured by the corresponding observers is always non-negative: There are many classical matter configurations which violate the strong energy condition, at least from a mathematical perspective. For instance, a scalar field with a positive potential can violate this condition. Moreover, observations of dark energy/cosmological constant show that the strong energy condition fails to describe our universe, even when averaged across cosmological scales. Furthermore, it is strongly violated in any cosmological inflationary process (even one not driven by a scalar field). Perfect fluids Perfect fluids possess a matter tensor of form where is the four-velocity of the matter particles and where is the projection tensor onto the spatial hyperplane elements orthogonal to the four-velocity, at each event. (Notice that these hyperplane elements will not form a spatial hyperslice unless the velocity is vorticity-free, that is, irrotational.) With respect to a frame aligned with the motion of the matter particles, the components of the matter tensor take the diagonal form Here, is the energy density and is the pressure. The energy conditions can then be reformulated in terms of these eigenvalues: The null energy condition stipulates that The weak energy condition stipulates that The dominant energy condition stipulates that The strong energy condition stipulates that The implications among these conditions are indicated in the figure at right. Note that some of these conditions allow negative pressure. Also, note that despite the names the strong energy condition does not imply the weak energy condition even in the context of perfect fluids. 
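Written out in the same notation (a standard formulation; X^{a} denotes a future-pointing timelike field, normalized to unit length where needed, and k^{a} a future-pointing null field), the pointwise conditions read:

\[
\text{Null (NEC):}\quad T_{ab}\,k^{a}k^{b} \;\ge\; 0,
\]
\[
\text{Weak (WEC):}\quad T_{ab}\,X^{a}X^{b} \;\ge\; 0,
\]
\[
\text{Dominant (DEC):}\quad \text{for every future-pointing causal } X^{a}:\ \ T_{ab}\,X^{a}X^{b} \;\ge\; 0 \ \text{ and } \ -\,T^{a}{}_{b}\,X^{b} \ \text{ is future-pointing causal},
\]
\[
\text{Strong (SEC):}\quad \left(T_{ab} - \tfrac{1}{2}\,T\,g_{ab}\right) X^{a}X^{b} \;\ge\; 0 .
\]

For a perfect fluid with energy density \rho and pressure p these reduce to: NEC, \rho + p \ge 0; WEC, \rho \ge 0 and \rho + p \ge 0; DEC, \rho \ge |p|; SEC, \rho + p \ge 0 and \rho + 3p \ge 0. Via the Einstein field equation (with vanishing cosmological constant) the strong condition is equivalent to the Raychaudhuri-scalar condition R_{ab}X^{a}X^{b} \ge 0 mentioned above.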
Non-perfect fluids Finally, there are proposals for extension of the energy conditions to spacetimes containing non-perfect fluids, where the second law of thermodynamics provides a natural Lyapunov function to probe both stability and causality, where the physical origin of the connection between stability and causality lies in the relationship between entropy and information. These attempts generalize the Hawking-Ellis vacuum conservation theorem (according to which, if energy can enter an empty region faster than the speed of light, then the dominant energy condition is violated, and the energy density may become negative in some reference frame) to spacetimes containing out-of-equilibrium matter at finite temperature and chemical potential. Indeed, the idea that there is a connection between causality violation and fluid instabilities has a long history. For example, in the words of W. Israel: “If the source of an effect can be delayed, it should be possible for a system to borrow energy from its ground state, and this implies instability”. It is possible to show that this is a restatement of the Hawking-Ellis vacuum conservation theorem at finite temperature and chemical potential. Attempts at falsifying the energy conditions While the intent of the energy conditions is to provide simple criteria that rule out many unphysical situations while admitting any physically reasonable situation, in fact, at least when one introduces an effective field modeling of some quantum mechanical effects, some possible matter tensors which are known to be physically reasonable and even realistic because they have been experimentally verified, actually fail various energy conditions. In particular, in the Casimir effect, in the region between two conducting plates held parallel at a very small separation d, there is a negative energy density between the plates. (Be mindful, though, that the Casimir effect is topological, in that the sign of the vacuum energy depends on both the geometry and topology of the configuration. Being negative for parallel plates, the vacuum energy is positive for a conducting sphere.) However, various quantum inequalities suggest that a suitable averaged energy condition may be satisfied in such cases. In particular, the averaged null energy condition is satisfied in the Casimir effect. Indeed, for energy–momentum tensors arising from effective field theories on Minkowski spacetime, the averaged null energy condition holds for everyday quantum fields. Extending these results is an open problem. The strong energy condition is obeyed by all normal/Newtonian matter, but a false vacuum can violate it. Consider the linear barotropic equation state where is the matter energy density, is the matter pressure, and is a constant. Then the strong energy condition requires ; but for the state known as a false vacuum, we have . See also Congruence (general relativity) Exact solutions in general relativity Frame fields in general relativity Positive energy theorem Quantum inequalities Notes References The energy conditions are discussed in §4.3. Various energy conditions (including all of those mentioned above) are discussed in Section 2.1. Various energy conditions are discussed in Section 4.6. Common energy conditions are discussed in Section 9.2. Violations of the strong energy condition is discussed in Section 6.1. Mathematical methods in general relativity
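As an arithmetic illustration of the closing remark, for the linear barotropic equation of state just mentioned (written here with a proportionality constant w, an illustrative symbol):

\[
p = w\,\rho, \qquad \text{SEC (perfect fluid): } \rho + 3p \ge 0 \;\Longrightarrow\; w \ge -\tfrac{1}{3} \quad (\rho > 0),
\]
\[
\text{false vacuum: } p = -\rho \ \ (w = -1), \ \text{ which violates } \rho + 3p \ge 0 .
\]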
0.783325
0.980754
0.76825
Non-equilibrium thermodynamics
Non-equilibrium thermodynamics is a branch of thermodynamics that deals with physical systems that are not in thermodynamic equilibrium but can be described in terms of macroscopic quantities (non-equilibrium state variables) that represent an extrapolation of the variables used to specify the system in thermodynamic equilibrium. Non-equilibrium thermodynamics is concerned with transport processes and with the rates of chemical reactions. Almost all systems found in nature are not in thermodynamic equilibrium, for they are changing or can be triggered to change over time, and are continuously and discontinuously subject to flux of matter and energy to and from other systems and to chemical reactions. Many systems and processes can, however, be considered to be in equilibrium locally, thus allowing description by currently known equilibrium thermodynamics. Nevertheless, some natural systems and processes remain beyond the scope of equilibrium thermodynamic methods due to the existence of non variational dynamics, where the concept of free energy is lost. The thermodynamic study of non-equilibrium systems requires more general concepts than are dealt with by equilibrium thermodynamics. One fundamental difference between equilibrium thermodynamics and non-equilibrium thermodynamics lies in the behaviour of inhomogeneous systems, which require for their study knowledge of rates of reaction which are not considered in equilibrium thermodynamics of homogeneous systems. This is discussed below. Another fundamental and very important difference is the difficulty, in defining entropy at an instant of time in macroscopic terms for systems not in thermodynamic equilibrium. However, it can be done locally, and the macroscopic entropy will then be given by the integral of the locally defined entropy density. It has been found that many systems far outside global equilibrium still obey the concept of local equilibrium. Scope Difference between equilibrium and non-equilibrium thermodynamics A profound difference separates equilibrium from non-equilibrium thermodynamics. Equilibrium thermodynamics ignores the time-courses of physical processes. In contrast, non-equilibrium thermodynamics attempts to describe their time-courses in continuous detail. Equilibrium thermodynamics restricts its considerations to processes that have initial and final states of thermodynamic equilibrium; the time-courses of processes are deliberately ignored. Non-equilibrium thermodynamics, on the other hand, attempting to describe continuous time-courses, needs its state variables to have a very close connection with those of equilibrium thermodynamics. This conceptual issue is overcome under the assumption of local equilibrium, which entails that the relationships that hold between macroscopic state variables at equilibrium hold locally, also outside equilibrium. Throughout the past decades, the assumption of local equilibrium has been tested, and found to hold, under increasingly extreme conditions, such as in the shock front of violent explosions, on reacting surfaces, and under extreme thermal gradients. Thus, non-equilibrium thermodynamics provides a consistent framework for modelling not only the initial and final states of a system, but also the evolution of the system in time. Together with the concept of entropy production, this provides a powerful tool in process optimisation, and provides a theoretical foundation for exergy analysis. 
Non-equilibrium state variables The suitable relationship that defines non-equilibrium thermodynamic state variables is as follows. When the system is in local equilibrium, non-equilibrium state variables are such that they can be measured locally with sufficient accuracy by the same techniques as are used to measure thermodynamic state variables, or by corresponding time and space derivatives, including fluxes of matter and energy. In general, non-equilibrium thermodynamic systems are spatially and temporally non-uniform, but their non-uniformity still has a sufficient degree of smoothness to support the existence of suitable time and space derivatives of non-equilibrium state variables. Because of the spatial non-uniformity, non-equilibrium state variables that correspond to extensive thermodynamic state variables have to be defined as spatial densities of the corresponding extensive equilibrium state variables. When the system is in local equilibrium, intensive non-equilibrium state variables, for example temperature and pressure, correspond closely with equilibrium state variables. It is necessary that measuring probes be small enough, and rapidly enough responding, to capture relevant non-uniformity. Further, the non-equilibrium state variables are required to be mathematically functionally related to one another in ways that suitably resemble corresponding relations between equilibrium thermodynamic state variables. In reality, these requirements, although strict, have been shown to be fulfilled even under extreme conditions, such as during phase transitions, at reacting interfaces, and in plasma droplets surrounded by ambient air. There are, however, situations where there are appreciable non-linear effects even at the local scale. Overview Some concepts of particular importance for non-equilibrium thermodynamics include time rate of dissipation of energy (Rayleigh 1873, Onsager 1931, also), time rate of entropy production (Onsager 1931), thermodynamic fields, dissipative structure, and non-linear dynamical structure. One problem of interest is the thermodynamic study of non-equilibrium steady states, in which entropy production and some flows are non-zero, but there is no time variation of physical variables. One initial approach to non-equilibrium thermodynamics is sometimes called 'classical irreversible thermodynamics'. There are other approaches to non-equilibrium thermodynamics, for example extended irreversible thermodynamics, and generalized thermodynamics, but they are hardly touched on in the present article. Quasi-radiationless non-equilibrium thermodynamics of matter in laboratory conditions According to Wildt (see also Essex), current versions of non-equilibrium thermodynamics ignore radiant heat; they can do so because they refer to laboratory quantities of matter under laboratory conditions with temperatures well below those of stars. At laboratory temperatures, in laboratory quantities of matter, thermal radiation is weak and can be practically nearly ignored. But, for example, atmospheric physics is concerned with large amounts of matter, occupying cubic kilometers, that, taken as a whole, are not within the range of laboratory quantities; then thermal radiation cannot be ignored. Local equilibrium thermodynamics The terms 'classical irreversible thermodynamics' and 'local equilibrium thermodynamics' are sometimes used to refer to a version of non-equilibrium thermodynamics that demands certain simplifying assumptions, as follows. 
The assumptions have the effect of making each very small volume element of the system effectively homogeneous, or well-mixed, or without an effective spatial structure. Even within the thought-frame of classical irreversible thermodynamics, care is needed in choosing the independent variables for systems. In some writings, it is assumed that the intensive variables of equilibrium thermodynamics are sufficient as the independent variables for the task (such variables are considered to have no 'memory', and do not show hysteresis); in particular, local flow intensive variables are not admitted as independent variables; local flows are considered as dependent on quasi-static local intensive variables. Also it is assumed that the local entropy density is the same function of the other local intensive variables as in equilibrium; this is called the local thermodynamic equilibrium assumption (see also Keizer (1987)). Radiation is ignored because it is transfer of energy between regions, which can be remote from one another. In the classical irreversible thermodynamic approach, there is allowed spatial variation from infinitesimal volume element to adjacent infinitesimal volume element, but it is assumed that the global entropy of the system can be found by simple spatial integration of the local entropy density. This approach assumes spatial and temporal continuity and even differentiability of locally defined intensive variables such as temperature and internal energy density. While these demands may appear severely constrictive, it has been found that the assumptions of local equilibrium hold for a wide variety of systems, including reacting interfaces, on the surfaces of catalysts, in confined systems such as zeolites, under temperature gradients as large as K m, and even in shock fronts moving at up to six times the speed of sound. In other writings, local flow variables are considered; these might be considered as classical by analogy with the time-invariant long-term time-averages of flows produced by endlessly repeated cyclic processes; examples with flows are in the thermoelectric phenomena known as the Seebeck and the Peltier effects, considered by Kelvin in the nineteenth century and by Lars Onsager in the twentieth. These effects occur at metal junctions, which were originally effectively treated as two-dimensional surfaces, with no spatial volume, and no spatial variation. Local equilibrium thermodynamics with materials with "memory" A further extension of local equilibrium thermodynamics is to allow that materials may have "memory", so that their constitutive equations depend not only on present values but also on past values of local equilibrium variables. Thus time comes into the picture more deeply than for time-dependent local equilibrium thermodynamics with memoryless materials, but fluxes are not independent variables of state. Extended irreversible thermodynamics Extended irreversible thermodynamics is a branch of non-equilibrium thermodynamics that goes outside the restriction to the local equilibrium hypothesis. The space of state variables is enlarged by including the fluxes of mass, momentum and energy and eventually higher order fluxes. The formalism is well-suited for describing high-frequency processes and small-length scales materials. 
Basic concepts There are many examples of stationary non-equilibrium systems, some very simple, like a system confined between two thermostats at different temperatures or the ordinary Couette flow, a fluid enclosed between two flat walls moving in opposite directions and defining non-equilibrium conditions at the walls. Laser action is also a non-equilibrium process, but it depends on departure from local thermodynamic equilibrium and is thus beyond the scope of classical irreversible thermodynamics; here a strong temperature difference is maintained between two molecular degrees of freedom (with molecular laser, vibrational and rotational molecular motion), the requirement for two component 'temperatures' in the one small region of space, precluding local thermodynamic equilibrium, which demands that only one temperature be needed. Damping of acoustic perturbations or shock waves are non-stationary non-equilibrium processes. Driven complex fluids, turbulent systems and glasses are other examples of non-equilibrium systems. The mechanics of macroscopic systems depends on a number of extensive quantities. It should be stressed that all systems are permanently interacting with their surroundings, thereby causing unavoidable fluctuations of extensive quantities. Equilibrium conditions of thermodynamic systems are related to the maximum property of the entropy. If the only extensive quantity that is allowed to fluctuate is the internal energy, all the other ones being kept strictly constant, the temperature of the system is measurable and meaningful. The system's properties are then most conveniently described using the thermodynamic potential Helmholtz free energy (A = U - TS), a Legendre transformation of the energy. If, next to fluctuations of the energy, the macroscopic dimensions (volume) of the system are left fluctuating, we use the Gibbs free energy (G = U + PV - TS), where the system's properties are determined both by the temperature and by the pressure. Non-equilibrium systems are much more complex and they may undergo fluctuations of more extensive quantities. The boundary conditions impose on them particular intensive variables, like temperature gradients or distorted collective motions (shear motions, vortices, etc.), often called thermodynamic forces. If free energies are very useful in equilibrium thermodynamics, it must be stressed that there is no general law defining stationary non-equilibrium properties of the energy as is the second law of thermodynamics for the entropy in equilibrium thermodynamics. That is why in such cases a more generalized Legendre transformation should be considered. This is the extended Massieu potential. By definition, the entropy (S) is a function of the collection of extensive quantities . Each extensive quantity has a conjugate intensive variable (a restricted definition of intensive variable is used here by comparison to the definition given in this link) so that: We then define the extended Massieu function as follows: where is the Boltzmann constant, whence The independent variables are the intensities. Intensities are global values, valid for the system as a whole. When boundaries impose to the system different local conditions, (e.g. temperature differences), there are intensive variables representing the average value and others representing gradients or higher moments. The latter are the thermodynamic forces driving fluxes of extensive properties through the system. 
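A sketch of the construction just described, with the entropy S a function of the extensive quantities X_i, their conjugate intensities I_i, and the extended Massieu function M (the placement of the Boltzmann constant k_B follows one common normalization and should be read as an assumption):

\[
I_i \;=\; \frac{\partial S}{\partial X_i}, \qquad
k_{\mathrm B}\,M \;=\; S \;-\; \sum_i I_i\,X_i, \qquad
k_{\mathrm B}\,\mathrm{d}M \;=\; -\sum_i X_i\,\mathrm{d}I_i ,
\]

so that, as stated above, the independent variables of M are the intensities I_i.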
It may be shown that the Legendre transformation changes the maximum condition of the entropy (valid at equilibrium) in a minimum condition of the extended Massieu function for stationary states, no matter whether at equilibrium or not. Stationary states, fluctuations, and stability In thermodynamics one is often interested in a stationary state of a process, allowing that the stationary state include the occurrence of unpredictable and experimentally unreproducible fluctuations in the state of the system. The fluctuations are due to the system's internal sub-processes and to exchange of matter or energy with the system's surroundings that create the constraints that define the process. If the stationary state of the process is stable, then the unreproducible fluctuations involve local transient decreases of entropy. The reproducible response of the system is then to increase the entropy back to its maximum by irreversible processes: the fluctuation cannot be reproduced with a significant level of probability. Fluctuations about stable stationary states are extremely small except near critical points (Kondepudi and Prigogine 1998, page 323). The stable stationary state has a local maximum of entropy and is locally the most reproducible state of the system. There are theorems about the irreversible dissipation of fluctuations. Here 'local' means local with respect to the abstract space of thermodynamic coordinates of state of the system. If the stationary state is unstable, then any fluctuation will almost surely trigger the virtually explosive departure of the system from the unstable stationary state. This can be accompanied by increased export of entropy. Local thermodynamic equilibrium The scope of present-day non-equilibrium thermodynamics does not cover all physical processes. A condition for the validity of many studies in non-equilibrium thermodynamics of matter is that they deal with what is known as local thermodynamic equilibrium. Ponderable matter Local thermodynamic equilibrium of matter (see also Keizer (1987) means that conceptually, for study and analysis, the system can be spatially and temporally divided into 'cells' or 'micro-phases' of small (infinitesimal) size, in which classical thermodynamical equilibrium conditions for matter are fulfilled to good approximation. These conditions are unfulfilled, for example, in very rarefied gases, in which molecular collisions are infrequent; and in the boundary layers of a star, where radiation is passing energy to space; and for interacting fermions at very low temperature, where dissipative processes become ineffective. When these 'cells' are defined, one admits that matter and energy may pass freely between contiguous 'cells', slowly enough to leave the 'cells' in their respective individual local thermodynamic equilibria with respect to intensive variables. One can think here of two 'relaxation times' separated by order of magnitude. The longer relaxation time is of the order of magnitude of times taken for the macroscopic dynamical structure of the system to change. The shorter is of the order of magnitude of times taken for a single 'cell' to reach local thermodynamic equilibrium. If these two relaxation times are not well separated, then the classical non-equilibrium thermodynamical concept of local thermodynamic equilibrium loses its meaning and other approaches have to be proposed, see for instance Extended irreversible thermodynamics. 
For example, in the atmosphere, the speed of sound is much greater than the wind speed; this favours the idea of local thermodynamic equilibrium of matter for atmospheric heat transfer studies at altitudes below about 60 km where sound propagates, but not above 100 km, where, because of the paucity of intermolecular collisions, sound does not propagate. Milne's definition in terms of radiative equilibrium Edward A. Milne, thinking about stars, gave a definition of 'local thermodynamic equilibrium' in terms of the thermal radiation of the matter in each small local 'cell'. He defined 'local thermodynamic equilibrium' in a 'cell' by requiring that it macroscopically absorb and spontaneously emit radiation as if it were in radiative equilibrium in a cavity at the temperature of the matter of the 'cell'. Then it strictly obeys Kirchhoff's law of equality of radiative emissivity and absorptivity, with a black body source function. The key to local thermodynamic equilibrium here is that the rate of collisions of ponderable matter particles such as molecules should far exceed the rates of creation and annihilation of photons. Entropy in evolving systems It is pointed out by W.T. Grandy Jr that entropy, though it may be defined for a non-equilibrium system, is, when strictly considered, only a macroscopic quantity that refers to the whole system; it is not a dynamical variable and in general does not act as a local potential that describes local physical forces. Under special circumstances, however, one can think metaphorically as if the thermal variables behaved like local physical forces. The approximation that constitutes classical irreversible thermodynamics is built on this metaphoric thinking. This point of view has much in common with the concept and the use of entropy in continuum thermomechanics, which evolved completely independently of statistical mechanics and maximum-entropy principles. Entropy in non-equilibrium To describe deviation of the thermodynamic system from equilibrium, in addition to the constitutive variables that are used to fix the equilibrium state, as was described above, a set of variables called internal variables has been introduced. The equilibrium state is considered to be stable, and the main property of the internal variables, as measures of non-equilibrium of the system, is their tendency to disappear; the local law of disappearing can be written as a relaxation equation for each internal variable, with a characteristic relaxation time for that variable. It is convenient to take the equilibrium values of these variables to be zero. The relaxation equation is valid for small deviations from equilibrium; the dynamics of internal variables in the general case is considered by Pokrovskii. The entropy of the system in non-equilibrium is a function of the total set of variables. An essential contribution to the thermodynamics of non-equilibrium systems was made by Prigogine, when he and his collaborators investigated systems of chemically reacting substances. The stationary states of such systems exist due to exchange of both particles and energy with the environment. In section 8 of the third chapter of his book, Prigogine specified three contributions to the variation of entropy of the considered system at constant volume and temperature.
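The 'local law of disappearing' mentioned above is, in its simplest linear form, a relaxation equation (a sketch; \xi_i denotes the i-th internal variable and \tau_i its relaxation time, both illustrative symbols):

\[
\frac{\mathrm{d}\xi_i}{\mathrm{d}t} \;=\; -\,\frac{\xi_i}{\tau_i}, \qquad \xi_i(t) \;=\; \xi_i(0)\,e^{-t/\tau_i} \;\longrightarrow\; 0,
\]

consistent with taking the equilibrium values of the internal variables to be zero, so that each \xi_i measures a local departure from equilibrium.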
The increment of entropy can be calculated according to the formula The first term on the right hand side of the equation presents a stream of thermal energy into the system; the last term—a part of a stream of energy coming into the system with the stream of particles of substances that can be positive or negative, , where is chemical potential of substance . The middle term in (1) depicts energy dissipation (entropy production) due to the relaxation of internal variables . In the case of chemically reacting substances, which was investigated by Prigogine, the internal variables appear to be measures of incompleteness of chemical reactions, that is measures of how much the considered system with chemical reactions is out of equilibrium. The theory can be generalised, to consider any deviation from the equilibrium state as an internal variable, so that we consider the set of internal variables in equation (1) to consist of the quantities defining not only degrees of completeness of all chemical reactions occurring in the system, but also the structure of the system, gradients of temperature, difference of concentrations of substances and so on. Flows and forces The fundamental relation of classical equilibrium thermodynamics expresses the change in entropy of a system as a function of the intensive quantities temperature , pressure and chemical potential and of the differentials of the extensive quantities energy , volume and particle number . Following Onsager (1931,I), let us extend our considerations to thermodynamically non-equilibrium systems. As a basis, we need locally defined versions of the extensive macroscopic quantities , and and of the intensive macroscopic quantities , and . For classical non-equilibrium studies, we will consider some new locally defined intensive macroscopic variables. We can, under suitable conditions, derive these new variables by locally defining the gradients and flux densities of the basic locally defined macroscopic quantities. Such locally defined gradients of intensive macroscopic variables are called 'thermodynamic forces'. They 'drive' flux densities, perhaps misleadingly often called 'fluxes', which are dual to the forces. These quantities are defined in the article on Onsager reciprocal relations. Establishing the relation between such forces and flux densities is a problem in statistical mechanics. Flux densities may be coupled. The article on Onsager reciprocal relations considers the stable near-steady thermodynamically non-equilibrium regime, which has dynamics linear in the forces and flux densities. In stationary conditions, such forces and associated flux densities are by definition time invariant, as also are the system's locally defined entropy and rate of entropy production. Notably, according to Ilya Prigogine and others, when an open system is in conditions that allow it to reach a stable stationary thermodynamically non-equilibrium state, it organizes itself so as to minimize total entropy production defined locally. This is considered further below. One wants to take the analysis to the further stage of describing the behaviour of surface and volume integrals of non-stationary local quantities; these integrals are macroscopic fluxes and production rates. In general the dynamics of these integrals are not adequately described by linear equations, though in special cases they can be so described. 
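The fundamental relation invoked at the start of this subsection is the entropy form of the Gibbs equation (standard result; a single chemical species is shown for brevity):

\[
\mathrm{d}S \;=\; \frac{1}{T}\,\mathrm{d}U \;+\; \frac{p}{T}\,\mathrm{d}V \;-\; \frac{\mu}{T}\,\mathrm{d}N .
\]

In the local formulation sketched above, gradients of the intensive combinations 1/T, p/T and -\mu/T play the role of the thermodynamic forces, driving the conjugate flux densities of energy, volume and particles; this is the linear regime taken up next in the Onsager reciprocal relations.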
Onsager reciprocal relations Following Section III of Rayleigh (1873), Onsager (1931, I) showed that in the regime where both the flows are small and the thermodynamic forces vary slowly, the rate of creation of entropy is linearly related to the flows: and the flows are related to the gradient of the forces, parametrized by a matrix of coefficients conventionally denoted : from which it follows that: The second law of thermodynamics requires that the matrix be positive definite. Statistical mechanics considerations involving microscopic reversibility of dynamics imply that the matrix is symmetric. This fact is called the Onsager reciprocal relations. The generalization of the above equations for the rate of creation of entropy was given by Pokrovskii. Speculated extremal principles for non-equilibrium processes Until recently, prospects for useful extremal principles in this area have seemed clouded. Nicolis (1999) concludes that one model of atmospheric dynamics has an attractor which is not a regime of maximum or minimum dissipation; she says this seems to rule out the existence of a global organizing principle, and comments that this is to some extent disappointing; she also points to the difficulty of finding a thermodynamically consistent form of entropy production. Another top expert offers an extensive discussion of the possibilities for principles of extrema of entropy production and of dissipation of energy: Chapter 12 of Grandy (2008) is very cautious, and finds difficulty in defining the 'rate of internal entropy production' in many cases, and finds that sometimes for the prediction of the course of a process, an extremum of the quantity called the rate of dissipation of energy may be more useful than that of the rate of entropy production; this quantity appeared in Onsager's 1931 origination of this subject. Other writers have also felt that prospects for general global extremal principles are clouded. Such writers include Glansdorff and Prigogine (1971), Lebon, Jou and Casas-Vásquez (2008), and Šilhavý (1997). There is good experimental evidence that heat convection does not obey extremal principles for time rate of entropy production. Theoretical analysis shows that chemical reactions do not obey extremal principles for the second differential of time rate of entropy production. The development of a general extremal principle seems infeasible in the current state of knowledge. Applications Non-equilibrium thermodynamics has been successfully applied to describe biological processes such as protein folding/unfolding and transport through membranes. It is also used to give a description of the dynamics of nanoparticles, which can be out of equilibrium in systems where catalysis and electrochemical conversion is involved. Also, ideas from non-equilibrium thermodynamics and the informatic theory of entropy have been adapted to describe general economic systems. See also Time crystal Dissipative system Entropy production Extremal principles in non-equilibrium thermodynamics Self-organization Autocatalytic reactions and order creation Self-organizing criticality Bogoliubov-Born-Green-Kirkwood-Yvon hierarchy of equations Boltzmann equation Vlasov equation Maxwell's demon Information entropy Spontaneous symmetry breaking Autopoiesis Maximum power principle References Sources Callen, H.B. (1960/1985). Thermodynamics and an Introduction to Thermostatistics, (1st edition 1960) 2nd edition 1985, Wiley, New York, . Eu, B.C. (2002). Generalized Thermodynamics. 
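In symbols, the linear regime described above can be sketched as follows (J_i are the flux densities, X_i the conjugate thermodynamic forces, \sigma the rate of entropy production per unit volume, and L the matrix of phenomenological coefficients; the notation is illustrative):

\[
\sigma \;=\; \sum_i J_i\,X_i, \qquad
J_i \;=\; \sum_j L_{ij}\,X_j
\qquad\Longrightarrow\qquad
\sigma \;=\; \sum_{i,j} L_{ij}\,X_i X_j \;\ge\; 0,
\]

with the second law requiring, as stated above, that L be positive definite, and microscopic reversibility giving the Onsager reciprocal relations L_{ij} = L_{ji} (for state variables even under time reversal and in the absence of magnetic fields).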
The Thermodynamics of Irreversible Processes and Generalized Hydrodynamics, Kluwer Academic Publishers, Dordrecht, . Glansdorff, P., Prigogine, I. (1971). Thermodynamic Theory of Structure, Stability, and Fluctuations, Wiley-Interscience, London, 1971, . Grandy, W.T. Jr (2008). Entropy and the Time Evolution of Macroscopic Systems. Oxford University Press. . Gyarmati, I. (1967/1970). Non-equilibrium Thermodynamics. Field Theory and Variational Principles, translated from the Hungarian (1967) by E. Gyarmati and W.F. Heinz, Springer, Berlin. Lieb, E.H., Yngvason, J. (1999). 'The physics and mathematics of the second law of thermodynamics', Physics Reports, 310: 1–96. See also this. Further reading Ziegler, Hans (1977): An introduction to Thermomechanics. North Holland, Amsterdam. . Second edition (1983) . Kleidon, A., Lorenz, R.D., editors (2005). Non-equilibrium Thermodynamics and the Production of Entropy, Springer, Berlin. . Prigogine, I. (1955/1961/1967). Introduction to Thermodynamics of Irreversible Processes. 3rd edition, Wiley Interscience, New York. Zubarev D. N. (1974): Nonequilibrium Statistical Thermodynamics. New York, Consultants Bureau. ; . Keizer, J. (1987). Statistical Thermodynamics of Nonequilibrium Processes, Springer-Verlag, New York, . Zubarev D. N., Morozov V., Ropke G. (1996): Statistical Mechanics of Nonequilibrium Processes: Basic Concepts, Kinetic Theory. John Wiley & Sons. . Zubarev D. N., Morozov V., Ropke G. (1997): Statistical Mechanics of Nonequilibrium Processes: Relaxation and Hydrodynamic Processes. John Wiley & Sons. . Tuck, Adrian F. (2008). Atmospheric turbulence : a molecular dynamics perspective. Oxford University Press. . Grandy, W.T. Jr (2008). Entropy and the Time Evolution of Macroscopic Systems. Oxford University Press. . Kondepudi, D., Prigogine, I. (1998). Modern Thermodynamics: From Heat Engines to Dissipative Structures. John Wiley & Sons, Chichester. . de Groot S.R., Mazur P. (1984). Non-Equilibrium Thermodynamics (Dover). Ramiro Augusto Salazar La Rotta. (2011). The Non-Equilibrium Thermodynamics, Perpetual External links Stephan Herminghaus' Dynamics of Complex Fluids Department at the Max Planck Institute for Dynamics and Self Organization Non-equilibrium Statistical Thermodynamics applied to Fluid Dynamics and Laser Physics - 1992- book by Xavier de Hemptinne. Nonequilibrium Thermodynamics of Small Systems - PhysicsToday.org Into the Cool - 2005 book by Dorion Sagan and Eric D. Schneider, on nonequilibrium thermodynamics and evolutionary theory. "Thermodynamics "beyond" local equilibrium" Branches of thermodynamics
0.776316
0.989595
0.768239
Primary energy
Primary energy (PE) is the energy found in nature that has not been subjected to any human engineered conversion process. It encompasses energy contained in raw fuels and other forms of energy, including waste, received as input to a system. Primary energy can be non-renewable or renewable. Total primary energy supply (TPES) is the sum of production and imports, plus or minus stock changes, minus exports and international bunker storage. The International Recommendations for Energy Statistics (IRES) prefers total energy supply (TES) to refer to this indicator. These expressions are often used to describe the total energy supply of a national territory. Secondary energy is a carrier of energy, such as electricity. These are produced by conversion from a primary energy source. Primary energy is used as a measure in energy statistics in the compilation of energy balances, as well as in the field of energetics. In energetics, a primary energy source (PES) refers to the energy forms required by the energy sector to generate the supply of energy carriers used by human society. Primary energy only counts raw energy and not usable energy and fails to account well for energy losses, particularly the large losses in thermal sources. It therefore generally grossly undercounts non thermal renewable energy sources . Examples of sources Primary energy sources should not be confused with the energy system components (or conversion processes) through which they are converted into energy carriers. Usable energy Primary energy sources are transformed in energy conversion processes to more convenient forms of energy that can directly be used by society, such as electrical energy, refined fuels, or synthetic fuels such as hydrogen fuel. In the field of energetics, these forms are called energy carriers and correspond to the concept of "secondary energy" in energy statistics. Conversion to energy carriers (or secondary energy) Energy carriers are energy forms which have been transformed from primary energy sources. Electricity is one of the most common energy carriers, being transformed from various primary energy sources such as coal, oil, natural gas, and wind. Electricity is particularly useful since it has low entropy (is highly ordered) and so can be converted into other forms of energy very efficiently. District heating is another example of secondary energy. According to the laws of thermodynamics, primary energy sources cannot be produced. They must be available to society to enable the production of energy carriers. Conversion efficiency varies. For thermal energy, electricity and mechanical energy production is limited by Carnot's theorem, and generates a lot of waste heat. Other non-thermal conversions can be more efficient. For example, while wind turbines do not capture all of the wind's energy, they have a high conversion efficiency and generate very little waste heat since wind energy is low entropy. In principle solar photovoltaic conversions could be very efficient, but current conversion can only be done well for narrow ranges of wavelength, whereas solar thermal is also subject to Carnot efficiency limits. Hydroelectric power is also very ordered, and converted very efficiently. The amount of usable energy is the exergy of a system. Site and source energy Site energy is the term used in North America for the amount of end-use energy of all forms consumed at a specified location. 
This can be a mix of primary energy (such as natural gas burned at the site) and secondary energy (such as electricity). Site energy is measured at the campus, building, or sub-building level and is the basis for energy charges on utility bills. Source energy, in contrast, is the term used in North America for the amount of primary energy consumed in order to provide a facility's site energy. It is always greater than the site energy, as it includes all site energy and adds to it the energy lost during transmission, delivery, and conversion. While source or primary energy provides a more complete picture of energy consumption, it cannot be measured directly and must be calculated using conversion factors from site energy measurements. For electricity, a typical value is three units of source energy for one unit of site energy. However, this can vary considerably depending on factors such as the primary energy source or fuel type, the type of power plant, and the transmission infrastructure. One full set of conversion factors is available as technical reference from Energy STAR. Either site or source energy can be an appropriate metric when comparing or analyzing energy use of different facilities. The U.S Energy Information Administration, for example, uses primary (source) energy for its energy overviews but site energy for its Commercial Building Energy Consumption Survey and Residential Building Energy Consumption Survey. The US Environmental Protection Agency's Energy STAR program recommends using source energy, and the US Department of Energy uses site energy in its definition of a zero net energy building. Conversion factor conventions Where primary energy is used to describe fossil fuels, the embodied energy of the fuel is available as thermal energy and around two thirds is typically lost in conversion to electrical or mechanical energy. There are very much less significant conversion losses when hydroelectricity, wind and solar power produce electricity, but today's UN conventions on energy statistics counts the electricity made from hydroelectricity, wind and solar as the primary energy itself for these sources. One consequence of employing primary energy as an energy metric is that the contribution of hydro, wind and solar energy is under reported compared to fossil energy sources, and there is hence an international debate on how to count energy from non thermal renewables, with many estimates having them undercounted by a factor of about three. The false notion that all primary energy from thermal fossil fuel sources has to be replaced by an equivalent amount of non thermal renewables (which is not necessary as conversion losses do not need to be replaced) has been termed the "primary energy fallacy". See also Energy industry Energy development Energy mix Energy system List of countries by total primary energy consumption and production Notes References Further reading Kydes, Andy (Lead Author); Cutler J. Cleveland (Topic Editor). 2007. "Primary energy." In: Encyclopedia of Earth. Eds. Cutler J. Cleveland (Washington, D.C.: Environmental Information Coalition, National Council for Science and the Environment). [First published in the Encyclopedia of Earth June 1, 2006; Last revised August 14, 2007; Retrieved November 15, 2007. External links The Encyclopedia of Earth: Primary energy Our Energy Futures glossary: Primary Energy Sources Energy Thermodynamics
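A hedged numerical sketch of the site-to-source conversion described above, using the 'typical value' of three quoted for electricity (actual factors vary with fuel mix, plant type and grid):

\[
E_{\text{source}} \;=\; f \times E_{\text{site}}, \qquad f_{\text{electricity}} \approx 3
\;\Longrightarrow\;
100\ \text{kWh of site electricity} \;\approx\; 300\ \text{kWh of source (primary) energy}.
\]

The factor of roughly three for electricity largely reflects the thermal-conversion and delivery losses discussed earlier; for thermal generation the conversion efficiency is bounded by the Carnot limit \eta \le 1 - T_c/T_h, where T_c and T_h are the absolute temperatures of the cold sink and hot source.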
0.778055
0.987371
0.76823
Gravitational potential
In classical mechanics, the gravitational potential is a scalar potential associating with each point in space the work (energy transferred) per unit mass that would be needed to move an object to that point from a fixed reference point in the conservative gravitational field. It is analogous to the electric potential with mass playing the role of charge. The reference point, where the potential is zero, is by convention infinitely far away from any mass, resulting in a negative potential at any finite distance. Their similarity is correlated with both associated fields having conservative forces. Mathematically, the gravitational potential is also known as the Newtonian potential and is fundamental in the study of potential theory. It may also be used for solving the electrostatic and magnetostatic fields generated by uniformly charged or polarized ellipsoidal bodies. Potential energy The gravitational potential (V) at a location is the gravitational potential energy (U) at that location per unit mass: where m is the mass of the object. Potential energy is equal (in magnitude, but negative) to the work done by the gravitational field moving a body to its given position in space from infinity. If the body has a mass of 1 kilogram, then the potential energy to be assigned to that body is equal to the gravitational potential. So the potential can be interpreted as the negative of the work done by the gravitational field moving a unit mass in from infinity. In some situations, the equations can be simplified by assuming a field that is nearly independent of position. For instance, in a region close to the surface of the Earth, the gravitational acceleration, g, can be considered constant. In that case, the difference in potential energy from one height to another is, to a good approximation, linearly related to the difference in height: Mathematical form The gravitational potential V at a distance x from a point mass of mass M can be defined as the work W that needs to be done by an external agent to bring a unit mass in from infinity to that point: where G is the gravitational constant, and F is the gravitational force. The product GM is the standard gravitational parameter and is often known to higher precision than G or M separately. The potential has units of energy per mass, e.g., J/kg in the MKS system. By convention, it is always negative where it is defined, and as x tends to infinity, it approaches zero. The gravitational field, and thus the acceleration of a small body in the space around the massive object, is the negative gradient of the gravitational potential. Thus the negative of a negative gradient yields positive acceleration toward a massive object. Because the potential has no angular components, its gradient is where x is a vector of length x pointing from the point mass toward the small body and is a unit vector pointing from the point mass toward the small body. The magnitude of the acceleration therefore follows an inverse square law: The potential associated with a mass distribution is the superposition of the potentials of point masses. If the mass distribution is a finite collection of point masses, and if the point masses are located at the points x1, ..., xn and have masses m1, ..., mn, then the potential of the distribution at the point x is If the mass distribution is given as a mass measure dm on three-dimensional Euclidean space R3, then the potential is the convolution of with dm. 
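In symbols (standard forms consistent with the definitions above; x denotes the distance from a point mass M, and \mathbf{x}_1,\dots,\mathbf{x}_n, m_1,\dots,m_n the positions and masses of a finite collection of point masses):

\[
V = \frac{U}{m}, \qquad
V(x) = -\,\frac{GM}{x}, \qquad
\mathbf{a} \;=\; -\nabla V \;=\; -\,\frac{GM}{x^{2}}\,\hat{\mathbf{x}},
\]
\[
V(\mathbf{x}) \;=\; -\sum_{i=1}^{n} \frac{G\,m_i}{\lVert \mathbf{x} - \mathbf{x}_i \rVert}
\qquad \text{(finite collection of point masses)};
\]

for a continuous distribution described by a mass measure dm, the sum is replaced by the corresponding convolution integral, written out next.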
In good cases this equals the integral where is the distance between the points x and r. If there is a function ρ(r) representing the density of the distribution at r, so that , where dv(r) is the Euclidean volume element, then the gravitational potential is the volume integral If V is a potential function coming from a continuous mass distribution ρ(r), then ρ can be recovered using the Laplace operator, : This holds pointwise whenever ρ is continuous and is zero outside of a bounded set. In general, the mass measure dm can be recovered in the same way if the Laplace operator is taken in the sense of distributions. As a consequence, the gravitational potential satisfies Poisson's equation. See also Green's function for the three-variable Laplace equation and Newtonian potential. The integral may be expressed in terms of known transcendental functions for all ellipsoidal shapes, including the symmetrical and degenerate ones. These include the sphere, where the three semi axes are equal; the oblate (see reference ellipsoid) and prolate spheroids, where two semi axes are equal; the degenerate ones where one semi axes is infinite (the elliptical and circular cylinder) and the unbounded sheet where two semi axes are infinite. All these shapes are widely used in the applications of the gravitational potential integral (apart from the constant G, with 𝜌 being a constant charge density) to electromagnetism. Spherical symmetry A spherically symmetric mass distribution behaves to an observer completely outside the distribution as though all of the mass was concentrated at the center, and thus effectively as a point mass, by the shell theorem. On the surface of the earth, the acceleration is given by so-called standard gravity g, approximately 9.8 m/s2, although this value varies slightly with latitude and altitude. The magnitude of the acceleration is a little larger at the poles than at the equator because Earth is an oblate spheroid. Within a spherically symmetric mass distribution, it is possible to solve Poisson's equation in spherical coordinates. Within a uniform spherical body of radius R, density ρ, and mass m, the gravitational force g inside the sphere varies linearly with distance r from the center, giving the gravitational potential inside the sphere, which is which differentiably connects to the potential function for the outside of the sphere (see the figure at the top). General relativity In general relativity, the gravitational potential is replaced by the metric tensor. When the gravitational field is weak and the sources are moving very slowly compared to light-speed, general relativity reduces to Newtonian gravity, and the metric tensor can be expanded in terms of the gravitational potential. Multipole expansion The potential at a point is given by The potential can be expanded in a series of Legendre polynomials. Represent the points x and r as position vectors relative to the center of mass. The denominator in the integral is expressed as the square root of the square to give where, in the last integral, and is the angle between x and r. (See "mathematical form".) The integrand can be expanded as a Taylor series in , by explicit calculation of the coefficients. A less laborious way of achieving the same result is by using the generalized binomial theorem. The resulting series is the generating function for the Legendre polynomials: valid for and . The coefficients Pn are the Legendre polynomials of degree n. 
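The relations referred to in this passage, written out (standard results; \rho is the mass density, R and M the radius and mass of the uniform sphere, and in the last formula t = r/\lVert\mathbf{x}\rVert and X = \cos\theta):

\[
V(\mathbf{x}) \;=\; -\int \frac{G\,\rho(\mathbf{r})}{\lVert \mathbf{x} - \mathbf{r}\rVert}\,\mathrm{d}v(\mathbf{r}),
\qquad
\nabla^{2} V \;=\; 4\pi G \rho \quad \text{(Poisson's equation)},
\]
\[
V_{\text{inside}}(r) \;=\; -\,\frac{GM}{2R^{3}}\left(3R^{2} - r^{2}\right) \quad (r \le R),
\qquad
\frac{1}{\sqrt{1 - 2Xt + t^{2}}} \;=\; \sum_{n=0}^{\infty} P_n(X)\,t^{n} \quad (|t| < 1).
\]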
Therefore, the Taylor coefficients of the integrand are given by the Legendre polynomials in . So the potential can be expanded in a series that is convergent for positions x such that for all mass elements of the system (i.e., outside a sphere, centered at the center of mass, that encloses the system): The integral is the component of the center of mass in the direction; this vanishes because the vector x emanates from the center of mass. So, bringing the integral under the sign of the summation gives This shows that elongation of the body causes a lower potential in the direction of elongation, and a higher potential in perpendicular directions, compared to the potential due to a spherical mass, if we compare cases with the same distance to the center of mass. (If we compare cases with the same distance to the surface, the opposite is true.) Numerical values The absolute value of gravitational potential at a number of locations with regards to the gravitation from the Earth, the Sun, and the Milky Way is given in the following table; i.e. an object at Earth's surface would need 60 MJ/kg to "leave" Earth's gravity field, another 900 MJ/kg to also leave the Sun's gravity field and more than 130 GJ/kg to leave the gravity field of the Milky Way. The potential is half the square of the escape velocity. Compare the gravity at these locations. See also Applications of Legendre polynomials in physics Standard gravitational parameter (GM) Geoid Geopotential Geopotential model Notes References . . Energy (physics) Gravity Potentials Scalar physical quantities
0.771968
0.995121
0.768202
Heat
In thermodynamics, heat is energy in transfer between a thermodynamic system and its surroundings by modes other than thermodynamic work and transfer of matter. Such modes are microscopic, mainly thermal conduction, radiation, and friction, as distinct from the macroscopic modes, thermodynamic work and transfer of matter. For a closed system (transfer of matter excluded), the heat involved in a process is the difference in internal energy between the final and initial states of a system, and subtracting the work done in the process. For a closed system, this is the formulation of the first law of thermodynamics. Calorimetry is measurement of quantity of energy transferred as heat by its effect on the states of interacting bodies, for example, by the amount of ice melted or by change in temperature of a body. In the International System of Units (SI), the unit of measurement for heat, as a form of energy, is the joule (J). With various other meanings, the word 'heat' is also used in engineering, and it occurs also in ordinary language, but such are not the topic of the present article. Notation and units As a form of energy, heat has the unit joule (J) in the International System of Units (SI). In addition, many applied branches of engineering use other, traditional units, such as the British thermal unit (BTU) and the calorie. The standard unit for the rate of heating is the watt (W), defined as one joule per second. The symbol for heat was introduced by Rudolf Clausius and Macquorn Rankine in . Heat released by a system into its surroundings is by convention, as a contributor to internal energy, a negative quantity; when a system absorbs heat from its surroundings, it is positive. Heat transfer rate, or heat flow per unit time, is denoted by , but it is not a time derivative of a function of state (which can also be written with the dot notation) since heat is not a function of state. Heat flux is defined as rate of heat transfer per unit cross-sectional area (watts per square metre). History As a common noun, English heat or warmth (just as French chaleur, German Wärme, Latin calor, Greek θάλπος, etc.) refers to (the human perception of) either thermal energy or temperature. From an early time, the French technical term chaleur used by Sadi Carnot was taken as equivalent to the English heat and German Wärme (lit. "warmth", while the equivalent of heat would be German Hitze). Speculation on "heat" as a separate form of matter has a long history, identified as caloric theory, phlogiston theory, and fire. Many careful and accurate historical experiments practically exclude friction, mechanical and thermodynamic work and matter transfer, investigating transfer of energy only by thermal conduction and radiation. Such experiments give impressive rational support to the caloric theory of heat. To account also for changes of internal energy due to friction, and mechanical and thermodynamic work, the caloric theory was, around the end of the eighteenth century, replaced by the "mechanical" theory of heat, which is accepted today. 17th century–early 18th century "Heat is motion" As scientists of the early modern age began to adopt the view that matter consists of particles, a close relationship between heat and motion was widely surmised, or even the equivalency of the concepts, boldly expressed by the English philosopher Francis Bacon in 1620. "It must not be thought that heat generates motion, or motion heat ... but that the very essence of heat ... is motion and nothing else." 
A distinction between heat and temperature was not clearly articulated until the mid-18th century.Heat has been discussed in ordinary language by philosophers. An example is this 1720 quote from John Locke: This source was repeatedly quoted by Joule. Also the transfer of heat was explained by the motion of particles. Scottish physicist and chemist Joseph Black wrote: "Many have supposed that heat is a tremulous ... motion of the particles of matter, which ... motion they imagined to be communicated from one body to another." John Tyndall's Heat Considered as Mode of Motion (1863) was instrumental in popularizing the idea of heat as motion to the English-speaking public. The theory was developed in academic publications in French, English and German. 18th century Heat vs. temperature Unstated distinctions between heat and “hotness” may be very old, heat seen as something dependent on the quantity of a hot substance, “heat”, vaguely perhaps distinct from the quality of "hotness". In 1723, the English mathematician Brook Taylor measured the temperature—the expansion of the liquid in a thermometer—of mixtures of various amounts of hot water in cold water. As expected, the increase in temperature was in proportion to the proportion of hot water in the mixture. The distinction between heat and temperature is implicitly expressed in the last sentence of his report. Evaporative cooling In 1748, an account was published in The Edinburgh Physical and Literary Essays of an experiment by the Scottish physician and chemist William Cullen. Cullen had used an air pump to lower the pressure in a container with diethyl ether. The ether boiled, while no heat was withdrawn from it, and its temperature decreased. And in 1758 on a warm day in Cambridge, England, Benjamin Franklin and fellow scientist John Hadley experimented by continually wetting the ball of a mercury thermometer with ether and using bellows to evaporate the ether. With each subsequent evaporation, the thermometer read a lower temperature, eventually reaching . Discovery of specific heat In 1756 or soon thereafter, Joseph Black, Cullen’s friend and former assistant, began an extensive study of heat. In 1760 Black realized that when two different substances of equal mass but different temperatures are mixed, the changes in number of degrees in the two substances differ, though the heat gained by the cooler substance and lost by the hotter is the same. Black related an experiment conducted by Daniel Gabriel Fahrenheit on behalf of Dutch physician Herman Boerhaave. For clarity, he then described a hypothetical but realistic variant of the experiment: If equal masses of 100 °F water and 150 °F mercury are mixed, the water temperature increases by 20 ° and the mercury temperature decreases by 30 ° (to 120 °F), though the heat gained by the water and lost by the mercury is the same. This clarified the distinction between heat and temperature. It also introduced the concept of specific heat capacity, being different for different substances. Black wrote: “Quicksilver [mercury] ... has less capacity for the matter of heat than water.” Degrees of heat In his investigations of specific heat, Black used a unit of heat he called "degrees of heat"—as opposed to just "degrees" [of temperature]. This unit was context-dependent and could only be used when circumstances were identical. It was based on change in temperature multiplied by the mass of the substance involved. 
Discovery of latent heat It was known that when the air temperature rises above freezing—air then becoming the obvious heat source—snow melts very slowly and the temperature of the melted snow is close to its freezing point. In 1757, Black started to investigate whether heat, therefore, was required for the melting of a solid, independent of any rise in temperature. As far as Black knew, the general view at that time was that melting was inevitably accompanied by a small increase in temperature, and that no more heat was required than what the increase in temperature would require in itself. Soon, however, Black was able to show that much more heat was required during melting than could be explained by the increase in temperature alone. He was also able to show that heat is released by a liquid during its freezing; again, much more than could be explained by the decrease of its temperature alone. In 1762, Black announced the following research and results to a society of professors at the University of Glasgow. Black had placed equal masses of ice at 32 °F (0 °C) and water at 33 °F (0.6 °C) respectively in two identical, well separated containers. The water and the ice were both evenly heated to 40 °F by the air in the room, which was at a constant 47 °F (8 °C). The water had therefore received 40 – 33 = 7 "degrees of heat". The ice had been heated for 21 times longer and had therefore received 7 × 21 = 147 "degrees of heat". The temperature of the ice had increased by 8 °F, so the ice had absorbed 8 "degrees of heat", which Black called sensible heat, manifest as temperature change, which could be felt and measured. The remaining 147 – 8 = 139 "degrees of heat" were absorbed as latent heat, manifest as phase change rather than as temperature change. Black next showed that a water temperature of 176 °F was needed to melt an equal mass of ice until it was all 32 °F. So now 176 – 32 = 144 "degrees of heat" seemed to be needed to melt the ice. The modern value for the heat of fusion of ice would be 143 "degrees of heat" on the same scale (79.5 "degrees of heat Celsius"). Finally, Black raised the temperature of one mass of water and vaporized an equal mass of water, in each case by even heating. He showed that 830 "degrees of heat" were needed for the vaporization, again based on the time required. The modern value for the heat of vaporization of water would be 967 "degrees of heat" on the same scale. First calorimeter A calorimeter is a device used for measuring heat capacity, as well as the heat absorbed or released in chemical reactions or physical changes. In 1780, French chemist Antoine Lavoisier used such an apparatus—which he named 'calorimeter'—to investigate the heat released by respiration, by observing how this heat melted snow surrounding his apparatus. A so-called ice calorimeter was used in 1782–83 by Lavoisier and his colleague Pierre-Simon Laplace to measure the heat released in various chemical reactions. The heat so released melted a specific amount of ice, and the heat required for the melting of a certain amount of ice was known beforehand. Classical thermodynamics The modern understanding of heat is often partly attributed to Benjamin Thompson's 1798 mechanical theory of heat (An Experimental Enquiry Concerning the Source of the Heat which is Excited by Friction), postulating a mechanical equivalent of heat. A collaboration between Nicolas Clément and Sadi Carnot (Reflections on the Motive Power of Fire) in the 1820s pursued similar lines of thought.
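Black's ice experiment reduces to simple arithmetic in his "degrees of heat" unit. The sketch below just reproduces the numbers quoted above, taking heating time as proportional to heat received, as Black did; the comparison values are the ones given in the text.

```python
# Black's 1762 ice experiment, reckoned in his "degrees of heat".
water_heat = 40 - 33            # 7 degrees of heat received by the 33 °F water
ice_heat = water_heat * 21      # the ice was heated 21 times longer -> 147 degrees

sensible = 40 - 32              # 8 degrees: temperature rise of the melted ice
latent = ice_heat - sensible    # 139 degrees: absorbed in melting, not as temperature rise

# Black's melting experiment: 176 °F water just melts an equal mass of 32 °F ice.
fusion_estimate = 176 - 32      # 144 degrees of heat
fusion_modern = 143             # modern value on the same scale, as quoted in the text

print(ice_heat, sensible, latent)        # 147 8 139
print(fusion_estimate, fusion_modern)    # 144 143
```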
In 1842, Julius Robert Mayer frictionally generated heat in paper pulp and measured the temperature rise. In 1845, Joule published a paper entitled The Mechanical Equivalent of Heat, in which he specified a numerical value for the amount of mechanical work required to "produce a unit of heat", based on heat production by friction in the passage of electricity through a resistor and in the rotation of a paddle in a vat of water. The theory of classical thermodynamics matured in the 1850s to 1860s. Clausius (1850) In 1850, Clausius, responding to Joule's experimental demonstrations of heat production by friction, rejected the caloric doctrine of conservation of heat, writing: If we assume that heat, like matter, cannot be lessened in quantity, we must also assume that it cannot be increased; but it is almost impossible to explain the ascension of temperature brought about by friction otherwise than by assuming an actual increase of heat. The careful experiments of Joule, who developed heat in various ways by the application of mechanical force, establish almost to a certainty, not only the possibility of increasing the quantity of heat, but also the fact that the newly-produced heat is proportional to the work expended in its production. It may be remarked further, that many facts have lately transpired which tend to overthrow the hypothesis that heat is itself a body, and to prove that it consists in a motion of the ultimate particles of bodies. The process function was introduced by Rudolf Clausius in 1850. Clausius described it with the German compound Wärmemenge, translated as "amount of heat". James Clerk Maxwell (1871) James Clerk Maxwell in his 1871 Theory of Heat outlines four stipulations for the definition of heat: It is something which may be transferred from one body to another, according to the second law of thermodynamics. It is a measurable quantity, and so can be treated mathematically. It cannot be treated as a material substance, because it may be transformed into something that is not a material substance, e.g., mechanical work. Heat is one of the forms of energy. Bryan (1907) In 1907, G.H. Bryan published an investigation of the foundations of thermodynamics, Thermodynamics: an Introductory Treatise dealing mainly with First Principles and their Direct Applications, B.G. Teubner, Leipzig. Bryan was writing when thermodynamics had been established empirically, but people were still interested to specify its logical structure. The 1909 work of Carathéodory also belongs to this historical era. Bryan was a physicist while Carathéodory was a mathematician. Bryan started his treatise with an introductory chapter on the notions of heat and of temperature. He gives an example of where the notion of heating as raising a body's temperature contradicts the notion of heating as imparting a quantity of heat to that body. He defined an adiabatic transformation as one in which the body neither gains nor loses heat. This is not quite the same as defining an adiabatic transformation as one that occurs to a body enclosed by walls impermeable to radiation and conduction. He recognized calorimetry as a way of measuring quantity of heat. He recognized water as having a temperature of maximum density. This makes water unsuitable as a thermometric substance around that temperature. He intended to remind readers of why thermodynamicists preferred an absolute scale of temperature, independent of the properties of a particular thermometric substance. 
His second chapter started with the recognition of friction as a source of heat, by Benjamin Thompson, by Humphry Davy, by Robert Mayer, and by James Prescott Joule. He stated the First Law of Thermodynamics, or Mayer–Joule Principle as follows: When heat is transformed into work or conversely work is transformed into heat, the quantity of heat gained or lost is proportional to the quantity of work lost or gained. He wrote: If heat be measured in dynamical units the mechanical equivalent becomes equal to unity, and the equations of thermodynamics assume a simpler and more symmetrical form. He explained how the caloric theory of Lavoisier and Laplace made sense in terms of pure calorimetry, though it failed to account for conversion of work into heat by such mechanisms as friction and conduction of electricity. Having rationally defined quantity of heat, he went on to consider the second law, including the Kelvin definition of absolute thermodynamic temperature. In section 41, he wrote:          §41. Physical unreality of reversible processes. In Nature all phenomena are irreversible in a greater or less degree. The motions of celestial bodies afford the closest approximations to reversible motions, but motions which occur on this earth are largely retarded by friction, viscosity, electric and other resistances, and if the relative velocities of moving bodies were reversed, these resistances would still retard the relative motions and would not accelerate them as they should if the motions were perfectly reversible. He then stated the principle of conservation of energy. He then wrote: In connection with irreversible phenomena the following axioms have to be assumed.          (1) If a system can undergo an irreversible change it will do so.          (2) A perfectly reversible change cannot take place of itself; such a change can only be regarded as the limiting form of an irreversible change. On page 46, thinking of closed systems in thermal connection, he wrote: We are thus led to postulate a system in which energy can pass from one element to another otherwise than by the performance of mechanical work. On page 47, still thinking of closed systems in thermal connection, he wrote:          §58. Quantity of Heat. Definition. When energy flows from one system or part of a system to another otherwise than by the performance of work, the energy so transferred i[s] called heat. On page 48, he wrote:          § 59. When two bodies act thermically on one another the quantities of heat gained by one and lost by the other are not necessarily equal.          In the case of bodies at a distance, heat may be taken from or given to the intervening medium.          The quantity of heat received by any portion of the ether may be defined in the same way as that received by a material body. [He was thinking of thermal radiation.]          Another important exception occurs when sliding takes place between two rough bodies in contact. The algebraic sum of the works done is different from zero, because, although the action and reaction are equal and opposite the velocities of the parts of the bodies in contact are different. Moreover, the work lost in the process does not increase the mutual potential energy of the system and there is no intervening medium between the bodies. 
Unless the lost energy can be accounted for in other ways, (as when friction produces electrification), it follows from the Principle of Conservation of Energy that the algebraic sum of the quantities of heat gained by the two systems is equal to the quantity of work lost by friction. [This thought was echoed by Bridgman, as above.] Carathéodory (1909) A celebrated and frequent definition of heat in thermodynamics is based on the work of Carathéodory (1909), referring to processes in a closed system. Carathéodory was responding to a suggestion by Max Born that he examine the logical structure of thermodynamics. The internal energy U(Y) of a body in an arbitrary state Y can be determined by amounts of work adiabatically performed by the body on its surroundings when it starts from a reference state O. Such work is assessed through quantities defined in the surroundings of the body. It is supposed that such work can be assessed accurately, without error due to friction in the surroundings; friction in the body is not excluded by this definition. The adiabatic performance of work is defined in terms of adiabatic walls, which allow transfer of energy as work, but no other transfer, of energy or matter. In particular they do not allow the passage of energy as heat. According to this definition, work performed adiabatically is in general accompanied by friction within the thermodynamic system or body. On the other hand, according to Carathéodory (1909), there also exist non-adiabatic, diathermal walls, which are postulated to be permeable only to heat. For the definition of quantity of energy transferred as heat, it is customarily envisaged that an arbitrary state of interest Y is reached from the reference state O by a process with two components, one adiabatic and the other not adiabatic. For convenience one may say that the adiabatic component was the sum of work done by the body through volume change, through movement of the walls while the non-adiabatic wall was temporarily rendered adiabatic, and of isochoric adiabatic work. Then the non-adiabatic component is a process of energy transfer through the wall that passes only heat, newly made accessible for the purpose of this transfer, from the surroundings to the body. The change in internal energy to reach the state Y from the state O is the difference of the two amounts of energy transferred. Although Carathéodory himself did not state such a definition, following his work it is customary in theoretical studies to define heat, Q, to the body from its surroundings, in the combined process of change to state Y from the state O, as the change in internal energy, ΔU(Y), corrected for the amount of work, W, done by the body on its surroundings in the adiabatic component of that process, so that Q = ΔU(Y) + W. In this definition, for the sake of conceptual rigour, the quantity of energy transferred as heat is not specified directly in terms of the non-adiabatic process. It is defined through knowledge of precisely two variables, the change of internal energy and the amount of adiabatic work done, for the combined process of change from the reference state O to the arbitrary state Y. It is important that this does not explicitly involve the amount of energy transferred in the non-adiabatic component of the combined process. It is assumed here that the amount of energy required to pass from state O to state Y, the change of internal energy, is known, independently of the combined process, by a determination through a purely adiabatic process, like that for the determination of the internal energy U(Y) of an arbitrary state mentioned above.
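As a minimal sketch of the bookkeeping in this definition (the numbers and the function name are hypothetical, purely for illustration): the internal-energy change is presumed known from purely adiabatic determinations, and the heat is the residual left after accounting for the adiabatic work of the combined process.

```python
def heat_in_combined_process(delta_internal_energy, adiabatic_work_by_body):
    """Q = delta U + W: heat to the body, obtained as a residual quantity.

    delta_internal_energy: U(Y) - U(O), presumed measured beforehand through
                           purely adiabatic processes.
    adiabatic_work_by_body: work done by the body on its surroundings in the
                            adiabatic component of the combined process O -> Y.
    """
    return delta_internal_energy + adiabatic_work_by_body

# Illustrative numbers only (joules): the internal energy rises by 500 J while the
# body does 200 J of adiabatic work on its surroundings, so 700 J entered as heat.
print(heat_in_combined_process(500.0, 200.0))   # 700.0
```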
The rigour that is prized in this definition is that there is one and only one kind of energy transfer admitted as fundamental: energy transferred as work. Energy transfer as heat is considered as a derived quantity. The uniqueness of work in this scheme is considered to guarantee rigor and purity of conception. The conceptual purity of this definition, based on the concept of energy transferred as work as an ideal notion, relies on the idea that some frictionless and otherwise non-dissipative processes of energy transfer can be realized in physical actuality. The second law of thermodynamics, on the other hand, assures us that such processes are not found in nature. Before the rigorous mathematical definition of heat based on Carathéodory's 1909 paper, historically, heat, temperature, and thermal equilibrium were presented in thermodynamics textbooks as jointly primitive notions. Carathéodory introduced his 1909 paper thus: "The proposition that the discipline of thermodynamics can be justified without recourse to any hypothesis that cannot be verified experimentally must be regarded as one of the most noteworthy results of the research in thermodynamics that was accomplished during the last century." Referring to the "point of view adopted by most authors who were active in the last fifty years", Carathéodory wrote: "There exists a physical quantity called heat that is not identical with the mechanical quantities (mass, force, pressure, etc.) and whose variations can be determined by calorimetric measurements." James Serrin introduces an account of the theory of thermodynamics thus: "In the following section, we shall use the classical notions of heat, work, and hotness as primitive elements, ... That heat is an appropriate and natural primitive for thermodynamics was already accepted by Carnot. Its continued validity as a primitive element of thermodynamical structure is due to the fact that it synthesizes an essential physical concept, as well as to its successful use in recent work to unify different constitutive theories." This traditional kind of presentation of the basis of thermodynamics includes ideas that may be summarized by the statement that heat transfer is purely due to spatial non-uniformity of temperature, and is by conduction and radiation, from hotter to colder bodies. It is sometimes proposed that this traditional kind of presentation necessarily rests on "circular reasoning". This alternative approach to the definition of quantity of energy transferred as heat differs in logical structure from that of Carathéodory, recounted just above. This alternative approach admits calorimetry as a primary or direct way to measure quantity of energy transferred as heat. It relies on temperature as one of its primitive concepts, and used in calorimetry. It is presupposed that enough processes exist physically to allow measurement of differences in internal energies. Such processes are not restricted to adiabatic transfers of energy as work. They include calorimetry, which is the commonest practical way of finding internal energy differences. The needed temperature can be either empirical or absolute thermodynamic. In contrast, the Carathéodory way recounted just above does not use calorimetry or temperature in its primary definition of quantity of energy transferred as heat. The Carathéodory way regards calorimetry only as a secondary or indirect way of measuring quantity of energy transferred as heat. 
As recounted in more detail just above, the Carathéodory way regards quantity of energy transferred as heat in a process as primarily or directly defined as a residual quantity. It is calculated from the difference of the internal energies of the initial and final states of the system, and from the actual work done by the system during the process. That internal energy difference is supposed to have been measured in advance through processes of purely adiabatic transfer of energy as work, processes that take the system between the initial and final states. By the Carathéodory way it is presupposed as known from experiment that there actually physically exist enough such adiabatic processes, so that there need be no recourse to calorimetry for measurement of quantity of energy transferred as heat. This presupposition is essential but is explicitly labeled neither as a law of thermodynamics nor as an axiom of the Carathéodory way. In fact, the actual physical existence of such adiabatic processes is indeed mostly supposition, and those supposed processes have in most cases not been actually verified empirically to exist. Planck (1926) Over the years, for example in his 1879 thesis, but particularly in 1926, Planck advocated regarding the generation of heat by rubbing as the most specific way to define heat. Planck criticised Carathéodory for not attending to this. Carathéodory was a mathematician who liked to think in terms of adiabatic processes, and perhaps found friction too tricky to think about, while Planck was a physicist. Heat transfer Heat transfer between two bodies Referring to conduction, Partington writes: "If a hot body is brought in conducting contact with a cold body, the temperature of the hot body falls and that of the cold body rises, and it is said that a quantity of heat has passed from the hot body to the cold body." Referring to radiation, Maxwell writes: "In Radiation, the hotter body loses heat, and the colder body receives heat by means of a process occurring in some intervening medium which does not itself thereby become hot." Maxwell writes that convection as such "is not a purely thermal phenomenon". In thermodynamics, convection in general is regarded as transport of internal energy. If, however, the convection is enclosed and circulatory, then it may be regarded as an intermediary that transfers energy as heat between source and destination bodies, because it transfers only energy and not matter from the source to the destination body. In accordance with the first law for closed systems, energy transferred solely as heat leaves one body and enters another, changing the internal energies of each. Transfer, between bodies, of energy as work is a complementary way of changing internal energies. Though it is not logically rigorous from the viewpoint of strict physical concepts, a common form of words that expresses this is to say that heat and work are interconvertible. Cyclically operating engines that use only heat and work transfers have two thermal reservoirs, a hot and a cold one. They may be classified by the range of operating temperatures of the working body, relative to those reservoirs. In a heat engine, the working body is at all times colder than the hot reservoir and hotter than the cold reservoir. In a sense, it uses heat transfer to produce work. In a heat pump, the working body, at stages of the cycle, goes both hotter than the hot reservoir, and colder than the cold reservoir. In a sense, it uses work to produce heat transfer. 
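The performance claims in the next two subsections can be made concrete with the ideal (Carnot) limits, which depend only on the reservoir temperatures. The formulas below are the standard Carnot expressions rather than anything derived in this article, and the temperatures are illustrative.

```python
# Ideal (Carnot) performance limits for cyclic devices working between a hot
# reservoir at t_hot and a cold reservoir at t_cold (absolute temperatures, kelvin).
def carnot_engine_efficiency(t_hot, t_cold):
    # Fraction of the heat drawn from the hot reservoir that can emerge as work.
    return 1.0 - t_cold / t_hot

def carnot_heat_pump_cop(t_hot, t_cold):
    # Heat delivered to the hot reservoir per unit of work harnessed; exceeds unity.
    return t_hot / (t_hot - t_cold)

def carnot_refrigerator_cop(t_hot, t_cold):
    # Heat removed from the cold reservoir per unit of work harnessed.
    return t_cold / (t_hot - t_cold)

t_hot, t_cold = 500.0, 300.0   # illustrative reservoir temperatures
print(carnot_engine_efficiency(t_hot, t_cold))   # 0.4  (better when t_hot/t_cold is larger)
print(carnot_heat_pump_cop(t_hot, t_cold))       # 2.5  (better when the gap is smaller)
print(carnot_refrigerator_cop(t_hot, t_cold))    # 1.5
```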
Heat engine In classical thermodynamics, a commonly considered model is the heat engine. It consists of four bodies: the working body, the hot reservoir, the cold reservoir, and the work reservoir. A cyclic process leaves the working body in an unchanged state, and is envisaged as being repeated indefinitely often. Work transfers between the working body and the work reservoir are envisaged as reversible, and thus only one work reservoir is needed. But two thermal reservoirs are needed, because transfer of energy as heat is irreversible. A single cycle sees energy taken by the working body from the hot reservoir and sent to the two other reservoirs, the work reservoir and the cold reservoir. The hot reservoir always and only supplies energy, and the cold reservoir always and only receives energy. The second law of thermodynamics requires that no cycle can occur in which no energy is received by the cold reservoir. Heat engines achieve higher efficiency when the ratio of the hot reservoir's temperature to the cold reservoir's temperature is greater. Heat pump or refrigerator Another commonly considered model is the heat pump or refrigerator. Again there are four bodies: the working body, the hot reservoir, the cold reservoir, and the work reservoir. A single cycle starts with the working body colder than the cold reservoir, and then energy is taken in as heat by the working body from the cold reservoir. Then the work reservoir does work on the working body, adding more to its internal energy, making it hotter than the hot reservoir. The hot working body passes heat to the hot reservoir, but still remains hotter than the cold reservoir. Then, by allowing it to expand without passing heat to another body, the working body is made colder than the cold reservoir. It can now accept heat transfer from the cold reservoir to start another cycle. The device has transported energy from a colder to a hotter reservoir, but this is not regarded as being done by an inanimate agency; rather, it is regarded as being achieved by the harnessing of work. This is because work is supplied from the work reservoir, not just by a simple thermodynamic process, but by a cycle of thermodynamic operations and processes, which may be regarded as directed by an animate or harnessing agency. Accordingly, the cycle is still in accord with the second law of thermodynamics. The 'efficiency' of a heat pump (which exceeds unity) is best when the temperature difference between the hot and cold reservoirs is least. Functionally, such engines are used in two ways, distinguishing a target reservoir and a resource or surrounding reservoir. A heat pump transfers heat to the hot reservoir as the target from the resource or surrounding reservoir. A refrigerator transfers heat, from the cold reservoir as the target, to the resource or surrounding reservoir. The target reservoir may be regarded as leaking: when the target leaks heat to the surroundings, heat pumping is used; when the target leaks coldness to the surroundings, refrigeration is used. The engines harness work to overcome the leaks. Macroscopic view According to Planck, there are three main conceptual approaches to heat. One is the microscopic or kinetic theory approach. The other two are macroscopic approaches. One of the macroscopic approaches is through the law of conservation of energy taken as prior to thermodynamics, with a mechanical analysis of processes, for example in the work of Helmholtz. This mechanical view is taken in this article as currently customary for thermodynamic theory.
The other macroscopic approach is the thermodynamic one, which admits heat as a primitive concept, which contributes, by scientific induction to knowledge of the law of conservation of energy. This view is widely taken as the practical one, quantity of heat being measured by calorimetry. Bailyn also distinguishes the two macroscopic approaches as the mechanical and the thermodynamic. The thermodynamic view was taken by the founders of thermodynamics in the nineteenth century. It regards quantity of energy transferred as heat as a primitive concept coherent with a primitive concept of temperature, measured primarily by calorimetry. A calorimeter is a body in the surroundings of the system, with its own temperature and internal energy; when it is connected to the system by a path for heat transfer, changes in it measure heat transfer. The mechanical view was pioneered by Helmholtz and developed and used in the twentieth century, largely through the influence of Max Born. It regards quantity of heat transferred as heat as a derived concept, defined for closed systems as quantity of heat transferred by mechanisms other than work transfer, the latter being regarded as primitive for thermodynamics, defined by macroscopic mechanics. According to Born, the transfer of internal energy between open systems that accompanies transfer of matter "cannot be reduced to mechanics". It follows that there is no well-founded definition of quantities of energy transferred as heat or as work associated with transfer of matter. Nevertheless, for the thermodynamical description of non-equilibrium processes, it is desired to consider the effect of a temperature gradient established by the surroundings across the system of interest when there is no physical barrier or wall between system and surroundings, that is to say, when they are open with respect to one another. The impossibility of a mechanical definition in terms of work for this circumstance does not alter the physical fact that a temperature gradient causes a diffusive flux of internal energy, a process that, in the thermodynamic view, might be proposed as a candidate concept for transfer of energy as heat. In this circumstance, it may be expected that there may also be active other drivers of diffusive flux of internal energy, such as gradient of chemical potential which drives transfer of matter, and gradient of electric potential which drives electric current and iontophoresis; such effects usually interact with diffusive flux of internal energy driven by temperature gradient, and such interactions are known as cross-effects. If cross-effects that result in diffusive transfer of internal energy were also labeled as heat transfers, they would sometimes violate the rule that pure heat transfer occurs only down a temperature gradient, never up one. They would also contradict the principle that all heat transfer is of one and the same kind, a principle founded on the idea of heat conduction between closed systems. One might to try to think narrowly of heat flux driven purely by temperature gradient as a conceptual component of diffusive internal energy flux, in the thermodynamic view, the concept resting specifically on careful calculations based on detailed knowledge of the processes and being indirectly assessed. In these circumstances, if perchance it happens that no transfer of matter is actualized, and there are no cross-effects, then the thermodynamic concept and the mechanical concept coincide, as if one were dealing with closed systems. 
But when there is transfer of matter, the exact laws by which temperature gradient drives diffusive flux of internal energy, rather than being exactly knowable, mostly need to be assumed, and in many cases are practically unverifiable. Consequently, when there is transfer of matter, the calculation of the pure 'heat flux' component of the diffusive flux of internal energy rests on practically unverifiable assumptions. This is a reason to think of heat as a specialized concept that relates primarily and precisely to closed systems, and applicable only in a very restricted way to open systems. In many writings in this context, the term "heat flux" is used when what is meant is therefore more accurately called diffusive flux of internal energy; such usage of the term "heat flux" is a residue of older and now obsolete language usage that allowed that a body may have a "heat content". Microscopic view In the kinetic theory, heat is explained in terms of the microscopic motions and interactions of constituent particles, such as electrons, atoms, and molecules. The immediate meaning of the kinetic energy of the constituent particles is not as heat. It is as a component of internal energy. In microscopic terms, heat is a transfer quantity, and is described by a transport theory, not as steadily localized kinetic energy of particles. Heat transfer arises from temperature gradients or differences, through the diffuse exchange of microscopic kinetic and potential particle energy, by particle collisions and other interactions. An early and vague expression of this was made by Francis Bacon. Precise and detailed versions of it were developed in the nineteenth century. In statistical mechanics, for a closed system (no transfer of matter), heat is the energy transfer associated with a disordered, microscopic action on the system, associated with jumps in occupation numbers of the energy levels of the system, without change in the values of the energy levels themselves. It is possible for macroscopic thermodynamic work to alter the occupation numbers without change in the values of the system energy levels themselves, but what distinguishes transfer as heat is that the transfer is entirely due to disordered, microscopic action, including radiative transfer. A mathematical definition can be formulated for small increments of quasi-static adiabatic work in terms of the statistical distribution of an ensemble of microstates. Calorimetry Quantity of heat transferred can be measured by calorimetry, or determined through calculations based on other quantities. Calorimetry is the empirical basis of the idea of quantity of heat transferred in a process. The transferred heat is measured by changes in a body of known properties, for example, temperature rise, change in volume or length, or phase change, such as melting of ice. A calculation of quantity of heat transferred can rely on a hypothetical quantity of energy transferred as adiabatic work and on the first law of thermodynamics. Such calculation is the primary approach of many theoretical studies of quantity of heat transferred. Engineering The discipline of heat transfer, typically considered an aspect of mechanical engineering and chemical engineering, deals with specific applied methods by which thermal energy in a system is generated, or converted, or transferred to another system. 
Although the definition of heat implicitly means the transfer of energy, the term heat transfer encompasses this traditional usage in many engineering disciplines and in lay language. Heat transfer is generally described as including the mechanisms of heat conduction, heat convection, and thermal radiation, but may also include mass transfer and heat in processes of phase change. Convection may be described as the combined effects of conduction and fluid flow. From the thermodynamic point of view, heat flows into a fluid by diffusion to increase its energy, the fluid then transfers (advects) this increased internal energy (not heat) from one location to another, and this is then followed by a second thermal interaction which transfers heat to a second body or system, again by diffusion. This entire process is often regarded as an additional mechanism of heat transfer, although technically, "heat transfer" and thus heating and cooling occurs only at either end of such a convective flow, not as a result of the flow itself. Thus, convection can be said to "transfer" heat only as a net result of the process, but may not do so at every moment within the complicated convective process. Latent and sensible heat In an 1847 lecture entitled On Matter, Living Force, and Heat, James Prescott Joule characterized the terms latent heat and sensible heat as components of heat each affecting distinct physical phenomena, namely the potential and kinetic energy of particles, respectively. He described latent energy as the energy possessed via a distancing of particles where attraction was over a greater distance, i.e. a form of potential energy, and the sensible heat as an energy involving the motion of particles, i.e. kinetic energy. Latent heat is the heat released or absorbed by a chemical substance or a thermodynamic system during a change of state that occurs without a change in temperature. Such a process may be a phase transition, such as the melting of ice or the boiling of water. Heat capacity Heat capacity is a measurable physical quantity equal to the ratio of the heat added to an object to the resulting temperature change. The molar heat capacity is the heat capacity per unit amount (SI unit: mole) of a pure substance, and the specific heat capacity, often called simply specific heat, is the heat capacity per unit mass of a material. Heat capacity is a physical property of a substance, which means that it depends on the state and properties of the substance under consideration. The specific heats of monatomic gases, such as helium, are nearly constant with temperature. Diatomic gases such as hydrogen display some temperature dependence, and triatomic gases (e.g., carbon dioxide) still more. Before the development of the laws of thermodynamics, heat was measured by changes in the states of the participating bodies. Some general rules, with important exceptions, can be stated as follows. In general, most bodies expand on heating. In this circumstance, heating a body at a constant volume increases the pressure it exerts on its constraining walls, while heating at a constant pressure increases its volume. Beyond this, most substances have three ordinarily recognized states of matter, solid, liquid, and gas. Some can also exist as a plasma. Many have further, more finely differentiated, states of matter, such as glass and liquid crystal. In many cases, at fixed temperature and pressure, a substance can exist in several distinct states of matter in what might be viewed as the same 'body'.
For example, ice may float in a glass of water. Then the ice and the water are said to constitute two phases within the 'body'. Definite rules are known, telling how distinct phases may coexist in a 'body'. Mostly, at a fixed pressure, there is a definite temperature at which heating causes a solid to melt or evaporate, and a definite temperature at which heating causes a liquid to evaporate. In such cases, cooling has the reverse effects. All of these, the commonest cases, fit with a rule that heating can be measured by changes of state of a body. Such cases supply what are called thermometric bodies, that allow the definition of empirical temperatures. Before 1848, all temperatures were defined in this way. There was thus a tight link, apparently logically determined, between heat and temperature, though they were recognized as conceptually thoroughly distinct, especially by Joseph Black in the later eighteenth century. There are important exceptions. They break the obviously apparent link between heat and temperature. They make it clear that empirical definitions of temperature are contingent on the peculiar properties of particular thermometric substances, and are thus precluded from the title 'absolute'. For example, water contracts on being heated near 277 K. It cannot be used as a thermometric substance near that temperature. Also, over a certain temperature range, ice contracts on heating. Moreover, many substances can exist in metastable states, such as with negative pressure, that survive only transiently and in very special conditions. Such facts, sometimes called 'anomalous', are some of the reasons for the thermodynamic definition of absolute temperature. In the early days of measurement of high temperatures, another factor was important, and used by Josiah Wedgwood in his pyrometer. The temperature reached in a process was estimated by the shrinkage of a sample of clay. The higher the temperature, the more the shrinkage. This was the only available more or less reliable method of measurement of temperatures above 1000 °C (1,832 °F). But such shrinkage is irreversible. The clay does not expand again on cooling. That is why it could be used for the measurement. But only once. It is not a thermometric material in the usual sense of the word. Nevertheless, the thermodynamic definition of absolute temperature does make essential use of the concept of heat, with proper circumspection. "Hotness" The property of hotness is a concern of thermodynamics that should be defined without reference to the concept of heat. Consideration of hotness leads to the concept of empirical temperature. All physical systems are capable of heating or cooling others. With reference to hotness, the comparative terms hotter and colder are defined by the rule that heat flows from the hotter body to the colder. If a physical system is inhomogeneous or very rapidly or irregularly changing, for example by turbulence, it may be impossible to characterize it by a temperature, but still there can be transfer of energy as heat between it and another system. If a system has a physical state that is regular enough, and persists long enough to allow it to reach thermal equilibrium with a specified thermometer, then it has a temperature according to that thermometer. An empirical thermometer registers degree of hotness for such a system. Such a temperature is called empirical. For example, Truesdell writes about classical thermodynamics: "At each time, the body is assigned a real number called the temperature. 
This number is a measure of how hot the body is." Physical systems that are too turbulent to have temperatures may still differ in hotness. A physical system that passes heat to another physical system is said to be the hotter of the two. More is required for the system to have a thermodynamic temperature. Its behavior must be so regular that its empirical temperature is the same for all suitably calibrated and scaled thermometers, and then its hotness is said to lie on the one-dimensional hotness manifold. This is part of the reason why heat is defined following Carathéodory and Born, solely as occurring other than by work or transfer of matter; temperature is advisedly and deliberately not mentioned in this now widely accepted definition. This is also the reason that the zeroth law of thermodynamics is stated explicitly. If three physical systems, A, B, and C are each not in their own states of internal thermodynamic equilibrium, it is possible that, with suitable physical connections being made between them, A can heat B and B can heat C and C can heat A. In non-equilibrium situations, cycles of flow are possible. It is the special and uniquely distinguishing characteristic of internal thermodynamic equilibrium that this possibility is not open to thermodynamic systems (as distinguished amongst physical systems) which are in their own states of internal thermodynamic equilibrium; this is the reason why the zeroth law of thermodynamics needs explicit statement. That is to say, the relation 'is not colder than' between general non-equilibrium physical systems is not transitive, whereas, in contrast, the relation 'has no lower a temperature than' between thermodynamic systems in their own states of internal thermodynamic equilibrium is transitive. It follows from this that the relation 'is in thermal equilibrium with' is transitive, which is one way of stating the zeroth law. Just as temperature may be undefined for a sufficiently inhomogeneous system, so also may entropy be undefined for a system not in its own state of internal thermodynamic equilibrium. For example, 'the temperature of the Solar System' is not a defined quantity. Likewise, 'the entropy of the Solar System' is not defined in classical thermodynamics. It has not been possible to define non-equilibrium entropy, as a simple number for a whole system, in a clearly satisfactory way. Classical thermodynamics Heat and enthalpy For a closed system (a system from which no matter can enter or exit), one version of the first law of thermodynamics states that the change in internal energy of the system is equal to the amount of heat supplied to the system minus the amount of thermodynamic work done by system on its surroundings. The foregoing sign convention for work is used in the present article, but an alternate sign convention, followed by IUPAC, for work, is to consider the work performed on the system by its surroundings as positive. This is the convention adopted by many modern textbooks of physical chemistry, such as those by Peter Atkins and Ira Levine, but many textbooks on physics define work as work done by the system. 
This formula can be re-written so as to express a definition of quantity of energy transferred as heat, based purely on the concept of adiabatic work, if it is supposed that the change of internal energy, ΔU, is defined and measured solely by processes of adiabatic work: Q = ΔU + W, where W is the thermodynamic work done by the system on its surroundings. The thermodynamic work done by the system is through mechanisms defined by its thermodynamic state variables, for example, its volume V, not through variables that necessarily involve mechanisms in the surroundings. The latter are such as shaft work, and include isochoric work. The internal energy, U, is a state function. In cyclical processes, such as the operation of a heat engine, state functions of the working substance return to their initial values upon completion of a cycle. The differential, or infinitesimal increment, for the internal energy in an infinitesimal process is an exact differential dU. The symbol for exact differentials is the lowercase letter d. In contrast, neither of the infinitesimal increments δQ nor δW in an infinitesimal process represents the change in a state function of the system. Thus, infinitesimal increments of heat and work are inexact differentials. The lowercase Greek letter delta, δ, is the symbol for inexact differentials. The integral of any inexact differential in a process where the system leaves and then returns to the same thermodynamic state does not necessarily equal zero. As recounted below, in the section headed Heat and entropy, the second law of thermodynamics observes that if heat is supplied to a system in a reversible process, the increment of heat δQ and the temperature T form the exact differential dS = δQ/T, and that S, the entropy of the working body, is a state function. Likewise, with a well-defined pressure, P, behind a slowly moving (quasistatic) boundary, the work differential, δW, and the pressure, P, combine to form the exact differential δW = P dV, with V the volume of the system, which is a state variable. In general, for systems of uniform pressure and temperature without composition change, dU = T dS − P dV. Associated with this differential equation is the concept that the internal energy may be considered to be a function U(S, V) of its natural variables S and V. The internal energy representation of the fundamental thermodynamic relation is written as dU = T dS − P dV. If V is constant, T dS = dU (V constant), and if P is constant, T dS = dH (P constant), with the enthalpy defined by H = U + P V. The enthalpy may be considered to be a function H(S, P) of its natural variables S and P. The enthalpy representation of the fundamental thermodynamic relation is written dH = T dS + V dP. The internal energy representation and the enthalpy representation are partial Legendre transforms of one another. They contain the same physical information, written in different ways. Like the internal energy, the enthalpy stated as a function of its natural variables is a thermodynamic potential and contains all thermodynamic information about a body. If a quantity of heat Q is added to a body while it does only expansion work W on its surroundings, one has ΔH = ΔU + Δ(P V). If this is constrained to happen at constant pressure, i.e. with ΔP = 0, the expansion work W done by the body is given by W = P ΔV; recalling the first law of thermodynamics, one has ΔU = Q − W = Q − P ΔV. Consequently, by substitution one has ΔH = Q − P ΔV + Δ(P V) = Q − P ΔV + P ΔV = Q (at constant pressure). In this scenario, the increase in enthalpy is equal to the quantity of heat added to the system. This is the basis of the determination of enthalpy changes in chemical reactions by calorimetry.
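A short numerical sketch of the constant-pressure bookkeeping just described (the numbers are illustrative, not from any particular experiment): the heat added equals the enthalpy change, because the P ΔV term cancels between ΔU and Δ(PV).

```python
# Constant-pressure heating of a body that does only expansion work.
pressure = 1.0e5        # Pa, held constant (illustrative)
delta_volume = 2.0e-3   # m^3, expansion of the body (illustrative)
heat_added = 1500.0     # J, quantity of heat Q supplied (illustrative)

expansion_work = pressure * delta_volume                   # W = P * dV = 200 J done by the body
delta_internal = heat_added - expansion_work               # first law: dU = Q - W = 1300 J
delta_enthalpy = delta_internal + pressure * delta_volume  # dH = dU + d(PV) at constant P

print(expansion_work, delta_internal, delta_enthalpy)      # 200.0 1300.0 1500.0
assert abs(delta_enthalpy - heat_added) < 1e-9             # dH equals Q at constant pressure
```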
Since many processes do take place at constant atmospheric pressure, the enthalpy is sometimes given the misleading name of 'heat content' or heat function, while it actually depends strongly on the energies of covalent bonds and intermolecular forces. In terms of the natural variables of the state function H(S, P), this process of change of state from state 1 to state 2 can be expressed as ΔH = ∫ (∂H/∂S)_P dS, integrated from S1 to S2 at constant pressure. It is known that the temperature T(S, P) is identically stated by (∂H/∂S)_P = T(S, P). Consequently, ΔH = ∫ T(S, P) dS, integrated from S1 to S2 at constant pressure. In this case, the integral specifies a quantity of heat transferred at constant pressure. Heat and entropy In 1856, Rudolf Clausius, referring to closed systems, in which transfers of matter do not occur, defined the second fundamental theorem (the second law of thermodynamics) in the mechanical theory of heat (thermodynamics): "if two transformations which, without necessitating any other permanent change, can mutually replace one another, be called equivalent, then the generations of the quantity of heat Q from work at the temperature T, has the equivalence-value:" Q/T. In 1865, he came to define the entropy symbolized by S, such that, due to the supply of the amount of heat Q at temperature T the entropy of the system is increased by ΔS = Q/T. In a transfer of energy as heat without work being done, there are changes of entropy in both the surroundings which lose heat and the system which gains it. The increase, ΔS, of entropy in the system may be considered to consist of two parts, an increment, ΔS′, that matches, or 'compensates', the change, −ΔS′, of entropy in the surroundings, and a further increment, ΔS′′, that may be considered to be 'generated' or 'produced' in the system, and is said therefore to be 'uncompensated'. Thus ΔS = ΔS′ + ΔS′′. This may also be written ΔS_system = ΔS_compensated + ΔS_uncompensated, with ΔS_compensated = −ΔS_surroundings. The total change of entropy in the system and surroundings is thus ΔS_overall = ΔS′ + ΔS′′ − ΔS′ = ΔS′′. This may also be written ΔS_overall = ΔS_uncompensated. It is then said that an amount of entropy ΔS′ has been transferred from the surroundings to the system. Because entropy is not a conserved quantity, this is an exception to the general way of speaking, in which an amount transferred is of a conserved quantity. From the second law of thermodynamics it follows that in a spontaneous transfer of heat, in which the temperature of the system is different from that of the surroundings: ΔS_overall > 0. For purposes of mathematical analysis of transfers, one thinks of fictive processes that are called reversible, with the temperature of the system being hardly less than that of the surroundings, and the transfer taking place at an imperceptibly slow rate. Following the definition above, for such a fictive reversible process, a quantity of transferred heat δQ (an inexact differential) is analyzed as a quantity T dS, with dS an exact differential: T dS = δQ. This equality is only valid for a fictive transfer in which there is no production of entropy, that is to say, in which there is no uncompensated entropy. If, in contrast, the process is natural, and can really occur, with irreversibility, then there is entropy production, with dS_uncompensated > 0. The quantity T dS_uncompensated was termed by Clausius the "uncompensated heat", though that does not accord with present-day terminology. Then one has T dS = δQ + T dS_uncompensated ≥ δQ. This leads to the statement T dS ≥ δQ, which is the second law of thermodynamics for closed systems. In non-equilibrium thermodynamics that makes the approximation of assuming the hypothesis of local thermodynamic equilibrium, there is a special notation for this. The transfer of energy as heat is assumed to take place across an infinitesimal temperature difference, so that the system element and its surroundings have near enough the same temperature T.
Then one writes where by definition The second law for a natural process asserts that See also Effect of sun angle on climate Heat death of the Universe Heat diffusion Heat equation Heat exchanger Heat flux sensor Heat recovery steam generator Heat recovery ventilation Heat transfer coefficient Heat wave History of heat Orders of magnitude (temperature) Relativistic heat conduction Renewable heat Sigma heat Thermal energy storage Thermal management of electronic devices and systems Thermometer Waste heat Waste heat recovery unit Water heat recycling Notes References Quotations Bibliography of cited references Adkins, C.J. (1968/1983). Equilibrium Thermodynamics, (1st edition 1968), third edition 1983, Cambridge University Press, Cambridge UK, . Atkins, P., de Paula, J. (1978/2010). Physical Chemistry, (first edition 1978), ninth edition 2010, Oxford University Press, Oxford UK, . Bacon, F. (1620). Novum Organum Scientiarum, translated by Devey, J., P.F. Collier & Son, New York, 1902. Bailyn, M. (1994). A Survey of Thermodynamics, American Institute of Physics Press, New York, . Born, M. (1949). Natural Philosophy of Cause and Chance, Oxford University Press, London. Bryan, G.H. (1907). Thermodynamics. An Introductory Treatise dealing mainly with First Principles and their Direct Applications, B.G. Teubner, Leipzig. Buchdahl, H.A. (1966). The Concepts of Classical Thermodynamics, Cambridge University Press, Cambridge UK. Callen, H.B. (1960/1985). Thermodynamics and an Introduction to Thermostatistics, (1st edition 1960) 2nd edition 1985, Wiley, New York, . A translation may be found here. A mostly reliable translation is to be found at Kestin, J. (1976). The Second Law of Thermodynamics, Dowden, Hutchinson & Ross, Stroudsburg PA. Chandrasekhar, S. (1961). Hydrodynamic and Hydromagnetic Stability, Oxford University Press, Oxford UK. Clausius, R. (1854). Annalen der Physik (Poggendoff's Annalen), Dec. 1854, vol. xciii. p. 481; translated in the Journal de Mathematiques, vol. xx. Paris, 1855, and in the Philosophical Magazine, August 1856, s. 4. vol. xii, p. 81. Clausius, R. (1865/1867). The Mechanical Theory of Heat – with its Applications to the Steam Engine and to Physical Properties of Bodies, London: John van Voorst. 1867. Also the second edition translated into English by W.R. Browne (1879) here and here. De Groot, S.R., Mazur, P. (1962). Non-equilibrium Thermodynamics, North-Holland, Amsterdam. Reprinted (1984), Dover Publications Inc., New York, . Denbigh, K. (1955/1981). The Principles of Chemical Equilibrium, Cambridge University Press, Cambridge . Greven, A., Keller, G., Warnecke (editors) (2003). Entropy, Princeton University Press, Princeton NJ, . , Lecture on Matter, Living Force, and Heat. 5 and 12 May 1847. Kittel, C. Kroemer, H. (1980). Thermal Physics, second edition, W.H. Freeman, San Francisco, . Kondepudi, D., Prigogine, I. (1998). Modern Thermodynamics: From Heat Engines to Dissipative Structures, John Wiley & Sons, Chichester, . Landau, L., Lifshitz, E.M. (1958/1969). Statistical Physics, volume 5 of Course of Theoretical Physics, translated from the Russian by J.B. Sykes, M.J. Kearsley, Pergamon, Oxford. Lebon, G., Jou, D., Casas-Vázquez, J. (2008). Understanding Non-equilibrium Thermodynamics: Foundations, Applications, Frontiers, Springer-Verlag, Berlin, e-. Lieb, E.H., Yngvason, J. (2003). The Entropy of Classical Thermodynamics, Chapter 8 of Entropy, Greven, A., Keller, G., Warnecke (editors) (2003). Pippard, A.B. (1957/1966). 
Elements of Classical Thermodynamics for Advanced Students of Physics, original publication 1957, reprint 1966, Cambridge University Press, Cambridge. Planck, M., (1897/1903). Treatise on Thermodynamics, translated by A. Ogg, first English edition, Longmans, Green and Co., London. Planck. M. (1914). The Theory of Heat Radiation, a translation by Masius, M. of the second German edition, P. Blakiston's Son & Co., Philadelphia. Planck, M., (1923/1927). Treatise on Thermodynamics, translated by A. Ogg, third English edition, Longmans, Green and Co., London. Shavit, A., Gutfinger, C. (1995). Thermodynamics. From Concepts to Applications, Prentice Hall, London, . Truesdell, C. (1969). Rational Thermodynamics: a Course of Lectures on Selected Topics, McGraw-Hill Book Company, New York. Truesdell, C. (1980). The Tragicomical History of Thermodynamics 1822–1854, Springer, New York, . Further bibliography Gyftopoulos, E.P., & Beretta, G.P. (1991). Thermodynamics: foundations and applications. (Dover Publications) Hatsopoulos, G.N., & Keenan, J.H. (1981). Principles of general thermodynamics. RE Krieger Publishing Company. External links Plasma heat at 2 gigakelvins – Article about extremely high temperature generated by scientists (Foxnews.com) Correlations for Convective Heat Transfer – ChE Online Resources Heat transfer Thermodynamics Physical quantities
Radiant flux
In radiometry, radiant flux or radiant power is the radiant energy emitted, reflected, transmitted, or received per unit time, and spectral flux or spectral power is the radiant flux per unit frequency or wavelength, depending on whether the spectrum is taken as a function of frequency or of wavelength. The SI unit of radiant flux is the watt (W), one joule per second, while that of spectral flux in frequency is the watt per hertz and that of spectral flux in wavelength is the watt per metre—commonly the watt per nanometre. Mathematical definitions Radiant flux Radiant flux, denoted Φ_e ('e' for "energetic", to avoid confusion with photometric quantities), is defined as Φ_e = dQ_e/dt = ∮_Σ S · n̂ dA, where t is the time; Q_e is the radiant energy passing out of a closed surface Σ; S is the Poynting vector, representing the current density of radiant energy; n̂ is the normal vector of a point on Σ; A represents the area of Σ; T represents the time period over which the flow is averaged below. The rate of energy flow through the surface fluctuates at the frequency of the radiation, but radiation detectors only respond to the average rate of flow. This is represented by replacing the Poynting vector with the time average of its norm, giving Φ_e ≈ ∮_Σ ⟨|S|⟩ cos α dA, where ⟨ ⟩ is the time average over the period T, and α is the angle between S and n̂. Spectral flux Spectral flux in frequency, denoted Φ_e,ν, is defined as Φ_e,ν = ∂Φ_e/∂ν, where ν is the frequency. Spectral flux in wavelength, denoted Φ_e,λ, is defined as Φ_e,λ = ∂Φ_e/∂λ, where λ is the wavelength. SI radiometry units See also Luminous flux Heat flux Power (physics) Radiosity (heat transfer) References Further reading Power (physics) Physical quantities Radiometry Temporal rates
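A small numerical illustration of the relation between spectral flux and radiant flux defined above (the spectrum is hypothetical): integrating the spectral flux in wavelength over the wavelength range recovers the total radiant flux in watts.

```python
import numpy as np

# Hypothetical spectral flux in wavelength: a Gaussian band, in W/nm.
wavelengths_nm = np.linspace(400.0, 700.0, 601)   # visible range, 0.5 nm steps
spectral_flux_w_per_nm = 0.02 * np.exp(-((wavelengths_nm - 550.0) / 40.0) ** 2)

# Radiant flux is the integral of the spectral flux over wavelength
# (trapezoidal rule written out explicitly).
radiant_flux_w = np.sum(0.5 * (spectral_flux_w_per_nm[1:] + spectral_flux_w_per_nm[:-1])
                        * np.diff(wavelengths_nm))
print(round(float(radiant_flux_w), 3))   # total power in watts for this hypothetical band
```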
Quantum harmonic oscillator
The quantum harmonic oscillator is the quantum-mechanical analog of the classical harmonic oscillator. Because an arbitrary smooth potential can usually be approximated as a harmonic potential at the vicinity of a stable equilibrium point, it is one of the most important model systems in quantum mechanics. Furthermore, it is one of the few quantum-mechanical systems for which an exact, analytical solution is known. One-dimensional harmonic oscillator Hamiltonian and energy eigenstates The Hamiltonian of the particle is: Ĥ = p̂²/(2m) + (1/2) k x̂² = p̂²/(2m) + (1/2) m ω² x̂², where m is the particle's mass, k is the force constant, ω = √(k/m) is the angular frequency of the oscillator, x̂ is the position operator (given by x in the coordinate basis), and p̂ is the momentum operator (given by p̂ = −iħ ∂/∂x in the coordinate basis). The first term in the Hamiltonian represents the kinetic energy of the particle, and the second term represents its potential energy, as in Hooke's law. The time-independent Schrödinger equation (TISE) is Ĥ|ψ⟩ = E|ψ⟩, where E denotes a real number (which needs to be determined) that will specify a time-independent energy level, or eigenvalue, and the solution |ψ⟩ denotes that level's energy eigenstate. Then solve the differential equation representing this eigenvalue problem in the coordinate basis, for the wave function ψ(x) = ⟨x|ψ⟩, using a spectral method. It turns out that there is a family of solutions. In this basis, they amount to Hermite functions, ψ_n(x) = (1/√(2ⁿ n!)) (mω/(πħ))^(1/4) exp(−mωx²/(2ħ)) H_n(√(mω/ħ) x), for n = 0, 1, 2, .... The functions H_n are the physicists' Hermite polynomials, H_n(z) = (−1)ⁿ e^(z²) (dⁿ/dzⁿ) e^(−z²). The corresponding energy levels are E_n = ħω(n + 1/2). The expectation values of position and momentum combined with variance of each variable can be derived from the wavefunction to understand the behavior of the energy eigenkets. They are shown to be ⟨x̂⟩ = 0 and ⟨p̂⟩ = 0, owing to the symmetry of the problem, whereas: σ_x² = (ħ/(2mω))(2n + 1) and σ_p² = (ħmω/2)(2n + 1). The variance in both position and momentum are observed to increase for higher energy levels. The lowest energy level has σ_x σ_p = ħ/2, which is its minimum value due to the uncertainty relation and also corresponds to a gaussian wavefunction. This energy spectrum is noteworthy for three reasons. First, the energies are quantized, meaning that only discrete energy values (integer-plus-half multiples of ħω) are possible; this is a general feature of quantum-mechanical systems when a particle is confined. Second, these discrete energy levels are equally spaced, unlike in the Bohr model of the atom, or the particle in a box. Third, the lowest achievable energy (the energy of the n = 0 state, called the ground state) is not equal to the minimum of the potential well, but ħω/2 above it; this is called zero-point energy. Because of the zero-point energy, the position and momentum of the oscillator in the ground state are not fixed (as they would be in a classical oscillator), but have a small range of variance, in accordance with the Heisenberg uncertainty principle. The ground state probability density is concentrated at the origin, which means the particle spends most of its time at the bottom of the potential well, as one would expect for a state with little energy. As the energy increases, the probability density peaks at the classical "turning points", where the state's energy coincides with the potential energy. (See the discussion below of the highly excited states.) This is consistent with the classical harmonic oscillator, in which the particle spends more of its time (and is therefore more likely to be found) near the turning points, where it is moving the slowest. The correspondence principle is thus satisfied.
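A small numerical check of the statements above (natural constants set to 1, an assumption made only for this sketch): the ground-state Gaussian is normalized, its position–momentum uncertainty product is ħ/2, and the energy levels are equally spaced.

```python
import numpy as np

hbar = m = omega = 1.0                      # natural units, an assumption for this sketch
x = np.linspace(-10.0, 10.0, 4001)
dx = x[1] - x[0]

# Ground state: psi_0(x) = (m*omega/(pi*hbar))**(1/4) * exp(-m*omega*x**2 / (2*hbar))
psi0 = (m * omega / (np.pi * hbar)) ** 0.25 * np.exp(-m * omega * x**2 / (2.0 * hbar))

norm = np.sum(psi0**2) * dx                                  # should be ~1
sigma_x = np.sqrt(np.sum(x**2 * psi0**2) * dx)               # <x> = 0 by symmetry
dpsi0 = np.gradient(psi0, dx)
sigma_p = hbar * np.sqrt(np.sum(dpsi0**2) * dx)              # <p> = 0 for a real wavefunction
print(round(norm, 4), round(sigma_x * sigma_p, 4))           # ~1.0 and ~0.5 = hbar/2

# Equally spaced spectrum E_n = hbar*omega*(n + 1/2), with zero-point energy hbar*omega/2
print([hbar * omega * (n + 0.5) for n in range(4)])          # [0.5, 1.5, 2.5, 3.5]
```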
Moreover, special nondispersive wave packets, with minimum uncertainty, called coherent states oscillate very much like classical objects, as illustrated in the figure; they are not eigenstates of the Hamiltonian. Ladder operator method The "ladder operator" method, developed by Paul Dirac, allows extraction of the energy eigenvalues without directly solving the differential equation. It is generalizable to more complicated problems, notably in quantum field theory. Following this approach, we define the operators and its adjoint , Note these operators classically are exactly the generators of normalized rotation in the phase space of and , i.e they describe the forwards and backwards evolution in time of a classical harmonic oscillator. These operators lead to the following representation of and , The operator is not Hermitian, since itself and its adjoint are not equal. The energy eigenstates , when operated on by these ladder operators, give From the relations above, we can also define a number operator , which has the following property: The following commutators can be easily obtained by substituting the canonical commutation relation, and the Hamilton operator can be expressed as so the eigenstates of are also the eigenstates of energy. To see that, we can apply to a number state : Using the property of the number operator : we get: Thus, since solves the TISE for the Hamiltonian operator , is also one of its eigenstates with the corresponding eigenvalue: QED. The commutation property yields and similarly, This means that acts on to produce, up to a multiplicative constant, , and acts on to produce . For this reason, is called an annihilation operator ("lowering operator"), and a creation operator ("raising operator"). The two operators together are called ladder operators. Given any energy eigenstate, we can act on it with the lowering operator, , to produce another eigenstate with less energy. By repeated application of the lowering operator, it seems that we can produce energy eigenstates down to . However, since the smallest eigenvalue of the number operator is 0, and In this case, subsequent applications of the lowering operator will just produce zero, instead of additional energy eigenstates. Furthermore, we have shown above that Finally, by acting on |0⟩ with the raising operator and multiplying by suitable normalization factors, we can produce an infinite set of energy eigenstates such that which matches the energy spectrum given in the preceding section. Arbitrary eigenstates can be expressed in terms of |0⟩, Analytical questions The preceding analysis is algebraic, using only the commutation relations between the raising and lowering operators. Once the algebraic analysis is complete, one should turn to analytical questions. First, one should find the ground state, that is, the solution of the equation . In the position representation, this is the first-order differential equation whose solution is easily found to be the Gaussian Conceptually, it is important that there is only one solution of this equation; if there were, say, two linearly independent ground states, we would get two independent chains of eigenvectors for the harmonic oscillator. Once the ground state is computed, one can show inductively that the excited states are Hermite polynomials times the Gaussian ground state, using the explicit form of the raising operator in the position representation. 
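The ladder-operator algebra described above can be checked numerically by truncating the number basis to a finite dimension; the truncation is an approximation introduced here purely for illustration, since the true operators are infinite-dimensional.

```python
# Sketch of the ladder-operator algebra in a truncated number basis |0>,...,|dim-1>.
# It builds a, a_dagger, N = a_dagger a and H = N + 1/2 in natural units
# (hbar = omega = 1) and checks the spectrum n + 1/2 stated in the text.
import numpy as np

def lowering_operator(dim: int) -> np.ndarray:
    """Matrix of the annihilation operator a, using a|n> = sqrt(n)|n-1>."""
    a = np.zeros((dim, dim))
    for n in range(1, dim):
        a[n - 1, n] = np.sqrt(n)
    return a

dim = 8
a = lowering_operator(dim)
adag = a.T                      # creation operator (real matrix, so transpose suffices)
N = adag @ a                    # number operator
H = N + 0.5 * np.eye(dim)       # H = N + 1/2 in units of hbar*omega

# [a, a_dagger] = 1 holds exactly except in the last row/column, an artifact of truncation.
comm = a @ adag - adag @ a
print("diag [a, a+] =", np.round(np.diag(comm), 6))
print("energies     =", np.round(np.diag(H), 6))   # 0.5, 1.5, 2.5, ...
```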
One can also prove that, as expected from the uniqueness of the ground state, the Hermite functions energy eigenstates constructed by the ladder method form a complete orthonormal set of functions. Explicitly connecting with the previous section, the ground state |0⟩ in the position representation is determined by , hence so that , and so on. Natural length and energy scales The quantum harmonic oscillator possesses natural scales for length and energy, which can be used to simplify the problem. These can be found by nondimensionalization. The result is that, if energy is measured in units of and distance in units of , then the Hamiltonian simplifies to while the energy eigenfunctions and eigenvalues simplify to Hermite functions and integers offset by a half, where are the Hermite polynomials. To avoid confusion, these "natural units" will mostly not be adopted in this article. However, they frequently come in handy when performing calculations, by bypassing clutter. For example, the fundamental solution (propagator) of , the time-dependent Schrödinger operator for this oscillator, simply boils down to the Mehler kernel, where . The most general solution for a given initial configuration then is simply Coherent states The coherent states (also known as Glauber states) of the harmonic oscillator are special nondispersive wave packets, with minimum uncertainty , whose observables' expectation values evolve like a classical system. They are eigenvectors of the annihilation operator, not the Hamiltonian, and form an overcomplete basis which consequentially lacks orthogonality. The coherent states are indexed by and expressed in the basis as Since coherent states are not energy eigenstates, their time evolution is not a simple shift in wavefunction phase. The time-evolved states are, however, also coherent states but with phase-shifting parameter instead: . Because and via the Kermack-McCrae identity, the last form is equivalent to a unitary displacement operator acting on the ground state: . Calculating the expectation values: where is the phase contributed by complex . These equations confirm the oscillating behavior of the particle. The uncertainties calculated using the numeric method are: which gives . Since the only wavefunction that can have lowest position-momentum uncertainty, , is a gaussian wavefunction, and since the coherent state wavefunction has minimum position-momentum uncertainty, we note that the general gaussian wavefunction in quantum mechanics has the form:Substituting the expectation values as a function of time, gives the required time varying wavefunction. The probability of each energy eigenstates can be calculated to find the energy distribution of the wavefunction: which corresponds to Poisson distribution. Highly excited states When is large, the eigenstates are localized into the classical allowed region, that is, the region in which a classical particle with energy can move. The eigenstates are peaked near the turning points: the points at the ends of the classically allowed region where the classical particle changes direction. This phenomenon can be verified through asymptotics of the Hermite polynomials, and also through the WKB approximation. The frequency of oscillation at is proportional to the momentum of a classical particle of energy and position . Furthermore, the square of the amplitude (determining the probability density) is inversely proportional to , reflecting the length of time the classical particle spends near . 
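The correspondence just described can be made quantitative: in natural units the classical probability density for a particle of amplitude A is 1/(π sqrt(A^2 - x^2)), and for large n the quantum density |ψ_n(x)|^2 oscillates rapidly about this curve. The sketch below is illustrative (SciPy assumed, n chosen arbitrarily).

```python
# Illustrative comparison, in natural units (hbar = m = omega = 1), of the quantum
# probability density |psi_n(x)|^2 for a highly excited state with the classical
# density 1/(pi*sqrt(A^2 - x^2)), where A = sqrt(2n + 1) is the turning-point amplitude.
import numpy as np
from scipy.special import eval_hermite, factorial

def psi_n(n: int, x: np.ndarray) -> np.ndarray:
    """Oscillator eigenfunction in natural units."""
    norm = 1.0 / (np.pi ** 0.25 * np.sqrt(2.0 ** n * factorial(n)))
    return norm * eval_hermite(n, x) * np.exp(-x ** 2 / 2)

n = 20
A = np.sqrt(2 * n + 1)                    # classical turning point for E_n
x = np.linspace(-A + 0.3, A - 0.3, 7)     # sample points away from the turning points
quantum = psi_n(n, x) ** 2
classical = 1.0 / (np.pi * np.sqrt(A ** 2 - x ** 2))
for xi, q, c in zip(x, quantum, classical):
    print(f"x = {xi:+6.2f}   |psi|^2 = {q:.4f}   classical = {c:.4f}")
# The quantum density oscillates rapidly about the classical curve; averaging it over
# a few oscillations reproduces the classical value, as the correspondence principle requires.
```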
The system behavior in a small neighborhood of the turning point does not have a simple classical explanation, but can be modeled using an Airy function. Using properties of the Airy function, one may estimate the probability of finding the particle outside the classically allowed region, to be approximately This is also given, asymptotically, by the integral Phase space solutions In the phase space formulation of quantum mechanics, eigenstates of the quantum harmonic oscillator in several different representations of the quasiprobability distribution can be written in closed form. The most widely used of these is for the Wigner quasiprobability distribution. The Wigner quasiprobability distribution for the energy eigenstate is, in the natural units described above, where Ln are the Laguerre polynomials. This example illustrates how the Hermite and Laguerre polynomials are linked through the Wigner map. Meanwhile, the Husimi Q function of the harmonic oscillator eigenstates have an even simpler form. If we work in the natural units described above, we have This claim can be verified using the Segal–Bargmann transform. Specifically, since the raising operator in the Segal–Bargmann representation is simply multiplication by and the ground state is the constant function 1, the normalized harmonic oscillator states in this representation are simply . At this point, we can appeal to the formula for the Husimi Q function in terms of the Segal–Bargmann transform. N-dimensional isotropic harmonic oscillator The one-dimensional harmonic oscillator is readily generalizable to dimensions, where . In one dimension, the position of the particle was specified by a single coordinate, . In dimensions, this is replaced by position coordinates, which we label . Corresponding to each position coordinate is a momentum; we label these . The canonical commutation relations between these operators are The Hamiltonian for this system is As the form of this Hamiltonian makes clear, the -dimensional harmonic oscillator is exactly analogous to independent one-dimensional harmonic oscillators with the same mass and spring constant. In this case, the quantities would refer to the positions of each of the particles. This is a convenient property of the potential, which allows the potential energy to be separated into terms depending on one coordinate each. This observation makes the solution straightforward. For a particular set of quantum numbers the energy eigenfunctions for the -dimensional oscillator are expressed in terms of the 1-dimensional eigenfunctions as: In the ladder operator method, we define sets of ladder operators, By an analogous procedure to the one-dimensional case, we can then show that each of the and operators lower and raise the energy by respectively. The Hamiltonian is This Hamiltonian is invariant under the dynamic symmetry group (the unitary group in dimensions), defined by where is an element in the defining matrix representation of . The energy levels of the system are As in the one-dimensional case, the energy is quantized. The ground state energy is times the one-dimensional ground energy, as we would expect using the analogy to independent one-dimensional oscillators. There is one further difference: in the one-dimensional case, each energy level corresponds to a unique quantum state. In -dimensions, except for the ground state, the energy levels are degenerate, meaning there are several states with the same energy. The degeneracy can be calculated relatively easily. 
As an example, consider the 3-dimensional case: Define . All states with the same will have the same energy. For a given , we choose a particular . Then . There are possible pairs . can take on the values to , and for each the value of is fixed. The degree of degeneracy therefore is: Formula for general and [ being the dimension of the symmetric irreducible -th power representation of the unitary group ]: The special case = 3, given above, follows directly from this general equation. This is however, only true for distinguishable particles, or one particle in dimensions (as dimensions are distinguishable). For the case of bosons in a one-dimension harmonic trap, the degeneracy scales as the number of ways to partition an integer using integers less than or equal to . This arises due to the constraint of putting quanta into a state ket where and , which are the same constraints as in integer partition. Example: 3D isotropic harmonic oscillator The Schrödinger equation for a particle in a spherically-symmetric three-dimensional harmonic oscillator can be solved explicitly by separation of variables. This procedure is analogous to the separation performed in the hydrogen-like atom problem, but with a different spherically symmetric potential where is the mass of the particle. Because will be used below for the magnetic quantum number, mass is indicated by , instead of , as earlier in this article. The solution to the equation is: where is a normalization constant; ; are generalized Laguerre polynomials; The order of the polynomial is a non-negative integer; is a spherical harmonic function; is the reduced Planck constant: The energy eigenvalue is The energy is usually described by the single quantum number Because is a non-negative integer, for every even we have and for every odd we have . The magnetic quantum number is an integer satisfying , so for every and ℓ there are 2ℓ + 1 different quantum states, labeled by . Thus, the degeneracy at level is where the sum starts from 0 or 1, according to whether is even or odd. This result is in accordance with the dimension formula above, and amounts to the dimensionality of a symmetric representation of , the relevant degeneracy group. Applications Harmonic oscillators lattice: phonons The notation of a harmonic oscillator can be extended to a one-dimensional lattice of many particles. Consider a one-dimensional quantum mechanical harmonic chain of N identical atoms. This is the simplest quantum mechanical model of a lattice, and we will see how phonons arise from it. The formalism that we will develop for this model is readily generalizable to two and three dimensions. As in the previous section, we denote the positions of the masses by , as measured from their equilibrium positions (i.e. if the particle is at its equilibrium position). In two or more dimensions, the are vector quantities. The Hamiltonian for this system is where is the (assumed uniform) mass of each atom, and and are the position and momentum operators for the i th atom and the sum is made over the nearest neighbors (nn). However, it is customary to rewrite the Hamiltonian in terms of the normal modes of the wavevector rather than in terms of the particle coordinates so that one can work in the more convenient Fourier space. We introduce, then, a set of "normal coordinates" , defined as the discrete Fourier transforms of the s, and "conjugate momenta" defined as the Fourier transforms of the s, The quantity will turn out to be the wave number of the phonon, i.e. 
2π divided by the wavelength. It takes on quantized values, because the number of atoms is finite. This preserves the desired commutation relations in either real space or wave vector space From the general result it is easy to show, through elementary trigonometry, that the potential energy term is where The Hamiltonian may be written in wave vector space as Note that the couplings between the position variables have been transformed away; if the s and s were hermitian (which they are not), the transformed Hamiltonian would describe uncoupled harmonic oscillators. The form of the quantization depends on the choice of boundary conditions; for simplicity, we impose periodic boundary conditions, defining the -th atom as equivalent to the first atom. Physically, this corresponds to joining the chain at its ends. The resulting quantization is The upper bound to comes from the minimum wavelength, which is twice the lattice spacing , as discussed above. The harmonic oscillator eigenvalues or energy levels for the mode are If we ignore the zero-point energy then the levels are evenly spaced at So an exact amount of energy , must be supplied to the harmonic oscillator lattice to push it to the next energy level. In analogy to the photon case when the electromagnetic field is quantised, the quantum of vibrational energy is called a phonon. All quantum systems show wave-like and particle-like properties. The particle-like properties of the phonon are best understood using the methods of second quantization and operator techniques described elsewhere. In the continuum limit, , , while is held fixed. The canonical coordinates devolve to the decoupled momentum modes of a scalar field, , whilst the location index (not the displacement dynamical variable) becomes the parameter argument of the scalar field, . Molecular vibrations The vibrations of a diatomic molecule are an example of a two-body version of the quantum harmonic oscillator. In this case, the angular frequency is given by where is the reduced mass and and are the masses of the two atoms. The Hooke's atom is a simple model of the helium atom using the quantum harmonic oscillator. Modelling phonons, as discussed above. A charge with mass in a uniform magnetic field is an example of a one-dimensional quantum harmonic oscillator: Landau quantization. See also Notes References Bibliography External links Quantum Harmonic Oscillator Rationale for choosing the ladder operators Live 3D intensity plots of quantum harmonic oscillator Driven and damped quantum harmonic oscillator (lecture notes of course "quantum optics in electric circuits") Quantum models Oscillators
Brake
A brake is a mechanical device that inhibits motion by absorbing energy from a moving system. It is used for slowing or stopping a moving vehicle, wheel, axle, or to prevent its motion, most often accomplished by means of friction. Background Most brakes commonly use friction between two surfaces pressed together to convert the kinetic energy of the moving object into heat, though other methods of energy conversion may be employed. For example, regenerative braking converts much of the energy to electrical energy, which may be stored for later use. Other methods convert kinetic energy into potential energy in such stored forms as pressurized air or pressurized oil. Eddy current brakes use magnetic fields to convert kinetic energy into electric current in the brake disc, fin, or rail, which is converted into heat. Still other braking methods even transform kinetic energy into different forms, for example by transferring the energy to a rotating flywheel. Brakes are generally applied to rotating axles or wheels, but may also take other forms such as the surface of a moving fluid (flaps deployed into water or air). Some vehicles use a combination of braking mechanisms, such as drag racing cars with both wheel brakes and a parachute, or airplanes with both wheel brakes and drag flaps raised into the air during landing. Since kinetic energy increases quadratically with velocity, an object moving at 10 m/s has 100 times as much energy as one of the same mass moving at 1 m/s, and consequently the theoretical braking distance, when braking at the traction limit, is up to 100 times as long. In practice, fast vehicles usually have significant air drag, and energy lost to air drag rises quickly with speed. Almost all wheeled vehicles have a brake of some sort. Even baggage carts and shopping carts may have them for use on a moving ramp. Most fixed-wing aircraft are fitted with wheel brakes on the undercarriage. Some aircraft also feature air brakes designed to reduce their speed in flight. Notable examples include gliders and some World War II-era aircraft, primarily some fighter aircraft and many dive bombers of the era. These allow the aircraft to maintain a safe speed in a steep descent. The Saab B 17 dive bomber and Vought F4U Corsair fighter used the deployed undercarriage as an air brake. Friction brakes on automobiles store braking heat in the drum brake or disc brake while braking then conduct it to the air gradually. When traveling downhill some vehicles can use their engines to brake. When the brake pedal of a modern vehicle with hydraulic brakes is pushed against the master cylinder, ultimately a piston pushes the brake pad against the brake disc which slows the wheel down. On the brake drum it is similar as the cylinder pushes the brake shoes against the drum which also slows the wheel down. Types Brakes may be broadly described as using friction, pumping, or electromagnetics. One brake may use several principles: for example, a pump may pass fluid through an orifice to create friction: Frictional Frictional brakes are most common and can be divided broadly into "shoe" or "pad" brakes, using an explicit wear surface, and hydrodynamic brakes, such as parachutes, which use friction in a working fluid and do not explicitly wear. Typically the term "friction brake" is used to mean pad/shoe brakes and excludes hydrodynamic brakes, even though hydrodynamic brakes use friction. Friction (pad/shoe) brakes are often rotating devices with a stationary pad and a rotating wear surface. 
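The quadratic scaling of kinetic energy and stopping distance noted earlier in this section can be made concrete with a short calculation; the vehicle mass and friction coefficient below are illustrative assumptions, and the idealized formula d = v^2/(2 μ g) ignores air drag, driver reaction time, and brake fade.

```python
# Illustrative sketch of the scaling argument: kinetic energy grows with the square
# of speed, and so does the idealized stopping distance at the traction limit.
G = 9.81  # gravitational acceleration, m/s^2

def kinetic_energy(mass_kg: float, speed_ms: float) -> float:
    return 0.5 * mass_kg * speed_ms**2

def stopping_distance(speed_ms: float, mu: float = 0.8) -> float:
    """Idealized distance to stop when decelerating at the traction limit mu*g."""
    return speed_ms**2 / (2 * mu * G)

m = 1500.0  # kg, an illustrative passenger-car mass
for v in (1.0, 10.0, 30.0):
    print(f"v = {v:5.1f} m/s  KE = {kinetic_energy(m, v):9.0f} J  "
          f"d = {stopping_distance(v):6.2f} m")
# At 10 m/s both the energy and the distance are 100 times their values at 1 m/s.
```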
Common configurations include shoes that contract to rub on the outside of a rotating drum, such as a band brake; a rotating drum with shoes that expand to rub the inside of a drum, commonly called a "drum brake", although other drum configurations are possible; and pads that pinch a rotating disc, commonly called a "disc brake". Other brake configurations are used, but less often. For example, PCC trolley brakes include a flat shoe which is clamped to the rail with an electromagnet; the Murphy brake pinches a rotating drum, and the Ausco Lambert disc brake uses a hollow disc (two parallel discs with a structural bridge) with shoes that sit between the disc surfaces and expand laterally. A drum brake is a vehicle brake in which the friction is caused by a set of brake shoes that press against the inner surface of a rotating drum. The drum is connected to the rotating roadwheel hub. Drum brakes generally can be found on older car and truck models. However, because of their low production cost, drum brake setups are also installed on the rear of some low-cost newer vehicles. Compared to modern disc brakes, drum brakes wear out faster due to their tendency to overheat. The disc brake is a device for slowing or stopping the rotation of a road wheel. A brake disc (or rotor in U.S. English), usually made of cast iron or ceramic, is connected to the wheel or the axle. To stop the wheel, friction material in the form of brake pads (mounted in a device called a brake caliper) is forced mechanically, hydraulically, pneumatically or electromagnetically against both sides of the disc. Friction causes the disc and attached wheel to slow or stop. Pumping Pumping brakes are often used where a pump is already part of the machinery. For example, an internal-combustion piston motor can have the fuel supply stopped, and then internal pumping losses of the engine create some braking. Some engines use a valve override called a Jake brake to greatly increase pumping losses. Pumping brakes can dump energy as heat, or can be regenerative brakes that recharge a pressure reservoir called a hydraulic accumulator. Electromagnetic Electromagnetic brakes are likewise often used where an electric motor is already part of the machinery. For example, many hybrid gasoline/electric vehicles use the electric motor as a generator to charge electric batteries and also as a regenerative brake. Some diesel/electric railroad locomotives use the electric motors to generate electricity which is then sent to a resistor bank and dumped as heat. Some vehicles, such as some transit buses, do not already have an electric motor but use a secondary "retarder" brake that is effectively a generator with an internal short circuit. Related types of such a brake are eddy current brakes, and electro-mechanical brakes (which actually are magnetically driven friction brakes, but nowadays are often just called "electromagnetic brakes" as well). Electromagnetic brakes slow an object through electromagnetic induction, which creates resistance and in turn either heat or electricity. Friction brakes apply pressure on two separate objects to slow the vehicle in a controlled manner. Characteristics Brakes are often described according to several characteristics including: Peak force – The peak force is the maximum decelerating effect that can be obtained. The peak force is often greater than the traction limit of the tires, in which case the brake can cause a wheel skid. 
Continuous power dissipation – Brakes typically get hot in use and fail when the temperature gets too high. The greatest amount of power (energy per unit time) that can be dissipated through the brake without failure is the continuous power dissipation. Continuous power dissipation often depends on e.g., the temperature and speed of ambient cooling air. Fade – As a brake heats, it may become less effective, called brake fade. Some designs are inherently prone to fade, while other designs are relatively immune. Further, use considerations, such as cooling, often have a big effect on fade. Smoothness – A brake that is grabby, pulses, has chatter, or otherwise exerts varying brake force may lead to skids. For example, railroad wheels have little traction, and friction brakes without an anti-skid mechanism often lead to skids, which increases maintenance costs and leads to a "thump thump" feeling for riders inside. Power – Brakes are often described as "powerful" when a small human application force leads to a braking force that is higher than typical for other brakes in the same class. This notion of "powerful" does not relate to continuous power dissipation, and may be confusing in that a brake may be "powerful" and brake strongly with a gentle brake application, yet have lower (worse) peak force than a less "powerful" brake. Pedal feel – Brake pedal feel encompasses subjective perception of brake power output as a function of pedal travel. Pedal travel is influenced by the fluid displacement of the brake and other factors. Drag – Brakes have varied amount of drag in the off-brake condition depending on design of the system to accommodate total system compliance and deformation that exists under braking with ability to retract friction material from the rubbing surface in the off-brake condition. Durability – Friction brakes have wear surfaces that must be renewed periodically. Wear surfaces include the brake shoes or pads, and also the brake disc or drum. There may be tradeoffs, for example, a wear surface that generates high peak force may also wear quickly. Weight – Brakes are often "added weight" in that they serve no other function. Further, brakes are often mounted on wheels, and unsprung weight can significantly hurt traction in some circumstances. "Weight" may mean the brake itself, or may include additional support structure. Noise – Brakes usually create some minor noise when applied, but often create squeal or grinding noises that are quite loud. Foundation components Foundation components are the brake-assembly components at the wheels of a vehicle, named for forming the basis of the rest of the brake system. These mechanical parts contained around the wheels are controlled by the air brake system. The three types of foundation brake systems are “S” cam brakes, disc brakes and wedge brakes. Brake boost Most modern passenger vehicles, and light vans, use a vacuum assisted brake system that greatly increases the force applied to the vehicle's brakes by its operator. This additional force is supplied by the manifold vacuum generated by air flow being obstructed by the throttle on a running engine. This force is greatly reduced when the engine is running at fully open throttle, as the difference between ambient air pressure and manifold (absolute) air pressure is reduced, and therefore available vacuum is diminished. 
However, brakes are rarely applied at full throttle; the driver takes the right foot off the gas pedal and moves it to the brake pedal - unless left-foot braking is used. Because of low vacuum at high RPM, reports of unintended acceleration are often accompanied by complaints of failed or weakened brakes, as the high-revving engine, having an open throttle, is unable to provide enough vacuum to power the brake booster. This problem is exacerbated in vehicles equipped with automatic transmissions as the vehicle will automatically downshift upon application of the brakes, thereby increasing the torque delivered to the driven-wheels in contact with the road surface. Heavier road vehicles, as well as trains, usually boost brake power with compressed air, supplied by one or more compressors. Noise Although ideally a brake would convert all the kinetic energy into heat, in practice a significant amount may be converted into acoustic energy instead, contributing to noise pollution. For road vehicles, the noise produced varies significantly with tire construction, road surface, and the magnitude of the deceleration. Noise can be caused by different things. These are signs that there may be issues with brakes wearing out over time. Fires Railway brake malfunctions can produce sparks and cause forest fires. In some very extreme cases, disc brakes can become red hot and set on fire. This happened in the Tuscan GP, when the Mercedes car, the W11 had its front carbon disc brakes almost bursting into flames, due to low ventilation and high usage. These fires can also occur on some Mercedes Sprinter vans, when the load adjusting sensor seizes up and the rear brakes have to compensate for the fronts. Inefficiency A significant amount of energy is always lost while braking, even with regenerative braking which is not perfectly efficient. Therefore, a good metric of efficient energy use while driving is to note how much one is braking. If the majority of deceleration is from unavoidable friction instead of braking, one is squeezing out most of the service from the vehicle. Minimizing brake use is one of the fuel economy-maximizing behaviors. While energy is always lost during a brake event, a secondary factor that influences efficiency is "off-brake drag", or drag that occurs when the brake is not intentionally actuated. After a braking event, hydraulic pressure drops in the system, allowing the brake caliper pistons to retract. However, this retraction must accommodate all compliance in the system (under pressure) as well as thermal distortion of components like the brake disc or the brake system will drag until the contact with the disc, for example, knocks the pads and pistons back from the rubbing surface. During this time, there can be significant brake drag. This brake drag can lead to significant parasitic power loss, thus impacting fuel economy and overall vehicle performance. History Early brake system In the 1890s, Wooden block brakes became obsolete when Michelin brothers introduced rubber tires. During the 1960s, some car manufacturers replaced drum brakes with disc brakes. Electronic brake system In 1966, the ABS was fitted in the Jensen FF grand tourer. In 1978, Bosch and Mercedes updated their 1936 anti-lock brake system for the Mercedes S-Class. That ABS is a fully electronic, four-wheel and multi-channel system that later became standard. 
In 2005, ESC — which automatically applies the brakes to avoid a loss of steering control — become compulsory for carriers of dangerous goods without data recorders in the Canadian province of Quebec. Since 2017, numerous United Nations Economic Commission for Europe (UNECE) countries use Brake Assist System (BAS) a function of the braking system that deduces an emergency braking event from a characteristic of the driver's brake demand and under such conditions assist the driver to improve braking. In July 2013 UNECE vehicle regulation 131 was enacted. This regulation defines Advanced Emergency Braking Systems (AEBS) for heavy vehicles to automatically detect a potential forward collision and activate the vehicle braking system. On 23 January 2020 UNECE vehicle regulation 152 was enacted, defining Advanced Emergency Braking Systems for light vehicles. From May 2022, in the European Union, by law, new vehicles will have advanced emergency-braking system. See also Adapted automobile Air brake (rail) Air brake (road vehicle) Anchor Advanced Emergency Braking System Anti-lock braking system Archaic past tense of the verb 'to break' (see brake) Band brake Bicycle brake systems Brake-by-wire (or electromechanical braking) Brake bleeding Brake lining Brake tester Brake wear indicator Braking distance Breeching (tack) Bundy tube Caster brake Counter-pressure brake Disc brake Drum brake Dynamic braking Electromagnetic brake Regenerative brake Electronic Parking Brake Emergency brake (train) Engine braking Hand brake Line lock Overrun brake Parking brake Railway brake Retarder Threshold braking Trail braking Vacuum brake Wagon brake References External links How Stuff Works - Brakes Vehicle braking technologies
Heat recovery ventilation
Heat recovery ventilation (HRV), also known as mechanical ventilation heat recovery (MVHR) is a ventilation system that recovers energy by operating between two air sources at different temperatures. It is used to reduce the heating and cooling demands of buildings. By recovering the residual heat in the exhaust gas, the fresh air introduced into the air conditioning system is preheated (or pre-cooled) before it enters the room, or the air cooler of the air conditioning unit performs heat and moisture treatment. A typical heat recovery system in buildings comprises a core unit, channels for fresh and exhaust air, and blower fans. Building exhaust air is used as either a heat source or heat sink, depending on the climate conditions, time of year, and requirements of the building. Heat recovery systems typically recover about 60–95% of the heat in the exhaust air and have significantly improved the energy efficiency of buildings. Energy recovery ventilation (ERV) is the energy recovery process in residential and commercial HVAC systems that exchanges the energy contained in normally exhausted air of a building or conditioned space, using it to treat (precondition) the incoming outdoor ventilation air. The specific equipment involved may be called an Energy Recovery Ventilator, also commonly referred to simply as an ERV. An ERV is a type of air-to-air heat exchanger that transfers latent heat as well as sensible heat. Because both temperature and moisture are transferred, ERVs are described as total enthalpic devices. In contrast, a heat recovery ventilator (HRV) can only transfer sensible heat. HRVs can be considered sensible only devices because they only exchange sensible heat. In other words, all ERVs are HRVs, but not all HRVs are ERVs. It is incorrect to use the terms HRV, AAHX (air-to-air heat exchanger), and ERV interchangeably. During the warmer seasons, an ERV system pre-cools and dehumidifies; during cooler seasons the system humidifies and pre-heats. An ERV system helps HVAC design meet ventilation and energy standards (e.g., ASHRAE), improves indoor air quality and reduces total HVAC equipment capacity, thereby reducing energy consumption. ERV systems enable an HVAC system to maintain a 40-50% indoor relative humidity, essentially in all conditions. ERV's must use power for a blower to overcome the pressure drop in the system, hence incurring a slight energy demand. Working principle A heat recovery system is designed to supply conditioned air to the occupied space to maintain a certain temperature. A heat recovery system helps keep a house ventilated while recovering heat being emitted from the inside environment. The purpose of heat recovery systems is to transfer the thermal energy from one fluid to another fluid, from one fluid to a solid, or from a solid surface to a fluid at different temperatures and in thermal contact. There is no direct interaction between fluid and fluid or fluid and solid in most heat recovery systems. In some heat recovery systems, fluid leakage is observed due to pressure differences between fluids, resulting in a mixture of the two fluids. Types Thermal wheel Fixed plate heat exchanger Fixed plate heat exchangers have no moving parts, and consist of alternating layers of plates that are separated and sealed. Typical flow is cross current and since the majority of plates are solid and non permeable, sensible only transfer is the result. The tempering of incoming fresh air is done by a heat or energy recovery core. 
In this case, the core is made of aluminum or plastic plates. Humidity levels are adjusted through the transferring of water vapor. This is done with a rotating wheel either containing a desiccant material or permeable plates. Enthalpy plates were introduced in 2006 by Paul, a special company for ventilation systems for passive houses. A crosscurrent countercurrent air-to-air heat exchanger built with a humidity permeable material. Polymer fixed-plate countercurrent energy recovery ventilators were introduced in 1998 by Building Performance Equipment (BPE), a residential, commercial, and industrial air-to-air energy recovery manufacturer. These heat exchangers can be both introduced as a retrofit for increased energy savings and fresh air as well as an alternative to new construction. In new construction situations, energy recovery will effectively reduce the required heating/cooling capacity of the system. The percentage of the total energy saved will depend on the efficiency of the device (up to 90% sensible) and the latitude of the building. Due to the need to use multiple sections, fixed plate energy exchangers are often associated with high pressure drop and larger footprints. Due to their inability to offer a high amount of latent energy transfer these systems also have a high chance of frosting in colder climates. The technology patented by Finnish company RecyclingEnergy Int. Corp. is based on a regenerative plate heat exchanger taking advantage of humidity of air by cyclical condensation and evaporation, e.g. latent heat, enabling not only high annual thermal efficiency but also microbe-free plates due to self-cleaning/washing method. Therefore, the unit is called an enthalpy recovery ventilator rather than heat or energy recovery ventilator. Company's patented LatentHeatPump is based on its enthalpy recovery ventilator having COP of 33 in the summer and 15 in the winter. Fixed plate heat exchangers are the most commonly used type of heat exchanger and have been developed for 40 years. Thin metal plates are stacked with a small spacing between plates. Two different air streams pass through these spaces, adjacent to each other. Heat transfer occurs as the temperature transfers through the plate from one air stream to the other. The efficiency of these devices has reached 90% sensible heat efficiency in transferring sensible heat from one air stream to another. The high levels of efficiency are attributed to the high heat transfer coefficients of the materials used, operational pressure and temperature range. Heat pipes Heat pipes are a heat recovery device that uses a multi-phase process to transfer heat from one air stream to another. Heat is transferred using an evaporator and condenser within a wicked, sealed pipe containing a fluid which undergoes a constant phase change to transfer heat. The fluid within the pipes changes from a fluid to a gas in the evaporator section, absorbing the thermal energy from the warm air stream. The gas condenses back to a fluid in the condenser section where the thermal energy is dissipated into the cooler air stream raising the temperature. The fluid/gas is transported from one side of the heat pipe to the other through pressure, wick forces or gravity, depending on the arrangement of the heat pipe. 
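The effectiveness figures quoted above translate into a simple sensible-heat balance for the supply stream. The sketch below is a minimal illustration, assuming sensible-only transfer, balanced supply and exhaust flows, and round values for air density and specific heat; the flow rate and temperatures are examples, not data from the cited studies.

```python
# Illustrative sensible heat-recovery balance: with effectiveness eps, the incoming
# fresh air is pre-heated toward the exhaust-air temperature.
CP_AIR = 1005.0   # J/(kg*K), specific heat of air
RHO_AIR = 1.2     # kg/m^3, approximate air density

def supply_temperature(t_outdoor: float, t_exhaust: float, eps: float) -> float:
    """Supply-air temperature after the heat-recovery core."""
    return t_outdoor + eps * (t_exhaust - t_outdoor)

def recovered_power(volume_flow_m3_s: float, t_outdoor: float, t_exhaust: float, eps: float) -> float:
    """Sensible heat recovered from the exhaust stream, in watts."""
    m_dot = RHO_AIR * volume_flow_m3_s
    return m_dot * CP_AIR * (supply_temperature(t_outdoor, t_exhaust, eps) - t_outdoor)

# Example: 50 L/s of ventilation air, -5 °C outdoors, 21 °C exhaust, 80% effective core.
print(supply_temperature(-5.0, 21.0, 0.80), "°C supply")
print(round(recovered_power(0.050, -5.0, 21.0, 0.80)), "W recovered")
```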
Run-around Run-around systems are hybrid heat recovery system that incorporates characteristics from other heat recovery technology to form a single device, capable of recovering heat from one air stream and delivering to another a significant distance away. The general case of run-around heat recovery, two fixed plate heat exchangers are located in two separate air streams and are linked by a closed loop containing a fluid that is continually pumped between the two heat exchangers. The fluid is heated and cooled constantly as it flows around the loop, providing heat recovery. The constant flow of the fluid through the loop requires pumps to move between the two heat exchangers. Though this is an additional energy demand, using pumps to circulate fluid is less energy intensive than fans to circulate air. Phase change materials Phase change materials, or PCMs, are a technology that is used to store sensible and latent heat within a building structure at a higher storage capacity than standard building materials. PCMs have been studied extensively due to their ability to store heat and transfer heating and cooling demands from conventional peak times to off-peak times. The concept of the thermal mass of a building for heat storage, that the physical structure of the building absorbs heat to help cool the air, has long been understood and investigated. A study of PCMs in comparison to traditional building materials has shown that the thermal storage capacity of PCMs is twelve times higher than standard building materials over the same temperature range. The pressure drop across PCMs has not been investigated to be able to comment on the effect that the material may have on air streams. However, as the PCM can be incorporated directly into the building structure, this would not affect the flow in the same way other heat exchanger technologies do, it can be suggested that there is no pressure loss created by the inclusion of PCMs in the building fabric. Applications Fixed plate heat exchangers Mardiana et al. integrated a fixed plate heat exchanger into a commercial wind tower, highlighting the advantages of this type of system as a means of zero energy ventilation which can be simply modified. Full scale laboratory testing was undertaken in order to determine the effects and efficiency of the combined system. A wind tower was integrated with a fixed plate heat exchanger and was mounted centrally in a sealed test room. The results from this study indicate that the combination of a wind tower passive ventilation system and a fixed plate heat recovery device could provide an effective combined technology to recover waste heat from exhaust air and cool incoming warm air with zero energy demand. Though no quantitative data for the ventilation rates within the test room was provided, it can be assumed that due to the high-pressure loss across the heat exchanger that these were significantly reduced from the standard operation of a wind tower. Further investigation of this combined technology is essential in understanding the air flow characteristics of the system. Heat pipes Due to the low-pressure loss of heat pipe systems, more research has been conducted into the integration of this technology into passive ventilation than other heat recovery systems. Commercial wind towers were again used as the passive ventilation system for integrating this heat recovery technology. 
This further enhances the suggestion that commercial wind towers provide a worthwhile alternative to mechanical ventilation, capable of supplying and exhausting air at the same time. Run-around systems Flaga-Maryanczyk et al. conducted a study in Sweden which examined a passive ventilation system which integrated a run-around system using a ground source heat pump as the heat source to warm incoming air. Experimental measurements and weather data were taken from the passive house used in the study. A CFD model of the passive house was created with the measurements taken from the sensors and weather station used as input data. The model was run to calculate the effectiveness of the run-around system and the capabilities of the ground source heat pump. Ground source heat pumps provide a reliable source of consistent thermal energy when buried 10–20 m below the ground surface. The ground temperature is warmer than the ambient air in winter and cooler than the ambient air in summer, providing both a heat source and a heat sink. It was found that in February, the coldest month in the climate, the ground source heat pump was capable of delivering almost 25% of the heating needs of the house and occupants. Phase change materials The majority of research interest in PCMs is the application of phase change material integration into traditional porous building materials such as concrete and wall boards. Kosny et al. analyzed the thermal performance of buildings that have PCM-enhanced construction materials within the structure. Analysis showed that the addition of PCMs is beneficial in terms of improving thermal performance. A significant drawback of PCM used in a passive ventilation system for heat recovery is the lack of instantaneous heat transfer across different airstreams. Phase change materials are a heat storage technology, whereby the heat is stored within the PCM until the air temperature has fallen to a significant level where it can be released back into the air stream. No research has been conducted into the use of PCMs between two airstreams of different temperatures where continuous, instantaneous heat transfer can occur. An investigation into this area would be beneficial for passive ventilation heat recovery research. Advantages and disadvantages Source: Types of energy recovery devices **Total energy exchange only available on hygroscopic units and condensate return units Environmental impacts Source: Energy saving is one of the key issues for both fossil fuel consumption and the protection of the global environment. The rising cost of energy and global warming underlined that developing improved energy systems is necessary to increase energy efficiency while reducing greenhouse gas emissions. One of the most effective ways to reduce energy demand is to use energy more efficiently. Therefore, waste heat recovery is becoming popular in recent years since it improves energy efficiency. About 26% of industrial energy is still wasted as hot gas or fluid in many countries. However, during last two decades there has been remarkable attention to recover waste heat from various industries and to optimize the units which are used to absorb heat from waste gases. Thus, these attempts enhance reducing of global warming as well as of energy demand. Energy consumption Energy recovery ventilation Importance Nearly half of global energy is used in buildings,and half of heating/cooling cost is caused by ventilation when it is done by the "open window" method according to the regulations. 
Secondly, energy generation and the grid are sized to meet the peak demand for power. Proper ventilation with heat recovery is a cost-efficient, sustainable and quick way to reduce global energy consumption, provide better indoor air quality (IAQ), and protect buildings and the environment.

Methods of transfer
During the cooling season, the system works to cool and dehumidify the incoming, outside air. To do this, the system takes the rejected heat and sends it into the exhaust airstream. Subsequently, this air cools the condenser coil at a lower temperature than if the rejected heat had not entered the exhaust airstream. During the heating seasons, the system works in reverse. Instead of discharging the heat into the exhaust airstream, the system draws heat from the exhaust airstream in order to pre-heat the incoming air. At this stage, the air passes through a primary unit and then into the space being conditioned. With this type of system, it is normal during the cooling seasons for the exhaust air to be cooler than the ventilation air and, during the heating seasons, warmer than the ventilation air. It is for this reason the system works efficiently and effectively. The coefficient of performance (COP) will increase as the conditions become more extreme (i.e., hotter and more humid for cooling, and colder for heating).

Efficiency
The efficiency of an ERV system is the ratio of energy transferred between the two air streams compared with the total energy transported through the heat exchanger. With the variety of products on the market, efficiency will vary as well. Some of these systems have been known to have heat exchange efficiencies as high as 70-80%, while others are as low as 50%. Even though this lower figure is preferable to a basic HVAC system without recovery, it is not up to par with the rest of its class. Studies are being done to increase the heat transfer efficiency to 90%. The use of modern low-cost gas-phase heat exchanger technology will allow for significant improvements in efficiency. The use of high-conductivity porous material is believed to produce an exchange effectiveness in excess of 90%, producing a five-fold improvement in energy recovery. The Home Ventilating Institute (HVI) has developed a standard test for all units manufactured within the United States; regardless, not all have been tested. It is imperative to investigate efficiency claims, comparing data produced by HVI as well as that produced by the manufacturer. (Note: all units sold in Canada are put through the R-2000 program, a standard test equivalent to the HVI test.)

Exhaust air heat pump
An exhaust air heat pump (EAHP) extracts heat from the exhaust air of a building and transfers the heat to the supply air, hot tap water and/or a hydronic heating system (underfloor heating, radiators). This requires at least mechanical exhaust; mechanical supply is optional (see mechanical ventilation). This type of heat pump requires a certain air exchange rate to maintain its output power. Since the inside air is approximately 20–22 degrees Celsius all year round, the maximum output power of the heat pump does not vary with the seasons or the outdoor temperature. Air leaving the building when the heat pump's compressor is running is usually at around −1 °C in most versions. Thus, the unit is extracting heat from the air that needs to be changed in any case (at a rate of around half an air change per hour). Air entering the house is of course generally warmer than the air processed through the unit, so there is a net 'gain'.
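As a rough cross-check of the exhaust-air heat pump description above, the sensible heat available from cooling the exhaust stream can be estimated directly; the ventilation flow rate and air properties in this sketch are assumed values, not figures from the article.

```python
# Rough, illustrative estimate of the heat an exhaust air heat pump can extract by
# cooling indoor exhaust air from about 21 °C down to roughly -1 °C at the outlet.
RHO_AIR = 1.2     # kg/m^3
CP_AIR = 1005.0   # J/(kg*K)

def extracted_power(volume_flow_m3_s: float, t_indoor: float, t_outlet: float) -> float:
    """Sensible heat removed from the exhaust air stream, in watts."""
    return RHO_AIR * volume_flow_m3_s * CP_AIR * (t_indoor - t_outlet)

# Example: 30 L/s of exhaust air cooled from 21 °C to -1 °C.
print(round(extracted_power(0.030, 21.0, -1.0)), "W")   # on the order of 800 W
```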
Care must be taken that these are only used in the correct type of houses. Exhaust air heat pumps have minimum flow rates so that when installed in a small flat, the airflow chronically over-ventilates the flat and increases the heat loss by drawing in large amounts of unwanted outside air. There are some models though that can take in additional outdoor air to negate this and this air is also feed to the compressor to avoid over ventilation.For most earlier exhaust air heat pumps there will be a low heat output to the hot water and heating of just around 1.8 kW from the compressor/heat pump process, but if that falls short of the building's requirements additional heat will be automatically triggered in the form of immersion heaters or an external gas boiler. The immersion heater top-up could be substantial ( if you select the wrong unit), and when a unit with a 6 kW immersion heater operates at the full output it will cost £1 per hour to run. Issues Between 2009 and 2013, some 15,000 brand new social homes were built in the UK with NIBE EAHPs used as primary heating. Owners and housing association tenants reported crippling electric bills. High running costs are usual with exhaust air heat pumps and should be expected, due to the very small heat recovery with these units. Typically the ventilation air stream is around 31 litres per second and the heat recovery is 750W and no more. All additional heat necessary to provide heating and hot water is from electricity, either compressor electrical input or immersion heater. At outside temperatures below 0 degrees Celsius, this type of heat pump removes more heat from a home than it supplies. Over a year around 60% of the energy input to a property with an exhaust air heat pump will be from electricity. Many families are still battling with developers to have their EAHP systems replaced with more reliable and efficient heating, noting the success of residents in Coventry. See also Air Infiltration and Ventilation Centre Energy recycling Green building Heat exchanger HVAC List of low-energy building techniques Low energy building Low-energy house Passive cooling Passive house Renewable heat Seasonal thermal energy storage Solar air conditioning Solar air heat Sustainable architecture Sustainable design Water heat recycling Zero energy building References External links Animation explaining simply how HRV works Heat recovery in Industry Energy and Heat Recovery Ventilators (ERV/HRV) Write-up of Single Room MHRV (SRMHRV) in UK home Builder Insight Bulletin - Heat Recovery Ventilation http://www.engineeringtoolbox.com/heat-recovery-efficiency-d_201.html Energy and Heat Recovery Ventilators (ERV/HRV) Ventilation Heating, ventilation, and air conditioning Low-energy building Energy recovery Heating Residential heating Sustainable building Energy conservation Heat pumps pl:Rekuperator
Mathematics of general relativity
When studying and formulating Albert Einstein's theory of general relativity, various mathematical structures and techniques are utilized. The main tools used in this geometrical theory of gravitation are tensor fields defined on a Lorentzian manifold representing spacetime. This article is a general description of the mathematics of general relativity. Note: tensor equations in this article use the abstract index notation.

Tensors
The principle of general covariance was one of the central principles in the development of general relativity. It states that the laws of physics should take the same mathematical form in all reference frames. The term 'general covariance' was used in the early formulation of general relativity, but the principle is now often referred to as 'diffeomorphism covariance'. Diffeomorphism covariance is not the defining feature of general relativity, and controversies remain regarding its present status in general relativity. However, the invariance property of physical laws implied in the principle, coupled with the fact that the theory is essentially geometrical in character (making use of non-Euclidean geometries), suggested that general relativity be formulated using the language of tensors. This will be discussed further below.

Spacetime as a manifold
Most modern approaches to mathematical general relativity begin with the concept of a manifold. More precisely, the basic physical construct representing a curved spacetime is modelled by a four-dimensional, smooth, connected, Lorentzian manifold. Other physical descriptors are represented by various tensors, discussed below. The rationale for choosing a manifold as the fundamental mathematical structure is to reflect desirable physical properties. For example, in the theory of manifolds, each point is contained in a (by no means unique) coordinate chart, and this chart can be thought of as representing the 'local spacetime' around the observer (represented by the point). The principle of local Lorentz covariance, which states that the laws of special relativity hold locally about each point of spacetime, lends further support to the choice of a manifold structure for representing spacetime, as locally around a point on a general manifold, the region 'looks like', or very closely approximates, Minkowski space (flat spacetime). The idea of coordinate charts as 'local observers who can perform measurements in their vicinity' also makes good physical sense, as this is how one actually collects physical data - locally. For cosmological problems, a coordinate chart may be quite large.

Local versus global structure
An important distinction in physics is the difference between local and global structures. Measurements in physics are performed in a relatively small region of spacetime, and this is one reason for studying the local structure of spacetime in general relativity, whereas determining the global spacetime structure is important, especially in cosmological problems. An important problem in general relativity is to tell when two spacetimes are 'the same', at least locally. This problem has its roots in manifold theory, where one must determine whether two Riemannian manifolds of the same dimension are locally isometric ('locally the same'). This latter problem has been solved, and its adaptation for general relativity is called the Cartan–Karlhede algorithm.

Tensors in general relativity
One of the profound consequences of relativity theory was the abolition of privileged reference frames.
The description of physical phenomena should not depend upon who does the measuring - one reference frame should be as good as any other. Special relativity demonstrated that no inertial reference frame was preferential to any other inertial reference frame, but preferred inertial reference frames over noninertial reference frames. General relativity eliminated preference for inertial reference frames by showing that there is no preferred reference frame (inertial or not) for describing nature. Any observer can make measurements, and the precise numerical quantities obtained only depend on the coordinate system used. This suggested a way of formulating relativity using 'invariant structures', those that are independent of the coordinate system (represented by the observer) used, yet still have an independent existence. The most suitable mathematical structure seemed to be a tensor. For example, when measuring the electric and magnetic fields produced by an accelerating charge, the values of the fields will depend on the coordinate system used, but the fields are regarded as having an independent existence, this independence represented by the electromagnetic tensor. Mathematically, tensors are generalised linear operators - multilinear maps. As such, the ideas of linear algebra are employed to study tensors. At each point of a manifold, the tangent and cotangent spaces to the manifold at that point may be constructed. Vectors (sometimes referred to as contravariant vectors) are defined as elements of the tangent space and covectors (sometimes termed covariant vectors, but more commonly dual vectors or one-forms) are elements of the cotangent space. At a given point, these two vector spaces may be used to construct type (M, N) tensors, which are real-valued multilinear maps acting on the direct sum of M copies of the cotangent space with N copies of the tangent space. The set of all such multilinear maps forms a vector space, called the tensor product space of type (M, N) at that point; if the tangent space is n-dimensional, it can be shown that this space has dimension n^(M+N). In the general relativity literature, it is conventional to use the component syntax for tensors: a type (M, N) tensor is written in terms of its components with respect to a chosen basis for each tangent-space factor and a dual basis for each cotangent-space factor. As spacetime is assumed to be four-dimensional, each index on a tensor can be one of four values. Hence, the total number of components a tensor possesses equals 4^R, where R is the total number of covariant and contravariant indices on the tensor (a number called the rank of the tensor).

Symmetric and antisymmetric tensors
Some physical quantities are represented by tensors not all of whose components are independent. Important examples of such tensors include symmetric and antisymmetric tensors. Antisymmetric tensors are commonly used to represent rotations (for example, the vorticity tensor). Although a generic rank R tensor in 4 dimensions has 4^R components, constraints on the tensor such as symmetry or antisymmetry serve to reduce the number of distinct components. For example, a symmetric rank two tensor satisfies T_ab = T_ba and possesses 10 independent components, whereas an antisymmetric (skew-symmetric) rank two tensor satisfies P_ab = -P_ba and has 6 independent components. For ranks greater than two, the symmetric or antisymmetric index pairs must be explicitly identified. Antisymmetric tensors of rank 2 play important roles in relativity theory. The set of all such tensors - often called bivectors - forms a vector space of dimension 6, sometimes called bivector space.
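These component counts are easy to tabulate; the short sketch below simply evaluates 4^R together with n(n+1)/2 and n(n-1)/2 for n = 4.

```python
# Counting sketch for the statements above, in n = 4 dimensions: a generic rank-R
# tensor has 4**R components, a symmetric rank-2 tensor has n(n+1)/2 = 10 independent
# components, and an antisymmetric rank-2 tensor has n(n-1)/2 = 6.
n = 4

def total_components(rank: int, dim: int = n) -> int:
    return dim ** rank

def symmetric_rank2(dim: int = n) -> int:
    return dim * (dim + 1) // 2

def antisymmetric_rank2(dim: int = n) -> int:
    return dim * (dim - 1) // 2

print(total_components(2), symmetric_rank2(), antisymmetric_rank2())   # 16 10 6
```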
The metric tensor The metric tensor is a central object in general relativity that describes the local geometry of spacetime (as a result of solving the Einstein field equations). Using the weak-field approximation, the metric tensor can also be thought of as representing the 'gravitational potential'. The metric tensor is often just called 'the metric'. The metric is a symmetric tensor and is an important mathematical tool. As well as being used to raise and lower tensor indices, it also generates the connections which are used to construct the geodesic equations of motion and the Riemann curvature tensor. A convenient means of expressing the metric tensor in combination with the incremental intervals of coordinate distance that it relates to is through the line element: This way of expressing the metric was used by the pioneers of differential geometry. While some relativists consider the notation to be somewhat old-fashioned, many readily switch between this and the alternative notation: The metric tensor is commonly written as a 4×4 matrix. This matrix is symmetric and thus has 10 independent components. Invariants One of the central features of GR is the idea of invariance of physical laws. This invariance can be described in many ways, for example, in terms of local Lorentz covariance, the general principle of relativity, or diffeomorphism covariance. A more explicit description can be given using tensors. The crucial feature of tensors used in this approach is the fact that (once a metric is given) the operation of contracting a tensor of rank R over all R indices gives a number - an invariant - that is independent of the coordinate chart one uses to perform the contraction. Physically, this means that if the invariant is calculated by any two observers, they will get the same number, thus suggesting that the invariant has some independent significance. Some important invariants in relativity include: The Ricci scalar: The Kretschmann scalar: Other examples of invariants in relativity include the electromagnetic invariants, and various other curvature invariants, some of the latter finding application in the study of gravitational entropy and the Weyl curvature hypothesis. Tensor classifications The classification of tensors is a purely mathematical problem. In GR, however, certain tensors that have a physical interpretation can be classified with the different forms of the tensor usually corresponding to some physics. Examples of tensor classifications useful in general relativity include the Segre classification of the energy–momentum tensor and the Petrov classification of the Weyl tensor. There are various methods of classifying these tensors, some of which use tensor invariants. Tensor fields in general relativity Tensor fields on a manifold are maps which attach a tensor to each point of the manifold. This notion can be made more precise by introducing the idea of a fibre bundle, which in the present context means to collect together all the tensors at all points of the manifold, thus 'bundling' them all into one grand object called the tensor bundle. A tensor field is then defined as a map from the manifold to the tensor bundle, each point being associated with a tensor at . The notion of a tensor field is of major importance in GR. For example, the geometry around a star is described by a metric tensor at each point, so at each point of the spacetime the value of the metric should be given to solve for the paths of material particles. 
Another example is the values of the electric and magnetic fields (given by the electromagnetic field tensor) and the metric at each point around a charged black hole, which are needed to determine the motion of a charged particle in such a field. Vector fields are contravariant rank one tensor fields. Important vector fields in relativity include the four-velocity, $U^a$, which is the coordinate distance travelled per unit of proper time, the four-acceleration $A^a$ and the four-current $J^a$ describing the charge and current densities. Other physically important tensor fields in relativity include the following: The stress–energy tensor $T_{ab}$, a symmetric rank-two tensor. The electromagnetic field tensor $F_{ab}$, a rank-two antisymmetric tensor. Although the word 'tensor' refers to an object at a point, it is common practice to refer to tensor fields on a spacetime (or a region of it) as just 'tensors'. At each point of a spacetime on which a metric is defined, the metric can be reduced to the Minkowski form using Sylvester's law of inertia. Tensorial derivatives Before the advent of general relativity, changes in physical processes were generally described by partial derivatives, for example, in describing changes in electromagnetic fields (see Maxwell's equations). Even in special relativity, the partial derivative is still sufficient to describe such changes. However, in general relativity, it is found that derivatives which are also tensors must be used. The derivatives have some common features including that they are derivatives along integral curves of vector fields. The problem in defining derivatives on manifolds that are not flat is that there is no natural way to compare vectors at different points. An extra structure on a general manifold is required to define derivatives. Below are described two important derivatives that can be defined by imposing an additional structure on the manifold in each case. Affine connections The curvature of a spacetime can be characterised by taking a vector at some point and parallel transporting it along a curve on the spacetime. An affine connection is a rule which describes how to legitimately move a vector along a curve on the manifold without changing its direction. By definition, an affine connection is a bilinear map $\Gamma(TM) \times \Gamma(TM) \to \Gamma(TM)$, where $\Gamma(TM)$ is the space of all vector fields on the spacetime. This bilinear map can be described in terms of a set of connection coefficients (also known as Christoffel symbols) specifying what happens to components of basis vectors under infinitesimal parallel transport: $\nabla_{\mathbf{e}_i} \mathbf{e}_j = \Gamma^k_{ij}\, \mathbf{e}_k$. Despite their appearance, the connection coefficients are not the components of a tensor. Generally speaking, there are $4^3 = 64$ independent connection coefficients at each point of spacetime. The connection is called symmetric or torsion-free if $\Gamma^k_{ij} = \Gamma^k_{ji}$. A symmetric connection has at most 40 unique coefficients. For any curve $\gamma$ and two points $A = \gamma(s_1)$ and $B = \gamma(s_2)$ on this curve, an affine connection gives rise to a map of vectors in the tangent space at $A$ into vectors in the tangent space at $B$, and this map can be computed component-wise by solving the differential equation $\frac{dX^a}{ds} + \Gamma^a_{bc}\, X^b\, \frac{dx^c}{ds} = 0$, where $\frac{dx^c}{ds}$ is the vector tangent to the curve at the point $\gamma(s)$. An important affine connection in general relativity is the Levi-Civita connection, which is a symmetric connection obtained from parallel transporting a tangent vector along a curve whilst keeping the inner product of that vector constant along the curve. The resulting connection coefficients (Christoffel symbols) can be calculated directly from the metric.
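As a concrete illustration of that last point, the following is a minimal sketch of computing Christoffel symbols directly from a metric. It uses the SymPy computer-algebra library (an illustrative choice, not something prescribed by the article) and takes the round 2-sphere of radius r as a small test case whose symbols are known in closed form.

```python
# Sketch: Christoffel symbols of the Levi-Civita connection computed
# directly from a metric, here for the round 2-sphere of radius r.
import sympy as sp

theta, phi, r = sp.symbols('theta phi r', positive=True)
coords = [theta, phi]

# Metric of a 2-sphere: ds^2 = r^2 dtheta^2 + r^2 sin^2(theta) dphi^2
g = sp.Matrix([[r**2, 0],
               [0, r**2 * sp.sin(theta)**2]])
g_inv = g.inv()
n = len(coords)

# Gamma^a_{bc} = 1/2 g^{ad} (d_b g_{dc} + d_c g_{db} - d_d g_{bc})
Gamma = [[[sp.simplify(
            sum(sp.Rational(1, 2) * g_inv[a, d] *
                (sp.diff(g[d, c], coords[b]) +
                 sp.diff(g[d, b], coords[c]) -
                 sp.diff(g[b, c], coords[d]))
                for d in range(n)))
           for c in range(n)] for b in range(n)] for a in range(n)]

for a in range(n):
    for b in range(n):
        for c in range(n):
            if Gamma[a][b][c] != 0:
                print(f"Gamma^{coords[a]}_{{{coords[b]}{coords[c]}}} =", Gamma[a][b][c])
# Expected non-zero symbols:
#   Gamma^theta_{phi phi} = -sin(theta)*cos(theta)
#   Gamma^phi_{theta phi} = Gamma^phi_{phi theta} = cos(theta)/sin(theta)
```

The same loop applies unchanged to a four-dimensional spacetime metric such as Schwarzschild; only the coordinate list and the matrix g need to be replaced.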
For this reason, this type of connection is often called a metric connection. The covariant derivative Let be a point, a vector located at , and a vector field. The idea of differentiating at along the direction of in a physically meaningful way can be made sense of by choosing an affine connection and a parameterized smooth curve such that and . The formula for a covariant derivative of along associated with connection turns out to give curve-independent results and can be used as a "physical definition" of a covariant derivative. It can be expressed using connection coefficients: The expression in brackets, called a covariant derivative of (with respect to the connection) and denoted by , is more often used in calculations: A covariant derivative of can thus be viewed as a differential operator acting on a vector field sending it to a type tensor (increasing the covariant index by 1) and can be generalised to act on type tensor fields sending them to type tensor fields. Notions of parallel transport can then be defined similarly as for the case of vector fields. By definition, a covariant derivative of a scalar field is equal to the regular derivative of the field. In the literature, there are three common methods of denoting covariant differentiation: Many standard properties of regular partial derivatives also apply to covariant derivatives: In general relativity, one usually refers to "the" covariant derivative, which is the one associated with Levi-Civita affine connection. By definition, Levi-Civita connection preserves the metric under parallel transport, therefore, the covariant derivative gives zero when acting on a metric tensor (as well as its inverse). It means that we can take the (inverse) metric tensor in and out of the derivative and use it to raise and lower indices: The Lie derivative Another important tensorial derivative is the Lie derivative. Unlike the covariant derivative, the Lie derivative is independent of the metric, although in general relativity one usually uses an expression that seemingly depends on the metric through the affine connection. Whereas the covariant derivative required an affine connection to allow comparison between vectors at different points, the Lie derivative uses a congruence from a vector field to achieve the same purpose. The idea of Lie dragging a function along a congruence leads to a definition of the Lie derivative, where the dragged function is compared with the value of the original function at a given point. The Lie derivative can be defined for type tensor fields and in this respect can be viewed as a map that sends a type to a type tensor. The Lie derivative is usually denoted by , where is the vector field along whose congruence the Lie derivative is taken. The Lie derivative of any tensor along a vector field can be expressed through the covariant derivatives of that tensor and vector field. The Lie derivative of a scalar is just the directional derivative: Higher rank objects pick up additional terms when the Lie derivative is taken. For example, the Lie derivative of a type tensor is More generally, In fact in the above expression, one can replace the covariant derivative with any torsion free connection or locally, with the coordinate dependent derivative , showing that the Lie derivative is independent of the metric. The covariant derivative is convenient however because it commutes with raising and lowering indices. 
One of the main uses of the Lie derivative in general relativity is in the study of spacetime symmetries where tensors or other geometrical objects are preserved. In particular, Killing symmetry (symmetry of the metric tensor under Lie dragging) occurs very often in the study of spacetimes. Using the formula above, we can write down the condition that must be satisfied for a vector field $X$ to generate a Killing symmetry: $\mathcal{L}_X g_{ab} = \nabla_a X_b + \nabla_b X_a = 0$. The Riemann curvature tensor A crucial feature of general relativity is the concept of a curved manifold. A useful way of measuring the curvature of a manifold is with an object called the Riemann (curvature) tensor. This tensor measures curvature by use of an affine connection by considering the effect of parallel transporting a vector between two points along two curves. The discrepancy between the results of these two parallel transport routes is essentially quantified by the Riemann tensor. This property of the Riemann tensor can be used to describe how initially parallel geodesics diverge. This is expressed by the equation of geodesic deviation and means that the tidal forces experienced in a gravitational field are a result of the curvature of spacetime. Using the above procedure, the Riemann tensor is defined as a type $(1, 3)$ tensor and when fully written out explicitly contains the Christoffel symbols and their first partial derivatives. The Riemann tensor has 20 independent components. The vanishing of all these components over a region indicates that the spacetime is flat in that region. From the viewpoint of geodesic deviation, this means that initially parallel geodesics in that region of spacetime will stay parallel. The Riemann tensor has a number of properties sometimes referred to as the symmetries of the Riemann tensor. Of particular relevance to general relativity are the algebraic and differential Bianchi identities. The connection and curvature of any Riemannian manifold are closely related, the theory of holonomy groups, which are formed by taking linear maps defined by parallel transport around curves on the manifold, providing a description of this relationship. What the Riemann tensor allows us to do is tell, mathematically, whether a space is flat or, if curved, how much curvature takes place in any given region. In order to derive the Riemann curvature tensor we must first recall the definition of the covariant derivative of a tensor with one and two indices: $\nabla_\mu V^\rho = \partial_\mu V^\rho + \Gamma^\rho_{\mu\lambda} V^\lambda$ and $\nabla_\mu T^\rho{}_\nu = \partial_\mu T^\rho{}_\nu + \Gamma^\rho_{\mu\lambda} T^\lambda{}_\nu - \Gamma^\lambda_{\mu\nu} T^\rho{}_\lambda$. For the formation of the Riemann tensor, the covariant derivative of a rank one tensor is taken twice. The equation is set up as follows: $\nabla_\mu \nabla_\nu V^\rho = \partial_\mu(\nabla_\nu V^\rho) + \Gamma^\rho_{\mu\lambda}\nabla_\nu V^\lambda - \Gamma^\lambda_{\mu\nu}\nabla_\lambda V^\rho$. Similarly we have: $\nabla_\nu \nabla_\mu V^\rho = \partial_\nu(\nabla_\mu V^\rho) + \Gamma^\rho_{\nu\lambda}\nabla_\mu V^\lambda - \Gamma^\lambda_{\nu\mu}\nabla_\lambda V^\rho$. Subtracting the two equations, swapping dummy indices and using the symmetry of Christoffel symbols leaves: $[\nabla_\mu, \nabla_\nu] V^\rho = \left(\partial_\mu \Gamma^\rho_{\nu\sigma} - \partial_\nu \Gamma^\rho_{\mu\sigma} + \Gamma^\rho_{\mu\lambda}\Gamma^\lambda_{\nu\sigma} - \Gamma^\rho_{\nu\lambda}\Gamma^\lambda_{\mu\sigma}\right) V^\sigma$, or $[\nabla_\mu, \nabla_\nu] V^\rho = R^\rho{}_{\sigma\mu\nu} V^\sigma$. Finally the Riemann curvature tensor is written as $R^\rho{}_{\sigma\mu\nu} = \partial_\mu \Gamma^\rho_{\nu\sigma} - \partial_\nu \Gamma^\rho_{\mu\sigma} + \Gamma^\rho_{\mu\lambda}\Gamma^\lambda_{\nu\sigma} - \Gamma^\rho_{\nu\lambda}\Gamma^\lambda_{\mu\sigma}$. The first index can be lowered with the metric to give the fully covariant form $R_{\rho\sigma\mu\nu} = g_{\rho\lambda} R^\lambda{}_{\sigma\mu\nu}$, which will be useful when working with Einstein's field equations, and by further contraction, $R_{\sigma\nu} = R^\rho{}_{\sigma\rho\nu}$. This tensor is called the Ricci tensor; it can also be derived by setting the first and third indices of the Riemann tensor equal and summing over them. Then the curvature scalar can be found by going one step further, $R = g^{\sigma\nu} R_{\sigma\nu}$. So now we have three different objects, the Riemann curvature tensor $R^\rho{}_{\sigma\mu\nu}$ (or $R_{\rho\sigma\mu\nu}$), the Ricci tensor $R_{\sigma\nu}$, and the scalar curvature $R$, all of which are useful in calculating solutions to Einstein's field equations. The energy–momentum tensor The sources of any gravitational field (matter and energy) are represented in relativity by a type $(0, 2)$ symmetric tensor called the energy–momentum tensor.
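As a concrete check of the curvature machinery above, the following self-contained sketch (again using SymPy, an illustrative choice) builds the Riemann tensor from the Christoffel symbols of the 2-sphere metric, contracts it to the Ricci tensor and then to the Ricci scalar, using the same index conventions as the formula just given.

```python
# Sketch: from a metric to the Riemann tensor, Ricci tensor and Ricci scalar,
# for the 2-sphere of radius r (so the expected Ricci scalar is 2/r**2).
# Convention: R^a_{bcd} = d_c Gamma^a_{db} - d_d Gamma^a_{cb}
#                        + Gamma^a_{ce} Gamma^e_{db} - Gamma^a_{de} Gamma^e_{cb}
import sympy as sp

theta, phi, r = sp.symbols('theta phi r', positive=True)
x = [theta, phi]
g = sp.Matrix([[r**2, 0], [0, r**2 * sp.sin(theta)**2]])
ginv = g.inv()
n = len(x)

Gamma = [[[sum(sp.Rational(1, 2) * ginv[a, d] *
               (sp.diff(g[d, c], x[b]) + sp.diff(g[d, b], x[c]) - sp.diff(g[b, c], x[d]))
               for d in range(n))
           for c in range(n)] for b in range(n)] for a in range(n)]

def riemann(a, b, c, d):
    term = sp.diff(Gamma[a][d][b], x[c]) - sp.diff(Gamma[a][c][b], x[d])
    term += sum(Gamma[a][c][e] * Gamma[e][d][b] - Gamma[a][d][e] * Gamma[e][c][b]
                for e in range(n))
    return sp.simplify(term)

# Ricci tensor R_{bd} = R^a_{bad}, Ricci scalar R = g^{bd} R_{bd}
Ricci = sp.Matrix(n, n, lambda b, d: sum(riemann(a, b, a, d) for a in range(n)))
R_scalar = sp.simplify(sum(ginv[b, d] * Ricci[b, d] for b in range(n) for d in range(n)))
print(Ricci)          # Matrix([[1, 0], [0, sin(theta)**2]])
print(R_scalar)       # 2/r**2
```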
The energy–momentum tensor is closely related to the Ricci tensor. Being a second rank tensor in four dimensions, the energy–momentum tensor may be viewed as a 4 by 4 matrix. The various admissible matrix types, called Jordan forms, cannot all occur, as the energy conditions that the energy–momentum tensor is forced to satisfy rule out certain forms. Energy conservation In special and general relativity, there is a local law for the conservation of energy–momentum. It can be succinctly expressed by the tensor equation: $\nabla_a T^{ab} = 0$. This illustrates the rule of thumb that 'partial derivatives go to covariant derivatives'. The Einstein field equations The Einstein field equations (EFE) are the core of general relativity theory. The EFE describe how mass and energy (as represented in the stress–energy tensor) are related to the curvature of space-time (as represented in the Einstein tensor). In abstract index notation, the EFE reads as follows: $G_{ab} + \Lambda g_{ab} = \frac{8\pi G}{c^4} T_{ab}$, where $G_{ab}$ is the Einstein tensor, $\Lambda$ is the cosmological constant, $g_{ab}$ is the metric tensor, $c$ is the speed of light in vacuum and $G$ is the gravitational constant, which comes from Newton's law of universal gravitation. The solutions of the EFE are metric tensors. The EFE, being non-linear differential equations for the metric, are often difficult to solve. There are a number of strategies used to solve them. For example, one strategy is to start with an ansatz (or an educated guess) of the final metric, and refine it until it is specific enough to support a coordinate system but still general enough to yield a set of simultaneous differential equations with unknowns that can be solved for. Metric tensors resulting from cases where the resultant differential equations can be solved exactly for a physically reasonable distribution of energy–momentum are called exact solutions. Examples of important exact solutions include the Schwarzschild solution and the Friedmann–Lemaître–Robertson–Walker solution. Approximation schemes for the motion of bodies, such as the Einstein–Infeld–Hoffmann (EIH) approximation, are treated elsewhere (see, e.g., Geroch and Jang, 1975, 'Motion of a body in general relativity', JMP, Vol. 16, Issue 1). The geodesic equations Once the EFE are solved to obtain a metric, it remains to determine the motion of inertial objects in the spacetime. In general relativity, it is assumed that inertial motion occurs along timelike and null geodesics of spacetime as parameterized by proper time. Geodesics are curves that parallel transport their own tangent vector $u^a$; i.e., $u^b \nabla_b u^a = 0$. This condition, the geodesic equation, can be written in terms of a coordinate system $x^a$ with the tangent vector $u^a = \frac{dx^a}{d\tau}$: $\frac{d^2 x^a}{d\tau^2} + \Gamma^a_{bc}\,\frac{dx^b}{d\tau}\frac{dx^c}{d\tau} = 0$, where the derivative is taken with respect to proper time, with τ parametrising proper time along the curve and making manifest the presence of the Christoffel symbols. A principal feature of general relativity is to determine the paths of particles and radiation in gravitational fields. This is accomplished by solving the geodesic equations. The EFE relate the total matter (energy) distribution to the curvature of spacetime. Their nonlinearity leads to a problem in determining the precise motion of matter in the resultant spacetime. For example, in a system composed of one planet orbiting a star, the motion of the planet is determined by solving the field equations with the energy–momentum tensor the sum of that for the planet and the star. The gravitational field of the planet affects the total spacetime geometry and hence the motion of objects. It is therefore reasonable to suppose that the field equations can be used to derive the geodesic equations.
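As a small numerical illustration of the geodesic equation itself (separate from the question of how it is derived from the field equations), the sketch below integrates the geodesic equation on the unit 2-sphere with a hand-written Runge–Kutta step. The Christoffel symbols are the closed-form ones quoted in the comments, and the integrator and initial data are illustrative choices.

```python
# Sketch: numerically integrating the geodesic equation
#   d^2 x^a/ds^2 + Gamma^a_{bc} dx^b/ds dx^c/ds = 0
# for the unit 2-sphere, whose geodesics are great circles.  Non-zero symbols:
#   Gamma^theta_{phi phi} = -sin(theta) cos(theta)
#   Gamma^phi_{theta phi} = Gamma^phi_{phi theta} = cos(theta)/sin(theta)
import math

def rhs(state):
    theta, phi, dtheta, dphi = state
    ddtheta = math.sin(theta) * math.cos(theta) * dphi**2
    ddphi = -2.0 * (math.cos(theta) / math.sin(theta)) * dtheta * dphi
    return [dtheta, dphi, ddtheta, ddphi]

def rk4_step(state, h):
    k1 = rhs(state)
    k2 = rhs([s + 0.5 * h * k for s, k in zip(state, k1)])
    k3 = rhs([s + 0.5 * h * k for s, k in zip(state, k2)])
    k4 = rhs([s + h * k for s, k in zip(state, k3)])
    return [s + h / 6.0 * (a + 2 * b + 2 * c + d)
            for s, a, b, c, d in zip(state, k1, k2, k3, k4)]

# Start slightly off the equator, moving "eastward": the resulting curve is a
# great circle, so theta oscillates symmetrically about pi/2.
state = [math.pi / 2 + 0.1, 0.0, 0.0, 1.0]
h, steps = 0.01, 1000
for _ in range(steps):
    state = rk4_step(state, h)
print("theta, phi after integration:", state[0], state[1])
```

For a Schwarzschild or other four-dimensional metric the same integrator applies once the Christoffel symbols and the state vector are replaced accordingly.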
When the energy–momentum tensor for a system is that of dust, it may be shown by using the local conservation law for the energy–momentum tensor that the geodesic equations are satisfied exactly. Lagrangian formulation The issue of deriving the equations of motion or the field equations in any physical theory is considered by many researchers to be appealing. A fairly universal way of performing these derivations is by using the techniques of variational calculus, the main objects used in this being Lagrangians. Many consider this approach to be an elegant way of constructing a theory, others as merely a formal way of expressing a theory (usually, the Lagrangian construction is performed after the theory has been developed). Mathematical techniques for analysing spacetimes Having outlined the basic mathematical structures used in formulating the theory, some important mathematical techniques that are employed in investigating spacetimes will now be discussed. Frame fields A frame field is an orthonormal set of 4 vector fields (1 timelike, 3 spacelike) defined on a spacetime. Each frame field can be thought of as representing an observer in the spacetime moving along the integral curves of the timelike vector field. Every tensor quantity can be expressed in terms of a frame field, in particular, the metric tensor takes on a particularly convenient form. When allied with coframe fields, frame fields provide a powerful tool for analysing spacetimes and physically interpreting the mathematical results. Symmetry vector fields Some modern techniques in analysing spacetimes rely heavily on using spacetime symmetries, which are infinitesimally generated by vector fields (usually defined locally) on a spacetime that preserve some feature of the spacetime. The most common type of such symmetry vector fields include Killing vector fields (which preserve the metric structure) and their generalisations called generalised Killing vector fields. Symmetry vector fields find extensive application in the study of exact solutions in general relativity and the set of all such vector fields usually forms a finite-dimensional Lie algebra. The Cauchy problem The Cauchy problem (sometimes called the initial value problem) is the attempt at finding a solution to a differential equation given initial conditions. In the context of general relativity, it means the problem of finding solutions to Einstein's field equations - a system of hyperbolic partial differential equations - given some initial data on a hypersurface. Studying the Cauchy problem allows one to formulate the concept of causality in general relativity, as well as 'parametrising' solutions of the field equations. Ideally, one desires global solutions, but usually local solutions are the best that can be hoped for. Typically, solving this initial value problem requires selection of particular coordinate conditions. Spinor formalism Spinors find several important applications in relativity. Their use as a method of analysing spacetimes using tetrads, in particular, in the Newman–Penrose formalism is important. Another appealing feature of spinors in general relativity is the condensed way in which some tensor equations may be written using the spinor formalism. For example, in classifying the Weyl tensor, determining the various Petrov types becomes much easier when compared with the tensorial counterpart. 
Regge calculus Regge calculus is a formalism which chops up a Lorentzian manifold into discrete 'chunks' (four-dimensional simplicial blocks) and the block edge lengths are taken as the basic variables. A discrete version of the Einstein–Hilbert action is obtained by considering so-called deficit angles of these blocks, a zero deficit angle corresponding to no curvature. This novel idea finds application in approximation methods in numerical relativity and quantum gravity, the latter using a generalisation of Regge calculus. Singularity theorems In general relativity, it was noted that, under fairly generic conditions, gravitational collapse will inevitably result in a so-called singularity. A singularity is a point where the solutions to the equations become infinite, indicating that the theory has been probed at inappropriate ranges. Numerical relativity Numerical relativity is the sub-field of general relativity which seeks to solve Einstein's equations through the use of numerical methods. Finite difference, finite element and pseudo-spectral methods are used to approximate the solution to the partial differential equations which arise. Novel techniques developed by numerical relativity include the excision method and the puncture method for dealing with the singularities arising in black hole spacetimes. Common research topics include black holes and neutron stars. Perturbation methods The nonlinearity of the Einstein field equations often leads one to consider approximation methods in solving them. For example, an important approach is to linearise the field equations. Techniques from perturbation theory find ample application in such areas. See also Notes [1] The defining feature (central physical idea) of general relativity is that matter and energy cause the surrounding spacetime geometry to be curved. References
Orbital state vectors
In astrodynamics and celestial dynamics, the orbital state vectors (sometimes state vectors) of an orbit are Cartesian vectors of position and velocity that together with their time (epoch) uniquely determine the trajectory of the orbiting body in space. Orbital state vectors come in many forms including the traditional Position-Velocity vectors, Two-line element set (TLE), and Vector Covariance Matrix (VCM). Frame of reference State vectors are defined with respect to some frame of reference, usually but not always an inertial reference frame. One of the more popular reference frames for the state vectors of bodies moving near Earth is the Earth-centered inertial (ECI) system defined as follows: The origin is Earth's center of mass; The Z axis is coincident with Earth's rotational axis, positive northward; The X/Y plane coincides with Earth's equatorial plane, with the +X axis pointing toward the vernal equinox and the Y axis completing a right-handed set. The ECI reference frame is not truly inertial because of the slow, 26,000 year precession of Earth's axis, so the reference frames defined by Earth's orientation at a standard astronomical epoch such as B1950 or J2000 are also commonly used. Many other reference frames can be used to meet various application requirements, including those centered on the Sun or on other planets or moons, the one defined by the barycenter and total angular momentum of the solar system (in particular the ICRF), or even a spacecraft's own orbital plane and angular momentum. Position and velocity vectors The position vector describes the position of the body in the chosen frame of reference, while the velocity vector describes its velocity in the same frame at the same time. Together, these two vectors and the time at which they are valid uniquely describe the body's trajectory as detailed in Orbit determination. The principal reasoning is that Newton's law of gravitation yields an acceleration $\ddot{\mathbf{r}} = -\frac{\mu}{r^3}\mathbf{r}$; if the product $\mu = GM$ of the gravitational constant and the attractive mass at the center of the orbit is known, position and velocity are the initial values for that second-order differential equation for $\mathbf{r}(t)$, which has a unique solution. The body does not actually have to be in orbit for its state vectors to determine its trajectory; it only has to move ballistically, i.e., solely under the effects of its own inertia and gravity. For example, it could be a spacecraft or missile in a suborbital trajectory. If other forces such as drag or thrust are significant, they must be added vectorially to those of gravity when performing the integration to determine future position and velocity. For any object moving through space, the velocity vector is tangent to the trajectory. If $\hat{\mathbf{t}}$ is the unit vector tangent to the trajectory, then $\mathbf{v} = v\,\hat{\mathbf{t}}$. Derivation The velocity vector can be derived from the position vector by differentiation with respect to time: $\mathbf{v} = \frac{d\mathbf{r}}{dt}$. An object's state vector can be used to compute its classical or Keplerian orbital elements and vice versa. Each representation has its advantages. The elements are more descriptive of the size, shape and orientation of an orbit, and may be used to quickly and easily estimate the object's state at any arbitrary time provided its motion is accurately modeled by the two-body problem with only small perturbations.
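As a minimal sketch of that element computation, the following converts a hypothetical circular low-Earth-orbit state vector into a few Keplerian quantities under the two-body assumption. The numerical values (Earth's gravitational parameter, the 7000 km orbital radius) are illustrative, and the code is not a full orbit-determination routine.

```python
# Sketch: converting a position/velocity state vector into a few Keplerian
# quantities (specific angular momentum, semi-major axis, eccentricity,
# inclination) under the two-body assumption.
import numpy as np

mu = 3.986004418e14          # Earth's gravitational parameter GM, m^3/s^2

r = np.array([7000e3, 0.0, 0.0])                 # position, m (ECI)
v = np.array([0.0, np.sqrt(mu / 7000e3), 0.0])   # circular-orbit speed, m/s

h = np.cross(r, v)                               # specific angular momentum vector
energy = v.dot(v) / 2 - mu / np.linalg.norm(r)   # specific orbital energy
a = -mu / (2 * energy)                           # semi-major axis
e_vec = np.cross(v, h) / mu - r / np.linalg.norm(r)     # eccentricity vector
inc = np.degrees(np.arccos(h[2] / np.linalg.norm(h)))   # inclination, deg

print("a =", a / 1e3, "km")           # ~7000 km
print("e =", np.linalg.norm(e_vec))   # ~0 (circular)
print("i =", inc, "deg")              # 0 deg (equatorial)
```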
The state vector, on the other hand, is more directly useful in a numerical integration that accounts for significant, arbitrary, time-varying forces such as drag, thrust and gravitational perturbations from third bodies as well as the gravity of the primary body. The state vectors ($\mathbf{r}$ and $\mathbf{v}$) can be easily used to compute the specific angular momentum vector as $\mathbf{h} = \mathbf{r} \times \mathbf{v}$. Because even satellites in low Earth orbit experience significant perturbations from Earth's non-spherical figure, solar radiation pressure, lunar tide, and atmospheric drag, the Keplerian elements computed from the state vector at any moment are only valid for a short period of time and need to be recomputed often to determine a valid object state. Such element sets are known as osculating elements because they coincide with the actual orbit only at that moment. See also ECEF Earth-centered inertial Orbital plane Orbit determination State vector (navigation) Radial, transverse, normal References Orbits Vectors (mathematics and physics)
Uncleftish Beholding
"Uncleftish Beholding" (1989) is a short text by Poul Anderson, included in his anthology "All One Universe". It is designed to illustrate what English might look like without its large number of words derived from languages such as French, Greek, and Latin, especially with regard to the proportion of scientific words with origins in those languages. Written as a demonstration of linguistic purism in English, the work explains atomic theory using Germanic words almost exclusively and coining new words when necessary; many of these new words have cognates in modern German, an important scientific language in its own right. The title phrase uncleftish beholding calques "atomic theory." To illustrate, the text begins: It goes on to define firststuffs (chemical elements), such as waterstuff (hydrogen), sourstuff (oxygen), and ymirstuff (uranium), as well as bulkbits (molecules), bindings (compounds), and several other terms important to uncleftish worldken (atomic science). and are the modern German words for hydrogen and oxygen, and in Dutch the modern equivalents are and . Sunstuff refers to helium, which derives from , the Ancient Greek word for 'sun'. Ymirstuff references Ymir, a giant in Norse mythology similar to Uranus in Greek mythology. Glossary The vocabulary used in "Uncleftish Beholding" does not completely derive from Anglo-Saxon. Around, from Old French (Modern French ), completely displaced Old English (modern English (now obsolete), cognate to German and Latin ) and left no "native" English word for this concept. The text also contains the French-derived words rest, ordinary and sort. The text gained increased exposure and popularity after being circulated around the Internet, and has served as inspiration for some inventors of Germanic English conlangs. Douglas Hofstadter, in discussing the piece in his book , jocularly refers to the use of only Germanic roots for scientific pieces as "Ander-Saxon." See also Anglish Thing Explainer References External links English language Atomic physics 1989 documents Works by Poul Anderson Linguistic purism Books written in fictional dialects
Thermal physics
Thermal physics is the combined study of thermodynamics, statistical mechanics, and kinetic theory of gases. This umbrella-subject is typically designed for physics students and functions to provide a general introduction to each of three core heat-related subjects. Other authors, however, define thermal physics loosely as a summation of only thermodynamics and statistical mechanics. Thermal physics can be seen as the study of systems with a very large number of atoms; it unites thermodynamics with statistical mechanics. Overview Thermal physics, generally speaking, is the study of the statistical nature of physical systems from an energetic perspective. Starting with the basics of heat and temperature, thermal physics analyzes the first law of thermodynamics and second law of thermodynamics from the statistical perspective, in terms of the number of microstates corresponding to a given macrostate. In addition, the concept of entropy is studied via quantum theory. A central topic in thermal physics is the canonical probability distribution. The quantum nature of photons and phonons is studied, which shows that the oscillations of electromagnetic fields and of crystal lattices have much in common. Waves form a basis for both, provided one incorporates quantum theory. Other topics studied in thermal physics include: chemical potential, the quantum nature of an ideal gas, i.e. in terms of fermions and bosons, Bose–Einstein condensation, Gibbs free energy, Helmholtz free energy, chemical equilibrium, phase equilibrium, the equipartition theorem, entropy at absolute zero, and transport processes such as mean free path, viscosity, and conduction. See also Heat transfer physics Information theory Philosophy of thermal and statistical physics Thermodynamic instruments References Further reading External links Thermal Physics Links on the Web Physics education Thermodynamics
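To make the canonical probability distribution mentioned above concrete, here is a minimal sketch for a toy system with three energy levels; the level spacings and the temperature are made-up illustrative values, not data for any particular material.

```python
# Sketch: the canonical (Boltzmann) distribution for a toy three-level system.
import math

k_B = 1.380649e-23             # Boltzmann constant, J/K
T = 300.0                      # temperature, K
levels_eV = [0.0, 0.02, 0.05]  # hypothetical energy levels, eV
eV = 1.602176634e-19           # joules per eV

energies = [E * eV for E in levels_eV]
beta = 1.0 / (k_B * T)

Z = sum(math.exp(-beta * E) for E in energies)        # partition function
probs = [math.exp(-beta * E) / Z for E in energies]   # canonical probabilities
U = sum(p * E for p, E in zip(probs, energies))       # mean energy

for E, p in zip(levels_eV, probs):
    print(f"E = {E:5.3f} eV  ->  p = {p:.3f}")
print("mean energy =", U / eV, "eV")
```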
Dynamic pressure
In fluid dynamics, dynamic pressure (denoted by $q$ or $Q$ and sometimes called velocity pressure) is the quantity defined by: $q = \tfrac{1}{2}\rho\, u^2$, where (in SI units): $q$ is the dynamic pressure in pascals (i.e., kg/(m·s²)), $\rho$ (Greek letter rho) is the fluid mass density (e.g. in kg/m³), and $u$ is the flow speed in m/s. It can be thought of as the fluid's kinetic energy per unit volume. For incompressible flow, the dynamic pressure of a fluid is the difference between its total pressure and static pressure. From Bernoulli's law, dynamic pressure is given by $q = p_0 - p_s$, where $p_0$ and $p_s$ are the total and static pressures, respectively. Physical meaning Dynamic pressure is the kinetic energy per unit volume of a fluid. Dynamic pressure is one of the terms of Bernoulli's equation, which can be derived from the conservation of energy for a fluid in motion. At a stagnation point the dynamic pressure is equal to the difference between the stagnation pressure and the static pressure, so the dynamic pressure in a flow field can be measured at a stagnation point. Another important aspect of dynamic pressure is that, as dimensional analysis shows, the aerodynamic stress (i.e. stress within a structure subject to aerodynamic forces) experienced by an aircraft travelling at speed $v$ is proportional to the air density and the square of $v$, i.e. proportional to $q$. Therefore, by looking at the variation of $q$ during flight, it is possible to determine how the stress will vary and in particular when it will reach its maximum value. The point of maximum aerodynamic load is often referred to as max q and it is a critical parameter in many applications, such as launch vehicles. Dynamic pressure can also appear as a term in the incompressible Navier–Stokes equation, which may be written: $\rho\left(\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\mathbf{u}\right) = -\nabla p + \mu\nabla^2\mathbf{u} + \rho\,\mathbf{g}$. By the vector calculus identity $(\mathbf{u}\cdot\nabla)\mathbf{u} = \nabla\!\left(\tfrac{1}{2}\,\mathbf{u}\cdot\mathbf{u}\right) - \mathbf{u}\times(\nabla\times\mathbf{u})$, for incompressible, irrotational flow the second term on the left in the Navier–Stokes equation is just the gradient of the dynamic pressure. In hydraulics, the term $\frac{u^2}{2g}$ is known as the hydraulic velocity head ($h_v$), so that the dynamic pressure is equal to $\rho g h_v$. Uses The dynamic pressure, along with the static pressure and the pressure due to elevation, is used in Bernoulli's principle as an energy balance on a closed system. The three terms are used to define the state of a closed system of an incompressible, constant-density fluid. When the dynamic pressure is divided by the product of fluid density and acceleration due to gravity, g, the result is called velocity head, which is used in head equations like the one used for pressure head and hydraulic head. In a venturi flow meter, the differential pressure head can be used to calculate the differential velocity head, which are equivalent. An alternative to velocity head is dynamic head. Compressible flow Many authors define dynamic pressure only for incompressible flows. (For compressible flows, these authors use the concept of impact pressure.) However, the definition of dynamic pressure can be extended to include compressible flows. For compressible flow the isentropic relations can be used (also valid for incompressible flow): $q = \tfrac{1}{2}\,\gamma\, p_s\, M^2$, where $M$ is the Mach number (non-dimensional), $\gamma$ is the ratio of specific heats (non-dimensional; 1.4 for air at sea-level conditions), and $p_s$ is the static pressure. See also Pressure Pressure head Hydraulic head Total dynamic head Drag, lift and pitching moment coefficients Derivations of Bernoulli equation References L. J. Clancy (1975), Aerodynamics, Pitman Publishing Limited, London. Houghton, E.L.
and Carpenter, P.W. (1993), Aerodynamics for Engineering Students, Butterworth and Heinemann, Oxford UK. Notes External links Definition of dynamic pressure on Eric Weisstein's World of Science Fluid dynamics
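As a small illustration of the max-q idea discussed above, the following sketch evaluates q = ½ρv² along a deliberately simplified ascent profile. Both the exponential atmosphere and the constant-acceleration, straight-up trajectory are illustrative assumptions, not a real vehicle or atmosphere model.

```python
# Sketch: locating "max q" for a toy ascent profile.
import math

rho0, H = 1.225, 8500.0        # sea-level density (kg/m^3), scale height (m)

def density(h):                # simple exponential atmosphere
    return rho0 * math.exp(-h / H)

def dynamic_pressure(rho, v):  # q = 1/2 * rho * v^2
    return 0.5 * rho * v**2

# Hypothetical ascent: constant 30 m/s^2 net acceleration, straight up.
best_t, best_q = 0.0, 0.0
for i in range(0, 1201):           # 0 to 120 s in 0.1 s steps
    t = i * 0.1
    v = 30.0 * t                   # speed, m/s
    h = 0.5 * 30.0 * t**2          # altitude, m
    q = dynamic_pressure(density(h), v)
    if q > best_q:
        best_t, best_q = t, q

print(f"max q ~ {best_q / 1000:.1f} kPa at t ~ {best_t:.1f} s")
```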
Dynamic stochastic general equilibrium
Dynamic stochastic general equilibrium modeling (abbreviated as DSGE, or DGE, or sometimes SDGE) is a macroeconomic method which is often employed by monetary and fiscal authorities for policy analysis, explaining historical time-series data, as well as future forecasting purposes. DSGE econometric modelling applies general equilibrium theory and microeconomic principles in a tractable manner to postulate economic phenomena, such as economic growth and business cycles, as well as policy effects and market shocks. Terminology As a practical matter, people often use the term "DSGE models" to refer to a particular class of classically quantitative econometric models of business cycles or economic growth called real business cycle (RBC) models. DSGE models were initially proposed by Kydland & Prescott, and Long & Plosser; Charles Plosser described RBC models as a precursor for DSGE modeling. As mentioned in the Introduction, DSGE models are the predominant framework of macroeconomic analysis. They are multifaceted, and their combination of micro-foundations and optimising economic behaviour of rational agents allows for a comprehensive analysis of macro effects. As indicated by their name, their defining characteristics are as follows: Dynamic: The effect of current choices on future uncertainty makes the models dynamic and assigns a certain relevance to the expectations of agents in forming macroeconomic outcomes. Stochastic: The models take into consideration the transmission of random shocks into the economy and the consequent economic fluctuations. General: referring to the entire economy as a whole (within the model) in that price levels and output levels are determined jointly. This is opposed to a partial equilibrium, where price levels are taken as given and only output levels are determined within the model economy. Equilibrium: In accordance with Léon Walras's General Competitive Equilibrium Theory, the model captures the interaction between policy actions and behaviour of agents. RBC modeling The formulation and analysis of monetary policy has undergone significant evolution in recent decades and the development of DSGE models has played a key role in this process. As was aforementioned DSGE models are seen to be an update of RBC (real business cycle) models. Early real business-cycle models postulated an economy populated by a representative consumer who operates in perfectly competitive markets. The only sources of uncertainty in these models are "shocks" in technology. RBC theory builds on the neoclassical growth model, under the assumption of flexible prices, to study how real shocks to the economy might cause business cycle fluctuations. The "representative consumer" assumption can either be taken literally or reflect a Gorman aggregation of heterogenous consumers who are facing idiosyncratic income shocks and complete markets in all assets. These models took the position that fluctuations in aggregate economic activity are actually an "efficient response" of the economy to exogenous shocks. The models were criticized on a number of issues: Microeconomic data cast doubt on some of the key assumptions of the model, such as: perfect credit- and insurance-markets; perfectly friction-less labour markets; etc. They had difficulty in accounting for some key properties of the aggregate data, such as: the observed volatility in hours worked; the equity premium; etc. 
Open-economy versions of these models failed to account for observations such as: the cyclical movement of consumption and output across countries; the extremely high correlation between nominal and real exchange rates; etc. They are mute on many policy related issues of importance to macroeconomists and policy makers, such as the consequences of different monetary policy rules for aggregate economic activity. The Lucas critique In a 1976 paper, Robert Lucas argued that it is naive to try to predict the effects of a change in economic policy entirely on the basis of relationships observed in historical data, especially highly aggregated historical data. Lucas claimed that the decision rules of Keynesian models, such as the fiscal multiplier, cannot be considered as structural, in the sense that they cannot be invariant with respect to changes in government policy variables, stating: Given that the structure of an econometric model consists of optimal decision-rules of economic agents, and that optimal decision-rules vary systematically with changes in the structure of series relevant to the decision maker, it follows that any change in policy will systematically alter the structure of econometric models. This meant that, because the parameters of the models were not structural, i.e. not indifferent to policy, they would necessarily change whenever policy was changed. The so-called Lucas critique followed similar criticism undertaken earlier by Ragnar Frisch, in his critique of Jan Tinbergen's 1939 book Statistical Testing of Business-Cycle Theories, where Frisch accused Tinbergen of not having discovered autonomous relations, but "coflux" relations, and by Jacob Marschak, in his 1953 contribution to the Cowles Commission Monograph, where he submitted that In predicting the effect of its decisions (policies), the government...has to take account of exogenous variables, whether controlled by it (the decisions themselves, if they are exogenous variables) or uncontrolled (e.g. weather), and of structural changes, whether controlled by it (the decisions themselves, if they change the structure) or uncontrolled (e.g. sudden changes in people's attitude). The Lucas critique is representative of the paradigm shift that occurred in macroeconomic theory in the 1970s towards attempts at establishing micro-foundations. Response to the Lucas critique In the 1980s, macro models emerged that attempted to directly respond to Lucas through the use of rational expectations econometrics. In 1982, Finn E. Kydland and Edward C. Prescott created a real business cycle (RBC) model to "predict the consequence of a particular policy rule upon the operating characteristics of the economy." The stated, exogenous, stochastic components in their model are "shocks to technology" and "imperfect indicators of productivity." The shocks involve random fluctuations in the productivity level, which shift up or down the trend of economic growth. Examples of such shocks include innovations, the weather, sudden and significant price increases in imported energy sources, stricter environmental regulations, etc. The shocks directly change the effectiveness of capital and labour, which, in turn, affects the decisions of workers and firms, who then alter what they buy and produce. This eventually affects output. 
The authors stated that, since fluctuations in employment are central to the business cycle, the "stand-in consumer [of the model] values not only consumption but also leisure," meaning that unemployment movements essentially reflect the changes in the number of people who want to work. "Household-production theory," as well as "cross-sectional evidence" ostensibly support a "non-time-separable utility function that admits greater inter-temporal substitution of leisure, something which is needed," according to the authors, "to explain aggregate movements in employment in an equilibrium model." For the K&P model, monetary policy is irrelevant for economic fluctuations. The associated policy implications were clear: There is no need for any form of government intervention since, ostensibly, government policies aimed at stabilizing the business cycle are welfare-reducing. Since microfoundations are based on the preferences of decision-makers in the model, DSGE models feature a natural benchmark for evaluating the welfare effects of policy changes. Furthermore, the integration of such microfoundations in DSGE modeling enables the model to accurately adjust to shifts in fundamental behaviour of agents and is thus regarded as an "impressive response" to the Lucas critique. The Kydland/Prescott 1982 paper is often considered the starting point of RBC theory and of DSGE modeling in general and its authors were awarded the 2004 Bank of Sweden Prize in Economic Sciences in Memory of Alfred Nobel. DSGE modeling Structure By applying dynamic principles, dynamic stochastic general equilibrium models contrast with the static models studied in applied general equilibrium models and some computable general equilibrium models. DSGE models employed by governments and central banks for policy analysis are relatively simple. Their structure is built around three interrelated sections including that of demand, supply, and the monetary policy equation. These three sections are formally defined by micro-foundations and make explicit assumptions about the behavior of the main economic agents in the economy, i.e. households, firms, and the government. The interaction of the agents in markets cover every period of the business cycle which ultimately qualifies the "general equilibrium" aspect of this model. The preferences (objectives) of the agents in the economy must be specified. For example, households might be assumed to maximize a utility function over consumption and labor effort. Firms might be assumed to maximize profits and to have a production function, specifying the amount of goods produced, depending on the amount of labor, capital and other inputs they employ. Technological constraints on firms' decisions might include costs of adjusting their capital stocks, their employment relations, or the prices of their products. 
Below is an example of the set of assumptions a DSGE is built upon: Perfect competition in all markets All prices adjust instantaneously Rational expectations No asymmetric information The competitive equilibrium is Pareto optimal Firms are identical and price takers Infinitely lived identical price-taking households to which the following frictions are added: Distortionary taxes (Labour taxes) – to account for non-lump-sum taxation Habit persistence (the period utility function depends on a quasi-difference of consumption) Adjustment costs on investments – to make investments less volatile Labour adjustment costs – to account for costs firms face when changing the level of employment The models' general equilibrium nature is presumed to capture the interaction between policy actions and agents' behavior, while the models specify assumptions about the stochastic shocks that give rise to economic fluctuations. Hence, the models are presumed to "trace more clearly the shocks' transmission to the economy." This is exemplified in the below explanation of a simplified DSGE model. Demand defines real activity as a function of the nominal interest rate minus expected inflation, and of expectations regarding future real activity. The demand block confirms the general economic principle that temporarily high interest rates encourage people and firms to save instead of consuming/investing; as well as suggesting the likelihood of increased current spending under the expectation of promising future prospects, regardless of rate level. Supply is dependent on demand through the input of the level of activity, which impacts the determination of inflation. For example, in times of high activity, firms are required to increase the wage rate in order to encourage employees to work greater hours, which leads to a general increase in marginal costs and thus a subsequent increase in expected future and current inflation. The demand and supply sections simultaneously contribute to a determination of monetary policy. The formal equation specified in this section describes the conditions under which the central bank determines the nominal interest rate. As such, general central bank behaviour is reflected through this, i.e. raising the bank rate (short-term interest rates) in periods of rapid or unsustainable growth and vice versa. There is a final flow from monetary policy towards demand representing the impact of adjustments in nominal interest rates on real activity and subsequently inflation. As such, a complete simplified model of the relationship between three key features is defined. This dynamic interaction between the endogenous variables of output, inflation, and the nominal interest rate, is fundamental in DSGE modelling. Schools Two schools of analysis form the bulk of DSGE modeling: the classic RBC models, and the New-Keynesian DSGE models that build on a structure similar to RBC models, but instead assume that prices are set by monopolistically competitive firms, and cannot be instantaneously and costlessly adjusted. Rotemberg & Woodford introduced this framework in 1997. Introductory and advanced textbook presentations of DSGE modeling are given by Galí (2008) and Woodford (2003). Monetary policy implications are surveyed by Clarida, Galí, and Gertler (1999). The European Central Bank (ECB) has developed a DSGE model, called the Smets–Wouters model, which it uses to analyze the economy of the Eurozone as a whole.
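To make the three-block structure described above more tangible, here is a deliberately crude sketch of a demand (IS) relation, a supply (Phillips-curve) relation and a Taylor-type policy rule. It uses backward-looking expectations and made-up coefficients, so it is a caricature of the forward-looking, estimated DSGE models discussed in this article rather than an implementation of one.

```python
# Sketch: a stylised "three-equation" loop (demand, supply, policy rule).
# Backward-looking expectations and illustrative coefficients only.
import random

random.seed(0)

sigma, kappa = 1.0, 0.3        # interest sensitivity of demand, Phillips-curve slope
phi_pi, phi_y = 1.5, 0.5       # policy responses to inflation and the output gap
r_star, pi_star = 2.0, 2.0     # natural real rate and inflation target (percent)

y, pi = 0.0, pi_star           # output gap and inflation
for t in range(12):
    shock_demand = random.gauss(0, 0.5)
    shock_supply = random.gauss(0, 0.3)

    i = r_star + pi_star + phi_pi * (pi - pi_star) + phi_y * y     # policy rule
    y = -sigma * (i - pi - r_star) + shock_demand                  # demand (IS)
    pi = pi + kappa * y + shock_supply                             # supply (Phillips)

    print(f"t={t:2d}  i={i:5.2f}  output gap={y:6.2f}  inflation={pi:5.2f}")
```

Estimated models used at central banks, such as the ECB's Smets–Wouters model mentioned above, build a much richer structure of shocks, frictions and forward-looking expectations on this three-equation skeleton.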
The Bank's analysts state that developments in the construction, simulation and estimation of DSGE models have made it possible to combine a rigorous microeconomic derivation of the behavioural equations of macro models with an empirically plausible calibration or estimation which fits the main features of the macroeconomic time series. The main difference between "empirical" DSGE models and the "more traditional macroeconometric models, such as the Area-Wide Model", according to the ECB, is that "both the parameters and the shocks to the structural equations are related to deeper structural parameters describing household preferences and technological and institutional constraints." The Smets-Wouters model uses seven Eurozone area macroeconomic series: real GDP; consumption; investment; employment; real wages; inflation; and the nominal, short-term interest rate. Using Bayesian estimation and validation techniques, the bank's modeling is ostensibly able to compete with "more standard, unrestricted time series models, such as vector autoregression, in out-of-sample forecasting." Criticism Bank of Lithuania Deputy Chairman Raimondas Kuodis disputes the very title of DSGE analysis: The models, he claims, are neither dynamic (since they contain no evolution of stocks of financial assets and liabilities), stochastic (because we live in the world of Knightian uncertainty and, since future outcomes or possible choices are unknown, then risk analysis or expected utility theory are not very helpful), general (they lack a full accounting framework, a stock-flow consistent framework, which would significantly reduce the number of degrees of freedom in the economy), or even about equilibrium (since markets clear only in a few quarters). Willem Buiter, Citigroup Chief Economist, has argued that DSGE models rely excessively on an assumption of complete markets, and are unable to describe the highly nonlinear dynamics of economic fluctuations, making training in 'state-of-the-art' macroeconomic modeling "a privately and socially costly waste of time and resources". Narayana Kocherlakota, President of the Federal Reserve Bank of Minneapolis, wrote that many modern macro models...do not capture an intermediate messy reality in which market participants can trade multiple assets in a wide array of somewhat segmented markets. As a consequence, the models do not reveal much about the benefits of the massive amount of daily or quarterly re-allocations of wealth within financial markets. The models also say nothing about the relevant costs and benefits of resulting fluctuations in financial structure (across bank loans, corporate debt, and equity). N. Gregory Mankiw, regarded as one of the founders of New Keynesian DSGE modeling, has argued that New classical and New Keynesian research has had little impact on practical macroeconomists who are charged with [...] policy. [...] From the standpoint of macroeconomic engineering, the work of the past several decades looks like an unfortunate wrong turn. In the 2010 United States Congress hearings on macroeconomic modeling methods, held on 20 July 2010, and aiming to investigate why macroeconomists failed to foresee the financial crisis of 2007-2010, MIT professor of Economics Robert Solow criticized the DSGE models currently in use: I do not think that the currently popular DSGE models pass the smell test. 
They take it for granted that the whole economy can be thought about as if it were a single, consistent person or dynasty carrying out a rationally designed, long-term plan, occasionally disturbed by unexpected shocks, but adapting to them in a rational, consistent way... The protagonists of this idea make a claim to respectability by asserting that it is founded on what we know about microeconomic behavior, but I think that this claim is generally phony. The advocates no doubt believe what they say, but they seem to have stopped sniffing or to have lost their sense of smell altogether. Commenting on the Congressional session, The Economist asked whether agent-based models might better predict financial crises than DSGE models. Former Chief Economist and Senior Vice President of the World Bank Paul Romer has criticized the "mathiness" of DSGE models and dismisses the inclusion of "imaginary shocks" in DSGE models that ignore "actions that people take." Romer submits a simplified presentation of real business cycle (RBC) modelling, which, as he states, essentially involves two mathematical expressions: the well-known formula of the quantity theory of money, and an identity that defines the growth accounting residual as the difference between growth of output and growth of an index of inputs in production. Romer assigned to the residual the label "phlogiston" while he criticized the lack of consideration given to monetary policy in DSGE analysis. Joseph Stiglitz finds "staggering" shortcomings in the "fantasy world" the models create and argues that "the failure [of macroeconomics] were the wrong microfoundations, which failed to incorporate key aspects of economic behavior". He suggested the models have failed to incorporate "insights from information economics and behavioral economics" and are "ill-suited for predicting or responding to a financial crisis." Oxford University's John Muellbauer put it this way: "It is as if the information economics revolution, for which George Akerlof, Michael Spence and Joe Stiglitz shared the Nobel Prize in 2001, had not occurred. The combination of assumptions, when coupled with the trivialisation of risk and uncertainty...render money, credit and asset prices largely irrelevant... [The models] typically ignore inconvenient truths." Nobel laureate Paul Krugman asked, "Were there any interesting predictions from DSGE models that were validated by events? If there were, I'm not aware of it." Austrian economists reject DSGE modelling. Critique of DSGE-style macromodeling is at the core of Austrian theory, where, as opposed to RBC and New Keynesian models in which capital is homogeneous, capital is heterogeneous and multi-specific and, therefore, production functions for the multi-specific capital are simply discovered over time. Lawrence H. White concludes that present-day mainstream macroeconomics is dominated by Walrasian DSGE models, with restrictions added to generate Keynesian properties: Mises consistently attributed the boom-initiating shock to unexpectedly expansive policy by a central bank trying to lower the market interest rate. Hayek added two alternate scenarios. [One is where] fresh producer-optimism about investment raises the demand for loanable funds, and thus raises the natural rate of interest, but the central bank deliberately prevents the market rate from rising by expanding credit.
[Another is where,] in response to the same kind of increase in the demand for loanable funds, but without central bank impetus, the commercial banking system by itself expands credit more than is sustainable. Hayek had criticized Wicksell for the confusion of thinking that establishing a rate of interest consistent with intertemporal equilibrium also implies a constant price level. Hayek posited that intertemporal equilibrium requires not a natural rate but the "neutrality of money," in the sense that money does not "distort" (influence) relative prices. Post-Keynesians reject the notions of macro-modelling typified by DSGE. They consider such attempts as "a chimera of authority," pointing to the 2003 statement by Lucas, the pioneer of modern DSGE modelling: Macroeconomics in [its] original sense [of preventing the recurrence of economic disasters] has succeeded. Its central problem of depression prevention has been solved, for all practical purposes, and has in fact been solved for many decades. A basic Post Keynesian presumption, which Modern Monetary Theory proponents share, and which is central to Keynesian analysis, is that the future is unknowable and so, at best, we can make guesses about it that would be based broadly on habit, custom, gut-feeling, etc. In DSGE modeling, the central equation for consumption supposedly provides a way in which the consumer links decisions to consume now with decisions to consume later and thus achieves maximum utility in each period. Our marginal utility from consumption today must equal our marginal utility from consumption in the future, with a weighting parameter that refers to the valuation that we place on the future relative to today. And since the consumer is supposed to always satisfy this equation for consumption, this means that all of us do it individually, if this approach is to reflect the DSGE microfoundational notions of consumption. However, post-Keynesians state that: no consumer is the same as another in terms of random shocks and uncertainty of income (since some consumers will spend every cent of any extra income they receive while others, typically higher-income earners, spend comparatively little of any extra income); no consumer is the same as another in terms of access to credit; not every consumer really considers what they will be doing at the end of their life in any coherent way, so there is no concept of a "permanent lifetime income", which is central to DSGE models; and, therefore, trying to "aggregate" all these differences into one, single "representative agent" is impossible. These assumptions are similar to the assumptions made in the so-called Ricardian equivalence, whereby consumers are assumed to be forward looking and to internalize the government's budget constraints when making consumption decisions, and therefore taking decisions on the basis of practically perfect evaluations of available information. Extrinsic unpredictability, post-Keynesians state, has "dramatic consequences" for the standard, macroeconomic, forecasting, DSGE models used by governments and other institutions around the world. The mathematical basis of every DSGE model fails when distributions shift, since general-equilibrium theories rely heavily on ceteris paribus assumptions.
They point to the Bank of England's explicit admission that none of the models they used and evaluated coped well during the 2007–2008 financial crisis, which, for the Bank, "underscores the role that large structural breaks can have in contributing to forecast failure, even if they turn out to be temporary." Christian Mueller points out that the fact that DSGE models evolve (see next section) constitutes a contradiction of the modelling approach in its own right and, ultimately, makes DSGE models subject to the Lucas critique. This contradiction arises because the economic agents in the DSGE models fail to account for the fact that the very models on the basis of which they form expectations evolve due to progress in economic research. While the evolution of DSGE models as such is predictable, the direction of this evolution is not. In effect, Lucas' notion of the systematic instability of economic models carries over to DSGE models, proving that they are not solving one of the key problems they are thought to be overcoming. Evolution of viewpoints Federal Reserve Bank of Minneapolis president Narayana Kocherlakota acknowledges that DSGE models were "not very useful" for analyzing the financial crisis of 2007–2010 but argues that the applicability of these models is "improving," and claims that there is growing consensus among macroeconomists that DSGE models need to incorporate both "price stickiness and financial market frictions." Despite his criticism of DSGE modelling, he states that modern models are useful: In the early 2000s, ...[the] problem of fit disappeared for modern macro models with sticky prices. Using novel Bayesian estimation methods, Frank Smets and Raf Wouters demonstrated that a sufficiently rich New Keynesian model could fit European data well. Their finding, along with similar work by other economists, has led to widespread adoption of New Keynesian models for policy analysis and forecasting by central banks around the world. Still, Kocherlakota observes that in "terms of fiscal policy (especially short-term fiscal policy), modern macro-modeling seems to have had little impact. ... [M]ost, if not all, of the motivation for the fiscal stimulus was based largely on the long-discarded models of the 1960s and 1970s." In 2010, Rochelle M. Edge, of the Federal Reserve Board of Governors, contended that the work of Smets & Wouters has "led DSGE models to be taken more seriously by central bankers around the world" so that "DSGE models are now quite prominent tools for macroeconomic analysis at many policy institutions, with forecasting being one of the key areas where these models are used, in conjunction with other forecasting methods." University of Minnesota professor of economics V.V. Chari has pointed out that state-of-the-art DSGE models are more sophisticated than their critics suppose: The models have all kinds of heterogeneity in behavior and decisions... people's objectives differ, they differ by age, by information, by the history of their past experiences. Chari also argued that current DSGE models frequently incorporate frictional unemployment, financial market imperfections, and sticky prices and wages, and therefore imply that the macroeconomy behaves in a suboptimal way which monetary and fiscal policy may be able to improve. Columbia University's Michael Woodford concedes that policies considered by DSGE models might not be Pareto optimal and that they may not satisfy some other social welfare criterion either. 
Nonetheless, in replying to Mankiw, Woodford argues that the DSGE models commonly used by central banks today, which strongly influence policy makers like Ben Bernanke, do not provide an analysis so different from traditional Keynesian analysis: It is true that the modeling efforts of many policy institutions can reasonably be seen as an evolutionary development within the macroeconomic modeling program of the postwar Keynesians; thus if one expected, with the early New Classicals, that adoption of the new tools would require building anew from the ground up, one might conclude that the new tools have not been put to use. But in fact they have been put to use, only not with such radical consequences as had once been expected. See also Footnotes References Sources Further reading Software DYNARE, free software for handling economic models, including DSGE IRIS, free, open-source toolbox for macroeconomic modeling and forecasting External links Society for Economic Dynamics - Website of the Society for Economic Dynamics, dedicated to advances in DSGE modeling. DSGE-NET, an "international network for DSGE modeling, monetary and fiscal policy" General equilibrium theory New classical macroeconomics New Keynesian economics
0.775909
0.989769
0.76797
Yang–Mills existence and mass gap
The Yang–Mills existence and mass gap problem is an unsolved problem in mathematical physics and mathematics, and one of the seven Millennium Prize Problems defined by the Clay Mathematics Institute, which has offered a prize of US$1,000,000 for its solution. The problem is phrased as follows: Yang–Mills Existence and Mass Gap. Prove that for any compact simple gauge group G, a non-trivial quantum Yang–Mills theory exists on ℝ⁴ and has a mass gap Δ > 0. Existence includes establishing axiomatic properties at least as strong as those cited in , and . In this statement, a quantum Yang–Mills theory is a non-abelian quantum field theory similar to that underlying the Standard Model of particle physics; ℝ⁴ is Euclidean 4-space; the mass gap Δ is the mass of the least massive particle predicted by the theory. Therefore, the winner must prove that: Yang–Mills theory exists and satisfies the standard of rigor that characterizes contemporary mathematical physics, in particular constructive quantum field theory, and the masses of all particles of the force field predicted by the theory are strictly positive. For example, in the case of G=SU(3)—the strong nuclear interaction—the winner must prove that glueballs have a lower mass bound, and thus cannot be arbitrarily light. The general problem of determining the presence of a spectral gap in a system is known to be undecidable. Background The problem requires the construction of a QFT satisfying the Wightman axioms and showing the existence of a mass gap. Both of these topics are described in sections below. The Wightman axioms The Millennium problem requires the proposed Yang–Mills theory to satisfy the Wightman axioms or similarly stringent axioms. There are four axioms: W0 (assumptions of relativistic quantum mechanics) Quantum mechanics is described according to von Neumann; in particular, the pure states are given by the rays, i.e. the one-dimensional subspaces, of some separable complex Hilbert space. The Wightman axioms require that the Poincaré group acts unitarily on the Hilbert space. In other words, they have position-dependent operators called quantum fields which form covariant representations of the Poincaré group. The group of space-time translations is commutative, and so the operators can be simultaneously diagonalised. The generators of these groups give us four self-adjoint operators, P⁰, P¹, P², P³, which transform under the homogeneous group as a four-vector, called the energy-momentum four-vector. The second part of the zeroth axiom of Wightman is that the representation U(a, A) fulfills the spectral condition—that the simultaneous spectrum of energy-momentum is contained in the forward cone: p⁰ ≥ 0 and (p⁰)² − (p¹)² − (p²)² − (p³)² ≥ 0. The third part of the axiom is that there is a unique state, represented by a ray in the Hilbert space, which is invariant under the action of the Poincaré group. It is called a vacuum. W1 (assumptions on the domain and continuity of the field) For each test function f, there exists a set of operators A₁(f), ..., Aₙ(f) which, together with their adjoints, are defined on a dense subset of the Hilbert state space, containing the vacuum. The fields A are operator-valued tempered distributions. The Hilbert state space is spanned by the field polynomials acting on the vacuum (cyclicity condition). 
W2 (transformation law of the field) The fields are covariant under the action of the Poincaré group, and they transform according to some representation S of the Lorentz group, or SL(2,C) if the spin is not integer: U(a, Λ) A(x) U(a, Λ)⁻¹ = S(Λ⁻¹) A(Λx + a). W3 (local commutativity or microscopic causality) If the supports of two fields are space-like separated, then the fields either commute or anticommute. Cyclicity of a vacuum, and uniqueness of a vacuum are sometimes considered separately. Also, there is the property of asymptotic completeness—that the Hilbert state space is spanned by the asymptotic spaces H^in and H^out, appearing in the collision S matrix. The other important property of field theory is the mass gap, which is not required by the axioms—that the energy-momentum spectrum has a gap between zero and some positive number. Mass gap In quantum field theory, the mass gap is the difference in energy between the vacuum and the next lowest energy state. The energy of the vacuum is zero by definition, and assuming that all energy states can be thought of as particles in plane-waves, the mass gap is the mass of the lightest particle. For a given real field φ(x), we can say that the theory has a mass gap if the two-point function has the property ⟨φ(0, t) φ(0, 0)⟩ ∼ ∑_n A_n exp(−Δ_n t), with Δ_0 > 0 being the lowest energy value in the spectrum of the Hamiltonian and thus the mass gap. This quantity, easy to generalize to other fields, is what is generally measured in lattice computations. It was proved in this way that Yang–Mills theory develops a mass gap on a lattice. Importance of Yang–Mills theory Most known and nontrivial (i.e. interacting) quantum field theories in 4 dimensions are effective field theories with a cutoff scale. Since the beta function is positive for most models, it appears that most such models have a Landau pole, as it is not at all clear whether or not they have nontrivial UV fixed points. This means that if such a QFT is well-defined at all scales, as it has to be to satisfy the axioms of axiomatic quantum field theory, it would have to be trivial (i.e. a free field theory). Quantum Yang–Mills theory with a non-abelian gauge group and no quarks is an exception, because asymptotic freedom characterizes this theory, meaning that it has a trivial UV fixed point. Hence it is the simplest nontrivial constructive QFT in 4 dimensions. (QCD is a more complicated theory because it involves quarks.) Quark confinement At the level of rigor of theoretical physics, it has been well established that the quantum Yang–Mills theory for a non-abelian Lie group exhibits a property known as confinement, though proper mathematical physics has more demanding requirements on a proof. A consequence of this property is that above the confinement scale, the color charges are connected by chromodynamic flux tubes leading to a linear potential between the charges. Hence isolated color charge and isolated gluons cannot exist. In the absence of confinement, we would expect to see massless gluons, but since they are confined, all we would see are color-neutral bound states of gluons, called glueballs. If glueballs exist, they are massive, which is why a mass gap is expected. References Further reading External links The Millennium Prize Problems: Yang–Mills and Mass Gap Millennium Prize Problems Gauge theories Quantum chromodynamics Unsolved problems in mathematics Unsolved problems in physics
0.771449
0.99547
0.767955
Hamiltonian (quantum mechanics)
In quantum mechanics, the Hamiltonian of a system is an operator corresponding to the total energy of that system, including both kinetic energy and potential energy. Its spectrum, the system's energy spectrum or its set of energy eigenvalues, is the set of possible outcomes obtainable from a measurement of the system's total energy. Due to its close relation to the energy spectrum and time-evolution of a system, it is of fundamental importance in most formulations of quantum theory. The Hamiltonian is named after William Rowan Hamilton, who developed a revolutionary reformulation of Newtonian mechanics, known as Hamiltonian mechanics, which was historically important to the development of quantum physics. Similar to vector notation, it is typically denoted by , where the hat indicates that it is an operator. It can also be written as or . Introduction The Hamiltonian of a system represents the total energy of the system; that is, the sum of the kinetic and potential energies of all particles associated with the system. The Hamiltonian takes different forms and can be simplified in some cases by taking into account the concrete characteristics of the system under analysis, such as single or several particles in the system, interaction between particles, kind of potential energy, time varying potential or time independent one. Schrödinger Hamiltonian One particle By analogy with classical mechanics, the Hamiltonian is commonly expressed as the sum of operators corresponding to the kinetic and potential energies of a system in the form where is the potential energy operator and is the kinetic energy operator in which is the mass of the particle, the dot denotes the dot product of vectors, and is the momentum operator where a is the del operator. The dot product of with itself is the Laplacian . In three dimensions using Cartesian coordinates the Laplace operator is Although this is not the technical definition of the Hamiltonian in classical mechanics, it is the form it most commonly takes. Combining these yields the form used in the Schrödinger equation: which allows one to apply the Hamiltonian to systems described by a wave function . This is the approach commonly taken in introductory treatments of quantum mechanics, using the formalism of Schrödinger's wave mechanics. One can also make substitutions to certain variables to fit specific cases, such as some involving electromagnetic fields. Expectation value It can be shown that the expectation value of the Hamiltonian which gives the energy expectation value will always be greater than or equal to the minimum potential of the system. Consider computing the expectation value of kinetic energy: Hence the expectation value of kinetic energy is always non-negative. This result can be used to calculate the expectation value of the total energy which is given for a normalized wavefunction as: which complete the proof. Similarly, the condition can be generalized to any higher dimensions using divergence theorem. Many particles The formalism can be extended to particles: where is the potential energy function, now a function of the spatial configuration of the system and time (a particular set of spatial positions at some instant of time defines a configuration) and is the kinetic energy operator of particle , is the gradient for particle , and is the Laplacian for particle : Combining these yields the Schrödinger Hamiltonian for the -particle case: However, complications can arise in the many-body problem. 
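Before turning to those complications, the operators described above can be summarized in their standard textbook forms. This is a sketch assuming a single spinless particle of mass m in a potential V, and then N such particles; it follows the usual conventions of wave mechanics rather than any particular source.

```latex
% One-particle Schrödinger Hamiltonian: kinetic plus potential energy,
% with the momentum operator expressed through the del operator.
\hat{H} = \hat{T} + \hat{V}
        = \frac{\hat{\mathbf{p}}\cdot\hat{\mathbf{p}}}{2m} + V(\mathbf{r},t)
        = -\frac{\hbar^{2}}{2m}\nabla^{2} + V(\mathbf{r},t),
\qquad \hat{\mathbf{p}} = -i\hbar\nabla .

% N-particle form: one kinetic term per particle plus a joint potential
% that depends on the full spatial configuration.
\hat{H} = -\frac{\hbar^{2}}{2}\sum_{n=1}^{N}\frac{1}{m_{n}}\nabla_{n}^{2}
          + V(\mathbf{r}_{1},\dots,\mathbf{r}_{N},t) .
```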
Since the potential energy depends on the spatial arrangement of the particles, the kinetic energy will also depend on the spatial configuration to conserve energy. The motion due to any one particle will vary due to the motion of all the other particles in the system. For this reason cross terms for kinetic energy may appear in the Hamiltonian; a mix of the gradients for two particles: where denotes the mass of the collection of particles resulting in this extra kinetic energy. Terms of this form are known as mass polarization terms, and appear in the Hamiltonian of many-electron atoms (see below). For interacting particles, i.e. particles which interact mutually and constitute a many-body situation, the potential energy function is not simply a sum of the separate potentials (and certainly not a product, as this is dimensionally incorrect). The potential energy function can only be written as above: a function of all the spatial positions of each particle. For non-interacting particles, i.e. particles which do not interact mutually and move independently, the potential of the system is the sum of the separate potential energy for each particle, that is The general form of the Hamiltonian in this case is: where the sum is taken over all particles and their corresponding potentials; the result is that the Hamiltonian of the system is the sum of the separate Hamiltonians for each particle. This is an idealized situation—in practice the particles are almost always influenced by some potential, and there are many-body interactions. One illustrative example of a two-body interaction where this form would not apply is for electrostatic potentials due to charged particles, because they interact with each other by Coulomb interaction (electrostatic force), as shown below. Schrödinger equation The Hamiltonian generates the time evolution of quantum states. If is the state of the system at time , then This equation is the Schrödinger equation. It takes the same form as the Hamilton–Jacobi equation, which is one of the reasons is also called the Hamiltonian. Given the state at some initial time, we can solve it to obtain the state at any subsequent time. In particular, if is independent of time, then The exponential operator on the right hand side of the Schrödinger equation is usually defined by the corresponding power series in . One might notice that taking polynomials or power series of unbounded operators that are not defined everywhere may not make mathematical sense. Rigorously, to take functions of unbounded operators, a functional calculus is required. In the case of the exponential function, the continuous, or just the holomorphic functional calculus suffices. We note again, however, that for common calculations the physicists' formulation is quite sufficient. By the *-homomorphism property of the functional calculus, the operator is a unitary operator. It is the time evolution operator or propagator of a closed quantum system. If the Hamiltonian is time-independent, form a one parameter unitary group (more than a semigroup); this gives rise to the physical principle of detailed balance. Dirac formalism However, in the more general formalism of Dirac, the Hamiltonian is typically implemented as an operator on a Hilbert space in the following way: The eigenkets (eigenvectors) of , denoted , provide an orthonormal basis for the Hilbert space. 
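The time evolution and the eigenvalue problem referred to in the preceding paragraphs take the following standard forms in Dirac notation; this is a sketch for a time-independent Hamiltonian.

```latex
% Schrödinger equation: the Hamiltonian generates time evolution.
i\hbar\,\frac{d}{dt}\,|\psi(t)\rangle = \hat{H}\,|\psi(t)\rangle .

% For time-independent H, the formal solution is a unitary propagator.
|\psi(t)\rangle = U(t)\,|\psi(0)\rangle , \qquad U(t) = e^{-i\hat{H}t/\hbar} .

% Energy eigenkets and eigenvalues, which furnish the energy spectrum.
\hat{H}\,|E_{n}\rangle = E_{n}\,|E_{n}\rangle .
```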
The spectrum of allowed energy levels of the system is given by the set of eigenvalues, denoted , solving the equation: Since is a Hermitian operator, the energy is always a real number. From a mathematically rigorous point of view, care must be taken with the above assumptions. Operators on infinite-dimensional Hilbert spaces need not have eigenvalues (the set of eigenvalues does not necessarily coincide with the spectrum of an operator). However, all routine quantum mechanical calculations can be done using the physical formulation. Expressions for the Hamiltonian Following are expressions for the Hamiltonian in a number of situations. Typical ways to classify the expressions are the number of particles, number of dimensions, and the nature of the potential energy function—importantly space and time dependence. Masses are denoted by , and charges by . General forms for one particle Free particle The particle is not bound by any potential energy, so the potential is zero and this Hamiltonian is the simplest. For one dimension: and in higher dimensions: Constant-potential well For a particle in a region of constant potential (no dependence on space or time), in one dimension, the Hamiltonian is: in three dimensions This applies to the elementary "particle in a box" problem, and step potentials. Simple harmonic oscillator For a simple harmonic oscillator in one dimension, the potential varies with position (but not time), according to: where the angular frequency , effective spring constant , and mass of the oscillator satisfy: so the Hamiltonian is: For three dimensions, this becomes where the three-dimensional position vector using Cartesian coordinates is , its magnitude is Writing the Hamiltonian out in full shows it is simply the sum of the one-dimensional Hamiltonians in each direction: Rigid rotor For a rigid rotor—i.e., system of particles which can rotate freely about any axes, not bound in any potential (such as free molecules with negligible vibrational degrees of freedom, say due to double or triple chemical bonds), the Hamiltonian is: where , , and are the moment of inertia components (technically the diagonal elements of the moment of inertia tensor), and and are the total angular momentum operators (components), about the , , and axes respectively. Electrostatic (Coulomb) potential The Coulomb potential energy for two point charges and (i.e., those that have no spatial extent independently), in three dimensions, is (in SI units—rather than Gaussian units which are frequently used in electromagnetism): However, this is only the potential for one point charge due to another. If there are many charged particles, each charge has a potential energy due to every other point charge (except itself). For charges, the potential energy of charge due to all other charges is (see also Electrostatic potential energy stored in a configuration of discrete point charges): where is the electrostatic potential of charge at . 
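Written out explicitly, a few of the cases just listed take the following forms; this is a sketch in SI units with conventional symbols (m for the mass, ω for the oscillator's angular frequency, ε₀ for the vacuum permittivity).

```latex
% Free particle (V = 0), in one and in three dimensions.
\hat{H} = -\frac{\hbar^{2}}{2m}\frac{\partial^{2}}{\partial x^{2}},
\qquad
\hat{H} = -\frac{\hbar^{2}}{2m}\nabla^{2} .

% One-dimensional simple harmonic oscillator.
\hat{H} = -\frac{\hbar^{2}}{2m}\frac{\partial^{2}}{\partial x^{2}}
          + \tfrac{1}{2}\,m\omega^{2}x^{2} .

% Coulomb potential energy of two point charges q_1, q_2 at separation r.
V(r) = \frac{q_{1}q_{2}}{4\pi\varepsilon_{0}\,r} .
```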
The total potential of the system is then the sum over : so the Hamiltonian is: Electric dipole in an electric field For an electric dipole moment constituting charges of magnitude , in a uniform, electrostatic field (time-independent) , positioned in one place, the potential is: the dipole moment itself is the operator Since the particle is stationary, there is no translational kinetic energy of the dipole, so the Hamiltonian of the dipole is just the potential energy: Magnetic dipole in a magnetic field For a magnetic dipole moment in a uniform, magnetostatic field (time-independent) , positioned in one place, the potential is: Since the particle is stationary, there is no translational kinetic energy of the dipole, so the Hamiltonian of the dipole is just the potential energy: For a spin- particle, the corresponding spin magnetic moment is: where is the "spin g-factor" (not to be confused with the gyromagnetic ratio), is the electron charge, is the spin operator vector, whose components are the Pauli matrices, hence Charged particle in an electromagnetic field For a particle with mass and charge in an electromagnetic field, described by the scalar potential and vector potential , there are two parts to the Hamiltonian to substitute for. The canonical momentum operator , which includes a contribution from the field and fulfils the canonical commutation relation, must be quantized; where is the kinetic momentum. The quantization prescription reads so the corresponding kinetic energy operator is and the potential energy, which is due to the field, is given by Casting all of these into the Hamiltonian gives Energy eigenket degeneracy, symmetry, and conservation laws In many systems, two or more energy eigenstates have the same energy. A simple example of this is a free particle, whose energy eigenstates have wavefunctions that are propagating plane waves. The energy of each of these plane waves is inversely proportional to the square of its wavelength. A wave propagating in the direction is a different state from one propagating in the direction, but if they have the same wavelength, then their energies will be the same. When this happens, the states are said to be degenerate. It turns out that degeneracy occurs whenever a nontrivial unitary operator commutes with the Hamiltonian. To see this, suppose that is an energy eigenket. Then is an energy eigenket with the same eigenvalue, since Since is nontrivial, at least one pair of and must represent distinct states. Therefore, has at least one pair of degenerate energy eigenkets. In the case of the free particle, the unitary operator which produces the symmetry is the rotation operator, which rotates the wavefunctions by some angle while otherwise preserving their shape. The existence of a symmetry operator implies the existence of a conserved observable. Let be the Hermitian generator of : It is straightforward to show that if commutes with , then so does : Therefore, In obtaining this result, we have used the Schrödinger equation, as well as its dual, Thus, the expected value of the observable is conserved for any state of the system. In the case of the free particle, the conserved quantity is the angular momentum. Hamilton's equations Hamilton's equations in classical Hamiltonian mechanics have a direct analogy in quantum mechanics. Suppose we have a set of basis states , which need not necessarily be eigenstates of the energy. 
For simplicity, we assume that they are discrete, and that they are orthonormal, i.e., Note that these basis states are assumed to be independent of time. We will assume that the Hamiltonian is also independent of time. The instantaneous state of the system at time , , can be expanded in terms of these basis states: where The coefficients are complex variables. We can treat them as coordinates which specify the state of the system, like the position and momentum coordinates which specify a classical system. Like classical coordinates, they are generally not constant in time, and their time dependence gives rise to the time dependence of the system as a whole. The expectation value of the Hamiltonian of this state, which is also the mean energy, is where the last step was obtained by expanding in terms of the basis states. Each actually corresponds to two independent degrees of freedom, since the variable has a real part and an imaginary part. We now perform the following trick: instead of using the real and imaginary parts as the independent variables, we use and its complex conjugate . With this choice of independent variables, we can calculate the partial derivative By applying Schrödinger's equation and using the orthonormality of the basis states, this further reduces to Similarly, one can show that If we define "conjugate momentum" variables by then the above equations become which is precisely the form of Hamilton's equations, with the s as the generalized coordinates, the s as the conjugate momenta, and taking the place of the classical Hamiltonian. See also Hamiltonian mechanics Two-state quantum system Operator (physics) Bra–ket notation Quantum state Linear algebra Conservation of energy Potential theory Many-body problem Electrostatics Electric field Magnetic field Lieb–Thirring inequality References External links Hamiltonian mechanics Operator theory Quantum mechanics Quantum chemistry Theoretical chemistry Computational chemistry William Rowan Hamilton
0.769408
0.998109
0.767952
Eddington luminosity
The Eddington luminosity, also referred to as the Eddington limit, is the maximum luminosity a body (such as a star) can achieve when there is balance between the force of radiation acting outward and the gravitational force acting inward. The state of balance is called hydrostatic equilibrium. When a star exceeds the Eddington luminosity, it will initiate a very intense radiation-driven stellar wind from its outer layers. Since most massive stars have luminosities far below the Eddington luminosity, their winds are driven mostly by the less intense line absorption. The Eddington limit is invoked to explain the observed luminosities of accreting black holes such as quasars. Originally, Sir Arthur Eddington took only the electron scattering into account when calculating this limit, something that now is called the classical Eddington limit. Nowadays, the modified Eddington limit also takes into account other radiation processes such as bound–free and free–free radiation interaction. Derivation The Eddington limit is obtained by setting the outward radiation pressure equal to the inward gravitational force. Both forces decrease by inverse-square laws, so once equality is reached, the hydrodynamic flow is the same throughout the star. From Euler's equation in hydrostatic equilibrium, the mean acceleration is zero, where is the velocity, is the pressure, is the density, and is the gravitational potential. If the pressure is dominated by radiation pressure associated with an irradiance , Here is the opacity of the stellar material, defined as the fraction of radiation energy flux absorbed by the medium per unit density and unit length. For ionized hydrogen, , where is the Thomson scattering cross-section for the electron and is the mass of a proton. Note that is defined as the energy flux over a surface, which can be expressed with the momentum flux using for radiation. Therefore, the rate of momentum transfer from the radiation to the gaseous medium per unit density is , which explains the right-hand side of the above equation. The luminosity of a source bounded by a surface may be expressed with these relations as Now assuming that the opacity is a constant, it can be brought outside the integral. Using Gauss's theorem and Poisson's equation gives where is the mass of the central object. This result is called the Eddington luminosity. For pure ionized hydrogen, where is the mass of the Sun and is the luminosity of the Sun. The maximum possible luminosity of a source in hydrostatic equilibrium is the Eddington luminosity. If the luminosity exceeds the Eddington limit, then the radiation pressure drives an outflow. The mass of the proton appears because, in the typical environment for the outer layers of a star, the radiation pressure acts on electrons, which are driven away from the center. Because protons are negligibly pressured by the analog of Thomson scattering, due to their larger mass, the result is to create a slight charge separation and therefore a radially directed electric field, acting to lift the positive charges, which, under the conditions in stellar atmospheres, typically are free protons. When the outward electric field is sufficient to levitate the protons against gravity, both electrons and protons are expelled together. Different limits for different materials The derivation above for the outward light pressure assumes a hydrogen plasma. In other circumstances the pressure balance can be different from what it is for hydrogen. 
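For reference, carrying the derivation above through for electron-scattering opacity in pure ionized hydrogen gives the familiar expression below; the numerical coefficients are approximate.

```latex
% Eddington luminosity for Thomson scattering on pure ionized hydrogen:
% radiation pressure on the electrons balances gravity acting on the protons.
L_{\mathrm{Edd}} = \frac{4\pi G M m_{\mathrm{p}} c}{\sigma_{\mathrm{T}}}
\approx 1.26\times10^{31}\,\mathrm{W}\;\frac{M}{M_{\odot}}
\approx 3.2\times10^{4}\,L_{\odot}\;\frac{M}{M_{\odot}} .

% For a general opacity \kappa the same force balance gives
L_{\mathrm{Edd}} = \frac{4\pi G M c}{\kappa},
% so a different composition, and hence a different \kappa, shifts the limit.
```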
In an evolved star with a pure helium atmosphere, the electric field would have to lift a helium nucleus (an alpha particle), with nearly 4 times the mass of a proton, while the radiation pressure would act on 2 free electrons. Thus twice the usual Eddington luminosity would be needed to drive off an atmosphere of pure helium. At very high temperatures, as in the environment of a black hole or neutron star, high-energy photons can interact with nuclei, or even with other photons, to create an electron–positron plasma. In that situation the combined mass of the positive–negative charge carrier pair is approximately 918 times smaller (half of the proton-to-electron mass ratio), while the radiation pressure on the positrons doubles the effective upward force per unit mass, so the limiting luminosity needed is reduced by a factor of ≈ 918×2. The exact value of the Eddington luminosity depends on the chemical composition of the gas layer and the spectral energy distribution of the emission. A gas with cosmological abundances of hydrogen and helium is much more transparent than gas with solar abundance ratios. Atomic line transitions can greatly increase the effects of radiation pressure, and line-driven winds exist in some bright stars (e.g., Wolf–Rayet and O-type stars). Super-Eddington luminosities The role of the Eddington limit in today's research lies in explaining the very high mass loss rates seen in, for example, the series of outbursts of η Carinae in 1840–1860. The regular, line-driven stellar winds can only explain a mass loss rate of around ~ solar masses per year, whereas losses of up to per year are needed to understand the η Carinae outbursts. This can be done with the help of the super-Eddington winds driven by broad-spectrum radiation. Gamma-ray bursts, novae and supernovae are examples of systems exceeding their Eddington luminosity by a large factor for very short times, resulting in short and highly intensive mass loss rates. Some X-ray binaries and active galaxies are able to maintain luminosities close to the Eddington limit for very long times. For accretion-powered sources such as accreting neutron stars or cataclysmic variables (accreting white dwarfs), the limit may act to reduce or cut off the accretion flow, imposing an Eddington limit on accretion corresponding to that on luminosity. Super-Eddington accretion onto stellar-mass black holes is one possible model for ultraluminous X-ray sources (ULXSs). For accreting black holes, not all the energy released by accretion has to appear as outgoing luminosity, since energy can be lost through the event horizon, down the hole. Such sources effectively may not conserve energy. Then the accretion efficiency, or the fraction of energy actually radiated of that theoretically available from the gravitational energy release of accreting material, enters in an essential way. Other factors The Eddington limit is not a strict limit on the luminosity of a stellar object. The limit does not consider several potentially important factors, and super-Eddington objects have been observed that do not seem to have the predicted high mass-loss rate. Other factors that might affect the maximum luminosity of a star include: Porosity. A problem with steady winds driven by broad-spectrum radiation is that both the radiative flux and gravitational acceleration scale with r−2. The ratio between these factors is constant, and in a super-Eddington star, the whole envelope would become gravitationally unbound at the same time. This is not observed. 
A possible solution is introducing an atmospheric porosity, where we imagine the stellar atmosphere to consist of denser regions surrounded by regions of lower-density gas. This would reduce the coupling between radiation and matter, and the full force of the radiation field would be seen only in the more homogeneous outer, lower-density layers of the atmosphere. Turbulence. A possible destabilizing factor might be the turbulent pressure arising when energy in the convection zones builds up a field of supersonic turbulence. The importance of turbulence is being debated, however. Photon bubbles. Another factor that might explain some stable super-Eddington objects is the photon bubble effect. Photon bubbles would develop spontaneously in radiation-dominated atmospheres when the radiation pressure exceeds the gas pressure. We can imagine a region in the stellar atmosphere with a density lower than the surroundings, but with a higher radiation pressure. Such a region would rise through the atmosphere, with radiation diffusing in from the sides, leading to an even higher radiation pressure. This effect could transport radiation more efficiently than a homogeneous atmosphere, increasing the allowed total radiation rate. Accretion discs may exhibit luminosities as high as 10–100 times the Eddington limit without experiencing instabilities. Humphreys–Davidson limit Observations of massive stars show a clear upper limit to their luminosity, termed the Humphreys–Davidson limit after the researchers who first wrote about it. Only highly unstable objects are found, temporarily, at higher luminosities. Efforts to reconcile this with the theoretical Eddington limit have been largely unsuccessful. The H–D limit for cool supergiants is placed at around 320,000 . {| class="wikitable sortable" |+ Most luminous known K- and M-type supergiants ! Name ! Luminosity() !Effective temperature(K) ! Spectral type !class=unsortable|Notes !class=unsortable|References |- |NGC1313-310 |513,000 |3,780 |K5-M0 | | |- |NGC253-222 |501,000 |3,750 |M0-M2 |Uncertain extinction and thus uncertain luminosity. | |- | LGGS J013339.28+303118.8 | 479,000 |3,837 | M1Ia | | |- | Stephenson 2 DFK 49 | 390,000 |4,000 | K4 |Another paper estimate a much lower luminosity | |- | HD 269551 A |389,000 |3,800 |K/M | | |- |NGC7793-34 |380,000 |3,840 |M0-M2 | | |- | LGGS J013418.56+303808.6 | 363,000 |3,837 | | | |- | LGGS J004428.12+415502.9 | 339,000 |– | K2I | | |- |NGC247-154 |332,000 |3,570 |M2-M4 | | |- | AH Scorpii | 331,000 |3,682 | M5Ia | | |- |WLM 02 |331,000 |4,660 |K2-3I | | |- | SMC 18592 | 309,000 - 355,000 |4,050 | K5–M0Ia | | |- | LGGS J004539.99+415404.1 | 309,000 |– | M3I | | |- | LGGS J013350.62+303230.3 | 309,000 |3,800 | | | |- | HV 888 | 302,000 |3,442–3,500 | M4Ia | | |- | RW Cephei | 300,000 |4,400 | K2Ia-0 |A K-type yellow hypergiant. | |- | LGGS J013358.54+303419.9 | 295,000 |4,050 | | | |- | GCIRS 7 | 295,000 |3,600 | M1I | | |- | SP77 21-12 | 295,000 |4,050 | K5-M3 | | |- | EV Carinae | 288,000 |3,574 | M4.5Ia | | |- | HV 12463 | 288,000 |3,550 | M | Probably not a LMC member. | |- | LGGS J003951.33+405303.7 | 288,000 |– | | | |- | LGGS J013352.96+303816.0 | 282,000 |3,900 | | | |- |RSGC1-F13 |282,000 - 290,000 |3,590 - 4,200 |M3 - K2 | | |- | WOH G64 | 282,000 |3,400 | M5I | Likely the largest known star. 
| |- | LGGS J004731.12+422749.1 | 275,000 |– | | | |- | VY Canis Majoris | 270,000 |3,490 | M3–M4.5 | | |- | Mu Cephei | | 3,750 | M2 Ia | | |- | LGGS J004428.48+415130.9 | 269,000 | | M1I | | |- |RSGC1-F01 |263,000 - 335,000 |3,450 |M5 | | |- |NGC247-155 |263,000 |3,510 |M2-M4 |Uncertain extinction and thus uncertain luminosity. | |- | LGGS J013241.94+302047.5 | 257,000 |3,950 | | | |- |NGC300-125 |257,000 |3,350 |M2-M4 | | |- |Westerlund 1 W26 |256,000 - 312,000 |3,720 - 4,000 |M0.5-M6Ia | | |- |HD 143183 |254,000 |3,605 |M3 | | |- | LMC 145013 | 251,000 - 339,000 |3,950 | M2.5Ia–Ib | | |- | LMC 25320 | 251,000 |3,800 | M | | |- |V354 Cephei |250,000 |3,615 |M1Ia-M3.5Ib | | |} See also Hayashi limit List of most massive stars M82 X-1 M82 X-2 References Further reading External links Surpassing the Eddington Limit. Concepts in astrophysics Stellar astronomy
0.777641
0.987514
0.767931
Anfinsen's dogma
Anfinsen's dogma, also known as the thermodynamic hypothesis, is a postulate in molecular biology. It states that, at least for a small globular protein in its standard physiological environment, the native structure is determined only by the protein's amino acid sequence. The dogma was championed by the Nobel Prize Laureate Christian B. Anfinsen from his research on the folding of ribonuclease A. The postulate amounts to saying that, at the environmental conditions (temperature, solvent concentration and composition, etc.) at which folding occurs, the native structure is a unique, stable and kinetically accessible minimum of the free energy. In other words, there are three conditions for formation of a unique protein structure: Uniqueness – Requires that the sequence does not have any other configuration with a comparable free energy. Hence the free energy minimum must be unchallenged. Stability – Small changes in the surrounding environment cannot give rise to changes in the minimum configuration. This can be pictured as a free energy surface that looks more like a funnel (with the native state in the bottom of it) rather than like a soup plate (with several closely related low-energy states); the free energy surface around the native state must be rather steep and high, in order to provide stability. Kinetical accessibility – Means that the path in the free energy surface from the unfolded to the folded state must be reasonably smooth or, in other words, that the folding of the chain must not involve highly complex changes in the shape (like knots or other high order conformations). Basic changes in the shape of the protein happen dependent on their environment, shifting shape to suit their place. This creates multiple configurations for biomolecules to shift into. Challenges to Anfinsen's dogma Protein folding in a cell is a highly complex process that involves transport of the newly synthesized proteins to appropriate cellular compartments through targeting, permanent misfolding, temporarily unfolded states, post-translational modifications, quality control, and formation of protein complexes facilitated by chaperones. Some proteins need the assistance of chaperone proteins to fold properly. It has been suggested that this disproves Anfinsen's dogma. However, the chaperones do not appear to affect the final state of the protein; they seem to work primarily by preventing aggregation of several protein molecules prior to the final folded state of the protein. However, at least some chaperones are required for the proper folding of their subject proteins. Many proteins can also undergo aggregation and misfolding. For example, prions are stable conformations of proteins which differ from the native folding state. In bovine spongiform encephalopathy, native proteins re-fold into a different stable conformation, which causes fatal amyloid buildup. Other amyloid diseases, including Alzheimer's disease and Parkinson's disease, are also exceptions to Anfinsen's dogma. Some proteins have multiple native structures, and change their fold based on some external factors. For example, the KaiB protein complex switches fold throughout the day, acting as a clock for cyanobacteria. It has been estimated that around 0.5–4% of PDB proteins switch folds. 
The switching between alternative structures is driven by interactions of the protein with small ligands or other proteins, by chemical modifications (such as phosphorylation) or by changed environmental conditions, such as temperature, pH or membrane potential. Each alternative structure may either correspond to the global minimum of free energy of the protein at the given conditions or be kinetically trapped in a higher local minimum of free energy. References Further reading Profiles in Science: The Christian B. Anfinsen Papers-Articles Molecular biology Protein structure Hypotheses
0.787403
0.975266
0.767927
Proper orthogonal decomposition
The proper orthogonal decomposition is a numerical method that enables a reduction in the complexity of computer intensive simulations such as computational fluid dynamics and structural analysis (like crash simulations). Typically in fluid dynamics and turbulences analysis, it is used to replace the Navier–Stokes equations by simpler models to solve. It belongs to a class of algorithms called model order reduction (or in short model reduction). What it essentially does is to train a model based on simulation data. To this extent, it can be associated with the field of machine learning. POD and PCA The main use of POD is to decompose a physical field (like pressure, temperature in fluid dynamics or stress and deformation in structural analysis), depending on the different variables that influence its physical behaviors. As its name hints, it's operating an Orthogonal Decomposition along with the Principal Components of the field. As such it is assimilated with the principal component analysis from Pearson in the field of statistics, or the singular value decomposition in linear algebra because it refers to eigenvalues and eigenvectors of a physical field. In those domains, it is associated with the research of Karhunen and Loève, and their Karhunen–Loève theorem. Mathematical expression The first idea behind the Proper Orthogonal Decomposition (POD), as it was originally formulated in the domain of fluid dynamics to analyze turbulences, is to decompose a random vector field u(x, t) into a set of deterministic spatial functions Φk(x) modulated by random time coefficients ak(t) so that: The first step is to sample the vector field over a period of time in what we call snapshots (as display in the image of the POD snapshots). This snapshot method is averaging the samples over the space dimension n, and correlating them with each other along the time samples p: with n spatial elements, and p time samples The next step is to compute the covariance matrix C We then compute the eigenvalues and eigenvectors of C and we order them from the largest eigenvalue to the smallest. We obtain n eigenvalues λ1,...,λn and a set of n eigenvectors arranged as columns in an n × n matrix Φ: References External links MIT: http://web.mit.edu/6.242/www/images/lec6_6242_2004.pdf Stanford University - Charbel Farhat & David Amsallem https://web.stanford.edu/group/frg/course_work/CME345/CA-CME345-Ch4.pdf Weiss, Julien: A Tutorial on the Proper Orthogonal Decomposition. In: 2019 AIAA Aviation Forum. 17–21 June 2019, Dallas, Texas, United States. French course from CNRS https://www.math.u-bordeaux.fr/~mbergman/PDF/OuvrageSynthese/OCET06.pdf Applications of the Proper Orthogonal Decomposition Method http://www.cerfacs.fr/~cfdbib/repository/WN_CFD_07_97.pdf Continuum mechanics Numerical differential equations Partial differential equations Structural analysis Computational electromagnetics
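A minimal sketch of the snapshot procedure outlined above, written in Python with NumPy. The function name, the synthetic test field, and the choice of computing the modes through a thin singular value decomposition (mathematically equivalent to diagonalizing the snapshot covariance matrix) are illustrative assumptions rather than part of any particular POD implementation.

```python
import numpy as np

def pod_modes(snapshots: np.ndarray, n_modes: int):
    """Compute POD modes from a snapshot matrix.

    snapshots : (n, p) array, each column is u(x, t_k) sampled at n spatial points.
    Returns the first n_modes spatial modes Phi (n, n_modes), their energies
    (eigenvalues of the snapshot covariance) and the temporal coefficients a_k(t).
    """
    # Subtract the temporal mean so the modes describe fluctuations about it.
    mean = snapshots.mean(axis=1, keepdims=True)
    fluct = snapshots - mean

    # Thin SVD: columns of U are the spatial modes; s**2 / p are the
    # eigenvalues of the snapshot covariance matrix.
    U, s, Vt = np.linalg.svd(fluct, full_matrices=False)
    energies = s**2 / snapshots.shape[1]

    Phi = U[:, :n_modes]          # orthonormal spatial modes Phi_k(x)
    a = Phi.T @ fluct             # time coefficients a_k(t_j)
    return Phi, energies[:n_modes], a

# Example with synthetic data: 200 spatial points, 50 time snapshots.
if __name__ == "__main__":
    x = np.linspace(0.0, 1.0, 200)
    t = np.linspace(0.0, 1.0, 50)
    field = np.sin(2 * np.pi * x)[:, None] * np.cos(2 * np.pi * t)[None, :]
    field += 0.1 * np.random.default_rng(0).standard_normal((200, 50))
    Phi, lam, a = pod_modes(field, n_modes=3)
    print(lam)  # the leading mode should carry most of the energy
```

Truncating to the first few modes and evolving only the coefficients a_k(t) is what yields the reduced-order model.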
0.780384
0.983997
0.767896
James Clerk Maxwell
James Clerk Maxwell (13 June 1831 – 5 November 1879) was a Scottish physicist and mathematician who was responsible for the classical theory of electromagnetic radiation, which was the first theory to describe electricity, magnetism and light as different manifestations of the same phenomenon. Maxwell's equations for electromagnetism have been called the "second great unification in physics" where the first one had been realised by Isaac Newton. With the publication of "A Dynamical Theory of the Electromagnetic Field" in 1865, Maxwell demonstrated that electric and magnetic fields travel through space as waves moving at the speed of light. He proposed that light is an undulation in the same medium that is the cause of electric and magnetic phenomena. The unification of light and electrical phenomena led to his prediction of the existence of radio waves. Maxwell is also regarded as a founder of the modern field of electrical engineering. Maxwell was the first to derive the Maxwell–Boltzmann distribution, a statistical means of describing aspects of the kinetic theory of gases, which he worked on sporadically throughout his career. He is also known for presenting the first durable colour photograph in 1861 and for his foundational work on analysing the rigidity of rod-and-joint frameworks (trusses) like those in many bridges. He is responsible for modern dimensional analysis. Maxwell is also recognized for laying the groundwork for chaos theory. His discoveries helped usher in the era of modern physics, laying the foundation for such fields as special relativity and quantum mechanics. Many physicists regard Maxwell as the 19th-century scientist having the greatest influence on 20th-century physics. His contributions to the science are considered by many to be of the same magnitude as those of Isaac Newton and Albert Einstein. In the millennium poll—a survey of the 100 most prominent physicists—Maxwell was voted the third greatest physicist of all time, behind only Newton and Einstein. On the centenary of Maxwell's birthday, Einstein described Maxwell's work as the "most profound and the most fruitful that physics has experienced since the time of Newton". Einstein, when he visited the University of Cambridge in 1922, was told by his host that he had done great things because he stood on Newton's shoulders; Einstein replied: "No I don't. I stand on the shoulders of Maxwell." Tom Siegfried described Maxwell as "one of those once-in-a-century geniuses who perceived the physical world with sharper senses than those around him". Life Early life, 1831–1839 James Clerk Maxwell was born on 13 June 1831 at 14 India Street, Edinburgh, to John Clerk Maxwell of Middlebie, an advocate, and Frances Cay, daughter of Robert Hodshon Cay and sister of John Cay. (His birthplace now houses a museum operated by the James Clerk Maxwell Foundation.) His father was a man of comfortable means of the Clerk family of Penicuik, holders of the baronetcy of Clerk of Penicuik. His father's brother was the 6th baronet. He had been born "John Clerk", adding "Maxwell" to his own after he inherited (as an infant in 1793) the Middlebie estate, a Maxwell property in Dumfriesshire. James was a first cousin of both the artist Jemima Blackburn (the daughter of his father's sister) and the civil engineer William Dyce Cay (the son of his mother's brother). Cay and Maxwell were close friends and Cay acted as his best man when Maxwell married. 
Maxwell's parents met and married when they were well into their thirties; his mother was nearly 40 when he was born. They had had one earlier child, a daughter named Elizabeth, who died in infancy. When Maxwell was young his family moved to Glenlair, in Kirkcudbrightshire, which his parents had built on the estate which comprised . All indications suggest that Maxwell had maintained an unquenchable curiosity from an early age. By the age of three, everything that moved, shone, or made a noise drew the question: "what's the go o' that?" In a passage added to a letter from his father to his sister-in-law Jane Cay in 1834, his mother described this innate sense of inquisitiveness: Education, 1839–1847 Recognising the boy's potential, Maxwell's mother Frances took responsibility for his early education, which in the Victorian era was largely the job of the woman of the house. At eight he could recite long passages of John Milton and the whole of the 119th psalm (176 verses). Indeed, his knowledge of scripture was already detailed; he could give chapter and verse for almost any quotation from the Psalms. His mother was taken ill with abdominal cancer and, after an unsuccessful operation, died in December 1839 when he was eight years old. His education was then overseen by his father and his father's sister-in-law Jane, both of whom played pivotal roles in his life. His formal schooling began unsuccessfully under the guidance of a 16-year-old hired tutor. Little is known about the young man hired to instruct Maxwell, except that he treated the younger boy harshly, chiding him for being slow and wayward. The tutor was dismissed in November 1841. James' father took him to Robert Davidson's demonstration of electric propulsion and magnetic force on 12 February 1842, an experience with profound implications for the boy. Maxwell was sent to the prestigious Edinburgh Academy. He lodged during term times at the house of his aunt Isabella. During this time his passion for drawing was encouraged by his older cousin Jemima. The 10-year-old Maxwell, having been raised in isolation on his father's countryside estate, did not fit in well at school. The first year had been full, obliging him to join the second year with classmates a year his senior. His mannerisms and Galloway accent struck the other boys as rustic. Having arrived on his first day of school wearing a pair of homemade shoes and a tunic, he earned the unkind nickname of "Daftie". He never seemed to resent the epithet, bearing it without complaint for many years. Social isolation at the Academy ended when he met Lewis Campbell and Peter Guthrie Tait, two boys of a similar age who were to become notable scholars later in life. They remained lifelong friends. Maxwell was fascinated by geometry at an early age, rediscovering the regular polyhedra before he received any formal instruction. Despite his winning the school's scripture biography prize in his second year, his academic work remained unnoticed until, at the age of 13, he won the school's mathematical medal and first prize for both English and poetry. Maxwell's interests ranged far beyond the school syllabus and he did not pay particular attention to examination performance. He wrote his first scientific paper at the age of 14. In it, he described a mechanical means of drawing mathematical curves with a piece of twine, and the properties of ellipses, Cartesian ovals, and related curves with more than two foci. 
The work, of 1846, "On the description of oval curves and those having a plurality of foci" was presented to the Royal Society of Edinburgh by James Forbes, a professor of natural philosophy at the University of Edinburgh, because Maxwell was deemed too young to present the work himself. The work was not entirely original, since René Descartes had also examined the properties of such multifocal ellipses in the 17th century, but Maxwell had simplified their construction. University of Edinburgh, 1847–1850 Maxwell left the Academy in 1847 at age 16 and began attending classes at the University of Edinburgh. He had the opportunity to attend the University of Cambridge, but decided, after his first term, to complete the full course of his undergraduate studies at Edinburgh. The academic staff of the university included some highly regarded names; his first-year tutors included Sir William Hamilton, who lectured him on logic and metaphysics, Philip Kelland on mathematics, and James Forbes on natural philosophy. He did not find his classes demanding, and was, therefore, able to immerse himself in private study during free time at the university and particularly when back home at Glenlair. There he would experiment with improvised chemical, electric, and magnetic apparatus; however, his chief concerns regarded the properties of polarised light. He constructed shaped blocks of gelatine, subjected them to various stresses, and with a pair of polarising prisms given to him by William Nicol, viewed the coloured fringes that had developed within the jelly. Through this practice he discovered photoelasticity, which is a means of determining the stress distribution within physical structures. At age 18, Maxwell contributed two papers for the Transactions of the Royal Society of Edinburgh. One of these, "On the Equilibrium of Elastic Solids", laid the foundation for an important discovery later in his life, which was the temporary double refraction produced in viscous liquids by shear stress. His other paper was "Rolling Curves" and, just as with the paper "Oval Curves" that he had written at the Edinburgh Academy, he was again considered too young to stand at the rostrum to present it himself. The paper was delivered to the Royal Society by his tutor Kelland instead. University of Cambridge, 1850–1856 In October 1850, already an accomplished mathematician, Maxwell left Scotland for the University of Cambridge. He initially attended Peterhouse, but before the end of his first term transferred to Trinity, where he believed it would be easier to obtain a fellowship. At Trinity he was elected to the elite secret society known as the Cambridge Apostles. Maxwell's intellectual understanding of his Christian faith and of science grew rapidly during his Cambridge years. He joined the "Apostles", an exclusive debating society of the intellectual elite, where through his essays he sought to work out this understanding. In the summer of his third year, Maxwell spent some time at the Suffolk home of the Rev. C. B. Tayler, the uncle of a classmate, G. W. H. Tayler. The love of God shown by the family impressed Maxwell, particularly after he was nursed back from ill health by the minister and his wife. On his return to Cambridge, Maxwell writes to his recent host a chatty and affectionate letter including the following testimony, In November 1851, Maxwell studied under William Hopkins, whose success in nurturing mathematical genius had earned him the nickname of "senior wrangler-maker". 
In 1854, Maxwell graduated from Trinity with a degree in mathematics. He scored second highest in the final examination, coming behind Edward Routh and earning himself the title of Second Wrangler. He was later declared equal with Routh in the more exacting ordeal of the Smith's Prize examination. Immediately after earning his degree, Maxwell read his paper "On the Transformation of Surfaces by Bending" to the Cambridge Philosophical Society. This is one of the few purely mathematical papers he had written, demonstrating his growing stature as a mathematician. Maxwell decided to remain at Trinity after graduating and applied for a fellowship, which was a process that he could expect to take a couple of years. Buoyed by his success as a research student, he would be free, apart from some tutoring and examining duties, to pursue scientific interests at his own leisure. The nature and perception of colour was one such interest which he had begun at the University of Edinburgh while he was a student of Forbes. With the coloured spinning tops invented by Forbes, Maxwell was able to demonstrate that white light would result from a mixture of red, green, and blue light. His paper "Experiments on Colour" laid out the principles of colour combination and was presented to the Royal Society of Edinburgh in March 1855. Maxwell was this time able to deliver it himself. Maxwell was made a fellow of Trinity on 10 October 1855, sooner than was the norm, and was asked to prepare lectures on hydrostatics and optics and to set examination papers. The following February he was urged by Forbes to apply for the newly vacant Chair of Natural Philosophy at Marischal College, Aberdeen. His father assisted him in the task of preparing the necessary references, but died on 2 April at Glenlair before either knew the result of Maxwell's candidacy. He accepted the professorship at Aberdeen, leaving Cambridge in November 1856. Marischal College, Aberdeen, 1856–1860 The 25-year-old Maxwell was a good 15 years younger than any other professor at Marischal. He engaged himself with his new responsibilities as head of a department, devising the syllabus and preparing lectures. He committed himself to lecturing 15 hours a week, including a weekly pro bono lecture to the local working men's college. He lived in Aberdeen with his cousin William Dyce Cay, a Scottish civil engineer, during the six months of the academic year and spent the summers at Glenlair, which he had inherited from his father. Later, his former student described Maxwell as follows: In the late 1850s shortly before 9 am any winter’s morning you might well have seen the young James Clerk Maxwell, in his mid to late 20s, a man of middling height, with frame strongly knit, and a certain spring and elasticity in his gait; dressed for comfortable ease rather than elegance; a face expressive at once of sagacity and good humour, but overlaid with a deep shade of thoughtfulness; features boldly put pleasingly marked; eyes dark and glowing; hair and beard perfectly black, and forming a strong contrast to the pallor of his complexion. He focused his attention on a problem that had eluded scientists for 200 years: the nature of Saturn's rings. It was unknown how they could remain stable without breaking up, drifting away or crashing into Saturn. The problem took on a particular resonance at that time because St John's College, Cambridge, had chosen it as the topic for the 1857 Adams Prize. 
Maxwell devoted two years to studying the problem, proving that a regular solid ring could not be stable, while a fluid ring would be forced by wave action to break up into blobs. Since neither was observed, he concluded that the rings must be composed of numerous small particles he called "brick-bats", each independently orbiting Saturn. Maxwell was awarded the £130 Adams Prize in 1859 for his essay "On the stability of the motion of Saturn's rings"; he was the only entrant to have made enough headway to submit an entry. His work was so detailed and convincing that when George Biddell Airy read it he commented, "It is one of the most remarkable applications of mathematics to physics that I have ever seen." It was considered the final word on the issue until direct observations by the Voyager flybys of the 1980s confirmed Maxwell's prediction that the rings were composed of particles. It is now understood, however, that the rings' particles are not totally stable, being pulled by gravity onto Saturn. The rings are expected to vanish entirely over the next 300 million years. In 1857 Maxwell befriended the Reverend Daniel Dewar, who was then the Principal of Marischal. Through him Maxwell met Dewar's daughter, Katherine Mary Dewar. They were engaged in February 1858 and married in Aberdeen on 2 June 1858. On the marriage record, Maxwell is listed as Professor of Natural Philosophy in Marischal College, Aberdeen. Katherine was seven years Maxwell's senior. Comparatively little is known of her, although it is known that she helped in his lab and worked on experiments in viscosity. Maxwell's biographer and friend, Lewis Campbell, adopted an uncharacteristic reticence on the subject of Katherine, though describing their married life as "one of unexampled devotion". In 1860 Marischal College merged with the neighbouring King's College to form the University of Aberdeen. There was no room for two professors of Natural Philosophy, so Maxwell, despite his scientific reputation, found himself laid off. He was unsuccessful in applying for Forbes's recently vacated chair at Edinburgh, the post instead going to Tait. Maxwell was granted the Chair of Natural Philosophy at King's College, London, instead. After recovering from a near-fatal bout of smallpox in 1860, he moved to London with his wife. King's College, London, 1860–1865 Maxwell's time at King's was probably the most productive of his career. He was awarded the Royal Society's Rumford Medal in 1860 for his work on colour and was later elected to the Society in 1861. This period of his life would see him display the world's first light-fast colour photograph, further develop his ideas on the viscosity of gases, and propose a system of defining physical quantities—now known as dimensional analysis. Maxwell would often attend lectures at the Royal Institution, where he came into regular contact with Michael Faraday. The relationship between the two men could not be described as being close, because Faraday was 40 years Maxwell's senior and showed signs of senility. They nevertheless maintained a strong respect for each other's talents. This time is especially noteworthy for the advances Maxwell made in the fields of electricity and magnetism. He examined the nature of both electric and magnetic fields in his two-part paper "On physical lines of force", which was published in 1861. In it, he provided a conceptual model for electromagnetic induction, consisting of tiny spinning cells of magnetic flux. 
Two further parts were added to the paper and published in early 1862. In the first additional part, he discussed the nature of electrostatics and displacement current. In the second additional part, he dealt with the rotation of the plane of the polarisation of light in a magnetic field, a phenomenon that had been discovered by Faraday and is now known as the Faraday effect. Later years, 1865–1879 In 1865 Maxwell resigned the chair at King's College, London, and returned to Glenlair with Katherine. In his paper "On governors" (1868) he mathematically described the behaviour of governors, the devices that control the speed of steam engines, thereby establishing the theoretical basis of control engineering. In his paper "On reciprocal figures, frames and diagrams of forces" (1870) he discussed the rigidity of various designs of lattice. He wrote the textbook Theory of Heat (1871) and the treatise Matter and Motion (1876). Maxwell was also the first to make explicit use of dimensional analysis, in 1871. In 1871 he returned to Cambridge to become the first Cavendish Professor of Physics. Maxwell was put in charge of the development of the Cavendish Laboratory, supervising every step in the progress of the building and the purchase of the collection of apparatus. One of Maxwell's last great contributions to science was the editing (with copious original notes) of the research of Henry Cavendish, from which it appeared that Cavendish researched, amongst other things, such questions as the density of the Earth and the composition of water. He was elected a member of the American Philosophical Society in 1876. Death In April 1879 Maxwell began to have difficulty in swallowing, the first symptom of his fatal illness. Maxwell died in Cambridge of abdominal cancer on 5 November 1879 at the age of 48. His mother had died at the same age of the same type of cancer. The minister who regularly visited him in his last weeks was astonished at his lucidity and at the immense power and scope of his memory. Maxwell is buried at Parton Kirk, near Castle Douglas in Galloway, close to where he grew up. The extended biography The Life of James Clerk Maxwell, by his former schoolfellow and lifelong friend Professor Lewis Campbell, was published in 1882. His collected works were issued in two volumes by the Cambridge University Press in 1890. The executors of Maxwell's estate were his physician George Edward Paget, G. G. Stokes, and Colin Mackenzie, who was Maxwell's cousin. Overburdened with work, Stokes passed Maxwell's papers to William Garnett, who had effective custody of the papers until about 1884. There is a memorial inscription to him near the choir screen at Westminster Abbey. Personal life As a great lover of Scottish poetry, Maxwell memorised poems and wrote his own. The best known is Rigid Body Sings, closely based on "Comin' Through the Rye" by Robert Burns, which he apparently used to sing while accompanying himself on a guitar. A collection of his poems was published by his friend Lewis Campbell in 1882. Descriptions of Maxwell remark upon his exceptional intellectual qualities being matched by social awkwardness. Maxwell wrote the following aphorism for his own conduct as a scientist: He that would enjoy life and act with freedom must have the work of the day continually before his eyes. 
Not yesterday's work, lest he fall into despair; not to-morrow's, lest he become a visionary; not that which ends with the day, which is a worldly work; nor yet that only which remains to eternity, for by it he cannot shape his action. Happy is the man who can recognize in the work of to-day a connected portion of the work of life, and an embodiment of the work of eternity. The foundations of his confidence are unchangeable, for he has been made a partaker of Infinity. He strenuously works out his daily enterprises, because the present is given him for a possession. Maxwell was an evangelical Presbyterian and in his later years became an Elder of the Church of Scotland. Maxwell's religious beliefs and related activities have been the focus of a number of papers. Attending both Church of Scotland (his father's denomination) and Episcopalian (his mother's denomination) services as a child, Maxwell underwent an evangelical conversion in April 1853. One facet of this conversion may have aligned him with an antipositivist position. Scientific legacy Electromagnetism Maxwell had studied and commented on electricity and magnetism as early as 1855, when his paper "On Faraday's lines of force" was read to the Cambridge Philosophical Society. The paper presented a simplified model of Faraday's work and how electricity and magnetism are related. He reduced all of the current knowledge into a linked set of differential equations with 20 equations in 20 variables. This work was later published as "On Physical Lines of Force" in March 1861. Around 1862, while lecturing at King's College, Maxwell calculated that the speed of propagation of an electromagnetic field is approximately that of the speed of light. He considered this to be more than just a coincidence, commenting, "We can scarcely avoid the conclusion that light consists in the transverse undulations of the same medium which is the cause of electric and magnetic phenomena." Working on the problem further, Maxwell showed that the equations predict the existence of waves of oscillating electric and magnetic fields that travel through empty space at a speed that could be predicted from simple electrical experiments; using the data available at the time, Maxwell obtained a velocity very close to the known speed of light. In his 1865 paper "A Dynamical Theory of the Electromagnetic Field", Maxwell wrote, "The agreement of the results seems to show that light and magnetism are affections of the same substance, and that light is an electromagnetic disturbance propagated through the field according to electromagnetic laws". His famous twenty equations, in their modern form of partial differential equations, first appeared in fully developed form in his textbook A Treatise on Electricity and Magnetism in 1873. Most of this work was done by Maxwell at Glenlair during the period between holding his London post and his taking up the Cavendish chair. Oliver Heaviside reduced the complexity of Maxwell's theory down to four partial differential equations, known now collectively as Maxwell's Laws or Maxwell's equations. Although potentials fell out of favour in the late nineteenth century, the use of scalar and vector potentials is now standard in the solution of Maxwell's equations. As Barrett and Grimes (1995) describe: Maxwell expressed electromagnetism in the algebra of quaternions and made the electromagnetic potential the centerpiece of his theory. In 1881 Heaviside replaced the electromagnetic potential field by force fields as the centerpiece of electromagnetic theory. 
According to Heaviside, the electromagnetic potential field was arbitrary and needed to be "assassinated". (sic) A few years later there was a debate between Heaviside and [Peter Guthrie] Tate (sic) about the relative merits of vector analysis and quaternions. The result was the realization that there was no need for the greater physical insights provided by quaternions if the theory was purely local, and vector analysis became commonplace. Maxwell was proved correct, and his quantitative connection between light and electromagnetism is considered one of the great accomplishments of 19th-century mathematical physics. Maxwell also introduced the concept of the electromagnetic field, in contrast to the lines of force that Faraday described. By understanding the propagation of electromagnetism as a field emitted by active particles, Maxwell could advance his work on light. At that time, Maxwell believed that the propagation of light required a medium for the waves, dubbed the luminiferous aether. Over time, the existence of such a medium, permeating all space and yet apparently undetectable by mechanical means, proved impossible to reconcile with experiments such as the Michelson–Morley experiment. Moreover, it seemed to require an absolute frame of reference in which the equations were valid, with the distasteful result that the equations changed form for a moving observer. These difficulties inspired Albert Einstein to formulate the theory of special relativity; in the process, Einstein dispensed with the requirement of a stationary luminiferous aether. Colour vision Along with most physicists of the time, Maxwell had a strong interest in psychology. Following in the footsteps of Isaac Newton and Thomas Young, he was particularly interested in the study of colour vision. From 1855 to 1872, Maxwell published at intervals a series of investigations concerning the perception of colour, colour-blindness, and colour theory, and was awarded the Rumford Medal for "On the Theory of Colour Vision". Isaac Newton had demonstrated, using prisms, that white light, such as sunlight, is composed of a number of monochromatic components which could then be recombined into white light. Newton also showed that an orange paint made of yellow and red could look exactly like a monochromatic orange light, despite being composed of two monochromatic lights, one yellow and one red. Hence the paradox that puzzled physicists of the time: two complex lights (composed of more than one monochromatic light) could look alike yet be physically different; such lights are called metamers. Thomas Young later proposed that this paradox could be explained by colours being perceived through a limited number of channels in the eye, which he proposed to be three: the trichromatic theory of colour vision. Maxwell used the then newly developed tools of linear algebra to test Young's theory: if colour is sensed through three receptors, the stimulation produced by any light should be matched by a suitable mixture of three fixed primary lights (in fact, by almost any set of three sufficiently different lights). He demonstrated that this is the case, inventing colour matching experiments and colourimetry. Maxwell was also interested in applying his theory of colour perception, notably to colour photography. The idea stemmed directly from his work on colour perception: if a mixture of any three lights could reproduce any perceivable colour, then colour photographs could be produced with a set of three coloured filters. 
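Maxwell's matching argument can be phrased as a small linear-algebra problem: the receptor response to a target light is set equal to a weighted sum of the responses to three primaries, and the weights are found by solving a 3-by-3 system. The sketch below illustrates this; the receptor-response numbers are invented for the example and are not Maxwell's data or measured colour-matching functions.

```python
# Colour matching as a 3x3 linear system, in the spirit of the trichromatic
# argument described above. All numbers are invented for illustration only.
import numpy as np

# Rows: hypothetical receptor responses (L, M, S) to unit amounts of three
# primaries R, G, B.
primaries = np.array([
    [0.90, 0.30, 0.05],
    [0.40, 0.80, 0.10],
    [0.05, 0.20, 0.90],
])

# Receptor response produced by some target light (again, invented numbers).
target = np.array([0.60, 0.50, 0.30])

# Amounts of the three primaries that produce the same receptor response,
# i.e. a metameric match to the target light.
weights = np.linalg.solve(primaries, target)
print("primary weights (R, G, B):", weights)
print("reproduced response:", primaries @ weights)
```

A negative weight simply means that the corresponding primary must be added to the target side of the match instead, exactly as in real colour matching experiments.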
In the course of his 1855 paper, Maxwell proposed that, if three black-and-white photographs of a scene were taken through red, green, and blue filters, and transparent prints of the images were projected onto a screen using three projectors equipped with similar filters, when superimposed on the screen the result would be perceived by the human eye as a complete reproduction of all the colours in the scene. During an 1861 Royal Institution lecture on colour theory, Maxwell presented the world's first demonstration of colour photography by this principle of three-colour analysis and synthesis. Thomas Sutton, inventor of the single-lens reflex camera, took the picture. He photographed a tartan ribbon three times, through red, green, and blue filters, also making a fourth photograph through a yellow filter, which, according to Maxwell's account, was not used in the demonstration. Because Sutton's photographic plates were insensitive to red and barely sensitive to green, the results of this pioneering experiment were far from perfect. It was remarked in the published account of the lecture that "if the red and green images had been as fully photographed as the blue", it "would have been a truly-coloured image of the riband. By finding photographic materials more sensitive to the less refrangible rays, the representation of the colours of objects might be greatly improved." Researchers in 1961 concluded that the seemingly impossible partial success of the red-filtered exposure was due to ultraviolet light, which is strongly reflected by some red dyes, not entirely blocked by the red filter used, and within the range of sensitivity of the wet collodion process Sutton employed. Kinetic theory and thermodynamics Maxwell also investigated the kinetic theory of gases. Originating with Daniel Bernoulli, this theory was advanced by the successive labours of John Herapath, John James Waterston, James Joule, and particularly Rudolf Clausius, to such an extent as to put its general accuracy beyond a doubt; but it received enormous development from Maxwell, who in this field appeared as an experimenter (on the laws of gaseous friction) as well as a mathematician. Between 1859 and 1866, he developed the theory of the distributions of velocities in particles of a gas, work later generalised by Ludwig Boltzmann. The formula, called the Maxwell–Boltzmann distribution, gives the fraction of gas molecules moving at a specified velocity at any given temperature. In the kinetic theory, temperatures and heat involve only molecular movement. This approach generalised the previously established laws of thermodynamics and explained existing observations and experiments in a better way than had been achieved previously. His work on thermodynamics led him to devise the thought experiment that came to be known as Maxwell's demon, where the second law of thermodynamics is violated by an imaginary being capable of sorting particles by energy. In 1871, he established Maxwell's thermodynamic relations, which are statements of equality among the second derivatives of the thermodynamic potentials with respect to different thermodynamic variables. In 1874, he constructed a plaster thermodynamic visualisation as a way of exploring phase transitions, based on the American scientist Josiah Willard Gibbs's graphical thermodynamics papers. Control theory Maxwell published the paper "On governors" in the Proceedings of the Royal Society, vol. 16 (1867–1868). 
This paper is considered a central paper of the early days of control theory. Here "governors" refers to the governor or the centrifugal governor used to regulate steam engines. Publications Three of Maxwell's contributions to Encyclopædia Britannica appeared in the Ninth Edition (1878): Atom, Attraction, and Ether; and three in the Eleventh Edition (1911): Capillary Action, Diagram, and Faraday, Michael.
Quicksand
Quicksand (also known as sinking sand) is a colloid consisting of fine granular material (such as sand, silt or clay) and water. It forms in saturated loose sand when the sand is suddenly agitated. When water in the sand cannot escape, it creates a liquefied soil that loses strength and cannot support weight. Quicksand can form in standing water or in upward flowing water (as from an artesian spring). In the case of upward-flowing water, seepage forces oppose the force of gravity and suspend the soil particles. The cushioning of water gives quicksand, and other liquefied sediments, a spongy, fluid-like texture. Objects in liquefied sand sink to the level at which the weight of the object is equal to the weight of the displaced soil/water mix, and the submerged object floats due to its buoyancy. Soil liquefaction may occur in partially saturated soil when it is shaken by an earthquake or similar forces. The movement combined with an increase in pore pressure (of groundwater) leads to the loss of particle cohesion, causing buildings or other objects on that surface to sink. Properties Quicksand is a shear-thinning non-Newtonian fluid: when undisturbed, it often appears to be solid ("gel" form), but a less than 1% change in the stress on the quicksand will cause a sudden decrease in its viscosity ("sol" form). After an initial disturbance, such as a person attempting to walk on it, the water and sand in the quicksand separate and dense regions of sand sediment form; it is because of the formation of these high volume fraction regions that the viscosity of the quicksand seems to decrease suddenly. Someone stepping on it will start to sink. To move within the quicksand, a person or object must apply sufficient pressure on the compacted sand to re-introduce enough water to liquefy it. The forces required to do this are quite large: to remove a foot from quicksand at a speed of 1 cm/s would require the same amount of force as that needed to lift a car. It is impossible for a human to sink entirely into quicksand, due to the higher density of the fluid. Quicksand has a density of about 2 grams per cubic centimeter, whereas the density of the human body is only about 1 gram per cubic centimeter. At that level of density, sinking beyond about waist height in quicksand is impossible. Even objects with a higher density than quicksand will float on it if stationary. Aluminium, for example, has a density of about 2.7 grams per cubic centimeter, but a piece of aluminium will float on top of quicksand until motion causes the sand to liquefy. Continued or panicked movement, however, may cause a person to sink further in the quicksand. Since this increasingly impairs movement, it can lead to a situation where other factors such as exposure (e.g., sunstroke, dehydration or hypothermia), drowning in a rising tide or attacks by predatory or otherwise aggressive animals may harm a trapped person. Quicksand may be escaped by slow movement of the legs in order to increase the viscosity of the fluid, and by rotation of the body so as to float in the supine position (lying horizontally with the face and torso facing up). In popular culture Quicksand is a trope of adventure fiction, particularly in film, where it is typically and unrealistically depicted with a suction effect that causes anyone or anything that walks into it to sink until fully submerged and risk drowning. This has led to the common misconception that humans can be completely immersed and drown in quicksand, which is impossible. 
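A quick check of the waist-depth claim follows from Archimedes' principle: a floating body settles until the displaced quicksand weighs as much as the body, so the submerged volume fraction is simply the ratio of the densities quoted above.

$$\frac{V_\text{submerged}}{V_\text{body}} = \frac{\rho_\text{body}}{\rho_\text{quicksand}} \approx \frac{1\ \text{g/cm}^3}{2\ \text{g/cm}^3} = 0.5$$

Roughly half of the body's volume ends up below the surface, which for an upright person corresponds to about waist height.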
According to a 2010 article by Slate, this gimmick had its heyday in the 1960s, when almost 3% of all films showed characters sinking in clay, mud, or sand. See also Bulldust Dry quicksand Grain entrapment Quick condition Sapric Tar pit Thixotropy
Entropy (statistical thermodynamics)
The concept of entropy was first developed by German physicist Rudolf Clausius in the mid-nineteenth century as a thermodynamic property that predicts that certain spontaneous processes are irreversible or impossible. In statistical mechanics, entropy is formulated as a statistical property using probability theory. The statistical entropy perspective was introduced in 1870 by Austrian physicist Ludwig Boltzmann, who established a new field of physics that provided the descriptive linkage between the macroscopic observation of nature and the microscopic view based on the rigorous treatment of large ensembles of microscopic states that constitute thermodynamic systems. Boltzmann's principle Ludwig Boltzmann defined entropy as a measure of the number of possible microscopic states (microstates) of a system in thermodynamic equilibrium, consistent with its macroscopic thermodynamic properties, which constitute the macrostate of the system. A useful illustration is the example of a sample of gas contained in a container. The easily measurable parameters volume, pressure, and temperature of the gas describe its macroscopic condition (state). At a microscopic level, the gas consists of a vast number of freely moving atoms or molecules, which randomly collide with one another and with the walls of the container. The collisions with the walls produce the macroscopic pressure of the gas, which illustrates the connection between microscopic and macroscopic phenomena. A microstate of the system is a description of the positions and momenta of all its particles. The large number of particles of the gas provides an infinite number of possible microstates for the sample, but collectively they exhibit a well-defined average configuration, which is exhibited as the macrostate of the system, to which the contribution of each individual microstate is negligibly small. The ensemble of microstates comprises a statistical distribution of probability for each microstate, and the group of most probable configurations accounts for the macroscopic state. Therefore, the system can be described as a whole by only a few macroscopic parameters, called the thermodynamic variables: the total energy E, volume V, pressure P, temperature T, and so forth. However, this description is relatively simple only when the system is in a state of equilibrium. Equilibrium may be illustrated with a simple example of a drop of food coloring falling into a glass of water. The dye diffuses in a complicated manner, which is difficult to precisely predict. However, after sufficient time has passed, the system reaches a uniform color, a state much easier to describe and explain. Boltzmann formulated a simple relationship between entropy and the number of possible microstates of a system, which is denoted by the symbol Ω. The entropy S is proportional to the natural logarithm of this number:
$$S = k_\mathrm{B} \ln \Omega.$$
The proportionality constant $k_\mathrm{B}$ is one of the fundamental constants of physics and is named the Boltzmann constant in honor of its discoverer. Boltzmann's entropy describes the system when all the accessible microstates are equally likely. It is the configuration corresponding to the maximum of entropy at equilibrium. The randomness or disorder is maximal, and so is the lack of distinction (or information) of each microstate. Entropy is a thermodynamic property just like pressure, volume, or temperature. Therefore, it connects the microscopic and the macroscopic world view. Boltzmann's principle is regarded as the foundation of statistical mechanics. 
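A minimal numerical illustration of Boltzmann's formula, using a toy system of 100 coins whose macrostate is the number of heads (the same toy system reappears later in the article). The choice of 100 coins and the two macrostates are purely illustrative.

```python
# Boltzmann entropy S = k_B ln(Omega) for a toy system of 100 coins, where the
# macrostate is the number of heads and Omega counts the microstates (orderings).
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K

for heads in (100, 50):                 # fully ordered vs. maximally mixed macrostate
    omega = math.comb(100, heads)       # number of microstates in this macrostate
    S = k_B * math.log(omega)
    print(f"{heads} heads: Omega = {omega:.3e}, S = {S:.3e} J/K")
```

The all-heads macrostate has a single microstate and therefore zero entropy, while the 50-heads macrostate has about 10^29 microstates and the maximum entropy.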
Gibbs entropy formula The macroscopic state of a system is characterized by a distribution on the microstates. The entropy of this distribution is given by the Gibbs entropy formula, named after J. Willard Gibbs. For a classical system (i.e., a collection of classical particles) with a discrete set of microstates, if $E_i$ is the energy of microstate $i$, and $p_i$ is the probability that it occurs during the system's fluctuations, then the entropy of the system is
$$S = -k_\mathrm{B} \sum_i p_i \ln p_i.$$
Entropy changes for systems in a canonical state A system with a well-defined temperature, i.e., one in thermal equilibrium with a thermal reservoir, has a probability of being in a microstate $i$ given by the Boltzmann distribution,
$$p_i = \frac{e^{-E_i/k_\mathrm{B}T}}{Z}, \qquad Z = \sum_j e^{-E_j/k_\mathrm{B}T}.$$
Changes in the entropy caused by changes in the external constraints are then given by:
$$dS = -k_\mathrm{B} \sum_i \ln p_i \, dp_i = \frac{1}{T}\sum_i E_i \, dp_i = \frac{1}{T}\sum_i \big[ d(E_i p_i) - p_i \, dE_i \big],$$
where we have twice used the conservation of probability, $\sum_i dp_i = 0$. Now, $\sum_i d(E_i p_i)$ is the expectation value of the change in the total energy of the system. If the changes are sufficiently slow, so that the system remains in the same microscopic state, but the state slowly (and reversibly) changes, then $\sum_i p_i \, dE_i$ is the expectation value of the work done on the system through this reversible process, $dw_\mathrm{rev}$. But from the first law of thermodynamics, $dE = dw_\mathrm{rev} + dq_\mathrm{rev}$. Therefore,
$$dS = \frac{dq_\mathrm{rev}}{T}.$$
In the thermodynamic limit, the fluctuation of the macroscopic quantities from their average values becomes negligible; so this reproduces the definition of entropy from classical thermodynamics, given above. The quantity $k_\mathrm{B}$ is the Boltzmann constant. The remaining factor of the equation, the entire summation, is dimensionless, since the value $p_i$ is a probability and therefore dimensionless, and $\ln$ is the natural logarithm. Hence the SI derived units on both sides of the equation are the same as those of heat capacity, joules per kelvin ($\mathrm{J\,K^{-1}}$). This definition remains meaningful even when the system is far away from equilibrium. Other definitions assume that the system is in thermal equilibrium, either as an isolated system, or as a system in exchange with its surroundings. The set of microstates (with probability distribution) on which the sum is done is called a statistical ensemble. Each type of statistical ensemble (micro-canonical, canonical, grand-canonical, etc.) describes a different configuration of the system's exchanges with the outside, varying from a completely isolated system to a system that can exchange one or more quantities with a reservoir, like energy, volume or molecules. In every ensemble, the equilibrium configuration of the system is dictated by the maximization of the entropy of the union of the system and its reservoir, according to the second law of thermodynamics (see the statistical mechanics article). Neglecting correlations (or, more generally, statistical dependencies) between the states of individual particles will lead to an incorrect probability distribution on the microstates and hence to an overestimate of the entropy. Such correlations occur in any system with nontrivially interacting particles, that is, in all systems more complex than an ideal gas. This S is almost universally called simply the entropy. It can also be called the statistical entropy or the thermodynamic entropy without changing the meaning. Note that the above expression of the statistical entropy is a discretized version of Shannon entropy. The von Neumann entropy formula is an extension of the Gibbs entropy formula to the quantum mechanical case. 
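A small numerical sketch of the Gibbs formula and the canonical probabilities above, for a hypothetical two-level system; the energy gap and temperature are arbitrary example values, not data from the article.

```python
# Sketch: canonical (Boltzmann) probabilities and Gibbs entropy for a
# hypothetical two-level system. Energy gap and temperature are illustrative.
import math

k_B = 1.380649e-23          # Boltzmann constant, J/K
energies = [0.0, 2.0e-21]   # J, assumed two-level spectrum
T = 300.0                   # K

boltzmann_factors = [math.exp(-E / (k_B * T)) for E in energies]
Z = sum(boltzmann_factors)                  # canonical partition function
p = [w / Z for w in boltzmann_factors]      # Boltzmann distribution

S = -k_B * sum(pi * math.log(pi) for pi in p)   # Gibbs entropy, J/K
print("probabilities:", p)
print(f"entropy S = {S:.3e} J/K (upper bound k_B ln 2 = {k_B*math.log(2):.3e} J/K)")
```

The entropy approaches the upper bound $k_\mathrm{B}\ln 2$ as the temperature grows and both states become equally likely, and it tends to zero as the temperature goes to zero and only the ground state remains occupied.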
It has been shown that the Gibbs entropy is equal to the classical "heat engine" entropy characterized by $dS = \dfrac{dq_\mathrm{rev}}{T}$, and the generalized Boltzmann distribution is a sufficient and necessary condition for this equivalence. Furthermore, the Gibbs entropy is the only entropy that is equivalent to the classical "heat engine" entropy under a small set of natural postulates. Ensembles The various ensembles used in statistical thermodynamics are linked to the entropy by the following relations:
$$S = k_\mathrm{B} \ln \Omega_\mathrm{mic}, \qquad S = k_\mathrm{B}\left(\ln Z + \beta \langle E \rangle\right), \qquad S = k_\mathrm{B}\left(\ln \mathcal{Z} + \beta \langle E \rangle - \beta \mu \langle N \rangle\right),$$
where $\Omega_\mathrm{mic}$ is the microcanonical partition function, $Z$ is the canonical partition function and $\mathcal{Z}$ is the grand canonical partition function (with $\beta = 1/k_\mathrm{B}T$ and $\mu$ the chemical potential). Order through chaos and the second law of thermodynamics We can think of Ω as a measure of our lack of knowledge about a system. To illustrate this idea, consider a set of 100 coins, each of which is either heads up or tails up. In this example, let us suppose that the macrostates are specified by the total number of heads and tails, while the microstates are specified by the facings of each individual coin (i.e., the exact order in which heads and tails occur). For the macrostates of 100 heads or 100 tails, there is exactly one possible configuration, so our knowledge of the system is complete. At the opposite extreme, the macrostate which gives us the least knowledge about the system consists of 50 heads and 50 tails in any order, for which there are $\binom{100}{50} \approx 10^{29}$ possible microstates. Even when a system is entirely isolated from external influences, its microstate is constantly changing. For instance, the particles in a gas are constantly moving, and thus occupy a different position at each moment of time; their momenta are also constantly changing as they collide with each other or with the container walls. Suppose we prepare the system in an artificially highly ordered equilibrium state. For instance, imagine dividing a container with a partition and placing a gas on one side of the partition, with a vacuum on the other side. If we remove the partition and watch the subsequent behavior of the gas, we will find that its microstate evolves according to some chaotic and unpredictable pattern, and that on average these microstates will correspond to a more disordered macrostate than before. It is possible, but extremely unlikely, for the gas molecules to bounce off one another in such a way that they remain in one half of the container. It is overwhelmingly probable for the gas to spread out to fill the container evenly, which is the new equilibrium macrostate of the system. This is an example illustrating the second law of thermodynamics: the total entropy of any isolated thermodynamic system tends to increase over time, approaching a maximum value. Since its discovery, this idea has been the focus of a great deal of thought, some of it confused. A chief point of confusion is the fact that the Second Law applies only to isolated systems. For example, the Earth is not an isolated system because it is constantly receiving energy in the form of sunlight. In contrast, the universe may be considered an isolated system, so that its total entropy is constantly increasing. Counting of microstates In classical statistical mechanics, the number of microstates is actually uncountably infinite, since the properties of classical systems are continuous. For example, a microstate of a classical ideal gas is specified by the positions and momenta of all the atoms, which range continuously over the real numbers. 
If we want to define Ω, we have to come up with a method of grouping the microstates together to obtain a countable set. This procedure is known as coarse graining. In the case of the ideal gas, we count two states of an atom as the "same" state if their positions and momenta are within δx and δp of each other. Since the values of δx and δp can be chosen arbitrarily, the entropy is not uniquely defined. It is defined only up to an additive constant. (As we will see, the thermodynamic definition of entropy is also defined only up to a constant.) To avoid coarse graining one can take the entropy as defined by the H-theorem. However, this ambiguity can be resolved with quantum mechanics. The quantum state of a system can be expressed as a superposition of "basis" states, which can be chosen to be energy eigenstates (i.e. eigenstates of the quantum Hamiltonian). Usually, the quantum states are discrete, even though there may be an infinite number of them. For a system with some specified energy E, one takes Ω to be the number of energy eigenstates within a macroscopically small energy range between $E$ and $E + \delta E$. In the thermodynamic limit, the specific entropy becomes independent of the choice of $\delta E$. An important result, known as Nernst's theorem or the third law of thermodynamics, states that the entropy of a system at zero absolute temperature is a well-defined constant. This is because a system at zero temperature exists in its lowest-energy state, or ground state, so that its entropy is determined by the degeneracy of the ground state. Many systems, such as crystal lattices, have a unique ground state, and (since $\ln 1 = 0$) this means that they have zero entropy at absolute zero. Other systems have more than one state with the same, lowest energy, and have a non-vanishing "zero-point entropy". For instance, ordinary ice has a non-zero zero-point entropy, because its underlying crystal structure possesses multiple configurations with the same energy (a phenomenon known as geometrical frustration). The third law of thermodynamics states that the entropy of a perfect crystal at absolute zero is zero. This means that nearly all molecular motion should cease. The oscillator equation for predicting quantized vibrational levels shows that even when the vibrational quantum number is 0, the molecule still has vibrational energy:
$$E_v = h\nu\left(v + \tfrac{1}{2}\right),$$
where $h$ is the Planck constant, $\nu$ is the characteristic frequency of the vibration, and $v$ is the vibrational quantum number. Even when $v = 0$, $E_v$ does not equal 0: this residual $E_0 = \tfrac{1}{2}h\nu$ is the zero-point energy, in adherence to the Heisenberg uncertainty principle. See also Boltzmann constant Configuration entropy Conformational entropy Enthalpy Entropy Entropy (classical thermodynamics) Entropy (energy dispersal) Entropy of mixing Entropy (order and disorder) Entropy (information theory) History of entropy Information theory Thermodynamic free energy Tsallis entropy
Jefimenko's equations
In electromagnetism, Jefimenko's equations (named after Oleg D. Jefimenko) give the electric field and magnetic field due to a distribution of electric charges and electric current in space, taking into account the propagation delay (retarded time) of the fields due to the finite speed of light and relativistic effects. Therefore, they can be used for moving charges and currents. They are the particular solutions to Maxwell's equations for any arbitrary distribution of charges and currents. Equations Electric and magnetic fields Jefimenko's equations give the electric field E and magnetic field B produced by an arbitrary charge or current distribution, of charge density ρ and current density J:
$$\mathbf{E}(\mathbf{r},t) = \frac{1}{4\pi\varepsilon_0}\int\left[\frac{\rho(\mathbf{r}',t_r)}{R^2}\,\hat{\mathbf{R}} + \frac{1}{cR}\frac{\partial\rho(\mathbf{r}',t_r)}{\partial t}\,\hat{\mathbf{R}} - \frac{1}{c^2 R}\frac{\partial\mathbf{J}(\mathbf{r}',t_r)}{\partial t}\right]\mathrm{d}^3\mathbf{r}',$$
$$\mathbf{B}(\mathbf{r},t) = \frac{\mu_0}{4\pi}\int\left[\frac{\mathbf{J}(\mathbf{r}',t_r)}{R^2}\times\hat{\mathbf{R}} + \frac{1}{cR}\frac{\partial\mathbf{J}(\mathbf{r}',t_r)}{\partial t}\times\hat{\mathbf{R}}\right]\mathrm{d}^3\mathbf{r}',$$
where $\mathbf{r}'$ is a point in the charge distribution, $\mathbf{r}$ is a point in space, $\mathbf{R} = \mathbf{r}-\mathbf{r}'$ (with $R = |\mathbf{R}|$ and $\hat{\mathbf{R}} = \mathbf{R}/R$), and $t_r = t - R/c$ is the retarded time. There are similar expressions for D and H. These equations are the time-dependent generalization of Coulomb's law and the Biot–Savart law to electrodynamics, which were originally true only for electrostatic and magnetostatic fields, and steady currents. Origin from retarded potentials Jefimenko's equations can be found from the retarded potentials φ and A,
$$\varphi(\mathbf{r},t) = \frac{1}{4\pi\varepsilon_0}\int\frac{\rho(\mathbf{r}',t_r)}{R}\,\mathrm{d}^3\mathbf{r}', \qquad \mathbf{A}(\mathbf{r},t) = \frac{\mu_0}{4\pi}\int\frac{\mathbf{J}(\mathbf{r}',t_r)}{R}\,\mathrm{d}^3\mathbf{r}',$$
which are the solutions to Maxwell's equations in the potential formulation, then substituting in the definitions of the electromagnetic potentials themselves,
$$\mathbf{E} = -\nabla\varphi - \frac{\partial\mathbf{A}}{\partial t}, \qquad \mathbf{B} = \nabla\times\mathbf{A},$$
and using the relation $c^2 = 1/(\mu_0\varepsilon_0)$; this replaces the potentials φ and A by the fields E and B. Heaviside–Feynman formula The Heaviside–Feynman formula, also known as the Jefimenko–Feynman formula, can be seen as the point-like electric charge version of Jefimenko's equations. Actually, it can be (non-trivially) deduced from them using Dirac delta functions, or using the Liénard–Wiechert potentials. It is mostly known from The Feynman Lectures on Physics, where it was used to introduce and describe the origin of electromagnetic radiation. The formula provides a natural generalization of Coulomb's law for cases where the source charge is moving:
$$\mathbf{E} = -\frac{q}{4\pi\varepsilon_0}\left[\frac{\hat{\mathbf{e}}_{r'}}{r'^2} + \frac{r'}{c}\frac{\mathrm{d}}{\mathrm{d}t}\!\left(\frac{\hat{\mathbf{e}}_{r'}}{r'^2}\right) + \frac{1}{c^2}\frac{\mathrm{d}^2\hat{\mathbf{e}}_{r'}}{\mathrm{d}t^2}\right], \qquad \mathbf{B} = -\hat{\mathbf{e}}_{r'}\times\frac{\mathbf{E}}{c}.$$
Here, $\mathbf{E}$ and $\mathbf{B}$ are the electric and magnetic fields respectively, $q$ is the electric charge, $\varepsilon_0$ is the vacuum permittivity (electric field constant) and $c$ is the speed of light. The vector $\hat{\mathbf{e}}_{r'}$ is a unit vector pointing from the observer to the charge and $r'$ is the distance between observer and charge. Since the electromagnetic field propagates at the speed of light, both these quantities are evaluated at the retarded time $t - r'/c$. The first term in the formula for $\mathbf{E}$ represents Coulomb's law for the static electric field. The second term is the time derivative of the first Coulombic term multiplied by $r'/c$, which is the propagation time of the electric field. Heuristically, this can be regarded as nature "attempting" to forecast what the present field would be by linear extrapolation to the present time. The last term, proportional to the second derivative of the unit direction vector $\hat{\mathbf{e}}_{r'}$, is sensitive to charge motion perpendicular to the line of sight. It can be shown that the electric field generated by this term is proportional to $a_\perp/(c^2 r')$, where $a_\perp$ is the transverse acceleration at the retarded time. As it decreases only as $1/r'$ with distance, compared to the $1/r'^2$ of the standard Coulombic behavior, this term is responsible for the long-range electromagnetic radiation caused by the accelerating charge. The Heaviside–Feynman formula can be derived from Maxwell's equations using the technique of the retarded potential. It allows, for example, the derivation of the Larmor formula for overall radiation power of the accelerating charge. 
Discussion There is a widespread interpretation of Maxwell's equations indicating that spatially varying electric and magnetic fields can cause each other to change in time, thus giving rise to a propagating electromagnetic wave. However, Jefimenko's equations show an alternative point of view. Jefimenko says, "...neither Maxwell's equations nor their solutions indicate an existence of causal links between electric and magnetic fields. Therefore, we must conclude that an electromagnetic field is a dual entity always having an electric and a magnetic component simultaneously created by their common sources: time-variable electric charges and currents." As pointed out by McDonald, Jefimenko's equations seem to appear first in 1962 in the second edition of Panofsky and Phillips's classic textbook. David Griffiths, however, clarifies that "the earliest explicit statement of which I am aware was by Oleg Jefimenko, in 1966" and characterizes the equations in Panofsky and Phillips's textbook as only "closely related expressions". According to Andrew Zangwill, the equations analogous to Jefimenko's but in the Fourier frequency domain were first derived by George Adolphus Schott in his treatise Electromagnetic Radiation (University Press, Cambridge, 1912). An essential feature of these equations is easily observed: the right-hand sides involve the "retarded" time, which reflects the "causality" of the expressions. In other words, the left side of each equation is actually "caused" by the right side, unlike the normal differential expressions for Maxwell's equations where both sides take place simultaneously. In the typical expressions for Maxwell's equations there is no doubt that both sides are equal to each other, but as Jefimenko notes, "... since each of these equations connects quantities simultaneous in time, none of these equations can represent a causal relation." See also Liénard–Wiechert potential
Delphi method
The Delphi method or Delphi technique (also known as Estimate-Talk-Estimate or ETE) is a structured communication technique or method, originally developed as a systematic, interactive forecasting method that relies on a panel of experts. Delphi has been widely used for business forecasting and has certain advantages over another structured forecasting approach, prediction markets. Delphi can also be used to help reach expert consensus and develop professional guidelines. It is used for such purposes in many health-related fields, including clinical medicine, public health, and research. Delphi is based on the principle that forecasts (or decisions) from a structured group of individuals are more accurate than those from unstructured groups. The experts answer questionnaires in two or more rounds. After each round, a facilitator or change agent provides an anonymised summary of the experts' forecasts from the previous round as well as the reasons they provided for their judgments. Thus, experts are encouraged to revise their earlier answers in light of the replies of other members of their panel. It is believed that during this process the range of the answers will decrease and the group will converge towards the "correct" answer. Finally, the process is stopped after a predefined stopping criterion (e.g., number of rounds, achievement of consensus, stability of results), and the mean or median scores of the final rounds determine the results. Special attention has to be paid to the formulation of the Delphi theses and the definition and selection of the experts in order to avoid methodological weaknesses that severely threaten the validity and reliability of the results. History The name Delphi derives from the Oracle of Delphi, although the authors of the method were unhappy with the oracular connotation of the name, "smacking a little of the occult". The Delphi method assumes that group judgments are more valid than individual judgments. The Delphi method was developed at the beginning of the Cold War to forecast the impact of technology on warfare. In 1944, General Henry H. Arnold ordered the creation of a report for the U.S. Army Air Corps on the future technological capabilities that might be used by the military. Different approaches were tried, but the shortcomings of traditional forecasting methods, such as theoretical approaches, quantitative models or trend extrapolation, quickly became apparent in areas where precise scientific laws had not yet been established. To combat these shortcomings, the Delphi method was developed at Project RAND during the 1950s and 1960s by Olaf Helmer, Norman Dalkey, and Nicholas Rescher. It has been used ever since, together with various modifications and reformulations, such as the Imen-Delphi procedure. Experts were asked to give their opinion on the probability, frequency, and intensity of possible enemy attacks. Other experts could anonymously give feedback. This process was repeated several times until a consensus emerged. In 2021, a cross-disciplinary study by Beiderbeck et al. focused on new directions and advancements of the Delphi method, including Real-time Delphi formats. The authors provide a methodological toolbox for designing Delphi surveys, including, among others, sentiment analyses drawn from the field of psychology. 
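The estimate-feedback-revise cycle described above can be sketched as a small simulation. The numbers, the revision rule (experts move partway toward the group median) and the stopping threshold on the interquartile range are all illustrative choices, not part of any standard Delphi protocol.

```python
# Sketch of a Delphi aggregation loop: anonymous estimates, summary feedback,
# revision, and a simple consensus stopping rule (interquartile range below a
# threshold). All parameters below are illustrative assumptions.
import statistics

def summarize(estimates):
    q = statistics.quantiles(estimates, n=4)   # quartiles of the panel's answers
    return {"median": statistics.median(estimates), "iqr": q[2] - q[0]}

def delphi(initial_estimates, revise, max_rounds=5, iqr_threshold=1.0):
    estimates = list(initial_estimates)
    for round_no in range(1, max_rounds + 1):
        feedback = summarize(estimates)        # anonymised group feedback
        print(f"round {round_no}: {feedback}")
        if feedback["iqr"] <= iqr_threshold:   # consensus criterion met
            break
        estimates = [revise(e, feedback["median"]) for e in estimates]
    return summarize(estimates)["median"]

# Toy panel: five experts who partially adjust toward the group median each round.
panel = [4.0, 6.5, 9.0, 12.0, 20.0]
result = delphi(panel, revise=lambda e, m: e + 0.5 * (m - e))
print("final group estimate:", result)
```

Real studies differ mainly in how the questionnaire is structured, which summary statistics are fed back, and which consensus or stability measure ends the rounds.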
Key characteristics The following key characteristics of the Delphi method help the participants to focus on the issues at hand and separate Delphi from other methodologies: in this technique a panel of experts is drawn from both inside and outside the organisation. The panel consists of experts having knowledge of the area requiring decision making. Each expert is asked to make anonymous predictions. Anonymity of the participants Usually all participants remain anonymous. Their identity is not revealed, even after the completion of the final report. This prevents the authority, personality, or reputation of some participants from dominating others in the process. Arguably, it also frees participants (to some extent) from their personal biases, minimizes the "bandwagon effect" or "halo effect", allows free expression of opinions, encourages open critique, and facilitates admission of errors when revising earlier judgments. Structuring of information flow The initial contributions from the experts are collected in the form of answers to questionnaires and their comments to these answers. The panel director controls the interactions among the participants by processing the information and filtering out irrelevant content. This avoids the negative effects of face-to-face panel discussions and solves the usual problems of group dynamics. Regular feedback The Delphi method allows participants to comment on the responses of others, the progress of the panel as a whole, and to revise their own forecasts and opinions in real time. Role of the facilitator The person coordinating the Delphi method is usually known as a facilitator or Leader, and facilitates the responses of their panel of experts, who are selected for a reason, usually that they hold knowledge on an opinion or view. The facilitator sends out questionnaires, surveys etc. and if the panel of experts accept, they follow instructions and present their views. Responses are collected and analyzed, then common and conflicting viewpoints are identified. If consensus is not reached, the process continues through thesis and antithesis, to gradually work towards synthesis, and building consensus. During the past decades, facilitators have used many different measures and thresholds to measure the degree of consensus or dissent. A comprehensive literature review and summary is compiled in an article by von der Gracht. Applications Use in forecasting First applications of the Delphi method were in the field of science and technology forecasting. The objective of the method was to combine expert opinions on likelihood and expected development time, of the particular technology, in a single indicator. One of the first such reports, prepared in 1964 by Gordon and Helmer, assessed the direction of long-term trends in science and technology development, covering such topics as scientific breakthroughs, population control, automation, space progress, war prevention and weapon systems. Other forecasts of technology were dealing with vehicle-highway systems, industrial robots, intelligent internet, broadband connections, and technology in education. Later the Delphi method was applied in other places, especially those related to public policy issues, such as economic trends, health and education. It was also applied successfully and with high accuracy in business forecasting. 
For example, in one case reported by Basu and Schroeder (1977), the Delphi method predicted the sales of a new product during the first two years with an error of only 3–4% compared with actual sales. Quantitative methods produced errors of 10–15%, and traditional unstructured forecast methods had errors of about 20%. (This is only one example; the overall accuracy of the technique is mixed.) The Delphi method has also been used as a tool to implement multi-stakeholder approaches for participative policy-making in developing countries. The governments of Latin America and the Caribbean have successfully used the Delphi method as an open-ended public-private sector approach to identify the most urgent challenges for their regional ICT-for-development eLAC Action Plans. As a result, governments have widely acknowledged the value of collective intelligence from civil society, academic and private sector participants of the Delphi, especially in a field of rapid change, such as technology policies. Use in patent participation identification In the early 1980s Jackie Awerman of Jackie Awerman Associates, Inc. designed a modified Delphi method for identifying the roles of various contributors to the creation of a patent-eligible product. (Epsilon Corporation, Chemical Vapor Deposition Reactor) The results were then used by patent attorneys to determine bonus distribution percentage to the general satisfaction of all team members. Use in policy-making From the 1970s, the use of the Delphi technique in public policy-making introduced a number of methodological innovations. In particular: the need to examine several types of items (not only forecasting items but, typically, issue items, goal items, and option items) leads to introducing different evaluation scales which are not used in the standard Delphi. These often include desirability, feasibility (technical and political) and probability, which the analysts can use to outline different scenarios: the desired scenario (from desirability), the potential scenario (from feasibility) and the expected scenario (from probability); the complexity of issues posed in public policy-making tends to increase the weight given to panelists' arguments, such as soliciting pros and cons for each item along with new items for panel consideration; likewise, methods measuring panel evaluations tend to become more sophisticated, for example using multi-dimensional scaling. Further innovations come from the use of computer-based (and later web-based) Delphi conferences. According to Turoff and Hiltz, in computer-based Delphis: the iteration structure used in the paper Delphis, which is divided into three or more discrete rounds, can be replaced by a process of continuous (roundless) interaction, enabling panelists to change their evaluations at any time; the statistical group response can be updated in real-time, and shown whenever a panelist provides a new evaluation. According to Bolognini, web-based Delphis offer two further possibilities, relevant in the context of interactive policy-making and e-democracy. These are: the involvement of a large number of participants, and the use of two or more panels representing different groups (such as policy-makers, experts, citizens), to which the administrator can assign tasks reflecting their diverse roles and expertise, and which can be made to interact within ad hoc communication structures. 
For example, the policy community members (policy-makers and experts) may interact as part of the main conference panel, while they receive inputs from a virtual community (citizens, associations etc.) involved in a side conference. These web-based variable communication structures, which he calls Hyperdelphi (HD), are designed to make Delphi conferences "more fluid and adapted to the hypertextual and interactive nature of digital communication". One successful example of a (partially) web-based policy Delphi is the five-round Delphi exercise (with 1,454 contributions) for the creation of the eLAC Action Plans in Latin America. It is believed to be the most extensive online participatory policy-making foresight exercise in the history of intergovernmental processes in the developing world at this time. In addition to the specific policy guidance provided, the authors list the following lessons learned: "(1) the potential of Policy Delphi methods to introduce transparency and accountability into public decision-making, especially in developing countries; (2) the utility of foresight exercises to foster multi-agency networking in the development community; (3) the usefulness of embedding foresight exercises into established mechanisms of representative democracy and international multilateralism, such as the United Nations; (4) the potential of online tools to facilitate participation in resource-scarce developing countries; and (5) the resource-efficiency stemming from the scale of international foresight exercises, and therefore its adequacy for resource-scarce regions." Use in health settings The Delphi technique is widely used to help reach expert consensus in health-related settings. For example, it is frequently employed in the development of medical guidelines and protocols. Use in public health Some examples of its application in public health contexts include non-alcoholic fatty liver disease, iodine deficiency disorders, building responsive health systems for communities affected by migration, the role of health systems in advancing well-being for those living with HIV, and in creating a 2022 paper on recommendations to end the COVID-19 pandemic. Use in reporting guidelines Use of the Delphi method in the development of guidelines for the reporting of health research is recommended, especially for experienced developers. Since this advice was made in 2010, two systematic reviews have found that fewer than 30% of published reporting guidelines incorporated Delphi methods into the development process. Online Delphi systems A number of Delphi forecasts are conducted using web sites that allow the process to be conducted in real-time. For instance, the TechCast Project uses a panel of 100 experts worldwide to forecast breakthroughs in all fields of science and technology. Another example is the Horizon Project, where educational futurists collaborate online using the Delphi method to come up with the technological advancements to look out for in education for the next few years. Variations Traditionally the Delphi method has aimed at a consensus of the most probable future by iteration. Other versions, such as the Policy Delphi, offer decision support methods aiming at structuring and discussing the diverse views of the preferred future. In Europe, more recent web-based experiments have used the Delphi method as a communication technique for interactive decision-making and e-democracy. 
The Argument Delphi, developed by Osmo Kuusi, focuses on ongoing discussion and finding relevant arguments rather than focusing on the output. The Disaggregative Policy Delphi, developed by Petri Tapio, uses cluster analysis as a systematic tool to construct various scenarios of the future in the latest Delphi round. The respondent's views on the probable and the preferable future are dealt with as separate cases. The computerization of the Argument Delphi is relatively difficult because of several problems, such as argument resolution, argument aggregation and argument evaluation; a computerized Argument Delphi developed by Sadi Evren Seker proposes solutions to such problems. Accuracy Today the Delphi method is a widely accepted forecasting tool and has been used successfully for thousands of studies in areas varying from technology forecasting to drug abuse. Overall the track record of the Delphi method is mixed. There have been many cases when the method produced poor results. Still, some authors attribute this to poor application of the method and not to the weaknesses of the method itself. The RAND Methodological Guidance for Conducting and Critically Appraising Delphi Panels is a manual for conducting Delphi research that offers guidance and an appraisal tool. It describes best practices that help to avoid, or mitigate, potential drawbacks of Delphi method research, and it helps readers understand the confidence that can be given to study results. It must also be realized that in areas such as science and technology forecasting, the degree of uncertainty is so great that exact and always correct predictions are impossible, so a high degree of error is to be expected. An important challenge for the method is ensuring sufficiently knowledgeable panelists. If panelists are misinformed about a topic, the use of Delphi may only add confidence to their ignorance. One of the initial problems of the method was its inability to make complex forecasts with multiple factors. Potential future outcomes were usually considered as if they had no effect on each other. Later on, several extensions to the Delphi method were developed to address this problem, such as cross impact analysis, which takes into consideration the possibility that the occurrence of one event may change the probabilities of other events covered in the survey. Still the Delphi method can be used most successfully in forecasting single scalar indicators. Delphi vs. prediction markets Delphi has characteristics similar to prediction markets as both are structured approaches that aggregate diverse opinions from groups. Yet, there are differences that may be decisive for their relative applicability for different problems. Some advantages of prediction markets derive from the possibility to provide incentives for participation. They can motivate people to participate over a long period of time and to reveal their true beliefs. They aggregate information automatically and instantly incorporate new information in the forecast. Participants do not have to be selected and recruited manually by a facilitator. They themselves decide whether to participate if they think their private information is not yet incorporated in the forecast. Delphi seems to have these advantages over prediction markets: participants reveal their reasoning; it is easier to maintain confidentiality; and forecasts are potentially quicker if experts are readily available. 
Delphi is applicable in situations where the bets involved might affect the value of the currency used in bets (e.g. a bet on the collapse of the dollar made in dollars might have distorted odds). More recent research has also focused on combining both approaches, the Delphi technique and prediction markets. More specifically, in a research study at Deutsche Börse, elements of the Delphi method were integrated into a prediction market. See also Computer supported brainstorming DARPA's Policy Analysis Market Horizon scanning Nominal group technique Planning poker Reference class forecasting Wideband delphi The Wisdom of Crowds References Further reading This article provides a detailed description of the use of modified Delphi for qualitative, participatory action research. A cross-validation study replicating one completed in the Netherlands and Belgium, and exploring US experts' views on the diagnosis and treatment of older adults with personality disorders. External links RAND publications on the Delphi Method Downloadable documents from RAND concerning applications of the Delphi Technique. Estimation methods Forecasting Systems thinking Futures techniques
0.770373
0.996744
0.767865
Pendulum (mechanics)
A pendulum is a body suspended from a fixed support such that it freely swings back and forth under the influence of gravity. When a pendulum is displaced sideways from its resting, equilibrium position, it is subject to a restoring force due to gravity that will accelerate it back towards the equilibrium position. When released, the restoring force acting on the pendulum's mass causes it to oscillate about the equilibrium position, swinging it back and forth. The mathematics of pendulums are in general quite complicated. Simplifying assumptions can be made, which in the case of a simple pendulum allow the equations of motion to be solved analytically for small-angle oscillations. Simple gravity pendulum A simple gravity pendulum is an idealized mathematical model of a real pendulum. It is a weight (or bob) on the end of a massless cord suspended from a pivot, without friction. Since in the model there is no frictional energy loss, when given an initial displacement it swings back and forth with a constant amplitude. The model is based on the assumptions: The rod or cord is massless, inextensible and always remains under tension. The bob is a point mass. The motion occurs in two dimensions. The motion does not lose energy to external friction or air resistance. The gravitational field is uniform. The support is immobile. The differential equation which governs the motion of a simple pendulum is where is the magnitude of the gravitational field, is the length of the rod or cord, and is the angle from the vertical to the pendulum. Small-angle approximation The differential equation given above is not easily solved, and there is no solution that can be written in terms of elementary functions. However, adding a restriction to the size of the oscillation's amplitude gives a form whose solution can be easily obtained. If it is assumed that the angle is much less than 1 radian (often cited as less than 0.1 radians, about 6°), or then substituting for into using the small-angle approximation, yields the equation for a harmonic oscillator, The error due to the approximation is of order (from the Taylor expansion for ). Let the starting angle be . If it is assumed that the pendulum is released with zero angular velocity, the solution becomes The motion is simple harmonic motion where is the amplitude of the oscillation (that is, the maximum angle between the rod of the pendulum and the vertical). The corresponding approximate period of the motion is then which is known as Christiaan Huygens's law for the period. Note that under the small-angle approximation, the period is independent of the amplitude ; this is the property of isochronism that Galileo discovered. Rule of thumb for pendulum length gives If SI units are used (i.e. measure in metres and seconds), and assuming the measurement is taking place on the Earth's surface, then , and (0.994 is the approximation to 3 decimal places). Therefore, relatively reasonable approximations for the length and period are: where is the number of seconds between two beats (one beat for each side of the swing), and is measured in metres. 
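For reference, the small-angle period quoted above is T = 2*pi*sqrt(L/g) (Huygens's law), and the rule of thumb follows from solving it for the length: L = g*T^2/(4*pi^2), roughly T^2/4 in SI units, since g/pi^2 is about 0.994. A short numerical sketch of both relations; the lengths and periods chosen are arbitrary examples.

```python
import math

g = 9.81   # m/s^2, standard surface gravity

def small_angle_period(length):
    """Huygens's small-angle period T = 2*pi*sqrt(L/g)."""
    return 2 * math.pi * math.sqrt(length / g)

def rule_of_thumb_length(period):
    """Length from L = g*T^2/(4*pi^2), i.e. roughly T^2/4 in SI units."""
    return g * period ** 2 / (4 * math.pi ** 2)

print(f"1 m pendulum: T = {small_angle_period(1.0):.3f} s")              # about 2.006 s
print(f"2 s (seconds) pendulum: L = {rule_of_thumb_length(2.0):.3f} m")  # about 0.994 m
```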
Arbitrary-amplitude period For amplitudes beyond the small angle approximation, one can compute the exact period by first inverting the equation for the angular velocity obtained from the energy method, and then integrating over one complete cycle, or twice the half-cycle, or four times the quarter-cycle, which leads to Note that this integral diverges as approaches the vertical so that a pendulum with just the right energy to go vertical will never actually get there. (Conversely, a pendulum close to its maximum can take an arbitrarily long time to fall down.) This integral can be rewritten in terms of elliptic integrals as where is the incomplete elliptic integral of the first kind defined by Or more concisely by the substitution expressing in terms of , Here is the complete elliptic integral of the first kind defined by For comparison of the approximation to the full solution, consider the period of a pendulum of length 1 m on Earth ( = ) at an initial angle of 10 degrees is The linear approximation gives The difference between the two values, less than 0.2%, is much less than that caused by the variation of with geographical location. From here there are many ways to proceed to calculate the elliptic integral. Legendre polynomial solution for the elliptic integral Given and the Legendre polynomial solution for the elliptic integral: where denotes the double factorial, an exact solution to the period of a simple pendulum is: Figure 4 shows the relative errors using the power series. is the linear approximation, and to include respectively the terms up to the 2nd to the 10th powers. Power series solution for the elliptic integral Another formulation of the above solution can be found if the following Maclaurin series: is used in the Legendre polynomial solution above. The resulting power series is: more fractions available in the On-Line Encyclopedia of Integer Sequences with having the numerators and having the denominators. Arithmetic-geometric mean solution for elliptic integral Given and the arithmetic–geometric mean solution of the elliptic integral: where is the arithmetic-geometric mean of and . This yields an alternative and faster-converging formula for the period: The first iteration of this algorithm gives This approximation has a relative error of less than 1% for angles up to 96.11 degrees. Since the expression can be written more concisely as The second order expansion of reduces to A second iteration of this algorithm gives This second approximation has a relative error of less than 1% for angles up to 163.10 degrees. Approximate formulae for the nonlinear pendulum period Though the exact period can be determined, for any finite amplitude rad, by evaluating the corresponding complete elliptic integral , where , this is often avoided in applications because it is not possible to express this integral in a closed form in terms of elementary functions. This has made way for research on simple approximate formulae for the increase of the pendulum period with amplitude (useful in introductory physics labs, classical mechanics, electromagnetism, acoustics, electronics, superconductivity, etc.). The approximate formulae found by different authors can be classified as follows: ‘Not so large-angle’ formulae, i.e. those yielding good estimates for amplitudes below rad (a natural limit for a bob on the end of a flexible string), though the deviation with respect to the exact period increases monotonically with amplitude, being unsuitable for amplitudes near to rad. 
One of the simplest formulae found in literature is the following one by Lima (2006): , where . ‘Very large-angle’ formulae, i.e. those which approximate the exact period asymptotically for amplitudes near to rad, with an error that increases monotonically for smaller amplitudes (i.e., unsuitable for small amplitudes). One of the better such formulae is that by Cromer, namely: . Of course, the increase of with amplitude is more apparent when , as has been observed in many experiments using either a rigid rod or a disc. As accurate timers and sensors are currently available even in introductory physics labs, the experimental errors found in ‘very large-angle’ experiments are already small enough for a comparison with the exact period, and a very good agreement between theory and experiments in which friction is negligible has been found. Since this activity has been encouraged by many instructors, a simple approximate formula for the pendulum period valid for all possible amplitudes, to which experimental data could be compared, was sought. In 2008, Lima derived a weighted-average formula with this characteristic: where , which presents a maximum error of only 0.6% (at ). Arbitrary-amplitude angular displacement The Fourier series expansion of is given by where is the elliptic nome, and the angular frequency. If one defines can be approximated using the expansion (see ). Note that for , thus the approximation is applicable even for large amplitudes. Equivalently, the angle can be given in terms of the Jacobi elliptic function with modulus For small , , and , so the solution is well-approximated by the solution given in the small-angle approximation above. Examples The animations below depict the motion of a simple (frictionless) pendulum with increasing amounts of initial displacement of the bob, or equivalently increasing initial velocity. The small graph above each pendulum is the corresponding phase plane diagram; the horizontal axis is displacement and the vertical axis is velocity. With a large enough initial velocity the pendulum does not oscillate back and forth but rotates completely around the pivot. Compound pendulum A compound pendulum (or physical pendulum) is one where the rod is not massless, and may have extended size; that is, an arbitrarily shaped rigid body swinging about a pivot. In this case the pendulum's period depends on its moment of inertia around the pivot point. The equation of torque gives: where: is the angular acceleration. is the torque The torque is generated by gravity so: where: is the total mass of the rigid body (rod and bob) is the distance from the pivot point to the system's centre-of-mass is the angle from the vertical Hence, under the small-angle approximation, (or equivalently when ), where is the moment of inertia of the body about the pivot point . The expression for is of the same form as the conventional simple pendulum and gives a period of And a frequency of If the initial angle is taken into consideration (for large amplitudes), then the expression for becomes: and gives a period of: where is the maximum angle of oscillation (with respect to the vertical) and is the complete elliptic integral of the first kind. An important concept is the equivalent length, , the length of a simple pendulum that has the same angular frequency as the compound pendulum: Consider the following cases: The simple pendulum is the special case where all the mass is located at the bob swinging at a distance from the pivot. 
Thus, and , so the expression reduces to: . Notice , as expected (the definition of equivalent length). A homogeneous rod of mass and length swinging from its end has and , so the expression reduces to: . Notice , a homogeneous rod oscillates as if it were a simple pendulum of two-thirds its length. A heavy simple pendulum: combination of a homogeneous rod of mass and length swinging from its end, and a bob at the other end. Then the system has a total mass of , and the other parameters being (by definition of centre-of-mass) and , so the expression reduces to: Where . Notice these formulae can be particularized into the two previous cases studied before just by considering the mass of the rod or the bob to be zero respectively. Also notice that the formula does not depend on the mass of the bob and the rod separately, but only on their ratio, . An approximation can be made for : Notice how similar it is to the angular frequency in a spring-mass system with effective mass. Damped, driven pendulum The above discussion focuses on a pendulum bob only acted upon by the force of gravity. Suppose a damping force, e.g. air resistance, as well as a sinusoidal driving force acts on the body. This system is a damped, driven oscillator, and is chaotic. Equation (1) can be written as (see the Torque derivation of Equation (1) above). A damping term and forcing term can be added to the right hand side to get where the damping is assumed to be directly proportional to the angular velocity (this is true for low-speed air resistance, see also Drag (physics)). and are constants defining the amplitude of forcing and the degree of damping respectively. is the angular frequency of the driving oscillations. Dividing through by : For a physical pendulum: This equation exhibits chaotic behaviour. The exact motion of this pendulum can only be found numerically and is highly dependent on initial conditions, e.g. the initial velocity and the starting amplitude. However, the small angle approximation outlined above can still be used under the required conditions to give an approximate analytical solution. Physical interpretation of the imaginary period The Jacobian elliptic function that expresses the position of a pendulum as a function of time is a doubly periodic function with a real period and an imaginary period. The real period is, of course, the time it takes the pendulum to go through one full cycle. Paul Appell pointed out a physical interpretation of the imaginary period: if is the maximum angle of one pendulum and is the maximum angle of another, then the real period of each is the magnitude of the imaginary period of the other. Coupled pendula Coupled pendulums can affect each other's motion, either through a direct connection (such as a spring connecting the bobs) or through motions in a supporting structure (such as a tabletop). The equations of motion for two identical simple pendulums coupled by a spring connecting the bobs can be obtained using Lagrangian mechanics. The kinetic energy of the system is: where is the mass of the bobs, is the length of the strings, and , are the angular displacements of the two bobs from equilibrium. The potential energy of the system is: where is the gravitational acceleration, and is the spring constant. The displacement of the spring from its equilibrium position assumes the small angle approximation. 
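The damped, driven pendulum described above has no general closed-form solution, so its motion is explored numerically. The sketch below integrates a common dimensionless form, theta'' = -(1/q)*theta' - sin(theta) + F*cos(Omega*t), with a fixed-step fourth-order Runge-Kutta scheme; the damping, drive and initial conditions are illustrative choices, not values taken from the text.

```python
import math

def rk4_step(f, t, y, h):
    """One classical Runge-Kutta step for y' = f(t, y), with y = (theta, omega)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, [y[i] + h / 2 * k1[i] for i in range(2)])
    k3 = f(t + h / 2, [y[i] + h / 2 * k2[i] for i in range(2)])
    k4 = f(t + h, [y[i] + h * k3[i] for i in range(2)])
    return [y[i] + h / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i]) for i in range(2)]

# Dimensionless damped, driven pendulum:
#   theta'' = -(1/q) * theta' - sin(theta) + F * cos(Omega * t)
q, F, Omega = 2.0, 1.2, 2.0 / 3.0   # illustrative damping, drive amplitude, drive frequency

def deriv(t, state):
    theta, omega = state
    return [omega, -omega / q - math.sin(theta) + F * math.cos(Omega * t)]

state, t, h = [0.2, 0.0], 0.0, 0.01   # small initial displacement, released at rest
for _ in range(60000):                # integrate for 600 dimensionless time units
    state = rk4_step(deriv, t, state, h)
    t += h
print(f"theta({t:.0f}) = {state[0]:.3f} rad, omega = {state[1]:.3f}")
```

Because the system is chaotic for drive amplitudes such as the one assumed here, small changes in the initial displacement produce very different trajectories, which is the sensitivity to initial conditions mentioned above.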
The Lagrangian is then which leads to the following set of coupled differential equations: Adding and subtracting these two equations in turn, and applying the small angle approximation, gives two harmonic oscillator equations in the variables and : with the corresponding solutions where and , , , are constants of integration. Expressing the solutions in terms of and alone: If the bobs are not given an initial push, then the condition requires , which gives (after some rearranging): See also Harmonograph Conical pendulum Cycloidal pendulum Double pendulum Inverted pendulum Kapitza's pendulum Rayleigh–Lorentz pendulum Elastic pendulum Mathieu function Pendulum equations (software) References Further reading External links Mathworld article on Mathieu Function Differential equations Dynamical systems Horology Mathematical physics Mathematics
0.771405
0.995402
0.767858
Bohr–Van Leeuwen theorem
The Bohr–Van Leeuwen theorem states that when statistical mechanics and classical mechanics are applied consistently, the thermal average of the magnetization is always zero. This makes magnetism in solids solely a quantum mechanical effect and means that classical physics cannot account for paramagnetism, diamagnetism and ferromagnetism. Inability of classical physics to explain triboelectricity also stems from the Bohr–Van Leeuwen theorem. History What is today known as the Bohr–Van Leeuwen theorem was discovered by Niels Bohr in 1911 in his doctoral dissertation and was later rediscovered by Hendrika Johanna van Leeuwen in her doctoral thesis in 1919. In 1932, J. H. Van Vleck formalized and expanded upon Bohr's initial theorem in a book he wrote on electric and magnetic susceptibilities. The significance of this discovery is that classical physics does not allow for such things as paramagnetism, diamagnetism and ferromagnetism and thus quantum physics is needed to explain the magnetic events. This result, "perhaps the most deflationary publication of all time," may have contributed to Bohr's development of a quasi-classical theory of the hydrogen atom in 1913. Proof An intuitive proof The Bohr–Van Leeuwen theorem applies to an isolated system that cannot rotate. If the isolated system is allowed to rotate in response to an externally applied magnetic field, then this theorem does not apply. If, in addition, there is only one state of thermal equilibrium in a given temperature and field, and the system is allowed time to return to equilibrium after a field is applied, then there will be no magnetization. The probability that the system will be in a given state of motion is predicted by Maxwell–Boltzmann statistics to be proportional to , where is the energy of the system, is the Boltzmann constant, and is the absolute temperature. This energy is equal to the sum of the kinetic energy ( for a particle with mass and speed ) and the potential energy. The magnetic field does not contribute to the potential energy. The Lorentz force on a particle with charge and velocity is where is the electric field and is the magnetic flux density. The rate of work done is and does not depend on . Therefore, the energy does not depend on the magnetic field, so the distribution of motions does not depend on the magnetic field. In zero field, there will be no net motion of charged particles because the system is not able to rotate. There will therefore be an average magnetic moment of zero. Since the distribution of motions does not depend on the magnetic field, the moment in thermal equilibrium remains zero in any magnetic field. A more formal proof So as to lower the complexity of the proof, a system with electrons will be used. This is appropriate, since most of the magnetism in a solid is carried by electrons, and the proof is easily generalized to more than one type of charged particle. Each electron has a negative charge and mass . If its position is and velocity is , it produces a current and a magnetic moment The above equation shows that the magnetic moment is a linear function of the velocity coordinates, so the total magnetic moment in a given direction must be a linear function of the form where the dot represents a time derivative and are vector coefficients depending on the position coordinates . Maxwell–Boltzmann statistics gives the probability that the nth particle has momentum and coordinate as where is the Hamiltonian, the total energy of the system. 
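Before the averaging is completed in the continuation of the proof, it may help to see the key step in explicit form. The following is a minimal sketch in notation chosen here for concreteness (not claimed to be the notation of Bohr, Van Leeuwen or Van Vleck): the canonical weight with minimal coupling, and the shift of momentum variables that makes the field drop out.

```latex
% Sketch: N electrons of charge -e, vector potential \mathbf{A}, scalar potential \phi.
\[
  \mathcal{H}
    = \sum_{n=1}^{N} \frac{\bigl(\mathbf{p}_n + e\,\mathbf{A}(\mathbf{r}_n)\bigr)^{2}}{2m}
      - e \sum_{n=1}^{N} \phi(\mathbf{r}_n),
  \qquad
  P \propto e^{-\mathcal{H}/k_{B}T}.
\]
% The moment is linear in the velocities, with m\dot{\mathbf{r}}_n = \mathbf{p}_n + e\mathbf{A}(\mathbf{r}_n).
% Shifting the integration variables to \boldsymbol{\pi}_n = \mathbf{p}_n + e\mathbf{A}(\mathbf{r}_n)
% (a translation, so the Jacobian is 1) removes \mathbf{A} and leaves an odd integrand:
\[
  \langle \mu_z \rangle
  \propto \int \! d^{3N}r \; e^{\,e\sum_n \phi(\mathbf{r}_n)/k_{B}T}
          \int \! d^{3N}\pi \,
          \Bigl( \sum_n \mathbf{a}_n \cdot \tfrac{\boldsymbol{\pi}_n}{m} \Bigr)\,
          e^{-\sum_n \pi_n^{2}/2mk_{B}T}
  = 0 ,
\]
% so the thermally averaged moment is independent of the magnetic field and vanishes.
```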
The thermal average of any function of these generalized coordinates is then In the presence of a magnetic field, where is the magnetic vector potential and is the electric scalar potential. For each particle the components of the momentum and position are related by the equations of Hamiltonian mechanics: Therefore, so the moment is a linear function of the momenta . The thermally averaged moment, is the sum of terms proportional to integrals of the form where represents one of the momentum coordinates. The integrand is an odd function of , so it vanishes. Therefore, . Applications The Bohr–Van Leeuwen theorem is useful in several applications including plasma physics: "All these references base their discussion of the Bohr–Van Leeuwen theorem on Niels Bohr's physical model, in which perfectly reflecting walls are necessary to provide the currents that cancel the net contribution from the interior of an element of plasma, and result in zero net diamagnetism for the plasma element." Diamagnetism of a purely classical nature occurs in plasmas but is a consequence of thermal disequilibrium, such as a gradient in plasma density. Electromechanics and electrical engineering also see practical benefit from the Bohr–Van Leeuwen theorem. References External links The early 20th century: Relativity and quantum mechanics bring understanding at last Classical mechanics Electric and magnetic fields in matter Eponymous theorems of physics Statistical mechanics Articles containing proofs Statistical mechanics theorems
0.793463
0.967694
0.76783
Continuum (measurement)
Continuum (plural: continua or continuums) theories or models explain variation as involving gradual quantitative transitions without abrupt changes or discontinuities. In contrast, categorical theories or models explain variation using qualitatively different states. In physics In physics, for example, the space-time continuum model describes space and time as part of the same continuum rather than as separate entities. A spectrum in physics, such as the electromagnetic spectrum, is often termed as either continuous (with energy at all wavelengths) or discrete (energy at only certain wavelengths). In contrast, quantum mechanics uses quanta, certain defined amounts (i.e. categorical amounts) which are distinguished from continuous amounts. In mathematics and philosophy A good introduction to the philosophical issues involved is John Lane Bell's essay in the Stanford Encyclopedia of Philosophy. A significant divide is provided by the law of excluded middle. It determines the divide between intuitionistic continua such as Brouwer's and Lawvere's, and classical ones such as Stevin's and Robinson's. Bell isolates two distinct historical conceptions of infinitesimal, one by Leibniz and one by Nieuwentijdt, and argues that Leibniz's conception was implemented in Robinson's hyperreal continuum, whereas Nieuwentijdt's, in Lawvere's smooth infinitesimal analysis, characterized by the presence of nilsquare infinitesimals: "It may be said that Leibniz recognized the need for the first, but not the second type of infinitesimal and Nieuwentijdt, vice versa. It is of interest to note that Leibnizian infinitesimals (differentials) are realized in nonstandard analysis, and nilsquare infinitesimals in smooth infinitesimal analysis". In social sciences, psychology and psychiatry In social sciences in general, psychology and psychiatry included, data about differences between individuals, like any data, can be collected and measured using different levels of measurement. Those levels include dichotomous (a person either has a personality trait or not) and non-dichotomous approaches. While the non-dichotomous approach allows for understanding that everyone lies somewhere on a particular personality dimension, the dichotomous (nominal categorical and ordinal) approaches only seek to confirm that a particular person either has or does not have a particular mental disorder. Expert witnesses in particular are trained to help courts translate the data into the legal (e.g. 'guilty' vs. 'not guilty') dichotomy, which applies to law, sociology and ethics. In linguistics In linguistics, the range of dialects spoken over a geographical area that differ slightly between neighboring areas is known as a dialect continuum. A language continuum is a similar description for the merging of neighboring languages without a clearly defined boundary. Examples of dialect or language continuums include the varieties of Italian or German, and the Romance languages, Arabic languages, or Bantu languages. References External links Continuity and infinitesimals, John Bell, Stanford Encyclopedia of Philosophy Concepts in metaphysics Concepts in physics Concepts in the philosophy of science Mathematical concepts
0.782731
0.980897
0.767779
Derivative
In mathematics, the derivative is a fundamental tool that quantifies the sensitivity of change of a function's output with respect to its input. The derivative of a function of a single variable at a chosen input value, when it exists, is the slope of the tangent line to the graph of the function at that point. The tangent line is the best linear approximation of the function near that input value. For this reason, the derivative is often described as the instantaneous rate of change, the ratio of the instantaneous change in the dependent variable to that of the independent variable. The process of finding a derivative is called differentiation. There are multiple different notations for differentiation, two of the most commonly used being Leibniz notation and prime notation. Leibniz notation, named after Gottfried Wilhelm Leibniz, is represented as the ratio of two differentials, whereas prime notation is written by adding a prime mark. Higher order notations represent repeated differentiation, and they are usually denoted in Leibniz notation by adding superscripts to the differentials, and in prime notation by adding additional prime marks. The higher order derivatives can be applied in physics; for example, while the first derivative of the position of a moving object with respect to time is the object's velocity, how the position changes as time advances, the second derivative is the object's acceleration, how the velocity changes as time advances. Derivatives can be generalized to functions of several real variables. In this generalization, the derivative is reinterpreted as a linear transformation whose graph is (after an appropriate translation) the best linear approximation to the graph of the original function. The Jacobian matrix is the matrix that represents this linear transformation with respect to the basis given by the choice of independent and dependent variables. It can be calculated in terms of the partial derivatives with respect to the independent variables. For a real-valued function of several variables, the Jacobian matrix reduces to the gradient vector. Definition As a limit A function of a real variable is differentiable at a point of its domain, if its domain contains an open interval containing , and the limit exists. This means that, for every positive real number , there exists a positive real number such that, for every such that and then is defined, and where the vertical bars denote the absolute value. This is an example of the (ε, δ)-definition of limit. If the function is differentiable at , that is if the limit exists, then this limit is called the derivative of at . Multiple notations for the derivative exist. The derivative of at can be denoted , read as " prime of "; or it can be denoted , read as "the derivative of with respect to at " or " by (or over) at ". See below. If is a function that has a derivative at every point in its domain, then a function can be defined by mapping every point to the value of the derivative of at . This function is written and is called the derivative function or the derivative of . The function sometimes has a derivative at most, but not all, points of its domain. The function whose value at equals whenever is defined and elsewhere is undefined is also called the derivative of . It is still a function, but its domain may be smaller than the domain of . For example, let be the squaring function: . Then the quotient in the definition of the derivative is The division in the last step is valid as long as . 
The closer is to , the closer this expression becomes to the value . The limit exists, and for every input the limit is . So, the derivative of the squaring function is the doubling function: . The ratio in the definition of the derivative is the slope of the line through two points on the graph of the function , specifically the points and . As is made smaller, these points grow closer together, and the slope of this line approaches the limiting value, the slope of the tangent to the graph of at . In other words, the derivative is the slope of the tangent. Using infinitesimals One way to think of the derivative is as the ratio of an infinitesimal change in the output of the function to an infinitesimal change in its input. In order to make this intuition rigorous, a system of rules for manipulating infinitesimal quantities is required. The system of hyperreal numbers is a way of treating infinite and infinitesimal quantities. The hyperreals are an extension of the real numbers that contain numbers greater than anything of the form for any finite number of terms. Such numbers are infinite, and their reciprocals are infinitesimals. The application of hyperreal numbers to the foundations of calculus is called nonstandard analysis. This provides a way to define the basic concepts of calculus such as the derivative and integral in terms of infinitesimals, thereby giving a precise meaning to the in the Leibniz notation. Thus, the derivative of becomes for an arbitrary infinitesimal , where denotes the standard part function, which "rounds off" each finite hyperreal to the nearest real. Taking the squaring function as an example again, Continuity and differentiability If is differentiable at , then must also be continuous at . As an example, choose a point and let be the step function that returns the value 1 for all less than , and returns a different value 10 for all greater than or equal to . The function cannot have a derivative at . If is negative, then is on the low part of the step, so the secant line from to is very steep; as tends to zero, the slope tends to infinity. If is positive, then is on the high part of the step, so the secant line from to has slope zero. Consequently, the secant lines do not approach any single slope, so the limit of the difference quotient does not exist. However, even if a function is continuous at a point, it may not be differentiable there. For example, the absolute value function given by is continuous at , but it is not differentiable there. If is positive, then the slope of the secant line from 0 to is one; if is negative, then the slope of the secant line from to is . This can be seen graphically as a "kink" or a "cusp" in the graph at . Even a function with a smooth graph is not differentiable at a point where its tangent is vertical: For instance, the function given by is not differentiable at . In summary, a function that has a derivative is continuous, but there are continuous functions that do not have a derivative. Most functions that occur in practice have derivatives at all points or almost every point. Early in the history of calculus, many mathematicians assumed that a continuous function was differentiable at most points. Under mild conditions (for example, if the function is a monotone or a Lipschitz function), this is true. However, in 1872, Weierstrass found the first example of a function that is continuous everywhere but differentiable nowhere. This example is now known as the Weierstrass function. 
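The limit definition and the differentiability examples above can be checked numerically: the difference quotient of the squaring function settles toward 2x as h shrinks, while for the absolute value function at 0 the one-sided quotients disagree, so no limit exists. A minimal sketch; the evaluation points and step sizes are arbitrary choices.

```python
def difference_quotient(f, x, h):
    """Slope (f(x + h) - f(x)) / h of the secant line through x and x + h."""
    return (f(x + h) - f(x)) / h

square = lambda t: t * t

# For the squaring function at x = 3 the quotient approaches f'(3) = 6:
for h in (0.1, 0.01, 0.001, 1e-6):
    print(f"h = {h:g}: {difference_quotient(square, 3.0, h):.6f}")

# For |x| at 0 the right- and left-hand secant slopes stay at +1 and -1,
# so the two-sided limit, and hence the derivative, does not exist there:
print(difference_quotient(abs, 0.0, 1e-6))    # ~ +1.0
print(difference_quotient(abs, 0.0, -1e-6))   # ~ -1.0
```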
In 1931, Stefan Banach proved that the set of functions that have a derivative at some point is a meager set in the space of all continuous functions. Informally, this means that hardly any random continuous functions have a derivative at even one point. Notation One common way of writing the derivative of a function is Leibniz notation, introduced by Gottfried Wilhelm Leibniz in 1675, which denotes a derivative as the quotient of two differentials, such as and . It is still commonly used when the equation is viewed as a functional relationship between dependent and independent variables. The first derivative is denoted by , read as "the derivative of with respect to ". This derivative can alternately be treated as the application of a differential operator to a function, Higher derivatives are expressed using the notation for the -th derivative of . These are abbreviations for multiple applications of the derivative operator; for example, Unlike some alternatives, Leibniz notation involves explicit specification of the variable for differentiation, in the denominator, which removes ambiguity when working with multiple interrelated quantities. The derivative of a composed function can be expressed using the chain rule: if and then Another common notation for differentiation is by using the prime mark in the symbol of a function . This is known as prime notation, due to Joseph-Louis Lagrange. The first derivative is written as , read as " prime of , or , read as " prime". Similarly, the second and the third derivatives can be written as and , respectively. For denoting the number of higher derivatives beyond this point, some authors use Roman numerals in superscript, whereas others place the number in parentheses, such as or . The latter notation generalizes to yield the notation for the th derivative of . In Newton's notation or the dot notation, a dot is placed over a symbol to represent a time derivative. If is a function of , then the first and second derivatives can be written as and , respectively. This notation is used exclusively for derivatives with respect to time or arc length. It is typically used in differential equations in physics and differential geometry. However, the dot notation becomes unmanageable for high-order derivatives (of order 4 or more) and cannot deal with multiple independent variables. Another notation is D-notation, which represents the differential operator by the symbol . The first derivative is written and higher derivatives are written with a superscript, so the -th derivative is . This notation is sometimes called Euler notation, although it seems that Leonhard Euler did not use it, and the notation was introduced by Louis François Antoine Arbogast. To indicate a partial derivative, the variable differentiated by is indicated with a subscript, for example given the function , its partial derivative with respect to can be written or . Higher partial derivatives can be indicated by superscripts or multiple subscripts, e.g. and . Rules of computation In principle, the derivative of a function can be computed from the definition by considering the difference quotient and computing its limit. Once the derivatives of a few simple functions are known, the derivatives of other functions are more easily computed using rules for obtaining derivatives of more complicated functions from simpler ones. This process of finding a derivative is known as differentiation. 
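In practice, differentiation can also be carried out symbolically by a computer algebra system, which applies the same rules mechanically. A minimal sketch using SymPy, assuming it is installed; the function differentiated here is an illustrative one chosen to exercise the power, chain and product rules, not an example taken from the text.

```python
import sympy as sp

x = sp.symbols('x')

# An illustrative function combining a power, a chain-rule term and a product:
f = x**4 + sp.sin(x**2) - sp.log(x) * sp.exp(x)

# Differentiation applies the power rule, the chain rule on sin(x**2)
# and the product rule on log(x)*exp(x):
print(sp.diff(f, x))
# equivalent to 4*x**3 + 2*x*cos(x**2) - exp(x)*log(x) - exp(x)/x

# Repeated differentiation gives higher-order derivatives:
print(sp.diff(sp.sin(x), x, 3))   # -cos(x)
```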
Rules for basic functions The following are the rules for the derivatives of the most common basic functions. Here, is a real number, and is the base of the natural logarithm, approximately . Derivatives of powers: Functions of exponential, natural logarithm, and logarithm with general base: , for , for , for Trigonometric functions: Inverse trigonometric functions: , for , for Rules for combined functions Given that the and are the functions. The following are some of the most basic rules for deducing the derivative of functions from derivatives of basic functions. Constant rule: if is constant, then for all , Sum rule: for all functions and and all real numbers and . Product rule: for all functions and . As a special case, this rule includes the fact whenever is a constant because by the constant rule. Quotient rule: for all functions and at all inputs where . Chain rule for composite functions: If , then Computation example The derivative of the function given by is Here the second term was computed using the chain rule and the third term using the product rule. The known derivatives of the elementary functions , , , , and , as well as the constant , were also used. Higher-order derivatives Higher order derivatives are the result of differentiating a function repeatedly. Given that is a differentiable function, the derivative of is the first derivative, denoted as . The derivative of is the second derivative, denoted as , and the derivative of is the third derivative, denoted as . By continuing this process, if it exists, the th derivative is the derivative of the th derivative or the derivative of order . As has been discussed above, the generalization of derivative of a function may be denoted as . A function that has successive derivatives is called times differentiable. If the th derivative is continuous, then the function is said to be of differentiability class . A function that has infinitely many derivatives is called infinitely differentiable or smooth. Any polynomial function is infinitely differentiable; taking derivatives repeatedly will eventually result in a constant function, and all subsequent derivatives of that function are zero. One application of higher-order derivatives is in physics. Suppose that a function represents the position of an object at the time. The first derivative of that function is the velocity of an object with respect to time, the second derivative of the function is the acceleration of an object with respect to time, and the third derivative is the jerk. In other dimensions Vector-valued functions A vector-valued function of a real variable sends real numbers to vectors in some vector space . A vector-valued function can be split up into its coordinate functions , meaning that . This includes, for example, parametric curves in or . The coordinate functions are real-valued functions, so the above definition of derivative applies to them. The derivative of is defined to be the vector, called the tangent vector, whose coordinates are the derivatives of the coordinate functions. That is, if the limit exists. The subtraction in the numerator is the subtraction of vectors, not scalars. If the derivative of exists for every value of , then is another vector-valued function. Partial derivatives Functions can depend upon more than one variable. A partial derivative of a function of several variables is its derivative with respect to one of those variables, with the others held constant. 
Partial derivatives are used in vector calculus and differential geometry. As with ordinary derivatives, multiple notations exist: the partial derivative of a function with respect to the variable is variously denoted by among other possibilities. It can be thought of as the rate of change of the function in the -direction. Here ∂ is a rounded d called the partial derivative symbol. To distinguish it from the letter d, ∂ is sometimes pronounced "der", "del", or "partial" instead of "dee". For example, let , then the partial derivative of function with respect to both variables and are, respectively: In general, the partial derivative of a function in the direction at the point is defined to be: This is fundamental for the study of the functions of several real variables. Let be such a real-valued function. If all partial derivatives with respect to are defined at the point , these partial derivatives define the vector which is called the gradient of at . If is differentiable at every point in some domain, then the gradient is a vector-valued function that maps the point to the vector . Consequently, the gradient determines a vector field. Directional derivatives If is a real-valued function on , then the partial derivatives of measure its variation in the direction of the coordinate axes. For example, if is a function of and , then its partial derivatives measure the variation in in the and direction. However, they do not directly measure the variation of in any other direction, such as along the diagonal line . These are measured using directional derivatives. Given a vector , then the directional derivative of in the direction of at the point is: If all the partial derivatives of exist and are continuous at , then they determine the directional derivative of in the direction by the formula: Total derivative, total differential and Jacobian matrix When is a function from an open subset of to , then the directional derivative of in a chosen direction is the best linear approximation to at that point and in that direction. However, when , no single directional derivative can give a complete picture of the behavior of . The total derivative gives a complete picture by considering all directions at once. That is, for any vector starting at , the linear approximation formula holds: Similarly with the single-variable derivative, is chosen so that the error in this approximation is as small as possible. The total derivative of at is the unique linear transformation such that Here is a vector in , so the norm in the denominator is the standard length on . However, is a vector in , and the norm in the numerator is the standard length on . If is a vector starting at , then is called the pushforward of by . If the total derivative exists at , then all the partial derivatives and directional derivatives of exist at , and for all , is the directional derivative of in the direction . If is written using coordinate functions, so that , then the total derivative can be expressed using the partial derivatives as a matrix. This matrix is called the Jacobian matrix of at : Generalizations The concept of a derivative can be extended to many other settings. The common thread is that the derivative of a function at a point serves as a linear approximation of the function at that point. An important generalization of the derivative concerns complex functions of complex variables, such as functions from (a domain in) the complex numbers to . 
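As a numerical aside to the multivariable notions above, the gradient and the Jacobian matrix can be estimated with central differences, which is how they are often obtained when no closed form is at hand. A minimal sketch with illustrative functions, not taken from the text.

```python
def partial(f, point, i, h=1e-6):
    """Central-difference estimate of the i-th partial derivative of f at point."""
    lo, hi = list(point), list(point)
    lo[i] -= h
    hi[i] += h
    return (f(*hi) - f(*lo)) / (2 * h)

def gradient(f, point):
    return [partial(f, point, i) for i in range(len(point))]

# f(x, y) = x**2 * y + y**3 has gradient (2*x*y, x**2 + 3*y**2).
f = lambda x, y: x**2 * y + y**3
print(gradient(f, (1.0, 2.0)))   # close to [4.0, 13.0]

# Jacobian of a map R^2 -> R^2: one row of partial derivatives per component.
components = [lambda x, y: x * y, lambda x, y: x + y**2]
print([gradient(c, (1.0, 2.0)) for c in components])   # close to [[2, 1], [1, 4]]
```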
The notion of the derivative of such a function is obtained by replacing real variables with complex variables in the definition. If is identified with by writing a complex number as then a differentiable function from to is certainly differentiable as a function from to (in the sense that its partial derivatives all exist), but the converse is not true in general: the complex derivative only exists if the real derivative is complex linear and this imposes relations between the partial derivatives called the Cauchy–Riemann equations – see holomorphic functions. Another generalization concerns functions between differentiable or smooth manifolds. Intuitively speaking such a manifold is a space that can be approximated near each point by a vector space called its tangent space: the prototypical example is a smooth surface in . The derivative (or differential) of a (differentiable) map between manifolds, at a point in , is then a linear map from the tangent space of at to the tangent space of at . The derivative function becomes a map between the tangent bundles of and . This definition is used in differential geometry. Differentiation can also be defined for maps between vector space, such as Banach space, in which those generalizations are the Gateaux derivative and the Fréchet derivative. One deficiency of the classical derivative is that very many functions are not differentiable. Nevertheless, there is a way of extending the notion of the derivative so that all continuous functions and many other functions can be differentiated using a concept known as the weak derivative. The idea is to embed the continuous functions in a larger space called the space of distributions and only require that a function is differentiable "on average". Properties of the derivative have inspired the introduction and study of many similar objects in algebra and topology; an example is differential algebra. Here, it consists of the derivation of some topics in abstract algebra, such as rings, ideals, field, and so on. The discrete equivalent of differentiation is finite differences. The study of differential calculus is unified with the calculus of finite differences in time scale calculus. The arithmetic derivative involves the function that is defined for the integers by the prime factorization. This is an analogy with the product rule. See also Covariant derivative Derivation Exterior derivative Functional derivative Integral Lie derivative Notes References . See the English version here. External links Khan Academy: "Newton, Leibniz, and Usain Bolt" Online Derivative Calculator from Wolfram Alpha. Mathematical analysis Differential calculus Functions and mappings Linear operators in calculus Rates Change
0.768193
0.999458
0.767776
Metastability
In chemistry and physics, metastability is an intermediate energetic state within a dynamical system other than the system's state of least energy. A ball resting in a hollow on a slope is a simple example of metastability. If the ball is only slightly pushed, it will settle back into its hollow, but a stronger push may start the ball rolling down the slope. Bowling pins show similar metastability by either merely wobbling for a moment or tipping over completely. A common example of metastability in science is isomerisation. Higher energy isomers are long lived because they are prevented from rearranging to their preferred ground state by (possibly large) barriers in the potential energy. During a metastable state of finite lifetime, all state-describing parameters reach and hold stationary values. In isolation: the state of least energy is the only one the system will inhabit for an indefinite length of time, until more external energy is added to the system (unique "absolutely stable" state); the system will spontaneously leave any other state (of higher energy) to eventually return (after a sequence of transitions) to the least energetic state. The metastability concept originated in the physics of first-order phase transitions. It then acquired new meaning in the study of aggregated subatomic particles (in atomic nuclei or in atoms) or in molecules, macromolecules or clusters of atoms and molecules. Later, it was borrowed for the study of decision-making and information transmission systems. Metastability is common in physics and chemistry – from an atom (many-body assembly) to statistical ensembles of molecules (viscous fluids, amorphous solids, liquid crystals, minerals, etc.) at molecular levels or as a whole (see Metastable states of matter and grain piles below). The abundance of states is more prevalent as the systems grow larger and/or if the forces of their mutual interaction are spatially less uniform or more diverse. In dynamic systems (with feedback) like electronic circuits, signal trafficking, decisional, neural and immune systems, the time-invariance of the active or reactive patterns with respect to the external influences defines stability and metastability (see brain metastability below). In these systems, the equivalent of thermal fluctuations in molecular systems is the "white noise" that affects signal propagation and the decision-making. Statistical physics and thermodynamics Non-equilibrium thermodynamics is a branch of physics that studies the dynamics of statistical ensembles of molecules via unstable states. Being "stuck" in a thermodynamic trough without being at the lowest energy state is known as having kinetic stability or being kinetically persistent. The particular motion or kinetics of the atoms involved has resulted in getting stuck, despite there being preferable (lower-energy) alternatives. States of matter Metastable states of matter (also referred as metastates) range from melting solids (or freezing liquids), boiling liquids (or condensing gases) and sublimating solids to supercooled liquids or superheated liquid-gas mixtures. Extremely pure, supercooled water stays liquid below 0 °C and remains so until applied vibrations or condensing seed doping initiates crystallization centers. This is a common situation for the droplets of atmospheric clouds. Condensed matter and macromolecules Metastable phases are common in condensed matter and crystallography. 
This is the case for anatase, a metastable polymorph of titanium dioxide, which, despite commonly being the first phase to form in many synthesis processes due to its lower surface energy, is always metastable, with rutile being the most stable phase at all temperatures and pressures. As another example, diamond is a stable phase only at very high pressures, but is a metastable form of carbon at standard temperature and pressure. It can be converted to graphite (plus leftover kinetic energy), but only after overcoming an activation energy – an intervening hill. Martensite is a metastable phase used to control the hardness of most steel. Metastable polymorphs of silica are commonly observed. In some cases, such as in the allotropes of solid boron, acquiring a sample of the stable phase is difficult. The bonds between the building blocks of polymers such as DNA, RNA, and proteins are also metastable. Adenosine triphosphate (ATP) is a highly metastable molecule, colloquially described as being "full of energy" that can be used in many ways in biology. Generally speaking, emulsions/colloidal systems and glasses are metastable. The metastability of silica glass, for example, is characterised by lifetimes on the order of 10^98 years (as compared with the lifetime of the universe, which is thought to be around 1.4 × 10^10 years). Sandpiles are one system which can exhibit metastability if a steep slope or tunnel is present. Sand grains form a pile due to friction. It is possible for an entire large sand pile to reach a point where it is stable, but the addition of a single grain causes large parts of it to collapse. The avalanche is a well-known problem with large piles of snow and ice crystals on steep slopes. In dry conditions, snow slopes act similarly to sandpiles. An entire mountainside of snow can suddenly slide due to the presence of a skier, or even a loud noise or vibration. Quantum mechanics Aggregated systems of subatomic particles described by quantum mechanics (quarks inside nucleons, nucleons inside atomic nuclei, electrons inside atoms, molecules, or atomic clusters) are found to have many distinguishable states. Of these, one (or a small degenerate set) is indefinitely stable: the ground state or global minimum. All other states besides the ground state (or those degenerate with it) have higher energies. Of all these other states, the metastable states are the ones having lifetimes lasting at least 10^2 to 10^3 times longer than the shortest-lived states of the set. A metastable state is then long-lived (locally stable with respect to configurations of 'neighbouring' energies) but not eternal (as the global minimum is). Being excited – of an energy above the ground state – it will eventually decay to a more stable state, releasing energy. Indeed, above absolute zero, all states of a system have a non-zero probability to decay; that is, to spontaneously fall into another state (usually lower in energy). One mechanism for this to happen is through tunnelling. Nuclear physics Some energetic states of an atomic nucleus (having distinct spatial mass, charge, spin, isospin distributions) are much longer-lived than others (nuclear isomers of the same isotope), e.g. technetium-99m. The isotope tantalum-180m, although being a metastable excited state, is long-lived enough that it has never been observed to decay, with a half-life calculated to be at least about 4 × 10^16 years, over 3 million times the current age of the universe. Atomic and molecular physics Some atomic energy levels are metastable. 
Rydberg atoms are an example of metastable excited atomic states. Transitions from metastable excited levels are typically those forbidden by electric dipole selection rules. This means that any transitions from this level are relatively unlikely to occur. In a sense, an electron that happens to find itself in a metastable configuration is trapped there. Since transitions from a metastable state are not impossible (merely less likely), the electron will eventually decay to a less energetic state, typically by an electric quadrupole transition, or often by non-radiative de-excitation (e.g., collisional de-excitation). This slow-decay property of a metastable state is apparent in phosphorescence, the kind of photoluminescence seen in glow-in-the-dark toys that can be charged by first being exposed to bright light. Whereas spontaneous emission in atoms has a typical timescale on the order of 10^−8 seconds, the decay of metastable states can typically take milliseconds to minutes, and so light emitted in phosphorescence is usually both weak and long-lasting. Chemistry In chemical systems, a system of atoms or molecules involving a change in chemical bond can be in a metastable state, which lasts for a relatively long period of time. Molecular vibrations and thermal motion make chemical species at the energetic equivalent of the top of a round hill very short-lived. Metastable states that persist for many seconds (or years) are found in energetic valleys which are not the lowest possible valley (point 1 in illustration). A common type of metastability is isomerism. The stability or metastability of a given chemical system depends on its environment, particularly temperature and pressure. The difference between producing a stable vs. metastable entity can have important consequences. For instance, having the wrong crystal polymorph can result in failure of a drug while in storage between manufacture and administration. The map of which state is the most stable as a function of pressure, temperature and/or composition is known as a phase diagram. In regions where a particular state is not the most stable, it may still be metastable. Reaction intermediates are relatively short-lived, and are usually thermodynamically unstable rather than metastable. The IUPAC recommends referring to these as transient rather than metastable. Metastability is also used to refer to specific situations in mass spectrometry and spectrochemistry. Electronic circuits A digital circuit is supposed to be found in a small number of stable digital states within a certain amount of time after an input change. However, if an input changes at the wrong moment, a digital circuit which employs feedback (even a simple circuit such as a flip-flop) can enter a metastable state and take an unbounded length of time to finally settle into a fully stable digital state. Computational neuroscience Metastability in the brain is a phenomenon studied in computational neuroscience to elucidate how the human brain recognizes patterns. Here, the term metastability is used rather loosely. There is no lower-energy state, but there are semi-transient signals in the brain that persist for a while and are different from the usual equilibrium state. 
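For the electronic-circuit case discussed above, designers usually quantify how often a flip-flop's metastable state lasts longer than the time allowed for it to resolve. A commonly used first-order model gives the mean time between such failures as MTBF = exp(t_r/tau) / (T_0 * f_clk * f_data), where tau and T_0 are device parameters and t_r is the resolution time allotted by the synchronizer. The parameter values in the sketch below are illustrative assumptions, not data for any particular device.

```python
import math

def synchronizer_mtbf(t_resolve, tau, t0, f_clk, f_data):
    """First-order metastability model: MTBF = exp(t_resolve/tau) / (t0 * f_clk * f_data)."""
    return math.exp(t_resolve / tau) / (t0 * f_clk * f_data)

# Illustrative device parameters (not from a datasheet)
tau, t0 = 50e-12, 1e-9          # resolution time constant and metastability window (s)
f_clk, f_data = 200e6, 10e6     # clock rate and asynchronous-data rate (Hz)

for t_resolve in (1e-9, 2e-9, 3e-9):   # time left in the clock period for settling
    mtbf = synchronizer_mtbf(t_resolve, tau, t0, f_clk, f_data)
    print(f"t_resolve = {t_resolve * 1e9:.0f} ns -> MTBF ~ {mtbf:.2e} s")
```

The exponential dependence on the resolution time is why adding an extra synchronizing flip-flop stage, which roughly doubles t_resolve, improves reliability so dramatically.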
In philosophy Gilbert Simondon invokes a notion of metastability, as a critique of cybernetic notions of homeostasis, for his understanding of a system that, rather than resolving its tensions and potentials for transformation into a single final state, 'conserves the tensions in the equilibrium of metastability instead of nullifying them in the equilibrium of stability'. See also False vacuum Hysteresis Metastate References Chemical properties Dynamical systems
0.775331
0.990244
0.767767
Solar sail
Solar sails (also known as lightsails, light sails, and photon sails) are a method of spacecraft propulsion using radiation pressure exerted by sunlight on large surfaces. A number of spaceflight missions to test solar propulsion and navigation have been proposed since the 1980s. The first spacecraft to make use of the technology was IKAROS, launched in 2010. A useful analogy to solar sailing may be a sailing boat; the light exerting a force on the large surface is akin to a sail being blown by the wind. High-energy laser beams could be used as an alternative light source to exert much greater force than would be possible using sunlight, a concept known as beam sailing. Solar sail craft offer the possibility of low-cost operations combined with high speeds (relative to chemical rockets) and long operating lifetimes. Since they have few moving parts and use no propellant, they can potentially be used numerous times for the delivery of payloads. Solar sails use a phenomenon that has a proven, measured effect on astrodynamics. Solar pressure affects all spacecraft, whether in interplanetary space or in orbit around a planet or small body. A typical spacecraft going to Mars, for example, will be displaced thousands of kilometers by solar pressure, so the effects must be accounted for in trajectory planning, which has been done since the time of the earliest interplanetary spacecraft of the 1960s. Solar pressure also affects the orientation of a spacecraft, a factor that must be included in spacecraft design. The total force exerted on an solar sail, for example, is about at Earth's distance from the Sun, making it a low-thrust propulsion system, similar to spacecraft propelled by electric engines, but as it uses no propellant, that force is exerted almost constantly and the collective effect over time is great enough to be considered a potential manner of propelling spacecraft. History of concept Johannes Kepler observed that comet tails point away from the Sun and suggested that the Sun caused the effect. In a letter to Galileo in 1610, he wrote, "Provide ships or sails adapted to the heavenly breezes, and there will be some who will brave even that void." He might have had the comet tail phenomenon in mind when he wrote those words, although his publications on comet tails came several years later. James Clerk Maxwell, in 1861–1864, published his theory of electromagnetic fields and radiation, which shows that light has momentum and thus can exert pressure on objects. Maxwell's equations provide the theoretical foundation for sailing with light pressure. So by 1864, the physics community and beyond knew sunlight carried momentum that would exert a pressure on objects. Jules Verne, in From the Earth to the Moon, published in 1865, wrote "there will some day appear velocities far greater than these [of the planets and the projectile], of which light or electricity will probably be the mechanical agent ... we shall one day travel to the moon, the planets, and the stars." This is possibly the first published recognition that light could move ships through space. Pyotr Lebedev was first to successfully demonstrate light pressure, which he did in 1899 with a torsional balance; Ernest Nichols and Gordon Hull conducted a similar independent experiment in 1901 using a Nichols radiometer. Svante Arrhenius predicted in 1908 the possibility of solar radiation pressure distributing life spores across interstellar distances, providing one means to explain the concept of panspermia. 
He was apparently the first scientist to state that light could move objects between stars. Konstantin Tsiolkovsky first proposed using the pressure of sunlight to propel spacecraft through space and suggested, "using tremendous mirrors of very thin sheets to utilize the pressure of sunlight to attain cosmic velocities". Friedrich Zander (Tsander) published a technical paper in 1925 that included an analysis of solar sailing. Zander wrote of "applying small forces" using "light pressure or transmission of light energy to distances by means of very thin mirrors". J. B. S. Haldane speculated in 1927 about the invention of tubular spaceships that would take humanity to space and how "wings of metallic foil of a square kilometre or more in area are spread out to catch the Sun's radiation pressure". J. D. Bernal wrote in 1929, "A form of space sailing might be developed which used the repulsive effect of the Sun's rays instead of wind. A space vessel spreading its large, metallic wings, acres in extent, to the full, might be blown to the limit of Neptune's orbit. Then, to increase its speed, it would tack, close-hauled, down the gravitational field, spreading full sail again as it rushed past the Sun." Arthur C. Clarke wrote Sunjammer, a science fiction short story originally published in the March 1964 issue of Boys' Life, depicting a yacht race between solar sail spacecraft. Carl Sagan, in the 1970s, popularized the idea of sailing on light using a giant structure that would reflect photons in one direction, creating momentum. He promoted the idea in college lectures, books, and television shows, and hoped to launch such a spacecraft in time to rendezvous with Halley's Comet; the mission was not realized in time, and he did not live to see one carried out. The first formal technology and design effort for a solar sail began in 1976 at Jet Propulsion Laboratory for a proposed mission to rendezvous with Halley's Comet. Types Reflective Most solar sails are based on reflection. The surface of the sail is highly reflective, like a mirror, and light reflecting off the surface imparts a force. Diffractive In 2018, diffraction was proposed as a different solar sail propulsion mechanism, which is claimed to have several advantages. Alternatives Electric solar wind Pekka Janhunen from FMI has proposed a type of solar sail called the electric solar wind sail. Mechanically it has little in common with the traditional solar sail design. The sails are replaced with straightened conducting tethers (wires) placed radially around the host ship. The wires are electrically charged to create an electric field around the wires. The electric field extends a few tens of metres into the plasma of the surrounding solar wind. The solar electrons are reflected by the electric field (like the photons on a traditional solar sail). The effective radius of the sail comes from the electric field rather than the wires themselves, making the sail lighter. The craft can also be steered by regulating the electric charge of the wires. A practical electric sail would have 50–100 straightened wires with a length of about 20 km each. Electric solar wind sails can adjust their electrostatic fields and sail attitudes. Magnetic A magnetic sail would also employ the solar wind. However, the magnetic field deflects the electrically charged particles in the wind. It uses wire loops, and runs a static current through them instead of applying a static voltage.
All these designs maneuver, though the mechanisms are different. Magnetic sails bend the path of the charged protons that are in the solar wind. By changing the sails' attitudes, and the size of the magnetic fields, they can change the amount and direction of the thrust. Physical principles for reflective sails Solar radiation pressure The force imparted to a solar sail arises from the momentum of photons. The momentum of a photon or an entire flux is given by Einstein's relation p = E/c, where p is the momentum, E is the energy (of the photon or flux), and c is the speed of light. Specifically, the momentum of a photon depends on its wavelength: p = h/λ, where h is Planck's constant and λ is the wavelength. Solar radiation pressure can be related to the irradiance (solar constant) value of 1361 W/m2 at 1 AU (Earth-Sun distance), as revised in 2011. For perfect absorbance, F = 4.54 μN per square metre (4.54 μPa) in the direction of the incident beam (a perfectly inelastic collision); for perfect reflectance, F = 9.08 μN per square metre (9.08 μPa) in the direction normal to the surface (an elastic collision). An ideal sail is flat and has 100% specular reflection. An actual sail will have an overall efficiency of about 90%, about 8.17 μN/m2, due to curvature (billow), wrinkles, absorbance, re-radiation from front and back, non-specular effects, and other factors. The force on a sail and the actual acceleration of the craft vary by the inverse square of distance from the Sun (unless extremely close to the Sun), and by the square of the cosine of the angle between the sail force vector and the radial from the Sun, so that, for an ideal sail, F = F0 cos^2(θ) / R^2, where R is distance from the Sun in AU, θ is that angle, and F0 is the force at 1 AU with the sail facing the Sun. An actual square sail can be modelled with a modified angular dependence; note that the force and acceleration then approach zero generally around θ = 60° rather than 90° as one might expect with an ideal sail. If some of the energy is absorbed, the absorbed energy will heat the sail, which re-radiates that energy from the front and rear surfaces, depending on the emissivity of those two surfaces. Solar wind, the flux of charged particles blown out from the Sun, exerts a nominal dynamic pressure of about 3 to 4 nPa, three orders of magnitude less than solar radiation pressure on a reflective sail. Sail parameters Sail loading (areal density) is an important parameter, which is the total mass divided by the sail area, expressed in g/m2. It is represented by the Greek letter σ (sigma). A sail craft has a characteristic acceleration, ac, which it would experience at 1 AU when facing the Sun. Note this value accounts for both the incident and reflected momenta. Using the value from above of 9.08 μN per square metre of radiation pressure at 1 AU, ac is related to areal density by ac = 9.08(efficiency) / σ mm/s2. Assuming 90% efficiency, ac = 8.17 / σ mm/s2. The lightness number, λ, is the dimensionless ratio of maximum vehicle acceleration divided by the Sun's local gravity; using the values at 1 AU, λ = ac / 5.93. The lightness number is also independent of distance from the Sun because both gravity and light pressure fall off as the inverse square of the distance from the Sun. Therefore, this number defines the types of orbit maneuvers that are possible for a given vessel. The table presents some example values. Payloads are not included. The first two are from the detailed design effort at JPL in the 1970s. The third, the lattice sailer, might represent about the best possible performance level. The dimensions for square and lattice sails are edges. The dimension for heliogyro is blade tip to blade tip.
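The relations just quoted are compact enough to capture in a few lines of code. The sketch below is only an illustration of the formulas in the text (the ideal-sail force law, the characteristic acceleration ac = 9.08 × efficiency / σ, and the lightness number λ = ac / 5.93); the 5 g/m2 example sail loading and the 100 m × 100 m sail size are assumed values chosen to show the arithmetic, not figures from the article.

```python
# Illustrative sketch of the ideal-sail relations quoted above.
# The constants (9.08 uN/m^2 at 1 AU, 90% efficiency, lambda = a_c / 5.93)
# come from the text; the example areal density and sail size are assumptions.
import math

P_REFLECT = 9.08e-6      # N/m^2, perfect reflectance at 1 AU (from the text)
EFFICIENCY = 0.90        # typical real-sail efficiency (from the text)
SUN_GRAVITY_1AU = 5.93   # mm/s^2, used in the lightness-number relation

def sail_force_per_area(theta_deg, r_au, efficiency=EFFICIENCY):
    """Force per unit sail area (N/m^2) for an ideal flat sail pitched at
    theta degrees from the Sun line, at r_au astronomical units."""
    theta = math.radians(theta_deg)
    return efficiency * P_REFLECT * math.cos(theta) ** 2 / r_au ** 2

def characteristic_acceleration(sigma_g_per_m2, efficiency=EFFICIENCY):
    """a_c in mm/s^2 for areal density sigma (g/m^2), per a_c = 9.08*eff/sigma."""
    return 9.08 * efficiency / sigma_g_per_m2

def lightness_number(sigma_g_per_m2, efficiency=EFFICIENCY):
    """Dimensionless ratio of sail acceleration to solar gravity at 1 AU."""
    return characteristic_acceleration(sigma_g_per_m2, efficiency) / SUN_GRAVITY_1AU

if __name__ == "__main__":
    sigma = 5.0  # g/m^2, an assumed example sail loading
    print(f"a_c    = {characteristic_acceleration(sigma):.2f} mm/s^2")
    print(f"lambda = {lightness_number(sigma):.3f}")
    # Face-on force on an assumed 100 m x 100 m sail at 1 AU:
    print(f"force  = {sail_force_per_area(0.0, 1.0) * 100 * 100 * 1e3:.1f} mN")
```

For the assumed 5 g/m2 loading this gives ac of roughly 1.6 mm/s2 and λ of about 0.28, i.e. a craft that can spiral inward or outward but cannot cancel solar gravity outright.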
Attitude control An active attitude control system (ACS) is essential for a sail craft to achieve and maintain a desired orientation. The required sail orientation changes slowly (often less than 1 degree per day) in interplanetary space, but much more rapidly in a planetary orbit. The ACS must be capable of meeting these orientation requirements. Attitude control is achieved by a relative shift between the craft's center of pressure and its center of mass. This can be achieved with control vanes, movement of individual sails, movement of a control mass, or altering reflectivity. Holding a constant attitude requires that the ACS maintain a net torque of zero on the craft. The total force and torque on a sail, or set of sails, is not constant along a trajectory. The force changes with solar distance and sail angle, which changes the billow in the sail and deflects some elements of the supporting structure, resulting in changes in the sail force and torque. Sail temperature also changes with solar distance and sail angle, which changes sail dimensions. The radiant heat from the sail changes the temperature of the supporting structure. Both factors affect total force and torque. To hold the desired attitude the ACS must compensate for all of these changes. Constraints In Earth orbit, solar pressure and drag pressure are typically equal at an altitude of about 800 km, which means that a sail craft would have to operate above that altitude. Sail craft must operate in orbits where their turn rates are compatible with the orbits, which is generally a concern only for spinning disk configurations. Sail operating temperatures are a function of solar distance, sail angle, reflectivity, and front and back emissivities. A sail can be used only where its temperature is kept within its material limits. Generally, a sail can be used rather close to the Sun, around 0.25 AU, or even closer if carefully designed for those conditions. Applications Potential applications for sail craft range throughout the Solar System, from near the Sun to the comet clouds beyond Neptune. The craft can make outbound voyages to deliver loads or to take up station keeping at the destination. They can be used to haul cargo and possibly also used for human travel. Inner planets For trips within the inner Solar System, they can deliver payloads and then return to Earth for subsequent voyages, operating as an interplanetary shuttle. For Mars in particular, the craft could provide economical means of routinely supplying operations on the planet. According to Jerome Wright, "The cost of launching the necessary conventional propellants from Earth are enormous for manned missions. Use of sailing ships could potentially save more than $10 billion in mission costs." Solar sail craft can approach the Sun to deliver observation payloads or to take up station keeping orbits. They can operate at 0.25 AU or closer. They can reach high orbital inclinations, including polar. Solar sails can travel to and from all of the inner planets. Trips to Mercury and Venus are for rendezvous and orbit entry for the payload. Trips to Mars could be either for rendezvous or swing-by with release of the payload for aerodynamic braking. Outer planets Minimum transfer times to the outer planets benefit from using an indirect transfer (solar swing-by). However, this method results in high arrival speeds. Slower transfers have lower arrival speeds. 
The minimum transfer time to Jupiter for ac of 1 mm/s2 with no departure velocity relative to Earth is 2 years when using an indirect transfer (solar swing-by). The arrival speed (V∞) is close to 17 km/s. For Saturn, the minimum trip time is 3.3 years, with an arrival speed of nearly 19 km/s. Oort Cloud/Sun's inner gravity focus The Sun's inner gravitational focus lies at a minimum distance of 550 AU from the Sun. It is the point at which light from distant objects, bent by the Sun's gravity as it passes, is brought to a focus, so that the Sun effectively serves as a very large telescope objective lens for the region of deep space on its far side. It has been proposed that an inflated beryllium sail starting at 0.05 AU from the Sun would gain an initial acceleration of 36.4 m/s2, and reach a speed of 0.00264c (about 950 km/s) in less than a day. Such proximity to the Sun could prove to be impractical in the near term because of the structural degradation of beryllium at high temperatures, the diffusion of hydrogen at high temperatures, and an electrostatic gradient, generated by the ionization of beryllium by the solar wind, that poses a burst risk. A revised perihelion of 0.1 AU would reduce the aforementioned temperature and solar flux exposure. Such a sail would take "Two and a half years to reach the heliopause, six and a half years to reach the Sun’s inner gravitational focus, with arrival at the inner Oort Cloud in no more than thirty years." "Such a mission could perform useful astrophysical observations en route, explore gravitational focusing techniques, and image Oort Cloud objects while exploring particles and fields in that region that are of galactic rather than solar origin." Satellites Robert L. Forward has commented that a solar sail could be used to modify the orbit of a satellite about the Earth. In the limit, a sail could be used to "hover" a satellite above one pole of the Earth. Spacecraft fitted with solar sails could also be placed in close orbits such that they are stationary with respect to either the Sun or the Earth, a type of satellite named by Forward a "statite". This is possible because the propulsion provided by the sail offsets the gravitational attraction of the Sun. Such an orbit could be useful for studying the properties of the Sun for long durations. Likewise, a solar sail-equipped spacecraft could also remain on station nearly above the polar solar terminator of a planet such as the Earth by tilting the sail at the appropriate angle needed to counteract the planet's gravity. In his book The Case for Mars, Robert Zubrin points out that the reflected sunlight from a large statite, placed near the polar terminator of the planet Mars, could be focused on one of the Martian polar ice caps to significantly warm the planet's atmosphere. Such a statite could be made from asteroid material. A group of satellites designed to act as sails has been proposed to measure Earth's energy imbalance, which is the most fundamental measure of the planet's rate of global warming. On-board state-of-the-art accelerometers would measure shifts in the pressure differential between incoming solar and outgoing thermal radiation on opposing sides of each satellite. Measurement accuracy has been projected to be better than that achievable with compact radiometric detectors.
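The statite idea described above can be tied back to the lightness number defined earlier: a Sun-facing sail "hovers" when its radiation acceleration exactly cancels the Sun's gravity, i.e. when λ = 1. The sketch below only rearranges the relations already quoted in the text (ac = 9.08 × efficiency / σ and λ = ac / 5.93); treating the whole craft, payload included, as a single areal density is a simplifying assumption.

```python
# Areal density at which a Sun-facing sail balances solar gravity (lambda = 1),
# using only the relations quoted earlier in the text:
#   a_c = 9.08 * efficiency / sigma   [mm/s^2, sigma in g/m^2]
#   lambda = a_c / 5.93
# Whole-craft loading (sail plus payload) is assumed; 100% efficiency is the
# idealized case, 90% the "actual sail" value given earlier.
def statite_sigma(efficiency=1.0):
    return 9.08 * efficiency / 5.93  # g/m^2 that gives lambda = 1

print(f"limiting sail loading: {statite_sigma():.2f} g/m^2 (perfect sail)")
print(f"with 90% efficiency:   {statite_sigma(0.9):.2f} g/m^2")
# ~1.53 g/m^2 (perfect) or ~1.38 g/m^2 (90% efficient): craft lighter than this
# can hover against solar gravity; heavier craft can only partially offset it,
# as in the statite and pole-sitter concepts described above.
```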
Trajectory corrections The MESSENGER probe orbiting Mercury used light pressure on its solar panels to perform fine trajectory corrections on the way to Mercury. By changing the angle of the solar panels relative to the Sun, the amount of solar radiation pressure was varied to adjust the spacecraft trajectory more delicately than possible with thrusters. Minor errors are greatly amplified by gravity assist maneuvers, so using radiation pressure to make very small corrections saved large amounts of propellant. Interstellar flight In the 1970s, Robert Forward proposed two beam-powered propulsion schemes using either lasers or masers to push giant sails to a significant fraction of the speed of light. In the science fiction novel Rocheworld, Forward described a light sail propelled by super lasers. As the starship neared its destination, the outer portion of the sail would detach. The outer sail would then refocus and reflect the lasers back onto a smaller, inner sail. This would provide braking thrust to stop the ship in the destination star system. Both methods pose monumental engineering challenges. The lasers would have to operate for years continuously at gigawatt strength. Forward's solution to this requires enormous solar panel arrays to be built at or near the planet Mercury. A planet-sized mirror or Fresnel lens would need to be located at several dozen astronomical units from the Sun to keep the lasers focused on the sail. The giant braking sail would have to act as a precision mirror to focus the braking beam onto the inner "deceleration" sail. A potentially easier approach would be to use a maser to drive a "solar sail" composed of a mesh of wires with the same spacing as the wavelength of the microwaves directed at the sail, since the manipulation of microwave radiation is somewhat easier than the manipulation of visible light. The hypothetical "Starwisp" interstellar probe design would use microwaves, rather than visible light, to push it. Masers spread out more rapidly than optical lasers owing to their longer wavelength, and so would not have as great an effective range. Masers could also be used to power a painted solar sail, a conventional sail coated with a layer of chemicals designed to evaporate when struck by microwave radiation. The momentum generated by this evaporation could significantly increase the thrust generated by solar sails, as a form of lightweight ablative laser propulsion. To further focus the energy on a distant solar sail, Forward proposed a lens designed as a large zone plate. This would be placed at a location between the laser or maser and the spacecraft. Another more physically realistic approach would be to use the light from the Sun to accelerate the spacecraft. The ship would first drop into an orbit making a close pass to the Sun, to maximize the solar energy input on the sail, then it would begin to accelerate away from the system using the light from the Sun. Acceleration will drop approximately as the inverse square of the distance from the Sun, and beyond some distance, the ship would no longer receive enough light to accelerate it significantly, but would maintain the final velocity attained. When nearing the target star, the ship could turn its sails toward it and begin to use the outward pressure of the destination star to decelerate. Rockets could augment the solar thrust. Similar solar sailing launch and capture were suggested for directed panspermia to expand life in other solar systems. 
Velocities of 0.05% of the speed of light could be obtained by solar sails carrying 10 kg payloads, using thin solar sail vehicles with effective areal densities of 0.1 g/m2, sails of 0.1 μm thickness, and sizes on the order of one square kilometer. Alternatively, swarms of 1 mm capsules could be launched on solar sails with radii of 42 cm, each carrying 10,000 capsules of a hundred million extremophile microorganisms to seed life in diverse target environments. Theoretical studies suggest relativistic speeds if the solar sail harnesses a supernova. Deorbiting artificial satellites Small solar sails have been proposed to accelerate the deorbiting of small artificial satellites from Earth orbits. Satellites in low Earth orbit can use a combination of solar pressure on the sail and increased atmospheric drag to accelerate satellite reentry. A de-orbit sail developed at Cranfield University is part of the UK satellite TechDemoSat-1, launched in 2014. The sail deployed at the end of the satellite's five-year useful life in May 2019. The sail's purpose is to bring the satellite out of orbit over a period of about 25 years. In July 2015, a British 3U CubeSat called DeorbitSail was launched into space with the purpose of testing a 16 m2 deorbit structure, but it ultimately failed to deploy it. A student 2U CubeSat mission called PW-Sat2, launched in December 2018, tested a 4 m2 deorbit sail. It successfully deorbited in February 2021. In June 2017, a second British 3U CubeSat called InflateSail deployed a 10 m2 deorbit sail at an altitude of . Also in June 2017, the 3U CubeSat URSAMAIOR was launched into low Earth orbit to test the deorbiting system ARTICA developed by Spacemind. The device, which occupies only 0.4U of the CubeSat, is to deploy a sail of 2.1 m2 to deorbit the satellite at the end of its operational life. Sail configurations IKAROS, launched in 2010, was the first practical solar sail vehicle. As of 2015, it was still under thrust, proving the practicality of a solar sail for long-duration missions. It is spin-deployed, with tip-masses in the corners of its square sail. The sail is made of thin polyimide film, coated with evaporated aluminium. It steers with electrically controlled liquid crystal panels. The sail slowly spins, and these panels turn on and off to control the attitude of the vehicle. When on, they diffuse light, reducing the momentum transfer to that part of the sail. When off, the sail reflects more light, transferring more momentum. In that way, they turn the sail. Thin-film solar cells are also integrated into the sail, powering the spacecraft. The design is very reliable, because spin deployment, which is preferable for large sails, simplified the mechanisms needed to unfold the sail, and the LCD panels have no moving parts. Parachutes have very low mass, but a parachute is not a workable configuration for a solar sail. Analysis shows that a parachute configuration would collapse from the forces exerted by shroud lines, since radiation pressure does not behave like aerodynamic pressure, and would not act to keep the parachute open. The highest thrust-to-mass designs for ground-assembled deployable structures are square sails with the masts and guy lines on the dark side of the sail. Usually there are four masts that spread the corners of the sail, and a mast in the center to hold guy-wires. One of the largest advantages is that there are no hot spots in the rigging from wrinkling or bagging, and the sail protects the structure from the Sun.
This form can, therefore, go close to the Sun for maximum thrust. Most designs steer with small moving sails on the ends of the spars. In the 1970s JPL studied many rotating blade and ring sails for a mission to rendezvous with Halley's Comet. The intention was to stiffen the structures using angular momentum, eliminating the need for struts, and saving mass. In all cases, surprisingly high tensile strength was needed to cope with dynamic loads. Weaker sails would ripple or oscillate when the sail's attitude changed, and the oscillations would add and cause structural failure. The difference in the thrust-to-mass ratio between practical designs was almost nil, and the static designs were easier to control. JPL's reference design was called the "heliogyro". It had plastic-film blades deployed from rollers and held out by centrifugal forces as it rotated. The spacecraft's attitude and direction were to be completely controlled by changing the angle of the blades in various ways, similar to the cyclic and collective pitch of a helicopter. Although the design had no mass advantage over a square sail, it remained attractive because the method of deploying the sail was simpler than a strut-based design. The CubeSail (UltraSail) is an active project aiming to deploy a heliogyro sail. The heliogyro design resembles helicopter blades. It is faster to manufacture because the lightweight blades are stiffened centrifugally, and the long, light blades make the design efficient in both cost and achievable velocity. Unlike the square and spinning-disk designs, a heliogyro is easier to deploy because the blades are compacted on a reel; they roll out as they deploy after ejection from the spacecraft. As the heliogyro travels through space, the system spins because of the centrifugal acceleration. Payloads are placed at the center of gravity to even out the distribution of weight and ensure stable flight. JPL also investigated "ring sails", or spinning disk sails, panels attached to the edge of a rotating spacecraft. The panels would have slight gaps, about one to five percent of the total area. Lines would connect the edge of one sail to the other. Masses in the middles of these lines would pull the sails taut against the coning caused by the radiation pressure. JPL researchers said that this might be an attractive sail design for large crewed structures. The inner ring, in particular, might be made to have artificial gravity roughly equal to the gravity on the surface of Mars. A solar sail can serve a dual function as a high-gain antenna. Designs differ, but most modify the metalization pattern to create a holographic monochromatic lens or mirror in the radio frequencies of interest, including visible light. Reflective sail making Materials The most common material in current designs is a thin layer of aluminum coating on a polymer (plastic) sheet, such as aluminized 2 μm Kapton film. The polymer provides mechanical support as well as flexibility, while the thin metal layer provides the reflectivity. Such material resists the heat of a pass close to the Sun and still remains reasonably strong. The aluminum reflecting film is on the Sun side. The sails of Cosmos 1 were made of aluminized PET film (Mylar). Eric Drexler developed a concept for a sail in which the polymer was removed. He proposed very high thrust-to-mass solar sails, and made prototypes of the sail material.
His sail would use panels of thin aluminium film (30 to 100 nanometres thick) supported by a tensile structure. The sail would rotate and would have to be continually under thrust. He made and handled samples of the film in the laboratory, but the material was too delicate to survive folding, launch, and deployment. The design planned to rely on space-based production of the film panels, joining them to a deployable tension structure. Sails in this class would offer high area per unit mass and hence accelerations up to "fifty times higher" than designs based on deployable plastic films. The material developed for the Drexler solar sail was a thin aluminium film with a baseline thickness of 0.1 μm, to be fabricated by vapor deposition in a space-based system. Drexler used a similar process to prepare films on the ground. As anticipated, these films demonstrated adequate strength and robustness for handling in the laboratory and for use in space, but not for folding, launch, and deployment. Research by Geoffrey Landis in 1998–1999, funded by the NASA Institute for Advanced Concepts, showed that various materials such as alumina for laser lightsails and carbon fiber for microwave-pushed lightsails were superior sail materials to the previously standard aluminium or Kapton films. In 2000, Energy Science Laboratories developed a new carbon fiber material that might be useful for solar sails. The material is over 200 times thicker than conventional solar sail designs, but it is so porous that it has the same mass. The rigidity and durability of this material could make solar sails that are significantly sturdier than plastic films. The material could self-deploy and should withstand higher temperatures. There has been some theoretical speculation about using molecular manufacturing techniques to create advanced, strong, hyper-light sail material, based on nanotube mesh weaves, where the weave "spaces" are less than half the wavelength of light impinging on the sail. While such materials have so far only been produced in laboratory conditions, and the means for manufacturing such material on an industrial scale are not yet available, such materials could mass less than 0.1 g/m2, making them lighter than any current sail material by a factor of at least 30. For comparison, 5 micrometre thick Mylar sail material masses 7 g/m2, aluminized Kapton films have a mass of as much as 12 g/m2, and Energy Science Laboratories' new carbon fiber material masses 3 g/m2. The least dense metal is lithium, about 5 times less dense than aluminium. Fresh, unoxidized surfaces are reflective. At a thickness of 20 nm, lithium has an area density of 0.011 g/m2. A high-performance sail could be made of lithium alone at 20 nm (no emission layer). It would have to be fabricated in space and not used to approach the Sun. In the limit, a sail craft might be constructed with a total areal density of around 0.02 g/m2, giving it a lightness number of 67 and ac of about 400 mm/s2. Magnesium and beryllium are also potential materials for high-performance sails. These three metals can be alloyed with each other and with aluminium. Reflection and emissivity layers Aluminium is the common choice for the reflection layer. It typically has a thickness of at least 20 nm, with a reflectivity of 0.88 to 0.90. Chromium is a good choice for the emission layer on the face away from the Sun. It can readily provide emissivity values of 0.63 to 0.73 for thicknesses from 5 to 20 nm on plastic film.
Usable emissivity values are empirical because thin-film effects dominate; bulk emissivity values do not hold in these cases because the material is much thinner than the emitted wavelengths. Fabrication Sails are fabricated on Earth on long tables where ribbons are unrolled and joined to create the sails. Sail material needed to have as little mass as possible because the craft would have to be carried into orbit by the shuttle. Thus, these sails are packed, launched, and unfurled in space. In the future, fabrication could take place in orbit inside large frames that support the sail. This would result in lower-mass sails and elimination of the risk of deployment failure. Operations Changing orbits Sailing operations are simplest in interplanetary orbits, where altitude changes are done at low rates. For outward bound trajectories, the sail force vector is oriented forward of the Sun line, which increases orbital energy and angular momentum, resulting in the craft moving farther from the Sun. For inward trajectories, the sail force vector is oriented behind the Sun line, which decreases orbital energy and angular momentum, resulting in the craft moving in toward the Sun. Only the Sun's gravity pulls the craft toward the Sun; there is no analog to a sailboat's tacking to windward. To change orbital inclination, the force vector is turned out of the plane of the velocity vector. In orbits around planets or other bodies, the sail is oriented so that its force vector has a component along the velocity vector, either in the direction of motion for an outward spiral, or against the direction of motion for an inward spiral. Trajectory optimizations can often require intervals of reduced or zero thrust. This can be achieved by rolling the craft around the Sun line with the sail set at an appropriate angle to reduce or remove the thrust. Swing-by maneuvers A close solar passage can be used to increase a craft's energy. The increased radiation pressure combines with the efficacy of being deep in the Sun's gravity well to substantially increase the energy for runs to the outer Solar System. The optimal approach to the Sun is done by increasing the orbital eccentricity while keeping the energy level as high as practical. The minimum approach distance is a function of sail angle, thermal properties of the sail and other structure, load effects on structure, and sail optical characteristics (reflectivity and emissivity). A close passage can result in substantial optical degradation. Required turn rates can increase substantially for a close passage. A sail craft arriving at a star can use a close passage to reduce energy, which also applies to a sail craft on a return trip from the outer Solar System. A lunar swing-by can have important benefits for trajectories leaving from or arriving at Earth. This can reduce trip times, especially in cases where the sail is heavily loaded. A swing-by can also be used to obtain favorable departure or arrival directions relative to Earth. A planetary swing-by could also be employed, similarly to what is done with coasting spacecraft, but good alignments might not exist due to the requirements for overall optimization of the trajectory. Laser powered The following table lists some example concepts using beamed laser propulsion as proposed by the physicist Robert L.
Forward. A separate tabulation, an interstellar travel catalog using photogravitational assists for a full stop, notes that successive assists at α Cen A and B could allow travel times of about 75 yr to both stars; the lightsail in that catalog has a nominal mass-to-surface ratio (σnom) of 8.6×10−4 gram m−2 for a nominal graphene-class sail, an area of about 10^5 m2 = (316 m)2, and a velocity of up to 37,300 km s−1 (12.5% c). Projects operating or completed Attitude (orientation) control Both the Mariner 10 mission, which flew by the planets Mercury and Venus, and the MESSENGER mission to Mercury demonstrated the use of solar pressure as a method of attitude control in order to conserve attitude-control propellant. Hayabusa also used solar pressure on its solar paddles as a method of attitude control to compensate for its broken reaction wheels and chemical thruster. MTSAT-1R (Multi-Functional Transport Satellite)'s solar sail counteracts the torque produced by sunlight pressure on the solar array. The trim tab on the solar array makes small adjustments to the torque balance. Ground deployment tests NASA has successfully tested deployment technologies on small scale sails in vacuum chambers. In 1999, a full-scale deployment of a solar sail was tested on the ground at DLR/ESA in Cologne. Suborbital tests Cosmos 1, a joint private project between the Planetary Society, Cosmos Studios and the Russian Academy of Sciences, attempted to launch a suborbital prototype vehicle in 2005, which was destroyed due to a rocket failure. A 15-meter-diameter solar sail (SSP, solar sail sub payload, soraseiru sabupeiro-do) was launched together with ASTRO-F on an M-V rocket on February 21, 2006, and made it to orbit. It deployed from the stage, but opened incompletely. On August 9, 2004, the Japanese ISAS successfully deployed two prototype solar sails from a sounding rocket. A clover-shaped sail was deployed at 122 km altitude and a fan-shaped sail was deployed at 169 km altitude. Both sails used 7.5-micrometer film. The experiment purely tested the deployment mechanisms, not propulsion. Znamya 2 On February 4, 1993, the Znamya 2, a 20-meter wide aluminized-mylar reflector, was successfully deployed from the Russian Mir space station. It was the first thin-film reflector of its type successfully deployed in space using a mechanism based on centrifugal force. Although the deployment succeeded, propulsion was not demonstrated. A second test in 1999, Znamya 2.5, failed to deploy properly. IKAROS 2010 On 21 May 2010, the Japan Aerospace Exploration Agency (JAXA) launched the world's first interplanetary solar sail spacecraft "IKAROS" (Interplanetary Kite-craft Accelerated by Radiation Of the Sun) to Venus. Using a new solar-photon propulsion method, it was the first true solar sail spacecraft fully propelled by sunlight, and was the first spacecraft to succeed in solar sail flight. JAXA successfully tested IKAROS in 2010. The goal was to deploy and control the sail and, for the first time, to determine the minute orbit perturbations caused by light pressure. Orbit determination was done by the nearby AKATSUKI probe from which IKAROS detached after both had been brought into a transfer orbit to Venus. The total effect over the six-month flight was 100 m/s. Until 2010, no solar sails had been successfully used in space as primary propulsion systems. On 21 May 2010, the Japan Aerospace Exploration Agency (JAXA) launched the IKAROS spacecraft, which deployed a 200 m2 polyimide experimental solar sail on June 10.
In July, the next phase of the mission, the demonstration of acceleration by radiation, began. On 9 July 2010, it was verified that IKAROS had been collecting radiation from the Sun and had begun photon acceleration, based on newly calculated range-and-range-rate (RARR) orbit determination combined with Doppler measurements of the speed of IKAROS relative to Earth collected since before deployment. The data showed that IKAROS appears to have been solar-sailing since 3 June when it deployed the sail. IKAROS has a diagonal spinning square sail 14×14 m (196 m2) made of a thin sheet of polyimide. The polyimide sheet had a mass of about 10 grams per square metre. A thin-film solar array is embedded in the sail. Eight LCD panels are embedded in the sail, whose reflectance can be adjusted for attitude control. IKAROS spent six months traveling to Venus, and then began a three-year journey to the far side of the Sun. NanoSail-D 2010 A team from the NASA Marshall Space Flight Center (Marshall), along with a team from the NASA Ames Research Center, developed a solar sail mission called NanoSail-D, which was lost in a launch failure aboard a Falcon 1 rocket on 3 August 2008. The second backup version, NanoSail-D2, also sometimes called simply NanoSail-D, was launched with FASTSAT on a Minotaur IV on November 19, 2010, becoming NASA's first solar sail deployed in low Earth orbit. The objectives of the mission were to test sail deployment technologies, and to gather data about the use of solar sails as a simple, "passive" means of de-orbiting dead satellites and space debris. The NanoSail-D structure was made of aluminium and plastic, with the spacecraft massing less than . The sail has about of light-catching surface. After some initial problems with deployment, the solar sail was deployed and over the course of its 240-day mission reportedly produced a "wealth of data" concerning the use of solar sails as passive deorbit devices. NASA launched the second NanoSail-D unit stowed inside the FASTSAT satellite on the Minotaur IV on November 19, 2010. The ejection date from the FASTSAT microsatellite was planned for December 6, 2010, but deployment only occurred on January 20, 2011. Planetary Society LightSail Projects On June 21, 2005, a joint private project between the Planetary Society, Cosmos Studios and the Russian Academy of Sciences launched a prototype sail, Cosmos 1, from a submarine in the Barents Sea, but the Volna rocket failed, and the spacecraft failed to reach orbit. They intended to use the sail to gradually raise the spacecraft to a higher Earth orbit over a mission duration of one month. The launch attempt sparked public interest, according to Louis Friedman. Despite the failed launch attempt of Cosmos 1, the Planetary Society received applause from the space community for its efforts, and the attempt rekindled interest in solar sail technology. On Carl Sagan's 75th birthday (November 9, 2009) the Planetary Society announced plans to make three further attempts, dubbed LightSail-1, -2, and -3. The new design will use a 32 m2 Mylar sail, deployed in four triangular segments like NanoSail-D. The launch configuration is a 3U CubeSat format, and as of 2015, it was scheduled as a secondary payload for a 2016 flight on the first SpaceX Falcon Heavy. "LightSail-1" was launched on 20 May 2015. The purpose of the test was to allow a full checkout of the satellite's systems in advance of LightSail-2.
Its deployment orbit was not high enough to escape Earth's atmospheric drag and demonstrate true solar sailing. "LightSail-2" was launched on 25 June 2019, and deployed into a much higher low Earth orbit. Its solar sails were deployed on 23 July 2019. It reentered the atmosphere on 17 November 2022. NEA Scout The Near-Earth Asteroid Scout (NEA Scout) was a mission jointly developed by NASA's Marshall Space Flight Center (MSFC) and the Jet Propulsion Laboratory (JPL), consisting of a controllable low-cost CubeSat solar sail spacecraft capable of encountering near-Earth asteroids (NEA). Four booms were to deploy, unfurling the aluminized polyimide solar sail. In 2015, NASA announced it had selected NEA Scout to launch as one of several secondary payloads aboard Artemis 1, the first flight of the agency's heavy-lift SLS launch vehicle. However, the craft was considered lost after it failed to establish communications shortly after launch in 2022. Advanced Composite Solar Sail System (ACS3) The NASA Advanced Composite Solar Sail System (ACS3) is a demonstration of solar sail technology for future small spacecraft. It was selected in 2019 by NASA's CubeSat Launch Initiative (CSLI) to be launched as part of the ELaNa program. ACS3 consists of a 12U CubeSat small satellite (23 cm x 23 cm x 34 cm; 16 kg) that unfolds a square 80 m2 solar sail made of a polyethylene naphthalate film coated on one side with aluminum for reflectivity and on the other side with chromium to increase thermal emissivity. The sail is held by a novel unfolding system of four 7 m-long carbon fiber reinforced polymer booms that roll up for storage. ACS3 was launched on 23 April 2024 on the Electron "Beginning Of The Swarm" mission. The ACS3 successfully made contact with ground stations following deployment in early May. The solar sail was confirmed as successfully operational by mission operators on 29 August 2024. Projects proposed or cancelled or not selected Despite the losses of Cosmos 1 and NanoSail-D, which were due to failures of their launchers, scientists and engineers around the world remain encouraged and continue to work on solar sails. While most direct applications created so far intend to use the sails as inexpensive modes of cargo transport, some scientists are investigating the possibility of using solar sails as a means of transporting humans. This goal is strongly related to the management of very large (i.e. well above 1 km2) surfaces in space and to advances in sail making. Development of solar sails for crewed space flight is still in its infancy. Sunjammer 2015 A technology demonstration sail craft, dubbed Sunjammer, was in development with the intent to prove the viability and value of sailing technology. Sunjammer had a square sail, wide on each side, giving it an effective area of . It would have traveled from the Sun-Earth Lagrangian point from Earth to a distance of . The demonstration was expected to launch on a Falcon 9 in January 2015. It would have been a secondary payload, released after the placement of the DSCOVR climate satellite at the L1 point. Citing a lack of confidence in the ability of its contractor L'Garde to deliver, NASA cancelled the mission in October 2014.
OKEANOS OKEANOS (Outsized Kite-craft for Exploration and Astronautics in the Outer Solar System) was a proposed mission concept by Japan's JAXA to Jupiter's Trojan asteroids using a hybrid solar sail for propulsion; the sail would have been covered with thin solar panels to power an ion engine. In-situ analysis of the collected samples would have been performed either by direct contact or by using a lander carrying a high-resolution mass spectrometer. A lander and a sample-return to Earth were options under study. The OKEANOS Jupiter Trojan Asteroid Explorer was a finalist for Japan's ISAS 2nd Large-class mission to be launched in the late 2020s. However, it was not selected. Solar Cruiser In August 2019, NASA awarded the Solar Cruiser team $400,000 for nine-month mission concept studies. The spacecraft would have a solar sail and would orbit the Sun in a polar orbit, while the coronagraph instrument would enable simultaneous measurements of the Sun's magnetic field structure and the velocity of coronal mass ejections. If selected for further development, it would have launched in 2025. However, Solar Cruiser was not approved to advance to phase C of its development cycle and was subsequently discontinued. Projects still in development or unknown status Gossamer deorbit sail The European Space Agency (ESA) has a proposed deorbit sail, named "Gossamer", intended to be used to accelerate the deorbiting of small (less than ) artificial satellites from low Earth orbits. The launch mass is with a launch volume of only . Once deployed, the sail would expand to and would use a combination of solar pressure on the sail and increased atmospheric drag to accelerate satellite reentry. Breakthrough Starshot The well-funded Breakthrough Starshot project, announced on April 12, 2016, aims to develop a fleet of 1000 light sail nanocraft carrying miniature cameras, propelled by ground-based lasers, and to send them to Alpha Centauri at 20% of the speed of light. The trip would take 20 years. In popular culture Cordwainer Smith gives a description of solar-sail-powered spaceships in "The Lady Who Sailed The Soul", first published in April 1960. Jack Vance wrote a short story about a training mission on a solar-sail-powered spaceship in "Sail 25", published in 1961. Arthur C. Clarke and Poul Anderson (writing as Winston P. Sanders) independently published stories featuring solar sails, both stories titled "Sunjammer," in 1964. Clarke retitled his story "The Wind from the Sun" when it was reprinted, in order to avoid confusion. In Larry Niven and Jerry Pournelle's 1974 novel The Mote in God's Eye, aliens are discovered when their laser-sail propelled probe enters human space. A similar technology was the theme in the Star Trek: Deep Space Nine episode "Explorers". In the episode, Lightships are described as an ancient technology used by Bajorans to travel beyond their solar system by using light from the Bajoran sun and specially constructed sails to propel them through space. In the 2002 Star Wars film Attack of the Clones, the main villain Count Dooku was seen using a spacecraft with solar sails. In the 2009 film Avatar, the spacecraft which transports the protagonist Jake Sully to the Alpha Centauri system, the ISV Venture Star, uses solar sails as a means of propulsion to accelerate the vehicle away from the Earth towards Alpha Centauri.
In the third season of Apple TV+'s alternate history TV show For All Mankind, the fictional NASA spaceship Sojourner 1 utilises solar sails for additional propulsion on its way to Mars. In the final episode of the first season of the 2024 Netflix TV show 3 Body Problem, one of the protagonists, Will Downing, has his cryogenically frozen brain launched into space toward the oncoming Trisolarian spaceship, using solar sails and nuclear pulse propulsion to accelerate it to a fraction of the speed of light. See also References Bibliography G. Vulpetti, Fast Solar Sailing: Astrodynamics of Special Sailcraft Trajectories, Space Technology Library Vol. 30, Springer, August 2012, (Hardcover) https://www.springer.com/engineering/mechanical+engineering/book/978-94-007-4776-0, (Kindle-edition), ASIN: B00A9YGY4I G. Vulpetti, L. Johnson, G. L. Matloff, Solar Sails: A Novel Approach to Interplanetary Flight, Springer, August 2015, J. L. Wright, Space Sailing, Gordon and Breach Science Publishers, London, 1992; Wright was involved with JPL's effort to use a solar sail for a rendezvous with Halley's comet. NASA/CR 2002-211730, Chapter IV— presents an optimized escape trajectory via the H-reversal sailing mode G. Vulpetti, The Sailcraft Splitting Concept, JBIS, Vol. 59, pp. 48–53, February 2006 G. L. Matloff, Deep-Space Probes: To the Outer Solar System and Beyond, 2nd ed., Springer-Praxis, UK, 2005, T. Taylor, D. Robinson, T. Moton, T. C. Powell, G. Matloff, and J. Hall, "Solar Sail Propulsion Systems Integration and Analysis (for Option Period)", Final Report for NASA/MSFC, Contract No. H-35191D Option Period, Teledyne Brown Engineering Inc., Huntsville, AL, May 11, 2004 G. Vulpetti, "Sailcraft Trajectory Options for the Interstellar Probe: Mathematical Theory and Numerical Results", the Chapter IV of NASA/CR-2002-211730, The Interstellar Probe (ISP): Pre-Perihelion Trajectories and Application of Holography, June 2002 G. Vulpetti, Sailcraft-Based Mission to The Solar Gravitational Lens, STAIF-2000, Albuquerque (New Mexico, USA), 30 January – 3 February 2000 G. Vulpetti, "General 3D H-Reversal Trajectories for High-Speed Sailcraft", Acta Astronautica, Vol. 44, No. 1, pp. 67–73, 1999 C. R. McInnes, Solar Sailing: Technology, Dynamics, and Mission Applications, Springer-Praxis Publishing Ltd, Chichester, UK, 1999, Genta, G., and Brusa, E., "The AURORA Project: a New Sail Layout", Acta Astronautica, 44, No. 2–4, pp. 141–146 (1999) S. Scaglione and G. Vulpetti, "The Aurora Project: Removal of Plastic Substrate to Obtain an All-Metal Solar Sail", special issue of Acta Astronautica, vol. 44, No. 2–4, pp. 147–150, 1999 External links "Deflecting Asteroids" by Gregory L. Matloff, IEEE Spectrum, April 2012 Planetary Society's solar sailing project The Solar Photon Sail Comes of Age by Gregory L. Matloff NASA Mission Site for NanoSail-D NanoSail-D mission: Dana Coulter, "NASA to Attempt Historic Solar Sail Deployment", NASA, June 28, 2008 Far-out Pathways to Space: Solar Sails from NASA Solar Sails Comprehensive collection of solar sail information and references, maintained by Benjamin Diedrich. Good diagrams showing how light sailors must tack. U3P Multilingual site with news and flight simulators ISAS Deployed Solar Sail Film in Space Suggestion of a solar sail with roller reefing, hybrid propulsion and a central docking and payload station.
Interview with NASA's JPL about solar sail technology and missions Website with technical pdf-files about solar-sailing, including NASA report and lectures at Aerospace Engineering School of Rome University Advanced Solar- and Laser-pushed Lightsail Concepts www.aibep.org: Official site of American Institute of Beamed Energy Propulsion Space Sailing Sailing ship concepts, operations, and history of concept Bernd Dachwald's Website Broad information on sail propulsion and missions Spacecraft attitude control Spacecraft propulsion Spacecraft components Interstellar travel Microwave technology Photonics Japanese inventions
0.769778
0.997366
0.76775
Reynolds transport theorem
In differential calculus, the Reynolds transport theorem (also known as the Leibniz–Reynolds transport theorem), or simply the Reynolds theorem, named after Osborne Reynolds (1842–1912), is a three-dimensional generalization of the Leibniz integral rule. It is used to recast time derivatives of integrated quantities and is useful in formulating the basic equations of continuum mechanics. Consider integrating f = f(x, t) over the time-dependent region Ω(t) that has boundary ∂Ω(t), then taking the derivative with respect to time: d/dt ∫_Ω(t) f dV. If we wish to move the derivative into the integral, there are two issues: the time dependence of f, and the introduction of and removal of space from Ω due to its dynamic boundary. Reynolds transport theorem provides the necessary framework. General form Reynolds transport theorem can be expressed as follows: d/dt ∫_Ω(t) f dV = ∫_Ω(t) ∂f/∂t dV + ∫_∂Ω(t) (v_b · n) f dA, in which n(x, t) is the outward-pointing unit normal vector, x is a point in the region and is the variable of integration, dV and dA are volume and surface elements at x, and v_b(x, t) is the velocity of the area element (not the flow velocity). The function f may be tensor-, vector- or scalar-valued. Note that the integral on the left hand side is a function solely of time, and so the total derivative has been used. Form for a material element In continuum mechanics, this theorem is often used for material elements. These are parcels of fluids or solids which no material enters or leaves. If Ω(t) is a material element then there is a velocity function v = v(x, t), and the boundary elements obey v_b · n = v · n. This condition may be substituted to obtain: d/dt ∫_Ω(t) f dV = ∫_Ω(t) ∂f/∂t dV + ∫_∂Ω(t) (v · n) f dA. A special case If we take Ω to be constant with respect to time, then v_b = 0 and the identity reduces to d/dt ∫_Ω f dV = ∫_Ω ∂f/∂t dV, as expected. (This simplification is not possible if the flow velocity is incorrectly used in place of the velocity of an area element.) Interpretation and reduction to one dimension The theorem is the higher-dimensional extension of differentiation under the integral sign and reduces to that expression in some cases. Suppose f is independent of y and z, and that Ω(t) is a unit square in the yz-plane with x limits a(t) and b(t). Then Reynolds transport theorem reduces to d/dt ∫_a(t)^b(t) f(x, t) dx = ∫_a(t)^b(t) ∂f/∂t dx + b′(t) f(b(t), t) − a′(t) f(a(t), t), which, up to swapping x and t, is the standard expression for differentiation under the integral sign. See also References External links Osborne Reynolds, Collected Papers on Mechanical and Physical Subjects, in three volumes, published circa 1903, now fully and freely available in digital format: Volume 1, Volume 2, Volume 3. Aerodynamics Articles containing proofs Chemical engineering Continuum mechanics Eponymous theorems of physics Equations of fluid dynamics Fluid dynamics Fluid mechanics Mechanical engineering
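Because the one-dimensional reduction above is just the Leibniz integral rule, it can be checked numerically. The sketch below is an illustration only: the integrand f(x, t) = sin(xt) and the moving limits a(t) = t^2, b(t) = 1 + t are arbitrary choices, and the quadrature and finite-difference step sizes are not tuned for precision.

```python
# Numerical check of the 1-D reduction of the Reynolds transport theorem
# (the Leibniz integral rule), using an arbitrary smooth example:
#   f(x, t) = sin(x*t),  a(t) = t^2,  b(t) = 1 + t.
import math

def f(x, t):      return math.sin(x * t)
def df_dt(x, t):  return x * math.cos(x * t)
def a(t):         return t * t
def b(t):         return 1.0 + t
def da_dt(t):     return 2.0 * t
def db_dt(t):     return 1.0

def integrate(g, lo, hi, n=2000):
    """Simple midpoint-rule quadrature."""
    h = (hi - lo) / n
    return sum(g(lo + (i + 0.5) * h) for i in range(n)) * h

def lhs(t, eps=1e-5):
    """d/dt of the integral, estimated by a central finite difference."""
    ip = integrate(lambda x: f(x, t + eps), a(t + eps), b(t + eps))
    im = integrate(lambda x: f(x, t - eps), a(t - eps), b(t - eps))
    return (ip - im) / (2 * eps)

def rhs(t):
    """Integral of df/dt plus the two moving-boundary terms."""
    interior = integrate(lambda x: df_dt(x, t), a(t), b(t))
    return interior + db_dt(t) * f(b(t), t) - da_dt(t) * f(a(t), t)

t0 = 0.7
print(lhs(t0), rhs(t0))  # the two values agree to several decimal places
```

Running it prints two nearly identical numbers: the finite-difference derivative of the moving integral and the right-hand side of the reduced theorem.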
0.773902
0.992029
0.767733
Proprioception
Proprioception is the sense of self-movement, force, and body position. Proprioception is mediated by proprioceptors, sensory receptors located within muscles, tendons, and joints. Most animals possess multiple subtypes of proprioceptors, which detect distinct kinesthetic parameters, such as joint position, movement, and load. Although all mobile animals possess proprioceptors, the structure of the sensory organs can vary across species. Proprioceptive signals are transmitted to the central nervous system, where they are integrated with information from other sensory systems, such as the visual system and the vestibular system, to create an overall representation of body position, movement, and acceleration. In many animals, sensory feedback from proprioceptors is essential for stabilizing body posture and coordinating body movement. System overview In vertebrates, limb movement and velocity (muscle length and its rate of change) are encoded by one group of sensory neurons (type Ia sensory fibers), while static muscle length is encoded by another group (type II neurons). These two types of sensory neurons compose muscle spindles. There is a similar division of encoding in invertebrates; different subgroups of neurons of the chordotonal organ encode limb position and velocity. To determine the load on a limb, vertebrates use sensory neurons in the Golgi tendon organs: type Ib afferents. These proprioceptors are activated at given muscle forces, which indicate the resistance that the muscle is experiencing. Similarly, invertebrates have a mechanism to determine limb load: the campaniform sensilla. These proprioceptors are active when a limb experiences resistance. A third role for proprioceptors is to determine when a joint is at a specific position. In vertebrates, this is accomplished by Ruffini endings and Pacinian corpuscles. These proprioceptors are activated when the joint is at a threshold position, usually at the extremes of joint position. Invertebrates use hair plates to accomplish this: fields of bristles located within joints that detect the relative movement of limb segments through the deflection of the associated cuticular hairs. Reflexes The sense of proprioception is ubiquitous across mobile animals and is essential for the motor coordination of the body. Proprioceptors can form reflex circuits with motor neurons to provide rapid feedback about body and limb position. These mechanosensory circuits are important for flexibly maintaining posture and balance, especially during locomotion. For example, consider the stretch reflex, in which stretch across a muscle is detected by a sensory receptor (e.g., muscle spindle, chordotonal neurons), which activates a motor neuron to induce muscle contraction and oppose the stretch. During locomotion, sensory neurons can reverse their activity when stretched, to promote rather than oppose movement. Conscious and nonconscious In humans, a distinction is made between conscious proprioception and nonconscious proprioception: Conscious proprioception is communicated by the dorsal column-medial lemniscus pathway to the cerebrum. Nonconscious proprioception is communicated primarily via the dorsal spinocerebellar tract and ventral spinocerebellar tract, to the cerebellum. A nonconscious reaction is seen in the human proprioceptive reflex, or righting reflex: in the event that the body tilts in any direction, the person will cock their head back to level the eyes against the horizon.
This is seen even in infants as soon as they gain control of their neck muscles. This control comes from the cerebellum, the part of the brain affecting balance. Mechanisms Proprioception is mediated by mechanically sensitive proprioceptor neurons distributed throughout an animal's body. Most vertebrates possess three basic types of proprioceptors: muscle spindles, which are embedded in skeletal muscles, Golgi tendon organs, which lie at the interface of muscles and tendons, and joint receptors, which are low-threshold mechanoreceptors embedded in joint capsules. Many invertebrates, such as insects, also possess three basic proprioceptor types with analogous functional properties: chordotonal neurons, campaniform sensilla, and hair plates. The initiation of proprioception is the activation of a proprioceptor in the periphery. The proprioceptive sense is believed to be composed of information from sensory neurons located in the inner ear (motion and orientation) and in the stretch receptors located in the muscles and the joint-supporting ligaments (stance). There are specific nerve receptors for this form of perception termed "proprioceptors", just as there are specific receptors for pressure, light, temperature, sound, and other sensory experiences. Proprioceptors are sometimes known as adequate stimuli receptors. Members of the transient receptor potential family of ion channels have been found to be important for proprioception in fruit flies, nematode worms, African clawed frogs, and zebrafish. PIEZO2, a nonselective cation channel, has been shown to underlie the mechanosensitivity of proprioceptors in mice. Humans with loss-of-function mutations in the PIEZO2 gene exhibit specific deficits in joint proprioception, as well as vibration and touch discrimination, suggesting that the PIEZO2 channel is essential for mechanosensitivity in some proprioceptors and low-threshold mechanoreceptors. Although it was known that finger kinesthesia relies on skin sensation, recent research has found that kinesthesia-based haptic perception relies strongly on the forces experienced during touch. This research allows the creation of "virtual", illusory haptic shapes with different perceived qualities. Anatomy Proprioception of the head stems from the muscles innervated by the trigeminal nerve, where the general somatic afferent fibers pass without synapsing in the trigeminal ganglion (first-order sensory neuron), reaching the mesencephalic tract and the mesencephalic nucleus of the trigeminal nerve. Proprioception of limbs often occurs due to receptors in connective tissue near joints. Function Stability An important role for proprioception is to allow an animal to stabilize itself against perturbations. For instance, for a person to walk or stand upright, they must continuously monitor their posture and adjust muscle activity as needed to provide balance. Similarly, when walking on unfamiliar terrain or even tripping, the person must adjust the output of their muscles quickly based on estimated limb position and velocity. Proprioceptor reflex circuits are thought to play an important role in allowing fast and unconscious execution of these behaviors. To make control of these behaviors efficient, proprioceptors are also thought to regulate reciprocal inhibition in muscles, leading to agonist-antagonist muscle pairs.
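The encoding scheme and stretch reflex described in the preceding sections can be made concrete with a deliberately simplified toy model: a Ia-like channel that signals length plus its rate of change, a II-like channel that signals static length, and a reflex that drives contraction in proportion to the Ia signal. None of the gains, the time step, or the first-order "muscle" below come from the literature; they are assumptions chosen only to illustrate the idea of a proprioceptive negative-feedback loop.

```python
# Toy illustration of proprioceptive encoding and a stretch reflex.
# Ia-like rate ~ length + d(length)/dt; II-like rate ~ static length only.
# A crude reflex loop shortens the muscle in proportion to the Ia signal.
# All gains and the update rule are invented for illustration.

def simulate(perturbation=1.0, steps=200, dt=0.01,
             gain_reflex=5.0, k_length=1.0, k_velocity=0.05):
    length = perturbation          # initial stretch away from rest (arbitrary units)
    prev_length = length
    trace = []
    for _ in range(steps):
        velocity = (length - prev_length) / dt
        ia_rate = max(0.0, k_length * length + k_velocity * velocity)  # dynamic channel
        ii_rate = max(0.0, k_length * length)                          # static channel
        prev_length = length
        # Reflex: motor drive proportional to the Ia signal shortens the muscle;
        # the velocity term damps the drive while shortening is already underway.
        length = max(0.0, length - gain_reflex * ia_rate * dt)
        trace.append((length, ia_rate, ii_rate))
    return trace

final_length, final_ia, final_ii = simulate()[-1]
print(f"residual stretch after the reflex response: {final_length:.4f}")
```

In this toy loop the imposed stretch decays back toward the rest length, which is the qualitative behavior of the stretch reflex described above.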
Planning and refining movements When planning complex movements such as reaching or grooming, an animal must consider the current position and velocity of its limb and use that information to adjust dynamics to target a final position. If the animal's estimate of its limb's initial position is wrong, then a deficiency in the movement can result. Furthermore, proprioception is crucial in refining the movement if it deviates from the trajectory. Development In adult fruit flies, each proprioceptor class arises from a specific cell lineage (i.e. each chordotonal neuron is from the chordotonal neuron lineage, although multiple lineages give rise to sensory bristles). After the last cell division, proprioceptors send out axons toward the central nervous system and are guided by hormonal gradients to reach stereotyped synapses. The mechanisms underlying axon guidance are similar across invertebrates and vertebrates. In mammals with longer gestation periods, muscle spindles are fully formed at birth. Muscle spindles continue to grow throughout post-natal development as muscles grow. Mathematical models Proprioceptors transfer the mechanical state of the body into patterns of neural activity. This transfer can be modeled mathematically, for example to better understand the internal workings of a proprioceptor or to provide more realistic feedback in neuromechanical simulations. Proprioceptor models of various levels of complexity have been developed. They range from simple phenomenological models to complex structural models, in which the mathematical elements correspond to anatomical features of the proprioceptor. The focus has been on muscle spindles, but Golgi tendon organs and insects' hair plates have been modeled too. Muscle spindles Poppele and Bowman used linear system theory to model the Ia and II afferents of mammalian muscle spindles. They obtained a set of de-afferented muscle spindles, measured their response to a series of sinusoidal and step-function stretches, and fit a transfer function to the spike rate. They found that a Laplace transfer function fitted in this way describes the firing-rate response of the primary sensory fibers to a change in length, and that a second, related transfer function describes the response of the secondary sensory fibers. More recently, Blum et al. showed that the muscle spindle firing rate is modeled better as tracking the force of the muscle, rather than the length. Furthermore, muscle spindle firing rates show history dependence which cannot be modeled by a linear time-invariant system model. Golgi tendon organs Houk and Simon provided one of the first mathematical models of a Golgi tendon organ receptor, modeling the firing rate of the receptor as a function of the muscle tension force. Just as for muscle spindles, they find that, as the receptors respond linearly to sine waves of different frequencies and have little variance in response over time to the same stimulus, Golgi tendon organ receptors may be modeled as linear time-invariant systems. Specifically, they find that the firing rate of a Golgi tendon organ receptor in response to a step of muscle force may be modeled as a sum of three decaying exponentials, characterized by an overall gain K, amplitudes A, B and C, and decay rates a, b and c, together with a corresponding Laplace transfer function. For a soleus receptor, Houk and Simon obtain average values of K = 57 pulses/sec/kg, A = 0.31, a = 0.22 sec−1, B = 0.4, b = 2.17 sec−1, C = 2.5, c = 36 sec−1.
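To make those parameter values concrete, here is a minimal numerical sketch of a sum-of-three-decaying-exponentials response. The specific functional form used below — a gain term plus three exponential transients scaled by a step of force — is an assumption chosen to match the description above, not necessarily the exact published Houk–Simon equation; the constants are the quoted soleus averages.

```python
import math

# Hedged sketch of a Golgi tendon organ model of the kind described above:
# firing rate after a step of muscle force F is taken to be a gain term plus
# a sum of three decaying exponential transients (assumed form, not the
# verbatim published equation). Constants are the quoted soleus averages.
K = 57.0            # pulses/sec/kg
A, a = 0.31, 0.22   # amplitude (dimensionless), decay rate (1/sec)
B, b = 0.40, 2.17
C, c = 2.50, 36.0

def firing_rate(t, force_kg):
    """Modeled firing rate (pulses/sec) at time t (sec) after a step of force (kg)."""
    transient = A * math.exp(-a * t) + B * math.exp(-b * t) + C * math.exp(-c * t)
    return K * force_kg * (1.0 + transient)

for t in (0.0, 0.05, 0.2, 1.0, 5.0):
    print(f"t = {t:5.2f} s  rate ≈ {firing_rate(t, force_kg=1.0):7.1f} pulses/s")
```

With these constants the fastest transient (c = 36 sec−1) dies out within tens of milliseconds, while the slowest (a = 0.22 sec−1) persists for several seconds, which is the qualitative behaviour the multi-exponential form is meant to capture.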
When modeling a stretch reflex, Lin and Crago improved upon this model by adding a logarithmic nonlinearity before the Houk and Simon model and a threshold nonlinearity after. Impairment Chronic Proprioception, a sense vital for rapid and proper body coordination, can be permanently lost or impaired as a result of genetic conditions, disease, viral infections, and injuries. For instance, patients with joint hypermobility or Ehlers–Danlos syndromes, genetic conditions that result in weak connective tissue throughout the body, have chronic impairments to proprioception. Autism spectrum disorder and Parkinson's disease can also cause chronic disorder of proprioception. In regards to Parkinson's disease, it remains unclear whether the proprioceptive-related decline in motor function occurs due to disrupted proprioceptors in the periphery or signaling in the spinal cord or brain. In rare cases, viral infections result in a loss of proprioception. Ian Waterman and Charles Freed are two such people that lost their sense of proprioception from the neck down from supposed viral infections (i.e. gastric flu and a rare viral infection). After losing their sense of proprioception, Ian and Charles could move their lower body, but could not coordinate their movements. However, both individuals regained some control of their limbs and body by consciously planning their movements and relying solely on visual feedback. Interestingly, both individuals can still sense pain and temperature, indicating that they specifically lost proprioceptive feedback, but not tactile and nociceptive feedback. The impact of losing the sense of proprioception on daily life is perfectly illustrated when Ian Waterman stated, "What is an active brain without mobility". Proprioception is also permanently lost in people who lose a limb or body part through injury or amputation. After the removal of a limb, people may have a confused sense of that limb's existence on their body, known as phantom limb syndrome. Phantom sensations can occur as passive proprioceptive sensations of the limb's presence, or more active sensations such as perceived movement, pressure, pain, itching, or temperature. There are a variety of theories concerning the etiology of phantom limb sensations and experience. One is the concept of "proprioceptive memory", which argues that the brain retains a memory of specific limb positions and that after amputation there is a conflict between the visual system, which actually sees that the limb is missing, and the memory system which remembers the limb as a functioning part of the body. Phantom sensations and phantom pain may also occur after the removal of body parts other than the limbs, such as after amputation of the breast, extraction of a tooth (phantom tooth pain), or removal of an eye (phantom eye syndrome). There is a decline in the sense of proprioception with ageing. This can often result in chronic lower back pain, and be the cause of falls in the elderly. Acute Proprioception is occasionally impaired spontaneously, especially when one is tired. Similar effects can be felt during the hypnagogic state of consciousness, during the onset of sleep. One's body may feel too large or too small, or parts of the body may feel distorted in size. Similar effects can sometimes occur during epilepsy or migraine auras. These effects are presumed to arise from abnormal stimulation of the part of the parietal cortex of the brain involved with integrating information from different parts of the body. 
Proprioceptive illusions can also be induced, such as the "Pinocchio illusion", the illusion that one's nose is growing longer. Temporary impairment of proprioception has also been known to occur from an overdose of vitamin B6 (pyridoxine and pyridoxamine). This is due to a reversible neuropathy. Most of the impaired function returns to normal shortly after the amount of the vitamin in the body returns to a level that is closer to that of the physiological norm. Impairment can also be caused by cytotoxic factors such as chemotherapy. It has been proposed that even common tinnitus and the attendant hearing frequency-gaps masked by the perceived sounds may cause erroneous proprioceptive information to the balance and comprehension centers of the brain, precipitating mild confusion. Temporary loss or impairment of proprioception may happen periodically during growth, mostly during adolescence. Growth that might also influence this would be large increases or drops in bodyweight/size due to fluctuations of fat (liposuction, rapid fat loss or gain) and/or muscle content (bodybuilding, anabolic steroids, catabolisis/starvation). It can also occur in those that gain new levels of flexibility, stretching, and contortion. A limb's being in a new range of motion never experienced (or at least, not for a long time since youth perhaps) can disrupt one's sense of location of that limb. Possible experiences include suddenly feeling that feet or legs are missing from one's mental self-image; needing to look down at one's limbs to be sure they are still there; and falling down while walking, especially when attention is focused upon something other than the act of walking. Diagnosis Impaired proprioception may be diagnosed through a series of tests, each focusing on a different functional aspect of proprioception. The Romberg's test is often used to assess balance. The subject must stand with feet together and eyes closed without support for 30 seconds. If the subject loses balance and falls, it is an indicator for impaired proprioception. For evaluating proprioception's contribution to motor control, a common protocol is joint position matching. The patient is blindfolded while a joint is moved to a specific angle for a given period of time and then returned to neutral. The subject is then asked to move the joint back to the specified angle. Recent investigations have shown that hand dominance, participant age, active versus passive matching, and presentation time of the angle can all affect performance on joint position matching tasks. For passive sensing of joint angles, recent studies have found that experiments to probe psychophysical thresholds produce more precise estimates of proprioceptive discrimination than the joint position matching task. In these experiments, the subject holds on to an object (such as an armrest) that moves and stops at different positions. The subject must discriminate whether one position is closer to the body than another. From the subject's choices, the tester may determine the subject's discrimination thresholds. Proprioception is tested by American police officers using the field sobriety testing to check for alcohol intoxication. The subject is required to touch his or her nose with eyes closed; people with normal proprioception may make an error of no more than , while people with impaired proprioception (a symptom of moderate to severe alcohol intoxication) fail this test due to difficulty locating their limbs in space relative to their noses. 
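As a rough illustration of the threshold-estimation idea behind the passive joint-angle discrimination experiments described above, the sketch below interpolates a discrimination threshold from the proportion of "farther than the reference" judgments at several position offsets. The data, stimulus levels, and the 75% criterion are illustrative assumptions, not a clinical protocol.

```python
# Hypothetical proportions of correct "test position is farther than the
# reference" discriminations at several position offsets; illustrative only.
offsets = [0.5, 1.0, 2.0, 3.0, 4.0]         # cm away from the reference position
p_correct = [0.52, 0.61, 0.74, 0.88, 0.96]  # fraction of correct discriminations

TARGET = 0.75  # a common threshold criterion

def threshold(xs, ps, target=TARGET):
    """Linearly interpolate the offset at which performance crosses `target`."""
    for (x0, y0), (x1, y1) in zip(zip(xs, ps), zip(xs[1:], ps[1:])):
        if y0 <= target <= y1:
            return x0 + (target - y0) * (x1 - x0) / (y1 - y0)
    raise ValueError("target performance not bracketed by the data")

print(f"Estimated discrimination threshold ≈ {threshold(offsets, p_correct):.2f} cm")
```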
Training Proprioception is what allows someone to learn to walk in complete darkness without losing balance. During the learning of any new skill, sport, or art, it is usually necessary to become familiar with some proprioceptive tasks specific to that activity. Without the appropriate integration of proprioceptive input, an artist would not be able to brush paint onto a canvas without looking at the hand as it moved the brush over the canvas; it would be impossible to drive an automobile because a motorist would not be able to steer or use the pedals while looking at the road ahead; a person could not touch type or perform ballet; and people would not even be able to walk without watching where they put their feet. Oliver Sacks reported the case of a young woman who lost her proprioception due to a viral infection of her spinal cord. At first she could not move properly at all or even control her tone of voice (as voice modulation is primarily proprioceptive). Later she relearned by using her sight (watching her feet) and inner ear only for movement while using hearing to judge voice modulation. She eventually acquired a stiff and slow movement and nearly normal speech, which is believed to be the best possible in the absence of this sense. She could not judge effort involved in picking up objects and would grip them painfully to be sure she did not drop them. The proprioceptive sense can be sharpened through study of many disciplines. Juggling trains reaction time, spatial location, and efficient movement. Standing on a wobble board or balance board is often used to retrain or increase proprioceptive abilities, particularly as physical therapy for ankle or knee injuries. Slacklining is another method to increase proprioception. Standing on one leg (stork standing) and various other body-position challenges are also used in such disciplines as yoga, Wing Chun and tai chi. The vestibular system of the inner ear, vision and proprioception are the main three requirements for balance. Moreover, there are specific devices designed for proprioception training, such as the exercise ball, which works on balancing the abdominal and back muscles. History of study In 1557, the position-movement sensation was described by Julius Caesar Scaliger as a "sense of locomotion". In 1826, Charles Bell expounded the idea of a "muscle sense", which is credited as one of the first descriptions of physiologic feedback mechanisms. Bell's idea was that commands are carried from the brain to the muscles, and that reports on the muscle's condition would be sent in the reverse direction. In 1847, the London neurologist Robert Todd highlighted important differences in the anterolateral and posterior columns of the spinal cord, and suggested that the latter were involved in the coordination of movement and balance. At around the same time, Moritz Heinrich Romberg, a Berlin neurologist, was describing unsteadiness made worse by eye closure or darkness, now known as the eponymous Romberg's sign, once synonymous with tabes dorsalis, that became recognised as common to all proprioceptive disorders of the legs. In 1880, Henry Charlton Bastian suggested "kinaesthesia" instead of "muscle sense" on the basis that some of the afferent information (back to the brain) comes from other structures, including tendons, joints, and skin. In 1889, Alfred Goldscheider suggested a classification of kinaesthesia into three types: muscle, tendon, and articular sensitivity. 
In 1906, the term proprio-ception (and also intero-ception and extero-ception) is attested in a publication by Charles Scott Sherrington on receptors. Today, the "exteroceptors" are the organs that provide information originating outside the body, such as the eyes, ears, mouth, and skin. The interoceptors provide information about the internal organs, and the "proprioceptors" provide information about movement derived from muscular, tendon, and articular sources. Using Sherrington's system, physiologists and anatomists search for specialised nerve endings that transmit mechanical data on joint capsule, tendon and muscle tension (such as Golgi tendon organs and muscle spindles), which play a large role in proprioception. Primary endings of muscle spindles "respond to the size of a muscle length change and its speed" and "contribute both to the sense of limb position and movement". Secondary endings of muscle spindles detect changes in muscle length, and thus supply information regarding only the sense of position. Essentially, muscle spindles are stretch receptors. It has been accepted that cutaneous receptors also contribute directly to proprioception by providing "accurate perceptual information about joint position and movement", and this knowledge is combined with information from the muscle spindles. Etymology Proprioception is from Latin proprius, meaning "one's own", "individual", and capio, capere, to take or grasp: thus, to grasp one's own position in space, including the position of the limbs in relation to each other and the body as a whole. The word kinesthesia or kinæsthesia (kinesthetic sense) refers to movement sense, but has been used inconsistently to refer either to proprioception alone or to the brain's integration of proprioceptive and vestibular inputs. Kinesthesia is a modern medical term composed of elements from Greek: kinein "to set in motion; to move" (from PIE root *keie- "to set in motion") + aisthesis "perception, feeling" (from PIE root *au- "to perceive"). Plants and bacteria Although they lack neurons, systems responding to stimuli (analogous to the sensory systems of animals with a nervous system, which include proprioception) have also been described in some plants (angiosperms). Terrestrial plants control the orientation of their primary growth through the sensing of several vectorial stimuli such as the light gradient or the gravitational acceleration. This control has been called tropism. A quantitative study of shoot gravitropism demonstrated that, when a plant is tilted, it cannot recover a steady erect posture under the sole driving of the sensing of its angular deflection versus gravity. An additional control, through the continuous sensing of the organ's curvature and the consequent driving of an active straightening process, is required. Because this amounts to the plant sensing the relative configuration of its own parts, it has been called proprioception. This dual sensing and control by gravisensing and proprioception has been formalized into a unifying mathematical model simulating the complete driving of the gravitropic movement. This model has been validated on 11 species sampling the phylogeny of land angiosperms, and on organs of very contrasting sizes, ranging from the small germinating shoot of wheat (the coleoptile) to the trunk of poplar trees. Further studies have shown that the cellular mechanism of proprioception in plants involves myosin and actin, and seems to occur in specialized cells.
Proprioception was then found to be involved in other tropisms and to be central to the control of nutation. The discovery of proprioception in plants has generated interest in the popular science and generalist media, because it challenges a long-standing preconception about plants. In some cases this coverage has slid from proprioception to self-awareness or self-consciousness; there is no scientific ground for such a semantic shift. Indeed, even in animals proprioception can be unconscious, and it is thought to be so in plants. Recent studies suggest that bacteria have control systems that may resemble proprioception. See also Notes References External links Sensory systems
0.768744
0.998651
0.767707
Hydraulic analogy
Electronic-hydraulic analogies are the representation of electronic circuits by hydraulic circuits. Since electric current is invisible and the processes in play in electronics are often difficult to demonstrate, the various electronic components are represented by hydraulic equivalents. Electricity (as well as heat) was originally understood to be a kind of fluid, and the names of certain electric quantities (such as current) are derived from hydraulic equivalents. The electronic–hydraulic analogy (derisively referred to as the drain-pipe theory by Oliver Lodge) is the most widely used analogy for "electron fluid" in a metal conductor. As with all analogies, it demands an intuitive and competent understanding of the baseline paradigms (electronics and hydraulics), and in the case of the hydraulic analogy for electronics, students often have an inadequate knowledge of hydraulics. The analogy may also be reversed to explain or model hydraulic systems in terms of electronic circuits, as in expositions of the Windkessel effect. Paradigms There is no unique paradigm for establishing this analogy. Different paradigms have different strengths and weaknesses, depending on how and in what ways the intuitive understanding of the source of the analogy matches with phenomena in electronics. Two paradigms can be used to introduce the concept to students using pressure induced by gravity or by pumps. In the version with pressure induced by gravity, large tanks of water are held up high, or are filled to differing water levels, and the potential energy of the water head is the pressure source. This is reminiscent of electrical diagrams with an up arrow pointing to +V, grounded pins that otherwise are not shown connecting to anything, and so on. This has the advantage of associating electric potential with gravitational potential. A second paradigm is a completely enclosed version with pumps providing pressure only and no gravity. This is reminiscent of a circuit diagram with a voltage source shown and the wires actually completing a circuit. This paradigm is further discussed below. Other paradigms highlight the similarities between equations governing the flow of fluid and the flow of charge. Flow and pressure variables can be calculated in both steady and transient fluid flow situations with the use of the hydraulic ohm analogy. Hydraulic ohms are the units of hydraulic impedance, which is defined as the ratio of pressure to volume flow rate. The pressure and volume flow variables are treated as phasors in this definition, so possess a phase as well as magnitude. A slightly different paradigm is used in acoustics, where acoustic impedance is defined as a relationship between acoustic pressure and acoustic particle velocity. In this paradigm, a large cavity with a hole is analogous to a capacitor that stores compressional energy when the time-dependent pressure deviates from atmospheric pressure. A hole (or long tube) is analogous to an inductor that stores kinetic energy associated with the flow of air. Hydraulic analogy with horizontal water flow Voltage, current, and charge In general, electric potential is equivalent to hydraulic head. This model assumes that the water is flowing horizontally, so that the force of gravity can be ignored. In this case, electric potential is equivalent to pressure. The voltage (or voltage drop or potential difference) is a difference in pressure between two points. Electric potential is usually measured in volts. 
Electric current is equivalent to a hydraulic volume flow rate; that is, the volumetric quantity of flowing water over time. Usually measured in amperes. A unit of electric charge is analogous to a unit volume of water. Basic circuit elements A relatively wide hose completely filled with water is equivalent to a conducting wire. A rigidly mounted pipe is equivalent to a trace on a circuit board. When comparing to a trace or wire, the hose or pipe should be thought of as having semi-permanent caps on the ends. Connecting one end of a wire to a circuit is equivalent to un-capping one end of the hose and attaching it to another. With few exceptions (such as a high-voltage power source), a wire with only one end attached to a circuit will do nothing; the hose remains capped on the free end, and thus adds nothing to the circuit. A resistor is equivalent to a constriction in the bore of a pipe which requires more pressure to pass the same amount of water. All pipes have some resistance to flow, just as all wires and traces have some resistance to current. A node (or junction) in Kirchhoff's junction rule is equivalent to a pipe tee. The net flow of water into a piping tee (filled with water) must equal the net flow out. A capacitor is equivalent to a tank with one connection at each end and a rubber sheet dividing the tank in two lengthwise (a hydraulic accumulator). When water is forced into one pipe, equal water is simultaneously forced out of the other pipe, yet no water can penetrate the rubber diaphragm. Energy is stored by the stretching of the rubber. As more current flows "through" the capacitor, the back-pressure (voltage) becomes greater, thus current "leads" voltage in a capacitor. As the back-pressure from the stretched rubber approaches the applied pressure, the current becomes less and less. Thus capacitors "filter out" constant pressure differences and slowly varying, low-frequency pressure differences, while allowing rapid changes in pressure to pass through. An inductor is equivalent to a rotary vane pump with a heavy rotor placed in the current. The mass of the rotor and the surface area of the vanes restricts the water's ability to rapidly change its rate of flow (current) through the pump due to the effects of inertia, but, given time, a constant flowing stream will pass mostly unimpeded through the pump, as the rotor turns at the same speed as the water flow. The mass of the rotor and the surface area of its vanes are analogous to inductance, and friction between its axle and the axle bearings corresponds to the resistance that accompanies any non-superconducting inductor.An alternative inductor model is simply a long pipe, perhaps coiled into a spiral for convenience. This fluid-inertia device is used in real life as an essential component of a hydraulic ram. The inertia of the water flowing through the pipe produces the inductance effect; inductors "filter out" rapid changes in flow, while allowing slow variations in current to be passed through. The drag imposed by the walls of the pipe is somewhat analogous to parasitic resistance. In either model, the pressure difference (voltage) across the device must be present before the current will start moving, thus in inductors, voltage "leads" current. As the current increases, approaching the limits imposed by its own internal friction and of the current that the rest of the circuit can provide, the pressure drop across the device becomes lower and lower. 
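The capacitor analogy just described can be made concrete with a short numerical sketch: the same first-order charging equation governs a capacitor charging through a resistor and a rubber-diaphragm accumulator filling through a narrow pipe, with only the labels changed. The component values below are arbitrary illustrative choices, not taken from the text.

```python
# First-order charging: dV/dt = (V_source - V) / (R * C)
# Electrically:  V is the capacitor voltage, R the resistance, C the capacitance.
# Hydraulically: V is the pressure across the diaphragm, R the pipe's flow
#                resistance, C the accumulator's compliance.
V_source = 10.0          # volts (or pressure units)
R, C = 1_000.0, 1e-3     # ohms and farads -> time constant R*C = 1 s
dt, t_end = 0.01, 5.0    # integration step and end time, seconds

steps = int(t_end / dt)
V = 0.0
for n in range(1, steps + 1):
    V += (V_source - V) / (R * C) * dt   # explicit Euler step
    if n % int(1.0 / dt) == 0:           # report once per simulated second
        print(f"t = {n * dt:.0f} s   V = {V:.3f}")
```

Read as voltage, the curve shows the capacitor approaching the supply voltage; read as pressure, it shows the accumulator's back-pressure approaching the supply pressure, which is why the analogous capacitor "blocks" a constant pressure difference once charged.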
An ideal voltage source (ideal battery) or ideal current source is a dynamic pump with feedback control. A pressure meter on both sides shows that regardless of the current being produced, this kind of pump produces a constant pressure difference. If one terminal is kept fixed at ground, another analogy is a large body of water at a high elevation, sufficiently large that the drawn water does not affect the water level. To create the analog of an ideal current source, use a positive displacement pump: a current meter (little paddle wheel) shows that when this kind of pump is driven at a constant speed, it maintains a constant speed of the little paddle wheel. Other circuit elements A diode is equivalent to a one-way check valve with a slightly leaky valve seat. As with a diode, a small pressure difference is needed before the valve opens. And like a diode, too much reverse bias can damage or destroy the valve assembly. A transistor is a valve in which a diaphragm, controlled by a low-current signal (either constant current for a BJT or constant pressure for a FET), moves a plunger which affects the current through another section of pipe. CMOS is a combination of two MOSFET transistors. As the input pressure changes, the pistons allow the output to connect to either zero or positive pressure. A memristor is a needle valve operated by a flow meter. As water flows through in the forward direction, the needle valve restricts flow more; as water flows the other direction, the needle valve opens further, providing less resistance. Practical application On the basis of this analogy Johan van Veen developed around 1937 a method to compute tidal currents with an electric analogue. After the North Sea flood of 1953 in the Netherlands he elaborated this idea, which eventually led to the analog computer Deltar, which was used to make the hydraulic computations for the closures in the framework of the Delta Works. Principal equivalents EM wave speed (velocity of propagation) is equivalent to the speed of sound in water. When a light switch is flipped, the electric wave travels very quickly through the wires. Charge flow speed (drift velocity) is equivalent to the particle speed of water. The moving charges themselves move rather slowly. DC is equivalent to a constant flow of water in a circuit of pipes. Low-frequency AC is equivalent to water oscillating back and forth in a pipe. Higher-frequency AC and transmission lines are somewhat equivalent to sound being transmitted through the water pipes, though this does not properly mirror the cyclical reversal of alternating electric current. As described, the fluid flow conveys pressure fluctuations, but fluids do not reverse at high rates in hydraulic systems, which the above "low frequency" entry does accurately describe. A better concept (if sound waves are to be the phenomenon) is that of direct current with high-frequency "ripple" superimposed. Inductive spark used in induction coils is similar to water hammer, caused by the inertia of water. Equation examples If the differential equations are equivalent in form, the dynamics of the systems they describe will be related. The example hydraulic equations approximately describe the relationship between a constant, laminar flow in a cylindrical pipe and the difference in pressure at each end, as long as the flow is not analyzed near the ends of the pipe.
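As a concrete, hedged illustration of such a matched pair of laws, the sketch below evaluates the Hagen–Poiseuille relation for steady laminar flow in a cylindrical pipe alongside Ohm's law for a uniform wire. The geometries and material constants are arbitrary illustrative values, and these algebraic forms stand in for whichever differential-equation forms the original comparison used.

```python
import math

# Hydraulic side: Hagen-Poiseuille, Q = pi * r**4 * dP / (8 * mu * L)
r, L_pipe, mu, dP = 0.01, 2.0, 1.0e-3, 500.0    # m, m, Pa*s (water), Pa
Q = math.pi * r**4 * dP / (8 * mu * L_pipe)      # volumetric flow rate, m^3/s

# Electric side: Ohm's law for a uniform wire, I = dV / R with R = rho * L / A
rho, L_wire, area, dV = 1.68e-8, 2.0, 1e-6, 0.5  # ohm*m (copper), m, m^2, V
R = rho * L_wire / area                          # wire resistance, ohms
I = dV / R                                       # current, A

print(f"Pressure difference {dP} Pa drives Q = {Q:.3e} m^3/s through the pipe")
print(f"Voltage difference {dV} V drives I = {I:.1f} A through the wire")
```

In both cases the flow variable is proportional to the driving difference divided by a geometry- and material-dependent resistance, which is the formal content of the analogy.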
The example electric equations approximately describe the relationship between a current in a straight wire and the difference in electric potential (voltage). In these two cases, the states of both systems are well-approximated by the differential equations above, and so the states are related. The assumptions that make these differential equations good approximates are needed for this relationship. Any deviations from the assumptions (e.g. pipe or wire is not straight, flow or current is changing over time, other factors are influencing potential) can make the relationship fail to hold. The differential equations for hydraulics and electronics above are special cases of the Navier–Stokes equations and Maxwell's equations, respectively, and the two are not equivalent in form. Limits to the analogy If taken too far, the water analogy can create misconceptions. Negative transfer can occur when there is a mismatch between phenomena in the source (hydraulics) and the corresponding phenomena in the target (electronics). For the analogy to be useful, one must remain aware of the regions where electricity and water behave very differently. Fields (Maxwell equations, inductance): Electrons can push or pull other distant electrons via their fields, while water molecules experience forces only from direct contact with other molecules. For this reason, waves in water travel at the speed of sound, but waves in a sea of charge will travel much faster as the forces from one electron are applied to many distant electrons and not to only the neighbors in direct contact. In a hydraulic transmission line, the energy flows as mechanical waves through the water, but in an electric transmission line the energy flows as fields in the space surrounding the wires, and does not flow inside the metal. Also, an accelerating electron will drag its neighbors along while attracting them, both because of magnetic forces. Charge: Unlike water, movable charge carriers can be positive or negative, and conductors can exhibit an overall positive or negative net charge. The mobile carriers in electric currents are usually electrons, but sometimes they are charged positively, such as the positive ions in an electrolyte, the H+ ions in proton conductors or holes in p-type semiconductors and some (very rare) conductors. Leaking pipes: The electric charge of an electrical circuit and its elements is usually almost equal to zero, hence it is (almost) constant. This is formalized in Kirchhoff's current law, which does not have an analogy to hydraulic systems, where the amount of the liquid is not usually constant. Even with incompressible liquid the system may contain such elements as pistons and open pools, so the volume of liquid contained in a part of the system can change. For this reason, continuing electric currents require closed loops rather than hydraulics' open source/sink resembling spigots and buckets. Fluid velocity and resistance of metals: As with water hoses, the carrier drift velocity in conductors is directly proportional to current. However, water only experiences drag via the pipes' inner surface, while charges are slowed at all points within a metal, as with water forced through a filter. Also, typical velocity of charge carriers within a conductor is less than centimeters per minute, and the "electrical friction" is extremely high. If charges ever flowed as fast as water can flow in pipes, the electric current would be immense, and the conductors would become incandescently hot and perhaps vaporize. 
To model the resistance and the charge-velocity of metals, perhaps a pipe packed with sponge, or a narrow straw filled with syrup, would be a better analogy than a large-diameter water pipe. Quantum mechanics: Solid conductors and insulators contain charges at more than one discrete level of atomic orbit energy, while the water in one region of a pipe can only have a single value of pressure. For this reason there is no hydraulic explanation for such things as a battery's charge pumping ability, a diode's depletion layer and voltage drop, solar cell functions, Peltier effect, etc., however equivalent devices can be designed which exhibit similar responses, although some of the mechanisms would only serve to regulate the flow curves rather than to contribute to the component's primary function. In order for the model to be useful, the reader or student must have a substantial understanding of the model (hydraulic) system's principles. It also requires that the principles can be transferred to the target (electrical) system. Hydraulic systems are deceptively simple: the phenomenon of pump cavitation is a known, complex problem that few people outside of the fluid power or irrigation industries would understand. For those who do, the hydraulic analogy is amusing, as no "cavitation" equivalent exists in electrical engineering. The hydraulic analogy can give a mistaken sense of understanding that will be exposed once a detailed description of electrical circuit theory is required. One must also consider the difficulties in trying to make an analogy match reality completely. The above "electrical friction" example, where the hydraulic analog is a pipe filled with sponge material, illustrates the problem: the model must be increased in complexity beyond any realistic scenario. See also Bond graph Fluidics Hydraulic circuit Hydraulic conductivity Mechanical–electrical analogies Notes External links Animation Hydraulic Analogy for Inductive Electric Elements Electronics concepts Electrical analogies
0.776548
0.988612
0.767705
Nomothetic and idiographic
Nomothetic and idiographic are terms used by Neo-Kantian philosopher Wilhelm Windelband to describe two distinct approaches to knowledge, each one corresponding to a different intellectual tendency, and each one corresponding to a different branch of academia. To say that Windelband endorsed a strict division of disciplines along these lines is, however, a misunderstanding of his thought: for him, any branch of science and any discipline can be handled by both methods, as the two offer complementary, integrating points of view. Nomothetic is based on what Kant described as a tendency to generalize, and is typical for the natural sciences. It describes the effort to derive laws that explain types or categories of objective phenomena, in general. Idiographic is based on what Kant described as a tendency to specify, and is typical for the humanities. It describes the effort to understand the meaning of contingent, unique, and often cultural or subjective phenomena. Use in the social sciences The problem of whether to use nomothetic or idiographic approaches is most sharply felt in the social sciences, whose subjects are unique individuals (the idiographic perspective) who nevertheless have certain general properties or behave according to general rules (the nomothetic perspective). Often, nomothetic approaches are quantitative, and idiographic approaches are qualitative, although the "Personal Questionnaire" developed by Monte B. Shapiro and its further developments (e.g. Discan scale and PSYCHLOPS) are both quantitative and idiographic. Another very influential quantitative but idiographic tool is the Repertory grid when used with elicited constructs and perhaps elicited elements. Personal cognition (D.A. Booth) is idiographic, qualitative and quantitative, using the individual's own narrative of action within a situation to scale the ongoing biosocial cognitive processes in units of discrimination from norm (with M.T. Conner 1986, R.P.J. Freeman 1993 and O. Sharpe 2005). Methods of "rigorous idiography" allow probabilistic evaluation of information transfer even with fully idiographic data. In psychology, idiographic describes the study of the individual, who is seen as a unique agent with a unique life history, with properties setting them apart from other individuals (see idiographic image). A common method to study these unique characteristics is an (auto)biography, i.e. a narrative that recounts the unique sequence of events that made the person who they are. Nomothetic describes the study of classes or cohorts of individuals. Here the subject is seen as an exemplar of a population and their corresponding personality traits and behaviours. It is widely held that the terms idiographic and nomothetic were introduced to American psychology by Gordon Allport in 1937, but Hugo Münsterberg used them in his 1898 presidential address at the American Psychological Association meeting. This address was published in Psychological Review in 1899. Theodore Millon stated that when spotting and diagnosing personality disorders, clinicians first start with the nomothetic perspective and look for various general scientific laws; then, when they believe they have identified a disorder, they switch their view to the idiographic perspective to focus on the specific individual and his or her unique traits. In sociology, the nomothetic model tries to find independent variables that account for the variations in a given phenomenon (e.g. What is the relationship between timing/frequency of childbirth and education?).
Nomothetic explanations are probabilistic and usually incomplete. The idiographic model focuses on a complete, in-depth understanding of a single case (e.g. Why do I not have any pets?). In anthropology, idiographic describes the study of a group, seen as an entity, with specific properties that set it apart from other groups. Nomothetic refers to the use of generalization rather than specific properties in the same context. See also Nomological References Further reading Cone, J. D. (1986). "Idiographic, nomothetic, and related perspectives in behavioral assessment." In: R. O. Nelson & S. C. Hayes (eds.): Conceptual foundations of behavioral assessment (pp. 111–128). New York: Guilford. Thomae, H. (1999). "The nomothetic-idiographic issue: Some roots and recent trends." International Journal of Group Tensions, 28(1), 187–215. Concepts in epistemology
0.777656
0.987158
0.76767
Courant–Friedrichs–Lewy condition
In mathematics, the convergence condition by Courant–Friedrichs–Lewy is a necessary condition for convergence while solving certain partial differential equations (usually hyperbolic PDEs) numerically. It arises in the numerical analysis of explicit time integration schemes, when these are used for the numerical solution. As a consequence, the time step must be less than a certain upper bound, given a fixed spatial increment, in many explicit time-marching computer simulations; otherwise, the simulation produces incorrect or unstable results. The condition is named after Richard Courant, Kurt Friedrichs, and Hans Lewy, who described it in their 1928 paper. Heuristic description The principle behind the condition is that, for example, if a wave is moving across a discrete spatial grid and we want to compute its amplitude at discrete time steps of equal duration, then this duration must be less than the time for the wave to travel to adjacent grid points. As a corollary, when the grid point separation is reduced, the upper limit for the time step also decreases. In essence, the numerical domain of dependence of any point in space and time (as determined by initial conditions and the parameters of the approximation scheme) must include the analytical domain of dependence (wherein the initial conditions have an effect on the exact value of the solution at that point) to ensure that the scheme can access the information required to form the solution. Statement To make a reasonably formally precise statement of the condition, it is necessary to define the following quantities: Spatial coordinate: one of the coordinates of the physical space in which the problem is posed. Spatial dimension of the problem: the number of spatial dimensions, i.e., the number of spatial coordinates of the physical space where the problem is posed. Typical values are 1, 2 and 3. Time: the coordinate, acting as a parameter, which describes the evolution of the system, distinct from the spatial coordinates. The spatial coordinates and the time are discrete-valued independent variables, which are placed at regular distances called the interval length and the time step, respectively. Using these names, the CFL condition relates the length of the time step to a function of the interval lengths of each spatial coordinate and of the maximum speed that information can travel in the physical space. Operatively, the CFL condition is commonly prescribed for those terms of the finite-difference approximation of general partial differential equations that model the advection phenomenon. The one-dimensional case For the one-dimensional case, the continuous-time model equation (that is usually solved for $u$) is: $\frac{\partial u}{\partial t} + a\,\frac{\partial u}{\partial x} = 0.$ The CFL condition then has the following form: $C = \frac{a\,\Delta t}{\Delta x} \le C_{\max},$ where the dimensionless number $C$ is called the Courant number, $a$ is the magnitude of the velocity (whose dimension is length/time), $\Delta t$ is the time step (whose dimension is time), and $\Delta x$ is the length interval (whose dimension is length). The value of $C_{\max}$ changes with the method used to solve the discretised equation, especially depending on whether the method is explicit or implicit. If an explicit (time-marching) solver is used then typically $C_{\max} = 1$. Implicit (matrix) solvers are usually less sensitive to numerical instability and so larger values of $C_{\max}$ may be tolerated. The two and general n-dimensional case In the two-dimensional case, the CFL condition becomes $C = \frac{a_x\,\Delta t}{\Delta x} + \frac{a_y\,\Delta t}{\Delta y} \le C_{\max},$ with the obvious meanings of the symbols involved.
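The following sketch applies the condition above to pick a stable time step for an explicit scheme in one or two dimensions. The wave speeds, grid spacings and the choice $C_{\max} = 1$ are illustrative assumptions.

```python
def max_stable_dt(speeds, spacings, c_max=1.0):
    """Largest time step allowed by the CFL condition
    dt * sum_i |a_i| / dx_i <= c_max (reduces to a*dt/dx <= c_max in 1D)."""
    total = sum(abs(a) / dx for a, dx in zip(speeds, spacings))
    return c_max / total

# 1D advection example: wave speed 2 m/s on a grid with dx = 0.01 m
dt_1d = max_stable_dt([2.0], [0.01])
print(f"1D: dt must not exceed {dt_1d:.4f} s for Courant number C <= 1")

# 2D example with different speeds and spacings in x and y
dt_2d = max_stable_dt([2.0, 0.5], [0.01, 0.02])
print(f"2D: dt must not exceed {dt_2d:.4f} s")
```

Halving either grid spacing halves the allowed time step in that direction, which is the corollary stated in the heuristic description.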
By analogy with the two-dimensional case, the general CFL condition for the $n$-dimensional case is the following one: $C = \Delta t \sum_{i=1}^{n} \frac{a_{x_i}}{\Delta x_i} \le C_{\max}.$ The interval length is not required to be the same for each spatial variable $\Delta x_i$, $i = 1, \dots, n$. This "degree of freedom" can be used to somewhat optimize the value of the time step for a particular problem, by varying the values of the different intervals to keep the time step from becoming too small. Notes References Courant, Friedrichs and Lewy's original 1928 paper (in German); an English translation by Phyllis Fox, circulated earlier as a research report, is freely downloadable. Carlos A. de Moura and Carlos S. Kubrusly (eds.): "The Courant–Friedrichs–Lewy (CFL) Condition: 80 Years After Its Discovery", Birkhäuser, ISBN 978-0-8176-8393-1 (2013). External links Numerical differential equations Computational fluid dynamics
0.772709
0.993462
0.767656
Vorticity
In continuum mechanics, vorticity is a pseudovector (or axial vector) field that describes the local spinning motion of a continuum near some point (the tendency of something to rotate), as would be seen by an observer located at that point and traveling along with the flow. It is an important quantity in the dynamical theory of fluids and provides a convenient framework for understanding a variety of complex flow phenomena, such as the formation and motion of vortex rings. Mathematically, the vorticity $\vec\omega$ is the curl of the flow velocity $\vec v$: $\vec\omega \equiv \nabla \times \vec v,$ where $\nabla$ is the nabla operator. Conceptually, $\vec\omega$ could be determined by marking parts of a continuum in a small neighborhood of the point in question, and watching their relative displacements as they move along the flow. The vorticity would be twice the mean angular velocity vector of those particles relative to their center of mass, oriented according to the right-hand rule. By its own definition, the vorticity vector is a solenoidal field, since $\nabla \cdot \vec\omega = 0.$ In a two-dimensional flow, $\vec\omega$ is always perpendicular to the plane of the flow, and can therefore be considered a scalar field. Mathematical definition and properties Mathematically, the vorticity of a three-dimensional flow is a pseudovector field, usually denoted by $\vec\omega$, defined as the curl of the velocity field $\vec v$ describing the continuum motion. In Cartesian coordinates: $\vec\omega = \left(\frac{\partial v_z}{\partial y} - \frac{\partial v_y}{\partial z},\; \frac{\partial v_x}{\partial z} - \frac{\partial v_z}{\partial x},\; \frac{\partial v_y}{\partial x} - \frac{\partial v_x}{\partial y}\right).$ In words, the vorticity tells how the velocity vector changes when one moves by an infinitesimal distance in a direction perpendicular to it. In a two-dimensional flow where the velocity is independent of the $z$-coordinate and has no $z$-component, the vorticity vector is always parallel to the $z$-axis, and therefore can be expressed as a scalar field multiplied by a constant unit vector $\hat z$: $\vec\omega = \omega\,\hat z.$ The vorticity is also related to the flow's circulation (line integral of the velocity) along a closed path by the (classical) Stokes' theorem. Namely, for any infinitesimal surface element $C$ with normal direction $\vec n$ and area $dA$, the circulation $d\Gamma$ along the perimeter of $C$ is the dot product $\vec\omega \cdot (\vec n\, dA)$, where $\vec\omega$ is the vorticity at the center of $C$. Since vorticity is an axial vector, it can be associated with a second-order antisymmetric tensor $\Omega$ (the so-called vorticity or rotation tensor), which is said to be the dual of $\vec\omega$. The relation between the two quantities, in index notation, is given by $\omega_k = \epsilon_{kij}\,\Omega_{ij}$ with $\Omega_{ij} = \tfrac{1}{2}\left(\frac{\partial v_j}{\partial x_i} - \frac{\partial v_i}{\partial x_j}\right),$ where $\epsilon_{kij}$ is the three-dimensional Levi-Civita tensor. The vorticity tensor is simply the antisymmetric part of the tensor $\nabla\vec v$, i.e., $\Omega = \tfrac{1}{2}\left[\nabla\vec v - (\nabla\vec v)^{\mathrm T}\right].$ Examples In a mass of continuum that is rotating like a rigid body, the vorticity is twice the angular velocity vector of that rotation. This is the case, for example, in the central core of a Rankine vortex. The vorticity may be nonzero even when all particles are flowing along straight and parallel pathlines, if there is shear (that is, if the flow speed varies across streamlines). For example, in the laminar flow within a pipe with constant cross section, all particles travel parallel to the axis of the pipe; but faster near that axis, and practically stationary next to the walls. The vorticity will be zero on the axis, and maximum near the walls, where the shear is largest. Conversely, a flow may have zero vorticity even though its particles travel along curved trajectories. An example is the ideal irrotational vortex, where most particles rotate about some straight axis, with speed inversely proportional to their distances to that axis.
A small parcel of continuum that does not straddle the axis will be rotated in one sense but sheared in the opposite sense, in such a way that its mean angular velocity about its center of mass is zero. (Figure: example flows. A rigid-body-like vortex, with flow speed $v$ proportional to the distance $r$ from the axis, and a parallel flow with shear both have nonzero vorticity, whereas an irrotational vortex, with $v$ proportional to $1/r$, has zero vorticity; the figure compares absolute and magnified relative velocities around a highlighted point in each case.) Another way to visualize vorticity is to imagine that, instantaneously, a tiny part of the continuum becomes solid and the rest of the flow disappears. If that tiny new solid particle is rotating, rather than just moving with the flow, then there is vorticity in the flow; if it merely translates, there is none. Evolution The evolution of the vorticity field in time is described by the vorticity equation, which can be derived from the Navier–Stokes equations. In many real flows where the viscosity can be neglected (more precisely, in flows with high Reynolds number), the vorticity field can be modeled by a collection of discrete vortices, the vorticity being negligible everywhere except in small regions of space surrounding the axes of the vortices. This is true in the case of two-dimensional potential flow (i.e. two-dimensional zero viscosity flow), in which case the flowfield can be modeled as a complex-valued field on the complex plane. Vorticity is useful for understanding how ideal potential flow solutions can be perturbed to model real flows. In general, the presence of viscosity causes a diffusion of vorticity away from the vortex cores into the general flow field; this flow is accounted for by a diffusion term in the vorticity transport equation. Vortex lines and vortex tubes A vortex line or vorticity line is a line which is everywhere tangent to the local vorticity vector. Vortex lines are defined by the relation $\frac{dx}{\omega_x} = \frac{dy}{\omega_y} = \frac{dz}{\omega_z},$ where $\vec\omega = (\omega_x, \omega_y, \omega_z)$ is the vorticity vector in Cartesian coordinates. A vortex tube is the surface in the continuum formed by all vortex lines passing through a given (reducible) closed curve in the continuum. The 'strength' of a vortex tube (also called vortex flux) is the integral of the vorticity across a cross-section of the tube, and is the same everywhere along the tube (because vorticity has zero divergence). It is a consequence of Helmholtz's theorems (or equivalently, of Kelvin's circulation theorem) that in an inviscid fluid the 'strength' of the vortex tube is also constant with time. Viscous effects introduce frictional losses and time dependence. In a three-dimensional flow, vorticity (as measured by the volume integral of the square of its magnitude) can be intensified when a vortex line is extended — a phenomenon known as vortex stretching.
This phenomenon occurs in the formation of a bathtub vortex in outflowing water, and the build-up of a tornado by rising air currents. Vorticity meters Rotating-vane vorticity meter A rotating-vane vorticity meter was invented by Russian hydraulic engineer A. Ya. Milovich (1874–1958). In 1913 he proposed a cork with four blades attached as a device qualitatively showing the magnitude of the vertical projection of the vorticity, and demonstrated motion-picture footage of the float's motion on the water surface in a model of a river bend. Rotating-vane vorticity meters are commonly shown in educational films on continuum mechanics (famous examples include the NCFMF's "Vorticity" and "Fundamental Principles of Flow" by the Iowa Institute of Hydraulic Research). Specific sciences Aeronautics In aerodynamics, the lift distribution over a finite wing may be approximated by assuming that each spanwise segment of the wing has a semi-infinite trailing vortex behind it. It is then possible to solve for the strength of the vortices using the criterion that there be no flow induced through the surface of the wing. This procedure is called the vortex panel method of computational fluid dynamics. The strengths of the vortices are then summed to find the total approximate circulation about the wing. According to the Kutta–Joukowski theorem, lift per unit of span is the product of circulation, airspeed, and air density. Atmospheric sciences The relative vorticity is the vorticity relative to the Earth induced by the air velocity field. This air velocity field is often modeled as a two-dimensional flow parallel to the ground, so that the relative vorticity vector is generally perpendicular to the ground and can be treated as a scalar rotation quantity. Vorticity is positive when – looking down onto the Earth's surface – the wind turns counterclockwise. In the northern hemisphere, positive vorticity is called cyclonic rotation, and negative vorticity is anticyclonic rotation; the nomenclature is reversed in the Southern Hemisphere. The absolute vorticity is computed from the air velocity relative to an inertial frame, and therefore includes a term due to the Earth's rotation, the Coriolis parameter. The potential vorticity is absolute vorticity divided by the vertical spacing between levels of constant (potential) temperature (or entropy). The absolute vorticity of an air mass will change if the air mass is stretched (or compressed) in the vertical direction, but the potential vorticity is conserved in an adiabatic flow. As adiabatic flow predominates in the atmosphere, the potential vorticity is useful as an approximate tracer of air masses in the atmosphere over the timescale of a few days, particularly when viewed on levels of constant entropy. The barotropic vorticity equation is the simplest way of forecasting the movement of Rossby waves (that is, the troughs and ridges of 500 hPa geopotential height) over a limited amount of time (a few days). In the 1950s, the first successful programs for numerical weather forecasting utilized that equation. In modern numerical weather forecasting models and general circulation models (GCMs), vorticity may be one of the predicted variables, in which case the corresponding time-dependent equation is a prognostic equation. Related to the concept of vorticity is the helicity $H$, defined as $H = \int_V \vec v \cdot \vec\omega \, dV,$ where the integral is over a given volume $V$. In atmospheric science, helicity of the air motion is important in forecasting supercells and the potential for tornadic activity.
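Tying the definition back to the example flows discussed earlier, the sketch below evaluates the $z$-component of vorticity, $\partial v_y/\partial x - \partial v_x/\partial y$, by central finite differences for a rigid-body rotation and for an irrotational ($1/r$) vortex. Grid size, spacing and rotation rate are arbitrary illustrative choices.

```python
import numpy as np

n, L = 201, 2.0                     # grid points and half-width of the domain
x = np.linspace(-L, L, n)
y = np.linspace(-L, L, n)
X, Y = np.meshgrid(x, y, indexing="xy")
r2 = X**2 + Y**2 + 1e-12            # avoid division by zero at the origin

def vorticity_z(u, v, dx, dy):
    """omega_z = dv/dx - du/dy via central differences (numpy.gradient)."""
    dv_dx = np.gradient(v, dx, axis=1)   # x varies along axis 1 with indexing="xy"
    du_dy = np.gradient(u, dy, axis=0)   # y varies along axis 0
    return dv_dx - du_dy

dx = dy = x[1] - x[0]

# Rigid-body rotation with angular velocity Omega: u = -Omega*y, v = Omega*x
Omega = 1.5
w_rigid = vorticity_z(-Omega * Y, Omega * X, dx, dy)
print("rigid rotation: omega_z ~", w_rigid[n // 4, n // 4], "(expected 2*Omega =", 2 * Omega, ")")

# Irrotational vortex: u = -y/r^2, v = x/r^2  (speed ~ 1/r)
w_free = vorticity_z(-Y / r2, X / r2, dx, dy)
print("irrotational vortex, away from the core: omega_z ~", w_free[n // 4, n // 4])
```

The first field reproduces the statement above that rigid rotation carries vorticity equal to twice the angular velocity, while the second confirms that the $1/r$ vortex is irrotational away from its core.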
See also Barotropic vorticity equation D'Alembert's paradox Enstrophy Palinstrophy Velocity potential Vortex Vortex tube Vortex stretching Horseshoe vortex Wingtip vortices Fluid dynamics Biot–Savart law Circulation Vorticity equations Kutta–Joukowski theorem Atmospheric sciences Prognostic equation Carl-Gustaf Rossby Hans Ertel References Bibliography Clancy, L.J. (1975), Aerodynamics, Pitman Publishing Limited, London "Weather Glossary"' The Weather Channel Interactive, Inc.. 2004. "Vorticity". Integrated Publishing. Further reading Ohkitani, K., "Elementary Account Of Vorticity And Related Equations". Cambridge University Press. January 30, 2005. Chorin, Alexandre J., "Vorticity and Turbulence". Applied Mathematical Sciences, Vol 103, Springer-Verlag. March 1, 1994. Majda, Andrew J., Andrea L. Bertozzi, "Vorticity and Incompressible Flow". Cambridge University Press; 2002. Tritton, D. J., "Physical Fluid Dynamics". Van Nostrand Reinhold, New York. 1977. Arfken, G., "Mathematical Methods for Physicists", 3rd ed. Academic Press, Orlando, Florida. 1985. External links Weisstein, Eric W., "Vorticity". Scienceworld.wolfram.com. Doswell III, Charles A., "A Primer on Vorticity for Application in Supercells and Tornadoes". Cooperative Institute for Mesoscale Meteorological Studies, Norman, Oklahoma. Cramer, M. S., "Navier–Stokes Equations -- Vorticity Transport Theorems: Introduction". Foundations of Fluid Mechanics. Parker, Douglas, "ENVI 2210 – Atmosphere and Ocean Dynamics, 9: Vorticity". School of the Environment, University of Leeds. September 2001. Graham, James R., "Astronomy 202: Astrophysical Gas Dynamics". Astronomy Department, UC Berkeley. "The vorticity equation: incompressible and barotropic fluids". "Interpretation of the vorticity equation". "Kelvin's vorticity theorem for incompressible or barotropic flow". "Spherepack 3.1 ". (includes a collection of FORTRAN vorticity program) "Mesoscale Compressible Community (MC2) Real-Time Model Predictions". (Potential vorticity analysis) Continuum mechanics Fluid dynamics Meteorological quantities Rotation fr:Tourbillon (physique)
0.772273
0.993993
0.767634
Uncertainty quantification
Uncertainty quantification (UQ) is the science of quantitative characterization and estimation of uncertainties in both computational and real world applications. It tries to determine how likely certain outcomes are if some aspects of the system are not exactly known. An example would be to predict the acceleration of a human body in a head-on crash with another car: even if the speed was exactly known, small differences in the manufacturing of individual cars, how tightly every bolt has been tightened, etc., will lead to different results that can only be predicted in a statistical sense. Many problems in the natural sciences and engineering are also rife with sources of uncertainty. Computer experiments on computer simulations are the most common approach to study problems in uncertainty quantification. Sources Uncertainty can enter mathematical models and experimental measurements in various contexts. One way to categorize the sources of uncertainty is to consider: Parameter This comes from the model parameters that are inputs to the computer model (mathematical model) but whose exact values are unknown to experimentalists and cannot be controlled in physical experiments, or whose values cannot be exactly inferred by statistical methods. Some examples of this are the local free-fall acceleration in a falling object experiment, various material properties in a finite element analysis for engineering, and multiplier uncertainty in the context of macroeconomic policy optimization. Parametric This comes from the variability of input variables of the model. For example, the dimensions of a work piece in a process of manufacture may not be exactly as designed and instructed, which would cause variability in its performance. Structural uncertainty Also known as model inadequacy, model bias, or model discrepancy, this comes from the lack of knowledge of the underlying physics in the problem. It depends on how accurately a mathematical model describes the true system for a real-life situation, considering the fact that models are almost always only approximations to reality. One example is when modeling the process of a falling object using the free-fall model; the model itself is inaccurate since there always exists air friction. In this case, even if there is no unknown parameter in the model, a discrepancy is still expected between the model and true physics. Algorithmic Also known as numerical uncertainty, or discrete uncertainty. This type comes from numerical errors and numerical approximations per implementation of the computer model. Most models are too complicated to solve exactly. For example, the finite element method or finite difference method may be used to approximate the solution of a partial differential equation (which introduces numerical errors). Other examples are numerical integration and infinite sum truncation that are necessary approximations in numerical implementation. Experimental Also known as observation error, this comes from the variability of experimental measurements. The experimental uncertainty is inevitable and can be noticed by repeating a measurement for many times using exactly the same settings for all inputs/variables. Interpolation This comes from a lack of available data collected from computer model simulations and/or experimental measurements. For other input settings that don't have simulation data or experimental measurements, one must interpolate or extrapolate in order to predict the corresponding responses. 
Aleatoric and epistemic Uncertainty is sometimes classified into two categories, prominently seen in medical applications. Aleatoric Aleatoric uncertainty is also known as stochastic uncertainty, and is representative of unknowns that differ each time we run the same experiment. For example, arrows shot repeatedly from a mechanical bow that exactly duplicates each launch (the same acceleration, altitude, direction and final velocity) will not all impact the same point on the target, due to random and complicated vibrations of the arrow shaft, which cannot be characterized sufficiently to eliminate the resulting scatter of impact points. The argument here obviously hinges on the definition of "cannot". Just because we cannot measure sufficiently with our currently available measurement devices does not necessarily preclude the existence of such information, which would move this uncertainty into the category below. Aleatoric is derived from the Latin alea or dice, referring to a game of chance. Epistemic uncertainty Epistemic uncertainty is also known as systematic uncertainty, and is due to things one could in principle know but does not know in practice. This may be because a measurement is not accurate, because the model neglects certain effects, or because particular data have been deliberately hidden. An example of a source of this uncertainty would be the drag in an experiment designed to measure the acceleration of gravity near the earth's surface. The commonly used gravitational acceleration of 9.8 m/s² ignores the effects of air resistance, but the air resistance for the object could be measured and incorporated into the experiment to reduce the resulting uncertainty in the calculation of the gravitational acceleration. Combined occurrence and interaction of aleatoric and epistemic uncertainty Aleatoric and epistemic uncertainty can also occur simultaneously in a single term, e.g. when experimental parameters show aleatoric uncertainty and those experimental parameters are input to a computer simulation. If a surrogate model, e.g. a Gaussian process or a polynomial chaos expansion, is then learnt from computer experiments for the uncertainty quantification, this surrogate exhibits epistemic uncertainty that depends on, or interacts with, the aleatoric uncertainty of the experimental parameters. Such an uncertainty cannot solely be classified as aleatoric or epistemic any more, but is a more general inferential uncertainty. In real-life applications, both kinds of uncertainty are present. Uncertainty quantification intends to explicitly express both types of uncertainty separately. The quantification of the aleatoric uncertainties can be relatively straightforward, with traditional (frequentist) probability as the most basic form. Techniques such as the Monte Carlo method are frequently used. A probability distribution can be represented by its moments (in the Gaussian case, the mean and covariance suffice, although, in general, even knowledge of all moments to arbitrarily high order still does not specify the distribution function uniquely), or more recently, by techniques such as Karhunen–Loève and polynomial chaos expansions. To evaluate epistemic uncertainties, efforts are made to understand the (lack of) knowledge of the system, process or mechanism. Epistemic uncertainty is generally understood through the lens of Bayesian probability, where probabilities are interpreted as indicating how certain a rational person could be regarding a specific claim.
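A small sketch can make the distinction operational. In the example below (illustrative only; the noise level and sample sizes are assumed), repeated measurements of a fixed quantity show an aleatoric scatter that does not shrink with more data, while the epistemic uncertainty about the mean value, expressed as the standard error, does shrink as measurements accumulate.

```python
import numpy as np

rng = np.random.default_rng(1)
true_value = 9.81          # quantity being measured (assumed)
sigma_aleatoric = 0.05     # irreducible measurement scatter (assumed)

for n in (10, 100, 10_000):
    data = true_value + rng.normal(0.0, sigma_aleatoric, n)
    scatter = data.std(ddof=1)            # estimate of the aleatoric spread: stays near 0.05
    std_error = scatter / np.sqrt(n)      # epistemic uncertainty about the mean: shrinks with n
    print(f"n={n:6d}  scatter ~ {scatter:.3f}  uncertainty of mean ~ {std_error:.4f}")
```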
Mathematical perspective In mathematics, uncertainty is often characterized in terms of a probability distribution. From that perspective, epistemic uncertainty means not being certain what the relevant probability distribution is, and aleatoric uncertainty means not being certain what a random sample drawn from a probability distribution will be. Types of problems There are two major types of problems in uncertainty quantification: one is the forward propagation of uncertainty (where the various sources of uncertainty are propagated through the model to predict the overall uncertainty in the system response) and the other is the inverse assessment of model uncertainty and parameter uncertainty (where the model parameters are calibrated simultaneously using test data). There has been a proliferation of research on the former problem and a majority of uncertainty analysis techniques were developed for it. On the other hand, the latter problem is drawing increasing attention in the engineering design community, since uncertainty quantification of a model and the subsequent predictions of the true system response(s) are of great interest in designing robust systems. Forward Uncertainty propagation is the quantification of uncertainties in system output(s) propagated from uncertain inputs. It focuses on the influence on the outputs from the parametric variability listed in the sources of uncertainty. The targets of uncertainty propagation analysis can be: To evaluate low-order moments of the outputs, i.e. mean and variance. To evaluate the reliability of the outputs. This is especially useful in reliability engineering where outputs of a system are usually closely related to the performance of the system. To assess the complete probability distribution of the outputs. This is useful in the scenario of utility optimization where the complete distribution is used to calculate the utility. Inverse Given some experimental measurements of a system and some computer simulation results from its mathematical model, inverse uncertainty quantification estimates the discrepancy between the experiment and the mathematical model (which is called bias correction), and estimates the values of unknown parameters in the model if there are any (which is called parameter calibration or simply calibration). Generally this is a much more difficult problem than forward uncertainty propagation; however it is of great importance since it is typically implemented in a model updating process. There are several scenarios in inverse uncertainty quantification: Bias correction only Bias correction quantifies the model inadequacy, i.e. the discrepancy between the experiment and the mathematical model. The general model updating formula for bias correction is: y_e(x) = y_m(x) + δ(x) + ε, where y_e(x) denotes the experimental measurements as a function of several input variables x, y_m(x) denotes the computer model (mathematical model) response, δ(x) denotes the additive discrepancy function (aka bias function), and ε denotes the experimental uncertainty. The objective is to estimate the discrepancy function δ(x), and as a by-product, the resulting updated model is y_m(x) + δ(x). A prediction confidence interval is provided with the updated model as the quantification of the uncertainty. Parameter calibration only Parameter calibration estimates the values of one or more unknown parameters in a mathematical model.
The general model updating formulation for calibration is: y_e(x) = y_m(x, θ*) + ε, where y_m(x, θ) denotes the computer model response that depends on several unknown model parameters θ, and θ* denotes the true values of the unknown parameters in the course of experiments. The objective is to either estimate θ*, or to come up with a probability distribution of θ* that encompasses the best knowledge of the true parameter values. Bias correction and parameter calibration It considers an inaccurate model with one or more unknown parameters, and its model updating formulation combines the two together: y_e(x) = y_m(x, θ*) + δ(x) + ε. It is the most comprehensive model updating formulation that includes all possible sources of uncertainty, and it requires the most effort to solve. Selective methodologies Much research has been done to solve uncertainty quantification problems, though a majority of them deal with uncertainty propagation. During the past one to two decades, a number of approaches for inverse uncertainty quantification problems have also been developed and have proved to be useful for most small- to medium-scale problems. Forward propagation Existing uncertainty propagation approaches include probabilistic approaches and non-probabilistic approaches. There are basically six categories of probabilistic approaches for uncertainty propagation: Simulation-based methods: Monte Carlo simulations, importance sampling, adaptive sampling, etc. General surrogate-based methods: In a non-intrusive approach, a surrogate model is learnt in order to replace the experiment or the simulation with a cheap and fast approximation. Surrogate-based methods can also be employed in a fully Bayesian fashion. This approach has proven particularly powerful when the cost of sampling, e.g. computationally expensive simulations, is prohibitively high. Local expansion-based methods: Taylor series, perturbation method, etc. These methods have advantages when dealing with relatively small input variability and outputs that do not exhibit high nonlinearity. These linear or linearized methods are detailed in the article Uncertainty propagation. Functional expansion-based methods: Neumann expansion, orthogonal or Karhunen–Loève expansions (KLE), with polynomial chaos expansion (PCE) and wavelet expansions as special cases. Most probable point (MPP)-based methods: first-order reliability method (FORM) and second-order reliability method (SORM). Numerical integration-based methods: Full factorial numerical integration (FFNI) and dimension reduction (DR). For non-probabilistic approaches, interval analysis, fuzzy set theory, possibility theory and evidence theory are among the most widely used. The probabilistic approach is considered the most rigorous approach to uncertainty analysis in engineering design due to its consistency with the theory of decision analysis. Its cornerstone is the calculation of probability density functions for sampling statistics. This can be performed rigorously for random variables that are obtainable as transformations of Gaussian variables, leading to exact confidence intervals. Inverse uncertainty Frequentist In regression analysis and least squares problems, the standard error of parameter estimates is readily available, which can be expanded into a confidence interval. Bayesian Several methodologies for inverse uncertainty quantification exist under the Bayesian framework. The most complicated direction is to aim at solving problems with both bias correction and parameter calibration.
The challenges of such problems include not only the influences from model inadequacy and parameter uncertainty, but also the lack of data from both computer simulations and experiments. A common situation is that the input settings are not the same over experiments and simulations. Another common situation is that parameters derived from experiments are input to simulations. For computationally expensive simulations, then often a surrogate model, e.g. a Gaussian process or a Polynomial Chaos Expansion, is necessary, defining an inverse problem for finding the surrogate model that best approximates the simulations. Modular approach An approach to inverse uncertainty quantification is the modular Bayesian approach. The modular Bayesian approach derives its name from its four-module procedure. Apart from the current available data, a prior distribution of unknown parameters should be assigned. Module 1: Gaussian process modeling for the computer model To address the issue from lack of simulation results, the computer model is replaced with a Gaussian process (GP) model where is the dimension of input variables, and is the dimension of unknown parameters. While is pre-defined, , known as hyperparameters of the GP model, need to be estimated via maximum likelihood estimation (MLE). This module can be considered as a generalized kriging method. Module 2: Gaussian process modeling for the discrepancy function Similarly with the first module, the discrepancy function is replaced with a GP model where Together with the prior distribution of unknown parameters, and data from both computer models and experiments, one can derive the maximum likelihood estimates for . At the same time, from Module 1 gets updated as well. Module 3: Posterior distribution of unknown parameters Bayes' theorem is applied to calculate the posterior distribution of the unknown parameters: where includes all the fixed hyperparameters in previous modules. Module 4: Prediction of the experimental response and discrepancy function Full approach Fully Bayesian approach requires that not only the priors for unknown parameters but also the priors for the other hyperparameters should be assigned. It follows the following steps: Derive the posterior distribution ; Integrate out and obtain . This single step accomplishes the calibration; Prediction of the experimental response and discrepancy function. However, the approach has significant drawbacks: For most cases, is a highly intractable function of . Hence the integration becomes very troublesome. Moreover, if priors for the other hyperparameters are not carefully chosen, the complexity in numerical integration increases even more. In the prediction stage, the prediction (which should at least include the expected value of system responses) also requires numerical integration. Markov chain Monte Carlo (MCMC) is often used for integration; however it is computationally expensive. The fully Bayesian approach requires a huge amount of calculations and may not yet be practical for dealing with the most complicated modelling situations. Known issues The theories and methodologies for uncertainty propagation are much better established, compared with inverse uncertainty quantification. For the latter, several difficulties remain unsolved: Dimensionality issue: The computational cost increases dramatically with the dimensionality of the problem, i.e. the number of input variables and/or the number of unknown parameters. 
Identifiability issue: Multiple combinations of unknown parameters and discrepancy function can yield the same experimental prediction. Hence different values of parameters cannot be distinguished/identified. This issue is circumvented in a Bayesian approach, where such combinations are averaged over. Incomplete model response: Refers to a model not having a solution for some combinations of the input variables. Quantifying uncertainty in the input quantities: Crucial events missing in the available data or critical quantities unidentified to analysts due to, e.g., limitations in existing models. Little consideration of the impact of choices made by analysts. See also Computer experiment Further research is needed Quantification of margins and uncertainties Probabilistic numerics Bayesian regression Bayesian probability References Applied mathematics Mathematical modeling Operations research Statistical theory
0.773874
0.991934
0.767631
Electrophoresis
Electrophoresis is the motion of charged dispersed particles or dissolved charged molecules relative to a fluid under the influence of a spatially uniform electric field. As a rule, these are zwitterions. Electrophoresis is used in laboratories to separate macromolecules based on their charges. The technique normally applies a negative electrode, called the cathode, and a positive electrode, called the anode, so that negatively charged protein molecules move towards the anode. Therefore, electrophoresis of positively charged particles or molecules (cations) is sometimes called cataphoresis, while electrophoresis of negatively charged particles or molecules (anions) is sometimes called anaphoresis. Electrophoresis is the basis for analytical techniques used in biochemistry for separating particles, molecules, or ions by size, charge, or binding affinity either freely or through a supportive medium using a one-directional flow of electrical charge. It is used extensively in DNA, RNA and protein analysis. Liquid droplet electrophoresis is significantly different from the classic particle electrophoresis because of droplet characteristics such as a mobile surface charge and the nonrigidity of the interface. Also, the liquid–liquid system, where there is an interplay between the hydrodynamic and electrokinetic forces in both phases, adds to the complexity of electrophoretic motion. History Theory Suspended particles have an electric surface charge, strongly affected by surface adsorbed species, on which an external electric field exerts an electrostatic Coulomb force. According to the double layer theory, all surface charges in fluids are screened by a diffuse layer of ions, which has the same absolute charge but opposite sign with respect to that of the surface charge. The electric field also exerts a force on the ions in the diffuse layer which has direction opposite to that acting on the surface charge. This latter force is not actually applied to the particle, but to the ions in the diffuse layer located at some distance from the particle surface, and part of it is transferred all the way to the particle surface through viscous stress. This part of the force is also called electrophoretic retardation force, or ERF in short. When the electric field is applied and the charged particle to be analyzed is at steady movement through the diffuse layer, the total resulting force on the particle is zero. Considering the drag on the moving particles due to the viscosity of the dispersant, in the case of low Reynolds number and moderate electric field strength E, the drift velocity of a dispersed particle v is simply proportional to the applied field, which leaves the electrophoretic mobility μe defined as: μe = v / E. The most well known and widely used theory of electrophoresis was developed in 1903 by Marian Smoluchowski: μe = εr ε0 ζ / η, where εr is the dielectric constant of the dispersion medium, ε0 is the permittivity of free space (C2 N−1 m−2), η is the dynamic viscosity of the dispersion medium (Pa s), and ζ is the zeta potential (i.e., the electrokinetic potential of the slipping plane in the double layer, units mV or V). The Smoluchowski theory is very powerful because it works for dispersed particles of any shape at any concentration. It has limitations on its validity. For instance, it does not include the Debye length κ−1 (units m). However, the Debye length must be important for electrophoresis, as is clear from the picture of electrophoretic retardation discussed next.
Increasing the thickness of the double layer (DL) moves the point where the retardation force acts further from the particle surface. The thicker the DL, the smaller the retardation force must be. Detailed theoretical analysis proved that the Smoluchowski theory is valid only for a sufficiently thin DL, when the particle radius a is much greater than the Debye length: a ≫ κ−1 (i.e., aκ ≫ 1). This model of "thin double layer" offers tremendous simplifications not only for electrophoresis theory but for many other electrokinetic theories. This model is valid for most aqueous systems, where the Debye length is usually only a few nanometers. It only breaks down for nano-colloids in solutions whose ionic strength is close to that of pure water. The Smoluchowski theory also neglects the contributions from surface conductivity. This is expressed in modern theory as the condition of a small Dukhin number: Du ≪ 1. In the effort to expand the range of validity of electrophoretic theories, the opposite asymptotic case was considered, when the Debye length is larger than the particle radius: aκ ≪ 1. Under this condition of a "thick double layer", Erich Hückel predicted the following relation for the electrophoretic mobility: μe = 2 εr ε0 ζ / (3 η). This model can be useful for some nanoparticles and non-polar fluids, where the Debye length is much larger than in the usual cases. There are several analytical theories that incorporate surface conductivity and eliminate the restriction of a small Dukhin number, pioneered by Theodoor Overbeek and F. Booth. Modern, rigorous theories valid for any zeta potential and often any aκ stem mostly from Dukhin–Semenikhin theory. In the thin double layer limit, these theories confirm the numerical solution to the problem provided by Richard W. O'Brien and Lee R. White. For modeling more complex scenarios, these simplifications become inaccurate, and the electric field must be modeled spatially, tracking its magnitude and direction. Poisson's equation can be used to model this spatially-varying electric field. Its influence on fluid flow can be modeled with the Stokes equations, while transport of different ions can be modeled using the Nernst–Planck equation. This combined approach is referred to as the Poisson-Nernst-Planck-Stokes equations. It has been validated for the electrophoresis of particles. See also References Further reading External links List of relative mobilities Colloidal chemistry Biochemical separation processes Electroanalytical methods Instrumental analysis Laboratory techniques
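The Smoluchowski and Hückel limits above can be compared numerically. The sketch below is an illustrative calculation only; the zeta potential, medium properties and field strength are assumed values typical of an aqueous colloid. It evaluates both mobility expressions and the resulting drift velocity v = μe·E.

```python
# Illustrative comparison of the Smoluchowski and Hueckel mobility limits.
EPS0 = 8.854e-12      # permittivity of free space, C^2 N^-1 m^-2
eps_r = 78.5          # relative permittivity of water (assumed)
eta = 0.89e-3         # dynamic viscosity of water at 25 C, Pa s (assumed)
zeta = 0.030          # zeta potential, V (assumed, i.e. 30 mV)
E = 1.0e3             # applied field, V/m (assumed)

mu_smoluchowski = eps_r * EPS0 * zeta / eta           # thin double layer, a*kappa >> 1
mu_huckel = 2.0 * eps_r * EPS0 * zeta / (3.0 * eta)   # thick double layer, a*kappa << 1

print(f"Smoluchowski mobility: {mu_smoluchowski:.3e} m^2 V^-1 s^-1")
print(f"Hueckel mobility:      {mu_huckel:.3e} m^2 V^-1 s^-1")
print(f"drift velocity (Smoluchowski limit): {mu_smoluchowski * E:.3e} m/s")
```

For these assumed values the mobility comes out near 2×10⁻⁸ m² V⁻¹ s⁻¹, the order of magnitude commonly reported for aqueous colloids, with the Hückel limit a factor 2/3 smaller.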
0.770705
0.995999
0.767622
Inelastic scattering
In chemistry, nuclear physics, and particle physics, inelastic scattering is a process in which the internal states of a particle or a system of particles change after a collision. Often, this means the kinetic energy of the incident particle is not conserved (in contrast to elastic scattering). Additionally, relativistic collisions which involve a transition from one type of particle to another are referred to as inelastic even if the outgoing particles have the same kinetic energy as the incoming ones. Processes which are governed by elastic collisions at a microscopic level will appear to be inelastic if a macroscopic observer only has access to a subset of the degrees of freedom. In Compton scattering for instance, the two particles in the collision transfer energy causing a loss of energy in the measured particle. Electrons When an electron is the incident particle, the probability of inelastic scattering, depending on the energy of the incident electron, is usually smaller than that of elastic scattering. Thus in the case of gas electron diffraction (GED), reflection high-energy electron diffraction (RHEED), and transmission electron diffraction, because the energy of the incident electron is high, the contribution of inelastic electron scattering can be ignored. Deep inelastic scattering of electrons from protons provided the first direct evidence for the existence of quarks. Photons When a photon is the incident particle, there is an inelastic scattering process called Raman scattering. In this scattering process, the incident photon interacts with matter (gas, liquid, and solid) and the frequency of the photon is shifted towards red or blue. A red shift can be observed when part of the energy of the photon is transferred to the interacting matter, where it adds to its internal energy in a process called Stokes Raman scattering. The blue shift can be observed when internal energy of the matter is transferred to the photon; this process is called anti-Stokes Raman scattering. Inelastic scattering is seen in the interaction between an electron and a photon. When a high-energy photon collides with a free electron (more precisely, weakly bound since a free electron cannot participate in inelastic scattering with a photon) and transfers energy, the process is called Compton scattering. Furthermore, when an electron with relativistic energy collides with an infrared or visible photon, the electron gives energy to the photon. This process is called inverse Compton scattering. Neutrons Neutrons undergo many types of scattering, including both elastic and inelastic scattering. Whether elastic or inelastic scatter occurs is dependent on the speed of the neutron, whether fast or thermal, or somewhere in between. It is also dependent on the nucleus it strikes and its neutron cross section. In inelastic scattering, the neutron interacts with the nucleus and the kinetic energy of the system is changed. This often activates the nucleus, putting it into an excited, unstable, short-lived energy state which causes it to quickly emit some kind of radiation to bring it back down to a stable or ground state. Alpha, beta, gamma, and protons may be emitted. Particles scattered in this type of nuclear reaction may cause the nucleus to recoil in the other direction. Molecular collisions Inelastic scattering is common in molecular collisions. 
Any collision which leads to a chemical reaction will be inelastic, but the term inelastic scattering is reserved for those collisions which do not result in reactions. There is a transfer of energy between the translational mode (kinetic energy) and rotational and vibrational modes. If the transferred energy is small compared to the incident energy of the scattered particle, one speaks of quasielastic scattering. See also Inelastic collision Elastic scattering Scattering theory References Particle physics Chemical kinetics Scattering, absorption and radiative transfer (optics)
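The Compton scattering mentioned above is one of the few inelastic processes with a simple closed-form result, which makes it a convenient numerical example. The sketch below is illustrative; the photon energy and scattering angle are arbitrary choices. It computes the Compton wavelength shift Δλ = (h / me c)(1 − cos θ) and the corresponding energy transferred from the photon to the electron.

```python
import math

H = 6.62607015e-34      # Planck constant, J s
ME = 9.1093837015e-31   # electron rest mass, kg
C = 2.99792458e8        # speed of light, m/s
EV = 1.602176634e-19    # joules per electronvolt

def compton_scatter(photon_energy_ev, theta_deg):
    """Return (wavelength shift in m, scattered photon energy in eV)."""
    lam_in = H * C / (photon_energy_ev * EV)                          # incident wavelength
    dlam = (H / (ME * C)) * (1.0 - math.cos(math.radians(theta_deg))) # Compton shift
    e_out = H * C / (lam_in + dlam) / EV
    return dlam, e_out

e_in = 100e3                                  # 100 keV incident photon (assumed)
dlam, e_out = compton_scatter(e_in, 90.0)     # scattered through 90 degrees (assumed)
print(f"wavelength shift: {dlam:.3e} m")
print(f"scattered photon: {e_out / 1e3:.1f} keV, "
      f"energy transferred to the electron: {(e_in - e_out) / 1e3:.1f} keV")
```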
0.781384
0.982377
0.767613
Effective potential
The effective potential (also known as effective potential energy) combines multiple, perhaps opposing, effects into a single potential. In its basic form, it is the sum of the 'opposing' centrifugal potential energy with the potential energy of a dynamical system. It may be used to determine the orbits of planets (both Newtonian and relativistic) and to perform semi-classical atomic calculations, and often allows problems to be reduced to fewer dimensions. Definition The basic form of the effective potential is defined as: U_eff(r) = L²/(2μr²) + U(r), where L is the angular momentum, r is the distance between the two masses, μ is the reduced mass of the two bodies (approximately equal to the mass of the orbiting body if one mass is much larger than the other), and U(r) is the general form of the potential. The effective force, then, is the negative gradient of the effective potential: F_eff = −dU_eff/dr r̂ = (L²/(μr³) − dU/dr) r̂, where r̂ denotes a unit vector in the radial direction. Important properties There are many useful features of the effective potential, such as the following. To find the radius r0 of a circular orbit, simply minimize the effective potential with respect to r, or equivalently set the net force to zero, and then solve for r0: dU_eff/dr = 0 at r = r0. After solving for r0, plug it back into U_eff to find the extremal value of the effective potential, U_eff(r0). A circular orbit may be either stable or unstable. If it is unstable, a small perturbation could destabilize the orbit, but a stable orbit would return to equilibrium. To determine the stability of a circular orbit, determine the concavity of the effective potential. If the concavity is positive, the orbit is stable: d²U_eff/dr² > 0. The frequency of small oscillations, using basic Hamiltonian analysis, is ω = √(U_eff''(r0)/μ), where the double prime indicates the second derivative of the effective potential with respect to r and it is evaluated at a minimum. Gravitational potential Consider a particle of mass m orbiting a much heavier object of mass M. Assume Newtonian mechanics, which is both classical and non-relativistic. The conservation of energy and angular momentum give two constants E and L, which have values E = (1/2) m (ṙ² + r²φ̇²) − GMm/r and L = m r² φ̇ when the motion of the larger mass is negligible. In these expressions, ṙ is the derivative of r with respect to time, φ̇ is the angular velocity of mass m, G is the gravitational constant, E is the total energy, and L is the angular momentum. Only two variables are needed, since the motion occurs in a plane. Substituting the second expression into the first and rearranging gives E = (1/2) m ṙ² + L²/(2mr²) − GMm/r = (1/2) m ṙ² + U_eff(r), where U_eff(r) = L²/(2mr²) − GMm/r is the effective potential. The original two-variable problem has been reduced to a one-variable problem. For many applications the effective potential can be treated exactly like the potential energy of a one-dimensional system: for instance, an energy diagram using the effective potential determines turning points and locations of stable and unstable equilibria. A similar method may be used in other applications, for instance determining orbits in a general relativistic Schwarzschild metric. Effective potentials are widely used in various condensed matter subfields, e.g. the Gauss-core potential (Likos 2002, Baeurle 2004) and the screened Coulomb potential (Likos 2001). See also Geopotential Notes References Further reading Mechanics Potentials
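The circular-orbit and small-oscillation properties described above can be checked numerically for the gravitational case. The sketch below is illustrative; the masses and angular momentum are arbitrary, roughly Earth–Sun-like values. It finds the radius that minimizes U_eff(r) = L²/(2mr²) − GMm/r and verifies that the small-oscillation frequency √(U_eff''(r0)/m) matches the Kepler orbital frequency √(GM/r0³), as expected for the 1/r potential.

```python
import numpy as np

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M = 1.989e30           # central mass (Sun), kg
m = 5.972e24           # orbiting mass (Earth), kg
L = 2.66e40            # orbital angular momentum, kg m^2/s (roughly Earth's, assumed)

def U_eff(r):
    return L**2 / (2.0 * m * r**2) - G * M * m / r

# Circular orbit: dU_eff/dr = 0  ->  r0 = L^2 / (G M m^2)
r0 = L**2 / (G * M * m**2)

# Analytic second derivative of U_eff at r0, and the resulting oscillation frequency.
U2 = 3.0 * L**2 / (m * r0**4) - 2.0 * G * M * m / r0**3
omega_osc = np.sqrt(U2 / m)
omega_kepler = np.sqrt(G * M / r0**3)

print(f"circular orbit radius:        {r0:.3e} m")
print(f"small-oscillation frequency:  {omega_osc:.3e} rad/s")
print(f"Kepler orbital frequency:     {omega_kepler:.3e} rad/s")
```

With these assumed values r0 comes out close to one astronomical unit, and the two frequencies agree, reflecting the fact that radial perturbations of a Kepler orbit oscillate at the orbital frequency.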
0.78231
0.981195
0.767599
Wightman axioms
In mathematical physics, the Wightman axioms (also called Gårding–Wightman axioms), named after Arthur Wightman, are an attempt at a mathematically rigorous formulation of quantum field theory. Arthur Wightman formulated the axioms in the early 1950s, but they were first published only in 1964 after Haag–Ruelle scattering theory affirmed their significance. The axioms exist in the context of constructive quantum field theory and are meant to provide a basis for rigorous treatment of quantum fields and strict foundation for the perturbative methods used. One of the Millennium Problems is to realize the Wightman axioms in the case of Yang–Mills fields. Rationale One basic idea of the Wightman axioms is that there is a Hilbert space, upon which the Poincaré group acts unitarily. In this way, the concepts of energy, momentum, angular momentum and center of mass (corresponding to boosts) are implemented. There is also a stability assumption, which restricts the spectrum of the four-momentum to the positive light cone (and its boundary). However, this is not enough to implement locality. For that, the Wightman axioms have position-dependent operators called quantum fields, which form covariant representations of the Poincaré group. Since quantum field theory suffers from ultraviolet problems, the value of a field at a point is not well-defined. To get around this, the Wightman axioms introduce the idea of smearing over a test function to tame the UV divergences, which arise even in a free field theory. Because the axioms are dealing with unbounded operators, the domains of the operators have to be specified. The Wightman axioms restrict the causal structure of the theory by imposing either commutativity or anticommutativity between spacelike separated fields. They also postulate the existence of a Poincaré-invariant state called the vacuum and demand it to be unique. Moreover, the axioms assume that the vacuum is "cyclic", i.e., that the set of all vectors obtainable by evaluating at the vacuum-state elements of the polynomial algebra generated by the smeared field operators is a dense subset of the whole Hilbert space. Lastly, there is the primitive causality restriction, which states that any polynomial in the smeared fields can be arbitrarily accurately approximated (i.e. is the limit of operators in the weak topology) by polynomials in smeared fields over test functions with support in an open set in Minkowski space whose causal closure is the whole Minkowski space. Axioms W0 (assumptions of relativistic quantum mechanics) Quantum mechanics is described according to von Neumann; in particular, the pure states are given by the rays, i.e. the one-dimensional subspaces, of some separable complex Hilbert space. In the following, the scalar product of Hilbert space vectors Ψ and Φ is denoted by , and the norm of Ψ is denoted by . The transition probability between two pure states [Ψ] and [Φ] can be defined in terms of non-zero vector representatives Ψ and Φ to be and is independent of which representative vectors Ψ and Φ are chosen. The theory of symmetry is described according to Wigner. This is to take advantage of the successful description of relativistic particles by E. P. Wigner in his famous paper of 1939; see Wigner's classification. Wigner postulated the transition probability between states to be the same to all observers related by a transformation of special relativity. 
More generally, he considered the statement that a theory be invariant under a group G to be expressed in terms of the invariance of the transition probability between any two rays. The statement postulates that the group acts on the set of rays, that is, on projective space. Let (a, L) be an element of the Poincaré group (the inhomogeneous Lorentz group). Thus, a is a real Lorentz four-vector representing the change of spacetime origin x ↦ x − a, where x is in the Minkowski space M4, and L is a Lorentz transformation, which can be defined as a linear transformation of four-dimensional spacetime preserving the Lorentz distance c2t2 − x⋅x of every vector (ct, x). Then the theory is invariant under the Poincaré group if for every ray Ψ of the Hilbert space and every group element (a, L) is given a transformed ray Ψ(a, L) and the transition probability is unchanged by the transformation: Wigner's theorem says that under these conditions, the transformation on the Hilbert space are either linear or anti-linear operators (if moreover they preserve the norm, then they are unitary or antiunitary operators); the symmetry operator on the projective space of rays can be lifted to the underlying Hilbert space. This being done for each group element (a, L), we get a family of unitary or antiunitary operators U(a, L) on our Hilbert space, such that the ray Ψ transformed by (a, L) is the same as the ray containing U(a, L)ψ. If we restrict attention to elements of the group connected to the identity, then the anti-unitary case does not occur. Let (a, L) and (b, M) be two Poincaré transformations, and let us denote their group product by ; from the physical interpretation we see that the ray containing U(a, L)[U(b, M)ψ] must (for any ψ) be the ray containing U((a, L)⋅(b, M))ψ (associativity of the group operation). Going back from the rays to the Hilbert space, these two vectors may differ by a phase (and not in norm, because we choose unitary operators), which can depend on the two group elements (a, L) and (b, M), i.e. we do not have a representation of a group but rather a projective representation. These phases cannot always be cancelled by redefining each U(a), example for particles of spin 1/2. Wigner showed that the best one can get for Poincare group is i.e. the phase is a multiple of . For particles of integer spin (pions, photons, gravitons, ...) one can remove the ± sign by further phase changes, but for representations of half-odd-spin, we cannot, and the sign changes discontinuously as we go round any axis by an angle of 2π. We can, however, construct a representation of the covering group of the Poincare group, called the inhomogeneous SL(2, C); this has elements (a, A), where as before, a is a four-vector, but now A is a complex 2 × 2 matrix with unit determinant. We denote the unitary operators we get by U(a, A), and these give us a continuous, unitary and true representation in that the collection of U(a, A) obey the group law of the inhomogeneous SL(2, C). Because of the sign change under rotations by 2π, Hermitian operators transforming as spin 1/2, 3/2 etc., cannot be observables. This shows up as the univalence superselection rule: phases between states of spin 0, 1, 2 etc. and those of spin 1/2, 3/2 etc., are not observable. This rule is in addition to the non-observability of the overall phase of a state vector. 
Concerning the observables, and states |v⟩, we get a representation U(a, L) of Poincaré group on integer spin subspaces, and U(a, A) of the inhomogeneous SL(2, C) on half-odd-integer subspaces, which acts according to the following interpretation: An ensemble corresponding to U(a, L)|v⟩ is to be interpreted with respect to the coordinates in exactly the same way as an ensemble corresponding to |v⟩ is interpreted with respect to the coordinates x; and similarly for the odd subspaces. The group of spacetime translations is commutative, and so the operators can be simultaneously diagonalised. The generators of these groups give us four self-adjoint operators which transform under the homogeneous group as a four-vector, called the energy–momentum four-vector. The second part of the zeroth axiom of Wightman is that the representation U(a, A) fulfills the spectral condition—that the simultaneous spectrum of energy–momentum is contained in the forward cone: The third part of the axiom is that there is a unique state, represented by a ray in the Hilbert space, which is invariant under the action of the Poincaré group. It is called a vacuum. W1 (assumptions on the domain and continuity of the field) For each test function f, i.e. for a function with a compact support and continuous derivatives of any order, there exists a set of operators which, together with their adjoints, are defined on a dense subset of the Hilbert state space, containing the vacuum. The fields A are operator-valued tempered distributions. The Hilbert state space is spanned by the field polynomials acting on the vacuum (cyclicity condition). W2 (transformation law of the field) The fields are covariant under the action of Poincaré group and transform according to some representation S of the Lorentz group, or SL(2, C) if the spin is not integer: W3 (local commutativity or microscopic causality) If the supports of two fields are space-like separated, then the fields either commute or anticommute. Cyclicity of a vacuum and uniqueness of a vacuum are sometimes considered separately. Also, there is property of asymptotic completeness that Hilbert state space is spanned by the asymptotic spaces and , appearing in the collision S matrix. The other important property of field theory is mass gap, which is not required by the axioms that energy–momentum spectrum has a gap between zero and some positive number. Consequences of the axioms From these axioms, certain general theorems follow: CPT theorem — there is general symmetry under change of parity, particle–antiparticle reversal and time inversion (none of these symmetries alone exists in nature, as it turns out). Connection between spin and statistic — fields that transform according to half integer spin anticommute, while those with integer spin commute (axiom W3). There are actually technical fine details to this theorem. This can be patched up using Klein transformations. See parastatistics and also the ghosts in BRST. The impossibility of superluminal communication – if two observers are spacelike separated, then the actions of one observer (including both measurements and changes to the Hamiltonian) do not affect the measurement statistics of the other observer. 
Arthur Wightman showed that the vacuum expectation value distributions, satisfying certain set of properties, which follow from the axioms, are sufficient to reconstruct the field theory — Wightman reconstruction theorem, including the existence of a vacuum state; he did not find the condition on the vacuum expectation values guaranteeing the uniqueness of the vacuum; this condition, the cluster property, was found later by Res Jost, Klaus Hepp, David Ruelle and Othmar Steinmann. If the theory has a mass gap, i.e. there are no masses between 0 and some constant greater than zero, then vacuum expectation distributions are asymptotically independent in distant regions. Haag's theorem says that there can be no interaction picture — that we cannot use the Fock space of noninteracting particles as a Hilbert space — in the sense that we would identify Hilbert spaces via field polynomials acting on a vacuum at a certain time. Relation to other frameworks and concepts in quantum field theory The Wightman framework does not cover infinite-energy states like finite-temperature states. Unlike local quantum field theory, the Wightman axioms restrict the causal structure of the theory explicitly by imposing either commutativity or anticommutativity between spacelike separated fields, instead of deriving the causal structure as a theorem. If one considers a generalization of the Wightman axioms to dimensions other than 4, this (anti)commutativity postulate rules out anyons and braid statistics in lower dimensions. The Wightman postulate of a unique vacuum state does not necessarily make the Wightman axioms inappropriate for the case of spontaneous symmetry breaking because we can always restrict ourselves to a superselection sector. The cyclicity of the vacuum demanded by the Wightman axioms means that they describe only the superselection sector of the vacuum; again, this is not a great loss of generality. However, this assumption does leave out finite-energy states like solitons, which cannot be generated by a polynomial of fields smeared by test functions because a soliton, at least from a field-theoretic perspective, is a global structure involving topological boundary conditions at infinity. The Wightman framework does not cover effective field theories because there is no limit as to how small the support of a test function can be. I.e., there is no cutoff scale. The Wightman framework also does not cover gauge theories. Even in Abelian gauge theories conventional approaches start off with a "Hilbert space" with an indefinite norm (hence not truly a Hilbert space, which requires a positive-definite norm, but physicists call it a Hilbert space nonetheless), and the physical states and physical operators belong to a cohomology. This obviously is not covered anywhere in the Wightman framework. (However, as shown by Schwinger, Christ and Lee, Gribov, Zwanziger, Van Baal, etc., canonical quantization of gauge theories in Coulomb gauge is possible with an ordinary Hilbert space, and this might be the way to make them fall under the applicability of the axiom systematics.) The Wightman axioms can be rephrased in terms of a state called a Wightman functional on a Borchers algebra equal to the tensor algebra of a space of test functions. Existence of theories that satisfy the axioms One can generalize the Wightman axioms to dimensions other than 4. In dimension 2 and 3, interacting (i.e. non-free) theories that satisfy the axioms have been constructed. 
Currently, there is no proof that the Wightman axioms can be satisfied for interacting theories in dimension 4. In particular, the Standard Model of particle physics has no mathematically rigorous foundations. There is a million-dollar prize for a proof that the Wightman axioms can be satisfied for gauge theories, with the additional requirement of a mass gap. Osterwalder–Schrader reconstruction theorem Under certain technical assumptions, it has been shown that a Euclidean QFT can be Wick-rotated into a Wightman QFT, see Osterwalder–Schrader theorem. This theorem is the key tool for the constructions of interacting theories in dimension 2 and 3 that satisfy the Wightman axioms. See also Haag–Kastler axioms Hilbert's sixth problem Axiomatic quantum field theory Local quantum field theory References Further reading Arthur Wightman, "Hilbert's sixth problem: Mathematical treatment of the axioms of physics", in F. E. Browder (ed.): Vol. 28 (part 1) of Proc. Symp. Pure Math., Amer. Math. Soc., 1976, pp. 241–268. Res Jost, The general theory of quantized fields, Amer. Math. Soc., 1965. Axiomatic quantum field theory
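For reference, the covariance condition (W2), the spectral condition, and local (anti)commutativity (W3) sketched in prose above are often written in the following form; this is one common convention, and sign and smearing conventions vary between textbooks.

```latex
% Covariance of the smeared fields under the (covering of the) Poincaré group (axiom W2):
U(a,A)\,\varphi(f)\,U(a,A)^{-1} \;=\; S\!\left(A^{-1}\right)\varphi\!\left(f_{(a,A)}\right),
\qquad
f_{(a,A)}(x) \;=\; f\!\left(\Lambda(A)^{-1}(x-a)\right).

% Spectral condition: the joint spectrum of the energy–momentum operators P^{\mu}
% lies in the closed forward light cone.
\operatorname{spec}(P) \;\subset\; \overline{V}_{+}
  \;=\; \{\, p \in \mathbb{R}^{4} : p\cdot p \ge 0,\; p^{0} \ge 0 \,\}.

% Local (anti)commutativity (axiom W3): for test functions f, g with spacelike-separated supports,
[\varphi(f), \varphi(g)]_{\mp} \;=\; 0 .
```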
0.787707
0.974452
0.767583
Boltzmann's entropy formula
In statistical mechanics, Boltzmann's equation (also known as the Boltzmann–Planck equation) is a probability equation relating the entropy S, also written as S_B, of an ideal gas to the multiplicity W (commonly denoted as Ω or W), the number of real microstates corresponding to the gas's macrostate: S = kB ln W, where kB is the Boltzmann constant (also written as simply k) and equal to 1.380649 × 10−23 J/K, and ln is the natural logarithm function (or log base e). In short, the Boltzmann formula shows the relationship between entropy and the number of ways the atoms or molecules of a certain kind of thermodynamic system can be arranged. History The equation was originally formulated by Ludwig Boltzmann between 1872 and 1875, but later put into its current form by Max Planck in about 1900. To quote Planck, "the logarithmic connection between entropy and probability was first stated by L. Boltzmann in his kinetic theory of gases". A 'microstate' is a state specified in terms of the constituent particles of a body of matter or radiation that has been specified as a macrostate in terms of such variables as internal energy and pressure. A macrostate is experimentally observable, with at least a finite extent in spacetime. A microstate can be instantaneous, or can be a trajectory composed of a temporal progression of instantaneous microstates. In experimental practice, such are scarcely observable. The present account concerns instantaneous microstates. The value of W was originally intended to be proportional to the Wahrscheinlichkeit (the German word for probability) of a macroscopic state for some probability distribution of possible microstates—the collection of (unobservable microscopic single particle) "ways" in which the (observable macroscopic) thermodynamic state of a system can be realized by assigning different positions and momenta to the respective molecules. There are many instantaneous microstates that apply to a given macrostate. Boltzmann considered collections of such microstates. For a given macrostate, he called the collection of all possible instantaneous microstates of a certain kind by the name monode, for which Gibbs' term ensemble is used nowadays. For single particle instantaneous microstates, Boltzmann called the collection an ergode. Subsequently, Gibbs called it a microcanonical ensemble, and this name is widely used today, perhaps partly because Bohr was more interested in the writings of Gibbs than of Boltzmann. Interpreted in this way, Boltzmann's formula is the most basic formula for the thermodynamic entropy. Boltzmann's paradigm was an ideal gas of N identical particles, of which Ni are in the i-th microscopic condition (range) of position and momentum. For this case, the probability of each microstate of the system is equal, so it was equivalent for Boltzmann to calculate the number of microstates associated with a macrostate. W was historically misinterpreted as literally meaning the number of microstates, and that is what it usually means today. W can be counted using the formula for permutations W = N! / (N1! N2! ... Ni! ...), where i ranges over all possible molecular conditions and "!" denotes factorial. The "correction" in the denominator is due to the fact that identical particles in the same condition are indistinguishable. W is sometimes called the "thermodynamic probability" since it is an integer greater than one, while mathematical probabilities are always numbers between zero and one.
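The permutation formula above is easy to evaluate for a small toy system. The sketch below is illustrative only; the particle numbers are arbitrary. It counts W = N!/(N1!·N2!·…) for a few occupancy patterns of N identical particles over two conditions and evaluates S = kB ln W, showing that the evenly spread macrostate has the largest multiplicity and hence the largest Boltzmann entropy.

```python
from math import factorial, log, prod

KB = 1.380649e-23  # Boltzmann constant, J/K

def multiplicity(occupancies):
    """W = N! / (N1! N2! ...) for the given occupation numbers."""
    N = sum(occupancies)
    return factorial(N) // prod(factorial(n) for n in occupancies)

N = 100  # total number of particles (assumed toy value)
for n1 in (0, 10, 30, 50):
    occ = (n1, N - n1)
    W = multiplicity(occ)
    S = KB * log(W) if W > 1 else 0.0
    print(f"occupancies {occ}:  W = {W:.3e}  S = {S:.3e} J/K")
```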
Introduction of the natural logarithm In his 1877 paper, Boltzmann clarifies how molecular states are counted in order to determine the state distribution number, introducing the logarithm to simplify the equation. Boltzmann writes: “The first task is to determine the permutation number, previously designated by 𝒫, for any state distribution. Denoting by J the sum of the permutations 𝒫 for all possible state distributions, the quotient 𝒫/J is the state distribution’s probability, henceforth denoted by W. We would first like to calculate the permutations 𝒫 for the state distribution characterized by w0 molecules with kinetic energy 0, w1 molecules with kinetic energy ϵ, etc. … “The most likely state distribution will be for those w0, w1 … values for which 𝒫 is a maximum or since the numerator is a constant, for which the denominator is a minimum. The values w0, w1 must simultaneously satisfy the two constraints (1) and (2). Since the denominator of 𝒫 is a product, it is easiest to determine the minimum of its logarithm, …” Therefore, by making the denominator small, he maximizes the number of states. To simplify the product of the factorials, he takes its natural logarithm, which turns the product into a sum. This is the reason for the natural logarithm in Boltzmann’s entropy formula. Generalization Boltzmann's formula applies to microstates of a system, each possible microstate of which is presumed to be equally probable. But in thermodynamics, the universe is divided into a system of interest, plus its surroundings; then the entropy of Boltzmann's microscopically specified system can be identified with the system entropy in classical thermodynamics. The microstates of such a thermodynamic system are not equally probable—for example, high energy microstates are less probable than low energy microstates for a thermodynamic system kept at a fixed temperature by allowing contact with a heat bath. For thermodynamic systems where microstates of the system may not have equal probabilities, the appropriate generalization, called the Gibbs entropy, is: S = −kB Σi pi ln pi, where pi is the probability of the i-th microstate. This reduces to the Boltzmann formula S = kB ln W if the probabilities pi are all equal (pi = 1/W). Boltzmann used a formula of the ρ ln ρ type as early as 1866. He interpreted ρ as a density in phase space—without mentioning probability—but since this satisfies the axiomatic definition of a probability measure we can retrospectively interpret it as a probability anyway. Gibbs gave an explicitly probabilistic interpretation in 1878. Boltzmann himself used an expression equivalent to the Gibbs entropy in his later work and recognized it as more general than his own formula. That is, the Boltzmann formula is a corollary of the Gibbs entropy formula—and not vice versa. In every situation where the Boltzmann formula is valid, the Gibbs formula is valid also—and not vice versa. Boltzmann entropy excludes statistical dependencies The term Boltzmann entropy is also sometimes used to indicate entropies calculated based on the approximation that the overall probability can be factored into an identical separate term for each particle—i.e., assuming each particle has an identical independent probability distribution, and ignoring interactions and correlations between the particles. This is exact for an ideal gas of identical particles that move independently apart from instantaneous collisions, and is an approximation, possibly a poor one, for other systems. The Boltzmann entropy is obtained if one assumes one can treat all the component particles of a thermodynamic system as statistically independent.
The probability distribution of the system as a whole then factorises into the product of N separate identical terms, one term for each particle; and when the summation is taken over each possible state in the 6-dimensional phase space of a single particle (rather than the 6N-dimensional phase space of the system as a whole), the Gibbs entropy simplifies to the Boltzmann entropy . This reflects the original statistical entropy function introduced by Ludwig Boltzmann in 1872. For the special case of an ideal gas it exactly corresponds to the proper thermodynamic entropy. For anything but the most dilute of real gases, leads to increasingly wrong predictions of entropies and physical behaviours, by ignoring the interactions and correlations between different molecules. Instead one must consider the ensemble of states of the system as a whole, called by Boltzmann a holode, rather than single particle states. Gibbs considered several such kinds of ensembles; relevant here is the canonical one. See also History of entropy Gibbs entropy nat (unit) Shannon entropy von Neumann entropy References External links Introduction to Boltzmann's Equation Vorlesungen über Gastheorie, Ludwig Boltzmann (1896) vol. I, J.A. Barth, Leipzig Vorlesungen über Gastheorie, Ludwig Boltzmann (1898) vol. II. J.A. Barth, Leipzig. Eponymous equations of physics Thermodynamic entropy Thermodynamic equations Ludwig Boltzmann
0.772473
0.993649
0.767567
Terraforming of Mars
The terraforming of Mars or the terraformation of Mars is a hypothetical procedure that would consist of a planetary engineering project or concurrent projects aspiring to transform Mars from a planet hostile to terrestrial life to one that could sustainably host humans and other lifeforms free of protection or mediation. The process would involve the modification of the planet's extant climate, atmosphere, and surface through a variety of resource-intensive initiatives, as well as the installation of a novel ecological system or systems. Justifications for choosing Mars over other potential terraforming targets include the presence of water and a geological history that suggests it once harbored a dense atmosphere similar to Earth's. Hazards and difficulties include low gravity, toxic soil, low light levels relative to Earth's, and the lack of a magnetic field. Disagreement exists about whether current technology could render the planet habitable. Reasons for objecting to terraforming include ethical concerns about terraforming and the considerable cost that such an undertaking would involve. Reasons for terraforming the planet include allaying concerns about resource use and depletion on Earth and arguments that the altering and subsequent or concurrent settlement of other planets decreases the odds of humanity's extinction. Motivation and side effects Future population growth, demand for resources, and an alternate solution to the Doomsday argument may require human colonization of bodies other than Earth, such as Mars, the Moon, and other objects. Space colonization would facilitate harvesting the Solar System's energy and material resources. In many aspects, Mars is the most Earth-like of all the other planets in the Solar System. It is thought that Mars had a more Earth-like environment early in its geological history, with a thicker atmosphere and abundant water that was lost over the course of hundreds of millions of years through atmospheric escape. Given the foundations of similarity and proximity, Mars would make one of the most plausible terraforming targets in the Solar System. Side effects of terraforming include the potential displacement or destruction of any indigenous life if such life exists. Challenges and limitations The Martian environment presents several terraforming challenges to overcome and the extent of terraforming may be limited by certain key environmental factors. The process of terraforming aims to mitigate the following distinctions between Mars and Earth, among others: Reduced light levels (about 60% of Earth) Low surface gravity (38% of Earth's) Unbreathable atmosphere Low atmospheric pressure (about 1% of Earth's; well below the Armstrong limit) Ionizing solar and cosmic radiation at the surface Average temperature compared to Earth average of ) Molecular instability - bonds between atoms break down in critical molecules such as organic compounds Global dust storms No natural food source Toxic soil No global magnetic field to shield against the solar wind Countering the effects of space weather Mars has no intrinsic global magnetic field, but the solar wind directly interacts with the atmosphere of Mars, leading to the formation of a magnetosphere from magnetic field tubes. This poses challenges for mitigating solar radiation and retaining an atmosphere. The lack of a magnetic field, its relatively small mass, and its atmospheric photochemistry, all would have contributed to the evaporation and loss of its surface liquid water over time. 
Solar wind–induced ejection of Martian atmospheric atoms has been detected by Mars-orbiting probes, indicating that the solar wind has stripped the Martian atmosphere over time. For comparison, while Venus has a dense atmosphere, it has only traces of water vapor (20 ppm) as it lacks a large, dipole-induced, magnetic field. Earth's ozone layer provides additional protection. Ultraviolet light is blocked before it can dissociate water into hydrogen and oxygen. Low gravity and pressure The surface gravity on Mars is 38% of that on Earth. It is not known if this is enough to prevent the health problems associated with weightlessness. Mars's atmosphere has about 1% the pressure of the Earth's at sea level. It is estimated that there is sufficient ice in the regolith and the south polar cap to form a atmosphere if it is released by planetary warming. The reappearance of liquid water on the Martian surface would add to the warming effects and atmospheric density, but the lower gravity of Mars requires 2.6 times Earth's column airmass to obtain the optimum pressure at the surface. Additional volatiles to increase the atmosphere's density must be supplied from an external source, such as redirecting several massive asteroids (40-400 billion tonnes total) containing ammonia as a source of nitrogen. Breathing on Mars Current conditions in the Martian atmosphere, at less than of atmospheric pressure, are significantly below the Armstrong limit of where very low pressure causes exposed bodily liquids such as saliva, tears, and the liquids wetting the alveoli within the lungs to boil away. Without a pressure suit, no amount of breathable oxygen delivered by any means will sustain oxygen-breathing life for more than a few minutes. In the NASA technical report Rapid (Explosive) Decompression Emergencies in Pressure-Suited Subjects, after exposure to pressure below the Armstrong limit, a survivor reported that his "last conscious memory was of the water on his tongue beginning to boil". In these conditions humans die within minutes unless a pressure suit provides life support. If Mars' atmospheric pressure could rise above , then a pressure suit would not be required. Visitors would only need to wear a mask that supplied 100% oxygen under positive pressure. A further increase to of atmospheric pressure would allow a simple mask supplying pure oxygen. This might look similar to mountain climbers who venture into pressures below , also called the death zone, where an insufficient amount of bottled oxygen has often resulted in hypoxia with fatalities. However, if the increase in atmospheric pressure was achieved by increasing CO2 (or other toxic gas) the mask would have to ensure the external atmosphere did not enter the breathing apparatus. CO2 concentrations as low as 1% cause drowsiness in humans. Concentrations of 7% to 10% may cause suffocation, even in the presence of sufficient oxygen. (See Carbon dioxide toxicity.) In 2021, the NASA Mars rover Perseverance was able to make oxygen on Mars. However, the process is complex and takes a considerable amount of time to produce a small amount of oxygen. Advantages According to scientists, Mars exists on the outer edge of the habitable zone, a region of the Solar System where liquid water on the surface may be supported if concentrated greenhouse gases could increase the atmospheric pressure. 
The lack of both a magnetic field and geologic activity on Mars may be a result of its relatively small size, which allowed the interior to cool more quickly than Earth's, although the details of such a process are still not well understood. There are strong indications that Mars once had an atmosphere as thick as Earth's during an earlier stage in its development, and that its pressure supported abundant liquid water at the surface. Although water appears to have once been present on the Martian surface, ground ice currently exists from mid-latitudes to the poles. The soil and atmosphere of Mars contain many of the main elements crucial to life, including sulfur, nitrogen, hydrogen, oxygen, phosphorus and carbon. Any climate change induced in the near term is likely to be driven by greenhouse warming produced by an increase in atmospheric carbon dioxide and a consequent increase in atmospheric water vapor. These two gases are the only likely sources of greenhouse warming that are available in large quantities in Mars' environment. Large amounts of water ice exist below the Martian surface, as well as on the surface at the poles, where it is mixed with dry ice (frozen CO2). Significant amounts of water are located at the south pole of Mars, which, if melted, would correspond to a planetwide ocean 5–11 meters deep. Frozen carbon dioxide at the poles sublimes into the atmosphere during the Martian summers, and small amounts of water residue are left behind, which fast winds sweep off the poles. This seasonal occurrence transports large amounts of dust and water ice into the atmosphere, forming Earth-like ice clouds. Most of the oxygen in the Martian atmosphere is present as carbon dioxide, the main atmospheric component. Molecular oxygen (O2) only exists in trace amounts. Large amounts of oxygen can also be found in metal oxides on the Martian surface, and in the soil, in the form of per-nitrates. An analysis of soil samples taken by the Phoenix lander indicated the presence of perchlorate, which has been used to liberate oxygen in chemical oxygen generators. Electrolysis could be employed to separate water on Mars into oxygen and hydrogen if sufficient liquid water and electricity were available. However, if vented into the atmosphere, the gas would escape into space. Proposed methods and strategies Terraforming Mars would entail three major interlaced changes: building up the magnetosphere, building up the atmosphere, and raising the temperature. The atmosphere of Mars is relatively thin and has a very low surface pressure. Because its atmosphere consists mainly of CO2, a known greenhouse gas, once Mars begins to heat, the CO2 may help to keep thermal energy near the surface. Moreover, as it heats, more CO2 should enter the atmosphere from the frozen reserves at the poles, enhancing the greenhouse effect. This means that the two processes of building the atmosphere and heating it would augment each other, favoring terraforming. However, it would be difficult to keep the atmosphere together because of the lack of a protective global magnetic field against erosion by the solar wind. Importing ammonia One method of augmenting the Martian atmosphere is to introduce ammonia (NH3). Large amounts of ammonia are likely to exist in frozen form on minor planets orbiting in the outer Solar System. It might be possible to redirect the orbits of these or smaller ammonia-rich objects so that they collide with Mars, thereby transferring the ammonia into the Martian atmosphere.
Ammonia is not stable in the Martian atmosphere, however. It breaks down into (diatomic) nitrogen and hydrogen after a few hours. Thus, though ammonia is a powerful greenhouse gas, it is unlikely to generate much planetary warming. Presumably, the nitrogen gas would eventually be depleted by the same processes that stripped Mars of much of its original atmosphere, but these processes are thought to have required hundreds of millions of years. Being much lighter, the hydrogen would be removed much more quickly. Carbon dioxide is 2.5 times the density of ammonia, and nitrogen gas, which Mars barely holds on to, is more than 1.5 times as dense, so any imported ammonia that did not break down would also be lost quickly into space. Importing hydrocarbons Another way to create a Martian atmosphere would be to import methane (CH4) or other hydrocarbons, which are common in Titan's atmosphere and on its surface; the methane could be vented into the atmosphere where it would act to compound the greenhouse effect. However, like ammonia (NH3), methane (CH4) is a relatively light gas. It is in fact even less dense than ammonia and so would similarly be lost into space if it were introduced, and at a faster rate than ammonia. Even if a method could be found to prevent it escaping into space, methane can exist in the Martian atmosphere for only a limited period before it is destroyed. Estimates of its lifetime range from 0.6 to 4 years. Use of fluorine compounds Especially powerful greenhouse gases, such as sulfur hexafluoride, chlorofluorocarbons (CFCs), or perfluorocarbons (PFCs), have been suggested both as a means of initially warming Mars and of maintaining long-term climate stability. These gases are proposed for introduction because they generate a greenhouse effect thousands of times stronger than that of CO2. Fluorine-based compounds such as sulfur hexafluoride and perfluorocarbons are preferable to chlorine-based ones, as the latter destroy ozone. It has been estimated that approximately 0.3 microbars of CFCs would need to be introduced into Mars' atmosphere to sublimate the south polar glaciers. This is equivalent to a mass of approximately 39 million tonnes, that is, about three times the amount of CFCs manufactured on Earth from 1972 to 1992 (when CFC production was banned by international treaty). Maintaining the temperature would require continual production of such compounds, as they are destroyed by photolysis. It has been estimated that introducing 170 kilotons of optimal greenhouse compounds (CF3CF2CF3, CF3SCF2CF3, SF6, SF5CF3, SF4(CF3)2) annually would be sufficient to maintain a 70-K greenhouse effect given a terraformed atmosphere with Earth-like pressure and composition. Typical proposals envision producing the gases on Mars using locally extracted materials, nuclear power, and a significant industrial effort. The potential for mining fluorine-containing minerals to obtain the raw material necessary for the production of CFCs and PFCs is supported by mineralogical surveys of Mars that estimate the elemental presence of fluorine in the bulk composition of Mars at 32 ppm by mass (as compared to 19.4 ppm for the Earth). Alternatively, CFCs might be introduced by sending rockets with payloads of compressed CFCs on collision courses with Mars. When the rockets crashed into the surface they would release their payloads into the atmosphere. A steady barrage of these "CFC rockets" would need to be sustained for a little over a decade while Mars is changed chemically and becomes warmer.
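To get a feel for the scale of the "CFC rocket" campaign just described, here is a minimal back-of-the-envelope sketch: divide the roughly 39 million tonnes quoted above by an assumed payload per launch. The payload figures are purely illustrative assumptions, not values from the text.

```python
# Number of launches needed to deliver a given total CFC mass, for several
# assumed (hypothetical) payload sizes per launch.

TOTAL_CFC_TONNES = 39e6   # tonnes, the estimate quoted in the text

for payload_tonnes in (20, 100, 500):            # assumed payload per launch
    launches = TOTAL_CFC_TONNES / payload_tonnes
    print(f"{payload_tonnes:4d} t per launch -> about {launches:,.0f} launches")
```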
Use of conductive nanorods A 2024 study proposed using nanorods consisting of a conductive material, such as aluminum or iron, made by processing Martian minerals. These nanorods would scatter and absorb the thermal infrared upwelling from the surface, thus warming the planet. This process is claimed to be over 5,000 times more effective (in terms of warming per unit mass) than warming using fluorine compounds. Use of orbital mirrors Mirrors made of thin aluminized PET film could be placed in orbit around Mars to increase the total insolation it receives. This would direct the sunlight onto the surface and could increase Mars's surface temperature directly. A mirror of 125 km radius could be positioned as a statite, using its effectiveness as a solar sail to hold a stationary position relative to Mars, near the poles, to sublimate the ice sheet and contribute to the warming greenhouse effect. However, certain problems have been found with this approach. The main concern is the difficulty of launching large mirrors from Earth. Use of nuclear weapons Elon Musk has proposed terraforming Mars by detonating nuclear weapons on the Martian polar ice caps to vaporize them and release carbon dioxide and water vapor into the atmosphere. Carbon dioxide and water vapor are greenhouse gases, and the resultant thicker atmosphere would trap heat from the Sun, increasing the planet's temperature. The formation of liquid water could be very favorable for oxygen-producing plants, and thus, human survival. Studies suggest that even if all the CO2 trapped in Mars' polar ice and regolith were released, it would not be enough to provide significant greenhouse warming to turn Mars into an Earth-like planet. Another criticism is that it would stir up enough dust and particles to block out a significant portion of the incoming sunlight, causing a nuclear winter, the opposite of the goal. Albedo reduction Reducing the albedo of the Martian surface would also make more efficient use of incoming sunlight in terms of heat absorption. This could be done by spreading dark dust from Mars's moons, Phobos and Deimos, which are among the blackest bodies in the Solar System; or by introducing dark extremophile microbial life forms such as lichens, algae and bacteria. The ground would then absorb more sunlight, warming the atmosphere. However, Mars is already the second-darkest planet in the Solar System, absorbing over 70% of incoming sunlight, so the scope for darkening it further is small. If algae or other green life were established, it would also contribute a small amount of oxygen to the atmosphere, though not enough to allow humans to breathe. The conversion process to produce oxygen is highly reliant upon water, without which the CO2 is mostly converted to carbohydrates. In addition, because atmospheric oxygen on Mars is lost into space (unless an artificial magnetosphere were to be created; see "Protecting the atmosphere" below), such life would need to be cultivated inside a closed system. On April 26, 2012, scientists reported that a lichen had survived, and shown a remarkable capacity to adapt its photosynthetic activity, during a 34-day simulation under Martian conditions in the Mars Simulation Laboratory (MSL) maintained by the German Aerospace Center (DLR). One final issue with albedo reduction is the common Martian dust storms. These cover the entire planet for weeks, and not only increase the albedo, but block sunlight from reaching the surface.
This has been observed to cause a surface temperature drop which the planet takes months to recover from. Once the dust settles it then covers whatever it lands on, effectively erasing the albedo reduction material from the view of the Sun. Funded research: ecopoiesis Since 2014, the NASA Institute for Advanced Concepts (NIAC) program and Techshot Inc have been working together to develop sealed biodomes that would employ colonies of oxygen-producing cyanobacteria and algae for the production of molecular oxygen (O2) on Martian soil. But first they need to test if it works on a small scale on Mars. The proposal is called Mars Ecopoiesis Test Bed. Eugene Boland is the Chief Scientist at Techshot, a company located in Greenville, Indiana. They intend to send small canisters of extremophile photosynthetic algae and cyanobacteria aboard a future rover mission. The rover would cork-screw the canisters into selected sites likely to experience transients of liquid water, drawing some Martian soil and then release oxygen-producing microorganisms to grow within the sealed soil. The hardware would use Martian subsurface ice as its phase changes into liquid water. The system would then look for oxygen given off as metabolic byproduct and report results to a Mars-orbiting relay satellite. If this experiment works on Mars, they will propose to build several large and sealed structures called biodomes, to produce and harvest oxygen for a future human mission to Mars life support systems. Being able to create oxygen there would provide considerable cost-savings to NASA and allow for longer human visits to Mars than would be possible if astronauts have to transport their own heavy oxygen tanks. This biological process, called ecopoiesis, would be isolated, in contained areas, and is not meant as a type of global planetary engineering for terraforming of Mars's atmosphere, but NASA states that "This will be the first major leap from laboratory studies into the implementation of experimental (as opposed to analytical) planetary in situ research of greatest interest to planetary biology, ecopoiesis, and terraforming." Research at the University of Arkansas presented in June 2015 suggested that some methanogens could survive in Mars's low pressure. Rebecca Mickol found that in her laboratory, four species of methanogens survived low-pressure conditions that were similar to a subsurface liquid aquifer on Mars. The four species that she tested were Methanothermobacter wolfeii, Methanosarcina barkeri, Methanobacterium formicicum, and Methanococcus maripaludis. Methanogens do not require oxygen or organic nutrients, are non-photosynthetic, use hydrogen as their energy source and carbon dioxide (CO2) as their carbon source, so they could exist in subsurface environments on Mars. Protecting the atmosphere One key aspect of terraforming Mars is to protect the atmosphere (both present and future-built) from being lost into space. Some scientists hypothesize that creating a planet-wide artificial magnetosphere would be helpful in resolving this issue. According to two NIFS Japanese scientists, it is feasible to do that with current technology by building a system of refrigerated latitudinal superconducting rings, each carrying a sufficient amount of direct current. In the same report, it is claimed that the economic impact of the system can be minimized by using it also as a planetary energy transfer and storage system (SMES). 
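For a rough sense of scale for the superconducting-ring idea just mentioned, the magnetic field at the centre of a single circular current loop is B = μ0·I/(2R), a standard magnetostatics result. The sketch below inverts this for the current; the ring radius and target field are illustrative assumptions, not figures from the NIFS proposal, which moreover concerns a system of latitudinal rings rather than a single loop.

```python
import math

MU_0 = 4 * math.pi * 1e-7   # vacuum permeability, T*m/A

def loop_current_for_centre_field(b_target, radius_m):
    """Current needed for a single circular loop of radius R to produce
    b_target at its centre: B = mu0 * I / (2 R)  =>  I = 2 R B / mu0."""
    return 2 * radius_m * b_target / MU_0

# Illustrative numbers only: an equatorial-scale ring and a few-microtesla field.
radius = 3.39e6    # m, roughly the radius of Mars
b_target = 2e-6    # T, an assumed target field of a few microtesla

current = loop_current_for_centre_field(b_target, radius)
print(f"Required loop current: {current:.2e} A")   # of order 1e7 A for these assumptions
```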
Magnetic shield at L1 orbit During the Planetary Science Vision 2050 Workshop in late February 2017, NASA scientist Jim Green proposed a concept of placing a magnetic dipole field between the planet and the Sun to protect it from high-energy solar particles. It would be located at the Mars–Sun Lagrange point L1, at about 320 R♂ (Mars radii), creating a partial and distant artificial magnetosphere. The field would need to be "Earth comparable" in strength as measured at 1 Earth radius, and the paper abstract states that this could be achieved with a sufficiently strong dipole magnet. If constructed, the shield may allow the planet to partially restore its atmosphere. Plasma torus along the orbit of Phobos A plasma torus created along the orbit of Phobos, by ionizing and accelerating particles from the moon, may be sufficient to create a magnetic field strong enough to protect a terraformed Mars. Thermodynamics of terraforming The overall energy required to sublimate the CO2 from the south polar ice cap was modeled by Zubrin and McKay in 1993. If using orbital mirrors, an estimated 120 MW-years of electrical energy would be required to produce mirrors large enough to vaporize the ice caps. This is considered the most effective method, though the least practical. If using powerful halocarbon greenhouse gases, on the order of 1,000 MW-years of electrical energy would be required to accomplish this heating. However, if all of this CO2 were put into the atmosphere, it would only double the current atmospheric pressure from 6 mbar to 12 mbar, amounting to about 1.2% of Earth's mean sea level pressure. The amount of warming that could be produced today by putting even 100 mbar of CO2 into the atmosphere is small. Additionally, once in the atmosphere, it likely would be removed quickly, either by diffusion into the subsurface and adsorption or by re-condensing onto the polar caps. The surface or atmospheric temperature required to allow liquid water to exist has not been determined, and liquid water conceivably could exist even when atmospheric temperatures remain well below the freezing point of pure water. However, the modest warming obtainable this way is much less than is thought necessary to produce liquid water. See also Areography (geography of Mars)
0.770366
0.996325
0.767535
Turbulence
In fluid dynamics, turbulence or turbulent flow is fluid motion characterized by chaotic changes in pressure and flow velocity. It is in contrast to laminar flow, which occurs when a fluid flows in parallel layers with no disruption between those layers. Turbulence is commonly observed in everyday phenomena such as surf, fast flowing rivers, billowing storm clouds, or smoke from a chimney, and most fluid flows occurring in nature or created in engineering applications are turbulent. Turbulence is caused by excessive kinetic energy in parts of a fluid flow, which overcomes the damping effect of the fluid's viscosity. For this reason turbulence is commonly realized in low viscosity fluids. In general terms, in turbulent flow, unsteady vortices of many sizes appear and interact with each other; consequently, drag due to friction effects increases. The onset of turbulence can be predicted by the dimensionless Reynolds number, the ratio of kinetic energy to viscous damping in a fluid flow. However, turbulence has long resisted detailed physical analysis, and the interactions within turbulence create a very complex phenomenon. Physicist Richard Feynman described turbulence as the most important unsolved problem in classical physics. The turbulence intensity affects many fields, for example fish ecology, air pollution, precipitation, and climate change. Examples of turbulence Smoke rising from a cigarette. For the first few centimeters, the smoke is laminar. The smoke plume becomes turbulent as its Reynolds number increases with increases in flow velocity and characteristic length scale. Flow over a golf ball. (This can be best understood by considering the golf ball to be stationary, with air flowing over it.) If the golf ball were smooth, the boundary layer flow over the front of the sphere would be laminar at typical conditions. However, the boundary layer would separate early, as the pressure gradient switched from favorable (pressure decreasing in the flow direction) to unfavorable (pressure increasing in the flow direction), creating a large region of low pressure behind the ball that creates high form drag. To prevent this, the surface is dimpled to perturb the boundary layer and promote turbulence. This results in higher skin friction, but it moves the point of boundary layer separation further along, resulting in lower drag. Clear-air turbulence experienced during airplane flight, as well as poor astronomical seeing (the blurring of images seen through the atmosphere). Most of the terrestrial atmospheric circulation. The oceanic and atmospheric mixed layers and intense oceanic currents. The flow conditions in much industrial equipment (such as pipes, ducts, precipitators, gas scrubbers, dynamic scraped surface heat exchangers, etc.) and machines (for instance, internal combustion engines and gas turbines). The external flow over all kinds of vehicles such as cars, airplanes, ships, and submarines. The motions of matter in stellar atmospheres. A jet exhausting from a nozzle into a quiescent fluid. As the flow emerges into this external fluid, shear layers originating at the lips of the nozzle are created. These layers separate the fast moving jet from the external fluid, and at a certain critical Reynolds number they become unstable and break down to turbulence. Biologically generated turbulence resulting from swimming animals affects ocean mixing. Snow fences work by inducing turbulence in the wind, forcing it to drop much of its snow load near the fence.
Bridge supports (piers) in water. When river flow is slow, water flows smoothly around the support legs. When the flow is faster, a higher Reynolds number is associated with the flow. The flow may start off laminar but is quickly separated from the leg and becomes turbulent. In many geophysical flows (rivers, atmospheric boundary layer), the flow turbulence is dominated by the coherent structures and turbulent events. A turbulent event is a series of turbulent fluctuations that contain more energy than the average flow turbulence. The turbulent events are associated with coherent flow structures such as eddies and turbulent bursting, and they play a critical role in terms of sediment scour, accretion and transport in rivers as well as contaminant mixing and dispersion in rivers and estuaries, and in the atmosphere. In the medical field of cardiology, a stethoscope is used to detect heart sounds and bruits, which are due to turbulent blood flow. In normal individuals, heart sounds are a product of turbulent flow as heart valves close. However, in some conditions turbulent flow can be audible due to other reasons, some of them pathological. For example, in advanced atherosclerosis, bruits (and therefore turbulent flow) can be heard in some vessels that have been narrowed by the disease process. Recently, turbulence in porous media became a highly debated subject. Strategies used by animals for olfactory navigation, and their success, are heavily influenced by turbulence affecting the odor plume. Features Turbulence is characterized by the following features: Irregularity Turbulent flows are always highly irregular. For this reason, turbulence problems are normally treated statistically rather than deterministically. Turbulent flow is chaotic. However, not all chaotic flows are turbulent. Diffusivity The readily available supply of energy in turbulent flows tends to accelerate the homogenization (mixing) of fluid mixtures. The characteristic which is responsible for the enhanced mixing and increased rates of mass, momentum and energy transports in a flow is called "diffusivity". Turbulent diffusion is usually described by a turbulent diffusion coefficient. This turbulent diffusion coefficient is defined in a phenomenological sense, by analogy with the molecular diffusivities, but it does not have a true physical meaning, being dependent on the flow conditions, and not a property of the fluid itself. In addition, the turbulent diffusivity concept assumes a constitutive relation between a turbulent flux and the gradient of a mean variable similar to the relation between flux and gradient that exists for molecular transport. In the best case, this assumption is only an approximation. Nevertheless, the turbulent diffusivity is the simplest approach for quantitative analysis of turbulent flows, and many models have been postulated to calculate it. For instance, in large bodies of water like oceans this coefficient can be found using Richardson's four-third power law and is governed by the random walk principle. In rivers and large ocean currents, the diffusion coefficient is given by variations of Elder's formula. Rotationality Turbulent flows have non-zero vorticity and are characterized by a strong three-dimensional vortex generation mechanism known as vortex stretching. In fluid dynamics, they are essentially vortices subjected to stretching associated with a corresponding increase of the component of vorticity in the stretching direction—due to the conservation of angular momentum. 
On the other hand, vortex stretching is the core mechanism on which the turbulence energy cascade relies to establish and maintain an identifiable structure function. In general, the stretching mechanism implies thinning of the vortices in the direction perpendicular to the stretching direction due to volume conservation of fluid elements. As a result, the radial length scale of the vortices decreases and the larger flow structures break down into smaller structures. The process continues until the small scale structures are small enough that their kinetic energy can be transformed by the fluid's molecular viscosity into heat. Turbulent flow is always rotational and three dimensional. For example, atmospheric cyclones are rotational but their substantially two-dimensional shapes do not allow vortex generation and so are not turbulent. On the other hand, oceanic flows are dispersive but essentially non-rotational and therefore are not turbulent. Dissipation To sustain turbulent flow, a persistent source of energy supply is required because turbulence dissipates rapidly as the kinetic energy is converted into internal energy by viscous shear stress. Turbulence causes the formation of eddies of many different length scales. Most of the kinetic energy of the turbulent motion is contained in the large-scale structures. The energy "cascades" from these large-scale structures to smaller scale structures by an inertial and essentially inviscid mechanism. This process continues, creating smaller and smaller structures, which produces a hierarchy of eddies. Eventually this process creates structures that are small enough that molecular diffusion becomes important and viscous dissipation of energy finally takes place. The scale at which this happens is the Kolmogorov length scale. Via this energy cascade, turbulent flow can be realized as a superposition of a spectrum of flow velocity fluctuations and eddies upon a mean flow. The eddies are loosely defined as coherent patterns of flow velocity, vorticity and pressure. Turbulent flows may be viewed as made of an entire hierarchy of eddies over a wide range of length scales, and the hierarchy can be described by the energy spectrum that measures the energy in flow velocity fluctuations for each length scale (wavenumber). The scales in the energy cascade are generally uncontrollable and highly non-symmetric. Nevertheless, based on these length scales these eddies can be divided into three categories. Integral time scale The integral time scale for a Lagrangian flow can be defined as T = (1/⟨u′u′⟩) ∫₀^∞ ⟨u′(t) u′(t + τ)⟩ dτ, where u′ is the velocity fluctuation and τ is the time lag between measurements. Integral length scales Large eddies obtain energy from the mean flow and also from each other. Thus, these are the energy production eddies which contain most of the energy. They have large flow velocity fluctuations and are low in frequency. Integral scales are highly anisotropic and are defined in terms of the normalized two-point flow velocity correlations. The maximum length of these scales is constrained by the characteristic length of the apparatus. For example, the largest integral length scale of pipe flow is equal to the pipe diameter. In the case of atmospheric turbulence, this length can reach up to the order of several hundred kilometers. The integral length scale can be defined as L = (1/⟨u′u′⟩) ∫₀^∞ ⟨u′(x) u′(x + r)⟩ dr, where r is the distance between two measurement locations, and u′ is the velocity fluctuation in that same direction.
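As a numerical illustration of the integral time scale just defined, the sketch below builds a synthetic fluctuating velocity signal with a known correlation time, forms its normalized autocorrelation, and integrates over the time lag. The signal model (a first-order autoregressive process) and all parameter values are illustrative choices, not anything specific to a particular flow.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic velocity fluctuation u'(t): an AR(1) process whose autocorrelation
# decays as exp(-tau / tau_true), standing in for a measured turbulent signal.
dt, n, tau_true = 0.001, 200_000, 0.05     # time step (s), samples, correlation time (s)
alpha = np.exp(-dt / tau_true)
u = np.zeros(n)
for i in range(1, n):
    u[i] = alpha * u[i - 1] + np.sqrt(1.0 - alpha**2) * rng.standard_normal()

# Normalized autocorrelation rho(tau), then T = integral of rho over the lag.
max_lag = int(10 * tau_true / dt)
var = np.mean(u * u)
rho = np.array([np.mean(u[: n - k] * u[k:]) / var for k in range(max_lag)])
T_integral = np.sum(rho) * dt

print(f"imposed correlation time     : {tau_true:.3f} s")
print(f"estimated integral time scale: {T_integral:.3f} s")   # should be close to tau_true
```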
Kolmogorov length scales Smallest scales in the spectrum that form the viscous sub-layer range. In this range, the energy input from nonlinear interactions and the energy drain from viscous dissipation are in exact balance. The small scales have high frequency, causing turbulence to be locally isotropic and homogeneous. Taylor microscales The intermediate scales between the largest and the smallest scales which make the inertial subrange. Taylor microscales are not dissipative scales, but pass down the energy from the largest to the smallest without dissipation. Some literatures do not consider Taylor microscales as a characteristic length scale and consider the energy cascade to contain only the largest and smallest scales; while the latter accommodate both the inertial subrange and the viscous sublayer. Nevertheless, Taylor microscales are often used in describing the term "turbulence" more conveniently as these Taylor microscales play a dominant role in energy and momentum transfer in the wavenumber space. Although it is possible to find some particular solutions of the Navier–Stokes equations governing fluid motion, all such solutions are unstable to finite perturbations at large Reynolds numbers. Sensitive dependence on the initial and boundary conditions makes fluid flow irregular both in time and in space so that a statistical description is needed. The Russian mathematician Andrey Kolmogorov proposed the first statistical theory of turbulence, based on the aforementioned notion of the energy cascade (an idea originally introduced by Richardson) and the concept of self-similarity. As a result, the Kolmogorov microscales were named after him. It is now known that the self-similarity is broken so the statistical description is presently modified. A complete description of turbulence is one of the unsolved problems in physics. According to an apocryphal story, Werner Heisenberg was asked what he would ask God, given the opportunity. His reply was: "When I meet God, I am going to ask him two questions: Why relativity? And why turbulence? I really believe he will have an answer for the first." A similar witticism has been attributed to Horace Lamb in a speech to the British Association for the Advancement of Science: "I am an old man now, and when I die and go to heaven there are two matters on which I hope for enlightenment. One is quantum electrodynamics, and the other is the turbulent motion of fluids. And about the former I am rather more optimistic." Onset of turbulence The onset of turbulence can be, to some extent, predicted by the Reynolds number, which is the ratio of inertial forces to viscous forces within a fluid which is subject to relative internal movement due to different fluid velocities, in what is known as a boundary layer in the case of a bounding surface such as the interior of a pipe. A similar effect is created by the introduction of a stream of higher velocity fluid, such as the hot gases from a flame in air. This relative movement generates fluid friction, which is a factor in developing turbulent flow. Counteracting this effect is the viscosity of the fluid, which as it increases, progressively inhibits turbulence, as more kinetic energy is absorbed by a more viscous fluid. The Reynolds number quantifies the relative importance of these two types of forces for given flow conditions, and is a guide to when turbulent flow will occur in a particular situation. 
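As a concrete illustration of using the Reynolds number as a guide to flow regime, here is a minimal sketch for water flowing in a pipe, using the definition Re = ρvL/μ given in the next paragraph and the approximate pipe-flow thresholds quoted there (turbulence first sustained above about 2040, fully turbulent above about 4000). The fluid properties, pipe diameter, and speeds are ordinary illustrative values.

```python
def reynolds_number(density, velocity, length, dynamic_viscosity):
    """Re = rho * v * L / mu (dimensionless)."""
    return density * velocity * length / dynamic_viscosity

# Water near room temperature in a 5 cm diameter pipe (illustrative values).
rho = 998.0       # kg/m^3
mu = 1.0e-3       # Pa*s
diameter = 0.05   # m, the characteristic length for pipe flow

for v in (0.01, 0.05, 0.5):   # m/s
    re = reynolds_number(rho, v, diameter, mu)
    if re < 2040:
        regime = "laminar"
    elif re < 4000:
        regime = "transitional (turbulence interspersed with laminar flow)"
    else:
        regime = "turbulent"
    print(f"v = {v:5.2f} m/s -> Re = {re:8.0f}  ({regime})")
```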
This ability to predict the onset of turbulent flow is an important design tool for equipment such as piping systems or aircraft wings, but the Reynolds number is also used in scaling of fluid dynamics problems, and is used to determine dynamic similitude between two different cases of fluid flow, such as between a model aircraft and its full-size version. Such scaling is not always linear and the application of Reynolds numbers to both situations allows scaling factors to be developed. A flow situation in which the kinetic energy is significantly absorbed due to the action of fluid molecular viscosity gives rise to a laminar flow regime. For this, the dimensionless Reynolds number is used as a guide. With respect to laminar and turbulent flow regimes: laminar flow occurs at low Reynolds numbers, where viscous forces are dominant, and is characterized by smooth, constant fluid motion; turbulent flow occurs at high Reynolds numbers and is dominated by inertial forces, which tend to produce chaotic eddies, vortices and other flow instabilities. The Reynolds number is defined as Re = ρ v L / μ, where: ρ is the density of the fluid (SI units: kg/m3), v is a characteristic velocity of the fluid with respect to the object (m/s), L is a characteristic linear dimension (m), and μ is the dynamic viscosity of the fluid (Pa·s or N·s/m2 or kg/(m·s)). While there is no theorem directly relating the non-dimensional Reynolds number to turbulence, flows at Reynolds numbers larger than 5000 are typically (but not necessarily) turbulent, while those at low Reynolds numbers usually remain laminar. In Poiseuille flow, for example, turbulence can first be sustained if the Reynolds number is larger than a critical value of about 2040; moreover, the turbulence is generally interspersed with laminar flow until a larger Reynolds number of about 4000. The transition occurs if the size of the object is gradually increased, or the viscosity of the fluid is decreased, or if the density of the fluid is increased. Heat and momentum transfer When flow is turbulent, particles exhibit additional transverse motion which enhances the rate of energy and momentum exchange between them, thus increasing the heat transfer and the friction coefficient. Assume for a two-dimensional turbulent flow that one was able to locate a specific point in the fluid and measure the actual flow velocity v of every particle that passed through that point at any given time. Then one would find the actual flow velocity fluctuating about a mean value, v = v̄ + v′, and similarly for temperature (T = T̄ + T′) and pressure (P = P̄ + P′), where the primed quantities denote fluctuations superposed on the mean. This decomposition of a flow variable into a mean value and a turbulent fluctuation was originally proposed by Osborne Reynolds in 1895, and is considered to be the beginning of the systematic mathematical analysis of turbulent flow, as a sub-field of fluid dynamics. While the mean values are taken as predictable variables determined by dynamics laws, the turbulent fluctuations are regarded as stochastic variables. The heat flux and momentum transfer (represented by the shear stress τ) in the direction normal to the flow for a given time can be written in the gradient-transport form q = ρ c_P ⟨v′_y T′⟩ = −k_t ∂T̄/∂y and τ = −ρ ⟨v′_y v′_x⟩ = μ_t ∂v̄_x/∂y, where c_P is the heat capacity at constant pressure, ρ is the density of the fluid, μ_t is the coefficient of turbulent viscosity and k_t is the turbulent thermal conductivity. Kolmogorov's theory of 1941 Richardson's notion of turbulence was that a turbulent flow is composed of "eddies" of different sizes.
The sizes define a characteristic length scale for the eddies, which are also characterized by flow velocity scales and time scales (turnover time) dependent on the length scale. The large eddies are unstable and eventually break up, originating smaller eddies, and the kinetic energy of the initial large eddy is divided into the smaller eddies that stemmed from it. These smaller eddies undergo the same process, giving rise to even smaller eddies which inherit the energy of their predecessor eddy, and so on. In this way, the energy is passed down from the large scales of the motion to smaller scales until reaching a sufficiently small length scale such that the viscosity of the fluid can effectively dissipate the kinetic energy into internal energy. In his original theory of 1941, Kolmogorov postulated that for very high Reynolds numbers, the small-scale turbulent motions are statistically isotropic (i.e. no preferential spatial direction could be discerned). In general, the large scales of a flow are not isotropic, since they are determined by the particular geometrical features of the boundaries (the size characterizing the large scales will be denoted as L). Kolmogorov's idea was that in Richardson's energy cascade this geometrical and directional information is lost, while the scale is reduced, so that the statistics of the small scales have a universal character: they are the same for all turbulent flows when the Reynolds number is sufficiently high. Thus, Kolmogorov introduced a second hypothesis: for very high Reynolds numbers the statistics of small scales are universally and uniquely determined by the kinematic viscosity ν and the rate of energy dissipation ε. With only these two parameters, the unique length that can be formed by dimensional analysis is η = (ν³/ε)^(1/4). This is today known as the Kolmogorov length scale (see Kolmogorov microscales). A turbulent flow is characterized by a hierarchy of scales through which the energy cascade takes place. Dissipation of kinetic energy takes place at scales of the order of the Kolmogorov length η, while the input of energy into the cascade comes from the decay of the large scales, of order L. These two scales at the extremes of the cascade can differ by several orders of magnitude at high Reynolds numbers. In between there is a range of scales (each one with its own characteristic length r) that has formed at the expense of the energy of the large ones. These scales are very large compared with the Kolmogorov length, but still very small compared with the large scale of the flow (i.e. η ≪ r ≪ L). Since eddies in this range are much larger than the dissipative eddies that exist at Kolmogorov scales, kinetic energy is essentially not dissipated in this range, and it is merely transferred to smaller scales until viscous effects become important as the order of the Kolmogorov scale is approached. Within this range inertial effects are still much larger than viscous effects, and it is possible to assume that viscosity does not play a role in their internal dynamics (for this reason this range is called "inertial range"). Hence, a third hypothesis of Kolmogorov was that at very high Reynolds number the statistics of scales in the range η ≪ r ≪ L are universally and uniquely determined by the scale r and the rate of energy dissipation ε. The way in which the kinetic energy is distributed over the multiplicity of scales is a fundamental characterization of a turbulent flow.
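A minimal numeric sketch of the two dimensional-analysis results in this section: the Kolmogorov length η = (ν³/ε)^(1/4) defined above, and the inertial-range spectrum E(k) = C ε^(2/3) k^(−5/3) that the next paragraph derives. The viscosity and dissipation rate are illustrative inputs, and the value C ≈ 1.5 for the Kolmogorov constant is an assumed typical figure, not one given in the text.

```python
def kolmogorov_length(nu, epsilon):
    """eta = (nu^3 / epsilon)^(1/4), the only length that can be formed from nu and epsilon."""
    return (nu**3 / epsilon) ** 0.25

def k41_spectrum(k, epsilon, c_k=1.5):
    """Inertial-range energy spectrum E(k) = C * eps^(2/3) * k^(-5/3);
    c_k ~ 1.5 is an assumed typical value of the Kolmogorov constant."""
    return c_k * epsilon ** (2.0 / 3.0) * k ** (-5.0 / 3.0)

# Illustrative values: air-like kinematic viscosity and a moderate dissipation rate.
nu = 1.5e-5        # m^2/s
epsilon = 1.0e-2   # m^2/s^3

eta = kolmogorov_length(nu, epsilon)
print(f"Kolmogorov length scale eta = {eta * 1e3:.3f} mm")

for k in (10.0, 100.0, 1000.0):    # wavenumbers (1/m) assumed to lie in the inertial range
    print(f"E(k = {k:6.1f}) = {k41_spectrum(k, epsilon):.3e} m^3/s^2")
```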
For homogeneous turbulence (i.e., statistically invariant under translations of the reference frame) this is usually done by means of the energy spectrum function E(k), where k is the modulus of the wavevector corresponding to some harmonics in a Fourier representation of the flow velocity field u(x): u(x) = ∫ û(k) e^(ik·x) d³k, where û(k) is the Fourier transform of the flow velocity field. Thus, E(k) dk represents the contribution to the kinetic energy from all the Fourier modes with wavevector modulus between k and k + dk, and therefore (1/2)⟨u_i u_i⟩ = ∫₀^∞ E(k) dk, where (1/2)⟨u_i u_i⟩ is the mean turbulent kinetic energy of the flow. The wavenumber k corresponding to length scale r is k = 2π/r. Therefore, by dimensional analysis, the only possible form for the energy spectrum function according to Kolmogorov's third hypothesis is E(k) = C ε^(2/3) k^(−5/3), where C would be a universal constant. This is one of the most famous results of Kolmogorov 1941 theory, describing transport of energy through scale space without any loss or gain. The Kolmogorov five-thirds law was first observed in a tidal channel, and considerable experimental evidence has since accumulated that supports it. Outside of the inertial range, the spectrum departs from this form. In spite of this success, Kolmogorov theory is at present under revision. This theory implicitly assumes that the turbulence is statistically self-similar at different scales. This essentially means that the statistics are scale-invariant and non-intermittent in the inertial range. A usual way of studying turbulent flow velocity fields is by means of flow velocity increments δu(r) = u(x + r) − u(x), that is, the difference in flow velocity between points separated by a vector r (since the turbulence is assumed isotropic, the flow velocity increment depends only on the modulus of r). Flow velocity increments are useful because they emphasize the effects of scales of the order of the separation r when statistics are computed. The statistical scale-invariance without intermittency implies that the scaling of flow velocity increments should occur with a unique scaling exponent β, so that when r is scaled by a factor λ, δu(λr) should have the same statistical distribution as λ^β δu(r), with β independent of the scale r. From this fact, and other results of Kolmogorov 1941 theory, it follows that the statistical moments of the flow velocity increments (known as structure functions in turbulence) should scale as ⟨(δu(r))^n⟩ = C_n (ε r)^(n/3), where the brackets denote the statistical average, and the C_n would be universal constants. There is considerable evidence that turbulent flows deviate from this behavior. The scaling exponents deviate from the n/3 value predicted by the theory, becoming a non-linear function of the order n of the structure function. The universality of the constants has also been questioned. For low orders the discrepancy with the Kolmogorov n/3 value is very small, which explains the success of Kolmogorov theory in regard to low-order statistical moments. In particular, it can be shown that when the energy spectrum follows a power law E(k) ∝ k^(−p) with 1 < p < 3, the second-order structure function also follows a power law, of the form ⟨(δu(r))²⟩ ∝ r^(p−1). Since the experimental values obtained for the second-order structure function only deviate slightly from the 2/3 value predicted by Kolmogorov theory, the value for p is very near to 5/3 (differences are about 2%). Thus the "Kolmogorov −5/3 spectrum" is generally observed in turbulence. However, for high-order structure functions, the difference from the Kolmogorov scaling is significant, and the breakdown of the statistical self-similarity is clear.
This behavior, and the lack of universality of the C_n constants, are related to the phenomenon of intermittency in turbulence and can be related to the non-trivial scaling behavior of the dissipation rate averaged over scale r. This is an important area of research in this field, and a major goal of the modern theory of turbulence is to understand what is universal in the inertial range, and how to deduce intermittency properties from the Navier-Stokes equations, i.e. from first principles. See also Astronomical seeing Atmospheric dispersion modeling Chaos theory Clear-air turbulence Different types of boundary conditions in fluid dynamics Eddy covariance Fluid dynamics Darcy–Weisbach equation Eddy Navier–Stokes equations Large eddy simulation Hagen–Poiseuille equation Kelvin–Helmholtz instability Lagrangian coherent structure Turbulence kinetic energy Mesocyclones Navier–Stokes existence and smoothness Swing bowling Taylor microscale Turbulence modeling Velocimetry Vertical draft Vortex Vortex generator Wake turbulence Wave turbulence Wingtip vortices Wind tunnel
0.769744
0.997127
0.767532
Celestial mechanics
Celestial mechanics is the branch of astronomy that deals with the motions of objects in outer space. Historically, celestial mechanics applies principles of physics (classical mechanics) to astronomical objects, such as stars and planets, to produce ephemeris data. History Modern analytic celestial mechanics started with Isaac Newton's Principia (1687). The name celestial mechanics is more recent than that. Newton wrote that the field should be called "rational mechanics". The term "dynamics" came in a little later with Gottfried Leibniz, and over a century after Newton, Pierre-Simon Laplace introduced the term celestial mechanics. Prior to Kepler there was little connection between exact, quantitative prediction of planetary positions, using geometrical or numerical techniques, and contemporary discussions of the physical causes of the planets' motion. Johannes Kepler Johannes Kepler (1571–1630) was the first to closely integrate the predictive geometrical astronomy, which had been dominant from Ptolemy in the 2nd century to Copernicus, with physical concepts to produce a New Astronomy, Based upon Causes, or Celestial Physics in 1609. His work led to the modern laws of planetary orbits, which he developed using his physical principles and the planetary observations made by Tycho Brahe. Kepler's elliptical model greatly improved the accuracy of predictions of planetary motion, years before Isaac Newton developed his law of gravitation in 1686. Isaac Newton Isaac Newton (25 December 1642 – 31 March 1727) is credited with introducing the idea that the motion of objects in the heavens, such as planets, the Sun, and the Moon, and the motion of objects on the ground, like cannon balls and falling apples, could be described by the same set of physical laws. In this sense he unified celestial and terrestrial dynamics. Using his law of gravity, Newton confirmed Kepler's Laws for elliptical orbits by deriving them from the gravitational two-body problem, which Newton included in his epochal Principia. Joseph-Louis Lagrange After Newton, Lagrange (25 January 1736 – 10 April 1813) attempted to solve the three-body problem, analyzed the stability of planetary orbits, and discovered the existence of the Lagrangian points. Lagrange also reformulated the principles of classical mechanics, emphasizing energy more than force, and developing a method to use a single polar coordinate equation to describe any orbit, even those that are parabolic and hyperbolic. This is useful for calculating the behaviour of planets and comets and such (parabolic and hyperbolic orbits are conic section extensions of Kepler's elliptical orbits). More recently, it has also become useful to calculate spacecraft trajectories. Simon Newcomb Simon Newcomb (12 March 1835 – 11 July 1909) was a Canadian-American astronomer who revised Peter Andreas Hansen's table of lunar positions. In 1877, assisted by George William Hill, he recalculated all the major astronomical constants. After 1884 he conceived, with A.M.W. Downing, a plan to resolve much international confusion on the subject. By the time he attended a standardisation conference in Paris, France, in May 1886, the international consensus was that all ephemerides should be based on Newcomb's calculations. A further conference as late as 1950 confirmed Newcomb's constants as the international standard. 
Albert Einstein Albert Einstein (14 March 1879 – 18 April 1955) explained the anomalous precession of Mercury's perihelion in his 1916 paper The Foundation of the General Theory of Relativity. This led astronomers to recognize that Newtonian mechanics did not provide the highest accuracy. The observation of binary pulsars – the first discovered in 1974 – whose orbits not only require the use of general relativity for their explanation, but whose evolution proves the existence of gravitational radiation, was a discovery that led to the 1993 Nobel Prize in Physics. Examples of problems Celestial motion, without additional forces such as drag forces or the thrust of a rocket, is governed by the reciprocal gravitational acceleration between masses. A generalization is the n-body problem, where a number n of masses are mutually interacting via the gravitational force. Although analytically not integrable in the general case, the integration can be well approximated numerically. Examples: 4-body problem: spaceflight to Mars (for parts of the flight the influence of one or two bodies is very small, so that there we have a 2- or 3-body problem; see also the patched conic approximation) 3-body problem: Quasi-satellite Spaceflight to, and stay at, a Lagrangian point In the case n = 2 (the two-body problem), the configuration is much simpler than for larger n. In this case, the system is fully integrable and exact solutions can be found. Examples: A binary star, e.g., Alpha Centauri (approx. the same mass) A binary asteroid, e.g., 90 Antiope (approx. the same mass) A further simplification is based on the "standard assumptions in astrodynamics", which include that one body, the orbiting body, is much smaller than the other, the central body. This is also often approximately valid. Examples: The Solar System orbiting the center of the Milky Way A planet orbiting the Sun A moon orbiting a planet A spacecraft orbiting Earth, a moon, or a planet (in the latter cases the approximation only applies after arrival at that orbit) Perturbation theory Perturbation theory comprises mathematical methods that are used to find an approximate solution to a problem which cannot be solved exactly. (It is closely related to methods used in numerical analysis, which are ancient.) The earliest use of modern perturbation theory was to deal with the otherwise unsolvable mathematical problems of celestial mechanics: Newton's solution for the orbit of the Moon, which moves noticeably differently from a simple Keplerian ellipse because of the competing gravitation of the Earth and the Sun. Perturbation methods start with a simplified form of the original problem, which is carefully chosen to be exactly solvable. In celestial mechanics, this is usually a Keplerian ellipse, which is correct when there are only two gravitating bodies (say, the Earth and the Moon), or a circular orbit, which is only correct in special cases of two-body motion, but is often close enough for practical use. The solved, but simplified, problem is then "perturbed" to make its time-rate-of-change equations for the object's position closer to the values from the real problem, such as including the gravitational attraction of a third, more distant body (the Sun). The slight changes that result from the terms in the equations – which themselves may have been simplified yet again – are used as corrections to the original solution.
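The iterate-and-correct pattern described above can be seen in miniature in a classic celestial-mechanics calculation: solving Kepler's equation M = E − e·sin E for the eccentric anomaly E. One starts from the exactly solvable circular case (E ≈ M) and repeatedly feeds the current estimate back in as a correction. This sketch only illustrates the successive-correction idea; it is not the orbital perturbation scheme itself.

```python
import math

def solve_kepler(mean_anomaly, eccentricity, cycles=8):
    """Solve Kepler's equation M = E - e*sin(E) by successive correction.

    The starting guess E = M is the exactly solvable 'simplified problem'
    (a circular orbit); each cycle feeds the previous estimate back in.
    Converges for eccentricities below 1 (elliptical orbits).
    """
    E = mean_anomaly                                     # zeroth approximation
    for _ in range(cycles):
        E = mean_anomaly + eccentricity * math.sin(E)    # corrected estimate
    return E

M, e = 1.0, 0.2                                          # radians; a moderately eccentric orbit
E = solve_kepler(M, e)
residual = M - (E - e * math.sin(E))
print(f"E = {E:.6f} rad, residual in Kepler's equation = {residual:.2e}")
```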
Because simplifications are made at every step, the corrections are never perfect, but even one cycle of corrections often provides a remarkably better approximate solution to the real problem. There is no requirement to stop at only one cycle of corrections. A partially corrected solution can be re-used as the new starting point for yet another cycle of perturbations and corrections. In principle, for most problems the recycling and refining of prior solutions to obtain a new generation of better solutions could continue indefinitely, to any desired finite degree of accuracy. The common difficulty with the method is that the corrections usually progressively make the new solutions very much more complicated, so each cycle is much more difficult to manage than the previous cycle of corrections. Newton is reported to have said, regarding the problem of the Moon's orbit "It causeth my head to ache." This general procedure – starting with a simplified problem and gradually adding corrections that make the starting point of the corrected problem closer to the real situation – is a widely used mathematical tool in advanced sciences and engineering. It is the natural extension of the "guess, check, and fix" method used anciently with numbers. Reference frame Problems in celestial mechanics are often posed in simplifying reference frames, such as the synodic reference frame applied to the three-body problem, where the origin coincides with the barycenter of the two larger celestial bodies. Other reference frames for n-body simulations include those that place the origin to follow the center of mass of a body, such as the heliocentric and the geocentric reference frames. The choice of reference frame gives rise to many phenomena, including the retrograde motion of superior planets while on a geocentric reference frame. Orbital mechanics See also Astrometry is a part of astronomy that deals with measuring the positions of stars and other celestial bodies, their distances and movements. Astrophysics Celestial navigation is a position fixing technique that was the first system devised to help sailors locate themselves on a featureless ocean. Developmental Ephemeris or the Jet Propulsion Laboratory Developmental Ephemeris (JPL DE) is a widely used model of the solar system, which combines celestial mechanics with numerical analysis and astronomical and spacecraft data. Dynamics of the celestial spheres concerns pre-Newtonian explanations of the causes of the motions of the stars and planets. Dynamical time scale Ephemeris is a compilation of positions of naturally occurring astronomical objects as well as artificial satellites in the sky at a given time or times. Gravitation Lunar theory attempts to account for the motions of the Moon. Numerical analysis is a branch of mathematics, pioneered by celestial mechanicians, for calculating approximate numerical answers (such as the position of a planet in the sky) which are too difficult to solve down to a general, exact formula. Creating a numerical model of the solar system was the original goal of celestial mechanics, and has only been imperfectly achieved. It continues to motivate research. An orbit is the path that an object makes, around another object, whilst under the influence of a source of centripetal force, such as gravity. Orbital elements are the parameters needed to specify a Newtonian two-body orbit uniquely. 
Osculating orbit is the temporary Keplerian orbit about a central body that an object would continue on, if other perturbations were not present. Retrograde motion is orbital motion in a system, such as a planet and its satellites, that is contrary to the direction of rotation of the central body, or more generally contrary in direction to the net angular momentum of the entire system. Apparent retrograde motion is the periodic, apparently backwards motion of planetary bodies when viewed from the Earth (an accelerated reference frame). Satellite is an object that orbits another object (known as its primary). The term is often used to describe an artificial satellite (as opposed to natural satellites, or moons). The common noun 'moon' (not capitalized) is used to mean any natural satellite of the other planets. Tidal force is the combination of out-of-balance forces and accelerations of (mostly) solid bodies that raises tides in bodies of liquid (oceans) and atmospheres, and strains planets' and satellites' crusts. Two solutions, called VSOP82 and VSOP87, are versions of one mathematical theory for the orbits and positions of the major planets, which seeks to provide accurate positions over an extended period of time.
0.772925
0.993022
0.767532
Geodesics in general relativity
In general relativity, a geodesic generalizes the notion of a "straight line" to curved spacetime. Importantly, the world line of a particle free from all external, non-gravitational forces is a particular type of geodesic. In other words, a freely moving or falling particle always moves along a geodesic. In general relativity, gravity can be regarded as not a force but a consequence of a curved spacetime geometry where the source of curvature is the stress–energy tensor (representing matter, for instance). Thus, for example, the path of a planet orbiting a star is the projection of a geodesic of the curved four-dimensional (4-D) spacetime geometry around the star onto three-dimensional (3-D) space. Mathematical expression The full geodesic equation is d²x^μ/ds² + Γ^μ_{αβ} (dx^α/ds)(dx^β/ds) = 0, where s is a scalar parameter of motion (e.g. the proper time), and Γ^μ_{αβ} are Christoffel symbols (sometimes called the affine connection coefficients or Levi-Civita connection coefficients) symmetric in the two lower indices. Greek indices may take the values: 0, 1, 2, 3 and the summation convention is used for repeated indices α and β. The quantity on the left-hand-side of this equation is the acceleration of a particle, so this equation is analogous to Newton's laws of motion, which likewise provide formulae for the acceleration of a particle. The Christoffel symbols are functions of the four spacetime coordinates and so are independent of the velocity or acceleration or other characteristics of a test particle whose motion is described by the geodesic equation. Equivalent mathematical expression using coordinate time as parameter So far the geodesic equation of motion has been written in terms of a scalar parameter s. It can alternatively be written in terms of the time coordinate, t ≡ x^0 (here we have used the triple bar to signify a definition). The geodesic equation of motion then becomes: d²x^μ/dt² = −Γ^μ_{αβ} (dx^α/dt)(dx^β/dt) + Γ^0_{αβ} (dx^α/dt)(dx^β/dt)(dx^μ/dt). This formulation of the geodesic equation of motion can be useful for computer calculations and to compare General Relativity with Newtonian Gravity. It is straightforward to derive this form of the geodesic equation of motion from the form which uses proper time as a parameter using the chain rule. Notice that both sides of this last equation vanish when the mu index is set to zero. If the particle's velocity is small enough, then the geodesic equation reduces to this: d²x^n/dt² = −Γ^n_{00}. Here the Latin index n takes the values [1,2,3]. This equation simply means that all test particles at a particular place and time will have the same acceleration, which is a well-known feature of Newtonian gravity. For example, everything floating around in the International Space Station will undergo roughly the same acceleration due to gravity. Derivation directly from the equivalence principle Physicist Steven Weinberg has presented a derivation of the geodesic equation of motion directly from the equivalence principle. The first step in such a derivation is to suppose that a free falling particle does not accelerate in the neighborhood of a point-event with respect to a freely falling coordinate system X^μ. Setting T ≡ X^0, we have the following equation that is locally applicable in free fall: d²X^μ/dT² = 0. The next step is to employ the multi-dimensional chain rule. We have: dX^μ/dT = (∂X^μ/∂x^ν) dx^ν/dT. Differentiating once more with respect to the time, we have: d²X^μ/dT² = (∂X^μ/∂x^ν) d²x^ν/dT² + (∂²X^μ/∂x^ν∂x^α) (dx^ν/dT)(dx^α/dT). We have already said that the left-hand-side of this last equation must vanish because of the Equivalence Principle.
Therefore: Multiply both sides of this last equation by the following quantity: Consequently, we have this: Weinberg defines the affine connection as follows: which leads to this formula: Notice that, if we had used the proper time “s” as the parameter of motion, instead of using the locally inertial time coordinate “T”, then our derivation of the geodesic equation of motion would be complete. In any event, let us continue by applying the one-dimensional chain rule: As before, we can set . Then the first derivative of x0 with respect to t is one and the second derivative is zero. Replacing λ with zero gives: Subtracting d xλ / d t times this from the previous equation gives: which is a form of the geodesic equation of motion (using the coordinate time as parameter). The geodesic equation of motion can alternatively be derived using the concept of parallel transport. Deriving the geodesic equation via an action We can (and this is the most common technique) derive the geodesic equation via the action principle. Consider the case of trying to find a geodesic between two timelike-separated events. Let the action be where is the line element. There is a negative sign inside the square root because the curve must be timelike. To get the geodesic equation we must vary this action. To do this let us parameterize this action with respect to a parameter . Doing this we get: We can now go ahead and vary this action with respect to the curve . By the principle of least action we get: Using the product rule we get: where Integrating by-parts the last term and dropping the total derivative (which equals to zero at the boundaries) we get that: Simplifying a bit we see that: so, multiplying this equation by we get: So by Hamilton's principle we find that the Euler–Lagrange equation is Multiplying by the inverse metric tensor we get that Thus we get the geodesic equation: with the Christoffel symbol defined in terms of the metric tensor as (Note: Similar derivations, with minor amendments, can be used to produce analogous results for geodesics between light-like or space-like separated pairs of points.) Equation of motion may follow from the field equations for empty space Albert Einstein believed that the geodesic equation of motion can be derived from the field equations for empty space, i.e. from the fact that the Ricci curvature vanishes. He wrote: It has been shown that this law of motion — generalized to the case of arbitrarily large gravitating masses — can be derived from the field equations of empty space alone. According to this derivation the law of motion is implied by the condition that the field be singular nowhere outside its generating mass points. and One of the imperfections of the original relativistic theory of gravitation was that as a field theory it was not complete; it introduced the independent postulate that the law of motion of a particle is given by the equation of the geodesic. A complete field theory knows only fields and not the concepts of particle and motion. For these must not exist independently from the field but are to be treated as part of it. On the basis of the description of a particle without singularity, one has the possibility of a logically more satisfactory treatment of the combined problem: The problem of the field and that of the motion coincide. 
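For reference, the affine connection that Weinberg's derivation defines and the Christoffel symbols obtained from the metric tensor in the action derivation can be written in the standard forms below; this is a reconstruction assuming the usual sign and index conventions rather than a reproduction of the original displays:
\[ \Gamma^{\lambda}{}_{\mu\nu} \equiv \frac{\partial x^{\lambda}}{\partial X^{\alpha}}\,\frac{\partial^{2} X^{\alpha}}{\partial x^{\mu}\,\partial x^{\nu}}, \qquad \Gamma^{\lambda}{}_{\mu\nu} = \tfrac{1}{2}\, g^{\lambda\sigma}\left( \partial_{\mu} g_{\sigma\nu} + \partial_{\nu} g_{\sigma\mu} - \partial_{\sigma} g_{\mu\nu} \right), \]
and the action derivation arrives at the geodesic equation in the form
\[ \frac{d^{2} x^{\lambda}}{ds^{2}} + \Gamma^{\lambda}{}_{\mu\nu}\,\frac{dx^{\mu}}{ds}\,\frac{dx^{\nu}}{ds} = 0. \]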
Both physicists and philosophers have often repeated the assertion that the geodesic equation can be obtained from the field equations to describe the motion of a gravitational singularity, but this claim remains disputed. According to David Malament, “Though the geodesic principle can be recovered as theorem in general relativity, it is not a consequence of Einstein’s equation (or the conservation principle) alone. Other assumptions are needed to derive the theorems in question.” Less controversial is the notion that the field equations determine the motion of a fluid or dust, as distinguished from the motion of a point-singularity. Extension to the case of a charged particle In deriving the geodesic equation from the equivalence principle, it was assumed that particles in a local inertial coordinate system are not accelerating. However, in real life, the particles may be charged, and therefore may be accelerating locally in accordance with the Lorentz force. That is: with The Minkowski tensor is given by: These last three equations can be used as the starting point for the derivation of an equation of motion in General Relativity, instead of assuming that acceleration is zero in free fall. Because the Minkowski tensor is involved here, it becomes necessary to introduce something called the metric tensor in General Relativity. The metric tensor g is symmetric, and locally reduces to the Minkowski tensor in free fall. The resulting equation of motion is as follows: with This last equation signifies that the particle is moving along a timelike geodesic; massless particles like the photon instead follow null geodesics (replace −1 with zero on the right-hand side of the last equation). It is important that the last two equations are consistent with each other, when the latter is differentiated with respect to proper time, and the following formula for the Christoffel symbols ensures that consistency: This last equation does not involve the electromagnetic fields, and it is applicable even in the limit as the electromagnetic fields vanish. The letter g with superscripts refers to the inverse of the metric tensor. In General Relativity, indices of tensors are lowered and raised by contraction with the metric tensor or its inverse, respectively. Geodesics as curves of stationary interval A geodesic between two events can also be described as the curve joining those two events which has a stationary interval (4-dimensional "length"). Stationary here is used in the sense in which that term is used in the calculus of variations, namely, that the interval along the curve varies minimally among curves that are nearby to the geodesic. In Minkowski space there is only one geodesic that connects any given pair of events, and for a time-like geodesic, this is the curve with the longest proper time between the two events. In curved spacetime, it is possible for a pair of widely separated events to have more than one time-like geodesic between them. In such instances, the proper times along several geodesics will not in general be the same. For some geodesics in such instances, it is possible for a curve that connects the two events and is nearby to the geodesic to have either a longer or a shorter proper time than the geodesic. For a space-like geodesic through two events, there are always nearby curves which go through the two events that have either a longer or a shorter proper length than the geodesic, even in Minkowski space. In Minkowski space, the geodesic will be a straight line. 
Any curve that differs from the geodesic purely spatially (i.e. does not change the time coordinate) in any inertial frame of reference will have a longer proper length than the geodesic, but a curve that differs from the geodesic purely temporally (i.e. does not change the space coordinates) in such a frame of reference will have a shorter proper length. The interval of a curve in spacetime is Then, the Euler–Lagrange equation, becomes, after some calculation, where The goal being to find a curve for which the value of is stationary, where such goal can be accomplished by calculating the Euler–Lagrange equation for f, which is Substituting the expression of f into the Euler–Lagrange equation (which makes the value of the integral l stationary), gives Now calculate the derivatives: This is just one step away from the geodesic equation. If the parameter s is chosen to be affine, then the right side of the above equation vanishes (because \(g_{\mu\nu}\,\dot{x}^\mu \dot{x}^\nu\) is constant). Finally, we have the geodesic equation
Derivation using autoparallel transport
The geodesic equation can be alternatively derived from the autoparallel transport of curves. The derivation is based on the lectures given by Frederic P. Schuller at the We-Heraeus International Winter School on Gravity & Light. Let \(M\) be a smooth manifold with connection \(\nabla\) and \(\gamma\) be a curve on the manifold. The curve is said to be autoparallely transported if and only if \(\nabla_{v_\gamma} v_\gamma = 0\). In order to derive the geodesic equation, we have to choose a chart: Using the linearity and the Leibniz rule: Using how the connection acts on functions and expanding the second term with the help of the connection coefficient functions: The first term can be simplified to . Renaming the dummy indices: We finally arrive at the geodesic equation:
See also
Geodesic
Geodetic precession
Schwarzschild geodesics
Geodesics as Hamiltonian flows
Synge's world function
Bibliography
Steven Weinberg, Gravitation and Cosmology: Principles and Applications of the General Theory of Relativity, (1972) John Wiley & Sons, New York. See chapter 3.
Lev D. Landau and Evgenii M. Lifschitz, The Classical Theory of Fields, (1973) Pergamon Press, Oxford. See section 87.
Charles W. Misner, Kip S. Thorne, John Archibald Wheeler, Gravitation, (1970) W.H. Freeman, New York.
Bernard F. Schutz, A first course in general relativity, (1985; 2002) Cambridge University Press: Cambridge, UK. See chapter 6.
Robert M. Wald, General Relativity, (1984) The University of Chicago Press, Chicago. See Section 3.3.
References
General relativity Articles containing proofs
Attenuation
In physics, attenuation (in some contexts, extinction) is the gradual loss of flux intensity through a medium. For instance, dark glasses attenuate sunlight, lead attenuates X-rays, and water and air attenuate both light and sound at variable attenuation rates. Hearing protectors help reduce the acoustic flux reaching the ears. This phenomenon is called acoustic attenuation and is measured in decibels (dB). In electrical engineering and telecommunications, attenuation affects the propagation of waves and signals in electrical circuits, in optical fibers, and in air. Electrical attenuators and optical attenuators are commonly manufactured components in this field.
Background
In many cases, attenuation is an exponential function of the path length through the medium. In optics and in chemical spectroscopy, this is known as the Beer–Lambert law. In engineering, attenuation is usually measured in units of decibels per unit length of medium (dB/cm, dB/km, etc.) and is represented by the attenuation coefficient of the medium in question. Attenuation also occurs in earthquakes; when the seismic waves move farther away from the hypocenter, they grow smaller as they are attenuated by the ground.
Ultrasound
One area of research in which attenuation plays a prominent role is ultrasound physics. Attenuation in ultrasound is the reduction in amplitude of the ultrasound beam as a function of distance through the imaging medium. Accounting for attenuation effects in ultrasound is important because a reduced signal amplitude can affect the quality of the image produced. By knowing the attenuation that an ultrasound beam experiences traveling through a medium, one can adjust the input signal amplitude to compensate for any loss of energy at the desired imaging depth. Ultrasound attenuation measurement in heterogeneous systems, like emulsions or colloids, yields information on particle size distribution. There is an ISO standard on this technique. Ultrasound attenuation can be used for extensional rheology measurement. There are acoustic rheometers that employ Stokes' law for measuring extensional viscosity and volume viscosity. Wave equations which take acoustic attenuation into account can be written in a fractional derivative form. In homogeneous media, the main physical properties contributing to sound attenuation are viscosity and thermal conductivity.
Attenuation coefficient
Attenuation coefficients are used to quantify different media according to how strongly the transmitted ultrasound amplitude decreases as a function of frequency. The attenuation coefficient \(\alpha\) can be used to determine the total attenuation in dB in the medium using the following formula:
\[ \text{Attenuation (dB)} = \alpha \left[\tfrac{\text{dB}}{\text{MHz}\cdot\text{cm}}\right] \times \ell\,[\text{cm}] \times f\,[\text{MHz}]. \]
Attenuation is linearly dependent on the medium length \(\ell\) and attenuation coefficient \(\alpha\), as well as – approximately – the frequency \(f\) of the incident ultrasound beam for biological tissue (while for simpler media, such as air, the relationship is quadratic). Attenuation coefficients vary widely for different media. In biomedical ultrasound imaging, however, biological materials and water are the most commonly used media. The attenuation coefficients of common biological materials at a frequency of 1 MHz have been tabulated. There are two general mechanisms of acoustic energy loss: absorption and scattering. Ultrasound propagation through homogeneous media is associated only with absorption and can be characterized by an absorption coefficient alone. Propagation through heterogeneous media requires taking into account scattering. 
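To make the dB bookkeeping above concrete, the short sketch below evaluates the total attenuation and the corresponding intensity ratio under the linear-in-frequency model; the coefficient, frequency, and depth are illustrative values only, not measured data.

```python
def total_attenuation_db(alpha_db_per_cm_mhz: float, freq_mhz: float, path_cm: float) -> float:
    """Total attenuation in dB, assuming the linear-in-frequency model
    attenuation = alpha * frequency * path length (typical for soft tissue)."""
    return alpha_db_per_cm_mhz * freq_mhz * path_cm

# Illustrative values only: a soft-tissue-like coefficient of 0.5 dB/(cm*MHz),
# a 5 MHz beam, and a 10 cm one-way path.
loss_db = total_attenuation_db(0.5, 5.0, 10.0)
intensity_ratio = 10 ** (-loss_db / 10)  # fraction of the input intensity remaining

print(f"Total attenuation: {loss_db:.1f} dB")
print(f"Remaining intensity fraction: {intensity_ratio:.4f}")
```

With these assumed numbers the loss is 25 dB, i.e. only about 0.3% of the input intensity remains at depth, which is why imaging systems compensate with depth-dependent gain.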
Light attenuation in water
Shortwave radiation emitted from the Sun has wavelengths in the visible spectrum of light that range from 360 nm (violet) to 750 nm (red). When the Sun's radiation reaches the sea surface, the shortwave radiation is attenuated by the water, and the intensity of light decreases exponentially with water depth. The intensity of light at depth can be calculated using the Beer-Lambert Law. In clear mid-ocean waters, visible light is absorbed most strongly at the longest wavelengths. Thus, red, orange, and yellow wavelengths are totally absorbed at shallower depths, while blue and violet wavelengths reach deeper in the water column. Because the blue and violet wavelengths are absorbed least compared to the other wavelengths, open-ocean waters appear deep blue to the eye. Near the shore, coastal water contains more phytoplankton than the very clear mid-ocean waters. Chlorophyll-a pigments in the phytoplankton absorb light, and the plants themselves scatter light, making coastal waters less clear than mid-ocean waters. Chlorophyll-a absorbs light most strongly in the shortest wavelengths (blue and violet) of the visible spectrum. In coastal waters where high concentrations of phytoplankton occur, the green wavelength reaches the deepest in the water column and the color of water appears blue-green or green.
Seismic
The energy with which an earthquake affects a location depends on the running distance. The attenuation in the signal of ground motion intensity plays an important role in the assessment of possible strong ground shaking. A seismic wave loses energy as it propagates through the earth (seismic attenuation). This phenomenon is tied into the dispersion of the seismic energy with the distance. There are two types of dissipated energy: geometric dispersion, caused by distribution of the seismic energy to greater volumes, and dispersion as heat, also called intrinsic attenuation or anelastic attenuation. In porous fluid-saturated sedimentary rocks such as sandstones, intrinsic attenuation of seismic waves is primarily caused by the wave-induced flow of the pore fluid relative to the solid frame.
Electromagnetic
Attenuation decreases the intensity of electromagnetic radiation due to absorption or scattering of photons. Attenuation does not include the decrease in intensity due to inverse-square law geometric spreading. Therefore, calculation of the total change in intensity involves both the inverse-square law and an estimation of attenuation over the path. The primary causes of attenuation in matter are the photoelectric effect, Compton scattering, and, for photon energies of above 1.022 MeV, pair production.
Coaxial and general RF cables
The attenuation of RF cables is defined by:
\[ A = 10 \log_{10}\!\left(\frac{P_1}{P_2}\right) \ \text{dB per 100 m}, \]
where \(P_1\) is the input power into a 100 m long cable terminated with the nominal value of its characteristic impedance, and \(P_2\) is the output power at the far end of this cable. Attenuation in a coaxial cable is a function of the materials and the construction.
Radiography
An X-ray beam is attenuated when photons are absorbed as the beam passes through tissue. Interaction with matter varies between high energy photons and low energy photons. Photons travelling at higher energy are more capable of travelling through a tissue specimen, since they have a lower chance of interacting with matter. 
This is mainly due to the photoelectric effect, which states that "the probability of photoelectric absorption is approximately proportional to (Z/E)³, where Z is the atomic number of the tissue atom and E is the photon energy". In this context, an increase in photon energy (E) will result in a rapid decrease in the interaction with matter.
Optics
Attenuation in fiber optics, also known as transmission loss, is the reduction in intensity of the light beam (or signal) with respect to distance travelled through a transmission medium. Attenuation coefficients in fiber optics usually use units of dB/km through the medium due to the relatively high quality of transparency of modern optical transmission. The medium is typically a fiber of silica glass that confines the incident light beam to the inside. Attenuation is an important factor limiting the transmission of a digital signal across large distances. Thus, much research has gone into both limiting the attenuation and maximizing the amplification of the optical signal. Empirical research has shown that attenuation in optical fiber is caused primarily by both scattering and absorption. Attenuation in fiber optics can be quantified using the following equation:
\[ \text{Attenuation (dB)} = 10 \log_{10}\!\left(\frac{\text{input intensity}}{\text{output intensity}}\right). \]
Light scattering
The propagation of light through the core of an optical fiber is based on total internal reflection of the lightwave. Rough and irregular surfaces, even at the molecular level of the glass, can cause light rays to be reflected in many random directions. This type of reflection is referred to as "diffuse reflection", and it is typically characterized by a wide variety of reflection angles. Most objects that can be seen with the naked eye are visible due to diffuse reflection. Another term commonly used for this type of reflection is "light scattering". Light scattering from the surfaces of objects is our primary mechanism of physical observation. Light scattering from many common surfaces can be modelled by reflectance. Light scattering depends on the wavelength of the light being scattered. Thus, limits to spatial scales of visibility arise, depending on the frequency of the incident lightwave and the physical dimension (or spatial scale) of the scattering center, which is typically in the form of some specific microstructural feature. For example, since visible light has a wavelength scale on the order of one micrometer, scattering centers will have dimensions on a similar spatial scale. Thus, attenuation results from the incoherent scattering of light at internal surfaces and interfaces. In (poly)crystalline materials such as metals and ceramics, in addition to pores, most of the internal surfaces or interfaces are in the form of grain boundaries that separate tiny regions of crystalline order. It has recently been shown that, when the size of the scattering center (or grain boundary) is reduced below the size of the wavelength of the light being scattered, the scattering no longer occurs to any significant extent. This phenomenon has given rise to the production of transparent ceramic materials. Likewise, the scattering of light in optical quality glass fiber is caused by molecular-level irregularities (compositional fluctuations) in the glass structure. Indeed, one emerging school of thought is that a glass is simply the limiting case of a polycrystalline solid. Within this framework, "domains" exhibiting various degrees of short-range order become the building-blocks of both metals and alloys, as well as glasses and ceramics. 
Distributed both between and within these domains are microstructural defects that will provide the most ideal locations for the occurrence of light scattering. This same phenomenon is seen as one of the limiting factors in the transparency of IR missile domes. UV-Vis-IR absorption In addition to light scattering, attenuation or signal loss can also occur due to selective absorption of specific wavelengths, in a manner similar to that responsible for the appearance of color. Primary material considerations include both electrons and molecules as follows: At the electronic level, it depends on whether the electron orbitals are spaced (or "quantized") such that they can absorb a quantum of light (or photon) of a specific wavelength or frequency in the ultraviolet (UV) or visible ranges. This is what gives rise to color. At the atomic or molecular level, it depends on the frequencies of atomic or molecular vibrations or chemical bonds, how close-packed its atoms or molecules are, and whether or not the atoms or molecules exhibit long-range order. These factors will determine the capacity of the material transmitting longer wavelengths in the infrared (IR), far IR, radio and microwave ranges. The selective absorption of infrared (IR) light by a particular material occurs because the selected frequency of the light wave matches the frequency (or an integral multiple of the frequency) at which the particles of that material vibrate. Since different atoms and molecules have different natural frequencies of vibration, they will selectively absorb different frequencies (or portions of the spectrum) of infrared (IR) light. Applications In optical fibers, attenuation is the rate at which the signal light decreases in intensity. For this reason, glass fiber (which has a low attenuation) is used for long-distance fiber optic cables; plastic fiber has a higher attenuation and, hence, shorter range. There also exist optical attenuators that decrease the signal in a fiber optic cable intentionally. Attenuation of light is also important in physical oceanography. This same effect is an important consideration in weather radar, as raindrops absorb a part of the emitted beam that is more or less significant, depending on the wavelength used. Due to the damaging effects of high-energy photons, it is necessary to know how much energy is deposited in tissue during diagnostic treatments involving such radiation. In addition, gamma radiation is used in cancer treatments where it is important to know how much energy will be deposited in healthy and in tumorous tissue. In computer graphics attenuation defines the local or global influence of light sources and force fields. In CT imaging, attenuation describes the density or darkness of the image. Radio Attenuation is an important consideration in the modern world of wireless telecommunications. Attenuation limits the range of radio signals and is affected by the materials a signal must travel through (e.g., air, wood, concrete, rain). See the article on path loss for more information on signal loss in wireless communication. 
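As a rough illustration of the exponential (Beer–Lambert) attenuation invoked in the background and light-in-water sections above, the sketch below compares how quickly red and blue light die off with depth; the attenuation coefficients are order-of-magnitude illustrative values for clear ocean water, not reference data.

```python
import math

def intensity_fraction(k_per_m: float, depth_m: float) -> float:
    """Beer-Lambert law: I(z)/I0 = exp(-k * z) for attenuation coefficient k."""
    return math.exp(-k_per_m * depth_m)

# Illustrative coefficients (1/m) for clear open-ocean water.
coefficients = {"red (~650 nm)": 0.35, "blue (~450 nm)": 0.02}

for depth in (1, 10, 50):
    for label, k in coefficients.items():
        frac = intensity_fraction(k, depth)
        print(f"{label:>15} at {depth:>3} m: {100 * frac:5.1f}% of surface intensity")
```

Under these assumed coefficients only a few percent of red light survives to 10 m, while more than a third of blue light is still present at 50 m, consistent with the qualitative description of why open-ocean water looks blue.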
See also Air mass (astronomy) Astronomical filter Astronomical seeing Atmospheric refraction Attenuation length Attenuator (genetics) Cross section (physics) Electrical impedance Environmental remediation for natural attenuation Extinction (astronomy) ITU-R P.525 Mean free path Path loss Radar horizon Radiation length Rain fade Sunset#Colors Twinkling Wave propagation References External links NIST's XAAMDI: X-Ray Attenuation and Absorption for Materials of Dosimetric Interest Database NIST's XCOM: Photon Cross Sections Database NIST's FAST: Attenuation and Scattering Tables Underwater Radio Communication Physical phenomena Acoustics Telecommunications engineering
Earth science
Earth science or geoscience includes all fields of natural science related to the planet Earth. This is a branch of science dealing with the physical, chemical, and biological complex constitutions and synergistic linkages of Earth's four spheres: the biosphere, hydrosphere/cryosphere, atmosphere, and geosphere (or lithosphere). Earth science can be considered to be a branch of planetary science but with a much older history.
Geology
Geology is broadly the study of Earth's structure, substance, and processes. Geology is largely the study of the lithosphere, or Earth's surface, including the crust and rocks. It includes the physical characteristics and processes that occur in the lithosphere as well as how they are affected by geothermal energy. It incorporates aspects of chemistry, physics, and biology as elements of geology interact. Historical geology is the application of geology to interpret Earth history and how it has changed over time. Geochemistry studies the chemical components and processes of the Earth. Geophysics studies the physical properties of the Earth. Paleontology studies fossilized biological material in the lithosphere. Planetary geology studies geoscience as it pertains to extraterrestrial bodies. Geomorphology studies the origin of landscapes. Structural geology studies the deformation of rocks to produce mountains and lowlands. Resource geology studies how energy resources can be obtained from minerals. Environmental geology studies how pollution and contaminants affect soil and rock. Mineralogy is the study of minerals and includes the study of mineral formation, crystal structure, hazards associated with minerals, and the physical and chemical properties of minerals. Petrology is the study of rocks, including the formation and composition of rocks. Petrography is a branch of petrology that studies the typology and classification of rocks.
Earth's interior
Plate tectonics, mountain ranges, volcanoes, and earthquakes are geological phenomena that can be explained in terms of physical and chemical processes in the Earth's crust. Beneath the Earth's crust lies the mantle which is heated by the radioactive decay of heavy elements. The mantle is not quite solid and consists of magma which is in a state of semi-perpetual convection. This convection process causes the lithospheric plates to move, albeit slowly. The resulting process is known as plate tectonics. Areas of the crust where new crust is created are called divergent boundaries, those where it is brought back into the Earth are convergent boundaries and those where plates slide past each other, but no new lithospheric material is created or destroyed, are referred to as transform (or conservative) boundaries. Earthquakes result from the movement of the lithospheric plates, and they often occur near convergent boundaries where parts of the crust are forced into the earth as part of subduction. Plate tectonics might be thought of as the process by which the Earth is resurfaced. As the result of seafloor spreading, new crust and lithosphere is created by the flow of magma from the mantle to the near surface, through fissures, where it cools and solidifies. Through subduction, oceanic crust and lithosphere return to the convecting mantle. Volcanoes result primarily from the melting of subducted crust material. Crust material that is forced into the asthenosphere melts, and some portion of the melted material becomes light enough to rise to the surface, giving birth to volcanoes. 
Atmospheric science
Atmospheric science initially developed in the late 19th century as a means to forecast the weather through meteorology, the study of weather. Atmospheric chemistry was developed in the 20th century to measure air pollution and expanded in the 1970s in response to acid rain. Climatology studies the climate and climate change. The troposphere, stratosphere, mesosphere, thermosphere, and exosphere are the five layers which make up Earth's atmosphere. 75% of the mass in the atmosphere is located within the troposphere, the lowest layer. In all, the atmosphere is made up of about 78.0% nitrogen, 20.9% oxygen, and 0.92% argon, and small amounts of other gases including CO2 and water vapor. Water vapor and CO2 cause the Earth's atmosphere to catch and hold the Sun's energy through the greenhouse effect. This makes Earth's surface warm enough for liquid water and life. In addition to trapping heat, the atmosphere also protects living organisms by shielding the Earth's surface from cosmic rays. The magnetic field, created by the internal motions of the core, produces the magnetosphere which protects Earth's atmosphere from the solar wind. As the Earth is 4.5 billion years old, it would have lost its atmosphere by now if there were no protective magnetosphere.
Hydrology
Hydrology is the study of the hydrosphere and the movement of water on Earth. It emphasizes the study of how humans use and interact with freshwater supplies. Study of water's movement is closely related to geomorphology and other branches of Earth science. Applied hydrology involves engineering to maintain aquatic environments and distribute water supplies. Subdisciplines of hydrology include oceanography, hydrogeology, ecohydrology, and glaciology. Oceanography is the study of oceans. Hydrogeology is the study of groundwater. It includes the mapping of groundwater supplies and the analysis of groundwater contaminants. Applied hydrogeology seeks to prevent contamination of groundwater and mineral springs and make it available as drinking water. The earliest exploitation of groundwater resources dates back to 3000 BC, and hydrogeology as a science was developed by hydrologists beginning in the 17th century. Ecohydrology is the study of ecological systems in the hydrosphere. It can be divided into the physical study of aquatic ecosystems and the biological study of aquatic organisms. Ecohydrology includes the effects that organisms and aquatic ecosystems have on one another as well as how these ecosystems are affected by humans. Glaciology is the study of the cryosphere, including glaciers and coverage of the Earth by ice and snow. Concerns of glaciology include access to glacial freshwater, mitigation of glacial hazards, obtaining resources that exist beneath frozen land, and addressing the effects of climate change on the cryosphere.
Ecology
Ecology is the study of the biosphere. This includes the study of nature and of how living things interact with the Earth and one another and the consequences of that. It considers how living things use resources such as oxygen, water, and nutrients from the Earth to sustain themselves. It also considers how humans and other living creatures cause changes to nature.
Physical geography
Physical geography is the study of Earth's systems and how they interact with one another as part of a single self-contained system. 
It incorporates astronomy, mathematical geography, meteorology, climatology, geology, geomorphology, biology, biogeography, pedology, and soils geography. Physical geography is distinct from human geography, which studies the human populations on Earth, though it does include human effects on the environment.
Methodology
Methodologies vary depending on the nature of the subjects being studied. Studies typically fall into one of three categories: observational, experimental, or theoretical. Earth scientists often conduct sophisticated computer analysis or visit an interesting location to study earth phenomena (e.g. Antarctica or hot spot island chains). A foundational idea in Earth science is the notion of uniformitarianism, which states that "ancient geologic features are interpreted by understanding active processes that are readily observed." In other words, any geologic processes at work in the present have operated in the same ways throughout geologic time. This enables those who study Earth history to apply knowledge of how the Earth's processes operate in the present to gain insight into how the planet has evolved and changed throughout its long history.
Earth's spheres
In Earth science, it is common to conceptualize the Earth's surface as consisting of several distinct layers, often referred to as spheres: the lithosphere, the hydrosphere, the atmosphere, and the biosphere. These correspond to rocks, water, air, and life, and the concept of spheres is a useful tool for understanding the Earth's surface and its various processes. Also included by some are the cryosphere (corresponding to ice) as a distinct portion of the hydrosphere and the pedosphere (corresponding to soil) as an active and intermixed sphere. The following fields of science are generally categorized within the Earth sciences:
Geology describes the rocky parts of the Earth's crust (or lithosphere) and its historic development. Major subdisciplines are mineralogy and petrology, geomorphology, paleontology, stratigraphy, structural geology, engineering geology, and sedimentology.
Physical geography focuses on geography as an Earth science. Physical geography is the study of Earth's seasons, climate, atmosphere, soil, streams, landforms, and oceans. Physical geography can be divided into several branches or related fields, as follows: geomorphology, biogeography, environmental geography, palaeogeography, climatology, meteorology, coastal geography, hydrology, ecology, glaciology.
Geophysics and geodesy investigate the shape of the Earth, its reaction to forces and its magnetic and gravity fields. Geophysicists explore the Earth's core and mantle as well as the tectonic and seismic activity of the lithosphere. Geophysics is commonly used to supplement the work of geologists in developing a comprehensive understanding of crustal geology, particularly in mineral and petroleum exploration. Seismologists use geophysics to understand plate tectonic movement, as well as predict seismic activity.
Geochemistry is defined as the study of the processes that control the abundance, composition, and distribution of chemical compounds and isotopes in geologic environments. Geochemists use the tools and principles of chemistry to study the composition, structure, processes, and other physical aspects of the Earth. Major subdisciplines are aqueous geochemistry, cosmochemistry, isotope geochemistry and biogeochemistry.
Soil science covers the outermost layer of the Earth's crust that is subject to soil formation processes (or pedosphere). 
Major subdivisions in this field of study include edaphology and pedology. Ecology covers the interactions between organisms and their environment. This field of study differentiates the study of Earth from the study of other planets in the Solar System, Earth being its only planet teeming with life. Hydrology, oceanography and limnology are studies which focus on the movement, distribution, and quality of the water and involves all the components of the hydrologic cycle on the Earth and its atmosphere (or hydrosphere). "Sub-disciplines of hydrology include hydrometeorology, surface water hydrology, hydrogeology, watershed science, forest hydrology, and water chemistry." Glaciology covers the icy parts of the Earth (or cryosphere). Atmospheric sciences cover the gaseous parts of the Earth (or atmosphere) between the surface and the exosphere (about 1000 km). Major subdisciplines include meteorology, climatology, atmospheric chemistry, and atmospheric physics. Earth science breakup Atmosphere Atmospheric chemistry Geography Climatology Meteorology Hydrometeorology Paleoclimatology Biosphere Biogeochemistry Biogeography Ecology Landscape ecology Geoarchaeology Geomicrobiology Paleontology Palynology Micropaleontology Hydrosphere Hydrology Hydrogeology Limnology (freshwater science) Oceanography (marine science) Chemical oceanography Physical oceanography Biological oceanography (marine biology) Geological oceanography (marine geology) Paleoceanography Lithosphere (geosphere) Geology Economic geology Engineering geology Environmental geology Forensic geology Historical geology Quaternary geology Planetary geology and planetary geography Sedimentology Stratigraphy Structural geology Geography Human geography Physical geography Geochemistry Geomorphology Geophysics Geochronology Geodynamics (see also Tectonics) Geomagnetism Gravimetry (also part of Geodesy) Seismology Glaciology Hydrogeology Mineralogy Crystallography Gemology Petrology Petrophysics Speleology Volcanology Pedosphere Geography Soil science Edaphology Pedology Systems Earth system science Environmental science Geography Human geography Physical geography Gaia hypothesis Systems ecology Systems geology Others Geography Cartography Geoinformatics (GIScience) Geostatistics Geodesy and Surveying Remote Sensing Hydrography Nanogeoscience See also American Geosciences Institute Earth sciences graphics software Four traditions of geography Glossary of geology terms List of Earth scientists List of geoscience organizations List of unsolved problems in geoscience Making North America National Association of Geoscience Teachers Solid-earth science Science tourism Structure of the Earth References Sources Further reading Allaby M., 2008. Dictionary of Earth Sciences, Oxford University Press, Korvin G., 1998. Fractal Models in the Earth Sciences, Elsvier, Tarbuck E. J., Lutgens F. K., and Tasa D., 2002. Earth Science, Prentice Hall, External links Earth Science Picture of the Day, a service of Universities Space Research Association, sponsored by NASA Goddard Space Flight Center. Geoethics in Planetary and Space Exploration. Geology Buzz: Earth Science Planetary science Science-related lists
Complementarity (physics)
In physics, complementarity is a conceptual aspect of quantum mechanics that Niels Bohr regarded as an essential feature of the theory. The complementarity principle holds that certain pairs of complementary properties cannot all be observed or measured simultaneously, for example position and momentum, or wave and particle properties. In contemporary terms, complementarity encompasses both the uncertainty principle and wave-particle duality. Bohr considered one of the foundational truths of quantum mechanics to be the fact that setting up an experiment to measure one quantity of a pair, for instance the position of an electron, excludes the possibility of measuring the other, yet understanding both experiments is necessary to characterize the object under study. In Bohr's view, the behavior of atomic and subatomic objects cannot be separated from the measuring instruments that create the context in which the measured objects behave. Consequently, there is no "single picture" that unifies the results obtained in these different experimental contexts, and only the "totality of the phenomena" together can provide a completely informative description.
History
Background
Complementarity as a physical model derives from Niels Bohr's 1927 presentation in Como, Italy, at a scientific celebration of the work of Alessandro Volta 100 years earlier. Bohr's subject was complementarity, the idea that measurements of quantum events provide complementary information through seemingly contradictory results. While Bohr's presentation was not well received, it did crystallize the issues ultimately leading to the modern wave-particle duality concept. The contradictory results that triggered Bohr's ideas had been building up over the previous 20 years. This contradictory evidence came both from light and from electrons. The wave theory of light, broadly successful for over a hundred years, had been challenged by Planck's 1901 model of blackbody radiation and Einstein's 1905 interpretation of the photoelectric effect. These theoretical models use discrete energy, a quantum, to describe the interaction of light with matter. Despite confirmation by various experimental observations, the photon theory (as it came to be called later) remained controversial until Arthur Compton performed a series of experiments from 1922 to 1924 demonstrating the momentum of light. The experimental evidence of particle-like momentum seemingly contradicted other experiments demonstrating the wave-like interference of light. The contradictory evidence from electrons arrived in the opposite order. Many experiments by J. J. Thomson, Robert Millikan, and Charles Wilson, among others, had shown that free electrons had particle properties. However, in 1924, Louis de Broglie proposed that electrons had an associated wave and Schrödinger demonstrated that wave equations accurately account for electron properties in atoms. Again some experiments showed particle properties and others wave properties. Bohr's resolution of these contradictions was to accept them. In his Como lecture he says: "our interpretation of the experimental material rests essentially upon the classical concepts." Direct observation being impossible, observations of quantum effects are necessarily classical. Whatever the nature of quantum events, our only information will arrive via classical results. If experiments sometimes produce wave results and sometimes particle results, that is the nature of light and of the ultimate constituents of matter. 
Bohr's lectures Niels Bohr apparently conceived of the principle of complementarity during a skiing vacation in Norway in February and March 1927, during which he received a letter from Werner Heisenberg regarding an as-yet-unpublished result, a thought experiment about a microscope using gamma rays. This thought experiment implied a tradeoff between uncertainties that would later be formalized as the uncertainty principle. To Bohr, Heisenberg's paper did not make clear the distinction between a position measurement merely disturbing the momentum value that a particle carried and the more radical idea that momentum was meaningless or undefinable in a context where position was measured instead. Upon returning from his vacation, by which time Heisenberg had already submitted his paper for publication, Bohr convinced Heisenberg that the uncertainty tradeoff was a manifestation of the deeper concept of complementarity. Heisenberg duly appended a note to this effect to his paper, before its publication, stating: Bohr has brought to my attention [that] the uncertainty in our observation does not arise exclusively from the occurrence of discontinuities, but is tied directly to the demand that we ascribe equal validity to the quite different experiments which show up in the [particulate] theory on one hand, and in the wave theory on the other hand. Bohr publicly introduced the principle of complementarity in a lecture he delivered on 16 September 1927 at the International Physics Congress held in Como, Italy, attended by most of the leading physicists of the era, with the notable exceptions of Einstein, Schrödinger, and Dirac. However, these three were in attendance one month later when Bohr again presented the principle at the Fifth Solvay Congress in Brussels, Belgium. The lecture was published in the proceedings of both of these conferences, and was republished the following year in Naturwissenschaften (in German) and in Nature (in English). In his original lecture on the topic, Bohr pointed out that just as the finitude of the speed of light implies the impossibility of a sharp separation between space and time (relativity), the finitude of the quantum of action implies the impossibility of a sharp separation between the behavior of a system and its interaction with the measuring instruments and leads to the well-known difficulties with the concept of 'state' in quantum theory; the notion of complementarity is intended to capture this new situation in epistemology created by quantum theory. Physicists F.A.M. Frescura and Basil Hiley have summarized the reasons for the introduction of the principle of complementarity in physics as follows: Debate following the lectures Complementarity was a central feature of Bohr's reply to the EPR paradox, an attempt by Albert Einstein, Boris Podolsky and Nathan Rosen to argue that quantum particles must have position and momentum even without being measured and so quantum mechanics must be an incomplete theory. The thought experiment proposed by Einstein, Podolsky and Rosen involved producing two particles and sending them far apart. The experimenter could choose to measure either the position or the momentum of one particle. Given that result, they could in principle make a precise prediction of what the corresponding measurement on the other, faraway particle would find. To Einstein, Podolsky and Rosen, this implied that the faraway particle must have precise values of both quantities whether or not that particle is measured in any way. 
Bohr argued in response that the deduction of a position value could not be transferred over to the situation where a momentum value is measured, and vice versa. Later expositions of complementarity by Bohr include a 1938 lecture in Warsaw and a 1949 article written for a festschrift honoring Albert Einstein. It was also covered in a 1953 essay by Bohr's collaborator Léon Rosenfeld.
Mathematical formalism
For Bohr, complementarity was the "ultimate reason" behind the uncertainty principle. All attempts to grapple with atomic phenomena using classical physics were eventually frustrated, he wrote, leading to the recognition that those phenomena have "complementary aspects". But classical physics can be generalized to address this, and with "astounding simplicity", by describing physical quantities using non-commutative algebra. This mathematical expression of complementarity builds on the work of Hermann Weyl and Julian Schwinger, starting with Hilbert spaces and unitary transformation, leading to the theorems of mutually unbiased bases. In the mathematical formulation of quantum mechanics, physical quantities that classical mechanics had treated as real-valued variables become self-adjoint operators on a Hilbert space. These operators, called "observables", can fail to commute, in which case they are called "incompatible":
\[ \hat{A}\hat{B} \neq \hat{B}\hat{A}. \]
Incompatible observables cannot have a complete set of common eigenstates; there can be some simultaneous eigenstates of \(\hat{A}\) and \(\hat{B}\), but not enough in number to constitute a complete basis. The canonical commutation relation
\[ [\hat{x}, \hat{p}] = i\hbar \]
implies that this applies to position and momentum. In a Bohrian view, this is a mathematical statement that position and momentum are complementary aspects. Likewise, an analogous relationship holds for any two of the spin observables defined by the Pauli matrices; measurements of spin along perpendicular axes are complementary. The Pauli spin observables are defined for a quantum system described by a two-dimensional Hilbert space; mutually unbiased bases generalize these observables to Hilbert spaces of arbitrary finite dimension. Two bases \(\{|a_j\rangle\}\) and \(\{|b_k\rangle\}\) for an \(N\)-dimensional Hilbert space are mutually unbiased when
\[ |\langle a_j | b_k \rangle| = \frac{1}{\sqrt{N}} \quad \text{for all } j, k. \]
Here the basis vector \(|a_j\rangle\), for example, has the same overlap with every \(|b_k\rangle\); there is equal transition probability between a state in one basis and any state in the other basis. Each basis corresponds to an observable, and the observables for two mutually unbiased bases are complementary to each other. This leads to the description of complementarity as a statement about quantum kinematics. The concept of complementarity has also been applied to quantum measurements described by positive-operator-valued measures (POVMs).
Continuous complementarity
While the concept of complementarity can be discussed via two experimental extremes, a continuous tradeoff is also possible. In 1979 Wootters and Zurek introduced an information-theoretic treatment of the double-slit experiment, providing an approach to a quantitative model of complementarity. The wave-particle relation, introduced by Daniel Greenberger and Allaine Yasin in 1988, and since then refined by others, quantifies the trade-off between particle path distinguishability, \(D\), and wave interference fringe visibility, \(V\):
\[ D^2 + V^2 \leq 1. \]
The values of \(D\) and \(V\) can vary between 0 and 1 individually, but any experiment that combines particle and wave detection will limit one or the other, or both. 
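The following minimal sketch illustrates the duality relation for a symmetric two-path interferometer in which a which-path marker only partially distinguishes the paths; the marker-overlap model is a standard textbook idealization, and the specific overlap values are arbitrary.

```python
import math

def duality(marker_overlap: float) -> tuple[float, float]:
    """For a symmetric two-path interferometer whose paths are tagged by marker
    states with overlap s = |<m1|m2>|, the fringe visibility is V = s and the
    path distinguishability is D = sqrt(1 - s**2), so D**2 + V**2 = 1."""
    v = marker_overlap
    d = math.sqrt(1.0 - marker_overlap ** 2)
    return d, v

for s in (0.0, 0.5, 0.9, 1.0):
    d, v = duality(s)
    print(f"overlap={s:.2f}  D={d:.3f}  V={v:.3f}  D^2+V^2={d**2 + v**2:.3f}")
```

In this idealized pure-state case the bound is saturated: any gain in path information (larger D) comes at the cost of fringe visibility (smaller V), and vice versa.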
The detailed definitions of the two terms vary among applications, but the relation expresses the verified constraint that efforts to detect particle paths will result in less visible wave interference.
Modern role
While many of the early discussions of complementarity discussed hypothetical experiments, advances in technology have allowed advanced tests of this concept. Experiments like the quantum eraser verify the key ideas in complementarity; modern exploration of quantum entanglement builds directly on complementarity. In his Nobel lecture, physicist Julian Schwinger linked complementarity to quantum field theory. The Consistent histories interpretation of quantum mechanics takes a generalized form of complementarity as a key defining postulate.
See also
Copenhagen interpretation
Canonical coordinates
Conjugate variables
Interpretations of quantum mechanics
Wave–particle duality
References
Further reading
Berthold-Georg Englert, Marlan O. Scully & Herbert Walther, Quantum Optical Tests of Complementarity, Nature, Vol 351, pp 111–116 (9 May 1991), and (same authors) The Duality in Matter and Light, Scientific American, pp 56–61 (December 1994).
Niels Bohr, Causality and Complementarity: supplementary papers edited by Jan Faye and Henry J. Folse. The Philosophical Writings of Niels Bohr, Volume IV. Ox Bow Press. 1998.
External links
Discussions with Einstein on Epistemological Problems in Atomic Physics
Einstein's Reply to Criticisms
Quantum mechanics Niels Bohr Dichotomies Scientific laws
Governing equation
The governing equations of a mathematical model describe how the values of the unknown variables (i.e. the dependent variables) change when one or more of the known (i.e. independent) variables change. Physical systems can be modeled phenomenologically at various levels of sophistication, with each level capturing a different degree of detail about the system. A governing equation represents the most detailed and fundamental phenomenological model currently available for a given system. For example, at the coarsest level, a beam is just a 1D curve whose torque is a function of local curvature. At a more refined level, the beam is a 2D body whose stress-tensor is a function of local strain-tensor, and the strain-tensor is a function of its deformation. The equations are then a PDE system. Note that both levels of sophistication are phenomenological, but one is deeper than the other. As another example, in fluid dynamics, the Navier-Stokes equations are more refined than the Euler equations. As the field progresses and our understanding of the underlying mechanisms deepens, governing equations may be replaced or refined by new, more accurate models that better represent the system's behavior. These new governing equations can then be considered the deepest level of phenomenological model at that point in time.
Mass balance
A mass balance, also called a material balance, is an application of conservation of mass to the analysis of physical systems. It is the simplest governing equation, and it is simply a budget (balance calculation) over the quantity in question:
\[ \text{Input} + \text{Generation} = \text{Output} + \text{Accumulation} + \text{Consumption}. \]
Differential equation
Physics
The governing equations in classical physics that are taught at universities are listed below:
balance of mass
balance of (linear) momentum
balance of angular momentum
balance of energy
balance of entropy
Maxwell-Faraday equation for induced electric field
Ampère-Maxwell equation for induced magnetic field
Gauss equation for electric flux
Gauss equation for magnetic flux
Classical continuum mechanics
The basic equations in classical continuum mechanics are all balance equations, and as such each of them contains a time-derivative term which calculates how much the dependent variable changes with time. For an isolated, frictionless / inviscid system the first four equations are the familiar conservation equations in classical mechanics. Darcy's law of groundwater flow has the form of a volumetric flux caused by a pressure gradient. A flux in classical mechanics is normally not a governing equation, but usually a defining equation for transport properties. Darcy's law was originally established as an empirical equation, but was later shown to be derivable as an approximation of the Navier-Stokes equation combined with an empirical composite friction force term. This explains the duality in Darcy's law as a governing equation and a defining equation for absolute permeability. The non-linearity of the material derivative in balance equations in general, and the complexity of Cauchy's momentum equation and the Navier-Stokes equation, mean that simpler approximations of the basic equations in classical mechanics are commonly established. 
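As a minimal illustration of the mass balance described above as the simplest governing equation, the sketch below integrates the budget dM/dt = inflow − outflow for a single well-mixed tank; the flow rates, outflow model, and time step are arbitrary illustrative numbers.

```python
def simulate_tank(mass0: float, inflow: float, outflow_coeff: float,
                  dt: float, steps: int) -> float:
    """Explicit Euler integration of the governing equation
    dM/dt = inflow - outflow, with the outflow modeled as outflow_coeff * M."""
    mass = mass0
    for _ in range(steps):
        dm_dt = inflow - outflow_coeff * mass  # accumulation = input - output
        mass += dm_dt * dt
    return mass

# Illustrative numbers: 2 kg/s in, outflow proportional to the holdup.
final_mass = simulate_tank(mass0=100.0, inflow=2.0, outflow_coeff=0.01, dt=1.0, steps=1000)
print(f"Mass after 1000 s: {final_mass:.1f} kg (steady state at inflow/outflow_coeff = 200 kg)")
```

The same budget structure underlies the balance equations listed above; only the transported quantity and the constitutive closure for the fluxes change.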
Some examples of governing differential equations in classical continuum mechanics are:
Hele-Shaw flow
Plate theory
Kirchhoff–Love plate theory
Mindlin–Reissner plate theory
Vortex shedding
Annular fin
Astronautics
Finite volume method for unsteady flow
Acoustic theory
Precipitation hardening
Kelvin's circulation theorem
Kernel function for solving integral equation of surface radiation exchanges
Nonlinear acoustics
Large eddy simulation
Föppl–von Kármán equations
Timoshenko beam theory
Biology
A famous example of a governing differential equation within biology is the Lotka–Volterra equations, which are prey-predator equations.
Sequence of states
A governing equation may also be a state equation, an equation describing the state of the system, and thus actually be a constitutive equation that has "stepped up the ranks" because the model in question was not meant to include a time-dependent term in the equation. This is the case for a model of an oil production plant which on average operates in a steady-state mode. Results from one thermodynamic equilibrium calculation are input data to the next equilibrium calculation together with some new state parameters, and so on. In this case the algorithm and sequence of input data form a chain of actions, or calculations, that describes the change of states from the first state (based solely on input data) to the last state that finally comes out of the calculation sequence.
See also
Constitutive equation
Mass balance
Master equation
Mathematical model
Primitive equations
References
Equations
London dispersion force
London dispersion force
The polarizability is a measure of how easily electrons can be redistributed; a large polarizability implies that the electrons are more easily redistributed. This trend is exemplified by the halogens (from smallest to largest: F2, Cl2, Br2, I2). The same increase of dispersive attraction occurs within and between organic molecules in the order RF, RCl, RBr, RI (from smallest to largest) or with other more polarizable heteroatoms. Fluorine and chlorine are gases at room temperature, bromine is a liquid, and iodine is a solid. The London forces are thought to arise from the motion of electrons. Quantum mechanical theory The first explanation of the attraction between noble gas atoms was given by Fritz London in 1930. He used a quantum-mechanical theory based on second-order perturbation theory. The perturbation arises from the Coulomb interaction between the electrons and nuclei of the two moieties (atoms or molecules). The second-order perturbation expression of the interaction energy contains a sum over states. The states appearing in this sum are simple products of the excited electronic states of the monomers. Thus, no intermolecular antisymmetrization of the electronic states is included, and the Pauli exclusion principle is only partially satisfied. London wrote a Taylor series expansion of the perturbation V in $1/R$, where R is the distance between the nuclear centers of mass of the moieties. This expansion is known as the multipole expansion because the terms in this series can be regarded as energies of two interacting multipoles, one on each monomer. Substitution of the multipole-expanded form of V into the second-order energy yields an expression that resembles an expression describing the interaction between instantaneous multipoles (see the qualitative description above). Additionally, an approximation, named after Albrecht Unsöld, must be introduced in order to obtain a description of London dispersion in terms of polarizability volumes, $\alpha'$, and ionization energies, $I$ (an older term is ionization potentials). In this manner, the following approximation is obtained for the dispersion interaction between two atoms A and B: $E_{AB} = -\frac{3}{2}\,\frac{I_A I_B}{I_A + I_B}\,\frac{\alpha'_A \alpha'_B}{R^6}.$ Here $\alpha'_A$ and $\alpha'_B$ are the polarizability volumes of the respective atoms. The quantities $I_A$ and $I_B$ are the first ionization energies of the atoms, and R is the intermolecular distance. Note that this final London equation does not contain instantaneous dipoles (see molecular dipoles). The "explanation" of the dispersion force as the interaction between two such dipoles was invented after London arrived at the proper quantum mechanical theory. The authoritative work contains a criticism of the instantaneous dipole model and a modern and thorough exposition of the theory of intermolecular forces. The London theory has much similarity to the quantum mechanical theory of light dispersion, which is why London coined the phrase "dispersion effect". In physics, the term "dispersion" describes the variation of a quantity with frequency, which in the case of the London dispersion is the fluctuation of the electron distribution. Relative magnitude Dispersion forces are usually dominant among the three van der Waals contributions (orientation, induction, dispersion) to the interaction between atoms and molecules, with the exception of molecules that are small and highly polar, such as water. 
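Returning to London's approximation above, the formula is straightforward to evaluate numerically. The sketch below is illustrative only: the argon-like polarizability volume and ionization energy are rounded assumed values, and the function simply evaluates the quoted expression at a chosen separation.

```python
# Sketch: London's approximation for the dispersion energy of two atoms,
#   E_AB = -(3/2) * I_A*I_B/(I_A+I_B) * alpha'_A*alpha'_B / R^6,
# with polarizability volumes in nm^3 and ionization energies in eV.

def london_dispersion_energy(alpha_a, alpha_b, i_a, i_b, r):
    """Dispersion energy in eV (negative = attractive) at separation r (nm)."""
    return -1.5 * (i_a * i_b) / (i_a + i_b) * (alpha_a * alpha_b) / r**6

if __name__ == "__main__":
    alpha = 1.6e-3     # polarizability volume of an argon-like atom, nm^3 (rounded assumption)
    ionization = 15.8  # first ionization energy, eV (rounded assumption)
    for r in (0.3, 0.4, 0.5):  # separations in nm
        e = london_dispersion_energy(alpha, alpha, ionization, ionization, r)
        print(f"R = {r:.1f} nm: E_disp ≈ {e:.2e} eV")
```

As the R^-6 dependence suggests, each modest increase in separation in the loop above weakens the attraction several-fold.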
The following contribution of the dispersion to the total intermolecular interaction energy has been given: See also Dispersion (chemistry) van der Waals force van der Waals molecule Non-covalent interactions References Intermolecular forces Chemical bonding
0.77062
0.995815
0.767395
Walking
Walking (also known as ambulation) is one of the main gaits of terrestrial locomotion among legged animals. Walking is typically slower than running and other gaits. Walking is defined as an "inverted pendulum" gait in which the body vaults over the stiff limb or limbs with each step. This applies regardless of the usable number of limbs—even arthropods, with six, eight, or more limbs, walk. In humans, walking has health benefits including improved mental health and reduced risk of cardiovascular disease and death. Difference from running The word walk is descended from the Old English wealcan 'to roll'. In humans and other bipeds, walking is generally distinguished from running in that only one foot at a time leaves contact with the ground and there is a period of double-support. In contrast, running begins when both feet are off the ground with each step. This distinction has the status of a formal requirement in competitive walking events. For quadrupedal species, there are numerous gaits which may be termed walking or running, and distinctions based upon the presence or absence of a suspended phase or the number of feet in contact any time do not yield mechanically correct classification. The most effective method to distinguish walking from running is to measure the height of a person's centre of mass using motion capture or a force plate at mid-stance. During walking, the centre of mass reaches a maximum height at mid-stance, while running, it is then at a minimum. This distinction, however, only holds true for locomotion over level or approximately level ground. For walking up grades above 10%, this distinction no longer holds for some individuals. Definitions based on the percentage of the stride during which a foot is in contact with the ground (averaged across all feet) of greater than 50% contact corresponds well with identification of 'inverted pendulum' mechanics and are indicative of walking for animals with any number of limbs, however this definition is incomplete. Running humans and animals may have contact periods greater than 50% of a gait cycle when rounding corners, running uphill or carrying loads. Speed is another factor that distinguishes walking from running. Although walking speeds can vary greatly depending on many factors such as height, weight, age, terrain, surface, load, culture, effort, and fitness, the average human walking speed at crosswalks is about 5.0 kilometres per hour (km/h), or about 1.4 meters per second (m/s), or about 3.1 miles per hour (mph). Specific studies have found pedestrian walking speeds at crosswalks ranging from for older individuals and from for younger individuals; a brisk walking speed can be around . In Japan, the standard measure for walking speed is 80 m/min (4.8 km/h). Champion racewalkers can average more than over a distance of . An average human child achieves independent walking ability at around 11 months old. Health benefits Regular, brisk exercise can improve confidence, stamina, energy, weight control and may reduce stress. Scientific studies have also shown that walking may be beneficial for the mind, improving memory skills, learning ability, concentration, mood, creativity, and abstract reasoning. Sustained walking sessions for a minimum period of thirty to sixty minutes a day, five days a week, with the correct walking posture may improve health. The Centers for Disease Control and Prevention's fact sheet on the "Relationship of Walking to Mortality Among U.S. 
Adults with Diabetes" states that those with diabetes who walked for two or more hours a week lowered their mortality rate from all causes by 39 percent. Women who took 4,500 steps to 7,500 steps a day seemed to have fewer premature deaths compared to those who only took 2,700 steps a day. "Walking lengthened the life of people with diabetes regardless of age, sex, race, body mass index, length of time since diagnosis and presence of complications or functional limitations." One limited study found preliminary evidence of a relationship between the speed of walking and health, and that the best results are obtained with a speed of more than . A 2023 study by the European Journal of Preventive Cardiology, the largest study to date, found that walking at least 2,337 steps a day reduced the risk of dying from cardiovascular diseases, and that 3,967 steps a day reduced the risk of dying from any cause. Benefits continued to increase with more steps. James Leiper, associate medical director at the British Heart Foundation, said that if the benefits of walking could be sold as a medicine "we would be hailing it as a wonder drug". Origins It is theorized that "walking" among tetrapods originated underwater with air-breathing fish that could "walk" underwater, giving rise (potentially with vertebrates like Tiktaalik) to the plethora of land-dwelling life that walk on four or two limbs. While terrestrial tetrapods are theorised to have a single origin, arthropods and their relatives are thought to have independently evolved walking several times, specifically in hexapods, myriapods, chelicerates, tardigrades, onychophorans, and crustaceans. Little skates, members of the demersal fish community, can propel themselves by pushing off the ocean floor with their pelvic fins, using neural mechanisms which evolved as early as 420 million years ago, before vertebrates set foot on land. Hominin Data in the fossil record indicate that among hominin ancestors, bipedal walking was one of the first defining characteristics to emerge, predating other defining characteristics of Hominidae. Judging from footprints discovered on a former shore in Kenya, it is thought possible that ancestors of modern humans were walking in ways very similar to the present activity as long as 3 million years ago. Today, the walking gait of humans is unique and differs significantly from bipedal or quadrupedal walking gaits of other primates, like chimpanzees. It is believed to have been selectively advantageous in hominin ancestors in the Miocene due to metabolic energy efficiency. Human walking has been found to be slightly more energy efficient than travel for a quadrupedal mammal of a similar size, like chimpanzees. The energy efficiency of human locomotion can be accounted for by the reduced use of muscle in walking, due to an upright posture which places ground reaction forces at the hip and knee. When walking bipedally, chimpanzees take a crouched stance with bent knees and hips, forcing the quadriceps muscles to perform extra work, which costs more energy. Comparing chimpanzee quadrupedal travel to that of true quadrupedal animals has indicated that chimpanzees expend one-hundred and fifty percent of the energy required for travel compared to true quadrupeds. In 2007, a study further explored the origin of human bipedalism, using chimpanzee and human energetic costs of locomotion. 
They found that the energy spent in moving the human body is less than what would be expected for an animal of similar size and approximately seventy-five percent less costly than that of chimpanzees. Chimpanzee quadrupedal and bipedal energy costs are found to be relatively equal, with chimpanzee bipedalism costing roughly ten percent more than quadrupedal. The same 2007 study found that among chimpanzee individuals, the energy costs for bipedal and quadrupedal walking varied significantly, and those that flexed their knees and hips to a greater degree and took a more upright posture, closer to that of humans, were able to save more energy than chimpanzees that did not take this stance. Further, compared to other apes, humans have longer legs and short dorsally oriented ischia (hipbone), which result in longer hamstring extensor moments, improving walking energy economy. Longer legs also support lengthened Achilles tendons which are thought to increase energy efficiency in bipedal locomotor activities. It was thought that hominins like Ardipithecus ramidus, which had a variety of both terrestrial and arboreal adaptions would not be as efficient walkers, however, with a small body mass A. ramidus had developed an energy efficient means of bipedal walking while still maintaining arboreal adaptations. Humans have long femoral necks, meaning that while walking, hip muscles do not require as much energy to flex while moving. These slight kinematic and anatomic differences demonstrate how bipedal walking may have developed as the dominant means of locomotion among early hominins because of the energy saved. Variants Scrambling is a method of ascending a hill or mountain that involves using both hands, because of the steepness of the terrain. Of necessity, it will be a slow and careful form of walking and with possibly of occasional brief, easy rock climbing. Some scrambling takes place on narrow exposed ridges where more attention to balance will be required than in normal walking. Snow shoeing – Snowshoes are footwear for walking over the snow. Snowshoes work by distributing the weight of the person over a larger area so that the person's foot does not sink completely into the snow, a quality called "flotation". It is often said by snowshoers that if you can walk, you can snowshoe. This is true in optimal conditions, but snowshoeing properly requires some slight adjustments to walking. The method of walking is to lift the shoes slightly and slide the inner edges over each other, thus avoiding the unnatural and fatiguing "straddle-gait" that would otherwise be necessary. A snowshoer must be willing to roll his or her feet slightly as well. An exaggerated stride works best when starting out, particularly with larger or traditional shoes. Cross-country skiing – originally conceived like snow shoes as a means of travel in deep snow. Trails hiked in the summer are often skied in the winter and the Norwegian Trekking Association maintains over 400 huts stretching across thousands of kilometres of trails which hikers can use in the summer and skiers in the winter. Beach walking is a sport that is based on a walk on the sand of the beach. Beach walking can be developed on compact sand or non-compact sand. There are beach walking competitions on non-compact sand, and there are world records of beach walking on non-compact sand in Multiday distances. Beach walking has a specific technique of walk. 
Nordic walking is a physical activity and a sport, which is performed with specially designed walking poles similar to ski poles. Compared to regular walking, Nordic walking (also called pole walking) involves applying force to the poles with each stride. Nordic walkers use more of their entire body (with greater intensity) and receive fitness building stimulation not present in normal walking for the chest, lats, triceps, biceps, shoulder, abdominals, spinal and other core muscles that may result in significant increases in heart rate at a given pace. Nordic walking has been estimated as producing up to a 46% increase in energy consumption, compared to walking without poles. Pedestrianism is a sport that developed during the late eighteenth and nineteenth centuries, and was a popular spectator sport in the British Isles. By the end of the 18th century, and especially with the growth of the popular press, feats of foot travel over great distances (similar to a modern ultramarathon) gained attention, and were labeled "pedestrianism". Interest in the sport, and the wagering which accompanied it, spread to the United States, Canada, and Australia in the 19th century. By the end of the 19th century, Pedestrianism was largely displaced by the rise in modern spectator sports and by controversy involving rules, which limited its appeal as a source of wagering and led to its inclusion in the amateur athletics movement. Pedestrianism was first codified in the last half of the 19th century, evolving into what would become racewalking, By the mid 19th century, competitors were often expected to extend their legs straight at least once in their stride, and obey what was called the "fair heel and toe" rule. This rule, the source of modern racewalking, was a vague commandment that the toe of one foot could not leave the ground before the heel of the next foot touched down. This said, rules were customary and changed with the competition. Racers were usually allowed to jog in order to fend off cramps, and it was distance, not code, which determined gait for longer races. Newspaper reports suggest that "trotting" was common in events. Speed walking is the general term for fast walking. Within the Speed Walking category are a variety of fast walking techniques: Power Walking, Fit Walking, etc. Power walking is the act of walking with a speed at the upper end of the natural range for walking gait, typically . To qualify as power walking as opposed to jogging or running, at least one foot must be in contact with the ground at all times. Racewalking is a long-distance athletic event. Although it is a foot race, it is different from running in that one foot must appear to be in contact with the ground at all times. Stride length is reduced, so to achieve competitive speeds, racewalkers must attain cadence rates comparable to those achieved by Olympic 800-meter runners, and they must do so for hours at a time since the Olympic events are the race walk (men and women) and race walk (men only), and events are also held. See also pedestrianism above. Afghan walking: The Afghan Walk is a rhythmic breathing technique synchronized with walking. It was born in the 1980s on the basis of the observations made by the Frenchman Édouard G. Stiegler, during his contacts with Afghan caravaners, capable of making walks of more than 60 km per day for dozens of days. Backward walking: In this activity, an individual walks in reverse, facing away from their intended direction of movement. 
This unique form of exercise has gained popularity for its various health and fitness benefits. It requires more attention and engages different muscles than forward walking, making it a valuable addition to a fitness routine. Some potential benefits of retro walking include improved balance, enhanced coordination, strengthened leg muscles, and reduced knee stress. It is also a rehabilitation exercise for certain injuries and can be a way to switch up one's workout routine. Biomechanics Human walking is accomplished with a strategy called the double pendulum. During forward motion, the leg that leaves the ground swings forward from the hip. This sweep is the first pendulum. Then the leg strikes the ground with the heel and rolls through to the toe in a motion described as an inverted pendulum. The motion of the two legs is coordinated so that one foot or the other is always in contact with the ground. While walking, the muscles of the calf contract, raising the body's center of mass; while these muscles are contracted, potential energy is stored. Then gravity pulls the body forward and down onto the other leg, and the potential energy is transformed into kinetic energy. The process of human walking can save approximately sixty-five percent of the energy used by utilizing gravity in forward motion. Walking differs from a running gait in a number of ways. The most obvious is that during walking one leg always stays on the ground while the other is swinging. In running there is typically a ballistic phase where the runner is airborne with both feet in the air (for bipedals). Another difference concerns the movement of the centre of mass of the body. In walking the body "vaults" over the leg on the ground, raising the centre of mass to its highest point as the leg passes the vertical, and dropping it to the lowest as the legs are spread apart. Essentially, kinetic energy of forward motion is constantly being traded for a rise in potential energy. This is reversed in running where the centre of mass is at its lowest as the leg is vertical. This is because the impact of landing from the ballistic phase is absorbed by bending the leg and consequently storing energy in muscles and tendons. In running there is a conversion between kinetic, potential, and elastic energy. There is an absolute limit on an individual's speed of walking (without special techniques such as those employed in speed walking) due to the upwards acceleration of the centre of mass during a stride – if it is greater than the acceleration due to gravity the person will become airborne as they vault over the leg on the ground (a numerical sketch of this limit follows at the end of this passage). Typically, however, animals switch to a run at a lower speed than this due to energy efficiencies. Based on the 2D inverted pendulum model of walking, there are at least five physical constraints that place fundamental limits on walking like an inverted pendulum. These constraints are: take-off constraint, sliding constraint, fall-back constraint, steady-state constraint, high step-frequency constraint. Leisure activity Many people enjoy walking as a recreation in the mainly urban modern world, and it is one of the best forms of exercise. For some, walking is a way to enjoy nature and the outdoors; and for others the physical, sporting and endurance aspect is more important. There are a variety of different kinds of walking, including bushwalking, racewalking, beach walking, hillwalking, volksmarching, Nordic walking, trekking, dog walking and hiking. 
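Picking up the inverted-pendulum argument above: the centre of mass moving on an arc of radius roughly the leg length L needs a centripetal acceleration v²/L, and once that exceeds g the stance foot leaves the ground, giving v_max = sqrt(g·L) (a Froude number of 1). A minimal sketch, with leg lengths chosen as plausible assumed values:

```python
# Sketch: the inverted-pendulum ceiling on walking speed.
# The vaulting body needs centripetal acceleration v^2 / L; if that
# exceeds g the walker becomes airborne, so v_max = sqrt(g * L).
import math

G = 9.81  # gravitational acceleration, m/s^2

def max_walking_speed(leg_length_m):
    """Theoretical upper bound on walking speed (Froude number = 1)."""
    return math.sqrt(G * leg_length_m)

if __name__ == "__main__":
    for leg in (0.8, 0.9, 1.0):  # assumed adult leg lengths, metres
        v = max_walking_speed(leg)
        print(f"leg {leg:.1f} m: v_max ≈ {v:.1f} m/s ≈ {v * 3.6:.1f} km/h")
```

The resulting ceiling of roughly 3 m/s is well above the speed at which people actually break into a run, consistent with the remark above that the transition happens earlier for reasons of energy efficiency.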
Some people prefer to walk indoors on a treadmill, or in a gym, and fitness walkers and others may use a pedometer to count their steps. Hiking is the usual word used in Canada, the United States and South Africa for long vigorous walks; similar walks are called tramps in New Zealand, or hill walking or just walking in Australia, the UK and the Irish Republic. In the UK, rambling is also used. Australians also bushwalk. In English-speaking parts of North America, the term walking is used for short walks, especially in towns and cities. Snow shoeing is walking in snow; a slightly different gait is required compared with regular walking. Tourism In terms of tourism, the possibilities range from guided walking tours in cities, to organized trekking holidays in the Himalayas. In the UK the term walking tour also refers to a multi-day walk or hike undertaken by a group or individual. Well-organized systems of trails exist in many other European counties, as well as Canada, United States, New Zealand, and Nepal. Systems of lengthy waymarked walking trails now stretch across Europe from Norway to Turkey, Portugal to Cyprus. Many also walk the traditional pilgrim routes, of which the most famous is El Camino de Santiago, The Way of St. James. Numerous walking festivals and other walking events take place each year in many countries. The world's largest multi-day walking event is the International Four Days Marches Nijmegen in the Netherlands. The "Vierdaagse" (Dutch for "Four day Event") is an annual walk that has taken place since 1909; it has been based at Nijmegen since 1916. Depending on age group and category, walkers have to walk 30, 40 or 50 kilometers each day for four days. Originally a military event with a few civilians, it now is a mainly civilian event. Numbers have risen in recent years, with over 40,000 now taking part, including about 5,000 military personnel. Due to crowds on the route, since 2004 the organizers have limited the number of participants. In the U.S., there is the annual Labor Day walk on Mackinac Bridge, Michigan, which draws over 60,000 participants; it is the largest single-day walking event; while the Chesapeake Bay Bridge Walk in Maryland draws over 50,000 participants each year. There are also various walks organised as charity events, with walkers sponsored for a specific cause. These walks range in length from two miles (3 km) or five km to 50 miles (80 km). The MS Challenge Walk is an 80 km or 50-mile walk which raises money to fight multiple sclerosis, while walkers in the Oxfam Trailwalker cover 100 km or 60 miles. Rambling In Britain, The Ramblers, a registered charity, is the largest organisation that looks after the interests of walkers, with some 100,000 members. Its "Get Walking Keep Walking" project provides free route guides, led walks, as well as information for people new to walking. The Long Distance Walkers Association in the UK is for the more energetic walker, and organizes lengthy challenge hikes of 20 or even 50 miles (30 to 80 km) or more in a day. The LDWA's annual "Hundred" event, entailing walking 100 miles or 160 km in 48 hours, takes place each British Spring Bank Holiday weekend. Walkability There has been a recent focus among urban planners in some communities to create pedestrian-friendly areas and roads, allowing commuting, shopping and recreation to be done on foot. The concept of walkability has arisen as a measure of the degree to which an area is friendly to walking. 
Some communities are at least partially car-free, making them particularly supportive of walking and other modes of transportation. In the United States, the active living network is an example of a concerted effort to develop communities more friendly to walking and other physical activities. An example of such efforts to make urban development more pedestrian friendly is the pedestrian village. This is a compact, pedestrian-oriented neighborhood or town, with a mixed-use village center, that follows the tenets of New Pedestrianism. Shared-use lanes for pedestrians and those using bicycles, Segways, wheelchairs, and other small rolling conveyances that do not use internal combustion engines. Generally, these lanes are in front of the houses and businesses, and streets for motor vehicles are always at the rear. Some pedestrian villages might be nearly car-free with cars either hidden below the buildings or on the periphery of the village. Venice, Italy is essentially a pedestrian village with canals. The canal district in Venice, California, on the other hand, combines the front lane/rear street approach with canals and walkways, or just walkways. Walking is also considered to be a clear example of a sustainable mode of transport, especially suited for urban use and/or relatively shorter distances. Non-motorized transport modes such as walking, but also cycling, small-wheeled transport (skates, skateboards, push scooters and hand carts) or wheelchair travel are often key elements of successfully encouraging clean urban transport. A large variety of case studies and good practices (from European cities and some worldwide examples) that promote and stimulate walking as a means of transportation in cities can be found at Eltis, Europe's portal for local transport. The development of specific rights of way with appropriate infrastructure can promote increased participation and enjoyment of walking. Examples of types of investment include pedestrian malls, and foreshoreways such as oceanways and also river walks. The first purpose-built pedestrian street in Europe is the Lijnbaan in Rotterdam, opened in 1953. The first pedestrianised shopping centre in the United Kingdom was in Stevenage in 1959. A large number of European towns and cities have made part of their centres car-free since the early 1960s. These are often accompanied by car parks on the edge of the pedestrianised zone, and, in the larger cases, park and ride schemes. Central Copenhagen is one of the largest and oldest: It was converted from car traffic into pedestrian zone in 1962. In robotics Generally, the first successful walking robots had six legs. As microprocessor technology advanced, the number of legs could be reduced and there are now robots that can walk on two legs. One, for example, is ASIMO. Although there has been significant advances, robots still do not walk nearly as well as human beings as they often need to keep their knees bent permanently in order to improve stability. In 2009, Japanese roboticist Tomotaka Takahashi developed a robot that can jump three inches off the ground. The robot, named Ropid, is capable of getting up, walking, running, and jumping. Many other robots have also been able to walk over the years like a bipedal walking robot. Mathematical models Multiple mathematical models have been proposed to reproduce the kinematics observed in walking. 
These may be broadly broken down into four categories: rule-based models based on mechanical considerations and past literature, weakly coupled phase oscillators models, control-based models which guide simulations to maximize some property of locomotion, and phenomenological models which fit equations directly to the kinematics. Rule-based models The rule-based models integrate the past literature on motor control to generate a few simple rules which are presumed to be responsible for walking (e.g. “loading of the left leg triggers unloading of right leg”). Such models are generally most strictly based on the past literature and when they are based on a few rules can be easy to interpret. However, the influence of each rule can be hard to interpret when these models become more complex. Furthermore, the tuning of parameters is often done in an ad hoc way, revealing little intuition about why the system may be organized in this way. Finally, such models are typically based fully on sensory feedback, ignoring the effect of descending and rhythm generating neurons, which have been shown to be crucial in coordinating proper walking. Coupled oscillator models Dynamical system theory shows that any network with cyclical dynamics may be modeled as a set of weakly coupled phase oscillators, so another line of research has been exploring this view of walking. Each oscillator may model a muscle, joint angle, or even a whole leg, and is coupled to some set of other oscillators. Often, these oscillators are thought to represent the central pattern generators underlying walking. These models have rich theory behind them, allow for some extensions based on sensory feedback, and can be fit to kinematics. However, they need to be heavily constrained to fit to data and by themselves make no claims on which gaits allow the animal to move faster, more robustly, or more efficiently. Control based models Control-based models start with a simulation based on some description of the animal's anatomy and optimize control parameters to generate some behavior. These may be based on a musculoskeletal model, skeletal model, or even simply a ball and stick model. As these models generate locomotion by optimizing some metric, they can be used to explore the space of optimal locomotion behaviors under some assumptions. However, they typically do not generate plausible hypotheses on the neural coding underlying the behaviors and are typically sensitive to modeling assumptions. Statistical models Phenomenological models model the kinematics of walking directly by fitting a dynamical system, without postulating an underlying mechanism for how the kinematics are generated neurally. Such models can produce the most realistic kinematic trajectories and thus have been explored for simulating walking for computer-based animation. However, the lack of underlying mechanism makes it hard to apply these models to study the biomechanical or neural properties of walking. Animals Horses The walk is a four-beat gait that averages about . When walking, a horse's legs follow this sequence: left hind leg, left front leg, right hind leg, right front leg, in a regular 1-2-3-4 beat. At the walk, the horse will always have one foot raised and the other three feet on the ground, save for a brief moment when weight is being transferred from one foot to another. A horse moves its head and neck in a slight up and down motion that helps maintain balance. 
Ideally, the advancing rear hoof oversteps the spot where the previously advancing front hoof touched the ground. The more the rear hoof oversteps, the smoother and more comfortable the walk becomes. Individual horses and different breeds vary in the smoothness of their walk. However, a rider will almost always feel some degree of gentle side-to-side motion in the horse's hips as each hind leg reaches forward. The fastest "walks" with a four-beat footfall pattern are actually the lateral forms of ambling gaits such as the running walk, singlefoot, and similar rapid but smooth intermediate speed gaits. If a horse begins to speed up and lose a regular four-beat cadence to its gait, the horse is no longer walking but is beginning to either trot or pace. Elephants Elephants can move both forwards and backwards, but cannot trot, jump, or gallop. They use only two gaits when moving on land, the walk and a faster gait similar to running. In walking, the legs act as pendulums, with the hips and shoulders rising and falling while the foot is planted on the ground. With no "aerial phase", the fast gait does not meet all the criteria of running, although the elephant uses its legs much like other running animals, with the hips and shoulders falling and then rising while the feet are on the ground. Fast-moving elephants appear to 'run' with their front legs, but 'walk' with their hind legs and can reach a top speed of . At this speed, most other quadrupeds are well into a gallop, even accounting for leg length. Walking fish Walking fish (or ambulatory fish) are fish that are able to travel over land for extended periods of time. The term may also be used for some other cases of nonstandard fish locomotion, e.g., when describing fish "walking" along the sea floor, as the handfish or frogfish. Insects Insects must carefully coordinate their six legs during walking to produce gaits that allow for efficient navigation of their environment. Interleg coordination patterns have been studied in a variety of insects, including locusts (Schistocerca gregaria), cockroaches (Periplaneta americana), stick insects (Carausius morosus), and fruit flies (Drosophila melanogaster). Different walking gaits have been observed to exist on a speed dependent continuum of phase relationships. Even though their walking gaits are not discrete, they can often be broadly categorized as either a metachronal wave gait, tetrapod gait, or tripod gait. In a metachronal wave gait, only one leg leaves contact with the ground at a time. This gait starts at one of the hind legs, then propagates forward to the mid and front legs on the same side before starting at the hind leg of the contralateral side. The wave gait is often used at slow walking speeds and is the most stable, since five legs are always in contact with the ground at a time. In a tetrapod gait, two legs swing at a time while the other four legs remain in contact with the ground. There are multiple configurations for tetrapod gaits, but the legs that swing together must be on contralateral sides of the body. Tetrapod gaits are typically used at medium speeds and are also very stable. A walking gait is considered tripod if three of the legs enter the swing phase simultaneously, while the other three legs make contact with the ground. The middle leg of one side swings with the hind and front legs on the contralateral side. Tripod gaits are most commonly used at high speeds, though it can be used at lower speeds. 
The tripod gait is less stable than wave-like and tetrapod gaits, but it is theorized to be the most robust. This means that it is easier for an insect to recover from an offset in step timing when walking in a tripod gait. The ability to respond robustly is important for insects when traversing uneven terrain. See also Arm swing in human locomotion Duckwalk Footpath Gait training Hand walking Hot Girl Walk International charter for walking Kinhin List of longest walks New Urbanism Obesity and walking Pedestrian village Pedestrian zone Preferred walking speed Student transport Tobler's hiking function Walkability Walkathon Walking audit Walking bus Walking city Walking tour References External links European Local Transport Information Service (Eltis) provides case studies concerning walking as a local transport concept. Hiking Private transport Articles containing video clips
0.771765
0.99433
0.767389
Geodesic
In geometry, a geodesic is a curve representing in some sense the shortest path (arc) between two points in a surface, or more generally in a Riemannian manifold. The term also has meaning in any differentiable manifold with a connection. It is a generalization of the notion of a "straight line". The noun geodesic and the adjective geodetic come from geodesy, the science of measuring the size and shape of Earth, though many of the underlying principles can be applied to any ellipsoidal geometry. In the original sense, a geodesic was the shortest route between two points on the Earth's surface. For a spherical Earth, it is a segment of a great circle (see also great-circle distance). The term has since been generalized to more abstract mathematical spaces; for example, in graph theory, one might consider a geodesic between two vertices/nodes of a graph. In a Riemannian manifold or submanifold, geodesics are characterised by the property of having vanishing geodesic curvature. More generally, in the presence of an affine connection, a geodesic is defined to be a curve whose tangent vectors remain parallel if they are transported along it. Applying this to the Levi-Civita connection of a Riemannian metric recovers the previous notion. Geodesics are of particular importance in general relativity. Timelike geodesics in general relativity describe the motion of free falling test particles. Introduction A locally shortest path between two given points in a curved space, assumed to be a Riemannian manifold, can be defined by using the equation for the length of a curve (a function f from an open interval of R to the space), and then minimizing this length between the points using the calculus of variations. This has some minor technical problems because there is an infinite-dimensional space of different ways to parameterize the shortest path. It is simpler to restrict the set of curves to those that are parameterized "with constant speed" 1, meaning that the distance from f(s) to f(t) along the curve equals |s−t|. Equivalently, a different quantity may be used, termed the energy of the curve; minimizing the energy leads to the same equations for a geodesic (here "constant velocity" is a consequence of minimization). Intuitively, one can understand this second formulation by noting that an elastic band stretched between two points will contract its width, and in so doing will minimize its energy. The resulting shape of the band is a geodesic. It is possible that several different curves between two points minimize the distance, as is the case for two diametrically opposite points on a sphere. In such a case, any of these curves is a geodesic. A contiguous segment of a geodesic is again a geodesic. In general, geodesics are not the same as "shortest curves" between two points, though the two concepts are closely related. The difference is that geodesics are only locally the shortest distance between points, and are parameterized with "constant speed". Going the "long way round" on a great circle between two points on a sphere is a geodesic but not the shortest path between the points. The map from the unit interval on the real number line to itself gives the shortest path between 0 and 1, but is not a geodesic because the velocity of the corresponding motion of a point is not constant. Geodesics are commonly seen in the study of Riemannian geometry and more generally metric geometry. 
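As a concrete instance of the original geodesy sense just described — the shortest route between two points on a spherical Earth being an arc of a great circle — the sketch below computes that great-circle distance. It assumes a perfectly spherical Earth with a mean radius of 6371 km and uses the haversine form for numerical stability; both are conventional simplifications, not part of the text above.

```python
# Sketch: great-circle distance, i.e. the length of the geodesic arc
# between two points on a sphere (spherical-Earth assumption).
import math

EARTH_RADIUS_KM = 6371.0  # mean radius, spherical-Earth assumption

def great_circle_distance(lat1, lon1, lat2, lon2, radius=EARTH_RADIUS_KM):
    """Length of the shorter great-circle arc between two points given
    by latitude/longitude in degrees (haversine formula)."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * radius * math.asin(math.sqrt(a))

if __name__ == "__main__":
    # Approximate coordinates for London and New York.
    print(f"London–New York ≈ {great_circle_distance(51.51, -0.13, 40.71, -74.01):.0f} km")
```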
In general relativity, geodesics in spacetime describe the motion of point particles under the influence of gravity alone. In particular, the path taken by a falling rock, an orbiting satellite, or the shape of a planetary orbit are all geodesics in curved spacetime. More generally, the topic of sub-Riemannian geometry deals with the paths that objects may take when they are not free, and their movement is constrained in various ways. This article presents the mathematical formalism involved in defining, finding, and proving the existence of geodesics, in the case of Riemannian manifolds. The article Levi-Civita connection discusses the more general case of a pseudo-Riemannian manifold and geodesic (general relativity) discusses the special case of general relativity in greater detail. Examples The most familiar examples are the straight lines in Euclidean geometry. On a sphere, the images of geodesics are the great circles. The shortest path from point A to point B on a sphere is given by the shorter arc of the great circle passing through A and B. If A and B are antipodal points, then there are infinitely many shortest paths between them. Geodesics on an ellipsoid behave in a more complicated way than on a sphere; in particular, they are not closed in general. Triangles A geodesic triangle is formed by the geodesics joining each pair out of three points on a given surface. On the sphere, the geodesics are great circle arcs, forming a spherical triangle. Metric geometry In metric geometry, a geodesic is a curve which is everywhere locally a distance minimizer. More precisely, a curve γ : I → M from an interval I of the reals to the metric space M is a geodesic if there is a constant v ≥ 0 such that for any t ∈ I there is a neighborhood J of t in I such that for any t1, t2 ∈ J we have d(γ(t1), γ(t2)) = v |t1 − t2|. This generalizes the notion of geodesic for Riemannian manifolds. However, in metric geometry the geodesic considered is often equipped with natural parameterization, i.e. in the above identity v = 1 and d(γ(t1), γ(t2)) = |t1 − t2|. If the last equality is satisfied for all t1, t2 ∈ I, the geodesic is called a minimizing geodesic or shortest path. In general, a metric space may have no geodesics, except constant curves. At the other extreme, any two points in a length metric space are joined by a minimizing sequence of rectifiable paths, although this minimizing sequence need not converge to a geodesic. Riemannian geometry In a Riemannian manifold M with metric tensor g, the length L of a continuously differentiable curve γ : [a,b] → M is defined by $L(\gamma) = \int_a^b \sqrt{g_{\gamma(t)}(\dot\gamma(t),\dot\gamma(t))}\,dt.$ The distance d(p, q) between two points p and q of M is defined as the infimum of the length taken over all continuous, piecewise continuously differentiable curves γ : [a,b] → M such that γ(a) = p and γ(b) = q. In Riemannian geometry, all geodesics are locally distance-minimizing paths, but the converse is not true. In fact, only paths that are both locally distance minimizing and parameterized proportionately to arc-length are geodesics. Another equivalent way of defining geodesics on a Riemannian manifold is to define them as the minima of the following action or energy functional $E(\gamma) = \frac{1}{2}\int_a^b g_{\gamma(t)}(\dot\gamma(t),\dot\gamma(t))\,dt.$ All minima of E are also minima of L, but the set of minima of L is bigger, since paths that are minima of L can be arbitrarily re-parameterized (without changing their length), while minima of E cannot. For a piecewise $C^1$ curve (more generally, a $W^{1,2}$ curve), the Cauchy–Schwarz inequality gives $L(\gamma)^2 \le 2(b-a)\,E(\gamma)$, with equality if and only if $g(\dot\gamma,\dot\gamma)$ is equal to a constant a.e.; the path should be travelled at constant speed. 
It happens that minimizers of E also minimize L, because they turn out to be affinely parameterized, and the inequality is an equality. The usefulness of this approach is that the problem of seeking minimizers of E is a more robust variational problem. Indeed, E is a "convex function" of γ, so that within each isotopy class of "reasonable functions", one ought to expect existence, uniqueness, and regularity of minimizers. In contrast, "minimizers" of the functional L are generally not very regular, because arbitrary reparameterizations are allowed. The Euler–Lagrange equations of motion for the functional E are then given in local coordinates by $d^2x^\lambda/dt^2 + \Gamma^\lambda_{\mu\nu}\,(dx^\mu/dt)(dx^\nu/dt) = 0,$ where $\Gamma^\lambda_{\mu\nu}$ are the Christoffel symbols of the metric. This is the geodesic equation, discussed below. Calculus of variations Techniques of the classical calculus of variations can be applied to examine the energy functional E. The first variation of energy is defined in local coordinates by The critical points of the first variation are precisely the geodesics. The second variation is defined by In an appropriate sense, zeros of the second variation along a geodesic γ arise along Jacobi fields. Jacobi fields are thus regarded as variations through geodesics. By applying variational techniques from classical mechanics, one can also regard geodesics as Hamiltonian flows. They are solutions of the associated Hamilton equations, with (pseudo-)Riemannian metric taken as Hamiltonian. Affine geodesics A geodesic on a smooth manifold M with an affine connection ∇ is defined as a curve γ(t) such that parallel transport along the curve preserves the tangent vector to the curve, so $\nabla_{\dot\gamma}\dot\gamma = 0$ at each point along the curve, where $\dot\gamma$ is the derivative with respect to t. More precisely, in order to define the covariant derivative of $\dot\gamma$ it is necessary first to extend $\dot\gamma$ to a continuously differentiable vector field in an open set. However, the resulting value of $\nabla_{\dot\gamma}\dot\gamma$ is independent of the choice of extension. Using local coordinates on M, we can write the geodesic equation (using the summation convention) as $d^2x^\lambda/dt^2 + \Gamma^\lambda_{\mu\nu}\,(dx^\mu/dt)(dx^\nu/dt) = 0,$ where $x^\mu(t)$ are the coordinates of the curve γ(t) and $\Gamma^\lambda_{\mu\nu}$ are the Christoffel symbols of the connection ∇. This is an ordinary differential equation for the coordinates. It has a unique solution, given an initial position and an initial velocity. Therefore, from the point of view of classical mechanics, geodesics can be thought of as trajectories of free particles in a manifold. Indeed, the equation $\nabla_{\dot\gamma}\dot\gamma = 0$ means that the acceleration vector of the curve has no components in the direction of the surface (and therefore it is perpendicular to the tangent plane of the surface at each point of the curve). So, the motion is completely determined by the bending of the surface. This is also the idea of general relativity where particles move on geodesics and the bending is caused by gravity. Existence and uniqueness The local existence and uniqueness theorem for geodesics states that geodesics on a smooth manifold with an affine connection exist, and are unique. More precisely: For any point p in M and for any vector V in TpM (the tangent space to M at p) there exists a unique geodesic γ : I → M such that γ(0) = p and $\dot\gamma(0) = V$, where I is a maximal open interval in R containing 0. The proof of this theorem follows from the theory of ordinary differential equations, by noticing that the geodesic equation is a second-order ODE. Existence and uniqueness then follow from the Picard–Lindelöf theorem for the solutions of ODEs with prescribed initial conditions. γ depends smoothly on both p and V. 
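Since the geodesic equation is a second-order ODE determined by an initial position and velocity, it can be integrated numerically. The sketch below does this for the unit sphere in coordinates (θ, φ), whose only nonzero Christoffel symbols are Γ^θ_{φφ} = −sin θ cos θ and Γ^φ_{θφ} = Γ^φ_{φθ} = cot θ; a hand-rolled fixed-step RK4 integrator stands in for a proper ODE solver, so this is a sketch rather than production code.

```python
# Sketch: integrating the geodesic equation
#   d^2 x^k/dt^2 + Γ^k_ij (dx^i/dt)(dx^j/dt) = 0
# on the unit sphere in coordinates (theta, phi), using fixed-step RK4.
import math

def rhs(state):
    """state = (theta, phi, dtheta, dphi); return its time derivative."""
    theta, phi, dtheta, dphi = state
    ddtheta = math.sin(theta) * math.cos(theta) * dphi ** 2
    ddphi = -2.0 * (math.cos(theta) / math.sin(theta)) * dtheta * dphi
    return (dtheta, dphi, ddtheta, ddphi)

def rk4_step(state, h):
    def shift(s, k, f):
        return tuple(si + f * ki for si, ki in zip(s, k))
    k1 = rhs(state)
    k2 = rhs(shift(state, k1, h / 2))
    k3 = rhs(shift(state, k2, h / 2))
    k4 = rhs(shift(state, k3, h))
    return tuple(s + h / 6 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

if __name__ == "__main__":
    # Start on the equator heading due east: the geodesic should stay on the equator.
    state = (math.pi / 2, 0.0, 0.0, 1.0)  # theta, phi, theta', phi'
    h = 0.01
    for _ in range(int(math.pi / h)):     # integrate half-way around
        state = rk4_step(state, h)
    print(f"theta = {state[0]:.4f} (expect {math.pi / 2:.4f}), phi = {state[1]:.4f}")
```

Starting instead with a nonzero θ′ traces a tilted great circle, in line with the uniqueness statement above: one geodesic per initial position and velocity.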
In general, I may not be all of R as for example for an open disc in R2. Any extends to all of if and only if is geodesically complete. Geodesic flow Geodesic flow is a local R-action on the tangent bundle TM of a manifold M defined in the following way where t ∈ R, V ∈ TM and denotes the geodesic with initial data . Thus, is the exponential map of the vector tV. A closed orbit of the geodesic flow corresponds to a closed geodesic on M. On a (pseudo-)Riemannian manifold, the geodesic flow is identified with a Hamiltonian flow on the cotangent bundle. The Hamiltonian is then given by the inverse of the (pseudo-)Riemannian metric, evaluated against the canonical one-form. In particular the flow preserves the (pseudo-)Riemannian metric , i.e. In particular, when V is a unit vector, remains unit speed throughout, so the geodesic flow is tangent to the unit tangent bundle. Liouville's theorem implies invariance of a kinematic measure on the unit tangent bundle. Geodesic spray The geodesic flow defines a family of curves in the tangent bundle. The derivatives of these curves define a vector field on the total space of the tangent bundle, known as the geodesic spray. More precisely, an affine connection gives rise to a splitting of the double tangent bundle TTM into horizontal and vertical bundles: The geodesic spray is the unique horizontal vector field W satisfying at each point v ∈ TM; here ∗ : TTM → TM denotes the pushforward (differential) along the projection  : TM → M associated to the tangent bundle. More generally, the same construction allows one to construct a vector field for any Ehresmann connection on the tangent bundle. For the resulting vector field to be a spray (on the deleted tangent bundle TM \ {0}) it is enough that the connection be equivariant under positive rescalings: it need not be linear. That is, (cf. Ehresmann connection#Vector bundles and covariant derivatives) it is enough that the horizontal distribution satisfy for every X ∈ TM \ {0} and λ > 0. Here d(Sλ) is the pushforward along the scalar homothety A particular case of a non-linear connection arising in this manner is that associated to a Finsler manifold. Affine and projective geodesics Equation is invariant under affine reparameterizations; that is, parameterizations of the form where a and b are constant real numbers. Thus apart from specifying a certain class of embedded curves, the geodesic equation also determines a preferred class of parameterizations on each of the curves. Accordingly, solutions of are called geodesics with affine parameter. An affine connection is determined by its family of affinely parameterized geodesics, up to torsion . The torsion itself does not, in fact, affect the family of geodesics, since the geodesic equation depends only on the symmetric part of the connection. More precisely, if are two connections such that the difference tensor is skew-symmetric, then and have the same geodesics, with the same affine parameterizations. Furthermore, there is a unique connection having the same geodesics as , but with vanishing torsion. Geodesics without a particular parameterization are described by a projective connection. Computational methods Efficient solvers for the minimal geodesic problem on surfaces have been proposed by Mitchell, Kimmel, Crane, and others. Ribbon test A ribbon "test" is a way of finding a geodesic on a physical surface. 
The idea is to fit a bit of paper around a straight line (a ribbon) onto a curved surface as closely as possible without stretching or squishing the ribbon (without changing its internal geometry). For example, when a ribbon is wound as a ring around a cone, the ribbon would not lie on the cone's surface but stick out, so that circle is not a geodesic on the cone. If the ribbon is adjusted so that all its parts touch the cone's surface, it would give an approximation to a geodesic. Mathematically the ribbon test can be formulated as finding a mapping of a neighborhood of a line in a plane into a surface so that the mapping "doesn't change the distances around by much"; that is, at the distance from we have where and are metrics on and . Applications Geodesics serve as the basis to calculate: geodesic airframes; see geodesic airframe or geodetic airframe geodesic structures – for example geodesic domes horizontal distances on or near Earth; see Earth geodesics mapping images on surfaces, for rendering; see UV mapping robot motion planning (e.g., when painting car parts); see Shortest path problem geodesic shortest path (GSP) correction over Poisson surface reconstruction (e.g. in digital dentistry); without GSP reconstruction often results in self-intersections within the surface See also Differential geometry of surfaces Geodesic circle Notes References Further reading . See chapter 2. . See section 2.7. . See section 1.4. . . See section 87. . Note especially pages 7 and 10. . . See chapter 3''. External links Geodesics Revisited — Introduction to geodesics including two ways of derivation of the equation of geodesic with applications in geometry (geodesic on a sphere and on a torus), mechanics (brachistochrone) and optics (light beam in inhomogeneous medium). Totally geodesic submanifold at the Manifold Atlas Differential geometry
0.76949
0.997265
0.767386
Lorentz group
In physics and mathematics, the Lorentz group is the group of all Lorentz transformations of Minkowski spacetime, the classical and quantum setting for all (non-gravitational) physical phenomena. The Lorentz group is named for the Dutch physicist Hendrik Lorentz. For example, the following laws, equations, and theories respect Lorentz symmetry:
The kinematical laws of special relativity
Maxwell's field equations in the theory of electromagnetism
The Dirac equation in the theory of the electron
The Standard Model of particle physics
The Lorentz group expresses the fundamental symmetry of space and time of all known fundamental laws of nature. In small enough regions of spacetime where gravitational variances are negligible, physical laws are Lorentz invariant in the same manner as special relativity. Basic properties The Lorentz group is a subgroup of the Poincaré group—the group of all isometries of Minkowski spacetime. Lorentz transformations are, precisely, isometries that leave the origin fixed. Thus, the Lorentz group is the isotropy subgroup with respect to the origin of the isometry group of Minkowski spacetime. For this reason, the Lorentz group is sometimes called the homogeneous Lorentz group while the Poincaré group is sometimes called the inhomogeneous Lorentz group. Lorentz transformations are examples of linear transformations; general isometries of Minkowski spacetime are affine transformations. Physics definition Assume two inertial reference frames (t, x, y, z) and (t′, x′, y′, z′), and two points P1, P2; the Lorentz group is the set of all the transformations between the two reference frames that preserve the speed of light propagating between the two points: $c^2(t_1 - t_2)^2 - (x_1 - x_2)^2 - (y_1 - y_2)^2 - (z_1 - z_2)^2 = c^2(t'_1 - t'_2)^2 - (x'_1 - x'_2)^2 - (y'_1 - y'_2)^2 - (z'_1 - z'_2)^2.$ In matrix form these are all the linear transformations Λ such that $\Lambda^{\mathsf T}\eta\Lambda = \eta$, where η = diag(1, −1, −1, −1) is the Minkowski metric. These are then called Lorentz transformations. Mathematical definition Mathematically, the Lorentz group may be described as the indefinite orthogonal group O(1, 3), the matrix Lie group that preserves the quadratic form $(t, x, y, z) \mapsto t^2 - x^2 - y^2 - z^2$ on R^4 (the vector space equipped with this quadratic form is sometimes written R^{1,3}). This quadratic form is, when put on matrix form (see Classical orthogonal group), interpreted in physics as the metric tensor of Minkowski spacetime. Mathematical properties The Lorentz group is a six-dimensional noncompact non-abelian real Lie group that is not connected. The four connected components are not simply connected. The identity component (i.e., the component containing the identity element) of the Lorentz group is itself a group, and is often called the restricted Lorentz group, and is denoted SO+(1, 3). The restricted Lorentz group consists of those Lorentz transformations that preserve both the orientation of space and the direction of time. Its fundamental group has order 2, and its universal cover, the indefinite spin group Spin(1, 3), is isomorphic to both the special linear group SL(2, C) and to the symplectic group Sp(2, C). These isomorphisms allow the Lorentz group to act on a large number of mathematical structures important to physics, most notably spinors. Thus, in relativistic quantum mechanics and in quantum field theory, it is very common to call SL(2, C) the Lorentz group, with the understanding that SO+(1, 3) is a specific representation (the vector representation) of it. A recurrent representation of the action of the Lorentz group on Minkowski space uses biquaternions, which form a composition algebra. The isometry property of Lorentz transformations holds according to the composition property of the biquaternion norm. Another property of the Lorentz group is conformality or preservation of angles. 
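The defining condition recalled above, ΛᵀηΛ = η with η = diag(1, −1, −1, −1), is easy to check numerically for any candidate transformation. The sketch below does so for a boost along x with an arbitrarily chosen rapidity; units with c = 1 and the (+,−,−,−) signature are assumed.

```python
# Sketch: verify that a boost along x satisfies Λ^T η Λ = η,
# the defining condition for membership in the Lorentz group O(1, 3).
# Conventions assumed: signature (+,-,-,-), units with c = 1.
import math

ETA = [[1, 0, 0, 0], [0, -1, 0, 0], [0, 0, -1, 0], [0, 0, 0, -1]]

def boost_x(rapidity):
    """Boost along the x-axis with the given rapidity."""
    ch, sh = math.cosh(rapidity), math.sinh(rapidity)
    return [[ch, -sh, 0, 0],
            [-sh, ch, 0, 0],
            [0, 0, 1, 0],
            [0, 0, 0, 1]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def transpose(a):
    return [list(row) for row in zip(*a)]

if __name__ == "__main__":
    L = boost_x(0.7)
    check = matmul(transpose(L), matmul(ETA, L))
    ok = all(abs(check[i][j] - ETA[i][j]) < 1e-12 for i in range(4) for j in range(4))
    print("Λ^T η Λ == η :", ok)
```

The same check passes for rotations and for products of boosts and rotations, and also for parity and time reversal; distinguishing the four connected components requires looking additionally at det Λ and the sign of the time-time component, as described below.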
Lorentz boosts act by hyperbolic rotation of a spacetime plane, and such "rotations" preserve hyperbolic angle, the measure of rapidity used in relativity. Therefore the Lorentz group is a subgroup of the conformal group of spacetime. Note that this article refers to as the "Lorentz group", as the "proper Lorentz group", and as the "restricted Lorentz group". Many authors (especially in physics) use the name "Lorentz group" for (or sometimes even ) rather than . When reading such authors it is important to keep clear exactly which they are referring to. Connected components Because it is a Lie group, the Lorentz group is a group and also has a topological description as a smooth manifold. As a manifold, it has four connected components. Intuitively, this means that it consists of four topologically separated pieces. The four connected components can be categorized by two transformation properties its elements have: Some elements are reversed under time-inverting Lorentz transformations, for example, a future-pointing timelike vector would be inverted to a past-pointing vector Some elements have orientation reversed by improper Lorentz transformations, for example, certain vierbein (tetrads) Lorentz transformations that preserve the direction of time are called . The subgroup of orthochronous transformations is often denoted . Those that preserve orientation are called proper, and as linear transformations they have determinant . (The improper Lorentz transformations have determinant .) The subgroup of proper Lorentz transformations is denoted . The subgroup of all Lorentz transformations preserving both orientation and direction of time is called the proper, orthochronous Lorentz group or restricted Lorentz group, and is denoted by . The set of the four connected components can be given a group structure as the quotient group , which is isomorphic to the Klein four-group. Every element in can be written as the semidirect product of a proper, orthochronous transformation and an element of the discrete group where P and T are the parity and time reversal operators: . Thus an arbitrary Lorentz transformation can be specified as a proper, orthochronous Lorentz transformation along with a further two bits of information, which pick out one of the four connected components. This pattern is typical of finite-dimensional Lie groups. Restricted Lorentz group The restricted Lorentz group is the identity component of the Lorentz group, which means that it consists of all Lorentz transformations that can be connected to the identity by a continuous curve lying in the group. The restricted Lorentz group is a connected normal subgroup of the full Lorentz group with the same dimension, in this case with dimension six. The restricted Lorentz group is generated by ordinary spatial rotations and Lorentz boosts (which are rotations in a hyperbolic space that includes a time-like direction). Since every proper, orthochronous Lorentz transformation can be written as a product of a rotation (specified by 3 real parameters) and a boost (also specified by 3 real parameters), it takes 6 real parameters to specify an arbitrary proper orthochronous Lorentz transformation. This is one way to understand why the restricted Lorentz group is six-dimensional. (See also the Lie algebra of the Lorentz group.) The set of all rotations forms a Lie subgroup isomorphic to the ordinary rotation group . 
The set of all boosts, however, does not form a subgroup, since composing two boosts does not, in general, result in another boost. (Rather, a pair of non-colinear boosts is equivalent to a boost and a rotation, and this relates to Thomas rotation.) A boost in some direction, or a rotation about some axis, generates a one-parameter subgroup. Surfaces of transitivity If a group acts on a space , then a surface is a surface of transitivity if is invariant under (i.e., ) and for any two points there is a such that . By definition of the Lorentz group, it preserves the quadratic form The surfaces of transitivity of the orthochronous Lorentz group , acting on flat spacetime are the following: is the upper branch of a hyperboloid of two sheets. Points on this sheet are separated from the origin by a future time-like vector. is the lower branch of this hyperboloid. Points on this sheet are the past time-like vectors. is the upper branch of the light cone, the future light cone. is the lower branch of the light cone, the past light cone. is a hyperboloid of one sheet. Points on this sheet are space-like separated from the origin. The origin . These surfaces are , so the images are not faithful, but they are faithful for the corresponding facts about . For the full Lorentz group, the surfaces of transitivity are only four since the transformation takes an upper branch of a hyperboloid (cone) to a lower one and vice versa. As symmetric spaces An equivalent way to formulate the above surfaces of transitivity is as a symmetric space in the sense of Lie theory. For example, the upper sheet of the hyperboloid can be written as the quotient space , due to the orbit-stabilizer theorem. Furthermore, this upper sheet also provides a model for three-dimensional hyperbolic space. Representations of the Lorentz group These observations constitute a good starting point for finding all infinite-dimensional unitary representations of the Lorentz group, in fact, of the Poincaré group, using the method of induced representations. One begins with a "standard vector", one for each surface of transitivity, and then ask which subgroup preserves these vectors. These subgroups are called little groups by physicists. The problem is then essentially reduced to the easier problem of finding representations of the little groups. For example, a standard vector in one of the hyperbolas of two sheets could be suitably chosen as . For each , the vector pierces exactly one sheet. In this case the little group is , the rotation group, all of whose representations are known. The precise infinite-dimensional unitary representation under which a particle transforms is part of its classification. Not all representations can correspond to physical particles (as far as is known). Standard vectors on the one-sheeted hyperbolas would correspond to tachyons. Particles on the light cone are photons, and more hypothetically, gravitons. The "particle" corresponding to the origin is the vacuum. Homomorphisms and isomorphisms Several other groups are either homomorphic or isomorphic to the restricted Lorentz group . These homomorphisms play a key role in explaining various phenomena in physics. The special linear group is a double covering of the restricted Lorentz group. This relationship is widely used to express the Lorentz invariance of the Dirac equation and the covariance of spinors. 
In other words, the (restricted) Lorentz group is isomorphic to The symplectic group is isomorphic to ; it is used to construct Weyl spinors, as well as to explain how spinors can have a mass. The spin group is isomorphic to ; it is used to explain spin and spinors in terms of the Clifford algebra, thus making it clear how to generalize the Lorentz group to general settings in Riemannian geometry, including theories of supergravity and string theory. The restricted Lorentz group is isomorphic to the projective special linear group which is, in turn, isomorphic to the Möbius group, the symmetry group of conformal geometry on the Riemann sphere. This relationship is central to the classification of the subgroups of the Lorentz group according to an earlier classification scheme developed for the Möbius group. Weyl representation The Weyl representation or spinor map is a pair of surjective homomorphisms from to . They form a matched pair under parity transformations, corresponding to left and right chiral spinors. One may define an action of on Minkowski spacetime by writing a point of spacetime as a two-by-two Hermitian matrix in the form in terms of Pauli matrices. This presentation, the Weyl presentation, satisfies Therefore, one has identified the space of Hermitian matrices (which is four-dimensional, as a real vector space) with Minkowski spacetime, in such a way that the determinant of a Hermitian matrix is the squared length of the corresponding vector in Minkowski spacetime. An element acts on the space of Hermitian matrices via where is the Hermitian transpose of . This action preserves the determinant and so acts on Minkowski spacetime by (linear) isometries. The parity-inverted form of the above is which transforms as That this is the correct transformation follows by noting that remains invariant under the above pair of transformations. These maps are surjective, and kernel of either map is the two element subgroup . By the first isomorphism theorem, the quotient group is isomorphic to . The parity map swaps these two coverings. It corresponds to Hermitian conjugation being an automorphism of . These two distinct coverings corresponds to the two distinct chiral actions of the Lorentz group on spinors. The non-overlined form corresponds to right-handed spinors transforming as , while the overline form corresponds to left-handed spinors transforming as . It is important to observe that this pair of coverings does not survive quantization; when quantized, this leads to the peculiar phenomenon of the chiral anomaly. The classical (i.e., non-quantized) symmetries of the Lorentz group are broken by quantization; this is the content of the Atiyah–Singer index theorem. Notational conventions In physics, it is conventional to denote a Lorentz transformation as , thus showing the matrix with spacetime indexes . A four-vector can be created from the Pauli matrices in two different ways: as and as . The two forms are related by a parity transformation. Note that . Given a Lorentz transformation , the double-covering of the orthochronous Lorentz group by given above can be written as Dropping the this takes the form The parity conjugate form is Proof That the above is the correct form for indexed notation is not immediately obvious, partly because, when working in indexed notation, it is quite easy to accidentally confuse a Lorentz transform with its inverse, or its transpose. This confusion arises due to the identity being difficult to recognize when written in indexed form. 
Lorentz transforms are not tensors under Lorentz transformations! Thus a direct proof of this identity is useful, for establishing its correctness. It can be demonstrated by starting with the identity where so that the above are just the usual Pauli matrices, and is the matrix transpose, and is complex conjugation. The matrix is Written as the four-vector, the relationship is This transforms as Taking one more transpose, one gets Symplectic group The symplectic group is isomorphic to . This isomorphism is constructed so as to preserve a symplectic bilinear form on , that is, to leave the form invariant under Lorentz transformations. This may be articulated as follows. The symplectic group is defined as where Other common notations are for this element; sometimes is used, but this invites confusion with the idea of almost complex structures, which are not the same, as they transform differently. Given a pair of Weyl spinors (two-component spinors) the invariant bilinear form is conventionally written as This form is invariant under the Lorentz group, so that for one has This defines a kind of "scalar product" of spinors, and is commonly used to defined a Lorentz-invariant mass term in Lagrangians. There are several notable properties to be called out that are important to physics. One is that and so The defining relation can be written as which closely resembles the defining relation for the Lorentz group where is the metric tensor for Minkowski space and of course, as before. Covering groups Since is simply connected, it is the universal covering group of the restricted Lorentz group . By restriction, there is a homomorphism . Here, the special unitary group SU(2), which is isomorphic to the group of unit norm quaternions, is also simply connected, so it is the covering group of the rotation group . Each of these covering maps are twofold covers in the sense that precisely two elements of the covering group map to each element of the quotient. One often says that the restricted Lorentz group and the rotation group are doubly connected. This means that the fundamental group of the each group is isomorphic to the two-element cyclic group . Twofold coverings are characteristic of spin groups. Indeed, in addition to the double coverings we have the double coverings These spinorial double coverings are constructed from Clifford algebras. Topology The left and right groups in the double covering are deformation retracts of the left and right groups, respectively, in the double covering . But the homogeneous space is homeomorphic to hyperbolic 3-space , so we have exhibited the restricted Lorentz group as a principal fiber bundle with fibers and base . Since the latter is homeomorphic to , while is homeomorphic to three-dimensional real projective space , we see that the restricted Lorentz group is locally homeomorphic to the product of with . Since the base space is contractible, this can be extended to a global homeomorphism. 
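The Weyl (spinor) map described above can be illustrated numerically. The sketch below is our own and assumes the standard Pauli matrices and the encoding X = t·I + x·σx + y·σy + z·σz; it checks that det X reproduces the Minkowski interval, that the action X ↦ A X A† with det A = 1 preserves it, that A and −A act identically (the two-element kernel of the covering map), and that diagonal elliptic and hyperbolic elements project to a rotation and a boost with the expected angle doubling.

```python
import numpy as np

# Pauli matrices and the 2x2 identity (assumed conventions).
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
s0 = np.eye(2, dtype=complex)
sigma = [s0, sx, sy, sz]

def to_hermitian(t, x, y, z):
    """Encode a spacetime point as the Hermitian matrix X = t*s0 + x*sx + y*sy + z*sz."""
    return t * s0 + x * sx + y * sy + z * sz

def spinor_map(A):
    """4x4 matrix L with A sigma_nu A^dagger = sum_mu L[mu, nu] sigma_mu,
    extracted via tr(sigma_mu sigma_nu) = 2 delta_mu_nu."""
    return np.real(np.array([[0.5 * np.trace(sigma[m] @ A @ sigma[n] @ A.conj().T)
                              for n in range(4)] for m in range(4)]))

# 1) det X is the Minkowski interval t^2 - x^2 - y^2 - z^2.
t, x, y, z = 2.0, 0.3, -1.1, 0.5
X = to_hermitian(t, x, y, z)
assert np.isclose(np.linalg.det(X).real, t**2 - x**2 - y**2 - z**2)

# 2) The action X -> A X A^dagger (det A = 1) preserves the interval,
#    and A, -A act identically: the kernel of the covering map is {I, -I}.
rng = np.random.default_rng(0)
A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
A = A / np.sqrt(np.linalg.det(A))          # normalize to det A = 1
assert np.isclose(np.linalg.det(A @ X @ A.conj().T), np.linalg.det(X))
assert np.allclose((-A) @ X @ (-A).conj().T, A @ X @ A.conj().T)

# 3) Diagonal elements: phases e^{+-i*theta/2} give a rotation by theta about z
#    (angle doubling), factors e^{+-eta/2} give a boost of rapidity eta along z.
theta, eta = 0.8, 1.3
L_rot = spinor_map(np.diag([np.exp(1j * theta / 2), np.exp(-1j * theta / 2)]))
L_boost = spinor_map(np.diag([np.exp(eta / 2), np.exp(-eta / 2)]))
assert np.isclose(L_rot[1, 1], np.cos(theta))
assert np.isclose(L_boost[0, 0], np.cosh(eta)) and np.isclose(L_boost[0, 3], np.sinh(eta))
print("spinor map verified on sample elements")
```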
Conjugacy classes Because the restricted Lorentz group is isomorphic to the Möbius group , its conjugacy classes also fall into five classes: Elliptic transformations Hyperbolic transformations Loxodromic transformations Parabolic transformations The trivial identity transformation In the article on Möbius transformations, it is explained how this classification arises by considering the fixed points of Möbius transformations in their action on the Riemann sphere, which corresponds here to null eigenspaces of restricted Lorentz transformations in their action on Minkowski spacetime. An example of each type is given in the subsections below, along with the effect of the one-parameter subgroup it generates (e.g., on the appearance of the night sky). The Möbius transformations are the conformal transformations of the Riemann sphere (or celestial sphere). Then conjugating with an arbitrary element of obtains the following examples of arbitrary elliptic, hyperbolic, loxodromic, and parabolic (restricted) Lorentz transformations, respectively. The effect on the flow lines of the corresponding one-parameter subgroups is to transform the pattern seen in the examples by some conformal transformation. For example, an elliptic Lorentz transformation can have any two distinct fixed points on the celestial sphere, but points still flow along circular arcs from one fixed point toward the other. The other cases are similar. Elliptic An elliptic element of is and has fixed points = 0, ∞. Writing the action as and collecting terms, the spinor map converts this to the (restricted) Lorentz transformation This transformation then represents a rotation about the axis, exp(). The one-parameter subgroup it generates is obtained by taking to be a real variable, the rotation angle, instead of a constant. The corresponding continuous transformations of the celestial sphere (except for the identity) all share the same two fixed points, the North and South poles. The transformations move all other points around latitude circles so that this group yields a continuous counter-clockwise rotation about the axis as increases. The angle doubling evident in the spinor map is a characteristic feature of spinorial double coverings. Hyperbolic A hyperbolic element of is and has fixed points = 0, ∞. Under stereographic projection from the Riemann sphere to the Euclidean plane, the effect of this Möbius transformation is a dilation from the origin. The spinor map converts this to the Lorentz transformation This transformation represents a boost along the axis with rapidity . The one-parameter subgroup it generates is obtained by taking to be a real variable, instead of a constant. The corresponding continuous transformations of the celestial sphere (except for the identity) all share the same fixed points (the North and South poles), and they move all other points along longitudes away from the South pole and toward the North pole. Loxodromic A loxodromic element of is and has fixed points = 0, ∞. The spinor map converts this to the Lorentz transformation The one-parameter subgroup this generates is obtained by replacing with any real multiple of this complex constant. (If , vary independently, then a two-dimensional abelian subgroup is obtained, consisting of simultaneous rotations about the axis and boosts along the -axis; in contrast, the one-dimensional subgroup discussed here consists of those elements of this two-dimensional subgroup such that the rapidity of the boost and angle of the rotation have a fixed ratio.) 
The corresponding continuous transformations of the celestial sphere (excepting the identity) all share the same two fixed points (the North and South poles). They move all other points away from the South pole and toward the North pole (or vice versa), along a family of curves called loxodromes. Each loxodrome spirals infinitely often around each pole. Parabolic A parabolic element of is and has the single fixed point = ∞ on the Riemann sphere. Under stereographic projection, it appears as an ordinary translation along the real axis. The spinor map converts this to the matrix (representing a Lorentz transformation) This generates a two-parameter abelian subgroup, which is obtained by considering a complex variable rather than a constant. The corresponding continuous transformations of the celestial sphere (except for the identity transformation) move points along a family of circles that are all tangent at the North pole to a certain great circle. All points other than the North pole itself move along these circles. Parabolic Lorentz transformations are often called null rotations. Since these are likely to be the least familiar of the four types of nonidentity Lorentz transformations (elliptic, hyperbolic, loxodromic, parabolic), it is illustrated here how to determine the effect of an example of a parabolic Lorentz transformation on Minkowski spacetime. The matrix given above yields the transformation Now, without loss of generality, pick . Differentiating this transformation with respect to the now real group parameter and evaluating at produces the corresponding vector field (first order linear partial differential operator), Apply this to a function , and demand that it stays invariant; i.e., it is annihilated by this transformation. The solution of the resulting first order linear partial differential equation can be expressed in the form where is an arbitrary smooth function. The arguments of give three rational invariants describing how points (events) move under this parabolic transformation, as they themselves do not move, Choosing real values for the constants on the right hand sides yields three conditions, and thus specifies a curve in Minkowski spacetime. This curve is an orbit of the transformation. The form of the rational invariants shows that these flowlines (orbits) have a simple description: suppressing the inessential coordinate , each orbit is the intersection of a null plane, , with a hyperboloid, . The case 3 = 0 has the hyperboloid degenerate to a light cone with the orbits becoming parabolas lying in corresponding null planes. A particular null line lying on the light cone is left invariant; this corresponds to the unique (double) fixed point on the Riemann sphere mentioned above. The other null lines through the origin are "swung around the cone" by the transformation. Following the motion of one such null line as increases corresponds to following the motion of a point along one of the circular flow lines on the celestial sphere, as described above. A choice instead, produces similar orbits, now with the roles of and interchanged. Parabolic transformations lead to the gauge symmetry of massless particles (such as photons) with helicity || ≥ 1. In the above explicit example, a massless particle moving in the direction, so with 4-momentum , is not affected at all by the -boost and -rotation combination defined below, in the "little group" of its motion. 
This is evident from the explicit transformation law discussed: like any light-like vector, P itself is now invariant; i.e., all traces or effects of have disappeared. , in the special case discussed. (The other similar generator, as well as it and comprise altogether the little group of the light-like vector, isomorphic to .) Appearance of the night sky This isomorphism has the consequence that Möbius transformations of the Riemann sphere represent the way that Lorentz transformations change the appearance of the night sky, as seen by an observer who is maneuvering at relativistic velocities relative to the "fixed stars". Suppose the "fixed stars" live in Minkowski spacetime and are modeled by points on the celestial sphere. Then a given point on the celestial sphere can be associated with , a complex number that corresponds to the point on the Riemann sphere, and can be identified with a null vector (a light-like vector) in Minkowski space or, in the Weyl representation (the spinor map), the Hermitian matrix The set of real scalar multiples of this null vector, called a null line through the origin, represents a line of sight from an observer at a particular place and time (an arbitrary event we can identify with the origin of Minkowski spacetime) to various distant objects, such as stars. Then the points of the celestial sphere (equivalently, lines of sight) are identified with certain Hermitian matrices. Projective geometry and different views of the 2-sphere This picture emerges cleanly in the language of projective geometry. The (restricted) Lorentz group acts on the projective celestial sphere. This is the space of non-zero null vectors with under the given quotient for projective spaces: if for . This is referred to as the celestial sphere as this allows us to rescale the time coordinate to 1 after acting using a Lorentz transformation, ensuring the space-like part sits on the unit sphere. From the Möbius side, acts on complex projective space , which can be shown to be diffeomorphic to the 2-sphere – this is sometimes referred to as the Riemann sphere. The quotient on projective space leads to a quotient on the group . Finally, these two can be linked together by using the complex projective vector to construct a null-vector. If is a projective vector, it can be tensored with its Hermitian conjugate to produce a Hermitian matrix. From elsewhere in this article we know this space of matrices can be viewed as 4-vectors. The space of matrices coming from turning each projective vector in the Riemann sphere into a matrix is known as the Bloch sphere. Lie algebra As with any Lie group, a useful way to study many aspects of the Lorentz group is via its Lie algebra. Since the Lorentz group is a matrix Lie group, its corresponding Lie algebra is a matrix Lie algebra, which may be computed as . If is the diagonal matrix with diagonal entries , then the Lie algebra consists of matrices such that . Explicitly, consists of matrices of the form , where are arbitrary real numbers. This Lie algebra is six dimensional. The subalgebra of consisting of elements in which , , and equal to zero is isomorphic to . The full Lorentz group , the proper Lorentz group and the proper orthochronous Lorentz group (the component connected to the identity) all have the same Lie algebra, which is typically denoted . 
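A concrete sketch of this Lie algebra (our own generators and normalization; real convention with no factors of i, signature (+, −, −, −), coordinates (t, x, y, z), and SciPy assumed available for the matrix exponential): it builds rotation and boost generators, checks the defining condition Xᵀη + ηX = 0, verifies sample commutation relations of the kind listed in the next section, and exponentiates one generator into a one-parameter subgroup.

```python
import numpy as np
from scipy.linalg import expm   # assumes SciPy is available

eta = np.diag([1.0, -1.0, -1.0, -1.0])   # signature (+, -, -, -)

def J(i):
    """Generator of rotations about spatial axis i = 1, 2, 3 (real convention)."""
    M = np.zeros((4, 4))
    j, k = (i % 3) + 1, ((i + 1) % 3) + 1   # the two spatial axes rotated into each other
    M[j, k], M[k, j] = -1.0, 1.0
    return M

def K(i):
    """Generator of boosts along spatial axis i."""
    M = np.zeros((4, 4))
    M[0, i] = M[i, 0] = 1.0
    return M

def comm(A, B):
    return A @ B - B @ A

# Every generator X satisfies X^T eta + eta X = 0, the defining condition of so(1, 3).
for X in [J(1), J(2), J(3), K(1), K(2), K(3)]:
    assert np.allclose(X.T @ eta + eta @ X, 0.0)

# Sample commutation relations in this real convention:
assert np.allclose(comm(J(1), J(2)), J(3))    # rotations close among themselves
assert np.allclose(comm(J(1), K(2)), K(3))    # boosts rotate into one another
assert np.allclose(comm(K(1), K(2)), -J(3))   # the commutator of two boosts is a rotation

# Exponentiating a generator gives a one-parameter subgroup, here a boost of rapidity 0.9.
L = expm(0.9 * K(1))
assert np.isclose(L[0, 0], np.cosh(0.9)) and np.isclose(L[0, 1], np.sinh(0.9))
```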
Since the identity component of the Lorentz group is isomorphic to a finite quotient of (see the section above on the connection of the Lorentz group to the Möbius group), the Lie algebra of the Lorentz group is isomorphic to the Lie algebra . As a complex Lie algebra is three dimensional, but is six dimensional when viewed as a real Lie algebra. Commutation relations of the Lorentz algebra The standard basis matrices can be indexed as where take values in . These arise from taking only one of to be one, and others zero, in turn. The components can be written as . The commutation relations are There are different possible choices of convention in use. In physics, it is common to include a factor of with the basis elements, which gives a factor of in the commutation relations. Then generate boosts and generate rotations. The structure constants for the Lorentz algebra can be read off from the commutation relations. Any set of basis elements which satisfy these relations form a representation of the Lorentz algebra. Generators of boosts and rotations The Lorentz group can be thought of as a subgroup of the diffeomorphism group of and therefore its Lie algebra can be identified with vector fields on . In particular, the vectors that generate isometries on a space are its Killing vectors, which provides a convenient alternative to the left-invariant vector field for calculating the Lie algebra. We can write down a set of six generators: Vector fields on generating three rotations , Vector fields on generating three boosts , The factor of appears to ensure that the generators of rotations are Hermitian. It may be helpful to briefly recall here how to obtain a one-parameter group from a vector field, written in the form of a first order linear partial differential operator such as The corresponding initial value problem (consider a function of a scalar and solve with some initial conditions) is The solution can be written or where we easily recognize the one-parameter matrix group of rotations about the z-axis. Differentiating with respect to the group parameter and setting it in that result, we recover the standard matrix, which corresponds to the vector field we started with. This illustrates how to pass between matrix and vector field representations of elements of the Lie algebra. The exponential map plays this special role not only for the Lorentz group but for Lie groups in general. Reversing the procedure in the previous section, we see that the Möbius transformations that correspond to our six generators arise from exponentiating respectively (for the three boosts) or (for the three rotations) times the three Pauli matrices Generators of the Möbius group Another generating set arises via the isomorphism to the Möbius group. The following table lists the six generators, in which The first column gives a generator of the flow under the Möbius action (after stereographic projection from the Riemann sphere) as a real vector field on the Euclidean plane. The second column gives the corresponding one-parameter subgroup of Möbius transformations. The third column gives the corresponding one-parameter subgroup of Lorentz transformations (the image under our homomorphism of preceding one-parameter subgroup). The fourth column gives the corresponding generator of the flow under the Lorentz action as a real vector field on Minkowski spacetime. 
Notice that the generators consist of Two parabolics (null rotations) One hyperbolic (boost in the direction) Three elliptics (rotations about the x, y, z axes, respectively) Worked example: rotation about the y-axis Start with Exponentiate: This element of represents the one-parameter subgroup of (elliptic) Möbius transformations: Next, The corresponding vector field on (thought of as the image of under stereographic projection) is Writing , this becomes the vector field on Returning to our element of , writing out the action and collecting terms, we find that the image under the spinor map is the element of Differentiating with respect to at , yields the corresponding vector field on , This is evidently the generator of counterclockwise rotation about the -axis. Subgroups of the Lorentz group The subalgebras of the Lie algebra of the Lorentz group can be enumerated, up to conjugacy, from which the closed subgroups of the restricted Lorentz group can be listed, up to conjugacy. (See the book by Hall cited below for the details.) These can be readily expressed in terms of the generators given in the table above. The one-dimensional subalgebras of course correspond to the four conjugacy classes of elements of the Lorentz group: generates a one-parameter subalgebra of parabolics , generates a one-parameter subalgebra of boosts , generates a one-parameter of rotations , (for any ) generates a one-parameter subalgebra of loxodromic transformations. (Strictly speaking the last corresponds to infinitely many classes, since distinct give different classes.) The two-dimensional subalgebras are: generate an abelian subalgebra consisting entirely of parabolics, generate a nonabelian subalgebra isomorphic to the Lie algebra of the affine group , generate an abelian subalgebra consisting of boosts, rotations, and loxodromics all sharing the same pair of fixed points. The three-dimensional subalgebras use the Bianchi classification scheme: generate a Bianchi V subalgebra, isomorphic to the Lie algebra of , the group of euclidean homotheties, generate a Bianchi VII subalgebra, isomorphic to the Lie algebra of , the euclidean group, , where , generate a Bianchi VII subalgebra, generate a Bianchi VIII subalgebra, isomorphic to the Lie algebra of , the group of isometries of the hyperbolic plane, generate a Bianchi IX subalgebra, isomorphic to the Lie algebra of , the rotation group. The Bianchi types refer to the classification of three-dimensional Lie algebras by the Italian mathematician Luigi Bianchi. The four-dimensional subalgebras are all conjugate to generate a subalgebra isomorphic to the Lie algebra of , the group of Euclidean similitudes. The subalgebras form a lattice (see the figure), and each subalgebra generates by exponentiation a closed subgroup of the restricted Lie group. From these, all subgroups of the Lorentz group can be constructed, up to conjugation, by multiplying by one of the elements of the Klein four-group. As with any connected Lie group, the coset spaces of the closed subgroups of the restricted Lorentz group, or homogeneous spaces, have considerable mathematical interest. A few, brief descriptions: The group is the stabilizer of a null line; i.e., of a point on the Riemann sphere—so the homogeneous space is the Kleinian geometry that represents conformal geometry on the sphere . 
The (identity component of the) Euclidean group SE(2) is the stabilizer of a null vector, so the homogeneous space SO⁺(1, 3)/SE(2) is the momentum space of a massless particle; geometrically, this Kleinian geometry represents the degenerate geometry of the light cone in Minkowski spacetime. The rotation group SO(3) is the stabilizer of a timelike vector, so the homogeneous space SO⁺(1, 3)/SO(3) is the momentum space of a massive particle; geometrically, this space is none other than three-dimensional hyperbolic space H³. Generalization to higher dimensions The concept of the Lorentz group has a natural generalization to spacetime of any number of dimensions. Mathematically, the Lorentz group of (n + 1)-dimensional Minkowski space is the indefinite orthogonal group O(n, 1) of linear transformations of R^(n+1) that preserve the quadratic form x₁² + x₂² + ⋯ + xₙ² − x_(n+1)². The group O(1, n), which preserves the quadratic form x₁² − x₂² − ⋯ − x_(n+1)², is isomorphic to O(n, 1), and both presentations of the Lorentz group are in use in the theoretical physics community. The former is more common in literature related to gravity, while the latter is more common in particle physics literature. A common notation for the vector space R^(n+1), equipped with the latter choice of quadratic form, is R^(1,n). Many of the properties of the Lorentz group in four dimensions (where n = 3) generalize straightforwardly to arbitrary n. For instance, the Lorentz group O(1, n) has four connected components, and it acts by conformal transformations on the celestial (n − 1)-sphere in (n + 1)-dimensional Minkowski space. The identity component SO⁺(1, n) is an SO(n)-bundle over hyperbolic n-space Hⁿ. The low-dimensional cases n = 1 and n = 2 are often useful as "toy models" for the physical case n = 3, while higher-dimensional Lorentz groups are used in physical theories such as string theory that posit the existence of hidden dimensions. The Lorentz group O(1, n) is also the isometry group of n-dimensional de Sitter space dSₙ, which may be realized as the homogeneous space O(1, n)/O(1, n − 1). In particular O(1, 4) is the isometry group of the de Sitter universe dS₄, a cosmological model. See also Lorentz transformation Lorentz group representation theory Poincaré group Möbius group Minkowski space Biquaternions Indefinite orthogonal group Quaternions and spatial rotation Special relativity Symmetry in quantum mechanics Notes References Reading List Emil Artin (1957) Geometric Algebra, chapter III: Symplectic and Orthogonal Geometry via Internet Archive, covers orthogonal groups. A canonical reference; see chapters 1–6 for representations of the Lorentz group. An excellent resource for Lie theory, fiber bundles, spinorial coverings, and many other topics. See Lecture 11 for the irreducible representations of SL(2, C). See Chapter 6 for the subalgebras of the Lie algebra of the Lorentz group. See Section 1.3 for a beautifully illustrated discussion of covering spaces. See Section 3D for the topology of rotation groups. §41.3 (Dover reprint edition.) An excellent reference on Minkowski spacetime and the Lorentz group. See Chapter 3 for a superbly illustrated discussion of Möbius transformations. 
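A small sketch of the higher-dimensional statement (our own illustration): the same defining condition Λᵀ η Λ = η characterizes O(1, n) for any n, shown here for n = 4 spatial dimensions with a boost and a spatial rotation.

```python
import numpy as np

n = 4                                   # number of spatial dimensions
eta = np.diag([1.0] + [-1.0] * n)       # metric of (n+1)-dimensional Minkowski space

# boost of rapidity 0.6 in the first spatial direction
L = np.eye(n + 1)
L[0, 0] = L[1, 1] = np.cosh(0.6)
L[0, 1] = L[1, 0] = -np.sinh(0.6)

# spatial rotation by 0.9 in the plane of the last two spatial axes
R = np.eye(n + 1)
R[n - 1, n - 1] = R[n, n] = np.cos(0.9)
R[n - 1, n], R[n, n - 1] = -np.sin(0.9), np.sin(0.9)

for M in (L, R, L @ R):
    assert np.allclose(M.T @ eta @ M, eta)   # defining condition of O(1, n)
print("the defining condition holds in", n + 1, "spacetime dimensions")
```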
0.772644
0.993107
0.767317
Relative strength index
The relative strength index (RSI) is a technical indicator used in the analysis of financial markets. It is intended to chart the current and historical strength or weakness of a stock or market based on the closing prices of a recent trading period. The indicator should not be confused with relative strength. The RSI is classified as a momentum oscillator, measuring the velocity and magnitude of price movements. Momentum is the rate of the rise or fall in price. The relative strength RS is given as the ratio of higher closes to lower closes. Concretely, one computes two averages of absolute values of closing price changes, i.e. two sums involving the sizes of candles in a candle chart. The RSI computes momentum as the ratio of higher closes to overall closes: stocks which have had more or stronger positive changes have a higher RSI than stocks which have had more or stronger negative changes. The RSI is most typically used on a 14-day timeframe, measured on a scale from 0 to 100, with high and low levels marked at 70 and 30, respectively. Short or longer timeframes are used for alternately shorter or longer outlooks. High and low levels—80 and 20, or 90 and 10—occur less frequently but indicate stronger momentum. The relative strength index was developed by J. Welles Wilder and published in a 1978 book, New Concepts in Technical Trading Systems, and in Commodities magazine (now Modern Trader magazine) in the June 1978 issue. It has become one of the most popular oscillator indices. The RSI provides signals that tell investors to buy when the security or currency is oversold and to sell when it is overbought. RSI with recommended parameters and its day-to-day optimization was tested and compared with other strategies in Marek and Šedivá (2017). The testing was randomised in time and companies (e.g., Apple, Exxon Mobil, IBM, Microsoft) and showed that RSI can still produce good results; however, in longer time it is usually overcome by the simple buy-and-hold strategy. Calculation For each trading period an upward change U or downward change D is calculated. Up periods are characterized by the close being higher than the previous close: Conversely, a down period is characterized by the close being lower than the previous period's close, If the last close is the same as the previous, both U and D are zero. Note that both U and D are nonnegative numbers. Averages are now calculated from sequences of such U and D, using an n-period smoothed or modified moving average (SMMA or MMA), which is the exponentially smoothed moving average with α = 1 / n. Those are positively weighted averages of those positive terms, and behave additively with respect to the partition. Wilder originally formulated the calculation of the moving average as: newval = (prevval * (n - 1) + newdata) / n, which is equivalent to the aforementioned exponential smoothing. So new data is simply divided by n, or multiplied by α and previous average values are modified by (n - 1) / n, i.e. 1 - α. Some commercial packages, like AIQ, use a standard exponential moving average (EMA) as the average instead of Wilder's SMMA. The smoothed moving averages should be appropriately initialized with a simple moving average using the first n values in the price series. The ratio of these averages is the relative strength or relative strength factor: The relative strength factor is then converted to a relative strength index between 0 and 100: If the average of U values is zero, both RS and RSI are also zero. 
If the average of U values equals the average of D values, the RS is 1 and RSI is 50. If the average of U values is maximal, so that the average of D values is zero, then the RS value diverges to infinity, while the RSI is 100. Interpretation Basic configuration The RSI is presented on a graph above or below the price chart. The indicator has an upper line and a lower line, typically at 70 and 30 respectively, and a dashed mid-line at 50. Wilder recommended a smoothing period of 14 (see exponential smoothing, i.e. α = 1/14 or N = 14). Principles Wilder posited that when price moves up very rapidly, at some point it is considered overbought. Likewise, when price falls very rapidly, at some point it is considered oversold. In either case, Wilder deemed a reaction or reversal imminent. The level of the RSI is a measure of the stock's recent trading strength. The slope of the RSI is directly proportional to the velocity of a change in the trend. The distance traveled by the RSI is proportional to the magnitude of the move. Wilder believed that tops and bottoms are indicated when RSI goes above 70 or drops below 30. Traditionally, RSI readings greater than the 70 level are considered to be in overbought territory, and RSI readings lower than the 30 level are considered to be in oversold territory. In between the 30 and 70 level is considered neutral, with the 50 level a sign of no trend. Divergence Wilder further believed that divergence between RSI and price action is a very strong indication that a market turning point is imminent. Bearish divergence occurs when price makes a new high but the RSI makes a lower high, thus failing to confirm. Bullish divergence occurs when price makes a new low but RSI makes a higher low. Furthermore, traders often use "hidden" divergences to indicate possible trend reversals. A hidden bullish divergence occurs when the price makes a lower high, while the RSI makes a higher high. Conversely, a hidden bearish divergence occurs when price makes a higher low, but the RSI makes a lower low. Overbought and oversold conditions Wilder thought that "failure swings" above 50 and below 50 on the RSI are strong indications of market reversals. For example, assume the RSI hits 76, pulls back to 72, then rises to 77. If it falls below 72, Wilder would consider this a "failure swing" above 70. Finally, Wilder wrote that chart formations and areas of support and resistance could sometimes be more easily seen on the RSI chart as opposed to the price chart. The center line for the relative strength index is 50, which is often seen as both the support and resistance line for the indicator. Uptrends and downtrends In addition to Wilder's original theories of RSI interpretation, Andrew Cardwell has developed several new interpretations of RSI to help determine and confirm trend. First, Cardwell noticed that uptrends generally traded between RSI 40 and 80, while downtrends usually traded between RSI 60 and 20. Cardwell observed when securities change from uptrend to downtrend and vice versa, the RSI will undergo a "range shift." Next, Cardwell noted that bearish divergence: 1) only occurs in uptrends, and 2) mostly only leads to a brief correction instead of a reversal in trend. Therefore, bearish divergence is a sign confirming an uptrend. Similarly, bullish divergence is a sign confirming a downtrend. Reversals Finally, Cardwell discovered the existence of positive and negative reversals in the RSI. Reversals are the opposite of divergence. 
For example, a positive reversal occurs when an uptrend price correction results in a higher low compared to the last price correction, while RSI results in a lower low compared to the prior correction. A negative reversal happens when a downtrend rally results in a lower high compared to the last downtrend rally, but RSI makes a higher high compared to the prior rally. In other words, despite stronger momentum as seen by the higher high or lower low in the RSI, price could not make a higher high or lower low. This is evidence the main trend is about to resume. Cardwell noted that positive reversals only happen in uptrends while negative reversals only occur in downtrends, and therefore their existence confirms the trend. Cutler's RSI A variation called Cutler's RSI is based on a simple moving average of U and D, instead of the exponential average above. Cutler had found that since Wilder used a smoothed moving average to calculate RSI, the value of Wilder's RSI depended upon where in the data file his calculations started. Cutler termed this Data Length Dependency. Cutler's RSI is not data length dependent, and returns consistent results regardless of the length of, or the starting point within a data file. Cutler's RSI generally comes out slightly different from the normal Wilder RSI, but the two are similar, since SMA and SMMA are also similar. General definitions In analysis, If is a real function with positive -norm on a set , then divided by its norm integral, the normalized function , is an associated distribution. If is a distribution, one can evaluate measurable subsets of the domain. To this end, the associated indicator function may be mapped to an "index" in the real unit interval, via the pairing . Now for defined as being equal to if and only if the value of is positive, the index is a quotient of two integrals and is a value that assigns a weight to the part of that is positive. A ratio of two averages over the same domain is also always computed as the ratio of two integrals or sums. Yet more specifically, for a real function on an ordered set (e.g. a price curve), one may consider that function's gradient , or some weighted variant thereof. In the case where is an ordered finite set (e.g. a sequence of timestamps), the gradient is given as the finite difference. In the relative strength index, evaluates a sequence derived from closing prices observed in an interval of so called market periods . The positive value equals the absolute price change , i.e. at , multiplied by an exponential factor according to the SMMA weighting for that time. The denominator is the sum of all those numbers. In the numerator one computes the SMMA of only , or in other words multiplied by the indicator function for the positive changes. The resulting index in this way weighs , which includes only the positive changes, over the changes in the whole interval. See also Stochastic Oscillator MACD, moving average convergence/divergence True strength index, a similar momentum-based indicator References External links Technical indicators
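As a concrete illustration of the 14-period Wilder calculation described in the Calculation section above, here is a minimal sketch (our own function and variable names; it assumes the averages are seeded with a simple mean of the first n changes, as recommended above, and then updated with Wilder's smoothing).

```python
import numpy as np

def rsi_wilder(closes, n=14):
    """RSI with Wilder's smoothed moving average (SMMA), seeded by a simple
    mean of the first n changes; returns one value per period from day n on."""
    closes = np.asarray(closes, dtype=float)
    deltas = np.diff(closes)
    ups = np.clip(deltas, 0, None)     # U: size of upward change, else 0
    downs = np.clip(-deltas, 0, None)  # D: size of downward change, else 0

    def to_rsi(avg_u, avg_d):
        if avg_d == 0:
            return 0.0 if avg_u == 0 else 100.0
        return 100.0 - 100.0 / (1.0 + avg_u / avg_d)   # RS = avg_u / avg_d

    avg_u, avg_d = ups[:n].mean(), downs[:n].mean()
    out = [to_rsi(avg_u, avg_d)]
    for u, d in zip(ups[n:], downs[n:]):
        avg_u = (avg_u * (n - 1) + u) / n   # Wilder's smoothing, alpha = 1/n
        avg_d = (avg_d * (n - 1) + d) / n
        out.append(to_rsi(avg_u, avg_d))
    return np.array(out)

prices = 100 + np.cumsum(np.random.default_rng(1).normal(0, 1, 60))
print(np.round(rsi_wilder(prices)[-5:], 2))
```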
0.770928
0.995231
0.767251
Wave equation
The wave equation is a second-order linear partial differential equation for the description of waves or standing wave fields such as mechanical waves (e.g. water waves, sound waves and seismic waves) or electromagnetic waves (including light waves). It arises in fields like acoustics, electromagnetism, and fluid dynamics. This article focuses on waves in classical physics. Quantum physics uses an operator-based wave equation often as a relativistic wave equation. Introduction The wave equation is a hyperbolic partial differential equation describing waves, including traveling and standing waves; the latter can be considered as linear superpositions of waves traveling in opposite directions. This article mostly focuses on the scalar wave equation describing waves in scalars by scalar functions of a time variable (a variable representing time) and one or more spatial variables (variables representing a position in a space under discussion). At the same time, there are vector wave equations describing waves in vectors such as waves for an electrical field, magnetic field, and magnetic vector potential and elastic waves. By comparison with vector wave equations, the scalar wave equation can be seen as a special case of the vector wave equations; in the Cartesian coordinate system, the scalar wave equation is the equation to be satisfied by each component (for each coordinate axis, such as the x component for the x axis) of a vector wave without sources of waves in the considered domain (i.e., space and time). For example, in the Cartesian coordinate system, for as the representation of an electric vector field wave in the absence of wave sources, each coordinate axis component (i = x, y, z) must satisfy the scalar wave equation. Other scalar wave equation solutions are for physical quantities in scalars such as pressure in a liquid or gas, or the displacement along some specific direction of particles of a vibrating solid away from their resting (equilibrium) positions. The scalar wave equation is where is a fixed non-negative real coefficient representing the propagation speed of the wave is a scalar field representing the displacement or, more generally, the conserved quantity (e.g. pressure or density) , and are the three spatial coordinates and being the time coordinate. The equation states that, at any given point, the second derivative of with respect to time is proportional to the sum of the second derivatives of with respect to space, with the constant of proportionality being the square of the speed of the wave. Using notations from vector calculus, the wave equation can be written compactly as or where the double subscript denotes the second-order partial derivative with respect to time, is the Laplace operator and the d'Alembert operator, defined as: A solution to this (two-way) wave equation can be quite complicated. Still, it can be analyzed as a linear combination of simple solutions that are sinusoidal plane waves with various directions of propagation and wavelengths but all with the same propagation speed . This analysis is possible because the wave equation is linear and homogeneous, so that any multiple of a solution is also a solution, and the sum of any two solutions is again a solution. This property is called the superposition principle in physics. 
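For reference, the scalar equation described in words above can be written out explicitly; the notation below (u for the field, c for the propagation speed, Δ for the Laplacian) is ours, chosen to match the description, with one common sign convention for the d'Alembertian:

$$\frac{\partial^2 u}{\partial t^2} \;=\; c^2 \left( \frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} + \frac{\partial^2 u}{\partial z^2} \right), \qquad \text{equivalently} \qquad u_{tt} = c^2\,\Delta u \quad \text{or} \quad \Box u = 0, \quad \Box = \frac{1}{c^2}\frac{\partial^2}{\partial t^2} - \Delta .$$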
The wave equation alone does not specify a physical solution; a unique solution is usually obtained by setting a problem with further conditions, such as initial conditions, which prescribe the amplitude and phase of the wave. Another important class of problems occurs in enclosed spaces specified by boundary conditions, for which the solutions represent standing waves, or harmonics, analogous to the harmonics of musical instruments. Wave equation in one space dimension The wave equation in one spatial dimension can be written as follows: This equation is typically described as having only one spatial dimension , because the only other independent variable is the time . Derivation The wave equation in one space dimension can be derived in a variety of different physical settings. Most famously, it can be derived for the case of a string vibrating in a two-dimensional plane, with each of its elements being pulled in opposite directions by the force of tension. Another physical setting for derivation of the wave equation in one space dimension uses Hooke's law. In the theory of elasticity, Hooke's law is an approximation for certain materials, stating that the amount by which a material body is deformed (the strain) is linearly related to the force causing the deformation (the stress). Hooke's law The wave equation in the one-dimensional case can be derived from Hooke's law in the following way: imagine an array of little weights of mass interconnected with massless springs of length . The springs have a spring constant of : Here the dependent variable measures the distance from the equilibrium of the mass situated at , so that essentially measures the magnitude of a disturbance (i.e. strain) that is traveling in an elastic material. The resulting force exerted on the mass at the location is: By equating the latter equation with the equation of motion for the weight at the location is obtained: If the array of weights consists of weights spaced evenly over the length of total mass , and the total spring constant of the array , we can write the above equation as Taking the limit and assuming smoothness, one gets which is from the definition of a second derivative. is the square of the propagation speed in this particular case. Stress pulse in a bar In the case of a stress pulse propagating longitudinally through a bar, the bar acts much like an infinite number of springs in series and can be taken as an extension of the equation derived for Hooke's law. A uniform bar, i.e. of constant cross-section, made from a linear elastic material has a stiffness given by where is the cross-sectional area, and is the Young's modulus of the material. The wave equation becomes is equal to the volume of the bar, and therefore where is the density of the material. The wave equation reduces to The speed of a stress wave in a bar is therefore . General solution Algebraic approach For the one-dimensional wave equation a relatively simple general solution may be found. Defining new variables changes the wave equation into which leads to the general solution In other words, the solution is the sum of a right-traveling function and a left-traveling function . "Traveling" means that the shape of these individual arbitrary functions with respect to stays constant, however, the functions are translated left and right with time at the speed . This was derived by Jean le Rond d'Alembert. 
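The general solution just described, and d'Alembert's formula for the initial-value problem discussed next, can be checked numerically. The following sketch is our own and assumes smooth sample profiles, a central finite-difference check of the equation, and a trapezoidal rule for the integral term.

```python
import numpy as np

c = 2.0
F = lambda s: np.exp(-s**2)      # right-traveling profile
G = lambda s: np.sin(s)          # left-traveling profile
u = lambda x, t: F(x - c * t) + G(x + c * t)

# u_tt = c^2 u_xx, checked with central finite differences at a sample point.
x0, t0, h = 0.7, 0.3, 1e-4
u_tt = (u(x0, t0 + h) - 2 * u(x0, t0) + u(x0, t0 - h)) / h**2
u_xx = (u(x0 + h, t0) - 2 * u(x0, t0) + u(x0 - h, t0)) / h**2
assert abs(u_tt - c**2 * u_xx) < 1e-4

# d'Alembert's formula for initial data u(x,0) = f(x), u_t(x,0) = g(x):
# u = (f(x+ct) + f(x-ct))/2 + (1/(2c)) * integral of g over [x-ct, x+ct].
f, g = F, np.cos
def dalembert(x, t, npts=401):
    s = np.linspace(x - c * t, x + c * t, npts)
    integral = np.sum(0.5 * (g(s[1:]) + g(s[:-1])) * np.diff(s))  # trapezoid rule
    return 0.5 * (f(x + c * t) + f(x - c * t)) + integral / (2 * c)

assert abs(dalembert(x0, 0.0) - f(x0)) < 1e-12                               # u(x, 0) = f(x)
assert abs((dalembert(x0, h) - dalembert(x0, -h)) / (2 * h) - g(x0)) < 1e-6  # u_t(x, 0) = g(x)
print("traveling-wave solution and d'Alembert formula check out")
```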
Another way to arrive at this result is to factor the wave equation using two first-order differential operators: Then, for our original equation, we can define and find that we must have This advection equation can be solved by interpreting it as telling us that the directional derivative of in the direction is 0. This means that the value of is constant on characteristic lines of the form , and thus that must depend only on , that is, have the form . Then, to solve the first (inhomogenous) equation relating to , we can note that its homogenous solution must be a function of the form , by logic similar to the above. Guessing a particular solution of the form , we find that Expanding out the left side, rearranging terms, then using the change of variables simplifies the equation to This means we can find a particular solution of the desired form by integration. Thus, we have again shown that obeys . For an initial-value problem, the arbitrary functions and can be determined to satisfy initial conditions: The result is d'Alembert's formula: In the classical sense, if , and , then . However, the waveforms and may also be generalized functions, such as the delta-function. In that case, the solution may be interpreted as an impulse that travels to the right or the left. The basic wave equation is a linear differential equation, and so it will adhere to the superposition principle. This means that the net displacement caused by two or more waves is the sum of the displacements which would have been caused by each wave individually. In addition, the behavior of a wave can be analyzed by breaking up the wave into components, e.g. the Fourier transform breaks up a wave into sinusoidal components. Plane-wave eigenmodes Another way to solve the one-dimensional wave equation is to first analyze its frequency eigenmodes. A so-called eigenmode is a solution that oscillates in time with a well-defined constant angular frequency , so that the temporal part of the wave function takes the form , and the amplitude is a function of the spatial variable , giving a separation of variables for the wave function: This produces an ordinary differential equation for the spatial part : Therefore, which is precisely an eigenvalue equation for , hence the name eigenmode. Known as the Helmholtz equation, it has the well-known plane-wave solutions with wave number . The total wave function for this eigenmode is then the linear combination where complex numbers , depend in general on any initial and boundary conditions of the problem. Eigenmodes are useful in constructing a full solution to the wave equation, because each of them evolves in time trivially with the phase factor so that a full solution can be decomposed into an eigenmode expansion: or in terms of the plane waves, which is exactly in the same form as in the algebraic approach. Functions are known as the Fourier component and are determined by initial and boundary conditions. This is a so-called frequency-domain method, alternative to direct time-domain propagations, such as FDTD method, of the wave packet , which is complete for representing waves in absence of time dilations. Completeness of the Fourier expansion for representing waves in the presence of time dilations has been challenged by chirp wave solutions allowing for time variation of . 
The chirp wave solutions seem particularly implied by very large but previously inexplicable radar residuals in the flyby anomaly and differ from the sinusoidal solutions in being receivable at any distance only at proportionally shifted frequencies and time dilations, corresponding to past chirp states of the source. Vectorial wave equation in three space dimensions The vectorial wave equation (from which the scalar wave equation can be directly derived) can be obtained by applying a force equilibrium to an infinitesimal volume element. In a homogeneous continuum (cartesian coordinate ) with a constant modulus of elasticity a vectorial, elastic deflection causes the stress tensor . The local equilibrium of a) the tension force due to deflection and b) the inertial force caused by the local acceleration can be written as By merging density and elasticity module the sound velocity results (material law). After insertion, follows the well-known governing wave equation for a homogeneous medium: (Note: Instead of vectorial only scalar can be used, i.e. waves are travelling only along the axis, and the scalar wave equation follows as .) The above vectorial partial differential equation of the 2nd order delivers two mutually independent solutions. From the quadratic velocity term can be seen that there are two waves travelling in opposite directions and are possible, hence results the designation “two-way wave equation”. It can be shown for plane longitudinal wave propagation that the synthesis of two one-way wave equations leads to a general two-way wave equation. For special two-wave equation with the d'Alembert operator results: For this simplifies to Therefore, the vectorial 1st-order one-way wave equation with waves travelling in a pre-defined propagation direction results as Scalar wave equation in three space dimensions A solution of the initial-value problem for the wave equation in three space dimensions can be obtained from the corresponding solution for a spherical wave. The result can then be also used to obtain the same solution in two space dimensions. Spherical waves To obtain a solution with constant frequencies, apply the Fourier transform which transforms the wave equation into an elliptic partial differential equation of the form: This is the Helmholtz equation and can be solved using separation of variables. In spherical coordinates this leads to a separation of the radial and angular variables, writing the solution as: The angular part of the solution take the form of spherical harmonics and the radial function satisfies: independent of , with . Substituting transforms the equation into which is the Bessel equation. Example Consider the case . Then there is no angular dependence and the amplitude depends only on the radial distance, i.e., . In this case, the wave equation reduces to or This equation can be rewritten as where the quantity satisfies the one-dimensional wave equation. Therefore, there are solutions in the form where and are general solutions to the one-dimensional wave equation and can be interpreted as respectively an outgoing and incoming spherical waves. The outgoing wave can be generated by a point source, and they make possible sharp signals whose form is altered only by a decrease in amplitude as increases (see an illustration of a spherical wave on the top right). Such waves exist only in cases of space with odd dimensions. For physical examples of solutions to the 3D wave equation that possess angular dependence, see dipole radiation. 
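The outgoing spherical wave just described can also be checked directly. Below is a small sketch (our own; the profile f and the sample point are arbitrary choices) verifying that u(r, t) = f(r − ct)/r satisfies the radial form of the three-dimensional wave equation.

```python
import numpy as np

c = 1.0
f = lambda s: np.exp(-s**2)
u = lambda r, t: f(r - c * t) / r     # outgoing spherical wave

r0, t0, h = 2.0, 0.5, 1e-4
u_tt = (u(r0, t0 + h) - 2 * u(r0, t0) + u(r0, t0 - h)) / h**2
u_rr = (u(r0 + h, t0) - 2 * u(r0, t0) + u(r0 - h, t0)) / h**2
u_r  = (u(r0 + h, t0) - u(r0 - h, t0)) / (2 * h)
# radial form of the 3-D wave equation: u_tt = c^2 (u_rr + 2 u_r / r)
assert abs(u_tt - c**2 * (u_rr + 2 * u_r / r0)) < 1e-4
print("f(r - ct)/r satisfies the spherically symmetric wave equation")
```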
Monochromatic spherical wave Although the word "monochromatic" is not exactly accurate, since it refers to light or electromagnetic radiation with well-defined frequency, the spirit is to discover the eigenmode of the wave equation in three dimensions. Following the derivation in the previous section on plane-wave eigenmodes, if we again restrict our solutions to spherical waves that oscillate in time with well-defined constant angular frequency , then the transformed function has simply plane-wave solutions: or From this we can observe that the peak intensity of the spherical-wave oscillation, characterized as the squared wave amplitude drops at the rate proportional to , an example of the inverse-square law. Solution of a general initial-value problem The wave equation is linear in and is left unaltered by translations in space and time. Therefore, we can generate a great variety of solutions by translating and summing spherical waves. Let be an arbitrary function of three independent variables, and let the spherical wave form be a delta function. Let a family of spherical waves have center at , and let be the radial distance from that point. Thus If is a superposition of such waves with weighting function , then the denominator is a convenience. From the definition of the delta function, may also be written as where , , and are coordinates on the unit sphere , and is the area element on . This result has the interpretation that is times the mean value of on a sphere of radius centered at : It follows that The mean value is an even function of , and hence if then These formulas provide the solution for the initial-value problem for the wave equation. They show that the solution at a given point , given depends only on the data on the sphere of radius that is intersected by the light cone drawn backwards from . It does not depend upon data on the interior of this sphere. Thus the interior of the sphere is a lacuna for the solution. This phenomenon is called Huygens' principle. It is only true for odd numbers of space dimension, where for one dimension the integration is performed over the boundary of an interval with respect to the Dirac measure. Scalar wave equation in two space dimensions In two space dimensions, the wave equation is We can use the three-dimensional theory to solve this problem if we regard as a function in three dimensions that is independent of the third dimension. If then the three-dimensional solution formula becomes where and are the first two coordinates on the unit sphere, and is the area element on the sphere. This integral may be rewritten as a double integral over the disc with center and radius It is apparent that the solution at depends not only on the data on the light cone where but also on data that are interior to that cone. Scalar wave equation in general dimension and Kirchhoff's formulae We want to find solutions to for with and . Odd dimensions Assume is an odd integer, and , for . Let and let Then , in , , . Even dimensions Assume is an even integer and , , for . Let and let then in Green's function Consider the inhomogeneous wave equation in dimensionsBy rescaling time, we can set wave speed . Since the wave equation has order 2 in time, there are two impulse responses: an acceleration impulse and a velocity impulse. The effect of inflicting an acceleration impulse is to suddenly change the wave velocity . The effect of inflicting a velocity impulse is to suddenly change the wave displacement . 
For acceleration impulse, where is the Dirac delta function. The solution to this case is called the Green's function for the wave equation. For velocity impulse, , so if we solve the Green function , the solution for this case is just . Duhamel's principle The main use of Green's functions is to solve initial value problems by Duhamel's principle, both for the homogeneous and the inhomogeneous case. Given the Green function , and initial conditions , the solution to the homogeneous wave equation iswhere the asterisk is convolution in space. More explicitly, For the inhomogeneous case, the solution has one additional term by convolution over spacetime: Solution by Fourier transform By a Fourier transform,The term can be integrated by the residue theorem. It would require us to perturb the integral slightly either by or by , because it is an improper integral. One perturbation gives the forward solution, and the other the backward solution. The forward solution givesThe integral can be solved by analytically continuing the Poisson kernel, givingwhere is half the surface area of a -dimensional hypersphere. Solutions in particular dimensions We can relate the Green's function in dimensions to the Green's function in dimensions. Lowering dimensions Given a function and a solution of a differential equation in dimensions, we can trivially extend it to dimensions by setting the additional dimensions to be constant: Since the Green's function is constructed from and , the Green's function in dimensions integrates to the Green's function in dimensions: Raising dimensions The Green's function in dimensions can be related to the Green's function in dimensions. By spherical symmetry, Integrating in polar coordinates, where in the last equality we made the change of variables . Thus, we obtain the recurrence relation Solutions in D = 1, 2, 3 When , the integrand in the Fourier transform is the sinc function where is the sign function and is the unit step function. One solution is the forward solution, the other is the backward solution. The dimension can be raised to give the caseand similarly for the backward solution. This can be integrated down by one dimension to give the case Wavefronts and wakes In case, the Green's function solution is the sum of two wavefronts moving in opposite directions. In odd dimensions, the forward solution is nonzero only at . As the dimensions increase, the shape of wavefront becomes increasingly complex, involving higher derivatives of the Dirac delta function. For example,where , and the wave speed is restored. In even dimensions, the forward solution is nonzero in , the entire region behind the wavefront becomes nonzero, called a wake. The wake has equation:The wavefront itself also involves increasingly higher derivatives of the Dirac delta function. This means that a general Huygens' principle – the wave displacement at a point in spacetime depends only on the state at points on characteristic rays passing – only holds in odd dimensions. A physical interpretation is that signals transmitted by waves remain undistorted in odd dimensions, but distorted in even dimensions. Hadamard's conjecture states that this generalized Huygens' principle still holds in all odd dimensions even when the coefficients in the wave equation are no longer constant. 
It is not strictly correct, but it is correct for certain families of coefficients Problems with boundaries One space dimension Reflection and transmission at the boundary of two media For an incident wave traveling from one medium (where the wave speed is ) to another medium (where the wave speed is ), one part of the wave will transmit into the second medium, while another part reflects back into the other direction and stays in the first medium. The amplitude of the transmitted wave and the reflected wave can be calculated by using the continuity condition at the boundary. Consider the component of the incident wave with an angular frequency of , which has the waveform At , the incident reaches the boundary between the two media at . Therefore, the corresponding reflected wave and the transmitted wave will have the waveforms The continuity condition at the boundary is This gives the equations and we have the reflectivity and transmissivity When , the reflected wave has a reflection phase change of 180°, since . The energy conservation can be verified by The above discussion holds true for any component, regardless of its angular frequency of . The limiting case of corresponds to a "fixed end" that does not move, whereas the limiting case of corresponds to a "free end". The Sturm–Liouville formulation A flexible string that is stretched between two points and satisfies the wave equation for and . On the boundary points, may satisfy a variety of boundary conditions. A general form that is appropriate for applications is where and are non-negative. The case where is required to vanish at an endpoint (i.e. "fixed end") is the limit of this condition when the respective or approaches infinity. The method of separation of variables consists in looking for solutions of this problem in the special form A consequence is that The eigenvalue must be determined so that there is a non-trivial solution of the boundary-value problem This is a special case of the general problem of Sturm–Liouville theory. If and are positive, the eigenvalues are all positive, and the solutions are trigonometric functions. A solution that satisfies square-integrable initial conditions for and can be obtained from expansion of these functions in the appropriate trigonometric series. Several space dimensions The one-dimensional initial-boundary value theory may be extended to an arbitrary number of space dimensions. Consider a domain in -dimensional space, with boundary . Then the wave equation is to be satisfied if is in , and . On the boundary of , the solution shall satisfy where is the unit outward normal to , and is a non-negative function defined on . The case where vanishes on is a limiting case for approaching infinity. The initial conditions are where and are defined in . This problem may be solved by expanding and in the eigenfunctions of the Laplacian in , which satisfy the boundary conditions. Thus the eigenfunction satisfies in , and on . In the case of two space dimensions, the eigenfunctions may be interpreted as the modes of vibration of a drumhead stretched over the boundary . If is a circle, then these eigenfunctions have an angular component that is a trigonometric function of the polar angle , multiplied by a Bessel function (of integer order) of the radial component. Further details are in Helmholtz equation. If the boundary is a sphere in three space dimensions, the angular components of the eigenfunctions are spherical harmonics, and the radial components are Bessel functions of half-integer order. 
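The reflection and transmission coefficients derived above for a wave crossing the interface between media with speeds c1 and c2 can be verified numerically. The sketch below is a minimal check, assuming the usual string-type matching conditions (continuity of the displacement and of its slope at the boundary) and the energy-flux relation r^2 + (c1/c2) t^2 = 1; the specific numbers are arbitrary.

import numpy as np

def reflection_transmission(c1, c2):
    """Amplitude coefficients for a wave hitting the interface from medium 1.
    Conventions assumed: continuity of the displacement and of its slope at x = 0."""
    r = (c2 - c1) / (c2 + c1)
    t = 2 * c2 / (c1 + c2)
    return r, t

c1, c2, omega = 1.0, 3.0, 2.0
r, t = reflection_transmission(c1, c2)
k1, k2 = omega / c1, omega / c2

# Boundary (continuity) conditions at x = 0
assert np.isclose(1 + r, t)              # displacement is continuous
assert np.isclose(k1 * (1 - r), k2 * t)  # slope is continuous

# Energy balance: reflected + transmitted energy flux equals the incident flux
assert np.isclose(r**2 + (c1 / c2) * t**2, 1.0)

print(f"r = {r:.3f}, t = {t:.3f}")

Setting c2 < c1 makes r negative, reproducing the 180° phase change noted above for reflection from a slower medium; the limits c2 -> 0 and c2 -> infinity give the "fixed end" and "free end" cases respectively.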
Inhomogeneous wave equation in one dimension The inhomogeneous wave equation in one dimension is with initial conditions The function is often called the source function because in practice it describes the effects of the sources of waves on the medium carrying them. Physical examples of source functions include the force driving a wave on a string, or the charge or current density in the Lorenz gauge of electromagnetism. One method to solve the initial-value problem (with the initial values as posed above) is to take advantage of a special property of the wave equation in an odd number of space dimensions, namely that its solutions respect causality. That is, for any point , the value of depends only on the values of and and the values of the function between and . This can be seen in d'Alembert's formula, stated above, where these quantities are the only ones that show up in it. Physically, if the maximum propagation speed is , then no part of the wave that cannot propagate to a given point by a given time can affect the amplitude at the same point and time. In terms of finding a solution, this causality property means that for any given point on the line being considered, the only area that needs to be considered is the area encompassing all the points that could causally affect the point being considered. Denote the area that causally affects point as . Suppose we integrate the inhomogeneous wave equation over this region: To simplify this greatly, we can use Green's theorem to simplify the left side to get the following: The left side is now the sum of three line integrals along the bounds of the causality region. These turn out to be fairly easy to compute: In the above, the term to be integrated with respect to time disappears because the time interval involved is zero, thus . For the other two sides of the region, it is worth noting that is a constant, namely , where the sign is chosen appropriately. Using this, we can get the relation , again choosing the right sign: And similarly for the final boundary segment: Adding the three results together and putting them back in the original integral gives Solving for , we arrive at In the last equation of the sequence, the bounds of the integral over the source function have been made explicit. Looking at this solution, which is valid for all choices compatible with the wave equation, it is clear that the first two terms are simply d'Alembert's formula, as stated above as the solution of the homogeneous wave equation in one dimension. The difference is in the third term, the integral over the source. Further generalizations Elastic waves The elastic wave equation (also known as the Navier–Cauchy equation) in three dimensions describes the propagation of waves in an isotropic homogeneous elastic medium. Most solid materials are elastic, so this equation describes such phenomena as seismic waves in the Earth and ultrasonic waves used to detect flaws in materials. While linear, this equation has a more complex form than the equations given above, as it must account for both longitudinal and transverse motion: where: and are the so-called Lamé parameters describing the elastic properties of the medium, is the density, is the source function (driving force), is the displacement vector. By using , the elastic wave equation can be rewritten into the more common form of the Navier–Cauchy equation. Note that in the elastic wave equation, both force and displacement are vector quantities. 
Thus, this equation is sometimes known as the vector wave equation. As an aid to understanding, the reader will observe that if and are set to zero, this becomes (effectively) Maxwell's equation for the propagation of the electric field , which has only transverse waves. Dispersion relation In dispersive wave phenomena, the speed of wave propagation varies with the wavelength of the wave, which is reflected by a dispersion relation where is the angular frequency, and is the wavevector describing plane-wave solutions. For light waves, the dispersion relation is , but in general, the constant speed gets replaced by a variable phase velocity: See also Acoustic attenuation Acoustic wave equation Bateman transform Electromagnetic wave equation Helmholtz equation Inhomogeneous electromagnetic wave equation Laplace operator Mathematics of oscillation Maxwell's equations Schrödinger equation Standing wave Vibrations of a circular membrane Wheeler–Feynman absorber theory Notes References R. Courant, D. Hilbert, Methods of Mathematical Physics, vol II. Interscience (Wiley) New York, 1962. "Linear Wave Equations", EqWorld: The World of Mathematical Equations. "Nonlinear Wave Equations", EqWorld: The World of Mathematical Equations. William C. Lane, "MISN-0-201 The Wave Equation and Its Solutions", Project PHYSNET. External links Nonlinear Wave Equations by Stephen Wolfram and Rob Knapp, Nonlinear Wave Equation Explorer by Wolfram Demonstrations Project. Mathematical aspects of wave equations are discussed on the Dispersive PDE Wiki . Graham W Griffiths and William E. Schiesser (2009). Linear and nonlinear waves. Scholarpedia, 4(7):4308. doi:10.4249/scholarpedia.4308 Equations of physics Hyperbolic partial differential equations Wave mechanics
0.768389
0.998493
0.767231
Klein paradox
In relativistic quantum mechanics, the Klein paradox (also known as Klein tunneling) is a quantum phenomenon related to particles encountering high-energy potential barriers. It is named after physicist Oskar Klein, who discovered it in 1929. Originally, Klein obtained a paradoxical result by applying the Dirac equation to the familiar problem of electron scattering from a potential barrier. In nonrelativistic quantum mechanics, electron tunneling into a barrier is observed, with exponential damping. However, Klein's result showed that if the potential step is at least of the order of the electron rest energy, eV ~ mc^2 (where V is the electric potential, e is the elementary charge, m is the electron mass and c is the speed of light), the barrier is nearly transparent. Moreover, as the potential approaches infinity, the reflection diminishes and the electron is always transmitted. The immediate application of the paradox was to Rutherford's proton–electron model for neutral particles within the nucleus, before the discovery of the neutron. The paradox presented a quantum mechanical objection to the notion of an electron confined within a nucleus: it suggested that an electron could not be confined within a nucleus by any potential well. The meaning of this paradox was intensely debated by Niels Bohr and others at the time. Physics overview The Klein paradox is an unexpected consequence of relativity on the interaction of quantum particles with electrostatic potentials. The quantum mechanical problem of free particles striking an electrostatic step potential has two solutions when relativity is ignored. One solution applies when the particles approaching the barrier have less kinetic energy than the step: the particles are reflected. If the particles have more energy than the step, some are transmitted past the step, while some are reflected; the ratio of reflection to transmission depends on the energy difference. Relativity adds a third solution: very steep potential steps appear to create particles and antiparticles that then change the calculated ratio of transmission and reflection. The theoretical tools of ordinary quantum mechanics cannot handle the creation of particles, making any analysis of the relativistic case suspect. Before antiparticles were discovered and quantum field theory was developed, this third solution was not understood. The puzzle came to be called the Klein paradox. For massive particles, the electric field strength required to observe the effect is enormous. An electric potential energy change comparable to the rest energy of the incoming particle would need to occur over a distance of order the Compton wavelength of the particle, which works out to about 10^16 V/cm for electrons. Such extreme fields might only be relevant in Z > 170 nuclei or in evaporation at the event horizon of black holes, but for 2-D quasiparticles at graphene p-n junctions the effect can be studied experimentally. History Oskar Klein published the paper describing what later came to be called the Klein paradox in 1929, just as physicists were grappling with two problems: how to combine the theories of relativity and quantum mechanics, and how to understand the coupling of matter and light known as electrodynamics. The paradox raised questions about how relativity was added to quantum mechanics in Dirac's first attempt. It would take the development of the new quantum field theory created for electrodynamics to resolve the paradox.
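The order-of-magnitude estimate of 10^16 V/cm quoted above follows from demanding a potential energy drop of one electron rest energy over roughly one reduced Compton wavelength. The quick numerical check below is illustrative only, using scipy.constants; taking the reduced Compton wavelength hbar/(m_e c) is an assumption about the convention (the non-reduced wavelength would give a value a factor of 2*pi smaller).

from scipy.constants import m_e, c, e, hbar

rest_energy_eV = m_e * c**2 / e                 # ~5.11e5 eV
reduced_compton_m = hbar / (m_e * c)            # ~3.86e-13 m

# Potential drop of one rest energy over one reduced Compton wavelength:
E_crit_V_per_m = rest_energy_eV / reduced_compton_m
print(f"{E_crit_V_per_m * 1e-2:.2e} V/cm")      # ~1.3e16 V/cm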
Thus the background of the paradox has two threads: the development of quantum mechanics and of quantum electrodynamics. Dirac equation mysteries The Bohr model of the atom published in 1913 assumed electrons in motion around a compact positive nucleus. An atomic electron obeying classical mechanics in the presence of a positively charged nucleus experiences a Lorentz force: it should radiate energy and accelerate into the atomic core. The success of the Bohr model in predicting atomic spectra suggested that classical mechanics could not be correct. In 1926 Erwin Schrödinger developed a new mechanics for the electron, a quantum mechanics that reproduced Bohr's results. Schrödinger and other physicists knew this mechanics was incomplete: it included neither the effects of special relativity nor the interaction of matter and radiation. Paul Dirac solved the first issue in 1928 with his relativistic quantum theory of the electron. The combination was more accurate and also predicted electron spin. However, it also included twice as many states as expected, all with lower energy than the ones involved in atomic physics. Klein found that these extra states caused absurd results from models for electrons striking a large, sharp change in electrostatic potential: a negative current appeared beyond the barrier. Significantly, Dirac's theory only described single-particle states. Creation or annihilation of particles could not be correctly analyzed in the single-particle theory. The Klein result was widely discussed immediately after its publication. Niels Bohr thought the result was related to the abrupt step, and as a result Arnold Sommerfeld asked Fritz Sauter to investigate sloped steps. Sauter was able to confirm Bohr's conjecture: the paradoxical result only appeared for a step rising over a distance similar to the electron's Compton wavelength, about 2 x 10^-12 m. Throughout 1929 and 1930, a series of papers by different physicists attempted to understand Dirac's extra states. Hermann Weyl suggested they corresponded to the recently discovered proton, the only elementary particle other than the electron known at the time. Dirac pointed out that Klein's negative-energy electrons could not convert themselves into positive protons and suggested that the extra states were all filled with electrons already. Then a proton would amount to a missing electron in these lower states. Robert Oppenheimer and separately Igor Tamm showed that this would make atoms unstable. Finally, in 1931, Dirac concluded that these states must correspond to a new "anti-electron" particle. In 1932 Carl Anderson experimentally observed these particles, later named positrons. Positron-electron creation Resolution of the paradox would require quantum field theory, which developed alongside quantum mechanics but at a slower pace due to its many complexities. The concept goes back to Max Planck's demonstration that Maxwell's classical electrodynamics, so successful in many applications, fails to predict the blackbody spectrum. Planck showed that the blackbody oscillators must be restricted to quantum transitions. In 1927, Dirac published his first work on quantum electrodynamics by using quantum field theory. With this foundation, Heisenberg, Jordan, and Pauli incorporated relativistic invariance in quantized Maxwell's equations in 1928 and 1929. However, it took another 10 years before the theory could be applied to the problem of the Klein paradox.
In 1941 Friedrich Hund showed that, under the conditions of the paradox, two currents of opposite charge are spontaneously generated at the step. In modern terminology, pairs of electrons and positrons are spontaneously created at the step potential. These results were confirmed in 1981 by Hansen and Ravndal using a more general treatment. Massless particles Consider a massless relativistic particle approaching a potential step of height with energy and momentum . The particle's wave function, , follows the time-independent Dirac equation, where is the Pauli matrix: Assuming the particle is propagating from the left, we obtain two solutions — one before the step, in region (1), and one under the potential, in region (2): where the coefficients , and are complex numbers. Both the incoming and transmitted wave functions are associated with positive group velocity, whereas the reflected wave function is associated with negative group velocity. We now want to calculate the transmission and reflection coefficients. They are derived from the probability amplitude currents. The definition of the probability current associated with the Dirac equation is: In this case: The transmission and reflection coefficients are: Continuity of the wave function at the step yields: So the transmission coefficient is 1 and there is no reflection. One interpretation of the paradox is that a potential step cannot reverse the direction of the group velocity of a massless relativistic particle. This explanation best suits the single-particle solution cited above. Other, more complex interpretations are suggested in the literature, in the context of quantum field theory, where the unrestrained tunnelling is shown to occur due to the existence of particle–antiparticle pairs at the potential. Massive case For the massive case, the calculations are similar to the above, and the results are as surprising as in the massless case: the transmission coefficient is always larger than zero, and approaches 1 as the potential step goes to infinity. The Klein zone If the energy of the particle is in the range , then partial reflection rather than total reflection will result. Resolutions for the massive case The traditional resolution uses particle–antiparticle pair production in the context of quantum field theory. Other cases These results were expanded to higher dimensions, and to other types of potentials, such as a linear step, a square barrier, a smooth potential, etc. Many experiments in electron transport in graphene rely on the Klein paradox for massless particles. See also List of paradoxes References Further reading Physical paradoxes
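To make the massless calculation above concrete, the boundary matching can be done symbolically. The sketch below assumes the conventional one-dimensional massless Dirac Hamiltonian H = c*sigma_x*p + V(x), for which right- and left-moving plane-wave spinors are proportional to (1, 1) and (1, -1); matching the two-component wave function at the step then forces r = 0 and t = 1, independent of the step height.

import sympy as sp

r, t = sp.symbols('r t')

# Two-component spinors for a 1D massless Dirac particle (assumed conventions:
# H = c*sigma_x*p + V, so sigma_x eigenvalue +1 means positive group velocity).
incident    = sp.Matrix([1,  1])        # right-moving, region (1)
reflected   = sp.Matrix([1, -1])        # left-moving,  region (1)
transmitted = sp.Matrix([1,  1])        # right-moving, region (2), any step height

# Continuity of the wave function at the step:
eqs = incident + r*reflected - t*transmitted
sol = sp.solve(list(eqs), [r, t], dict=True)[0]
print(sol)                              # {r: 0, t: 1}

# Probability currents j ~ psi^dagger sigma_x psi give transmission T = 1, reflection R = 0.
print(sol[t]**2, sol[r]**2)

Because the transmitted spinor is (1, 1) for any step height, the step indeed cannot reverse the group velocity, which is the single-particle reading of the paradox quoted above.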
0.789493
0.97177
0.767206
Physics education
Physics education or physics teaching refers to the education methods currently used to teach physics. The occupation is called physics educator or physics teacher. Physics education research refers to an area of pedagogical research that seeks to improve those methods. Historically, physics has been taught at the high school and college level primarily by the lecture method together with laboratory exercises aimed at verifying concepts taught in the lectures. These concepts are better understood when lectures are accompanied with demonstration, hand-on experiments, and questions that require students to ponder what will happen in an experiment and why. Students who participate in active learning for example with hands-on experiments learn through self-discovery. By trial and error they learn to change their preconceptions about phenomena in physics and discover the underlying concepts. Physics education is part of the broader area of science education. History In Ancient Greece, Aristotle wrote what is considered now as the first textbook of physics. Aristotle's ideas were taught unchanged until the Late Middle Ages, when scientists started making discoveries that didn't fit them. For example, Copernicus' discovery contradicted Aristotle's idea of an Earth-centric universe. Aristotle's ideas about motion weren't displaced until the end of the 17th century, when Newton published his ideas. Today's physics students often think of physics concepts in Aristotelian terms, despite being taught only Newtonian concepts. Teaching strategies Teaching strategies are the various techniques used to facilitate the education of students with different learning styles. The different teaching strategies are intended to help students develop critical thinking and engage with the material. The choice of teaching strategy depends on the concept being taught, and indeed on the interest of the students. Methods/Approaches for teaching physics Lecture: Lecturing is one of the more traditional ways of teaching science. Owing to the convenience of this method, and the fact that most teachers are taught by it, it remains popular in spite of certain limitations (compared to other methods, it does little to develop critical thinking and scientific attitude among students). This method is teacher centric. Recitation: Also known as the Socratic method. In this method, the student plays a greater role than they would in a lecture. The teacher asks questions with the aim of prompting the thoughts of the students. This method can be very effective in developing higher order thinking in pupils. To apply this strategy, the students should be partially informed about the content. The efficacy of the recitation method depends largely on the quality of the questions. This method is student centric. Demonstration: In this method, the teacher performs certain experiments, which students observe and ask questions about. After the demonstration, the teacher can explain the experiment further and test the students' understanding via questions. This method is an important one, as science is not an entirely theoretical subject. Lecture-cum-Demonstration: As its name suggests, this is a combination of two of the above methods: lecture and demonstration. The teacher performs the experiment and explains it simultaneously. By this method, the teacher can provide more information in less time. As with the demonstration method, the students only observe; they do not get any practical experience of their own. 
It is not possible to teach all topics by this method. Laboratory Activities: Laboratories have students conduct physics experiments and collect data by interacting with physics equipment. Generally, students follow instructions in a lab manual. These instructions often take students through an experiment step-by-step. Typical learning objectives include reinforcing the course content through real-world interaction (similar to demonstrations) and thinking like experimental physicists. Lately, there has been some effort to shift lab activities toward the latter objective by separating from the course content, having students make their own decisions, and calling to question the notion of a "correct" experimental result. Unlike the demonstration method, the laboratory method gives students practical experience performing experiments like professional scientists. However, it often requires a significant amount of time and resources to work properly. Problem-based learning: A group of 8-10 students and a tutor meet together to study a "case" or trigger problem. One student acts as a chair and one as a scribe to record the session. Students interact to understand the terminology and issues of the problem, discussing possible solutions and a set of learning objectives. The group breaks up for private study then return to share results. The approach has been used in many UK medical schools. The technique fosters independence, engagement, development of communication skill, and integration of new knowledge with real world issues. However, the technique requires more staff per student, staff willing to facilitate rather than lecture, and well designed and documented trigger scenarios. The technique has been shown to be effective in teaching physics. Research Physics education research is the study of how physics is taught and how students learn physics. It a subfield of educational research. Worldwide Physics education in Hong Kong Physics education in the United Kingdom See also American Association of Physics Teachers Balsa wood bridge Concept inventory Egg drop competition Feynman lectures Harvard Project Physics Learning Assistant Model List of physics concepts in primary and secondary education curricula Mousetrap car Physical Science Study Committee Physics First SAT Subject Test in Physics Physics Outreach Science education Teaching quantum mechanics Mathematics education Engineering education Discipline-based education research References Further reading PER Reviews: Miscellaneous: Education by subject Occupations
0.782464
0.980497
0.767204
Wave function collapse
In quantum mechanics, wave function collapse, also called reduction of the state vector, occurs when a wave function—initially in a superposition of several eigenstates—reduces to a single eigenstate due to interaction with the external world. This interaction is called an observation and is the essence of a measurement in quantum mechanics, which connects the wave function with classical observables such as position and momentum. Collapse is one of the two processes by which quantum systems evolve in time; the other is the continuous evolution governed by the Schrödinger equation. Calculations of quantum decoherence show that when a quantum system interacts with the environment, the superpositions apparently reduce to mixtures of classical alternatives. Significantly, the combined wave function of the system and environment continue to obey the Schrödinger equation throughout this apparent collapse. More importantly, this is not enough to explain actual wave function collapse, as decoherence does not reduce it to a single eigenstate. Historically, Werner Heisenberg was the first to use the idea of wave function reduction to explain quantum measurement. Mathematical description In quantum mechanics each measurable physical quantity of a quantum system is called an observable which, for example, could be the position and the momentum but also energy , components of spin, and so on. The observable acts as a linear function on the states of the system; its eigenvectors correspond to the quantum state (i.e. eigenstate) and the eigenvalues to the possible values of the observable. The collection of eigenstates/eigenvalue pairs represent all possible values of the observable. Writing for an eigenstate and for the corresponding observed value, any arbitrary state of the quantum system can be expressed as a vector using bra–ket notation: The kets specify the different available quantum "alternatives", i.e., particular quantum states. The wave function is a specific representation of a quantum state. Wave functions can therefore always be expressed as eigenstates of an observable though the converse is not necessarily true. Collapse To account for the experimental result that repeated measurements of a quantum system give the same results, the theory postulates a "collapse" or "reduction of the state vector" upon observation, abruptly converting an arbitrary state into a single component eigenstate of the observable: where the arrow represents a measurement of the observable corresponding to the basis. For any single event, only one eigenvalue is measured, chosen randomly from among the possible values. Meaning of the expansion coefficients The complex coefficients in the expansion of a quantum state in terms of eigenstates , can be written as an (complex) overlap of the corresponding eigenstate and the quantum state: They are called the probability amplitudes. The square modulus is the probability that a measurement of the observable yields the eigenstate . The sum of the probability over all possible outcomes must be one: As examples, individual counts in a double slit experiment with electrons appear at random locations on the detector; after many counts are summed the distribution shows a wave interference pattern. In a Stern-Gerlach experiment with silver atoms, each particle appears in one of two areas unpredictably, but the final conclusion has equal numbers of events in each area. This statistical aspect of quantum measurements differs fundamentally from classical mechanics. 
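The expansion coefficients and their statistical role can be made concrete with a short numerical sketch (illustrative only; the three-component state and its amplitudes are invented for the example). It normalizes a superposition, checks that the Born-rule probabilities sum to one, collapses the state onto a randomly selected eigenstate, and confirms that repeated trials reproduce the |c_i|^2 frequencies.

import numpy as np

rng = np.random.default_rng(0)

# An arbitrary (made-up) superposition over three eigenstates, then normalized.
psi = np.array([1.0 + 0.5j, -0.3j, 0.8])
psi = psi / np.linalg.norm(psi)

# Born rule: probability of observing eigenvalue i is |c_i|^2, and they sum to 1.
probs = np.abs(psi)**2
assert np.isclose(probs.sum(), 1.0)

# A single measurement picks one outcome at random and collapses the state
# onto the corresponding eigenstate.
outcome = rng.choice(len(psi), p=probs)
collapsed = np.zeros_like(psi)
collapsed[outcome] = 1.0

# Repeated measurements on identically prepared states reproduce |c_i|^2 statistically.
counts = np.bincount(rng.choice(len(psi), size=100_000, p=probs), minlength=len(psi))
print(probs, counts / counts.sum())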
In quantum mechanics the only information we have about a system is its wave function and measurements of its wave function can only give statistical information. Terminology The two terms "reduction of the state vector" (or "state reduction" for short) and "wave function collapse" are used to describe the same concept. A quantum state is a mathematical description of a quantum system; a quantum state vector uses Hilbert space vectors for the description. Reduction of the state vector replaces the full state vector with a single eigenstate of the observable. The term "wave function" is typically used for a different mathematical representation of the quantum state, one that uses spatial coordinates also called the "position representation". When the wave function representation is used, the "reduction" is called "wave function collapse". The measurement problem The Schrödinger equation describes quantum systems but does not describe their measurement. Solution to the equations include all possible observable values for measurements, but measurements only result in one definite outcome. This difference is called the measurement problem of quantum mechanics. To predict measurement outcomes from quantum solutions, the orthodox interpretation of quantum theory postulates wave function collapse and uses the Born rule to compute the probable outcomes. Despite the widespread quantitative success of these postulates scientists remain dissatisfied and have sought more detailed physical models. Rather than suspending the Schrodinger equation during the process of measurement, the measurement apparatus should be included and governed by the laws of quantum mechanics. Physical approaches to collapse Quantum theory offers no dynamical description of the "collapse" of the wave function. Viewed as a statistical theory, no description is expected. As Fuchs and Peres put it, "collapse is something that happens in our description of the system, not to the system itself". Various interpretations of quantum mechanics attempt to provide a physical model for collapse. Three treatments of collapse can be found among the common interpretations. The first group includes hidden-variable theories like de Broglie–Bohm theory; here random outcomes only result from unknown values of hidden variables. Results from tests of Bell's theorem shows that these variables would need to be non-local. The second group models measurement as quantum entanglement between the quantum state and the measurement apparatus. This results in a simulation of classical statistics called quantum decoherence. This group includes the many-worlds interpretation and consistent histories models. The third group postulates additional, but as yet undetected, physical basis for the randomness; this group includes for example the objective-collapse interpretations. While models in all groups have contributed to better understanding of quantum theory, no alternative explanation for individual events has emerged as more useful than collapse followed by statistical prediction with the Born rule. The significance ascribed to the wave function varies from interpretation to interpretation and even within an interpretation (such as the Copenhagen interpretation). If the wave function merely encodes an observer's knowledge of the universe, then the wave function collapse corresponds to the receipt of new information. 
This is somewhat analogous to the situation in classical physics, except that the classical "wave function" does not necessarily obey a wave equation. If the wave function is physically real, in some sense and to some extent, then the collapse of the wave function is also seen as a real process, to the same extent. Quantum decoherence Quantum decoherence explains why a system interacting with an environment transitions from being a pure state, exhibiting superpositions, to a mixed state, an incoherent combination of classical alternatives. This transition is fundamentally reversible, as the combined state of system and environment is still pure, but for all practical purposes irreversible in the same sense as in the second law of thermodynamics: the environment is a very large and complex quantum system, and it is not feasible to reverse their interaction. Decoherence is thus very important for explaining the classical limit of quantum mechanics, but cannot explain wave function collapse, as all classical alternatives are still present in the mixed state, and wave function collapse selects only one of them. History The concept of wavefunction collapse was introduced by Werner Heisenberg in his 1927 paper on the uncertainty principle, "Über den anschaulichen Inhalt der quantentheoretischen Kinematik und Mechanik", and incorporated into the mathematical formulation of quantum mechanics by John von Neumann, in his 1932 treatise Mathematische Grundlagen der Quantenmechanik. Heisenberg did not try to specify exactly what the collapse of the wavefunction meant. However, he emphasized that it should not be understood as a physical process. Niels Bohr also repeatedly cautioned that we must give up a "pictorial representation", and perhaps also interpreted collapse as a formal, not physical, process. The "Copenhagen" model espoused by Heisenberg and Bohr separated the quantum system from the classical measurement apparatus. In 1932 von Neumann took a more formal approach, developing an "ideal" measurement scheme that postulated that there were two processes of wave function change: The probabilistic, non-unitary, non-local, discontinuous change brought about by observation and measurement (state reduction or collapse). The deterministic, unitary, continuous time evolution of an isolated system that obeys the Schrödinger equation (or a relativistic equivalent, i.e. the Dirac equation). In 1957 Hugh Everett III proposed a model of quantum mechanics that dropped von Neumann's first postulate. Everett observed that the measurement apparatus was also a quantum system and its quantum interaction with the system under observation should determine the results. He proposed that the discontinuous change is instead a splitting of a wave function representing the universe. While Everett's approach rekindled interest in foundational quantum mechanics, it left core issues unresolved. Two key issues relate to origin of the observed classical results: what causes quantum systems to appear classical and to resolve with the observed probabilities of the Born rule. Beginning in 1970 H. Dieter Zeh sought a detailed quantum decoherence model for the discontinuous change without postulating collapse. Further work by Wojciech H. Zurek in 1980 lead eventually to a large number of papers on many aspects of the concept. Decoherence assumes that every quantum system interacts quantum mechanically with its environment and such interaction is not separable from the system, a concept called an "open system". 
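The decoherence mechanism described earlier in this section, in which entanglement with an environment suppresses interference without selecting an outcome, can be illustrated with a toy model. The sketch below assumes a single system qubit whose two branches become correlated with n environment qubits, each pair of conditional environment states having the same overlap; the reduced density matrix is written down analytically rather than obtained by an explicit partial trace.

import numpy as np

def reduced_density_matrix(a, b, overlap, n_env):
    """Qubit in a|0> + b|1> entangled with n_env environment qubits whose two
    conditional 'pointer' states have inner product `overlap` each.
    Returns the reduced density matrix of the system qubit (analytic partial trace)."""
    gamma = overlap**n_env                 # <E1|E0> for the whole environment
    return np.array([[abs(a)**2,                    a*np.conj(b)*gamma],
                     [np.conj(a)*b*np.conj(gamma),  abs(b)**2]])

a, b = 1/np.sqrt(2), 1/np.sqrt(2)
for n in (0, 5, 50):
    rho = reduced_density_matrix(a, b, overlap=0.9, n_env=n)
    print(n, abs(rho[0, 1]))   # off-diagonal coherence shrinks as 0.9**n

# Note: both diagonal entries survive. Decoherence yields a classical-looking
# mixture but does not pick a single outcome, which is why it does not by
# itself explain collapse.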
Decoherence has been shown to work very quickly and within a minimal environment, but as yet it has not succeeded in providing a detailed model replacing the collapse postulate of orthodox quantum mechanics. By explicitly dealing with the interaction of object and measuring instrument, von Neumann described a quantum mechanical measurement scheme consistent with wave function collapse. However, he did not prove the necessity of such a collapse. Although von Neumann's projection postulate is often presented as a normative description of quantum measurement, it was conceived by taking into account experimental evidence available during the 1930s (in particular, Compton scattering was paradigmatic). Later work discussed so-called measurements of the second kind, that is to say, measurements that will not give the same value when immediately repeated, as opposed to the more easily discussed measurements of the first kind, which will. See also Arrow of time Interpretations of quantum mechanics Quantum decoherence Quantum interference Quantum Zeno effect Schrödinger's cat Stern–Gerlach experiment Wave function collapse (algorithm) References External links Concepts in physics Quantum measurement
0.769363
0.997191
0.767202
Molecular dynamics
Molecular dynamics (MD) is a computer simulation method for analyzing the physical movements of atoms and molecules. The atoms and molecules are allowed to interact for a fixed period of time, giving a view of the dynamic "evolution" of the system. In the most common version, the trajectories of atoms and molecules are determined by numerically solving Newton's equations of motion for a system of interacting particles, where forces between the particles and their potential energies are often calculated using interatomic potentials or molecular mechanical force fields. The method is applied mostly in chemical physics, materials science, and biophysics. Because molecular systems typically consist of a vast number of particles, it is impossible to determine the properties of such complex systems analytically; MD simulation circumvents this problem by using numerical methods. However, long MD simulations are mathematically ill-conditioned, generating cumulative errors in numerical integration that can be minimized with proper selection of algorithms and parameters, but not eliminated. For systems that obey the ergodic hypothesis, the evolution of one molecular dynamics simulation may be used to determine the macroscopic thermodynamic properties of the system: the time averages of an ergodic system correspond to microcanonical ensemble averages. MD has also been termed "statistical mechanics by numbers" and "Laplace's vision of Newtonian mechanics" of predicting the future by animating nature's forces and allowing insight into molecular motion on an atomic scale. History MD was originally developed in the early 1950s, following earlier successes with Monte Carlo simulations (which themselves date back to the eighteenth century, in the Buffon's needle problem for example), but was popularized for statistical mechanics at Los Alamos National Laboratory by Marshall Rosenbluth and Nicholas Metropolis in what is known today as the Metropolis–Hastings algorithm. Interest in the time evolution of N-body systems dates much earlier to the seventeenth century, beginning with Isaac Newton, and continued into the following century largely with a focus on celestial mechanics and issues such as the stability of the solar system. Many of the numerical methods used today were developed during this time period, which predates the use of computers; for example, the most common integration algorithm used today, the Verlet integration algorithm, was used as early as 1791 by Jean Baptiste Joseph Delambre. Numerical calculations with these algorithms can be considered to be MD done "by hand". As early as 1941, integration of the many-body equations of motion was carried out with analog computers. Some undertook the labor-intensive work of modeling atomic motion by constructing physical models, e.g., using macroscopic spheres. The aim was to arrange them in such a way as to replicate the structure of a liquid and use this to examine its behavior. J.D. Bernal describes this process in 1962, writing:... I took a number of rubber balls and stuck them together with rods of a selection of different lengths ranging from 2.75 to 4 inches. I tried to do this in the first place as casually as possible, working in my own office, being interrupted every five minutes or so and not remembering what I had done before the interruption. Following the discovery of microscopic particles and the development of computers, interest expanded beyond the proving ground of gravitational systems to the statistical properties of matter.
In an attempt to understand the origin of irreversibility, Enrico Fermi proposed in 1953, and published in 1955, the use of the early computer MANIAC I, also at Los Alamos National Laboratory, to solve the time evolution of the equations of motion for a many-body system subject to several choices of force laws. Today, this seminal work is known as the Fermi–Pasta–Ulam–Tsingou problem. The time evolution of the energy from the original work is shown in the figure to the right. In 1957, Berni Alder and Thomas Wainwright used an IBM 704 computer to simulate perfectly elastic collisions between hard spheres. In 1960, in perhaps the first realistic simulation of matter, J.B. Gibson et al. simulated radiation damage of solid copper by using a Born–Mayer type of repulsive interaction along with a cohesive surface force. In 1964, Aneesur Rahman published simulations of liquid argon that used a Lennard-Jones potential; calculations of system properties, such as the coefficient of self-diffusion, compared well with experimental data. Today, the Lennard-Jones potential is still one of the most frequently used intermolecular potentials. It is used for describing simple substances (a.k.a. Lennard-Jonesium) for conceptual and model studies and as a building block in many force fields of real substances. Areas of application and limits First used in theoretical physics, the molecular dynamics method gained popularity in materials science soon afterward, and since the 1970s it has also been commonly used in biochemistry and biophysics. MD is frequently used to refine 3-dimensional structures of proteins and other macromolecules based on experimental constraints from X-ray crystallography or NMR spectroscopy. In physics, MD is used to examine the dynamics of atomic-level phenomena that cannot be observed directly, such as thin film growth and ion subplantation, and to examine the physical properties of nanotechnological devices that have not or cannot yet be created. In biophysics and structural biology, the method is frequently applied to study the motions of macromolecules such as proteins and nucleic acids, which can be useful for interpreting the results of certain biophysical experiments and for modeling interactions with other molecules, as in ligand docking. In principle, MD can be used for ab initio prediction of protein structure by simulating folding of the polypeptide chain from a random coil. The results of MD simulations can be tested through comparison to experiments that measure molecular dynamics, of which a popular method is NMR spectroscopy. MD-derived structure predictions can be tested through community-wide experiments in Critical Assessment of Protein Structure Prediction (CASP), although the method has historically had limited success in this area. Michael Levitt, who shared the Nobel Prize partly for the application of MD to proteins, wrote in 1999 that CASP participants usually did not use the method due to "... a central embarrassment of molecular mechanics, namely that energy minimization or molecular dynamics generally leads to a model that is less like the experimental structure". Improvements in computational resources permitting more and longer MD trajectories, combined with modern improvements in the quality of force field parameters, have yielded some improvements in both structure prediction and homology model refinement, without reaching the point of practical utility in these areas; many identify force field parameters as a key area for further development. 
MD simulation has been reported for pharmacophore development and drug design. For example, Pinto et al. implemented MD simulations of Bcl-xL complexes to calculate average positions of critical amino acids involved in ligand binding. Carlson et al. implemented molecular dynamics simulations to identify compounds that complement a receptor while causing minimal disruption to the conformation and flexibility of the active site. Snapshots of the protein at constant time intervals during the simulation were overlaid to identify conserved binding regions (conserved in at least three out of eleven frames) for pharmacophore development. Spyrakis et al. relied on a workflow of MD simulations, fingerprints for ligands and proteins (FLAP) and linear discriminant analysis (LDA) to identify the best ligand-protein conformations to act as pharmacophore templates based on retrospective ROC analysis of the resulting pharmacophores. In an attempt to ameliorate structure-based drug discovery modeling, vis-à-vis the need for many modeled compounds, Hatmal et al. proposed a combination of MD simulation and ligand-receptor intermolecular contacts analysis to discern critical intermolecular contacts (binding interactions) from redundant ones in a single ligand–protein complex. Critical contacts can then be converted into pharmacophore models that can be used for virtual screening. An important factor is intramolecular hydrogen bonds, which are not explicitly included in modern force fields, but described as Coulomb interactions of atomic point charges. This is a crude approximation because hydrogen bonds have a partially quantum mechanical and chemical nature. Furthermore, electrostatic interactions are usually calculated using the dielectric constant of a vacuum, even though the surrounding aqueous solution has a much higher dielectric constant. Thus, using the macroscopic dielectric constant at short interatomic distances is questionable. Finally, van der Waals interactions in MD are usually described by Lennard-Jones potentials based on the Fritz London theory that is only applicable in a vacuum. However, all types of van der Waals forces are ultimately of electrostatic origin and therefore depend on dielectric properties of the environment. The direct measurement of attraction forces between different materials (as Hamaker constant) shows that "the interaction between hydrocarbons across water is about 10% of that across vacuum". The environment-dependence of van der Waals forces is neglected in standard simulations, but can be included by developing polarizable force fields. Design constraints The design of a molecular dynamics simulation should account for the available computational power. Simulation size (n = number of particles), timestep, and total time duration must be selected so that the calculation can finish within a reasonable time period. However, the simulations should be long enough to be relevant to the time scales of the natural processes being studied. To make statistically valid conclusions from the simulations, the time span simulated should match the kinetics of the natural process. Otherwise, it is analogous to making conclusions about how a human walks when only looking at less than one footstep. Most scientific publications about the dynamics of proteins and DNA use data from simulations spanning nanoseconds (10−9 s) to microseconds (10−6 s). To obtain these simulations, several CPU-days to CPU-years are needed. 
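The scale of the computational cost discussed above can be made explicit with a back-of-the-envelope sketch. All numbers below are illustrative assumptions (a 1 fs timestep, a microsecond target, a 100,000-atom system, and a rough neighbor count inside a ~1 nm cutoff), not benchmarks from the literature.

# Back-of-the-envelope cost estimates for the design constraints discussed above
# (all numbers are illustrative assumptions, not benchmarks).

femtosecond = 1e-15
timestep = 1 * femtosecond          # typical classical MD timestep
simulated_time = 1e-6               # 1 microsecond target
n_steps = simulated_time / timestep
print(f"integration steps needed: {n_steps:.0e}")          # ~1e9 steps

N = 100_000                          # number of particles (solvated-protein scale, assumed)
all_pairs = N * (N - 1) // 2         # O(N^2) if every pair is evaluated explicitly
neighbors_per_atom = 300             # order-of-magnitude assumption for a ~1 nm cutoff
cutoff_pairs = N * neighbors_per_atom // 2
print(f"all pairs: {all_pairs:.1e}, cutoff pairs: {cutoff_pairs:.1e}")

The gap between the all-pairs and cutoff counts is what motivates the cutoff schemes and Ewald-type electrostatics methods mentioned below.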
Parallel algorithms allow the load to be distributed among CPUs; an example is the spatial or force decomposition algorithm. During a classical MD simulation, the most CPU intensive task is the evaluation of the potential as a function of the particles' internal coordinates. Within that energy evaluation, the most expensive one is the non-bonded or non-covalent part. In big O notation, common molecular dynamics simulations scale by if all pair-wise electrostatic and van der Waals interactions must be accounted for explicitly. This computational cost can be reduced by employing electrostatics methods such as particle mesh Ewald summation, particle-particle-particle mesh (P3M), or good spherical cutoff methods. Another factor that impacts total CPU time needed by a simulation is the size of the integration timestep. This is the time length between evaluations of the potential. The timestep must be chosen small enough to avoid discretization errors (i.e., smaller than the period related to fastest vibrational frequency in the system). Typical timesteps for classical MD are on the order of 1 femtosecond (10−15 s). This value may be extended by using algorithms such as the SHAKE constraint algorithm, which fix the vibrations of the fastest atoms (e.g., hydrogens) into place. Multiple time scale methods have also been developed, which allow extended times between updates of slower long-range forces. For simulating molecules in a solvent, a choice should be made between an explicit and implicit solvent. Explicit solvent particles (such as the TIP3P, SPC/E and SPC-f water models) must be calculated expensively by the force field, while implicit solvents use a mean-field approach. Using an explicit solvent is computationally expensive, requiring inclusion of roughly ten times more particles in the simulation. But the granularity and viscosity of explicit solvent is essential to reproduce certain properties of the solute molecules. This is especially important to reproduce chemical kinetics. In all kinds of molecular dynamics simulations, the simulation box size must be large enough to avoid boundary condition artifacts. Boundary conditions are often treated by choosing fixed values at the edges (which may cause artifacts), or by employing periodic boundary conditions in which one side of the simulation loops back to the opposite side, mimicking a bulk phase (which may cause artifacts too). Microcanonical ensemble (NVE) In the microcanonical ensemble, the system is isolated from changes in moles (N), volume (V), and energy (E). It corresponds to an adiabatic process with no heat exchange. A microcanonical molecular dynamics trajectory may be seen as an exchange of potential and kinetic energy, with total energy being conserved. For a system of N particles with coordinates and velocities , the following pair of first order differential equations may be written in Newton's notation as The potential energy function of the system is a function of the particle coordinates . It is referred to simply as the potential in physics, or the force field in chemistry. The first equation comes from Newton's laws of motion; the force acting on each particle in the system can be calculated as the negative gradient of . For every time step, each particle's position and velocity may be integrated with a symplectic integrator method such as Verlet integration. The time evolution of and is called a trajectory. 
Given the initial positions (e.g., from theoretical knowledge) and velocities (e.g., randomized Gaussian), we can calculate all future (or past) positions and velocities. One frequent source of confusion is the meaning of temperature in MD. Commonly we have experience with macroscopic temperatures, which involve a huge number of particles, but temperature is a statistical quantity. If there is a large enough number of atoms, statistical temperature can be estimated from the instantaneous temperature, which is found by equating the kinetic energy of the system to nkBT/2, where n is the number of degrees of freedom of the system. A temperature-related phenomenon arises due to the small number of atoms that are used in MD simulations. For example, consider simulating the growth of a copper film starting with a substrate containing 500 atoms and a deposition energy of 100 eV. In the real world, the 100 eV from the deposited atom would rapidly be transported through and shared among a large number of atoms ( or more) with no big change in temperature. When there are only 500 atoms, however, the substrate is almost immediately vaporized by the deposition. Something similar happens in biophysical simulations. The temperature of the system in NVE is naturally raised when macromolecules such as proteins undergo exothermic conformational changes and binding. Canonical ensemble (NVT) In the canonical ensemble, amount of substance (N), volume (V) and temperature (T) are conserved. It is also sometimes called constant temperature molecular dynamics (CTMD). In NVT, the energy of endothermic and exothermic processes is exchanged with a thermostat. A variety of thermostat algorithms are available to add and remove energy from the boundaries of an MD simulation in a more or less realistic way, approximating the canonical ensemble. Popular methods to control temperature include velocity rescaling, the Nosé–Hoover thermostat, Nosé–Hoover chains, the Berendsen thermostat, the Andersen thermostat and Langevin dynamics. The Berendsen thermostat might introduce the flying ice cube effect, which leads to unphysical translations and rotations of the simulated system. It is not trivial to obtain a canonical ensemble distribution of conformations and velocities using these algorithms. How this depends on system size, thermostat choice, thermostat parameters, time step and integrator is the subject of many articles in the field. Isothermal–isobaric (NPT) ensemble In the isothermal–isobaric ensemble, amount of substance (N), pressure (P) and temperature (T) are conserved. In addition to a thermostat, a barostat is needed. It corresponds most closely to laboratory conditions with a flask open to ambient temperature and pressure. In the simulation of biological membranes, isotropic pressure control is not appropriate. For lipid bilayers, pressure control occurs under constant membrane area (NPAT) or constant surface tension "gamma" (NPγT). Generalized ensembles The replica exchange method is a generalized ensemble. It was originally created to deal with the slow dynamics of disordered spin systems. It is also called parallel tempering. The replica exchange MD (REMD) formulation tries to overcome the multiple-minima problem by exchanging the temperature of non-interacting replicas of the system running at several temperatures. Potentials in MD simulations A molecular dynamics simulation requires the definition of a potential function, or a description of the terms by which the particles in the simulation will interact. 
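A minimal NVE integration loop along these lines can be written in a few dozen lines. The sketch below is illustrative only: it uses reduced Lennard-Jones units (sigma = epsilon = mass = kB = 1) as the potential function, a plain velocity Verlet update, and the instantaneous-temperature estimate 2*KE/(n*kB) with n taken as 3N for brevity (center-of-mass motion is not removed).

import numpy as np

# Reduced Lennard-Jones units: sigma = epsilon = mass = kB = 1 (assumed for brevity).
def lj_forces(pos):
    """Pairwise Lennard-Jones forces and potential energy for a small cluster."""
    n = len(pos)
    forces = np.zeros_like(pos)
    potential = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            rij = pos[i] - pos[j]
            r2 = rij @ rij
            inv6 = 1.0 / r2**3
            potential += 4.0 * (inv6**2 - inv6)
            f = 24.0 * (2.0 * inv6**2 - inv6) / r2 * rij
            forces[i] += f
            forces[j] -= f
    return forces, potential

def velocity_verlet(pos, vel, dt, n_steps):
    forces, pot = lj_forces(pos)
    for _ in range(n_steps):
        pos = pos + vel * dt + 0.5 * forces * dt**2
        new_forces, pot = lj_forces(pos)
        vel = vel + 0.5 * (forces + new_forces) * dt
        forces = new_forces
    kinetic = 0.5 * np.sum(vel**2)
    n_dof = pos.size                      # 3N (center-of-mass motion not removed)
    temperature = 2.0 * kinetic / n_dof   # instantaneous temperature, kB = 1
    return kinetic + pot, temperature

# A dimer released from rest slightly outside the potential minimum (illustrative setup).
positions = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0]])
velocities = np.zeros_like(positions)
for steps in (0, 1000, 5000):
    e_tot, t_inst = velocity_verlet(positions, velocities, dt=0.002, n_steps=steps)
    print(steps, e_tot, t_inst)          # total energy stays (nearly) constant in NVE

Running the loop shows the exchange of kinetic and potential energy at (nearly) constant total energy that characterizes the microcanonical trajectory described above.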
In chemistry and biology this is usually referred to as a force field and in materials physics as an interatomic potential. Potentials may be defined at many levels of physical accuracy; those most commonly used in chemistry are based on molecular mechanics and embody a classical mechanics treatment of particle-particle interactions that can reproduce structural and conformational changes but usually cannot reproduce chemical reactions. The reduction from a fully quantum description to a classical potential entails two main approximations. The first one is the Born–Oppenheimer approximation, which states that the dynamics of electrons are so fast that they can be considered to react instantaneously to the motion of their nuclei. As a consequence, they may be treated separately. The second one treats the nuclei, which are much heavier than electrons, as point particles that follow classical Newtonian dynamics. In classical molecular dynamics, the effect of the electrons is approximated as one potential energy surface, usually representing the ground state. When finer levels of detail are needed, potentials based on quantum mechanics are used; some methods attempt to create hybrid classical/quantum potentials where the bulk of the system is treated classically but a small region is treated as a quantum system, usually undergoing a chemical transformation. Empirical potentials Empirical potentials used in chemistry are frequently called force fields, while those used in materials physics are called interatomic potentials. Most force fields in chemistry are empirical and consist of a summation of bonded forces associated with chemical bonds, bond angles, and bond dihedrals, and non-bonded forces associated with van der Waals forces and electrostatic charge. Empirical potentials represent quantum-mechanical effects in a limited way through ad hoc functional approximations. These potentials contain free parameters such as atomic charge, van der Waals parameters reflecting estimates of atomic radius, and equilibrium bond length, angle, and dihedral; these are obtained by fitting against detailed electronic calculations (quantum chemical simulations) or experimental physical properties such as elastic constants, lattice parameters and spectroscopic measurements. Because of the non-local nature of non-bonded interactions, they involve at least weak interactions between all particles in the system. Its calculation is normally the bottleneck in the speed of MD simulations. To lower the computational cost, force fields employ numerical approximations such as shifted cutoff radii, reaction field algorithms, particle mesh Ewald summation, or the newer particle–particle-particle–mesh (P3M). Chemistry force fields commonly employ preset bonding arrangements (an exception being ab initio dynamics), and thus are unable to model the process of chemical bond breaking and reactions explicitly. On the other hand, many of the potentials used in physics, such as those based on the bond order formalism can describe several different coordinations of a system and bond breaking. Examples of such potentials include the Brenner potential for hydrocarbons and its further developments for the C-Si-H and C-O-H systems. The ReaxFF potential can be considered a fully reactive hybrid between bond order potentials and chemistry force fields. Pair potentials versus many-body potentials The potential functions representing the non-bonded energy are formulated as a sum over interactions between the particles of the system. 
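As a concrete illustration of such a pairwise non-bonded sum, the sketch below evaluates the Lennard-Jones 6–12 potential discussed in the following paragraph with a naive double loop, which also makes the O(n²) cost of explicit pair interactions visible. Reduced units and the parameter values are assumptions for the example only; production codes use neighbour lists, cutoff corrections, or Ewald-type methods instead.

    import numpy as np

    def lj_energy(positions, epsilon=1.0, sigma=1.0, cutoff=2.5):
        # Total non-bonded energy from the 6-12 Lennard-Jones pair potential,
        # summed over all pairs within a spherical cutoff (reduced units).
        n = len(positions)
        energy = 0.0
        for i in range(n - 1):
            for j in range(i + 1, n):
                r = np.linalg.norm(positions[i] - positions[j])
                if r < cutoff:
                    sr6 = (sigma / r) ** 6
                    energy += 4.0 * epsilon * (sr6 * sr6 - sr6)  # 4e[(s/r)^12 - (s/r)^6]
        return energy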
The simplest choice, employed in many popular force fields, is the "pair potential", in which the total potential energy can be calculated from the sum of energy contributions between pairs of atoms. Therefore, these force fields are also called "additive force fields". An example of such a pair potential is the non-bonded Lennard-Jones potential (also termed the 6–12 potential), used for calculating van der Waals forces. Another example is the Born (ionic) model of the ionic lattice. The first term in the next equation is Coulomb's law for a pair of ions, the second term is the short-range repulsion explained by Pauli's exclusion principle and the final term is the dispersion interaction term. Usually, a simulation only includes the dipolar term, although sometimes the quadrupolar term is also included. When nl = 6, this potential is also called the Coulomb–Buckingham potential. In many-body potentials, the potential energy includes the effects of three or more particles interacting with each other. In simulations with pairwise potentials, global interactions in the system also exist, but they occur only through pairwise terms. In many-body potentials, the potential energy cannot be found by a sum over pairs of atoms, as these interactions are calculated explicitly as a combination of higher-order terms. In the statistical view, the dependency between the variables cannot in general be expressed using only pairwise products of the degrees of freedom. For example, the Tersoff potential, which was originally used to simulate carbon, silicon, and germanium, and has since been used for a wide range of other materials, involves a sum over groups of three atoms, with the angles between the atoms being an important factor in the potential. Other examples are the embedded-atom method (EAM), the EDIP, and the Tight-Binding Second Moment Approximation (TBSMA) potentials, where the electron density of states in the region of an atom is calculated from a sum of contributions from surrounding atoms, and the potential energy contribution is then a function of this sum. Semi-empirical potentials Semi-empirical potentials make use of the matrix representation from quantum mechanics. However, the values of the matrix elements are found through empirical formulae that estimate the degree of overlap of specific atomic orbitals. The matrix is then diagonalized to determine the occupancy of the different atomic orbitals, and empirical formulae are used once again to determine the energy contributions of the orbitals. There are a wide variety of semi-empirical potentials, termed tight-binding potentials, which vary according to the atoms being modeled. Polarizable potentials Most classical force fields implicitly include the effect of polarizability, e.g., by scaling up the partial charges obtained from quantum chemical calculations. These partial charges are stationary with respect to the mass of the atom. But molecular dynamics simulations can explicitly model polarizability with the introduction of induced dipoles through different methods, such as Drude particles or fluctuating charges. This allows for a dynamic redistribution of charge between atoms which responds to the local chemical environment. For many years, polarizable MD simulations have been touted as the next generation. For homogenous liquids such as water, increased accuracy has been achieved through the inclusion of polarizability. Some promising results have also been achieved for proteins. 
However, it is still uncertain how to best approximate polarizability in a simulation. The point becomes more important when a particle experiences different environments during its simulation trajectory, e.g. translocation of a drug through a cell membrane. Potentials in ab initio methods In classical molecular dynamics, one potential energy surface (usually the ground state) is represented in the force field. This is a consequence of the Born–Oppenheimer approximation. In excited states, chemical reactions or when a more accurate representation is needed, electronic behavior can be obtained from first principles using a quantum mechanical method, such as density functional theory. This is named Ab Initio Molecular Dynamics (AIMD). Due to the cost of treating the electronic degrees of freedom, the computational burden of these simulations is far higher than classical molecular dynamics. For this reason, AIMD is typically limited to smaller systems and shorter times. Ab initio quantum mechanical and chemical methods may be used to calculate the potential energy of a system on the fly, as needed for conformations in a trajectory. This calculation is usually made in the close neighborhood of the reaction coordinate. Although various approximations may be used, these are based on theoretical considerations, not on empirical fitting. Ab initio calculations produce a vast amount of information that is not available from empirical methods, such as density of electronic states or other electronic properties. A significant advantage of using ab initio methods is the ability to study reactions that involve breaking or formation of covalent bonds, which correspond to multiple electronic states. Moreover, ab initio methods also allow recovering effects beyond the Born–Oppenheimer approximation using approaches like mixed quantum-classical dynamics. Hybrid QM/MM QM (quantum-mechanical) methods are very powerful. However, they are computationally expensive, while the MM (classical or molecular mechanics) methods are fast but suffer from several limits (require extensive parameterization; energy estimates obtained are not very accurate; cannot be used to simulate reactions where covalent bonds are broken/formed; and are limited in their abilities for providing accurate details regarding the chemical environment). A new class of method has emerged that combines the good points of QM (accuracy) and MM (speed) calculations. These methods are termed mixed or hybrid quantum-mechanical and molecular mechanics methods (hybrid QM/MM). The most important advantage of hybrid QM/MM method is the speed. The cost of doing classical molecular dynamics (MM) in the most straightforward case scales O(n2), where n is the number of atoms in the system. This is mainly due to electrostatic interactions term (every particle interacts with every other particle). However, use of cutoff radius, periodic pair-list updates and more recently the variations of the particle-mesh Ewald's (PME) method has reduced this to between O(n) to O(n2). In other words, if a system with twice as many atoms is simulated then it would take between two and four times as much computing power. On the other hand, the simplest ab initio calculations typically scale O(n3) or worse (restricted Hartree–Fock calculations have been suggested to scale ~O(n2.7)). To overcome the limit, a small part of the system is treated quantum-mechanically (typically active-site of an enzyme) and the remaining system is treated classically. 
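The scaling argument above reduces to a one-line cost estimate: if the run time grows as n^p, doubling the number of atoms multiplies the cost by 2^p. The exponents below follow the values quoted in the text (between 1 and 2 for classical MM with cutoffs or PME, roughly 2.7 for restricted Hartree–Fock, 3 for simple ab initio methods); the concrete p = 1.5 entry is only an illustrative midpoint.

    # Relative cost of doubling the system size when run time scales as n**p:
    # T(2n) / T(n) = 2**p  (illustrative arithmetic only)
    for method, p in [("classical MM (cutoffs/PME)", 1.5),
                      ("restricted Hartree-Fock", 2.7),
                      ("simple ab initio", 3.0)]:
        print(f"{method}: {2 ** p:.1f}x")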
In more sophisticated implementations, QM/MM methods exist to treat both light nuclei susceptible to quantum effects (such as hydrogens) and electronic states. This allows generating hydrogen wave-functions (similar to electronic wave-functions). This methodology has been useful in investigating phenomena such as hydrogen tunneling. One example where QM/MM methods have provided new discoveries is the calculation of hydride transfer in the enzyme liver alcohol dehydrogenase. In this case, quantum tunneling is important for the hydrogen, as it determines the reaction rate. Coarse-graining and reduced representations At the other end of the detail scale are coarse-grained and lattice models. Instead of explicitly representing every atom of the system, one uses "pseudo-atoms" to represent groups of atoms. MD simulations on very large systems may require such large computer resources that they cannot easily be studied by traditional all-atom methods. Similarly, simulations of processes on long timescales (beyond about 1 microsecond) are prohibitively expensive, because they require so many time steps. In these cases, one can sometimes tackle the problem by using reduced representations, which are also called coarse-grained models. Examples for coarse graining (CG) methods are discontinuous molecular dynamics (CG-DMD) and Go-models. Coarse-graining is done sometimes taking larger pseudo-atoms. Such united atom approximations have been used in MD simulations of biological membranes. Implementation of such approach on systems where electrical properties are of interest can be challenging owing to the difficulty of using a proper charge distribution on the pseudo-atoms. The aliphatic tails of lipids are represented by a few pseudo-atoms by gathering 2 to 4 methylene groups into each pseudo-atom. The parameterization of these very coarse-grained models must be done empirically, by matching the behavior of the model to appropriate experimental data or all-atom simulations. Ideally, these parameters should account for both enthalpic and entropic contributions to free energy in an implicit way. When coarse-graining is done at higher levels, the accuracy of the dynamic description may be less reliable. But very coarse-grained models have been used successfully to examine a wide range of questions in structural biology, liquid crystal organization, and polymer glasses. Examples of applications of coarse-graining: protein folding and protein structure prediction studies are often carried out using one, or a few, pseudo-atoms per amino acid; liquid crystal phase transitions have been examined in confined geometries and/or during flow using the Gay-Berne potential, which describes anisotropic species; Polymer glasses during deformation have been studied using simple harmonic or FENE springs to connect spheres described by the Lennard-Jones potential; DNA supercoiling has been investigated using 1–3 pseudo-atoms per basepair, and at even lower resolution; Packaging of double-helical DNA into bacteriophage has been investigated with models where one pseudo-atom represents one turn (about 10 basepairs) of the double helix; RNA structure in the ribosome and other large systems has been modeled with one pseudo-atom per nucleotide. The simplest form of coarse-graining is the united atom (sometimes called extended atom) and was used in most early MD simulations of proteins, lipids, and nucleic acids. 
For example, instead of treating all four atoms of a CH3 methyl group explicitly (or all three atoms of CH2 methylene group), one represents the whole group with one pseudo-atom. It must, of course, be properly parameterized so that its van der Waals interactions with other groups have the proper distance-dependence. Similar considerations apply to the bonds, angles, and torsions in which the pseudo-atom participates. In this kind of united atom representation, one typically eliminates all explicit hydrogen atoms except those that have the capability to participate in hydrogen bonds (polar hydrogens). An example of this is the CHARMM 19 force-field. The polar hydrogens are usually retained in the model, because proper treatment of hydrogen bonds requires a reasonably accurate description of the directionality and the electrostatic interactions between the donor and acceptor groups. A hydroxyl group, for example, can be both a hydrogen bond donor, and a hydrogen bond acceptor, and it would be impossible to treat this with one OH pseudo-atom. About half the atoms in a protein or nucleic acid are non-polar hydrogens, so the use of united atoms can provide a substantial savings in computer time. Machine Learning Force Fields Machine Learning Force Fields] (MLFFs) represent one approach to modeling interatomic interactions in molecular dynamics simulations. MLFFs can achieve accuracy close to that of ab initio methods. Once trained, MLFFs are much faster than direct quantum mechanical calculations. MLFFs address the limitations of traditional force fields by learning complex potential energy surfaces directly from high-level quantum mechanical data. Several software packages now support MLFFs, including VASP and open-source libraries like DeePMD-kit and SchNetPack. Incorporating solvent effects In many simulations of a solute-solvent system the main focus is on the behavior of the solute with little interest of the solvent behavior particularly in those solvent molecules residing in regions far from the solute molecule. Solvents may influence the dynamic behavior of solutes via random collisions and by imposing a frictional drag on the motion of the solute through the solvent. The use of non-rectangular periodic boundary conditions, stochastic boundaries and solvent shells can all help reduce the number of solvent molecules required and enable a larger proportion of the computing time to be spent instead on simulating the solute. It is also possible to incorporate the effects of a solvent without needing any explicit solvent molecules present. One example of this approach is to use a potential mean force (PMF) which describes how the free energy changes as a particular coordinate is varied. The free energy change described by PMF contains the averaged effects of the solvent. Without incorporating the effects of solvent simulations of macromolecules (such as proteins) may yield unrealistic behavior and even small molecules may adopt more compact conformations due to favourable van der Waals forces and electrostatic interactions which would be dampened in the presence of a solvent. Long-range forces A long range interaction is an interaction in which the spatial interaction falls off no faster than where is the dimensionality of the system. Examples include charge-charge interactions between ions and dipole-dipole interactions between molecules. 
Modelling these forces presents quite a challenge as they are significant over a distance which may be larger than half the box length with simulations of many thousands of particles. Though one solution would be to significantly increase the size of the box length, this brute force approach is less than ideal as the simulation would become computationally very expensive. Spherically truncating the potential is also out of the question as unrealistic behaviour may be observed when the distance is close to the cut off distance. Steered molecular dynamics (SMD) Steered molecular dynamics (SMD) simulations, or force probe simulations, apply forces to a protein in order to manipulate its structure by pulling it along desired degrees of freedom. These experiments can be used to reveal structural changes in a protein at the atomic level. SMD is often used to simulate events such as mechanical unfolding or stretching. There are two typical protocols of SMD: one in which pulling velocity is held constant, and one in which applied force is constant. Typically, part of the studied system (e.g., an atom in a protein) is restrained by a harmonic potential. Forces are then applied to specific atoms at either a constant velocity or a constant force. Umbrella sampling is used to move the system along the desired reaction coordinate by varying, for example, the forces, distances, and angles manipulated in the simulation. Through umbrella sampling, all of the system's configurations—both high-energy and low-energy—are adequately sampled. Then, each configuration's change in free energy can be calculated as the potential of mean force. A popular method of computing PMF is through the weighted histogram analysis method (WHAM), which analyzes a series of umbrella sampling simulations. A lot of important applications of SMD are in the field of drug discovery and biomolecular sciences. For e.g. SMD was used to investigate the stability of Alzheimer's protofibrils, to study the protein ligand interaction in cyclin-dependent kinase 5 and even to show the effect of electric field on thrombin (protein) and aptamer (nucleotide) complex among many other interesting studies. Examples of applications Molecular dynamics is used in many fields of science. First MD simulation of a simplified biological folding process was published in 1975. Its simulation published in Nature paved the way for the vast area of modern computational protein-folding. First MD simulation of a biological process was published in 1976. Its simulation published in Nature paved the way for understanding protein motion as essential in function and not just accessory. MD is the standard method to treat collision cascades in the heat spike regime, i.e., the effects that energetic neutron and ion irradiation have on solids and solid surfaces. The following biophysical examples illustrate notable efforts to produce simulations of a systems of very large size (a complete virus) or very long simulation times (up to 1.112 milliseconds): MD simulation of the full satellite tobacco mosaic virus (STMV) (2006, Size: 1 million atoms, Simulation time: 50 ns, program: NAMD) This virus is a small, icosahedral plant virus that worsens the symptoms of infection by Tobacco Mosaic Virus (TMV). Molecular dynamics simulations were used to probe the mechanisms of viral assembly. The entire STMV particle consists of 60 identical copies of one protein that make up the viral capsid (coating), and a 1063 nucleotide single stranded RNA genome. 
One key finding is that the capsid is very unstable when there is no RNA inside. The simulation would take one 2006 desktop computer around 35 years to complete. It was thus done in many processors in parallel with continuous communication between them. Folding simulations of the Villin Headpiece in all-atom detail (2006, Size: 20,000 atoms; Simulation time: 500 μs= 500,000 ns, Program: Folding@home) This simulation was run in 200,000 CPU's of participating personal computers around the world. These computers had the Folding@home program installed, a large-scale distributed computing effort coordinated by Vijay Pande at Stanford University. The kinetic properties of the Villin Headpiece protein were probed by using many independent, short trajectories run by CPU's without continuous real-time communication. One method employed was the Pfold value analysis, which measures the probability of folding before unfolding of a specific starting conformation. Pfold gives information about transition state structures and an ordering of conformations along the folding pathway. Each trajectory in a Pfold calculation can be relatively short, but many independent trajectories are needed. Long continuous-trajectory simulations have been performed on Anton, a massively parallel supercomputer designed and built around custom application-specific integrated circuits (ASICs) and interconnects by D. E. Shaw Research. The longest published result of a simulation performed using Anton is a 1.112-millisecond simulation of NTL9 at 355 K; a second, independent 1.073-millisecond simulation of this configuration was also performed (and many other simulations of over 250 μs continuous chemical time). In How Fast-Folding Proteins Fold, researchers Kresten Lindorff-Larsen, Stefano Piana, Ron O. Dror, and David E. Shaw discuss "the results of atomic-level molecular dynamics simulations, over periods ranging between 100 μs and 1 ms, that reveal a set of common principles underlying the folding of 12 structurally diverse proteins." Examination of these diverse long trajectories, enabled by specialized, custom hardware, allow them to conclude that "In most cases, folding follows a single dominant route in which elements of the native structure appear in an order highly correlated with their propensity to form in the unfolded state." In a separate study, Anton was used to conduct a 1.013-millisecond simulation of the native-state dynamics of bovine pancreatic trypsin inhibitor (BPTI) at 300 K. Another important application of MD method benefits from its ability of 3-dimensional characterization and analysis of microstructural evolution at atomic scale. MD simulations are used in characterization of grain size evolution, for example, when describing wear and friction of nanocrystalline Al and Al(Zr) materials. Dislocations evolution and grain size evolution are analyzed during the friction process in this simulation. Since MD method provided the full information of the microstructure, the grain size evolution was calculated in 3D using the Polyhedral Template Matching, Grain Segmentation, and Graph clustering methods. In such simulation, MD method provided an accurate measurement of grain size. Making use of these information, the actual grain structures were extracted, measured, and presented. Compared to the traditional method of using SEM with a single 2-dimensional slice of the material, MD provides a 3-dimensional and accurate way to characterize the microstructural evolution at atomic scale. 
Molecular dynamics algorithms
Screened Coulomb potentials implicit solvent model
Integrators: Symplectic integrator; Verlet–Stoermer integration; Runge–Kutta integration; Beeman's algorithm; Constraint algorithms (for constrained systems)
Short-range interaction algorithms: Cell lists; Verlet list; Bonded interactions
Long-range interaction algorithms: Ewald summation; Particle mesh Ewald summation (PME); Particle–particle-particle–mesh (P3M); Shifted force method
Parallelization strategies: Domain decomposition method (distribution of system data for parallel computing)
Ab initio molecular dynamics: Car–Parrinello molecular dynamics

Specialized hardware for MD simulations
Anton – a specialized, massively parallel supercomputer designed to execute MD simulations
MDGRAPE – a special-purpose system built for molecular dynamics simulations, especially protein structure prediction
Graphics cards as hardware for MD simulations

See also
Molecular modeling; Computational chemistry; Force field (chemistry); Comparison of force field implementations; Monte Carlo method; Molecular design software; Molecular mechanics; Multiscale Green's function; Car–Parrinello method; Comparison of software for molecular mechanics modeling; Quantum chemistry; Discrete element method; Comparison of nucleic acid simulation software; Molecule editor; Mixed quantum-classical dynamics

External links
The GPUGRID.net Project (GPUGRID.net); The Blue Gene Project (IBM); JawBreakers.org; Materials modelling and computer simulation codes; A few tips on molecular dynamics; Movie of MD simulation of water (YouTube)
Shared Socioeconomic Pathways
Shared Socioeconomic Pathways (SSPs) are climate change scenarios of projected socioeconomic global changes up to 2100 as defined in the IPCC Sixth Assessment Report on climate change in 2021. They are used to derive greenhouse gas emissions scenarios with different climate policies. The SSPs provide narratives describing alternative socio-economic developments. These storylines are a qualitative description of logic relating elements of the narratives to each other. In terms of quantitative elements, they provide data accompanying the scenarios on national population, urbanization and GDP (per capita). The SSPs can be quantified with various Integrated Assessment Models (IAMs) to explore possible future pathways both with regards to socioeconomic and climate pathways. The five scenarios are: SSP1: Sustainability ("Taking the Green Road") SSP2: "Middle of the Road" SSP3: Regional Rivalry ("A Rocky Road") SSP4: Inequality ("A Road Divided") SSP5: Fossil-fueled Development ("Taking the Highway") There are also ongoing efforts to downscaling European shared socioeconomic pathways (SSPs) for agricultural and food systems, combined with representative concentration pathways (RCP) to regionally specific, alternative socioeconomic and climate scenarios. Descriptions of the SSPs SSP1: Sustainability (Taking the Green Road) "The world shifts gradually, but pervasively, toward a more sustainable path, emphasizing more inclusive development that respects predicted environmental boundaries. Management of the global commons slowly improves, educational and health investments accelerate the demographic transition, and the emphasis on economic growth shifts toward a broader emphasis on human well-being. Driven by an increasing commitment to achieving development goals, inequality is reduced both across and within countries. Consumption is oriented toward low material growth and lower resource and energy intensity." SSP2: Middle of the road "The world follows a path in which social, economic, and technological trends do not shift markedly from historical patterns. Development and income growth proceeds unevenly, with some countries making relatively good progress while others fall short of expectations. Global and national institutions work toward but make slow progress in achieving sustainable development goals. Environmental systems experience degradation, although there are some improvements and overall the intensity of resource and energy use declines. Global population growth is moderate and levels off in the second half of the century. Income inequality persists or improves only slowly and challenges to reducing vulnerability to societal and environmental changes remain." SSP3: Regional rivalry (A Rocky Road) "A resurgent nationalism, concerns about competitiveness and security, and regional conflicts push countries to increasingly focus on domestic or, at most, regional issues. Policies shift over time to become increasingly oriented toward national and regional security issues. Countries focus on achieving energy and food security goals within their own regions at the expense of broader-based development. Investments in education and technological development decline. Economic development is slow, consumption is material-intensive, and inequalities persist or worsen over time. Population growth is low in industrialized and high in developing countries. A low international priority for addressing environmental concerns leads to strong environmental degradation in some regions." 
SSP4: Inequality (A Road Divided) "Highly unequal investments in human capital, combined with increasing disparities in economic opportunity and political power, lead to increasing inequalities and stratification both across and within countries. Over time, a gap widens between an internationally-connected society that contributes to knowledge- and capital-intensive sectors of the global economy, and a fragmented collection of lower-income, poorly educated societies that work in a labor intensive, low-tech economy. Social cohesion degrades and conflict and unrest become increasingly common. Technology development is high in the high-tech economy and sectors. The globally connected energy sector diversifies, with investments in both carbon-intensive fuels like coal and unconventional oil, but also low-carbon energy sources. Environmental policies focus on local issues around middle and high income areas." SSP5: Fossil-Fueled Development (Taking the Highway) "This world places increasing faith in competitive markets, innovation and participatory societies to produce rapid technological progress and development of human capital as the path to sustainable development. Global markets are increasingly integrated. There are also strong investments in health, education, and institutions to enhance human and social capital. At the same time, the push for economic and social development is coupled with the exploitation of abundant fossil fuel resources and the adoption of resource and energy intensive lifestyles around the world. All these factors lead to rapid growth of the global economy, while global population peaks and declines in the 21st century. Local environmental problems like air pollution are successfully managed. There is faith in the ability to effectively manage social and ecological systems, including by geo-engineering if necessary." SSP temperature projections from the IPCC Sixth Assessment Report The IPCC Sixth Assessment Report assessed the projected temperature outcomes of a set of five scenarios that are based on the framework of the SSPs. The names of these scenarios consist of the SSP on which they are based (SSP1-SSP5), combined with the expected level of radiative forcing in the year 2100 (1.9 to 8.5 W/m2). This results in scenario names SSPx-y.z as listed below. The role of SSP4 is missing in this table. See also Climate change scenario Coupled Model Intercomparison Project Representative Concentration Pathway Special Report on Emissions Scenarios (published in 2000) References Sources Riahi et al., The Shared Socioeconomic Pathways and their energy, land use, and greenhouse gas emissions implications: An overview. Global Environmental Change, 42, 153-168. Climate change assessment and attribution Futures studies Intergovernmental Panel on Climate Change
Mass balance
In physics, a mass balance, also called a material balance, is an application of conservation of mass to the analysis of physical systems. By accounting for material entering and leaving a system, mass flows can be identified which might have been unknown, or difficult to measure without this technique. The exact conservation law used in the analysis of the system depends on the context of the problem, but all revolve around mass conservation, i.e., that matter cannot disappear or be created spontaneously. Therefore, mass balances are used widely in engineering and environmental analyses. For example, mass balance theory is used to design chemical reactors, to analyse alternative processes to produce chemicals, and to model pollution dispersion and other processes of physical systems. Mass balances form the foundation of process engineering design. Closely related and complementary analysis techniques include the population balance, energy balance and the somewhat more complex entropy balance. These techniques are required for thorough design and analysis of systems such as the refrigeration cycle. In environmental monitoring, the term budget calculations is used to describe mass balance equations where they are used to evaluate the monitoring data (comparing input and output, etc.). In biology, the dynamic energy budget theory for metabolic organisation makes explicit use of mass and energy balance. Introduction The general form quoted for a mass balance is: the mass that enters a system must, by conservation of mass, either leave the system or accumulate within the system. Mathematically, the mass balance for a system without a chemical reaction is as follows:

Input = Output + Accumulation

Strictly speaking the above equation holds also for systems with chemical reactions if the terms in the balance equation are taken to refer to total mass, i.e. the sum of all the chemical species of the system. In the absence of a chemical reaction the amount of any chemical species flowing in and out will be the same; this gives rise to an equation for each species present in the system. However, if this is not the case then the mass balance equation must be amended to allow for the generation or depletion (consumption) of each chemical species. Some use one term in this equation to account for chemical reactions, which will be negative for depletion and positive for generation. However, the conventional form of this equation is written to account for both a positive generation term (i.e. product of reaction) and a negative consumption term (the reactants used to produce the products). Although overall one term will account for the total balance on the system, if this balance equation is to be applied to an individual species and then the entire process, both terms are necessary. This modified equation can be used not only for reactive systems, but for population balances such as arise in particle mechanics problems. The equation is given below; note that it simplifies to the earlier equation when the generation and consumption terms are zero.

Input + Generation = Output + Accumulation + Consumption

In the absence of a nuclear reaction the number of atoms flowing in and out must remain the same, even in the presence of a chemical reaction. For a balance to be formed, the boundaries of the system must be clearly defined. Mass balances can be taken over physical systems at multiple scales. Mass balances can be simplified with the assumption of steady state, in which the accumulation term is zero. Illustrative example A simple example can illustrate the concept. 
Consider the situation in which a slurry is flowing into a settling tank to remove the solids in the tank. Solids are collected at the bottom by means of a conveyor belt partially submerged in the tank, and water exits via an overflow outlet. In this example, there are two substances: solids and water. The water overflow outlet carries an increased concentration of water relative to solids, as compared to the slurry inlet, and the exit of the conveyor belt carries an increased concentration of solids relative to water. Assumptions Steady state Non-reactive system Analysis Suppose that the slurry inlet composition (by mass) is 50% solid and 50% water, with a mass flow of . The tank is assumed to be operating at steady state, and as such accumulation is zero, so input and output must be equal for both the solids and water. If we know that the removal efficiency for the slurry tank is 60%, then the water outlet will contain of solids (40% times times 50% solids). If we measure the flow rate of the combined solids and water, and the water outlet is shown to be , then the amount of water exiting via the conveyor belt must be . This allows us to completely determine how the mass has been distributed in the system with only limited information and using the mass balance relations across the system boundaries. The mass balance for this system can be described in a tabular form: Mass feedback (recycle) Mass balances can be performed across systems which have cyclic flows. In these systems output streams are fed back into the input of a unit, often for further reprocessing. Such systems are common in grinding circuits, where grain is crushed then sieved to only allow fine particles out of the circuit and the larger particles are returned to the roller mill (grinder). However, recycle flows are by no means restricted to solid mechanics operations; they are used in liquid and gas flows, as well. One such example is in cooling towers, where water is pumped through a tower many times, with only a small quantity of water drawn off at each pass (to prevent solids build up) until it has either evaporated or exited with the drawn off water. The mass balance for water is . The use of the recycle aids in increasing overall conversion of input products, which is useful for low per-pass conversion processes (such as the Haber process). Differential mass balances A mass balance can also be taken differentially. The concept is the same as for a large mass balance, but it is performed in the context of a limiting system (for example, one can consider the limiting case in time or, more commonly, volume). A differential mass balance is used to generate differential equations that can provide an effective tool for modelling and understanding the target system. The differential mass balance is usually solved in two steps: first, a set of governing differential equations must be obtained, and then these equations must be solved, either analytically or, for less tractable problems, numerically. The following systems are good examples of the applications of the differential mass balance: Ideal (stirred) batch reactor Ideal tank reactor, also named Continuous Stirred Tank Reactor (CSTR) Ideal Plug Flow Reactor (PFR) Ideal batch reactor The ideal completely mixed batch reactor is a closed system. Isothermal conditions are assumed, and mixing prevents concentration gradients as reactant concentrations decrease and product concentrations increase over time. 
Many chemistry textbooks implicitly assume that the studied system can be described as a batch reactor when they write about reaction kinetics and chemical equilibrium. The mass balance for a substance A becomes where denotes the rate at which substance A is produced; is the volume (which may be constant or not); the number of moles of substance A. In a fed-batch reactor some reactants/ingredients are added continuously or in pulses (compare making porridge by either first blending all ingredients and then letting it boil, which can be described as a batch reactor, or by first mixing only water and salt and making that boil before the other ingredients are added, which can be described as a fed-batch reactor). Mass balances for fed-batch reactors become a bit more complicated. Reactive example In the first example, we will show how to use a mass balance to derive a relationship between the percent excess air for the combustion of a hydrocarbon-base fuel oil and the percent oxygen in the combustion product gas. First, normal dry air contains of oxygen per mole of air, so there is one mole of in of dry air. For stoichiometric combustion, the relationships between the mass of air and the mass of each combustible element in a fuel oil are: Considering the accuracy of typical analytical procedures, an equation for the mass of air per mass of fuel at stoichiometric combustion is: where refer to the mass fraction of each element in the fuel oil, sulfur burning to , and refers to the air-fuel ratio in mass units. For of fuel oil containing 86.1% C, 13.6% H, 0.2% O, and 0.1% S the stoichiometric mass of air is , so AFR = 14.56. The combustion product mass is then . At exact stoichiometry, should be absent. At 15 percent excess air, the AFR = 16.75, and the mass of the combustion product gas is , which contains of excess oxygen. The combustion gas thus contains 2.84 percent by mass. The relationships between percent excess air and % in the combustion gas are accurately expressed by quadratic equations, valid over the range 0–30 percent excess air: In the second example, we will use the law of mass action to derive the expression for a chemical equilibrium constant. Assume we have a closed reactor in which the following liquid phase reversible reaction occurs: The mass balance for substance A becomes As we have a liquid phase reaction we can (usually) assume a constant volume and since we get or In many textbooks this is given as the definition of reaction rate without specifying the implicit assumption that we are talking about reaction rate in a closed system with only one reaction. This is an unfortunate mistake that has confused many students over the years. According to the law of mass action the forward reaction rate can be written as and the backward reaction rate as The rate at which substance A is produced is thus and since, at equilibrium, the concentration of A is constant we get or, rearranged Ideal tank reactor/continuously stirred tank reactor The continuously mixed tank reactor is an open system with an influent stream of reactants and an effluent stream of products. A lake can be regarded as a tank reactor, and lakes with long turnover times (e.g. with low flux-to-volume ratios) can for many purposes be regarded as continuously stirred (e.g. homogeneous in all respects). The mass balance then becomes where is the volumetric flow into the system; is the volumetric flow out of the system; is the concentration of A in the inflow; is the concentration of A in the outflow. 
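A minimal numerical sketch of the constant-volume tank-reactor balance just described is given below, using an explicit Euler step and first-order consumption of A. The symbols (q for the common in/out volumetric flow, cA_in for the inlet concentration, k for the rate constant) and all numerical values are assumptions chosen only to illustrate the approach to steady state discussed next.

    # CSTR sketch: V*d(cA)/dt = q*(cA_in - cA) + rA*V, with rA = -k*cA
    V, q, cA_in, k = 1.0, 0.1, 2.0, 0.05   # m^3, m^3/s, mol/m^3, 1/s (assumed values)
    dt, t_end = 1.0, 600.0                 # s
    cA = 0.0                               # tank initially holds pure solvent
    for _ in range(int(t_end / dt)):
        cA += dt * (q / V * (cA_in - cA) - k * cA)
    print(cA)                              # tends to q*cA_in/(q + k*V) ≈ 1.33 mol/m^3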
In an open system we can never reach a chemical equilibrium. We can, however, reach a steady state where all state variables (temperature, concentrations, etc.) remain constant. Example Consider a bathtub in which there is some bathing salt dissolved. We now fill in more water, keeping the bottom plug in. What happens? Since there is no reaction, and since there is no outflow . The mass balance becomes or Using a mass balance for total volume, however, it is evident that and that Thus we get Note that there is no reaction and hence no reaction rate or rate law involved, and yet . We can thus draw the conclusion that reaction rate can not be defined in a general manner using . One must first write down a mass balance before a link between and the reaction rate can be found. Many textbooks, however, define reaction rate as without mentioning that this definition implicitly assumes that the system is closed, has a constant volume and that there is only one reaction. Ideal plug flow reactor (PFR) The idealized plug flow reactor is an open system resembling a tube with no mixing in the direction of flow but perfect mixing perpendicular to the direction of flow, often used for systems like rivers and water pipes if the flow is turbulent. When a mass balance is made for a tube, one first considers an infinitesimal part of the tube and make a mass balance over that using the ideal tank reactor model. That mass balance is then integrated over the entire reactor volume to obtain: In numeric solutions, e.g. when using computers, the ideal tube is often translated to a series of tank reactors, as it can be shown that a PFR is equivalent to an infinite number of stirred tanks in series, but the latter is often easier to analyze, especially at steady state. More complex problems In reality, reactors are often non-ideal, in which combinations of the reactor models above are used to describe the system. Not only chemical reaction rates, but also mass transfer rates may be important in the mathematical description of a system, especially in heterogeneous systems. As the chemical reaction rate depends on temperature it is often necessary to make both an energy balance (often a heat balance rather than a full-fledged energy balance) as well as mass balances to fully describe the system. A different reactor model might be needed for the energy balance: A system that is closed with respect to mass might be open with respect to energy e.g. since heat may enter the system through conduction. Commercial use In industrial process plants, using the fact that the mass entering and leaving any portion of a process plant must balance, data validation and reconciliation algorithms may be employed to correct measured flows, provided that enough redundancy of flow measurements exist to permit statistical reconciliation and exclusion of detectably erroneous measurements. Since all real world measured values contain inherent error, the reconciled measurements provide a better basis than the measured values do for financial reporting, optimization, and regulatory reporting. Software packages exist to make this commercially feasible on a daily basis. 
See also
Bioreactor; Chemical engineering; Continuity equation; Dilution (equation); Energy accounting; Glacier mass balance; Mass flux; Material flow analysis; Material balance planning; Fluid mechanics

External links
Material Balance Calculations; Material Balance Fundamentals; The Material Balance for Chemical Reactors; Material and energy balance; Heat and material balance method of process control for petrochemical plants and oil refineries, United States Patent 6751527
Rossby number
The Rossby number (Ro), named for Carl-Gustav Arvid Rossby, is a dimensionless number used in describing fluid flow. The Rossby number is the ratio of inertial force to Coriolis force, the terms (v·∇)v and 2Ω×v in the Navier–Stokes equations respectively. It is commonly used in geophysical phenomena in the oceans and atmosphere, where it characterizes the importance of Coriolis accelerations arising from planetary rotation. It is also known as the Kibel number. The Rossby number is defined as Ro = U/(Lf), where U and L are respectively characteristic velocity and length scales of the phenomenon, and f = 2Ω sin φ is the Coriolis frequency, with Ω being the angular frequency of planetary rotation and φ the latitude. A small Rossby number signifies a system strongly affected by Coriolis forces, and a large Rossby number signifies a system in which inertial and centrifugal forces dominate. For example, in tornadoes the Rossby number is large (≈ 10³), in low-pressure systems it is low (≈ 0.1–1), and in oceanic systems it is of the order of unity, but depending on the phenomenon it can range over several orders of magnitude (≈ 10⁻²–10²). As a result, in tornadoes the Coriolis force is negligible, and the balance is between pressure and centrifugal forces (called cyclostrophic balance). Cyclostrophic balance also commonly occurs in the inner core of a tropical cyclone. In low-pressure systems, the centrifugal force is negligible, and the balance is between Coriolis and pressure forces (called geostrophic balance). In the oceans all three forces are comparable (called cyclogeostrophic balance). For a figure showing spatial and temporal scales of motions in the atmosphere and oceans, see Kantha and Clayson. When the Rossby number is large (either because f is small, such as in the tropics and at lower latitudes; or because L is small, that is, for small-scale motions such as flow in a bathtub; or for large speeds), the effects of planetary rotation are unimportant and can be neglected. When the Rossby number is small, the effects of planetary rotation are large, and the net acceleration is comparably small, allowing the use of the geostrophic approximation. 
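The orders of magnitude quoted above follow directly from the definition. The short sketch below evaluates Ro for a mid-latitude low-pressure system and for a tornado-scale vortex; the velocity and length scales are illustrative round numbers, not measurements.

    import math

    def rossby_number(U, L, latitude_deg, omega=7.2921e-5):
        # Ro = U / (L * f), with Coriolis frequency f = 2*omega*sin(latitude)
        f = 2.0 * omega * math.sin(math.radians(latitude_deg))
        return U / (L * f)

    print(rossby_number(10.0, 1.0e6, 45.0))   # synoptic low: Ro ~ 0.1
    print(rossby_number(50.0, 5.0e2, 45.0))   # tornado-scale vortex: Ro ~ 1e3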
Green–Kubo relations
The Green–Kubo relations (Melville S. Green 1954, Ryogo Kubo 1957) give the exact mathematical expression for a transport coefficient in terms of the integral of the equilibrium time correlation function of the time derivative of a corresponding microscopic variable (sometimes termed a "gross variable", as in ): One intuitive way to understand this relation is that relaxations resulting from random fluctuations in equilibrium are indistinguishable from those due to an external perturbation in linear response. Green-Kubo relations are important because they relate a macroscopic transport coefficient to the correlation function of a microscopic variable. In addition, they allow one to measure the transport coefficient without perturbing the system out of equilibrium, which has found much use in molecular dynamics simulations. Thermal and mechanical transport processes Thermodynamic systems may be prevented from relaxing to equilibrium because of the application of a field (e.g. electric or magnetic field), or because the boundaries of the system are in relative motion (shear) or maintained at different temperatures, etc. This generates two classes of nonequilibrium system: mechanical nonequilibrium systems and thermal nonequilibrium systems. The standard example of an electrical transport process is Ohm's law, which states that, at least for sufficiently small applied voltages, the current I is linearly proportional to the applied voltage V, As the applied voltage increases one expects to see deviations from linear behavior. The coefficient of proportionality is the electrical conductance which is the reciprocal of the electrical resistance. The standard example of a mechanical transport process is Newton's law of viscosity, which states that the shear stress is linearly proportional to the strain rate. The strain rate is the rate of change streaming velocity in the x-direction, with respect to the y-coordinate, . Newton's law of viscosity states As the strain rate increases we expect to see deviations from linear behavior Another well known thermal transport process is Fourier's law of heat conduction, stating that the heat flux between two bodies maintained at different temperatures is proportional to the temperature gradient (the temperature difference divided by the spatial separation). Linear constitutive relation Regardless of whether transport processes are stimulated thermally or mechanically, in the small field limit it is expected that a flux will be linearly proportional to an applied field. In the linear case the flux and the force are said to be conjugate to each other. The relation between a thermodynamic force F and its conjugate thermodynamic flux J is called a linear constitutive relation, L(0) is called a linear transport coefficient. In the case of multiple forces and fluxes acting simultaneously, the fluxes and forces will be related by a linear transport coefficient matrix. Except in special cases, this matrix is symmetric as expressed in the Onsager reciprocal relations. In the 1950s Green and Kubo proved an exact expression for linear transport coefficients which is valid for systems of arbitrary temperature T, and density. They proved that linear transport coefficients are exactly related to the time dependence of equilibrium fluctuations in the conjugate flux, where (with k the Boltzmann constant), and V is the system volume. The integral is over the equilibrium flux autocovariance function. 
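In an MD setting the relation is typically applied to a sampled flux time series. The sketch below estimates a transport coefficient by accumulating the flux autocovariance over time origins and integrating it numerically; the prefactor argument stands in for the V/(k_B T)-type factor appropriate to the coefficient of interest, and the truncation at half the trajectory length is an illustrative choice rather than a prescription.

    import numpy as np

    def green_kubo_coefficient(flux, dt, prefactor=1.0):
        # Estimate L = prefactor * integral_0^inf <J(0) J(t)> dt from an
        # equilibrium flux time series sampled every dt.
        flux = np.asarray(flux, dtype=float)
        flux -= flux.mean()                 # the mean flux vanishes at equilibrium
        n = len(flux)
        acf = np.array([np.mean(flux[:n - lag] * flux[lag:])
                        for lag in range(n // 2)])   # <J(0) J(t)>, averaged over origins
        return prefactor * np.trapz(acf, dx=dt)      # trapezoidal time integration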
At zero time the autocovariance is positive since it is the mean square value of the flux at equilibrium. Note that at equilibrium the mean value of the flux is zero by definition. At long times the flux at time t, J(t), is uncorrelated with its value a long time earlier J(0) and the autocorrelation function decays to zero. This remarkable relation is frequently used in molecular dynamics computer simulation to compute linear transport coefficients; see Evans and Morriss, "Statistical Mechanics of Nonequilibrium Liquids", Academic Press 1990. Nonlinear response and transient time correlation functions In 1985 Denis Evans and Morriss derived two exact fluctuation expressions for nonlinear transport coefficients—see Evans and Morriss in Mol. Phys, 54, 629(1985). Evans later argued that these are consequences of the extremization of free energy in Response theory as a free energy minimum. Evans and Morriss proved that in a thermostatted system that is at equilibrium at t = 0, the nonlinear transport coefficient can be calculated from the so-called transient time correlation function expression: where the equilibrium flux autocorrelation function is replaced by a thermostatted field dependent transient autocorrelation function. At time zero but at later times since the field is applied . Another exact fluctuation expression derived by Evans and Morriss is the so-called Kawasaki expression for the nonlinear response: The ensemble average of the right hand side of the Kawasaki expression is to be evaluated under the application of both the thermostat and the external field. At first sight the transient time correlation function (TTCF) and Kawasaki expression might appear to be of limited use—because of their innate complexity. However, the TTCF is quite useful in computer simulations for calculating transport coefficients. Both expressions can be used to derive new and useful fluctuation expressions quantities like specific heats, in nonequilibrium steady states. Thus they can be used as a kind of partition function for nonequilibrium steady states. Derivation from the fluctuation theorem and the central limit theorem For a thermostatted steady state, time integrals of the dissipation function are related to the dissipative flux, J, by the equation We note in passing that the long time average of the dissipation function is a product of the thermodynamic force and the average conjugate thermodynamic flux. It is therefore equal to the spontaneous entropy production in the system. The spontaneous entropy production plays a key role in linear irreversible thermodynamics – see de Groot and Mazur "Non-equilibrium thermodynamics" Dover. The fluctuation theorem (FT) is valid for arbitrary averaging times, t. Let's apply the FT in the long time limit while simultaneously reducing the field so that the product is held constant, Because of the particular way we take the double limit, the negative of the mean value of the flux remains a fixed number of standard deviations away from the mean as the averaging time increases (narrowing the distribution) and the field decreases. This means that as the averaging time gets longer the distribution near the mean flux and its negative, is accurately described by the central limit theorem. This means that the distribution is Gaussian near the mean and its negative so that Combining these two relations yields (after some tedious algebra!) 
the exact Green–Kubo relation for the linear zero field transport coefficient, namely, Here are the details of the proof of Green–Kubo relations from the FT. A proof using only elementary quantum mechanics was given by Robert Zwanzig. Summary This shows the fundamental importance of the fluctuation theorem (FT) in nonequilibrium statistical mechanics. The FT gives a generalisation of the second law of thermodynamics. It is then easy to prove the second law inequality and the Kawasaki identity. When combined with the central limit theorem, the FT also implies the Green–Kubo relations for linear transport coefficients close to equilibrium. The FT is, however, more general than the Green–Kubo Relations because, unlike them, the FT applies to fluctuations far from equilibrium. In spite of this fact, no one has yet been able to derive the equations for nonlinear response theory from the FT. The FT does not imply or require that the distribution of time-averaged dissipation is Gaussian. There are many examples known when the distribution is non-Gaussian and yet the FT still correctly describes the probability ratios. See also Density matrix Fluctuation theorem Fluctuation–dissipation theorem Green's function (many-body theory) Lindblad equation Linear response function References Theoretical physics Thermodynamic equations Statistical mechanics Non-equilibrium thermodynamics
Henri Poincaré
Jules Henri Poincaré (29 April 1854 – 17 July 1912) was a French mathematician, theoretical physicist, engineer, and philosopher of science. He is often described as a polymath, and in mathematics as "The Last Universalist", since he excelled in all fields of the discipline as it existed during his lifetime. Due to his scientific success, influence and his discoveries, he has been deemed "the philosopher par excellence of modern science." As a mathematician and physicist, he made many original fundamental contributions to pure and applied mathematics, mathematical physics, and celestial mechanics. In his research on the three-body problem, Poincaré became the first person to discover a chaotic deterministic system, which laid the foundations of modern chaos theory. He is also considered to be one of the founders of the field of topology. Poincaré made clear the importance of paying attention to the invariance of laws of physics under different transformations, and was the first to present the Lorentz transformations in their modern symmetrical form. Poincaré discovered the remaining relativistic velocity transformations and recorded them in a letter to Hendrik Lorentz in 1905. Thus he obtained perfect invariance of all of Maxwell's equations, an important step in the formulation of the theory of special relativity. In 1905, Poincaré first proposed gravitational waves (ondes gravifiques) emanating from a body and propagating at the speed of light as being required by the Lorentz transformations. In 1912, he wrote an influential paper which provided a mathematical argument for quantum mechanics. The Poincaré group used in physics and mathematics was named after him. Early in the 20th century he formulated the Poincaré conjecture, which became, over time, one of the famous unsolved problems in mathematics. It was solved in 2002–2003 by Grigori Perelman. Life Poincaré was born on 29 April 1854 in the Cité Ducale neighborhood, Nancy, Meurthe-et-Moselle, into an influential French family. His father Léon Poincaré (1828–1892) was a professor of medicine at the University of Nancy. His younger sister Aline married the spiritual philosopher Émile Boutroux. Another notable member of Henri's family was his cousin, Raymond Poincaré, a fellow member of the Académie française, who was President of France from 1913 to 1920, and three-time Prime Minister of France between 1913 and 1929. Education During his childhood he was seriously ill for a time with diphtheria and received special instruction from his mother, Eugénie Launois (1830–1897). In 1862, Henri entered the Lycée in Nancy (now renamed the Lycée Henri-Poincaré in his honour, along with Henri Poincaré University, also in Nancy). He spent eleven years at the Lycée and during this time he proved to be one of the top students in every topic he studied. He excelled in written composition. His mathematics teacher described him as a "monster of mathematics" and he won first prizes in the concours général, a competition between the top pupils from all the Lycées across France. His poorest subjects were music and physical education, where he was described as "average at best". However, poor eyesight and a tendency towards absentmindedness may explain these difficulties. He graduated from the Lycée in 1871 with a baccalauréat in both letters and sciences. During the Franco-Prussian War of 1870, he served alongside his father in the Ambulance Corps. Poincaré entered the École Polytechnique as the top qualifier in 1873 and graduated in 1875. 
There he studied mathematics as a student of Charles Hermite, continuing to excel and publishing his first paper (Démonstration nouvelle des propriétés de l'indicatrice d'une surface) in 1874. From November 1875 to June 1878 he studied at the École des Mines, while continuing the study of mathematics in addition to the mining engineering syllabus, and received the degree of ordinary mining engineer in March 1879. As a graduate of the École des Mines, he joined the Corps des Mines as an inspector for the Vesoul region in northeast France. He was on the scene of a mining disaster at Magny in August 1879 in which 18 miners died. He carried out the official investigation into the accident in a characteristically thorough and humane way. At the same time, Poincaré was preparing for his Doctorate in Science in mathematics under the supervision of Charles Hermite. His doctoral thesis was in the field of differential equations. It was named Sur les propriétés des fonctions définies par les équations aux différences partielles. Poincaré devised a new way of studying the properties of these equations. He not only faced the question of determining the integral of such equations, but also was the first person to study their general geometric properties. He realised that they could be used to model the behaviour of multiple bodies in free motion within the Solar System. Poincaré graduated from the University of Paris in 1879. First scientific achievements After receiving his degree, Poincaré began teaching as junior lecturer in mathematics at the University of Caen in Normandy (in December 1879). At the same time he published his first major article concerning the treatment of a class of automorphic functions. There, in Caen, he met his future wife, Louise Poulain d'Andecy (1857–1934), granddaughter of Isidore Geoffroy Saint-Hilaire and great-granddaughter of Étienne Geoffroy Saint-Hilaire and on 20 April 1881, they married. Together they had four children: Jeanne (born 1887), Yvonne (born 1889), Henriette (born 1891), and Léon (born 1893). Poincaré immediately established himself among the greatest mathematicians of Europe, attracting the attention of many prominent mathematicians. In 1881 Poincaré was invited to take a teaching position at the Faculty of Sciences of the University of Paris; he accepted the invitation. During the years 1883 to 1897, he taught mathematical analysis in the École Polytechnique. In 1881–1882, Poincaré created a new branch of mathematics: qualitative theory of differential equations. He showed how it is possible to derive the most important information about the behavior of a family of solutions without having to solve the equation (since this may not always be possible). He successfully used this approach to problems in celestial mechanics and mathematical physics. Career He never fully abandoned his career in the mining administration to mathematics. He worked at the Ministry of Public Services as an engineer in charge of northern railway development from 1881 to 1885. He eventually became chief engineer of the Corps des Mines in 1893 and inspector general in 1910. Beginning in 1881 and for the rest of his career, he taught at the University of Paris (the Sorbonne). He was initially appointed as the maître de conférences d'analyse (associate professor of analysis). Eventually, he held the chairs of Physical and Experimental Mechanics, Mathematical Physics and Theory of Probability, and Celestial Mechanics and Astronomy. 
In 1887, at the young age of 32, Poincaré was elected to the French Academy of Sciences. He became its president in 1906, and was elected to the Académie française on 5 March 1908. In 1887, he won Oscar II, King of Sweden's mathematical competition for a resolution of the three-body problem concerning the free motion of multiple orbiting bodies. (See three-body problem section below.) In 1893, Poincaré joined the French Bureau des Longitudes, which engaged him in the synchronisation of time around the world. In 1897 Poincaré backed an unsuccessful proposal for the decimalisation of circular measure, and hence time and longitude. It was this post which led him to consider the question of establishing international time zones and the synchronisation of time between bodies in relative motion. (See work on relativity section below.) In 1904, he intervened in the trials of Alfred Dreyfus, attacking the spurious scientific claims regarding evidence brought against Dreyfus. Poincaré was the President of the Société Astronomique de France (SAF), the French astronomical society, from 1901 to 1903. Students Poincaré had two notable doctoral students at the University of Paris, Louis Bachelier (1900) and Dimitrie Pompeiu (1905). Death In 1912, Poincaré underwent surgery for a prostate problem and subsequently died from an embolism on 17 July 1912, in Paris. He was 58 years of age. He is buried in the Poincaré family vault in the Cemetery of Montparnasse, Paris, in section 16 close to the gate Rue Émile-Richard. A former French Minister of Education, Claude Allègre, proposed in 2004 that Poincaré be reburied in the Panthéon in Paris, which is reserved for French citizens of the highest honour. Work Summary Poincaré made many contributions to different fields of pure and applied mathematics such as: celestial mechanics, fluid mechanics, optics, electricity, telegraphy, capillarity, elasticity, thermodynamics, potential theory, quantum theory, theory of relativity and physical cosmology. He was also a populariser of mathematics and physics and wrote several books for the lay public. Among the specific topics he contributed to are the following: algebraic topology (a field that Poincaré virtually invented) the theory of analytic functions of several complex variables the theory of abelian functions algebraic geometry the Poincaré conjecture, proven in 2003 by Grigori Perelman. Poincaré recurrence theorem hyperbolic geometry number theory the three-body problem the theory of diophantine equations electromagnetism the special theory of relativity the fundamental group In the field of differential equations Poincaré has given many results that are critical for the qualitative theory of differential equations, for example the Poincaré sphere and the Poincaré map. Poincaré on "everybody's belief" in the Normal Law of Errors (see normal distribution for an account of that "law") Published an influential paper providing a novel mathematical argument in support of quantum mechanics. Three-body problem The problem of finding the general solution to the motion of more than two orbiting bodies in the Solar System had eluded mathematicians since Newton's time. This was known originally as the three-body problem and later the n-body problem, where n is any number of more than two orbiting bodies. The n-body solution was considered very important and challenging at the close of the 19th century. 
Indeed, in 1887, in honour of his 60th birthday, Oscar II, King of Sweden, advised by Gösta Mittag-Leffler, established a prize for anyone who could find the solution to the problem. The announcement was quite specific: Given a system of arbitrarily many mass points that attract each according to Newton's law, under the assumption that no two points ever collide, try to find a representation of the coordinates of each point as a series in a variable that is some known function of time and for all of whose values the series converges uniformly. In case the problem could not be solved, any other important contribution to classical mechanics would then be considered to be prizeworthy. The prize was finally awarded to Poincaré, even though he did not solve the original problem. One of the judges, the distinguished Karl Weierstrass, said, "This work cannot indeed be considered as furnishing the complete solution of the question proposed, but that it is nevertheless of such importance that its publication will inaugurate a new era in the history of celestial mechanics." (The first version of his contribution even contained a serious error; for details see the article by Diacu and the book by Barrow-Green). The version finally printed contained many important ideas which led to the theory of chaos. The problem as stated originally was finally solved by Karl F. Sundman for n = 3 in 1912 and was generalised to the case of n > 3 bodies by Qiudong Wang in the 1990s. The series solutions have very slow convergence. It would take millions of terms to determine the motion of the particles for even very short intervals of time, so they are unusable in numerical work. Work on relativity Local time Poincaré's work at the Bureau des Longitudes on establishing international time zones led him to consider how clocks at rest on the Earth, which would be moving at different speeds relative to absolute space (or the "luminiferous aether"), could be synchronised. At the same time Dutch theorist Hendrik Lorentz was developing Maxwell's theory into a theory of the motion of charged particles ("electrons" or "ions"), and their interaction with radiation. In 1895 Lorentz had introduced an auxiliary quantity (without physical interpretation) called "local time" and introduced the hypothesis of length contraction to explain the failure of optical and electrical experiments to detect motion relative to the aether (see Michelson–Morley experiment). Poincaré was a constant interpreter (and sometimes friendly critic) of Lorentz's theory. Poincaré as a philosopher was interested in the "deeper meaning". Thus he interpreted Lorentz's theory and in so doing he came up with many insights that are now associated with special relativity. In The Measure of Time (1898), Poincaré said, "A little reflection is sufficient to understand that all these affirmations have by themselves no meaning. They can have one only as the result of a convention." He also argued that scientists have to set the constancy of the speed of light as a postulate to give physical theories the simplest form. Based on these assumptions he discussed in 1900 Lorentz's "wonderful invention" of local time and remarked that it arose when moving clocks are synchronised by exchanging light signals assumed to travel with the same speed in both directions in a moving frame. 
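The light-signal synchronisation procedure that Poincaré analysed can be made concrete with a short numerical sketch. The Python script below is purely illustrative (the frame velocity, clock separation and variable names are assumptions of this example, not figures from the text): it applies the exchange-of-light-signals convention to two clocks that are co-moving with velocity v relative to a "rest" frame, ignores time dilation, and compares the resulting synchronisation offset with the first-order local-time correction of roughly -v*x/c^2 that Lorentz had introduced.

```python
# Illustrative sketch: two clocks, A at x = 0 and B at x = L, both moving with
# velocity v along x in some "rest" frame. A emits a light signal at t = 0;
# B reflects it back. The synchronisation convention assigns to the reflection
# event at B the time (t_send + t_return) / 2 as read by A. Ignoring time
# dilation (a first-order treatment), the offset between B's synchronised
# reading and the rest-frame time reproduces the "local time" shift -v*L/c^2.

c = 299_792_458.0      # speed of light, m/s
v = 30_000.0           # assumed frame velocity, m/s (roughly Earth's orbital speed)
L = 1_000.0            # assumed separation of the clocks, m

t_forward = L / (c - v)            # outgoing signal chases clock B
t_backward = L / (c + v)           # returning signal meets clock A
t_return = t_forward + t_backward  # rest-frame time when the signal is back at A

# Reading assigned to the reflection event by the synchronisation convention,
# versus the rest-frame time at which the reflection actually happened.
assigned = t_return / 2.0
actual = t_forward

offset_exact = assigned - actual       # equals -v*L / (c**2 - v**2)
offset_first_order = -v * L / c**2     # Lorentz's local-time correction

print(f"offset from signal exchange : {offset_exact:.6e} s")
print(f"first-order local-time term : {offset_first_order:.6e} s")
```

The two printed numbers agree to first order in v/c, which is the sense in which clocks synchronised by light signals in a moving frame read "local time" rather than the time of the rest frame.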
Principle of relativity and Lorentz transformations In 1881 Poincaré described hyperbolic geometry in terms of the hyperboloid model, formulating transformations leaving invariant the Lorentz interval , which makes them mathematically equivalent to the Lorentz transformations in 2+1 dimensions. In addition, Poincaré's other models of hyperbolic geometry (Poincaré disk model, Poincaré half-plane model) as well as the Beltrami–Klein model can be related to the relativistic velocity space (see Gyrovector space). In 1892 Poincaré developed a mathematical theory of light including polarization. His vision of the action of polarizers and retarders, acting on a sphere representing polarized states, is called the Poincaré sphere. It was shown that the Poincaré sphere possesses an underlying Lorentzian symmetry, by which it can be used as a geometrical representation of Lorentz transformations and velocity additions. He discussed the "principle of relative motion" in two papers in 1900 and named it the principle of relativity in 1904, according to which no physical experiment can discriminate between a state of uniform motion and a state of rest. In 1905 Poincaré wrote to Lorentz about Lorentz's paper of 1904, which Poincaré described as a "paper of supreme importance". In this letter he pointed out an error Lorentz had made when he had applied his transformation to one of Maxwell's equations, that for charge-occupied space, and also questioned the time dilation factor given by Lorentz. In a second letter to Lorentz, Poincaré gave his own reason why Lorentz's time dilation factor was indeed correct after all—it was necessary to make the Lorentz transformation form a group—and he gave what is now known as the relativistic velocity-addition law. Poincaré later delivered a paper at the meeting of the Academy of Sciences in Paris on 5 June 1905 in which these issues were addressed. In the published version of that he wrote: The essential point, established by Lorentz, is that the equations of the electromagnetic field are not altered by a certain transformation (which I will call by the name of Lorentz) of the form: and showed that the arbitrary function must be unity for all (Lorentz had set by a different argument) to make the transformations form a group. In an enlarged version of the paper that appeared in 1906 Poincaré pointed out that the combination is invariant. He noted that a Lorentz transformation is merely a rotation in four-dimensional space about the origin by introducing as a fourth imaginary coordinate, and he used an early form of four-vectors. Poincaré expressed a lack of interest in a four-dimensional reformulation of his new mechanics in 1907, because in his opinion the translation of physics into the language of four-dimensional geometry would entail too much effort for limited profit. So it was Hermann Minkowski who worked out the consequences of this notion in 1907. Mass–energy relation Like others before, Poincaré (1900) discovered a relation between mass and electromagnetic energy. While studying the conflict between the action/reaction principle and Lorentz ether theory, he tried to determine whether the center of gravity still moves with a uniform velocity when electromagnetic fields are included. He noticed that the action/reaction principle does not hold for matter alone, but that the electromagnetic field has its own momentum. 
Poincaré concluded that the electromagnetic field energy of an electromagnetic wave behaves like a fictitious fluid (fluide fictif) with a mass density of E/c2. If the center of mass frame is defined by both the mass of matter and the mass of the fictitious fluid, and if the fictitious fluid is indestructible—it's neither created or destroyed—then the motion of the center of mass frame remains uniform. But electromagnetic energy can be converted into other forms of energy. So Poincaré assumed that there exists a non-electric energy fluid at each point of space, into which electromagnetic energy can be transformed and which also carries a mass proportional to the energy. In this way, the motion of the center of mass remains uniform. Poincaré said that one should not be too surprised by these assumptions, since they are only mathematical fictions. However, Poincaré's resolution led to a paradox when changing frames: if a Hertzian oscillator radiates in a certain direction, it will suffer a recoil from the inertia of the fictitious fluid. Poincaré performed a Lorentz boost (to order v/c) to the frame of the moving source. He noted that energy conservation holds in both frames, but that the law of conservation of momentum is violated. This would allow perpetual motion, a notion which he abhorred. The laws of nature would have to be different in the frames of reference, and the relativity principle would not hold. Therefore, he argued that also in this case there has to be another compensating mechanism in the ether. Poincaré himself came back to this topic in his St. Louis lecture (1904). He rejected the possibility that energy carries mass and criticized his own solution to compensate the above-mentioned problems: In the above quote he refers to the Hertz assumption of total aether entrainment that was falsified by the Fizeau experiment but that experiment does indeed show that that light is partially "carried along" with a substance. Finally in 1908 he revisits the problem and ends with abandoning the principle of reaction altogether in favor of supporting a solution based in the inertia of aether itself. He also discussed two other unexplained effects: (1) non-conservation of mass implied by Lorentz's variable mass , Abraham's theory of variable mass and Kaufmann's experiments on the mass of fast moving electrons and (2) the non-conservation of energy in the radium experiments of Marie Curie. It was Albert Einstein's concept of mass–energy equivalence (1905) that a body losing energy as radiation or heat was losing mass of amount m = E/c2 that resolved Poincaré's paradox, without using any compensating mechanism within the ether. The Hertzian oscillator loses mass in the emission process, and momentum is conserved in any frame. However, concerning Poincaré's solution of the Center of Gravity problem, Einstein noted that Poincaré's formulation and his own from 1906 were mathematically equivalent. Gravitational waves In 1905 Poincaré first proposed gravitational waves (ondes gravifiques) emanating from a body and propagating at the speed of light. He wrote: Poincaré and Einstein Einstein's first paper on relativity was published three months after Poincaré's short paper, but before Poincaré's longer version. Einstein relied on the principle of relativity to derive the Lorentz transformations and used a similar clock synchronisation procedure (Einstein synchronisation) to the one that Poincaré (1900) had described, but Einstein's paper was remarkable in that it contained no references at all. 
Poincaré never acknowledged Einstein's work on special relativity. However, Einstein expressed sympathy with Poincaré's outlook obliquely in a letter to Hans Vaihinger on 3 May 1919, when Einstein considered Vaihinger's general outlook to be close to his own and Poincaré's to be close to Vaihinger's. In public, Einstein acknowledged Poincaré posthumously in the text of a lecture in 1921 titled "Geometrie und Erfahrung (Geometry and Experience)" in connection with non-Euclidean geometry, but not in connection with special relativity. A few years before his death, Einstein commented on Poincaré as being one of the pioneers of relativity, saying "Lorentz had already recognized that the transformation named after him is essential for the analysis of Maxwell's equations, and Poincaré deepened this insight still further ....". Assessments on Poincaré and relativity Poincaré's work in the development of special relativity is well recognised, though most historians stress that despite many similarities with Einstein's work, the two had very different research agendas and interpretations of the work. Poincaré developed a similar physical interpretation of local time and noticed the connection to signal velocity, but contrary to Einstein he continued to use the ether-concept in his papers and argued that clocks at rest in the ether show the "true" time, and moving clocks show the local time. So Poincaré tried to keep the relativity principle in accordance with classical concepts, while Einstein developed a mathematically equivalent kinematics based on the new physical concepts of the relativity of space and time. While this is the view of most historians, a minority go much further, such as E. T. Whittaker, who held that Poincaré and Lorentz were the true discoverers of relativity. Algebra and number theory Poincaré introduced group theory to physics, and was the first to study the group of Lorentz transformations. He also made major contributions to the theory of discrete groups and their representations. Topology The subject is clearly defined by Felix Klein in his "Erlangen Program" (1872): the geometry invariants of arbitrary continuous transformation, a kind of geometry. The term "topology" was introduced, as suggested by Johann Benedict Listing, instead of previously used "Analysis situs". Some important concepts were introduced by Enrico Betti and Bernhard Riemann. But the foundation of this science, for a space of any dimension, was created by Poincaré. His first article on this topic appeared in 1894. His research in geometry led to the abstract topological definition of homotopy and homology. He also first introduced the basic concepts and invariants of combinatorial topology, such as Betti numbers and the fundamental group. Poincaré proved a formula relating the number of edges, vertices and faces of n-dimensional polyhedron (the Euler–Poincaré theorem) and gave the first precise formulation of the intuitive notion of dimension. Astronomy and celestial mechanics Poincaré published two now classical monographs, "New Methods of Celestial Mechanics" (1892–1899) and "Lectures on Celestial Mechanics" (1905–1910). In them, he successfully applied the results of their research to the problem of the motion of three bodies and studied in detail the behavior of solutions (frequency, stability, asymptotic, and so on). They introduced the small parameter method, fixed points, integral invariants, variational equations, the convergence of the asymptotic expansions. 
Generalizing a theory of Bruns (1887), Poincaré showed that the three-body problem is not integrable. In other words, the general solution of the three-body problem can not be expressed in terms of algebraic and transcendental functions through unambiguous coordinates and velocities of the bodies. His work in this area was the first major achievement in celestial mechanics since Isaac Newton. These monographs include an idea of Poincaré, which later became the basis for mathematical "chaos theory" (see, in particular, the Poincaré recurrence theorem) and the general theory of dynamical systems. Poincaré authored important works on astronomy for the equilibrium figures of a gravitating rotating fluid. He introduced the important concept of bifurcation points and proved the existence of equilibrium figures such as the non-ellipsoids, including ring-shaped and pear-shaped figures, and their stability. For this discovery, Poincaré received the Gold Medal of the Royal Astronomical Society (1900). Differential equations and mathematical physics After defending his doctoral thesis on the study of singular points of the system of differential equations, Poincaré wrote a series of memoirs under the title "On curves defined by differential equations" (1881–1882). In these articles, he built a new branch of mathematics, called "qualitative theory of differential equations". Poincaré showed that even if the differential equation can not be solved in terms of known functions, yet from the very form of the equation, a wealth of information about the properties and behavior of the solutions can be found. In particular, Poincaré investigated the nature of the trajectories of the integral curves in the plane, gave a classification of singular points (saddle, focus, center, node), introduced the concept of a limit cycle and the loop index, and showed that the number of limit cycles is always finite, except for some special cases. Poincaré also developed a general theory of integral invariants and solutions of the variational equations. For the finite-difference equations, he created a new direction – the asymptotic analysis of the solutions. He applied all these achievements to study practical problems of mathematical physics and celestial mechanics, and the methods used were the basis of its topological works. Character Poincaré's work habits have been compared to a bee flying from flower to flower. Poincaré was interested in the way his mind worked; he studied his habits and gave a talk about his observations in 1908 at the Institute of General Psychology in Paris. He linked his way of thinking to how he made several discoveries. The mathematician Darboux claimed he was un intuitif (an intuitive), arguing that this is demonstrated by the fact that he worked so often by visual representation. Jacques Hadamard wrote that Poincaré's research demonstrated marvelous clarity and Poincaré himself wrote that he believed that logic was not a way to invent but a way to structure ideas and that logic limits ideas. Toulouse's characterisation Poincaré's mental organisation was interesting not only to Poincaré himself but also to Édouard Toulouse, a psychologist of the Psychology Laboratory of the School of Higher Studies in Paris. Toulouse wrote a book entitled Henri Poincaré (1910). In it, he discussed Poincaré's regular schedule: He worked during the same times each day in short periods of time. He undertook mathematical research for four hours a day, between 10 a.m. and noon then again from 5 p.m. to 7 p.m.. 
He would read articles in journals later in the evening. His normal work habit was to solve a problem completely in his head, then commit the completed problem to paper. He was ambidextrous and nearsighted. His ability to visualise what he heard proved particularly useful when he attended lectures, since his eyesight was so poor that he could not see properly what the lecturer wrote on the blackboard. These abilities were offset to some extent by his shortcomings: He was physically clumsy and artistically inept. He was always in a rush and disliked going back for changes or corrections. He never spent a long time on a problem since he believed that the subconscious would continue working on the problem while he consciously worked on another problem. In addition, Toulouse stated that most mathematicians worked from principles already established while Poincaré started from basic principles each time (O'Connor et al., 2002). His method of thinking is well summarised as: Publications Honours Awards Oscar II, King of Sweden's mathematical competition (1887) Foreign member of the Royal Netherlands Academy of Arts and Sciences (1897) American Philosophical Society (1899) Gold Medal of the Royal Astronomical Society of London (1900) Bolyai Prize (1905) Matteucci Medal (1905) French Academy of Sciences (1906) Académie française (1909) Bruce Medal (1911) Named after him Institut Henri Poincaré (mathematics and theoretical physics centre) Poincaré Prize (Mathematical Physics International Prize) Annales Henri Poincaré (Scientific Journal) Poincaré Seminar (nicknamed "Bourbaphy") The crater Poincaré on the Moon Asteroid 2021 Poincaré List of things named after Henri Poincaré Henri Poincaré did not receive the Nobel Prize in Physics, but he had influential advocates like Henri Becquerel or committee member Gösta Mittag-Leffler. The nomination archive reveals that Poincaré received a total of 51 nominations between 1904 and 1912, the year of his death. Of the 58 nominations for the 1910 Nobel Prize, 34 named Poincaré. Nominators included Nobel laureates Hendrik Lorentz and Pieter Zeeman (both of 1902), Marie Curie (of 1903), Albert Michelson (of 1907), Gabriel Lippmann (of 1908) and Guglielmo Marconi (of 1909). The fact that renowned theoretical physicists like Poincaré, Boltzmann or Gibbs were not awarded the Nobel Prize is seen as evidence that the Nobel committee had more regard for experimentation than theory. In Poincaré's case, several of those who nominated him pointed out that the greatest problem was to name a specific discovery, invention, or technique. Philosophy Poincaré had philosophical views opposite to those of Bertrand Russell and Gottlob Frege, who believed that mathematics was a branch of logic. Poincaré strongly disagreed, claiming that intuition was the life of mathematics. Poincaré gives an interesting point of view in his 1902 book Science and Hypothesis: Poincaré believed that arithmetic is synthetic. He argued that Peano's axioms cannot be proven non-circularly with the principle of induction (Murzi, 1998), therefore concluding that arithmetic is a priori synthetic and not analytic. Poincaré then went on to say that mathematics cannot be deduced from logic since it is not analytic. His views were similar to those of Immanuel Kant (Kolak, 2001, Folina 1992). He strongly opposed Cantorian set theory, objecting to its use of impredicative definitions. However, Poincaré did not share Kantian views in all branches of philosophy and mathematics. 
For example, in geometry, Poincaré believed that the structure of non-Euclidean space can be known analytically. Poincaré held that convention plays an important role in physics. His view (and some later, more extreme versions of it) came to be known as "conventionalism". Poincaré believed that Newton's first law was not empirical but is a conventional framework assumption for mechanics (Gargani, 2012). He also believed that the geometry of physical space is conventional. He considered examples in which either the geometry of the physical fields or gradients of temperature can be changed, either describing a space as non-Euclidean measured by rigid rulers, or as a Euclidean space where the rulers are expanded or shrunk by a variable heat distribution. However, Poincaré thought that we were so accustomed to Euclidean geometry that we would prefer to change the physical laws to save Euclidean geometry rather than shift to non-Euclidean physical geometry. Free will Poincaré's famous lectures before the Société de Psychologie in Paris (published as Science and Hypothesis, The Value of Science, and Science and Method) were cited by Jacques Hadamard as the source for the idea that creativity and invention consist of two mental stages, first random combinations of possible solutions to a problem, followed by a critical evaluation. Although he most often spoke of a deterministic universe, Poincaré said that the subconscious generation of new possibilities involves chance. It is certain that the combinations which present themselves to the mind in a kind of sudden illumination after a somewhat prolonged period of unconscious work are generally useful and fruitful combinations... all the combinations are formed as a result of the automatic action of the subliminal ego, but those only which are interesting find their way into the field of consciousness... A few only are harmonious, and consequently at once useful and beautiful, and they will be capable of affecting the geometrician's special sensibility I have been speaking of; which, once aroused, will direct our attention upon them, and will thus give them the opportunity of becoming conscious... In the subliminal ego, on the contrary, there reigns what I would call liberty, if one could give this name to the mere absence of discipline and to disorder born of chance. Poincaré's two stages—random combinations followed by selection—became the basis for Daniel Dennett's two-stage model of free will. Bibliography Poincaré's writings in English translation Popular writings on the philosophy of science: ; reprinted in 1921; this book includes the English translations of Science and Hypothesis (1902), The Value of Science (1905), Science and Method (1908). 1905. "", The Walter Scott Publishing Co. 1906. "", Athenæum 1913. "The New Mechanics", The Monist, Vol. XXIII. 1913. "The Relativity of Space", The Monist, Vol. XXIII. 1913. 1956. Chance. In James R. Newman, ed., The World of Mathematics (4 Vols). 1958. The Value of Science, New York: Dover. On algebraic topology: 1895. . The first systematic study of topology. On celestial mechanics: 1890. 1892–99. New Methods of Celestial Mechanics, 3 vols. English trans., 1967. . 1905. "The Capture Hypothesis of J. J. See", The Monist, Vol. XV. 1905–10. Lessons of Celestial Mechanics. On the philosophy of mathematics: Ewald, William B., ed., 1996. From Kant to Hilbert: A Source Book in the Foundations of Mathematics, 2 vols. Oxford Univ. Press. 
Contains the following works by Poincaré: 1894, "On the Nature of Mathematical Reasoning", 972–81. 1898, "On the Foundations of Geometry", 982–1011. 1900, "Intuition and Logic in Mathematics", 1012–20. 1905–06, "Mathematics and Logic, I–III", 1021–70. 1910, "On Transfinite Numbers", 1071–74. 1905. "The Principles of Mathematical Physics", The Monist, Vol. XV. 1910. "The Future of Mathematics", The Monist, Vol. XX. 1910. "Mathematical Creation", The Monist, Vol. XX. Other: 1904. Maxwell's Theory and Wireless Telegraphy, New York, McGraw Publishing Company. 1905. "The New Logics", The Monist, Vol. XV. 1905. "The Latest Efforts of the Logisticians", The Monist, Vol. XV. Exhaustive bibliography of English translations: 1892–2017. . See also Concepts Poincaré–Andronov–Hopf bifurcation Poincaré complex – an abstraction of the singular chain complex of a closed, orientable manifold Poincaré duality Poincaré disk model Poincaré expansion Poincaré gauge Poincaré group Poincaré half-plane model Poincaré homology sphere Poincaré inequality Poincaré lemma Poincaré map Poincaré residue Poincaré series (modular form) Poincaré space Poincaré metric Poincaré plot Poincaré polynomial Poincaré series Poincaré sphere Poincaré–Einstein synchronisation Poincaré–Lelong equation Poincaré–Lindstedt method Poincaré–Lindstedt perturbation theory Poincaré–Steklov operator Euler–Poincaré characteristic Neumann–Poincaré operator Reflecting Function Theorems Here is a list of theorems proved by Poincaré: Poincaré's recurrence theorem: certain systems will, after a sufficiently long but finite time, return to a state very close to the initial state. Poincaré–Bendixson theorem: a statement about the long-term behaviour of orbits of continuous dynamical systems on the plane, cylinder, or two-sphere. Poincaré–Hopf theorem: a generalization of the hairy-ball theorem, which states that there is no smooth vector field on a sphere having no sources or sinks. Poincaré–Lefschetz duality theorem: a version of Poincaré duality in geometric topology, applying to a manifold with boundary Poincaré separation theorem: gives the upper and lower bounds of eigenvalues of a real symmetric matrix B'AB that can be considered as the orthogonal projection of a larger real symmetric matrix A onto a linear subspace spanned by the columns of B. Poincaré–Birkhoff theorem: every area-preserving, orientation-preserving homeomorphism of an annulus that rotates the two boundaries in opposite directions has at least two fixed points. Poincaré–Birkhoff–Witt theorem: an explicit description of the universal enveloping algebra of a Lie algebra. Poincaré–Bjerknes circulation theorem: theorem about a conservation of quantity for the rotating frame. Poincaré conjecture (now a theorem): Every simply connected, closed 3-manifold is homeomorphic to the 3-sphere. Poincaré–Miranda theorem: a generalization of the intermediate value theorem to n dimensions. Other French epistemology History of special relativity List of things named after Henri Poincaré Institut Henri Poincaré, Paris Brouwer fixed-point theorem Relativity priority dispute Epistemic structural realism References Footnotes Sources Bell, Eric Temple, 1986. Men of Mathematics (reissue edition). Touchstone Books. . Belliver, André, 1956. Henri Poincaré ou la vocation souveraine. Paris: Gallimard. Bernstein, Peter L, 1996. "Against the Gods: A Remarkable Story of Risk". (p. 199–200). John Wiley & Sons. Boyer, B. Carl, 1968. A History of Mathematics: Henri Poincaré, John Wiley & Sons. 
Grattan-Guinness, Ivor, 2000. The Search for Mathematical Roots 1870–1940. Princeton Uni. Press. . Internet version published in Journal of the ACMS 2004. Folina, Janet, 1992. Poincaré and the Philosophy of Mathematics. Macmillan, New York. Gray, Jeremy, 1986. Linear differential equations and group theory from Riemann to Poincaré, Birkhauser Gray, Jeremy, 2013. Henri Poincaré: A scientific biography. Princeton University Press Kolak, Daniel, 2001. Lovers of Wisdom, 2nd ed. Wadsworth. Gargani, Julien, 2012. Poincaré, le hasard et l'étude des systèmes complexes, L'Harmattan. Murzi, 1998. "Henri Poincaré". O'Connor, J. John, and Robertson, F. Edmund, 2002, "Jules Henri Poincaré". University of St. Andrews, Scotland. Peterson, Ivars, 1995. Newton's Clock: Chaos in the Solar System (reissue edition). W H Freeman & Co. . Sageret, Jules, 1911. Henri Poincaré. Paris: Mercure de France. Toulouse, E.,1910. Henri Poincaré.—(Source biography in French) at University of Michigan Historic Math Collection. — Verhulst, Ferdinand, 2012 Henri Poincaré. Impatient Genius. N.Y.: Springer. Henri Poincaré, l'œuvre scientifique, l'œuvre philosophique, by Vito Volterra, Jacques Hadamard, Paul Langevin and Pierre Boutroux, Felix Alcan, 1914. Henri Poincaré, l'œuvre mathématique, by Vito Volterra. Henri Poincaré, le problème des trois corps, by Jacques Hadamard. Henri Poincaré, le physicien, by Paul Langevin. Henri Poincaré, l'œuvre philosophique, by Pierre Boutroux. Further reading Secondary sources to work on relativity Non-mainstream sources External links Henri Poincaré's Bibliography Internet Encyclopedia of Philosophy: "Henri Poincaré "—by Mauro Murzi. Internet Encyclopedia of Philosophy: "Poincaré’s Philosophy of Mathematics"—by Janet Folina. Henri Poincaré on Information Philosopher A timeline of Poincaré's life University of Nantes (in French). Henri Poincaré Papers University of Nantes (in French). Bruce Medal page Collins, Graham P., "Henri Poincaré, His Conjecture, Copacabana and Higher Dimensions," Scientific American, 9 June 2004. BBC in Our Time, "Discussion of the Poincaré conjecture," 2 November 2006, hosted by Melvyn Bragg. Poincare Contemplates Copernicus at MathPages High Anxieties – The Mathematics of Chaos (2008) BBC documentary directed by David Malone looking at the influence of Poincaré's discoveries on 20th Century mathematics. 
De Broglie–Bohm theory
The de Broglie–Bohm theory is an interpretation of quantum mechanics which postulates that, in addition to the wavefunction, an actual configuration of particles exists, even when unobserved. The evolution over time of the configuration of all particles is defined by a guiding equation. The evolution of the wave function over time is given by the Schrödinger equation. The theory is named after Louis de Broglie (1892–1987) and David Bohm (1917–1992). The theory is deterministic and explicitly nonlocal: the velocity of any one particle depends on the value of the guiding equation, which depends on the configuration of all the particles under consideration. Measurements are a particular case of quantum processes described by the theory, for which it yields the same quantum predictions as other interpretations of quantum mechanics. The theory does not have a "measurement problem", due to the fact that the particles have a definite configuration at all times. The Born rule in de Broglie–Bohm theory is not a postulate. Rather, in this theory, the link between the probability density and the wave function has the status of a theorem, a result of a separate postulate, the "quantum equilibrium hypothesis", which is additional to the basic principles governing the wave function. There are several equivalent mathematical formulations of the theory. Overview De Broglie–Bohm theory is based on the following postulates: There is a configuration $q$ of the universe, described by coordinates $q^k$, which is an element of the configuration space $Q$. The configuration space is different for different versions of pilot-wave theory. For example, this may be the space of positions of $N$ particles, or, in the case of field theory, the space of field configurations $\phi(x)$. The configuration evolves (for spin 0) according to the guiding equation $\frac{dQ_k}{dt}(t) = \frac{j_k}{|\psi|^2}(Q(t), t)$, where $j_k = \frac{\hbar}{m_k}\operatorname{Im}(\psi^*\nabla_k\psi) = \frac{1}{m_k}\operatorname{Re}(\psi^*\hat{p}_k\,\psi)$ is the probability current or probability flux, and $\hat{p}_k = -i\hbar\nabla_k$ is the momentum operator. Here, $\psi(q, t)$ is the standard complex-valued wavefunction from quantum theory, which evolves according to Schrödinger's equation $i\hbar\,\partial_t\psi(q, t) = \big(-\sum_k \frac{\hbar^2}{2m_k}\nabla_k^2 + V(q)\big)\psi(q, t)$. This completes the specification of the theory for any quantum theory with a Hamiltonian operator of the type $H = -\sum_k \frac{\hbar^2}{2m_k}\nabla_k^2 + V(q)$. The configuration is distributed according to $|\psi(q, t_0)|^2$ at some moment of time $t_0$, and this consequently holds for all times. Such a state is named quantum equilibrium. With quantum equilibrium, this theory agrees with the results of standard quantum mechanics. Even though this latter relation is frequently presented as an axiom of the theory, Bohm presented it as derivable from statistical-mechanical arguments in the original papers of 1952. This argument was further supported by the work of Bohm in 1953 and was substantiated by Vigier and Bohm's paper of 1954, in which they introduced stochastic fluid fluctuations that drive a process of asymptotic relaxation from quantum non-equilibrium to quantum equilibrium (ρ → |ψ|²). Double-slit experiment The double-slit experiment is an illustration of wave–particle duality. In it, a beam of particles (such as electrons) travels through a barrier that has two slits. If a detector screen is on the side beyond the barrier, the pattern of detected particles shows interference fringes characteristic of waves arriving at the screen from two sources (the two slits); however, the interference pattern is made up of individual dots corresponding to particles that had arrived on the screen. The system seems to exhibit the behaviour of both waves (interference patterns) and particles (dots on the screen). 
If this experiment is modified so that one slit is closed, no interference pattern is observed. Thus, the state of both slits affects the final results. It can also be arranged to have a minimally invasive detector at one of the slits to detect which slit the particle went through. When that is done, the interference pattern disappears. In de Broglie–Bohm theory, the wavefunction is defined at both slits, but each particle has a well-defined trajectory that passes through exactly one of the slits. The final position of the particle on the detector screen and the slit through which the particle passes is determined by the initial position of the particle. Such initial position is not knowable or controllable by the experimenter, so there is an appearance of randomness in the pattern of detection. In Bohm's 1952 papers he used the wavefunction to construct a quantum potential that, when included in Newton's equations, gave the trajectories of the particles streaming through the two slits. In effect the wavefunction interferes with itself and guides the particles by the quantum potential in such a way that the particles avoid the regions in which the interference is destructive and are attracted to the regions in which the interference is constructive, resulting in the interference pattern on the detector screen. To explain the behavior when the particle is detected to go through one slit, one needs to appreciate the role of the conditional wavefunction and how it results in the collapse of the wavefunction; this is explained below. The basic idea is that the environment registering the detection effectively separates the two wave packets in configuration space. Theory The pilot wave The de Broglie–Bohm theory describes a pilot wave in a configuration space and trajectories of particles as in classical mechanics but defined by non-Newtonian mechanics. At every moment of time there exists not only a wavefunction, but also a well-defined configuration of the whole universe (i.e., the system as defined by the boundary conditions used in solving the Schrödinger equation). The de Broglie–Bohm theory works on particle positions and trajectories like classical mechanics but the dynamics are different. In classical mechanics, the accelerations of the particles are imparted directly by forces, which exist in physical three-dimensional space. In de Broglie–Bohm theory, the quantum "field exerts a new kind of "quantum-mechanical" force". Bohm hypothesized that each particle has a "complex and subtle inner structure" that provides the capacity to react to the information provided by the wavefunction by the quantum potential. Also, unlike in classical mechanics, physical properties (e.g., mass, charge) are spread out over the wavefunction in de Broglie–Bohm theory, not localized at the position of the particle. The wavefunction itself, and not the particles, determines the dynamical evolution of the system: the particles do not act back onto the wave function. As Bohm and Hiley worded it, "the Schrödinger equation for the quantum field does not have sources, nor does it have any other way by which the field could be directly affected by the condition of the particles [...] the quantum theory can be understood completely in terms of the assumption that the quantum field has no sources or other forms of dependence on the particles". P. Holland considers this lack of reciprocal action of particles and wave function to be one "[a]mong the many nonclassical properties exhibited by this theory". 
Holland later called this a merely apparent lack of back reaction, due to the incompleteness of the description. In what follows below, the setup for one particle moving in $\mathbb{R}^3$ is given, followed by the setup for N particles moving in 3 dimensions. In the first instance, configuration space and real space are the same, while in the second, real space is still $\mathbb{R}^3$, but configuration space becomes $\mathbb{R}^{3N}$. While the particle positions themselves are in real space, the velocity field and wavefunction are on configuration space, which is how particles are entangled with each other in this theory. Extensions to this theory include spin and more complicated configuration spaces. We use variations of $\mathbf{Q}$ for particle positions, while $\psi$ represents the complex-valued wavefunction on configuration space. Guiding equation For a spinless single particle moving in $\mathbb{R}^3$, the particle's velocity is $\frac{d\mathbf{Q}}{dt}(t) = \frac{\hbar}{m}\operatorname{Im}\!\left(\frac{\nabla\psi}{\psi}\right)(\mathbf{Q}(t), t)$. For many particles labeled $k = 1, \dots, N$, the velocity of the $k$-th particle is $\frac{d\mathbf{Q}_k}{dt}(t) = \frac{\hbar}{m_k}\operatorname{Im}\!\left(\frac{\nabla_k\psi}{\psi}\right)(\mathbf{Q}_1(t), \dots, \mathbf{Q}_N(t), t)$. The main fact to notice is that this velocity field depends on the actual positions of all of the particles in the universe. As explained below, in most experimental situations, the influence of all of those particles can be encapsulated into an effective wavefunction for a subsystem of the universe. Schrödinger's equation The one-particle Schrödinger equation governs the time evolution of a complex-valued wavefunction on $\mathbb{R}^3$. The equation represents a quantized version of the total energy of a classical system evolving under a real-valued potential function $V$ on $\mathbb{R}^3$: $i\hbar\frac{\partial\psi}{\partial t} = -\frac{\hbar^2}{2m}\nabla^2\psi + V\psi$. For many particles, the equation is the same except that $\psi$ and $V$ are now defined on configuration space, $\mathbb{R}^{3N}$: $i\hbar\frac{\partial\psi}{\partial t} = -\sum_{k=1}^{N}\frac{\hbar^2}{2m_k}\nabla_k^2\psi + V(\mathbf{q}_1, \dots, \mathbf{q}_N)\psi$. This is the same wavefunction as in conventional quantum mechanics. Relation to the Born rule In Bohm's original papers, he discusses how de Broglie–Bohm theory results in the usual measurement results of quantum mechanics. The main idea is that this is true if the positions of the particles satisfy the statistical distribution given by $|\psi|^2$. And that distribution is guaranteed to be true for all time by the guiding equation if the initial distribution of the particles satisfies $|\psi|^2$. For a given experiment, one can postulate this as being true and verify it experimentally. But, as argued by Dürr et al., one needs to argue that this distribution for subsystems is typical. The authors argue that $|\psi|^2$, by virtue of its equivariance under the dynamical evolution of the system, is the appropriate measure of typicality for initial conditions of the positions of the particles. The authors then prove that the vast majority of possible initial configurations will give rise to statistics obeying the Born rule (i.e., $\rho = |\psi|^2$) for measurement outcomes. In summary, in a universe governed by the de Broglie–Bohm dynamics, Born rule behavior is typical. The situation is thus analogous to the situation in classical statistical physics. A low-entropy initial condition will, with overwhelmingly high probability, evolve into a higher-entropy state: behavior consistent with the second law of thermodynamics is typical. There are anomalous initial conditions that would give rise to violations of the second law; however, in the absence of some very detailed evidence supporting the realization of one of those conditions, it would be quite unreasonable to expect anything but the actually observed uniform increase of entropy. 
Similarly in the de Broglie–Bohm theory, there are anomalous initial conditions that would produce measurement statistics in violation of the Born rule (conflicting with the predictions of standard quantum theory), but the typicality theorem shows that, absent some specific reason to believe one of those special initial conditions was in fact realized, Born rule behavior is what one should expect. It is in this qualified sense that the Born rule is, for the de Broglie–Bohm theory, a theorem rather than (as in ordinary quantum theory) an additional postulate. It can also be shown that a distribution of particles which is not distributed according to the Born rule (that is, a distribution "out of quantum equilibrium") and evolving under the de Broglie–Bohm dynamics is overwhelmingly likely to evolve dynamically into a state distributed as $|\psi|^2$. The conditional wavefunction of a subsystem In the formulation of the de Broglie–Bohm theory, there is only a wavefunction for the entire universe (which always evolves by the Schrödinger equation). Here, the "universe" is simply the system limited by the same boundary conditions used to solve the Schrödinger equation. However, once the theory is formulated, it is convenient to introduce a notion of wavefunction also for subsystems of the universe. Let us write the wavefunction of the universe as $\psi(q^{\mathrm{I}}, q^{\mathrm{II}}, t)$, where $q^{\mathrm{I}}$ denotes the configuration variables associated to some subsystem (I) of the universe, and $q^{\mathrm{II}}$ denotes the remaining configuration variables. Denote respectively by $Q^{\mathrm{I}}(t)$ and $Q^{\mathrm{II}}(t)$ the actual configuration of subsystem (I) and of the rest of the universe. For simplicity, we consider here only the spinless case. The conditional wavefunction of subsystem (I) is defined by $\psi^{\mathrm{I}}(q^{\mathrm{I}}, t) = \psi(q^{\mathrm{I}}, Q^{\mathrm{II}}(t), t)$. It follows immediately from the fact that $Q(t) = (Q^{\mathrm{I}}(t), Q^{\mathrm{II}}(t))$ satisfies the guiding equation that also the configuration $Q^{\mathrm{I}}(t)$ satisfies a guiding equation identical to the one presented in the formulation of the theory, with the universal wavefunction $\psi$ replaced with the conditional wavefunction $\psi^{\mathrm{I}}$. Also, the fact that $Q(t)$ is random with probability density given by the square modulus of $\psi(\cdot, t)$ implies that the conditional probability density of $Q^{\mathrm{I}}(t)$ given $Q^{\mathrm{II}}(t)$ is given by the square modulus of the (normalized) conditional wavefunction $\psi^{\mathrm{I}}(\cdot, t)$ (in the terminology of Dürr et al. this fact is called the fundamental conditional probability formula). Unlike the universal wavefunction, the conditional wavefunction of a subsystem does not always evolve by the Schrödinger equation, but in many situations it does. For instance, if the universal wavefunction factors as $\psi(q^{\mathrm{I}}, q^{\mathrm{II}}, t) = \psi^{\mathrm{I}}(q^{\mathrm{I}}, t)\,\psi^{\mathrm{II}}(q^{\mathrm{II}}, t)$, then the conditional wavefunction of subsystem (I) is (up to an irrelevant scalar factor) equal to $\psi^{\mathrm{I}}$ (this is what standard quantum theory would regard as the wavefunction of subsystem (I)). If, in addition, the Hamiltonian does not contain an interaction term between subsystems (I) and (II), then $\psi^{\mathrm{I}}$ does satisfy a Schrödinger equation. More generally, assume that the universal wave function can be written in the form $\psi(q^{\mathrm{I}}, q^{\mathrm{II}}, t) = \psi^{\mathrm{I}}(q^{\mathrm{I}}, t)\,\psi^{\mathrm{II}}(q^{\mathrm{II}}, t) + \phi(q^{\mathrm{I}}, q^{\mathrm{II}}, t)$, where $\phi$ solves the Schrödinger equation and $\phi(q^{\mathrm{I}}, Q^{\mathrm{II}}(t), t) = 0$ for all $q^{\mathrm{I}}$ and $t$. Then, again, the conditional wavefunction of subsystem (I) is (up to an irrelevant scalar factor) equal to $\psi^{\mathrm{I}}$, and if the Hamiltonian does not contain an interaction term between subsystems (I) and (II), then $\psi^{\mathrm{I}}$ satisfies a Schrödinger equation. The fact that the conditional wavefunction of a subsystem does not always evolve by the Schrödinger equation is related to the fact that the usual collapse rule of standard quantum theory emerges from the Bohmian formalism when one considers conditional wavefunctions of subsystems. 
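The guiding equation and the quantum equilibrium hypothesis described above can be illustrated with a small numerical experiment. The Python sketch below is a minimal one-dimensional toy model (free particle, ħ = m = 1, two Gaussian packets standing in for the two slits; all parameter values and names are assumptions of this illustration, not anything from the literature cited in the text): the wavefunction is evolved with a split-step Fourier method, and an ensemble of particle positions, initially sampled from |ψ|², is advanced with the guiding equation.

```python
import numpy as np

# 1D toy model of de Broglie-Bohm dynamics (hbar = m = 1, free particle).
# Two superposed Gaussian packets play the role of the two slits; particle
# positions are advanced with the guiding equation dQ/dt = Im(psi'/psi)|_{x=Q}.

hbar = m = 1.0
N, L_box = 2048, 200.0
x = np.linspace(-L_box / 2, L_box / 2, N, endpoint=False)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)     # wavenumbers for spectral derivatives

def gaussian(x0, sigma, kx):
    """Normalised Gaussian packet centred at x0 with mean momentum kx."""
    g = np.exp(-(x - x0) ** 2 / (4 * sigma**2) + 1j * kx * x)
    return g / np.sqrt(np.sum(np.abs(g) ** 2) * dx)

# "Two slits": packets at +/-5 moving toward each other so they overlap and interfere.
psi = gaussian(-5.0, 1.0, +1.0) + gaussian(+5.0, 1.0, -1.0)
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)

# Sample initial particle positions from |psi|^2 (quantum equilibrium hypothesis).
rng = np.random.default_rng(1)
prob = np.abs(psi) ** 2 * dx
Q = rng.choice(x, size=200, p=prob / prob.sum())

dt, n_steps = 0.01, 1500
kinetic_phase = np.exp(-1j * hbar * k**2 * dt / (2 * m))  # exact free evolution per step

for _ in range(n_steps):
    # Evolve the wavefunction (free particle: kinetic term only).
    psi = np.fft.ifft(kinetic_phase * np.fft.fft(psi))

    # Guiding equation: v(x) = (hbar/m) * Im(psi* psi') / |psi|^2, interpolated at Q.
    dpsi_dx = np.fft.ifft(1j * k * np.fft.fft(psi))
    v_grid = (hbar / m) * np.imag(np.conj(psi) * dpsi_dx) / (np.abs(psi) ** 2 + 1e-30)
    Q = Q + np.interp(Q, x, v_grid) * dt     # simple Euler step for the trajectories

# The positions accumulate where |psi|^2 is large; in the overlap region the
# histogram shows fringe-like structure, as discussed for the double slit.
hist, edges = np.histogram(Q, bins=60, range=(-30, 30))
for count, left in zip(hist, edges[:-1]):
    print(f"{left:6.1f} {'#' * count}")
```

Because the initial positions are drawn from |ψ|² and the guiding flow is equivariant, the final histogram tracks the evolved |ψ|², which is the Born-rule behaviour described above; in the exact dynamics the one-dimensional trajectories never cross.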
Extensions Relativity Pilot-wave theory is explicitly nonlocal, which is in ostensible conflict with special relativity. Various extensions of "Bohm-like" mechanics exist that attempt to resolve this problem. Bohm himself in 1953 presented an extension of the theory satisfying the Dirac equation for a single particle. However, this was not extensible to the many-particle case because it used an absolute time. A renewed interest in constructing Lorentz-invariant extensions of Bohmian theory arose in the 1990s; see Bohm and Hiley: The Undivided Universe and references therein. Another approach is given by Dürr et al., who use Bohm–Dirac models and a Lorentz-invariant foliation of space-time. Thus, Dürr et al. (1999) showed that it is possible to formally restore Lorentz invariance for the Bohm–Dirac theory by introducing additional structure. This approach still requires a foliation of space-time. While this is in conflict with the standard interpretation of relativity, the preferred foliation, if unobservable, does not lead to any empirical conflicts with relativity. In 2013, Dürr et al. suggested that the required foliation could be covariantly determined by the wavefunction. The relation between nonlocality and preferred foliation can be better understood as follows. In de Broglie–Bohm theory, nonlocality manifests as the fact that the velocity and acceleration of one particle depend on the instantaneous positions of all other particles. On the other hand, in the theory of relativity the concept of instantaneousness does not have an invariant meaning. Thus, to define particle trajectories, one needs an additional rule that defines which space-time points should be considered instantaneous. The simplest way to achieve this is to introduce a preferred foliation of space-time by hand, such that each hypersurface of the foliation defines a hypersurface of equal time. Initially, it had been considered impossible to set out a description of photon trajectories in the de Broglie–Bohm theory in view of the difficulties of describing bosons relativistically. In 1996, Partha Ghose presented a relativistic quantum-mechanical description of spin-0 and spin-1 bosons starting from the Duffin–Kemmer–Petiau equation, setting out Bohmian trajectories for massive bosons and for massless bosons (and therefore photons). In 2001, Jean-Pierre Vigier emphasized the importance of deriving a well-defined description of light in terms of particle trajectories in the framework of either the Bohmian mechanics or the Nelson stochastic mechanics. The same year, Ghose worked out Bohmian photon trajectories for specific cases. Subsequent weak-measurement experiments yielded trajectories that coincide with the predicted trajectories. The significance of these experimental findings is controversial. Chris Dewdney and G. Horton have proposed a relativistically covariant, wave-functional formulation of Bohm's quantum field theory and have extended it to a form that allows the inclusion of gravity. Nikolić has proposed a Lorentz-covariant formulation of the Bohmian interpretation of many-particle wavefunctions. He has developed a generalized relativistic-invariant probabilistic interpretation of quantum theory, in which $|\psi|^2$ is no longer a probability density in space but a probability density in space-time. He uses this generalized probabilistic interpretation to formulate a relativistic-covariant version of de Broglie–Bohm theory without introducing a preferred foliation of space-time. 
His work also covers the extension of the Bohmian interpretation to a quantization of fields and strings. Roderick I. Sutherland at the University of Sydney has a Lagrangian formalism for the pilot wave and its beables. It draws on Yakir Aharonov's retrocausal weak measurements to explain many-particle entanglement in a special relativistic way without the need for configuration space. The basic idea was already published by Costa de Beauregard in the 1950s and is also used by John Cramer in his transactional interpretation, except for the beables that exist between the von Neumann strong projection operator measurements. Sutherland's Lagrangian includes two-way action-reaction between pilot wave and beables. Therefore, it is a post-quantum non-statistical theory with final boundary conditions that violate the no-signal theorems of quantum theory. Just as special relativity is a limiting case of general relativity when the spacetime curvature vanishes, so, too, is statistical no-entanglement signaling quantum theory with the Born rule a limiting case of the post-quantum action-reaction Lagrangian when the reaction is set to zero and the final boundary condition is integrated out. Spin To incorporate spin, the wavefunction becomes complex-vector-valued. The value space is called spin space; for a spin-1/2 particle, spin space can be taken to be $\mathbb{C}^{2}$. The guiding equation is modified by taking inner products in spin space to reduce the complex vectors to complex numbers. The Schrödinger equation is modified by adding a Pauli spin term: $$i\hbar\frac{\partial\psi}{\partial t} = \left(-\sum_{k=1}^{N}\frac{\hbar^{2}}{2m_{k}}D_{k}^{2} \;-\; \sum_{k=1}^{N}\frac{\mu_{k}}{s_{k}\hbar}\,\hat{\mathbf S}_{k}\cdot\mathbf B(\mathbf q_{k}) \;+\; V\right)\psi,$$ where $m_{k}$, $e_{k}$ and $\mu_{k}$ are the mass, charge and magnetic moment of the k-th particle, $\hat{\mathbf S}_{k}$ is the appropriate spin operator acting in the k-th particle's spin space, $s_{k}$ is the spin quantum number of the k-th particle ($s_{k}=1/2$ for the electron), $\mathbf A$ is the vector potential in $\mathbb{R}^{3}$, $\mathbf B = \nabla\times\mathbf A$ is the magnetic field in $\mathbb{R}^{3}$, $D_{k} = \nabla_{k} - \tfrac{i e_{k}}{\hbar}\mathbf A(\mathbf q_{k})$ is the covariant derivative, involving the vector potential, ascribed to the coordinates of the k-th particle (in SI units), and $\psi$ is the wavefunction defined on the multidimensional configuration space; e.g. a system consisting of two spin-1/2 particles and one spin-1 particle has a wavefunction of the form $\psi:\mathbb{R}^{9}\to\mathbb{C}^{2}\otimes\mathbb{C}^{2}\otimes\mathbb{C}^{3}$, where $\otimes$ is a tensor product, so this spin space is 12-dimensional, and $(\cdot,\cdot)$ is the inner product in spin space: $(\phi,\chi)=\sum_{s}\phi_{s}^{*}\chi_{s}$. Stochastic electrodynamics Stochastic electrodynamics (SED) is an extension of the de Broglie–Bohm interpretation of quantum mechanics, with the electromagnetic zero-point field (ZPF) playing a central role as the guiding pilot-wave. Modern approaches to SED, like those proposed by the group around the late Gerhard Grössing, among others, consider wave and particle-like quantum effects as well-coordinated emergent systems. These emergent systems are the result of speculated and calculated sub-quantum interactions with the zero-point field. Quantum field theory In Dürr et al., the authors describe an extension of de Broglie–Bohm theory for handling creation and annihilation operators, which they refer to as "Bell-type quantum field theories". The basic idea is that configuration space becomes the (disjoint) space of all possible configurations of any number of particles. For part of the time, the system evolves deterministically under the guiding equation with a fixed number of particles. But under a stochastic process, particles may be created and annihilated. The distribution of creation events is dictated by the wavefunction. The wavefunction itself is evolving at all times over the full multi-particle configuration space. 
Hrvoje Nikolić introduces a purely deterministic de Broglie–Bohm theory of particle creation and destruction, according to which particle trajectories are continuous, but particle detectors behave as if particles have been created or destroyed even when a true creation or destruction of particles does not take place. Curved space To extend de Broglie–Bohm theory to curved space (Riemannian manifolds in mathematical parlance), one simply notes that all of the elements of these equations make sense, such as gradients and Laplacians. Thus, we use equations that have the same form as above. Topological and boundary conditions may apply in supplementing the evolution of Schrödinger's equation. For a de Broglie–Bohm theory on curved space with spin, the spin space becomes a vector bundle over configuration space, and the potential in Schrödinger's equation becomes a local self-adjoint operator acting on that space. The field equations for the de Broglie–Bohm theory in the relativistic case with spin can also be given for curved space-times with torsion. In a general spacetime with curvature and torsion, the guiding equation for the four-velocity $u^{i}$ of an elementary fermion particle is $u^{i}\propto e^{i}{}_{a}\,\bar\psi\gamma^{a}\psi$, where the wave function $\psi$ is a spinor, $\bar\psi$ is the corresponding adjoint, $\gamma^{a}$ are the Dirac matrices, and $e^{i}{}_{a}$ is a tetrad. If the wave function propagates according to the curved Dirac equation, then the particle moves according to the Mathisson–Papapetrou equations of motion, which are an extension of the geodesic equation. This relativistic wave-particle duality follows from the conservation laws for the spin tensor and energy-momentum tensor, and also from the covariant Heisenberg picture equation of motion. Exploiting nonlocality De Broglie and Bohm's causal interpretation of quantum mechanics was later extended by Bohm, Vigier, Hiley, Valentini and others to include stochastic properties. Bohm and other physicists, including Valentini, view the Born rule linking $|\psi|^{2}$ to the probability density function as representing not a basic law, but a result of a system having reached quantum equilibrium during the course of the time development under the Schrödinger equation. It can be shown that, once an equilibrium has been reached, the system remains in such equilibrium over the course of its further evolution: this follows from the continuity equation associated with the Schrödinger evolution of $\psi$. It is less straightforward to demonstrate whether and how such an equilibrium is reached in the first place. Antony Valentini has extended de Broglie–Bohm theory to include signal nonlocality that would allow entanglement to be used as a stand-alone communication channel without a secondary classical "key" signal to "unlock" the message encoded in the entanglement. This violates orthodox quantum theory but has the virtue of making the parallel universes of the chaotic inflation theory observable in principle. Unlike in de Broglie–Bohm theory, in Valentini's theory the wavefunction evolution also depends on the ontological variables. This introduces an instability, a feedback loop that pushes the hidden variables out of "sub-quantal heat death". The resulting theory becomes nonlinear and non-unitary. Valentini argues that the laws of quantum mechanics are emergent and form a "quantum equilibrium" that is analogous to thermal equilibrium in classical dynamics, such that other "quantum non-equilibrium" distributions may in principle be observed and exploited, for which the statistical predictions of quantum theory are violated. 
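To make the equilibrium language of this passage concrete, a brief sketch (the symbols are supplied here for illustration; the statement paraphrased is Valentini's subquantum H-theorem, not a quotation from the original): quantum equilibrium means that the configuration-space density equals the Born-rule density,

$$\rho(q,t) \;=\; |\psi(q,t)|^{2},$$

and Valentini's coarse-grained H-function,

$$\bar H(t) \;=\; \int \bar\rho\,\ln\!\frac{\bar\rho}{\overline{|\psi|^{2}}}\;\mathrm{d}q,$$

with bars denoting coarse-graining, is argued to decrease under the de Broglie–Bohm dynamics (under assumptions analogous to molecular chaos), which is the sense in which non-equilibrium distributions relax toward the Born rule.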
It is controversially argued that quantum theory is merely a special case of a much wider nonlinear physics, a physics in which non-local (superluminal) signalling is possible, and in which the uncertainty principle can be violated. Results Below are some highlights of the results that arise out of an analysis of de Broglie–Bohm theory. Experimental results agree with all of quantum mechanics' standard predictions insofar as it has them. But while standard quantum mechanics is limited to discussing the results of "measurements", de Broglie–Bohm theory governs the dynamics of a system without the intervention of outside observers (p. 117 in Bell). The basis for agreement with standard quantum mechanics is that the particles are distributed according to $|\psi|^{2}$. This is a statement of observer ignorance: the initial positions are represented by a statistical distribution, so deterministic trajectories will result in a statistical distribution. Measuring spin and polarization According to ordinary quantum theory, it is not possible to measure the spin or polarization of a particle directly; instead, the component in one direction is measured; the outcome from a single particle may be 1, meaning that the particle is aligned with the measuring apparatus, or −1, meaning that it is aligned the opposite way. An ensemble of particles prepared by a polarizer to be in state 1 will all measure polarized in state 1 in a subsequent apparatus. A polarized ensemble sent through a polarizer set at an angle to the first pass will result in some values of 1 and some of −1 with a probability that depends on the relative alignment. For a full explanation of this, see the Stern–Gerlach experiment. In de Broglie–Bohm theory, the results of a spin experiment cannot be analyzed without some knowledge of the experimental setup. It is possible to modify the setup so that the trajectory of the particle is unaffected, but that the particle with one setup registers as spin-up, while in the other setup it registers as spin-down. Thus, for the de Broglie–Bohm theory, the particle's spin is not an intrinsic property of the particle; instead spin is, so to speak, in the wavefunction of the particle in relation to the particular device being used to measure the spin. This is an illustration of what is sometimes referred to as contextuality and is related to naive realism about operators. Interpretationally, measurement results are a deterministic property of the system and its environment, which includes information about the experimental setup including the context of co-measured observables; in no sense does the system itself possess the property being measured, as would have been the case in classical physics. Measurements, the quantum formalism, and observer independence De Broglie–Bohm theory gives almost the same results as (non-relativistic) quantum mechanics. It treats the wavefunction as a fundamental object in the theory, as the wavefunction describes how the particles move. This means that no experiment can distinguish between the two theories. This section outlines the ideas as to how the standard quantum formalism arises out of de Broglie–Bohm theory. Collapse of the wavefunction De Broglie–Bohm theory is a theory that applies primarily to the whole universe. That is, there is a single wavefunction governing the motion of all of the particles in the universe according to the guiding equation. Theoretically, the motion of one particle depends on the positions of all of the other particles in the universe. 
In some situations, such as in experimental systems, we can represent the system itself in terms of a de Broglie–Bohm theory in which the wavefunction of the system is obtained by conditioning on the environment of the system. Thus, the system can be analyzed with Schrödinger's equation and the guiding equation, with an initial distribution for the particles in the system (see the section on the conditional wavefunction of a subsystem for details). It requires a special setup for the conditional wavefunction of a system to obey a quantum evolution. When a system interacts with its environment, such as through a measurement, the conditional wavefunction of the system evolves in a different way. The evolution of the universal wavefunction can become such that the wavefunction of the system appears to be in a superposition of distinct states. But if the environment has recorded the results of the experiment, then using the actual Bohmian configuration of the environment to condition on, the conditional wavefunction collapses to just one alternative, the one corresponding with the measurement results. Collapse of the universal wavefunction never occurs in de Broglie–Bohm theory. Its entire evolution is governed by Schrödinger's equation, and the particles' evolutions are governed by the guiding equation. Collapse only occurs in a phenomenological way for systems that seem to follow their own Schrödinger's equation. As this is an effective description of the system, it is a matter of choice as to what to define the experimental system to include, and this will affect when "collapse" occurs. Operators as observables In the standard quantum formalism, measuring observables is generally thought of as measuring operators on the Hilbert space. For example, measuring position is considered to be a measurement of the position operator. This relationship between physical measurements and Hilbert space operators is, for standard quantum mechanics, an additional axiom of the theory. The de Broglie–Bohm theory, by contrast, requires no such measurement axioms (and measurement as such is not a dynamically distinct or special sub-category of physical processes in the theory). In particular, the usual operators-as-observables formalism is, for de Broglie–Bohm theory, a theorem. A major point of the analysis is that many of the measurements of the observables do not correspond to properties of the particles; they are (as in the case of spin discussed above) measurements of the wavefunction. In the history of de Broglie–Bohm theory, the proponents have often had to deal with claims that this theory is impossible. Such arguments are generally based on inappropriate analysis of operators as observables. If one believes that spin measurements are indeed measuring the spin of a particle that existed prior to the measurement, then one does reach contradictions. De Broglie–Bohm theory deals with this by noting that spin is not a feature of the particle, but rather that of the wavefunction. As such, it only has a definite outcome once the experimental apparatus is chosen. Once that is taken into account, the impossibility theorems become irrelevant. There are also objections to this theory based on what it says about particular situations usually involving eigenstates of an operator. For example, the ground state of hydrogen is a real wavefunction. According to the guiding equation, this means that the electron is at rest when in this state. 
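To spell out that last inference (a standard observation, stated here for completeness rather than quoted from the original): writing the wavefunction in polar form $\psi = R\,e^{iS/\hbar}$, the guiding equation gives the velocity $\mathbf v = \nabla S/m$; for a wavefunction that is real up to a constant phase, such as the hydrogen ground state, $S$ is constant, so

$$\mathbf v \;=\; \frac{\nabla S}{m} \;=\; 0,$$

which is why the electron is said to be at rest in that state.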
Nevertheless, it is distributed according to $|\psi|^{2}$, and no contradiction with experimental results can be detected. Treating operators as observables leads many to believe that many operators are equivalent. De Broglie–Bohm theory, from this perspective, chooses the position observable as a favored observable rather than, say, the momentum observable. Again, the link to the position observable is a consequence of the dynamics. The motivation for de Broglie–Bohm theory is to describe a system of particles. This implies that the goal of the theory is to describe the positions of those particles at all times. Other observables do not have this compelling ontological status. Having definite positions explains having definite results such as flashes on a detector screen. Other observables would not lead to that conclusion, but there need not be any problem in defining a mathematical theory for other observables; see Hyman et al. for an exploration of the fact that a probability density and probability current can be defined for any set of commuting operators. Hidden variables De Broglie–Bohm theory is often referred to as a "hidden-variable" theory. Bohm used this description in his original papers on the subject, writing: "From the point of view of the usual interpretation, these additional elements or parameters [permitting a detailed causal and continuous description of all processes] could be called 'hidden' variables." Bohm and Hiley later stated that they found Bohm's choice of the term "hidden variables" to be too restrictive. In particular, they argued that a particle is not actually hidden but rather "is what is most directly manifested in an observation [though] its properties cannot be observed with arbitrary precision (within the limits set by uncertainty principle)". However, others nevertheless treat the term "hidden variable" as a suitable description. Generalized particle trajectories can be extrapolated from numerous weak measurements on an ensemble of equally prepared systems, and such trajectories coincide with the de Broglie–Bohm trajectories. In particular, an experiment with two entangled photons, in which a set of Bohmian trajectories for one of the photons was determined using weak measurements and postselection, can be understood in terms of a nonlocal connection between that photon's trajectory and the other photon's polarization. However, not only the de Broglie–Bohm interpretation, but also many other interpretations of quantum mechanics that do not include such trajectories are consistent with such experimental evidence. Different predictions A specialized version of the double slit experiment has been devised to test characteristics of the trajectory predictions. Experimental realization of this concept disagreed with the Bohm predictions where they differed from standard quantum mechanics. These conclusions have been the subject of debate. Heisenberg's uncertainty principle Heisenberg's uncertainty principle states that when two complementary measurements are made, there is a limit to the product of their accuracy. As an example, if one measures the position with an accuracy of $\Delta x$ and the momentum with an accuracy of $\Delta p$, then $\Delta x\,\Delta p \geq \frac{\hbar}{2}.$ In de Broglie–Bohm theory, there is always a matter of fact about the position and momentum of a particle. Each particle has a well-defined trajectory, as well as a wavefunction. Observers have limited knowledge as to what this trajectory is (and thus of the position and momentum). 
It is the lack of knowledge of the particle's trajectory that accounts for the uncertainty relation. What one can know about a particle at any given time is described by the wavefunction. Since the uncertainty relation can be derived from the wavefunction in other interpretations of quantum mechanics, it can be likewise derived (in the epistemic sense mentioned above) on the de Broglie–Bohm theory. To put the statement differently, the particles' positions are only known statistically. As in classical mechanics, successive observations of the particles' positions refine the experimenter's knowledge of the particles' initial conditions. Thus, with succeeding observations, the initial conditions become more and more restricted. This formalism is consistent with the normal use of the Schrödinger equation. For the derivation of the uncertainty relation, see Heisenberg uncertainty principle, noting that this article describes the principle from the viewpoint of the Copenhagen interpretation. Quantum entanglement, Einstein–Podolsky–Rosen paradox, Bell's theorem, and nonlocality De Broglie–Bohm theory highlighted the issue of nonlocality: it inspired John Stewart Bell to prove his now-famous theorem, which in turn led to the Bell test experiments. In the Einstein–Podolsky–Rosen paradox, the authors describe a thought experiment that one could perform on a pair of particles that have interacted, the results of which they interpreted as indicating that quantum mechanics is an incomplete theory. Decades later John Bell proved Bell's theorem (see p. 14 in Bell), in which he showed that, if they are to agree with the empirical predictions of quantum mechanics, all such "hidden-variable" completions of quantum mechanics must either be nonlocal (as the Bohm interpretation is) or give up the assumption that experiments produce unique results (see counterfactual definiteness and many-worlds interpretation). In particular, Bell proved that any local theory with unique results must make empirical predictions satisfying a statistical constraint called "Bell's inequality". Alain Aspect performed a series of Bell test experiments that test Bell's inequality using an EPR-type setup. Aspect's results show experimentally that Bell's inequality is in fact violated, meaning that the relevant quantum-mechanical predictions are correct. In these Bell test experiments, entangled pairs of particles are created; the particles are separated, traveling to remote measuring apparatus. The orientation of the measuring apparatus can be changed while the particles are in flight, demonstrating the apparent nonlocality of the effect. The de Broglie–Bohm theory makes the same (empirically correct) predictions for the Bell test experiments as ordinary quantum mechanics. It is able to do this because it is manifestly nonlocal. It is often criticized or rejected based on this; Bell's attitude was: "It is a merit of the de Broglie–Bohm version to bring this [nonlocality] out so explicitly that it cannot be ignored." The de Broglie–Bohm theory describes the physics in the Bell test experiments as follows: to understand the evolution of the particles, we need to set up a wave equation for both particles; the orientation of the apparatus affects the wavefunction. The particles in the experiment follow the guidance of the wavefunction. It is the wavefunction that carries the faster-than-light effect of changing the orientation of the apparatus. 
Maudlin provides an analysis of exactly what kind of nonlocality is present and how it is compatible with relativity. Bell has shown that the nonlocality does not allow superluminal communication. Maudlin has shown this in greater detail. Classical limit Bohm's formulation of de Broglie–Bohm theory in a classical-looking version has the merits that the emergence of classical behavior seems to follow immediately for any situation in which the quantum potential is negligible, as noted by Bohm in 1952. Modern methods of decoherence are relevant to an analysis of this limit. See Allori et al. for steps towards a rigorous analysis. Quantum trajectory method Work by Robert E. Wyatt in the early 2000s attempted to use the Bohm "particles" as an adaptive mesh that follows the actual trajectory of a quantum state in time and space. In the "quantum trajectory" method, one samples the quantum wavefunction with a mesh of quadrature points. One then evolves the quadrature points in time according to the Bohm equations of motion. At each time step, one then re-synthesizes the wavefunction from the points, recomputes the quantum forces, and continues the calculation. (QuickTime movies of this for H + H2 reactive scattering can be found on the Wyatt group web-site at UT Austin.) This approach has been adapted, extended, and used by a number of researchers in the chemical physics community as a way to compute semi-classical and quasi-classical molecular dynamics. A 2007 issue of The Journal of Physical Chemistry A was dedicated to Prof. Wyatt and his work on "computational Bohmian dynamics". Eric R. Bittner's group at the University of Houston has advanced a statistical variant of this approach that uses Bayesian sampling technique to sample the quantum density and compute the quantum potential on a structureless mesh of points. This technique was recently used to estimate quantum effects in the heat capacity of small clusters Nen for n ≈ 100. There remain difficulties using the Bohmian approach, mostly associated with the formation of singularities in the quantum potential due to nodes in the quantum wavefunction. In general, nodes forming due to interference effects lead to the case where This results in an infinite force on the sample particles forcing them to move away from the node and often crossing the path of other sample points (which violates single-valuedness). Various schemes have been developed to overcome this; however, no general solution has yet emerged. These methods, as does Bohm's Hamilton–Jacobi formulation, do not apply to situations in which the full dynamics of spin need to be taken into account. The properties of trajectories in the de Broglie–Bohm theory differ significantly from the Moyal quantum trajectories as well as the quantum trajectories from the unraveling of an open quantum system. Similarities with the many-worlds interpretation Kim Joris Boström has proposed a non-relativistic quantum mechanical theory that combines elements of de Broglie-Bohm mechanics and Everett's many-worlds. In particular, the unreal many-worlds interpretation of Hawking and Weinberg is similar to the Bohmian concept of unreal empty branch worlds: Many authors have expressed critical views of de Broglie–Bohm theory by comparing it to Everett's many-worlds approach. Many (but not all) proponents of de Broglie–Bohm theory (such as Bohm and Bell) interpret the universal wavefunction as physically real. 
According to some supporters of Everett's theory, if the (never collapsing) wavefunction is taken to be physically real, then it is natural to interpret the theory as having the same many worlds as Everett's theory. In the Everettian view the role of the Bohmian particle is to act as a "pointer", tagging, or selecting, just one branch of the universal wavefunction (the assumption that this branch indicates which wave packet determines the observed result of a given experiment is called the "result assumption"); the other branches are designated "empty" and implicitly assumed by Bohm to be devoid of conscious observers. H. Dieter Zeh comments on these "empty" branches: David Deutsch has expressed the same point more "acerbically": This conclusion has been challenged by Detlef Dürr and Justin Lazarovici: The Bohmian, of course, cannot accept this argument. For her, it is decidedly the particle configuration in three-dimensional space and not the wave function on the abstract configuration space that constitutes a world (or rather, the world). Instead, she will accuse the Everettian of not having local beables (in Bell's sense) in her theory, that is, the ontological variables that refer to localized entities in three-dimensional space or four-dimensional spacetime. The many worlds of her theory thus merely appear as a grotesque consequence of this omission. Occam's-razor criticism Both Hugh Everett III and Bohm treated the wavefunction as a physically real field. Everett's many-worlds interpretation is an attempt to demonstrate that the wavefunction alone is sufficient to account for all our observations. When we see the particle detectors flash or hear the click of a Geiger counter, Everett's theory interprets this as our wavefunction responding to changes in the detector's wavefunction, which is responding in turn to the passage of another wavefunction (which we think of as a "particle", but is actually just another wave packet). No particle (in the Bohm sense of having a defined position and velocity) exists according to that theory. For this reason Everett sometimes referred to his own many-worlds approach as the "pure wave theory". Of Bohm's 1952 approach, Everett said: In the Everettian view, then, the Bohm particles are superfluous entities, similar to, and equally as unnecessary as, for example, the luminiferous ether, which was found to be unnecessary in special relativity. This argument is sometimes called the "redundancy argument", since the superfluous particles are redundant in the sense of Occam's razor. According to Brown & Wallace, the de Broglie–Bohm particles play no role in the solution of the measurement problem. For these authors, the "result assumption" (see above) is inconsistent with the view that there is no measurement problem in the predictable outcome (i.e. single-outcome) case. They also say that a standard tacit assumption of de Broglie–Bohm theory (that an observer becomes aware of configurations of particles of ordinary objects by means of correlations between such configurations and the configuration of the particles in the observer's brain) is unreasonable. This conclusion has been challenged by Valentini, who argues that the entirety of such objections arises from a failure to interpret de Broglie–Bohm theory on its own terms. According to Peter R. Holland, in a wider Hamiltonian framework, theories can be formulated in which particles do act back on the wave function. Derivations De Broglie–Bohm theory has been derived many times and in many ways. 
Below are six derivations, all of which are very different and lead to different ways of understanding and extending this theory. Schrödinger's equation can be derived by using Einstein's light quanta hypothesis: $E = \hbar\omega$ and de Broglie's hypothesis: $\mathbf p = \hbar\mathbf k$. The guiding equation can be derived in a similar fashion. We assume a plane wave: $\psi(\mathbf x,t) = A e^{i(\mathbf k\cdot\mathbf x - \omega t)}$. Notice that $i\mathbf k = \nabla\psi/\psi$. Assuming that $\mathbf p = m\mathbf v$ for the particle's actual velocity, we have that $\mathbf v = \frac{\hbar}{m}\operatorname{Im}\!\left(\frac{\nabla\psi}{\psi}\right)$. Thus, we have the guiding equation. Notice that this derivation does not use Schrödinger's equation. Preserving the density under the time evolution is another method of derivation. This is the method that Bell cites. It is this method that generalizes to many possible alternative theories. The starting point is the continuity equation $\frac{\partial\rho}{\partial t} + \nabla\cdot(\rho\mathbf v) = 0$ for the density $\rho = |\psi|^{2}$. This equation describes a probability flow along a current. We take the velocity field associated with this current as the velocity field whose integral curves yield the motion of the particle. A method applicable for particles without spin is to do a polar decomposition of the wavefunction and transform Schrödinger's equation into two coupled equations: the continuity equation from above and the Hamilton–Jacobi equation. This is the method used by Bohm in 1952. The decomposition and equations are as follows: Decomposition: $\psi(\mathbf x,t) = R(\mathbf x,t)\,e^{iS(\mathbf x,t)/\hbar}$. Note that $R^{2}$ corresponds to the probability density $\rho = |\psi|^{2}$. Continuity equation: $\frac{\partial\rho}{\partial t} + \nabla\cdot\!\left(\rho\,\frac{\nabla S}{m}\right) = 0$. Hamilton–Jacobi equation: $\frac{\partial S}{\partial t} + \frac{|\nabla S|^{2}}{2m} + V - \frac{\hbar^{2}}{2m}\frac{\nabla^{2}R}{R} = 0.$ The Hamilton–Jacobi equation is the equation derived from a Newtonian system with potential $V - \frac{\hbar^{2}}{2m}\frac{\nabla^{2}R}{R}$ and velocity field $\frac{\nabla S}{m}$. The potential $V$ is the classical potential that appears in Schrödinger's equation, and the other term involving $R$ is the quantum potential, terminology introduced by Bohm. This leads to viewing the quantum theory as particles moving under the classical force modified by a quantum force. However, unlike standard Newtonian mechanics, the initial velocity field is already specified by $\frac{\nabla S}{m}$, which is a symptom of this being a first-order theory, not a second-order theory. A fourth derivation was given by Dürr et al. In their derivation, they derive the velocity field by demanding the appropriate transformation properties given by the various symmetries that Schrödinger's equation satisfies, once the wavefunction is suitably transformed. The guiding equation is what emerges from that analysis. A fifth derivation, given by Dürr et al. is appropriate for generalization to quantum field theory and the Dirac equation. The idea is that a velocity field can also be understood as a first-order differential operator acting on functions. Thus, if we know how it acts on functions, we know what it is. Then given the Hamiltonian operator $H$, the equation to satisfy for all functions $f$ (with associated multiplication operator $\hat f$) is $v(f) = \operatorname{Re}\frac{(\psi,\,\frac{i}{\hbar}[H,\hat f]\,\psi)}{(\psi,\psi)}$, where $(\cdot,\cdot)$ is the local Hermitian inner product on the value space of the wavefunction. This formulation allows for stochastic theories such as the creation and annihilation of particles. A further derivation has been given by Peter R. Holland, on which he bases his quantum-physics textbook The Quantum Theory of Motion. It is based on three basic postulates and an additional fourth postulate that links the wavefunction to measurement probabilities: A physical system consists in a spatiotemporally propagating wave and a point particle guided by it. The wave is described mathematically by a solution to Schrödinger's wave equation. 
The particle motion is described by a solution to $\dot{\mathbf x}(t) = \frac{1}{m}\nabla S(\mathbf x,t)\big|_{\mathbf x = \mathbf x(t)}$ in dependence on the initial condition $\mathbf x(t=0) = \mathbf x_{0}$, with $S$ the phase of the wave $\psi$. The fourth postulate is subsidiary yet consistent with the first three: The probability to find the particle in the differential volume $\mathrm d^{3}x$ at time t equals $|\psi(\mathbf x,t)|^{2}\,\mathrm d^{3}x$. History The theory was historically developed in the 1920s by de Broglie, who, in 1927, was persuaded to abandon it in favour of the then-mainstream Copenhagen interpretation. David Bohm, dissatisfied with the prevailing orthodoxy, rediscovered de Broglie's pilot-wave theory in 1952. Bohm's suggestions were not then widely received, partly due to reasons unrelated to their content, such as Bohm's youthful communist affiliations. The de Broglie–Bohm theory was widely deemed unacceptable by mainstream theorists, mostly because of its explicit non-locality. On the theory, John Stewart Bell, author of the 1964 Bell's theorem, wrote in 1982: Since the 1990s, there has been renewed interest in formulating extensions to de Broglie–Bohm theory, attempting to reconcile it with special relativity and quantum field theory, besides other features such as spin or curved spatial geometries. De Broglie–Bohm theory has a history of different formulations and names. In this section, each stage is given a name and a main reference. Pilot-wave theory Louis de Broglie presented his pilot wave theory at the 1927 Solvay Conference, after close collaboration with Schrödinger, who developed his wave equation for de Broglie's theory. At the end of the presentation, Wolfgang Pauli pointed out that it was not compatible with a semi-classical technique Fermi had previously adopted in the case of inelastic scattering. Contrary to a popular legend, de Broglie actually gave the correct rebuttal that the particular technique could not be generalized for Pauli's purpose, although the audience might have been lost in the technical details and de Broglie's mild manner left the impression that Pauli's objection was valid. He was eventually persuaded to abandon this theory nonetheless because he was "discouraged by criticisms which [it] roused". De Broglie's theory already applies to multiple spin-less particles, but lacks an adequate theory of measurement as no one understood quantum decoherence at the time. An analysis of de Broglie's presentation is given in Bacciagaluppi et al. Also, in 1932 John von Neumann published a paper that was widely (and erroneously, as shown by Jeffrey Bub) believed to prove that all hidden-variable theories are impossible. This sealed the fate of de Broglie's theory for the next two decades. In 1926, Erwin Madelung had developed a hydrodynamic version of Schrödinger's equation, which is incorrectly considered as a basis for the density current derivation of the de Broglie–Bohm theory. The Madelung equations, being quantum Euler equations (fluid dynamics), differ philosophically from the de Broglie–Bohm mechanics and are the basis of the stochastic interpretation of quantum mechanics. Peter R. Holland has pointed out that, earlier in 1927, Einstein had actually submitted a preprint with a similar proposal but, not convinced, had withdrawn it before publication. 
According to Holland, failure to appreciate key points of the de Broglie–Bohm theory has led to confusion, the key point being "that the trajectories of a many-body quantum system are correlated not because the particles exert a direct force on one another (à la Coulomb) but because all are acted upon by an entity – mathematically described by the wavefunction or functions of it – that lies beyond them". This entity is the quantum potential. After publishing a popular textbook on Quantum Mechanics that adhered entirely to the Copenhagen orthodoxy, Bohm was persuaded by Einstein to take a critical look at von Neumann's theorem. The result was 'A Suggested Interpretation of the Quantum Theory in Terms of "Hidden Variables" I and II' [Bohm 1952]. It was an independent origination of the pilot wave theory, and extended it to incorporate a consistent theory of measurement, and to address a criticism of Pauli that de Broglie did not properly respond to; it is taken to be deterministic (though Bohm hinted in the original papers that there should be disturbances to this, in the way Brownian motion disturbs Newtonian mechanics). This stage is known as the de Broglie–Bohm Theory in Bell's work [Bell 1987] and is the basis for 'The Quantum Theory of Motion' [Holland 1993]. This stage applies to multiple particles, and is deterministic. The de Broglie–Bohm theory is an example of a hidden-variables theory. Bohm originally hoped that hidden variables could provide a local, causal, objective description that would resolve or eliminate many of the paradoxes of quantum mechanics, such as Schrödinger's cat, the measurement problem and the collapse of the wavefunction. However, Bell's theorem complicates this hope, as it demonstrates that there can be no local hidden-variable theory that is compatible with the predictions of quantum mechanics. The Bohmian interpretation is causal but not local. Bohm's paper was largely ignored or panned by other physicists. Albert Einstein, who had suggested that Bohm search for a realist alternative to the prevailing Copenhagen approach, did not consider Bohm's interpretation to be a satisfactory answer to the quantum nonlocality question, calling it "too cheap", while Werner Heisenberg considered it a "superfluous 'ideological superstructure' ". Wolfgang Pauli, who had been unconvinced by de Broglie in 1927, conceded to Bohm as follows: I just received your long letter of 20th November, and I also have studied more thoroughly the details of your paper. I do not see any longer the possibility of any logical contradiction as long as your results agree completely with those of the usual wave mechanics and as long as no means is given to measure the values of your hidden parameters both in the measuring apparatus and in the observe [sic] system. As far as the whole matter stands now, your 'extra wave-mechanical predictions' are still a check, which cannot be cashed. He subsequently described Bohm's theory as "artificial metaphysics". According to physicist Max Dresden, when Bohm's theory was presented at the Institute for Advanced Study in Princeton, many of the objections were ad hominem, focusing on Bohm's sympathy with communists as exemplified by his refusal to give testimony to the House Un-American Activities Committee. In 1979, Chris Philippidis, Chris Dewdney and Basil Hiley were the first to perform numeric computations on the basis of the quantum potential to deduce ensembles of particle trajectories. 
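As an illustration of what such trajectory computations involve, here is a minimal, self-contained Python sketch (an illustrative toy, not the 1979 calculation): it evolves a superposition of two Gaussian packets, a crude one-dimensional stand-in for the double slit, with a split-step Fourier method, and advects an ensemble of Bohmian sample points along the guidance velocity $v = (\hbar/m)\,\operatorname{Im}(\partial_x\psi/\psi)$. All numerical parameters are illustrative choices.

```python
import numpy as np

# Toy Bohmian-trajectory computation (illustrative only): evolve psi on a grid
# with a split-step Fourier method and move sample points with the guidance
# velocity v = (hbar/m) * Im( dpsi/dx / psi ).  Units with hbar = m = 1.

hbar = m = 1.0
N, L = 4096, 400.0
x = np.linspace(-L/2, L/2, N, endpoint=False)
dx = x[1] - x[0]
k = 2*np.pi*np.fft.fftfreq(N, d=dx)

# superposition of two Gaussian packets ("slits") centred at +/- d
sigma, d = 2.0, 10.0
psi = np.exp(-(x - d)**2/(4*sigma**2)) + np.exp(-(x + d)**2/(4*sigma**2))
psi = psi.astype(complex)
psi /= np.sqrt(np.sum(np.abs(psi)**2)*dx)      # normalise on the grid

dt, nsteps = 0.05, 800
kin_half = np.exp(-1j*hbar*k**2*dt/(4*m))      # half kinetic step (free evolution)

rng = np.random.default_rng(1)
prob = np.abs(psi)**2
prob /= prob.sum()
particles = rng.choice(x, size=30, p=prob)     # ensemble drawn from |psi|^2 at t = 0

trajectories = [particles.copy()]
for _ in range(nsteps):
    # two half kinetic steps = one full free step (no external potential here)
    psi = np.fft.ifft(kin_half*np.fft.fft(psi))
    psi = np.fft.ifft(kin_half*np.fft.fft(psi))
    # guidance velocity on the grid via a spectral derivative
    dpsi = np.fft.ifft(1j*k*np.fft.fft(psi))
    v = (hbar/m)*np.imag(dpsi/psi)
    particles = particles + dt*np.interp(particles, x, v)   # explicit Euler step
    trajectories.append(particles.copy())

traj = np.array(trajectories)                  # shape (nsteps+1, 30)
print(traj.shape, traj[-1].round(2))
```

Plotting the rows of trajectories against time gives the familiar picture of non-crossing paths that bunch into interference fringes; near nodes of psi the velocity field is numerically noisy, which is the same node problem noted for the quantum trajectory method above.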
Their work renewed the interests of physicists in the Bohm interpretation of quantum physics. Eventually John Bell began to defend the theory. In "Speakable and Unspeakable in Quantum Mechanics" [Bell 1987], several of the papers refer to hidden-variables theories (which include Bohm's). The trajectories of the Bohm model that would result for particular experimental arrangements were termed "surreal" by some. Still in 2016, mathematical physicist Sheldon Goldstein said of Bohm's theory: "There was a time when you couldn't even talk about it because it was heretical. It probably still is the kiss of death for a physics career to be actually working on Bohm, but maybe that's changing." Bohmian mechanics Bohmian mechanics is the same theory, but with an emphasis on the notion of current flow, which is determined on the basis of the quantum equilibrium hypothesis that the probability follows the Born rule. The term "Bohmian mechanics" is also often used to include most of the further extensions past the spin-less version of Bohm. While de Broglie–Bohm theory has Lagrangians and Hamilton-Jacobi equations as a primary focus and backdrop, with the icon of the quantum potential, Bohmian mechanics considers the continuity equation as primary and has the guiding equation as its icon. They are mathematically equivalent in so far as the Hamilton-Jacobi formulation applies, i.e., spin-less particles. All of non-relativistic quantum mechanics can be fully accounted for in this theory. Recent studies have used this formalism to compute the evolution of many-body quantum systems, with a considerable increase in speed as compared to other quantum-based methods. Causal interpretation and ontological interpretation Bohm developed his original ideas, calling them the Causal Interpretation. Later he felt that causal sounded too much like deterministic and preferred to call his theory the Ontological Interpretation. The main reference is "The Undivided Universe" (Bohm, Hiley 1993). This stage covers work by Bohm and in collaboration with Jean-Pierre Vigier and Basil Hiley. Bohm is clear that this theory is non-deterministic (the work with Hiley includes a stochastic theory). As such, this theory is not strictly speaking a formulation of de Broglie–Bohm theory, but it deserves mention here because the term "Bohm Interpretation" is ambiguous between this theory and de Broglie–Bohm theory. In 1996 philosopher of science Arthur Fine gave an in-depth analysis of possible interpretations of Bohm's model of 1952. William Simpson has suggested a hylomorphic interpretation of Bohmian mechanics, in which the cosmos is an Aristotelian substance composed of material particles and a substantial form. The wave function is assigned a dispositional role in choreographing the trajectories of the particles. Hydrodynamic quantum analogs Experiments on hydrodynamical analogs of quantum mechanics beginning with the work of Couder and Fort (2006) have purported to show that macroscopic classical pilot-waves can exhibit characteristics previously thought to be restricted to the quantum realm. Hydrodynamic pilot-wave analogs have been claimed to duplicate the double slit experiment, tunneling, quantized orbits, and numerous other quantum phenomena which have led to a resurgence in interest in pilot wave theories. The analogs have been compared to the Faraday wave. These results have been disputed: experiments fail to reproduce aspects of the double-slit experiments. 
High precision measurements in the tunneling case point to a different origin of the unpredictable crossing: rather than initial position uncertainty or environmental noise, interactions at the barrier seem to be involved. Another classical analog has been reported in surface gravity waves. Surrealistic trajectories In 1992, Englert, Scully, Sussman, and Walther proposed experiments that would show particles taking paths that differ from the Bohm trajectories. They described the Bohm trajectories as "surrealistic"; their proposal was later referred to as ESSW after the last names of the authors. In 2016, Mahler et al. verified the ESSW predictions. However, they propose that the surrealistic effect is a consequence of the nonlocality inherent in Bohm's theory. See also Madelung equations Local hidden-variable theory Superfluid vacuum theory Fluid analogs in quantum mechanics Probability current Notes References Sources (Demonstrates incompleteness of the Bohm interpretation in the face of fractal, differentiable-nowhere wavefunctions.) (Describes a Bohmian resolution to the dilemma posed by non-differentiable wavefunctions.) Bohmian mechanics on arxiv.org Further reading John S. Bell: Speakable and Unspeakable in Quantum Mechanics: Collected Papers on Quantum Philosophy, Cambridge University Press, 2004, David Bohm, Basil Hiley: The Undivided Universe: An Ontological Interpretation of Quantum Theory, Routledge Chapman & Hall, 1993, Detlef Dürr, Sheldon Goldstein, Nino Zanghì: Quantum Physics Without Quantum Philosophy, Springer, 2012, Detlef Dürr, Stefan Teufel: Bohmian Mechanics: The Physics and Mathematics of Quantum Theory, Springer, 2009, Peter R. Holland: The Quantum Theory of Motion, Cambridge University Press, 1993 (re-printed 2000, transferred to digital printing 2004), External links "Pilot-Wave Hydrodynamics" Bush, J. W. M., Annual Review of Fluid Mechanics, 2015 "Bohmian Mechanics" (Stanford Encyclopedia of Philosophy) "Bohmian-Mechanics.net", the homepage of the international research network on Bohmian Mechanics that was started by D. Dürr, S. Goldstein and N. Zanghì. Workgroup Bohmian Mechanics at LMU Munich (D. Dürr) Bohmian Mechanics Group at University of Innsbruck (G. Grübl) "Pilot waves, Bohmian metaphysics, and the foundations of quantum mechanics", lecture course on de Broglie-Bohm theory by Mike Towler, Cambridge University. "21st-century directions in de Broglie-Bohm theory and beyond", August 2010 international conference on de Broglie-Bohm theory. Site contains slides for all the talks – the latest cutting-edge deBB research. "Observing the Trajectories of a Single Photon Using Weak Measurement" "Bohmian trajectories are no longer 'hidden variables'" The David Bohm Society De Broglie–Bohm theory inspired visualization of atomic orbitals. Interpretations of quantum mechanics Quantum measurement
Rayleigh number
In fluid mechanics, the Rayleigh number (Ra, after Lord Rayleigh) for a fluid is a dimensionless number associated with buoyancy-driven flow, also known as free (or natural) convection. It characterises the fluid's flow regime: a value in a certain lower range denotes laminar flow; a value in a higher range, turbulent flow. Below a certain critical value, there is no fluid motion and heat transfer is by conduction rather than convection. For most engineering purposes, the Rayleigh number is large, somewhere around 10^6 to 10^8. The Rayleigh number is defined as the product of the Grashof number, which describes the relationship between buoyancy and viscosity within a fluid, and the Prandtl number, which describes the relationship between momentum diffusivity and thermal diffusivity: $\mathrm{Ra} = \mathrm{Gr}\cdot\mathrm{Pr}$. Hence it may also be viewed as the ratio of buoyancy and viscosity forces multiplied by the ratio of momentum and thermal diffusivities: $\mathrm{Ra} = \frac{g\beta\Delta T L^{3}}{\nu^{2}}\cdot\frac{\nu}{\alpha} = \frac{g\beta\Delta T L^{3}}{\nu\alpha}$. It is closely related to the Nusselt number. Derivation The Rayleigh number describes the behaviour of fluids (such as water or air) when the mass density of the fluid is non-uniform. The mass density differences are usually caused by temperature differences. Typically a fluid expands and becomes less dense as it is heated. Gravity causes denser parts of the fluid to sink, which is called convection. Lord Rayleigh studied the case of Rayleigh–Bénard convection. When the Rayleigh number, Ra, is below a critical value for a fluid, there is no flow and heat transfer is purely by conduction; when it exceeds that value, heat is transferred by natural convection. When the mass density difference is caused by temperature difference, Ra is, by definition, the ratio of the time scale for diffusive thermal transport to the time scale for convective thermal transport at speed $u$: $\mathrm{Ra} = \frac{l^{2}/\alpha}{l/u} = \frac{u\,l}{\alpha}.$ This means the Rayleigh number is a type of Péclet number. For a volume of fluid of size $l$ in all three dimensions and mass density difference $\Delta\rho$, the force due to gravity is of the order $\Delta\rho\, l^{3} g$, where $g$ is acceleration due to gravity. From the Stokes equation, when the volume of fluid is sinking, viscous drag is of the order $\eta l u$, where $\eta$ is the dynamic viscosity of the fluid. When these two forces are equated, the speed $u \sim \Delta\rho\, l^{2} g/\eta$. Thus the time scale for transport via flow is $l/u \sim \eta/(\Delta\rho\, l\, g)$. The time scale for thermal diffusion across a distance $l$ is $l^{2}/\alpha$, where $\alpha$ is the thermal diffusivity. Thus the Rayleigh number Ra is $\mathrm{Ra} = \frac{l^{2}/\alpha}{l/u} = \frac{\Delta\rho\, l^{3} g}{\eta\alpha} = \frac{\rho\beta\Delta T\, l^{3} g}{\eta\alpha} = \frac{g\beta\Delta T\, l^{3}}{\nu\alpha},$ where we approximated the density difference $\Delta\rho = \rho\beta\Delta T$ for a fluid of average mass density $\rho$, thermal expansion coefficient $\beta$ and a temperature difference $\Delta T$ across distance $l$. The Rayleigh number can be written as the product of the Grashof number and the Prandtl number: $\mathrm{Ra} = \mathrm{Gr}\,\mathrm{Pr}.$ Classical definition For free convection near a vertical wall, the Rayleigh number is defined as: $\mathrm{Ra}_{x} = \frac{g\beta}{\nu\alpha}\,(T_{s} - T_{\infty})\,x^{3} = \mathrm{Gr}_{x}\,\mathrm{Pr},$ where: x is the characteristic length; Ra_x is the Rayleigh number for characteristic length x; g is acceleration due to gravity; β is the thermal expansion coefficient (equal to 1/T for ideal gases, where T is absolute temperature); ν is the kinematic viscosity; α is the thermal diffusivity; T_s is the surface temperature; T_∞ is the quiescent temperature (fluid temperature far from the surface of the object); Gr_x is the Grashof number for characteristic length x; Pr is the Prandtl number. In the above, the fluid properties Pr, ν, α and β are evaluated at the film temperature, which is defined as: $T_{f} = \frac{T_{s} + T_{\infty}}{2}.$ For a uniform wall heating flux, the modified Rayleigh number is defined as: $\mathrm{Ra}^{*}_{x} = \frac{g\beta q''_{o}\,x^{4}}{\nu\alpha k},$ where: q″_o is the uniform surface heat flux and k is the thermal conductivity. 
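As a quick worked example of the classical definition above, the following Python sketch evaluates Ra_x for free convection of air next to a heated vertical wall. The property values are rough, illustrative numbers for air near room temperature (assumptions for illustration, not data from the text).

```python
# Rayleigh number for free convection of air at a heated vertical wall.
# Property values below are rough illustrative numbers for air near 300 K.

g = 9.81          # gravitational acceleration, m/s^2
T_s = 320.0       # surface temperature, K
T_inf = 300.0     # quiescent (far-field) temperature, K
T_f = 0.5*(T_s + T_inf)      # film temperature, K

beta = 1.0/T_f    # thermal expansion coefficient of an ideal gas, 1/K
nu = 1.6e-5       # kinematic viscosity, m^2/s  (approximate, air)
alpha = 2.2e-5    # thermal diffusivity, m^2/s  (approximate, air)
x = 0.5           # characteristic length (wall height), m

Ra_x = g*beta*(T_s - T_inf)*x**3/(nu*alpha)
Pr = nu/alpha
Gr_x = Ra_x/Pr    # consistency check: Ra = Gr * Pr

print(f"Ra_x = {Ra_x:.3e}, Gr_x = {Gr_x:.3e}, Pr = {Pr:.2f}")
```

With these illustrative numbers Ra_x comes out of order 10^8, below the value of roughly 10^9 usually quoted for transition to turbulence on a vertical plate, so the boundary layer would still be expected to be laminar.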
Other applications Solidifying alloys The Rayleigh number can also be used as a criterion to predict convectional instabilities, such as A-segregates, in the mushy zone of a solidifying alloy. The mushy zone Rayleigh number is defined as: where: K is the mean permeability (of the initial portion of the mush) L is the characteristic length scale α is the thermal diffusivity ν is the kinematic viscosity R is the solidification or isotherm speed. A-segregates are predicted to form when the Rayleigh number exceeds a certain critical value. This critical value is independent of the composition of the alloy, and this is the main advantage of the Rayleigh number criterion over other criteria for prediction of convectional instabilities, such as Suzuki criterion. Torabi Rad et al. showed that for steel alloys the critical Rayleigh number is 17. Pickering et al. explored Torabi Rad's criterion, and further verified its effectiveness. Critical Rayleigh numbers for lead–tin and nickel-based super-alloys were also developed. Porous media The Rayleigh number above is for convection in a bulk fluid such as air or water, but convection can also occur when the fluid is inside and fills a porous medium, such as porous rock saturated with water. Then the Rayleigh number, sometimes called the Rayleigh-Darcy number, is different. In a bulk fluid, i.e., not in a porous medium, from the Stokes equation, the falling speed of a domain of size of liquid . In porous medium, this expression is replaced by that from Darcy's law , with the permeability of the porous medium. The Rayleigh or Rayleigh-Darcy number is then This also applies to A-segregates, in the mushy zone of a solidifying alloy. Geophysical applications In geophysics, the Rayleigh number is of fundamental importance: it indicates the presence and strength of convection within a fluid body such as the Earth's mantle. The mantle is a solid that behaves as a fluid over geological time scales. The Rayleigh number for the Earth's mantle due to internal heating alone, RaH, is given by: where: H is the rate of radiogenic heat production per unit mass η is the dynamic viscosity k is the thermal conductivity D is the depth of the mantle. A Rayleigh number for bottom heating of the mantle from the core, RaT, can also be defined as: where: ΔTsa is the superadiabatic temperature difference (the superadiabatic temperature difference is the actual temperature difference minus the temperature difference in a fluid whose entropy gradient is zero, but has the same profile of the other variables appearing in the equation of state) between the reference mantle temperature and the core–mantle boundary CP is the specific heat capacity at constant pressure. High values for the Earth's mantle indicates that convection within the Earth is vigorous and time-varying, and that convection is responsible for almost all the heat transported from the deep interior to the surface. See also Grashof number Prandtl number Reynolds number Péclet number Nusselt number Rayleigh–Bénard convection Notes References External links Rayleigh number calculator Convection Dimensionless numbers of fluid mechanics Dimensionless numbers of thermodynamics Fluid dynamics Dimensionless numbers
Quantum field theory
In theoretical physics, quantum field theory (QFT) is a theoretical framework that combines classical field theory, special relativity, and quantum mechanics. QFT is used in particle physics to construct physical models of subatomic particles and in condensed matter physics to construct models of quasiparticles. The current standard model of particle physics is based on quantum field theory. History Quantum field theory emerged from the work of generations of theoretical physicists spanning much of the 20th century. Its development began in the 1920s with the description of interactions between light and electrons, culminating in the first quantum field theory—quantum electrodynamics. A major theoretical obstacle soon followed with the appearance and persistence of various infinities in perturbative calculations, a problem only resolved in the 1950s with the invention of the renormalization procedure. A second major barrier came with QFT's apparent inability to describe the weak and strong interactions, to the point where some theorists called for the abandonment of the field theoretic approach. The development of gauge theory and the completion of the Standard Model in the 1970s led to a renaissance of quantum field theory. Theoretical background Quantum field theory results from the combination of classical field theory, quantum mechanics, and special relativity. A brief overview of these theoretical precursors follows. The earliest successful classical field theory is one that emerged from Newton's law of universal gravitation, despite the complete absence of the concept of fields from his 1687 treatise Philosophiæ Naturalis Principia Mathematica. The force of gravity as described by Isaac Newton is an "action at a distance"—its effects on faraway objects are instantaneous, no matter the distance. In an exchange of letters with Richard Bentley, however, Newton stated that "it is inconceivable that inanimate brute matter should, without the mediation of something else which is not material, operate upon and affect other matter without mutual contact". It was not until the 18th century that mathematical physicists discovered a convenient description of gravity based on fields—a numerical quantity (a vector in the case of gravitational field) assigned to every point in space indicating the action of gravity on any particle at that point. However, this was considered merely a mathematical trick. Fields began to take on an existence of their own with the development of electromagnetism in the 19th century. Michael Faraday coined the English term "field" in 1845. He introduced fields as properties of space (even when it is devoid of matter) having physical effects. He argued against "action at a distance", and proposed that interactions between objects occur via space-filling "lines of force". This description of fields remains to this day. The theory of classical electromagnetism was completed in 1864 with Maxwell's equations, which described the relationship between the electric field, the magnetic field, electric current, and electric charge. Maxwell's equations implied the existence of electromagnetic waves, a phenomenon whereby electric and magnetic fields propagate from one spatial point to another at a finite speed, which turns out to be the speed of light. Action-at-a-distance was thus conclusively refuted. 
Despite the enormous success of classical electromagnetism, it was unable to account for the discrete lines in atomic spectra, nor for the distribution of blackbody radiation in different wavelengths. Max Planck's study of blackbody radiation marked the beginning of quantum mechanics. He treated atoms, which absorb and emit electromagnetic radiation, as tiny oscillators with the crucial property that their energies can only take on a series of discrete, rather than continuous, values. These are known as quantum harmonic oscillators. This process of restricting energies to discrete values is called quantization. Building on this idea, Albert Einstein proposed in 1905 an explanation for the photoelectric effect, that light is composed of individual packets of energy called photons (the quanta of light). This implied that the electromagnetic radiation, while being waves in the classical electromagnetic field, also exists in the form of particles. In 1913, Niels Bohr introduced the Bohr model of atomic structure, wherein electrons within atoms can only take on a series of discrete, rather than continuous, energies. This is another example of quantization. The Bohr model successfully explained the discrete nature of atomic spectral lines. In 1924, Louis de Broglie proposed the hypothesis of wave–particle duality, that microscopic particles exhibit both wave-like and particle-like properties under different circumstances. Uniting these scattered ideas, a coherent discipline, quantum mechanics, was formulated between 1925 and 1926, with important contributions from Max Planck, Louis de Broglie, Werner Heisenberg, Max Born, Erwin Schrödinger, Paul Dirac, and Wolfgang Pauli. In the same year as his paper on the photoelectric effect, Einstein published his theory of special relativity, built on Maxwell's electromagnetism. New rules, called Lorentz transformations, were given for the way time and space coordinates of an event change under changes in the observer's velocity, and the distinction between time and space was blurred. It was proposed that all physical laws must be the same for observers at different velocities, i.e. that physical laws be invariant under Lorentz transformations. Two difficulties remained. Observationally, the Schrödinger equation underlying quantum mechanics could explain the stimulated emission of radiation from atoms, where an electron emits a new photon under the action of an external electromagnetic field, but it was unable to explain spontaneous emission, where an electron spontaneously decreases in energy and emits a photon even without the action of an external electromagnetic field. Theoretically, the Schrödinger equation could not describe photons and was inconsistent with the principles of special relativity—it treats time as an ordinary number while promoting spatial coordinates to linear operators. Quantum electrodynamics Quantum field theory naturally began with the study of electromagnetic interactions, as the electromagnetic field was the only known classical field as of the 1920s. Through the works of Born, Heisenberg, and Pascual Jordan in 1925–1926, a quantum theory of the free electromagnetic field (one with no interactions with matter) was developed via canonical quantization by treating the electromagnetic field as a set of quantum harmonic oscillators. With the exclusion of interactions, however, such a theory was yet incapable of making quantitative predictions about the real world. 
In his seminal 1927 paper The quantum theory of the emission and absorption of radiation, Dirac coined the term quantum electrodynamics (QED), a theory that adds upon the terms describing the free electromagnetic field an additional interaction term between electric current density and the electromagnetic vector potential. Using first-order perturbation theory, he successfully explained the phenomenon of spontaneous emission. According to the uncertainty principle in quantum mechanics, quantum harmonic oscillators cannot remain stationary, but they have a non-zero minimum energy and must always be oscillating, even in the lowest energy state (the ground state). Therefore, even in a perfect vacuum, there remains an oscillating electromagnetic field having zero-point energy. It is this quantum fluctuation of electromagnetic fields in the vacuum that "stimulates" the spontaneous emission of radiation by electrons in atoms. Dirac's theory was hugely successful in explaining both the emission and absorption of radiation by atoms; by applying second-order perturbation theory, it was able to account for the scattering of photons, resonance fluorescence and non-relativistic Compton scattering. Nonetheless, the application of higher-order perturbation theory was plagued with problematic infinities in calculations. In 1928, Dirac wrote down a wave equation that described relativistic electrons: the Dirac equation. It had the following important consequences: the spin of an electron is 1/2; the electron g-factor is 2; it led to the correct Sommerfeld formula for the fine structure of the hydrogen atom; and it could be used to derive the Klein–Nishina formula for relativistic Compton scattering. Although the results were fruitful, the theory also apparently implied the existence of negative energy states, which would cause atoms to be unstable, since they could always decay to lower energy states by the emission of radiation. The prevailing view at the time was that the world was composed of two very different ingredients: material particles (such as electrons) and quantum fields (such as photons). Material particles were considered to be eternal, with their physical state described by the probabilities of finding each particle in any given region of space or range of velocities. On the other hand, photons were considered merely the excited states of the underlying quantized electromagnetic field, and could be freely created or destroyed. It was between 1928 and 1930 that Jordan, Eugene Wigner, Heisenberg, Pauli, and Enrico Fermi discovered that material particles could also be seen as excited states of quantum fields. Just as photons are excited states of the quantized electromagnetic field, so each type of particle had its corresponding quantum field: an electron field, a proton field, etc. Given enough energy, it would now be possible to create material particles. Building on this idea, Fermi proposed in 1932 an explanation for beta decay known as Fermi's interaction. Atomic nuclei do not contain electrons per se, but in the process of decay, an electron is created out of the surrounding electron field, analogous to the photon created from the surrounding electromagnetic field in the radiative decay of an excited atom. It was realized in 1929 by Dirac and others that negative energy states implied by the Dirac equation could be removed by assuming the existence of particles with the same mass as electrons but opposite electric charge. 
This not only ensured the stability of atoms, but it was also the first proposal of the existence of antimatter. Indeed, the evidence for positrons was discovered in 1932 by Carl David Anderson in cosmic rays. With enough energy, such as by absorbing a photon, an electron-positron pair could be created, a process called pair production; the reverse process, annihilation, could also occur with the emission of a photon. This showed that particle numbers need not be fixed during an interaction. Historically, however, positrons were at first thought of as "holes" in an infinite electron sea, rather than a new kind of particle, and this theory was referred to as the Dirac hole theory. QFT naturally incorporated antiparticles in its formalism. Infinities and renormalization Robert Oppenheimer showed in 1930 that higher-order perturbative calculations in QED always resulted in infinite quantities, such as the electron self-energy and the vacuum zero-point energy of the electron and photon fields, suggesting that the computational methods at the time could not properly deal with interactions involving photons with extremely high momenta. It was not until 20 years later that a systematic approach to remove such infinities was developed. A series of papers was published between 1934 and 1938 by Ernst Stueckelberg that established a relativistically invariant formulation of QFT. In 1947, Stueckelberg also independently developed a complete renormalization procedure. Such achievements were not understood and recognized by the theoretical community. Faced with these infinities, John Archibald Wheeler and Heisenberg proposed, in 1937 and 1943 respectively, to supplant the problematic QFT with the so-called S-matrix theory. Since the specific details of microscopic interactions are inaccessible to observations, the theory should only attempt to describe the relationships between a small number of observables (e.g. the energy of an atom) in an interaction, rather than be concerned with the microscopic minutiae of the interaction. In 1945, Richard Feynman and Wheeler daringly suggested abandoning QFT altogether and proposed action-at-a-distance as the mechanism of particle interactions. In 1947, Willis Lamb and Robert Retherford measured the minute difference in the 2S1/2 and 2P1/2 energy levels of the hydrogen atom, also called the Lamb shift. By ignoring the contribution of photons whose energy exceeds the electron mass, Hans Bethe successfully estimated the numerical value of the Lamb shift. Subsequently, Norman Myles Kroll, Lamb, James Bruce French, and Victor Weisskopf again confirmed this value using an approach in which infinities cancelled other infinities to result in finite quantities. However, this method was clumsy and unreliable and could not be generalized to other calculations. The breakthrough eventually came around 1950 when a more robust method for eliminating infinities was developed by Julian Schwinger, Richard Feynman, Freeman Dyson, and Shinichiro Tomonaga. The main idea is to replace the calculated values of mass and charge, infinite though they may be, by their finite measured values. This systematic computational procedure is known as renormalization and can be applied to arbitrary order in perturbation theory. As Tomonaga said in his Nobel lecture:Since those parts of the modified mass and charge due to field reactions [become infinite], it is impossible to calculate them by the theory. 
However, the mass and charge observed in experiments are not the original mass and charge but the mass and charge as modified by field reactions, and they are finite. On the other hand, the mass and charge appearing in the theory are… the values modified by field reactions. Since this is so, and particularly since the theory is unable to calculate the modified mass and charge, we may adopt the procedure of substituting experimental values for them phenomenologically... This procedure is called the renormalization of mass and charge… After long, laborious calculations, less skillful than Schwinger's, we obtained a result... which was in agreement with [the] Americans'. By applying the renormalization procedure, calculations were finally made to explain the electron's anomalous magnetic moment (the deviation of the electron g-factor from 2) and vacuum polarization. These results agreed with experimental measurements to a remarkable degree, thus marking the end of a "war against infinities". At the same time, Feynman introduced the path integral formulation of quantum mechanics and Feynman diagrams. The latter can be used to visually and intuitively organize and to help compute terms in the perturbative expansion. Each diagram can be interpreted as paths of particles in an interaction, with each vertex and line having a corresponding mathematical expression, and the product of these expressions gives the scattering amplitude of the interaction represented by the diagram. It was with the invention of the renormalization procedure and Feynman diagrams that QFT finally arose as a complete theoretical framework. Non-renormalizability Given the tremendous success of QED, many theorists believed, in the few years after 1949, that QFT could soon provide an understanding of all microscopic phenomena, not only the interactions between photons, electrons, and positrons. Contrary to this optimism, QFT entered yet another period of depression that lasted for almost two decades. The first obstacle was the limited applicability of the renormalization procedure. In perturbative calculations in QED, all infinite quantities could be eliminated by redefining a small (finite) number of physical quantities (namely the mass and charge of the electron). Dyson proved in 1949 that this is only possible for a small class of theories called "renormalizable theories", of which QED is an example. However, most theories, including the Fermi theory of the weak interaction, are "non-renormalizable". Any perturbative calculation in these theories beyond the first order would result in infinities that could not be removed by redefining a finite number of physical quantities. The second major problem stemmed from the limited validity of the Feynman diagram method, which is based on a series expansion in perturbation theory. In order for the series to converge and low-order calculations to be a good approximation, the coupling constant, in which the series is expanded, must be a sufficiently small number. The coupling constant in QED is the fine-structure constant , which is small enough that only the simplest, lowest order, Feynman diagrams need to be considered in realistic calculations. In contrast, the coupling constant in the strong interaction is roughly of the order of one, making complicated, higher order, Feynman diagrams just as important as simple ones. There was thus no way of deriving reliable quantitative predictions for the strong interaction using perturbative QFT methods. 
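The numerical values this comparison turns on appear to have dropped out of the text; the standard figures, supplied here rather than recovered from the original, are that the QED expansion parameter is the fine-structure constant, roughly 1/137, while the low-energy strong coupling is of order one:

```latex
\alpha = \frac{e^2}{4\pi\varepsilon_0\hbar c} \approx \frac{1}{137} \approx 0.0073,
\qquad
\alpha_s(\text{low energy}) \sim 1 .
```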
With these difficulties looming, many theorists began to turn away from QFT. Some focused on symmetry principles and conservation laws, while others picked up the old S-matrix theory of Wheeler and Heisenberg. QFT was used heuristically as guiding principles, but not as a basis for quantitative calculations. Source theory Schwinger, however, took a different route. For more than a decade he and his students had been nearly the only exponents of field theory, but in 1951 he found a way around the problem of the infinities with a new method using external sources as currents coupled to gauge fields. Motivated by the former findings, Schwinger kept pursuing this approach in order to "quantumly" generalize the classical process of coupling external forces to the configuration space parameters known as Lagrange multipliers. He summarized his source theory in 1966 then expanded the theory's applications to quantum electrodynamics in his three volume-set titled: Particles, Sources, and Fields. Developments in pion physics, in which the new viewpoint was most successfully applied, convinced him of the great advantages of mathematical simplicity and conceptual clarity that its use bestowed. In source theory there are no divergences, and no renormalization. It may be regarded as the calculational tool of field theory, but it is more general. Using source theory, Schwinger was able to calculate the anomalous magnetic moment of the electron, which he had done in 1947, but this time with no ‘distracting remarks’ about infinite quantities. Schwinger also applied source theory to his QFT theory of gravity, and was able to reproduce all four of Einstein's classic results: gravitational red shift, deflection and slowing of light by gravity, and the perihelion precession of Mercury. The neglect of source theory by the physics community was a major disappointment for Schwinger:The lack of appreciation of these facts by others was depressing, but understandable. -J. SchwingerSee "the shoes incident" between J. Schwinger and S. Weinberg. Standard model In 1954, Yang Chen-Ning and Robert Mills generalized the local symmetry of QED, leading to non-Abelian gauge theories (also known as Yang–Mills theories), which are based on more complicated local symmetry groups. In QED, (electrically) charged particles interact via the exchange of photons, while in non-Abelian gauge theory, particles carrying a new type of "charge" interact via the exchange of massless gauge bosons. Unlike photons, these gauge bosons themselves carry charge. Sheldon Glashow developed a non-Abelian gauge theory that unified the electromagnetic and weak interactions in 1960. In 1964, Abdus Salam and John Clive Ward arrived at the same theory through a different path. This theory, nevertheless, was non-renormalizable. Peter Higgs, Robert Brout, François Englert, Gerald Guralnik, Carl Hagen, and Tom Kibble proposed in their famous Physical Review Letters papers that the gauge symmetry in Yang–Mills theories could be broken by a mechanism called spontaneous symmetry breaking, through which originally massless gauge bosons could acquire mass. By combining the earlier theory of Glashow, Salam, and Ward with the idea of spontaneous symmetry breaking, Steven Weinberg wrote down in 1967 a theory describing electroweak interactions between all leptons and the effects of the Higgs boson. His theory was at first mostly ignored, until it was brought back to light in 1971 by Gerard 't Hooft's proof that non-Abelian gauge theories are renormalizable. 
The electroweak theory of Weinberg and Salam was extended from leptons to quarks in 1970 by Glashow, John Iliopoulos, and Luciano Maiani, marking its completion. Harald Fritzsch, Murray Gell-Mann, and Heinrich Leutwyler discovered in 1971 that certain phenomena involving the strong interaction could also be explained by non-Abelian gauge theory. Quantum chromodynamics (QCD) was born. In 1973, David Gross, Frank Wilczek, and Hugh David Politzer showed that non-Abelian gauge theories are "asymptotically free", meaning that under renormalization, the coupling constant of the strong interaction decreases as the interaction energy increases. (Similar discoveries had been made numerous times previously, but they had been largely ignored.) Therefore, at least in high-energy interactions, the coupling constant in QCD becomes sufficiently small to warrant a perturbative series expansion, making quantitative predictions for the strong interaction possible. These theoretical breakthroughs brought about a renaissance in QFT. The full theory, which includes the electroweak theory and chromodynamics, is referred to today as the Standard Model of elementary particles. The Standard Model successfully describes all fundamental interactions except gravity, and its many predictions have been met with remarkable experimental confirmation in subsequent decades. The Higgs boson, central to the mechanism of spontaneous symmetry breaking, was finally detected in 2012 at CERN, marking the complete verification of the existence of all constituents of the Standard Model. Other developments The 1970s saw the development of non-perturbative methods in non-Abelian gauge theories. The 't Hooft–Polyakov monopole was discovered theoretically by 't Hooft and Alexander Polyakov, flux tubes by Holger Bech Nielsen and Poul Olesen, and instantons by Polyakov and coauthors. These objects are inaccessible through perturbation theory. Supersymmetry also appeared in the same period. The first supersymmetric QFT in four dimensions was built by Yuri Golfand and Evgeny Likhtman in 1970, but their result failed to garner widespread interest due to the Iron Curtain. Supersymmetry only took off in the theoretical community after the work of Julius Wess and Bruno Zumino in 1973. Among the four fundamental interactions, gravity remains the only one that lacks a consistent QFT description. Various attempts at a theory of quantum gravity led to the development of string theory, itself a type of two-dimensional QFT with conformal symmetry. Joël Scherk and John Schwarz first proposed in 1974 that string theory could be the quantum theory of gravity. Condensed-matter-physics Although quantum field theory arose from the study of interactions between elementary particles, it has been successfully applied to other physical systems, particularly to many-body systems in condensed matter physics. Historically, the Higgs mechanism of spontaneous symmetry breaking was a result of Yoichiro Nambu's application of superconductor theory to elementary particles, while the concept of renormalization came out of the study of second-order phase transitions in matter. Soon after the introduction of photons, Einstein performed the quantization procedure on vibrations in a crystal, leading to the first quasiparticle—phonons. Lev Landau claimed that low-energy excitations in many condensed matter systems could be described in terms of interactions between a set of quasiparticles. 
The Feynman diagram method of QFT was naturally well suited to the analysis of various phenomena in condensed matter systems. Gauge theory is used to describe the quantization of magnetic flux in superconductors, the resistivity in the quantum Hall effect, as well as the relation between frequency and voltage in the AC Josephson effect. Principles For simplicity, natural units are used in the following sections, in which the reduced Planck constant and the speed of light are both set to one. Classical fields A classical field is a function of spatial and time coordinates. Examples include the gravitational field in Newtonian gravity and the electric field and magnetic field in classical electromagnetism. A classical field can be thought of as a numerical quantity assigned to every point in space that changes in time. Hence, it has infinitely many degrees of freedom. Many phenomena exhibiting quantum mechanical properties cannot be explained by classical fields alone. Phenomena such as the photoelectric effect are best explained by discrete particles (photons), rather than a spatially continuous field. The goal of quantum field theory is to describe various quantum mechanical phenomena using a modified concept of fields. Canonical quantization and path integrals are two common formulations of QFT. To motivate the fundamentals of QFT, an overview of classical field theory follows. The simplest classical field is a real scalar field — a real number at every point in space that changes in time. It is denoted as , where is the position vector, and is the time. Suppose the Lagrangian of the field, , is where is the Lagrangian density, is the time-derivative of the field, is the gradient operator, and is a real parameter (the "mass" of the field). Applying the Euler–Lagrange equation on the Lagrangian: we obtain the equations of motion for the field, which describe the way it varies in time and space: This is known as the Klein–Gordon equation. The Klein–Gordon equation is a wave equation, so its solutions can be expressed as a sum of normal modes (obtained via Fourier transform) as follows: where is a complex number (normalized by convention), denotes complex conjugation, and is the frequency of the normal mode: Thus each normal mode corresponding to a single can be seen as a classical harmonic oscillator with frequency . Canonical quantization The quantization procedure for the above classical field to a quantum operator field is analogous to the promotion of a classical harmonic oscillator to a quantum harmonic oscillator. The displacement of a classical harmonic oscillator is described by where is a complex number (normalized by convention), and is the oscillator's frequency. Note that is the displacement of a particle in simple harmonic motion from the equilibrium position, not to be confused with the spatial label of a quantum field. For a quantum harmonic oscillator, is promoted to a linear operator : Complex numbers and are replaced by the annihilation operator and the creation operator , respectively, where denotes Hermitian conjugation. The commutation relation between the two is The Hamiltonian of the simple harmonic oscillator can be written as The vacuum state , which is the lowest energy state, is defined by and has energy One can easily check that which implies that increases the energy of the simple harmonic oscillator by . For example, the state is an eigenstate of energy . 
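The formulas referred to in the passage above appear to have been lost in extraction. A reconstruction in common textbook conventions, which may differ in sign or normalization from the original, is:

```latex
% Real scalar field phi(x,t): Lagrangian density and Klein-Gordon equation
L = \int d^3x\, \mathcal{L}, \qquad
\mathcal{L} = \tfrac{1}{2}\dot{\phi}^2 - \tfrac{1}{2}(\nabla\phi)^2 - \tfrac{1}{2}m^2\phi^2,
\qquad
\ddot{\phi} - \nabla^2\phi + m^2\phi = 0 .

% Normal-mode expansion: each mode p is a classical oscillator of frequency omega_p
\phi(\mathbf{x},t) = \int\!\frac{d^3p}{(2\pi)^3}
 \left( a_{\mathbf{p}}\, e^{i(\mathbf{p}\cdot\mathbf{x}-\omega_{\mathbf{p}}t)}
      + a_{\mathbf{p}}^{*}\, e^{-i(\mathbf{p}\cdot\mathbf{x}-\omega_{\mathbf{p}}t)} \right),
\qquad
\omega_{\mathbf{p}} = \sqrt{|\mathbf{p}|^2 + m^2} .

% Quantum harmonic oscillator: ladder operators, Hamiltonian, ground state
[\hat a, \hat a^{\dagger}] = 1, \qquad
\hat H = \omega\left(\hat a^{\dagger}\hat a + \tfrac{1}{2}\right), \qquad
\hat a\,|0\rangle = 0, \qquad
\hat H\,(\hat a^{\dagger})^{n}|0\rangle = \left(n+\tfrac{1}{2}\right)\omega\,(\hat a^{\dagger})^{n}|0\rangle .
```

The ladder-operator algebra can also be checked numerically in a truncated number basis; the short script below is an illustrative sketch, in which the truncation size and frequency are arbitrary choices rather than values from the text.

```python
import numpy as np

N = 12        # Fock-space truncation (illustrative choice)
omega = 1.0   # oscillator frequency in natural units (hbar = 1)

# Annihilation operator in the number basis: a|n> = sqrt(n)|n-1>
a = np.diag(np.sqrt(np.arange(1, N)), k=1)
adag = a.conj().T

# [a, a+] equals the identity except in the last diagonal entry,
# an artifact of truncating the infinite-dimensional Fock space.
comm = a @ adag - adag @ a
print(np.allclose(comm[:-1, :-1], np.eye(N - 1)))   # True

# H = omega (a+ a + 1/2) has eigenvalues omega (n + 1/2)
H = omega * (adag @ a + 0.5 * np.eye(N))
print(np.linalg.eigvalsh(H)[:4])                    # ~ [0.5 1.5 2.5 3.5]
```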
Any energy eigenstate state of a single harmonic oscillator can be obtained from by successively applying the creation operator : and any state of the system can be expressed as a linear combination of the states A similar procedure can be applied to the real scalar field , by promoting it to a quantum field operator , while the annihilation operator , the creation operator and the angular frequency are now for a particular : Their commutation relations are: where is the Dirac delta function. The vacuum state is defined by Any quantum state of the field can be obtained from by successively applying creation operators (or by a linear combination of such states), e.g. While the state space of a single quantum harmonic oscillator contains all the discrete energy states of one oscillating particle, the state space of a quantum field contains the discrete energy levels of an arbitrary number of particles. The latter space is known as a Fock space, which can account for the fact that particle numbers are not fixed in relativistic quantum systems. The process of quantizing an arbitrary number of particles instead of a single particle is often also called second quantization. The foregoing procedure is a direct application of non-relativistic quantum mechanics and can be used to quantize (complex) scalar fields, Dirac fields, vector fields (e.g. the electromagnetic field), and even strings. However, creation and annihilation operators are only well defined in the simplest theories that contain no interactions (so-called free theory). In the case of the real scalar field, the existence of these operators was a consequence of the decomposition of solutions of the classical equations of motion into a sum of normal modes. To perform calculations on any realistic interacting theory, perturbation theory would be necessary. The Lagrangian of any quantum field in nature would contain interaction terms in addition to the free theory terms. For example, a quartic interaction term could be introduced to the Lagrangian of the real scalar field: where is a spacetime index, , etc. The summation over the index has been omitted following the Einstein notation. If the parameter is sufficiently small, then the interacting theory described by the above Lagrangian can be considered as a small perturbation from the free theory. Path integrals The path integral formulation of QFT is concerned with the direct computation of the scattering amplitude of a certain interaction process, rather than the establishment of operators and state spaces. To calculate the probability amplitude for a system to evolve from some initial state at time to some final state at , the total time is divided into small intervals. The overall amplitude is the product of the amplitude of evolution within each interval, integrated over all intermediate states. Let be the Hamiltonian (i.e. generator of time evolution), then Taking the limit , the above product of integrals becomes the Feynman path integral: where is the Lagrangian involving and its derivatives with respect to spatial and time coordinates, obtained from the Hamiltonian via Legendre transformation. The initial and final conditions of the path integral are respectively In other words, the overall amplitude is the sum over the amplitude of every possible path between the initial and final states, where the amplitude of a path is given by the exponential in the integrand. 
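Written out explicitly, again in standard conventions assumed here rather than recovered from the original, the relations discussed above are:

```latex
% Canonical commutators of the field's ladder operators and the vacuum state
[a_{\mathbf{p}}, a^{\dagger}_{\mathbf{p}'}] = (2\pi)^3\,\delta^{(3)}(\mathbf{p} - \mathbf{p}'),
\qquad
[a_{\mathbf{p}}, a_{\mathbf{p}'}] = [a^{\dagger}_{\mathbf{p}}, a^{\dagger}_{\mathbf{p}'}] = 0,
\qquad
a_{\mathbf{p}}\,|0\rangle = 0 .

% Free real scalar field plus a quartic interaction ("phi^4 theory")
\mathcal{L} = \tfrac{1}{2}\,\partial_\mu\phi\,\partial^\mu\phi - \tfrac{1}{2}m^2\phi^2
            - \frac{\lambda}{4!}\,\phi^4,
\qquad \lambda \ll 1 \ \text{for a perturbative treatment}.

% Path-integral form of the transition amplitude
\langle \phi_f, t_f \,|\, \phi_i, t_i \rangle
  = \int \mathcal{D}\phi \;\exp\!\left( i\!\int_{t_i}^{t_f}\! dt \int d^3x\; \mathcal{L}[\phi] \right).
```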
Two-point correlation function In calculations, one often encounters expression likein the free or interacting theory, respectively. Here, and are position four-vectors, is the time ordering operator that shuffles its operands so the time-components and increase from right to left, and is the ground state (vacuum state) of the interacting theory, different from the free ground state . This expression represents the probability amplitude for the field to propagate from to , and goes by multiple names, like the two-point propagator, two-point correlation function, two-point Green's function or two-point function for short. The free two-point function, also known as the Feynman propagator, can be found for the real scalar field by either canonical quantization or path integrals to be In an interacting theory, where the Lagrangian or Hamiltonian contains terms or that describe interactions, the two-point function is more difficult to define. However, through both the canonical quantization formulation and the path integral formulation, it is possible to express it through an infinite perturbation series of the free two-point function. In canonical quantization, the two-point correlation function can be written as: where is an infinitesimal number and is the field operator under the free theory. Here, the exponential should be understood as its power series expansion. For example, in -theory, the interacting term of the Hamiltonian is , and the expansion of the two-point correlator in terms of becomesThis perturbation expansion expresses the interacting two-point function in terms of quantities that are evaluated in the free theory. In the path integral formulation, the two-point correlation function can be written where is the Lagrangian density. As in the previous paragraph, the exponential can be expanded as a series in , reducing the interacting two-point function to quantities in the free theory. Wick's theorem further reduce any -point correlation function in the free theory to a sum of products of two-point correlation functions. For example, Since interacting correlation functions can be expressed in terms of free correlation functions, only the latter need to be evaluated in order to calculate all physical quantities in the (perturbative) interacting theory. This makes the Feynman propagator one of the most important quantities in quantum field theory. Feynman diagram Correlation functions in the interacting theory can be written as a perturbation series. Each term in the series is a product of Feynman propagators in the free theory and can be represented visually by a Feynman diagram. For example, the term in the two-point correlation function in the theory is After applying Wick's theorem, one of the terms is This term can instead be obtained from the Feynman diagram . The diagram consists of external vertices connected with one edge and represented by dots (here labeled and ). internal vertices connected with four edges and represented by dots (here labeled ). edges connecting the vertices and represented by lines. Every vertex corresponds to a single field factor at the corresponding point in spacetime, while the edges correspond to the propagators between the spacetime points. The term in the perturbation series corresponding to the diagram is obtained by writing down the expression that follows from the so-called Feynman rules: For every internal vertex , write down a factor . For every edge that connects two vertices and , write down a factor . 
Divide by the symmetry factor of the diagram. With the symmetry factor , following these rules yields exactly the expression above. By Fourier transforming the propagator, the Feynman rules can be reformulated from position space into momentum space. In order to compute the -point correlation function to the -th order, list all valid Feynman diagrams with external points and or fewer vertices, and then use Feynman rules to obtain the expression for each term. To be precise, is equal to the sum of (expressions corresponding to) all connected diagrams with external points. (Connected diagrams are those in which every vertex is connected to an external point through lines. Components that are totally disconnected from external lines are sometimes called "vacuum bubbles".) In the interaction theory discussed above, every vertex must have four legs. In realistic applications, the scattering amplitude of a certain interaction or the decay rate of a particle can be computed from the S-matrix, which itself can be found using the Feynman diagram method. Feynman diagrams devoid of "loops" are called tree-level diagrams, which describe the lowest-order interaction processes; those containing loops are referred to as -loop diagrams, which describe higher-order contributions, or radiative corrections, to the interaction. Lines whose end points are vertices can be thought of as the propagation of virtual particles. Renormalization Feynman rules can be used to directly evaluate tree-level diagrams. However, naïve computation of loop diagrams such as the one shown above will result in divergent momentum integrals, which seems to imply that almost all terms in the perturbative expansion are infinite. The renormalisation procedure is a systematic process for removing such infinities. Parameters appearing in the Lagrangian, such as the mass and the coupling constant , have no physical meaning — , , and the field strength are not experimentally measurable quantities and are referred to here as the bare mass, bare coupling constant, and bare field, respectively. The physical mass and coupling constant are measured in some interaction process and are generally different from the bare quantities. While computing physical quantities from this interaction process, one may limit the domain of divergent momentum integrals to be below some momentum cut-off , obtain expressions for the physical quantities, and then take the limit . This is an example of regularization, a class of methods to treat divergences in QFT, with being the regulator. The approach illustrated above is called bare perturbation theory, as calculations involve only the bare quantities such as mass and coupling constant. A different approach, called renormalized perturbation theory, is to use physically meaningful quantities from the very beginning. In the case of theory, the field strength is first redefined: where is the bare field, is the renormalized field, and is a constant to be determined. The Lagrangian density becomes: where and are the experimentally measurable, renormalized, mass and coupling constant, respectively, and are constants to be determined. The first three terms are the Lagrangian density written in terms of the renormalized quantities, while the latter three terms are referred to as "counterterms". As the Lagrangian now contains more terms, so the Feynman diagrams should include additional elements, each with their own Feynman rules. The procedure is outlined as follows. 
First select a regularization scheme (such as the cut-off regularization introduced above or dimensional regularization); call the regulator . Compute Feynman diagrams, in which divergent terms will depend on . Then, define , , and such that Feynman diagrams for the counterterms will exactly cancel the divergent terms in the normal Feynman diagrams when the limit is taken. In this way, meaningful finite quantities are obtained. It is only possible to eliminate all infinities to obtain a finite result in renormalizable theories, whereas in non-renormalizable theories infinities cannot be removed by the redefinition of a small number of parameters. The Standard Model of elementary particles is a renormalizable QFT, while quantum gravity is non-renormalizable. Renormalization group The renormalization group, developed by Kenneth Wilson, is a mathematical apparatus used to study the changes in physical parameters (coefficients in the Lagrangian) as the system is viewed at different scales. The way in which each parameter changes with scale is described by its β function. Correlation functions, which underlie quantitative physical predictions, change with scale according to the Callan–Symanzik equation. As an example, the coupling constant in QED, namely the elementary charge , has the following β function: where is the energy scale under which the measurement of is performed. This differential equation implies that the observed elementary charge increases as the scale increases. The renormalized coupling constant, which changes with the energy scale, is also called the running coupling constant. The coupling constant in quantum chromodynamics, a non-Abelian gauge theory based on the symmetry group , has the following β function: where is the number of quark flavours. In the case where (the Standard Model has ), the coupling constant decreases as the energy scale increases. Hence, while the strong interaction is strong at low energies, it becomes very weak in high-energy interactions, a phenomenon known as asymptotic freedom. Conformal field theories (CFTs) are special QFTs that admit conformal symmetry. They are insensitive to changes in the scale, as all their coupling constants have vanishing β function. (The converse is not true, however — the vanishing of all β functions does not imply conformal symmetry of the theory.) Examples include string theory and supersymmetric Yang–Mills theory. According to Wilson's picture, every QFT is fundamentally accompanied by its energy cut-off , i.e. that the theory is no longer valid at energies higher than , and all degrees of freedom above the scale are to be omitted. For example, the cut-off could be the inverse of the atomic spacing in a condensed matter system, and in elementary particle physics it could be associated with the fundamental "graininess" of spacetime caused by quantum fluctuations in gravity. The cut-off scale of theories of particle interactions lies far beyond current experiments. Even if the theory were very complicated at that scale, as long as its couplings are sufficiently weak, it must be described at low energies by a renormalizable effective field theory. The difference between renormalizable and non-renormalizable theories is that the former are insensitive to details at high energies, whereas the latter do depend on them. According to this view, non-renormalizable theories are to be seen as low-energy effective theories of a more fundamental theory. 
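Several formulas referenced in the renormalization discussion above — the free propagator, the renormalized Lagrangian with counterterms, and the one-loop β functions — are missing from the text; their standard forms, quoted here under the usual conventions, are:

```latex
% Feynman propagator of the free real scalar field
D_F(x-y) = \int \frac{d^4p}{(2\pi)^4}\;
           \frac{i\, e^{-ip\cdot(x-y)}}{p^2 - m^2 + i\epsilon} .

% Renormalized phi^4 Lagrangian: physical parameters plus counterterms
\mathcal{L} = \tfrac{1}{2}\,\partial_\mu\phi_r\,\partial^\mu\phi_r
            - \tfrac{1}{2}m_r^2\phi_r^2 - \frac{\lambda_r}{4!}\phi_r^4
            + \tfrac{1}{2}\delta_Z\,\partial_\mu\phi_r\,\partial^\mu\phi_r
            - \tfrac{1}{2}\delta_m\,\phi_r^2 - \frac{\delta_\lambda}{4!}\phi_r^4 .

% One-loop beta functions quoted in the renormalization-group discussion
\beta(e) = \frac{e^3}{12\pi^2} \quad \text{(QED)},
\qquad
\beta(g_s) = -\left(11 - \frac{2}{3}N_f\right)\frac{g_s^3}{16\pi^2}
\quad \text{(QCD, } N_f \text{ quark flavours)}.
```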
The failure to remove the cut-off from calculations in such a theory merely indicates that new physical phenomena appear at scales above , where a new theory is necessary. Other theories The quantization and renormalization procedures outlined in the preceding sections are performed for the free theory and theory of the real scalar field. A similar process can be done for other types of fields, including the complex scalar field, the vector field, and the Dirac field, as well as other types of interaction terms, including the electromagnetic interaction and the Yukawa interaction. As an example, quantum electrodynamics contains a Dirac field representing the electron field and a vector field representing the electromagnetic field (photon field). (Despite its name, the quantum electromagnetic "field" actually corresponds to the classical electromagnetic four-potential, rather than the classical electric and magnetic fields.) The full QED Lagrangian density is: where are Dirac matrices, , and is the electromagnetic field strength. The parameters in this theory are the (bare) electron mass and the (bare) elementary charge . The first and second terms in the Lagrangian density correspond to the free Dirac field and free vector fields, respectively. The last term describes the interaction between the electron and photon fields, which is treated as a perturbation from the free theories. Shown above is an example of a tree-level Feynman diagram in QED. It describes an electron and a positron annihilating, creating an off-shell photon, and then decaying into a new pair of electron and positron. Time runs from left to right. Arrows pointing forward in time represent the propagation of electrons, while those pointing backward in time represent the propagation of positrons. A wavy line represents the propagation of a photon. Each vertex in QED Feynman diagrams must have an incoming and an outgoing fermion (positron/electron) leg as well as a photon leg. Gauge symmetry If the following transformation to the fields is performed at every spacetime point (a local transformation), then the QED Lagrangian remains unchanged, or invariant: where is any function of spacetime coordinates. If a theory's Lagrangian (or more precisely the action) is invariant under a certain local transformation, then the transformation is referred to as a gauge symmetry of the theory. Gauge symmetries form a group at every spacetime point. In the case of QED, the successive application of two different local symmetry transformations and is yet another symmetry transformation . For any , is an element of the group, thus QED is said to have gauge symmetry. The photon field may be referred to as the gauge boson. is an Abelian group, meaning that the result is the same regardless of the order in which its elements are applied. QFTs can also be built on non-Abelian groups, giving rise to non-Abelian gauge theories (also known as Yang–Mills theories). Quantum chromodynamics, which describes the strong interaction, is a non-Abelian gauge theory with an gauge symmetry. It contains three Dirac fields representing quark fields as well as eight vector fields representing gluon fields, which are the gauge bosons. The QCD Lagrangian density is: where is the gauge covariant derivative: where is the coupling constant, are the eight generators of in the fundamental representation ( matrices), and are the structure constants of . Repeated indices are implicitly summed over following Einstein notation. 
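The Lagrangian densities and transformations referenced in this passage, written out in one common convention (signs and coupling placements vary between textbooks), are:

```latex
% QED: Dirac field psi coupled to the photon field A_mu
\mathcal{L}_{\text{QED}} = \bar\psi\,(i\gamma^\mu D_\mu - m)\,\psi
                          - \tfrac{1}{4}F_{\mu\nu}F^{\mu\nu},
\qquad D_\mu = \partial_\mu + ieA_\mu,
\qquad F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu .

% Local U(1) gauge transformation leaving L_QED invariant
\psi(x) \to e^{\,i\alpha(x)}\,\psi(x),
\qquad A_\mu(x) \to A_\mu(x) - \tfrac{1}{e}\,\partial_\mu\alpha(x) .

% QCD: quark fields psi_q and eight gluon fields A^a_mu with SU(3) generators t^a
\mathcal{L}_{\text{QCD}} = \sum_q \bar\psi_q\,(i\gamma^\mu D_\mu - m_q)\,\psi_q
                          - \tfrac{1}{4}G^a_{\mu\nu}G^{a\,\mu\nu},
\qquad D_\mu = \partial_\mu - i g_s\, t^a A^a_\mu,
\qquad G^a_{\mu\nu} = \partial_\mu A^a_\nu - \partial_\nu A^a_\mu + g_s f^{abc} A^b_\mu A^c_\nu .
```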
This Lagrangian is invariant under the transformation: where is an element of at every spacetime point : The preceding discussion of symmetries is on the level of the Lagrangian. In other words, these are "classical" symmetries. After quantization, some theories will no longer exhibit their classical symmetries, a phenomenon called anomaly. For instance, in the path integral formulation, despite the invariance of the Lagrangian density under a certain local transformation of the fields, the measure of the path integral may change. For a theory describing nature to be consistent, it must not contain any anomaly in its gauge symmetry. The Standard Model of elementary particles is a gauge theory based on the group , in which all anomalies exactly cancel. The theoretical foundation of general relativity, the equivalence principle, can also be understood as a form of gauge symmetry, making general relativity a gauge theory based on the Lorentz group. Noether's theorem states that every continuous symmetry, i.e. the parameter in the symmetry transformation being continuous rather than discrete, leads to a corresponding conservation law. For example, the symmetry of QED implies charge conservation. Gauge-transformations do not relate distinct quantum states. Rather, it relates two equivalent mathematical descriptions of the same quantum state. As an example, the photon field , being a four-vector, has four apparent degrees of freedom, but the actual state of a photon is described by its two degrees of freedom corresponding to the polarization. The remaining two degrees of freedom are said to be "redundant" — apparently different ways of writing can be related to each other by a gauge transformation and in fact describe the same state of the photon field. In this sense, gauge invariance is not a "real" symmetry, but a reflection of the "redundancy" of the chosen mathematical description. To account for the gauge redundancy in the path integral formulation, one must perform the so-called Faddeev–Popov gauge fixing procedure. In non-Abelian gauge theories, such a procedure introduces new fields called "ghosts". Particles corresponding to the ghost fields are called ghost particles, which cannot be detected externally. A more rigorous generalization of the Faddeev–Popov procedure is given by BRST quantization. Spontaneous symmetry-breaking Spontaneous symmetry breaking is a mechanism whereby the symmetry of the Lagrangian is violated by the system described by it. To illustrate the mechanism, consider a linear sigma model containing real scalar fields, described by the Lagrangian density: where and are real parameters. The theory admits an global symmetry: The lowest energy state (ground state or vacuum state) of the classical theory is any uniform field satisfying Without loss of generality, let the ground state be in the -th direction: The original fields can be rewritten as: and the original Lagrangian density as: where . The original global symmetry is no longer manifest, leaving only the subgroup . The larger symmetry before spontaneous symmetry breaking is said to be "hidden" or spontaneously broken. Goldstone's theorem states that under spontaneous symmetry breaking, every broken continuous global symmetry leads to a massless field called the Goldstone boson. In the above example, has continuous symmetries (the dimension of its Lie algebra), while has . The number of broken symmetries is their difference, , which corresponds to the massless fields . 
On the other hand, when a gauge (as opposed to global) symmetry is spontaneously broken, the resulting Goldstone boson is "eaten" by the corresponding gauge boson by becoming an additional degree of freedom for the gauge boson. The Goldstone boson equivalence theorem states that at high energy, the amplitude for emission or absorption of a longitudinally polarized massive gauge boson becomes equal to the amplitude for emission or absorption of the Goldstone boson that was eaten by the gauge boson. In the QFT of ferromagnetism, spontaneous symmetry breaking can explain the alignment of magnetic dipoles at low temperatures. In the Standard Model of elementary particles, the W and Z bosons, which would otherwise be massless as a result of gauge symmetry, acquire mass through spontaneous symmetry breaking of the Higgs boson, a process called the Higgs mechanism. Supersymmetry All experimentally known symmetries in nature relate bosons to bosons and fermions to fermions. Theorists have hypothesized the existence of a type of symmetry, called supersymmetry, that relates bosons and fermions. The Standard Model obeys Poincaré symmetry, whose generators are the spacetime translations and the Lorentz transformations . In addition to these generators, supersymmetry in (3+1)-dimensions includes additional generators , called supercharges, which themselves transform as Weyl fermions. The symmetry group generated by all these generators is known as the super-Poincaré group. In general there can be more than one set of supersymmetry generators, , which generate the corresponding supersymmetry, supersymmetry, and so on. Supersymmetry can also be constructed in other dimensions, most notably in (1+1) dimensions for its application in superstring theory. The Lagrangian of a supersymmetric theory must be invariant under the action of the super-Poincaré group. Examples of such theories include: Minimal Supersymmetric Standard Model (MSSM), supersymmetric Yang–Mills theory, and superstring theory. In a supersymmetric theory, every fermion has a bosonic superpartner and vice versa. If supersymmetry is promoted to a local symmetry, then the resultant gauge theory is an extension of general relativity called supergravity. Supersymmetry is a potential solution to many current problems in physics. For example, the hierarchy problem of the Standard Model—why the mass of the Higgs boson is not radiatively corrected (under renormalization) to a very high scale such as the grand unified scale or the Planck scale—can be resolved by relating the Higgs field and its super-partner, the Higgsino. Radiative corrections due to Higgs boson loops in Feynman diagrams are cancelled by corresponding Higgsino loops. Supersymmetry also offers answers to the grand unification of all gauge coupling constants in the Standard Model as well as the nature of dark matter. Nevertheless, experiments have yet to provide evidence for the existence of supersymmetric particles. If supersymmetry were a true symmetry of nature, then it must be a broken symmetry, and the energy of symmetry breaking must be higher than those achievable by present-day experiments. Other spacetimes The theory, QED, QCD, as well as the whole Standard Model all assume a (3+1)-dimensional Minkowski space (3 spatial and 1 time dimensions) as the background on which the quantum fields are defined. However, QFT a priori imposes no restriction on the number of dimensions nor the geometry of spacetime. 
In condensed matter physics, QFT is used to describe (2+1)-dimensional electron gases. In high-energy physics, string theory is a type of (1+1)-dimensional QFT, while Kaluza–Klein theory uses gravity in extra dimensions to produce gauge theories in lower dimensions. In Minkowski space, the flat metric is used to raise and lower spacetime indices in the Lagrangian, e.g. where is the inverse of satisfying . For QFTs in curved spacetime on the other hand, a general metric (such as the Schwarzschild metric describing a black hole) is used: where is the inverse of . For a real scalar field, the Lagrangian density in a general spacetime background is where , and denotes the covariant derivative. The Lagrangian of a QFT, hence its calculational results and physical predictions, depends on the geometry of the spacetime background. Topological quantum field theory The correlation functions and physical predictions of a QFT depend on the spacetime metric . For a special class of QFTs called topological quantum field theories (TQFTs), all correlation functions are independent of continuous changes in the spacetime metric. QFTs in curved spacetime generally change according to the geometry (local structure) of the spacetime background, while TQFTs are invariant under spacetime diffeomorphisms but are sensitive to the topology (global structure) of spacetime. This means that all calculational results of TQFTs are topological invariants of the underlying spacetime. Chern–Simons theory is an example of TQFT and has been used to construct models of quantum gravity. Applications of TQFT include the fractional quantum Hall effect and topological quantum computers. The world line trajectory of fractionalized particles (known as anyons) can form a link configuration in the spacetime, which relates the braiding statistics of anyons in physics to the link invariants in mathematics. Topological quantum field theories (TQFTs) applicable to the frontier research of topological quantum matters include Chern-Simons-Witten gauge theories in 2+1 spacetime dimensions, other new exotic TQFTs in 3+1 spacetime dimensions and beyond. Perturbative and non-perturbative methods Using perturbation theory, the total effect of a small interaction term can be approximated order by order by a series expansion in the number of virtual particles participating in the interaction. Every term in the expansion may be understood as one possible way for (physical) particles to interact with each other via virtual particles, expressed visually using a Feynman diagram. The electromagnetic force between two electrons in QED is represented (to first order in perturbation theory) by the propagation of a virtual photon. In a similar manner, the W and Z bosons carry the weak interaction, while gluons carry the strong interaction. The interpretation of an interaction as a sum of intermediate states involving the exchange of various virtual particles only makes sense in the framework of perturbation theory. In contrast, non-perturbative methods in QFT treat the interacting Lagrangian as a whole without any series expansion. Instead of particles that carry interactions, these methods have spawned such concepts as 't Hooft–Polyakov monopole, domain wall, flux tube, and instanton. Examples of QFTs that are completely solvable non-perturbatively include minimal models of conformal field theory and the Thirring model. 
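The metric and Lagrangian expressions mentioned earlier in this section can be written explicitly; a minimal sketch, assuming the usual (+,−,−,−) signature and minimal coupling, is:

```latex
% Flat (Minkowski) metric versus a general curved background
\eta_{\mu\nu} = \mathrm{diag}(+1,-1,-1,-1),
\qquad
g^{\mu\nu}g_{\nu\rho} = \delta^{\mu}_{\ \rho} .

% Real scalar field on a general spacetime background
\mathcal{L} = \sqrt{-g}\left( \tfrac{1}{2}\,g^{\mu\nu}\,\nabla_{\mu}\phi\,\nabla_{\nu}\phi
                             - \tfrac{1}{2}\,m^{2}\phi^{2} \right),
\qquad g \equiv \det(g_{\mu\nu}),
```

where ∇ is the covariant derivative, which reduces to the ordinary partial derivative when acting on a scalar.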
Mathematical rigor In spite of its overwhelming success in particle physics and condensed matter physics, QFT itself lacks a formal mathematical foundation. For example, according to Haag's theorem, there does not exist a well-defined interaction picture for QFT, which implies that perturbation theory of QFT, which underlies the entire Feynman diagram method, is fundamentally ill-defined. However, perturbative quantum field theory, which only requires that quantities be computable as a formal power series without any convergence requirements, can be given a rigorous mathematical treatment. In particular, Kevin Costello's monograph Renormalization and Effective Field Theory provides a rigorous formulation of perturbative renormalization that combines both the effective-field theory approaches of Kadanoff, Wilson, and Polchinski, together with the Batalin-Vilkovisky approach to quantizing gauge theories. Furthermore, perturbative path-integral methods, typically understood as formal computational methods inspired from finite-dimensional integration theory, can be given a sound mathematical interpretation from their finite-dimensional analogues. Since the 1950s, theoretical physicists and mathematicians have attempted to organize all QFTs into a set of axioms, in order to establish the existence of concrete models of relativistic QFT in a mathematically rigorous way and to study their properties. This line of study is called constructive quantum field theory, a subfield of mathematical physics, which has led to such results as CPT theorem, spin–statistics theorem, and Goldstone's theorem, and also to mathematically rigorous constructions of many interacting QFTs in two and three spacetime dimensions, e.g. two-dimensional scalar field theories with arbitrary polynomial interactions, the three-dimensional scalar field theories with a quartic interaction, etc. Compared to ordinary QFT, topological quantum field theory and conformal field theory are better supported mathematically — both can be classified in the framework of representations of cobordisms. Algebraic quantum field theory is another approach to the axiomatization of QFT, in which the fundamental objects are local operators and the algebraic relations between them. Axiomatic systems following this approach include Wightman axioms and Haag–Kastler axioms. One way to construct theories satisfying Wightman axioms is to use Osterwalder–Schrader axioms, which give the necessary and sufficient conditions for a real time theory to be obtained from an imaginary time theory by analytic continuation (Wick rotation). Yang–Mills existence and mass gap, one of the Millennium Prize Problems, concerns the well-defined existence of Yang–Mills theories as set out by the above axioms. The full problem statement is as follows. 
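In brief, and paraphrasing the Clay Mathematics Institute's official formulation rather than quoting it from the text (the quotation appears to be missing): prove that for any compact simple gauge group G, a non-trivial quantum Yang–Mills theory exists on four-dimensional space and has a mass gap Δ > 0, with "existence" understood to include axiomatic properties at least as strong as those of the Wightman-type frameworks cited above.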
See also Abraham–Lorentz force AdS/CFT correspondence Axiomatic quantum field theory Introduction to quantum mechanics Common integrals in quantum field theory Conformal field theory Constructive quantum field theory Dirac's equation Form factor (quantum field theory) Feynman diagram Green–Kubo relations Green's function (many-body theory) Group field theory Lattice field theory List of quantum field theories Local quantum field theory Noncommutative quantum field theory Quantization of a field Quantum electrodynamics Quantum field theory in curved spacetime Quantum chromodynamics Quantum flavordynamics Quantum hadrodynamics Quantum hydrodynamics Quantum triviality Relation between Schrödinger's equation and the path integral formulation of quantum mechanics Relationship between string theory and quantum field theory Schwinger–Dyson equation Static forces and virtual-particle exchange Symmetry in quantum mechanics Topological quantum field theory Ward–Takahashi identity Wheeler–Feynman absorber theory Wigner's classification Wigner's theorem
Specific orbital energy
In the gravitational two-body problem, the specific orbital energy (or vis-viva energy) of two orbiting bodies is the constant sum of their mutual potential energy and their kinetic energy, divided by the reduced mass. According to the orbital energy conservation equation (also referred to as vis-viva equation), it does not vary with time: where is the relative orbital speed; is the orbital distance between the bodies; is the sum of the standard gravitational parameters of the bodies; is the specific relative angular momentum in the sense of relative angular momentum divided by the reduced mass; is the orbital eccentricity; is the semi-major axis. It is typically expressed in (megajoule per kilogram) or (squared kilometer per squared second). For an elliptic orbit the specific orbital energy is the negative of the additional energy required to accelerate a mass of one kilogram to escape velocity (parabolic orbit). For a hyperbolic orbit, it is equal to the excess energy compared to that of a parabolic orbit. In this case the specific orbital energy is also referred to as characteristic energy. Equation forms for different orbits For an elliptic orbit, the specific orbital energy equation, when combined with conservation of specific angular momentum at one of the orbit's apsides, simplifies to: where is the standard gravitational parameter; is semi-major axis of the orbit. For a parabolic orbit this equation simplifies to For a hyperbolic trajectory this specific orbital energy is either given by or the same as for an ellipse, depending on the convention for the sign of a. In this case the specific orbital energy is also referred to as characteristic energy (or ) and is equal to the excess specific energy compared to that for a parabolic orbit. It is related to the hyperbolic excess velocity (the orbital velocity at infinity) by It is relevant for interplanetary missions. Thus, if orbital position vector and orbital velocity vector are known at one position, and is known, then the energy can be computed and from that, for any other position, the orbital speed. Rate of change For an elliptic orbit the rate of change of the specific orbital energy with respect to a change in the semi-major axis is where is the standard gravitational parameter; is semi-major axis of the orbit. In the case of circular orbits, this rate is one half of the gravitation at the orbit. This corresponds to the fact that for such orbits the total energy is one half of the potential energy, because the kinetic energy is minus one half of the potential energy. Additional energy If the central body has radius R, then the additional specific energy of an elliptic orbit compared to being stationary at the surface is The quantity is the height the ellipse extends above the surface, plus the periapsis distance (the distance the ellipse extends beyond the center of the Earth). For the Earth and just little more than the additional specific energy is ; which is the kinetic energy of the horizontal component of the velocity, i.e. , . Examples ISS The International Space Station has an orbital period of 91.74 minutes (5504s), hence by Kepler's Third Law the semi-major axis of its orbit is 6,738km. The specific orbital energy associated with this orbit is −29.6MJ/kg: the potential energy is −59.2MJ/kg, and the kinetic energy 29.6MJ/kg. Compared with the potential energy at the surface, which is −62.6MJ/kg., the extra potential energy is 3.4MJ/kg, and the total extra energy is 33.0MJ/kg. 
The average speed is 7.7km/s, the net delta-v to reach this orbit is 8.1km/s (the actual delta-v is typically 1.5–2.0km/s more for atmospheric drag and gravity drag). The increase per meter would be 4.4J/kg; this rate corresponds to one half of the local gravity of 8.8m/s². For an altitude of 100km (radius is 6471km): The energy is −30.8MJ/kg: the potential energy is −61.6MJ/kg, and the kinetic energy 30.8MJ/kg. Compare with the potential energy at the surface, which is −62.6MJ/kg. The extra potential energy is 1.0MJ/kg, the total extra energy is 31.8MJ/kg. The increase per meter would be 4.8J/kg; this rate corresponds to one half of the local gravity of 9.5m/s². The speed is 7.8km/s, the net delta-v to reach this orbit is 8.0km/s. Taking into account the rotation of the Earth, the delta-v is up to 0.46km/s less (starting at the equator and going east) or more (if going west). Voyager 1 For Voyager 1, with respect to the Sun: μ = 132,712,440,018 km³⋅s⁻² is the standard gravitational parameter of the Sun; r = 17 billion kilometers; v = 17.1 km/s. Hence the specific orbital energy is about 138MJ/kg, and the hyperbolic excess velocity (the theoretical orbital velocity at infinity) is about 16.6km/s. However, Voyager 1 does not have enough velocity to leave the Milky Way. The computed speed applies far away from the Sun, but at such a position that the potential energy with respect to the Milky Way as a whole has changed negligibly, and only if there is no strong interaction with celestial bodies other than the Sun. Applying thrust Assume: a is the acceleration due to thrust (the time-rate at which delta-v is spent); g is the gravitational field strength; v is the velocity of the rocket. Then the time-rate of change of the specific energy of the rocket is v · a: part of the thrust power per unit mass goes into kinetic energy and the remainder into potential energy. The change of the specific energy of the rocket per unit change of delta-v is v · a / |a|, which is |v| times the cosine of the angle between v and a. Thus, when applying delta-v to increase specific orbital energy, this is done most efficiently if a is applied in the direction of v, and when |v| is large. If the angle between v and g is obtuse, for example in a launch and in a transfer to a higher orbit, this means applying the delta-v as early as possible and at full capacity. See also gravity drag. When passing by a celestial body it means applying thrust when nearest to the body. When gradually making an elliptic orbit larger, it means applying thrust each time when near the periapsis. When applying delta-v to decrease specific orbital energy, this is done most efficiently if a is applied in the direction opposite to that of v, and again when |v| is large. If the angle between v and g is acute, for example in a landing (on a celestial body without atmosphere) and in a transfer to a circular orbit around a celestial body when arriving from outside, this means applying the delta-v as late as possible. When passing by a planet it means applying thrust when nearest to the planet. When gradually making an elliptic orbit smaller, it means applying thrust each time when near the periapsis. If a is in the direction of v, the specific orbital energy gained per unit of delta-v is simply |v|. See also Specific energy change of rockets Characteristic energy C3 (Double the specific orbital energy)
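The figures quoted in the ISS and Voyager 1 examples can be checked with the standard vis-viva relations and a short script. The orbit-type formulas and the gravitational-parameter values below are standard quantities supplied here for the check, not taken from the article, and the variable names are illustrative choices.

```latex
\varepsilon = \frac{v^2}{2} - \frac{\mu}{r}
            = -\,\frac{\mu}{2a}\ \ (\text{elliptic}),
\qquad
\varepsilon = 0\ \ (\text{parabolic}),
\qquad
\varepsilon = \frac{\mu}{2|a|} = \frac{v_{\infty}^{2}}{2} = \frac{C_3}{2}\ \ (\text{hyperbolic}).
```

```python
import math

def specific_orbital_energy(v_kms, r_km, mu):
    """Instantaneous specific orbital energy, epsilon = v^2/2 - mu/r (km^2/s^2 = MJ/kg)."""
    return v_kms**2 / 2 - mu / r_km

# ISS example: Earth's gravitational parameter and the quoted semi-major axis.
mu_earth = 398_600.4418          # km^3/s^2 (standard value, assumed here)
a_iss = 6738.0                   # km, from the text
print(-mu_earth / (2 * a_iss))   # ~ -29.6 MJ/kg, matching the quoted figure

# Voyager 1 example: Sun's gravitational parameter, r and v as quoted in the text.
mu_sun = 132_712_440_018.0       # km^3/s^2
eps = specific_orbital_energy(17.1, 17e9, mu_sun)
print(eps)                       # ~ +138 MJ/kg (positive: hyperbolic, unbound)
print(math.sqrt(2 * eps))        # hyperbolic excess speed ~ 16.6 km/s
```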
Ilium/Olympos
Ilium/Olympos is a series of two science fiction novels by Dan Simmons. The events are set in motion by beings who appear to be ancient Greek gods. Like Simmons' earlier series, the Hyperion Cantos, it is a form of "literary science fiction"; it relies heavily on intertextuality, in this case with Homer and Shakespeare as well as references to Marcel Proust's À la recherche du temps perdu (or In Search of Lost Time) and Vladimir Nabokov's novel Ada or Ardor: A Family Chronicle. As with most of his science fiction and in particular with Hyperion, Ilium demonstrates that Simmons writes in the soft science fiction tradition of Ray Bradbury and Ursula K. Le Guin. Ilium is based on a literary approach similar to most of Bradbury's work, but describes larger segments of society and broader historical events. As in Le Guin's Hainish series, Simmons places the action of Ilium in a vast and complex universe made of relatively plausible technological and scientific elements. Yet Ilium is different from any of the works of Bradbury and Le Guin in its exploration of the very far future of humanity, and in the extra human or post-human themes associated with this. It deals with the concept of technological singularity where technological change starts to occur beyond the ability of humanity to presently predict or comprehend. The first book, Ilium, received the Locus Award for Best Science Fiction novel in 2004. Plot introduction The series centers on three main character groups: that of the scholic Hockenberry, Helen and Greek and Trojan warriors from the Iliad; Daeman, Harman, Ada and the other humans of Earth; and the moravecs, specifically Mahnmut the Europan and Orphu of Io. The novels are written in first-person, present-tense when centered on Hockenberry's character, but features third-person, past-tense narrative in all other instances. Much like Simmons' Hyperion where the characters' stories are told over the course of the novels and the actual events serve as a frame, the three groups of characters' stories are told over the course of the novels and their stories do not begin to converge until the end. Characters in Ilium/Olympos Old-style humans The "old-style" humans of Earth exist at what the post-humans claimed would be a stable, minimum herd population of one million. In reality, their numbers are much smaller than that, around 300,000, because each woman is allowed to have only one child. Their DNA incorporates moth genetics which allows sperm-storage and the choice of father-sperm years after sexual intercourse has actually occurred. This reproductive method causes many children not to know their father, as well as helps to break incest taboos in that the firmary, which controls the fertilization, protects against a child of close relatives being born. The old style humans never appear any older than about 40 since every twenty years they are physically rejuvenated. Ada: the owner of Ardis Hall and Harman's lover. She is just past her first twenty. She hosts Odysseus/Noman for his time on Earth. Daeman: a pudgy man approaching his second twenty. Both a ladies' man and a lepidopterist. Also terrified of dinosaurs. At the start of Ilium he is a pudgy, immature man-child who wishes to have sex with his cousin (as incest taboos have all but ceased to exist in his society), Ada (whom he had a brief relationship with when she was a teenager), but by the end of the tale he is a mature leader who is very fit and strong. His mother's name is Marina. Hannah: Ada's younger friend. 
Both inventor and artist. Develops a romantic interest in Odysseus. Harman: Ada's lover. 99 years old. Only human with the ability to read, other than Savi. Savi: the Wandering Jew. The only old-style human not gathered up in the final fax 1,400 years earlier. She has survived the years by spending most of them sleeping in cryo crèches and spending only a few months awake at a time every few decades. Moravecs Named after the roboticist Hans Moravec, they are autonomous, sentient, self-evolving biomechanical organisms that dwell on the Jovian moons. They were seeded throughout the outer Solar System by humans during the Lost Age. Most moravecs are self-described humanists and study Lost Age culture, including literature, television programs and movies. Mahnmut the Europan: explorer of Europa's oceans and skipper of the submersible, The Dark Lady. An amateur Shakespearean scholar. Orphu of Io: a heavily armored, 1,200-year-old hard-vac moravec that is shaped not unlike a crab. Weighing eight tons and measuring six meters in length, Orphu works in the sulfur-torus of Io, and is a Proust enthusiast. rockvecs: a subgroup of the moravecs, the rockvecs live on the Asteroid Belt and are more adapted for combat and hostile environments than the moravecs. Scholics Dead scholars from previous centuries that were rebuilt by the Olympian gods from their DNA. Their duties are to observe the Trojan War and report the discrepancies that occur between it and Homer's Iliad. Dr. Thomas Hockenberry: Ph.D. in classical studies and a Homeric scholar. Died of cancer in 2006 and is resurrected by the Olympian Gods as a scholic. Lover of Helen of Troy. He is the oldest surviving scholic. Dr. Keith Nightenhelser: Hockenberry's oldest friend and a fellow scholic. (The real Nightenhelser was Simmons' roommate at Wabash College and is currently a professor at DePauw University.) Others Achaeans and Trojans: the heroes and minor characters are drawn from Homer's epics, as well as the works of Virgil, Proclus, Pindar, Aeschylus, Euripides, and classical Greek mythology. Ariel: a character from The Tempest and the avatar of the evolved, self-aware biosphere. Using locks of Harman's hair, Daeman's hair, and her own hair, Savi makes a deal with Ariel in order that they might pass without being attacked by the calibani. Caliban: a monster, son of Sycorax and servant of Prospero, whom John Clute describes as "a cross between Gollum and the alien of Alien." He is cloned to create the calibani, weaker clones of himself. Caliban speaks in strange speech patterns, with much of his dialogue taken from the dramatic monologue "Caliban upon Setebos" by Robert Browning. Simmons chooses not to portray Caliban as the "oppressed but noble native soul straining under the yoke of capitalist-colonial-imperialism" that current interpretations employ to portray him, which he views as "a weak, pale, politically correct shadow of the slithery monstrosity that made audiences shiver in Shakespeare's day ... Shakespeare and his audiences understood that Caliban was a monsterand a really monstrous monster, ready to rape and impregnate Prospero's lovely daughter at the slightest opportunity." Odysseus: Odysseus after his Odyssey, ten years older than the Odysseus who fights in the Trojan War. In Olympos, he adopts the name Noman, which is a reference to the name Odysseus gives to Polyphemus the Cyclops on their encounter, in Greek, Outis, meaning "no man" or "nobody". He is a different entity than the Odysseus on Mars. 
Olympian Gods: former post-humans who were transformed into gods by Prospero's technology. They do not remember the science behind their technology, save for Zeus and Hephaestus, and they are described both as preliterate and post-literate, for which reason they enlist the services of Thomas Hockenberry and other scholics. They dwell on Olympus Mons on Mars and use quantum teleportation in order to get to the recreation of Troy on an alternate Earth. Though the events of the Trojan War are being recreated with the knowledge of Homer's Iliad, the only ones who know its outcome are the scholics and Zeus as Zeus has forbidden the other gods from knowing. post-humans: former humans who enhanced themselves far beyond the normal bounds of humanity and dwelt in orbital rings above the Earth until Prospero turned some into Olympian gods. The others were slaughtered by Caliban. They had no need of bodies, but when they took on human form they only took on the shape of women. Prospero: a character from The Tempest who is the avatar of the self-aware, post-Internet logosphere, a reference to Vladimir Vernadsky's idea of the noosphere. Setebos: Sycorax and Caliban's god. The god is described as "many-handed as a cuttlefish" in reference to "Caliban upon Setebos" by Robert Browning and is described by Prospero as being an "arbitrary god of great power, a September eleven god, an Auschwitz god." Sycorax: a witch and Caliban's mother. Also known as Circe or Demyx or Calypso. The Quiet: an unknown entity (presumably, God, from the Demogorgon's speeches and the words of Prospero) said to incarnate himself in different forms all across the universe. He is Setebos' nemesis, which could create a kind of God-Against-the Devil picture as Setebos is the background antagonist and Prospero and Ariel, servants of The Quiet, are the background protagonists. zeks: the Little Green Men of Mars. A chlorophyll-based lifeform that comes from the Earth of an alternate universe. Their name comes from a slang term related to the Russian word sharashka, which is a scientific or technical institute staffed with prisoners. The prisoners of these Soviet labor camps were called zeks. (This description of the origin of the term is a mistake of the author. Not only sharashka prisoners were called zeks, it is a common term for all Gulag camp prisoners, derived from the word zaklyuchennyi, inmate. The camp described in the A Day in the Life of Ivan Denisovich is a regular labor camp, not a sharashka.) Science of Ilium/Olympos As much of the action derives from fiction involving gods and wizards, Simmons rationalises most of this through his use of far-future technology and science, including: String theory: interdimensional transport is conducted via Brane Holes. Nanotechnology provides the gods' immortality and powers, and many of the cybernetic functions possessed by some of the humans. Reference to Vladimir Vernadsky's idea of the noosphere is made to explain the origins of powerful entities such as Ariel and Prospero, the former arising from a network of datalogging mote machines, and the latter of whom derives from a post-Internet logosphere. Quantum theory and Quantum Gravity are also used to account for a number of other things, from Achilles' immortality (his mother, Thetis, set the quantum probability for his death to zero for all means of death other than by Paris' bow) to teleportation and shapeshifting powers. ARNists use recombinant DNA techniques to resurrect long-dead and prehistoric animals. 
Pantheistic solipsism is used to explain how 'mythical' characters have entered the "real" world. Weapons Old style humansOther than flechette rifles scavenged from caches, crossbows are the main form of weapon as old style humans have forgotten almost everything and can only build crossbows. GodsTasers, energy shields and titanium lances. MoravecsWeapons of mass destruction including the Device, ship-based weapons, kinetic missiles. Miscellaneous What follows is a definition of terms that are either used within Ilium or are related to its science, technology and fictional history: ARNists: short for "recombinant RNA artists". ARNists use recombinant DNA techniques to resurrect long-dead and prehistoric animals. Simmons borrows this term from his Hyperion Cantos. E-ring/P-ring: short for "equatorial ring" or "polar ring" respectively. The rings described are not solid, but rather similar to the rings around Jupiter or Saturn: hundreds of thousands of large individual solid elements, built and occupied by the post-humans before Caliban and Prospero were stranded there and Caliban began murdering the post-humans. The rings are visible from the Earth's surface, but the old-style humans do not know exactly what they are. Faxnodes: much as the transporter of Star Trek works, the faxnode system takes a living organism, maps out its structure, breaks down its atoms and assembles a copy at the faxport at the intended destination. This copy is a facsimile, or fax, of the original. (Unlike most science-fiction transporter technology, it is revealed late in the story that the matter is not "changed into energy" or "sent" anywhere; a traveler's body is completely destroyed, and re-created from scratch at the destination.) Final fax: the 9,113 Jews of Savi's time to live through the Rubicon virus are suspended in a fax beam by Prospero and Ariel with the understanding that once the two get the Earth back into order, they will be released. Firmary: short for "infirmary". A room in the e-ring that the humans of Earth fax to every Twenty (every twentieth birthday) for physical rejuvenation, or when hurt or killed in order to be healed. If they were killed, the firmary removes all memory of their death in order to lessen the psychological impact of the event. Global Caliphate: an empire that, among other things, attempts to destroy the Jewish population of Earth. They released the Rubicon virus to kill all Jews on Earth as well as programmed the voynix to kill any remaining Jews who escaped the infection. Quantum theory and quantum gravity: used to account for a number of other things, including Achilles' immortality (in that Thetis set the quantum probability for his death to zero for all other means of death other than by Paris' bow), teleportation, and shapeshifting powers. Rubicon virus: created by the Global Caliphate and released with the intention of exterminating those of Jewish descent. It had the reverse effect, killing eleven billion people (ninety-seven percent of the world's population), but Israeli scientists were able to develop an inoculation against the virus and inoculate their own people's DNA, but did not have the time to save the rest of humanity. Turin cloth: a cloth used by the people of Earth that, when draped over the eyes, allows them to view the events of the Trojan War, which they believe is just a drama being created for their entertainment. Named after the Shroud of Turin Voynix: named after the Voynich manuscript. 
The voynix are biomechanical, self-replicating, programmable robots. They originated in an alternate universe, and were brought into the Ilium universe before 3000 A.D. The Global Caliphate somehow gained access to these proto-voynix and after replicating three million of them, battled the New European Union around 3000 A.D. In 3200 A.D., the Global Caliphate upgraded the voynix and programmed them to kill Jews. Using time travel technology acquired from the French (previously used to investigate the Voynich Manuscript and which resulted in the destruction of Paris), the Global Caliphate sent the voynix forward in time to 4600 A.D. Upon their arrival they begin to replicate rapidly in the Mediterranean Basin. As the post-human operations there were put at risk, Prospero and Sycorax created the calibani to fend off the voynix, and eventually Prospero reprogrammed them into inactivation. After the final fax, they were reprogrammed to serve the new old-style humans. Literary and cultural influences Simmons references such historical figures, fictional characters and works as Christopher Marlowe, Bram Stoker's Dracula, Plato, Gollum, the Disney character Pluto, Samuel Beckett, and William Butler Yeats' "The Second Coming", among others. As well as referencing these works and figures, he uses others more extensively, shaping his novel by the examples he chooses, such as 9/11 and its effects on the Earth and its nations. Ilium is thematically influenced by extropianism, peopled as it is with post-humans of the far future. It therefore continues to explore the theme pioneered by H. G. Wells in The Time Machine, a work which is also referenced several times in Simmons' work. One of the most notable references is when the old woman Savi calls the current people of Earth eloi, using the word as an expression of her disgust of their self-indulgent society, lack of culture and ignorance of their past. Ilium also includes allusions to the work of Nabokov. The most apparent of these are the inclusion of Ardis Hall and the names of Ada, Daeman and Marina, all borrowed from Ada or Ardor: A Family Chronicle. The society that the old-style humans live in also resembles that of Antiterra, a parallel of our Earth circa 19th century, which features a society in which there exists a lack of repression and Christian morality, shown by Daeman's intent to seduce his cousin. Simmons also includes references to Nabokov's fondness for butterflies, such as the butterfly genetics incorporated in the old-style humans and Daeman's enthusiasm as a lepidopterist. Mahnmut of Europa is identified as a Shakespearean scholar as in the first chapter he is introduced where he analyzes Sonnet 116 in order to send it to his correspondent, Orphu of Io, and it is here that Shakespeare's influence on Ilium begins. Mahnmut's submersible is named The Dark Lady, an allusion to a figure in Shakespeare's sonnets. There is also, of course, The Tempests presence in the characters of Prospero, Ariel and Caliban. There are also multiple references to other Shakespeare works and characters such as Falstaff, Henry IV, Part I and Twelfth Night. Shakespeare himself even makes an appearance in a dream to Mahnmut and quotes from Sonnet 31. Proustian memory investigations had a heavy hand in the novel's making, which helps explain why Simmons chose Ada or Ardor: A Family Chronicle over something more well-understood of Nabokov's, such as Pale Fire. 
Ada or Ardor was written in such a structure as to mimic someone recalling their own memories, a subject which Proust explores in his work À la recherche du temps perdu. Orphu of Io is more interested in Proust than Mahnmut's Shakespeare, as he considers Proust "perhaps the ultimate explorer of time, memory, and perception." Simmons' portrayal of Odysseus speaking to the old-style humans at Ardis Hall is also reminiscent of the Bibles Jesus teaching his disciples. Odysseus is even addressed as "Teacher" by one of his listeners in a way reminiscent of Jesus being addressed as "Rabbi," which is commonly translated as "Teacher". Movie adaptation In January 2004, it was announced that the screenplay he wrote for his novels Ilium and Olympos would be made into a film by Digital Domain and Barnet Bain Films, with Simmons acting as executive producer. Ilium is described as an "epic tale that spans 5,000 years and sweeps across the entire solar system, including themes and characters from Homer's The Iliad and Shakespeare's The Tempest." Awards and recognition IliumLocus Award winner, Hugo Award nominee, 2004 OlymposLocus Award shortlist, 2006 References Science fiction book series Works by Dan Simmons Science fantasy novels Novels set on Mars Classical mythology in popular culture Greek and Roman deities in fiction Fiction about nanotechnology Quantum fiction Fiction about resurrection Fiction about teleportation Biological weapons in popular culture Self-replicating machines in fiction Novels about time travel Novels based on the Iliad Novels based on the Odyssey Modern adaptations of the Odyssey Modern adaptations of the Iliad
Maupertuis's principle
In classical mechanics, Maupertuis's principle (named after Pierre Louis Maupertuis, 1698 – 1759) states that the path followed by a physical system is the one of least length (with a suitable interpretation of path and length). It is a special case of the more generally stated principle of least action. Using the calculus of variations, it results in an integral equation formulation of the equations of motion for the system. Mathematical formulation Maupertuis's principle states that the true path of a system described by generalized coordinates between two specified states and is a minimum or a saddle point of the abbreviated action functional, where are the conjugate momenta of the generalized coordinates, defined by the equation where is the Lagrangian function for the system. In other words, any first-order perturbation of the path results in (at most) second-order changes in . Note that the abbreviated action is a functional (i.e. a function from a vector space into its underlying scalar field), which in this case takes as its input a function (i.e. the paths between the two specified states). Jacobi's formulation For many systems, the kinetic energy is quadratic in the generalized velocities although the mass tensor may be a complicated function of the generalized coordinates . For such systems, a simple relation relates the kinetic energy, the generalized momenta and the generalized velocities provided that the potential energy does not involve the generalized velocities. By defining a normalized distance or metric in the space of generalized coordinates one may immediately recognize the mass tensor as a metric tensor. The kinetic energy may be written in a massless form or, Therefore, the abbreviated action can be written since the kinetic energy equals the (constant) total energy minus the potential energy . In particular, if the potential energy is a constant, then Jacobi's principle reduces to minimizing the path length in the space of the generalized coordinates, which is equivalent to Hertz's principle of least curvature. Comparison with Hamilton's principle Hamilton's principle and Maupertuis's principle are occasionally confused with each other and both have been called the principle of least action. They differ from each other in three important ways: their definition of the action... the solution that they determine... ...and the constraints on the variation. History Maupertuis was the first to publish a principle of least action, as a way of adapting Fermat's principle for waves to a corpuscular (particle) theory of light. Pierre de Fermat had explained Snell's law for the refraction of light by assuming light follows the path of shortest time, not distance. This troubled Maupertuis, since he felt that time and distance should be on an equal footing: "why should light prefer the path of shortest time over that of distance?" Maupertuis defined his action as , which was to be minimized over all paths connecting two specified points. Here is the velocity of light the corpuscular theory. Fermat had minimized where is wave velocity; the two velocities are reciprocal so the two forms are equivalent. Koenig's claim In 1751, Maupertuis's priority for the principle of least action was challenged in print (Nova Acta Eruditorum of Leipzig) by an old acquaintance, Johann Samuel Koenig, who quoted a 1707 letter purportedly from Gottfried Wilhelm Leibniz to Jakob Hermann that described results similar to those derived by Leonhard Euler in 1744. 
Maupertuis and others demanded that Koenig produce the original of the letter to authenticate its having been written by Leibniz. Leibniz died in 1716 and Hermann in 1733, so neither could vouch for Koenig. Koenig claimed to have the letter copied from the original owned by Samuel Henzi, and no clue as to the whereabouts of the original, as Henzi had been executed in 1749 for organizing the Henzi conspiracy for overthrowing the aristocratic government of Bern. Subsequently, the Berlin Academy under Euler's direction declared the letter to be a forgery and that Maupertuis, could continue to claim priority for having invented the principle. Curiously Voltaire got involved in the quarrel by composing Diatribe du docteur Akakia ("Diatribe of Doctor Akakia") to satirize Maupertuis' scientific theories (not limited to the principle of least action). While this work damaged Maupertuis's reputation, his claim to priority for least action remains secure. See also Analytical mechanics Hamilton's principle Gauss's principle of least constraint (also describes Hertz's principle of least curvature) Hamilton–Jacobi equation References Pierre Louis Maupertuis, Accord de différentes loix de la nature qui avoient jusqu'ici paru incompatibles (original 1744 French text); Accord between different laws of Nature that seemed incompatible (English translation) Leonhard Euler, Methodus inveniendi/Additamentum II (original 1744 Latin text); Methodus inveniendi/Appendix 2 (English translation) Pierre Louis Maupertuis, Les loix du mouvement et du repos déduites d'un principe metaphysique (original 1746 French text); Derivation of the laws of motion and equilibrium from a metaphysical principle (English translation) Leonhard Euler, Exposé concernant l'examen de la lettre de M. de Leibnitz (original 1752 French text); Investigation of the letter of Leibniz (English translation) König J. S. "De universali principio aequilibrii et motus", Nova Acta Eruditorum, 1751, 125–135, 162–176. J. J. O'Connor and E. F. Robertson, "The Berlin Academy and forgery", (2003), at The MacTutor History of Mathematics archive. C. I. Gerhardt, (1898) "Über die vier Briefe von Leibniz, die Samuel König in dem Appel au public, Leide MDCCLIII, veröffentlicht hat", Sitzungsberichte der Königlich Preussischen Akademie der Wissenschaften, I, 419–427. W. Kabitz, (1913) "Über eine in Gotha aufgefundene Abschrift des von S. König in seinem Streite mit Maupertuis und der Akademie veröffentlichten, seinerzeit für unecht erklärten Leibnizbriefes", Sitzungsberichte der Königlich Preussischen Akademie der Wissenschaften, II, 632–638. L. D. Landau and E. M. Lifshitz, (1976) Mechanics, 3rd. ed., Pergamon Press, pp. 140–143. (hardcover) and (softcover) G. C. J. Jacobi, Vorlesungen über Dynamik, gehalten an der Universität Königsberg im Wintersemester 1842–1843. A. Clebsch (ed.) (1866); Reimer; Berlin. 290 pages, available online Œuvres complètes volume 8 at Gallica-Math from the Gallica Bibliothèque nationale de France. H. Hertz, (1896) Principles of Mechanics, in Miscellaneous Papers, vol. III, Macmillan. Calculus of variations Hamiltonian mechanics Mathematical principles
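As an addendum to the Jacobi formulation above, a minimal numerical sketch (an illustration with arbitrary values, not drawn from the sources listed) evaluates the abbreviated action W = ∫ √(2m(E − V)) ds along a family of two-segment paths between two points in a constant potential; the straight path gives the smallest W, in line with the reduction to shortest path length noted above:

    # Jacobi's principle with constant potential: abbreviated action ~ path length.
    import math

    m, E, V = 1.0, 2.0, 0.5        # illustrative values; E > V so the factor is real
    A, B = (0.0, 0.0), (1.0, 0.0)  # endpoints of the path

    def abbreviated_action(via):
        """W = sqrt(2m(E - V)) * (length of the two-segment path A -> via -> B)."""
        length = math.dist(A, via) + math.dist(via, B)
        return math.sqrt(2 * m * (E - V)) * length

    # Bend the midpoint away from the straight line and compare.
    for y in [0.0, 0.1, 0.3, 0.5]:
        print(f"detour height {y:.1f}: W = {abbreviated_action((0.5, y)):.4f}")
    # The smallest W occurs for y = 0, i.e. the straight path.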
Oersted
The oersted (symbol Oe) is the coherent derived unit of the auxiliary magnetic field H in the centimetre–gram–second system of units (CGS). It is equivalent to 1 dyne per maxwell. Difference between CGS and SI systems In the CGS system, the unit of the H-field is the oersted and the unit of the B-field is the gauss. In the SI system, the unit ampere per meter (A/m), which is equivalent to newton per weber, is used for the H-field and the tesla is used for the B-field. History The unit was established by the IEC in the 1930s in honour of Danish physicist Hans Christian Ørsted. Ørsted discovered the connection between magnetism and electric current when a magnetic field produced by a current-carrying copper bar deflected a magnetised needle during a lecture demonstration. Definition The oersted is defined as a dyne per unit pole. In terms of SI units, 1 Oe = 1000/(4π) A/m ≈ 79.577 A/m. The H-field strength inside a long solenoid wound with 79.58 turns per meter of a wire carrying 1 A is approximately 1 oersted. The preceding statement is exactly correct if the solenoid considered is infinite in length with the current evenly distributed over its surface. The oersted is closely related to the gauss (G), the CGS unit of magnetic flux density. In vacuum, if the magnetizing field strength is 1 Oe, then the magnetic flux density is 1 G, whereas in a medium having permeability μr (relative to the permeability of vacuum), their relation is B (in gauss) = μr × H (in oersted). Because oersteds are used to measure magnetizing field strength, they are also related to the magnetomotive force (mmf) of current in a single-winding wire loop: 1 Oe equals 1 gilbert per centimetre, the gilbert being the CGS unit of mmf. Stored energy The stored energy in a magnet, called magnet performance or maximum energy product (often abbreviated BHmax), is typically measured in units of megagauss-oersteds (MG⋅Oe); 1 MG⋅Oe is approximately 7.96 kJ/m³. See also Centimetre–gram–second system of units Ampere's model of magnetization References Centimetre–gram–second system of units Units of magnetic induction
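The unit relations above can be collected in a small conversion sketch (Python; the numerical factors are the standard CGS–SI conversions):

    # CGS <-> SI conversions for magnetic field quantities.
    import math

    OE_TO_A_PER_M = 1000.0 / (4.0 * math.pi)   # 1 Oe ~= 79.577 A/m
    G_TO_T = 1e-4                              # 1 G = 0.1 mT

    def solenoid_H_in_oersted(turns_per_meter, current_ampere):
        """H inside a long solenoid: H = n * I (A/m), converted to oersted."""
        return turns_per_meter * current_ampere / OE_TO_A_PER_M

    print(f"1 Oe = {OE_TO_A_PER_M:.3f} A/m")
    print(f"H for 79.58 turns/m at 1 A = {solenoid_H_in_oersted(79.58, 1.0):.4f} Oe")

    # Energy product: 1 MG*Oe expressed in SI (J/m^3) as B[T] * H[A/m].
    one_MGOe = (1e6 * G_TO_T) * (1.0 * OE_TO_A_PER_M)
    print(f"1 MG*Oe = {one_MGOe/1e3:.3f} kJ/m^3")   # ~7.958 kJ/m^3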
Acceleration (special relativity)
Accelerations in special relativity (SR) follow, as in Newtonian mechanics, by differentiation of velocity with respect to time. Because of the Lorentz transformation and time dilation, the concepts of time and distance become more complex, which also leads to more complex definitions of "acceleration". SR as the theory of flat Minkowski spacetime remains valid in the presence of accelerations, because general relativity (GR) is only required when there is curvature of spacetime caused by the energy–momentum tensor (which is mainly determined by mass). However, since the amount of spacetime curvature is not particularly high on Earth or in its vicinity, SR remains valid for most practical purposes, such as experiments in particle accelerators. One can derive transformation formulas for ordinary accelerations in three spatial dimensions (three-acceleration or coordinate acceleration) as measured in an external inertial frame of reference, as well as for the special case of proper acceleration measured by a comoving accelerometer. Another useful formalism is four-acceleration, as its components can be connected in different inertial frames by a Lorentz transformation. Equations of motion can also be formulated which connect acceleration and force. Equations for several forms of acceleration of bodies and their curved world lines follow from these formulas by integration. Well-known special cases are hyperbolic motion for constant longitudinal proper acceleration and uniform circular motion. Finally, it is also possible to describe these phenomena in accelerated frames in the context of special relativity; see Proper reference frame (flat spacetime). In such frames, effects arise which are analogous to homogeneous gravitational fields and which have some formal similarities to the real, inhomogeneous gravitational fields of curved spacetime in general relativity. In the case of hyperbolic motion one can use Rindler coordinates; in the case of uniform circular motion one can use Born coordinates. Concerning the historical development, relativistic equations containing accelerations can already be found in the early years of relativity, as summarized in early textbooks by Max von Laue (1911, 1921) and Wolfgang Pauli (1921). For instance, equations of motion and acceleration transformations were developed in the papers of Hendrik Antoon Lorentz (1899, 1904), Henri Poincaré (1905), Albert Einstein (1905), and Max Planck (1906), while four-acceleration, proper acceleration, hyperbolic motion, accelerating reference frames, and Born rigidity were analyzed by Einstein (1907), Hermann Minkowski (1907, 1908), Max Born (1909), Gustav Herglotz (1909), Arnold Sommerfeld (1910), von Laue (1911), and Friedrich Kottler (1912, 1914); see the section on history. Three-acceleration In accordance with both Newtonian mechanics and SR, three-acceleration or coordinate acceleration a is the first derivative of the velocity u with respect to coordinate time t, or the second derivative of the location x with respect to coordinate time: a = du/dt = d²x/dt². However, the theories sharply differ in their predictions in terms of the relation between three-accelerations measured in different inertial frames. In Newtonian mechanics, time is absolute (t′ = t) in accordance with the Galilean transformation, therefore the three-acceleration derived from it is equal in all inertial frames: a′ = a. On the contrary, in SR both the spatial coordinates x and the time coordinate t depend on the Lorentz transformation, therefore also the three-acceleration a and its components vary in different inertial frames.
When the relative velocity between the frames is directed in the x-direction by with as Lorentz factor, the Lorentz transformation has the form or for arbitrary velocities of magnitude : In order to find out the transformation of three-acceleration, one has to differentiate the spatial coordinates and of the Lorentz transformation with respect to and , from which the transformation of three-velocity (also called velocity-addition formula) between and follows, and eventually by another differentiation with respect to and the transformation of three-acceleration between and follows. Starting from, this procedure gives the transformation where the accelerations are parallel (x-direction) or perpendicular (y-, z-direction) to the velocity: or starting from this procedure gives the result for the general case of arbitrary directions of velocities and accelerations: This means, if there are two inertial frames and with relative velocity , then in the acceleration of an object with momentary velocity is measured, while in the same object has an acceleration and has the momentary velocity . As with the velocity addition formulas, also these acceleration transformations guarantee that the resultant speed of the accelerated object can never reach or surpass the speed of light. Four-acceleration If four-vectors are used instead of three-vectors, namely as four-position and as four-velocity, then the four-acceleration of an object is obtained by differentiation with respect to proper time instead of coordinate time: where is the object's three-acceleration and its momentary three-velocity of magnitude with the corresponding Lorentz factor . If only the spatial part is considered, and when the velocity is directed in the x-direction by and only accelerations parallel (x-direction) or perpendicular (y-, z-direction) to the velocity are considered, the expression is reduced to: Unlike the three-acceleration previously discussed, it is not necessary to derive a new transformation for four-acceleration, because as with all four-vectors, the components of and in two inertial frames with relative speed are connected by a Lorentz transformation analogous to (, ). Another property of four-vectors is the invariance of the inner product or its magnitude , which gives in this case: Proper acceleration In infinitesimal small durations there is always one inertial frame, which momentarily has the same velocity as the accelerated body, and in which the Lorentz transformation holds. The corresponding three-acceleration in these frames can be directly measured by an accelerometer, and is called proper acceleration or rest acceleration. The relation of in a momentary inertial frame and measured in an external inertial frame follows from (, ) with , , and . So in terms of, when the velocity is directed in the x-direction by and when only accelerations parallel (x-direction) or perpendicular (y-, z-direction) to the velocity are considered, it follows: Generalized by for arbitrary directions of of magnitude : There is also a close relationship to the magnitude of four-acceleration: As it is invariant, it can be determined in the momentary inertial frame , in which and by it follows : Thus the magnitude of four-acceleration corresponds to the magnitude of proper acceleration. 
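These relations are easy to check numerically. The sketch below (an illustration with arbitrarily chosen velocity and coordinate acceleration) uses the standard special-case formulas for a velocity along the x-direction, γ³ for the parallel and γ² for the perpendicular component of proper acceleration, and verifies that the Minkowski magnitude of the four-acceleration equals the magnitude of the proper acceleration:

    # Check: |four-acceleration| equals the magnitude of the proper acceleration.
    # Illustrative values; velocity is taken along x, acceleration in the x-y plane.
    import math

    c = 299_792_458.0          # m/s
    v = 0.6 * c                # speed along x (assumed for illustration)
    ax, ay = 5.0, 3.0          # coordinate acceleration components, m/s^2

    gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)

    # Proper acceleration (momentary rest frame): parallel and perpendicular parts.
    alpha_par = gamma**3 * ax
    alpha_perp = gamma**2 * ay
    alpha = math.hypot(alpha_par, alpha_perp)

    # Four-acceleration components in the external frame (velocity along x).
    A0 = gamma**4 * v * ax / c
    Ax = gamma**4 * ax
    Ay = gamma**2 * ay
    A_mag = math.sqrt(-A0**2 + Ax**2 + Ay**2)   # Minkowski magnitude, signature (-,+,+,+)

    print(f"proper acceleration  = {alpha:.6f} m/s^2")
    print(f"|four-acceleration|  = {A_mag:.6f} m/s^2")   # equal to the line above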
By combining this with, an alternative method for the determination of the connection between in and in is given, namely from which follows again when the velocity is directed in the x-direction by and only accelerations parallel (x-direction) or perpendicular (y-, z-direction) to the velocity are considered. Acceleration and force Assuming constant mass , the four-force as a function of three-force is related to four-acceleration by , thus: The relation between three-force and three-acceleration for arbitrary directions of the velocity is thus When the velocity is directed in the x-direction by and only accelerations parallel (x-direction) or perpendicular (y-, z-direction) to the velocity are considered Therefore, the Newtonian definition of mass as the ratio of three-force and three-acceleration is disadvantageous in SR, because such a mass would depend both on velocity and direction. Consequently, the following mass definitions used in older textbooks are not used anymore: as "longitudinal mass", as "transverse mass". The relation between three-acceleration and three-force can also be obtained from the equation of motion where is the three-momentum. The corresponding transformation of three-force between in and in (when the relative velocity between the frames is directed in the x-direction by and only accelerations parallel (x-direction) or perpendicular (y-, z-direction) to the velocity are considered) follows by substitution of the relevant transformation formulas for , , , , or from the Lorentz transformed components of four-force, with the result: Or generalized for arbitrary directions of , as well as with magnitude : Proper acceleration and proper force The force in a momentary inertial frame measured by a comoving spring balance can be called proper force. It follows from (, ) by setting and as well as and . Thus by where only accelerations parallel (x-direction) or perpendicular (y-, z-direction) to the velocity are considered: Generalized by for arbitrary directions of of magnitude : Since in momentary inertial frames one has four-force and four-acceleration , equation produces the Newtonian relation , therefore (, , ) can be summarized By that, the apparent contradiction in the historical definitions of transverse mass can be explained. Einstein (1905) described the relation between three-acceleration and proper force , while Lorentz (1899, 1904) and Planck (1906) described the relation between three-acceleration and three-force . Curved world lines By integration of the equations of motion one obtains the curved world lines of accelerated bodies corresponding to a sequence of momentary inertial frames (here, the expression "curved" is related to the form of the worldlines in Minkowski diagrams, which should not be confused with "curved" spacetime of general relativity). In connection with this, the so-called clock hypothesis of clock postulate has to be considered: The proper time of comoving clocks is independent of acceleration, that is, the time dilation of these clocks as seen in an external inertial frame only depends on its relative velocity with respect to that frame. Two simple cases of curved world lines are now provided by integration of equation for proper acceleration: a) Hyperbolic motion: The constant, longitudinal proper acceleration by leads to the world line The worldline corresponds to the hyperbolic equation , from which the name hyperbolic motion is derived. 
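A short numerical sketch of this case (constant proper acceleration α along x, starting from rest; the specific numbers are illustrative) uses the standard coordinate expressions x(t) = (c²/α)(√(1 + (αt/c)²) − 1), v(t) = αt/√(1 + (αt/c)²) and τ(t) = (c/α)·arsinh(αt/c), and shows that the coordinate velocity stays below c while proper time lags coordinate time:

    # Hyperbolic motion: constant proper acceleration alpha along x, starting from rest.
    import math

    c = 299_792_458.0
    alpha = 9.81                # m/s^2, roughly "1 g" proper acceleration (illustrative)

    def hyperbolic_motion(t):
        """Coordinate position, coordinate velocity and proper time after coordinate time t."""
        u = alpha * t / c
        x = (c**2 / alpha) * (math.sqrt(1.0 + u**2) - 1.0)
        v = alpha * t / math.sqrt(1.0 + u**2)
        tau = (c / alpha) * math.asinh(u)
        return x, v, tau

    year = 365.25 * 24 * 3600.0
    light_year = 9.4607e15      # m
    for years in [0.5, 1.0, 2.0, 5.0]:
        x, v, tau = hyperbolic_motion(years * year)
        print(f"t = {years:3.1f} yr: x = {x/light_year:6.3f} ly, "
              f"v/c = {v/c:.4f}, tau = {tau/year:.3f} yr")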
These equations are often used for the calculation of various scenarios of the twin paradox or Bell's spaceship paradox, or in relation to space travel using constant acceleration. b) The constant, transverse proper acceleration by can be seen as a centripetal acceleration, leading to the worldline of a body in uniform rotation where is the tangential speed, is the orbital radius, is the angular velocity as a function of coordinate time, and as the proper angular velocity. A classification of curved worldlines can be obtained by using the differential geometry of triple curves, which can be expressed by spacetime Frenet-Serret formulas. In particular, it can be shown that hyperbolic motion and uniform circular motion are special cases of motions having constant curvatures and torsions, satisfying the condition of Born rigidity. A body is called Born rigid if the spacetime distance between its infinitesimally separated worldlines or points remains constant during acceleration. Accelerated reference frames Instead of inertial frames, these accelerated motions and curved worldlines can also be described using accelerated or curvilinear coordinates. The proper reference frame established that way is closely related to Fermi coordinates. For instance, the coordinates for an hyperbolically accelerated reference frame are sometimes called Rindler coordinates, or those of a uniformly rotating reference frame are called rotating cylindrical coordinates (or sometimes Born coordinates). In terms of the equivalence principle, the effects arising in these accelerated frames are analogous to effects in a homogeneous, fictitious gravitational field. In this way it can be seen, that the employment of accelerating frames in SR produces important mathematical relations, which (when further developed) play a fundamental role in the description of real, inhomogeneous gravitational fields in terms of curved spacetime in general relativity. History For further information see von Laue, Pauli, Miller, Zahar, Gourgoulhon, and the historical sources in history of special relativity. 1899 Hendrik Lorentz derived the correct (up to a certain factor ) relations for accelerations, forces and masses between a resting electrostatic systems of particles (in a stationary aether), and a system emerging from it by adding a translation, with as the Lorentz factor: , , for by; , , for by; , , for , thus longitudinal and transverse mass by; Lorentz explained that he has no means of determining the value of . If he had set , his expressions would have assumed the exact relativistic form. 1904 Lorentz derived the previous relations in a more detailed way, namely with respect to the properties of particles resting in the system and the moving system , with the new auxiliary variable equal to compared to the one in 1899, thus: for as a function of by; for as a function of by; for as a function of by; for longitudinal and transverse mass as a function of the rest mass by (, ). This time, Lorentz could show that , by which his formulas assume the exact relativistic form. He also formulated the equation of motion with which corresponds to with , with , , , , , and as electromagnetic rest mass. Furthermore, he argued, that these formulas should not only hold for forces and masses of electrically charged particles, but for other processes as well so that the earth's motion through the aether remains undetectable. 1905 Henri Poincaré introduced the transformation of three-force: with , and as the Lorentz factor, the charge density. 
Or in modern notation: , , , and . As Lorentz, he set . 1905 Albert Einstein derived the equations of motions on the basis of his special theory of relativity, which represent the relation between equally valid inertial frames without the action of a mechanical aether. Einstein concluded, that in a momentary inertial frame the equations of motion retain their Newtonian form: . This corresponds to , because and and . By transformation into a relatively moving system he obtained the equations for the electrical and magnetic components observed in that frame: . This corresponds to with , because and and and . Consequently, Einstein determined the longitudinal and transverse mass, even though he related it to the force in the momentary rest frame measured by a comoving spring balance, and to the three-acceleration in system : This corresponds to with . 1905 Poincaré introduces the transformation of three-acceleration: where as well as and and . Furthermore, he introduced the four-force in the form: where and and . 1906 Max Planck derived the equation of motion with and and The equations correspond to with , with and and , in agreement with those given by Lorentz (1904). 1907 Einstein analyzed a uniformly accelerated reference frame and obtained formulas for coordinate dependent time dilation and speed of light, analogous to those given by Kottler-Møller-Rindler coordinates. 1907 Hermann Minkowski defined the relation between the four-force (which he called the moving force) and the four acceleration corresponding to . 1908 Minkowski denotes the second derivative with respect to proper time as "acceleration vector" (four-acceleration). He showed, that its magnitude at an arbitrary point of the worldline is , where is the magnitude of a vector directed from the center of the corresponding "curvature hyperbola" to . 1909 Max Born denotes the motion with constant magnitude of Minkowski's acceleration vector as "hyperbolic motion", in the course of his study of rigidly accelerated motion. He set (now called proper velocity) and as Lorentz factor and as proper time, with the transformation equations . which corresponds to with and . Eliminating Born derived the hyperbolic equation , and defined the magnitude of acceleration as . He also noticed that his transformation can be used to transform into a "hyperbolically accelerated reference system". 1909 Gustav Herglotz extends Born's investigation to all possible cases of rigidly accelerated motion, including uniform rotation. 1910 Arnold Sommerfeld brought Born's formulas for hyperbolic motion in a more concise form with as the imaginary time variable and as an imaginary angle: He noted that when are variable and is constant, they describe the worldline of a charged body in hyperbolic motion. But if are constant and is variable, they denote the transformation into its rest frame. 1911 Sommerfeld explicitly used the expression "proper acceleration" for the quantity in , which corresponds to, as the acceleration in the momentary inertial frame. 1911 Herglotz explicitly used the expression "rest acceleration" instead of proper acceleration. He wrote it in the form and which corresponds to, where is the Lorentz factor and or are the longitudinal and transverse components of rest acceleration. 1911 Max von Laue derived in the first edition of his monograph "Das Relativitätsprinzip" the transformation for three-acceleration by differentiation of the velocity addition equivalent to as well as to Poincaré (1905/6). 
From that he derived the transformation of rest acceleration (equivalent to ), and eventually the formulas for hyperbolic motion which corresponds to: thus , and the transformation into a hyperbolic reference system with imaginary angle : . He also wrote the transformation of three-force as equivalent to as well as to Poincaré (1905). 1912–1914 Friedrich Kottler obtained general covariance of Maxwell's equations, and used four-dimensional Frenet-Serret formulas to analyze the Born rigid motions given by Herglotz (1909). He also obtained the proper reference frames for hyperbolic motion and uniform circular motion. 1913 von Laue replaced in the second edition of his book the transformation of three-acceleration by Minkowski's acceleration vector for which he coined the name "four-acceleration", defined by with as four-velocity. He showed, that the magnitude of four-acceleration corresponds to the rest acceleration by , which corresponds to. Subsequently, he derived the same formulas as in 1911 for the transformation of rest acceleration and hyperbolic motion, and the hyperbolic reference frame. References Bibliography ; First edition 1911, second expanded edition 1913, third expanded edition 1919. In English: Historical papers External links Mathpages: Transverse Mass in Einstein's Electrodynamics, Accelerated Travels, Born Rigidity, Acceleration, and Inertia, Does A Uniformly Accelerating Charge Radiate? Physics FAQ: Acceleration in Special Relativity, The Relativistic Rocket Special relativity Special relativity
Matter wave
Matter waves are a central part of the theory of quantum mechanics, being half of wave–particle duality. At all scales where measurements have been practical, matter exhibits wave-like behavior. For example, a beam of electrons can be diffracted just like a beam of light or a water wave. The concept that matter behaves like a wave was proposed by French physicist Louis de Broglie in 1924, and so matter waves are also known as de Broglie waves. The de Broglie wavelength is the wavelength, , associated with a particle with momentum through the Planck constant, : Wave-like behavior of matter has been experimentally demonstrated, first for electrons in 1927 and for other elementary particles, neutral atoms and molecules in the years since. Introduction Background At the end of the 19th century, light was thought to consist of waves of electromagnetic fields which propagated according to Maxwell's equations, while matter was thought to consist of localized particles (see history of wave and particle duality). In 1900, this division was questioned when, investigating the theory of black-body radiation, Max Planck proposed that the thermal energy of oscillating atoms is divided into discrete portions, or quanta. Extending Planck's investigation in several ways, including its connection with the photoelectric effect, Albert Einstein proposed in 1905 that light is also propagated and absorbed in quanta, now called photons. These quanta would have an energy given by the Planck–Einstein relation: and a momentum vector where (lowercase Greek letter nu) and (lowercase Greek letter lambda) denote the frequency and wavelength of the light, the speed of light, and the Planck constant. In the modern convention, frequency is symbolized by as is done in the rest of this article. Einstein's postulate was verified experimentally by K. T. Compton and O. W. Richardson and by A. L. Hughes in 1912 then more carefully including a measurement of the Planck constant in 1916 by Robert Millikan De Broglie hypothesis De Broglie, in his 1924 PhD thesis, proposed that just as light has both wave-like and particle-like properties, electrons also have wave-like properties. His thesis started from the hypothesis, "that to each portion of energy with a proper mass one may associate a periodic phenomenon of the frequency , such that one finds: . The frequency is to be measured, of course, in the rest frame of the energy packet. This hypothesis is the basis of our theory." (This frequency is also known as Compton frequency.) To find the wavelength equivalent to a moving body, de Broglie set the total energy from special relativity for that body equal to : (Modern physics no longer uses this form of the total energy; the energy–momentum relation has proven more useful.) De Broglie identified the velocity of the particle, , with the wave group velocity in free space: (The modern definition of group velocity uses angular frequency and wave number ). By applying the differentials to the energy equation and identifying the relativistic momentum: then integrating, de Broglie arrived as his formula for the relationship between the wavelength, , associated with an electron and the modulus of its momentum, , through the Planck constant, : Schrödinger's (matter) wave equation Following up on de Broglie's ideas, physicist Peter Debye made an offhand comment that if particles behaved as waves, they should satisfy some sort of wave equation. 
Inspired by Debye's remark, Erwin Schrödinger decided to find a proper three-dimensional wave equation for the electron. He was guided by William Rowan Hamilton's analogy between mechanics and optics (see Hamilton's optico-mechanical analogy), encoded in the observation that the zero-wavelength limit of optics resembles a mechanical system – the trajectories of light rays become sharp tracks that obey Fermat's principle, an analog of the principle of least action. In 1926, Schrödinger published the wave equation that now bears his name – the matter wave analogue of Maxwell's equations – and used it to derive the energy spectrum of hydrogen. Frequencies of solutions of the non-relativistic Schrödinger equation differ from de Broglie waves by the Compton frequency since the energy corresponding to the rest mass of a particle is not part of the non-relativistic Schrödinger equation. The Schrödinger equation describes the time evolution of a wavefunction, a function that assigns a complex number to each point in space. Schrödinger tried to interpret the modulus squared of the wavefunction as a charge density. This approach was, however, unsuccessful. Max Born proposed that the modulus squared of the wavefunction is instead a probability density, a successful proposal now known as the Born rule. The following year, 1927, C. G. Darwin (grandson of the famous biologist) explored Schrödinger's equation in several idealized scenarios. For an unbound electron in free space he worked out the propagation of the wave, assuming an initial Gaussian wave packet. Darwin showed that at time later the position of the packet traveling at velocity would be where is the uncertainty in the initial position. This position uncertainty creates uncertainty in velocity (the extra second term in the square root) consistent with Heisenberg's uncertainty relation The wave packet spreads out as show in the figure. Experimental confirmation In 1927, matter waves were first experimentally confirmed to occur in George Paget Thomson and Alexander Reid's diffraction experiment and the Davisson–Germer experiment, both for electrons. The de Broglie hypothesis and the existence of matter waves has been confirmed for other elementary particles, neutral atoms and even molecules have been shown to be wave-like. The first electron wave interference patterns directly demonstrating wave–particle duality used electron biprisms (essentially a wire placed in an electron microscope) and measured single electrons building up the diffraction pattern. Recently, a close copy of the famous double-slit experiment using electrons through physical apertures gave the movie shown. Electrons In 1927 at Bell Labs, Clinton Davisson and Lester Germer fired slow-moving electrons at a crystalline nickel target. The diffracted electron intensity was measured, and was determined to have a similar angular dependence to diffraction patterns predicted by Bragg for x-rays. At the same time George Paget Thomson and Alexander Reid at the University of Aberdeen were independently firing electrons at thin celluloid foils and later metal films, observing rings which can be similarly interpreted. (Alexander Reid, who was Thomson's graduate student, performed the first experiments but he died soon after in a motorcycle accident and is rarely mentioned.) Before the acceptance of the de Broglie hypothesis, diffraction was a property that was thought to be exhibited only by waves. 
Therefore, the presence of any diffraction effects by matter demonstrated the wave-like nature of matter. The matter wave interpretation was placed onto a solid foundation in 1928 by Hans Bethe, who solved the Schrödinger equation, showing how this could explain the experimental results. His approach is similar to what is used in modern electron diffraction approaches. This was a pivotal result in the development of quantum mechanics. Just as the photoelectric effect demonstrated the particle nature of light, these experiments showed the wave nature of matter. Neutrons Neutrons, produced in nuclear reactors with kinetic energy of around , thermalize to around as they scatter from light atoms. The resulting de Broglie wavelength (around ) matches interatomic spacing and neutrons scatter strongly from hydrogen atoms. Consequently, neutron matter waves are used in crystallography, especially for biological materials. Neutrons were discovered in the early 1930s, and their diffraction was observed in 1936. In 1944, Ernest O. Wollan, with a background in X-ray scattering from his PhD work under Arthur Compton, recognized the potential for applying thermal neutrons from the newly operational X-10 nuclear reactor to crystallography. Joined by Clifford G. Shull, they developed neutron diffraction throughout the 1940s. In the 1970s, a neutron interferometer demonstrated the action of gravity in relation to wave–particle duality. The double-slit experiment was performed using neutrons in 1988. Atoms Interference of atom matter waves was first observed by Immanuel Estermann and Otto Stern in 1930, when a Na beam was diffracted off a surface of NaCl. The short de Broglie wavelength of atoms prevented progress for many years until two technological breakthroughs revived interest: microlithography allowing precise small devices and laser cooling allowing atoms to be slowed, increasing their de Broglie wavelength. The double-slit experiment on atoms was performed in 1991. Advances in laser cooling allowed cooling of neutral atoms down to nanokelvin temperatures. At these temperatures, the de Broglie wavelengths come into the micrometre range. Using Bragg diffraction of atoms and a Ramsey interferometry technique, the de Broglie wavelength of cold sodium atoms was explicitly measured and found to be consistent with the temperature measured by a different method. Molecules Recent experiments confirm the relations for molecules and even macromolecules that otherwise might be supposed too large to undergo quantum mechanical effects. In 1999, a research team in Vienna demonstrated diffraction for molecules as large as fullerenes. The researchers calculated a de Broglie wavelength of the most probable C60 velocity as . More recent experiments prove the quantum nature of molecules made of 810 atoms and with a mass of . As of 2019, this has been pushed to molecules of . In these experiments the build-up of such interference patterns could be recorded in real time and with single molecule sensitivity. Large molecules are already so complex that they give experimental access to some aspects of the quantum-classical interface, i.e., to certain decoherence mechanisms. Others Matter wave was detected in van der Waals molecules, rho mesons, Bose-Einstein condensate. Traveling matter waves Waves have more complicated concepts for velocity than solid objects. 
The simplest approach is to focus on the description in terms of plane matter waves for a free particle, that is a wave function described by where is a position in real space, is the wave vector in units of inverse meters, is the angular frequency with units of inverse time and is time. (Here the physics definition for the wave vector is used, which is times the wave vector used in crystallography, see wavevector.) The de Broglie equations relate the wavelength to the modulus of the momentum , and frequency to the total energy of a free particle as written above: where is the Planck constant. The equations can also be written as Here, is the reduced Planck constant. The second equation is also referred to as the Planck–Einstein relation. Group velocity In the de Broglie hypothesis, the velocity of a particle equals the group velocity of the matter wave. In isotropic media or a vacuum the group velocity of a wave is defined by: The relationship between the angular frequency and wavevector is called the dispersion relationship. For the non-relativistic case this is: where is the rest mass. Applying the derivative gives the (non-relativistic) matter wave group velocity: For comparison, the group velocity of light, with a dispersion , is the speed of light . As an alternative, using the relativistic dispersion relationship for matter waves then This relativistic form relates to the phase velocity as discussed below. For non-isotropic media we use the Energy–momentum form instead: But (see below), since the phase velocity is , then where is the velocity of the center of mass of the particle, identical to the group velocity. Phase velocity The phase velocity in isotropic media is defined as: Using the relativistic group velocity above: This shows that as reported by R.W. Ditchburn in 1948 and J. L. Synge in 1952. Electromagnetic waves also obey , as both and . Since for matter waves, , it follows that , but only the group velocity carries information. The superluminal phase velocity therefore does not violate special relativity, as it does not carry information. For non-isotropic media, then Using the relativistic relations for energy and momentum yields The variable can either be interpreted as the speed of the particle or the group velocity of the corresponding matter wave—the two are the same. Since the particle speed for any particle that has nonzero mass (according to special relativity), the phase velocity of matter waves always exceeds c, i.e., which approaches c when the particle speed is relativistic. The superluminal phase velocity does not violate special relativity, similar to the case above for non-isotropic media. See the article on Dispersion (optics) for further details. Special relativity Using two formulas from special relativity, one for the relativistic mass energy and one for the relativistic momentum allows the equations for de Broglie wavelength and frequency to be written as where is the velocity, the Lorentz factor, and the speed of light in vacuum. This shows that as the velocity of a particle approaches zero (rest) the de Broglie wavelength approaches infinity. Four-vectors Using four-vectors, the de Broglie relations form a single equation: which is frame-independent. Likewise, the relation between group/particle velocity and phase velocity is given in frame-independent form by: where Four-momentum Four-wavevector Four-velocity General matter waves The preceding sections refer specifically to free particles for which the wavefunctions are plane waves. 
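The free-particle relations just described can be checked numerically. A minimal sketch, assuming an electron with momentum of about 0.01 mc (an arbitrary choice), uses the relativistic dispersion E² = (pc)² + (mc²)²: the group velocity pc²/E equals the particle speed, the phase velocity E/p exceeds c, and their product is c².

```python
# Sketch: group and phase velocity of a matter wave from the relativistic
# dispersion E^2 = (p c)^2 + (m c^2)^2. The chosen momentum is an assumption.
import math

c, m_e = 299_792_458.0, 9.109_383_7015e-31

p = 0.01 * m_e * c                       # assumed momentum (mildly relativistic)
E = math.sqrt((p * c)**2 + (m_e * c**2)**2)

v_group = p * c**2 / E                   # equals the particle velocity
v_phase = E / p                          # exceeds c; carries no information
print(f"v_group = {v_group/c:.4f} c, v_phase = {v_phase/c:.1f} c, "
      f"v_group*v_phase/c^2 = {v_group*v_phase/c**2:.6f}")
```

In the non-relativistic limit, with the constant rest-mass term dropped, the same dispersion reduces to ω = ħk²/2m, for which the group velocity ħk/m is twice the phase velocity.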
There are significant numbers of other matter waves, which can be broadly split into three classes: single-particle matter waves, collective matter waves and standing waves. Single-particle matter waves The more general description of matter waves corresponding to a single particle type (e.g. a single electron or neutron only) would have a form similar to where now there is an additional spatial term in the front, and the energy has been written more generally as a function of the wave vector. The various terms given before still apply, although the energy is no longer always proportional to the wave vector squared. A common approach is to define an effective mass which in general is a tensor given by so that in the simple case where all directions are the same the form is similar to that of a free wave above.In general the group velocity would be replaced by the probability current where is the del or gradient operator. The momentum would then be described using the kinetic momentum operator, The wavelength is still described as the inverse of the modulus of the wavevector, although measurement is more complex. There are many cases where this approach is used to describe single-particle matter waves: Bloch wave, which form the basis of much of band structure as described in Ashcroft and Mermin, and are also used to describe the diffraction of high-energy electrons by solids. Waves with angular momentum such as electron vortex beams. Evanescent waves, where the component of the wavevector in one direction is complex. These are common when matter waves are being reflected, particularly for grazing-incidence diffraction. Collective matter waves Other classes of matter waves involve more than one particle, so are called collective waves and are often quasiparticles. Many of these occur in solids – see Ashcroft and Mermin. Examples include: In solids, an electron quasiparticle is an electron where interactions with other electrons in the solid have been included. An electron quasiparticle has the same charge and spin as a "normal" (elementary particle) electron and, like a normal electron, it is a fermion. However, its effective mass can differ substantially from that of a normal electron. Its electric field is also modified, as a result of electric field screening. A hole is a quasiparticle which can be thought of as a vacancy of an electron in a state; it is most commonly used in the context of empty states in the valence band of a semiconductor. A hole has the opposite charge of an electron. A polaron is a quasiparticle where an electron interacts with the polarization of nearby atoms. An exciton is an electron and hole pair which are bound together. A Cooper pair is two electrons bound together so they behave as a single matter wave. Standing matter waves The third class are matter waves which have a wavevector, a wavelength and vary with time, but have a zero group velocity or probability flux. The simplest of these, similar to the notation above would be These occur as part of the particle in a box, and other cases such as in a ring. This can, and arguably should be, extended to many other cases. For instance, in early work de Broglie used the concept that an electron matter wave must be continuous in a ring to connect to the Bohr–Sommerfeld condition in the early approaches to quantum mechanics. In that sense atomic orbitals around atoms, and also molecular orbitals are electron matter waves. Matter waves vs. 
electromagnetic waves (light) Schrödinger applied Hamilton's optico-mechanical analogy to develop his wave mechanics for subatomic particles Consequently, wave solutions to the Schrödinger equation share many properties with results of light wave optics. In particular, Kirchhoff's diffraction formula works well for electron optics and for atomic optics. The approximation works well as long as the electric fields change more slowly than the de Broglie wavelength. Macroscopic apparatus fulfill this condition; slow electrons moving in solids do not. Beyond the equations of motion, other aspects of matter wave optics differ from the corresponding light optics cases. Sensitivity of matter waves to environmental condition. Many examples of electromagnetic (light) diffraction occur in air under many environmental conditions. Obviously visible light interacts weakly with air molecules. By contrast, strongly interacting particles like slow electrons and molecules require vacuum: the matter wave properties rapidly fade when they are exposed to even low pressures of gas. With special apparatus, high velocity electrons can be used to study liquids and gases. Neutrons, an important exception, interact primarily by collisions with nuclei, and thus travel several hundred feet in air. Dispersion. Light waves of all frequencies travel at the same speed of light while matter wave velocity varies strongly with frequency. The relationship between frequency (proportional to energy) and wavenumber or velocity (proportional to momentum) is called a dispersion relation. Light waves in a vacuum have linear dispersion relation between frequency: . For matter waves the relation is non-linear: This non-relativistic matter wave dispersion relation says the frequency in vacuum varies with wavenumber in two parts: a constant part due to the de Broglie frequency of the rest mass and a quadratic part due to kinetic energy. The quadratic term causes rapid spreading of wave packets of matter waves. Coherence The visibility of diffraction features using an optical theory approach depends on the beam coherence, which at the quantum level is equivalent to a density matrix approach. As with light, transverse coherence (across the direction of propagation) can be increased by collimation. Electron optical systems use stabilized high voltage to give a narrow energy spread in combination with collimating (parallelizing) lenses and pointed filament sources to achieve good coherence. Because light at all frequencies travels the same velocity, longitudinal and temporal coherence are linked; in matter waves these are independent. For example, for atoms, velocity (energy) selection controls longitudinal coherence and pulsing or chopping controls temporal coherence. Optically shaped matter waves Optical manipulation of matter plays a critical role in matter wave optics: "Light waves can act as refractive, reflective, and absorptive structures for matter waves, just as glass interacts with light waves." Laser light momentum transfer can cool matter particles and alter the internal excitation state of atoms. Multi-particle experiments While single-particle free-space optical and matter wave equations are identical, multiparticle systems like coincidence experiments are not. Applications of matter waves The following subsections provide links to pages describing applications of matter waves as probes of materials or of fundamental quantum properties. 
In most cases these involve some method of producing travelling matter waves which initially have the simple free-particle (plane wave) form described above, then using these to probe materials. Across these applications the matter-wave mass ranges over six orders of magnitude and the energy over nine orders, but the wavelengths are all in the picometre range, comparable to atomic spacings. (Atomic diameters range from 62 to 520 pm, and the typical length of a carbon–carbon single bond is 154 pm.) Reaching longer wavelengths requires special techniques like laser cooling to reach lower energies; shorter wavelengths make diffraction effects more difficult to discern. Therefore, many applications focus on material structures, in parallel with applications of electromagnetic waves, especially X-rays. Unlike light, matter wave particles may have mass, electric charge, magnetic moments, and internal structure, presenting new challenges and opportunities. Electrons Electron diffraction patterns emerge when energetic electrons reflect or penetrate ordered solids; analysis of the patterns leads to models of the atomic arrangement in the solids. Electrons are used for imaging from the micron scale down to the atomic scale using electron microscopes, in transmission, in scanning mode, and for surfaces at low energies. Measurements of the energy they lose in electron energy loss spectroscopy provide information about the chemistry and electronic structure of materials. Beams of electrons also lead to characteristic X-rays in energy dispersive spectroscopy, which can produce information about chemical content at the nanoscale. Quantum tunneling explains how electrons escape from metals in an electrostatic field at energies less than classical predictions allow: the matter wave penetrates the work function barrier of the metal. The scanning tunneling microscope leverages quantum tunneling to image the top atomic layer of solid surfaces. Electron holography, the electron matter wave analog of optical holography, probes the electric and magnetic fields in thin films. Neutrons Neutron diffraction complements X-ray diffraction through its different scattering cross sections and its sensitivity to magnetism. Small-angle neutron scattering provides a way to obtain the structure of disordered systems, with sensitivity to light elements, isotopes and magnetic moments. Neutron reflectometry is a neutron diffraction technique for measuring the structure of thin films. Neutral atoms Atom interferometers, similar to optical interferometers, measure the difference in phase between atomic matter waves along different paths. Atom optics mimics many light-optics devices, including mirrors and zone plates for atom focusing. Scanning helium microscopy uses He atom waves to image solid structures non-destructively. Quantum reflection uses matter wave behavior to explain grazing-angle atomic reflection, the basis of some atomic mirrors. Quantum decoherence measurements rely on Rb atom wave interference. Molecules Quantum superposition revealed by interference of matter waves from large molecules probes the limits of wave–particle duality and quantum macroscopicity. Matter-wave interferometers generate nanostructures on molecular beams that can be read with nanometer accuracy and can therefore be used for highly sensitive force measurements, from which one can deduce a plethora of properties of individualized complex molecules.
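As a concrete illustration of the numbers behind the electron-diffraction applications above, a minimal sketch follows. The 100 kV accelerating voltage and the 200 pm lattice spacing are assumed example values, not parameters of any specific instrument mentioned here; the wavelength formula is the standard relativistically corrected one.

```python
# Sketch: wavelength of electrons accelerated through a potential V, with the
# standard relativistic correction, and the first-order Bragg angle for an
# assumed lattice spacing d = 200 pm. Voltage and spacing are example values.
import math

h, c = 6.626_070_15e-34, 299_792_458.0
m_e, e = 9.109_383_7015e-31, 1.602_176_634e-19

def electron_wavelength(V):
    """Relativistically corrected de Broglie wavelength (metres) at voltage V."""
    return h / math.sqrt(2 * m_e * e * V * (1 + e * V / (2 * m_e * c**2)))

V, d = 100e3, 200e-12
lam = electron_wavelength(V)
theta = math.asin(lam / (2 * d))          # n = 1 Bragg condition: lambda = 2 d sin(theta)
print(f"lambda(100 kV) ≈ {lam*1e12:.2f} pm, Bragg angle ≈ {math.degrees(theta):.3f} deg")
```

Sub-degree scattering angles of this kind are characteristic of high-energy electron diffraction.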
See also Wave–particle duality Bohr model Compton wavelength Faraday wave Kapitsa–Dirac effect Matter wave clock Schrödinger equation Thermal de Broglie wavelength De Broglie–Bohm theory Further reading L. de Broglie, Recherches sur la théorie des quanta (Researches on the quantum theory), Thesis (Paris), 1924; L. de Broglie, Ann. Phys. (Paris) 3, 22 (1925). English translation by A.F. Kracklauer. Broglie, Louis de, "The wave nature of the electron", Nobel Lecture, 12, 1929. Tipler, Paul A. and Ralph A. Llewellyn (2003). Modern Physics, 4th ed. New York: W. H. Freeman and Co., pp. 203–4, 222–3, 236. "Scientific Papers Presented to Max Born on his retirement from the Tait Chair of Natural Philosophy in the University of Edinburgh", 1953 (Oliver and Boyd).
0.768971
0.997359
0.76694
Ergodic theory
Ergodic theory is a branch of mathematics that studies statistical properties of deterministic dynamical systems; it is the study of ergodicity. In this context, "statistical properties" refers to properties which are expressed through the behavior of time averages of various functions along trajectories of dynamical systems. The notion of deterministic dynamical systems assumes that the equations determining the dynamics do not contain any random perturbations, noise, etc. Thus, the statistics with which we are concerned are properties of the dynamics. Ergodic theory, like probability theory, is based on general notions of measure theory. Its initial development was motivated by problems of statistical physics. A central concern of ergodic theory is the behavior of a dynamical system when it is allowed to run for a long time. The first result in this direction is the Poincaré recurrence theorem, which claims that almost all points in any subset of the phase space eventually revisit the set. Systems for which the Poincaré recurrence theorem holds are conservative systems; thus all ergodic systems are conservative. More precise information is provided by various ergodic theorems which assert that, under certain conditions, the time average of a function along the trajectories exists almost everywhere and is related to the space average. Two of the most important theorems are those of Birkhoff (1931) and von Neumann which assert the existence of a time average along each trajectory. For the special class of ergodic systems, this time average is the same for almost all initial points: statistically speaking, the system that evolves for a long time "forgets" its initial state. Stronger properties, such as mixing and equidistribution, have also been extensively studied. The problem of metric classification of systems is another important part of the abstract ergodic theory. An outstanding role in ergodic theory and its applications to stochastic processes is played by the various notions of entropy for dynamical systems. The concepts of ergodicity and the ergodic hypothesis are central to applications of ergodic theory. The underlying idea is that for certain systems the time average of their properties is equal to the average over the entire space. Applications of ergodic theory to other parts of mathematics usually involve establishing ergodicity properties for systems of special kind. In geometry, methods of ergodic theory have been used to study the geodesic flow on Riemannian manifolds, starting with the results of Eberhard Hopf for Riemann surfaces of negative curvature. Markov chains form a common context for applications in probability theory. Ergodic theory has fruitful connections with harmonic analysis, Lie theory (representation theory, lattices in algebraic groups), and number theory (the theory of diophantine approximations, L-functions). Ergodic transformations Ergodic theory is often concerned with ergodic transformations. The intuition behind such transformations, which act on a given set, is that they do a thorough job "stirring" the elements of that set. E.g. if the set is a quantity of hot oatmeal in a bowl, and if a spoonful of syrup is dropped into the bowl, then iterations of the inverse of an ergodic transformation of the oatmeal will not allow the syrup to remain in a local subregion of the oatmeal, but will distribute the syrup evenly throughout. 
At the same time, these iterations will not compress or dilate any portion of the oatmeal: they preserve the measure that is density. The formal definition is as follows: Let be a measure-preserving transformation on a measure space , with . Then is ergodic if for every in with (that is, is invariant), either or . The operator Δ here is the symmetric difference of sets, equivalent to the exclusive-or operation with respect to set membership. The condition that the symmetric difference be measure zero is called being essentially invariant. Examples An irrational rotation of the circle R/Z, T: x → x + θ, where θ is irrational, is ergodic. This transformation has even stronger properties of unique ergodicity, minimality, and equidistribution. By contrast, if θ = p/q is rational (in lowest terms) then T is periodic, with period q, and thus cannot be ergodic: for any interval I of length a, 0 < a < 1/q, its orbit under T (that is, the union of I, T(I), ..., Tq−1(I), which contains the image of I under any number of applications of T) is a T-invariant mod 0 set that is a union of q intervals of length a, hence it has measure qa strictly between 0 and 1. Let G be a compact abelian group, μ the normalized Haar measure, and T a group automorphism of G. Let G* be the Pontryagin dual group, consisting of the continuous characters of G, and T* be the corresponding adjoint automorphism of G*. The automorphism T is ergodic if and only if the equality (T*)n(χ) = χ is possible only when n = 0 or χ is the trivial character of G. In particular, if G is the n-dimensional torus and the automorphism T is represented by a unimodular matrix A then T is ergodic if and only if no eigenvalue of A is a root of unity. A Bernoulli shift is ergodic. More generally, ergodicity of the shift transformation associated with a sequence of i.i.d. random variables and some more general stationary processes follows from Kolmogorov's zero–one law. Ergodicity of a continuous dynamical system means that its trajectories "spread around" the phase space. A system with a compact phase space which has a non-constant first integral cannot be ergodic. This applies, in particular, to Hamiltonian systems with a first integral I functionally independent from the Hamilton function H and a compact level set X = {(p,q): H(p,q) = E} of constant energy. Liouville's theorem implies the existence of a finite invariant measure on X, but the dynamics of the system is constrained to the level sets of I on X, hence the system possesses invariant sets of positive but less than full measure. A property of continuous dynamical systems that is the opposite of ergodicity is complete integrability. Ergodic theorems Let T: X → X be a measure-preserving transformation on a measure space (X, Σ, μ) and suppose ƒ is a μ-integrable function, i.e. ƒ ∈ L1(μ). Then we define the following averages: Time average: This is defined as the average (if it exists) over iterations of T starting from some initial point x: Space average: If μ(X) is finite and nonzero, we can consider the space or phase average of ƒ: In general the time average and space average may be different. But if the transformation is ergodic, and the measure is invariant, then the time average is equal to the space average almost everywhere. This is the celebrated ergodic theorem, in an abstract form due to George David Birkhoff. (Actually, Birkhoff's paper considers not the abstract general case but only the case of dynamical systems arising from differential equations on a smooth manifold.) 
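The statement that time averages equal space averages can be seen numerically in the simplest example above. A minimal sketch, assuming the irrational rotation T(x) = x + θ mod 1 with θ = √2 − 1 and the observable f(x) = cos(2πx), whose Lebesgue space average is zero, computes a Birkhoff time average along one orbit; the starting point and the number of iterations are arbitrary choices.

```python
# Sketch: Birkhoff time average along one orbit of the irrational rotation
# T(x) = x + theta mod 1, compared with the Lebesgue space average of the
# observable f(x) = cos(2*pi*x), which is 0. Starting point, theta and the
# number of iterations are arbitrary choices made for this illustration.
import math

theta = math.sqrt(2) - 1          # an irrational rotation angle

def f(x):
    return math.cos(2 * math.pi * x)

x, total, N = 0.123, 0.0, 100_000
for _ in range(N):
    total += f(x)
    x = (x + theta) % 1.0

print(f"time average ≈ {total / N:.6f}   (space average = 0)")
```

By unique ergodicity of the irrational rotation, the same limit is obtained from every starting point, not merely from almost every one.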
The equidistribution theorem is a special case of the ergodic theorem, dealing specifically with the distribution of probabilities on the unit interval. More precisely, the pointwise or strong ergodic theorem states that the limit in the definition of the time average of ƒ exists for almost every x and that the (almost everywhere defined) limit function is integrable: Furthermore, is T-invariant, that is to say holds almost everywhere, and if μ(X) is finite, then the normalization is the same: In particular, if T is ergodic, then must be a constant (almost everywhere), and so one has that almost everywhere. Joining the first to the last claim and assuming that μ(X) is finite and nonzero, one has that for almost all x, i.e., for all x except for a set of measure zero. For an ergodic transformation, the time average equals the space average almost surely. As an example, assume that the measure space (X, Σ, μ) models the particles of a gas as above, and let ƒ(x) denote the velocity of the particle at position x. Then the pointwise ergodic theorems says that the average velocity of all particles at some given time is equal to the average velocity of one particle over time. A generalization of Birkhoff's theorem is Kingman's subadditive ergodic theorem. Probabilistic formulation: Birkhoff–Khinchin theorem Birkhoff–Khinchin theorem. Let ƒ be measurable, E(|ƒ|) < ∞, and T be a measure-preserving map. Then with probability 1: where is the conditional expectation given the σ-algebra of invariant sets of T. Corollary (Pointwise Ergodic Theorem): In particular, if T is also ergodic, then is the trivial σ-algebra, and thus with probability 1: Mean ergodic theorem Von Neumann's mean ergodic theorem, holds in Hilbert spaces. Let U be a unitary operator on a Hilbert space H; more generally, an isometric linear operator (that is, a not necessarily surjective linear operator satisfying ‖Ux‖ = ‖x‖ for all x in H, or equivalently, satisfying U*U = I, but not necessarily UU* = I). Let P be the orthogonal projection onto {ψ ∈ H | Uψ = ψ} = ker(I − U). Then, for any x in H, we have: where the limit is with respect to the norm on H. In other words, the sequence of averages converges to P in the strong operator topology. Indeed, it is not difficult to see that in this case any admits an orthogonal decomposition into parts from and respectively. The former part is invariant in all the partial sums as grows, while for the latter part, from the telescoping series one would have: This theorem specializes to the case in which the Hilbert space H consists of L2 functions on a measure space and U is an operator of the form where T is a measure-preserving endomorphism of X, thought of in applications as representing a time-step of a discrete dynamical system. The ergodic theorem then asserts that the average behavior of a function ƒ over sufficiently large time-scales is approximated by the orthogonal component of ƒ which is time-invariant. In another form of the mean ergodic theorem, let Ut be a strongly continuous one-parameter group of unitary operators on H. Then the operator converges in the strong operator topology as T → ∞. In fact, this result also extends to the case of strongly continuous one-parameter semigroup of contractive operators on a reflexive space. Remark: Some intuition for the mean ergodic theorem can be developed by considering the case where complex numbers of unit length are regarded as unitary transformations on the complex plane (by left multiplication). 
If we pick a single complex number of unit length (which we think of as U), it is intuitive that its powers will fill up the circle. Since the circle is symmetric around 0, it makes sense that the averages of the powers of U will converge to 0. Also, 0 is the only fixed point of U, and so the projection onto the space of fixed points must be the zero operator (which agrees with the limit just described). Convergence of the ergodic means in the Lp norms Let (X, Σ, μ) be as above a probability space with a measure preserving transformation T, and let 1 ≤ p ≤ ∞. The conditional expectation with respect to the sub-σ-algebra ΣT of the T-invariant sets is a linear projector ET of norm 1 of the Banach space Lp(X, Σ, μ) onto its closed subspace Lp(X, ΣT, μ). The latter may also be characterized as the space of all T-invariant Lp-functions on X. The ergodic means, as linear operators on Lp(X, Σ, μ) also have unit operator norm; and, as a simple consequence of the Birkhoff–Khinchin theorem, converge to the projector ET in the strong operator topology of Lp if 1 ≤ p ≤ ∞, and in the weak operator topology if p = ∞. More is true if 1 < p ≤ ∞ then the Wiener–Yoshida–Kakutani ergodic dominated convergence theorem states that the ergodic means of ƒ ∈ Lp are dominated in Lp; however, if ƒ ∈ L1, the ergodic means may fail to be equidominated in Lp. Finally, if ƒ is assumed to be in the Zygmund class, that is |ƒ| log+(|ƒ|) is integrable, then the ergodic means are even dominated in L1. Sojourn time Let (X, Σ, μ) be a measure space such that μ(X) is finite and nonzero. The time spent in a measurable set A is called the sojourn time. An immediate consequence of the ergodic theorem is that, in an ergodic system, the relative measure of A is equal to the mean sojourn time: for all x except for a set of measure zero, where χA is the indicator function of A. The occurrence times of a measurable set A is defined as the set k1, k2, k3, ..., of times k such that Tk(x) is in A, sorted in increasing order. The differences between consecutive occurrence times Ri = ki − ki−1 are called the recurrence times of A. Another consequence of the ergodic theorem is that the average recurrence time of A is inversely proportional to the measure of A, assuming that the initial point x is in A, so that k0 = 0. (See almost surely.) That is, the smaller A is, the longer it takes to return to it. Ergodic flows on manifolds The ergodicity of the geodesic flow on compact Riemann surfaces of variable negative curvature and on compact manifolds of constant negative curvature of any dimension was proved by Eberhard Hopf in 1939, although special cases had been studied earlier: see for example, Hadamard's billiards (1898) and Artin billiard (1924). The relation between geodesic flows on Riemann surfaces and one-parameter subgroups on SL(2, R) was described in 1952 by S. V. Fomin and I. M. Gelfand. The article on Anosov flows provides an example of ergodic flows on SL(2, R) and on Riemann surfaces of negative curvature. Much of the development described there generalizes to hyperbolic manifolds, since they can be viewed as quotients of the hyperbolic space by the action of a lattice in the semisimple Lie group SO(n,1). Ergodicity of the geodesic flow on Riemannian symmetric spaces was demonstrated by F. I. Mautner in 1957. In 1967 D. V. Anosov and Ya. G. Sinai proved ergodicity of the geodesic flow on compact manifolds of variable negative sectional curvature. 
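Returning to the unit-circle intuition for the mean ergodic theorem described above, a minimal numerical sketch takes U to be a unit-modulus complex number acting by multiplication: the Cesàro averages of its powers collapse onto the projection onto the fixed subspace, namely 1 when U = 1 and 0 otherwise. The rotation angles used are arbitrary.

```python
# Sketch: Cesaro averages (1/N) * sum_{n<N} U**n for a unit-modulus complex
# number U, illustrating convergence to the projection onto the fixed
# subspace: 1 when U = 1, and 0 otherwise. The angles are arbitrary choices.
import cmath

def cesaro_average(U, N=100_000):
    s, w = 0j, 1 + 0j
    for _ in range(N):
        s += w
        w *= U
    return s / N

for theta in (0.0, 0.1, 2.0):              # rotation angles in radians
    U = cmath.exp(1j * theta)
    print(f"theta = {theta:3.1f}:  |average| = {abs(cesaro_average(U)):.6f}")
```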
A simple criterion for the ergodicity of a homogeneous flow on a homogeneous space of a semisimple Lie group was given by Calvin C. Moore in 1966. Many of the theorems and results from this area of study are typical of rigidity theory. In the 1930s G. A. Hedlund proved that the horocycle flow on a compact hyperbolic surface is minimal and ergodic. Unique ergodicity of the flow was established by Hillel Furstenberg in 1972. Ratner's theorems provide a major generalization of ergodicity for unipotent flows on the homogeneous spaces of the form Γ \ G, where G is a Lie group and Γ is a lattice in G. In the last 20 years, there have been many works trying to find a measure-classification theorem similar to Ratner's theorems but for diagonalizable actions, motivated by conjectures of Furstenberg and Margulis. An important partial result (solving those conjectures with an extra assumption of positive entropy) was proved by Elon Lindenstrauss, and he was awarded the Fields Medal in 2010 for this result. See also Chaos theory Ergodic hypothesis Ergodic process Kruskal principle Lindy effect Lyapunov time – the time limit to the predictability of the system Maximal ergodic theorem Ornstein isomorphism theorem Statistical mechanics Symbolic dynamics References Vladimir Igorevich Arnol'd and André Avez, Ergodic Problems of Classical Mechanics. New York: W.A. Benjamin, 1968. Leo Breiman, Probability. Original edition published by Addison–Wesley, 1968; reprinted by Society for Industrial and Applied Mathematics, 1992. (See Chapter 6.) (A survey of topics in ergodic theory; with exercises.) Karl Petersen, Ergodic Theory (Cambridge Studies in Advanced Mathematics). Cambridge: Cambridge University Press, 1990. Françoise Pène, Stochastic properties of dynamical systems, Cours spécialisés de la SMF, Volume 30, 2022. Joseph M. Rosenblatt and Máté Weirdl, Pointwise ergodic theorems via harmonic analysis (1993), appearing in Ergodic Theory and its Connections with Harmonic Analysis, Proceedings of the 1993 Alexandria Conference (1995), Karl E. Petersen and Ibrahim A. Salama, eds., Cambridge University Press, Cambridge. (An extensive survey of the ergodic properties of generalizations of the equidistribution theorem of shift maps on the unit interval. Focuses on methods developed by Bourgain.) A. N. Shiryaev, Probability, 2nd ed., Springer 1996, Sec. V.3. (A detailed discussion about the priority of the discovery and publication of the ergodic theorems by Birkhoff and von Neumann, based on a letter of the latter to his friend Howard Percy Robertson.) Andrzej Lasota, Michael C. Mackey, Chaos, Fractals, and Noise: Stochastic Aspects of Dynamics, Second Edition, Springer, 1994. Manfred Einsiedler and Thomas Ward, Ergodic Theory with a view towards Number Theory. Springer, 2011. Jane Hawkins, Ergodic Dynamics: From Basic Theory to Applications, Springer, 2021. External links Ergodic Theory (16 June 2015) Notes by Cosma Rohilla Shalizi Ergodic theorem passes the test From Physics World
0.769805
0.996271
0.766935
Mechanical advantage
Mechanical advantage is a measure of the force amplification achieved by using a tool, mechanical device or machine system. The device trades off input forces against movement to obtain a desired amplification in the output force. The model for this is the law of the lever. Machine components designed to manage forces and movement in this way are called mechanisms. An ideal mechanism transmits power without adding to or subtracting from it. This means the ideal machine does not include a power source, is frictionless, and is constructed from rigid bodies that do not deflect or wear. The performance of a real system relative to this ideal is expressed in terms of efficiency factors that take into account departures from the ideal. Levers The lever is a movable bar that pivots on a fulcrum attached to or positioned on or across a fixed point. The lever operates by applying forces at different distances from the fulcrum, or pivot. The location of the fulcrum determines a lever's class. Where a lever rotates continuously, it functions as a rotary 2nd-class lever. The motion of the lever's end-point describes a fixed orbit, where mechanical energy can be exchanged. (see a hand-crank as an example.) In modern times, this kind of rotary leverage is widely used; see a (rotary) 2nd-class lever; see gears, pulleys or friction drive, used in a mechanical power transmission scheme. It is common for mechanical advantage to be manipulated in a 'collapsed' form, via the use of more than one gear (a gearset). In such a gearset, gears having smaller radii and less inherent mechanical advantage are used. In order to make use of non-collapsed mechanical advantage, it is necessary to use a 'true length' rotary lever. See, also, the incorporation of mechanical advantage into the design of certain types of electric motors; one design is an 'outrunner'. As the lever pivots on the fulcrum, points farther from this pivot move faster than points closer to the pivot. The power into and out of the lever is the same, so must come out the same when calculations are being done. Power is the product of force and velocity, so forces applied to points farther from the pivot must be less than when applied to points closer in. If a and b are distances from the fulcrum to points A and B and if force FA applied to A is the input force and FB exerted at B is the output, the ratio of the velocities of points A and B is given by a/b so the ratio of the output force to the input force, or mechanical advantage, is given by This is the law of the lever, which Archimedes formulated using geometric reasoning. It shows that if the distance a from the fulcrum to where the input force is applied (point A) is greater than the distance b from fulcrum to where the output force is applied (point B), then the lever amplifies the input force. If the distance from the fulcrum to the input force is less than from the fulcrum to the output force, then the lever reduces the input force. To Archimedes, who recognized the profound implications and practicalities of the law of the lever, has been attributed the famous claim, "Give me a place to stand and with a lever I will move the whole world." The use of velocity in the static analysis of a lever is an application of the principle of virtual work. Speed ratio The requirement for power input to an ideal mechanism to equal power output provides a simple way to compute mechanical advantage from the input-output speed ratio of the system. 
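A minimal sketch of that statement, which is derived in detail just below for gear trains, is to impose constant power directly: if the input turns at angular velocity ωA under torque TA and the output at ωB, then TAωA = TBωB gives a mechanical advantage TB/TA = ωA/ωB. The torque and speed values used here are arbitrary illustrative numbers.

```python
# Sketch: mechanical advantage of an ideal (lossless) mechanism from the
# constant-power condition T_A * omega_A = T_B * omega_B, so that
# MA = T_B / T_A = omega_A / omega_B (the input-output speed ratio).
# The numbers are arbitrary illustrative values.
def mechanical_advantage(omega_in, omega_out):
    return omega_in / omega_out

T_in, omega_in, omega_out = 10.0, 100.0, 25.0     # N*m, rad/s, rad/s (assumed)
ma = mechanical_advantage(omega_in, omega_out)
T_out = ma * T_in                                  # ideal output torque
print(f"speed ratio = MA = {ma:.1f}, ideal output torque = {T_out:.0f} N*m")
```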
The power input to a gear train with a torque TA applied to the drive pulley which rotates at an angular velocity of ωA is P=TAωA. Because the power flow is constant, the torque TB and angular velocity ωB of the output gear must satisfy the relation which yields This shows that for an ideal mechanism the input-output speed ratio equals the mechanical advantage of the system. This applies to all mechanical systems ranging from robots to linkages. Gear trains Gear teeth are designed so that the number of teeth on a gear is proportional to the radius of its pitch circle, and so that the pitch circles of meshing gears roll on each other without slipping. The speed ratio for a pair of meshing gears can be computed from ratio of the radii of the pitch circles and the ratio of the number of teeth on each gear, its gear ratio. The velocity v of the point of contact on the pitch circles is the same on both gears, and is given by where input gear A has radius rA and meshes with output gear B of radius rB, therefore, where NA is the number of teeth on the input gear and NB is the number of teeth on the output gear. The mechanical advantage of a pair of meshing gears for which the input gear has NA teeth and the output gear has NB teeth is given by This shows that if the output gear GB has more teeth than the input gear GA, then the gear train amplifies the input torque. And, if the output gear has fewer teeth than the input gear, then the gear train reduces the input torque. If the output gear of a gear train rotates more slowly than the input gear, then the gear train is called a speed reducer (Force multiplier). In this case, because the output gear must have more teeth than the input gear, the speed reducer will amplify the input torque. Chain and belt drives Mechanisms consisting of two sprockets connected by a chain, or two pulleys connected by a belt are designed to provide a specific mechanical advantage in power transmission systems. The velocity v of the chain or belt is the same when in contact with the two sprockets or pulleys: where the input sprocket or pulley A meshes with the chain or belt along the pitch radius rA and the output sprocket or pulley B meshes with this chain or belt along the pitch radius rB, therefore where NA is the number of teeth on the input sprocket and NB is the number of teeth on the output sprocket. For a toothed belt drive, the number of teeth on the sprocket can be used. For friction belt drives the pitch radius of the input and output pulleys must be used. The mechanical advantage of a pair of a chain drive or toothed belt drive with an input sprocket with NA teeth and the output sprocket has NB teeth is given by The mechanical advantage for friction belt drives is given by Chains and belts dissipate power through friction, stretch and wear, which means the power output is actually less than the power input, which means the mechanical advantage of the real system will be less than that calculated for an ideal mechanism. A chain or belt drive can lose as much as 5% of the power through the system in friction heat, deformation and wear, in which case the efficiency of the drive is 95%. Example: bicycle chain drive Consider the 18-speed bicycle with 7 in (radius) cranks and 26 in (diameter) wheels. 
If the sprockets at the crank and at the rear drive wheel are the same size, then the ratio of the output force on the tire to the input force on the pedal can be calculated from the law of the lever to be Now, assume that the front sprockets have a choice of 28 and 52 teeth, and that the rear sprockets have a choice of 16 and 32 teeth. Using different combinations, we can compute the following speed ratios between the front and rear sprockets The ratio of the force driving the bicycle to the force on the pedal, which is the total mechanical advantage of the bicycle, is the product of the speed ratio (or teeth ratio of output sprocket/input sprocket) and the crank-wheel lever ratio. Notice that in every case the force on the pedals is greater than the force driving the bicycle forward (in the illustration above, the corresponding backward-directed reaction force on the ground is indicated). Block and tackle A block and tackle is an assembly of a rope and pulleys that is used to lift loads. A number of pulleys are assembled together to form the blocks, one that is fixed and one that moves with the load. The rope is threaded through the pulleys to provide mechanical advantage that amplifies that force applied to the rope. In order to determine the mechanical advantage of a block and tackle system consider the simple case of a gun tackle, which has a single mounted, or fixed, pulley and a single movable pulley. The rope is threaded around the fixed block and falls down to the moving block where it is threaded around the pulley and brought back up to be knotted to the fixed block. Let S be the distance from the axle of the fixed block to the end of the rope, which is A where the input force is applied. Let R be the distance from the axle of the fixed block to the axle of the moving block, which is B where the load is applied. The total length of the rope L can be written as where K is the constant length of rope that passes over the pulleys and does not change as the block and tackle moves. The velocities VA and VB of the points A and B are related by the constant length of the rope, that is or The negative sign shows that the velocity of the load is opposite to the velocity of the applied force, which means as we pull down on the rope the load moves up. Let VA be positive downwards and VB be positive upwards, so this relationship can be written as the speed ratio where 2 is the number of rope sections supporting the moving block. Let FA be the input force applied at A the end of the rope, and let FB be the force at B on the moving block. Like the velocities FA is directed downwards and FB is directed upwards. For an ideal block and tackle system there is no friction in the pulleys and no deflection or wear in the rope, which means the power input by the applied force FAVA must equal the power out acting on the load FBVB, that is The ratio of the output force to the input force is the mechanical advantage of an ideal gun tackle system, This analysis generalizes to an ideal block and tackle with a moving block supported by n rope sections, This shows that the force exerted by an ideal block and tackle is n times the input force, where n is the number of sections of rope that support the moving block. Efficiency Mechanical advantage that is computed using the assumption that no power is lost through deflection, friction and wear of a machine is the maximum performance that can be achieved. For this reason, it is often called the ideal mechanical advantage (IMA). 
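Pulling the bicycle numbers above together, a minimal sketch computes the total ideal mechanical advantage as the product of the sprocket teeth ratio and the crank-to-wheel lever ratio, using the 7 in crank, 13 in wheel radius and the sprocket options stated in the text; all of the resulting values are IMA figures, since a lossless drive is assumed.

```python
# Sketch: total ideal mechanical advantage of the bicycle drive described
# above, MA = (N_rear / N_front) * (r_crank / r_wheel), using the dimensions
# and sprocket options given in the text (7 in crank, 13 in wheel radius).
r_crank, r_wheel = 7.0, 13.0           # inches
fronts, rears = (28, 52), (16, 32)     # available sprocket teeth counts

for n_front in fronts:
    for n_rear in rears:
        ma = (n_rear / n_front) * (r_crank / r_wheel)
        print(f"front {n_front:2d}T, rear {n_rear:2d}T: MA = {ma:.3f}")
```

Every combination gives an MA below one, which matches the statement above that the pedal force exceeds the force driving the bicycle forward; a real drivetrain would fall a few percent short of even these ideal values.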
In operation, deflection, friction and wear will reduce the mechanical advantage. The amount of this reduction from the ideal to the actual mechanical advantage (AMA) is defined by a factor called efficiency, a quantity which is determined by experimentation. As an example, using a block and tackle with six rope sections and a 600-pound load, the operator of an ideal system would be required to pull the rope six feet and exert 100 pounds of force to lift the load one foot. Both the ratios Fout / Fin and Vin / Vout show that the IMA is six. For the first ratio, 100 pounds of force input results in 600 pounds of force out. In an actual system, the force out would be less than 600 pounds due to friction in the pulleys. The second ratio also yields an MA of 6 in the ideal case but a smaller value in the practical scenario; it does not properly account for energy losses such as rope stretch. Subtracting those losses from the IMA or using the first ratio yields the AMA. Ideal mechanical advantage The ideal mechanical advantage (IMA), or theoretical mechanical advantage, is the mechanical advantage of a device with the assumption that its components do not flex, there is no friction, and there is no wear. It is calculated using the physical dimensions of the device and defines the maximum performance the device can achieve. The assumptions of an ideal machine are equivalent to the requirement that the machine does not store or dissipate energy; the power into the machine thus equals the power out. Therefore, the power P is constant through the machine, and force times velocity into the machine equals the force times velocity out; that is, The ideal mechanical advantage is the ratio of the force out of the machine (load) to the force into the machine (effort), or Applying the constant power relationship yields a formula for this ideal mechanical advantage in terms of the speed ratio: The speed ratio of a machine can be calculated from its physical dimensions. The assumption of constant power thus allows use of the speed ratio to determine the maximum value for the mechanical advantage. Actual mechanical advantage The actual mechanical advantage (AMA) is the mechanical advantage determined by physical measurement of the input and output forces. Actual mechanical advantage takes into account energy loss due to deflection, friction, and wear. The AMA of a machine is calculated as the ratio of the measured force output to the measured force input, where the input and output forces are determined experimentally. The ratio of the experimentally determined mechanical advantage to the ideal mechanical advantage is the mechanical efficiency η of the machine. See also Outline of machines Compound lever Simple machine Mechanical advantage device Gear ratio Chain drive Belt (mechanical) Roller chain Bicycle chain Bicycle gearing Transmission (mechanics) On the Equilibrium of Planes Mechanical efficiency Wedge External links Gears and pulleys Nice demonstration of mechanical advantage Mechanical advantage — video
0.771873
0.993555
0.766898
Angular momentum of light
The angular momentum of light is a vector quantity that expresses the amount of dynamical rotation present in the electromagnetic field of the light. While traveling approximately in a straight line, a beam of light can also be rotating (or "spinning", or "twisting") around its own axis. This rotation, while not visible to the naked eye, can be revealed by the interaction of the light beam with matter. There are two distinct forms of rotation of a light beam, one involving its polarization and the other its wavefront shape. These two forms of rotation are therefore associated with two distinct forms of angular momentum, respectively named light spin angular momentum (SAM) and light orbital angular momentum (OAM). The total angular momentum of light (or, more generally, of the electromagnetic field and the other force fields) and matter is conserved in time. Introduction Light, or more generally an electromagnetic wave, carries not only energy but also momentum, which is a characteristic property of all objects in translational motion. The existence of this momentum becomes apparent in the "radiation pressure" phenomenon, in which a light beam transfers its momentum to an absorbing or scattering object, generating a mechanical pressure on it in the process. Light may also carry angular momentum, which is a property of all objects in rotational motion. For example, a light beam can be rotating around its own axis while it propagates forward. Again, the existence of this angular momentum can be made evident by transferring it to small absorbing or scattering particles, which are thus subject to an optical torque. For a light beam, one can usually distinguish two "forms" of rotation, the first associated with the dynamical rotation of the electric and magnetic fields around the propagation direction, and the second with the dynamical rotation of light rays around the main beam axis. These two rotations are associated with two forms of angular momentum, namely SAM and OAM. However, this distinction becomes blurred for strongly focused or diverging beams, and in the general case only the total angular momentum of a light field can be defined. An important limiting case in which the distinction is instead clear and unambiguous is that of a "paraxial" light beam, that is, a well-collimated beam in which all light rays (or, more precisely, all Fourier components of the optical field) only form small angles with the beam axis. For such a beam, SAM is strictly related to the optical polarization, and in particular to the so-called circular polarization. OAM is related to the spatial field distribution, and in particular to the wavefront helical shape. In addition to these two terms, if the origin of coordinates is located outside the beam axis, there is a third angular momentum contribution obtained as the cross-product of the beam position and its total momentum. This third term is also called "orbital", because it depends on the spatial distribution of the field. However, since its value depends on the choice of the origin, it is termed "external" orbital angular momentum, as opposed to the "internal" OAM appearing for helical beams. Mathematical expressions for the angular momentum of light One commonly used expression for the total angular momentum of an electromagnetic field is the following one, in which there is no explicit distinction between the two forms of rotation: where and are the electric and magnetic fields, respectively, is the vacuum permittivity and we are using SI units.
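In SI units the kind of expression referred to here is commonly written as J = ε₀ ∫ r × (E × B) d³r. For a paraxial beam, a standard result (made more precise below) is that the angular momentum about the beam axis amounts to (σ + ℓ)ħ per photon, with σ between −1 and +1 describing the circular polarization content and ℓ the integer helical-mode index. A minimal sketch of the resulting torque on a perfect absorber follows; the beam power, wavelength, σ and ℓ are assumed example values.

```python
# Sketch: angular momentum carried by a paraxial beam, using the standard
# per-photon values sigma*hbar (spin) and ell*hbar (orbital), and the torque
# (sigma + ell) * P / omega exerted on a perfect absorber by a beam of power P.
# Power, wavelength, sigma and ell below are assumed example values.
import math

hbar, c = 1.054_571_817e-34, 299_792_458.0

P, lam = 1e-3, 633e-9            # assume a 1 mW beam at 633 nm
sigma, ell = 1, 2                # circular polarization plus an ell = 2 helical mode
omega = 2 * math.pi * c / lam

photon_rate = P / (hbar * omega)           # photons per second
torque = (sigma + ell) * P / omega         # N*m on a perfect absorber
print(f"photons/s ≈ {photon_rate:.3e}, J_z per photon = {(sigma + ell) * hbar:.2e} J*s")
print(f"torque on absorber ≈ {torque:.2e} N*m")
```

Torques of this order are what set small absorbing particles spinning or orbiting in the transfer experiments described in the next section.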
However, another expression of the angular momentum naturally arising from Noether’s theorem is the following one, in which there are two separate terms that may be associated with SAM and OAM: where is the vector potential of the magnetic field, and the i-superscripted symbols denote the cartesian components of the corresponding vectors. These two expressions can be proved to be equivalent to each other for any electromagnetic field that satisfies Maxwell’s equations with no source charges and vanishes fast enough outside a finite region of space. The two terms in the second expression however are physically ambiguous, as they are not gauge-invariant. A gauge-invariant version can be obtained by replacing the vector potential A and the electric field E with their “transverse” or radiative component and , thus obtaining the following expression: A justification for taking this step is yet to be provided. The latter expression has further problems, as it can be shown that the two terms are not true angular momenta as they do not obey the correct quantum commutation rules. Their sum, that is the total angular momentum, instead does. An equivalent but simpler expression for a monochromatic wave of frequency ω, using the complex notation for the fields, is the following: Let us now consider the paraxial limit, with the beam axis assumed to coincide with the z axis of the coordinate system. In this limit the only significant component of the angular momentum is the z one, that is the angular momentum measuring the light beam rotation around its own axis, while the other two components are negligible. where and denote the left and right circular polarization components, respectively. Exchange of spin and orbital angular momentum with matter When a light beam carrying nonzero angular momentum impinges on an absorbing particle, its angular momentum can be transferred on the particle, thus setting it in rotational motion. This occurs both with SAM and OAM. However, if the particle is not at the beam center the two angular momenta will give rise to different kinds of rotation of the particle. SAM will give rise to a rotation of the particle around its own center, i.e., to a particle spinning. OAM, instead, will generate a revolution of the particle around the beam axis. These phenomena are schematically illustrated in the figure. In the case of transparent media, in the paraxial limit, the optical SAM is mainly exchanged with anisotropic systems, for example birefringent crystals. Indeed, thin slabs of birefringent crystals are commonly used to manipulate the light polarization. Whenever the polarization ellipticity is changed, in the process, there is an exchange of SAM between light and the crystal. If the crystal is free to rotate, it will do so. Otherwise, the SAM is finally transferred to the holder and to the Earth. Spiral phase plate (SPP) In the paraxial limit, the OAM of a light beam can be exchanged with material media that have a transverse spatial inhomogeneity. For example, a light beam can acquire OAM by crossing a spiral phase plate, with an inhomogeneous thickness (see figure). Pitch-fork hologram A more convenient approach for generating OAM is based on using diffraction on a fork-like or pitchfork hologram (see figure). Holograms can be also generated dynamically under the control of a computer by using a spatial light modulator. As a result, this allows one to obtain arbitrary values of the orbital angular momentum. 
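For the spiral phase plate mentioned above, the required geometry follows from elementary optics: one full turn of the azimuthal ramp must add ℓ wavelengths of optical path, so the total thickness step is Δh = ℓλ/(n − 1) for a plate of refractive index n in air. A minimal sketch, with an assumed 633 nm wavelength and n = 1.5:

```python
# Sketch: azimuthal thickness step of a spiral phase plate that imprints a
# helical phase exp(i*ell*phi) on a transmitted beam, Delta_h = ell*lambda/(n-1).
# Wavelength, refractive index and ell are assumed example values.
def spp_step_height(ell, wavelength, n_refr):
    """Total azimuthal thickness step (metres) for OAM charge ell."""
    return ell * wavelength / (n_refr - 1.0)

lam, n = 633e-9, 1.5
for ell in (1, 2, 5):
    print(f"ell = {ell}: step height ≈ {spp_step_height(ell, lam, n)*1e6:.2f} um")
```

Micrometre-scale steps of this kind are also why a given spiral phase plate works as designed only near a single wavelength.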
Q-plate Another method for generating OAM is based on the SAM-OAM coupling that may occur in a medium which is both anisotropic and inhomogeneous. In particular, the so-called q-plate is a device, currently realized using liquid crystals, polymers or sub-wavelength gratings, which can generate OAM by exploiting a SAM sign-change. In this case, the OAM sign is controlled by the input polarization. Cylindrical mode converters OAM can also be generated by converting a Hermite-Gaussian beam into a Laguerre-Gaussian one by using an astigmatic system with two well-aligned cylindrical lenses placed at a specific distance (see figure) in order to introduce a well-defined relative phase between horizontal and vertical Hermite-Gaussian beams. Possible applications of the orbital angular momentum of light The applications of the spin angular momentum of light are indistinguishable from the innumerable applications of the light polarization and will not be discussed here. The possible applications of the orbital angular momentum of light are instead currently the subject of research. In particular, the following applications have already been demonstrated in research laboratories, although they have not yet reached the stage of commercialization: Orientational manipulation of particles or particle aggregates in optical tweezers High-bandwidth information encoding in free-space optical communication Higher-dimensional quantum information encoding, for possible future quantum cryptography or quantum computation applications Sensitive optical detection See also Angular momentum Circular polarization Electromagnetic wave Helmholtz equation Light Light orbital angular momentum Light spin angular momentum Optical vortices Orbital angular momentum multiplexing Polarization (waves) Photon polarization External links Phorbitech Glasgow Optics Group Leiden Institute of Physics ICFO Università Di Napoli "Federico II" Università Di Roma "La Sapienza" University of Ottawa
0.784274
0.977825
0.766883