Ornstein–Uhlenbeck process
In mathematics, the Ornstein–Uhlenbeck process is a stochastic process with applications in financial mathematics and the physical sciences. Its original application in physics was as a model for the velocity of a massive Brownian particle under the influence of friction. It is named after Leonard Ornstein and George Eugene Uhlenbeck. The Ornstein–Uhlenbeck process is a stationary Gauss–Markov process, which means that it is a Gaussian process, a Markov process, and is temporally homogeneous. In fact, it is the only nontrivial process that satisfies these three conditions, up to allowing linear transformations of the space and time variables. Over time, the process tends to drift towards its mean function: such a process is called mean-reverting. The process can be considered to be a modification of the random walk in continuous time, or Wiener process, in which the properties of the process have been changed so that there is a tendency of the walk to move back towards a central location, with a greater attraction when the process is further away from the center. The Ornstein–Uhlenbeck process can also be considered as the continuous-time analogue of the discrete-time AR(1) process. Definition The Ornstein–Uhlenbeck process is defined by the following stochastic differential equation: where and are parameters and denotes the Wiener process. An additional drift term is sometimes added: where is a constant. The Ornstein–Uhlenbeck process is sometimes also written as a Langevin equation of the form where , also known as white noise, stands in for the supposed derivative of the Wiener process. However, does not exist because the Wiener process is nowhere differentiable, and so the Langevin equation only makes sense if interpreted in distributional sense. In physics and engineering disciplines, it is a common representation for the Ornstein–Uhlenbeck process and similar stochastic differential equations by tacitly assuming that the noise term is a derivative of a differentiable (e.g. Fourier) interpolation of the Wiener process. Fokker–Planck equation representation The Ornstein–Uhlenbeck process can also be described in terms of a probability density function, , which specifies the probability of finding the process in the state at time . This function satisfies the Fokker–Planck equation where . This is a linear parabolic partial differential equation which can be solved by a variety of techniques. The transition probability, also known as the Green's function, is a Gaussian with mean and variance : This gives the probability of the state occurring at time given initial state at time . Equivalently, is the solution of the Fokker–Planck equation with initial condition . Mathematical properties Conditioned on a particular value of , the mean is and the covariance is For the stationary (unconditioned) process, the mean of is , and the covariance of and is . The Ornstein–Uhlenbeck process is an example of a Gaussian process that has a bounded variance and admits a stationary probability distribution, in contrast to the Wiener process; the difference between the two is in their "drift" term. For the Wiener process the drift term is constant, whereas for the Ornstein–Uhlenbeck process it is dependent on the current value of the process: if the current value of the process is less than the (long-term) mean, the drift will be positive; if the current value of the process is greater than the (long-term) mean, the drift will be negative. 
In other words, the mean acts as an equilibrium level for the process. This gives the process its informative name, "mean-reverting." Properties of sample paths A temporally homogeneous Ornstein–Uhlenbeck process can be represented as a scaled, time-transformed Wiener process: where is the standard Wiener process. This is roughly Theorem 1.2 in . Equivalently, with the change of variable this becomes Using this mapping, one can translate known properties of into corresponding statements for . For instance, the law of the iterated logarithm for becomes Formal solution The stochastic differential equation for can be formally solved by variation of parameters. Writing we get Integrating from to we get whereupon we see From this representation, the first moment (i.e. the mean) is shown to be assuming is constant. Moreover, the Itō isometry can be used to calculate the covariance function by Since the Itô integral of deterministic integrand is normally distributed, it follows that Kolmogorov equations The infinitesimal generator of the process isIf we let , then the eigenvalue equation simplifies to: which is the defining equation for Hermite polynomials. Its solutions are , with , which implies that the mean first passage time for a particle to hit a point on the boundary is on the order of . Numerical simulation By using discretely sampled data at time intervals of width , the maximum likelihood estimators for the parameters of the Ornstein–Uhlenbeck process are asymptotically normal to their true values. More precisely, To simulate an OU process numerically with standard deviation and correlation time , one method is to apply the finite-difference formula where is a normally distributed random number with zero mean and unit variance, sampled independently at every time-step . Scaling limit interpretation The Ornstein–Uhlenbeck process can be interpreted as a scaling limit of a discrete process, in the same way that Brownian motion is a scaling limit of random walks. Consider an urn containing blue and yellow balls. At each step a ball is chosen at random and replaced by a ball of the opposite colour. Let be the number of blue balls in the urn after steps. Then converges in law to an Ornstein–Uhlenbeck process as tends to infinity. This was obtained by Mark Kac. Heuristically one may obtain this as follows. Let , and we will obtain the stochastic differential equation at the limit. First deduce With this, we can calculate the mean and variance of , which turns out to be and . Thus at the limit, we have , with solution (assuming distribution is standard normal) . Applications In physics: noisy relaxation The Ornstein–Uhlenbeck process is a prototype of a noisy relaxation process. A canonical example is a Hookean spring (harmonic oscillator) with spring constant whose dynamics is overdamped with friction coefficient . In the presence of thermal fluctuations with temperature , the length of the spring fluctuates around the spring rest length ; its stochastic dynamics is described by an Ornstein–Uhlenbeck process with where is derived from the Stokes–Einstein equation for the effective diffusion constant. This model has been used to characterize the motion of a Brownian particle in an optical trap. At equilibrium, the spring stores an average energy in accordance with the equipartition theorem. In financial mathematics The Ornstein–Uhlenbeck process is used in the Vasicek model of the interest rate. 
The Ornstein–Uhlenbeck process is one of several approaches used to model (with modifications) interest rates, currency exchange rates, and commodity prices stochastically. The parameter represents the equilibrium or mean value supported by fundamentals; the degree of volatility around it caused by shocks, and the rate by which these shocks dissipate and the variable reverts towards the mean. One application of the process is a trading strategy known as pairs trade. A further implementation of the Ornstein–Uhlenbeck process is derived by Marcello Minenna in order to model the stock return under a lognormal distribution dynamics. This modeling aims at the determination of confidence interval to predict market abuse phenomena. In evolutionary biology The Ornstein–Uhlenbeck process has been proposed as an improvement over a Brownian motion model for modeling the change in organismal phenotypes over time. A Brownian motion model implies that the phenotype can move without limit, whereas for most phenotypes natural selection imposes a cost for moving too far in either direction. A meta-analysis of 250 fossil phenotype time-series showed that an Ornstein–Uhlenbeck model was the best fit for 115 (46%) of the examined time series, supporting stasis as a common evolutionary pattern. This said, there are certain challenges to its use: model selection mechanisms are often biased towards preferring an OU process without sufficient support, and misinterpretation is easy to the unsuspecting data scientist. Generalizations It is possible to define a Lévy-driven Ornstein–Uhlenbeck process, in which the background driving process is a Lévy process instead of a Wiener process: Here, the differential of the Wiener process has been replaced with the differential of a Lévy process . In addition, in finance, stochastic processes are used where the volatility increases for larger values of . In particular, the CKLS process (Chan–Karolyi–Longstaff–Sanders) with the volatility term replaced by can be solved in closed form for , as well as for , which corresponds to the conventional OU process. Another special case is , which corresponds to the Cox–Ingersoll–Ross model (CIR-model). Higher dimensions A multi-dimensional version of the Ornstein–Uhlenbeck process, denoted by the N-dimensional vector , can be defined from where is an N-dimensional Wiener process, and and are constant N×N matrices. The solution is and the mean is These expressions make use of the matrix exponential. The process can also be described in terms of the probability density function , which satisfies the Fokker–Planck equation where the matrix with components is defined by . As for the 1d case, the process is a linear transformation of Gaussian random variables, and therefore itself must be Gaussian. Because of this, the transition probability is a Gaussian which can be written down explicitly. If the real parts of the eigenvalues of are larger than zero, a stationary solution moreover exists, given by where the matrix is determined from the Lyapunov equation . See also Stochastic calculus Wiener process Gaussian process Mathematical finance The Vasicek model of interest rates Short-rate model Diffusion Fluctuation-dissipation theorem Klein–Kramers equation Notes References External links A Stochastic Processes Toolkit for Risk Management, Damiano Brigo, Antonio Dalessandro, Matthias Neugebauer and Fares Triki Simulating and Calibrating the Ornstein–Uhlenbeck process, M. A. 
van den Berg Maximum likelihood estimation of mean reverting processes, Jose Carlos Garcia Franco
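The finite-difference update described in the Numerical simulation section above can be illustrated with a short script. This is a minimal sketch rather than a reference implementation: the parameter names theta (mean-reversion rate), mu (long-term mean) and sigma (noise amplitude) are illustrative choices standing in for the symbols that accompany the stochastic differential equation.

```python
import numpy as np

def simulate_ou(theta=1.0, mu=0.0, sigma=0.3, x0=1.0, dt=1e-3, n_steps=20000, seed=0):
    """Euler-Maruyama discretization of dX_t = theta*(mu - X_t) dt + sigma dW_t.

    A minimal sketch of the finite-difference scheme described in the
    'Numerical simulation' section; theta, mu and sigma are illustrative names.
    """
    rng = np.random.default_rng(seed)
    x = np.empty(n_steps + 1)
    x[0] = x0
    sqrt_dt = np.sqrt(dt)
    for i in range(n_steps):
        # Mean-reverting drift plus Gaussian noise scaled by sqrt(dt).
        x[i + 1] = x[i] + theta * (mu - x[i]) * dt + sigma * sqrt_dt * rng.standard_normal()
    return x

if __name__ == "__main__":
    path = simulate_ou()
    # For the stationary process the mean should be near mu and the variance
    # near sigma**2 / (2 * theta); check against the second half of the path.
    tail = path[len(path) // 2:]
    print("tail mean     :", tail.mean())
    print("tail variance :", tail.var(), "vs sigma^2/(2*theta) =", 0.3**2 / 2)
```

For long runs, an exact update drawn from the Gaussian transition density (whose mean and variance are given above) avoids the small discretization bias of the Euler step.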
Entropic gravity
Entropic gravity, also known as emergent gravity, is a theory in modern physics that describes gravity as an entropic force—a force with macro-scale homogeneity but which is subject to quantum-level disorder—and not a fundamental interaction. The theory, based on string theory, black hole physics, and quantum information theory, describes gravity as an emergent phenomenon that springs from the quantum entanglement of small bits of spacetime information. As such, entropic gravity is said to abide by the second law of thermodynamics under which the entropy of a physical system tends to increase over time. The theory has been controversial within the physics community but has sparked research and experiments to test its validity. Significance At its simplest, the theory holds that when gravity becomes vanishingly weak—levels seen only at interstellar distances—it diverges from its classically understood nature and its strength begins to decay linearly with distance from a mass. Entropic gravity provides an underlying framework to explain Modified Newtonian Dynamics, or MOND, which holds that at a gravitational acceleration threshold of approximately , gravitational strength begins to vary inversely linearly with distance from a mass rather than the normal inverse-square law of the distance. This is an exceedingly low threshold, measuring only 12 trillionths gravity's strength at Earth's surface; an object dropped from a height of one meter would fall for 36 hours were Earth's gravity this weak. It is also 3,000 times less than the remnant of Earth's gravitational field that exists at the point where crossed the solar system's heliopause and entered interstellar space. The theory claims to be consistent with both the macro-level observations of Newtonian gravity as well as Einstein's theory of general relativity and its gravitational distortion of spacetime. Importantly, the theory also explains (without invoking the existence of dark matter and tweaking of its new free parameters) why galactic rotation curves differ from the profile expected with visible matter. The theory of entropic gravity posits that what has been interpreted as unobserved dark matter is the product of quantum effects that can be regarded as a form of positive dark energy that lifts the vacuum energy of space from its ground state value. A central tenet of the theory is that the positive dark energy leads to a thermal-volume law contribution to entropy that overtakes the area law of anti-de Sitter space precisely at the cosmological horizon. Thus this theory provides an alternative explanation for what mainstream physics currently attributes to dark matter. Since dark matter is believed to compose the vast majority of the universe's mass, a theory in which it is absent has huge implications for cosmology. In addition to continuing theoretical work in various directions, there are many experiments planned or in progress to actually detect or better determine the properties of dark matter (beyond its gravitational attraction), all of which would be undermined by an alternative explanation for the gravitational effects currently attributed to this elusive entity. Origin The thermodynamic description of gravity has a history that goes back at least to research on black hole thermodynamics by Bekenstein and Hawking in the mid-1970s. These studies suggest a deep connection between gravity and thermodynamics, which describes the behavior of heat. 
In 1995, Jacobson demonstrated that the Einstein field equations describing relativistic gravitation can be derived by combining general thermodynamic considerations with the equivalence principle. Subsequently, other physicists, most notably Thanu Padmanabhan, began to explore links between gravity and entropy. Erik Verlinde's theory In 2009, Erik Verlinde proposed a conceptual model that describes gravity as an entropic force. He argues (similar to Jacobson's result) that gravity is a consequence of the "information associated with the positions of material bodies". This model combines the thermodynamic approach to gravity with Gerard 't Hooft's holographic principle. It implies that gravity is not a fundamental interaction, but an emergent phenomenon which arises from the statistical behavior of microscopic degrees of freedom encoded on a holographic screen. The paper drew a variety of responses from the scientific community. Andrew Strominger, a string theorist at Harvard said "Some people have said it can't be right, others that it's right and we already knew it – that it’s right and profound, right and trivial." In July 2011, Verlinde presented the further development of his ideas in a contribution to the Strings 2011 conference, including an explanation for the origin of dark matter. Verlinde's article also attracted a large amount of media exposure, and led to immediate follow-up work in cosmology, the dark energy hypothesis, cosmological acceleration, cosmological inflation, and loop quantum gravity. Also, a specific microscopic model has been proposed that indeed leads to entropic gravity emerging at large scales. Entropic gravity can emerge from quantum entanglement of local Rindler horizons. Derivation of the law of gravitation The law of gravitation is derived from classical statistical mechanics applied to the holographic principle, that states that the description of a volume of space can be thought of as bits of binary information, encoded on a boundary to that region, a closed surface of area . The information is evenly distributed on the surface with each bit requiring an area equal to , the so-called Planck area, from which can thus be computed: where is the Planck length. The Planck length is defined as: where is the universal gravitational constant, is the speed of light, and is the reduced Planck constant. When substituted in the equation for we find: The statistical equipartition theorem defines the temperature of a system with degrees of freedom in terms of its energy such that: where is the Boltzmann constant. This is the equivalent energy for a mass according to: The effective temperature experienced due to a uniform acceleration in a vacuum field according to the Unruh effect is: where is that acceleration, which for a mass would be attributed to a force according to Newton's second law of motion: Taking the holographic screen to be a sphere of radius , the surface area would be given by: From algebraic substitution of these into the above relations, one derives Newton's law of universal gravitation: Note that this derivation assumes that the number of the binary bits of information is equal to the number of the degrees of freedom. Criticism and experimental tests Entropic gravity, as proposed by Verlinde in his original article, reproduces the Einstein field equations and, in a Newtonian approximation, a potential for gravitational forces. 
Since its results do not differ from Newtonian gravity except in regions of extremely small gravitational fields, testing the theory with earth-based laboratory experiments does not appear feasible. Spacecraft-based experiments performed at Lagrangian points within our solar system would be expensive and challenging. Even so, entropic gravity in its current form has been severely challenged on formal grounds. Matt Visser has shown that the attempt to model conservative forces in the general Newtonian case (i.e. for arbitrary potentials and an unlimited number of discrete masses) leads to unphysical requirements for the required entropy and involves an unnatural number of temperature baths of differing temperatures. Visser concludes: For the derivation of Einstein's equations from an entropic gravity perspective, Tower Wang shows that the inclusion of energy-momentum conservation and cosmological homogeneity and isotropy requirements severely restricts a wide class of potential modifications of entropic gravity, some of which have been used to generalize entropic gravity beyond the singular case of an entropic model of Einstein's equations. Wang asserts that: Cosmological observations using available technology can be used to test the theory. On the basis of lensing by the galaxy cluster Abell 1689, Nieuwenhuizen concludes that EG is strongly ruled out unless additional (dark) matter-like eV neutrinos is added. A team from Leiden Observatory statistically observing the lensing effect of gravitational fields at large distances from the centers of more than 33,000 galaxies found that those gravitational fields were consistent with Verlinde's theory. Using conventional gravitational theory, the fields implied by these observations (as well as from measured galaxy rotation curves) could only be ascribed to a particular distribution of dark matter. In June 2017, a study by Princeton University researcher Kris Pardo asserted that Verlinde's theory is inconsistent with the observed rotation velocities of dwarf galaxies. Another theory of entropy based on geometric considerations (Quantitative Geometrical Thermodynamics, QGT) provides an entropic basis for the holographic principle and also offers another explanation for galaxy rotation curves as being due to the entropic influence of the central supermassive blackhole found in the center of a spiral galaxy. In 2018, Zhi-Wei Wang and Samuel L. Braunstein showed that, while spacetime surfaces near black holes (called stretched horizons) do obey an analog of the first law of thermodynamics, ordinary spacetime surfaces — including holographic screens — generally do not, thus undermining the key thermodynamic assumption of the emergent gravity program. In his 1964 lecture on the Relation of Mathematics and Physics, Richard Feynman describes a related theory for gravity where the gravitational force is explained due to an entropic force due to unspecified microscopic degrees of freedom. However, he immediately points out that the resulting theory cannot be correct as the fluctuation-dissipation theorem would also lead to friction which would slow down the motion of the planets which contradicts observations. Entropic gravity and quantum coherence Another criticism of entropic gravity is that entropic processes should, as critics argue, break quantum coherence. There is no theoretical framework quantitatively describing the strength of such decoherence effects, though. 
The temperature of the gravitational field in Earth's gravity well is extremely small. Experiments with ultra-cold neutrons in the gravitational field of Earth are claimed to show that neutrons lie on discrete levels exactly as predicted by the Schrödinger equation, treating gravitation as a conservative potential field without any decohering factors. Archil Kobakhidze argues that this result disproves entropic gravity, while Chaichian et al. suggest a potential loophole in the argument for weak gravitational fields such as those affecting Earth-bound experiments. See also Footnotes References Further reading It from bit – Entropic gravity for pedestrians, J. Koelman Gravity: the inside story, T. Padmanabhan Experiments Show Gravity Is Not an Emergent Phenomenon Gravity As An Entropic Force
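The "Derivation of the law of gravitation" section above chains the holographic bit count, the equipartition theorem, the mass-energy relation and the Unruh temperature together to recover Newton's law. The sketch below simply carries those algebraic steps through numerically for Earth-like values; the constants are approximate and the script is an illustrative check of the derivation as stated, not an independent result.

```python
import math

# Approximate physical constants (SI units).
G    = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c    = 2.998e8     # speed of light, m/s
hbar = 1.055e-34   # reduced Planck constant, J s
k_B  = 1.381e-23   # Boltzmann constant, J/K

M_earth = 5.972e24  # kg
R_earth = 6.371e6   # m

# Holographic screen: a sphere of radius R, one bit per Planck area.
l_P2 = hbar * G / c**3               # Planck area
A    = 4.0 * math.pi * R_earth**2    # screen area
N    = A / l_P2                      # number of bits on the screen

# Equipartition of the mass-energy over the N degrees of freedom of the screen.
E = M_earth * c**2
T = 2.0 * E / (N * k_B)              # screen temperature

# Unruh relation T = hbar * a / (2 pi c k_B), inverted for the acceleration.
a_entropic = 2.0 * math.pi * c * k_B * T / hbar

# Conventional Newtonian surface gravity for comparison.
a_newton = G * M_earth / R_earth**2

print(f"entropic-derivation acceleration: {a_entropic:.4f} m/s^2")
print(f"Newtonian GM/R^2:                 {a_newton:.4f} m/s^2")
```

Both numbers agree, as they must: eliminating N and T from the three relations reproduces a = GM/R² exactly.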
Homothety
In mathematics, a homothety (or homothecy, or homogeneous dilation) is a transformation of an affine space determined by a point S called its center and a nonzero number called its ratio, which sends point to a point by the rule for a fixed number . Using position vectors: . In case of (Origin): , which is a uniform scaling and shows the meaning of special choices for : for one gets the identity mapping, for one gets the reflection at the center, For one gets the inverse mapping defined by . In Euclidean geometry homotheties are the similarities that fix a point and either preserve (if ) or reverse (if ) the direction of all vectors. Together with the translations, all homotheties of an affine (or Euclidean) space form a group, the group of dilations or homothety-translations. These are precisely the affine transformations with the property that the image of every line g is a line parallel to g. In projective geometry, a homothetic transformation is a similarity transformation (i.e., fixes a given elliptic involution) that leaves the line at infinity pointwise invariant. In Euclidean geometry, a homothety of ratio multiplies distances between points by , areas by and volumes by . Here is the ratio of magnification or dilation factor or scale factor or similitude ratio. Such a transformation can be called an enlargement if the scale factor exceeds 1. The above-mentioned fixed point S is called homothetic center or center of similarity or center of similitude. The term, coined by French mathematician Michel Chasles, is derived from two Greek elements: the prefix homo-, meaning "similar", and thesis, meaning "position". It describes the relationship between two figures of the same shape and orientation. For example, two Russian dolls looking in the same direction can be considered homothetic. Homotheties are used to scale the contents of computer screens; for example, smartphones, notebooks, and laptops. Properties The following properties hold in any dimension. Mapping lines, line segments and angles A homothety has the following properties: A line is mapped onto a parallel line. Hence: angles remain unchanged. The ratio of two line segments is preserved. Both properties show: A homothety is a similarity. Derivation of the properties: In order to make calculations easy it is assumed that the center is the origin: . A line with parametric representation is mapped onto the point set with equation , which is a line parallel to . The distance of two points is and the distance between their images. Hence, the ratio (quotient) of two line segments remains unchanged . In case of the calculation is analogous but a little extensive. Consequences: A triangle is mapped on a similar one. The homothetic image of a circle is a circle. The image of an ellipse is a similar one. i.e. the ratio of the two axes is unchanged. Graphical constructions using the intercept theorem If for a homothety with center the image of a point is given (see diagram) then the image of a second point , which lies not on line can be constructed graphically using the intercept theorem: is the common point th two lines and . The image of a point collinear with can be determined using . using a pantograph Before computers became ubiquitous, scalings of drawings were done by using a pantograph, a tool similar to a compass. Construction and geometrical background: Take 4 rods and assemble a mobile parallelogram with vertices such that the two rods meeting at are prolonged at the other end as shown in the diagram. Choose the ratio . 
On the prolonged rods mark the two points such that and . This is the case if (Instead of the location of the center can be prescribed. In this case the ratio is .) Attach the mobile rods rotatable at point . Vary the location of point and mark at each time point . Because of (see diagram) one gets from the intercept theorem that the points are collinear (lie on a line) and equation holds. That shows: the mapping is a homothety with center and ratio . Composition The composition of two homotheties with the same center is again a homothety with center . The homotheties with center form a group. The composition of two homotheties with different centers and its ratios is in case of a homothety with its center on line and ratio or in case of a translation in direction . Especially, if (point reflections). Derivation: For the composition of the two homotheties with centers with one gets by calculation for the image of point : . Hence, the composition is in case of a translation in direction by vector . in case of point is a fixpoint (is not moved) and the composition . is a homothety with center and ratio . lies on line . The composition of a homothety and a translation is a homothety. Derivation: The composition of the homothety and the translation is which is a homothety with center and ratio . In homogenous coordinates The homothety with center can be written as the composition of a homothety with center and a translation: . Hence can be represented in homogeneous coordinates by the matrix: A pure homothety linear transformation is also conformal because it is composed of translation and uniform scale. See also Scaling (geometry) a similar notion in vector spaces Homothetic center, the center of a homothetic transformation taking one of a pair of shapes into the other The Hadwiger conjecture on the number of strictly smaller homothetic copies of a convex body that may be needed to cover it Homothetic function (economics), a function of the form f(U(y)) in which U is a homogeneous function and f is a monotonically increasing function. Notes References H.S.M. Coxeter, "Introduction to geometry" , Wiley (1961), p. 94 External links Homothety, interactive applet from Cut-the-Knot. Transformation (function)
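A homothety with center S and ratio k sends a point X to S + k(X − S). The short sketch below, with illustrative names only, applies this rule to a few points and checks two of the properties listed above: images of collinear points remain collinear on a line parallel to the original, and ratios of segment lengths are preserved (each length scales by |k|). It also builds the homogeneous-coordinate matrix mentioned in the final subsection, as a composition of a uniform scaling and a translation.

```python
import numpy as np

def homothety(center, k):
    """Map X -> S + k * (X - S) for center S and nonzero ratio k."""
    center = np.asarray(center, dtype=float)
    return lambda x: center + k * (np.asarray(x, dtype=float) - center)

def homothety_matrix(center, k):
    """3x3 homogeneous-coordinate matrix of the same 2D map: a uniform
    scaling by k composed with the translation (1 - k) * S."""
    sx, sy = center
    return np.array([
        [k,   0.0, (1.0 - k) * sx],
        [0.0, k,   (1.0 - k) * sy],
        [0.0, 0.0, 1.0],
    ])

def cross2(a, b):
    """z-component of the 2D cross product; zero iff a and b are parallel."""
    return a[0] * b[1] - a[1] * b[0]

if __name__ == "__main__":
    S, k = (1.0, 2.0), -0.5          # center and ratio (k < 0 reverses direction)
    h = homothety(S, k)

    P, Q, R = np.array([0.0, 0.0]), np.array([2.0, 1.0]), np.array([4.0, 2.0])
    P1, Q1, R1 = h(P), h(Q), h(R)

    # Images of collinear points stay collinear, on a line parallel to PQ.
    print("collinearity of images (should be ~0):  ", cross2(Q1 - P1, R1 - P1))
    print("parallel to original line (should be ~0):", cross2(Q1 - P1, Q - P))

    # Ratios of segment lengths are preserved: each length scales by |k|.
    print("|P'Q'| / |PQ| =", np.linalg.norm(Q1 - P1) / np.linalg.norm(Q - P))

    # The homogeneous-coordinate matrix gives the same image points.
    M = homothety_matrix(S, k)
    print("matrix image of Q:", (M @ np.array([Q[0], Q[1], 1.0]))[:2], " direct:", Q1)
```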
Continuity equation
A continuity equation or transport equation is an equation that describes the transport of some quantity. It is particularly simple and powerful when applied to a conserved quantity, but it can be generalized to apply to any extensive quantity. Since mass, energy, momentum, electric charge and other natural quantities are conserved under their respective appropriate conditions, a variety of physical phenomena may be described using continuity equations. Continuity equations are a stronger, local form of conservation laws. For example, a weak version of the law of conservation of energy states that energy can neither be created nor destroyed—i.e., the total amount of energy in the universe is fixed. This statement does not rule out the possibility that a quantity of energy could disappear from one point while simultaneously appearing at another point. A stronger statement is that energy is locally conserved: energy can neither be created nor destroyed, nor can it "teleport" from one place to another—it can only move by a continuous flow. A continuity equation is the mathematical way to express this kind of statement. For example, the continuity equation for electric charge states that the amount of electric charge in any volume of space can only change by the amount of electric current flowing into or out of that volume through its boundaries. Continuity equations more generally can include "source" and "sink" terms, which allow them to describe quantities that are often but not always conserved, such as the density of a molecular species which can be created or destroyed by chemical reactions. In an everyday example, there is a continuity equation for the number of people alive; it has a "source term" to account for people being born, and a "sink term" to account for people dying. Any continuity equation can be expressed in an "integral form" (in terms of a flux integral), which applies to any finite region, or in a "differential form" (in terms of the divergence operator) which applies at a point. Continuity equations underlie more specific transport equations such as the convection–diffusion equation, Boltzmann transport equation, and Navier–Stokes equations. Flows governed by continuity equations can be visualized using a Sankey diagram. General equation Definition of flux A continuity equation is useful when a flux can be defined. To define flux, first there must be a quantity which can flow or move, such as mass, energy, electric charge, momentum, number of molecules, etc. Let be the volume density of this quantity, that is, the amount of per unit volume. The way that this quantity is flowing is described by its flux. The flux of is a vector field, which we denote as j. Here are some examples and properties of flux: The dimension of flux is "amount of flowing per unit time, through a unit area". For example, in the mass continuity equation for flowing water, if 1 gram per second of water is flowing through a pipe with cross-sectional area 1 cm2, then the average mass flux inside the pipe is , and its direction is along the pipe in the direction that the water is flowing. Outside the pipe, where there is no water, the flux is zero. 
If there is a velocity field which describes the relevant flow—in other words, if all of the quantity at a point is moving with velocity —then the flux is by definition equal to the density times the velocity field: For example, if in the mass continuity equation for flowing water, is the water's velocity at each point, and is the water's density at each point, then would be the mass flux, also known as the material discharge. In a well-known example, the flux of electric charge is the electric current density. If there is an imaginary surface , then the surface integral of flux over is equal to the amount of that is passing through the surface per unit time: in which is a surface integral. (Note that the concept that is here called "flux" is alternatively termed flux density in some literature, in which context "flux" denotes the surface integral of flux density. See the main article on Flux for details.) Integral form The integral form of the continuity equation states that: The amount of in a region increases when additional flows inward through the surface of the region, and decreases when it flows outward; The amount of in a region increases when new is created inside the region, and decreases when is destroyed; Apart from these two processes, there is no other way for the amount of in a region to change. Mathematically, the integral form of the continuity equation expressing the rate of increase of within a volume is: where is any imaginary closed surface, that encloses a volume , denotes a surface integral over that closed surface, is the total amount of the quantity in the volume , is the flux of , is time, is the net rate that is being generated inside the volume per unit time. When is being generated, it is called a source of , and it makes more positive. When is being destroyed, it is called a sink of , and it makes more negative. This term is sometimes written as or the total change of q from its generation or destruction inside the control volume. In a simple example, could be a building, and could be the number of people in the building. The surface would consist of the walls, doors, roof, and foundation of the building. Then the continuity equation states that the number of people in the building increases when people enter the building (an inward flux through the surface), decreases when people exit the building (an outward flux through the surface), increases when someone in the building gives birth (a source, ), and decreases when someone in the building dies (a sink, ). Differential form By the divergence theorem, a general continuity equation can also be written in a "differential form": where is divergence, is the density of the amount (i.e. the quantity per unit volume), is the flux density of (i.e. j = ρv, where v is the vector field describing the movement of the quantity ), is time, is the generation of per unit volume per unit time. Terms that generate (i.e., ) or remove (i.e., ) are referred to as a "sources" and "sinks" respectively. This general equation may be used to derive any continuity equation, ranging from as simple as the volume continuity equation to as complicated as the Navier–Stokes equations. This equation also generalizes the advection equation. Other equations in physics, such as Gauss's law of the electric field and Gauss's law for gravity, have a similar mathematical form to the continuity equation, but are not usually referred to by the term "continuity equation", because in those cases does not represent the flow of a real physical quantity. 
In the case that is a conserved quantity that cannot be created or destroyed (such as energy), and the equations become: Electromagnetism In electromagnetic theory, the continuity equation is an empirical law expressing (local) charge conservation. Mathematically it is an automatic consequence of Maxwell's equations, although charge conservation is more fundamental than Maxwell's equations. It states that the divergence of the current density (in amperes per square meter) is equal to the negative rate of change of the charge density (in coulombs per cubic meter), Current is the movement of charge. The continuity equation says that if charge is moving out of a differential volume (i.e., divergence of current density is positive) then the amount of charge within that volume is going to decrease, so the rate of change of charge density is negative. Therefore, the continuity equation amounts to a conservation of charge. If magnetic monopoles exist, there would be a continuity equation for monopole currents as well, see the monopole article for background and the duality between electric and magnetic currents. Fluid dynamics In fluid dynamics, the continuity equation states that the rate at which mass enters a system is equal to the rate at which mass leaves the system plus the accumulation of mass within the system. The differential form of the continuity equation is: where is fluid density, is time, is the flow velocity vector field. The time derivative can be understood as the accumulation (or loss) of mass in the system, while the divergence term represents the difference in flow in versus flow out. In this context, this equation is also one of the Euler equations (fluid dynamics). The Navier–Stokes equations form a vector continuity equation describing the conservation of linear momentum. If the fluid is incompressible (volumetric strain rate is zero), the mass continuity equation simplifies to a volume continuity equation: which means that the divergence of the velocity field is zero everywhere. Physically, this is equivalent to saying that the local volume dilation rate is zero, hence a flow of water through a converging pipe will adjust solely by increasing its velocity as water is largely incompressible. Computer vision In computer vision, optical flow is the pattern of apparent motion of objects in a visual scene. Under the assumption that brightness of the moving object did not change between two image frames, one can derive the optical flow equation as: where is time, coordinates in the image, is the image intensity at image coordinate and time , is the optical flow velocity vector at image coordinate and time Energy and heat Conservation of energy says that energy cannot be created or destroyed. (See below for the nuances associated with general relativity.) Therefore, there is a continuity equation for energy flow: where , local energy density (energy per unit volume), , energy flux (transfer of energy per unit cross-sectional area per unit time) as a vector, An important practical example is the flow of heat. When heat flows inside a solid, the continuity equation can be combined with Fourier's law (heat flux is proportional to temperature gradient) to arrive at the heat equation. The equation of heat flow may also have source terms: Although energy cannot be created or destroyed, heat can be created from other types of energy, for example via friction or joule heating. 
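The "Energy and heat" paragraph above notes that combining the energy continuity equation with Fourier's law yields the heat equation. The following one-dimensional finite-difference sketch makes that concrete: the flux is computed from the gradient, its discrete divergence updates the density, and with insulated ends and no source term the integrated quantity is conserved, as the continuity equation requires. Grid sizes, the diffusivity value and the function names are illustrative assumptions.

```python
import numpy as np

def heat_step(u, alpha, dx, dt):
    """One explicit step of du/dt = alpha * d2u/dx2, written as the continuity
    equation du/dt = -d(j)/dx with flux j = -alpha * du/dx (Fourier's law) and
    no source term. Boundary faces carry zero flux (insulated ends)."""
    flux = -alpha * np.diff(u) / dx        # flux at the interior cell faces
    div = np.zeros_like(u)
    div[:-1] += flux / dx                  # each cell's right-face contribution
    div[1:]  -= flux / dx                  # and its left-face contribution
    return u - dt * div

if __name__ == "__main__":
    nx, L = 200, 1.0
    dx = L / nx
    alpha = 1e-3
    dt = 0.4 * dx**2 / alpha               # respect the explicit stability limit
    x = (np.arange(nx) + 0.5) * dx

    u = np.exp(-((x - 0.5) / 0.05) ** 2)   # initial temperature bump
    total0 = u.sum() * dx

    for _ in range(2000):
        u = heat_step(u, alpha, dx, dt)

    # With insulated ends and no source term, the integral of u is conserved.
    print("initial integral:", total0)
    print("final integral:  ", u.sum() * dx)
```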
Probability distributions If there is a quantity that moves continuously according to a stochastic (random) process, like the location of a single dissolved molecule with Brownian motion, then there is a continuity equation for its probability distribution. The flux in this case is the probability per unit area per unit time that the particle passes through a surface. According to the continuity equation, the negative divergence of this flux equals the rate of change of the probability density. The continuity equation reflects the fact that the molecule is always somewhere—the integral of its probability distribution is always equal to 1—and that it moves by a continuous motion (no teleporting). Quantum mechanics Quantum mechanics is another domain where there is a continuity equation related to conservation of probability. The terms in the equation require the following definitions, and are slightly less obvious than the other examples above, so they are outlined here: The wavefunction for a single particle in position space (rather than momentum space), that is, a function of position and time , . The probability density function is The probability of finding the particle within at is denoted and defined by The probability current (probability flux) is With these definitions the continuity equation reads: Either form may be quoted. Intuitively, the above quantities indicate this represents the flow of probability. The chance of finding the particle at some position and time flows like a fluid; hence the term probability current, a vector field. The particle itself does not flow deterministically in this vector field. Semiconductor The total current flow in the semiconductor consists of drift current and diffusion current of both the electrons in the conduction band and holes in the valence band. General form for electrons in one-dimension: where: n is the local concentration of electrons is electron mobility E is the electric field across the depletion region Dn is the diffusion coefficient for electrons Gn is the rate of generation of electrons Rn is the rate of recombination of electrons Similarly, for holes: where: p is the local concentration of holes is hole mobility E is the electric field across the depletion region Dp is the diffusion coefficient for holes Gp is the rate of generation of holes Rp is the rate of recombination of holes Derivation This section presents a derivation of the equation above for electrons. A similar derivation can be found for the equation for holes. Consider the fact that the number of electrons is conserved across a volume of semiconductor material with cross-sectional area, A, and length, dx, along the x-axis. More precisely, one can say: Mathematically, this equality can be written: Here J denotes current density(whose direction is against electron flow by convention) due to electron flow within the considered volume of the semiconductor. It is also called electron current density. Total electron current density is the sum of drift current and diffusion current densities: Therefore, we have Applying the product rule results in the final expression: Solution The key to solving these equations in real devices is whenever possible to select regions in which most of the mechanisms are negligible so that the equations reduce to a much simpler form. Relativistic version Special relativity The notation and tools of special relativity, especially 4-vectors and 4-gradients, offer a convenient way to write any continuity equation. 
The density of a quantity and its current can be combined into a 4-vector called a 4-current: where is the speed of light. The 4-divergence of this current is: where is the 4-gradient and is an index labeling the spacetime dimension. Then the continuity equation is: in the usual case where there are no sources or sinks, that is, for perfectly conserved quantities like energy or charge. This continuity equation is manifestly ("obviously") Lorentz invariant. Examples of continuity equations often written in this form include electric charge conservation where is the electric 4-current; and energy–momentum conservation where is the stress–energy tensor. General relativity In general relativity, where spacetime is curved, the continuity equation (in differential form) for energy, charge, or other conserved quantities involves the covariant divergence instead of the ordinary divergence. For example, the stress–energy tensor is a second-order tensor field containing energy–momentum densities, energy–momentum fluxes, and shear stresses, of a mass-energy distribution. The differential form of energy–momentum conservation in general relativity states that the covariant divergence of the stress-energy tensor is zero: This is an important constraint on the form the Einstein field equations take in general relativity. However, the ordinary divergence of the stress–energy tensor does not necessarily vanish: The right-hand side strictly vanishes for a flat geometry only. As a consequence, the integral form of the continuity equation is difficult to define and not necessarily valid for a region within which spacetime is significantly curved (e.g. around a black hole, or across the whole universe). Particle physics Quarks and gluons have color charge, which is always conserved like electric charge, and there is a continuity equation for such color charge currents (explicit expressions for currents are given at gluon field strength tensor). There are many other quantities in particle physics which are often or always conserved: baryon number (proportional to the number of quarks minus the number of antiquarks), electron number, mu number, tau number, isospin, and others. Each of these has a corresponding continuity equation, possibly including source / sink terms. Noether's theorem One reason that conservation equations frequently occur in physics is Noether's theorem. This states that whenever the laws of physics have a continuous symmetry, there is a continuity equation for some conserved physical quantity. The three most famous examples are: The laws of physics are invariant with respect to time-translation—for example, the laws of physics today are the same as they were yesterday. This symmetry leads to the continuity equation for conservation of energy. The laws of physics are invariant with respect to space-translation—for example, a rocket in outer space is not subject to different forces or potentials if it is displaced in any given direction (eg. x, y, z), leading to the conservation of the three components of momentum conservation of momentum. The laws of physics are invariant with respect to orientation—for example, floating in outer space, there is no measurement you can do to say "which way is up"; the laws of physics are the same regardless of how you are oriented. This symmetry leads to the continuity equation for conservation of angular momentum. 
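The integral form stated earlier says that the amount of the quantity inside a region changes only through the flux across its boundary (absent sources and sinks). This can be checked numerically for a simple one-dimensional advected profile; the velocity, the density profile and the interval below are illustrative choices, not drawn from the article.

```python
import numpy as np

# Advected density rho(x, t) = f(x - v t) with flux j = rho * v and no sources.
# Integral form in 1D: d/dt (amount inside [a, b]) = j(a, t) - j(b, t).
v = 1.5                            # constant advection velocity (illustrative)
f = lambda s: np.exp(-s**2)        # density profile (illustrative)

a, b = -1.0, 2.0                   # region of interest
x = np.linspace(a, b, 4001)
dt = 1e-4

def amount(t):
    """Trapezoid-rule integral of rho over [a, b] at time t."""
    rho = f(x - v * t)
    return np.sum(0.5 * (rho[1:] + rho[:-1]) * np.diff(x))

t0 = 0.3
# Left-hand side: centred finite-difference estimate of the rate of change.
lhs = (amount(t0 + dt) - amount(t0 - dt)) / (2 * dt)
# Right-hand side: net inward flux through the two boundary points.
rhs = v * f(a - v * t0) - v * f(b - v * t0)

print("d/dt of amount in [a, b]:", lhs)
print("net boundary flux in    :", rhs)
```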
See also Conservation law Conservation form Dissipative system References Further reading
Lagrangian and Eulerian specification of the flow field
In classical field theories, the Lagrangian specification of the flow field is a way of looking at fluid motion where the observer follows an individual fluid parcel as it moves through space and time. Plotting the position of an individual parcel through time gives the pathline of the parcel. This can be visualized as sitting in a boat and drifting down a river. The Eulerian specification of the flow field is a way of looking at fluid motion that focuses on specific locations in the space through which the fluid flows as time passes. This can be visualized by sitting on the bank of a river and watching the water pass the fixed location. The Lagrangian and Eulerian specifications of the flow field are sometimes loosely denoted as the Lagrangian and Eulerian frame of reference. However, in general both the Lagrangian and Eulerian specification of the flow field can be applied in any observer's frame of reference, and in any coordinate system used within the chosen frame of reference. The Lagrangian and Eulerian specifications are named after Joseph-Louis Lagrange and Leonhard Euler, respectively. These specifications are reflected in computational fluid dynamics, where "Eulerian" simulations employ a fixed mesh while "Lagrangian" ones (such as meshfree simulations) feature simulation nodes that may move following the velocity field. History Leonhard Euler is credited of introducing both specifications in two publications written in 1755 and 1759. Joseph-Louis Lagrange studied the equations of motion in connection to the principle of least action in 1760, later in a treaty of fluid mechanics in 1781, and thirdly in his book Mécanique analytique. In this book Lagrange starts with the Lagrangian specification but later converts them into the Eulerian specification. Description In the Eulerian specification of a field, the field is represented as a function of position x and time t. For example, the flow velocity is represented by a function On the other hand, in the Lagrangian specification, individual fluid parcels are followed through time. The fluid parcels are labelled by some (time-independent) vector field x0. (Often, x0 is chosen to be the position of the center of mass of the parcels at some initial time t0. It is chosen in this particular manner to account for the possible changes of the shape over time. Therefore the center of mass is a good parameterization of the flow velocity u of the parcel.) In the Lagrangian description, the flow is described by a function giving the position of the particle labeled x0 at time t. The two specifications are related as follows: because both sides describe the velocity of the particle labeled x0 at time t. Within a chosen coordinate system, x0 and x are referred to as the Lagrangian coordinates and Eulerian coordinates of the flow respectively. Material derivative The Lagrangian and Eulerian specifications of the kinematics and dynamics of the flow field are related by the material derivative (also called the Lagrangian derivative, convective derivative, substantial derivative, or particle derivative). Suppose we have a flow field u, and we are also given a generic field with Eulerian specification F(x, t). Now one might ask about the total rate of change of F experienced by a specific flow parcel. This can be computed as where ∇ denotes the nabla operator with respect to x, and the operator u⋅∇ is to be applied to each component of F. 
This tells us that the total rate of change of the function F as a fluid parcel moves through a flow field described by its Eulerian specification u is equal to the sum of the local rate of change and the convective rate of change of F. This is a consequence of the chain rule, since we are differentiating the function F(X(x0, t), t) with respect to t. Conservation laws for a unit mass have a Lagrangian form, which together with mass conservation produce Eulerian conservation; by contrast, when fluid particles can exchange a quantity (like energy or momentum), only Eulerian conservation laws exist. See also Brewer-Dobson Circulation Conservation form Contour advection Displacement field (mechanics) Equivalent latitude Generalized Lagrangian mean Trajectory (fluid mechanics) Liouville's theorem (Hamiltonian) Lagrangian particle tracking Rolling Streamlines, streaklines, and pathlines Immersed Boundary Method Semi-Lagrangian scheme Stochastic Eulerian Lagrangian methods Notes References External links Objectivity in classical continuum mechanics: Motions, Eulerian and Lagrangian functions; Deformation gradient; Lie derivatives; Velocity-addition formula, Coriolis; Objectivity.
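The relation between the two descriptions, DF/Dt = ∂F/∂t + (u · ∇)F, can be checked numerically for a simple one-dimensional flow. The velocity field, the scalar field F and the particle label used below are illustrative choices rather than anything taken from the article.

```python
import numpy as np

# Illustrative 1D flow and scalar field:
#   u(x, t) = alpha * x   -> particle path X(t) = x0 * exp(alpha * t)
#   F(x, t) = x**2 + t
alpha = 0.7
u = lambda x, t: alpha * x
F = lambda x, t: x**2 + t

x0, t0 = 1.3, 0.5                      # particle label and evaluation time
X = lambda t: x0 * np.exp(alpha * t)   # Lagrangian description of the path

h = 1e-5

# Lagrangian rate of change: differentiate F along the particle's path.
lagrangian = (F(X(t0 + h), t0 + h) - F(X(t0 - h), t0 - h)) / (2 * h)

# Eulerian evaluation of the material derivative at the same point and time.
x_here = X(t0)
dF_dt = (F(x_here, t0 + h) - F(x_here, t0 - h)) / (2 * h)
dF_dx = (F(x_here + h, t0) - F(x_here - h, t0)) / (2 * h)
material = dF_dt + u(x_here, t0) * dF_dx

print("d/dt of F along the path:", lagrangian)
print("material derivative     :", material)
```

The two printed values agree to the accuracy of the finite differences, which is exactly the statement that the Lagrangian and Eulerian rates of change are related by the material derivative.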
Mechanical wave
In physics, a mechanical wave is a wave that is an oscillation of matter, and therefore transfers energy through a material medium. (Vacuum is, from classical perspective, a non-material medium, where electromagnetic waves propagate.) While waves can move over long distances, the movement of the medium of transmission—the material—is limited. Therefore, the oscillating material does not move far from its initial equilibrium position. Mechanical waves can be produced only in media which possess elasticity and inertia. There are three types of mechanical waves: transverse waves, longitudinal waves, and surface waves. Some of the most common examples of mechanical waves are water waves, sound waves, and seismic waves. Like all waves, mechanical waves transport energy. This energy propagates in the same direction as the wave. A wave requires an initial energy input; once this initial energy is added, the wave travels through the medium until all its energy is transferred. In contrast, electromagnetic waves require no medium, but can still travel through one. Transverse wave A transverse wave is the form of a wave in which particles of medium vibrate about their mean position perpendicular to the direction of the motion of the wave. To see an example, move an end of a Slinky (whose other end is fixed) to the left-and-right of the Slinky, as opposed to to-and-fro. Light also has properties of a transverse wave, although it is an electromagnetic wave. Longitudinal wave Longitudinal waves cause the medium to vibrate parallel to the direction of the wave. It consists of multiple compressions and rarefactions. The rarefaction is the farthest distance apart in the longitudinal wave and the compression is the closest distance together. The speed of the longitudinal wave is increased in higher index of refraction, due to the closer proximity of the atoms in the medium that is being compressed. Sound is a longitudinal wave. Surface waves This type of wave travels along the surface or interface between two media. An example of a surface wave would be waves in a pool, or in an ocean, lake, or any other type of water body. There are two types of surface waves, namely Rayleigh waves and Love waves. Rayleigh waves, also known as ground roll, are waves that travel as ripples with motion similar to those of waves on the surface of water. Such waves are much slower than body waves, at roughly 90% of the velocity of for a typical homogeneous elastic medium. Rayleigh waves have energy losses only in two dimensions and are hence more destructive in earthquakes than conventional bulk waves, such as P-waves and S-waves, which lose energy in all three directions. A Love wave is a surface wave having horizontal waves that are shear or transverse to the direction of propagation. They usually travel slightly faster than Rayleigh waves, at about 90% of the body wave velocity, and have the largest amplitude. Examples Seismic waves Sound waves Wind waves on seas and lakes Vibration See also Acoustics Ultrasound Underwater acoustics References Waves Mechanics
Time derivative
A time derivative is a derivative of a function with respect to time, usually interpreted as the rate of change of the value of the function. The variable denoting time is usually written as . Notation A variety of notations are used to denote the time derivative. In addition to the normal (Leibniz's) notation, A very common short-hand notation used, especially in physics, is the 'over-dot'. I.E. (This is called Newton's notation) Higher time derivatives are also used: the second derivative with respect to time is written as with the corresponding shorthand of . As a generalization, the time derivative of a vector, say: is defined as the vector whose components are the derivatives of the components of the original vector. That is, Use in physics Time derivatives are a key concept in physics. For example, for a changing position , its time derivative is its velocity, and its second derivative with respect to time, , is its acceleration. Even higher derivatives are sometimes also used: the third derivative of position with respect to time is known as the jerk. See motion graphs and derivatives. A large number of fundamental equations in physics involve first or second time derivatives of quantities. Many other fundamental quantities in science are time derivatives of one another: force is the time derivative of momentum power is the time derivative of energy electric current is the time derivative of electric charge and so on. A common occurrence in physics is the time derivative of a vector, such as velocity or displacement. In dealing with such a derivative, both magnitude and orientation may depend upon time. Example: circular motion For example, consider a particle moving in a circular path. Its position is given by the displacement vector , related to the angle, θ, and radial distance, r, as defined in the figure: For this example, we assume that . Hence, the displacement (position) at any time t is given by This form shows the motion described by r(t) is in a circle of radius r because the magnitude of r(t) is given by using the trigonometric identity and where is the usual Euclidean dot product. With this form for the displacement, the velocity now is found. The time derivative of the displacement vector is the velocity vector. In general, the derivative of a vector is a vector made up of components each of which is the derivative of the corresponding component of the original vector. Thus, in this case, the velocity vector is: Thus the velocity of the particle is nonzero even though the magnitude of the position (that is, the radius of the path) is constant. The velocity is directed perpendicular to the displacement, as can be established using the dot product: Acceleration is then the time-derivative of velocity: The acceleration is directed inward, toward the axis of rotation. It points opposite to the position vector and perpendicular to the velocity vector. This inward-directed acceleration is called centripetal acceleration. In differential geometry In differential geometry, quantities are often expressed with respect to the local covariant basis, , where i ranges over the number of dimensions. The components of a vector expressed this way transform as a contravariant tensor, as shown in the expression , invoking Einstein summation convention. 
If we want to calculate the time derivatives of these components along a trajectory, so that we have , we can define a new operator, the invariant derivative , which will continue to return contravariant tensors: where (with being the jth coordinate) captures the components of the velocity in the local covariant basis, and are the Christoffel symbols for the coordinate system. Note that explicit dependence on t has been repressed in the notation. We can then write: as well as: In terms of the covariant derivative, , we have: Use in economics In economics, many theoretical models of the evolution of various economic variables are constructed in continuous time and therefore employ time derivatives. One situation involves a stock variable and its time derivative, a flow variable. Examples include: The flow of net fixed investment is the time derivative of the capital stock. The flow of inventory investment is the time derivative of the stock of inventories. The growth rate of the money supply is the time derivative of the money supply divided by the money supply itself. Sometimes the time derivative of a flow variable can appear in a model: The growth rate of output is the time derivative of the flow of output divided by output itself. The growth rate of the labor force is the time derivative of the labor force divided by the labor force itself. And sometimes there appears a time derivative of a variable which, unlike the examples above, is not measured in units of currency: The time derivative of a key interest rate can appear. The inflation rate is the growth rate of the price level—that is, the time derivative of the price level divided by the price level itself. See also Differential calculus Notation for differentiation Circular motion Centripetal force Spatial derivative Temporal rate References Differential calculus
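Returning to the circular-motion example above: with position r(t) = r (cos ωt, sin ωt), the velocity should be perpendicular to the position and the acceleration should point opposite to the position vector with magnitude ω²r. The short sketch below verifies this with numerical time derivatives; the values of r and ω are illustrative.

```python
import numpy as np

r, omega = 2.0, 1.5                   # radius and angular frequency (illustrative)
pos = lambda t: r * np.array([np.cos(omega * t), np.sin(omega * t)])

def time_derivative(f, t, h=1e-6):
    """Central-difference time derivative of a vector-valued function."""
    return (f(t + h) - f(t - h)) / (2 * h)

t = 0.8
x = pos(t)
v = time_derivative(pos, t)                                        # velocity
a = time_derivative(lambda s: time_derivative(pos, s), t, h=1e-4)  # acceleration

print("speed |v| vs omega*r      :", np.linalg.norm(v), omega * r)
print("v . x (should be ~0)      :", np.dot(v, x))
print("a vs -omega^2 * x         :", a, -omega**2 * x)
```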
0.785388
0.983617
0.77252
Stellar kinematics
In astronomy, stellar kinematics is the observational study or measurement of the kinematics or motions of stars through space. Stellar kinematics encompasses the measurement of stellar velocities in the Milky Way and its satellites as well as the internal kinematics of more distant galaxies. Measurement of the kinematics of stars in different subcomponents of the Milky Way including the thin disk, the thick disk, the bulge, and the stellar halo provides important information about the formation and evolutionary history of our Galaxy. Kinematic measurements can also identify exotic phenomena such as hypervelocity stars escaping from the Milky Way, which are interpreted as the result of gravitational encounters of binary stars with the supermassive black hole at the Galactic Center. Stellar kinematics is related to but distinct from the subject of stellar dynamics, which involves the theoretical study or modeling of the motions of stars under the influence of gravity. Stellar-dynamical models of systems such as galaxies or star clusters are often compared with or tested against stellar-kinematic data to study their evolutionary history and mass distributions, and to detect the presence of dark matter or supermassive black holes through their gravitational influence on stellar orbits. Space velocity The component of stellar motion toward or away from the Sun, known as radial velocity, can be measured from the spectrum shift caused by the Doppler effect. The transverse, or proper motion must be found by taking a series of positional determinations against more distant objects. Once the distance to a star is determined through astrometric means such as parallax, the space velocity can be computed. This is the star's actual motion relative to the Sun or the local standard of rest (LSR). The latter is typically taken as a position at the Sun's present location that is following a circular orbit around the Galactic Center at the mean velocity of those nearby stars with low velocity dispersion. The Sun's motion with respect to the LSR is called the "peculiar solar motion". The components of space velocity in the Milky Way's Galactic coordinate system are usually designated U, V, and W, given in km/s, with U positive in the direction of the Galactic Center, V positive in the direction of galactic rotation, and W positive in the direction of the North Galactic Pole. The peculiar motion of the Sun with respect to the LSR is (U, V, W) = (11.1, 12.24, 7.25) km/s, with statistical uncertainty (+0.69−0.75, +0.47−0.47, +0.37−0.36) km/s and systematic uncertainty (1, 2, 0.5) km/s. (Note that V is 7 km/s larger than estimated in 1998 by Dehnen et al.) Use of kinematic measurements Stellar kinematics yields important astrophysical information about stars, and the galaxies in which they reside. Stellar kinematics data combined with astrophysical modeling produces important information about the galactic system as a whole. Measured stellar velocities in the innermost regions of galaxies including the Milky Way have provided evidence that many galaxies host supermassive black holes at their center. In farther out regions of galaxies such as within the galactic halo, velocity measurements of globular clusters orbiting in these halo regions of galaxies provides evidence for dark matter. Both of these cases derive from the key fact that stellar kinematics can be related to the overall potential in which the stars are bound. 
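As a brief aside, the quantities described above combine in a simple way for a single star: the parallax gives the distance, the proper motion gives the tangential velocity via the standard factor 4.74 km/s (one astronomical unit per year), and the radial velocity completes the space velocity. The sketch below is plain Python; the numerical inputs are invented for illustration and no LSR correction is applied.

```python
import math

# Hypothetical measurements for a nearby star
parallax_arcsec = 0.050          # -> distance of 20 pc
proper_motion_arcsec_yr = 0.350
radial_velocity_km_s = -22.0     # negative = approaching

distance_pc = 1.0 / parallax_arcsec

# Tangential velocity: v_t [km/s] = 4.74 * mu [arcsec/yr] * d [pc]
v_tangential = 4.74 * proper_motion_arcsec_yr * distance_pc

# Total space velocity relative to the Sun (before any LSR correction)
v_space = math.hypot(v_tangential, radial_velocity_km_s)

print(f"d = {distance_pc:.1f} pc, v_t = {v_tangential:.1f} km/s, "
      f"v_space = {v_space:.1f} km/s")
```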
This means that if accurate stellar kinematics measurements are made for a star or group of stars orbiting in a certain region of a galaxy, the gravitational potential and mass distribution can be inferred given that the gravitational potential in which the star is bound produces its orbit and serves as the impetus for its stellar motion. Examples of using kinematics combined with modeling to construct an astrophysical system include: Rotation of the Milky Way's disc: From the proper motions and radial velocities of stars within the Milky way disc one can show that there is differential rotation. When combining these measurements of stars' proper motions and their radial velocities, along with careful modeling, it is possible to obtain a picture of the rotation of the Milky Way disc. The local character of galactic rotation in the solar neighborhood is encapsulated in the Oort constants. Structural components of the Milky Way: Using stellar kinematics, astronomers construct models which seek to explain the overall galactic structure in terms of distinct kinematic populations of stars. This is possible because these distinct populations are often located in specific regions of galaxies. For example, within the Milky Way, there are three primary components, each with its own distinct stellar kinematics: the disc, halo and bulge or bar. These kinematic groups are closely related to the stellar populations in the Milky Way, forming a strong correlation between the motion and chemical composition, thus indicating different formation mechanisms. For the Milky Way, the speed of disk stars is and an RMS (Root mean square) velocity relative to this speed of . For bulge population stars, the velocities are randomly oriented with a larger relative RMS velocity of and no net circular velocity. The Galactic stellar halo consists of stars with orbits that extend to the outer regions of the galaxy. Some of these stars will continually orbit far from the galactic center, while others are on trajectories which bring them to various distances from the galactic center. These stars have little to no average rotation. Many stars in this group belong to globular clusters which formed long ago and thus have a distinct formation history, which can be inferred from their kinematics and poor metallicities. The halo may be further subdivided into an inner and outer halo, with the inner halo having a net prograde motion with respect to the Milky Way and the outer a net retrograde motion. External galaxies: Spectroscopic observations of external galaxies make it possible to characterize the bulk motions of the stars they contain. While these stellar populations in external galaxies are generally not resolved to the level where one can track the motion of individual stars (except for the very nearest galaxies) measurements of the kinematics of the integrated stellar population along the line of sight provides information including the mean velocity and the velocity dispersion which can then be used to infer the distribution of mass within the galaxy. Measurement of the mean velocity as a function of position gives information on the galaxy's rotation, with distinct regions of the galaxy that are redshifted / blueshifted in relation to the galaxy's systemic velocity. Mass distributions: Through measurement of the kinematics of tracer objects such as globular clusters and the orbits of nearby satellite dwarf galaxies, we can determine the mass distribution of the Milky Way or other galaxies. 
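As a short aside on the disk-versus-bulge contrast described above, the sketch below shows how a kinematic population is characterized from a velocity sample: compute the mean rotation component and the RMS dispersion about it. The velocities are synthetic stand-ins drawn from assumed distributions of the right general character (a cold, fast-rotating disk and a hot, slowly rotating bulge), not survey data or the article's values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic rotation-direction velocities (km/s) for two toy populations
disk_v = rng.normal(loc=220.0, scale=20.0, size=1000)   # illustrative: fast rotation, small dispersion
bulge_v = rng.normal(loc=0.0, scale=120.0, size=1000)   # illustrative: little net rotation, large dispersion

for name, v in (("disk", disk_v), ("bulge", bulge_v)):
    mean_rotation = v.mean()
    dispersion = np.sqrt(np.mean((v - mean_rotation) ** 2))  # RMS about the mean
    print(f"{name}: <v> = {mean_rotation:6.1f} km/s, sigma = {dispersion:5.1f} km/s")
```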
This is accomplished by combining kinematic measurements with dynamical modeling. Recent advancements due to Gaia In 2018, the Gaia Data Release 2 (GAIA DR2) marked a significant advancement in stellar kinematics, offering a rich dataset of precise measurements. This release included detailed stellar kinematic and stellar parallax data, contributing to a more nuanced understanding of the Milky Way's structure. Notably, it facilitated the determination of proper motions for numerous celestial objects, including the absolute proper motions of 75 globular clusters situated at distances extending up to and a bright limit of . Furthermore, Gaia's comprehensive dataset enabled the measurement of absolute proper motions in nearby dwarf spheroidal galaxies, serving as crucial indicators for understanding the mass distribution within the Milky Way. GAIA DR3 improved the quality of previously published data by providing detailed astrophysical parameters. While the complete GAIA DR4 is yet to be unveiled, the latest release offers enhanced insights into white dwarfs, hypervelocity stars, cosmological gravitational lensing, and the merger history of the Galaxy. Stellar kinematic types Stars within galaxies may be classified based on their kinematics. For example, the stars in the Milky Way can be subdivided into two general populations, based on their metallicity, or proportion of elements with atomic numbers higher than helium. Among nearby stars, it has been found that population I stars with higher metallicity are generally located in the stellar disk while older population II stars are in random orbits with little net rotation. The latter have elliptical orbits that are inclined to the plane of the Milky Way. Comparison of the kinematics of nearby stars has also led to the identification of stellar associations. These are most likely groups of stars that share a common point of origin in giant molecular clouds. There are many additional ways to classify stars based on their measured velocity components, and this provides detailed information about the nature of the star's formation time, its present location, and the general structure of the galaxy. As a star moves in a galaxy, the smoothed out gravitational potential of all the other stars and other mass within the galaxy plays a dominant role in determining the stellar motion. Stellar kinematics can provide insights into the location of where the star formed within the galaxy. Measurements of an individual star's kinematics can identify stars that are peculiar outliers such as a high-velocity star moving much faster than its nearby neighbors. High-velocity stars Depending on the definition, a high-velocity star is a star moving faster than 65 km/s to 100 km/s relative to the average motion of the other stars in the star's neighborhood. The velocity is also sometimes defined as supersonic relative to the surrounding interstellar medium. The three types of high-velocity stars are: runaway stars, halo stars and hypervelocity stars. High-velocity stars were studied by Jan Oort, who used their kinematic data to predict that high-velocity stars have very little tangential velocity. Runaway stars A runaway star is one that is moving through space with an abnormally high velocity relative to the surrounding interstellar medium. The proper motion of a runaway star often points exactly away from a stellar association, of which the star was formerly a member, before it was hurled out. 
Mechanisms that may give rise to a runaway star include: Gravitational interactions between stars in a stellar system can result in large accelerations of one or more of the involved stars. In some cases, stars may even be ejected. This can occur in seemingly stable star systems of only three stars, as described in studies of the three-body problem in gravitational theory. A collision or close encounter between stellar systems, including galaxies, may result in the disruption of both systems, with some of the stars being accelerated to high velocities, or even ejected. A large-scale example is the gravitational interaction between the Milky Way and the Large Magellanic Cloud. A supernova explosion in a multiple star system can accelerate both the supernova remnant and remaining stars to high velocities. Multiple mechanisms may accelerate the same runaway star. For example, a massive star that was originally ejected due to gravitational interactions with its stellar neighbors may itself go supernova, producing a remnant with a velocity modulated by the supernova kick. If this supernova occurs in the very nearby vicinity of other stars, it is possible that it may produce more runaways in the process. An example of a related set of runaway stars is the case of AE Aurigae, 53 Arietis and Mu Columbae, all of which are moving away from each other at velocities of over 100 km/s (for comparison, the Sun moves through the Milky Way at about 20 km/s faster than the local average). Tracing their motions back, their paths intersect near to the Orion Nebula about 2 million years ago. Barnard's Loop is believed to be the remnant of the supernova that launched the other stars. Another example is the X-ray object Vela X-1, where photodigital techniques reveal the presence of a typical supersonic bow shock hyperbola. Halo stars Halo stars are very old stars that do not follow circular orbits around the center of the Milky Way within its disk. Instead, the halo stars travel in elliptical orbits, often inclined to the disk, which take them well above and below the plane of the Milky Way. Although their orbital velocities relative to the Milky Way may be no faster than disk stars, their different paths result in high relative velocities. Typical examples are the halo stars passing through the disk of the Milky Way at steep angles. One of the nearest 45 stars, called Kapteyn's Star, is an example of the high-velocity stars that lie near the Sun: Its observed radial velocity is −245 km/s, and the components of its space velocity are and Hypervelocity stars Hypervelocity stars (designated as HVS or HV in stellar catalogues) have substantially higher velocities than the rest of the stellar population of a galaxy. Some of these stars may even exceed the escape velocity of the galaxy. In the Milky Way, stars usually have velocities on the order of 100 km/s, whereas hypervelocity stars typically have velocities on the order of 1000 km/s. Most of these fast-moving stars are thought to be produced near the center of the Milky Way, where there is a larger population of these objects than further out. One of the fastest known stars in our Galaxy is the O-class sub-dwarf US 708, which is moving away from the Milky Way with a total velocity of around 1200 km/s. Jack G. Hills first predicted the existence of HVSs in 1988. This was later confirmed in 2005 by Warren Brown, Margaret Geller, Scott Kenyon, and Michael Kurtz. 
10 unbound HVSs were known, one of which is believed to have originated from the Large Magellanic Cloud rather than the Milky Way. Further measurements placed its origin within the Milky Way. Due to uncertainty about the distribution of mass within the Milky Way, determining whether a HVS is unbound is difficult. A further five known high-velocity stars may be unbound from the Milky Way, and 16 HVSs are thought to be bound. The nearest currently known HVS (HVS2) is about 19 kpc from the Sun. , there have been roughly 20 observed hypervelocity stars. Though most of these were observed in the Northern Hemisphere, the possibility remains that there are HVSs only observable from the Southern Hemisphere. It is believed that about 1,000 HVSs exist in the Milky Way. Considering that there are around 100 billion stars in the Milky Way, this is a minuscule fraction (~0.000001%). Results from the second data release of Gaia (DR2) show that most high-velocity late-type stars have a high probability of being bound to the Milky Way. However, distant hypervelocity star candidates are more promising. In March 2019, LAMOST-HVS1 was reported to be a confirmed hypervelocity star ejected from the stellar disk of the Milky Way. In July 2019, astronomers reported finding an A-type star, S5-HVS1, traveling , faster than any other star detected so far. The star is in the Grus (or Crane) constellation in the southern sky and is about from Earth. It may have been ejected from the Milky Way after interacting with Sagittarius A*, the supermassive black hole at the center of the galaxy. Origin of hypervelocity stars HVSs are believed to predominantly originate by close encounters of binary stars with the supermassive black hole in the center of the Milky Way. One of the two partners is gravitationally captured by the black hole (in the sense of entering orbit around it), while the other escapes with high velocity, becoming a HVS. Such maneuvers are analogous to the capture and ejection of interstellar objects by a star. Supernova-induced HVSs may also be possible, although they are presumably rare. In this scenario, a HVS is ejected from a close binary system as a result of the companion star undergoing a supernova explosion. Ejection velocities up to 770 km/s, as measured from the galactic rest frame, are possible for late-type B-stars. This mechanism can explain the origin of HVSs which are ejected from the galactic disk. Known HVSs are main-sequence stars with masses a few times that of the Sun. HVSs with smaller masses are also expected and G/K-dwarf HVS candidates have been found. Some HVSs may have originated from a disrupted dwarf galaxy. When it made its closest approach to the center of the Milky Way, some of its stars broke free and were thrown into space, due to the slingshot-like effect of the boost. Some neutron stars are inferred to be traveling with similar speeds. This could be related to HVSs and the HVS ejection mechanism. Neutron stars are the remnants of supernova explosions, and their extreme speeds are very likely the result of an asymmetric supernova explosion or the loss of their near partner during the supernova explosions that forms them. The neutron star RX J0822-4300, which was measured to move at a record speed of over 1,500 km/s (0.5% of the speed of light) in 2007 by the Chandra X-ray Observatory, is thought to have been produced the first way. 
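The speed scales quoted above (ordinary stars of order 100 km/s, hypervelocity stars of order 1000 km/s, and the bound/unbound question) suggest a simple classification sketch. The 65 km/s lower bound comes from the high-velocity definition quoted earlier; the 550 km/s escape speed and the sample velocities are assumed round figures for illustration only, since the true escape speed depends on the uncertain Galactic mass distribution.

```python
# Peculiar speeds (km/s) relative to the local mean motion -- invented sample values
stars = {"A": 12.0, "B": 48.0, "C": 72.0, "D": 310.0, "E": 1050.0}

THRESHOLD_HV = 65.0    # lower end of the "high-velocity" definition quoted above
ESCAPE_SPEED = 550.0   # rough local Galactic escape speed, assumed for illustration

for name, v in stars.items():
    if v >= ESCAPE_SPEED:
        label = "hypervelocity candidate (possibly unbound)"
    elif v >= THRESHOLD_HV:
        label = "high-velocity star"
    else:
        label = "ordinary local star"
    print(f"{name}: {v:7.1f} km/s -> {label}")
```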
One theory regarding the ignition of Type Ia supernovae invokes the onset of a merger between two white dwarfs in a binary star system, triggering the explosion of the more massive white dwarf. If the less massive white dwarf is not destroyed during the explosion, it will no longer be gravitationally bound to its destroyed companion, causing it to leave the system as a hypervelocity star with its pre-explosion orbital velocity of 1000–2500 km/s. In 2018, three such stars were discovered using data from the Gaia satellite. Partial list of HVSs As of 2014, twenty HVS were known. HVS 1 – (SDSS J090744.99+024506.8) (a.k.a. The Outcast Star) – the first hypervelocity star to be discovered HVS 2 – (SDSS J093320.86+441705.4 or US 708) HVS 3 – (HE 0437-5439) – possibly from the Large Magellanic Cloud HVS 4 – (SDSS J091301.00+305120.0) HVS 5 – (SDSS J091759.42+672238.7) HVS 6 – (SDSS J110557.45+093439.5) HVS 7 – (SDSS J113312.12+010824.9) HVS 8 – (SDSS J094214.04+200322.1) HVS 9 – (SDSS J102137.08-005234.8) HVS 10 – (SDSS J120337.85+180250.4) Kinematic groups A set of stars with similar space motion and ages is known as a kinematic group. These are stars that could share a common origin, such as the evaporation of an open cluster, the remains of a star forming region, or collections of overlapping star formation bursts at differing time periods in adjacent regions. Most stars are born within molecular clouds known as stellar nurseries. The stars formed within such a cloud compose gravitationally bound open clusters containing dozens to thousands of members with similar ages and compositions. These clusters dissociate with time. Groups of young stars that escape a cluster, or are no longer bound to each other, form stellar associations. As these stars age and disperse, their association is no longer readily apparent and they become moving groups of stars. Astronomers are able to determine if stars are members of a kinematic group because they share the same age, metallicity, and kinematics (radial velocity and proper motion). As the stars in a moving group formed in proximity and at nearly the same time from the same gas cloud, although later disrupted by tidal forces, they share similar characteristics. Stellar associations A stellar association is a very loose star cluster, whose stars share a common origin and are still moving together through space, but have become gravitationally unbound. Associations are primarily identified by their common movement vectors and ages. Identification by chemical composition is also used to factor in association memberships. Stellar associations were first discovered by the Armenian astronomer Viktor Ambartsumian in 1947. The conventional name for an association uses the names or abbreviations of the constellation (or constellations) in which they are located; the association type, and, sometimes, a numerical identifier. Types Viktor Ambartsumian first categorized stellar associations into two groups, OB and T, based on the properties of their stars. A third category, R, was later suggested by Sidney van den Bergh for associations that illuminate reflection nebulae. The OB, T, and R associations form a continuum of young stellar groupings. But it is currently uncertain whether they are an evolutionary sequence, or represent some other factor at work. Some groups also display properties of both OB and T associations, so the categorization is not always clear-cut. 
OB associations Young associations will contain 10 to 100 massive stars of spectral class O and B, and are known as OB associations. In addition, these associations also contain hundreds or thousands of low- and intermediate-mass stars. Association members are believed to form within the same small volume inside a giant molecular cloud. Once the surrounding dust and gas is blown away, the remaining stars become unbound and begin to drift apart. It is believed that the majority of all stars in the Milky Way were formed in OB associations. O-class stars are short-lived, and will expire as supernovae after roughly one million years. As a result, OB associations are generally only a few million years in age or less. The O-B stars in the association will have burned all their fuel within ten million years. (Compare this to the current age of the Sun at about five billion years.) The Hipparcos satellite provided measurements that located a dozen OB associations within 650 parsecs of the Sun. The nearest OB association is the Scorpius–Centaurus association, located about 400 light-years from the Sun. OB associations have also been found in the Large Magellanic Cloud and the Andromeda Galaxy. These associations can be quite sparse, spanning 1,500 light-years in diameter. T associations Young stellar groups can contain a number of infant T Tauri stars that are still in the process of entering the main sequence. These sparse populations of up to a thousand T Tauri stars are known as T associations. The nearest example is the Taurus-Auriga T association (Tau–Aur T association), located at a distance of 140 parsecs from the Sun. Other examples of T associations include the R Corona Australis T association, the Lupus T association, the Chamaeleon T association and the Velorum T association. T associations are often found in the vicinity of the molecular cloud from which they formed. Some, but not all, include O–B class stars. Group members have the same age and origin, the same chemical composition, and the same amplitude and direction in their vector of velocity. R associations Associations of stars that illuminate reflection nebulae are called R associations, a name suggested by Sidney van den Bergh after he discovered that the stars in these nebulae had a non-uniform distribution. These young stellar groupings contain main sequence stars that are not sufficiently massive to disperse the interstellar clouds in which they formed. This allows the properties of the surrounding dark cloud to be examined by astronomers. Because R associations are more plentiful than OB associations, they can be used to trace out the structure of the galactic spiral arms. An example of an R association is Monoceros R2, located 830 ± 50 parsecs from the Sun. Moving groups If the remnants of a stellar association drift through the Milky Way as a somewhat coherent assemblage, then they are termed a moving group or kinematic group. Moving groups can be old, such as the HR 1614 moving group at two billion years, or young, such as the AB Dor Moving Group at only 120 million years. Moving groups were studied intensely by Olin Eggen in the 1960s. A list of the nearest young moving groups has been compiled by López-Santiago et al. The closest is the Ursa Major Moving Group which includes all of the stars in the Plough / Big Dipper asterism except for Dubhe and η Ursae Majoris. This is sufficiently close that the Sun lies in its outer fringes, without being part of the group. 
Hence, although members are concentrated at declinations near 60°N, some outliers are as far away across the sky as Triangulum Australe at 70°S. The list of young moving groups is constantly evolving. The Banyan Σ tool currently lists 29 nearby young moving groups Recent additions to nearby moving groups are the Volans-Carina Association (VCA), discovered with Gaia, and the Argus Association (ARG), confirmed with Gaia. Moving groups can sometimes be further subdivided in smaller distinct groups. The Great Austral Young Association (GAYA) complex was found to be subdivided into the moving groups Carina, Columba, and Tucana-Horologium. The three Associations are not very distinct from each other, and have similar kinematic properties. Young moving groups have well known ages and can help with the characterization of objects with hard-to-estimate ages, such as brown dwarfs. Members of nearby young moving groups are also candidates for directly imaged protoplanetary disks, such as TW Hydrae or directly imaged exoplanets, such as Beta Pictoris b or GU Psc b. Stellar streams A stellar stream is an association of stars orbiting a galaxy that was once a globular cluster or dwarf galaxy that has now been torn apart and stretched out along its orbit by tidal forces. Known kinematic groups Some nearby kinematic groups include: Local Association (Pleiades moving group) AB Doradus moving group Alpha Persei moving cluster Beta Pictoris moving group Castor moving group Corona Australis association Eta Chamaeleontis cluster Hercules-Lyra association Hercules stream Hyades Stream IC 2391 supercluster (Argus Association) Kapteyn group MBM 12 association TW Hydrae association Ursa Major Moving Group Wolf 630 moving group Zeta Herculis moving group Pisces-Eridanus stellar stream Tucana-Horologium association See also Astrometry Gaia (spacecraft) Hipparcos n-body problem Open cluster remnant List of nearby stellar associations and moving groups Stellar association References Further reading External links ESO press release about runaway stars Entry in the Encyclopedia of Astrobiology, Astronomy, and Spaceflight Two Exiled Stars Are Leaving Our Galaxy Forever Entry in the Encyclopedia of Astrobiology, Astronomy, and Spaceflight https://myspaceastronomy.com/magnetar-the-most-magnetic-stars-in-the-universe-my-space/ Young stellar kinematic groups, David Montes, Departamento de Astrofísica, Universidad Complutense de Madrid. Kinematics Galactic astronomy Kinematics Concepts in stellar astronomy
0.782874
0.986765
0.772513
History of energy
The word energy derives from the Greek ἐνέργεια (energeia), which appears for the first time in the 4th century BCE works of Aristotle (OUP V, 240, 1991) (including Physics, Metaphysics, Nicomachean Ethics and De Anima). The modern concept of energy emerged from the idea of vis viva (living force), which Leibniz defined as the product of the mass of an object and its velocity squared; he believed that total vis viva was conserved. To account for slowing due to friction, Leibniz claimed that heat consisted of the random motion of the constituent parts of matter — a view described by Bacon in Novum Organum to illustrate inductive reasoning and shared by Isaac Newton, although it would be more than a century until this was generally accepted. Émilie, Marquise du Châtelet, in her book Institutions de Physique ("Lessons in Physics"), published in 1740, combined the idea of Leibniz with the practical observations of Gravesande to show that the "quantity of motion" of a moving object is proportional to its mass and its velocity squared (not the velocity itself as Newton taught—what was later called momentum). In his 1802 lectures to the Royal Society, Thomas Young was the first to use the term energy in its modern sense, instead of vis viva. In the 1807 publication of those lectures, he wrote, Gustave-Gaspard Coriolis described "kinetic energy" in 1829 in its modern sense, and in 1853, William Rankine coined the term "potential energy." It was argued for some years whether energy was a substance (the caloric) or merely a physical quantity. Thermodynamics The development of steam engines required engineers to develop concepts and formulas that would allow them to describe the mechanical and thermal efficiencies of their systems. Engineers such as Sadi Carnot, physicists such as James Prescott Joule, mathematicians such as Émile Clapeyron and Hermann von Helmholtz, and amateurs such as Julius Robert von Mayer all contributed to the notion that the ability to perform certain tasks, called work, was somehow related to the amount of energy in the system. In the 1850s, the Glasgow professor of natural philosophy William Thomson and his ally in engineering science, William Rankine, began to replace the older language of mechanics with terms such as actual energy, kinetic energy, and potential energy. William Thomson (Lord Kelvin) amalgamated all of these laws into the laws of thermodynamics, which aided in the rapid development of explanations of chemical processes using the concept of energy by Rudolf Clausius, Josiah Willard Gibbs and Walther Nernst. It also led to a mathematical formulation of the concept of entropy by Clausius, and to the introduction of laws of radiant energy by Jožef Stefan. Rankine coined the term potential energy. In 1881, William Thomson stated before an audience that: Over the following thirty years or so this newly developing science went by various names, such as the dynamical theory of heat or energetics, but after the 1920s it generally came to be known as thermodynamics, the science of energy transformations. Stemming from the 1850s development of the first two laws of thermodynamics, the science of energy has since branched off into a number of fields, such as biological thermodynamics and thermoeconomics, to name a couple, and has given rise to related terms such as entropy, a measure of the loss of useful energy, and power, an energy flow per unit time. In the past two centuries, the use of the word energy in various "non-scientific" vocations, e.g.
social studies, spirituality and psychology, has proliferated in the popular literature. Conservation of energy In 1918 it was proved that the law of conservation of energy is the direct mathematical consequence of the translational symmetry of the quantity conjugate to energy, namely time. That is, energy is conserved because the laws of physics do not distinguish between different moments of time (see Noether's theorem). During a 1961 lecture for undergraduate students at the California Institute of Technology, Richard Feynman, a celebrated physics teacher and Nobel Laureate, said this about the concept of energy: See also Timeline of thermodynamics History of physics History of the conservation of energy principle History of thermodynamics A Guide to the Scientific Knowledge of Things Familiar, a book by Ebenezer Cobham Brewer, published around 1840, presenting explanations for common phenomena Caloric theory References Further reading Hecht, Eugene. "An Historico-Critical Account of Potential Energy: Is PE Really Real?" The Physics Teacher 41 (Nov 2003): 486–93. Hughes, Thomas. Networks of Power: Electrification in Western Society, 1880–1930 (Johns Hopkins UP, 1983). Martinás, Katalin. "Aristotelian Thermodynamics," Thermodynamics: History and Philosophy: Facts, Trends, Debates (Veszprém, Hungary, 23–28 July 1990), 285–303. Mendoza, E. "A Sketch for a History of Early Thermodynamics." Physics Today 14.2 (1961): 32–42. Müller, Ingo. A History of Thermodynamics (Berlin: Springer, 2007). Graf, Rüdiger. "Energy History and Histories of Energy", Docupedia-Zeitgeschichte (Aug 2023). External links The Journal of Energy History / Revue d'histoire de l'énergie (JEHRHE) Timeline of history of energy for children
0.792101
0.975266
0.77251
CPT symmetry
Charge, parity, and time reversal symmetry is a fundamental symmetry of physical laws under the simultaneous transformations of charge conjugation (C), parity transformation (P), and time reversal (T). CPT is the only combination of C, P, and T that is observed to be an exact symmetry of nature at the fundamental level. The CPT theorem says that CPT symmetry holds for all physical phenomena, or more precisely, that any Lorentz invariant local quantum field theory with a Hermitian Hamiltonian must have CPT symmetry. History The CPT theorem appeared for the first time, implicitly, in the work of Julian Schwinger in 1951 to prove the connection between spin and statistics. In 1954, Gerhart Lüders and Wolfgang Pauli derived more explicit proofs, so this theorem is sometimes known as the Lüders–Pauli theorem. At about the same time, and independently, this theorem was also proved by John Stewart Bell. These proofs are based on the principle of Lorentz invariance and the principle of locality in the interaction of quantum fields. Subsequently, Res Jost gave a more general proof in 1958 using the framework of axiomatic quantum field theory. Efforts during the late 1950s revealed the violation of P-symmetry by phenomena that involve the weak force, and there were well-known violations of C-symmetry as well. For a short time, the CP-symmetry was believed to be preserved by all physical phenomena, but in the 1960s that was later found to be false too, which implied, by CPT invariance, violations of T-symmetry as well. Derivation of the CPT theorem Consider a Lorentz boost in a fixed direction z. This can be interpreted as a rotation of the time axis into the z axis, with an imaginary rotation parameter. If this rotation parameter were real, it would be possible for a 180° rotation to reverse the direction of time and of z. Reversing the direction of one axis is a reflection of space in any number of dimensions. If space has 3 dimensions, it is equivalent to reflecting all the coordinates, because an additional rotation of 180° in the x-y plane could be included. This defines a CPT transformation if we adopt the Feynman–Stueckelberg interpretation of antiparticles as the corresponding particles traveling backwards in time. This interpretation requires a slight analytic continuation, which is well-defined only under the following assumptions: The theory is Lorentz invariant; The vacuum is Lorentz invariant; The energy is bounded below. When the above hold, quantum theory can be extended to a Euclidean theory, defined by translating all the operators to imaginary time using the Hamiltonian. The commutation relations of the Hamiltonian, and the Lorentz generators, guarantee that Lorentz invariance implies rotational invariance, so that any state can be rotated by 180 degrees. Since a sequence of two CPT reflections is equivalent to a 360-degree rotation, fermions change by a sign under two CPT reflections, while bosons do not. This fact can be used to prove the spin-statistics theorem. Consequences and implications The implication of CPT symmetry is that a "mirror-image" of our universe — with all objects having their positions reflected through an arbitrary point (corresponding to a parity inversion), all momenta reversed (corresponding to a time inversion) and with all matter replaced by antimatter (corresponding to a charge inversion) — would evolve under exactly our physical laws. The CPT transformation turns our universe into its "mirror image" and vice versa. 
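For concreteness, one common way to write the action of the discrete transformations is shown below for a complex scalar field. Conventions differ by phase factors and the field type matters (spinors pick up extra matrices), so this is an illustrative choice rather than a statement of the article's specific conventions.

```latex
% Action of C, P, T and the combined CPT on a complex scalar field \phi(t,\mathbf{x})
% (up to convention-dependent phases)
\begin{aligned}
P   &: \ \phi(t,\mathbf{x}) \;\to\; \phi(t,-\mathbf{x}) \\
T   &: \ \phi(t,\mathbf{x}) \;\to\; \phi(-t,\mathbf{x}) \\
C   &: \ \phi(t,\mathbf{x}) \;\to\; \phi^{\dagger}(t,\mathbf{x}) \\
CPT &: \ \phi(t,\mathbf{x}) \;\to\; \phi^{\dagger}(-t,-\mathbf{x})
\end{aligned}
```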
CPT symmetry is recognized to be a fundamental property of physical laws. In order to preserve this symmetry, every violation of the combined symmetry of two of its components (such as CP) must have a corresponding violation in the third component (such as T); in fact, mathematically, these are the same thing. Thus violations in T-symmetry are often referred to as CP violations. The CPT theorem can be generalized to take into account pin groups. In 2002 Oscar Greenberg proved that, with reasonable assumptions, CPT violation implies the breaking of Lorentz symmetry. CPT violations would be expected by some string theory models, as well as by some other models that lie outside point-particle quantum field theory. Some proposed violations of Lorentz invariance, such as a compact dimension of cosmological size, could also lead to CPT violation. Non-unitary theories, such as proposals where black holes violate unitarity, could also violate CPT. As a technical point, fields with infinite spin could violate CPT symmetry. The overwhelming majority of experimental searches for Lorentz violation have yielded negative results. A detailed tabulation of these results was given in 2011 by Kostelecky and Russell. See also Poincaré symmetry and Quantum field theory Parity (physics), Charge conjugation and T-symmetry CP violation and kaon IKAROS scientific results References Sources External links Background information on Lorentz and CPT violation by Alan Kostelecký at Theoretical Physics Indiana University Charge, Parity, and Time Reversal (CPT) Symmetry at LBL CPT Invariance Tests in Neutral Kaon Decay at LBL – 8-component theory for fermions in which T-parity can be a complex number with unit radius. The CPT invariance is not a theorem but a better to have property in these class of theories. This Particle Breaks Time Symmetry – YouTube video by Veritasium An elementary discussion of CPT violation is given in chapter 15 of this student level textbook Quantum field theory Symmetry Theorems in quantum mechanics
0.779403
0.991095
0.772463
Atmospheric entry
Atmospheric entry (sometimes listed as Vimpact or Ventry) is the movement of an object from outer space into and through the gases of an atmosphere of a planet, dwarf planet, or natural satellite. There are two main types of atmospheric entry: uncontrolled entry, such as the entry of astronomical objects, space debris, or bolides; and controlled entry (or reentry) of a spacecraft capable of being navigated or following a predetermined course. Technologies and procedures allowing the controlled atmospheric entry, descent, and landing of spacecraft are collectively termed as EDL. Objects entering an atmosphere experience atmospheric drag, which puts mechanical stress on the object, and aerodynamic heating—caused mostly by compression of the air in front of the object, but also by drag. These forces can cause loss of mass (ablation) or even complete disintegration of smaller objects, and objects with lower compressive strength can explode. Reentry has been achieved with speeds ranging from 7.8 km/s for low Earth orbit to around 12.5 km/s for the Stardust probe. Crewed space vehicles must be slowed to subsonic speeds before parachutes or air brakes may be deployed. Such vehicles have high kinetic energies, and atmospheric dissipation is the only way of expending this, as it is highly impractical to use retrorockets for the entire reentry procedure. Ballistic warheads and expendable vehicles do not require slowing at reentry, and in fact, are made streamlined so as to maintain their speed. Furthermore, slow-speed returns to Earth from near-space such as high-altitude parachute jumps from balloons do not require heat shielding because the gravitational acceleration of an object starting at relative rest from within the atmosphere itself (or not far above it) cannot create enough velocity to cause significant atmospheric heating. For Earth, atmospheric entry occurs by convention at the Kármán line at an altitude of above the surface, while at Venus atmospheric entry occurs at and at Mars atmospheric entry at about . Uncontrolled objects reach high velocities while accelerating through space toward the Earth under the influence of Earth's gravity, and are slowed by friction upon encountering Earth's atmosphere. Meteors are also often travelling quite fast relative to the Earth simply because their own orbital path is different from that of the Earth before they encounter Earth's gravity well. Most objects enter at hypersonic speeds due to their sub-orbital (e.g., intercontinental ballistic missile reentry vehicles), orbital (e.g., the Soyuz), or unbounded (e.g., meteors) trajectories. Various advanced technologies have been developed to enable atmospheric reentry and flight at extreme velocities. An alternative method of controlled atmospheric entry is buoyancy which is suitable for planetary entry where thick atmospheres, strong gravity, or both factors complicate high-velocity hyperbolic entry, such as the atmospheres of Venus, Titan and the giant planets. History The concept of the ablative heat shield was described as early as 1920 by Robert Goddard: "In the case of meteors, which enter the atmosphere with speeds as high as per second, the interior of the meteors remains cold, and the erosion is due, to a large extent, to chipping or cracking of the suddenly heated surface. 
For this reason, if the outer surface of the apparatus were to consist of layers of a very infusible hard substance with layers of a poor heat conductor between, the surface would not be eroded to any considerable extent, especially as the velocity of the apparatus would not be nearly so great as that of the average meteor." Practical development of reentry systems began as the range, and reentry velocity of ballistic missiles increased. For early short-range missiles, like the V-2, stabilization and aerodynamic stress were important issues (many V-2s broke apart during reentry), but heating was not a serious problem. Medium-range missiles like the Soviet R-5, with a range, required ceramic composite heat shielding on separable reentry vehicles (it was no longer possible for the entire rocket structure to survive reentry). The first ICBMs, with ranges of , were only possible with the development of modern ablative heat shields and blunt-shaped vehicles. In the United States, this technology was pioneered by H. Julian Allen and A. J. Eggers Jr. of the National Advisory Committee for Aeronautics (NACA) at Ames Research Center. In 1951, they made the counterintuitive discovery that a blunt shape (high drag) made the most effective heat shield. From simple engineering principles, Allen and Eggers showed that the heat load experienced by an entry vehicle was inversely proportional to the drag coefficient; i.e., the greater the drag, the less the heat load. If the reentry vehicle is made blunt, air cannot "get out of the way" quickly enough, and acts as an air cushion to push the shock wave and heated shock layer forward (away from the vehicle). Since most of the hot gases are no longer in direct contact with the vehicle, the heat energy would stay in the shocked gas and simply move around the vehicle to later dissipate into the atmosphere. The Allen and Eggers discovery, though initially treated as a military secret, was eventually published in 1958. Terminology, definitions and jargon When atmospheric entry is part of a spacecraft landing or recovery, particularly on a planetary body other than Earth, entry is part of a phase referred to as entry, descent, and landing, or EDL. When the atmospheric entry returns to the same body that the vehicle had launched from, the event is referred to as reentry (almost always referring to Earth entry). The fundamental design objective in atmospheric entry of a spacecraft is to dissipate the energy of a spacecraft that is traveling at hypersonic speed as it enters an atmosphere such that equipment, cargo, and any passengers are slowed and land near a specific destination on the surface at zero velocity while keeping stresses on the spacecraft and any passengers within acceptable limits. This may be accomplished by propulsive or aerodynamic (vehicle characteristics or parachute) means, or by some combination. Entry vehicle shapes There are several basic shapes used in designing entry vehicles: Sphere or spherical section The simplest axisymmetric shape is the sphere or spherical section. This can either be a complete sphere or a spherical section forebody with a converging conical afterbody. The aerodynamics of a sphere or spherical section are easy to model analytically using Newtonian impact theory. Likewise, the spherical section's heat flux can be accurately modeled with the Fay–Riddell equation. 
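To get a feel for why the blunt-body result matters, the sketch below computes the kinetic energy a capsule must dissipate when returning from low Earth orbit. The capsule mass and the few-percent heat-shield fraction are illustrative assumptions, not figures from the text; only the 7.8 km/s entry speed is taken from the article.

```python
# Order-of-magnitude energy budget for a reentry from low Earth orbit.
mass_kg = 3000.0        # hypothetical capsule mass
v_entry_m_s = 7.8e3     # low-Earth-orbit reentry speed quoted in the article

kinetic_energy_J = 0.5 * mass_kg * v_entry_m_s**2
print(f"Kinetic energy to dissipate: {kinetic_energy_J / 1e9:.0f} GJ")  # ~91 GJ

# With a blunt body, only a small fraction of this energy reaches the vehicle;
# the percentages below are assumed purely for illustration.
for fraction in (0.01, 0.05):
    print(f"Heat reaching the shield at {fraction:.0%}: "
          f"{fraction * kinetic_energy_J / 1e9:.1f} GJ")
```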
The static stability of a spherical section is assured if the vehicle's center of mass is upstream from the center of curvature (dynamic stability is more problematic). Pure spheres have no lift. However, by flying at an angle of attack, a spherical section has modest aerodynamic lift thus providing some cross-range capability and widening its entry corridor. In the late 1950s and early 1960s, high-speed computers were not yet available and computational fluid dynamics was still embryonic. Because the spherical section was amenable to closed-form analysis, that geometry became the default for conservative design. Consequently, crewed capsules of that era were based upon the spherical section. Pure spherical entry vehicles were used in the early Soviet Vostok and Voskhod capsules and in Soviet Mars and Venera descent vehicles. The Apollo command module used a spherical section forebody heat shield with a converging conical afterbody. It flew a lifting entry with a hypersonic trim angle of attack of −27° (0° is blunt-end first) to yield an average L/D (lift-to-drag ratio) of 0.368. The resultant lift achieved a measure of cross-range control by offsetting the vehicle's center of mass from its axis of symmetry, allowing the lift force to be directed left or right by rolling the capsule on its longitudinal axis. Other examples of the spherical section geometry in crewed capsules are Soyuz/Zond, Gemini, and Mercury. Even these small amounts of lift allow trajectories that have very significant effects on peak g-force, reducing it from 8–9 g for a purely ballistic (slowed only by drag) trajectory to 4–5 g, as well as greatly reducing the peak reentry heat. Sphere-cone The sphere-cone is a spherical section with a frustum or blunted cone attached. The sphere-cone's dynamic stability is typically better than that of a spherical section. The vehicle enters sphere-first. With a sufficiently small half-angle and properly placed center of mass, a sphere-cone can provide aerodynamic stability from Keplerian entry to surface impact. (The half-angle is the angle between the cone's axis of rotational symmetry and its outer surface, and thus half the angle made by the cone's surface edges.) The original American sphere-cone aeroshell was the Mk-2 RV (reentry vehicle), which was developed in 1955 by the General Electric Corp. The Mk-2's design was derived from blunt-body theory and used a radiatively cooled thermal protection system (TPS) based upon a metallic heat shield (the different TPS types are later described in this article). The Mk-2 had significant defects as a weapon delivery system, i.e., it loitered too long in the upper atmosphere due to its lower ballistic coefficient and also trailed a stream of vaporized metal making it very visible to radar. These defects made the Mk-2 overly susceptible to anti-ballistic missile (ABM) systems. Consequently, an alternative sphere-cone RV to the Mk-2 was developed by General Electric. This new RV was the Mk-6 which used a non-metallic ablative TPS, a nylon phenolic. This new TPS was so effective as a reentry heat shield that significantly reduced bluntness was possible. However, the Mk-6 was a huge RV with an entry mass of 3,360 kg, a length of 3.1 m and a half-angle of 12.5°. Subsequent advances in nuclear weapon and ablative TPS design allowed RVs to become significantly smaller with a further reduced bluntness ratio compared to the Mk-6. 
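The remark that a sphere is "easy to model analytically using Newtonian impact theory" can be illustrated numerically: with the Newtonian surface-pressure law Cp = 2 sin²θ, integrating the axial pressure force over the windward hemisphere gives a drag coefficient of about 1 for a sphere. The sketch below is a minimal numerical check of that classical result, not a substitute for the modified Newtonian methods used in practice.

```python
import numpy as np

# Newtonian impact theory: Cp = 2*sin(theta)^2, where theta is the local angle
# between the surface and the free stream; shadowed (leeward) surfaces get Cp = 0.
phi = np.linspace(0.0, np.pi / 2, 100_001)   # angle measured from the stagnation point
cp = 2.0 * np.cos(phi) ** 2                  # on a sphere, sin(theta) = cos(phi)

# ring area element dA = 2*pi*R^2*sin(phi) dphi; axial force component ~ cos(phi)
integrand = cp * np.cos(phi) * 2.0 * np.pi * np.sin(phi)

# trapezoidal integration; dividing by pi normalizes by the frontal area pi*R^2 (R cancels)
cd = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(phi)) / np.pi
print(f"Newtonian drag coefficient of a sphere: {cd:.4f}")   # -> ~1.0000
```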
Since the 1960s, the sphere-cone has become the preferred geometry for modern ICBM RVs with typical half-angles being between 10° and 11°. Reconnaissance satellite RVs (recovery vehicles) also used a sphere-cone shape and were the first American example of a non-munition entry vehicle (Discoverer-I, launched on 28 February 1959). The sphere-cone was later used for space exploration missions to other celestial bodies or for return from open space; e.g., Stardust probe. Unlike with military RVs, the advantage of the blunt body's lower TPS mass remained with space exploration entry vehicles like the Galileo Probe with a half-angle of 45° or the Viking aeroshell with a half-angle of 70°. Space exploration sphere-cone entry vehicles have landed on the surface or entered the atmospheres of Mars, Venus, Jupiter, and Titan. Biconic The biconic is a sphere-cone with an additional frustum attached. The biconic offers a significantly improved L/D ratio. A biconic designed for Mars aerocapture typically has an L/D of approximately 1.0 compared to an L/D of 0.368 for the Apollo-CM. The higher L/D makes a biconic shape better suited for transporting people to Mars due to the lower peak deceleration. Arguably, the most significant biconic ever flown was the Advanced Maneuverable Reentry Vehicle (AMaRV). Four AMaRVs were made by the McDonnell Douglas Corp. and represented a significant leap in RV sophistication. Three AMaRVs were launched by Minuteman-1 ICBMs on 20 December 1979, 8 October 1980 and 4 October 1981. AMaRV had an entry mass of approximately 470 kg, a nose radius of 2.34 cm, a forward-frustum half-angle of 10.4°, an inter-frustum radius of 14.6 cm, aft-frustum half-angle of 6°, and an axial length of 2.079 meters. No accurate diagram or picture of AMaRV has ever appeared in the open literature. However, a schematic sketch of an AMaRV-like vehicle along with trajectory plots showing hairpin turns has been published. AMaRV's attitude was controlled through a split body flap (also called a split-windward flap) along with two yaw flaps mounted on the vehicle's sides. Hydraulic actuation was used for controlling the flaps. AMaRV was guided by a fully autonomous navigation system designed for evading anti-ballistic missile (ABM) interception. The McDonnell Douglas DC-X (also a biconic) was essentially a scaled-up version of AMaRV. AMaRV and the DC-X also served as the basis for an unsuccessful proposal for what eventually became the Lockheed Martin X-33. Non-axisymmetric shapes Non-axisymmetric shapes have been used for crewed entry vehicles. One example is the winged orbit vehicle that uses a delta wing for maneuvering during descent much like a conventional glider. This approach has been used by the American Space Shuttle and the Soviet Buran. The lifting body is another entry vehicle geometry and was used with the X-23 PRIME (Precision Recovery Including Maneuvering Entry) vehicle. Entry heating Objects entering an atmosphere from space at high velocities relative to the atmosphere will cause very high levels of heating. Atmospheric entry heating comes principally from two sources: convection of hot gas flow past the surface of the body and catalytic chemical recombination reactions between the surface and atmospheric gases; and radiation from the energetic shock layer that forms in the front and sides of the body As velocity increases, both convective and radiative heating increase, but at different rates. 
At very high speeds, radiative heating will dominate the convective heat fluxes, as radiative heating is proportional to the eighth power of velocity, while convective heating is proportional to the third power of velocity. Radiative heating thus predominates early in atmospheric entry, while convection predominates in the later phases. During certain intensity of ionization, a radio-blackout with the spacecraft is produced. While NASA's Earth entry interface is at , the main heating during controlled entry takes place at altitudes of , peaking at . Shock layer gas physics At typical reentry temperatures, the air in the shock layer is both ionized and dissociated. This chemical dissociation necessitates various physical models to describe the shock layer's thermal and chemical properties. There are four basic physical models of a gas that are important to aeronautical engineers who design heat shields: Perfect gas model Almost all aeronautical engineers are taught the perfect (ideal) gas model during their undergraduate education. Most of the important perfect gas equations along with their corresponding tables and graphs are shown in NACA Report 1135. Excerpts from NACA Report 1135 often appear in the appendices of thermodynamics textbooks and are familiar to most aeronautical engineers who design supersonic aircraft. The perfect gas theory is elegant and extremely useful for designing aircraft but assumes that the gas is chemically inert. From the standpoint of aircraft design, air can be assumed to be inert for temperatures less than at one atmosphere pressure. The perfect gas theory begins to break down at 550 K and is not usable at temperatures greater than . For temperatures greater than 2,000 K, a heat shield designer must use a real gas model. Real (equilibrium) gas model An entry vehicle's pitching moment can be significantly influenced by real-gas effects. Both the Apollo command module and the Space Shuttle were designed using incorrect pitching moments determined through inaccurate real-gas modelling. The Apollo-CM's trim-angle angle of attack was higher than originally estimated, resulting in a narrower lunar return entry corridor. The actual aerodynamic center of the Columbia was upstream from the calculated value due to real-gas effects. On Columbias maiden flight (STS-1), astronauts John Young and Robert Crippen had some anxious moments during reentry when there was concern about losing control of the vehicle. An equilibrium real-gas model assumes that a gas is chemically reactive, but also assumes all chemical reactions have had time to complete and all components of the gas have the same temperature (this is called thermodynamic equilibrium). When air is processed by a shock wave, it is superheated by compression and chemically dissociates through many different reactions. Direct friction upon the reentry object is not the main cause of shock-layer heating. It is caused mainly from isentropic heating of the air molecules within the compression wave. Friction based entropy increases of the molecules within the wave also account for some heating. The distance from the shock wave to the stagnation point on the entry vehicle's leading edge is called shock wave stand off. An approximate rule of thumb for shock wave standoff distance is 0.14 times the nose radius. 
One can estimate the time of travel for a gas molecule from the shock wave to the stagnation point by assuming a free stream velocity of 7.8 km/s and a nose radius of 1 meter, i.e., time of travel is about 18 microseconds. This is roughly the time required for shock-wave-initiated chemical dissociation to approach chemical equilibrium in a shock layer for a 7.8 km/s entry into air during peak heat flux. Consequently, as air approaches the entry vehicle's stagnation point, the air effectively reaches chemical equilibrium thus enabling an equilibrium model to be usable. For this case, most of the shock layer between the shock wave and leading edge of an entry vehicle is chemically reacting and not in a state of equilibrium. The Fay–Riddell equation, which is of extreme importance towards modeling heat flux, owes its validity to the stagnation point being in chemical equilibrium. The time required for the shock layer gas to reach equilibrium is strongly dependent upon the shock layer's pressure. For example, in the case of the Galileo probe's entry into Jupiter's atmosphere, the shock layer was mostly in equilibrium during peak heat flux due to the very high pressures experienced (this is counterintuitive given the free stream velocity was 39 km/s during peak heat flux). Determining the thermodynamic state of the stagnation point is more difficult under an equilibrium gas model than a perfect gas model. Under a perfect gas model, the ratio of specific heats (also called isentropic exponent, adiabatic index, gamma, or kappa) is assumed to be constant along with the gas constant. For a real gas, the ratio of specific heats can wildly oscillate as a function of temperature. Under a perfect gas model there is an elegant set of equations for determining thermodynamic state along a constant entropy stream line called the isentropic chain. For a real gas, the isentropic chain is unusable and a Mollier diagram would be used instead for manual calculation. However, graphical solution with a Mollier diagram is now considered obsolete with modern heat shield designers using computer programs based upon a digital lookup table (another form of Mollier diagram) or a chemistry based thermodynamics program. The chemical composition of a gas in equilibrium with fixed pressure and temperature can be determined through the Gibbs free energy method. Gibbs free energy is simply the total enthalpy of the gas minus its total entropy times temperature. A chemical equilibrium program normally does not require chemical formulas or reaction-rate equations. The program works by preserving the original elemental abundances specified for the gas and varying the different molecular combinations of the elements through numerical iteration until the lowest possible Gibbs free energy is calculated (a Newton–Raphson method is the usual numerical scheme). The data base for a Gibbs free energy program comes from spectroscopic data used in defining partition functions. Among the best equilibrium codes in existence is the program Chemical Equilibrium with Applications (CEA) which was written by Bonnie J. McBride and Sanford Gordon at NASA Lewis (now renamed "NASA Glenn Research Center"). Other names for CEA are the "Gordon and McBride Code" and the "Lewis Code". CEA is quite accurate up to 10,000 K for planetary atmospheric gases, but unusable beyond 20,000 K (double ionization is not modelled). CEA can be downloaded from the Internet along with full documentation and will compile on Linux under the G77 Fortran compiler. 
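The 18-microsecond transit time quoted at the start of this subsection follows directly from the 0.14-nose-radius standoff rule. A one-line check using the same 1 m nose radius and 7.8 km/s free-stream speed:

```python
nose_radius_m = 1.0
standoff_m = 0.14 * nose_radius_m     # rule-of-thumb shock standoff distance
free_stream_m_s = 7.8e3

travel_time_s = standoff_m / free_stream_m_s
print(f"{travel_time_s * 1e6:.1f} microseconds")   # ~17.9, i.e. about 18 us
```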
Real (non-equilibrium) gas model A non-equilibrium real gas model is the most accurate model of a shock layer's gas physics, but is more difficult to solve than an equilibrium model. The simplest non-equilibrium model is the Lighthill-Freeman model developed in 1958. The Lighthill-Freeman model initially assumes a gas made up of a single diatomic species susceptible to only one chemical formula and its reverse; e.g., N2 = N + N and N + N = N2 (dissociation and recombination). Because of its simplicity, the Lighthill-Freeman model is a useful pedagogical tool, but is too simple for modelling non-equilibrium air. Air is typically assumed to have a mole fraction composition of 0.7812 molecular nitrogen, 0.2095 molecular oxygen and 0.0093 argon. The simplest real gas model for air is the five species model, which is based upon N2, O2, NO, N, and O. The five species model assumes no ionization and ignores trace species like carbon dioxide. When running a Gibbs free energy equilibrium program, the iterative process from the originally specified molecular composition to the final calculated equilibrium composition is essentially random and not time accurate. With a non-equilibrium program, the computation process is time accurate and follows a solution path dictated by chemical and reaction rate formulas. The five species model has 17 chemical formulas (34 when counting reverse formulas). The Lighthill-Freeman model is based upon a single ordinary differential equation and one algebraic equation. The five species model is based upon 5 ordinary differential equations and 17 algebraic equations. Because the 5 ordinary differential equations are tightly coupled, the system is numerically "stiff" and difficult to solve. The five species model is only usable for entry from low Earth orbit where entry velocity is approximately . For lunar return entry of 11 km/s, the shock layer contains a significant amount of ionized nitrogen and oxygen. The five-species model is no longer accurate and a twelve-species model must be used instead. Atmospheric entry interface velocities on a Mars–Earth trajectory are on the order of . Modeling high-speed Mars atmospheric entry—which involves a carbon dioxide, nitrogen and argon atmosphere—is even more complex requiring a 19-species model. An important aspect of modelling non-equilibrium real gas effects is radiative heat flux. If a vehicle is entering an atmosphere at very high speed (hyperbolic trajectory, lunar return) and has a large nose radius then radiative heat flux can dominate TPS heating. Radiative heat flux during entry into an air or carbon dioxide atmosphere typically comes from asymmetric diatomic molecules; e.g., cyanogen (CN), carbon monoxide, nitric oxide (NO), single ionized molecular nitrogen etc. These molecules are formed by the shock wave dissociating ambient atmospheric gas followed by recombination within the shock layer into new molecular species. The newly formed diatomic molecules initially have a very high vibrational temperature that efficiently transforms the vibrational energy into radiant energy; i.e., radiative heat flux. The whole process takes place in less than a millisecond which makes modelling a challenge. The experimental measurement of radiative heat flux (typically done with shock tubes) along with theoretical calculation through the unsteady Schrödinger equation are among the more esoteric aspects of aerospace engineering. 
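Before turning to the history of radiative-heating research below, the time-accurate character of a non-equilibrium calculation can be illustrated with a toy model in the spirit of the Lighthill-Freeman idea: a single dissociation-recombination reaction whose rate equation is integrated forward in time until the composition approaches equilibrium. The rate constants are invented for illustration (they are not real nitrogen kinetics), and a stiff integrator is used because the coupled multi-species systems described above are numerically stiff.

import numpy as np
from scipy.integrate import solve_ivp

k_diss, k_rec = 5.0e4, 2.0e5     # 1/s, assumed illustrative rate constants

def rate(t, y):
    alpha = y[0]                 # dissociated fraction for N2 <-> N + N
    return [k_diss * (1.0 - alpha) - k_rec * alpha**2]

sol = solve_ivp(rate, (0.0, 1.0e-3), [0.0], method="BDF", rtol=1e-8)
alpha_eq = np.roots([k_rec, k_diss, -k_diss]).max()   # forward and reverse rates balance here
print(f"alpha after 1 ms: {sol.y[0, -1]:.3f}, equilibrium value: {alpha_eq:.3f}")

A realistic five- or twelve-species model has the same structure, but with one coupled ordinary differential equation per species and temperature-dependent rate coefficients, which is what makes those systems difficult to solve.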
Most of the aerospace research work related to understanding radiative heat flux was done in the 1960s, but was largely discontinued after the conclusion of the Apollo Program. Radiative heat flux in air was just sufficiently understood to ensure Apollo's success. However, radiative heat flux in carbon dioxide (Mars entry) is still barely understood and will require major research. Frozen gas model The frozen gas model describes a special case of a gas that is not in equilibrium. The name "frozen gas" can be misleading. A frozen gas is not "frozen" like ice is frozen water. Rather, a frozen gas is "frozen" in time (all chemical reactions are assumed to have stopped). Chemical reactions are normally driven by collisions between molecules. If gas pressure is slowly reduced such that chemical reactions can continue, then the gas can remain in equilibrium. However, it is possible for gas pressure to be so suddenly reduced that almost all chemical reactions stop. In that situation the gas is considered frozen. The distinction between equilibrium and frozen is important because it is possible for a gas such as air to have significantly different properties (speed of sound, viscosity, etc.) in the same thermodynamic state, e.g., at the same pressure and temperature. Frozen gas can be a significant issue in the wake behind an entry vehicle. During reentry, free stream air is compressed to high temperature and pressure by the entry vehicle's shock wave. Non-equilibrium air in the shock layer is then transported past the entry vehicle's leading side into a region of rapidly expanding flow that causes freezing. The frozen air can then be entrained into a trailing vortex behind the entry vehicle. Correctly modelling the flow in the wake of an entry vehicle is very difficult. Thermal protection system (TPS) heating in the vehicle's afterbody is usually not very high, but the geometry and unsteadiness of the vehicle's wake can significantly influence aerodynamics (pitching moment) and particularly dynamic stability. Thermal protection systems A thermal protection system, or TPS, is the barrier that protects a spacecraft during the searing heat of atmospheric reentry. Multiple approaches for the thermal protection of spacecraft are in use, among them ablative heat shields, passive cooling, and active cooling of spacecraft surfaces. In general they can be divided into two categories: ablative TPS and reusable TPS. An ablative TPS is required when a spacecraft descends to a relatively low altitude before slowing down; spacecraft such as the Space Shuttle are designed to decelerate at high altitude, which allows the use of a reusable TPS (see Space Shuttle thermal protection system). Thermal protection systems are tested in high-enthalpy ground testing or plasma wind tunnels that reproduce the combination of high enthalpy and high stagnation pressure using induction plasma or DC plasma. Ablative The ablative heat shield functions by lifting the hot shock layer gas away from the heat shield's outer wall (creating a cooler boundary layer). The boundary layer comes from blowing of gaseous reaction products from the heat shield material and provides protection against all forms of heat flux. The overall process of reducing the heat flux experienced by the heat shield's outer wall by way of a boundary layer is called blockage. Ablation occurs at two levels in an ablative TPS: the outer surface of the TPS material chars, melts, and sublimes, while the bulk of the TPS material undergoes pyrolysis and expels product gases. 
The gas produced by pyrolysis is what drives blowing and causes blockage of convective and catalytic heat flux. Pyrolysis can be measured in real time using thermogravimetric analysis, so that the ablative performance can be evaluated. Ablation can also provide blockage against radiative heat flux by introducing carbon into the shock layer thus making it optically opaque. Radiative heat flux blockage was the primary thermal protection mechanism of the Galileo Probe TPS material (carbon phenolic). Carbon phenolic was originally developed as a rocket nozzle throat material (used in the Space Shuttle Solid Rocket Booster) and for reentry-vehicle nose tips. Early research on ablation technology in the USA was centered at NASA's Ames Research Center located at Moffett Field, California. Ames Research Center was ideal, since it had numerous wind tunnels capable of generating varying wind velocities. Initial experiments typically mounted a mock-up of the ablative material to be analyzed within a hypersonic wind tunnel. Testing of ablative materials occurs at the Ames Arc Jet Complex. Many spacecraft thermal protection systems have been tested in this facility, including the Apollo, space shuttle, and Orion heat shield materials. The thermal conductivity of a particular TPS material is usually proportional to the material's density. Carbon phenolic is a very effective ablative material, but also has high density which is undesirable. If the heat flux experienced by an entry vehicle is insufficient to cause pyrolysis then the TPS material's conductivity could allow heat flux conduction into the TPS bondline material thus leading to TPS failure. Consequently, for entry trajectories causing lower heat flux, carbon phenolic is sometimes inappropriate and lower-density TPS materials such as the following examples can be better design choices: Super light-weight ablator SLA in SLA-561V stands for super light-weight ablator. SLA-561V is a proprietary ablative made by Lockheed Martin that has been used as the primary TPS material on all of the 70° sphere-cone entry vehicles sent by NASA to Mars other than the Mars Science Laboratory (MSL). SLA-561V begins significant ablation at a heat flux of approximately 110 W/cm2, but will fail for heat fluxes greater than 300 W/cm2. The MSL aeroshell TPS is currently designed to withstand a peak heat flux of 234 W/cm2. The peak heat flux experienced by the Viking 1 aeroshell which landed on Mars was 21 W/cm2. For Viking 1, the TPS acted as a charred thermal insulator and never experienced significant ablation. Viking 1 was the first Mars lander and based upon a very conservative design. The Viking aeroshell had a base diameter of 3.54 meters (the largest used on Mars until Mars Science Laboratory). SLA-561V is applied by packing the ablative material into a honeycomb core that is pre-bonded to the aeroshell's structure thus enabling construction of a large heat shield. Phenolic-impregnated carbon ablator Phenolic-impregnated carbon ablator (PICA), a carbon fiber preform impregnated in phenolic resin, is a modern TPS material and has the advantages of low density (much lighter than carbon phenolic) coupled with efficient ablative ability at high heat flux. It is a good choice for ablative applications such as high-peak-heating conditions found on sample-return missions or lunar-return missions. PICA's thermal conductivity is lower than other high-heat-flux-ablative materials, such as conventional carbon phenolics. 
PICA was patented by NASA Ames Research Center in the 1990s and was the primary TPS material for the Stardust aeroshell. The Stardust sample-return capsule was the fastest man-made object ever to reenter Earth's atmosphere, at 28,000 mph (ca. 12.5 km/s) at 135 km altitude. This was faster than the Apollo mission capsules and 70% faster than the Shuttle. PICA was critical for the viability of the Stardust mission, which returned to Earth in 2006. Stardust's heat shield (0.81 m base diameter) was made of one monolithic piece sized to withstand a nominal peak heating rate of 1.2 kW/cm2. A PICA heat shield was also used for the Mars Science Laboratory entry into the Martian atmosphere. PICA-X An improved and easier-to-produce version called PICA-X was developed by SpaceX in 2006–2010 for the Dragon space capsule. The first reentry test of a PICA-X heat shield was on the Dragon C1 mission on 8 December 2010. The PICA-X heat shield was designed, developed and fully qualified by a small team of a dozen engineers and technicians in less than four years (Andrew Chambers and Dan Rasky, "NASA + SpaceX Work Together", NASA, 14 November 2010, accessed 16 February 2011). PICA-X is ten times less expensive to manufacture than the NASA PICA heat shield material. PICA-3 A second, enhanced version of PICA, called PICA-3, was developed by SpaceX during the mid-2010s. It was first flight-tested on the Crew Dragon spacecraft during its flight demonstration mission in 2019, and put into regular service on that spacecraft in 2020. HARLEM PICA and most other ablative TPS materials are either proprietary or classified, with formulations and manufacturing processes not disclosed in the open literature. This limits the ability of researchers to study these materials and hinders the development of thermal protection systems. Thus, the High Enthalpy Flow Diagnostics Group (HEFDiG) at the University of Stuttgart has developed an open carbon-phenolic ablative material, called the HEFDiG Ablation-Research Laboratory Experiment Material (HARLEM), from commercially available materials. HARLEM is prepared by impregnating a preform of a carbon fiber porous monolith (such as Calcarb rigid carbon insulation) with a solution of resole phenolic resin and polyvinylpyrrolidone in ethylene glycol, heating to polymerize the resin, and then removing the solvent under vacuum. The resulting material is cured and machined to the desired shape. SIRCA Silicone-impregnated reusable ceramic ablator (SIRCA) was also developed at NASA Ames Research Center and was used on the Backshell Interface Plate (BIP) of the Mars Pathfinder and Mars Exploration Rover (MER) aeroshells. The BIP was at the attachment points between the aeroshell's backshell (also called the afterbody or aft cover) and the cruise ring (also called the cruise stage). SIRCA was also the primary TPS material for the unsuccessful Deep Space 2 (DS/2) Mars impactor probes with their aeroshells. 
SIRCA is a monolithic, insulating material that can provide thermal protection through ablation. It is the only TPS material that can be machined to custom shapes and then applied directly to the spacecraft. No post-processing, heat treating, or additional coatings are required (unlike Space Shuttle tiles). Since SIRCA can be machined to precise shapes, it can be applied as tiles, leading edge sections, full nose caps, or in any number of custom shapes or sizes. So far, SIRCA has been demonstrated in backshell interface applications, but not yet as a forebody TPS material. AVCOAT AVCOAT is a NASA-specified ablative heat shield, a glass-filled epoxy–novolac system. NASA originally used it for the Apollo command module in the 1960s, and then utilized the material for its next-generation, beyond-low-Earth-orbit Orion crew module, which first flew in a December 2014 test and then operationally in November 2022. The AVCOAT used on Orion has been reformulated to meet environmental legislation that has been passed since the end of Apollo. Thermal soak Thermal soak is a part of almost all TPS schemes. For example, an ablative heat shield loses most of its thermal protection effectiveness when the outer wall temperature drops below the minimum necessary for pyrolysis. From that time to the end of the heat pulse, heat from the shock layer convects into the heat shield's outer wall and will eventually conduct to the payload. This outcome can be prevented by ejecting the heat shield (with its heat soak) prior to the heat conducting to the inner wall. Refractory insulation Refractory insulation keeps the heat in the outermost layer of the spacecraft's surface, where it is conducted away by the air. The temperature of the surface rises to incandescent levels, so the material must have a very high melting point, and the material must also exhibit very low thermal conductivity. Materials with these properties tend to be brittle, delicate, and difficult to fabricate in large sizes, so they are generally fabricated as relatively small tiles that are then attached to the structural skin of the spacecraft. There is a tradeoff between toughness and thermal conductivity: less conductive materials are generally more brittle. The Space Shuttle used multiple types of tiles. Tiles are also used on the Boeing X-37, Dream Chaser, and Starship's upper stage. Because insulation cannot be perfect, some heat energy is stored in the insulation and in the underlying material ("thermal soaking") and must be dissipated after the spacecraft exits the high-temperature flight regime. Some of this heat will re-radiate through the surface or will be carried off the surface by convection, but some will heat the spacecraft structure and interior, which may require active cooling after landing. Typical Space Shuttle TPS tiles (LI-900) have remarkable thermal protection properties. An LI-900 tile exposed to a temperature of 1,000 K on one side will remain merely warm to the touch on the other side. However, they are relatively brittle, break easily, and cannot survive in-flight rain. Passively cooled In some early ballistic missile RVs (e.g., the Mk-2 and the sub-orbital Mercury spacecraft), radiatively cooled TPS were used to initially absorb heat flux during the heat pulse and then, after the heat pulse, radiate and convect the stored heat back into the atmosphere. However, the earlier version of this technique required a considerable quantity of metal TPS (e.g., titanium, beryllium, copper, etc.). 
Modern designers prefer to avoid this added mass by using ablative and thermal-soak TPS instead. Thermal protection systems relying on emissivity use high emissivity coatings (HECs) to facilitate radiative cooling, while an underlying porous ceramic layer serves to protect the structure from high surface temperatures. High thermally stable emissivity values coupled with low thermal conductivity are key to the functionality of such systems. Radiatively cooled TPS can be found on modern entry vehicles, but reinforced carbon–carbon (RCC) (also called carbon–carbon) is normally used instead of metal. RCC was the TPS material on the Space Shuttle's nose cone and wing leading edges, and was also proposed as the leading-edge material for the X-33. Carbon is the most refractory material known, with a one-atmosphere sublimation temperature of for graphite. This high temperature made carbon an obvious choice as a radiatively cooled TPS material. Disadvantages of RCC are that it is currently expensive to manufacture, is heavy, and lacks robust impact resistance. Some high-velocity aircraft, such as the SR-71 Blackbird and Concorde, deal with heating similar to that experienced by spacecraft, but at much lower intensity, and for hours at a time. Studies of the SR-71's titanium skin revealed that the metal structure was restored to its original strength through annealing due to aerodynamic heating. In the case of the Concorde, the aluminium nose was permitted to reach a maximum operating temperature of (approximately warmer than the normally sub-zero, ambient air); the metallurgical implications (loss of temper) that would be associated with a higher peak temperature were the most significant factors determining the top speed of the aircraft. A radiatively cooled TPS for an entry vehicle is often called a hot-metal TPS. Early TPS designs for the Space Shuttle called for a hot-metal TPS based upon a nickel superalloy (dubbed René 41) and titanium shingles. This Shuttle TPS concept was rejected, because it was believed a silica tile-based TPS would involve lower development and manufacturing costs. A nickel superalloy-shingle TPS was again proposed for the unsuccessful X-33 single-stage-to-orbit (SSTO) prototype. Recently, newer radiatively cooled TPS materials have been developed that could be superior to RCC. Known as Ultra-High Temperature Ceramics, they were developed for the prototype vehicle Slender Hypervelocity Aerothermodynamic Research Probe (SHARP). These TPS materials are based on zirconium diboride and hafnium diboride. SHARP TPS have suggested performance improvements allowing for sustained Mach 7 flight at sea level, Mach 11 flight at altitudes, and significant improvements for vehicles designed for continuous hypersonic flight. SHARP TPS materials enable sharp leading edges and nose cones to greatly reduce drag for airbreathing combined-cycle-propelled spaceplanes and lifting bodies. SHARP materials have exhibited effective TPS characteristics from zero to more than , with melting points over . They are structurally stronger than RCC, and, thus, do not require structural reinforcement with materials such as Inconel. SHARP materials are extremely efficient at reradiating absorbed heat, thus eliminating the need for additional TPS behind and between the SHARP materials and conventional vehicle structure. NASA initially funded (and discontinued) a multi-phase R&D program through the University of Montana in 2001 to test SHARP materials on test vehicles. 
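How far re-radiation can go in cooling a hot structure can be estimated with a simple energy balance: at radiative equilibrium the incoming heat flux equals the flux radiated away by the wall, q = epsilon * sigma * T^4, so the wall settles at T = (q/(epsilon*sigma))^(1/4). The heat fluxes and emissivity in the sketch below are assumed round numbers for illustration, not data for RCC, SHARP ceramics or any other specific material.

SIGMA = 5.670e-8                 # Stefan-Boltzmann constant, W/(m^2 K^4)

def radiative_equilibrium_temperature(q_w_per_cm2: float, emissivity: float) -> float:
    q = q_w_per_cm2 * 1.0e4      # convert W/cm^2 to W/m^2
    return (q / (emissivity * SIGMA)) ** 0.25

for q in (10.0, 50.0, 100.0):    # assumed incident heat fluxes, W/cm^2
    print(f"{q:5.0f} W/cm^2 -> wall at about {radiative_equilibrium_temperature(q, 0.85):.0f} K")

Because the stagnation-point heat flux grows roughly as the inverse square root of the nose radius, sharp leading edges see much higher fluxes and therefore much higher equilibrium wall temperatures, which is why the diboride-based SHARP materials, with their higher temperature limits, are what make the slender shapes described above feasible.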
Actively cooled Various advanced reusable spacecraft and hypersonic aircraft designs have been proposed to employ heat shields made from temperature-resistant metal alloys that incorporate a refrigerant or cryogenic fuel circulating through them. Such a TPS concept was proposed for the X-30 National Aerospace Plane (NASP) in the mid-1980s. The NASP was supposed to have been a scramjet-powered hypersonic aircraft, but failed in development. In 2005 and 2012, two unmanned lifting-body craft with actively cooled hulls were launched as part of the German Sharp Edge Flight Experiment (SHEFEX). In early 2019, SpaceX was developing an actively cooled heat shield for its Starship spacecraft, in which part of the thermal protection system would be a transpirationally cooled outer-skin design for the reentering spacecraft (SpaceX CEO Elon Musk explains Starship's "transpiring" steel heat shield in Q&A, Eric Ralph, Teslarati News, 23 January 2019, accessed 23 March 2019). However, SpaceX abandoned this approach in favor of a modern version of heat shield tiles later in 2019. The Stoke Space Nova second stage, announced in October 2023 and not yet flying, uses a regeneratively cooled (by liquid hydrogen) heat shield. In the early 1960s, various TPS systems were proposed that would use water or another cooling liquid sprayed into the shock layer or passed through channels in the heat shield. Advantages included the possibility of more all-metal designs, which would be cheaper to develop, more rugged, and free of the need for classified or unproven technology. The disadvantages are increased weight and complexity, and lower reliability. The concept has never been flown, but a similar technology (the plug nozzle) did undergo extensive ground testing. Propulsive entry Fuel permitting, nothing prevents a vehicle from entering the atmosphere with a retrograde engine burn, which has the double effect of slowing the vehicle down much faster than atmospheric drag alone would and forcing the compressed hot air away from the vehicle's body. During reentry, the first stage of the SpaceX Falcon 9 performs an entry burn to rapidly decelerate from its initial hypersonic speed. Feathered entry In 2004, aircraft designer Burt Rutan demonstrated the feasibility of a shape-changing airfoil for reentry with the sub-orbital SpaceShipOne. The wings on this craft rotate upward into a feathered configuration that provides a shuttlecock effect. SpaceShipOne thus achieves much more aerodynamic drag on reentry while not experiencing significant thermal loads. The configuration increases drag, as the craft is less streamlined, and results in more atmospheric gas particles hitting the spacecraft at higher altitudes than would otherwise be the case. The aircraft thus slows down more in the higher atmospheric layers, which is the key to efficient reentry. Secondly, the aircraft automatically orients itself in this state to a high-drag attitude. However, the velocity attained by SpaceShipOne prior to reentry is much lower than that of an orbital spacecraft, and engineers, including Rutan, recognize that a feathered reentry technique is not suitable for return from orbit. On 4 May 2011, the feathering mechanism was first tested on SpaceShipTwo during a glide flight after release from White Knight Two. Premature deployment of the feathering system was responsible for the 2014 VSS Enterprise crash, in which the aircraft disintegrated, killing the co-pilot. Feathered reentry was first described by Dean Chapman of NACA in 1958. 
In the section of his report on Composite Entry, Chapman described a solution to the problem using a high-drag device. Inflatable heat shield entry Deceleration for atmospheric reentry, especially for higher-speed Mars-return missions, benefits from maximizing "the drag area of the entry system. The larger the diameter of the aeroshell, the bigger the payload can be." An inflatable aeroshell provides one alternative for enlarging the drag area with a low-mass design. Russia Such an inflatable shield/aerobrake was designed for the penetrators of the Mars 96 mission. After that mission failed due to a launcher malfunction, NPO Lavochkin and DASA/ESA designed a follow-on mission for Earth orbit. The Inflatable Reentry and Descent Technology (IRDT) demonstrator was launched on Soyuz-Fregat on 8 February 2000. The inflatable shield was designed as a cone with two stages of inflation. Although the second stage of the shield failed to inflate, the demonstrator survived the orbital reentry and was recovered (Inflatable Reentry and Descent Technology (IRDT) Factsheet, ESA, September 2005). The subsequent missions flown on the Volna rocket failed due to launcher failures. NASA IRVE NASA launched an inflatable heat shield experimental spacecraft on 17 August 2009 with the successful first test flight of the Inflatable Re-entry Vehicle Experiment (IRVE). The heat shield had been vacuum-packed into a payload shroud and launched on a Black Brant 9 sounding rocket from NASA's Wallops Flight Facility on Wallops Island, Virginia. "Nitrogen inflated the heat shield, made of several layers of silicone-coated [Kevlar] fabric, to a mushroom shape in space several minutes after liftoff." The rocket apogee was at an altitude of , where it began its descent to supersonic speed. Less than a minute later the shield was released from its cover to inflate at an altitude of . The inflation of the shield took less than 90 seconds. NASA HIAD Following the success of the initial IRVE experiments, NASA developed the concept into the more ambitious Hypersonic Inflatable Aerodynamic Decelerator (HIAD). The current design is shaped like a shallow cone, with the structure built up as a stack of circular inflated tubes of gradually increasing major diameter. The forward (convex) face of the cone is covered with a flexible thermal protection system robust enough to withstand the stresses of atmospheric entry (or reentry). In 2012, a HIAD was tested as Inflatable Reentry Vehicle Experiment 3 (IRVE-3) using a sub-orbital sounding rocket, and worked. See also the Low-Density Supersonic Decelerator, a NASA project with tests in 2014 and 2015 of a 6 m diameter SIAD-R. LOFTID An inflatable reentry vehicle, the Low-Earth Orbit Flight Test of an Inflatable Decelerator (LOFTID), was launched in November 2022, inflated in orbit, reentered faster than Mach 25, and was successfully recovered on November 10. Entry vehicle design considerations There are four critical parameters considered when designing a vehicle for atmospheric entry: peak heat flux, heat load, peak deceleration, and peak dynamic pressure. Peak heat flux and dynamic pressure select the TPS material. Heat load selects the thickness of the TPS material stack. Peak deceleration is of major importance for crewed missions. The upper limit for crewed return to Earth from low Earth orbit (LEO) or lunar return is 10g. For Martian atmospheric entry after long exposure to zero gravity, the upper limit is 4g. 
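The deceleration limits just quoted can be explored with the classical Allen-Eggers closed-form solution for a purely ballistic entry into an exponential atmosphere (the same first-order analysis referred to below), in which the peak deceleration depends only on entry speed, flight-path angle and atmospheric scale height, not on the ballistic coefficient. The entry conditions and ballistic coefficient in this sketch are assumed illustrative values, not those of any particular vehicle.

import numpy as np

rho0, H = 1.225, 7200.0                  # sea-level density (kg/m^3) and scale height (m), approximate Earth values
V_E, gamma = 7800.0, np.radians(3.0)     # assumed entry speed (m/s) and entry flight-path angle
beta = 300.0                             # assumed ballistic coefficient m/(Cd*A), kg/m^2
g0 = 9.81

a_peak = V_E**2 * np.sin(gamma) / (2.0 * np.e * H)       # peak deceleration, independent of beta
h_peak = H * np.log(rho0 * H / (beta * np.sin(gamma)))   # altitude at which it occurs
V_peak = V_E * np.exp(-0.5)                              # speed at peak deceleration

print(f"peak deceleration ~ {a_peak / g0:.1f} g at ~ {h_peak / 1e3:.0f} km, speed ~ {V_peak / 1e3:.1f} km/s")

For these assumed numbers the peak works out to roughly 8 g at an altitude of a few tens of kilometres, inside the 10g crewed limit quoted above; accounting for lift, a varying flight-path angle and heating correlations requires the numerical trajectory simulation discussed next.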
Peak dynamic pressure can also influence the selection of the outermost TPS material if spallation is an issue. The reentry vehicle's design parameters may be assessed through numerical simulation, including simplifications of the vehicle's dynamics, such as the planar reentry equations and heat flux correlations. Starting from the principle of conservative design, the engineer typically considers two worst-case trajectories, the undershoot and overshoot trajectories. The overshoot trajectory is typically defined as the shallowest-allowable entry velocity angle prior to atmospheric skip-off. The overshoot trajectory has the highest heat load and sets the TPS thickness. The undershoot trajectory is defined by the steepest allowable trajectory. For crewed missions the steepest entry angle is limited by the peak deceleration. The undershoot trajectory also has the highest peak heat flux and dynamic pressure. Consequently, the undershoot trajectory is the basis for selecting the TPS material. There is no "one size fits all" TPS material. A TPS material that is ideal for high heat flux may be too conductive (too dense) for a long duration heat load. A low-density TPS material might lack the tensile strength to resist spallation if the dynamic pressure is too high. A TPS material can perform well for a specific peak heat flux, but fail catastrophically for the same peak heat flux if the wall pressure is significantly increased (this happened with NASA's R-4 test spacecraft). Older TPS materials tend to be more labor-intensive and expensive to manufacture compared to modern materials. However, modern TPS materials often lack the flight history of the older materials (an important consideration for a risk-averse designer). Based upon Allen and Eggers discovery, maximum aeroshell bluntness (maximum drag) yields minimum TPS mass. Maximum bluntness (minimum ballistic coefficient) also yields a minimal terminal velocity at maximum altitude (very important for Mars EDL, but detrimental for military RVs). However, there is an upper limit to bluntness imposed by aerodynamic stability considerations based upon shock wave detachment. A shock wave will remain attached to the tip of a sharp cone if the cone's half-angle is below a critical value. This critical half-angle can be estimated using perfect gas theory (this specific aerodynamic instability occurs below hypersonic speeds). For a nitrogen atmosphere (Earth or Titan), the maximum allowed half-angle is approximately 60°. For a carbon dioxide atmosphere (Mars or Venus), the maximum-allowed half-angle is approximately 70°. After shock wave detachment, an entry vehicle must carry significantly more shocklayer gas around the leading edge stagnation point (the subsonic cap). Consequently, the aerodynamic center moves upstream thus causing aerodynamic instability. It is incorrect to reapply an aeroshell design intended for Titan entry (Huygens probe in a nitrogen atmosphere) for Mars entry (Beagle 2 in a carbon dioxide atmosphere). Prior to being abandoned, the Soviet Mars lander program achieved one successful landing (Mars 3), on the second of three entry attempts (the others were Mars 2 and Mars 6). The Soviet Mars landers were based upon a 60° half-angle aeroshell design. A 45° half-angle sphere-cone is typically used for atmospheric probes (surface landing not intended) even though TPS mass is not minimized. 
The rationale for a 45° half-angle is to have either aerodynamic stability from entry to impact (the heat shield is not jettisoned) or a short-and-sharp heat pulse followed by prompt heat shield jettison. A 45° sphere-cone design was used with the DS/2 Mars impactor and Pioneer Venus probes. Atmospheric entry accidents Not all atmospheric reentries have been completely successful: Voskhod 2 – The service module failed to detach for some time, but the crew survived. Soyuz 5 – The service module failed to detach, but the crew survived. Apollo 15 – One of the three ringsail parachutes failed during the ocean landing, likely damaged as the spacecraft vented excess control fuel. The spacecraft was designed to land safely with only two parachutes, and the crew were uninjured. Mars Polar Lander – Failed during EDL. The failure was believed to be the consequence of a software error. The precise cause is unknown for lack of real-time telemetry. Space Shuttle Columbia STS-1 – A combination of launch damage, protruding gap filler, and tile installation error resulted in serious damage to the orbiter, of which the crew was only partly aware. Had the crew known the extent of the damage before attempting reentry, they would have flown the shuttle to a safe altitude and then bailed out. Nevertheless, reentry was successful, and the orbiter proceeded to a normal landing. Space Shuttle Atlantis STS-27 – Insulation from the starboard solid rocket booster nose cap struck the orbiter during launch, causing significant tile damage. This dislodged one tile completely, over an aluminum mounting plate for a TACAN antenna. The antenna sustained extreme heat damage but prevented the hot gas from penetrating the vehicle body. Genesis – The parachute failed to deploy due to a G-switch having been installed backwards (a similar error delayed parachute deployment for the Galileo Probe). Consequently, the Genesis entry vehicle crashed into the desert floor. The payload was damaged, but most scientific data were recoverable. Soyuz TMA-11 – The Soyuz propulsion module failed to separate properly; a fallback ballistic reentry was executed that subjected the crew to accelerations of about . The crew survived. Starship IFT-3 – The SpaceX Starship's third integrated test flight was supposed to end with a hard splashdown in the Indian Ocean. However, approximately 48.5 minutes after launch, at an altitude of 65 km, contact with the spacecraft was lost, indicating that it burned up on reentry. This was caused by excessive vehicle rolling due to clogged vents on the vehicle. Some reentries have resulted in significant disasters: Soyuz 1 – The attitude control system failed while still in orbit, and later the parachutes became entangled during the emergency landing sequence (entry, descent, and landing (EDL) failure). Lone cosmonaut Vladimir Mikhailovich Komarov died. Soyuz 11 – During tri-module separation, a valve seal was opened by the shock, depressurizing the descent module; the crew of three asphyxiated in space minutes before reentry. Space Shuttle Columbia STS-107 – The failure of a reinforced carbon–carbon panel on a wing leading edge, caused by debris impact at launch, led to the breakup of the orbiter on reentry, resulting in the deaths of all seven crew members. Uncontrolled and unprotected entries Of satellites that reenter, approximately 10–40% of the mass of the object may reach the surface of the Earth. On average, about one catalogued object reenters per day. 
Because the Earth's surface is predominantly water, most objects that survive reentry land in one of the world's oceans. The estimated chance that a given person would be hit and injured during their lifetime is around 1 in a trillion. On January 24, 1978, the Soviet Kosmos 954 reentered and crashed near Great Slave Lake in the Northwest Territories of Canada. The satellite was nuclear-powered and left radioactive debris near its impact site. On July 11, 1979, the US Skylab space station reentered and spread debris across the Australian Outback. The reentry was a major media event, largely due to the Kosmos 954 incident, but it was not viewed as much of a potential disaster since Skylab did not carry toxic nuclear or hydrazine fuel. NASA had originally hoped to use a Space Shuttle mission to either extend its life or enable a controlled reentry, but delays in the Shuttle program, plus unexpectedly high solar activity, made this impossible. On February 7, 1991, the Soviet Salyut 7 space station, with the Kosmos 1686 module attached, reentered and scattered debris over the town of Capitán Bermúdez, Argentina. The station had been boosted to a higher orbit in August 1986 in an attempt to keep it up until 1994, but in a scenario similar to Skylab, the planned Buran shuttle was cancelled and high solar activity caused it to come down sooner than expected. On September 7, 2011, NASA announced the impending uncontrolled reentry of the Upper Atmosphere Research Satellite and noted that there was a small risk to the public. The decommissioned satellite reentered the atmosphere on September 24, 2011, and some pieces are presumed to have crashed into the South Pacific Ocean over an extended debris field. On April 1, 2018, the Chinese Tiangong-1 space station reentered over the Pacific Ocean, halfway between Australia and South America. The China Manned Space Engineering Office had intended to control the reentry, but lost telemetry and control in March 2017. On May 11, 2020, the core stage of a Chinese Long March 5B (COSPAR ID 2020-027C) made an uncontrolled reentry over the Atlantic Ocean, near the West African coast. A few pieces of rocket debris reportedly survived reentry and fell on at least two villages in Ivory Coast. On May 8, 2021, the core stage of another Chinese Long March 5B (COSPAR ID 2021-0035B) made an uncontrolled reentry just west of the Maldives in the Indian Ocean (approximately 72.47°E longitude and 2.65°N latitude). Witnesses reported rocket debris as far away as the Arabian Peninsula. Deorbit disposal Salyut 1, the world's first space station, was deliberately de-orbited into the Pacific Ocean in 1971 following the Soyuz 11 accident. Its successor, Salyut 6, was de-orbited in a controlled manner as well. On June 4, 2000, the Compton Gamma Ray Observatory was deliberately de-orbited after one of its gyroscopes failed. The debris that did not burn up fell harmlessly into the Pacific Ocean. The observatory was still operational, but the failure of another gyroscope would have made de-orbiting much more difficult and dangerous. With some controversy, NASA decided in the interest of public safety that a controlled crash was preferable to letting the craft come down at random. In 2001, the Russian Mir space station was deliberately de-orbited and broke apart in the fashion expected by the command center during atmospheric reentry. Mir entered the Earth's atmosphere on March 23, 2001, near Nadi, Fiji, and fell into the South Pacific Ocean. On February 21, 2008, a disabled U.S. 
spy satellite, USA-193, was hit at an altitude of approximately with an SM-3 missile fired from a U.S. Navy cruiser off the coast of Hawaii. The satellite was inoperative, having failed to reach its intended orbit when it was launched in 2006. Due to its rapidly deteriorating orbit, it was destined for uncontrolled reentry within a month. The U.S. Department of Defense expressed concern that the fuel tank containing highly toxic hydrazine might survive reentry to reach the Earth's surface intact. Several governments, including those of Russia, China, and Belarus, protested the action as a thinly veiled demonstration of US anti-satellite capabilities. China had previously caused an international incident when it tested an anti-satellite missile in 2007. Environmental impact Atmospheric entry has a measurable impact on Earth's atmosphere, particularly the stratosphere. By 2021, spacecraft accounted for about 3% of all atmospheric entries, but in a scenario in which the 2019 number of satellites is doubled, artificial entries would make up about 40% of the total and atmospheric aerosols would be about 94% artificial. The impact of spacecraft burning up in the atmosphere differs from that of meteors because of spacecraft's generally larger size and different composition. The pollutants produced by this artificial burn-up have been traced in the atmosphere and identified as reactive species that may adversely affect the composition of the atmosphere, and particularly the ozone layer. Consideration of space sustainability with regard to the atmospheric impact of reentry was, as of 2022, only beginning to develop, and in 2024 the field was described as suffering from "atmosphere-blindness", contributing to global environmental injustice. This has been attributed to current end-of-life spacecraft management, which favors controlled reentry as the standard disposal practice, mainly to prevent the dangers posed by uncontrolled atmospheric entries and space debris. Suggested alternatives include the use of less polluting materials, in-orbit servicing, and potentially in-space recycling. 
Equipartition theorem
In classical statistical mechanics, the equipartition theorem relates the temperature of a system to its average energies. The equipartition theorem is also known as the law of equipartition, equipartition of energy, or simply equipartition. The original idea of equipartition was that, in thermal equilibrium, energy is shared equally among all of its various forms; for example, the average kinetic energy per degree of freedom in translational motion of a molecule should equal that in rotational motion. The equipartition theorem makes quantitative predictions. Like the virial theorem, it gives the total average kinetic and potential energies for a system at a given temperature, from which the system's heat capacity can be computed. However, equipartition also gives the average values of individual components of the energy, such as the kinetic energy of a particular particle or the potential energy of a single spring. For example, it predicts that every atom in a monatomic ideal gas has an average kinetic energy of in thermal equilibrium, where is the Boltzmann constant and T is the (thermodynamic) temperature. More generally, equipartition can be applied to any classical system in thermal equilibrium, no matter how complicated. It can be used to derive the ideal gas law, and the Dulong–Petit law for the specific heat capacities of solids. The equipartition theorem can also be used to predict the properties of stars, even white dwarfs and neutron stars, since it holds even when relativistic effects are considered. Although the equipartition theorem makes accurate predictions in certain conditions, it is inaccurate when quantum effects are significant, such as at low temperatures. When the thermal energy is smaller than the quantum energy spacing in a particular degree of freedom, the average energy and heat capacity of this degree of freedom are less than the values predicted by equipartition. Such a degree of freedom is said to be "frozen out" when the thermal energy is much smaller than this spacing. For example, the heat capacity of a solid decreases at low temperatures as various types of motion become frozen out, rather than remaining constant as predicted by equipartition. Such decreases in heat capacity were among the first signs to physicists of the 19th century that classical physics was incorrect and that a new, more subtle, scientific model was required. Along with other evidence, equipartition's failure to model black-body radiation—also known as the ultraviolet catastrophe—led Max Planck to suggest that energy in the oscillators in an object, which emit light, were quantized, a revolutionary hypothesis that spurred the development of quantum mechanics and quantum field theory. Basic concept and simple examples The name "equipartition" means "equal division," as derived from the Latin equi from the antecedent, æquus ("equal or even"), and partition from the noun, partitio ("division, portion"). The original concept of equipartition was that the total kinetic energy of a system is shared equally among all of its independent parts, on the average, once the system has reached thermal equilibrium. Equipartition also makes quantitative predictions for these energies. For example, it predicts that every atom of an inert noble gas, in thermal equilibrium at temperature , has an average translational kinetic energy of , where is the Boltzmann constant. 
As a consequence, since kinetic energy is equal to (mass)(velocity)2, the heavier atoms of xenon have a lower average speed than do the lighter atoms of helium at the same temperature. Figure 2 shows the Maxwell–Boltzmann distribution for the speeds of the atoms in four noble gases. In this example, the key point is that the kinetic energy is quadratic in the velocity. The equipartition theorem shows that in thermal equilibrium, any degree of freedom (such as a component of the position or velocity of a particle) which appears only quadratically in the energy has an average energy of and therefore contributes to the system's heat capacity. This has many applications. Translational energy and ideal gases The (Newtonian) kinetic energy of a particle of mass , velocity is given by where , and are the Cartesian components of the velocity . Here, is short for Hamiltonian, and used henceforth as a symbol for energy because the Hamiltonian formalism plays a central role in the most general form of the equipartition theorem. Since the kinetic energy is quadratic in the components of the velocity, by equipartition these three components each contribute to the average kinetic energy in thermal equilibrium. Thus the average kinetic energy of the particle is , as in the example of noble gases above. More generally, in a monatomic ideal gas the total energy consists purely of (translational) kinetic energy: by assumption, the particles have no internal degrees of freedom and move independently of one another. Equipartition therefore predicts that the total energy of an ideal gas of particles is . It follows that the heat capacity of the gas is and hence, in particular, the heat capacity of a mole of such gas particles is , where NA is the Avogadro constant and R is the gas constant. Since R ≈ 2 cal/(mol·K), equipartition predicts that the molar heat capacity of an ideal gas is roughly 3 cal/(mol·K). This prediction is confirmed by experiment when compared to monatomic gases. The mean kinetic energy also allows the root mean square speed of the gas particles to be calculated: where is the mass of a mole of gas particles. This result is useful for many applications such as Graham's law of effusion, which provides a method for enriching uranium. Rotational energy and molecular tumbling in solution A similar example is provided by a rotating molecule with principal moments of inertia , and . According to classical mechanics, the rotational energy of such a molecule is given by where , , and are the principal components of the angular velocity. By exactly the same reasoning as in the translational case, equipartition implies that in thermal equilibrium the average rotational energy of each particle is . Similarly, the equipartition theorem allows the average (more precisely, the root mean square) angular speed of the molecules to be calculated. The tumbling of rigid molecules—that is, the random rotations of molecules in solution—plays a key role in the relaxations observed by nuclear magnetic resonance, particularly protein NMR and residual dipolar couplings. Rotational diffusion can also be observed by other biophysical probes such as fluorescence anisotropy, flow birefringence and dielectric spectroscopy. 
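The translational result above can be turned into numbers directly: with an average kinetic energy of (3/2) kB T per atom, the root-mean-square speed is v_rms = sqrt(3 R T / M). The short sketch below evaluates this at an assumed room temperature of 298 K for four representative noble gases (helium, neon, argon and xenon are assumed here, with rounded molar masses).

import math

R, T = 8.314, 298.0                  # gas constant J/(mol K) and assumed temperature K
molar_mass = {"He": 4.00e-3, "Ne": 20.18e-3, "Ar": 39.95e-3, "Xe": 131.29e-3}   # kg/mol, rounded

for gas, M in molar_mass.items():
    v_rms = math.sqrt(3.0 * R * T / M)   # from <(1/2) m v^2> = (3/2) kB T
    print(f"{gas}: v_rms ~ {v_rms:4.0f} m/s")

Helium atoms come out nearly six times faster than xenon atoms at the same temperature, exactly the mass dependence visible in the Maxwell-Boltzmann curves, even though the average kinetic energy per atom is identical for all four gases.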
Potential energy and harmonic oscillators Equipartition applies to potential energies as well as kinetic energies: important examples include harmonic oscillators such as a spring, which has a quadratic potential energy where the constant describes the stiffness of the spring and is the deviation from equilibrium. If such a one-dimensional system has mass , then its kinetic energy is where and denote the velocity and momentum of the oscillator. Combining these terms yields the total energy Equipartition therefore implies that in thermal equilibrium, the oscillator has average energy where the angular brackets denote the average of the enclosed quantity, This result is valid for any type of harmonic oscillator, such as a pendulum, a vibrating molecule or a passive electronic oscillator. Systems of such oscillators arise in many situations; by equipartition, each such oscillator receives an average total energy and hence contributes to the system's heat capacity. This can be used to derive the formula for Johnson–Nyquist noise and the Dulong–Petit law of solid heat capacities. The latter application was particularly significant in the history of equipartition. Specific heat capacity of solids An important application of the equipartition theorem is to the specific heat capacity of a crystalline solid. Each atom in such a solid can oscillate in three independent directions, so the solid can be viewed as a system of independent simple harmonic oscillators, where denotes the number of atoms in the lattice. Since each harmonic oscillator has average energy , the average total energy of the solid is , and its heat capacity is . By taking to be the Avogadro constant , and using the relation between the gas constant and the Boltzmann constant , this provides an explanation for the Dulong–Petit law of specific heat capacities of solids, which stated that the specific heat capacity (per unit mass) of a solid element is inversely proportional to its atomic weight. A modern version is that the molar heat capacity of a solid is 3R ≈ 6 cal/(mol·K). However, this law is inaccurate at lower temperatures, due to quantum effects; it is also inconsistent with the experimentally derived third law of thermodynamics, according to which the molar heat capacity of any substance must go to zero as the temperature goes to absolute zero. A more accurate theory, incorporating quantum effects, was developed by Albert Einstein (1907) and Peter Debye (1911). Many other physical systems can be modeled as sets of coupled oscillators. The motions of such oscillators can be decomposed into normal modes, like the vibrational modes of a piano string or the resonances of an organ pipe. On the other hand, equipartition often breaks down for such systems, because there is no exchange of energy between the normal modes. In an extreme situation, the modes are independent and so their energies are independently conserved. This shows that some sort of mixing of energies, formally called ergodicity, is important for the law of equipartition to hold. Sedimentation of particles Potential energies are not always quadratic in the position. However, the equipartition theorem also shows that if a degree of freedom contributes only a multiple of (for a fixed real number ) to the energy, then in thermal equilibrium the average energy of that part is . There is a simple application of this extension to the sedimentation of particles under gravity. 
For example, the haze sometimes seen in beer can be caused by clumps of proteins that scatter light. Over time, these clumps settle downwards under the influence of gravity, causing more haze near the bottom of a bottle than near its top. However, in a process working in the opposite direction, the particles also diffuse back up towards the top of the bottle. Once equilibrium has been reached, the equipartition theorem may be used to determine the average position of a particular clump of buoyant mass . For an infinitely tall bottle of beer, the gravitational potential energy is given by where is the height of the protein clump in the bottle and g is the acceleration due to gravity. Since , the average potential energy of a protein clump equals . Hence, a protein clump with a buoyant mass of 10 MDa (roughly the size of a virus) would produce a haze with an average height of about 2 cm at equilibrium. The process of such sedimentation to equilibrium is described by the Mason–Weaver equation. History The equipartition of kinetic energy was proposed initially in 1843, and more correctly in 1845, by John James Waterston. In 1859, James Clerk Maxwell argued that the kinetic heat energy of a gas is equally divided between linear and rotational energy. In 1876, Ludwig Boltzmann expanded on this principle by showing that the average energy was divided equally among all the independent components of motion in a system. Boltzmann applied the equipartition theorem to provide a theoretical explanation of the Dulong–Petit law for the specific heat capacities of solids. The history of the equipartition theorem is intertwined with that of specific heat capacity, both of which were studied in the 19th century. In 1819, the French physicists Pierre Louis Dulong and Alexis Thérèse Petit discovered that the specific heat capacities of solid elements at room temperature were inversely proportional to the atomic weight of the element. Their law was used for many years as a technique for measuring atomic weights. However, subsequent studies by James Dewar and Heinrich Friedrich Weber showed that this Dulong–Petit law holds only at high temperatures; at lower temperatures, or for exceptionally hard solids such as diamond, the specific heat capacity was lower. Experimental observations of the specific heat capacities of gases also raised concerns about the validity of the equipartition theorem. The theorem predicts that the molar heat capacity of simple monatomic gases should be roughly 3 cal/(mol·K), whereas that of diatomic gases should be roughly 7 cal/(mol·K). Experiments confirmed the former prediction, but found that molar heat capacities of diatomic gases were typically about 5 cal/(mol·K), and fell to about 3 cal/(mol·K) at very low temperatures. Maxwell noted in 1875 that the disagreement between experiment and the equipartition theorem was much worse than even these numbers suggest; since atoms have internal parts, heat energy should go into the motion of these internal parts, making the predicted specific heats of monatomic and diatomic gases much higher than 3 cal/(mol·K) and 7 cal/(mol·K), respectively. A third discrepancy concerned the specific heat of metals. According to the classical Drude model, metallic electrons act as a nearly ideal gas, and so they should contribute to the heat capacity by the equipartition theorem, where Ne is the number of electrons. 
Experimentally, however, electrons contribute little to the heat capacity: the molar heat capacities of many conductors and insulators are nearly the same. Several explanations of equipartition's failure to account for molar heat capacities were proposed. Boltzmann defended the derivation of his equipartition theorem as correct, but suggested that gases might not be in thermal equilibrium because of their interactions with the aether. Lord Kelvin suggested that the derivation of the equipartition theorem must be incorrect, since it disagreed with experiment, but was unable to show how. In 1900 Lord Rayleigh instead put forward a more radical view that the equipartition theorem and the experimental assumption of thermal equilibrium were both correct; to reconcile them, he noted the need for a new principle that would provide an "escape from the destructive simplicity" of the equipartition theorem. Albert Einstein provided that escape, by showing in 1906 that these anomalies in the specific heat were due to quantum effects, specifically the quantization of energy in the elastic modes of the solid. Einstein used the failure of equipartition to argue for the need of a new quantum theory of matter. Nernst's 1910 measurements of specific heats at low temperatures supported Einstein's theory, and led to the widespread acceptance of quantum theory among physicists. General formulation of the equipartition theorem The most general form of the equipartition theorem states that under suitable assumptions (discussed below), for a physical system with Hamiltonian energy function and degrees of freedom , the following equipartition formula holds in thermal equilibrium for all indices and : Here is the Kronecker delta, which is equal to one if and is zero otherwise. The averaging brackets is assumed to be an ensemble average over phase space or, under an assumption of ergodicity, a time average of a single system. The general equipartition theorem holds in both the microcanonical ensemble, when the total energy of the system is constant, and also in the canonical ensemble, when the system is coupled to a heat bath with which it can exchange energy. Derivations of the general formula are given later in the article. The general formula is equivalent to the following two: If a degree of freedom xn appears only as a quadratic term anxn2 in the Hamiltonian H, then the first of these formulae implies that which is twice the contribution that this degree of freedom makes to the average energy . Thus the equipartition theorem for systems with quadratic energies follows easily from the general formula. A similar argument, with 2 replaced by s, applies to energies of the form anxns. The degrees of freedom xn are coordinates on the phase space of the system and are therefore commonly subdivided into generalized position coordinates qk and generalized momentum coordinates pk, where pk is the conjugate momentum to qk. In this situation, formula 1 means that for all k, Using the equations of Hamiltonian mechanics, these formulae may also be written Similarly, one can show using formula 2 that and Relation to the virial theorem The general equipartition theorem is an extension of the virial theorem (proposed in 1870), which states that where t denotes time. Two key differences are that the virial theorem relates summed rather than individual averages to each other, and it does not connect them to the temperature T. 
Another difference is that traditional derivations of the virial theorem use averages over time, whereas those of the equipartition theorem use averages over phase space. Applications Ideal gas law Ideal gases provide an important application of the equipartition theorem. As well as providing the formula for the average kinetic energy per particle, the equipartition theorem can be used to derive the ideal gas law from classical mechanics. If q = (qx, qy, qz) and p = (px, py, pz) denote the position vector and momentum of a particle in the gas, and F is the net force on that particle, then where the first equality is Newton's second law, and the second line uses Hamilton's equations and the equipartition formula. Summing over a system of N particles yields By Newton's third law and the ideal gas assumption, the net force on the system is the force applied by the walls of their container, and this force is given by the pressure P of the gas. Hence where is the infinitesimal area element along the walls of the container. Since the divergence of the position vector is the divergence theorem implies that where is an infinitesimal volume within the container and is the total volume of the container. Putting these equalities together yields which immediately implies the ideal gas law for N particles: where is the number of moles of gas and is the gas constant. Although equipartition provides a simple derivation of the ideal-gas law and the internal energy, the same results can be obtained by an alternative method using the partition function. Diatomic gases A diatomic gas can be modelled as two masses, and , joined by a spring of stiffness , which is called the rigid rotor-harmonic oscillator approximation. The classical energy of this system is where and are the momenta of the two atoms, and is the deviation of the inter-atomic separation from its equilibrium value. Every degree of freedom in the energy is quadratic and, thus, should contribute to the total average energy, and to the heat capacity. Therefore, the heat capacity of a gas of N diatomic molecules is predicted to be : the momenta and contribute three degrees of freedom each, and the extension contributes the seventh. It follows that the heat capacity of a mole of diatomic molecules with no other degrees of freedom should be and, thus, the predicted molar heat capacity should be roughly 7 cal/(mol·K). However, the experimental values for molar heat capacities of diatomic gases are typically about 5 cal/(mol·K) and fall to 3 cal/(mol·K) at very low temperatures. This disagreement between the equipartition prediction and the experimental value of the molar heat capacity cannot be explained by using a more complex model of the molecule, since adding more degrees of freedom can only increase the predicted specific heat, not decrease it. This discrepancy was a key piece of evidence showing the need for a quantum theory of matter. Extreme relativistic ideal gases Equipartition was used above to derive the classical ideal gas law from Newtonian mechanics. However, relativistic effects become dominant in some systems, such as white dwarfs and neutron stars, and the ideal gas equations must be modified. The equipartition theorem provides a convenient way to derive the corresponding laws for an extreme relativistic ideal gas. In such cases, the kinetic energy of a single particle is given by the formula Taking the derivative of with respect to the momentum component gives the formula and similarly for the and components. 
Adding the three components together gives where the last equality follows from the equipartition formula. Thus, the average total energy of an extreme relativistic gas is twice that of the non-relativistic case: for particles, it is . Non-ideal gases In an ideal gas the particles are assumed to interact only through collisions. The equipartition theorem may also be used to derive the energy and pressure of "non-ideal gases" in which the particles also interact with one another through conservative forces whose potential depends only on the distance between the particles. This situation can be described by first restricting attention to a single gas particle, and approximating the rest of the gas by a spherically symmetric distribution. It is then customary to introduce a radial distribution function such that the probability density of finding another particle at a distance from the given particle is equal to , where is the mean density of the gas. It follows that the mean potential energy associated to the interaction of the given particle with the rest of the gas is The total mean potential energy of the gas is therefore , where is the number of particles in the gas, and the factor is needed because summation over all the particles counts each interaction twice. Adding kinetic and potential energies, then applying equipartition, yields the energy equation A similar argument, can be used to derive the pressure equation Anharmonic oscillators An anharmonic oscillator (in contrast to a simple harmonic oscillator) is one in which the potential energy is not quadratic in the extension (the generalized position which measures the deviation of the system from equilibrium). Such oscillators provide a complementary point of view on the equipartition theorem. Simple examples are provided by potential energy functions of the form where and are arbitrary real constants. In these cases, the law of equipartition predicts that Thus, the average potential energy equals , not as for the quadratic harmonic oscillator (where ). More generally, a typical energy function of a one-dimensional system has a Taylor expansion in the extension : for non-negative integers . There is no term, because at the equilibrium point, there is no net force and so the first derivative of the energy is zero. The term need not be included, since the energy at the equilibrium position may be set to zero by convention. In this case, the law of equipartition predicts that In contrast to the other examples cited here, the equipartition formula does not allow the average potential energy to be written in terms of known constants. Brownian motion The equipartition theorem can be used to derive the Brownian motion of a particle from the Langevin equation. According to that equation, the motion of a particle of mass with velocity is governed by Newton's second law where is a random force representing the random collisions of the particle and the surrounding molecules, and where the time constant τ reflects the drag force that opposes the particle's motion through the solution. The drag force is often written ; therefore, the time constant equals . The dot product of this equation with the position vector , after averaging, yields the equation for Brownian motion (since the random force is uncorrelated with the position ). 
Using the mathematical identities and the basic equation for Brownian motion can be transformed into where the last equality follows from the equipartition theorem for translational kinetic energy: The above differential equation for (with suitable initial conditions) may be solved exactly: On small time scales, with , the particle acts as a freely moving particle: by the Taylor series of the exponential function, the squared distance grows approximately quadratically: However, on long time scales, with , the exponential and constant terms are negligible, and the squared distance grows only linearly: This describes the diffusion of the particle over time. An analogous equation for the rotational diffusion of a rigid molecule can be derived in a similar way. Stellar physics The equipartition theorem and the related virial theorem have long been used as a tool in astrophysics. As examples, the virial theorem may be used to estimate stellar temperatures or the Chandrasekhar limit on the mass of white dwarf stars. The average temperature of a star can be estimated from the equipartition theorem. Since most stars are spherically symmetric, the total gravitational potential energy can be estimated by integration where is the mass within a radius and is the stellar density at radius ; represents the gravitational constant and the total radius of the star. Assuming a constant density throughout the star, this integration yields the formula where is the star's total mass. Hence, the average potential energy of a single particle is where is the number of particles in the star. Since most stars are composed mainly of ionized hydrogen, equals roughly , where is the mass of one proton. Application of the equipartition theorem gives an estimate of the star's temperature Substitution of the mass and radius of the Sun yields an estimated solar temperature of T = 14 million kelvins, very close to its core temperature of 15 million kelvins. However, the Sun is much more complex than assumed by this model—both its temperature and density vary strongly with radius—and such excellent agreement (≈7% relative error) is partly fortuitous. Star formation The same formulae may be applied to determining the conditions for star formation in giant molecular clouds. A local fluctuation in the density of such a cloud can lead to a runaway condition in which the cloud collapses inwards under its own gravity. Such a collapse occurs when the equipartition theorem—or, equivalently, the virial theorem—is no longer valid, i.e., when the gravitational potential energy exceeds twice the kinetic energy Assuming a constant density for the cloud yields a minimum mass for stellar contraction, the Jeans mass Substituting the values typically observed in such clouds (, ) gives an estimated minimum mass of 17 solar masses, which is consistent with observed star formation. This effect is also known as the Jeans instability, after the British physicist James Hopwood Jeans who published it in 1902. Derivations Kinetic energies and the Maxwell–Boltzmann distribution The original formulation of the equipartition theorem states that, in any physical system in thermal equilibrium, every particle has exactly the same average translational kinetic energy, . However, this is true only for ideal gas, and the same result can be derived from the Maxwell–Boltzmann distribution. 
First, we choose to consider only the Maxwell–Boltzmann distribution of velocity of the z-component. With this equation, we can calculate the mean square velocity of the z-component. Since different components of velocity are independent of each other, the average translational kinetic energy is given by (3/2)k_BT. Note that the Maxwell–Boltzmann distribution should not be confused with the Boltzmann distribution; the former can be derived from the latter by assuming that the energy of a particle is equal to its translational kinetic energy. As stated by the equipartition theorem, the same result can also be obtained by averaging the particle energy using the probability of finding the particle in a certain quantum energy state. Quadratic energies and the partition function More generally, the equipartition theorem states that any degree of freedom which appears in the total energy only as a simple quadratic term , where is a constant, has an average energy of in thermal equilibrium. In this case the equipartition theorem may be derived from the partition function , where is the canonical inverse temperature. Integration over the variable yields a factor in the formula for . The mean energy associated with this factor is given by as stated by the equipartition theorem. General proofs General derivations of the equipartition theorem can be found in many statistical mechanics textbooks, both for the microcanonical ensemble and for the canonical ensemble. They involve taking averages over the phase space of the system, which is a symplectic manifold. To explain these derivations, the following notation is introduced. First, the phase space is described in terms of generalized position coordinates together with their conjugate momenta . The quantities completely describe the configuration of the system, while the quantities together completely describe its state. Secondly, the infinitesimal volume of the phase space is introduced and used to define the volume of the portion of phase space where the energy of the system lies between two limits, and : In this expression, is assumed to be very small, . Similarly, is defined to be the total volume of phase space where the energy is less than : Since is very small, the following integrations are equivalent where the ellipses represent the integrand. From this, it follows that is proportional to where is the density of states. By the usual definitions of statistical mechanics, the entropy equals , and the temperature is defined by The canonical ensemble In the canonical ensemble, the system is in thermal equilibrium with an infinite heat bath at temperature (in kelvins). The probability of each state in phase space is given by its Boltzmann factor times a normalization factor , which is chosen so that the probabilities sum to one where . Using integration by parts for a phase-space variable the above can be written as where , i.e., the first integration is not carried out over . Performing the first integral between two limits and and simplifying the second integral yields the equation The first term is usually zero, either because is zero at the limits, or because the energy goes to infinity at those limits. In that case, the equipartition theorem for the canonical ensemble follows immediately Here, the averaging symbolized by is the ensemble average taken over the canonical ensemble. The microcanonical ensemble In the microcanonical ensemble, the system is isolated from the rest of the world, or at least very weakly coupled to it.
Hence, its total energy is effectively constant; to be definite, we say that the total energy is confined between and . For a given energy and spread , there is a region of phase space in which the system has that energy, and the probability of each state in that region of phase space is equal, by the definition of the microcanonical ensemble. Given these definitions, the equipartition average of phase-space variables (which could be either or ) and is given by where the last equality follows because is a constant that does not depend on . Integrating by parts yields the relation since the first term on the right hand side of the first line is zero (it can be rewritten as an integral of H − E on the hypersurface where ). Substitution of this result into the previous equation yields Since the equipartition theorem follows: Thus, we have derived the general formulation of the equipartition theorem which was so useful in the applications described above. Limitations Requirement of ergodicity The law of equipartition holds only for ergodic systems in thermal equilibrium, which implies that all states with the same energy must be equally likely to be populated. Consequently, it must be possible to exchange energy among all its various forms within the system, or with an external heat bath in the canonical ensemble. The number of physical systems that have been rigorously proven to be ergodic is small; a famous example is the hard-sphere system of Yakov Sinai. The requirements for isolated systems to ensure ergodicity—and, thus equipartition—have been studied, and provided motivation for the modern chaos theory of dynamical systems. A chaotic Hamiltonian system need not be ergodic, although that is usually a good assumption. A commonly cited counter-example where energy is not shared among its various forms and where equipartition does not hold in the microcanonical ensemble is a system of coupled harmonic oscillators. If the system is isolated from the rest of the world, the energy in each normal mode is constant; energy is not transferred from one mode to another. Hence, equipartition does not hold for such a system; the amount of energy in each normal mode is fixed at its initial value. If sufficiently strong nonlinear terms are present in the energy function, energy may be transferred between the normal modes, leading to ergodicity and rendering the law of equipartition valid. However, the Kolmogorov–Arnold–Moser theorem states that energy will not be exchanged unless the nonlinear perturbations are strong enough; if they are too small, the energy will remain trapped in at least some of the modes. Another simple example is an ideal gas of a finite number of colliding particles in a round vessel. Due to the vessel's symmetry, the angular momentum of such a gas is conserved. Therefore, not all states with the same energy are populated. This results in the mean particle energy being dependent on the mass of this particle, and also on the masses of all the other particles. Another way ergodicity can be broken is by the existence of nonlinear soliton symmetries. In 1953, Fermi, Pasta, Ulam and Tsingou conducted computer simulations of a vibrating string that included a non-linear term (quadratic in one test, cubic in another, and a piecewise linear approximation to a cubic in a third). They found that the behavior of the system was quite different from what intuition based on equipartition would have led them to expect. 
Instead of the energies in the modes becoming equally shared, the system exhibited a very complicated quasi-periodic behavior. This puzzling result was eventually explained by Kruskal and Zabusky in 1965 in a paper which, by connecting the simulated system to the Korteweg–de Vries equation led to the development of soliton mathematics. Failure due to quantum effects The law of equipartition breaks down when the thermal energy is significantly smaller than the spacing between energy levels. Equipartition no longer holds because it is a poor approximation to assume that the energy levels form a smooth continuum, which is required in the derivations of the equipartition theorem above. Historically, the failures of the classical equipartition theorem to explain specific heats and black-body radiation were critical in showing the need for a new theory of matter and radiation, namely, quantum mechanics and quantum field theory. To illustrate the breakdown of equipartition, consider the average energy in a single (quantum) harmonic oscillator, which was discussed above for the classical case. Neglecting the irrelevant zero-point energy term since it can be factored out of the exponential functions involved in the probability distribution, the quantum harmonic oscillator energy levels are given by , where is the Planck constant, is the fundamental frequency of the oscillator, and is an integer. The probability of a given energy level being populated in the canonical ensemble is given by its Boltzmann factor where and the denominator is the partition function, here a geometric series Its average energy is given by Substituting the formula for gives the final result At high temperatures, when the thermal energy is much greater than the spacing between energy levels, the exponential argument is much less than one and the average energy becomes , in agreement with the equipartition theorem (Figure 10). However, at low temperatures, when , the average energy goes to zero—the higher-frequency energy levels are "frozen out" (Figure 10). As another example, the internal excited electronic states of a hydrogen atom do not contribute to its specific heat as a gas at room temperature, since the thermal energy (roughly 0.025 eV) is much smaller than the spacing between the lowest and next higher electronic energy levels (roughly 10 eV). Similar considerations apply whenever the energy level spacing is much larger than the thermal energy. This reasoning was used by Max Planck and Albert Einstein, among others, to resolve the ultraviolet catastrophe of black-body radiation. The paradox arises because there are an infinite number of independent modes of the electromagnetic field in a closed container, each of which may be treated as a harmonic oscillator. If each electromagnetic mode were to have an average energy , there would be an infinite amount of energy in the container. However, by the reasoning above, the average energy in the higher-frequency modes goes to zero as ν goes to infinity; moreover, Planck's law of black-body radiation, which describes the experimental distribution of energy in the modes, follows from the same reasoning. Other, more subtle quantum effects can lead to corrections to equipartition, such as identical particles and continuous symmetries. The effects of identical particles can be dominant at very high densities and low temperatures. 
For example, the valence electrons in a metal can have a mean kinetic energy of a few electronvolts, which would normally correspond to a temperature of tens of thousands of kelvins. Such a state, in which the density is high enough that the Pauli exclusion principle invalidates the classical approach, is called a degenerate fermion gas. Such gases are important for the structure of white dwarf and neutron stars. At low temperatures, a fermionic analogue of the Bose–Einstein condensate (in which a large number of identical particles occupy the lowest-energy state) can form; such superfluid electrons are responsible for superconductivity. See also Kinetic theory Quantum statistical mechanics External links Applet demonstrating equipartition in real time for a mixture of monatomic and diatomic gases The equipartition theorem in stellar physics, written by Nir J. Shaviv, an associate professor at the Racah Institute of Physics in the Hebrew University of Jerusalem.
Tsiolkovsky rocket equation
The classical rocket equation, or ideal rocket equation is a mathematical equation that describes the motion of vehicles that follow the basic principle of a rocket: a device that can apply acceleration to itself using thrust by expelling part of its mass with high velocity and can thereby move due to the conservation of momentum. It is credited to Konstantin Tsiolkovsky, who independently derived it and published it in 1903, although it had been independently derived and published by William Moore in 1810, and later published in a separate book in 1813. Robert Goddard also developed it independently in 1912, and Hermann Oberth derived it independently about 1920. The maximum change of velocity of the vehicle, (with no external forces acting) is: where: is the effective exhaust velocity; is the specific impulse in dimension of time; is standard gravity; is the natural logarithm function; is the initial total mass, including propellant, a.k.a. wet mass; is the final total mass without propellant, a.k.a. dry mass. Given the effective exhaust velocity determined by the rocket motor's design, the desired delta-v (e.g., orbital speed or escape velocity), and a given dry mass , the equation can be solved for the required propellant mass : The necessary wet mass grows exponentially with the desired delta-v. History The equation is named after Russian scientist Konstantin Tsiolkovsky who independently derived it and published it in his 1903 work. The equation had been derived earlier by the British mathematician William Moore in 1810, and later published in a separate book in 1813. American Robert Goddard independently developed the equation in 1912 when he began his research to improve rocket engines for possible space flight. German engineer Hermann Oberth independently derived the equation about 1920 as he studied the feasibility of space travel. While the derivation of the rocket equation is a straightforward calculus exercise, Tsiolkovsky is honored as being the first to apply it to the question of whether rockets could achieve speeds necessary for space travel. Experiment of the Boat by Tsiolkovsky In order to understand the principle of rocket propulsion, Konstantin Tsiolkovsky proposed the famous experiment of "the boat". A person is in a boat away from the shore without oars. They want to reach this shore. They notice that the boat is loaded with a certain quantity of stones and have the idea of throwing, one by one and as quickly as possible, these stones in the opposite direction to the bank. Effectively, the quantity of movement of the stones thrown in one direction corresponds to an equal quantity of movement for the boat in the other direction (ignoring friction / drag). Derivation Most popular derivation Consider the following system: In the following derivation, "the rocket" is taken to mean "the rocket and all of its unexpended propellant". 
Newton's second law of motion relates external forces to the change in linear momentum of the whole system (including rocket and exhaust) as follows: where is the momentum of the rocket at time : and is the momentum of the rocket and exhausted mass at time : and where, with respect to the observer: is the velocity of the rocket at time is the velocity of the rocket at time is the velocity of the mass added to the exhaust (and lost by the rocket) during time is the mass of the rocket at time is the mass of the rocket at time The velocity of the exhaust in the observer frame is related to the velocity of the exhaust in the rocket frame by: thus, Solving this yields: If and are opposite, have the same direction as , are negligible (since ), and using (since ejecting a positive results in a decrease in rocket mass in time), If there are no external forces then (conservation of linear momentum) and Assuming that is constant (known as Tsiolkovsky's hypothesis), so it is not subject to integration, then the above equation may be integrated as follows: This then yields or equivalently or or where is the initial total mass including propellant, the final mass, and the velocity of the rocket exhaust with respect to the rocket (the specific impulse, or, if measured in time, that multiplied by gravity-on-Earth acceleration). If is NOT constant, we might not have rocket equations that are as simple as the above forms. Many rocket dynamics researches were based on the Tsiolkovsky's constant hypothesis. The value is the total working mass of propellant expended. (delta v) is the integration over time of the magnitude of the acceleration produced by using the rocket engine (what would be the actual acceleration if external forces were absent). In free space, for the case of acceleration in the direction of the velocity, this is the increase of the speed. In the case of an acceleration in opposite direction (deceleration) it is the decrease of the speed. Of course gravity and drag also accelerate the vehicle, and they can add or subtract to the change in velocity experienced by the vehicle. Hence delta-v may not always be the actual change in speed or velocity of the vehicle. Other derivations Impulse-based The equation can also be derived from the basic integral of acceleration in the form of force (thrust) over mass. By representing the delta-v equation as the following: where T is thrust, is the initial (wet) mass and is the initial mass minus the final (dry) mass, and realising that the integral of a resultant force over time is total impulse, assuming thrust is the only force involved, The integral is found to be: Realising that impulse over the change in mass is equivalent to force over propellant mass flow rate (p), which is itself equivalent to exhaust velocity, the integral can be equated to Acceleration-based Imagine a rocket at rest in space with no forces exerted on it (Newton's First Law of Motion). From the moment its engine is started (clock set to 0) the rocket expels gas mass at a constant mass flow rate R (kg/s) and at exhaust velocity relative to the rocket ve (m/s). This creates a constant force F propelling the rocket that is equal to R × ve. The rocket is subject to a constant force, but its total mass is decreasing steadily because it is expelling gas. According to Newton's Second Law of Motion, its acceleration at any time t is its propelling force F divided by its current mass m: Now, the mass of fuel the rocket initially has on board is equal to m0 – mf. 
For the constant mass flow rate R it will therefore take a time T = (m0 – mf)/R to burn all this fuel. Integrating both sides of the equation with respect to time from 0 to T (and noting that R = dm/dt allows a substitution on the right) obtains: Limit of finite mass "pellet" expulsion The rocket equation can also be derived as the limiting case of the speed change for a rocket that expels its fuel in the form of pellets consecutively, as , with an effective exhaust speed such that the mechanical energy gained per unit fuel mass is given by . In the rocket's center-of-mass frame, if a pellet of mass is ejected at speed and the remaining mass of the rocket is , the amount of energy converted to increase the rocket's and pellet's kinetic energy is Using momentum conservation in the rocket's frame just prior to ejection, , from which we find Let be the initial fuel mass fraction on board and the initial fueled-up mass of the rocket. Divide the total mass of fuel into discrete pellets each of mass . The remaining mass of the rocket after ejecting pellets is then . The overall speed change after ejecting pellets is the sum Notice that for large the last term in the denominator and can be neglected to give where and . As this Riemann sum becomes the definite integral since the final remaining mass of the rocket is . Special relativity If special relativity is taken into account, the following equation can be derived for a relativistic rocket, with again standing for the rocket's final velocity (after expelling all its reaction mass and being reduced to a rest mass of ) in the inertial frame of reference where the rocket started at rest (with the rest mass including fuel being initially), and standing for the speed of light in vacuum: Writing as allows this equation to be rearranged as Then, using the identity (here "exp" denotes the exponential function; see also Natural logarithm as well as the "power" identity at Logarithmic identities) and the identity (see Hyperbolic function), this is equivalent to Terms of the equation Delta-v Delta-v (literally "change in velocity"), symbolised as Δv and pronounced delta-vee, as used in spacecraft flight dynamics, is a measure of the impulse that is needed to perform a maneuver such as launching from, or landing on a planet or moon, or an in-space orbital maneuver. It is a scalar that has the units of speed. As used in this context, it is not the same as the physical change in velocity of the vehicle. Delta-v is produced by reaction engines, such as rocket engines, is proportional to the thrust per unit mass and burn time, and is used to determine the mass of propellant required for the given manoeuvre through the rocket equation. For multiple manoeuvres, delta-v sums linearly. For interplanetary missions delta-v is often plotted on a porkchop plot which displays the required mission delta-v as a function of launch date. Mass fraction In aerospace engineering, the propellant mass fraction is the portion of a vehicle's mass which does not reach the destination, usually used as a measure of the vehicle's performance. In other words, the propellant mass fraction is the ratio between the propellant mass and the initial mass of the vehicle. In a spacecraft, the destination is usually an orbit, while for aircraft it is their landing location. A higher mass fraction represents less weight in a design. Another related measure is the payload fraction, which is the fraction of initial weight that is payload. 
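As a concrete illustration of how delta-v and the propellant mass fraction are linked by the rocket equation under the constant-exhaust-velocity assumption, the following minimal sketch (an editorial example; the numerical values are illustrative and not taken from the article) computes the relation in both directions:

```python
from math import exp, log

def delta_v(ve, m0, mf):
    """Ideal rocket equation: delta-v for exhaust velocity ve and wet/dry masses m0, mf."""
    return ve * log(m0 / mf)

def propellant_mass_fraction(ve, dv):
    """Fraction of the initial mass that must be propellant to achieve a given delta-v."""
    return 1.0 - exp(-dv / ve)

# Illustrative numbers only:
ve = 3000.0                 # effective exhaust velocity, m/s
m0, mf = 10000.0, 4000.0    # wet and dry mass, kg
dv = delta_v(ve, m0, mf)
print(f"delta-v = {dv:.0f} m/s")                                    # ~2749 m/s
print(f"propellant fraction = {propellant_mass_fraction(ve, dv):.3f}")  # 0.600, i.e. 6000 of 10000 kg
```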
Effective exhaust velocity The effective exhaust velocity is often specified as a specific impulse and they are related to each other by: where is the specific impulse in seconds, is the specific impulse measured in m/s, which is the same as the effective exhaust velocity measured in m/s (or ft/s if g is in ft/s2), is the standard gravity, 9.80665m/s2 (in Imperial units 32.174ft/s2). Applicability The rocket equation captures the essentials of rocket flight physics in a single short equation. It also holds true for rocket-like reaction vehicles whenever the effective exhaust velocity is constant, and can be summed or integrated when the effective exhaust velocity varies. The rocket equation only accounts for the reaction force from the rocket engine; it does not include other forces that may act on a rocket, such as aerodynamic or gravitational forces. As such, when using it to calculate the propellant requirement for launch from (or powered descent to) a planet with an atmosphere, the effects of these forces must be included in the delta-V requirement (see Examples below). In what has been called "the tyranny of the rocket equation", there is a limit to the amount of payload that the rocket can carry, as higher amounts of propellant increment the overall weight, and thus also increase the fuel consumption. The equation does not apply to non-rocket systems such as aerobraking, gun launches, space elevators, launch loops, tether propulsion or light sails. The rocket equation can be applied to orbital maneuvers in order to determine how much propellant is needed to change to a particular new orbit, or to find the new orbit as the result of a particular propellant burn. When applying to orbital maneuvers, one assumes an impulsive maneuver, in which the propellant is discharged and delta-v applied instantaneously. This assumption is relatively accurate for short-duration burns such as for mid-course corrections and orbital insertion maneuvers. As the burn duration increases, the result is less accurate due to the effect of gravity on the vehicle over the duration of the maneuver. For low-thrust, long duration propulsion, such as electric propulsion, more complicated analysis based on the propagation of the spacecraft's state vector and the integration of thrust are used to predict orbital motion. Examples Assume an exhaust velocity of and a of (Earth to LEO, including to overcome gravity and aerodynamic drag). Single-stage-to-orbit rocket: = 0.884, therefore 88.4% of the initial total mass has to be propellant. The remaining 11.6% is for the engines, the tank, and the payload. Two-stage-to-orbit: suppose that the first stage should provide a of ; = 0.671, therefore 67.1% of the initial total mass has to be propellant to the first stage. The remaining mass is 32.9%. After disposing of the first stage, a mass remains equal to this 32.9%, minus the mass of the tank and engines of the first stage. Assume that this is 8% of the initial total mass, then 24.9% remains. The second stage should provide a of ; = 0.648, therefore 64.8% of the remaining mass has to be propellant, which is 16.2% of the original total mass, and 8.7% remains for the tank and engines of the second stage, the payload, and in the case of a space shuttle, also the orbiter. Thus together 16.7% of the original launch mass is available for all engines, the tanks, and payload. 
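The percentages in the example above can be reproduced numerically. The sketch below assumes an effective exhaust velocity of 4,500 m/s and a total delta-v of 9,700 m/s, split as 5,000 m/s for the first stage and 4,700 m/s for the second; these specific values are assumptions introduced here because they are consistent with the quoted 88.4%, 67.1% and 64.8% figures (the actual numbers were dropped from the text).

```python
from math import exp

ve = 4500.0                              # assumed effective exhaust velocity, m/s
dv_total = 9700.0                        # assumed Earth-to-LEO delta-v, m/s
dv_stage1, dv_stage2 = 5000.0, 4700.0    # assumed split between the two stages

def propellant_fraction(dv, ve):
    # fraction of the current initial mass that must be propellant
    return 1.0 - exp(-dv / ve)

print(f"SSTO propellant fraction:         {propellant_fraction(dv_total, ve):.3f}")   # ~0.884
print(f"First-stage propellant fraction:  {propellant_fraction(dv_stage1, ve):.3f}")  # ~0.671
print(f"Second-stage propellant fraction: {propellant_fraction(dv_stage2, ve):.3f}")  # ~0.648

# Mass bookkeeping for the two-stage case, as fractions of the original launch mass:
after_stage1_burn = 1.0 - propellant_fraction(dv_stage1, ve)   # ~0.329
after_stage1_drop = after_stage1_burn - 0.08                   # 8% assumed for stage-1 tank and engines
stage2_propellant = after_stage1_drop * propellant_fraction(dv_stage2, ve)
remainder = after_stage1_drop - stage2_propellant              # ~0.087-0.088
print(f"Remaining after both burns: {remainder:.3f} of launch mass")
```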
Stages In the case of sequentially thrusting rocket stages, the equation applies for each stage, where for each stage the initial mass in the equation is the total mass of the rocket after discarding the previous stage, and the final mass in the equation is the total mass of the rocket just before discarding the stage concerned. For each stage the specific impulse may be different. For example, if 80% of the mass of a rocket is the fuel of the first stage, and 10% is the dry mass of the first stage, and 10% is the remaining rocket, then With three similar, subsequently smaller stages with the same for each stage, gives: and the payload is 10% × 10% × 10% = 0.1% of the initial mass. A comparable SSTO rocket, also with a 0.1% payload, could have a mass of 11.1% for fuel tanks and engines, and 88.8% for fuel. This would give If the motor of a new stage is ignited before the previous stage has been discarded and the simultaneously working motors have a different specific impulse (as is often the case with solid rocket boosters and a liquid-fuel stage), the situation is more complicated. See also Delta-v budget Jeep problem Mass ratio Oberth effect - applying delta-v in a gravity well increases the final velocity Relativistic rocket Reversibility of orbits Robert H. Goddard - added terms for gravity and drag in vertical flight Spacecraft propulsion Stigler’s law of eponymy External links How to derive the rocket equation Relativity Calculator – Learn Tsiolkovsky's rocket equations Tsiolkovsky's rocket equations plot and calculator in WolframAlpha
A Treatise on Electricity and Magnetism
A Treatise on Electricity and Magnetism is a two-volume treatise on electromagnetism written by James Clerk Maxwell in 1873. Maxwell was revising the Treatise for a second edition when he died in 1879. The revision was completed by William Davidson Niven for publication in 1881. A third edition was prepared by J. J. Thomson for publication in 1892. The treatise is said to be notoriously hard to read, containing plenty of ideas but lacking both the clear focus and orderliness that may have allowed it to catch on more easily. It was noted by one historian of science that Maxwell's attempt at a comprehensive treatise on all of electrical science tended to bury the important results of his work under "long accounts of miscellaneous phenomena discussed from several points of view." He goes on to say that, outside the treatment of the Faraday effect, Maxwell failed to expound on his earlier work, especially the generation of electromagnetic waves and the derivation of the laws governing reflection and refraction. Maxwell introduced the use of vector fields, and his labels have been perpetuated: A (vector potential), B (magnetic induction), C (electric current), D (displacement), E (electric field – Maxwell's electromotive intensity), F (mechanical force), H (magnetic field – Maxwell's magnetic force). Maxwell's work is considered an exemplar of rhetoric of science: Lagrange's equations appear in the Treatise as the culmination of a long series of rhetorical moves, including (among others) Green's theorem, Gauss's potential theory and Faraday's lines of force – all of which have prepared the reader for the Lagrangian vision of a natural world that is whole and connected: a veritable sea change from Newton's vision. Contents Preliminary. On the Measurement of Quantities. Part I. Electrostatics. Description of Phenomena. Elementary Mathematical Theory of Electricity. On Electrical Work and Energy in a System of Conductors. General Theorems. Mechanical Action Between Two Electrical Systems. Points and Lines of Equilibrium. Forms of Equipotential Surfaces and Lines of Flow. Simple Cases of Electrification. Spherical Harmonics. Confocal Surfaces of the Second Degree. Theory of Electric Images. Conjugate Functions in Two Dimensions. Electrostatic Instruments. Part II. Electrokinematics. The Electric Current. Conduction and Resistance. Electromotive Force Between Bodies in Contact. Electrolysis. Electrolytic Polarization. Mathematical Theory of the Distribution of Electric Currents. Conduction in Three Dimensions. Resistance and Conductivity in Three Dimensions. Conduction through Heterogeneous Media. Conduction in Dielectrics. Measurement of the Electric Resistance of Conductors. Electric Resistance of Substances. Part III. Magnetism. Elementary Theory of Magnetism. Magnetic Force and Magnetic Induction. Particular Forms of Magnets. Induced Magnetization. Magnetic Problems. Weber's Theory of Magnetic Induction. Magnetic Measurements. Terrestrial Magnetism. Part IV. Electromagnetism. Electromagnetic Force. Mutual Action of Electric Currents. Induction of Electric Currents. Induction of a Current on Itself. General Equations of Dynamics. Application of Dynamics to Electromagnetism. Electrokinetics. Exploration of the Field by means of the Secondary Circuit. General Equations. Dimensions of Electric Units. Energy and Stress. Current-Sheets. Parallel Currents. Circular Currents. Electromagnetic Instruments. Electromagnetic Observations. Electrical Measurement of Coefficients of Induction.
Determination of Resistance in Electromagnetic Measure. Comparison of Electrostatic With Electromagnetic Units. Electromagnetic Theory of Light. Magnetic Action on Light. Electric Theory of Magnetism. Theories of Action at a distance. Reception Reviews On April 24, 1873, Nature announced the publication with an extensive description and much praise. When the second edition was published in 1881, George Chrystal wrote the review for Nature. Pierre Duhem published a critical essay outlining mistakes he found in Maxwell's Treatise. Duhem's book was reviewed in Nature. Comments Hermann von Helmholtz (1881): "Now that the mathematical interpretations of Faraday's conceptions regarding the nature of electric and magnetic force has been given by Clerk Maxwell, we see how great a degree of exactness and precision was really hidden behind Faraday's words…it is astonishing in the highest to see what a large number of general theories, the mechanical deduction of which requires the highest powers of mathematical analysis, he has found by a kind of intuition, with the security of instinct, without the help of a single mathematical formula." Oliver Heaviside (1893):”What is Maxwell's theory? The first approximation is to say: There is Maxwell's book as he wrote it; there is his text, and there are his equations: together they make his theory. But when we come to examine it closely, we find that this answer is unsatisfactory. To begin with, it is sufficient to refer to papers by physicists, written say during the first twelve years following the first publication of Maxwell's treatise to see that there may be much difference of opinion as to what his theory is. It may be, and has been, differently interpreted by different men, which is a sign that is not set forth in a perfectly clear and unmistakable form. There are many obscurities and some inconsistencies. Speaking for myself, it was only by changing its form of presentation that I was able to see it clearly, and so as to avoid the inconsistencies. Now there is no finality in a growing science. It is, therefore, impossible to adhere strictly to Maxwell's theory as he gave it to the world, if only on account of its inconvenient form. Alexander Macfarlane (1902): "This work has served as the starting point of many advances made in recent years. Maxwell is the scientific ancestor of Hertz, Hertz of Marconi and all other workers at wireless telegraphy. Oliver Lodge (1907) "Then comes Maxwell, with his keen penetration and great grasp of thought, combined with mathematical subtlety and power of expression; he assimilates the facts, sympathizes with the philosophic but untutored modes of expression invented by Faraday, links the theorems of Green and Stokes and Thomson to the facts of Faraday, and from the union rears the young modern science of electricity..." E. T. Whittaker (1910): "In this celebrated work is comprehended almost every branch of electric and magnetic theory, but the intention of the writer was to discuss the whole from a single point of view, namely, that of Faraday, so that little or no account was given of the hypotheses that had been propounded in the two preceding decades by the great German electricians...The doctrines peculiar to Maxwell ... were not introduced in the first volume, or in the first half of the second." 
Albert Einstein (1931): "Before Maxwell people conceived of physical reality – in so far as it is supposed to represent events in nature – as material points, whose changes consist exclusively of motions, which are subject to total differential equations. After Maxwell they conceived physical reality as represented by continuous fields, not mechanically explicable, which are subject to partial differential equations. This change in the conception of reality is the most profound and fruitful one that has come to physics since Newton; but it has at the same time to be admitted that the program has by no means been completely carried out yet." Richard P. Feynman (1964): "From a long view of the history of mankind—seen from, say, ten thousand years from now—there can be little doubt that the most significant event of the 19th century will be judged as Maxwell's discovery of the laws of electrodynamics. The American Civil War will pale into provincial insignificance in comparison with this important scientific event of the same decade." L. Pearce Williams (1991): "In 1873, James Clerk Maxwell published a rambling and difficult two-volume Treatise on Electricity and Magnetism that was destined to change the orthodox picture of physical reality. This treatise did for electromagnetism what Newton's Principia had done for classical mechanics. It not only provided the mathematical tools for the investigation and representation of the whole of electromagnetic theory, but it altered the very framework of both theoretical and experimental physics. Although the process had been going on throughout the nineteenth century, it was this work that finally displaced action at a distance physics and substituted the physics of the field." Mark P. Silverman (1998) "I studied the principles on my own – in this case with Maxwell's Treatise as both my inspiration and textbook. This is not an experience that I would necessarily recommend to others. For all his legendary gentleness, Maxwell is a demanding teacher, and his magnum opus is anything but coffee-table reading...At the same time, the experience was greatly rewarding in that I had come to understand, as I realized much later, aspects of electromagnetism that are rarely taught at any level today and that reflect the unique physical insight of their creator. Andrew Warwick (2003): "In developing the mathematical theory of electricity and magnetism in the Treatise, Maxwell made a number of errors, and for students with only a tenuous grasp of the physical concepts of basic electromagnetic theory and the specific techniques to solve some problems, it was extremely difficult to discriminate between cases where Maxwell made an error and cases where they simply failed to follow the physical or mathematical reasoning." See also "On Physical Lines of Force" "A Dynamical Theory of the Electromagnetic Field" Introduction to Electrodynamics Classical Electrodynamics References Further reading External links Reprint from Dover Publications A Treatise on Electricity And Magnetism – Volume 1 – 1873 – Posner Memorial Collection – Carnegie Mellon University. Volume 2 A Treatise on Electricity and Magnetism at Internet Archive 1st edition 1873 Volume 1, Volume 2 2nd edition 1881 Volume 1, Volume 2 3rd edition 1892 (ed. J. J. 
Thomson) Volume 1, Volume 2 3rd edition 1892 (Dover reprint 1954) Volume 1, Volume 2 Original Maxwell Equations – Maxwell's 20 Equations in 20 Unknowns – PDF
Kirchhoff's law of thermal radiation
In heat transfer, Kirchhoff's law of thermal radiation refers to wavelength-specific radiative emission and absorption by a material body in thermodynamic equilibrium, including radiative exchange equilibrium. It is a special case of Onsager reciprocal relations as a consequence of the time reversibility of microscopic dynamics, also known as microscopic reversibility. A body at temperature radiates electromagnetic energy. A perfect black body in thermodynamic equilibrium absorbs all light that strikes it, and radiates energy according to a unique law of radiative emissive power for temperature (Stefan–Boltzmann law), universal for all perfect black bodies. Kirchhoff's law states that: Here, the dimensionless coefficient of absorption (or the absorptivity) is the fraction of incident light (power) at each spectral frequency that is absorbed by the body when it is radiating and absorbing in thermodynamic equilibrium. In slightly different terms, the emissive power of an arbitrary opaque body of fixed size and shape at a definite temperature can be described by a dimensionless ratio, sometimes called the emissivity: the ratio of the emissive power of the body to the emissive power of a black body of the same size and shape at the same fixed temperature. With this definition, Kirchhoff's law states, in simpler language: In some cases, emissive power and absorptivity may be defined to depend on angle, as described below. The condition of thermodynamic equilibrium is necessary in the statement, because the equality of emissivity and absorptivity often does not hold when the material of the body is not in thermodynamic equilibrium. Kirchhoff's law has another corollary: the emissivity cannot exceed one (because the absorptivity cannot, by conservation of energy), so it is not possible to thermally radiate more energy than a black body, at equilibrium. In negative luminescence the angle- and wavelength-integrated absorption exceeds the material's emission; however, such systems are powered by an external source and are therefore not in thermodynamic equilibrium. Principle of detailed balance Kirchhoff's law of thermal radiation has a refinement in that not only is thermal emissivity equal to absorptivity, it is equal in detail. Consider a leaf. It is a poor absorber of green light (around 550 nm), which is why it looks green. By the principle of detailed balance, it is also a poor emitter of green light. In other words, if a material, illuminated by black-body radiation of temperature , is dark at a certain frequency , then its thermal radiation will also be dark at the same frequency and the same temperature . More generally, all intensive properties are balanced in detail. So for example, the absorptivity at a certain incidence direction, for a certain frequency, of a certain polarization, is the same as the emissivity at the same direction, for the same frequency, of the same polarization. This is the principle of detailed balance. History Before Kirchhoff's law was recognized, it had been experimentally established that a good absorber is a good emitter, and a poor absorber is a poor emitter. Naturally, a good reflector must be a poor absorber. This is why, for example, lightweight emergency thermal blankets are based on reflective metallic coatings: they lose little heat by radiation. Kirchhoff's great insight was to recognize the universality and uniqueness of the function that describes the black body emissive power.
But he did not know the precise form or character of that universal function. Attempts were made by Lord Rayleigh and Sir James Jeans 1900–1905 to describe it in classical terms, resulting in Rayleigh–Jeans law. This law turned out to be inconsistent yielding the ultraviolet catastrophe. The correct form of the law was found by Max Planck in 1900, assuming quantized emission of radiation, and is termed Planck's law. This marks the advent of quantum mechanics. Theory In a blackbody enclosure that contains electromagnetic radiation with a certain amount of energy at thermodynamic equilibrium, this "photon gas" will have a Planck distribution of energies. One may suppose a second system, a cavity with walls that are opaque, rigid, and not perfectly reflective to any wavelength, to be brought into connection, through an optical filter, with the blackbody enclosure, both at the same temperature. Radiation can pass from one system to the other. For example, suppose in the second system, the density of photons at narrow frequency band around wavelength were higher than that of the first system. If the optical filter passed only that frequency band, then there would be a net transfer of photons, and their energy, from the second system to the first. This is in violation of the second law of thermodynamics, which requires that there can be no net transfer of heat between two bodies at the same temperature. In the second system, therefore, at each frequency, the walls must absorb and emit energy in such a way as to maintain the black body distribution. Hence absorptivity and emissivity must be equal. The absorptivity of the wall is the ratio of the energy absorbed by the wall to the energy incident on the wall, for a particular wavelength. Thus the absorbed energy is where is the intensity of black-body radiation at wavelength and temperature . Independent of the condition of thermal equilibrium, the emissivity of the wall is defined as the ratio of emitted energy to the amount that would be radiated if the wall were a perfect black body. The emitted energy is thus where is the emissivity at wavelength . For the maintenance of thermal equilibrium, these two quantities must be equal, or else the distribution of photon energies in the cavity will deviate from that of a black body. This yields Kirchhoff's law: By a similar, but more complicated argument, it can be shown that, since black-body radiation is equal in every direction (isotropic), the emissivity and the absorptivity, if they happen to be dependent on direction, must again be equal for any given direction. Average and overall absorptivity and emissivity data are often given for materials with values which differ from each other. For example, white paint is quoted as having an absorptivity of 0.16, while having an emissivity of 0.93. This is because the absorptivity is averaged with weighting for the solar spectrum, while the emissivity is weighted for the emission of the paint itself at normal ambient temperatures. The absorptivity quoted in such cases is being calculated by: while the average emissivity is given by: where is the emission spectrum of the sun, and is the emission spectrum of the paint. Although, by Kirchhoff's law, in the above equations, the above averages and are not generally equal to each other. 
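The two weighted averages referred to above are not rendered in the text. A reconstruction consistent with the surrounding description (the labels I_sun for the solar spectrum and I_paint for the paint's own thermal emission spectrum are introduced here) is:

```latex
% Reconstructed weighted averages (notation introduced editorially):
% I_sun = solar spectrum, I_paint = thermal emission spectrum of the paint itself.
\bar{\alpha} \;=\; \frac{\int_0^\infty \alpha(\lambda)\, I_{\text{sun}}(\lambda)\, d\lambda}
                        {\int_0^\infty I_{\text{sun}}(\lambda)\, d\lambda},
\qquad
\bar{\epsilon} \;=\; \frac{\int_0^\infty \epsilon(\lambda)\, I_{\text{paint}}(\lambda)\, d\lambda}
                          {\int_0^\infty I_{\text{paint}}(\lambda)\, d\lambda}.
```

Because the solar spectrum peaks in the visible while the paint's own emission peaks in the thermal infrared, the two averages sample α(λ) = ε(λ) over different wavelength ranges, which is why they can differ even though Kirchhoff's law holds at each wavelength.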
The white paint will serve as a very good insulator against solar radiation, because it is very reflective of the solar radiation, and although it therefore emits poorly in the solar band, its temperature will be around room temperature, and it will emit whatever radiation it has absorbed in the infrared, where its emission coefficient is high. Planck's derivation Historically, Planck derived the black body radiation law and detailed balance according to a classical thermodynamic argument, with a single heuristic step, which was later interpreted as a quantization hypothesis. In Planck's setup, he started with a large Hohlraum at a fixed temperature . At thermal equilibrium, the Hohlraum is filled with a distribution of EM waves at thermal equilibrium with the walls of the Hohlraum. Next, he considered connecting the Hohlraum to a single small resonator, such as Hertzian resonators. The resonator reaches a certain form of thermal equilibrium with the Hohlraum, when the spectral input into the resonator equals the spectral output at the resonance frequency. Next, suppose there are two Hohlraums at the same fixed temperature ; then Planck argued that the thermal equilibrium of the small resonator is the same when connected to either Hohlraum. Indeed, we can disconnect the resonator from one Hohlraum and connect it to another. If the thermal equilibrium were different, then we have just transported energy from one to another, violating the second law. Therefore, the spectra of all black bodies are identical at the same temperature. Using a heuristic of quantization, which he gleaned from Boltzmann, Planck argued that a resonator tuned to frequency , with average energy , would contain an entropy expressed in terms of some constant (later termed the Planck constant). Then, applying the thermodynamic relation 1/T = ∂S/∂U, Planck obtained the black body radiation law. Another argument, which does not depend on the precise form of the entropy function, can be given as follows. First, suppose we have a material that violates Kirchhoff's law when integrated, such that the total coefficient of absorption is not equal to the coefficient of emission at a certain , then if the material at temperature is placed into a Hohlraum at temperature , it would spontaneously emit more than it absorbs, or conversely, thus spontaneously creating a temperature difference, violating the second law. Finally, suppose we have a material that violates Kirchhoff's law in detail, such that the total coefficient of absorption is not equal to the coefficient of emission at a certain and at a certain frequency , then since it does not violate Kirchhoff's law when integrated, there must exist two frequencies , such that the material absorbs more than it emits at , and conversely at . Now, place this material in one Hohlraum. It would spontaneously create a shift in the spectrum, making it higher at than at . However, this then allows us to tap from one Hohlraum with a resonator tuned at , then detach and attach to another Hohlraum at the same temperature, thus transporting energy from one to another, violating the second law. We may apply the same argument for polarization and direction of radiation, obtaining the full principle of detailed balance. Black bodies Near-black materials It has long been known that a lamp-black coating will make a body nearly black. Some other materials are nearly black in particular wavelength bands. Such materials do not survive all the very high temperatures that are of interest.
An improvement on lamp-black is found in manufactured carbon nanotubes. Nano-porous materials can achieve refractive indices nearly that of vacuum, in one case obtaining an average reflectance of 0.045%. Opaque bodies Bodies that are opaque to thermal radiation that falls on them are valuable in the study of heat radiation. Planck analyzed such bodies with the approximation that they be considered topologically to have an interior and to share an interface. They share the interface with their contiguous medium, which may be rarefied material such as air, or transparent material, through which observations can be made. The interface is not a material body and can neither emit nor absorb. It is a mathematical surface belonging jointly to the two media that touch it. It is the site of refraction of radiation that penetrates it and of reflection of radiation that does not. As such it obeys the Helmholtz reciprocity principle. The opaque body is considered to have a material interior that absorbs all and scatters or transmits none of the radiation that reaches it through refraction at the interface. In this sense the material of the opaque body is black to radiation that reaches it, while the whole phenomenon, including the interior and the interface, does not show perfect blackness. In Planck's model, perfectly black bodies, which he noted do not exist in nature, besides their opaque interior, have interfaces that are perfectly transmitting and non-reflective. Cavity radiation The walls of a cavity can be made of opaque materials that absorb significant amounts of radiation at all wavelengths. It is not necessary that every part of the interior walls be a good absorber at every wavelength. The effective range of absorbing wavelengths can be extended by the use of patches of several differently absorbing materials in parts of the interior walls of the cavity. In thermodynamic equilibrium the cavity radiation will precisely obey Planck's law. In this sense, thermodynamic equilibrium cavity radiation may be regarded as thermodynamic equilibrium black-body radiation to which Kirchhoff's law applies exactly, though no perfectly black body in Kirchhoff's sense is present. A theoretical model considered by Planck consists of a cavity with perfectly reflecting walls, initially with no material contents, into which is then put a small piece of carbon. Without the small piece of carbon, there is no way for non-equilibrium radiation initially in the cavity to drift towards thermodynamic equilibrium. When the small piece of carbon is put in, it exchanges energy among the radiation frequencies, so that the cavity radiation comes to thermodynamic equilibrium. A hole in the wall of a cavity For experimental purposes, a hole in a cavity can be devised to provide a good approximation to a black surface, but it will not be perfectly Lambertian, and must be viewed from nearly right angles (close to the normal) to get the best properties. The construction of such devices was an important step in the empirical measurements that led to the precise mathematical identification of Kirchhoff's universal function, now known as Planck's law. Kirchhoff's perfect black bodies Planck also noted that the perfect black bodies of Kirchhoff do not occur in physical reality. They are theoretical fictions. Kirchhoff's perfect black bodies absorb all the radiation that falls on them, right in an infinitely thin surface layer, with no reflection and no scattering. They emit radiation in perfect accord with Lambert's cosine law. 
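As a rough feel for why a hole in a cavity approximates a black surface, here is a minimal sketch assuming the common first-order approximation for an isothermal cavity with diffusely reflecting walls, eps_eff = eps_w / (eps_w + f(1 − eps_w)), where eps_w is the wall emissivity and f is the fraction of the enclosing area occupied by the aperture; both the formula and the sample values are illustrative assumptions, not part of Planck's treatment described above.

```python
# Rough estimate of how the aperture of a cavity approaches ideal blackness.
# Assumes an isothermal cavity with diffusely reflecting walls of emissivity
# eps_wall and a small aperture occupying a fraction f of the total wall area;
# the first-order relation below is a textbook approximation, not an exact treatment.

def cavity_effective_emissivity(eps_wall: float, f: float) -> float:
    return eps_wall / (eps_wall + f * (1.0 - eps_wall))

for eps_wall in (0.3, 0.6, 0.9):
    for f in (0.1, 0.01, 0.001):
        eps_eff = cavity_effective_emissivity(eps_wall, f)
        print(f"wall emissivity {eps_wall:.1f}, aperture fraction {f:<6}"
              f" -> effective emissivity {eps_eff:.4f}")
```

Even walls of modest emissivity give an aperture whose effective emissivity approaches unity once the hole is a small fraction of the cavity area, which is why such holes serve as practical black surfaces.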
Original statements Gustav Kirchhoff stated his law in several papers in 1859 and 1860, and then in 1862 in an appendix to his collected reprints of those and some related papers. Prior to Kirchhoff's studies, it was known that for total heat radiation, the ratio of emissive power to absorptivity was the same for all bodies emitting and absorbing thermal radiation in thermodynamic equilibrium. This means that a good absorber is a good emitter. Naturally, a good reflector is a poor absorber. For wavelength specificity, prior to Kirchhoff, the ratio was shown experimentally by Balfour Stewart to be the same for all bodies, but the universal value of the ratio had not been explicitly considered in its own right as a function of wavelength and temperature. Kirchhoff's original contribution to the physics of thermal radiation was his postulate of a perfect black body radiating and absorbing thermal radiation in an enclosure opaque to thermal radiation and with walls that absorb at all wavelengths. Kirchhoff's perfect black body absorbs all the radiation that falls upon it. Every such black body emits from its surface with a spectral radiance that Kirchhoff labeled (for specific intensity, the traditional name for spectral radiance). The precise mathematical expression for that universal function was unknown to Kirchhoff; it was simply postulated to exist, until its precise mathematical expression was found in 1900 by Max Planck. It is nowadays referred to as Planck's law. Then, at each wavelength, for thermodynamic equilibrium in an enclosure, opaque to heat rays, with walls that absorb some radiation at every wavelength, the ratio of a body's emissive power to its absorptivity is one and the same universal function of wavelength and temperature, the black-body spectral radiance described by Planck's law. See also Kirchhoff's laws (disambiguation) Sakuma–Hattori equation Wien's displacement law Stefan–Boltzmann law, which states that the power of emission is proportional to the fourth power of the black body's temperature References Citations Bibliography General references Evgeny Lifshitz and L. P. Pitaevskii, Statistical Physics: Part 2, 3rd edition (Elsevier, 1980). F. Reif, Fundamentals of Statistical and Thermal Physics (McGraw-Hill: Boston, 1965). Heat transfer Electromagnetic radiation Eponymous laws of physics Gustav Kirchhoff 1859 in science
0.778681
0.991914
0.772385
Biefeld–Brown effect
The Biefeld–Brown effect is an electrical phenomenon, first noticed by inventor Thomas Townsend Brown in the 1920s, where high voltage applied to the electrodes of an asymmetric capacitor causes a net propulsive force toward the smaller electrode. Brown believed the effect was an anti-gravity force and referred to it as "electrogravitics", based on it being an electricity/gravity phenomenon. It has since been determined that the force is due to ionic wind that transfers its momentum to surrounding neutral particles. Overview It is generally assumed that the Biefeld–Brown effect produces an ionic wind that transfers its momentum to surrounding neutral particles. It describes a force observed on an asymmetric capacitor when high voltage is applied to the capacitor's electrodes. Once suitably charged up to high DC potentials, a thrust at the negative terminal, pushing it away from the positive terminal, is generated. The use of an asymmetric capacitor, with the negative electrode being larger than the positive electrode, allowed for more thrust to be produced in the direction from the low-flux to the high-flux region compared to a conventional capacitor. These asymmetric capacitors became known as Asymmetrical Capacitor Thrusters (ACT). The Biefeld–Brown effect can be observed in ionocrafts and lifters, which utilize the effect to produce thrust in the air without requiring any combustion or moving parts. History The "Biefeld–Brown effect" was the name given to a phenomenon observed by Thomas Townsend Brown while he was experimenting with X-ray tubes during the 1920s, while he was still in high school. When he applied a high voltage electrical charge to a Coolidge tube that he placed on a scale, Brown noticed a difference in the tube's mass depending on orientation, implying some kind of net force. This discovery caused him to assume that he had somehow influenced gravity electronically and led him to design a propulsion system based on this phenomenon. On 15 April 1927, he applied for a patent, entitled "Method of Producing Force or Motion," that described his invention as an electrical-based method that could control gravity to produce linear force or motion. In 1929, Brown published an article for the popular American magazine Science and Invention, which detailed his work. The article also mentioned the "gravitator," an invention by Brown which produced motion without the use of electromagnetism, gears, propellers, or wheels, but instead using the principles of what he called "electro-gravitation." He also claimed that the asymmetric capacitors were capable of generating mysterious fields that interacted with the Earth's gravitational pull and envisioned a future where gravitators would propel ocean liners and even space cars. At some point this effect also gained the moniker "Biefeld–Brown effect", probably coined by Brown, who claimed to have done a series of experiments with Denison University professor of physics and astronomy Paul Alfred Biefeld, a former teacher whom he described as his mentor and co-experimenter. Brown attended Denison in Ohio for a year before he dropped out, and records of him even having an association with Biefeld are sketchy at best. As of 2004, Denison University claims to have no record of any such experiments, or of any association between Brown and Biefeld. 
In his 1960 patent titled "Electrokinetic Apparatus," Brown refers to electrokinesis to describe the Biefeld–Brown effect, linking the phenomenon to the field of electrohydrodynamics (EHD). Brown also believed the Biefeld–Brown effect could produce an anti-gravity force, referred to as "electrogravitics" based on it being an electricity/gravity phenomenon. However, there is little evidence that supports Brown's claim on the effect's anti-gravity properties. Brown's patent made the following claims: There is a negative correlation between the distance between the plates of the capacitor and the strength of the effect, where the shorter the distance, the greater the effect. There is a positive correlation between the dielectric strength of the material between the electrodes and the strength of the effect, where the higher the strength, the greater the effect. There is a positive correlation between the area of the conductors and the strength of the effect, where the greater the area, the greater the effect. There is a positive correlation between the voltage difference between the capacitor plates and the strength of the effect, where the greater the voltage, the greater the effect. There is a positive correlation between the mass of the dielectric material and the strength of the effect, where the greater the mass, the greater the effect. In 1965, Brown filed a patent that claimed that a net force on the asymmetric capacitor can exist even in a vacuum. However, there is little experimental evidence that serves to validate his claims. Effect analysis The effect is generally believed to rely on corona discharge, which allows air molecules to become ionized near sharp points and edges. Usually, two electrodes are used with a high voltage between them, ranging from a few kilovolts and up to megavolt levels, where one electrode is small or sharp, and the other larger and smoother. The most effective distance between electrodes occurs at an electric potential gradient of about 10 kV/cm, which is just below the nominal breakdown voltage of air between two sharp points, at a current density level usually referred to as the saturated corona current condition. This creates a high field gradient around the smaller, positively charged electrode. Around this electrode, ionization occurs, that is, electrons are stripped from the atoms in the surrounding medium; they are literally pulled right off by the electrode's charge. This leaves a cloud of positively charged ions in the medium, which are attracted to the negative smooth electrode by Coulomb's Law, where they are neutralized again. This produces an equally scaled opposing force in the lower electrode. This effect can be used for propulsion (see EHD thruster), fluid pumps and recently also in EHD cooling systems. The velocity achievable by such setups is limited by the momentum achievable by the ionized air, which is reduced by ion impact with neutral air. A theoretical derivation of this force has been proposed (see the external links below). However, this effect works using either polarity for the electrodes: the small or thin electrode can be either positive or negative, and the larger electrode must have the opposite polarity. On many experimental sites it is reported that the thrust effect of a lifter is actually a bit stronger when the small electrode is the positive one. 
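As a rough, hedged sanity check of the ionic-wind picture just described, the sketch below uses the commonly quoted one-dimensional electrohydrodynamic thrust relation F ≈ I·d/μ (corona current I, electrode gap d, ion mobility μ in air); the current, gap and mobility values are illustrative assumptions for a hobbyist-scale lifter, not measurements.

```python
# Back-of-the-envelope estimate of ionic-wind thrust for a lifter-style
# asymmetric capacitor, using the one-dimensional EHD relation F ~ I * d / mu.
# The input numbers below are illustrative assumptions, not measured data.

MU_ION_AIR = 2.0e-4   # m^2/(V*s), typical order of magnitude for ions in air

def ehd_thrust(current_a: float, gap_m: float, mobility: float = MU_ION_AIR) -> float:
    """Thrust in newtons from the one-dimensional ionic-wind model."""
    return current_a * gap_m / mobility

current = 1.0e-3      # 1 mA of corona current (assumed)
gap = 0.03            # 3 cm between the thin wire and the foil electrode (assumed)

thrust_n = ehd_thrust(current, gap)
print(f"estimated thrust: {thrust_n * 1000:.0f} mN  (~{thrust_n / 9.81 * 1000:.0f} gram-force)")
```

The few to tens of gram-force that comes out is the scale at which home-built lifters operate, with no anti-gravity contribution required, and the same relation suggests why the thrust collapses at very low pressure, where almost no neutral molecules remain to ionize and the sustainable corona current vanishes.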
The stronger thrust observed with a positive sharp electrode is possibly an effect of the differences between the ionization energy and the electron affinity of the constituent parts of air, and thus of the ease with which ions are created at the 'sharp' electrode. As air pressure is removed from the system, several effects combine to reduce the force and momentum available to the system. The number of air molecules around the ionizing electrode is reduced, decreasing the quantity of ionized particles. At the same time, the number of impacts between ionized and neutral particles is reduced. Whether this increases or decreases the maximum momentum of the ionized air is not typically measured, although the force acting upon the electrodes is reduced, until the glow discharge region is entered. The reduction in force is also a product of the reducing breakdown voltage of air, as a lower potential must be applied between the electrodes, thereby reducing the force dictated by Coulomb's Law. During the glow discharge regime, the air becomes a conductor. Though the applied voltage and current will propagate at nearly the speed of light, the movement of the conductors themselves is almost negligible. This leads to a Coulomb force and change of momentum so small as to be zero. Below the glow discharge region, the breakdown voltage increases again, whilst the number of potential ions decreases, and the chance of impact lowers. Experiments have been conducted and found both to support and to contradict the existence of a force at very low pressure. It is likely that the reason for this is that at very low pressures, only experiments which used very large voltages produced positive results, as a product of a greater chance of ionization of the extremely limited number of available air molecules, and a greater force from each ion from Coulomb's Law; experiments which used lower voltages have a lower chance of ionization and a lower force per ion. Common to positive results is that the force observed is small in comparison to experiments conducted at standard pressure. Disputes surrounding electrogravity and ion wind Brown believed that his large, high voltage, high capacity capacitors produced an electric field strong enough to marginally interact with the Earth's gravitational pull, a phenomenon he labeled electrogravitics. Several researchers claim that conventional physics cannot adequately explain the phenomenon. The effect has become something of a cause célèbre in the UFO community, where it is seen as an example of something much more exotic than electrokinetics. William L. Moore and Charles Berlitz devoted an entire chapter of their book on the "Philadelphia Experiment" to a retelling of Brown's early work with the effect, implying he had discovered a new electrogravity effect and that it was being used by UFOs. There have been follow-ups on the claims that this force can be produced in a full vacuum, meaning it would be an unknown anti-gravity force, and not just the more well known ion wind. As part of a study in 1990, U.S. Air Force researcher R. L. Talley conducted a test on a Biefeld–Brown-style capacitor to replicate the effect in a vacuum. Despite attempts that increased the driving DC voltage to about 19 kV in vacuum chambers down to 10⁻⁶ torr, Talley observed no thrust in terms of static DC potential applied to the electrodes. In 2003, NASA scientist Jonathan Campbell tested a lifter in a vacuum at 10⁻⁷ torr with a voltage of up to 50 kV, only to observe no movement from the lifter. 
Campbell pointed out to a Wired magazine reporter that creating a true vacuum similar to space for the test requires tens of thousands of dollars in equipment. Around the same time in 2003, researchers from the Army Research Laboratory (ARL) tested the Biefeld–Brown effect by building four different-sized asymmetric capacitors based on simple designs found on the Internet and then applying a high voltage of around 30 kV to them. According to their report, the researchers wrote that the effects of ion wind was at least three orders of magnitude too small to account for the observed force on the asymmetric capacitor in the air. Having proposed that the Biefeld–Brown effect could theoretically be explained using ion drift instead of ion wind due to how the former involves collisions instead of ballistic trajectories, they noted these were only "scaling estimates" and more experimental and theoretical work was needed. Around ten years later, researchers from the Technical University of Liberec conducted experiments on the Biefeld–Brown effect that supported one of ARL's hypotheses that assigned ion drift as the most likely source of the generated force. In 2004, Martin Tajmar published a paper that also failed to replicate Brown's work and suggested that Brown may have instead observed the effects of a corona wind triggered by insufficient outgassing of the electrode assembly in the vacuum chamber and therefore misinterpreted the corona wind effects as a possible connection between gravitation and electromagnetism. Patents T. T. Brown was granted a number of patents on his discovery: GB300311 — A method of and an apparatus or machine for producing force or motion (accepted 1928-11-15) — Electrostatic motor (1934-09-25) — Electrokinetic apparatus (1960-08-16) — Electrokinetic transducer (1962-01-23) — Electrokinetic generator (1962-02-20) — Electrokinetic apparatus (1965-06-01) — Electric generator (1965-07-20) Historically, numerous patents have been granted for various applications of the effect, including electrostatic dust precipitation, air ionizers, and flight. was granted to G.E. Hagen in 1964 for apparatus more or less identical to the later so-called 'lifter' devices. References External links Propulsion Physical phenomena Anti-gravity Electrostatics
0.78318
0.986152
0.772334
Aeroelasticity
Aeroelasticity is the branch of physics and engineering studying the interactions between the inertial, elastic, and aerodynamic forces occurring while an elastic body is exposed to a fluid flow. The study of aeroelasticity may be broadly classified into two fields: static aeroelasticity dealing with the static or steady state response of an elastic body to a fluid flow, and dynamic aeroelasticity dealing with the body's dynamic (typically vibrational) response. Aircraft are prone to aeroelastic effects because they need to be lightweight while enduring large aerodynamic loads. Aircraft are designed to avoid the following aeroelastic problems: divergence where the aerodynamic forces increase the twist of a wing which further increases forces; control reversal where control activation produces an opposite aerodynamic moment that reduces, or in extreme cases reverses, the control effectiveness; and flutter which is uncontained vibration that can lead to the destruction of an aircraft. Aeroelasticity problems can be prevented by adjusting the mass, stiffness or aerodynamics of structures which can be determined and verified through the use of calculations, ground vibration tests and flight flutter trials. Flutter of control surfaces is usually eliminated by the careful placement of mass balances. The synthesis of aeroelasticity with thermodynamics is known as aerothermoelasticity, and its synthesis with control theory is known as aeroservoelasticity. History The second failure of Samuel Langley's prototype plane on the Potomac was attributed to aeroelastic effects (specifically, torsional divergence). An early scientific work on the subject was George Bryan's Theory of the Stability of a Rigid Aeroplane published in 1906. Problems with torsional divergence plagued aircraft in the First World War and were solved largely by trial-and-error and ad hoc stiffening of the wing. The first recorded and documented case of flutter in an aircraft was that which occurred to a Handley Page O/400 bomber during a flight in 1916, when it suffered a violent tail oscillation, which caused extreme distortion of the rear fuselage and the elevators to move asymmetrically. Although the aircraft landed safely, in the subsequent investigation F. W. Lanchester was consulted. One of his recommendations was that left and right elevators should be rigidly connected by a stiff shaft, which was to subsequently become a design requirement. In addition, the National Physical Laboratory (NPL) was asked to investigate the phenomenon theoretically, which was subsequently carried out by Leonard Bairstow and Arthur Fage. In 1926, Hans Reissner published a theory of wing divergence, leading to much further theoretical research on the subject. The term aeroelasticity itself was coined by Harold Roxbee Cox and Alfred Pugsley at the Royal Aircraft Establishment (RAE), Farnborough in the early 1930s. In the development of aeronautical engineering at Caltech, Theodore von Kármán started a course "Elasticity applied to Aeronautics". After teaching the course for one term, Kármán passed it over to Ernest Edwin Sechler, who developed aeroelasticity in that course and in publication of textbooks on the subject. In 1947, Arthur Roderick Collar defined aeroelasticity as "the study of the mutual interaction that takes place within the triangle of the inertial, elastic, and aerodynamic forces acting on structural members exposed to an airstream, and the influence of this study on design". 
Static aeroelasticity In an aeroplane, two significant static aeroelastic effects may occur. Divergence is a phenomenon in which the elastic twist of the wing suddenly becomes theoretically infinite, typically causing the wing to fail. Control reversal is a phenomenon occurring only in wings with ailerons or other control surfaces, in which these control surfaces reverse their usual functionality (e.g., the rolling direction associated with a given aileron moment is reversed). Divergence Divergence occurs when a lifting surface deflects under aerodynamic load in a direction which further increases lift in a positive feedback loop. The increased lift deflects the structure further, which eventually brings the structure to the point of divergence. Unlike flutter, which is another aeroelastic problem, instead of irregular oscillations, divergence causes the lifting surface to move in the same direction and when it comes to point of divergence the structure deforms. Control reversal Control surface reversal is the loss (or reversal) of the expected response of a control surface, due to deformation of the main lifting surface. For simple models (e.g. single aileron on an Euler-Bernoulli beam), control reversal speeds can be derived analytically as for torsional divergence. Control reversal can be used to aerodynamic advantage, and forms part of the Kaman servo-flap rotor design. Dynamic aeroelasticity Dynamic aeroelasticity studies the interactions among aerodynamic, elastic, and inertial forces. Examples of dynamic aeroelastic phenomena are: Flutter Flutter is a dynamic instability of an elastic structure in a fluid flow, caused by positive feedback between the body's deflection and the force exerted by the fluid flow. In a linear system, "flutter point" is the point at which the structure is undergoing simple harmonic motion—zero net damping—and so any further decrease in net damping will result in a self-oscillation and eventual failure. "Net damping" can be understood as the sum of the structure's natural positive damping and the negative damping of the aerodynamic force. Flutter can be classified into two types: hard flutter, in which the net damping decreases very suddenly, very close to the flutter point; and soft flutter, in which the net damping decreases gradually. In water the mass ratio of the pitch inertia of the foil to that of the circumscribing cylinder of fluid is generally too low for binary flutter to occur, as shown by explicit solution of the simplest pitch and heave flutter stability determinant. Structures exposed to aerodynamic forces—including wings and aerofoils, but also chimneys and bridges—are generally designed carefully within known parameters to avoid flutter. Blunt shapes, such as chimneys, can give off a continuous stream of vortices known as a Kármán vortex street, which can induce structural oscillations. Strakes are typically wrapped around chimneys to stop the formation of these vortices. In complex structures where both the aerodynamics and the mechanical properties of the structure are not fully understood, flutter can be discounted only through detailed testing. Even changing the mass distribution of an aircraft or the stiffness of one component can induce flutter in an apparently unrelated aerodynamic component. 
At its mildest, this can appear as a "buzz" in the aircraft structure, but at its most violent, it can develop uncontrollably with great speed and cause serious damage to the aircraft or lead to its destruction, as in Northwest Airlines Flight 2 in 1938, Braniff Flight 542 in 1959, or the prototypes for Finland's VL Myrsky fighter aircraft in the early 1940s. Famously, the original Tacoma Narrows Bridge was destroyed as a result of aeroelastic fluttering. Aeroservoelasticity In some cases, automatic control systems have been demonstrated to help prevent or limit flutter-related structural vibration. Propeller whirl flutter Propeller whirl flutter is a special case of flutter involving the aerodynamic and inertial effects of a rotating propeller and the stiffness of the supporting nacelle structure. Dynamic instability can occur involving pitch and yaw degrees of freedom of the propeller and the engine supports leading to an unstable precession of the propeller. Failure of the engine supports led to whirl flutter occurring on two Lockheed L-188 Electra aircraft, in 1959 on Braniff Flight 542 and again in 1960 on Northwest Orient Airlines Flight 710. Transonic aeroelasticity Flow is highly non-linear in the transonic regime, dominated by moving shock waves. Avoiding flutter is mission-critical for aircraft that fly through transonic Mach numbers. The role of shock waves was first analyzed by Holt Ashley. A phenomenon that impacts stability of aircraft known as "transonic dip", in which the flutter speed can get close to flight speed, was reported in May 1976 by Farmer and Hanson of the Langley Research Center. Buffeting Buffeting is a high-frequency instability, caused by airflow separation or shock wave oscillations from one object striking another. It is caused by a sudden impulse of load increasing. It is a random forced vibration. Generally it affects the tail unit of the aircraft structure due to air flow downstream of the wing. The methods for buffet detection are: Pressure coefficient diagram Pressure divergence at trailing edge Computing separation from trailing edge based on Mach number Normal force fluctuating divergence Prediction and cure In the period 1950–1970, AGARD developed the Manual on Aeroelasticity which details the processes used in solving and verifying aeroelastic problems along with standard examples that can be used to test numerical solutions. Aeroelasticity involves not just the external aerodynamic loads and the way they change but also the structural, damping and mass characteristics of the aircraft. Prediction involves making a mathematical model of the aircraft as a series of masses connected by springs and dampers which are tuned to represent the dynamic characteristics of the aircraft structure. The model also includes details of applied aerodynamic forces and how they vary. The model can be used to predict the flutter margin and, if necessary, test fixes to potential problems. Small carefully chosen changes to mass distribution and local structural stiffness can be very effective in solving aeroelastic problems. Methods of predicting flutter in linear structures include the p-method, the k-method and the p-k method. For nonlinear systems, flutter is usually interpreted as a limit cycle oscillation (LCO), and methods from the study of dynamical systems can be used to determine the speed at which flutter will occur. 
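To illustrate the kind of eigenvalue analysis that the p-, k- and p-k methods formalize, here is a minimal sketch of a textbook two-degree-of-freedom "typical section" (plunge and pitch) with quasi-steady aerodynamics; all structural and aerodynamic parameters are illustrative assumptions, and a real flutter calculation would use unsteady aerodynamics and a validated structural model. The sketch also prints the static torsional divergence speed, at which the effective pitch stiffness vanishes.

```python
import numpy as np

# Typical-section model (plunge h, pitch alpha) with quasi-steady aerodynamics.
# All parameter values below are illustrative assumptions.
m, S_a, I_a = 20.0, 0.4, 1.0                 # mass, static imbalance, pitch inertia (per unit span)
K_h, K_a = 2.0e4, 3.0e3                      # plunge and pitch stiffnesses
c, e, CL_a, rho = 0.5, 0.2, 2 * np.pi, 1.225 # chord, aero-centre offset fraction, lift slope, air density
S_ref = c * 1.0                              # reference area per unit span

Ms = np.array([[m, S_a], [S_a, I_a]])
Ks = np.diag([K_h, K_a])
Cs = 1e-3 * Ks                               # small structural damping (assumed)

def max_growth_rate(V):
    """Largest real part of the aeroelastic eigenvalues at airspeed V."""
    q = 0.5 * rho * V**2
    # Quasi-steady lift L = q*S*CL_a*(alpha + hdot/V); moment about the elastic axis = e*c*L
    Ka_aero = q * S_ref * CL_a * np.array([[0.0, 1.0], [0.0, -e * c]])
    Ca_aero = (q * S_ref * CL_a / V) * np.array([[1.0, 0.0], [-e * c, 0.0]])
    A = np.block([[np.zeros((2, 2)), np.eye(2)],
                  [-np.linalg.solve(Ms, Ks + Ka_aero), -np.linalg.solve(Ms, Cs + Ca_aero)]])
    return np.linalg.eigvals(A).real.max()

# Static (torsional) divergence: the effective pitch stiffness K_a - q*S*CL_a*e*c goes to zero
q_div = K_a / (e * c * S_ref * CL_a)
print(f"divergence speed ~ {np.sqrt(2 * q_div / rho):.1f} m/s")

# Sweep airspeed and report the first speed at which an eigenvalue turns unstable
speeds = np.arange(5.0, 200.0, 1.0)
flutter = next((V for V in speeds if max_growth_rate(V) > 0.0), None)
if flutter is not None:
    print(f"flutter onset ~ {flutter:.1f} m/s")
else:
    print("no flutter below 200 m/s in this toy model")
```

Sweeping speed and watching the sign of the least-damped eigenvalue is exactly the "zero net damping" criterion described above, here applied to the simplest possible model.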
Media These videos detail the Active Aeroelastic Wing two-phase NASA-Air Force flight research program to investigate the potential of aerodynamically twisting flexible wings to improve maneuverability of high-performance aircraft at transonic and supersonic speeds, with traditional control surfaces such as ailerons and leading-edge flaps used to induce the twist. Notable aeroelastic failures The original Tacoma Narrows Bridge was destroyed as a result of aeroelastic fluttering. Propeller whirl flutter of the Lockheed L-188 Electra on Braniff Flight 542. 1931 Transcontinental & Western Air Fokker F-10 crash. Body freedom flutter of the GAF Jindivik drone. See also Adaptive compliant wing Aerospace engineering Kármán vortex street Mathematical modeling Oscillation Parker Variable Wing Vortex shedding Vortex-induced vibration X-53 Active Aeroelastic Wing References Further reading Bisplinghoff, R. L., Ashley, H. and Halfman, H., Aeroelasticity. Dover Science, 1996, , 880 p. Dowell, E. H., A Modern Course on Aeroelasticity. . Fung, Y. C., An Introduction to the Theory of Aeroelasticity. Dover, 1994, . Hodges, D. H. and Pierce, A., Introduction to Structural Dynamics and Aeroelasticity, Cambridge, 2002, . Wright, J. R. and Cooper, J. E., Introduction to Aircraft Aeroelasticity and Loads, Wiley 2007, . Hoque, M. E., "Active Flutter Control", LAP Lambert Academic Publishing, Germany, 2010, . Collar, A. R., "The first fifty years of aeroelasticity", Aerospace, vol. 5, no. 2, pp. 12–20, 1978. Garrick, I. E. and Reed W. H., "Historical development of aircraft flutter", Journal of Aircraft, vol. 18, pp. 897–912, Nov. 1981. External links Aeroelasticity Branch – NASA Langley Research Center DLR Institute of Aeroelasticity National Aerospace Laboratory The Aeroelasticity Group – Texas A&M University NACA Technical Reports – NASA Langley Research Center NASA Aeroelasticity Handbook Aerodynamics Aircraft wing design Aerospace engineering Solid mechanics Elasticity (physics) Articles containing video clips
0.778553
0.991993
0.77232
T-symmetry
T-symmetry or time reversal symmetry is the theoretical symmetry of physical laws under the transformation of time reversal, T: t ↦ −t. Since the second law of thermodynamics states that entropy increases as time flows toward the future, in general, the macroscopic universe does not show symmetry under time reversal. In other words, time is said to be non-symmetric, or asymmetric, except for special equilibrium states when the second law of thermodynamics predicts the time symmetry to hold. However, quantum noninvasive measurements are predicted to violate time symmetry even in equilibrium, contrary to their classical counterparts, although this has not yet been experimentally confirmed. Time asymmetries (see Arrow of time) generally fall into one of three categories: intrinsic to the dynamic physical law (e.g., for the weak force) due to the initial conditions of the universe (e.g., for the second law of thermodynamics) due to measurements (e.g., for the noninvasive measurements) Macroscopic phenomena The second law of thermodynamics Daily experience shows that T-symmetry does not hold for the behavior of bulk materials. Of these macroscopic laws, most notable is the second law of thermodynamics. Many other phenomena, such as the relative motion of bodies with friction, or viscous motion of fluids, reduce to this, because the underlying mechanism is the dissipation of usable energy (for example, kinetic energy) into heat. The question of whether this time-asymmetric dissipation is really inevitable has been considered by many physicists, often in the context of Maxwell's demon. The name comes from a thought experiment described by James Clerk Maxwell in which a microscopic demon guards a gate between two halves of a room. It only lets slow molecules into one half, only fast ones into the other. By eventually making one side of the room cooler than before and the other hotter, it seems to reduce the entropy of the room, and reverse the arrow of time. Many analyses have been made of this; all show that when the entropy of room and demon are taken together, this total entropy does increase. Modern analyses of this problem have taken into account Claude E. Shannon's relation between entropy and information. Many interesting results in modern computing are closely related to this problem—reversible computing, quantum computing and physical limits to computing are examples. These seemingly metaphysical questions are today, in these ways, slowly being converted into hypotheses of the physical sciences. The current consensus hinges upon the Boltzmann–Shannon identification of the logarithm of phase space volume with the negative of Shannon information, and hence with entropy. In this notion, a fixed initial state of a macroscopic system corresponds to relatively low entropy because the coordinates of the molecules of the body are constrained. As the system evolves in the presence of dissipation, the molecular coordinates can move into larger volumes of phase space, becoming more uncertain, and thus leading to an increase in entropy. Big Bang One resolution to irreversibility is to say that the constant increase of entropy we observe happens only because of the initial state of our universe. Other possible states of the universe (for example, a universe at heat death equilibrium) would actually result in no increase of entropy. In this view, the apparent T-asymmetry of our universe is a problem in cosmology: why did the universe start with a low entropy? 
This view, supported by cosmological observations (such as the isotropy of the cosmic microwave background) connects this problem to the question of initial conditions of the universe. Black holes The laws of gravity seem to be time reversal invariant in classical mechanics; however, specific solutions need not be. An object can cross through the event horizon of a black hole from the outside, and then fall rapidly to the central region where our understanding of physics breaks down. Since within a black hole the forward light-cone is directed towards the center and the backward light-cone is directed outward, it is not even possible to define time-reversal in the usual manner. The only way anything can escape from a black hole is as Hawking radiation. The time reversal of a black hole would be a hypothetical object known as a white hole. From the outside they appear similar. While a black hole has a beginning and is inescapable, a white hole has an ending and cannot be entered. The forward light-cones of a white hole are directed outward; and its backward light-cones are directed towards the center. The event horizon of a black hole may be thought of as a surface moving outward at the local speed of light and is just on the edge between escaping and falling back. The event horizon of a white hole is a surface moving inward at the local speed of light and is just on the edge between being swept outward and succeeding in reaching the center. They are two different kinds of horizons—the horizon of a white hole is like the horizon of a black hole turned inside-out. The modern view of black hole irreversibility is to relate it to the second law of thermodynamics, since black holes are viewed as thermodynamic objects. For example, according to the gauge–gravity duality conjecture, all microscopic processes in a black hole are reversible, and only the collective behavior is irreversible, as in any other macroscopic, thermal system. Kinetic consequences: detailed balance and Onsager reciprocal relations In physical and chemical kinetics, T-symmetry of the mechanical microscopic equations implies two important laws: the principle of detailed balance and the Onsager reciprocal relations. T-symmetry of the microscopic description together with its kinetic consequences are called microscopic reversibility. Effect of time reversal on some variables of classical physics Even Classical variables that do not change upon time reversal include: , position of a particle in three-space , acceleration of the particle , force on the particle , energy of the particle , electric potential (voltage) , electric field , electric displacement , density of electric charge , electric polarization Energy density of the electromagnetic field , Maxwell stress tensor All masses, charges, coupling constants, and other physical constants, except those associated with the weak force. Odd Classical variables that time reversal negates include: , the time when an event occurs , velocity of a particle , linear momentum of a particle , angular momentum of a particle (both orbital and spin) , electromagnetic vector potential , magnetic field , magnetic auxiliary field , density of electric current , magnetization , Poynting vector , power (rate of work done). 
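To make the even/odd bookkeeping above concrete, the following minimal sketch integrates a particle in a one-dimensional anharmonic potential, then applies the classical time-reversal operation (keep the even variable x, flip the odd variable v) and integrates again for the same number of steps; frictionless Newtonian dynamics retraces its own path back to the initial state. The potential and all numbers are illustrative assumptions.

```python
# Frictionless Newtonian dynamics is invariant under t -> -t with x (even) kept
# and v (odd) flipped. Velocity-Verlet integration is itself time-reversible,
# so flipping the velocity and integrating the same number of steps should
# return the particle (almost exactly) to its starting state.

def force(x):
    return -x - 0.5 * x**3          # from V(x) = x^2/2 + x^4/8, an anharmonic oscillator

def verlet(x, v, n_steps, dt=1e-3):
    a = force(x)
    for _ in range(n_steps):
        x += v * dt + 0.5 * a * dt**2
        a_new = force(x)
        v += 0.5 * (a + a_new) * dt
        a = a_new
    return x, v

x0, v0 = 1.0, 0.3                                  # initial state (assumed)
x1, v1 = verlet(x0, v0, n_steps=20_000)            # run forward
x2, v2 = verlet(x1, -v1, n_steps=20_000)           # apply T: keep x, flip v, run again

print(f"forward end state : x = {x1:+.6f}, v = {v1:+.6f}")
print(f"after reversal    : x = {x2:+.6f}, v = {-v2:+.6f}  (compare with x0 = {x0}, v0 = {v0})")
```

The reversed run returns to the initial position with the initial speed (up to round-off), illustrating the microscopic reversibility that underlies detailed balance and the Onsager relations; irreversibility only enters once dissipative terms, odd in v, are added.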
Example: Magnetic Field and Onsager reciprocal relations Let us consider the example of a system of charged particles subject to a constant external magnetic field: in this case the canonical time reversal operation that reverses the velocities and the time and keeps the coordinates untouched is no more a symmetry for the system. Under this consideration, it seems that only Onsager–Casimir reciprocal relations could hold; these equalities relate two different systems, one subject to and another to , and so their utility is limited. However, there was proved that it is possible to find other time reversal operations which preserve the dynamics and so Onsager reciprocal relations; in conclusion, one cannot state that the presence of a magnetic field always breaks T-symmetry. Microscopic phenomena: time reversal invariance Most systems are asymmetric under time reversal, but there may be phenomena with symmetry. In classical mechanics, a velocity v reverses under the operation of T, but an acceleration does not. Therefore, one models dissipative phenomena through terms that are odd in v. However, delicate experiments in which known sources of dissipation are removed reveal that the laws of mechanics are time reversal invariant. Dissipation itself is originated in the second law of thermodynamics. The motion of a charged body in a magnetic field, B involves the velocity through the Lorentz force term v×B, and might seem at first to be asymmetric under T. A closer look assures us that B also changes sign under time reversal. This happens because a magnetic field is produced by an electric current, J, which reverses sign under T. Thus, the motion of classical charged particles in electromagnetic fields is also time reversal invariant. (Despite this, it is still useful to consider the time-reversal non-invariance in a local sense when the external field is held fixed, as when the magneto-optic effect is analyzed. This allows one to analyze the conditions under which optical phenomena that locally break time-reversal, such as Faraday isolators and directional dichroism, can occur.) In physics one separates the laws of motion, called kinematics, from the laws of force, called dynamics. Following the classical kinematics of Newton's laws of motion, the kinematics of quantum mechanics is built in such a way that it presupposes nothing about the time reversal symmetry of the dynamics. In other words, if the dynamics are invariant, then the kinematics will allow it to remain invariant; if the dynamics is not, then the kinematics will also show this. The structure of the quantum laws of motion are richer, and we examine these next. Time reversal in quantum mechanics This section contains a discussion of the three most important properties of time reversal in quantum mechanics; chiefly, that it must be represented as an anti-unitary operator, that it protects non-degenerate quantum states from having an electric dipole moment, that it has two-dimensional representations with the property (for fermions). The strangeness of this result is clear if one compares it with parity. If parity transforms a pair of quantum states into each other, then the sum and difference of these two basis states are states of good parity. Time reversal does not behave like this. It seems to violate the theorem that all abelian groups be represented by one-dimensional irreducible representations. The reason it does this is that it is represented by an anti-unitary operator. It thus opens the way to spinors in quantum mechanics. 
On the other hand, the notion of quantum-mechanical time reversal turns out to be a useful tool for the development of physically motivated quantum computing and simulation settings, providing, at the same time, relatively simple tools to assess their complexity. For instance, quantum-mechanical time reversal was used to develop novel boson sampling schemes and to prove the duality between two fundamental optical operations, beam splitter and squeezing transformations. Formal notation In formal mathematical presentations of T-symmetry, three different kinds of notation for T need to be carefully distinguished: the T that is an involution, capturing the actual reversal of the time coordinate, the T that is an ordinary finite dimensional matrix, acting on spinors and vectors, and the T that is an operator on an infinite-dimensional Hilbert space. For a real (not complex) classical (unquantized) scalar field , the time reversal involution can simply be written as as time reversal leaves the scalar value at a fixed spacetime point unchanged, up to an overall sign . A slightly more formal way to write this is which has the advantage of emphasizing that is a map, and thus the "mapsto" notation whereas is a factual statement relating the old and new fields to one-another. Unlike scalar fields, spinor and vector fields might have a non-trivial behavior under time reversal. In this case, one has to write where is just an ordinary matrix. For complex fields, complex conjugation may be required, for which the mapping can be thought of as a 2x2 matrix. For a Dirac spinor, cannot be written as a 4x4 matrix, because, in fact, complex conjugation is indeed required; however, it can be written as an 8x8 matrix, acting on the 8 real components of a Dirac spinor. In the general setting, there is no ab initio value to be given for ; its actual form depends on the specific equation or equations which are being examined. In general, one simply states that the equations must be time-reversal invariant, and then solves for the explicit value of that achieves this goal. In some cases, generic arguments can be made. Thus, for example, for spinors in three-dimensional Euclidean space, or four-dimensional Minkowski space, an explicit transformation can be given. It is conventionally given as where is the y-component of the angular momentum operator and is complex conjugation, as before. This form follows whenever the spinor can be described with a linear differential equation that is first-order in the time derivative, which is generally the case in order for something to be validly called "a spinor". The formal notation now makes it clear how to extend time-reversal to an arbitrary tensor field In this case, Covariant tensor indexes will transform as and so on. For quantum fields, there is also a third T, written as which is actually an infinite dimensional operator acting on a Hilbert space. It acts on quantized fields as This can be thought of as a special case of a tensor with one covariant, and one contravariant index, and thus two 's are required. All three of these symbols capture the idea of time-reversal; they differ with respect to the specific space that is being acted on: functions, vectors/spinors, or infinite-dimensional operators. The remainder of this article is not cautious to distinguish these three; the T that appears below is meant to be either or or depending on context, left for the reader to infer. 
Anti-unitary representation of time reversal Eugene Wigner showed that a symmetry operation S of a Hamiltonian is represented, in quantum mechanics either by a unitary operator, , or an antiunitary one, where U is unitary, and K denotes complex conjugation. These are the only operations that act on Hilbert space so as to preserve the length of the projection of any one state-vector onto another state-vector. Consider the parity operator. Acting on the position, it reverses the directions of space, so that . Similarly, it reverses the direction of momentum, so that , where x and p are the position and momentum operators. This preserves the canonical commutator , where ħ is the reduced Planck constant, only if P is chosen to be unitary, . On the other hand, the time reversal operator T, it does nothing to the x-operator, , but it reverses the direction of p, so that . The canonical commutator is invariant only if T is chosen to be anti-unitary, i.e., . Another argument involves energy, the time-component of the four-momentum. If time reversal were implemented as a unitary operator, it would reverse the sign of the energy just as space-reversal reverses the sign of the momentum. This is not possible, because, unlike momentum, energy is always positive. Since energy in quantum mechanics is defined as the phase factor exp(–iEt) that one gets when one moves forward in time, the way to reverse time while preserving the sign of the energy is to also reverse the sense of "i", so that the sense of phases is reversed. Similarly, any operation that reverses the sense of phase, which changes the sign of i, will turn positive energies into negative energies unless it also changes the direction of time. So every antiunitary symmetry in a theory with positive energy must reverse the direction of time. Every antiunitary operator can be written as the product of the time reversal operator and a unitary operator that does not reverse time. For a particle with spin J, one can use the representation where Jy is the y-component of the spin, and use of has been made. Electric dipole moments This has an interesting consequence on the electric dipole moment (EDM) of any particle. The EDM is defined through the shift in the energy of a state when it is put in an external electric field: , where d is called the EDM and δ, the induced dipole moment. One important property of an EDM is that the energy shift due to it changes sign under a parity transformation. However, since d is a vector, its expectation value in a state |ψ⟩ must be proportional to ⟨ψ| J |ψ⟩, that is the expected spin. Thus, under time reversal, an invariant state must have vanishing EDM. In other words, a non-vanishing EDM signals both P and T symmetry-breaking. Some molecules, such as water, must have EDM irrespective of whether T is a symmetry. This is correct; if a quantum system has degenerate ground states that transform into each other under parity, then time reversal need not be broken to give EDM. Experimentally observed bounds on the electric dipole moment of the nucleon currently set stringent limits on the violation of time reversal symmetry in the strong interactions, and their modern theory: quantum chromodynamics. Then, using the CPT invariance of a relativistic quantum field theory, this puts strong bounds on strong CP violation. Experimental bounds on the electron electric dipole moment also place limits on theories of particle physics and their parameters. 
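As a small numerical check of the representation quoted above, T = exp(−iπJy)K, the following sketch (a hedged illustration using only standard numpy/scipy; the helper names are ours) verifies that applying T twice gives −1 on a spin-1/2 state and +1 on a spin-1 state, the sign that drives the Kramers degeneracy discussed next.

```python
import numpy as np
from scipy.linalg import expm

def Jy(j):
    """y-component of the angular momentum operator for spin j (hbar = 1)."""
    m = np.arange(j, -j - 1, -1)                     # m = j, j-1, ..., -j
    dim = len(m)
    Jp = np.zeros((dim, dim), dtype=complex)
    for k in range(dim - 1):                         # <m+1| J+ |m> = sqrt(j(j+1) - m(m+1))
        Jp[k, k + 1] = np.sqrt(j * (j + 1) - m[k + 1] * (m[k + 1] + 1))
    return (Jp - Jp.conj().T) / 2j

def apply_T(psi, j):
    """Time reversal T = exp(-i*pi*Jy) K acting on a state vector psi."""
    U = expm(-1j * np.pi * Jy(j))
    return U @ psi.conj()

for j in (0.5, 1.0):
    dim = int(2 * j + 1)
    rng = np.random.default_rng(0)
    psi = rng.normal(size=dim) + 1j * rng.normal(size=dim)
    psi /= np.linalg.norm(psi)
    TTpsi = apply_T(apply_T(psi, j), j)
    sign = np.vdot(psi, TTpsi).real                  # T^2 psi = ±psi in these representations
    print(f"spin {j}: T^2 acts as {sign:+.0f} * identity")
```

The half-integer case returns −1, so a state and its time-reversed partner are necessarily orthogonal, which is the content of Kramers' theorem below.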
Kramers' theorem For T, which is an anti-unitary Z2 symmetry generator T2 = UKUK = UU* = U (UT)−1 = Φ, where Φ is a diagonal matrix of phases. As a result, and , showing that U = Φ U Φ. This means that the entries in Φ are ±1, as a result of which one may have either . This is specific to the anti-unitarity of T. For a unitary operator, such as the parity, any phase is allowed. Next, take a Hamiltonian invariant under T. Let |a⟩ and T|a⟩ be two quantum states of the same energy. Now, if , then one finds that the states are orthogonal: a result called Kramers' theorem. This implies that if , then there is a twofold degeneracy in the state. This result in non-relativistic quantum mechanics presages the spin statistics theorem of quantum field theory. Quantum states that give unitary representations of time reversal, i.e., have , are characterized by a multiplicative quantum number, sometimes called the T-parity. Time reversal of the known dynamical laws Particle physics codified the basic laws of dynamics into the standard model. This is formulated as a quantum field theory that has CPT symmetry, i.e., the laws are invariant under simultaneous operation of time reversal, parity and charge conjugation. However, time reversal itself is seen not to be a symmetry (this is usually called CP violation). There are two possible origins of this asymmetry, one through the mixing of different flavours of quarks in their weak decays, the second through a direct CP violation in strong interactions. The first is seen in experiments, the second is strongly constrained by the non-observation of the EDM of a neutron. Time reversal violation is unrelated to the second law of thermodynamics, because due to the conservation of the CPT symmetry, the effect of time reversal is to rename particles as antiparticles and vice versa. Thus the second law of thermodynamics is thought to originate in the initial conditions in the universe. Time reversal of noninvasive measurements Strong measurements (both classical and quantum) are certainly disturbing, causing asymmetry due to the second law of thermodynamics. However, noninvasive measurements should not disturb the evolution, so they are expected to be time-symmetric. Surprisingly, it is true only in classical physics but not in quantum physics, even in a thermodynamically invariant equilibrium state. This type of asymmetry is independent of CPT symmetry but has not yet been confirmed experimentally due to extreme conditions of the checking proposal. See also Arrow of time Causality (physics) Computing applications Limits of computation Quantum computing Reversible computing Standard model CKM matrix CP violation CPT invariance Neutrino mass Strong CP problem Wheeler–Feynman absorber theory Loschmidt's paradox Maxwell's demon Microscopic reversibility Second law of thermodynamics Time translation symmetry References Inline citations General references Maxwell's demon: entropy, information, computing, edited by H.S.Leff and A.F. Rex (IOP publishing, 1990) Maxwell's demon, 2: entropy, classical and quantum information, edited by H.S.Leff and A.F. Rex (IOP publishing, 2003) The emperor's new mind: concerning computers, minds, and the laws of physics, by Roger Penrose (Oxford university press, 2002) Multiferroic materials with time-reversal breaking optical properties CP violation, by I.I. Bigi and A.I. 
Sanda (Cambridge University Press, 2000) Particle Data Group on CP violation the Babar experiment in SLAC the BELLE experiment in KEK the KTeV experiment in Fermilab the CPLEAR experiment in CERN Time in physics Thermodynamics Statistical mechanics Philosophy of thermal and statistical physics Quantum field theory Symmetry
0.776876
0.994093
0.772287
Dimensionless numbers in fluid mechanics
Dimensionless numbers (or characteristic numbers) have an important role in analyzing the behavior of fluids and their flow as well as in other transport phenomena. They include the Reynolds and the Mach numbers, which describe as ratios the relative magnitude of fluid and physical system characteristics, such as density, viscosity, speed of sound, and flow speed. To compare a real situation (e.g. an aircraft) with a small-scale model it is necessary to keep the important characteristic numbers the same. Names and formulation of these numbers were standardized in ISO 31-12 and in ISO 80000-11. Diffusive numbers in transport phenomena As a general example of how dimensionless numbers arise in fluid mechanics, the classical numbers in transport phenomena of mass, momentum, and energy are principally analyzed by the ratio of effective diffusivities in each transport mechanism. The six dimensionless numbers give the relative strengths of the different phenomena of inertia, viscosity, conductive heat transport, and diffusive mass transport. (In the table, the diagonals give common symbols for the quantities, and the given dimensionless number is the ratio of the left column quantity over top row quantity; e.g. Re = inertial force/viscous force = vd/ν.) These same quantities may alternatively be expressed as ratios of characteristic time, length, or energy scales. Such forms are less commonly used in practice, but can provide insight into particular applications. Droplet formation Droplet formation mostly depends on momentum, viscosity and surface tension. In inkjet printing for example, an ink with a too high Ohnesorge number would not jet properly, and an ink with a too low Ohnesorge number would be jetted with many satellite drops. Not all of the quantity ratios are explicitly named, though each of the unnamed ratios could be expressed as a product of two other named dimensionless numbers. List All numbers are dimensionless quantities. See other article for extensive list of dimensionless quantities. Certain dimensionless quantities of some importance to fluid mechanics are given below: References Fluid dynamics
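As a brief illustration of how such characteristic numbers are formed from fluid and flow properties, here is a minimal sketch; the property values are nominal room-temperature figures for water and air, used purely as placeholders.

```python
# Characteristic numbers as ratios of fluid and flow properties.
# Property values below are nominal room-temperature figures used for illustration.

def reynolds(v, L, nu):            # inertial forces / viscous forces
    return v * L / nu

def mach(v, c_sound):              # flow speed / speed of sound
    return v / c_sound

def prandtl(nu, alpha):            # momentum diffusivity / thermal diffusivity
    return nu / alpha

def ohnesorge(mu, rho, sigma, L):  # viscous forces / sqrt(inertia * surface tension)
    return mu / (rho * sigma * L) ** 0.5

# Water jet from a 30 micrometre inkjet nozzle (illustrative numbers)
rho, mu, sigma, d = 1000.0, 1.0e-3, 0.072, 30e-6
print(f"Ohnesorge (water, 30 um nozzle): {ohnesorge(mu, rho, sigma, d):.3f}")

# Air flow over a 1 m chord at 50 m/s (illustrative numbers)
nu_air, alpha_air, c_air = 1.5e-5, 2.1e-5, 343.0
print(f"Reynolds: {reynolds(50.0, 1.0, nu_air):.2e}")
print(f"Mach    : {mach(50.0, c_air):.2f}")
print(f"Prandtl : {prandtl(nu_air, alpha_air):.2f}")
```

With these placeholder values the water jet lands at an Ohnesorge number of roughly 0.02, in the low-Ohnesorge regime associated above with satellite drops, while the air flow gives the familiar Prandtl number of about 0.7.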
0.780406
0.989581
0.772275
Gravitational redshift
In physics and general relativity, gravitational redshift (known as Einstein shift in older literature) is the phenomenon that electromagnetic waves or photons travelling out of a gravitational well lose energy. This loss of energy corresponds to a decrease in the wave frequency and increase in the wavelength, known more generally as a redshift. The opposite effect, in which photons gain energy when travelling into a gravitational well, is known as a gravitational blueshift (a type of blueshift). The effect was first described by Einstein in 1907, eight years before his publication of the full theory of relativity. Gravitational redshift can be interpreted as a consequence of the equivalence principle (that gravity and acceleration are equivalent and the redshift is caused by the Doppler effect) or as a consequence of the mass–energy equivalence and conservation of energy ('falling' photons gain energy), though there are numerous subtleties that complicate a rigorous derivation. A gravitational redshift can also equivalently be interpreted as gravitational time dilation at the source of the radiation: if two oscillators (attached to transmitters producing electromagnetic radiation) are operating at different gravitational potentials, the oscillator at the higher gravitational potential (farther from the attracting body) will tick faster; that is, when observed from the same location, it will have a higher measured frequency than the oscillator at the lower gravitational potential (closer to the attracting body). To first approximation, gravitational redshift is proportional to the difference in gravitational potential divided by the speed of light squared, , thus resulting in a very small effect. Light escaping from the surface of the Sun was predicted by Einstein in 1911 to be redshifted by roughly 2 ppm or 2 × 10−6. Navigational signals from GPS satellites orbiting at altitude are perceived blueshifted by approximately 0.5 ppb or 5 × 10−10, corresponding to a (negligible) increase of less than 1 Hz in the frequency of a 1.5 GHz GPS radio signal (however, the accompanying gravitational time dilation affecting the atomic clock in the satellite is crucially important for accurate navigation). On the surface of the Earth the gravitational potential is proportional to height, , and the corresponding redshift is roughly 10−16 (0.1 parts per quadrillion) per meter of change in elevation and/or altitude. In astronomy, the magnitude of a gravitational redshift is often expressed as the velocity that would create an equivalent shift through the relativistic Doppler effect. In such units, the 2 ppm sunlight redshift corresponds to a 633 m/s receding velocity, roughly of the same magnitude as convective motions in the Sun, thus complicating the measurement. The GPS satellite gravitational blueshift velocity equivalent is less than 0.2 m/s, which is negligible compared to the actual Doppler shift resulting from its orbital velocity. In astronomical objects with strong gravitational fields the redshift can be much greater; for example, light from the surface of a white dwarf is gravitationally redshifted on average by around (50 km/s)/c (around 170 ppm). Observing the gravitational redshift in the Solar System is one of the classical tests of general relativity. Measuring the gravitational redshift to high precision with atomic clocks can serve as a test of Lorentz symmetry and guide searches for dark matter. 
Prediction by the equivalence principle and general relativity Uniform gravitational field or acceleration Einstein's theory of general relativity incorporates the equivalence principle, which can be stated in various different ways. One such statement is that gravitational effects are locally undetectable for a free-falling observer. Therefore, in a laboratory experiment at the surface of the Earth, all gravitational effects should be equivalent to the effects that would have been observed if the laboratory had been accelerating through outer space at g. One consequence is a gravitational Doppler effect. If a light pulse is emitted at the floor of the laboratory, then a free-falling observer says that by the time it reaches the ceiling, the ceiling has accelerated away from it, and therefore when observed by a detector fixed to the ceiling, it will be observed to have been Doppler shifted toward the red end of the spectrum. This shift, which the free-falling observer considers to be a kinematical Doppler shift, is thought of by the laboratory observer as a gravitational redshift. Such an effect was verified in the 1959 Pound–Rebka experiment. In a case such as this, where the gravitational field is uniform, the change in wavelength is given by where is the change in height. Since this prediction arises directly from the equivalence principle, it does not require any of the mathematical apparatus of general relativity, and its verification does not specifically support general relativity over any other theory that incorporates the equivalence principle. On Earth's surface (or in a spaceship accelerating at 1 g), the gravitational redshift is approximately , the equivalent of a Doppler shift for every 1 m of altitude. Spherically symmetric gravitational field When the field is not uniform, the simplest and most useful case to consider is that of a spherically symmetric field. By Birkhoff's theorem, such a field is described in general relativity by the Schwarzschild metric, , where is the clock time of an observer at distance R from the center, is the time measured by an observer at infinity, is the Schwarzschild radius , "..." represents terms that vanish if the observer is at rest, is the Newtonian constant of gravitation, the mass of the gravitating body, and the speed of light. The result is that frequencies and wavelengths are shifted according to the ratio where is the wavelength of the light as measured by the observer at infinity, is the wavelength measured at the source of emission, and is the radius at which the photon is emitted. This can be related to the redshift parameter conventionally defined as . In the case where neither the emitter nor the observer is at infinity, the transitivity of Doppler shifts allows us to generalize the result to . The redshift formula for the frequency is . When is small, these results are consistent with the equation given above based on the equivalence principle. The redshift ratio may also be expressed in terms of a (Newtonian) escape velocity at , resulting in the corresponding Lorentz factor: . For an object compact enough to have an event horizon, the redshift is not defined for photons emitted inside the Schwarzschild radius, both because signals cannot escape from inside the horizon and because an object such as the emitter cannot be stationary inside the horizon, as was assumed above. Therefore, this formula only applies when is larger than . 
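A minimal numerical sketch of the spherically symmetric result just described, comparing the exact Schwarzschild factor 1 + z = (1 − r_s/R)^(−1/2) with the weak-field approximation z ≈ GM/(Rc²). The white-dwarf mass and radius used here are illustrative assumptions, chosen to be roughly Sirius-B-like rather than taken from the text.

```python
# Schwarzschild redshift for a photon emitted at radius R and received at infinity:
#   1 + z = 1 / sqrt(1 - r_s/R),   r_s = 2*G*M/c^2,
# compared with the weak-field approximation z ~ G*M/(R*c^2).
import math

G, c = 6.674e-11, 2.998e8
M = 1.02 * 1.989e30          # ~1 solar-mass white dwarf, kg (assumed)
R = 5.8e6                    # stellar radius, m (assumed)

r_s = 2 * G * M / c**2
z_exact = 1.0 / math.sqrt(1.0 - r_s / R) - 1.0
z_weak = G * M / (R * c**2)

print(f"Schwarzschild radius: {r_s/1e3:.2f} km")
print(f"z (exact)      = {z_exact:.3e}")   # a few times 1e-4
print(f"z (weak field) = {z_weak:.3e}")    # nearly identical this far outside r_s
print(f"velocity equivalent ~ {z_exact*c/1e3:.1f} km/s")   # tens of km/s, white-dwarf scale
```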
When the photon is emitted at a distance equal to the Schwarzschild radius, the redshift will be infinitely large, and it will not escape to any finite distance from the Schwarzschild sphere. When the photon is emitted at an infinitely large distance, there is no redshift. Newtonian limit In the Newtonian limit, i.e. when is sufficiently large compared to the Schwarzschild radius , the redshift can be approximated as where is the gravitational acceleration at . For Earth's surface with respect to infinity, z is approximately (the equivalent of a 0.2 m/s radial Doppler shift); for the Moon it is approximately (about 1 cm/s). The value for the surface of the Sun is about , corresponding to 0.64 km/s. (For non-relativistic velocities, the radial Doppler equivalent velocity can be approximated by multiplying z by the speed of light.) The z-value can be expressed succinctly in terms of the escape velocity at , since the gravitational potential is equal to half the square of the escape velocity, thus: where is the escape velocity at . It can also be related to the circular orbit velocity at , which equals , thus . For example, the gravitational blueshift of distant starlight due to the Sun's gravity, which the Earth is orbiting at about 30 km/s, would be approximately 1 × 10−8 or the equivalent of a 3 m/s radial Doppler shift. For an object in a (circular) orbit, the gravitational redshift is of comparable magnitude to the transverse Doppler effect, where , while both are much smaller than the radial Doppler effect, for which . Prediction of the Newtonian limit using the properties of photons The formula for the gravitational redshift in the Newtonian limit can also be derived using the properties of a photon: In a gravitational field a particle of mass and velocity changes its energy according to: . For a massless photon described by its energy and momentum this equation becomes, after dividing by the Planck constant : Inserting the gravitational field of a spherical body of mass within the distance and the wave vector of a photon leaving the gravitational field in the radial direction, the energy equation becomes Using an ordinary differential equation which is only dependent on the radial distance is obtained: For a photon starting at the surface of a spherical body with a radius and a frequency , the analytical solution is: At a large distance from the body, an observer measures the frequency : Therefore, the redshift is: In the linear approximation, the Newtonian limit for the gravitational redshift of general relativity is obtained. Experimental verification Astronomical observations A number of experimenters initially claimed to have identified the effect using astronomical measurements, and the effect was considered to have been finally identified in the spectral lines of the star Sirius B by W.S. Adams in 1925. However, measurements by Adams have been criticized as being too low, and these observations are now considered to be measurements of spectra that are unusable because of scattered light from the primary, Sirius A. The first accurate measurement of the gravitational redshift of a white dwarf was done by Popper in 1954, measuring a 21 km/s gravitational redshift of 40 Eridani B. The redshift of Sirius B was finally measured by Greenstein et al. in 1971, obtaining a value of 89±16 km/s for the gravitational redshift, with more accurate measurements by the Hubble Space Telescope showing 80.4±4.8 km/s. James W.
Brault, a graduate student of Robert Dicke at Princeton University, measured the gravitational redshift of the sun using optical methods in 1962. In 2020, a team of scientists published the most accurate measurement of the solar gravitational redshift so far, made by analyzing Fe spectral lines in sunlight reflected by the Moon; their measurement of a mean global 638 ± 6 m/s lineshift is in agreement with the theoretical value of 633.1 m/s. Measuring the solar redshift is complicated by the Doppler shift caused by the motion of the Sun's surface, which is of similar magnitude as the gravitational effect. In 2011, the group of Radek Wojtak of the Niels Bohr Institute at the University of Copenhagen collected data from 8000 galaxy clusters and found that the light coming from the cluster centers tended to be red-shifted compared to the cluster edges, confirming the energy loss due to gravity. In 2018, the star S2 made its closest approach to Sgr A*, the 4-million solar mass supermassive black hole at the centre of the Milky Way, reaching 7650 km/s or about 2.5% of the speed of light while passing the black hole at a distance of just 120 AU, or 1400 Schwarzschild radii. Independent analyses by the GRAVITY collaboration (led by Reinhard Genzel) and the KECK/UCLA Galactic Center Group (led by Andrea Ghez) revealed a combined transverse Doppler and gravitational redshift up to 200 km/s/c, in agreement with general relativity predictions. In 2021, Mediavilla (IAC, Spain) & Jiménez-Vicente (UGR, Spain) were able to use measurements of the gravitational redshift in quasars up to cosmological redshift of to confirm the predictions of Einstein's equivalence principle and the lack of cosmological evolution within 13%. In 2024, Padilla et al. have estimated the gravitational redshifts of supermassive black holes (SMBH) in eight thousand quasars and one hundred Seyfert type 1 galaxies from the full width at half maximum (FWHM) of their emission lines, finding , compatible with SMBHs of ~ 1 billion solar masses and broadline regions of ~ 1 parsec radius. This same gravitational redshift was directly measured by these authors in the SAMI sample of LINER galaxies, using the redshift differences between lines emitted in central and outer regions. Terrestrial tests The effect is now considered to have been definitively verified by the experiments of Pound, Rebka and Snider between 1959 and 1965. The Pound–Rebka experiment of 1959 measured the gravitational redshift in spectral lines using a terrestrial 57Fe gamma source over a vertical height of 22.5 metres. This paper was the first determination of the gravitational redshift which used measurements of the change in wavelength of gamma-ray photons generated with the Mössbauer effect, which generates radiation with a very narrow line width. The accuracy of the gamma-ray measurements was typically 1%. An improved experiment was done by Pound and Snider in 1965, with an accuracy better than the 1% level. A very accurate gravitational redshift experiment was performed in 1976, where a hydrogen maser clock on a rocket was launched to a height of , and its rate compared with an identical clock on the ground. It tested the gravitational redshift to 0.007%. Later tests can be done with the Global Positioning System (GPS), which must account for the gravitational redshift in its timing system, and physicists have analyzed timing data from the GPS to confirm other tests. When the first satellite was launched, it showed the predicted shift of 38 microseconds per day. 
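The roughly 38 microseconds per day quoted for GPS combines the gravitational rate gain of the orbiting clock with the special-relativistic slowdown caused by its orbital speed. A rough cross-check is sketched below; the orbital parameters are nominal assumptions of the sketch, not values from the text.

```python
# Rough check of the ~38 microseconds/day GPS clock offset:
# gravitational blueshift (clock higher in the potential runs fast) plus
# special-relativistic time dilation (moving clock runs slow).
import math

G, c = 6.674e-11, 2.998e8
M_earth, R_earth = 5.972e24, 6.371e6      # kg, m
r_orbit = 26_560e3                        # GPS orbital radius, m (~20,200 km altitude)

# Gravitational: fractional rate gain of the satellite clock relative to the ground
d_grav = G * M_earth / c**2 * (1.0 / R_earth - 1.0 / r_orbit)

# Special relativity: fractional rate loss from orbital speed v = sqrt(GM/r)
v = math.sqrt(G * M_earth / r_orbit)
d_sr = -0.5 * v**2 / c**2

net = d_grav + d_sr
print(f"gravitational: {d_grav*86400*1e6:+.1f} microseconds/day")
print(f"velocity:      {d_sr*86400*1e6:+.1f} microseconds/day")
print(f"net:           {net*86400*1e6:+.1f} microseconds/day")   # ~ +38
```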
This rate of discrepancy is sufficient to substantially impair the function of GPS within hours if not accounted for. An excellent account of the role played by general relativity in the design of GPS can be found in Ashby 2003. In 2010, an experiment placed two aluminum-ion quantum clocks close to each other, but with the second elevated 33 cm compared to the first, making the gravitational redshift effect visible at everyday laboratory scales. In 2020, a group at the University of Tokyo measured the gravitational redshift of two strontium-87 optical lattice clocks. The measurement took place at Tokyo Skytree, where the clocks were separated by approximately 450 m and connected by telecom fibers. The gravitational redshift can be expressed as , where is the gravitational redshift, is the optical clock transition frequency, is the difference in gravitational potential, and denotes the deviation from general relativity. By Ramsey spectroscopy of the strontium-87 optical clock transition (429 THz, 698 nm) the group determined the gravitational redshift between the two optical clocks to be 21.18 Hz, corresponding to a z-value of approximately 5 × 10−14. Their measured value of , , is in agreement with recent measurements made with hydrogen masers in elliptical orbits. In October 2021, a group at JILA led by physicist Jun Ye reported a measurement of gravitational redshift at the submillimeter scale. The measurement was made on the 87Sr clock transition between the top and the bottom of a millimeter-tall ultracold cloud of 100,000 strontium atoms in an optical lattice. Early historical development of the theory The gravitational weakening of light from high-gravity stars was predicted by John Michell in 1783 and Pierre-Simon Laplace in 1796, using Isaac Newton's concept of light corpuscles (see: emission theory); they predicted that some stars would have a gravity so strong that light would not be able to escape. The effect of gravity on light was then explored by Johann Georg von Soldner (1801), who calculated the amount of deflection of a light ray by the Sun, arriving at the Newtonian answer, which is half the value predicted by general relativity. All of this early work assumed that light could slow down and fall, which is inconsistent with the modern understanding of light waves. Once it became accepted that light was an electromagnetic wave, it was clear that the frequency of light should not change from place to place, since waves from a source with a fixed frequency keep the same frequency everywhere. One way around this conclusion would be if time itself were altered, so that clocks at different points ran at different rates. This was precisely Einstein's conclusion in 1911. He considered an accelerating box, and noted that according to the special theory of relativity, the clock rate at the "bottom" of the box (the side away from the direction of acceleration) was slower than the clock rate at the "top" (the side toward the direction of acceleration). Indeed, in a frame moving (in direction) with velocity relative to the rest frame, the clocks at a nearby position are ahead by (to the first order); so an acceleration (that changes speed by per time ) makes clocks at the position ahead by , that is, tick at a rate The equivalence principle implies that this change in clock rate is the same whether the acceleration is that of an accelerated frame without gravitational effects, or caused by a gravitational field in a stationary frame.
Since the acceleration due to the gravitational potential is , we get so, in weak fields, the change in the clock rate is equal to . Since the light would be slowed down by gravitational time dilation (as seen by an outside observer), the regions with lower gravitational potential would act like a medium with a higher refractive index, causing light to deflect. This reasoning allowed Einstein in 1911 to reproduce the incorrect Newtonian value for the deflection of light. At the time he only considered the time-dilating manifestation of gravity, which is the dominating contribution at non-relativistic speeds; however, relativistic objects travel through space by an amount comparable to their travel through time, so purely spatial curvature becomes just as important. After constructing the full theory of general relativity, Einstein in 1915 solved the full post-Newtonian approximation for the Sun's gravity and calculated the correct amount of light deflection, double the Newtonian value. Einstein's prediction was confirmed by many experiments, starting with Arthur Eddington's 1919 solar eclipse expedition. The changing rates of clocks allowed Einstein to conclude that light waves change frequency as they move, and the frequency/energy relationship for photons allowed him to see that this was best interpreted as the effect of the gravitational field on the mass–energy of the photon. To calculate the changes in frequency in a nearly static gravitational field, only the time component of the metric tensor is important, and the lowest-order approximation is accurate enough for ordinary stars and planets, which are much bigger than their Schwarzschild radius. See also Tests of general relativity Equivalence principle Gravitational time dilation Redshift (redshifting of gravitational waves due to speed or cosmic expansion) Citations References Primary sources Albert Einstein, "Relativity: the Special and General Theory". (@Project Gutenberg). Other sources Albert Einstein Effects of gravity
0.778506
0.991979
0.772262
Relativistic electromagnetism
Relativistic electromagnetism is a physical phenomenon explained in electromagnetic field theory due to Coulomb's law and Lorentz transformations. Electromechanics After Maxwell proposed the differential equation model of the electromagnetic field in 1873, the mechanism of action of fields came into question, for instance in Kelvin's master class held at Johns Hopkins University in 1884 and commemorated a century later. The requirement that the equations remain consistent when viewed from various moving observers led to special relativity, a geometric theory of 4-space where intermediation is by light and radiation. The spacetime geometry provided a context for technical description of electric technology, especially generators, motors, and lighting at first. The Coulomb force was generalized to the Lorentz force. For example, with this model transmission lines and power grids were developed and radio frequency communication explored. An effort to mount a full-fledged electromechanics on a relativistic basis is seen in the work of Leigh Page, from the project outline in 1912 to his textbook Electrodynamics (1940). The interplay (according to the differential equations) of electric and magnetic fields as viewed by moving observers is examined. What is charge density in electrostatics becomes proper charge density and generates a magnetic field for a moving observer. A revival of interest in this method for education and training of electrical and electronics engineers broke out in the 1960s after Richard Feynman’s textbook. Rosser’s book Classical Electromagnetism via Relativity was popular, as was Anthony French’s treatment in his textbook, which illustrated diagrammatically the proper charge density. One author proclaimed, "Maxwell — Out of Newton, Coulomb, and Einstein". The use of retarded potentials to describe electromagnetic fields from source-charges is an expression of relativistic electromagnetism. Principle The question of how an electric field in one inertial frame of reference looks in different reference frames moving with respect to the first is crucial to understanding fields created by moving sources. In the special case considered here, the sources that create the field are at rest with respect to one of the reference frames. Given the electric field in the frame where the sources are at rest, one can ask: what is the electric field in some other frame? Knowing the electric field at some point (in space and time) in the rest frame of the sources, and knowing the relative velocity of the two frames, provides all the information needed to calculate the electric field at the same point in the other frame. In other words, the electric field in the other frame does not depend on the particular distribution of the source charges, only on the local value of the electric field in the first frame at that point. Thus, the electric field is a complete representation of the influence of the far-away charges. Alternatively, introductory treatments of magnetism introduce the Biot–Savart law, which describes the magnetic field associated with an electric current. An observer at rest with respect to a system of static, free charges will see no magnetic field. However, a moving observer looking at the same set of charges does perceive a current, and thus a magnetic field. That is, the magnetic field is simply the electric field, as seen in a moving coordinate system. Redundancy The title of this article is redundant since all mathematical theories of electromagnetism are relativistic.
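As an illustration of the transformation principle described in the Principle section above (a purely electric field in the sources' rest frame appears as a mixture of electric and magnetic fields to a moving observer), the sketch below applies the standard field-transformation rules for a boost along x. The field values and boost speed are arbitrary illustrative choices.

```python
# Standard Lorentz transformation of E and B for a boost with speed v along x (SI units).
# In the charges' rest frame there is only an electric field; the boosted observer
# sees a magnetic field as well. Field values are arbitrary illustrative numbers.
import math

c = 2.998e8
v = 0.6 * c
gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)

E = (0.0, 1.0e3, 0.0)   # V/m: purely transverse electric field in the rest frame
B = (0.0, 0.0, 0.0)     # no magnetic field in the rest frame

# Components parallel to the boost are unchanged; transverse components mix.
Ex_p, Bx_p = E[0], B[0]
Ey_p = gamma * (E[1] - v * B[2])
Ez_p = gamma * (E[2] + v * B[1])
By_p = gamma * (B[1] + v * E[2] / c**2)
Bz_p = gamma * (B[2] - v * E[1] / c**2)

print("E' =", (Ex_p, Ey_p, Ez_p))   # transverse E is enhanced by gamma
print("B' =", (Bx_p, By_p, Bz_p))   # a nonzero Bz' appears, about -gamma*v*Ey/c^2
```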
Indeed, as Einstein wrote, "The special theory of relativity ... was simply a systematic development of the electrodynamics of Clerk Maxwell and Lorentz". Combination of spatial and temporal variables in Maxwell's theory required admission of a four-manifold. Finite light speed and other constant motion lines were described with analytic geometry. Orthogonality of electric and magnetic vector fields in space was extended by hyperbolic orthogonality for the temporal factor. When Ludwik Silberstein published his textbook The Theory of Relativity (1914) he related the new geometry to electromagnetism. Faraday's law of induction was suggestive to Einstein when he wrote in 1905 about the "reciprocal electrodynamic action of a magnet and a conductor". Nevertheless, the aspiration, reflected in references for this article, is for an analytic geometry of spacetime and charges providing a deductive route to forces and currents in practice. Such a royal route to electromagnetic understanding may be lacking, but a path has been opened with differential geometry: The tangent space at an event in spacetime is a four-dimensional vector space, operable by linear transformations. Symmetries observed by electricians find expression in linear algebra and differential geometry. Using exterior algebra to construct a 2-form F  from electric and magnetic fields, and the implied dual 2-form *F, the equations dF = 0 and d*F = J (current) express Maxwell's theory with a differential form approach. See also Covariant formulation of classical electromagnetism Special relativity Liénard–Wiechert potential Moving magnet and conductor problem Wheeler–Feynman absorber theory Paradox of a charge in a gravitational field Notes and references Electromagnetism Electromagnetism
0.792733
0.974161
0.77225
Environment (systems)
In science and engineering, a system is the part of the universe that is being studied, while the environment is the remainder of the universe that lies outside the boundaries of the system. It is also known as the surroundings or neighbourhood, and in thermodynamics, as the reservoir. Depending on the type of system, it may interact with the environment by exchanging mass, energy (including heat and work), linear momentum, angular momentum, electric charge, or other conserved properties. In some disciplines, such as information theory, information may also be exchanged. The environment is ignored in analysis of the system, except in regard to these interactions. See also Bioenergetic systems – energy system Earth system science Environment (biophysical) Environmental Management System Thermodynamic system External links Geography of transport systems people.hofstra.edu Environmental Management Systems epa.gov Earth's Environmental Systems eesc.Columbia.edu Environmental Education Thermodynamic systems
0.776935
0.993873
0.772174
Radiesthesia
Radiesthesia describes a claimed physical ability to detect radiation emitted by a person, animal, object or geographical feature. One of its practitioners, J. Cecil Maby, defined it as "The faculty and study of certain reflexive physical responses of living tissue to various radiations ... resulting in displacement currents and other inductive effects in living tissues." He distinguished it critically from the psychic facility of divination. Despite this distinction, there is no scientific evidence for the existence of the phenomenon, and it is classed by the mainstream as pseudoscience. Definitions One definition is "sensitivity to radiations of all kinds emanating from living beings, inanimate objects, mineral ores, water and even photographs". The word derives from the Latin root radi-, referring to beams of light and radiation, and aesthesia, referring to sensory perception. The term is a neologism created by the French Catholic priest Alexis Timothée Bouly, who was a celebrated dowsing practitioner in the early part of the 20th century. Bouly claimed to be able to detect unexploded ordnance from WW1 and also to detect molecular changes in laboratory experiments. He was the founder at Lille in 1929 of the Association of the Friends of Radiesthesia. Claims Practitioners may claim to be able to detect the emitted radiation through use of their hands or, more typically, with dowsing rods or a pendulum. Teleradiesthesia or Tele-radiesthesia describes this sensitivity to radiation but without the need to be in physical proximity to the subject. Typically a practitioner will use an instrument such as a pendulum to perform analysis based on a map or photograph. The practical application of radiesthesia, i.e. dowsing, is directed toward providing individual and environmental benefits, such as: diagnosis of infirmities detection of underground water detection of underground mineral sources detection of the Earth's telluric currents and magnetic fields location of lost objects location of missing persons or livestock A distinction may be made between the application of radiesthesic techniques to the detection of physical phenomena (e.g. water, minerals, objects, changed cell condition) and the use of these techniques for analysis of supposed subtle energy fields or the ‘aura’ of an individual. Researchers have cited an involuntary bodily reaction, that is, the ideomotor phenomenon, as the initiator of the movement seen occurring in instruments such as dowsing rods or a pendulum. It is this reactive movement which typically acts as the indicator of the location or the state change of the subject or object under investigation. See also References Further reading F.A. Archdale Elementary Radiesthesia and the Use of the Pendulum, 1950 Marc Aurice, Le Grand Livre de la radiesthésie, 2008 éditions Trédaniel Gabriell Blackburn, Science and Art of the Pendulum: A Complete Course in Radiesthesia, 1984 pub. Idylwild C.L. Cooper-Hunt, Radiesthetic Analysis, 1996 pub. Health Research Books Bruce Copen, Dowsing from Maps, Tele-radiesthesia, 1975 pub. Academic Publications Emma Decourtay, Initiation à la radiesthésie, 2004 éditions Cristal Gilbert Degueldre, La Radiesthésie, cet instinct originel, 1985 éditions Florikosse asbl, Verviers – Belgique Karl Maximilan Fischer, Radiästhesie und Geopathie – Theorie und empirische Untersuchungen, 1989 Böhlau in Wien Christopher Freeland, Radiesthesia I – Method and Training for the Modern Dowser, 2020 pub. Completelynovel Tom Graves, Pendel und Wünschelrute, Radiästhesie, 1999 Jane E.
Hartman, Radionics and Radiesthesia, 1999 pub Aquarian Systems Ray Hyman. How People Are Fooled by Ideomotor Action. Adolphe Landspurg, Comment devenir sourcier et géobiologue (La pratique de la radiesthésie vibratoire), 2003 éditions Dangles Hartmut Lüdeling: Handbuch der Radiaesthesie – Schwerpunkt Grifflängentechnik. 2006 Drachen-Verlag Marguerite Maury, How to Dowse – Experimental And Practical Radiesthesia 1953, pub. G. Bell and Sons; 2008 edition Alexis Mermet, Principles and Practice of Radiesthesia: A textbook for Practitioners and Students, 1959; 1991 edition Michel Moine, La radiestesia – la otra sciencia, 1974 Helmut Müller, Radiestesia: Manual Práctico, 1991 Editorial De Vecchi Otto Prokop, Wolf Wimmer: Wünschelrute, Erdstrahlen, Radiästhesie. Die okkulten Strahlenfühligkeitslehren im Lichte der Wissenschaft. 1985 Thieme Jessie Toler Kingsley Tarpey, Healing by radiesthesia, 1955, pub. Omega Press Henry Tomlinson, The Divination of Disease: A Study in Radiesthesia, 1953 pub. Health Science Press S.W. Tromp, Psychical Physics, a Scientific Analysis of Dowsing, Radiesthesia and Kindred Phenomena, 1949 pub. Elzevier, New York Herbet Weaver, Divining, the Primary Sense: Unfamiliar Radiation in Nature, Art and Science, 1978 pub, Routledge & Kegan Paul V. D. Wethered, A Radiesthetic Approach to Health and Homoeopathy, or Health and the Pendulum, 1950, pub. British Society of Dowsers V. D. Wethered, An Introduction to Medical Radiesthesia and Radionics, 1957 pub. C.W. Daniel Company External links Association des Amis de la Radiesthesie Associazioni Italiana Radiestesisti Pseudoscience
0.783682
0.985288
0.772153
Lagrangian (field theory)
Lagrangian field theory is a formalism in classical field theory. It is the field-theoretic analogue of Lagrangian mechanics. Lagrangian mechanics is used to analyze the motion of a system of discrete particles each with a finite number of degrees of freedom. Lagrangian field theory applies to continua and fields, which have an infinite number of degrees of freedom. One motivation for the development of the Lagrangian formalism on fields, and more generally, for classical field theory, is to provide a clear mathematical foundation for quantum field theory, which is infamously beset by formal difficulties that make it unacceptable as a mathematical theory. The Lagrangians presented here are identical to their quantum equivalents, but, in treating the fields as classical fields, instead of being quantized, one can provide definitions and obtain solutions with properties compatible with the conventional formal approach to the mathematics of partial differential equations. This enables the formulation of solutions on spaces with well-characterized properties, such as Sobolev spaces. It enables various theorems to be provided, ranging from proofs of existence to the uniform convergence of formal series to the general settings of potential theory. In addition, insight and clarity is obtained by generalizations to Riemannian manifolds and fiber bundles, allowing the geometric structure to be clearly discerned and disentangled from the corresponding equations of motion. A clearer view of the geometric structure has in turn allowed highly abstract theorems from geometry to be used to gain insight, ranging from the Chern–Gauss–Bonnet theorem and the Riemann–Roch theorem to the Atiyah–Singer index theorem and Chern–Simons theory. Overview In field theory, the independent variable is replaced by an event in spacetime , or more generally still by a point s on a Riemannian manifold. The dependent variables are replaced by the value of a field at that point in spacetime so that the equations of motion are obtained by means of an action principle, written as: where the action, , is a functional of the dependent variables , their derivatives and s itself where the brackets denote ; and s = {sα} denotes the set of n independent variables of the system, including the time variable, and is indexed by α = 1, 2, 3, ..., n. The calligraphic typeface, , is used to denote the density, and is the volume form of the field function, i.e., the measure of the domain of the field function. In mathematical formulations, it is common to express the Lagrangian as a function on a fiber bundle, wherein the Euler–Lagrange equations can be interpreted as specifying the geodesics on the fiber bundle. Abraham and Marsden's textbook provided the first comprehensive description of classical mechanics in terms of modern geometrical ideas, i.e., in terms of tangent manifolds, symplectic manifolds and contact geometry. Bleecker's textbook provided a comprehensive presentation of field theories in physics in terms of gauge invariant fiber bundles. Such formulations were known or suspected long before. Jost continues with a geometric presentation, clarifying the relation between Hamiltonian and Lagrangian forms, describing spin manifolds from first principles, etc. Current research focuses on non-rigid affine structures, (sometimes called "quantum structures") wherein one replaces occurrences of vector spaces by tensor algebras. 
This research is motivated by the breakthrough understanding of quantum groups as affine Lie algebras (Lie groups are, in a sense "rigid", as they are determined by their Lie algebra. When reformulated on a tensor algebra, they become "floppy", having infinite degrees of freedom; see e.g., Virasoro algebra.) Definitions In Lagrangian field theory, the Lagrangian as a function of generalized coordinates is replaced by a Lagrangian density, a function of the fields in the system and their derivatives, and possibly the space and time coordinates themselves. In field theory, the independent variable t is replaced by an event in spacetime or still more generally by a point s on a manifold. Often, a "Lagrangian density" is simply referred to as a "Lagrangian". Scalar fields For one scalar field , the Lagrangian density will take the form: For many scalar fields In mathematical formulations, the scalar fields are understood to be coordinates on a fiber bundle, and the derivatives of the field are understood to be sections of the jet bundle. Vector fields, tensor fields, spinor fields The above can be generalized for vector fields, tensor fields, and spinor fields. In physics, fermions are described by spinor fields. Bosons are described by tensor fields, which include scalar and vector fields as special cases. For example, if there are real-valued scalar fields, , then the field manifold is . If the field is a real vector field, then the field manifold is isomorphic to . Action The time integral of the Lagrangian is called the action denoted by . In field theory, a distinction is occasionally made between the Lagrangian , of which the time integral is the action and the Lagrangian density , which one integrates over all spacetime to get the action: The spatial volume integral of the Lagrangian density is the Lagrangian; in 3D, The action is often referred to as the "action functional", in that it is a function of the fields (and their derivatives). Volume form In the presence of gravity or when using general curvilinear coordinates, the Lagrangian density will include a factor of . This ensures that the action is invariant under general coordinate transformations. In mathematical literature, spacetime is taken to be a Riemannian manifold and the integral then becomes the volume form Here, the is the wedge product and is the square root of the determinant of the metric tensor on . For flat spacetime (e.g., Minkowski spacetime), the unit volume is one, i.e. and so it is commonly omitted, when discussing field theory in flat spacetime. Likewise, the use of the wedge-product symbols offers no additional insight over the ordinary concept of a volume in multivariate calculus, and so these are likewise dropped. Some older textbooks, e.g., Landau and Lifschitz write for the volume form, since the minus sign is appropriate for metric tensors with signature (+−−−) or (−+++) (since the determinant is negative, in either case). When discussing field theory on general Riemannian manifolds, the volume form is usually written in the abbreviated notation where is the Hodge star. That is, and so Not infrequently, the notation above is considered to be entirely superfluous, and is frequently seen. Do not be misled: the volume form is implicitly present in the integral above, even if it is not explicitly written. Euler–Lagrange equations The Euler–Lagrange equations describe the geodesic flow of the field as a function of time. 
Taking the variation with respect to , one obtains Solving, with respect to the boundary conditions, one obtains the Euler–Lagrange equations: Examples A large variety of physical systems have been formulated in terms of Lagrangians over fields. Below is a sampling of some of the most common ones found in physics textbooks on field theory. Newtonian gravity The Lagrangian density for Newtonian gravity is: where is the gravitational potential, is the mass density, and in m3·kg−1·s−2 is the gravitational constant. The density has units of J·m−3. Here the interaction term involves a continuous mass density ρ in kg·m−3. This is necessary because using a point source for a field would result in mathematical difficulties. This Lagrangian can be written in the form of , with the providing a kinetic term, and the interaction the potential term. See also Nordström's theory of gravitation for how this could be modified to deal with changes over time. This form is reprised in the next example of a scalar field theory. The variation of the integral with respect to is: After integrating by parts, discarding the total integral, and dividing out by the formula becomes: which is equivalent to: which yields Gauss's law for gravity. Scalar field theory The Lagrangian for a scalar field moving in a potential can be written as It is not at all an accident that the scalar theory resembles the undergraduate textbook Lagrangian for the kinetic term of a free point particle written as . The scalar theory is the field-theory generalization of a particle moving in a potential. When the is the Mexican hat potential, the resulting fields are termed the Higgs fields. Sigma model Lagrangian The sigma model describes the motion of a scalar point particle constrained to move on a Riemannian manifold, such as a circle or a sphere. It generalizes the case of scalar and vector fields, that is, fields constrained to move on a flat manifold. The Lagrangian is commonly written in one of three equivalent forms: where the is the differential. An equivalent expression is with the Riemannian metric on the manifold of the field; i.e. the fields are just local coordinates on the coordinate chart of the manifold. A third common form is with and , the Lie group SU(N). This group can be replaced by any Lie group, or, more generally, by a symmetric space. The trace is just the Killing form in hiding; the Killing form provides a quadratic form on the field manifold, the lagrangian is then just the pullback of this form. Alternately, the Lagrangian can also be seen as the pullback of the Maurer–Cartan form to the base spacetime. In general, sigma models exhibit topological soliton solutions. The most famous and well-studied of these is the Skyrmion, which serves as a model of the nucleon that has withstood the test of time. Electromagnetism in special relativity Consider a point particle, a charged particle, interacting with the electromagnetic field. The interaction terms are replaced by terms involving a continuous charge density ρ in A·s·m−3 and current density in A·m−2. The resulting Lagrangian density for the electromagnetic field is: Varying this with respect to , we get which yields Gauss' law. Varying instead with respect to , we get which yields Ampère's law. Using tensor notation, we can write all this more compactly. The term is actually the inner product of two four-vectors. We package the charge density into the current 4-vector and the potential into the potential 4-vector. 
These two new vectors are We can then write the interaction term as Additionally, we can package the E and B fields into what is known as the electromagnetic tensor . We define this tensor as The term we are looking out for turns out to be We have made use of the Minkowski metric to raise the indices on the EMF tensor. In this notation, Maxwell's equations are where ε is the Levi-Civita tensor. So the Lagrange density for electromagnetism in special relativity written in terms of Lorentz vectors and tensors is In this notation it is apparent that classical electromagnetism is a Lorentz-invariant theory. By the equivalence principle, it becomes simple to extend the notion of electromagnetism to curved spacetime. Electromagnetism and the Yang–Mills equations Using differential forms, the electromagnetic action S in vacuum on a (pseudo-) Riemannian manifold can be written (using natural units, ) as Here, A stands for the electromagnetic potential 1-form, J is the current 1-form, is the field strength 2-form and the star denotes the Hodge star operator. This is exactly the same Lagrangian as in the section above, except that the treatment here is coordinate-free; expanding the integrand into a basis yields the identical, lengthy expression. Note that with forms, an additional integration measure is not necessary because forms have coordinate differentials built in. Variation of the action leads to These are Maxwell's equations for the electromagnetic potential. Substituting immediately yields the equation for the fields, because is an exact form. The A field can be understood to be the affine connection on a U(1)-fiber bundle. That is, classical electrodynamics, all of its effects and equations, can be completely understood in terms of a circle bundle over Minkowski spacetime. The Yang–Mills equations can be written in exactly the same form as above, by replacing the Lie group U(1) of electromagnetism by an arbitrary Lie group. In the Standard model, it is conventionally taken to be although the general case is of general interest. In all cases, there is no need for any quantization to be performed. Although the Yang–Mills equations are historically rooted in quantum field theory, the above equations are purely classical. Chern–Simons functional In the same vein as the above, one can consider the action in one dimension less, i.e. in a contact geometry setting. This gives the Chern–Simons functional. It is written as Chern–Simons theory was deeply explored in physics, as a toy model for a broad range of geometric phenomena that one might expect to find in a grand unified theory. Ginzburg–Landau Lagrangian The Lagrangian density for Ginzburg–Landau theory combines the Lagrangian for the scalar field theory with the Lagrangian for the Yang–Mills action. It may be written as: where is a section of a vector bundle with fiber . The corresponds to the order parameter in a superconductor; equivalently, it corresponds to the Higgs field, after noting that the second term is the famous "Sombrero hat" potential. The field is the (non-Abelian) gauge field, i.e. the Yang–Mills field and is its field-strength. The Euler–Lagrange equations for the Ginzburg–Landau functional are the Yang–Mills equations and where is the Hodge star operator, i.e. the fully antisymmetric tensor. These equations are closely related to the Yang–Mills–Higgs equations. Another closely related Lagrangian is found in Seiberg–Witten theory. 
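Since the displayed equations of this section did not survive extraction, the standard textbook forms of the electromagnetic Lagrangian density and the field equations referred to above are reproduced here for reference; exact signs and factors depend on the metric signature and unit conventions.

```latex
% Reconstruction of the standard expressions described above; signs and factors
% depend on the chosen metric signature and unit system.
\[
  \mathcal{L}_{\text{EM}} = -\tfrac{1}{4}\,F_{\mu\nu}F^{\mu\nu} - J^{\mu}A_{\mu},
  \qquad
  F_{\mu\nu} = \partial_{\mu}A_{\nu} - \partial_{\nu}A_{\mu} .
\]
% Varying with respect to A gives the inhomogeneous Maxwell equations;
% the homogeneous pair follows from the definition of F.
\[
  \partial_{\mu}F^{\mu\nu} = J^{\nu},
  \qquad
  \partial_{[\lambda}F_{\mu\nu]} = 0 .
\]
% In the coordinate-free language used later in the section, with F = dA:
\[
  \mathrm{d}F = 0, \qquad \mathrm{d}{\star}F = {\star}J .
\]
```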
Dirac Lagrangian The Lagrangian density for a Dirac field is: where is a Dirac spinor, is its Dirac adjoint, and is Feynman slash notation for . There is no particular need to focus on Dirac spinors in the classical theory. The Weyl spinors provide a more general foundation; they can be constructed directly from the Clifford algebra of spacetime; the construction works in any number of dimensions, and the Dirac spinors appear as a special case. Weyl spinors have the additional advantage that they can be used in a vielbein for the metric on a Riemannian manifold; this enables the concept of a spin structure, which, roughly speaking, is a way of formulating spinors consistently in a curved spacetime. Quantum electrodynamic Lagrangian The Lagrangian density for QED combines the Lagrangian for the Dirac field together with the Lagrangian for electrodynamics in a gauge-invariant way. It is: where is the electromagnetic tensor, D is the gauge covariant derivative, and is Feynman notation for with where is the electromagnetic four-potential. Although the word "quantum" appears in the above, this is a historical artifact. The definition of the Dirac field requires no quantization whatsoever, it can be written as a purely classical field of anti-commuting Weyl spinors constructed from first principles from a Clifford algebra. The full gauge-invariant classical formulation is given in Bleecker. Quantum chromodynamic Lagrangian The Lagrangian density for quantum chromodynamics combines the Lagrangian for one or more massive Dirac spinors with the Lagrangian for the Yang–Mills action, which describes the dynamics of a gauge field; the combined Lagrangian is gauge invariant. It may be written as: where D is the QCD gauge covariant derivative, n = 1, 2, ...6 counts the quark types, and is the gluon field strength tensor. As for the electrodynamics case above, the appearance of the word "quantum" above only acknowledges its historical development. The Lagrangian and its gauge invariance can be formulated and treated in a purely classical fashion. Einstein gravity The Lagrange density for general relativity in the presence of matter fields is where is the cosmological constant, is the curvature scalar, which is the Ricci tensor contracted with the metric tensor, and the Ricci tensor is the Riemann tensor contracted with a Kronecker delta. The integral of is known as the Einstein–Hilbert action. The Riemann tensor is the tidal force tensor, and is constructed out of Christoffel symbols and derivatives of Christoffel symbols, which define the metric connection on spacetime. The gravitational field itself was historically ascribed to the metric tensor; the modern view is that the connection is "more fundamental". This is due to the understanding that one can write connections with non-zero torsion. These alter the metric without altering the geometry one bit. As to the actual "direction in which gravity points" (e.g. on the surface of the Earth, it points down), this comes from the Riemann tensor: it is the thing that describes the "gravitational force field" that moving bodies feel and react to. (This last statement must be qualified: there is no "force field" per se; moving bodies follow geodesics on the manifold described by the connection. They move in a "straight line".) The Lagrangian for general relativity can also be written in a form that makes it manifestly similar to the Yang–Mills equations. This is called the Einstein–Yang–Mills action principle. 
This is done by noting that most of differential geometry works "just fine" on bundles with an affine connection and arbitrary Lie group. Then, plugging in SO(3,1) for that symmetry group, i.e. for the frame fields, one obtains the equations above. Substituting this Lagrangian into the Euler–Lagrange equation and taking the metric tensor as the field, we obtain the Einstein field equations is the energy momentum tensor and is defined by where is the determinant of the metric tensor when regarded as a matrix. Generally, in general relativity, the integration measure of the action of Lagrange density is . This makes the integral coordinate independent, as the root of the metric determinant is equivalent to the Jacobian determinant. The minus sign is a consequence of the metric signature (the determinant by itself is negative). This is an example of the volume form, previously discussed, becoming manifest in non-flat spacetime. Electromagnetism in general relativity The Lagrange density of electromagnetism in general relativity also contains the Einstein–Hilbert action from above. The pure electromagnetic Lagrangian is precisely a matter Lagrangian . The Lagrangian is This Lagrangian is obtained by simply replacing the Minkowski metric in the above flat Lagrangian with a more general (possibly curved) metric . We can generate the Einstein Field Equations in the presence of an EM field using this lagrangian. The energy-momentum tensor is It can be shown that this energy momentum tensor is traceless, i.e. that If we take the trace of both sides of the Einstein Field Equations, we obtain So the tracelessness of the energy momentum tensor implies that the curvature scalar in an electromagnetic field vanishes. The Einstein equations are then Additionally, Maxwell's equations are where is the covariant derivative. For free space, we can set the current tensor equal to zero, . Solving both Einstein and Maxwell's equations around a spherically symmetric mass distribution in free space leads to the Reissner–Nordström charged black hole, with the defining line element (written in natural units and with charge ): One possible way of unifying the electromagnetic and gravitational Lagrangians (by using a fifth dimension) is given by Kaluza–Klein theory. Effectively, one constructs an affine bundle, just as for the Yang–Mills equations given earlier, and then considers the action separately on the 4-dimensional and the 1-dimensional parts. Such factorizations, such as the fact that the 7-sphere can be written as a product of the 4-sphere and the 3-sphere, or that the 11-sphere is a product of the 4-sphere and the 7-sphere, accounted for much of the early excitement that a theory of everything had been found. Unfortunately, the 7-sphere proved not large enough to enclose all of the Standard model, dashing these hopes. Additional examples The BF model Lagrangian, short for "Background Field", describes a system with trivial dynamics, when written on a flat spacetime manifold. On a topologically non-trivial spacetime, the system will have non-trivial classical solutions, which may be interpreted as solitons or instantons. A variety of extensions exist, forming the foundations for topological field theories. 
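For reference, the standard forms of the gravitational action and field equations described above, again reconstructing the stripped displays; sign conventions (metric signature, definition of the Ricci tensor, sign of the source term) vary between texts.

```latex
% Einstein--Hilbert action with cosmological constant and matter, and the
% resulting field equations; conventions vary between texts.
\[
  S = \frac{1}{2\kappa}\int \left( R - 2\Lambda \right)\sqrt{-g}\,\mathrm{d}^{4}x
      \;+\; S_{\text{matter}},
  \qquad \kappa = \frac{8\pi G}{c^{4}},
\]
\[
  R_{\mu\nu} - \tfrac{1}{2}R\,g_{\mu\nu} + \Lambda\,g_{\mu\nu} = \kappa\,T_{\mu\nu},
  \qquad
  T_{\mu\nu} = -\frac{2}{\sqrt{-g}}\,\frac{\delta S_{\text{matter}}}{\delta g^{\mu\nu}} .
\]
```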
See also Calculus of variations Covariant classical field theory Euler–Lagrange equation Functional derivative Functional integral Generalized coordinates Hamiltonian mechanics Hamiltonian field theory Kinetic term Lagrangian and Eulerian coordinates Lagrangian mechanics Lagrangian point Lagrangian system Noether's theorem Onsager–Machlup function Principle of least action Scalar field theory Notes Citations Mathematical physics Classical field theory Calculus of variations Quantum field theory
0.775729
0.995352
0.772124
Internal ballistics
Internal ballistics (also interior ballistics), a subfield of ballistics, is the study of the propulsion of a projectile. In guns, internal ballistics covers the time from the propellant's ignition until the projectile exits the gun barrel. The study of internal ballistics is important to designers and users of firearms of all types, from small-bore rifles and pistols to artillery. For rocket-propelled projectiles, internal ballistics covers the period during which a rocket motor is providing thrust. General concepts Interior ballistics can be considered in three time periods: Lock time - the time from sear release until the primer is struck Ignition time - the time from when the primer is struck until the projectile starts to move Barrel time - the time from when the projectile starts to move until it exits the barrel. The burning firearm propellant produces energy in the form of hot gases that raise the chamber pressure, which applies a force on the base of the projectile, causing it to accelerate. The chamber pressure depends on the amount of propellant that has burned, the temperature of the gases, and the volume of the chamber. The burn rate of the propellant depends on the chemical makeup and shape of the propellant grains. The temperature depends on the energy released and the heat loss to the sides of the barrel and chamber. As the projectile travels down the barrel, the volume the gas occupies behind the projectile increases. Some energy is lost in deforming the projectile and causing it to spin. There are also frictional losses between the projectile and the barrel. The projectile, as it travels down the barrel, compresses the air in front of it, which adds resistance to its forward motion. The breech and the barrel must resist the high-pressure gases without damage. Although the pressure initially rises to a high value, the pressure starts dropping when the projectile has traveled some distance down the barrel. Consequently, the muzzle end of the barrel does not need to be as strong as the chamber end. Mathematical models have been developed for these processes. The four general concepts which are calculated in interior ballistics are: Energy - released by the propellant Motion - the relation between the projectile acceleration and the pressure on its base. Burning rate - a function of the propellant surface area and an empirically derived burning rate coefficient which is unique to the propellant. Form function - a burning rate modifying coefficient that includes the shape of the propellant. History Internal ballistics was not scientifically based prior to the mid-1800s. Barrels and actions were built strong enough to survive a known overload (proof test). Muzzle velocity was surmised from the distance the projectile traveled. In the 1800s, test barrels began to be instrumented. Holes were drilled in the barrel and fitted with standardized steel pistons that compressed standardized copper cylinders when the firearm discharged. The reduction in the copper cylinder length was used as an indication of peak pressure, known as "Copper Units of Pressure", or "CUP", for high-pressure firearms. Similar standards were applied to firearms with lower peak pressures, typically common handguns, with test cylinder pellets made of more easily deformed lead cylinders, hence "Lead Units of Pressure", or "LUP". The measurement only indicated the maximum pressure that was reached at that point in the barrel.
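The pressure-force-acceleration chain described above can be illustrated with a deliberately crude lumped-parameter sketch. This is not any published interior-ballistics model: the burn law, the ideal-gas treatment, and every parameter value below are assumptions chosen for illustration, and heat loss, friction, engraving force and covolume are all ignored.

```python
# Toy lumped-parameter interior-ballistics sketch (illustrative only).
# Propellant burns at a rate proportional to pressure (a crude Vieille-type law),
# the gas is treated as ideal, and losses are ignored.

A = 3.0e-5          # bore cross-sectional area, m^2 (~6 mm bore)
m_proj = 0.004      # projectile mass, kg
m_charge = 0.0017   # propellant charge, kg
e_prop = 3.5e6      # usable energy per kg of propellant, J/kg (assumed)
V0 = 2.0e-6         # initial free chamber volume, m^3
barrel_len = 0.5    # projectile travel from chamber to muzzle, m
burn_k = 2.0e-8     # burn-rate constant, kg/(s*Pa) (assumed)
gamma = 1.25        # effective ratio of specific heats of the gas

dt = 1.0e-7
t, x, v = 0.0, 0.0, 0.0
burned, energy_in_gas = 0.0, 0.0

while x < barrel_len and t < 0.01:
    V = V0 + A * x                          # volume behind the projectile
    P = (gamma - 1.0) * energy_in_gas / V   # ideal-gas pressure from internal energy
    # burn propellant: rate grows with pressure (the floor stands in for primer ignition)
    dm = min(burn_k * max(P, 1e5) * dt, m_charge - burned)
    burned += dm
    # energy released by burning, minus work done pushing the projectile
    energy_in_gas += dm * e_prop - P * A * v * dt
    a = P * A / m_proj                      # Newton's second law on the projectile base
    v += a * dt
    x += v * dt
    t += dt

print(f"v ~ {v:.0f} m/s at x = {x*100:.0f} cm, t = {t*1e3:.2f} ms, "
      f"{burned/m_charge:.0%} of charge burned")
```

With these assumed numbers the sketch is intended to reproduce the qualitative behaviour described in the text: pressure rises sharply while the projectile is near the chamber and then falls as the volume behind the projectile grows.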
Piezoelectric strain gauges were introduced in the 1960s, allowing instantaneous pressures to be measured without destructive pressure ports. Instrumented projectiles that measure the pressure at the base of the projectile and its acceleration were developed by the Army Research Laboratory. Priming methods Methods of igniting the propellant evolved over time. A small hole (a touch hole) was drilled into the breech, into which a propellant was then poured, and an external flame or spark applied (see matchlock and flintlock). Percussion caps, and the primers in self-contained cartridges, detonate after mechanical deformation, igniting the propellant. Propellants Black powder Gunpowder (black powder) is a finely ground, pressed and granulated mechanical pyrotechnic mixture of sulfur, charcoal, and potassium nitrate or sodium nitrate. It can be produced in a range of grain sizes. The size and shape of the grains can increase or decrease the relative surface area, and change the burning rate significantly. The burning rate of black powder is relatively insensitive to pressure, meaning it will burn quickly and predictably even without confinement, making it also suitable for use as a low explosive. It has a very slow decomposition rate, and therefore a very low brisance. It is not, in the strictest sense of the term, an explosive, but a "deflagrant", as it does not detonate but decomposes by deflagration due to its subsonic mechanism of flame-front propagation. Nitrocellulose (single-base propellants) Nitrocellulose or "guncotton" is formed by the action of nitric acid on cellulose fibers. It is a highly combustible fibrous material that deflagrates rapidly when heat is applied. It also burns very cleanly, burning almost entirely to gaseous components at high temperatures with little smoke or solid residue. Gelatinised nitrocellulose is a plastic, which can be formed into cylinders, tubes, balls, or flakes known as single-base propellants. The size and shape of the propellant grains can increase or decrease the relative surface area, and change the burn rate significantly. Additives and coatings can be added to the propellant to further modify the burn rate. Normally, very fast powders are used for light-bullet or low-velocity pistols and shotguns, medium-rate powders for magnum pistols and light rifle rounds, and slow powders for large-bore heavy rifle rounds. Double-base propellants Nitroglycerin can be added to nitrocellulose to form "double-base propellants". Nitrocellulose desensitizes nitroglycerin to prevent detonation in propellant-sized grains (see dynamite), and the nitroglycerin gelatinises the nitrocellulose and increases the energy. Double-base powders burn faster than single-base powders of the same shape, though not as cleanly, and burn rate increases with nitroglycerin content. In artillery, Ballistite or Cordite has been used in the form of rods, tubes, slotted-tube, perforated-cylinder or multi-tubular grains, the geometry being chosen to provide the required burning characteristics. (Round balls or rods, for example, are "degressive-burning" because their production of gas decreases with their surface area as the balls or rods burn smaller; thin flakes are "neutral-burning," since they burn on their flat surfaces until the flake is completely consumed.
The longitudinally perforated or multi-perforated cylinders used in large, long-barreled rifles or cannon are "progressive-burning"; the burning surface increases as the inside diameter of the holes enlarges, giving sustained burning and a long, continuous push on the projectile to produce higher velocity without increasing the peak pressure unduly. Progressive-burning powder compensates somewhat for the pressure drop as the projectile accelerates down the bore and increases the volume behind it.) Solid propellants (caseless ammunition) "Caseless ammunition" incorporates propellant cast as a single solid grain, with the priming compound placed in a hollow at the base and the bullet attached to the front. Since the single propellant grain is so large (most smokeless powders have grain sizes around 1 mm, but a caseless grain will be perhaps 7 mm diameter and 15 mm long), the relative burn rate must be much higher. To reach this rate of burning, caseless propellants often use moderated explosives, such as RDX. The major advantages of a successful caseless round would be elimination of the need to extract and eject the spent cartridge case, permitting higher rates of fire and a simpler mechanism, and also reduced ammunition weight by eliminating the weight (and cost) of the brass or steel case. While there is at least one experimental military rifle (the H&K G11), and one commercial rifle (the Voere VEC-91), that use caseless rounds, they have met with little success. One other commercial rifle was the Daisy VL rifle, made by the Daisy Air Rifle Co. and chambered for .22 caliber caseless ammunition that was ignited by a hot blast of compressed air from a lever-compressed spring, as in an air rifle. The caseless ammunition is of course not reloadable, since there is no casing left after firing the bullet, and the exposed propellant makes the rounds less durable. Also, the case in a standard cartridge serves as a seal, keeping gas from escaping the breech. Caseless arms must use a more complex self-sealing breech, which increases the design and manufacturing complexity. Another unpleasant problem, common to all rapid-firing arms but particularly problematic for those firing caseless rounds, is that of rounds "cooking off". This problem is caused by residual heat from the chamber heating the round in the chamber to the point where it ignites, causing an unintentional discharge. To minimize the risk of cartridge cook-off, machine guns can be designed to fire from an open bolt, with the round not chambered until the trigger is pulled, so there is no chance for the round to cook off before the operator is ready. Such weapons could use caseless ammunition effectively. Open-bolt designs are generally undesirable for anything but machine guns; the mass of the bolt moving forward causes the gun to lurch in reaction, which significantly reduces accuracy, though this is generally not an issue for machine-gun fire. Propellant charge Load density and consistency Load density is the percentage of the space in the cartridge case that is filled with powder. In general, loads close to 100% density (or even loads where seating the bullet in the case compresses the powder) ignite and burn more consistently than lower-density loads. In cartridges surviving from the black-powder era (examples being .45 Colt, .45-70 Government), the case is much larger than is needed to hold the maximum charge of high-density smokeless powder.
This extra room allows the powder to shift in the case, piling up near the front or back of the case and potentially causing significant variations in burning rate, as powder near the rear of the case will ignite rapidly but powder near the front of the case will ignite later. This change has less impact with fast powders. Such high-capacity, low-density cartridges generally deliver best accuracy with the fastest appropriate powder, although this keeps the total energy low due to the sharp high-pressure peak. Magnum pistol cartridges reverse this power/accuracy tradeoff by using lower-density, slower-burning powders that give high load density and a broad pressure curve. The downside is the increased recoil and muzzle blast from the high powder mass, and high muzzle pressure. Most rifle cartridges have a high load density with the appropriate powders. Rifle cartridges tend to be bottlenecked, with a wide base narrowing down to a smaller diameter, to hold a light, high-velocity bullet. These cases are designed to hold a large charge of low-density powder, for an even broader pressure curve than a magnum pistol cartridge. These cases require the use of a long rifle barrel to extract their full efficiency, although they are also chambered in rifle-like pistols (single-shot or bolt-action) with barrels of 10 to 15 inches (25 to 38 cm). Chamber Straight vs bottleneck Straight walled cases were the standard from the beginnings of cartridge arms. With the low burning speed of black powder, the best efficiency was achieved with large, heavy bullets, so the bullet was the largest practical diameter. The large diameter allowed a short, stable bullet with high weight, and the maximum practical bore volume to extract the most energy possible in a given length barrel. There were a few cartridges that had long, shallow tapers, but these were generally an attempt to use an existing cartridge to fire a smaller bullet with a higher velocity and lower recoil. With the advent of smokeless powders, it was possible to generate far higher velocities by using a slow smokeless powder in a large volume case, pushing a small, light bullet. The odd, highly tapered 8 mm Lebel, made by necking down an older 11 mm black-powder cartridge, was introduced in 1886, and it was soon followed by the 7.92×57mm Mauser and 7×57mm Mauser military rounds, and the commercial .30-30 Winchester, all of which were new designs built to use smokeless powder. All of these have a distinct shoulder that closely resembles modern cartridges, and with the exception of the Lebel they are still chambered in modern firearms even though the cartridges are over a century old. Aspect ratio and consistency When selecting a rifle cartridge for maximum accuracy, a short, fat cartridge with very little case taper may yield higher efficiency and more consistent velocity than a long, thin cartridge with a lot of case taper (part of the reason for a bottle-necked design). Given current trends towards shorter and fatter cases, such as the new Winchester Super Short Magnum cartridges, it appears the ideal might be a case approaching spherical inside. Target and vermin hunting rounds require the greatest accuracy, so their cases tend to be short, fat, and nearly untapered with sharp shoulders on the case. Short, fat cases also allow short-action weapons to be made lighter and stronger for the same level of performance. 
The trade-off for this performance is fat rounds which take up more space in a magazine, sharp shoulders that do not feed as easily out of a magazine, and less reliable extraction of the spent round. For these reasons, when reliable feeding is more important than accuracy, such as with military rifles, longer cases with shallower shoulder angles are favored. There has been a long-term trend, however, even among military weapons, towards shorter, fatter cases. The current 7.62×51mm NATO case, which replaced the longer .30-06 Springfield, is a good example, as is the new 6.5 Grendel cartridge designed to increase the performance of the AR-15 family of rifles and carbines. Nevertheless, there is significantly more to accuracy and cartridge lethality than the length and diameter of the case, and the 7.62×51mm NATO has a smaller case capacity than the .30-06 Springfield, reducing the amount of propellant that can be used and thus the bullet weight and muzzle velocity combination that contributes to lethality (as detailed in the published cartridge specifications). The 6.5 Grendel, on the other hand, is capable of firing a significantly heavier bullet than the 5.56 NATO out of the AR-15 family of weapons, with only a slight decrease in muzzle velocity, perhaps providing a more advantageous performance tradeoff. Friction and inertia Static friction and ignition Since the burning rate of smokeless powder varies directly with the pressure, the initial pressure buildup (i.e., the "shot-start pressure") has a significant effect on the final velocity, especially in large cartridges with very fast powders and relatively lightweight projectiles. In small caliber firearms, the friction holding the bullet in the case determines how soon after ignition the bullet moves, and since the motion of the bullet increases the volume and drops the pressure, a difference in friction can change the slope of the pressure curve. In general, a tight fit is desired, to the extent of crimping the bullet into the case. In straight-walled rimless cases, such as the .45 ACP, an aggressive crimp is not possible, since the case is held in the chamber by the mouth of the case, but sizing the case to allow a tight interference fit with the bullet can give the desired result. In larger caliber firearms, the shot start pressure is often determined by the force required to initially engrave the projectile driving band into the start of the barrel rifling; smoothbore guns, which do not have rifling, achieve shot start pressure by initially driving the projectile into a "forcing cone" that provides resistance as it compresses the projectile obturation ring. Kinetic friction The bullet must tightly fit the bore to seal the high pressure of the burning gunpowder. This tight fit results in a large frictional force. The friction of the bullet in the bore does have a slight impact on the final velocity, but that is generally not much of a concern. Of greater concern is the heat that is generated due to the friction. At velocities of about , lead begins to melt and deposit in the bore. This lead build-up constricts the bore, increasing the pressure and decreasing the accuracy of subsequent rounds, and is difficult to scrub out without damaging the bore. Rounds used at velocities up to , can use wax lubricants on the bullet to reduce lead build-up. 
At velocities over , nearly all bullets are jacketed in copper or a similar alloy that is soft enough not to wear on the barrel but melts at a high enough temperature to reduce build-up in the bore. Copper build-up does begin to occur in rounds that exceed , and a common solution is to impregnate the surface of the bullet with molybdenum disulfide lubricant. This reduces copper build-up in the bore, and results in better long-term accuracy. Large caliber spin-stabilized projectiles fired from rifled barrels also employ copper driving bands; however, fin-stabilized projectiles fired from both rifled and smoothbore barrels, such as APFSDS anti-armor projectiles, employ nylon obturation rings that are sufficient to seal high pressure propellant gases and also minimize in-bore friction, providing a small boost to muzzle velocity. The role of inertia In the first few centimeters of travel down the bore, the bullet reaches a significant percentage of its final velocity, even for high-capacity rifles with slow-burning powder. The acceleration is on the order of tens of thousands of gravities, so even a projectile as light as can provide over of resistance due to inertia. Changes in bullet mass, therefore, have a huge impact on the pressure curves of smokeless powder cartridges, unlike black-powder cartridges. The loading or reloading of smokeless cartridges thus requires high-precision equipment, and carefully measured tables of load data for given cartridges, powders, and bullet weights. Pressure-velocity relationships Energy is imparted to the bullet in a firearm by the pressure of gases produced by burning propellant. While higher pressures produce higher velocities, pressure duration is also important. Peak pressure may represent only a small fraction of the time the bullet is accelerating. The entire duration of the bullet's travel through the barrel must be considered. Peak vs area Energy is the ability to do work on an object. Work is force applied over a distance. The total energy imparted to a bullet is indicated by the area under a curve with the y-axis being force (i.e., the pressure exerted on the base of the bullet multiplied by the area of the base of the bullet) and the x-axis being distance. Increasing the energy of the bullet requires increasing the area under that curve, either by raising the pressure or by increasing the distance the bullet travels under pressure. Pressure is limited by the strength of the firearm, and duration is limited by barrel length. Propellant design Propellants are matched to firearm strength, chamber volume and barrel length; and bullet material, weight and dimensions. The rate of gas generation is proportional to the surface area of burning propellant grains in accordance with Piobert's Law. Smokeless propellant reactions occur in a series of zones or phases as the reaction proceeds from the surface into the solid. The deepest portion of the solid experiencing heat transfer melts and begins phase transition from solid to gas in a foam zone. The gaseous propellant decomposes into simpler molecules in a surrounding fizz zone. Endothermic transformations in the foam zone and fizz zone require energy initially provided by the primer and subsequently released in a luminous outer flame zone where the simpler gas molecules react to form conventional combustion products like steam and carbon monoxide. 
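The peak-versus-area relationship described above can be made concrete with a rough numerical sketch. The pressure profile, bore diameter, and bullet mass below are arbitrary illustrative assumptions rather than data for any real cartridge; the point is only that muzzle energy follows from integrating force (base pressure times bore area) over the bullet's travel.

    # Rough sketch: muzzle energy as the area under a force-distance curve.
    # All numbers here are illustrative assumptions, not published load data.
    import math

    bore_diameter = 0.00782      # m (roughly .30 caliber), assumed
    bullet_mass = 0.0097         # kg (roughly 150 grains), assumed
    bore_area = math.pi * (bore_diameter / 2) ** 2

    # Assumed base pressure (Pa) sampled at positions (m) along the barrel:
    # a sharp peak near the breech, then a long tapering tail.
    positions = [0.0, 0.05, 0.10, 0.20, 0.30, 0.40, 0.50, 0.60]
    pressures = [0.0, 300e6, 180e6, 90e6, 55e6, 38e6, 28e6, 22e6]

    # Work = integral of (pressure * bore area) over distance, via the trapezoid rule.
    energy = 0.0
    for i in range(len(positions) - 1):
        dx = positions[i + 1] - positions[i]
        mean_force = bore_area * (pressures[i] + pressures[i + 1]) / 2
        energy += mean_force * dx

    velocity = math.sqrt(2 * energy / bullet_mass)  # ignores friction and gas mass
    print(f"energy ~ {energy:.0f} J, muzzle velocity ~ {velocity:.0f} m/s")

Raising the pressure values or extending the distance over which useful pressure acts both enlarge the computed area, and therefore the energy, without necessarily raising the peak.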
The heat transfer rate of smokeless propellants increases with pressure, so the rate of gas generation from a given grain surface area increases at higher pressures. Accelerating gas generation from fast burning propellants may rapidly create a destructively high pressure spike before bullet movement increases reaction volume. Conversely, propellants designed for a minimum heat transfer pressure may cease decomposition into gaseous reactants if bullet movement decreases pressure before a slow burning propellant has been consumed. Unburned propellant grains may remain in the barrel if the energy-releasing flame zone cannot be sustained in the resultant absence of gaseous reactants from the inner zones. Propellant burnout Another issue to consider, when choosing a powder burn rate, is the time the powder takes to completely burn vs. the time the bullet spends in the barrel. In a typical pressure-time curve there is a change in the curve at about 0.8 ms. This is the point at which the powder is completely burned, and no new gas is created. With a faster powder, burnout occurs earlier, and with the slower powder, it occurs later. Propellant that is unburned when the bullet reaches the muzzle is wasted — it adds no energy to the bullet, but it does add to the recoil and muzzle blast. For maximum power, the powder should burn until the bullet is just short of the muzzle. Since smokeless powders burn rather than detonate, the reaction can only take place on the surface of the powder. Smokeless powders come in a variety of shapes, which serve to determine how fast they burn, and also how the burn rate changes as the powder burns. The simplest shape is a ball powder, which is in the form of round or slightly flattened spheres. Ball powder has a comparatively small surface-area-to-volume ratio, so it burns comparatively slowly, and as it burns, its surface area decreases. This means as the powder burns, the burn rate slows down. To some degree, this can be offset by the use of a retardant coating on the surface of the powder, which slows the initial burn rate and flattens out the rate of change. Ball powders are generally formulated as slow pistol powders, or fast rifle powders. Flake powders are in the form of flat, round flakes which have a relatively high surface-area-to-volume ratio. Flake powders have a nearly constant rate of burn, and are usually formulated as fast pistol or shotgun powders. The last common shape is an extruded powder, which is in the form of a cylinder, sometimes hollow. Extruded powders generally have a lower ratio of nitroglycerin to nitrocellulose, and are often progressive burning — that is, they generate gas at an increasing rate as they burn. Extruded powders are generally medium to slow rifle powders. General concerns Bore diameter and energy transfer A firearm, in many ways, is like a piston engine on the power stroke. There is a certain amount of high-pressure gas available, and energy is extracted from it by making the gas move a piston — in this case, the projectile is the piston. The swept volume of the piston determines how much energy can be extracted from the given gas. The more volume that is swept by the piston, the lower is the exhaust pressure (in this case, the muzzle pressure). Any remaining pressure at the muzzle or at the end of the engine's power stroke represents lost energy. To extract the maximum amount of energy, then, the swept volume is maximized. 
This can be done in one of two ways — increasing the length of the barrel or increasing the diameter of the projectile. Increasing the barrel length will increase the swept volume linearly, while increasing the diameter will increase the swept volume as the square of the diameter. Since barrel length is limited by practical concerns to about arm's length for a rifle and much shorter for a handgun, increasing bore diameter is the normal way to increase the efficiency of a cartridge. The limit to bore diameter is generally the sectional density of the projectile (see external ballistics). Larger-diameter bullets of the same weight have much more drag, and so they lose energy more quickly after exiting the barrel. In general, most handguns use bullets between .355 (9 mm) and .45 (11.5 mm) caliber, while most rifles generally range from .223 (5.56 mm) to .32 (8 mm) caliber. There are many exceptions, of course, but bullets in the given ranges provide the best general-purpose performance. Handguns use the larger-diameter bullets for greater efficiency in short barrels, and tolerate the long-range velocity loss since handguns are seldom used for long-range shooting. Handguns designed for long-range shooting are generally closer to shortened rifles than to other handguns. Ratio of propellant to projectile mass Another issue, when choosing or developing a cartridge, is the issue of recoil. The recoil is not just the reaction from the projectile being launched, but also from the powder gas, which will exit the barrel with a velocity even higher than that of the bullet. For handgun cartridges, with heavy bullets and light powder charges (a 9×19mm, for example, might use of powder, and a bullet), the powder recoil is not a significant force; for a rifle cartridge (a .22-250 Remington, using of powder and a bullet), the powder can be the majority of the recoil force. There is a solution to the recoil issue, though it is not without cost. A muzzle brake or recoil compensator is a device which redirects the powder gas at the muzzle, usually up and back. This acts like a rocket, pushing the muzzle down and forward. The forward push helps negate the feel of the projectile recoil by pulling the firearm forwards. The downward push, on the other hand, helps counteract the rotation imparted by the fact that most firearms have the barrel mounted above the center of gravity. Overt combat guns, large-bore high-powered rifles, long-range handguns chambered for rifle ammunition, and action-shooting handguns designed for accurate rapid fire, all benefit from muzzle brakes. The high-powered firearms use the muzzle brake mainly for recoil reduction, which reduces the battering of the shooter by the severe recoil. The action-shooting handguns redirect all the energy up to counteract the rotation of the recoil, and make following shots faster by leaving the gun on target. The disadvantage of the muzzle brake is a longer, heavier barrel, and a large increase in sound levels and flash behind the muzzle of the rifle. Shooting firearms without muzzle brakes and without hearing protection can eventually damage the operator's hearing; however, shooting rifles with muzzle brakes - with or without hearing protection - causes permanent ear damage. (See muzzle brake for more on the disadvantages of muzzle brakes.) Powder-to-projectile-weight ratio also touches on the subject of efficiency. In the case of the .22-250 Remington, more energy goes into propelling the powder gas than goes into propelling the bullet. 
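As a rough illustration of how the powder charge itself contributes to recoil, the sketch below compares the momentum carried by the bullet with the momentum carried by the escaping gas. The masses and velocities are illustrative assumptions, and the factor of 1.5 applied to the gas is a common rule of thumb for the average gas exit velocity, not a measured value for any cartridge named above.

    # Rough free-recoil sketch: bullet momentum vs. powder-gas momentum.
    # All inputs are illustrative assumptions, not load data for a specific cartridge.

    def recoil_momentum(bullet_mass_kg, powder_mass_kg, muzzle_velocity_ms,
                        gas_velocity_factor=1.5):
        """Return (bullet momentum, gas momentum) in kg*m/s.
        gas_velocity_factor is an assumed rule-of-thumb multiplier for the
        average velocity of the escaping propellant gas."""
        p_bullet = bullet_mass_kg * muzzle_velocity_ms
        p_gas = powder_mass_kg * gas_velocity_factor * muzzle_velocity_ms
        return p_bullet, p_gas

    # Heavy bullet, small charge (service-pistol proportions, assumed):
    p_bullet, p_gas = recoil_momentum(0.0075, 0.0004, 360)
    print(f"pistol-like: bullet {p_bullet:.2f} kg*m/s, gas {p_gas:.2f} kg*m/s")

    # Light bullet, large charge (varmint-rifle proportions, assumed):
    p_bullet, p_gas = recoil_momentum(0.0036, 0.0024, 1100)
    print(f"rifle-like:  bullet {p_bullet:.2f} kg*m/s, gas {p_gas:.2f} kg*m/s")

With pistol-like proportions the gas carries only a small fraction of the recoil momentum, while with a light bullet and a large charge it can rival the bullet's share, which is why muzzle brakes that redirect this gas are most effective on such cartridges.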
The .22-250 pays for this high ratio of powder to bullet weight by requiring a large case, with much powder, all for a fairly small gain in velocity and energy over other .22 caliber cartridges. Accuracy and bore characteristics Nearly all small bore firearms, with the exception of shotguns, have rifled barrels. The rifling imparts a spin on the bullet, which keeps it from tumbling in flight. The rifling is usually in the form of sharp edged grooves cut as helices along the axis of the bore, anywhere from 2 to 16 in number. The areas between the grooves are known as lands. Another system, polygonal rifling, gives the bore a polygonal cross section. Polygonal rifling is not very common, used by only a few European manufacturers as well as the American gun manufacturer Kahr Arms. The companies that use polygonal rifling claim greater accuracy, lower friction, and less lead and/or copper buildup in the barrel. Traditional land and groove rifling is used in most competition firearms, however, so the advantages of polygonal rifling are unproven. There are four methods of rifling a barrel: A single point cutter is drawn down the bore by a machine that controls the rotation of the cutting head relative to the barrel. This is the slowest process, but requires the simplest equipment. It is often used by custom gunsmiths as it can result in very accurate barrels. Button rifling uses a die with a negative image of the rifling cut on it which is drawn down the barrel while rotated, swaging the inside of the barrel. This creates all the grooves at once by deformation, and is faster than cut rifling. Hammer forging is a process in which a slightly oversized, bored barrel is placed around a mandrel that contains a negative image of the length of the rifled barrel. The barrel and mandrel are rotated and hammered by power hammers, which forms the inside of the barrel. This is the fastest and often cheapest method of making a barrel, but the equipment is expensive. Hammer-forged barrels are generally not capable of the accuracy attainable with the first two methods mentioned. Electrical discharge machining (EDM) or electrochemical machining (ECM) processes use electricity to erode away material, a process which produces a highly consistent diameter and very smooth finish, with less stress than other rifling methods. EDM is very costly and primarily used in large bore, long barrel cannon where traditional methods are very difficult, while ECM is used by some smaller barrel makers. The purpose of the barrel is to provide a consistent seal, allowing the bullet to accelerate to a consistent velocity. It must also impart the right spin, and release the bullet consistently, perfectly concentric to the bore. The residual pressure in the bore must be released symmetrically, so that no side of the bullet receives any more or less push than the rest. To maintain a good pressure seal, the bore must be a precise constant diameter, or have a slight decrease in diameter from breech to muzzle. Any increase in bore diameter will allow the bullet to shift, allowing gas to leak past the bullet, decreasing velocity, or cause the bullet to tip so that it is no longer perfectly coaxial with the bore. High quality barrels are lapped to remove any constrictions in the bore which will cause a change in diameter. Lapping uses a lead "slug" that is slightly larger than the bore and covered in fine abrasive compound to cut out the constrictions. The slug is passed from breech to muzzle to remove obstructions. 
Many passes are made, and as the bore becomes more uniform, finer grades of abrasive compound are used. The final result is a barrel that is mirror-smooth, and with a consistent or slightly tapering bore. The hand-lapping technique uses a wooden or soft metal rod to pull or push the slug through the bore, while the newer fire-lapping technique uses specially loaded, low-power cartridges to push abrasive-covered soft-lead bullets down the barrel. Another issue that has an effect on the barrel's hold on the bullet is the rifling. When the bullet is fired, it is forced into the rifling, which cuts or "engraves" the surface of the bullet. If the rifling is a constant twist, then the rifling rides in the grooves engraved in the bullet, and everything is secure and sealed. If the rifling has a decreasing twist, then the changing angle of the rifling in the engraved grooves of the bullet causes the rifling to become narrower than the grooves. This allows gas to blow by, and loosens the hold of the bullet on the barrel. An increasing twist, however, will make the rifling become wider than the grooves in the bullet, maintaining the seal. When a rifled-barrel blank is selected for a gun, the higher-twist end is located at the muzzle. The muzzle of the barrel is the last thing to touch the bullet before it goes into ballistic flight, and as such has the greatest potential to disrupt the bullet's flight. The muzzle must allow the gas to escape the barrel symmetrically; any asymmetry will cause an uneven pressure on the base of the bullet, which will disrupt its flight. The muzzle end of the barrel is called the "crown", and it is usually either beveled or recessed to protect it from bumps or scratches that might affect accuracy. Before the barrel can release the bullet in a consistent manner, it must grip the bullet in a consistent manner. The part of the barrel between where the bullet exits the cartridge and engages the rifling is called the "throat", and the length of the throat is the freebore. In some firearms, the freebore is zero as the act of chambering the cartridge forces the bullet into the rifling. This is common in low-powered rimfire target rifles. The placement of the bullet in the rifling ensures that the transition between cartridge and rifling is quick and stable. The downside is that the cartridge is firmly held in place, and attempting to extract the unfired round can be difficult, to the point of even pulling the bullet from the cartridge in extreme cases. With high-powered cartridges, a significant amount of force is required to engrave the bullet, which can raise the pressure in the chamber above the maximum design pressure. Higher-powered rifles usually have a longer freebore so that the bullet is allowed to gain some momentum, and the chamber pressure to drop slightly, before the bullet engages the rifling. However, any slight misalignment can cause the bullet to tip as it engages the rifling, resulting in the bullet not entering the barrel coaxially. Revolver-specific issues The defining characteristic of a revolver is the revolving cylinder, separate from the barrel, that contains the chambers. Revolvers typically have 5 to 10 chambers, and the first issue is ensuring consistency among the chambers, because if they aren't consistent then the point of impact will vary from chamber to chamber. The chambers must also align consistently with the barrel, so the bullet enters the barrel the same way from each chamber. 
The throat in a revolver is composed of two separate parts, the cylinder throat and the barrel throat. The cylinder throat is part of the cylinder and is sized so that it is concentric to the chamber and very slightly over the bullet diameter. The cylinder gap - the space between the cylinder and barrel - must be wide enough to allow free rotation of the cylinder even when it becomes fouled with powder residue, but not so large that excessive gas is released. The forcing cone - where the bullet is guided from the cylinder into the bore of the barrel - should be deep enough to force the bullet into the bore without significant deformation. Unlike rifles, where the threaded portion of the barrel surrounds the chamber, a revolver barrel's threads surround the breech end of the bore. It is therefore possible for the bore to be compressed when the barrel is screwed into the frame. Cutting a longer forcing cone can relieve this "choke" point, as can lapping of the barrel after it is fitted to the frame. See also External ballistics Percussion cap, for an early history of priming powder and percussion caps Terminal ballistics Transitional ballistics Physics of firearms Table of handgun and rifle cartridges References External links A (Very) Short Course in Internal Ballistics, Fr. Frog QuickLOAD Ballistics Software Ballistics Ammunition Handloading
Isometry
In mathematics, an isometry (or congruence, or congruent transformation) is a distance-preserving transformation between metric spaces, usually assumed to be bijective. The word isometry is derived from the Ancient Greek: ἴσος isos meaning "equal", and μέτρον metron meaning "measure". If the transformation is from a metric space to itself, it is a kind of geometric transformation known as a motion. Introduction Given a metric space (loosely, a set and a scheme for assigning distances between elements of the set), an isometry is a transformation which maps elements to the same or another metric space such that the distance between the image elements in the new metric space is equal to the distance between the elements in the original metric space. In a two-dimensional or three-dimensional Euclidean space, two geometric figures are congruent if they are related by an isometry; the isometry that relates them is either a rigid motion (translation or rotation), or a composition of a rigid motion and a reflection. Isometries are often used in constructions where one space is embedded in another space. For instance, the completion of a metric space M involves an isometry from M into a quotient set of the space of Cauchy sequences on M. The original space is thus isometrically isomorphic to a subspace of a complete metric space, and it is usually identified with this subspace. Other embedding constructions show that every metric space is isometrically isomorphic to a closed subset of some normed vector space and that every complete metric space is isometrically isomorphic to a closed subset of some Banach space. An isometric surjective linear operator on a Hilbert space is called a unitary operator. Definition Let X and Y be metric spaces with metrics (e.g., distances) dX and dY. A map f : X → Y is called an isometry or distance-preserving map if for any a, b in X, dY(f(a), f(b)) = dX(a, b). An isometry is automatically injective; otherwise two distinct points, a and b, could be mapped to the same point, thereby contradicting the coincidence axiom of the metric d, i.e., d(a, b) = 0 if and only if a = b. This proof is similar to the proof that an order embedding between partially ordered sets is injective. Clearly, every isometry between metric spaces is a topological embedding. A global isometry, isometric isomorphism or congruence mapping is a bijective isometry. Like any other bijection, a global isometry has a function inverse. The inverse of a global isometry is also a global isometry. Two metric spaces X and Y are called isometric if there is a bijective isometry from X to Y. The set of bijective isometries from a metric space to itself forms a group with respect to function composition, called the isometry group. There is also the weaker notion of path isometry or arcwise isometry: A path isometry or arcwise isometry is a map which preserves the lengths of curves; such a map is not necessarily an isometry in the distance preserving sense, and it need not necessarily be bijective, or even injective. This term is often abridged to simply isometry, so one should take care to determine from context which type is intended. Examples Any reflection, translation and rotation is a global isometry on Euclidean spaces. See also Euclidean group. The map x ↦ |x| on the real numbers is a path isometry but not a (general) isometry. Note that unlike an isometry, this path isometry does not need to be injective. Isometries between normed spaces The following theorem is due to Mazur and Ulam. Definition: The midpoint of two elements x and y in a vector space is the vector (x + y)/2. 
Linear isometry Given two normed vector spaces and a linear isometry is a linear map that preserves the norms: for all Linear isometries are distance-preserving maps in the above sense. They are global isometries if and only if they are surjective. In an inner product space, the above definition reduces to for all which is equivalent to saying that This also implies that isometries preserve inner products, as . Linear isometries are not always unitary operators, though, as those require additionally that and (i.e. the domain and codomain coincide and defines a coisometry). By the Mazur–Ulam theorem, any isometry of normed vector spaces over is affine. A linear isometry also necessarily preserves angles, therefore a linear isometry transformation is a conformal linear transformation. Examples A linear map from to itself is an isometry (for the dot product) if and only if its matrix is unitary. Manifold An isometry of a manifold is any (smooth) mapping of that manifold into itself, or into another manifold that preserves the notion of distance between points. The definition of an isometry requires the notion of a metric on the manifold; a manifold with a (positive-definite) metric is a Riemannian manifold, one with an indefinite metric is a pseudo-Riemannian manifold. Thus, isometries are studied in Riemannian geometry. A local isometry from one (pseudo-)Riemannian manifold to another is a map which pulls back the metric tensor on the second manifold to the metric tensor on the first. When such a map is also a diffeomorphism, such a map is called an isometry (or isometric isomorphism), and provides a notion of isomorphism ("sameness") in the category Rm of Riemannian manifolds. Definition Let and be two (pseudo-)Riemannian manifolds, and let be a diffeomorphism. Then is called an isometry (or isometric isomorphism) if where denotes the pullback of the rank (0, 2) metric tensor by . Equivalently, in terms of the pushforward we have that for any two vector fields on (i.e. sections of the tangent bundle ), If is a local diffeomorphism such that then is called a local isometry. Properties A collection of isometries typically form a group, the isometry group. When the group is a continuous group, the infinitesimal generators of the group are the Killing vector fields. The Myers–Steenrod theorem states that every isometry between two connected Riemannian manifolds is smooth (differentiable). A second form of this theorem states that the isometry group of a Riemannian manifold is a Lie group. Riemannian manifolds that have isometries defined at every point are called symmetric spaces. Generalizations Given a positive real number ε, an ε-isometry or almost isometry (also called a Hausdorff approximation) is a map between metric spaces such that for one has and for any point there exists a point with That is, an -isometry preserves distances to within and leaves no element of the codomain further than away from the image of an element of the domain. Note that -isometries are not assumed to be continuous. The restricted isometry property characterizes nearly isometric matrices for sparse vectors. Quasi-isometry is yet another useful generalization. One may also define an element in an abstract unital C*-algebra to be an isometry: is an isometry if and only if Note that as mentioned in the introduction this is not necessarily a unitary element because one does not in general have that left inverse is a right inverse. 
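The linear-isometry criterion above (an orthogonal or unitary matrix, in the case of the dot product) is straightforward to check numerically. The sketch below is a minimal illustration using a plane rotation; the angle and test points are arbitrary choices.

    # Minimal check that a plane rotation is a linear isometry of Euclidean space:
    # it is orthogonal (Q^T Q = I) and therefore preserves pairwise distances.
    import math

    theta = 0.7  # arbitrary rotation angle
    Q = [[math.cos(theta), -math.sin(theta)],
         [math.sin(theta),  math.cos(theta)]]

    def apply(Q, v):
        return [Q[0][0] * v[0] + Q[0][1] * v[1],
                Q[1][0] * v[0] + Q[1][1] * v[1]]

    def dist(u, v):
        return math.hypot(u[0] - v[0], u[1] - v[1])

    # Q^T Q should be the identity matrix (orthogonality).
    qtq = [[sum(Q[k][i] * Q[k][j] for k in range(2)) for j in range(2)] for i in range(2)]
    assert all(abs(qtq[i][j] - (1.0 if i == j else 0.0)) < 1e-12
               for i in range(2) for j in range(2))

    # Distances between arbitrary test points are preserved.
    a, b = [1.0, 2.0], [-3.0, 0.5]
    assert abs(dist(apply(Q, a), apply(Q, b)) - dist(a, b)) < 1e-12
    print("rotation preserves distances:", dist(apply(Q, a), apply(Q, b)), dist(a, b))

A translation, by contrast, preserves distances without being linear; it is affine, consistent with the Mazur–Ulam theorem mentioned above.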
On a pseudo-Euclidean space, the term isometry means a linear bijection preserving magnitude. See also Quadratic spaces. See also Beckman–Quarles theorem The second dual of a Banach space as an isometric isomorphism Euclidean plane isometry Flat (geometry) Homeomorphism group Involution Isometry group Motion (geometry) Myers–Steenrod theorem 3D isometries that leave the origin fixed Partial isometry Scaling (geometry) Semidefinite embedding Space group Symmetry in mathematics Footnotes References Bibliography Functions and mappings Metric geometry Symmetry Equivalence (mathematics) Riemannian geometry
Van der Waals force
In molecular physics and chemistry, the van der Waals force (sometimes van der Waals' force) is a distance-dependent interaction between atoms or molecules. Unlike ionic or covalent bonds, these attractions do not result from a chemical electronic bond; they are comparatively weak and therefore more susceptible to disturbance. The van der Waals force quickly vanishes at longer distances between interacting molecules. Named after Dutch physicist Johannes Diderik van der Waals, the van der Waals force plays a fundamental role in fields as diverse as supramolecular chemistry, structural biology, polymer science, nanotechnology, surface science, and condensed matter physics. It also underlies many properties of organic compounds and molecular solids, including their solubility in polar and non-polar media. If no other force is present, the distance between atoms at which the force becomes repulsive rather than attractive as the atoms approach one another is called the van der Waals contact distance; this phenomenon results from the mutual repulsion between the atoms' electron clouds. The van der Waals forces are usually described as a combination of the London dispersion forces between "instantaneously induced dipoles", Debye forces between permanent dipoles and induced dipoles, and the Keesom force between permanent molecular dipoles whose rotational orientations are dynamically averaged over time. Definition Van der Waals forces include attractions and repulsions between atoms and molecules, as well as other intermolecular forces. They differ from covalent and ionic bonding in that they are caused by correlations in the fluctuating polarizations of nearby particles (a consequence of quantum dynamics). The force results from a transient shift in electron density. Specifically, the electron density may temporarily shift to be greater on one side of the nucleus. This shift generates a transient charge which a nearby atom can be attracted to or repelled by. The force is repulsive at very short distances, reaches zero at an equilibrium distance characteristic for each atom or molecule, and becomes attractive for distances larger than the equilibrium distance. For individual atoms, the equilibrium distance is between 0.3 nm and 0.5 nm, depending on the atomic-specific diameter. When the interatomic distance is greater than 1.0 nm the force is not strong enough to be easily observed as it decreases as a function of distance r approximately with the 7th power (~r⁻⁷). Van der Waals forces are often among the weakest chemical forces. For example, the pairwise attractive van der Waals interaction energy between H (hydrogen) atoms in different H2 molecules equals 0.06 kJ/mol (0.6 meV) and the pairwise attractive interaction energy between O (oxygen) atoms in different O2 molecules equals 0.44 kJ/mol (4.6 meV). The corresponding vaporization energies of H2 and O2 molecular liquids, which result as a sum of all van der Waals interactions per molecule in the molecular liquids, amount to 0.90 kJ/mol (9.3 meV) and 6.82 kJ/mol (70.7 meV), respectively, and thus approximately 15 times the value of the individual pairwise interatomic interactions (excluding covalent bonds). The strength of van der Waals bonds increases with higher polarizability of the participating atoms. 
For example, the pairwise van der Waals interaction energy for more polarizable atoms such as S (sulfur) atoms in H2S and sulfides exceeds 1 kJ/mol (10 meV), and the pairwise interaction energy between even larger, more polarizable Xe (xenon) atoms is 2.35 kJ/mol (24.3 meV). These van der Waals interactions are up to 40 times stronger than in H2, which has only one valence electron, and they are still not strong enough to achieve an aggregate state other than gas for Xe under standard conditions. The interactions between atoms in metals can also be effectively described as van der Waals interactions and account for the observed solid aggregate state with bonding strengths comparable to covalent and ionic interactions. The strength of pairwise van der Waals type interactions is on the order of 12 kJ/mol (120 meV) for low-melting Pb (lead) and on the order of 32 kJ/mol (330 meV) for high-melting Pt (platinum), which is about one order of magnitude stronger than in Xe due to the presence of a highly polarizable free electron gas. Accordingly, van der Waals forces can range from weak to strong interactions, and support integral structural loads when multitudes of such interactions are present. Force Contributions More broadly, intermolecular forces have several possible contributions. They are ordered from strongest to weakest: A repulsive component resulting from the Pauli exclusion principle that prevents close contact of atoms, or the collapse of molecules. Attractive or repulsive electrostatic interactions between permanent charges (in the case of molecular ions), dipoles (in the case of molecules without inversion centre), quadrupoles (all molecules with symmetry lower than cubic), and in general between permanent multipoles. These interactions also include hydrogen bonds, cation-pi, and pi-stacking interactions. Orientation-averaged contributions from electrostatic interactions are sometimes called the Keesom interaction or Keesom force after Willem Hendrik Keesom. Induction (also known as polarization), which is the attractive interaction between a permanent multipole on one molecule with an induced multipole on another. This interaction is sometimes called Debye force after Peter J. W. Debye. The interactions (2) and (3) are labelled polar Interactions. Dispersion (usually named London dispersion interactions after Fritz London), which is the attractive interaction between any pair of molecules, including non-polar atoms, arising from the interactions of instantaneous multipoles. When to apply the term "van der Waals" force depends on the text. The broadest definitions include all intermolecular forces which are electrostatic in origin, namely (2), (3) and (4). Some authors, whether or not they consider other forces to be of van der Waals type, focus on (3) and (4) as these are the components which act over the longest range. All intermolecular/van der Waals forces are anisotropic (except those between two noble gas atoms), which means that they depend on the relative orientation of the molecules. The induction and dispersion interactions are always attractive, irrespective of orientation, but the electrostatic interaction changes sign upon rotation of the molecules. That is, the electrostatic force can be attractive or repulsive, depending on the mutual orientation of the molecules. 
When molecules are in thermal motion, as they are in the gas and liquid phase, the electrostatic force is averaged out to a large extent because the molecules thermally rotate and thus probe both repulsive and attractive parts of the electrostatic force. Random thermal motion can disrupt or overcome the electrostatic component of the van der Waals force but the averaging effect is much less pronounced for the attractive induction and dispersion forces. The Lennard-Jones potential is often used as an approximate model for the isotropic part of a total (repulsion plus attraction) van der Waals force as a function of distance. Van der Waals forces are responsible for certain cases of pressure broadening (van der Waals broadening) of spectral lines and the formation of van der Waals molecules. The London–van der Waals forces are related to the Casimir effect for dielectric media, the former being the microscopic description of the latter bulk property. The first detailed calculations of this were done in 1955 by E. M. Lifshitz. A more general theory of van der Waals forces has also been developed. The main characteristics of van der Waals forces are: They are weaker than normal covalent and ionic bonds. The van der Waals forces are additive in nature, consisting of several individual interactions, and cannot be saturated. They have no directional characteristic. They are all short-range forces and hence only interactions between the nearest particles need to be considered (instead of all the particles). Van der Waals attraction is greater if the molecules are closer. Van der Waals forces are independent of temperature except for dipole-dipole interactions. In low molecular weight alcohols, the hydrogen-bonding properties of their polar hydroxyl group dominate other weaker van der Waals interactions. In higher molecular weight alcohols, the properties of the nonpolar hydrocarbon chain(s) dominate and determine their solubility. Van der Waals forces are also responsible for the weak hydrogen bond interactions between unpolarized dipoles particularly in acid-base aqueous solution and between biological molecules. London dispersion force London dispersion forces, named after the German-American physicist Fritz London, are weak intermolecular forces that arise from the interactive forces between instantaneous multipoles in molecules without permanent multipole moments. In and between organic molecules the multitude of contacts can lead to larger contribution of dispersive attraction, particularly in the presence of heteroatoms. London dispersion forces are also known as 'dispersion forces', 'London forces', or 'instantaneous dipole–induced dipole forces'. The strength of London dispersion forces is proportional to the polarizability of the molecule, which in turn depends on the total number of electrons and the area over which they are spread. Hydrocarbons display small dispersive contributions, the presence of heteroatoms lead to increased LD forces as function of their polarizability, e.g. in the sequence RI>RBr>RCl>RF. In absence of solvents weakly polarizable hydrocarbons form crystals due to dispersive forces; their sublimation heat is a measure of the dispersive interaction. Van der Waals forces between macroscopic objects For macroscopic bodies with known volumes and numbers of atoms or molecules per unit volume, the total van der Waals force is often computed based on the "microscopic theory" as the sum over all interacting pairs. 
It is necessary to integrate over the total volume of the object, which makes the calculation dependent on the objects' shapes. For example, the van der Waals interaction energy between spherical bodies of radii R1 and R2 and with smooth surfaces was approximated in 1937 by Hamaker (using London's famous 1937 equation for the dispersion interaction energy between atoms/molecules as the starting point) by:

E(z) = −(A/6) [ 2R1R2 / (z² − (R1 + R2)²) + 2R1R2 / (z² − (R1 − R2)²) + ln( (z² − (R1 + R2)²) / (z² − (R1 − R2)²) ) ]    (1)

where A is the Hamaker coefficient, which is a constant (~10−19 − 10−20 J) that depends on the material properties (it can be positive or negative in sign depending on the intervening medium), and z is the center-to-center distance; i.e., the sum of R1, R2, and r (the distance between the surfaces): z = R1 + R2 + r. The van der Waals force between two spheres of constant radii (R1 and R2 are treated as parameters) is then a function of separation, since the force on an object is the negative of the derivative of the potential energy function. This yields:

F(z) = −dE(z)/dz

In the limit of close-approach, the spheres are sufficiently large compared to the distance between them, i.e. the surface separation r is much smaller than R1 or R2, so that equation (1) for the potential energy function simplifies to:

E(r) ≈ −A R1 R2 / (6 (R1 + R2) r)

with the force:

F(r) ≈ −A R1 R2 / (6 (R1 + R2) r²)

The van der Waals forces between objects with other geometries using the Hamaker model have been published in the literature. From the expression above, it is seen that the van der Waals force decreases with decreasing size of bodies (R). Nevertheless, the strength of inertial forces, such as gravity and drag/lift, decreases to a greater extent. Consequently, the van der Waals forces become dominant for collections of very small particles such as very fine-grained dry powders (where there are no capillary forces present) even though the force of attraction is smaller in magnitude than it is for larger particles of the same substance. Such powders are said to be cohesive, meaning they are not as easily fluidized or pneumatically conveyed as their more coarse-grained counterparts. Generally, free-flow occurs with particles greater than about 250 μm. The van der Waals force of adhesion is also dependent on the surface topography. If there are surface asperities, or protuberances, that result in a greater total area of contact between two particles or between a particle and a wall, this increases the van der Waals force of attraction as well as the tendency for mechanical interlocking. The microscopic theory assumes pairwise additivity. It neglects many-body interactions and retardation. A more rigorous approach accounting for these effects, called the "macroscopic theory", was developed by Lifshitz in 1956. Langbein derived a much more cumbersome "exact" expression in 1970 for spherical bodies within the framework of the Lifshitz theory while a simpler macroscopic model approximation had been made by Derjaguin as early as 1934. Expressions for the van der Waals forces for many different geometries using the Lifshitz theory have likewise been published. Use by geckos and arthropods The ability of geckos – which can hang on a glass surface using only one toe – to climb on sheer surfaces has been for many years mainly attributed to the van der Waals forces between these surfaces and the spatulae, or microscopic projections, which cover the hair-like setae found on their footpads. There were efforts in 2008 to create a dry glue that exploits the effect, and success was achieved in 2011 to create an adhesive tape on similar grounds (i.e. based on van der Waals forces). 
In 2011, a paper was published relating the effect to both velcro-like hairs and the presence of lipids in gecko footprints. A later study suggested that capillary adhesion might play a role, but that hypothesis has been rejected by more recent studies. A 2014 study has shown that gecko adhesion to smooth Teflon and polydimethylsiloxane surfaces is mainly determined by electrostatic interaction (caused by contact electrification), not van der Waals or capillary forces. Among the arthropods, some spiders have similar setae on their scopulae or scopula pads, enabling them to climb or hang upside-down from extremely smooth surfaces such as glass or porcelain. See also Arthropod adhesion Cold welding Dispersion (chemistry) Gecko feet Lennard-Jones potential Noncovalent interactions Synthetic setae Van der Waals molecule Van der Waals radius Van der Waals strain Van der Waals surface Wringing of gauge blocks References Further reading English translation: English translation: External links An introductory description of the van der Waals force (as a sum of attractive components only) TED Talk on biomimicry, including applications of van der Waals force. Intermolecular forces Force
Thaumaturgy
Thaumaturgy, derived from the Greek words thauma (wonder) and ergon (work), refers to the practical application of magic to effect change in the physical world. Historically, thaumaturgy has been associated with the manipulation of natural forces, the creation of wonders, and the performance of magical feats through esoteric knowledge and ritual practice. Unlike theurgy, which focuses on invoking divine powers, thaumaturgy is more concerned with utilizing occult principles to achieve specific outcomes, often in a tangible and observable manner. It is sometimes translated into English as wonderworking. This concept has evolved from its ancient roots in magical traditions to its incorporation into modern Western esotericism. Thaumaturgy has been practiced by individuals seeking to exert influence over the material world through both subtle and overt magical means. It has played a significant role in the development of magical systems, particularly those that emphasize the practical aspects of esoteric work. In modern times, thaumaturgy continues to be a subject of interest within the broader field of occultism, where it is studied and practiced as part of a larger system of magical knowledge. Its principles are often applied in conjunction with other forms of esoteric practice, such as alchemy and Hermeticism, to achieve a deeper understanding and mastery of the forces that govern the natural and supernatural worlds. A practitioner of thaumaturgy is a "thaumaturge", "thaumaturgist", "thaumaturgus", "miracle worker", or "wonderworker". Etymology The word thaumaturgy derives from Greek thaûma, meaning "miracle" or "marvel" (final t from genitive thaûmatos) and érgon, meaning "work". In the 16th century, the word thaumaturgy entered the English language meaning miraculous or magical powers. The word was first anglicized and used in the magical sense in John Dee's book The Mathematicall Praeface to Elements of Geometrie of Euclid of Megara (1570). He mentions an "art mathematical" called "thaumaturgy... which giveth certain order to make strange works, of the sense to be perceived and of men greatly to be wondered at". Historical development Ancient roots The origins of thaumaturgy can be traced back to ancient civilizations where magical practices were integral to both religious rituals and daily life. In ancient Egypt, priests were often regarded as thaumaturges, wielding their knowledge of rituals and incantations to influence natural and supernatural forces. These practices were aimed at protecting the Pharaoh, ensuring a successful harvest, or even controlling the weather. Similarly, in ancient Greece, certain figures were believed to possess the ability to perform miraculous feats, often attributed to their deep understanding of the mysteries of the gods and nature. This blending of religious and magical practices laid the groundwork for what would later be recognized as thaumaturgy in Western esotericism. In Greek writings, the term thaumaturge also referred to several Christian saints. In this context, the word is usually translated into English as 'wonderworker'. Notable early Christian thaumaturges include Gregory Thaumaturgus (c. 213–270), Saint Menas of Egypt (285–c. 309), Saint Nicholas (270–343), and Philomena ( 300 (?)). Medieval and Renaissance Europe During the medieval period, thaumaturgy evolved within the context of Christian mysticism and early scientific thought. 
The medieval understanding of thaumaturgy was closely linked to the idea of miracles, with saints and holy men often credited with thaumaturgic powers. The seventeenth-century Irish Franciscan editor John Colgan called the three early Irish saints, Patrick, Brigid, and Columba, thaumaturges in his Acta Triadis Thaumaturgae (Louvain, 1647). Later notable medieval Christian thaumaturges include Anthony of Padua (1195–1231) and the bishop of Fiesole, Andrew Corsini of the Carmelites (1302–1373), who was called a thaumaturge during his lifetime. This period also saw the development of grimoires—manuals for magical practices—where rituals and spells were documented, often blending Christian and pagan traditions. In the Renaissance, the concept of thaumaturgy expanded as scholars like John Dee explored the intersections between magic, science, and religion. Dee's Mathematicall Praeface to Elements of Geometrie of Euclid of Megara (1570) is one of the earliest English texts to discuss thaumaturgy, describing it as the art of creating "strange works" through a combination of natural and mathematical principles. Dee's work reflects the Renaissance pursuit of knowledge that blurred the lines between the magical and the mechanical, as thaumaturges were often seen as early scientists who harnessed the hidden powers of nature. In Dee's time, "the Mathematicks" referred not merely to the abstract computations associated with the term today, but to physical mechanical devices which employed mathematical principles in their design. These devices, operated by means of compressed air, springs, strings, pulleys or levers, were seen by unsophisticated people (who did not understand their working principles) as magical devices which could only have been made with the aid of demons and devils. By building such mechanical devices, Dee earned a reputation as a conjurer "dreaded" by neighborhood children. He complained of this assessment in his Mathematicall Praeface: Notable Renaissance and Age of Enlightenment Christian thaumaturges of the period include Gerard Majella (1726–1755), Ambrose of Optina (1812–1891), and John of Kronstadt (1829–1908). Incorporation into modern esotericism The transition into modern esotericism saw thaumaturgy taking on a more structured role within various magical systems, particularly those developed in the 18th and 19th centuries. In Hermeticism and the Western occult tradition, thaumaturgy was often practiced alongside alchemy and theurgy, with a focus on manipulating the material world through ritual and symbolic action. The Hermetic Order of the Golden Dawn, a prominent magical order founded in the late 19th century, incorporated thaumaturgy into its curriculum, emphasizing the importance of both theory and practice in the mastery of magical arts. Thaumaturgy's role in modern esotericism also intersects with the rise of ceremonial magic, where it is often employed to achieve specific, practical outcomes—ranging from healing to the invocation of spirits. Contemporary magicians continue to explore and adapt thaumaturgic practices, often drawing from a wide range of historical and cultural sources to create eclectic and personalized systems of magic. Core principles and practices Principles of sympathy and contagion Thaumaturgy is often governed by two key magical principles: the Principle of Sympathy and the Principle of Contagion. These principles are foundational in understanding how thaumaturges influence the physical world through magical means. 
The Principle of Sympathy operates on the idea that "like affects like", meaning that objects or symbols that resemble each other can influence each other. For example, a miniature representation of a desired outcome, such as a model of a bridge, could be used in a ritual to ensure the successful construction of an actual bridge. The Principle of Contagion, on the other hand, is based on the belief that objects that were once in contact continue to influence each other even after they are separated. This principle is often employed in the use of personal items, such as hair or clothing, in rituals to affect the person to whom those items belong. These principles are not unique to thaumaturgy but are integral to many forms of magic across cultures. However, in the context of thaumaturgy, they are particularly important because they provide a theoretical framework for understanding how magical actions can produce tangible results in the material world. This focus on practical outcomes distinguishes thaumaturgy from other forms of magic that may be more concerned with spiritual or symbolic meanings. Tools and rituals Thaumaturgical practices often involve the use of specific tools and rituals designed to channel and direct magical energy. Common tools include wands, staffs, talismans, and ritual knives, each of which serves a particular purpose in the practice of magic. For instance, a wand might be used to direct energy during a ritual, while a talisman could serve as a focal point for the thaumaturge's intent. The creation and consecration of these tools are themselves ritualized processes, often requiring specific materials and astrological timing to ensure their effectiveness. Rituals in thaumaturgy are typically elaborate and may involve the recitation of incantations, the drawing of protective circles, and the invocation of spirits or deities. These rituals are designed to create a controlled environment in which the thaumaturge can manipulate natural forces according to their will. The complexity of these rituals varies depending on the desired outcome, with more significant or ambitious goals requiring more intricate and time-consuming procedures. Energy manipulation At the heart of thaumaturgy is the metaphor of energy manipulation. Thaumaturges believe that the world is filled with various forms of energy that can be harnessed and directed through magical practices. This energy is often conceptualized as a natural force that permeates the universe, and through the use of specific techniques, thaumaturges believe that they can influence this energy to bring about desired changes in the physical world. Energy manipulation in thaumaturgy involves both drawing energy from the surrounding environment and directing it toward a specific goal. This process often requires a deep understanding of the natural world, as well as the ability to focus and control one's own mental and spiritual energies. In many traditions, this energy is also linked to the practitioner's life force, meaning that the act of performing thaumaturgy can be physically and spiritually taxing. As a result, practitioners often undergo rigorous training and preparation to build their capacity to manipulate energy effectively and safely. In esoteric traditions Hermetic Qabalah In Hermetic Qabalah, thaumaturgy occupies a significant role as it involves the practical application of mystical principles to influence the physical world. 
This tradition is deeply rooted in the concept of correspondences, where different elements of the cosmos are seen as interconnected. In the Hermetic tradition, a thaumaturge seeks to manipulate these correspondences to bring about desired changes. The sephiroth on the Tree of Life serve as a map for these interactions, with specific rituals and symbols corresponding to different sephiroth and their associated powers. For example, a ritual focusing on Yesod (the sephirah of the Moon) might involve elements such as silver, the color white, and the invocation of lunar deities to influence matters of intuition, dreams, or the subconscious mind. The manipulation of these correspondences through ritual is not just symbolic but is believed to produce real effects in the material world. Practitioners use complex rituals that might include the use of sacred geometry, invocations, and the creation of talismans. These practices are believed to align the practitioner with the forces they wish to control, creating a sympathetic connection that enables them to direct these forces effectively. Aleister Crowley's Magick (Book 4) provides an extensive discussion on the use of ritual tools such as the wand, cup, and sword, each of which corresponds to different elements and powers within the Qabalistic system, emphasizing the practical aspect of these tools in thaumaturgic practices. Alchemy and thaumaturgy Alchemy and thaumaturgy are often intertwined, particularly in the context of spiritual transformation and the pursuit of enlightenment. Alchemy, with its focus on the transmutation of base metals into gold and the quest for the philosopher's stone, can be seen as a form of thaumaturgy where the practitioner seeks to transform not just physical substances but also the self. This process, known as the Great Work, involves the purification and refinement of both matter and spirit. Thaumaturgy comes into play as the practical aspect of alchemy, where rituals, symbols, and substances are used to facilitate these transformations. The alchemical process is heavily laden with symbolic meanings, with each stage representing a different phase of transformation. The stages of nigredo (blackening), albedo (whitening), citrinitas (yellowing), and rubedo (reddening) correspond not only to physical changes in the material being worked on but also to stages of spiritual purification and enlightenment. Thaumaturgy, in this context, is the application of these principles to achieve tangible results, whether in the form of creating alchemical elixirs, talismans, or achieving spiritual goals. Crowley also elaborates on these alchemical principles in Magick (Book 4), particularly in his discussions on the symbolic and practical uses of alchemical symbols and processes within magical rituals. Other esoteric systems Thaumaturgy also plays a role in various other esoteric systems, where it is often viewed as a means of bridging the gap between the mundane and the divine. In Theosophy, for example, thaumaturgy is seen as part of the esoteric knowledge that allows practitioners to manipulate spiritual and material forces. Theosophical teachings emphasize the unity of all life and the interconnection of the cosmos, with thaumaturgy being a practical tool for engaging with these truths. Rituals and meditative practices are used to align the practitioner's will with higher spiritual forces, enabling them to effect change in the physical world. 
In Rosicrucianism, thaumaturgy is similarly regarded as a method of spiritual practice that leads to the mastery of natural and spiritual laws. Rosicrucians believe that through the study of nature and the application of esoteric principles, one can achieve a deep understanding of the cosmos and develop the ability to influence it. This includes the use of rituals, symbols, and sacred texts to bring about spiritual growth and material success. In the introduction of his translation of the "Spiritual Powers (神通 Jinzū)" chapter of Dōgen's Shōbōgenzō, Carl Bielefeldt refers to the powers developed by adepts of Esoteric Buddhism as belonging to the "thaumaturgical tradition". These powers, known as siddhi or abhijñā, were ascribed to the Buddha and subsequent disciples. Legendary monks like Bodhidharma, Upagupta, Padmasambhava, and others were depicted in popular legends and hagiographical accounts as wielding various supernatural powers. Misconceptions and modern interpretations Distinction from theurgy A common misconception about thaumaturgy is its conflation with theurgy. While both involve the practice of magic, they serve distinct purposes and operate on different principles. Theurgy is primarily concerned with invoking divine or spiritual beings to achieve union with the divine, often for purposes of spiritual ascent or enlightenment. Thaumaturgy, on the other hand, focuses on the manipulation of natural forces to produce tangible effects in the physical world. This distinction is crucial in understanding the differing objectives of these practices: theurgy is inherently religious and mystical, while thaumaturgy is more pragmatic and results-oriented. Aleister Crowley, in his Magick (Book 4), emphasizes the importance of understanding these differences, noting that while theurgic practices seek to align the practitioner with divine will, thaumaturgy allows the practitioner to exert their will over the material world through the application of esoteric knowledge and ritual. Modern misunderstandings In modern times, thaumaturgy is often misunderstood, particularly in popular culture where it is sometimes depicted as synonymous with fantasy magic or "miracle-working" in a religious sense. These portrayals can dilute the rich historical and esoteric significance of thaumaturgy, reducing it to a mere trope of magical fiction. For instance, the term is frequently used in fantasy literature and role-playing games to describe a generic form of magic, without consideration for its historical roots or the complex practices associated with it in esoteric traditions. This modern misunderstanding is partly due to the broadening of the term "thaumaturgy" in contemporary discourse, where it is often detached from its original context and used more loosely. As a result, the nuanced distinctions between different types of magic, such as thaumaturgy and theurgy, are often overlooked, leading to a homogenized view of magical practices. In popular culture The term thaumaturgy is used in various games as a synonym for magic, a particular sub-school (often mechanical) of magic, or as the "science" of magic. Thaumaturgy is defined as the "science" or "physics" of magic by Isaac Bonewits in his 1971 book Real Magic, a definition he also used in creating an RPG reference called Authentic Thaumaturgy (1978, 1998, 2005). 
History of Maxwell's equations
By the first half of the 19th century, the understanding of electromagnetics had improved through many experiments and theoretical work. In the 1780s, Charles-Augustin de Coulomb established his law of electrostatics. In 1825, André-Marie Ampère published his force law. In 1831, Michael Faraday discovered electromagnetic induction through his experiments, and proposed lines of force to describe it. In 1834, Emil Lenz solved the problem of the direction of the induction, and Franz Ernst Neumann wrote down the equation to calculate the induced force by change of magnetic flux. However, these experimental results and rules were not well organized and were sometimes confusing to scientists. A comprehensive summary of the electrodynamic principles was needed. This work was done by James C. Maxwell through a series of papers published from the 1850s to the 1870s. In the 1850s, Maxwell was working at the University of Cambridge, where he was impressed by Faraday's concept of lines of force. Faraday developed this concept under the influence of Roger Boscovich, a physicist whose ideas also shaped Maxwell's work. In 1856, Maxwell published his first paper on electromagnetism, On Faraday's Lines of Force, in which he tried to model the magnetic lines of force using the analogy of incompressible fluid flow. Later, Maxwell moved to King's College London, where he came into regular contact with Faraday, and the two became lifelong friends. From 1861 to 1862, Maxwell published a series of four papers under the title On Physical Lines of Force. In these papers, he used mechanical models, such as rotating vortex tubes, to model the electromagnetic field. He also modeled the vacuum as a kind of insulating elastic medium to account for the stress of the magnetic lines of force described by Faraday. These works laid the basis for the formulation of Maxwell's equations. Moreover, the 1862 paper already derived the speed of light from the expression for the velocity of electromagnetic waves in terms of the vacuum constants. The final form of Maxwell's equations was published in 1865 in A Dynamical Theory of the Electromagnetic Field, in which the theory is formulated in strictly mathematical form. In 1873, Maxwell published A Treatise on Electricity and Magnetism as a summary of his work on electromagnetism. In summary, Maxwell's equations successfully unified theories of light and electromagnetism, which is one of the great unifications in physics. Maxwell built a simple flywheel model of electromagnetism, and Boltzmann built an elaborate mechanical model ("Bicykel") based on Maxwell's flywheel model, which he used for lecture demonstrations; figures appear at the end of Boltzmann's 1891 book. Later, Oliver Heaviside studied Maxwell's A Treatise on Electricity and Magnetism and employed vector calculus to condense Maxwell's more than twenty equations into the four recognizable ones that modern physicists use. Maxwell's equations also inspired Albert Einstein in developing the theory of special relativity. The experimental proof of Maxwell's equations was provided by Heinrich Hertz in a series of experiments in the late 1880s. After that, Maxwell's equations were fully accepted by scientists. 
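The remark above, that the speed of light follows from the vacuum constants, can be checked against modern values. The following is a minimal numerical sketch added here for illustration; the constant values are standard modern figures and the variable names are chosen for this example, not data from the original papers.

```python
import math

mu_0 = 4e-7 * math.pi          # vacuum permeability, H/m (conventional value, adequate here)
epsilon_0 = 8.8541878128e-12   # vacuum permittivity, F/m
c_defined = 299_792_458.0      # speed of light, m/s (exact by definition in SI)

# The electromagnetic prediction: c = 1 / sqrt(mu_0 * epsilon_0)
c_from_constants = 1.0 / math.sqrt(mu_0 * epsilon_0)

print(f"1/sqrt(mu0*eps0) = {c_from_constants:.6e} m/s")
print(f"defined c        = {c_defined:.6e} m/s")
print(f"relative error   = {abs(c_from_constants - c_defined) / c_defined:.2e}")
```

The two values agree to within the precision of the quoted constants, which is the quantitative content of Maxwell's 1862 observation.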
Relationships among electricity, magnetism, and the speed of light The relationships among electricity, magnetism, and the speed of light can be summarized by the modern equation: $c = \frac{1}{\sqrt{\mu_0 \varepsilon_0}}$. The left-hand side is the speed of light and the right-hand side is a quantity related to the constants that appear in the equations governing electricity and magnetism. Although the right-hand side has units of velocity, it can be inferred from measurements of electric and magnetic forces, which involve no physical velocities. Therefore, establishing this relationship provided convincing evidence that light is an electromagnetic phenomenon. The discovery of this relationship started in 1855, when Wilhelm Eduard Weber and Rudolf Kohlrausch determined that there was a quantity related to electricity and magnetism, "the ratio of the absolute electromagnetic unit of charge to the absolute electrostatic unit of charge" (in modern language, the value $1/\sqrt{\mu_0 \varepsilon_0}$), and determined that it should have units of velocity. They then measured this ratio by an experiment which involved charging and discharging a Leyden jar and measuring the magnetic force from the discharge current, and found a value remarkably close to the speed of light, which had recently been measured by Hippolyte Fizeau in 1848 and by Léon Foucault in 1850. However, Weber and Kohlrausch did not make the connection to the speed of light. Towards the end of 1861, while working on Part III of his paper On Physical Lines of Force, Maxwell travelled from Scotland to London and looked up Weber and Kohlrausch's results. He converted them into a format which was compatible with his own writings, and in doing so he established the connection to the speed of light and concluded that light is a form of electromagnetic radiation. The term Maxwell's equations The four modern Maxwell's equations can be found individually throughout his 1861 paper, derived theoretically using a molecular vortex model of Michael Faraday's "lines of force" and in conjunction with the experimental result of Weber and Kohlrausch. But it was not until 1884 that Oliver Heaviside, concurrently with similar work by Josiah Willard Gibbs and Heinrich Hertz, grouped the twenty equations together into a set of only four, via vector notation. This group of four equations was known variously as the Hertz–Heaviside equations and the Maxwell–Hertz equations, but is now universally known as Maxwell's equations. Heaviside's equations, which are taught in textbooks and universities as Maxwell's equations, are not exactly the same as the ones due to Maxwell, and, in fact, the latter are more easily made to conform to quantum physics. This very subtle and paradoxical-sounding situation can perhaps be most easily understood in terms of the similar situation that exists with respect to Newton's second law of motion: in textbooks and in classrooms the law is attributed to Newton, but Newton in fact wrote his second law as $\mathbf{F} = \frac{d\mathbf{p}}{dt}$, where $\frac{d\mathbf{p}}{dt}$ is the time derivative of the momentum $\mathbf{p}$. (Newton's manuscript, open to the relevant page, is clearly visible in a glass case in the Wren Library of Trinity College, Cambridge.) This seems a trivial enough fact until one realizes that the same form remains true in special relativity, without modification. Maxwell's contribution to science in producing these equations lies in the correction he made to Ampère's circuital law in his 1861 paper On Physical Lines of Force. 
He added the displacement current term to Ampère's circuital law and this enabled him to derive the electromagnetic wave equation in his later 1865 paper A Dynamical Theory of the Electromagnetic Field and to demonstrate the fact that light is an electromagnetic wave. This fact was later confirmed experimentally by Heinrich Hertz in 1887. The physicist Richard Feynman predicted that, "From a long view of the history of mankind, seen from, say, ten thousand years from now, there can be little doubt that the most significant event of the 19th century will be judged as Maxwell's discovery of the laws of electrodynamics. The American Civil War will pale into provincial insignificance in comparison with this important scientific event of the same decade." The concept of fields was introduced by, among others, Faraday. Albert Einstein wrote: Heaviside worked to eliminate the potentials (electric potential and magnetic potential) that Maxwell had used as the central concepts in his equations; this effort was somewhat controversial, though it was understood by 1884 that the potentials must propagate at the speed of light like the fields, unlike the concept of instantaneous action-at-a-distance like the then conception of gravitational potential. On Physical Lines of Force The four equations we use today appeared separately in Maxwell's 1861 paper, On Physical Lines of Force: Equation (56) in Maxwell's 1861 paper is Gauss's law for magnetism, . Equation (112) is Ampère's circuital law, with Maxwell's addition of displacement current. This may be the most remarkable contribution of Maxwell's work, enabling him to derive the electromagnetic wave equation in his 1865 paper A Dynamical Theory of the Electromagnetic Field, showing that light is an electromagnetic wave. This lent the equations their full significance with respect to understanding the nature of the phenomena he elucidated. (Kirchhoff derived the telegrapher's equations in 1857 without using displacement current, but he did use Poisson's equation and the equation of continuity, which are the mathematical ingredients of the displacement current. Nevertheless, believing his equations to be applicable only inside an electric wire, he cannot be credited with the discovery that light is an electromagnetic wave). Equation (115) is Gauss's law. Equation (54) expresses what Oliver Heaviside referred to as 'Faraday's law', which addresses the time-variant aspect of electromagnetic induction, but not the one induced by motion; Faraday's original flux law accounted for both. Maxwell deals with the motion-related aspect of electromagnetic induction, , in equation (77), which is the same as equation (D) in Maxwell's original equations as listed below. It is expressed today as the force law equation, , which sits adjacent to Maxwell's equations and bears the name Lorentz force, even though Maxwell derived it when Lorentz was still a young boy. The difference between the and the vectors can be traced back to Maxwell's 1855 paper entitled On Faraday's Lines of Force which was read to the Cambridge Philosophical Society. The paper presented a simplified model of Faraday's work, and how the two phenomena were related. He reduced all of the current knowledge into a linked set of differential equations. It is later clarified in his concept of a sea of molecular vortices that appears in his 1861 paper On Physical Lines of Force. 
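For reference, the four equations identified above by their 1861 numbering, together with the force law mentioned at the end of the paragraph, can be written in modern differential (SI) form as follows. This is a standard restatement supplied here for orientation, not Maxwell's original notation, and the pairing with his equation numbers follows the descriptions given in the text above.

```latex
\[
\begin{aligned}
&\text{Gauss's law for magnetism (eq.\ 56):} && \nabla \cdot \mathbf{B} = 0, \\
&\text{Amp\`ere's circuital law with displacement current (eq.\ 112):} && \nabla \times \mathbf{H} = \mathbf{J} + \frac{\partial \mathbf{D}}{\partial t}, \\
&\text{Gauss's law (eq.\ 115):} && \nabla \cdot \mathbf{D} = \rho, \\
&\text{Faraday's law, time-variant part (eq.\ 54):} && \nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}, \\
&\text{Lorentz force (cf.\ eq.\ 77 / eq.\ D):} && \mathbf{F} = q\left(\mathbf{E} + \mathbf{v} \times \mathbf{B}\right).
\end{aligned}
\]
```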
Within that context, represented pure vorticity (spin), whereas was a weighted vorticity that was weighted for the density of the vortex sea. Maxwell considered magnetic permeability to be a measure of the density of the vortex sea. Hence the relationship, Magnetic induction current causes a magnetic current density was essentially a rotational analogy to the linear electric current relationship, Electric convection current where ρ is electric charge density. was seen as a kind of magnetic current of vortices aligned in their axial planes, with being the circumferential velocity of the vortices. With representing vortex density, it follows that the product of with vorticity leads to the magnetic field denoted as . The electric current equation can be viewed as a convective current of electric charge that involves linear motion. By analogy, the magnetic equation is an inductive current involving spin. There is no linear motion in the inductive current along the direction of the vector. The magnetic inductive current represents lines of force. In particular, it represents lines of inverse-square law force. The extension of the above considerations confirms that where is to , and where is to , then it necessarily follows from Gauss's law and from the equation of continuity of charge that is to i.e. parallels with , whereas parallels with . A Dynamical Theory of the Electromagnetic Field In 1865 Maxwell published "A dynamical theory of the electromagnetic field" in which he showed that light was an electromagnetic phenomenon. Confusion over the term "Maxwell's equations" sometimes arises because it has been used for a set of eight equations that appeared in Part III of Maxwell's 1865 paper "A dynamical theory of the electromagnetic field", entitled "General equations of the electromagnetic field", and this confusion is compounded by the writing of six of those eight equations as three separate equations (one for each of the Cartesian axes), resulting in twenty equations and twenty unknowns. The eight original Maxwell's equations can be written in the modern form of Heaviside's vector notation as follows: {|class="wikitable" style="text-align: center;" |- !scope="col" width="250"| [] The law of total currents |scope="col" width="250"| |- ! [] The equation of magnetic force | |- ! [] Ampère's circuital law | |- ! [] Electromotive force created by convection, induction, and by static electricity. (This is in effect the Lorentz force) | |- ! [] The electric elasticity equation | |- ! [] Ohm's law | |- ! [] Gauss's law | |- ! [] Equation of continuity | or |- |} Notation is the magnetizing field, which Maxwell called the magnetic intensity. is the current density (with being the total current including displacement current). is the displacement field (called the electric displacement by Maxwell). is the free charge density (called the quantity of free electricity by Maxwell). is the magnetic potential (called the angular impulse by Maxwell). is called the electromotive force by Maxwell. The term electromotive force is nowadays used for voltage, but it is clear from the context that Maxwell's meaning corresponded more to the modern term electric field. is the electric potential (which Maxwell also called electric potential). is the electrical conductivity (Maxwell called the inverse of conductivity the specific resistance, what is now called the resistivity). Equation [], with the term, is effectively the Lorentz force, similarly to equation (77) of his 1861 paper (see above). 
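The eight equations named in the table above are commonly transcribed into Heaviside-style vector notation as follows. This is a reconstruction using the symbol list given after the table; sign and factor conventions vary between sources, so it should be read as a guide rather than a verbatim quotation of the 1865 paper.

```latex
\[
\begin{aligned}
&(\mathrm{A})\ \text{Total currents:} && \mathbf{J}_{\mathrm{tot}} = \mathbf{J} + \frac{\partial \mathbf{D}}{\partial t}, \\
&(\mathrm{B})\ \text{Magnetic force:} && \mu \mathbf{H} = \nabla \times \mathbf{A}, \\
&(\mathrm{C})\ \text{Amp\`ere's circuital law:} && \nabla \times \mathbf{H} = \mathbf{J}_{\mathrm{tot}}, \\
&(\mathrm{D})\ \text{Electromotive force:} && \mathbf{E} = \mu\, \mathbf{v} \times \mathbf{H} - \frac{\partial \mathbf{A}}{\partial t} - \nabla \phi, \\
&(\mathrm{E})\ \text{Electric elasticity:} && \mathbf{E} = \tfrac{1}{\varepsilon}\, \mathbf{D}, \\
&(\mathrm{F})\ \text{Ohm's law:} && \mathbf{E} = \tfrac{1}{\sigma}\, \mathbf{J}, \\
&(\mathrm{G})\ \text{Gauss's law:} && \nabla \cdot \mathbf{D} = \rho, \\
&(\mathrm{H})\ \text{Continuity of charge:} && \nabla \cdot \mathbf{J} = -\frac{\partial \rho}{\partial t}.
\end{aligned}
\]
```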
When Maxwell derives the electromagnetic wave equation in his 1865 paper, he uses equation [] to cater for electromagnetic induction rather than Faraday's law of induction which is used in modern textbooks. (Faraday's law itself does not appear among his equations.) However, Maxwell drops the term from equation [] when he is deriving the electromagnetic wave equation, as he considers the situation only from the rest frame. A Treatise on Electricity and Magnetism In A Treatise on Electricity and Magnetism, an 1873 treatise on electromagnetism written by James Clerk Maxwell, twelve general equations of the electromagnetic field are listed and these include the eight that are listed in the 1865 paper. His theoretical investigations of the electromagnetic field was guided by the notions of work, energy, potential, the principle of conservation of energy, and Lagrangian dynamics. All the principal equations concerning Maxwell's electromagnetic theory are recapitulated in Chapter IX of Part IV. At the end of this chapter, all the equations are listed and set in quaternion form. The first two equations [] and [] relates the electric scalar potential and magnetic vector potential to the electric and magnetic fields. The third equation [] relates the electromagnetic field to electromagnetic force. The rest of the equations [] to [] relates the electromagnetic field to material data: the current and charge densities as well as the material medium. Here the twelve Maxwell's equations have been given, respecting the original notations used by Maxwell. The only difference is that the vectors have been denoted using bold typeface instead of the original Fraktur typeface. For comparison Maxwell's equations in their original quaternion form and their vector form have been given. The and notations are used to denote the scalar and vector parts of quaternion product. {|class="wikitable" style="text-align: center;" |- ! scope="col" style="width: 15em;" | Name ! scope="col" | Quaternion Form ! scope="col" | Vector Form |- ! [] Magnetic induction | ; | ; |- ! [] Electromotive force | | |- ! [] Mechanical force | | |- ! [] Magnetization | | |- ! [] Electric currents | | |- ! [] Ohm's law | | |- ! [] Electric displacement | | |- ! [] Total current | | |- ! [] When magnetization arises from magnetic induction | | |- ! [] Electric volume density | | |- ! [] Magnetic volume density | | |- ! [] When magnetic force can be derived from a potential | | |- |} Unfamiliar notation is the velocity of a point. is total current. is the intensity of magnetization. is the current of conduction. is the electric potential. is the magnetic potential. is the dielectric constant. is electrical conductivity. is electric charge density. is magnetic charge density. In the same chapter, Maxwell points out that the consequence of equation [] is (in vector notation) . Similarly, taking divergence of equation [] gives conservation of electric charge, , which, Maxwell points out, is true only if the total current includes the variation of electric displacement. Lastly, combining equation [] and equation [], the formula is obtained which relates magnetic potential with current. Elsewhere in the Part I of the book, the electric potential is related to charge density as in the absence of motion. 
Presciently, Maxwell also mentions that although some of the equations could be combined to eliminate some quantities, the objective of his list was to express every relation of which there was any knowledge, rather than to obtain compactness of mathematical formulae. Relativity Maxwell's equations were an essential inspiration for the development of special relativity. Possibly the most important aspect was their denial of instantaneous action at a distance. Rather, according to them, forces are propagated at the velocity of light through the electromagnetic field. Maxwell's original equations are based on the idea that light travels through a sea of molecular vortices known as the "luminiferous aether", and that the speed of light is defined relative to the reference frame of this aether. Experiments designed to measure the speed of the Earth through the aether, however, conflicted with this notion. A more theoretical approach was suggested by Hendrik Lorentz along with George FitzGerald and Joseph Larmor. Both Larmor (1897) and Lorentz (1899, 1904) ignored aether motion and derived the Lorentz transformation (so named by Henri Poincaré) as one under which Maxwell's equations were invariant. Poincaré (1900) analyzed the coordination of moving clocks by exchanging light signals. He also established the mathematical group property of the Lorentz transformation (Poincaré 1905). Sometimes this transformation is called the FitzGerald–Lorentz transformation or even the FitzGerald–Lorentz–Einstein transformation. Albert Einstein also dismissed the notion of the aether, and relied on Lorentz's conclusion about the fixed speed of light, independent of the velocity of the observer. He applied the FitzGerald–Lorentz transformation to kinematics, and not just Maxwell's equations. Maxwell's equations played a key role in Einstein's groundbreaking 1905 scientific paper on special relativity. For example, in the opening paragraph of his paper, he began his theory by noting that a description of an electric conductor moving with respect to a magnet must generate a consistent set of fields regardless of whether the force is calculated in the rest frame of the magnet or that of the conductor. The general theory of relativity has also had a close relationship with Maxwell's equations. For example, Theodor Kaluza and Oskar Klein in the 1920s showed that Maxwell's equations could be derived by extending general relativity into five physical dimensions. This strategy of using additional dimensions to unify different forces remains an active area of research in physics. 
Ultraviolet catastrophe
The ultraviolet catastrophe, also called the Rayleigh–Jeans catastrophe, was the prediction of late 19th century to early 20th century classical physics that an ideal black body at thermal equilibrium would emit an unbounded quantity of energy as wavelength decreased into the ultraviolet range. The term "ultraviolet catastrophe" was first used in 1911 by Paul Ehrenfest, but the concept originated with the 1900 statistical derivation of the Rayleigh–Jeans law. The phrase refers to the fact that the Rayleigh–Jeans law, which accurately predicted experimental results at large wavelengths, failed to do so for short wavelengths. The problem was that the theory diverged from empirical observations as the frequencies increased into the ultraviolet region of the electromagnetic spectrum. The divergence was later found to be resolved by a property of quanta as proposed by Max Planck: there can be no fraction of a discrete energy package, which already carries the minimal amount of energy. Since the first use of this term, it has also been used for other predictions of a similar nature, as in quantum electrodynamics and such cases as ultraviolet divergence. Problem The Rayleigh–Jeans law is an approximation, obtained through classical arguments, to the spectral radiance of electromagnetic radiation as a function of wavelength from a black body at a given temperature. For wavelength $\lambda$, it is $B_\lambda(T) = \frac{2 c k_{\mathrm B} T}{\lambda^4}$, where $B_\lambda$ is the spectral radiance, the power emitted per unit emitting area, per steradian, per unit wavelength; $c$ is the speed of light; $k_{\mathrm B}$ is the Boltzmann constant; and $T$ is the temperature in kelvins. For frequency $\nu$, the expression is instead $B_\nu(T) = \frac{2 \nu^2 k_{\mathrm B} T}{c^2}$. This formula is obtained from the equipartition theorem of classical statistical mechanics, which states that all harmonic oscillator modes (degrees of freedom) of a system at equilibrium have an average energy of $k_{\mathrm B} T$. The "ultraviolet catastrophe" is the expression of the fact that the formula misbehaves at higher frequencies, i.e. as $\nu \to \infty$. An example, from Mason's A History of the Sciences, illustrates multi-mode vibration via a piece of string. As a natural vibrator, the string will oscillate with specific modes (the standing waves of a string in harmonic resonance), dependent on the length of the string. In classical physics, a radiator of energy will act as a natural vibrator. Additionally, since each mode will have the same energy, most of the energy in a natural vibrator will be in the smaller wavelengths and higher frequencies, where most of the modes are. According to classical electromagnetism, the number of electromagnetic modes in a 3-dimensional cavity, per unit frequency, is proportional to the square of the frequency. This implies that the radiated power per unit frequency should be proportional to frequency squared. Thus, both the power at a given frequency and the total radiated power are unlimited as higher and higher frequencies are considered: this is unphysical, as the total radiated power of a cavity is not observed to be infinite, a point that was made independently by Einstein, Lord Rayleigh, and Sir James Jeans in 1905. Solution In 1900, Max Planck derived the correct form for the intensity spectral distribution function by making some assumptions that were strange for the time. In particular, Planck assumed that electromagnetic radiation can be emitted or absorbed only in discrete packets, called quanta, of energy $E = h\nu = \frac{hc}{\lambda}$, where $h$ is the Planck constant, $\nu$ is the frequency of light, $c$ is the speed of light, and $\lambda$ is the wavelength of light. 
Applying this quantized energy to the partition function of statistical mechanics, Planck's assumptions led to the correct form of the spectral distribution function: $B_\lambda(\lambda, T) = \frac{2 h c^2}{\lambda^5}\,\frac{1}{e^{hc/(\lambda k_{\mathrm B} T)} - 1}$, where $T$ is the absolute temperature of the body, $k_{\mathrm B}$ is the Boltzmann constant, and $e$ denotes the exponential function. Albert Einstein (in 1905) solved the problem by postulating that Planck's quanta were real physical particles – what we now call photons – not just a mathematical fiction. He modified statistical mechanics in the style of Boltzmann to apply to an ensemble of photons. Einstein's photon had an energy proportional to its frequency, and the hypothesis also explained an unpublished law of Stokes and the photoelectric effect. This published postulate was specifically cited by the Nobel Prize in Physics committee in their decision to award the prize for 1921 to Einstein. 
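The contrast between the two distribution laws can be made concrete numerically. The following sketch is an illustration only (the temperature, sample wavelengths, and function names are chosen here); it evaluates both formulas at 5000 K and shows the Rayleigh–Jeans value growing without bound as the wavelength shrinks toward the ultraviolet, while the Planck value remains finite.

```python
import numpy as np

h = 6.62607015e-34      # Planck constant, J s
c = 2.99792458e8        # speed of light, m/s
k_B = 1.380649e-23      # Boltzmann constant, J/K

def rayleigh_jeans(wavelength, T):
    """Classical spectral radiance: B_lambda = 2 c k_B T / lambda^4."""
    return 2.0 * c * k_B * T / wavelength**4

def planck(wavelength, T):
    """Planck spectral radiance: B_lambda = (2 h c^2 / lambda^5) / (exp(hc/(lambda k_B T)) - 1)."""
    x = h * c / (wavelength * k_B * T)
    return (2.0 * h * c**2 / wavelength**5) / np.expm1(x)

T = 5000.0  # kelvin
for lam in (10e-6, 1e-6, 500e-9, 100e-9, 10e-9):   # from the infrared down into the ultraviolet
    print(f"lambda = {lam:8.1e} m   Rayleigh-Jeans = {rayleigh_jeans(lam, T):10.3e}   Planck = {planck(lam, T):10.3e}")
```

The printed Rayleigh–Jeans values grow as $1/\lambda^4$, whereas the Planck values peak near the visible range and fall toward zero in the ultraviolet, which is the resolution of the catastrophe described above.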
Divergence theorem
In vector calculus, the divergence theorem, also known as Gauss's theorem or Ostrogradsky's theorem, is a theorem relating the flux of a vector field through a closed surface to the divergence of the field in the volume enclosed. More precisely, the divergence theorem states that the surface integral of a vector field over a closed surface, which is called the "flux" through the surface, is equal to the volume integral of the divergence over the region enclosed by the surface. Intuitively, it states that "the sum of all sources of the field in a region (with sinks regarded as negative sources) gives the net flux out of the region". The divergence theorem is an important result for the mathematics of physics and engineering, particularly in electrostatics and fluid dynamics. In these fields, it is usually applied in three dimensions. However, it generalizes to any number of dimensions. In one dimension, it is equivalent to the fundamental theorem of calculus. In two dimensions, it is equivalent to Green's theorem. Explanation using liquid flow Vector fields are often illustrated using the example of the velocity field of a fluid, such as a gas or liquid. A moving liquid has a velocity—a speed and a direction—at each point, which can be represented by a vector, so that the velocity of the liquid at any moment forms a vector field. Consider an imaginary closed surface S inside a body of liquid, enclosing a volume of liquid. The flux of liquid out of the volume at any time is equal to the volume rate of fluid crossing this surface, i.e., the surface integral of the velocity over the surface. Since liquids are incompressible, the amount of liquid inside a closed volume is constant; if there are no sources or sinks inside the volume then the flux of liquid out of S is zero. If the liquid is moving, it may flow into the volume at some points on the surface S and out of the volume at other points, but the amounts flowing in and out at any moment are equal, so the net flux of liquid out of the volume is zero. However if a source of liquid is inside the closed surface, such as a pipe through which liquid is introduced, the additional liquid will exert pressure on the surrounding liquid, causing an outward flow in all directions. This will cause a net outward flow through the surface S. The flux outward through S equals the volume rate of flow of fluid into S from the pipe. Similarly if there is a sink or drain inside S, such as a pipe which drains the liquid off, the external pressure of the liquid will cause a velocity throughout the liquid directed inward toward the location of the drain. The volume rate of flow of liquid inward through the surface S equals the rate of liquid removed by the sink. If there are multiple sources and sinks of liquid inside S, the flux through the surface can be calculated by adding up the volume rate of liquid added by the sources and subtracting the rate of liquid drained off by the sinks. The volume rate of flow of liquid through a source or sink (with the flow through a sink given a negative sign) is equal to the divergence of the velocity field at the pipe mouth, so adding up (integrating) the divergence of the liquid throughout the volume enclosed by S equals the volume rate of flux through S. This is the divergence theorem. The divergence theorem is employed in any conservation law which states that the total volume of all sinks and sources, that is the volume integral of the divergence, is equal to the net flow across the volume's boundary. 
Mathematical statement Suppose is a subset of (in the case of represents a volume in three-dimensional space) which is compact and has a piecewise smooth boundary (also indicated with ). If is a continuously differentiable vector field defined on a neighborhood of , then: The left side is a volume integral over the volume , and the right side is the surface integral over the boundary of the volume . The closed, measurable set is oriented by outward-pointing normals, and is the outward pointing unit normal at almost each point on the boundary . ( may be used as a shorthand for .) In terms of the intuitive description above, the left-hand side of the equation represents the total of the sources in the volume , and the right-hand side represents the total flow across the boundary . Informal derivation The divergence theorem follows from the fact that if a volume is partitioned into separate parts, the flux out of the original volume is equal to the sum of the flux out of each component volume. This is true despite the fact that the new subvolumes have surfaces that were not part of the original volume's surface, because these surfaces are just partitions between two of the subvolumes and the flux through them just passes from one volume to the other and so cancels out when the flux out of the subvolumes is summed. See the diagram. A closed, bounded volume is divided into two volumes and by a surface (green). The flux out of each component region is equal to the sum of the flux through its two faces, so the sum of the flux out of the two parts is where and are the flux out of surfaces and , is the flux through out of volume 1, and is the flux through out of volume 2. The point is that surface is part of the surface of both volumes. The "outward" direction of the normal vector is opposite for each volume, so the flux out of one through is equal to the negative of the flux out of the other so these two fluxes cancel in the sum. Therefore: Since the union of surfaces and is This principle applies to a volume divided into any number of parts, as shown in the diagram. Since the integral over each internal partition (green surfaces) appears with opposite signs in the flux of the two adjacent volumes they cancel out, and the only contribution to the flux is the integral over the external surfaces (grey). Since the external surfaces of all the component volumes equal the original surface. The flux out of each volume is the surface integral of the vector field over the surface The goal is to divide the original volume into infinitely many infinitesimal volumes. As the volume is divided into smaller and smaller parts, the surface integral on the right, the flux out of each subvolume, approaches zero because the surface area approaches zero. However, from the definition of divergence, the ratio of flux to volume, , the part in parentheses below, does not in general vanish but approaches the divergence as the volume approaches zero. As long as the vector field has continuous derivatives, the sum above holds even in the limit when the volume is divided into infinitely small increments As approaches zero volume, it becomes the infinitesimal , the part in parentheses becomes the divergence, and the sum becomes a volume integral over Since this derivation is coordinate free, it shows that the divergence does not depend on the coordinates used. Proofs For bounded open subsets of Euclidean space We are going to prove the following: Proof of Theorem. 
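The theorem stated at the beginning of this section, and referred to in the proofs below, can be written compactly as follows. This is a standard modern rendering, with $V$ the region, $\partial V$ its boundary, $\mathbf{F}$ the continuously differentiable vector field, and $\hat{\mathbf{n}}$ the outward unit normal.

```latex
\[
\int_{V} \left(\nabla \cdot \mathbf{F}\right)\, dV \;=\; \oint_{\partial V} \mathbf{F} \cdot \hat{\mathbf{n}}\; dS .
\]
```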
For compact Riemannian manifolds with boundary We are going to prove the following: Proof of Theorem. We use the Einstein summation convention. By using a partition of unity, we may assume that and have compact support in a coordinate patch . First consider the case where the patch is disjoint from . Then is identified with an open subset of and integration by parts produces no boundary terms: In the last equality we used the Voss-Weyl coordinate formula for the divergence, although the preceding identity could be used to define as the formal adjoint of . Now suppose intersects . Then is identified with an open set in . We zero extend and to and perform integration by parts to obtain where . By a variant of the straightening theorem for vector fields, we may choose so that is the inward unit normal at . In this case is the volume element on and the above formula reads This completes the proof. Corollaries By replacing in the divergence theorem with specific forms, other useful identities can be derived (cf. vector identities). With for a scalar function and a vector field , A special case of this is , in which case the theorem is the basis for Green's identities. With for two vector fields and , where denotes a cross product, With for two vector fields and , where denotes a dot product, With for a scalar function and vector field c: The last term on the right vanishes for constant or any divergence free (solenoidal) vector field, e.g. Incompressible flows without sources or sinks such as phase change or chemical reactions etc. In particular, taking to be constant: With for vector field and constant vector c: By reordering the triple product on the right hand side and taking out the constant vector of the integral, Hence, Example Suppose we wish to evaluate where is the unit sphere defined by and is the vector field The direct computation of this integral is quite difficult, but we can simplify the derivation of the result using the divergence theorem, because the divergence theorem says that the integral is equal to: where is the unit ball: Since the function is positive in one hemisphere of and negative in the other, in an equal and opposite way, its total integral over is zero. The same is true for : Therefore, because the unit ball has volume . Applications Differential and integral forms of physical laws As a result of the divergence theorem, a host of physical laws can be written in both a differential form (where one quantity is the divergence of another) and an integral form (where the flux of one quantity through a closed surface is equal to another quantity). Three examples are Gauss's law (in electrostatics), Gauss's law for magnetism, and Gauss's law for gravity. Continuity equations Continuity equations offer more examples of laws with both differential and integral forms, related to each other by the divergence theorem. In fluid dynamics, electromagnetism, quantum mechanics, relativity theory, and a number of other fields, there are continuity equations that describe the conservation of mass, momentum, energy, probability, or other quantities. Generically, these equations state that the divergence of the flow of the conserved quantity is equal to the distribution of sources or sinks of that quantity. The divergence theorem states that any such continuity equation can be written in a differential form (in terms of a divergence) and an integral form (in terms of a flux). 
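As an illustration of the statement above, the following sketch checks the theorem symbolically on the unit ball for a hypothetical test field $\mathbf{F} = (x^3,\, y^3,\, z^3)$; the field and the symbolic approach are chosen here for illustration and are not the article's own worked example.

```python
import sympy as sp

r, theta, phi = sp.symbols('r theta phi', nonnegative=True)

# Spherical coordinates covering the unit ball / unit sphere.
x = r * sp.sin(theta) * sp.cos(phi)
y = r * sp.sin(theta) * sp.sin(phi)
z = r * sp.cos(theta)

# Test field F = (x^3, y^3, z^3), so div F = 3*(x^2 + y^2 + z^2) = 3*r^2.
div_F = 3 * (x**2 + y**2 + z**2)

# Volume integral of div F over the unit ball (volume element r^2 sin(theta) dr dtheta dphi).
volume_integral = sp.integrate(div_F * r**2 * sp.sin(theta),
                               (r, 0, 1), (theta, 0, sp.pi), (phi, 0, 2 * sp.pi))

# Flux through the unit sphere: on r = 1 the outward normal is (x, y, z),
# so F . n = x^4 + y^4 + z^4, with surface element sin(theta) dtheta dphi.
xs, ys, zs = (expr.subs(r, 1) for expr in (x, y, z))
flux_integral = sp.simplify(sp.integrate((xs**4 + ys**4 + zs**4) * sp.sin(theta),
                                         (theta, 0, sp.pi), (phi, 0, 2 * sp.pi)))

print(volume_integral, flux_integral)  # both evaluate to 12*pi/5
```

Both integrals come out to $12\pi/5$, as the divergence theorem requires.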
Inverse-square laws Any inverse-square law can instead be written in a Gauss's law-type form (with a differential and integral form, as described above). Two examples are Gauss's law (in electrostatics), which follows from the inverse-square Coulomb's law, and Gauss's law for gravity, which follows from the inverse-square Newton's law of universal gravitation. The derivation of the Gauss's law-type equation from the inverse-square formulation, or vice versa, is exactly the same in both cases; see either of those articles for details. History Joseph-Louis Lagrange introduced the notion of surface integrals in 1760 and again in more general terms in 1811, in the second edition of his Mécanique Analytique. Lagrange employed surface integrals in his work on fluid mechanics. He discovered the divergence theorem in 1762. Carl Friedrich Gauss was also using surface integrals while working on the gravitational attraction of an elliptical spheroid in 1813, when he proved special cases of the divergence theorem. He proved additional special cases in 1833 and 1839. But it was Mikhail Ostrogradsky who gave the first proof of the general theorem, in 1826, as part of his investigation of heat flow. Special cases were proven by George Green in 1828 in An Essay on the Application of Mathematical Analysis to the Theories of Electricity and Magnetism, by Siméon Denis Poisson in 1824 in a paper on elasticity, and by Frédéric Sarrus in 1828 in his work on floating bodies. Worked examples Example 1 To verify the planar variant of the divergence theorem for a region : and the vector field: The boundary of is the unit circle, , that can be represented parametrically by: such that the parameter measures the arc length from the starting point to the given point on the curve. Then a vector equation of is At a point on : Therefore, Because , we can evaluate and because . Thus Example 2 Suppose we wish to evaluate the flux of the following vector field defined by bounded by the following inequalities: By the divergence theorem, We now need to determine the divergence of . If is a three-dimensional vector field, then the divergence of is given by . Thus, we can set up the flux integral as follows: Now that we have set up the integral, we can evaluate it. Generalizations Multiple dimensions One can use the generalised Stokes' theorem to equate the -dimensional volume integral of the divergence of a vector field over a region to the -dimensional surface integral of over the boundary of : This equation is also known as the divergence theorem. When , this is equivalent to Green's theorem. When , it reduces to the fundamental theorem of calculus, part 2. Tensor fields Writing the theorem in Einstein notation: suggestively, replacing the vector field with a rank- tensor field , this can be generalized to: where on each side, tensor contraction occurs for at least one index. This form of the theorem is still in 3d; each index takes the values 1, 2, and 3. It can be generalized further still to higher (or lower) dimensions (for example to 4d spacetime in general relativity). 
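The generalizations discussed above are commonly written, under the usual smoothness assumptions, in the following standard forms (an $n$-dimensional region $\Omega$ with boundary $\partial\Omega$ on the left, and a rank-$k$ tensor field $T$ in index notation on the right; this rendering is supplied here for reference).

```latex
\[
\int_{\Omega} \nabla \cdot \mathbf{F}\; dV
  \;=\; \oint_{\partial\Omega} \mathbf{F} \cdot \hat{\mathbf{n}}\; dS,
\qquad
\int_{\Omega} \frac{\partial T_{i_1 \cdots q \cdots i_k}}{\partial x_q}\, dV
  \;=\; \oint_{\partial\Omega} T_{i_1 \cdots q \cdots i_k}\, n_q\, dS .
\]
```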
Adiabatic process
An adiabatic process is a type of thermodynamic process that occurs without transferring heat or mass between the thermodynamic system and its environment. Unlike an isothermal process, an adiabatic process transfers energy to the surroundings only as work. As a key concept in thermodynamics, the adiabatic process supports the theory that explains the first law of thermodynamics. The opposite term to "adiabatic" is diabatic. Some chemical and physical processes occur too rapidly for energy to enter or leave the system as heat, allowing a convenient "adiabatic approximation". For example, the adiabatic flame temperature uses this approximation to calculate the upper limit of flame temperature by assuming combustion loses no heat to its surroundings. In meteorology, adiabatic expansion and cooling of moist air, which can be triggered by winds flowing up and over a mountain for example, can cause the water vapor pressure to exceed the saturation vapor pressure. Expansion and cooling beyond the saturation vapor pressure is often idealized as a pseudo-adiabatic process whereby excess vapor instantly precipitates into water droplets. The change in temperature of air undergoing pseudo-adiabatic expansion differs from that of air undergoing adiabatic expansion because latent heat is released by precipitation. Description A process without transfer of heat to or from a system, so that $Q = 0$, is called adiabatic, and such a system is said to be adiabatically isolated. The simplifying assumption frequently made is that a process is adiabatic. For example, the compression of a gas within a cylinder of an engine is assumed to occur so rapidly that on the time scale of the compression process, little of the system's energy can be transferred out as heat to the surroundings. Even though the cylinders are not insulated and are quite conductive, that process is idealized to be adiabatic. The same can be said to be true for the expansion process of such a system. The assumption of adiabatic isolation is useful and often combined with other such idealizations to calculate a good first approximation of a system's behaviour. For example, according to Laplace, when sound travels in a gas, there is no time for heat conduction in the medium, and so the propagation of sound is adiabatic. For such an adiabatic process, the modulus of elasticity (Young's modulus) can be expressed as $\gamma P$, where $\gamma$ is the ratio of specific heats at constant pressure and at constant volume and $P$ is the pressure of the gas. Various applications of the adiabatic assumption For a closed system, one may write the first law of thermodynamics as $\Delta U = Q - W$, where $\Delta U$ denotes the change of the system's internal energy, $Q$ the quantity of energy added to it as heat, and $W$ the work done by the system on its surroundings. If the system has such rigid walls that work cannot be transferred in or out, and the walls are not adiabatic and energy is added in the form of heat, and there is no phase change, then the temperature of the system will rise. If the system has such rigid walls that pressure–volume work cannot be done, but the walls are adiabatic, and energy is added as isochoric (constant volume) work in the form of friction or the stirring of a viscous fluid within the system, and there is no phase change, then the temperature of the system will rise. 
If the system walls are adiabatic but not rigid, and, in a fictive idealized process, energy is added to the system in the form of frictionless, non-viscous pressure–volume work, and there is no phase change, then the temperature of the system will rise. Such a process is called an isentropic process and is said to be "reversible". Ideally, if the process were reversed the energy could be recovered entirely as work done by the system. If the system contains a compressible gas and is reduced in volume, the uncertainty of the position of the gas is reduced, and seemingly would reduce the entropy of the system, but the temperature of the system will rise as the process is isentropic. Should the work be added in such a way that friction or viscous forces are operating within the system, then the process is not isentropic, and if there is no phase change, then the temperature of the system will rise, the process is said to be "irreversible", and the work added to the system is not entirely recoverable in the form of work. If the walls of a system are not adiabatic, and energy is transferred in as heat, entropy is transferred into the system with the heat. Such a process is neither adiabatic nor isentropic, having , and according to the second law of thermodynamics. Naturally occurring adiabatic processes are irreversible (entropy is produced). The transfer of energy as work into an adiabatically isolated system can be imagined as being of two idealized extreme kinds. In one such kind, no entropy is produced within the system (no friction, viscous dissipation, etc.), and the work is only pressure-volume work (denoted by ). In nature, this ideal kind occurs only approximately because it demands an infinitely slow process and no sources of dissipation. The other extreme kind of work is isochoric work, for which energy is added as work solely through friction or viscous dissipation within the system. A stirrer that transfers energy to a viscous fluid of an adiabatically isolated system with rigid walls, without phase change, will cause a rise in temperature of the fluid, but that work is not recoverable. Isochoric work is irreversible. The second law of thermodynamics observes that a natural process, of transfer of energy as work, always consists at least of isochoric work and often both of these extreme kinds of work. Every natural process, adiabatic or not, is irreversible, with , as friction or viscosity are always present to some extent. Adiabatic compression and expansion The adiabatic compression of a gas causes a rise in temperature of the gas. Adiabatic expansion against pressure, or a spring, causes a drop in temperature. In contrast, free expansion is an isothermal process for an ideal gas. Adiabatic compression occurs when the pressure of a gas is increased by work done on it by its surroundings, e.g., a piston compressing a gas contained within a cylinder and raising the temperature where in many practical situations heat conduction through walls can be slow compared with the compression time. This finds practical application in diesel engines which rely on the lack of heat dissipation during the compression stroke to elevate the fuel vapor temperature sufficiently to ignite it. Adiabatic compression occurs in the Earth's atmosphere when an air mass descends, for example, in a Katabatic wind, Foehn wind, or Chinook wind flowing downhill over a mountain range. When a parcel of air descends, the pressure on the parcel increases. 
Because of this increase in pressure, the parcel's volume decreases and its temperature increases as work is done on the parcel of air, thus increasing its internal energy, which manifests itself by a rise in the temperature of that mass of air. The parcel of air can only slowly dissipate the energy by conduction or radiation (heat), and to a first approximation it can be considered adiabatically isolated and the process an adiabatic process. Adiabatic expansion occurs when the pressure on an adiabatically isolated system is decreased, allowing it to expand in size, thus causing it to do work on its surroundings. When the pressure applied on a parcel of gas is reduced, the gas in the parcel is allowed to expand; as the volume increases, the temperature falls as its internal energy decreases. Adiabatic expansion occurs in the Earth's atmosphere with orographic lifting and lee waves, and this can form pilei or lenticular clouds. Due in part to adiabatic expansion in mountainous areas, snowfall infrequently occurs in some parts of the Sahara desert. Adiabatic expansion does not have to involve a fluid. One technique used to reach very low temperatures (thousandths and even millionths of a degree above absolute zero) is via adiabatic demagnetisation, where the change in magnetic field on a magnetic material is used to provide adiabatic expansion. Also, the contents of an expanding universe can be described (to first order) as an adiabatically expanding fluid. (See heat death of the universe.) Rising magma also undergoes adiabatic expansion before eruption, particularly significant in the case of magmas that rise quickly from great depths such as kimberlites. In the Earth's convecting mantle (the asthenosphere) beneath the lithosphere, the mantle temperature is approximately an adiabat. The slight decrease in temperature with shallowing depth is due to the decrease in pressure the shallower the material is in the Earth. Such temperature changes can be quantified using the ideal gas law, or the hydrostatic equation for atmospheric processes. In practice, no process is truly adiabatic. Many processes rely on a large difference in time scales of the process of interest and the rate of heat dissipation across a system boundary, and thus are approximated by using an adiabatic assumption. There is always some heat loss, as no perfect insulators exist. Ideal gas (reversible process) The mathematical equation for an ideal gas undergoing a reversible (i.e., no entropy generation) adiabatic process can be represented by the polytropic process equation where is pressure, is volume, and is the adiabatic index or heat capacity ratio defined as Here is the specific heat for constant pressure, is the specific heat for constant volume, and is the number of degrees of freedom (3 for a monatomic gas, 5 for a diatomic gas or a gas of linear molecules such as carbon dioxide). For a monatomic ideal gas, , and for a diatomic gas (such as nitrogen and oxygen, the main components of air), . Note that the above formula is only applicable to classical ideal gases (that is, gases far above absolute zero temperature) and not Bose–Einstein or Fermi gases. One can also use the ideal gas law to rewrite the above relationship between and as where T is the absolute or thermodynamic temperature. Example of adiabatic compression The compression stroke in a gasoline engine can be used as an example of adiabatic compression. 
The model assumptions are: the uncompressed volume of the cylinder is one litre (1 L = 1000 cm3 = 0.001 m3); the gas within is the air consisting of molecular nitrogen and oxygen only (thus a diatomic gas with 5 degrees of freedom, and so ); the compression ratio of the engine is 10:1 (that is, the 1 L volume of uncompressed gas is reduced to 0.1 L by the piston); and the uncompressed gas is at approximately room temperature and pressure (a warm room temperature of ~27 °C, or 300 K, and a pressure of 1 bar = 100 kPa, i.e. typical sea-level atmospheric pressure). so the adiabatic constant for this example is about 6.31 Pa m4.2. The gas is now compressed to a 0.1 L (0.0001 m3) volume, which we assume happens quickly enough that no heat enters or leaves the gas through the walls. The adiabatic constant remains the same, but with the resulting pressure unknown We can now solve for the final pressure or 25.1 bar. This pressure increase is more than a simple 10:1 compression ratio would indicate; this is because the gas is not only compressed, but the work done to compress the gas also increases its internal energy, which manifests itself by a rise in the gas temperature and an additional rise in pressure above what would result from a simplistic calculation of 10 times the original pressure. We can solve for the temperature of the compressed gas in the engine cylinder as well, using the ideal gas law, PV = nRT (n is amount of gas in moles and R the gas constant for that gas). Our initial conditions being 100 kPa of pressure, 1 L volume, and 300 K of temperature, our experimental constant (nR) is: We know the compressed gas has  = 0.1 L and  = , so we can solve for temperature: That is a final temperature of 753 K, or 479 °C, or 896 °F, well above the ignition point of many fuels. This is why a high-compression engine requires fuels specially formulated to not self-ignite (which would cause engine knocking when operated under these conditions of temperature and pressure), or that a supercharger with an intercooler to provide a pressure boost but with a lower temperature rise would be advantageous. A diesel engine operates under even more extreme conditions, with compression ratios of 16:1 or more being typical, in order to provide a very high gas pressure, which ensures immediate ignition of the injected fuel. Adiabatic free expansion of a gas For an adiabatic free expansion of an ideal gas, the gas is contained in an insulated container and then allowed to expand in a vacuum. Because there is no external pressure for the gas to expand against, the work done by or on the system is zero. Since this process does not involve any heat transfer or work, the first law of thermodynamics then implies that the net internal energy change of the system is zero. For an ideal gas, the temperature remains constant because the internal energy only depends on temperature in that case. Since at constant temperature, the entropy is proportional to the volume, the entropy increases in this case, therefore this process is irreversible. Derivation of P–V relation for adiabatic compression and expansion The definition of an adiabatic process is that heat transfer to the system is zero, . Then, according to the first law of thermodynamics, where is the change in the internal energy of the system and is work done by the system. Any work done must be done at the expense of internal energy , since no heat is being supplied from the surroundings. 
Pressure–volume work done by the system is defined as However, does not remain constant during an adiabatic process but instead changes along with . It is desired to know how the values of and relate to each other as the adiabatic process proceeds. For an ideal gas (recall ideal gas law ) the internal energy is given by where is the number of degrees of freedom divided by 2, is the universal gas constant and is the number of moles in the system (a constant). Differentiating equation (a3) yields Equation (a4) is often expressed as because . Now substitute equations (a2) and (a4) into equation (a1) to obtain factorize : and divide both sides by : After integrating the left and right sides from to and from to and changing the sides respectively, Exponentiate both sides, substitute with , the heat capacity ratio and eliminate the negative sign to obtain Therefore, and At the same time, the work done by the pressure–volume changes as a result from this process, is equal to Since we require the process to be adiabatic, the following equation needs to be true By the previous derivation, Rearranging (b4) gives Substituting this into (b2) gives Integrating, we obtain the expression for work, Substituting in the second term, Rearranging, Using the ideal gas law and assuming a constant molar quantity (as often happens in practical cases), By the continuous formula, or Substituting into the previous expression for , Substituting this expression and (b1) in (b3) gives Simplifying, Derivation of discrete formula and work expression The change in internal energy of a system, measured from state 1 to state 2, is equal to At the same time, the work done by the pressure–volume changes as a result from this process, is equal to Since we require the process to be adiabatic, the following equation needs to be true By the previous derivation, Rearranging (c4) gives Substituting this into (c2) gives Integrating we obtain the expression for work, Substituting in second term, Rearranging, Using the ideal gas law and assuming a constant molar quantity (as often happens in practical cases), By the continuous formula, or Substituting into the previous expression for , Substituting this expression and (c1) in (c3) gives Simplifying, Graphing adiabats An adiabat is a curve of constant entropy in a diagram. Some properties of adiabats on a P–V diagram are indicated. These properties may be read from the classical behaviour of ideal gases, except in the region where PV becomes small (low temperature), where quantum effects become important. Every adiabat asymptotically approaches both the V axis and the P axis (just like isotherms). Each adiabat intersects each isotherm exactly once. An adiabat looks similar to an isotherm, except that during an expansion, an adiabat loses more pressure than an isotherm, so it has a steeper inclination (more vertical). If isotherms are concave towards the north-east direction (45° from V-axis), then adiabats are concave towards the east north-east (31° from V-axis). If adiabats and isotherms are graphed at regular intervals of entropy and temperature, respectively (like altitude on a contour map), then as the eye moves towards the axes (towards the south-west), it sees the density of isotherms stay constant, but it sees the density of adiabats grow. The exception is very near absolute zero, where the density of adiabats drops sharply and they become rare (see Nernst's theorem). 
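The displayed steps of the derivation above do not reproduce in this copy; for reference, the main chain for an ideal gas with f degrees of freedom, in standard notation, is:

```latex
\begin{gather*}
dU = -P\,dV \quad (\delta Q = 0),
\qquad
dU = \tfrac{f}{2}\,nR\,dT = \tfrac{f}{2}\,d(PV) = \tfrac{f}{2}\,(P\,dV + V\,dP), \\
\left(\tfrac{f}{2}+1\right) P\,dV + \tfrac{f}{2}\, V\,dP = 0
\;\Longrightarrow\;
\gamma\,\frac{dV}{V} + \frac{dP}{P} = 0,
\qquad
\gamma = \frac{f+2}{f}, \\
P V^{\gamma} = \text{constant},
\qquad
W_{1\to 2} = \int_{V_1}^{V_2} P\,dV = \frac{P_1 V_1 - P_2 V_2}{\gamma - 1}.
\end{gather*}
```

The last expression is the standard closed form for the work done by the gas along the adiabat, consistent with the work expressions derived above.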
Etymology The term adiabatic is an anglicization of the Greek term ἀδιάβατος "impassable" (used by Xenophon of rivers). It is used in the thermodynamic sense by Rankine (1866), and adopted by Maxwell in 1871 (explicitly attributing the term to Rankine). The etymological origin corresponds here to an impossibility of transfer of energy as heat and of transfer of matter across the wall. The Greek word ἀδιάβατος is formed from privative ἀ- ("not") and διαβατός, "passable", in turn deriving from διά ("through"), and βαῖνειν ("to walk, go, come"). Furthermore, in atmospheric thermodynamics, a diabatic process is one in which heat is exchanged. An adiabatic process is the opposite – a process in which no heat is exchanged. Conceptual significance in thermodynamic theory The adiabatic process has been important for thermodynamics since its early days. It was important in the work of Joule because it provided a way of nearly directly relating quantities of heat and work. Energy can enter or leave a thermodynamic system enclosed by walls that prevent mass transfer only as heat or work. Therefore, a quantity of work in such a system can be related almost directly to an equivalent quantity of heat in a cycle of two limbs. The first limb is an isochoric adiabatic work process increasing the system's internal energy; the second, an isochoric and workless heat transfer returning the system to its original state. Accordingly, Rankine measured quantity of heat in units of work, rather than as a calorimetric quantity. In 1854, Rankine used a quantity that he called "the thermodynamic function" that later was called entropy, and at that time he wrote also of the "curve of no transmission of heat", which he later called an adiabatic curve. Besides its two isothermal limbs, Carnot's cycle has two adiabatic limbs. For the foundations of thermodynamics, the conceptual importance of this was emphasized by Bryan, by Carathéodory, and by Born. The reason is that calorimetry presupposes a type of temperature as already defined before the statement of the first law of thermodynamics, such as one based on empirical scales. Such a presupposition involves making the distinction between empirical temperature and absolute temperature. Rather, the definition of absolute thermodynamic temperature is best left till the second law is available as a conceptual basis. In the eighteenth century, the law of conservation of energy was not yet fully formulated or established, and the nature of heat was debated. One approach to these problems was to regard heat, measured by calorimetry, as a primary substance that is conserved in quantity. By the middle of the nineteenth century, it was recognized as a form of energy, and the law of conservation of energy was thereby also recognized. The view that eventually established itself, and is currently regarded as right, is that the law of conservation of energy is a primary axiom, and that heat is to be analyzed as consequential. In this light, heat cannot be a component of the total energy of a single body because it is not a state variable but, rather, a variable that describes a transfer between two bodies. The adiabatic process is important because it is a logical ingredient of this current view. Divergent usages of the word adiabatic This present article is written from the viewpoint of macroscopic thermodynamics, and the word adiabatic is used in this article in the traditional way of thermodynamics, introduced by Rankine. 
It is pointed out in the present article that, for example, if a compression of a gas is rapid, then there is little time for heat transfer to occur, even when the gas is not adiabatically isolated by a definite wall. In this sense, a rapid compression of a gas is sometimes approximately or loosely said to be adiabatic, though often far from isentropic, even when the gas is not adiabatically isolated by a definite wall. Some authors, like Pippard, recommend using "adiathermal" to refer to processes where no heat-exchange occurs (such as Joule expansion), and "adiabatic" to reversible quasi-static adiathermal processes (so that rapid compression of a gas is not "adiabatic"). And Laidler has summarized the complicated etymology of "adiabatic". Quantum mechanics and quantum statistical mechanics, however, use the word adiabatic in a very different sense, one that can at times seem almost opposite to the classical thermodynamic sense. In quantum theory, the word adiabatic can mean something perhaps near isentropic, or perhaps near quasi-static, but the usage of the word is very different between the two disciplines. On the one hand, in quantum theory, if a perturbative element of compressive work is done almost infinitely slowly (that is to say quasi-statically), it is said to have been done adiabatically. The idea is that the shapes of the eigenfunctions change slowly and continuously, so that no quantum jump is triggered, and the change is virtually reversible. While the occupation numbers are unchanged, nevertheless there is change in the energy levels of one-to-one corresponding, pre- and post-compression, eigenstates. Thus a perturbative element of work has been done without heat transfer and without introduction of random change within the system. For example, Max Born writes On the other hand, in quantum theory, if a perturbative element of compressive work is done rapidly, it changes the occupation numbers and energies of the eigenstates in proportion to the transition moment integral and in accordance with time-dependent perturbation theory, as well as perturbing the functional form of the eigenstates themselves. In that theory, such a rapid change is said not to be adiabatic, and the contrary word diabatic is applied to it. Recent research suggests that the power absorbed from the perturbation corresponds to the rate of these non-adiabatic transitions. This corresponds to the classical process of energy transfer in the form of heat, but with the relative time scales reversed in the quantum case. Quantum adiabatic processes occur over relatively long time scales, while classical adiabatic processes occur over relatively short time scales. It should also be noted that the concept of 'heat' (in reference to the quantity of thermal energy transferred) breaks down at the quantum level, and the specific form of energy (typically electromagnetic) must be considered instead. The small or negligible absorption of energy from the perturbation in a quantum adiabatic process provides a good justification for identifying it as the quantum analogue of adiabatic processes in classical thermodynamics, and for the reuse of the term. In classical thermodynamics, such a rapid change would still be called adiabatic because the system is adiabatically isolated, and there is no transfer of energy as heat. The strong irreversibility of the change, due to viscosity or other entropy production, does not impinge on this classical usage. 
Thus for a mass of gas, in macroscopic thermodynamics, words are so used that a compression is sometimes loosely or approximately said to be adiabatic if it is rapid enough to avoid significant heat transfer, even if the system is not adiabatically isolated. But in quantum statistical theory, a compression is not called adiabatic if it is rapid, even if the system is adiabatically isolated in the classical thermodynamic sense of the term. The words are used differently in the two disciplines, as stated just above. See also Fire piston Heat burst Related physics topics First law of thermodynamics Entropy (classical thermodynamics) Adiabatic conductivity Adiabatic lapse rate Total air temperature Magnetic refrigeration Berry phase Related thermodynamic processes Cyclic process Isobaric process Isenthalpic process Isentropic process Isochoric process Isothermal process Polytropic process Quasistatic process References General Nave, Carl Rod. "Adiabatic Processes". HyperPhysics. Thorngren, Dr. Jane R. "Adiabatic Processes". Daphne – A Palomar College Web Server, 21 July 1995. . External links Article in HyperPhysics Encyclopaedia Thermodynamic processes Atmospheric thermodynamics Entropy
Ehrenfest theorem
The Ehrenfest theorem, named after Austrian theoretical physicist Paul Ehrenfest, relates the time derivative of the expectation values of the position and momentum operators x and p to the expectation value of the force on a massive particle moving in a scalar potential , The Ehrenfest theorem is a special case of a more general relation between the expectation of any quantum mechanical operator and the expectation of the commutator of that operator with the Hamiltonian of the system where is some quantum mechanical operator and is its expectation value. It is most apparent in the Heisenberg picture of quantum mechanics, where it amounts to just the expectation value of the Heisenberg equation of motion. It provides mathematical support to the correspondence principle. The reason is that Ehrenfest's theorem is closely related to Liouville's theorem of Hamiltonian mechanics, which involves the Poisson bracket instead of a commutator. Dirac's rule of thumb suggests that statements in quantum mechanics which contain a commutator correspond to statements in classical mechanics where the commutator is supplanted by a Poisson bracket multiplied by . This makes the operator expectation values obey corresponding classical equations of motion, provided the Hamiltonian is at most quadratic in the coordinates and momenta. Otherwise, the evolution equations still may hold approximately, provided fluctuations are small. Relation to classical physics Although, at first glance, it might appear that the Ehrenfest theorem is saying that the quantum mechanical expectation values obey Newton’s classical equations of motion, this is not actually the case. If the pair were to satisfy Newton's second law, the right-hand side of the second equation would have to be which is typically not the same as If for example, the potential is cubic, (i.e. proportional to ), then is quadratic (proportional to ). This means, in the case of Newton's second law, the right side would be in the form of , while in the Ehrenfest theorem it is in the form of . The difference between these two quantities is the square of the uncertainty in and is therefore nonzero. An exception occurs in case when the classical equations of motion are linear, that is, when is quadratic and is linear. In that special case, and do agree. Thus, for the case of a quantum harmonic oscillator, the expected position and expected momentum do exactly follow the classical trajectories. For general systems, if the wave function is highly concentrated around a point , then and will be almost the same, since both will be approximately equal to . In that case, the expected position and expected momentum will approximately follow the classical trajectories, at least for as long as the wave function remains localized in position. Derivation in the Schrödinger picture Suppose some system is presently in a quantum state . If we want to know the instantaneous time derivative of the expectation value of , that is, by definition where we are integrating over all of space. If we apply the Schrödinger equation, we find that By taking the complex conjugate we find Note , because the Hamiltonian is Hermitian. Placing this into the above equation we have Often (but not always) the operator is time-independent so that its derivative is zero and we can ignore the last term. Derivation in the Heisenberg picture In the Heisenberg picture, the derivation is straightforward. The Heisenberg picture moves the time dependence of the system to operators instead of state vectors. 
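Before turning to the Heisenberg-picture derivation, the exact agreement claimed above for the harmonic oscillator can be checked numerically. This is a sketch only, not from the source: it assumes hbar = m = omega = 1, a truncated Fock basis of illustrative size N = 60, and a coherent state of amplitude alpha = 1.5, for which Ehrenfest's relations d<x>/dt = <p> and d<p>/dt = -<x> imply <x>(t) = sqrt(2) alpha cos(t).

```python
import numpy as np
from math import factorial
from scipy.linalg import expm

N = 60                                          # truncated Fock-space dimension
a = np.diag(np.sqrt(np.arange(1, N)), k=1)      # annihilation operator, a|n> = sqrt(n)|n-1>
x = (a + a.conj().T) / np.sqrt(2)               # position operator (hbar = m = omega = 1)
H = a.conj().T @ a + 0.5 * np.eye(N)            # harmonic-oscillator Hamiltonian

alpha = 1.5                                     # coherent-state amplitude (illustrative)
psi0 = np.array([np.exp(-alpha**2 / 2) * alpha**k / np.sqrt(factorial(k)) for k in range(N)],
                dtype=complex)

for t in np.linspace(0.0, 2 * np.pi, 5):
    psi_t = expm(-1j * H * t) @ psi0            # unitary time evolution of the state
    x_mean = (psi_t.conj() @ x @ psi_t).real    # expectation value <x>(t)
    print(f"t = {t:4.2f}   <x> = {x_mean:+.4f}   classical = {np.sqrt(2) * alpha * np.cos(t):+.4f}")
```

Because the potential is quadratic, the two printed columns agree to numerical precision; for an anharmonic potential the expectation values would gradually depart from the classical trajectory as the wave packet spreads, as discussed above.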
Starting with the Heisenberg equation of motion, Ehrenfest's theorem follows simply upon projecting the Heisenberg equation onto from the right and from the left, or taking the expectation value, so One may pull the out of the first term, since the state vectors are no longer time dependent in the Heisenberg Picture. Therefore, General example For the very general example of a massive particle moving in a potential, the Hamiltonian is simply where is the position of the particle. Suppose we wanted to know the instantaneous change in the expectation of the momentum . Using Ehrenfest's theorem, we have since the operator commutes with itself and has no time dependence. By expanding the right-hand-side, replacing by , we get After applying the product rule on the second term, we have As explained in the introduction, this result does not say that the pair satisfies Newton's second law, because the right-hand side of the formula is rather than . Nevertheless, as explained in the introduction, for states that are highly localized in space, the expected position and momentum will approximately follow classical trajectories, which may be understood as an instance of the correspondence principle. Similarly, we can obtain the instantaneous change in the position expectation value. This result is actually in exact accord with the classical equation. Derivation of the Schrödinger equation from the Ehrenfest theorems It was established above that the Ehrenfest theorems are consequences of the Schrödinger equation. However, the converse is also true: the Schrödinger equation can be inferred from the Ehrenfest theorems. We begin from Application of the product rule leads to Here, apply Stone's theorem, using to denote the quantum generator of time translation. The next step is to show that this is the same as the Hamiltonian operator used in quantum mechanics. Stone's theorem implies where was introduced as a normalization constant to the balance dimensionality. Since these identities must be valid for any initial state, the averaging can be dropped and the system of commutator equations for are derived: Assuming that observables of the coordinate and momentum obey the canonical commutation relation . Setting , the commutator equations can be converted into the differential equations whose solution is the familiar quantum Hamiltonian Whence, the Schrödinger equation was derived from the Ehrenfest theorems by assuming the canonical commutation relation between the coordinate and momentum. If one assumes that the coordinate and momentum commute, the same computational method leads to the Koopman–von Neumann classical mechanics, which is the Hilbert space formulation of classical mechanics. Therefore, this derivation as well as the derivation of the Koopman–von Neumann mechanics, shows that the essential difference between quantum and classical mechanics reduces to the value of the commutator . The implications of the Ehrenfest theorem for systems with classically chaotic dynamics are discussed at Scholarpedia article Ehrenfest time and chaos. Due to exponential instability of classical trajectories the Ehrenfest time, on which there is a complete correspondence between quantum and classical evolution, is shown to be logarithmically short being proportional to a logarithm of typical quantum number. For the case of integrable dynamics this time scale is much larger being proportional to a certain power of quantum number. Notes References Theorems in quantum mechanics Mathematical physics
Liouville's theorem (Hamiltonian)
In physics, Liouville's theorem, named after the French mathematician Joseph Liouville, is a key theorem in classical statistical and Hamiltonian mechanics. It asserts that the phase-space distribution function is constant along the trajectories of the system—that is that the density of system points in the vicinity of a given system point traveling through phase-space is constant with time. This time-independent density is in statistical mechanics known as the classical a priori probability. Liouville's theorem applies to conservative systems, that is, systems in which the effects of friction are absent or can be ignored. The general mathematical formulation for such systems is the measure-preserving dynamical system. Liouville's theorem applies when there are degrees of freedom that can be interpreted as positions and momenta; not all measure-preserving dynamical systems have these, but Hamiltonian systems do. The general setting for conjugate position and momentum coordinates is available in the mathematical setting of symplectic geometry. Liouville's theorem ignores the possibility of chemical reactions, where the total number of particles may change over time, or where energy may be transferred to internal degrees of freedom. There are extensions of Liouville's theorem to cover these various generalized settings, including stochastic systems. Liouville equation The Liouville equation describes the time evolution of the phase space distribution function. Although the equation is usually referred to as the "Liouville equation", Josiah Willard Gibbs was the first to recognize the importance of this equation as the fundamental equation of statistical mechanics. It is referred to as the Liouville equation because its derivation for non-canonical systems utilises an identity first derived by Liouville in 1838. Consider a Hamiltonian dynamical system with canonical coordinates and conjugate momenta , where . Then the phase space distribution determines the probability that the system will be found in the infinitesimal phase space volume . The Liouville equation governs the evolution of in time : Time derivatives are denoted by dots, and are evaluated according to Hamilton's equations for the system. This equation demonstrates the conservation of density in phase space (which was Gibbs's name for the theorem). Liouville's theorem states that The distribution function is constant along any trajectory in phase space. A proof of Liouville's theorem uses the n-dimensional divergence theorem. This proof is based on the fact that the evolution of obeys an 2n-dimensional version of the continuity equation: That is, the 3-tuple is a conserved current. Notice that the difference between this and Liouville's equation are the terms where is the Hamiltonian, and where the derivatives and have been evaluated using Hamilton's equations of motion. That is, viewing the motion through phase space as a 'fluid flow' of system points, the theorem that the convective derivative of the density, , is zero follows from the equation of continuity by noting that the 'velocity field' in phase space has zero divergence (which follows from Hamilton's relations). Other formulations Poisson bracket The theorem above is often restated in terms of the Poisson bracket as or, in terms of the linear Liouville operator or Liouvillian, as Ergodic theory In ergodic theory and dynamical systems, motivated by the physical considerations given so far, there is a corresponding result also referred to as Liouville's theorem. 
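(For reference, since the displayed equations above do not reproduce in this copy: in canonical coordinates the Liouville equation and its Poisson-bracket form read as follows, in standard notation.)

```latex
\[
\frac{\partial \rho}{\partial t}
 + \sum_{i=1}^{n}\left(
     \frac{\partial \rho}{\partial q_i}\,\dot q_i
   + \frac{\partial \rho}{\partial p_i}\,\dot p_i
   \right) = 0,
\qquad
\frac{d\rho}{dt} = \frac{\partial \rho}{\partial t} + \{\rho, H\} = 0,
\]
```

with the dots evaluated from Hamilton's equations, dq_i/dt = dH/dp_i and dp_i/dt = -dH/dq_i, so that equivalently the partial time derivative of the density equals the Poisson bracket {H, rho}.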
In Hamiltonian mechanics, the phase space is a smooth manifold that comes naturally equipped with a smooth measure (locally, this measure is the 6n-dimensional Lebesgue measure). The theorem says this smooth measure is invariant under the Hamiltonian flow. More generally, one can describe the necessary and sufficient condition under which a smooth measure is invariant under a flow. The Hamiltonian case then becomes a corollary. Symplectic geometry We can also formulate Liouville's Theorem in terms of symplectic geometry. For a given system, we can consider the phase space of a particular Hamiltonian as a manifold endowed with a symplectic 2-form The volume form of our manifold is the top exterior power of the symplectic 2-form, and is just another representation of the measure on the phase space described above. On our phase space symplectic manifold we can define a Hamiltonian vector field generated by a function as Specifically, when the generating function is the Hamiltonian itself, , we get where we utilized Hamilton's equations of motion and the definition of the chain rule. In this formalism, Liouville's Theorem states that the Lie derivative of the volume form is zero along the flow generated by . That is, for a 2n-dimensional symplectic manifold, In fact, the symplectic structure itself is preserved, not only its top exterior power. That is, Liouville's Theorem also gives Quantum Liouville equation The analog of Liouville equation in quantum mechanics describes the time evolution of a mixed state. Canonical quantization yields a quantum-mechanical version of this theorem, the von Neumann equation. This procedure, often used to devise quantum analogues of classical systems, involves describing a classical system using Hamiltonian mechanics. Classical variables are then re-interpreted as quantum operators, while Poisson brackets are replaced by commutators. In this case, the resulting equation is where ρ is the density matrix. When applied to the expectation value of an observable, the corresponding equation is given by Ehrenfest's theorem, and takes the form where is an observable. Note the sign difference, which follows from the assumption that the operator is stationary and the state is time-dependent. In the phase-space formulation of quantum mechanics, substituting the Moyal brackets for Poisson brackets in the phase-space analog of the von Neumann equation results in compressibility of the probability fluid, and thus violations of Liouville's theorem incompressibility. This, then, leads to concomitant difficulties in defining meaningful quantum trajectories. Examples SHO phase-space volume Consider an -particle system in three dimensions, and focus on only the evolution of particles. Within phase space, these particles occupy an infinitesimal volume given by We want to remain the same throughout time, so that is constant along the trajectories of the system. If we allow our particles to evolve by an infinitesimal time step , we see that each particle phase space location changes as where and denote and respectively, and we have only kept terms linear in . Extending this to our infinitesimal hypercube , the side lengths change as To find the new infinitesimal phase-space volume , we need the product of the above quantities. To first order in , we get the following: So far, we have yet to make any specifications about our system. Let us now specialize to the case of -dimensional isotropic harmonic oscillators. 
That is, each particle in our ensemble can be treated as a simple harmonic oscillator. The Hamiltonian for this system is given by By using Hamilton's equations with the above Hamiltonian we find that the term in parentheses above is identically zero, thus yielding From this we can find the infinitesimal volume of phase space: Thus we have ultimately found that the infinitesimal phase-space volume is unchanged, yielding demonstrating that Liouville's theorem holds for this system. The question remains of how the phase-space volume actually evolves in time. Above we have shown that the total volume is conserved, but said nothing about what it looks like. For a single particle we can see that its trajectory in phase space is given by the ellipse of constant . Explicitly, one can solve Hamilton's equations for the system and find where and denote the initial position and momentum of the -th particle. For a system of multiple particles, each one will have a phase-space trajectory that traces out an ellipse corresponding to the particle's energy. The frequency at which the ellipse is traced is given by the in the Hamiltonian, independent of any differences in energy. As a result, a region of phase space will simply rotate about the point with frequency dependent on . This can be seen in the animation above. Damped harmonic oscillator To see an example where Liouville's theorem does not apply, we can modify the equations of motion for the simple harmonic oscillator to account for the effects of friction or damping. Consider again the system of particles each in a -dimensional isotropic harmonic potential, the Hamiltonian for which is given in the previous example. This time, we add the condition that each particle experiences a frictional force , where is a positive constant dictating the amount of friction. As this is a non-conservative force, we need to extend Hamilton's equations as Unlike the equations of motion for the simple harmonic oscillator, these modified equations do not take the form of Hamilton's equations, and therefore we do not expect Liouville's theorem to hold. Instead, as depicted in the animation in this section, a generic phase space volume will shrink as it evolves under these equations of motion. To see this violation of Liouville's theorem explicitly, we can follow a very similar procedure to the undamped harmonic oscillator case, and we arrive again at Plugging in our modified Hamilton's equations, we find Calculating our new infinitesimal phase space volume, and keeping only first order in we find the following result: We have found that the infinitesimal phase-space volume is no longer constant, and thus the phase-space density is not conserved. As can be seen from the equation as time increases, we expect our phase-space volume to decrease to zero as friction affects the system. As for how the phase-space volume evolves in time, we will still have the constant rotation as in the undamped case. However, the damping will introduce a steady decrease in the radii of each ellipse. Again we can solve for the trajectories explicitly using Hamilton's equations, taking care to use the modified ones above. Letting for convenience, we find where the values and denote the initial position and momentum of the -th particle. As the system evolves the total phase-space volume will spiral in to the origin. This can be seen in the figure above. Remarks The Liouville equation is valid for both equilibrium and nonequilibrium systems. 
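The contrast worked out in the two examples above (area preserved for the undamped oscillator, contracted under damping) can be checked numerically. The following sketch is not from the source (the damping rate, evolution time and square size are illustrative); it integrates the corners of a small phase-space square under each flow and compares the final area with the initial one.

```python
import numpy as np
from scipy.integrate import solve_ivp

def flow(t, z, gamma):
    """Harmonic oscillator with m = k = 1 and linear friction coefficient gamma."""
    q, p = z
    return [p, -q - gamma * p]

def relative_area(gamma, t_final=5.0, eps=1e-3):
    # One corner of a small square plus the endpoints of its two edges.
    corners0 = [np.array([1.0, 0.0]),
                np.array([1.0 + eps, 0.0]),
                np.array([1.0, eps])]
    corners_t = [solve_ivp(flow, (0.0, t_final), c, args=(gamma,),
                           rtol=1e-10, atol=1e-12).y[:, -1] for c in corners0]
    e1 = corners_t[1] - corners_t[0]
    e2 = corners_t[2] - corners_t[0]
    return abs(e1[0] * e2[1] - e1[1] * e2[0]) / eps**2   # evolved area / initial area

print("undamped (gamma = 0.0):", round(relative_area(0.0), 6))   # ~1.0, area preserved
print("damped   (gamma = 0.3):", round(relative_area(0.3), 6))   # ~exp(-0.3*5), about 0.22
```

The damped result comes out close to exp(-gamma t), consistent with the shrinking phase-space volume found above; the Liouville equation in its Hamiltonian form simply does not govern this frictional flow.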
It is a fundamental equation of non-equilibrium statistical mechanics. The Liouville equation is integral to the proof of the fluctuation theorem from which the second law of thermodynamics can be derived. It is also the key component of the derivation of Green–Kubo relations for linear transport coefficients such as shear viscosity, thermal conductivity or electrical conductivity. Virtually any textbook on Hamiltonian mechanics, advanced statistical mechanics, or symplectic geometry will derive the Liouville theorem. In plasma physics, the Vlasov equation can be interpreted as Liouville's theorem, which reduces the task of solving the Vlasov equation to that of single particle motion. By using Liouville's theorem in this way with energy or magnetic moment conservation, for example, one can determine unknown fields using known particle distribution functions, or vice versa. This method is known as Liouville mapping. See also Boltzmann transport equation Reversible reference system propagation algorithm (r-RESPA) References Further reading External links Eponymous theorems of physics Hamiltonian mechanics Theorems in dynamical systems Statistical mechanics theorems
Quantum thermodynamics
Quantum thermodynamics is the study of the relations between two independent physical theories: thermodynamics and quantum mechanics. The two independent theories address the physical phenomena of light and matter. In 1905, Albert Einstein argued that the requirement of consistency between thermodynamics and electromagnetism leads to the conclusion that light is quantized, obtaining the relation . This paper is the dawn of quantum theory. In a few decades quantum theory became established with an independent set of rules. Currently quantum thermodynamics addresses the emergence of thermodynamic laws from quantum mechanics. It differs from quantum statistical mechanics in the emphasis on dynamical processes out of equilibrium. In addition, there is a quest for the theory to be relevant for a single individual quantum system. Dynamical view There is an intimate connection of quantum thermodynamics with the theory of open quantum systems. Quantum mechanics inserts dynamics into thermodynamics, giving a sound foundation to finite-time-thermodynamics. The main assumption is that the entire world is a large closed system, and therefore, time evolution is governed by a unitary transformation generated by a global Hamiltonian. For the combined system bath scenario, the global Hamiltonian can be decomposed into: where is the system Hamiltonian, is the bath Hamiltonian and is the system-bath interaction. The state of the system is obtained from a partial trace over the combined system and bath: . Reduced dynamics is an equivalent description of the system dynamics utilizing only system operators. Assuming Markov property for the dynamics the basic equation of motion for an open quantum system is the Lindblad equation (GKLS): is a (Hermitian) Hamiltonian part and : is the dissipative part describing implicitly through system operators the influence of the bath on the system. The Markov property imposes that the system and bath are uncorrelated at all times . The L-GKS equation is unidirectional and leads any initial state to a steady state solution which is an invariant of the equation of motion . The Heisenberg picture supplies a direct link to quantum thermodynamic observables. The dynamics of a system observable represented by the operator, , has the form: where the possibility that the operator, is explicitly time-dependent, is included. Emergence of time derivative of first law of thermodynamics When the first law of thermodynamics emerges: where power is interpreted as and the heat current . Additional conditions have to be imposed on the dissipator to be consistent with thermodynamics. First the invariant should become an equilibrium Gibbs state. This implies that the dissipator should commute with the unitary part generated by . In addition an equilibrium state is stationary and stable. This assumption is used to derive the Kubo-Martin-Schwinger stability criterion for thermal equilibrium i.e. KMS state. A unique and consistent approach is obtained by deriving the generator, , in the weak system bath coupling limit. In this limit, the interaction energy can be neglected. This approach represents a thermodynamic idealization: it allows energy transfer, while keeping a tensor product separation between the system and bath, i.e., a quantum version of an isothermal partition. Markovian behavior involves a rather complicated cooperation between system and bath dynamics. This means that in phenomenological treatments, one cannot combine arbitrary system Hamiltonians, , with a given L-GKS generator. 
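Since the displayed equations of this section do not reproduce in this copy, the standard forms can be restated. The convention below is one common choice; the operators V_k are generic bath-coupling (jump) operators standing in for whatever coupling the text leaves unspecified:

```latex
\begin{gather*}
\dot\rho = -\frac{i}{\hbar}[H,\rho] + \mathcal{L}_D(\rho),
\qquad
\mathcal{L}_D(\rho) = \sum_k \left( V_k \rho V_k^{\dagger}
  - \tfrac{1}{2}\{ V_k^{\dagger} V_k , \rho \} \right), \\
E = \operatorname{Tr}(\rho H),
\qquad
\frac{dE}{dt}
  = \underbrace{\operatorname{Tr}\!\left(\rho\,\frac{\partial H}{\partial t}\right)}_{\text{power}}
  + \underbrace{\operatorname{Tr}\!\left(\mathcal{L}_D(\rho)\, H\right)}_{\text{heat current}} .
\end{gather*}
```

The unitary term drops out of dE/dt because Tr([H, rho] H) = 0, and in this split the dissipator cannot be paired with an arbitrary Hamiltonian: it has to be derived together with the coherent part.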
This observation is particularly important in the context of quantum thermodynamics, where it is tempting to study Markovian dynamics with an arbitrary control Hamiltonian. Erroneous derivations of the quantum master equation can easily lead to a violation of the laws of thermodynamics. An external perturbation modifying the Hamiltonian of the system will also modify the heat flow. As a result, the L-GKS generator has to be renormalized. For a slow change, one can adopt the adiabatic approach and use the instantaneous system’s Hamiltonian to derive . An important class of problems in quantum thermodynamics is periodically driven systems. Periodic quantum heat engines and power-driven refrigerators fall into this class. A reexamination of the time-dependent heat current expression using quantum transport techniques has been proposed. A derivation of consistent dynamics beyond the weak coupling limit has been suggested. Phenomenological formulations of irreversible quantum dynamics consistent with the second law and implementing the geometric idea of "steepest entropy ascent" or "gradient flow" have been suggested to model relaxation and strong coupling. Emergence of the second law The second law of thermodynamics is a statement on the irreversibility of dynamics or, the breakup of time reversal symmetry (T-symmetry). This should be consistent with the empirical direct definition: heat will flow spontaneously from a hot source to a cold sink. From a static viewpoint, for a closed quantum system, the 2nd law of thermodynamics is a consequence of the unitary evolution. In this approach, one accounts for the entropy change before and after a change in the entire system. A dynamical viewpoint is based on local accounting for the entropy changes in the subsystems and the entropy generated in the baths. Entropy In thermodynamics, entropy is related to the amount of energy of a system that can be converted into mechanical work in a concrete process. In quantum mechanics, this translates to the ability to measure and manipulate the system based on the information gathered by measurement. An example is the case of Maxwell’s demon, which has been resolved by Leó Szilárd. The entropy of an observable is associated with the complete projective measurement of an observable,, where the operator has a spectral decomposition: where are the projection operators of the eigenvalue The probability of outcome is The entropy associated with the observable is the Shannon entropy with respect to the possible outcomes: The most significant observable in thermodynamics is the energy represented by the Hamiltonian operator and its associated energy entropy, John von Neumann suggested to single out the most informative observable to characterize the entropy of the system. This invariant is obtained by minimizing the entropy with respect to all possible observables. The most informative observable operator commutes with the state of the system. The entropy of this observable is termed the Von Neumann entropy and is equal to As a consequence, for all observables. At thermal equilibrium the energy entropy is equal to the von Neumann entropy: is invariant to a unitary transformation changing the state. The Von Neumann entropy is additive only for a system state that is composed of a tensor product of its subsystems: Clausius version of the II-law No process is possible whose sole result is the transfer of heat from a body of lower temperature to a body of higher temperature. 
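The inequality stated above between the entropy of the energy observable and the von Neumann entropy is easy to check numerically. The following sketch is not from the source; the qubit state and its coherences are arbitrary illustrative numbers:

```python
import numpy as np

def shannon(p):
    """Shannon entropy of a probability vector, ignoring negligible entries."""
    p = np.asarray(p, dtype=float)
    p = p[p > 1e-12]
    return float(-(p * np.log(p)).sum())

# A qubit with coherences in the energy eigenbasis (Hermitian, unit trace, positive).
rho = np.array([[0.70, 0.25],
                [0.25, 0.30]])

S_energy = shannon(np.diag(rho))            # entropy of a projective energy measurement
S_vn = shannon(np.linalg.eigvalsh(rho))     # von Neumann entropy, from the eigenvalues of rho

print(f"S_energy = {S_energy:.4f} >= S_vN = {S_vn:.4f}")   # about 0.611 >= 0.471
```

If the coherences are set to zero, so that the state is diagonal in the energy basis as a thermal Gibbs state is, the two entropies coincide, in line with the statement above. In thermodynamic terms, the Clausius version of the second law just quoted forbids heat flowing, as its sole result, from a colder to a hotter body.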
This statement for N-coupled heat baths in steady state becomes A dynamical version of the II-law can be proven, based on Spohn's inequality: which is valid for any L-GKS generator, with a stationary state, . Consistency with thermodynamics can be employed to verify quantum dynamical models of transport. For example, local models for networks where local L-GKS equations are connected through weak links have been thought to violate the second law of thermodynamics. In 2018 has been shown that, by correctly taking into account all work and energy contributions in the full system, local master equations are fully coherent with the second law of thermodynamics. Quantum and thermodynamic adiabatic conditions and quantum friction Thermodynamic adiabatic processes have no entropy change. Typically, an external control modifies the state. A quantum version of an adiabatic process can be modeled by an externally controlled time dependent Hamiltonian . If the system is isolated, the dynamics are unitary, and therefore, is a constant. A quantum adiabatic process is defined by the energy entropy being constant. The quantum adiabatic condition is therefore equivalent to no net change in the population of the instantaneous energy levels. This implies that the Hamiltonian should commute with itself at different times: . When the adiabatic conditions are not fulfilled, additional work is required to reach the final control value. For an isolated system, this work is recoverable, since the dynamics is unitary and can be reversed. In this case, quantum friction can be suppressed using shortcuts to adiabaticity as demonstrated in the laboratory using a unitary Fermi gas in a time-dependent trap. The coherence stored in the off-diagonal elements of the density operator carry the required information to recover the extra energy cost and reverse the dynamics. Typically, this energy is not recoverable, due to interaction with a bath that causes energy dephasing. The bath, in this case, acts like a measuring apparatus of energy. This lost energy is the quantum version of friction. Emergence of the dynamical version of the third law of thermodynamics There are seemingly two independent formulations of the third law of thermodynamics. Both were originally stated by Walther Nernst. The first formulation is known as the Nernst heat theorem, and can be phrased as: The entropy of any pure substance in thermodynamic equilibrium approaches zero as the temperature approaches zero. The second formulation is dynamical, known as the unattainability principle It is impossible by any procedure, no matter how idealized, to reduce any assembly to absolute zero temperature in a finite number of operations. At steady state the second law of thermodynamics implies that the total entropy production is non-negative. When the cold bath approaches the absolute zero temperature, it is necessary to eliminate the entropy production divergence at the cold side when , therefore For the fulfillment of the second law depends on the entropy production of the other baths, which should compensate for the negative entropy production of the cold bath. The first formulation of the third law modifies this restriction. Instead of the third law imposes , guaranteeing that at absolute zero the entropy production at the cold bath is zero: . This requirement leads to the scaling condition of the heat current . 
The second formulation, known as the unattainability principle can be rephrased as; No refrigerator can cool a system to absolute zero temperature at finite time. The dynamics of the cooling process is governed by the equation: where is the heat capacity of the bath. Taking and with , we can quantify this formulation by evaluating the characteristic exponent of the cooling process, This equation introduces the relation between the characteristic exponents and . When then the bath is cooled to zero temperature in a finite time, which implies a violation of the third law. It is apparent from the last equation, that the unattainability principle is more restrictive than the Nernst heat theorem. Typicality as a source of emergence of thermodynamic phenomena The basic idea of quantum typicality is that the vast majority of all pure states featuring a common expectation value of some generic observable at a given time will yield very similar expectation values of the same observable at any later time. This is meant to apply to Schrödinger type dynamics in high dimensional Hilbert spaces. As a consequence individual dynamics of expectation values are then typically well described by the ensemble average. Quantum ergodic theorem originated by John von Neumann is a strong result arising from the mere mathematical structure of quantum mechanics. The QET is a precise formulation of termed normal typicality, i.e. the statement that, for typical large systems, every initial wave function from an energy shell is ‘normal’: it evolves in such a way that for most t, is macroscopically equivalent to the micro-canonical density matrix. Resource theory The second law of thermodynamics can be interpreted as quantifying state transformations which are statistically unlikely so that they become effectively forbidden. The second law typically applies to systems composed of many particles interacting; Quantum thermodynamics resource theory is a formulation of thermodynamics in the regime where it can be applied to a small number of particles interacting with a heat bath. For processes which are cyclic or very close to cyclic, the second law for microscopic systems takes on a very different form than it does at the macroscopic scale, imposing not just one constraint on what state transformations are possible, but an entire family of constraints. These second laws are not only relevant for small systems, but also apply to individual macroscopic systems interacting via long-range interactions, which only satisfy the ordinary second law on average. By making precise the definition of thermal operations, the laws of thermodynamics take on a form with the first law defining the class of thermal operations, the zeroth law emerging as a unique condition ensuring the theory is nontrivial, and the remaining laws being a monotonicity property of generalised free energies. Engineered reservoirs Nanoscale allows for the preparation of quantum systems in physical states without classical analogs. There, complex out-of-equilibrium scenarios may be produced by the initial preparation of either the working substance or the reservoirs of quantum particles, the latter dubbed as "engineered reservoirs". There are different forms of engineered reservoirs. Some of them involve subtle quantum coherence or correlation effects, while others rely solely on nonthermal classical probability distribution functions. 
Interesting phenomena may emerge from the use of engineered reservoirs such as efficiencies greater than the Otto limit, violations of Clausius inequalities, or simultaneous extraction of heat and work from the reservoirs. See also Quantum statistical mechanics Thermal quantum field theory References Further reading F. Binder, L. A. Correa, C. Gogolin, J. Anders, G. Adesso (eds.) (2018). Thermodynamics in the Quantum Regime: Fundamental Aspects and New Directions. Springer, . Jochen Gemmer, M. Michel, Günter Mahler (2009). Quantum thermodynamics: Emergence of Thermodynamic Behavior Within Composite Quantum Systems. 2nd edition, Springer, . Heinz-Peter Breuer, Francesco Petruccione (2007). The Theory of Open Quantum Systems. Oxford University Press, . External links Go to "Concerning an Heuristic Point of View Toward the Emission and Transformation of Light" to read an English translation of Einstein's 1905 paper. (Retrieved: 2014 Apr 11) Quantum mechanics Thermodynamics Non-equilibrium thermodynamics Philosophy of thermal and statistical physics
PhysX
PhysX is an open-source realtime physics engine middleware SDK developed by Nvidia as part of the Nvidia GameWorks software suite. Initially, video games supporting PhysX were meant to be accelerated by PhysX PPU (expansion cards designed by Ageia). However, after Ageia's acquisition by Nvidia, dedicated PhysX cards have been discontinued in favor of the API being run on CUDA-enabled GeForce GPUs. In both cases, hardware acceleration allowed for the offloading of physics calculations from the CPU, allowing it to perform other tasks instead. PhysX and other middleware physics engines are used in many video games today because they free game developers from having to write their own code that implements classical mechanics (Newtonian physics) to do, for example, soft body dynamics. History What is known today as PhysX originated as a physics simulation engine called NovodeX. The engine was developed by Swiss company NovodeX AG, an ETH Zurich spin-off. In 2004, Ageia acquired NovodeX AG and began developing a hardware technology that could accelerate physics calculations, aiding the CPU. Ageia called the technology PhysX, the SDK was renamed from NovodeX to PhysX, and the accelerator cards were dubbed PPUs (Physics Processing Units). In its implementation, the first video game to use PhysX technology is The Stalin Subway, released in Russia-only game stores in September 2005. In 2008, Ageia was itself acquired by graphics technology manufacturer Nvidia. Nvidia started enabling PhysX hardware acceleration on its line of GeForce graphics cards and eventually dropped support for Ageia PPUs. PhysX SDK 3.0 was released in May 2011 and represented a significant rewrite of the SDK, bringing improvements such as more efficient multithreading and a unified code base for all supported platforms. At GDC 2015, Nvidia made the source code for PhysX available on GitHub, but required registration at developer.nvidia.com. The proprietary SDK was provided to developers for free for both commercial and non-commercial use on Windows, Linux, macOS, iOS and Android platforms. On December 3, 2018, PhysX was made open source under a 3-clause BSD license, but this change applied only to computer and mobile platforms. On November 8, 2022, the open source release was updated to PhysX 5, under the same 3-clause BSD license. Features The PhysX engine and SDK are available for Microsoft Windows, macOS, Linux, PlayStation 3, PlayStation 4, Xbox 360, Xbox One, Wii, iOS and Android. PhysX is a multi-threaded physics simulation SDK. It supports rigid body dynamics, soft body dynamics (like cloth simulation, including tearing and pressurized cloth), ragdolls and character controllers, vehicle dynamics, particles and volumetric fluid simulation. Hardware acceleration PPU A physics processing unit (PPU) is a processor specially designed to alleviate the calculation burden on the CPU, specifically calculations involving physics. PhysX PPUs were offered to consumers in the forms of PCI or PCIe cards by ASUS, BFG Technologies, Dell and ELSA Technology. Beginning with version 2.8.3 of the PhysX SDK, support for PPU cards was dropped, and PPU cards are no longer manufactured. The last incarnation of PhysX PPU standalone card designed by Ageia had roughly the same PhysX performance as a dedicated 9800GTX. GPU After Nvidia's acquisition of Ageia, PhysX development turned away from PPU expansion cards and focused instead on the GPGPU capabilities of modern GPUs. 
Modern GPUs are very efficient at manipulating and displaying computer graphics, and their highly parallel structure makes them more effective than general-purpose CPUs for accelerating physical simulations using PhysX. Any CUDA-ready GeForce graphics card (8-series or later GPU with a minimum of 32 cores and a minimum of 256 MB dedicated graphics memory) can take advantage of PhysX without the need to install a dedicated PhysX card. APEX Nvidia APEX technology is a multi-platform scalable dynamics framework build around the PhysX SDK. It was first introduced in Mafia II in August 2010. Nvidia's APEX comprises the following modules: APEX Destruction, APEX Clothing, APEX Particles, APEX Turbulence, APEX ForceField and formerly APEX Vegetation which was suspended in 2011. From version 1.4.1 APEX SDK is deprecated. Nvidia FleX FleX is a particle based simulation technique for real-time visual effects. Traditionally, visual effects are made using a combination of elements created using specialized solvers for rigid bodies, fluids, clothing, etc. Because FleX uses a unified particle representation for all object types, it enables new effects where different simulated substances can interact with each other seamlessly. Such unified physics solvers are a staple of the offline computer graphics world, where tools such as Autodesk Maya's nCloth, and Softimage's Lagoa are widely used. The goal for FleX is to use the power of GPUs to bring the capabilities of these offline applications to real-time computer graphics. Criticism from Real World Technologies On July 5, 2010, Real World Technologies published an analysis of the PhysX architecture. According to this analysis, most of the code used in PhysX applications at the time was based on x87 instructions without any multithreading optimization. This could cause significant performance drops when running PhysX code on the CPU. The article suggested that a PhysX rewrite using SSE instructions may substantially lessen the performance discrepancy between CPU PhysX and GPU PhysX. In response to the Real World Technologies analysis, Mike Skolones, product manager of PhysX, said that SSE support had been left behind because most games are developed for consoles first and then ported to the PC. As a result, modern computers run these games faster and better than the consoles even with little or no optimization. Senior PR manager of Nvidia, Bryan Del Rizzo, explained that multithreading had already been available with CPU PhysX 2.x and that it had been up to the developer to make use of it. He also stated that automatic multithreading and SSE would be introduced with version 3 of the PhysX SDK. PhysX SDK 3.0 was released in May 2011 and represented a significant rewrite of the SDK, bringing improvements such as more efficient multithreading and a unified code base for all supported platforms. Usage PhysX in video games PhysX technology is used by game engines such as Unreal Engine (version 3 onwards), Unity, Gamebryo, Vision (version 6 onwards), Instinct Engine, Panda3D, Diesel, Torque, HeroEngine, and BigWorld. As one of the handful of major physics engines, it is used in many games, such as The Witcher 3: Wild Hunt, Warframe, Killing Floor 2, Fallout 4, Batman: Arkham Knight, Planetside 2, and Borderlands 2. Most of these games use the CPU to process the physics simulations. Video games with optional support for hardware-accelerated PhysX often include additional effects such as tearable cloth, dynamic smoke or simulated particle debris. 
PhysX in other software Other software with PhysX support includes: Active Worlds (AW), a 3D virtual reality platform with its client running on Windows Amazon Lumberyard, a 3D game development engine developed by Amazon Autodesk 3ds Max, Autodesk Maya and Autodesk Softimage, computer animation suites DarkBASIC Professional (with DarkPHYSICS upgrade), a programming language targeted at game development DX Studio, an integrated development environment for creating interactive 3D graphics ForgeLight, a game engine developed by the former Sony Online Entertainment. Futuremark's 3DMark06 and Vantage benchmarking tools Microsoft Robotics Studio, an environment for robot control and simulation Nvidia's SuperSonic Sled and Raging Rapids Ride, technology demos OGRE (via the NxOgre wrapper), an open source rendering engine The Physics Abstraction Layer, a physical simulation API abstraction system (it provides COLLADA and Scythe Physics Editor support for PhysX) Rayfire, a plug-in for Autodesk 3ds Max that allows fracturing and other physics simulations The Physics Engine Evaluation Lab, a tool designed to evaluate, compare and benchmark physics engines. Unreal Engine game development software by Epic Games. Unreal Engine 4.26 and onwards has officially deprecated PhysX. Unity by Unity ApS. Unity's Data-Oriented Technology Stack does not use PhysX. See also DirectX Bullet (software) Havok (software) Open Dynamics Engine Newton Game Dynamics OpenGL Vortex (software) AGX Multiphysics References External links Official Product Site Techgage: AGEIA PhysX.. First Impressions Techgage: NVIDIA's PhysX: Performance and Status Report Computer physics engines MacOS programming tools Nvidia software PlayStation 3 software PlayStation 4 software Programming tools for Windows Science software for Linux Science software for macOS Science software for Windows Software using the BSD license Video game development software for Linux Video game development Virtual reality Wii software Xbox 360 software
Deformation (physics)
In physics and continuum mechanics, deformation is the change in the shape or size of an object. It has dimension of length with SI unit of metre (m). It is quantified as the residual displacement of particles in a non-rigid body, from an configuration to a configuration, excluding the body's average translation and rotation (its rigid transformation). A configuration is a set containing the positions of all particles of the body. A deformation can occur because of external loads, intrinsic activity (e.g. muscle contraction), body forces (such as gravity or electromagnetic forces), or changes in temperature, moisture content, or chemical reactions, etc. In a continuous body, a deformation field results from a stress field due to applied forces or because of some changes in the conditions of the body. The relation between stress and strain (relative deformation) is expressed by constitutive equations, e.g., Hooke's law for linear elastic materials. Deformations which cease to exist after the stress field is removed are termed as elastic deformation. In this case, the continuum completely recovers its original configuration. On the other hand, irreversible deformations may remain, and these exist even after stresses have been removed. One type of irreversible deformation is plastic deformation, which occurs in material bodies after stresses have attained a certain threshold value known as the elastic limit or yield stress, and are the result of slip, or dislocation mechanisms at the atomic level. Another type of irreversible deformation is viscous deformation, which is the irreversible part of viscoelastic deformation. In the case of elastic deformations, the response function linking strain to the deforming stress is the compliance tensor of the material. Definition and formulation Deformation is the change in the metric properties of a continuous body, meaning that a curve drawn in the initial body placement changes its length when displaced to a curve in the final placement. If none of the curves changes length, it is said that a rigid body displacement occurred. It is convenient to identify a reference configuration or initial geometric state of the continuum body which all subsequent configurations are referenced from. The reference configuration need not be one the body actually will ever occupy. Often, the configuration at is considered the reference configuration, . The configuration at the current time is the current configuration. For deformation analysis, the reference configuration is identified as undeformed configuration, and the current configuration as deformed configuration. Additionally, time is not considered when analyzing deformation, thus the sequence of configurations between the undeformed and deformed configurations are of no interest. The components of the position vector of a particle in the reference configuration, taken with respect to the reference coordinate system, are called the material or reference coordinates. On the other hand, the components of the position vector of a particle in the deformed configuration, taken with respect to the spatial coordinate system of reference, are called the spatial coordinates There are two methods for analysing the deformation of a continuum. One description is made in terms of the material or referential coordinates, called material description or Lagrangian description. A second description of deformation is made in terms of the spatial coordinates it is called the spatial description or Eulerian description. 
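Since the symbols of this section do not reproduce in this copy, the two descriptions can be summarized in standard notation. A motion carries a material point X of the reference configuration to its spatial position x at time t:

```latex
\[
\mathbf{x} = \boldsymbol{\chi}(\mathbf{X}, t)
\quad \text{(material / Lagrangian description)},
\qquad
\mathbf{X} = \boldsymbol{\chi}^{-1}(\mathbf{x}, t)
\quad \text{(spatial / Eulerian description)},
\]
```

so a field written as a function of (X, t) is a material description, and the same field written as a function of (x, t) is a spatial description.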
There is continuity during deformation of a continuum body in the sense that: The material points forming a closed curve at any instant will always form a closed curve at any subsequent time. The material points forming a closed surface at any instant will always form a closed surface at any subsequent time and the matter within the closed surface will always remain within. Affine deformation An affine deformation is a deformation that can be completely described by an affine transformation. Such a transformation is composed of a linear transformation (such as rotation, shear, extension and compression) and a rigid body translation. Affine deformations are also called homogeneous deformations. Therefore, an affine deformation has the form where is the position of a point in the deformed configuration, is the position in a reference configuration, is a time-like parameter, is the linear transformer and is the translation. In matrix form, where the components are with respect to an orthonormal basis, The above deformation becomes non-affine or inhomogeneous if or . Rigid body motion A rigid body motion is a special affine deformation that does not involve any shear, extension or compression. The transformation matrix is proper orthogonal in order to allow rotations but no reflections. A rigid body motion can be described by where In matrix form, Background: displacement A change in the configuration of a continuum body results in a displacement. The displacement of a body has two components: a rigid-body displacement and a deformation. A rigid-body displacement consists of a simultaneous translation and rotation of the body without changing its shape or size. Deformation implies the change in shape and/or size of the body from an initial or undeformed configuration to a current or deformed configuration (Figure 1). If after a displacement of the continuum there is a relative displacement between particles, a deformation has occurred. On the other hand, if after displacement of the continuum the relative displacement between particles in the current configuration is zero, then there is no deformation and a rigid-body displacement is said to have occurred. The vector joining the positions of a particle P in the undeformed configuration and deformed configuration is called the displacement vector in the Lagrangian description, or in the Eulerian description. A displacement field is a vector field of all displacement vectors for all particles in the body, which relates the deformed configuration with the undeformed configuration. It is convenient to do the analysis of deformation or motion of a continuum body in terms of the displacement field. In general, the displacement field is expressed in terms of the material coordinates as or in terms of the spatial coordinates as where are the direction cosines between the material and spatial coordinate systems with unit vectors and , respectively. Thus and the relationship between and is then given by Knowing that then It is common to superimpose the coordinate systems for the undeformed and deformed configurations, which results in , and the direction cosines become Kronecker deltas: Thus, we have or in terms of the spatial coordinates as Displacement gradient tensor The partial differentiation of the displacement vector with respect to the material coordinates yields the material displacement gradient tensor . Thus we have: or where is the deformation gradient tensor. 
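As a small numerical illustration of the material displacement gradient just defined (its spatial counterpart is treated next), the sketch below builds an affine deformation x = FX + c with an arbitrarily chosen F and c, and checks that the gradient of the displacement field equals F − I. The numbers are assumptions of the example only.

```python
import numpy as np

# A hypothetical affine deformation x = F X + c (values chosen purely
# for illustration).
F = np.array([[1.10, 0.05, 0.00],
              [0.00, 0.95, 0.02],
              [0.00, 0.00, 1.00]])   # deformation gradient (constant for an affine map)
c = np.array([0.3, -0.1, 0.0])       # translation

def deform(X):
    return F @ X + c

# Displacement field in the material description: u(X) = x(X) - X, so
# the material displacement gradient is du/dX = F - I.
grad_u_material = F - np.eye(3)

# Check numerically with central finite differences at an arbitrary point.
X0 = np.array([1.0, 2.0, 0.5])
h = 1e-6
num_grad = np.column_stack([
    (deform(X0 + h * e) - deform(X0 - h * e)) / (2 * h) - e
    for e in np.eye(3)
])
assert np.allclose(num_grad, grad_u_material, atol=1e-6)
print("material displacement gradient:\n", grad_u_material)
```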
Similarly, the partial differentiation of the displacement vector with respect to the spatial coordinates yields the spatial displacement gradient tensor . Thus we have, or Examples Homogeneous (or affine) deformations are useful in elucidating the behavior of materials. Some homogeneous deformations of interest are uniform extension pure dilation equibiaxial tension simple shear pure shear Linear or longitudinal deformations of long objects, such as beams and fibers, are called elongation or shortening; derived quantities are the relative elongation and the stretch ratio. Plane deformations are also of interest, particularly in the experimental context. Volume deformation is a uniform scaling due to isotropic compression; the relative volume deformation is called volumetric strain. Plane deformation A plane deformation, also called plane strain, is one where the deformation is restricted to one of the planes in the reference configuration. If the deformation is restricted to the plane described by the basis vectors , , the deformation gradient has the form In matrix form, From the polar decomposition theorem, the deformation gradient, up to a change of coordinates, can be decomposed into a stretch and a rotation. Since all the deformation is in a plane, we can write where is the angle of rotation and , are the principal stretches. Isochoric plane deformation If the deformation is isochoric (volume preserving) then and we have Alternatively, Simple shear A simple shear deformation is defined as an isochoric plane deformation in which there is a set of line elements with a given reference orientation that do not change length and orientation during the deformation. If is the fixed reference orientation in which line elements do not deform during the deformation then and . Therefore, Since the deformation is isochoric, Define Then, the deformation gradient in simple shear can be expressed as Now, Since we can also write the deformation gradient as See also The deformation of long elements such as beams or studs due to bending forces is known as deflection. Euler–Bernoulli beam theory Deformation (engineering) Finite strain theory Infinitesimal strain theory Moiré pattern Shear modulus Shear stress Shear strength Strain (mechanics) Stress (mechanics) Stress measures References Further reading Tensors Continuum mechanics Non-Newtonian fluids Solid mechanics Geometry
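To make the simple-shear case above concrete, here is a minimal sketch, assuming an arbitrary shear amount γ = 0.4, that builds the corresponding deformation gradient, confirms the deformation is isochoric, and extracts the rotation and stretch via a polar decomposition (using SciPy's polar routine).

```python
import numpy as np
from scipy.linalg import polar

gamma = 0.4   # arbitrary shear amount, chosen for illustration

# Deformation gradient for simple shear in the e1-e2 plane:
# x1 = X1 + gamma*X2, x2 = X2, x3 = X3.
F = np.array([[1.0, gamma, 0.0],
              [0.0, 1.0,   0.0],
              [0.0, 0.0,   1.0]])

# Simple shear is an isochoric (volume-preserving) plane deformation.
print("det F =", np.linalg.det(F))            # -> 1.0

# Polar decomposition F = R U: a rotation R times a symmetric stretch U.
R, U = polar(F, side="right")
print("rotation R:\n", np.round(R, 4))
print("right stretch U:\n", np.round(U, 4))

# Line elements along e1 neither stretch nor rotate: F e1 = e1.
e1 = np.array([1.0, 0.0, 0.0])
print("F e1 =", F @ e1)

# Principal stretches are the eigenvalues of U; for an isochoric plane
# deformation they multiply to 1.
lam = np.linalg.eigvalsh(U)
print("principal stretches:", np.round(lam, 4), " product =", np.prod(lam))
```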
0.780935
0.988304
0.771801
Psychodynamics
Psychodynamics, also known as psychodynamic psychology, in its broadest sense, is an approach to psychology that emphasizes systematic study of the psychological forces underlying human behavior, feelings, and emotions and how they might relate to early experience. It is especially interested in the dynamic relations between conscious motivation and unconscious motivation. The term psychodynamics is also used to refer specifically to the psychoanalytical approach developed by Sigmund Freud (1856–1939) and his followers. Freud was inspired by the theory of thermodynamics and used the term psychodynamics to describe the processes of the mind as flows of psychological energy (libido or psi) in an organically complex brain. There are four major schools of thought regarding psychological treatment: psychodynamic, cognitive-behavioral, biological, and humanistic treatment. In the treatment of psychological distress, psychodynamic psychotherapy tends to be a less intensive (once- or twice-weekly) modality than the classical Freudian psychoanalysis treatment (of 3–5 sessions per week). Psychodynamic therapies depend upon a theory of inner conflict, wherein repressed behaviours and emotions surface into the patient's consciousness; generally, one's conflict is unconscious. Since the 1970s, psychodynamics has largely been abandoned as not fact-based; Freudian psychoanalysis has been criticized as pseudoscience. Overview In general, psychodynamics is the study of the interrelationship of various parts of the mind, personality, or psyche as they relate to mental, emotional, or motivational forces especially at the unconscious level. The mental forces involved in psychodynamics are often divided into two parts: (a) the interaction of the emotional and motivational forces that affect behavior and mental states, especially on a subconscious level; (b) inner forces affecting behavior: the study of the emotional and motivational forces that affect behavior and states of mind. Freud proposed that psychological energy was constant (hence, emotional changes consisted only in displacements) and that it tended to rest (point attractor) through discharge (catharsis). In mate selection psychology, psychodynamics is defined as the study of the forces, motives, and energy generated by the deepest of human needs. In general, psychodynamics studies the transformations and exchanges of "psychic energy" within the personality. A focus in psychodynamics is the connection between the energetics of emotional states in the Id, ego and super-ego as they relate to early childhood developments and processes. At the heart of psychological processes, according to Freud, is the ego, which he envisions as battling with three forces: the id, the super-ego, and the outside world. The id is the unconscious reservoir of libido, the psychic energy that fuels instincts and psychic processes. The ego serves as the general manager of personality, making decisions regarding the pleasures that will be pursued at the id's demand, the person's safety requirements, and the moral dictates of the superego that will be followed. The superego refers to the repository of an individual's moral values, divided into the conscience – the internalization of a society's rules and regulations – and the ego-ideal – the internalization of one's goals. Hence, the basic psychodynamic model focuses on the dynamic interactions between the id, ego, and superego. 
Psychodynamics, subsequently, attempts to explain or interpret behavior or mental states in terms of innate emotional forces or processes.
History
Freud used the term psychodynamics to describe the processes of the mind as flows of psychological energy (libido) in an organically complex brain. The idea for this came from his first-year adviser, Ernst von Brücke at the University of Vienna, who held the view that all living organisms, including humans, are basically energy-systems to which the principle of the conservation of energy applies. This principle states that "the total amount of energy in any given physical system is always constant, that energy quanta can be changed but not annihilated, and that consequently when energy is moved from one part of the system, it must reappear in another part." This principle is at the very root of Freud's ideas, whereby libido, which is primarily seen as sexual energy, is transformed into other behaviors. However, it is now clear that the term energy in physics means something quite different from the term energy in relation to mental functioning. Psychodynamics was initially further developed by Carl Jung, Alfred Adler and Melanie Klein. By the mid-1940s and into the 1950s, the general application of the "psychodynamic theory" had been well established. In his 1988 book Introduction to Psychodynamics – a New Synthesis, psychiatrist Mardi J. Horowitz states that his own interest and fascination with psychodynamics began during the 1950s, when he heard Ralph Greenson, a popular local psychoanalyst who spoke to the public on topics such as "People who Hate", speak on the radio at UCLA. In his radio discussion, according to Horowitz, he "vividly described neurotic behavior and unconscious mental processes and linked psychodynamics theory directly to everyday life." In the 1950s, American psychiatrist Eric Berne built on Freud's psychodynamic model, particularly that of the "ego states", to develop a psychology of human interactions called transactional analysis, which physician James R. Allen describes as "a cognitive-behavioral approach to treatment" and "a very effective way of dealing with internal models of self and others as well as other psychodynamic issues". Around the 1970s, a growing number of researchers began departing from the psychodynamics model and the Freudian subconscious. Many felt that the evidence was over-reliant on imaginative discourse in therapy and on patient reports of their state of mind. These subjective experiences are inaccessible to others. Philosopher of science Karl Popper argued that much of Freudianism was untestable and therefore not scientific. In 1975 literary critic Frederick Crews began a decades-long campaign against the scientific credibility of Freudianism. This culminated in Freud: The Making of an Illusion, which aggregated years of criticism from many quarters. Medical schools and psychology departments no longer offer much training in psychodynamics, according to a 2007 survey. An Emory University psychology professor explained, "I don't think psychoanalysis is going to survive unless there is more of an appreciation for empirical rigor and testing."
Freudian analysis
According to American psychologist Calvin S. Hall, from his 1954 Primer in Freudian Psychology: At the heart of psychological processes, according to Freud, is the ego, which he sees battling with three forces: the id, the super-ego, and the outside world.
Hence, the basic psychodynamic model focuses on the dynamic interactions between the id, ego, and superego. Psychodynamics, subsequently, attempts to explain or interpret behavior or mental states in terms of innate emotional forces or processes. In his writings about the "engines of human behavior", Freud used the German word Trieb, a word that can be translated into English as either instinct or drive. In the 1930s, Freud's daughter Anna Freud began to apply Freud's psychodynamic theories of the "ego" to the study of parent-child attachment and especially deprivation and in doing so developed ego psychology. Jungian analysis At the turn of the 20th century, during these decisive years, a young Swiss psychiatrist named Carl Jung had been following Freud's writings and had sent him copies of his articles and his first book, the 1907 Psychology of Dementia Praecox, in which he upheld the Freudian psychodynamic viewpoint, although with some reservations. That year, Freud invited Jung to visit him in Vienna. The two men, it is said, were greatly attracted to each other, and they talked continuously for thirteen hours. This led to a professional relationship in which they corresponded on a weekly basis, for a period of six years. Carl Jung's contributions in psychodynamic psychology include: The psyche tends toward wholeness. The self is composed of the ego, the personal unconscious, the collective unconscious. The collective unconscious contains the archetypes which manifest in ways particular to each individual. Archetypes are composed of dynamic tensions and arise spontaneously in the individual and collective psyche. Archetypes are autonomous energies common to the human species. They give the psyche its dynamic properties and help organize it. Their effects can be seen in many forms and across cultures. The Transcendent Function: The emergence of the third resolves the split between dynamic polar tensions within the archetypal structure. The recognition of the spiritual dimension of the human psyche. The role of images which spontaneously arise in the human psyche (images include the interconnection between affect, images, and instinct) to communicate the dynamic processes taking place in the personal and collective unconscious, images which can be used to help the ego move in the direction of psychic wholeness. Recognition of the multiplicity of psyche and psychic life, that there are several organizing principles within the psyche, and that they are at times in conflict. See also Ernst Wilhelm Brücke Yisrael Salantar Cathexis Object relations theory Reaction formation Robert Langs References Further reading Brown, Junius Flagg & Menninger, Karl Augustus (1940). The Psychodynamics of Abnormal Behavior, 484 pages, McGraw-Hill Book Company, inc. Weiss, Edoardo (1950). Principles of Psychodynamics, 268 pages, Grune & Stratton Pearson Education (1970). The Psychodynamics of Patient Care Prentice Hall, 422 pgs. Stanford University: Higher Education Division. Jean Laplanche et J.B. Pontalis (1974). The Language of Psycho-Analysis, Editeur: W. W. Norton & Company, Shedler, Jonathan. "That was Then, This is Now: An Introduction to Contemporary Psychodynamic Therapy", PDF PDM Task Force. (2006). Psychodynamic Diagnostic Manual. Silver Spring, MD. Alliance of Psychoanalytic Organizations. Hutchinson, E.(ED.) (2017).Essentials of human behavior: Integrating person, environment, and the life course. Thousand Oaks, CA: Sage. Freudian psychology Psychoanalysis
0.774503
0.996473
0.771771
Electromagnetic field
An electromagnetic field (also EM field) is a physical field, mathematical functions of position and time, representing the influences on and due to electric charges. The field at any point in space and time can be regarded as a combination of an electric field and a magnetic field. Because of the interrelationship between the fields, a disturbance in the electric field can create a disturbance in the magnetic field which in turn affects the electric field, leading to an oscillation that propagates through space, known as an electromagnetic wave. The way in which charges and currents (i.e. streams of charges) interact with the electromagnetic field is described by Maxwell's equations and the Lorentz force law. Maxwell's equations detail how the electric field converges towards or diverges away from electric charges, how the magnetic field curls around electrical currents, and how changes in the electric and magnetic fields influence each other. The Lorentz force law states that a charge subject to an electric field feels a force along the direction of the field, and a charge moving through a magnetic field feels a force that is perpendicular both to the magnetic field and to its direction of motion. The electromagnetic field is described by classical electrodynamics, an example of a classical field theory. This theory describes many macroscopic physical phenomena accurately. However, it was unable to explain the photoelectric effect and atomic absorption spectroscopy, experiments at the atomic scale. That required the use of quantum mechanics, specifically the quantization of the electromagnetic field and the development of quantum electrodynamics. History The empirical investigation of electromagnetism is at least as old as the ancient Greek philosopher, mathematician and scientist Thales of Miletus, who around 600 BCE described his experiments rubbing fur of animals on various materials such as amber creating static electricity. By the 18th century, it was understood that objects can carry positive or negative electric charge, that two objects carrying charge of the same sign repel each other, that two objects carrying charges of opposite sign attract one another, and that the strength of this force falls off as the square of the distance between them. Michael Faraday visualized this in terms of the charges interacting via the electric field. An electric field is produced when the charge is stationary with respect to an observer measuring the properties of the charge, and a magnetic field as well as an electric field are produced when the charge moves, creating an electric current with respect to this observer. Over time, it was realized that the electric and magnetic fields are better thought of as two parts of a greater whole—the electromagnetic field. In 1820, Hans Christian Ørsted showed that an electric current can deflect a nearby compass needle, establishing that electricity and magnetism are closely related phenomena. Faraday then made the seminal observation that time-varying magnetic fields could induce electric currents in 1831. In 1861, James Clerk Maxwell synthesized all the work to date on electrical and magnetic phenomena into a single mathematical theory, from which he then deduced that light is an electromagnetic wave. Maxwell's continuous field theory was very successful until evidence supporting the atomic model of matter emerged. Beginning in 1877, Hendrik Lorentz developed an atomic model of electromagnetism and in 1897 J. J. 
Thomson completed experiments that defined the electron. The Lorentz theory works for free charges in electromagnetic fields, but fails to predict the energy spectrum for bound charges in atoms and molecules. For that problem, quantum mechanics is needed, ultimately leading to the theory of quantum electrodynamics. Practical applications of the new understanding of electromagnetic fields emerged in the late 1800s. The electrical generator and motor were invented using only the empirical findings like Faraday's and Ampere's laws combined with practical experience. Mathematical description There are different mathematical ways of representing the electromagnetic field. The first one views the electric and magnetic fields as three-dimensional vector fields. These vector fields each have a value defined at every point of space and time and are thus often regarded as functions of the space and time coordinates. As such, they are often written as (electric field) and (magnetic field). If only the electric field is non-zero, and is constant in time, the field is said to be an electrostatic field. Similarly, if only the magnetic field is non-zero and is constant in time, the field is said to be a magnetostatic field. However, if either the electric or magnetic field has a time-dependence, then both fields must be considered together as a coupled electromagnetic field using Maxwell's equations. With the advent of special relativity, physical laws became amenable to the formalism of tensors. Maxwell's equations can be written in tensor form, generally viewed by physicists as a more elegant means of expressing physical laws. The behavior of electric and magnetic fields, whether in cases of electrostatics, magnetostatics, or electrodynamics (electromagnetic fields), is governed by Maxwell's equations. In the vector field formalism, these are: Gauss's law Gauss's law for magnetism Faraday's law Ampère–Maxwell law where is the charge density, which is a function of time and position, is the vacuum permittivity, is the vacuum permeability, and is the current density vector, also a function of time and position. Inside a linear material, Maxwell's equations change by switching the permeability and permittivity of free space with the permeability and permittivity of the linear material in question. Inside other materials which possess more complex responses to electromagnetic fields, these terms are often represented by complex numbers, or tensors. The Lorentz force law governs the interaction of the electromagnetic field with charged matter. When a field travels across to different media, the behavior of the field changes according to the properties of the media. Properties of the field Electrostatics and magnetostatics The Maxwell equations simplify when the charge density at each point in space does not change over time and all electric currents likewise remain constant. All of the time derivatives vanish from the equations, leaving two expressions that involve the electric field, and along with two formulae that involve the magnetic field: and These expressions are the basic equations of electrostatics, which focuses on situations where electrical charges do not move, and magnetostatics, the corresponding area of magnetic phenomena. Transformations of electromagnetic fields Whether a physical effect is attributable to an electric field or to a magnetic field is dependent upon the observer, in a way that special relativity makes mathematically precise. 
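Before turning to how the fields transform between observers (the example that follows), here is a quick numerical illustration of the electrostatic case summarised above: Gauss's law predicts that the flux of E through any closed surface equals the enclosed charge divided by ε0. The charge value, cube size, and grid resolution below are arbitrary choices made for this sketch.

```python
import numpy as np

eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
q = 1e-9                  # a 1 nC point charge at the origin (assumed for the example)

def E(r):
    """Coulomb field of the point charge at positions r (shape (..., 3))."""
    rn = np.linalg.norm(r, axis=-1, keepdims=True)
    return q / (4 * np.pi * eps0) * r / rn**3

# Numerically integrate the flux of E through the surface of a cube of
# half-width a centred on the charge; Gauss's law predicts q / eps0.
a, n = 0.5, 400
s = (np.arange(n) + 0.5) / n * 2 * a - a          # midpoints of face cells
u, v = np.meshgrid(s, s, indexing="ij")
dA = (2 * a / n) ** 2

flux = 0.0
for axis in range(3):
    for sign in (+1.0, -1.0):
        pts = np.zeros(u.shape + (3,))
        pts[..., axis] = sign * a
        pts[..., (axis + 1) % 3] = u
        pts[..., (axis + 2) % 3] = v
        normal = np.zeros(3)
        normal[axis] = sign
        flux += np.sum(E(pts) @ normal) * dA       # E . n summed over the face

print("numerical flux:", flux)
print("q / eps0      :", q / eps0)   # the two agree closely
```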
For example, suppose that a laboratory contains a long straight wire that carries an electrical current. In the frame of reference where the laboratory is at rest, the wire is motionless and electrically neutral: the current, composed of negatively charged electrons, moves against a background of positively charged ions, and the densities of positive and negative charges cancel each other out. A test charge near the wire would feel no electrical force from the wire. However, if the test charge is in motion parallel to the current, the situation changes. In the rest frame of the test charge, the positive and negative charges in the wire are moving at different speeds, and so the positive and negative charge distributions are Lorentz-contracted by different amounts. Consequently, the wire has a nonzero net charge density, and the test charge must experience a nonzero electric field and thus a nonzero force. In the rest frame of the laboratory, there is no electric field to explain the test charge being pulled towards or pushed away from the wire. So, an observer in the laboratory rest frame concludes that a field must be present. In general, a situation that one observer describes using only an electric field will be described by an observer in a different inertial frame using a combination of electric and magnetic fields. Analogously, a phenomenon that one observer describes using only a magnetic field will be, in a relatively moving reference frame, described by a combination of fields. The rules for relating the fields required in different reference frames are the Lorentz transformations of the fields. Thus, electrostatics and magnetostatics are now seen as studies of the static EM field when a particular frame has been selected to suppress the other type of field, and since an EM field with both electric and magnetic will appear in any other frame, these "simpler" effects are merely a consequence of different frames of measurement. The fact that the two field variations can be reproduced just by changing the motion of the observer is further evidence that there is only a single actual field involved which is simply being observed differently. Reciprocal behavior of electric and magnetic fields The two Maxwell equations, Faraday's Law and the Ampère–Maxwell Law, illustrate a very practical feature of the electromagnetic field. Faraday's Law may be stated roughly as "a changing magnetic field inside a loop creates an electric voltage around the loop". This is the principle behind the electric generator. Ampere's Law roughly states that "an electrical current around a loop creates a magnetic field through the loop". Thus, this law can be applied to generate a magnetic field and run an electric motor. Behavior of the fields in the absence of charges or currents Maxwell's equations can be combined to derive wave equations. The solutions of these equations take the form of an electromagnetic wave. In a volume of space not containing charges or currents (free space) – that is, where and are zero, the electric and magnetic fields satisfy these electromagnetic wave equations: James Clerk Maxwell was the first to obtain this relationship by his completion of Maxwell's equations with the addition of a displacement current term to Ampere's circuital law. This unified the physical understanding of electricity, magnetism, and light: visible light is but one portion of the full range of electromagnetic waves, the electromagnetic spectrum. 
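A one-line consequence of combining Maxwell's equations into a wave equation, as described above, is that the wave speed in vacuum is 1/√(μ0 ε0). The short sketch below simply evaluates that expression with standard values of the vacuum constants.

```python
import math

mu0  = 4e-7 * math.pi        # vacuum permeability (classical defined value), H/m
eps0 = 8.8541878128e-12      # vacuum permittivity, F/m

c = 1.0 / math.sqrt(mu0 * eps0)
print(f"1/sqrt(mu0*eps0) = {c:.6e} m/s")   # ~2.998e8 m/s, the speed of light
```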
Time-varying EM fields in Maxwell's equations An electromagnetic field very far from currents and charges (sources) is called electromagnetic radiation (EMR) since it radiates from the charges and currents in the source. Such radiation can occur across a wide range of frequencies called the electromagnetic spectrum, including radio waves, microwave, infrared, visible light, ultraviolet light, X-rays, and gamma rays. The many commercial applications of these radiations are discussed in the named and linked articles. A notable application of visible light is that this type of energy from the Sun powers all life on Earth that either makes or uses oxygen. A changing electromagnetic field which is physically close to currents and charges (see near and far field for a definition of "close") will have a dipole characteristic that is dominated by either a changing electric dipole, or a changing magnetic dipole. This type of dipole field near sources is called an electromagnetic near-field. Changing dipole fields, as such, are used commercially as near-fields mainly as a source of dielectric heating. Otherwise, they appear parasitically around conductors which absorb EMR, and around antennas which have the purpose of generating EMR at greater distances. Changing dipole fields (i.e., magnetic near-fields) are used commercially for many types of magnetic induction devices. These include motors and electrical transformers at low frequencies, and devices such as RFID tags, metal detectors, and MRI scanner coils at higher frequencies. Health and safety The potential effects of electromagnetic fields on human health vary widely depending on the frequency, intensity of the fields, and the length of the exposure. Low frequency, low intensity, and short duration exposure to electromagnetic radiation is generally considered safe. On the other hand, radiation from other parts of the electromagnetic spectrum, such as ultraviolet light and gamma rays, are known to cause significant harm in some circumstances. See also Classification of electromagnetic fields Electric field Electromagnetism Electromagnetic propagation Electromagnetic radiation Electromagnetic spectrum Electromagnetic field measurements Magnetic field Maxwell's equations Photoelectric effect Photon Quantization of the electromagnetic field Quantum electrodynamics References Citations Sources Further reading (This article accompanied a December 8, 1864 presentation by Maxwell to the Royal Society.) External links Electromagnetism
0.773296
0.998008
0.771756
Derivations of the Lorentz transformations
There are many ways to derive the Lorentz transformations using a variety of physical principles, ranging from Maxwell's equations to Einstein's postulates of special relativity, and mathematical tools, spanning from elementary algebra and hyperbolic functions, to linear algebra and group theory. This article provides a few of the easier ones to follow in the context of special relativity, for the simplest case of a Lorentz boost in standard configuration, i.e. two inertial frames moving relative to each other at constant (uniform) relative velocity less than the speed of light, and using Cartesian coordinates so that the x and x′ axes are collinear. Lorentz transformation In the fundamental branches of modern physics, namely general relativity and its widely applicable subset special relativity, as well as relativistic quantum mechanics and relativistic quantum field theory, the Lorentz transformation is the transformation rule under which all four-vectors and tensors containing physical quantities transform from one frame of reference to another. The prime examples of such four-vectors are the four-position and four-momentum of a particle, and for fields the electromagnetic tensor and stress–energy tensor. The fact that these objects transform according to the Lorentz transformation is what mathematically defines them as vectors and tensors; see tensor for a definition. Given the components of the four-vectors or tensors in some frame, the "transformation rule" allows one to determine the altered components of the same four-vectors or tensors in another frame, which could be boosted or accelerated, relative to the original frame. A "boost" should not be conflated with spatial translation, rather it's characterized by the relative velocity between frames. The transformation rule itself depends on the relative motion of the frames. In the simplest case of two inertial frames the relative velocity between enters the transformation rule. For rotating reference frames or general non-inertial reference frames, more parameters are needed, including the relative velocity (magnitude and direction), the rotation axis and angle turned through. Historical background The usual treatment (e.g., Albert Einstein's original work) is based on the invariance of the speed of light. However, this is not necessarily the starting point: indeed (as is described, for example, in the second volume of the Course of Theoretical Physics by Landau and Lifshitz), what is really at stake is the locality of interactions: one supposes that the influence that one particle, say, exerts on another can not be transmitted instantaneously. Hence, there exists a theoretical maximal speed of information transmission which must be invariant, and it turns out that this speed coincides with the speed of light in vacuum. Newton had himself called the idea of action at a distance philosophically "absurd", and held that gravity had to be transmitted by some agent according to certain laws. Michelson and Morley in 1887 designed an experiment, employing an interferometer and a half-silvered mirror, that was accurate enough to detect aether flow. The mirror system reflected the light back into the interferometer. If there were an aether drift, it would produce a phase shift and a change in the interference that would be detected. However, no phase shift was ever found. The negative outcome of the Michelson–Morley experiment left the concept of aether (or its drift) undermined. 
There was consequent perplexity as to why light evidently behaves like a wave, without any detectable medium through which wave activity might propagate. In a 1964 paper, Erik Christopher Zeeman showed that the causality-preserving property, a condition that is weaker in a mathematical sense than the invariance of the speed of light, is enough to assure that the coordinate transformations are the Lorentz transformations. Norman Goldstein's paper shows a similar result using inertiality (the preservation of time-like lines) rather than causality. Physical principles Einstein based his theory of special relativity on two fundamental postulates. First, all physical laws are the same for all inertial frames of reference, regardless of their relative state of motion; and second, the speed of light in free space is the same in all inertial frames of reference, again, regardless of the relative velocity of each reference frame. The Lorentz transformation is fundamentally a direct consequence of this second postulate. The second postulate Assume the second postulate of special relativity stating the constancy of the speed of light, independent of reference frame, and consider a collection of reference systems moving with respect to each other with constant velocity, i.e. inertial systems, each endowed with its own set of Cartesian coordinates labeling the points, i.e. events of spacetime. To express the invariance of the speed of light in mathematical form, fix two events in spacetime, to be recorded in each reference frame. Let the first event be the emission of a light signal, and the second event be it being absorbed. Pick any reference frame in the collection. In its coordinates, the first event will be assigned coordinates , and the second . The spatial distance between emission and absorption is , but this is also the distance traveled by the signal. One may therefore set up the equation Every other coordinate system will record, in its own coordinates, the same equation. This is the immediate mathematical consequence of the invariance of the speed of light. The quantity on the left is called the spacetime interval. The interval is, for events separated by light signals, the same (zero) in all reference frames, and is therefore called invariant. Invariance of interval For the Lorentz transformation to have the physical significance realized by nature, it is crucial that the interval is an invariant measure for any two events, not just for those separated by light signals. To establish this, one considers an infinitesimal interval, as recorded in a system . Let be another system assigning the interval to the same two infinitesimally separated events. Since if , then the interval will also be zero in any other system (second postulate), and since and are infinitesimals of the same order, they must be proportional to each other, On what may depend? It may not depend on the positions of the two events in spacetime, because that would violate the postulated homogeneity of spacetime. It might depend on the relative velocity between and , but only on the speed, not on the direction, because the latter would violate the isotropy of space. Now bring in systems and , From these it follows, Now, one observes that on the right-hand side that depend on both and ; as well as on the angle between the vectors and . However, one also observes that the left-hand side does not depend on this angle. Thus, the only way for the equation to hold true is if the function is a constant. 
Further, by the same equation this constant is unity. Thus, for all systems . Since this holds for all infinitesimal intervals, it holds for all intervals. Most, if not all, derivations of the Lorentz transformations take this for granted. In those derivations, they use the constancy of the speed of light (invariance of light-like separated events) only. This result ensures that the Lorentz transformation is the correct transformation. Rigorous Statement and Proof of Proportionality of ds2 and ds′2 Theorem: Let be integers, and a vector space over of dimension . Let be an indefinite-inner product on with signature type . Suppose is a symmetric bilinear form on such that the null set of the associated quadratic form of is contained in that of (i.e. suppose that for every , if then ). Then, there exists a constant such that . Furthermore, if we assume and that also has signature type , then we have . Remarks. In the section above, the term "infinitesimal" in relation to is actually referring (pointwise) to a quadratic form over a four-dimensional real vector space (namely the tangent space at a point of the spacetime manifold). The argument above is copied almost verbatim from Landau and Lifshitz, where the proportionality of and is merely stated as an 'obvious' fact even though the statement is not formulated in a mathematically precise fashion nor proven. This is a non-obvious mathematical fact which needs to be justified; fortunately the proof is relatively simple and it amounts to basic algebraic observations and manipulations. The above assumptions on means the following: is a bilinear form which is symmetric and non-degenerate, such that there exists an ordered basis of for which An equivalent way of saying this is that has the matrix representation relative to the ordered basis . If we consider the special case where then we're dealing with the situation of Lorentzian signature in 4-dimensions, which is what relativity is based on (or one could adopt the opposite convention with an overall minus sign; but this clearly doesn't affect the truth of the theorem). Also, in this case, if we assume and both have quadratics forms with the same null-set (in physics terminology, we say that and give rise to the same light cone) then the theorem tells us that there is a constant such that . Modulo some differences in notation, this is precisely what was used in the section above. Proof of Theorem. Fix a basis of relative to which has the matrix representation . The point is that the vector space can be decomposed into subspaces (the span of the first basis vectors) and (then span of the other basis vectors) such that each vector in can be written uniquely as for and ; moreover , and . So (by bilinearity) Since the first summand on the right in non-positive and the second in non-negative, for any and , we can find a scalar such that . From now on, always consider and . By bilinearity If , then also and the same is true for (since the null-set of is contained in that of ). In that case, subtracting the two expression above (and dividing by 4) yields As above, for each and , there is a scalar such that , so , which by bilinearity means . Now consider nonzero such that . We can find such that . By the expressions above, Analogically, for , one can show that if , then also . So it holds for all vectors in . For , if , for some , we can (scaling one of the if necessary) assume , which by the above means that . So . 
Finally, if we assume that both have signature types and then (we can't have because that would mean , which is impossible since having signature type means it is a non-zero bilinear form. Also, if , then it means has positive diagonal entries and negative diagonal entries; i.e. it is of signature , since we assumed , so this is also not possible. This leaves us with as the only option). This completes the proof of the theorem. Standard configuration The invariant interval can be seen as a non-positive definite distance function on spacetime. The set of transformations sought must leave this distance invariant. Due to the reference frame's coordinate system's cartesian nature, one concludes that, as in the Euclidean case, the possible transformations are made up of translations and rotations, where a slightly broader meaning should be allowed for the term rotation. The interval is quite trivially invariant under translation. For rotations, there are four coordinates. Hence there are six planes of rotation. Three of those are rotations in spatial planes. The interval is invariant under ordinary rotations too. It remains to find a "rotation" in the three remaining coordinate planes that leaves the interval invariant. Equivalently, to find a way to assign coordinates so that they coincide with the coordinates corresponding to a moving frame. The general problem is to find a transformation such that To solve the general problem, one may use the knowledge about invariance of the interval of translations and ordinary rotations to assume, without loss of generality, that the frames and are aligned in such a way that their coordinate axes all meet at and that the and axes are permanently aligned and system has speed along the positive . Call this the standard configuration. It reduces the general problem to finding a transformation such that The standard configuration is used in most examples below. A linear solution of the simpler problem solves the more general problem since coordinate differences then transform the same way. Linearity is often assumed or argued somehow in the literature when this simpler problem is considered. If the solution to the simpler problem is not linear, then it doesn't solve the original problem because of the cross terms appearing when expanding the squares. The solutions As mentioned, the general problem is solved by translations in spacetime. These do not appear as a solution to the simpler problem posed, while the boosts do (and sometimes rotations depending on angle of attack). Even more solutions exist if one only insist on invariance of the interval for lightlike separated events. These are nonlinear conformal ("angle preserving") transformations. One has Some equations of physics are conformal invariant, e.g. the Maxwell's equations in source-free space, but not all. The relevance of the conformal transformations in spacetime is not known at present, but the conformal group in two dimensions is highly relevant in conformal field theory and statistical mechanics. It is thus the Poincaré group that is singled out by the postulates of special relativity. It is the presence of Lorentz boosts (for which velocity addition is different from mere vector addition that would allow for speeds greater than the speed of light) as opposed to ordinary boosts that separates it from the Galilean group of Galilean relativity. 
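As a quick numerical check of the requirement singled out above, namely that the transformations sought leave the interval invariant for any two events and not only light-like separated ones, the sketch below applies a standard-configuration boost to an arbitrary pair of events. The speed v = 0.6 and the event coordinates are made up for the example, and units with c = 1 are used.

```python
import numpy as np

v = 0.6                      # relative speed of the primed frame (assumed), units with c = 1
g = 1.0 / np.sqrt(1 - v**2)  # Lorentz factor

# Standard-configuration boost acting on (ct, x, y, z).
L = np.array([[ g,   -g*v, 0, 0],
              [-g*v,  g,   0, 0],
              [ 0,    0,   1, 0],
              [ 0,    0,   0, 1]])

def interval(ev):
    ct, x, y, z = ev
    return ct**2 - x**2 - y**2 - z**2

# Two arbitrary events (not light-like separated).
e1 = np.array([2.0, 1.0, -0.5, 3.0])
e2 = np.array([0.3, 2.2,  1.0, 0.0])
d = e1 - e2

print("interval before boost:", interval(d))
print("interval after boost :", interval(L @ d))   # identical, as required
```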
Spatial rotations, spatial and temporal inversions and translations are present in both groups and have the same consequences in both theories (conservation laws of momentum, energy, and angular momentum). Not all accepted theories respect symmetry under the inversions. Using the geometry of spacetime Landau & Lifshitz solution These three hyperbolic function formulae (H1–H3) are referenced below: The problem posed in standard configuration for a boost in the , where the primed coordinates refer to the moving system is solved by finding a linear solution to the simpler problem The most general solution is, as can be verified by direct substitution using (H1), To find the role of in the physical setting, record the progression of the origin of , i.e. . The equations become (using first ), Now divide: where was used in the first step, (H2) and (H3) in the second, which, when plugged back in, gives or, with the usual abbreviations, This calculation is repeated with more detail in section hyperbolic rotation. Hyperbolic rotation The Lorentz transformations can also be derived by simple application of the special relativity postulates and using hyperbolic identities. Relativity postulates Start from the equations of the spherical wave front of a light pulse, centred at the origin: which take the same form in both frames because of the special relativity postulates. Next, consider relative motion along the x-axes of each frame, in standard configuration above, so that y = y′, z = z′, which simplifies to Linearity Now assume that the transformations take the linear form: where A, B, C, D are to be found. If they were non-linear, they would not take the same form for all observers, since fictitious forces (hence accelerations) would occur in one frame even if the velocity was constant in another, which is inconsistent with inertial frame transformations. Substituting into the previous result: and comparing coefficients of , , : Hyperbolic rotation The equations suggest the hyperbolic identity Introducing the rapidity parameter as a hyperbolic angle allows the consistent identifications where the signs after the square roots are chosen so that and increase if and increase, respectively. The hyperbolic transformations have been solved for: If the signs were chosen differently the position and time coordinates would need to be replaced by and/or so that and increase not decrease. To find how relates to the relative velocity, from the standard configuration the origin of the primed frame is measured in the unprimed frame to be (or the equivalent and opposite way round; the origin of the unprimed frame is and in the primed frame it is at ): and hyperbolic identities leads to the relations between , , and , From physical principles The problem is usually restricted to two dimensions by using a velocity along the x axis such that the y and z coordinates do not intervene, as described in standard configuration above. Time dilation and length contraction The transformation equations can be derived from time dilation and length contraction, which in turn can be derived from first principles. 
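Before continuing with that derivation, here is a short numerical check of the hyperbolic parametrisation obtained above: with rapidity ζ = artanh(v/c), cosh ζ and sinh ζ reproduce γ and γv/c, and the boost acts on (ct, x) exactly like a hyperbolic rotation. The value v = 0.6 is an arbitrary choice for the sketch.

```python
import numpy as np

c = 1.0
v = 0.6                          # assumed relative speed, |v| < c
zeta = np.arctanh(v / c)         # rapidity: tanh(zeta) = v/c

# The hyperbolic parametrisation reproduces the usual boost coefficients.
gamma = 1.0 / np.sqrt(1 - (v / c)**2)
assert np.isclose(np.cosh(zeta), gamma)
assert np.isclose(np.sinh(zeta), gamma * v / c)

# Boost written as a hyperbolic rotation of (ct, x).
ct, x = 1.5, 0.7                 # an arbitrary event
ct_p = ct * np.cosh(zeta) - x * np.sinh(zeta)
x_p  = -ct * np.sinh(zeta) + x * np.cosh(zeta)

# The quadratic form (ct)^2 - x^2 is preserved, just as an ordinary
# rotation preserves x^2 + y^2.
print(ct**2 - x**2, "->", ct_p**2 - x_p**2)
```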
With and representing the spatial origins of the frames and , and some event , the relation between the position vectors (which here reduce to oriented segments , and ) in both frames is given by: Using coordinates in and in for event M, in frame the segments are , and (since is as measured in ): Likewise, in frame , the segments are (since is as measured in ), and : By rearranging the first equation, we get which is the space part of the Lorentz transformation. The second relation gives which is the inverse of the space part. Eliminating between the two space part equations gives that, if , simplifies to: which is the time part of the transformation, the inverse of which is found by a similar elimination of : Spherical wavefronts of light The following is similar to that of Einstein. As in the Galilean transformation, the Lorentz transformation is linear since the relative velocity of the reference frames is constant as a vector; otherwise, inertial forces would appear. They are called inertial or Galilean reference frames. According to relativity no Galilean reference frame is privileged. Another condition is that the speed of light must be independent of the reference frame, in practice of the velocity of the light source. Consider two inertial frames of reference O and O′, assuming O to be at rest while O′ is moving with a velocity v with respect to O in the positive x-direction. The origins of O and O′ initially coincide with each other. A light signal is emitted from the common origin and travels as a spherical wave front. Consider a point P on a spherical wavefront at a distance r and r′ from the origins of O and O′ respectively. According to the second postulate of the special theory of relativity the speed of light is the same in both frames, so for the point P: The equation of a sphere in frame O is given by For the spherical wavefront that becomes Similarly, the equation of a sphere in frame O′ is given by so the spherical wavefront satisfies The origin O′ is moving along x-axis. Therefore, must vary linearly with and . Therefore, the transformation has the form For the origin of O′ and are given by so, for all , and thus This simplifies the transformation to where is to be determined. At this point is not necessarily a constant, but is required to reduce to 1 for . The inverse transformation is the same except that the sign of is reversed: The above two equations give the relation between and as: or Replacing , , and in the spherical wavefront equation in the O′ frame, with their expressions in terms of x, y, z and t produces: and therefore, which implies, or Comparing the coefficient of in the above equation with the coefficient of in the spherical wavefront equation for frame O produces: Equivalent expressions for γ can be obtained by matching the x2 coefficients or setting the coefficient to zero. Rearranging: or, choosing the positive root to ensure that the x and x' axes and the time axes point in the same direction, which is called the Lorentz factor. This produces the Lorentz transformation from the above expression. It is given by The Lorentz transformation is not the only transformation leaving invariant the shape of spherical waves, as there is a wider set of spherical wave transformations in the context of conformal geometry, leaving invariant the expression . 
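The sketch below checks numerically what this derivation requires of a boost: points lying on the spherical wavefront x² + y² + z² = (ct)² in one frame land on the corresponding wavefront in the boosted frame. The speed v = 0.8 and the sampled directions are arbitrary choices for the example.

```python
import numpy as np

c = 1.0
v = 0.8
g = 1.0 / np.sqrt(1 - v**2)

def boost(ct, x, y, z):
    """Standard-configuration boost along x (units with c = 1)."""
    return g * (ct - v * x), g * (x - v * ct), y, z

# Sample points on the spherical wavefront x^2 + y^2 + z^2 = (ct)^2.
rng = np.random.default_rng(0)
dirs = rng.normal(size=(5, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)   # random unit directions
t = 2.0
pts = [(c * t, *(c * t * d)) for d in dirs]

for ct, x, y, z in pts:
    ct_p, x_p, y_p, z_p = boost(ct, x, y, z)
    lhs = x_p**2 + y_p**2 + z_p**2
    rhs = ct_p**2
    print(f"{lhs:.12f}  ==  {rhs:.12f}")   # the wavefront equation holds in the primed frame too
```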
However, scale changing conformal transformations cannot be used to symmetrically describe all laws of nature including mechanics, whereas the Lorentz transformations (the only one implying ) represent a symmetry of all laws of nature and reduce to Galilean transformations at . Galilean and Einstein's relativity Galilean reference frames In classical kinematics, the total displacement x in the R frame is the sum of the relative displacement x′ in frame R′ and of the distance between the two origins x − x′. If v is the relative velocity of R′ relative to R, the transformation is: , or . This relationship is linear for a constant , that is when R and R′ are Galilean frames of reference. In Einstein's relativity, the main difference from Galilean relativity is that space and time coordinates are intertwined, and in different inertial frames t ≠ t′. Since space is assumed to be homogeneous, the transformation must be linear. The most general linear relationship is obtained with four constant coefficients, A, B, γ, and b: The linear transformation becomes the Galilean transformation when γ = B = 1, b = −v and A = 0. An object at rest in the R′ frame at position x′ = 0 moves with constant velocity v in the R frame. Hence the transformation must yield x′ = 0 if x = vt. Therefore, b = −γv and the first equation is written as Using the principle of relativity According to the principle of relativity, there is no privileged Galilean frame of reference: therefore the inverse transformation for the position from frame R′ to frame R should have the same form as the original but with the velocity in the opposite direction, i.o.w. replacing v with -v: and thus Determining the constants of the first equation Since the speed of light is the same in all frames of reference, for the case of a light signal, the transformation must guarantee that t = x/c when t′ = x′/c. Substituting for t and t′ in the preceding equations gives: Multiplying these two equations together gives, At any time after t = t′ = 0, xx′ is not zero, so dividing both sides of the equation by xx′ results in which is called the "Lorentz factor". When the transformation equations are required to satisfy the light signal equations in the form and x′ = ct′, by substituting the x and x'-values, the same technique produces the same expression for the Lorentz factor. Determining the constants of the second equation The transformation equation for time can be easily obtained by considering the special case of a light signal, again satisfying and , by substituting term by term into the earlier obtained equation for the spatial coordinate giving so that which, when identified with determines the transformation coefficients A and B as So A and B are the unique constant coefficients necessary to preserve the constancy of the speed of light in the primed system of coordinates. Einstein's popular derivation In his popular book Einstein derived the Lorentz transformation by arguing that there must be two non-zero coupling constants and such that that correspond to light traveling along the positive and negative x-axis, respectively. For light if and only if . Adding and subtracting the two equations and defining gives Substituting corresponding to and noting that the relative velocity is , this gives The constant can be evaluated by demanding as per standard configuration. Using group theory From group postulates Following is a classical derivation (see, e.g., and references therein) based on group postulates and isotropy of the space. 
Coordinate transformations as a group The coordinate transformations between inertial frames form a group (called the proper Lorentz group) with the group operation being the composition of transformations (performing one transformation after another). Indeed, the four group axioms are satisfied: Closure: the composition of two transformations is a transformation: consider a composition of transformations from the inertial frame K to inertial frame K′, (denoted as K → K′), and then from K′ to inertial frame K′′, [K′ → K′′], there exists a transformation, [K → K′] [K′ → K′′], directly from an inertial frame K to inertial frame K′′. Associativity: the transformations ( [K → K′] [K′ → K′′] ) [K′′ → K′′′] and [K → K′] ( [K′ → K′′] [K′′ → K′′′] ) are identical. Identity element: there is an identity element, a transformation K → K. Inverse element: for any transformation K → K′ there exists an inverse transformation K′ → K. Transformation matrices consistent with group axioms Consider two inertial frames, K and K′, the latter moving with velocity with respect to the former. By rotations and shifts we can choose the x and x′ axes along the relative velocity vector and also that the events and coincide. Since the velocity boost is along the (and ) axes nothing happens to the perpendicular coordinates and we can just omit them for brevity. Now since the transformation we are looking after connects two inertial frames, it has to transform a linear motion in (t, x) into a linear motion in coordinates. Therefore, it must be a linear transformation. The general form of a linear transformation is where , , and are some yet unknown functions of the relative velocity . Let us now consider the motion of the origin of the frame K′. In the K′ frame it has coordinates , while in the K frame it has coordinates . These two points are connected by the transformation from which we get Analogously, considering the motion of the origin of the frame K, we get from which we get Combining these two gives and the transformation matrix has simplified, Now consider the group postulate inverse element. There are two ways we can go from the K′ coordinate system to the K coordinate system. The first is to apply the inverse of the transform matrix to the K′ coordinates: The second is, considering that the K′ coordinate system is moving at a velocity v relative to the K coordinate system, the K coordinate system must be moving at a velocity −v relative to the K′ coordinate system. Replacing v with −v in the transformation matrix gives: Now the function can not depend upon the direction of because it is apparently the factor which defines the relativistic contraction and time dilation. These two (in an isotropic world of ours) cannot depend upon the direction of . Thus, and comparing the two matrices, we get According to the closure group postulate a composition of two coordinate transformations is also a coordinate transformation, thus the product of two of our matrices should also be a matrix of the same form. Transforming K to K′ and from K′ to K′′ gives the following transformation matrix to go from K to K′′: In the original transform matrix, the main diagonal elements are both equal to , hence, for the combined transform matrix above to be of the same form as the original transform matrix, the main diagonal elements must also be equal. 
Equating these elements and rearranging gives: The denominator will be nonzero for nonzero , because is always nonzero; If we have the identity matrix which coincides with putting in the matrix we get at the end of this derivation for the other values of , making the final matrix valid for all nonnegative . For the nonzero , this combination of function must be a universal constant, one and the same for all inertial frames. Define this constant as , where has the dimension of . Solving we finally get and thus the transformation matrix, consistent with the group axioms, is given by If , then there would be transformations (with ) which transform time into a spatial coordinate and vice versa. We exclude this on physical grounds, because time can only run in the positive direction. Thus two types of transformation matrices are consistent with group postulates: Galilean transformations If then we get the Galilean-Newtonian kinematics with the Galilean transformation, where time is absolute, , and the relative velocity of two inertial frames is not limited. Lorentz transformations If , then we set which becomes the invariant speed, the speed of light in vacuum. This yields and thus we get special relativity with Lorentz transformation where the speed of light is a finite universal constant determining the highest possible relative velocity between inertial frames. If the Galilean transformation is a good approximation to the Lorentz transformation. Only experiment can answer the question which of the two possibilities, or , is realized in our world. The experiments measuring the speed of light, first performed by a Danish physicist Ole Rømer, show that it is finite, and the Michelson–Morley experiment showed that it is an absolute speed, and thus that . Boost from generators Using rapidity to parametrize the Lorentz transformation, the boost in the direction is likewise for a boost in the -direction and the -direction where are the Cartesian basis vectors, a set of mutually perpendicular unit vectors along their indicated directions. If one frame is boosted with velocity relative to another, it is convenient to introduce a unit vector in the direction of relative motion. The general boost is Notice the matrix depends on the direction of the relative motion as well as the rapidity, in all three numbers (two for direction, one for rapidity). We can cast each of the boost matrices in another form as follows. First consider the boost in the direction. The Taylor expansion of the boost matrix about is where the derivatives of the matrix with respect to are given by differentiating each entry of the matrix separately, and the notation indicates is set to zero after the derivatives are evaluated. Expanding to first order gives the infinitesimal transformation which is valid if is small (hence and higher powers are negligible), and can be interpreted as no boost (the first term is the 4×4 identity matrix), followed by a small boost. The matrix is the generator of the boost in the direction, so the infinitesimal boost is Now, is small, so dividing by a positive integer gives an even smaller increment of rapidity , and of these infinitesimal boosts will give the original infinitesimal boost with rapidity , In the limit of an infinite number of infinitely small steps, we obtain the finite boost transformation which is the limit definition of the exponential due to Leonhard Euler, and is now true for any . 
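As a sanity check of the result just obtained, before the y- and z-directions are treated below, the sketch exponentiates the x-boost generator numerically (via SciPy's expm) and compares it with the closed-form boost matrix; composing two such boosts also illustrates the closure property and the resulting velocity-addition rule. The sign convention chosen for the generator and the rapidity values are assumptions of this example.

```python
import numpy as np
from scipy.linalg import expm

# Generator of boosts along x, acting on (ct, x, y, z), in one common
# sign convention (the article's convention may differ by a sign, in
# which case the boost is expm(-zeta * Kx)).
Kx = np.array([[ 0, -1, 0, 0],
               [-1,  0, 0, 0],
               [ 0,  0, 0, 0],
               [ 0,  0, 0, 0]], dtype=float)

def boost_x(zeta):
    """Closed-form boost matrix with rapidity zeta (c = 1)."""
    ch, sh = np.cosh(zeta), np.sinh(zeta)
    return np.array([[ ch, -sh, 0, 0],
                     [-sh,  ch, 0, 0],
                     [  0,   0, 1, 0],
                     [  0,   0, 0, 1]])

z1, z2 = 0.3, 0.9   # two arbitrary rapidities (assumed for the example)

# 1) Exponentiating the generator reproduces the closed-form boost.
assert np.allclose(expm(z1 * Kx), boost_x(z1))

# 2) Closure: composing two boosts along x gives the boost whose rapidity
#    is the sum, which encodes the relativistic velocity-addition law.
assert np.allclose(boost_x(z1) @ boost_x(z2), boost_x(z1 + z2))
v1, v2 = np.tanh(z1), np.tanh(z2)
print("tanh(z1 + z2)     =", np.tanh(z1 + z2))
print("(v1+v2)/(1+v1*v2) =", (v1 + v2) / (1 + v1 * v2))
```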
Repeating the process for the boosts in the and directions obtains the other generators and the boosts are For any direction, the infinitesimal transformation is (small and expansion to first order) where is the generator of the boost in direction . It is the full boost generator, a vector of matrices , projected into the direction of the boost . The infinitesimal boost is Then in the limit of an infinite number of infinitely small steps, we obtain the finite boost transformation which is now true for any . Expanding the matrix exponential of in its power series we now need the powers of the generator. The square is but the cube returns to , and as always the zeroth power is the 4×4 identity, . In general the odd powers are while the even powers are therefore the explicit form of the boost matrix depends only the generator and its square. Splitting the power series into an odd power series and an even power series, using the odd and even powers of the generator, and the Taylor series of and about obtains a more compact but detailed form of the boost matrix where is introduced for the even power series to complete the Taylor series for . The boost is similar to Rodrigues' rotation formula, Negating the rapidity in the exponential gives the inverse transformation matrix, In quantum mechanics, relativistic quantum mechanics, and quantum field theory, a different convention is used for the boost generators; all of the boost generators are multiplied by a factor of the imaginary unit . From experiments Howard Percy Robertson and others showed that the Lorentz transformation can also be derived empirically. In order to achieve this, it's necessary to write down coordinate transformations that include experimentally testable parameters. For instance, let there be given a single "preferred" inertial frame in which the speed of light is constant, isotropic, and independent of the velocity of the source. It is also assumed that Einstein synchronization and synchronization by slow clock transport are equivalent in this frame. Then assume another frame in relative motion, in which clocks and rods have the same internal constitution as in the preferred frame. The following relations, however, are left undefined: differences in time measurements, differences in measured longitudinal lengths, differences in measured transverse lengths, depends on the clock synchronization procedure in the moving frame, then the transformation formulas (assumed to be linear) between those frames are given by: depends on the synchronization convention and is not determined experimentally, it obtains the value by using Einstein synchronization in both frames. The ratio between and is determined by the Michelson–Morley experiment, the ratio between and is determined by the Kennedy–Thorndike experiment, and alone is determined by the Ives–Stilwell experiment. In this way, they have been determined with great precision to and , which converts the above transformation into the Lorentz transformation. See also Lorentz group Noether's theorem Poincaré group Proper time Relativistic metric Spinor Notes References General relativity Special relativity
Water cycle
The water cycle (or hydrologic cycle or hydrological cycle) is a biogeochemical cycle that involves the continuous movement of water on, above and below the surface of the Earth. The mass of water on Earth remains fairly constant over time. However, the partitioning of the water into the major reservoirs of ice, fresh water, salt water and atmospheric water is variable and depends on climatic variables. The water moves from one reservoir to another, such as from river to ocean, or from the ocean to the atmosphere. The processes that drive these movements are evaporation, transpiration, condensation, precipitation, sublimation, infiltration, surface runoff, and subsurface flow. In doing so, the water goes through different forms: liquid, solid (ice) and vapor. The ocean plays a key role in the water cycle as it is the source of 86% of global evaporation. The water cycle involves the exchange of energy, which leads to temperature changes. When water evaporates, it takes up energy from its surroundings and cools the environment. When it condenses, it releases energy and warms the environment. These heat exchanges influence the climate system. The evaporative phase of the cycle purifies water because it causes salts and other solids picked up during the cycle to be left behind. The condensation phase in the atmosphere replenishes the land with freshwater. The flow of liquid water and ice transports minerals across the globe. It also reshapes the geological features of the Earth, through processes including erosion and sedimentation. The water cycle is also essential for the maintenance of most life and ecosystems on the planet. Human actions are greatly affecting the water cycle. Activities such as deforestation, urbanization, and the extraction of groundwater alter natural landscapes (land use changes), and all of these changes have an effect on the water cycle. On top of this, climate change is leading to an intensification of the water cycle. Research has shown that global warming is causing shifts in precipitation patterns, increased frequency of extreme weather events, and changes in the timing and intensity of rainfall. These water cycle changes affect ecosystems, water availability, agriculture, and human societies. Description Overall process The water cycle is powered by the energy emitted by the sun. This energy heats water in the ocean and seas. Water evaporates as water vapor into the air. Some ice and snow sublimate directly into water vapor. Evapotranspiration is water transpired from plants and evaporated from the soil. The water molecule has a smaller molecular mass than the major components of the atmosphere, nitrogen and oxygen, and hence is less dense. Due to the significant difference in density, buoyancy drives humid air higher. As altitude increases, air pressure decreases and the temperature drops (see Gas laws). The lower temperature causes water vapor to condense into tiny liquid water droplets which are heavier than the air, and which fall unless supported by an updraft. A huge concentration of these droplets over a large area in the atmosphere becomes visible as a cloud, while condensation near ground level is referred to as fog. Atmospheric circulation moves water vapor around the globe; cloud particles collide, grow, and fall out of the upper atmospheric layers as precipitation. Some precipitation falls as snow, hail, or sleet, and can accumulate in ice caps and glaciers, which can store frozen water for thousands of years.
Most water falls as rain back into the ocean or onto land, where the water flows over the ground as surface runoff. A portion of this runoff enters rivers, with streamflow moving water towards the oceans. Runoff and water emerging from the ground (groundwater) may be stored as freshwater in lakes. Not all runoff flows into rivers; much of it soaks into the ground as infiltration. Some water infiltrates deep into the ground and replenishes aquifers, which can store freshwater for long periods of time. Some infiltration stays close to the land surface and can seep back into surface-water bodies (and the ocean) as groundwater discharge or be taken up by plants and transferred back to the atmosphere as water vapor by transpiration. Some groundwater finds openings in the land surface and emerges as freshwater springs. In river valleys and floodplains, there is often continuous water exchange between surface water and ground water in the hyporheic zone. Over time, the water returns to the ocean, to continue the water cycle. The ocean plays a key role in the water cycle. The ocean holds "97% of the total water on the planet; 78% of global precipitation occurs over the ocean, and it is the source of 86% of global evaporation". Important physical processes within the water cycle include (in alphabetical order): Advection: The movement of water through the atmosphere. Without advection, water that evaporated over the oceans could not precipitate over land. Atmospheric rivers that move large volumes of water vapor over long distances are an example of advection. Condensation: The transformation of water vapor to liquid water droplets in the air, creating clouds and fog. Evaporation: The transformation of water from liquid to gas phases as it moves from the ground or bodies of water into the overlying atmosphere. The source of energy for evaporation is primarily solar radiation. Evaporation often implicitly includes transpiration from plants, though together they are specifically referred to as evapotranspiration. Total annual evapotranspiration amounts to approximately of water, of which evaporates from the oceans. 86% of global evaporation occurs over the ocean. Infiltration: The flow of water from the ground surface into the ground. Once infiltrated, the water becomes soil moisture or groundwater. A recent global study using water stable isotopes, however, shows that not all soil moisture is equally available for groundwater recharge or for plant transpiration. Percolation: Water flows vertically through the soil and rocks under the influence of gravity. Precipitation: Condensed water vapor that falls to the Earth's surface. Most precipitation occurs as rain, but also includes snow, hail, fog drip, graupel, and sleet. Approximately of water falls as precipitation each year, of it over the oceans. The rain on land contains of water per year and snowfall only . 78% of global precipitation occurs over the ocean. Runoff: The variety of ways by which water moves across the land. This includes both surface runoff and channel runoff. As it flows, the water may seep into the ground, evaporate into the air, become stored in lakes or reservoirs, or be extracted for agricultural or other human uses. Subsurface flow: The flow of water underground, in the vadose zone and aquifers. Subsurface water may return to the surface (e.g. as a spring or by being pumped) or eventually seep into the oceans.
Water returns to the land surface at lower elevation than where it infiltrated, under the force of gravity or gravity induced pressures. Groundwater tends to move slowly and is replenished slowly, so it can remain in aquifers for thousands of years. Transpiration: The release of water vapor from plants and soil into the air. Residence times The residence time of a reservoir within the hydrologic cycle is the average time a water molecule will spend in that reservoir (see table). It is a measure of the average age of the water in that reservoir. Groundwater can spend over 10,000 years beneath Earth's surface before leaving. Particularly old groundwater is called fossil water. Water stored in the soil remains there very briefly, because it is spread thinly across the Earth, and is readily lost by evaporation, transpiration, stream flow, or groundwater recharge. After evaporating, the residence time in the atmosphere is about 9 days before condensing and falling to the Earth as precipitation. The major ice sheets – Antarctica and Greenland – store ice for very long periods. Ice from Antarctica has been reliably dated to 800,000 years before present, though the average residence time is shorter. In hydrology, residence times can be estimated in two ways. The more common method relies on the principle of conservation of mass (water balance) and assumes the amount of water in a given reservoir is roughly constant. With this method, residence times are estimated by dividing the volume of the reservoir by the rate by which water either enters or exits the reservoir. Conceptually, this is equivalent to timing how long it would take the reservoir to become filled from empty if no water were to leave (or how long it would take the reservoir to empty from full if no water were to enter). An alternative method to estimate residence times, which is gaining in popularity for dating groundwater, is the use of isotopic techniques. This is done in the subfield of isotope hydrology. Water in storage The water cycle describes the processes that drive the movement of water throughout the hydrosphere. However, much more water is "in storage" (or in "pools") for long periods of time than is actually moving through the cycle. The storehouses for the vast majority of all water on Earth are the oceans. It is estimated that of the 1,386,000,000 km3 of the world's water supply, about 1,338,000,000 km3 is stored in oceans, or about 97%. It is also estimated that the oceans supply about 90% of the evaporated water that goes into the water cycle. The Earth's ice caps, glaciers, and permanent snowpack stores another 24,064,000 km3 accounting for only 1.7% of the planet's total water volume. However, this quantity of water is 68.7% of all freshwater on the planet. Changes caused by humans Local or regional impacts Human activities can alter the water cycle at the local or regional level. This happens due to changes in land use and land cover. Such changes affect "precipitation, evaporation, flooding, groundwater, and the availability of freshwater for a variety of uses". Examples for such land use changes are converting fields to urban areas or clearing forests. Such changes can affect the ability of soils to soak up surface water. Deforestation has local as well as regional effects. For example it reduces soil moisture, evaporation and rainfall at the local level. Furthermore, deforestation causes regional temperature changes that can affect rainfall patterns. 
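As a worked illustration of the water-balance method for residence times described above, the sketch below divides approximate reservoir volumes by approximate throughput rates. The round figures are illustrative order-of-magnitude values (only the ocean volume is taken from the text), not authoritative data.

```python
# Water-balance estimate: residence time = reservoir volume / throughput rate.
ocean_volume_km3 = 1_338_000_000        # stored in the oceans (figure from the text)
ocean_evaporation_km3_per_yr = 435_000  # approximate annual evaporation from the oceans

atmosphere_volume_km3 = 12_900          # approximate water held in the atmosphere
global_precip_km3_per_yr = 505_000      # approximate annual global precipitation

print("ocean residence time      ~", round(ocean_volume_km3 / ocean_evaporation_km3_per_yr), "years")
print("atmospheric residence time ~", round(atmosphere_volume_km3 / global_precip_km3_per_yr * 365), "days")
```

With these round numbers the atmospheric estimate comes out near the nine days quoted above, while the ocean estimate is on the order of thousands of years.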
Aquifer drawdown or overdrafting and the pumping of fossil water increase the total amount of water in the hydrosphere. This is because the water that was originally in the ground has now become available for evaporation as it is now in contact with the atmosphere. Water cycle intensification due to climate change Since the middle of the 20th century, human-caused climate change has resulted in observable changes in the global water cycle. The IPCC Sixth Assessment Report in 2021 predicted that these changes will continue to grow significantly at the global and regional level. These findings are a continuation of scientific consensus expressed in the IPCC Fifth Assessment Report from 2007 and other special reports by the Intergovernmental Panel on Climate Change which had already stated that the water cycle will continue to intensify throughout the 21st century. Related processes Biogeochemical cycling While the water cycle is itself a biogeochemical cycle, flow of water over and beneath the Earth is a key component of the cycling of other biogeochemicals. Runoff is responsible for almost all of the transport of eroded sediment and phosphorus from land to waterbodies. The salinity of the oceans is derived from erosion and transport of dissolved salts from the land. Cultural eutrophication of lakes is primarily due to phosphorus, applied in excess to agricultural fields in fertilizers, and then transported overland and down rivers. Both runoff and groundwater flow play significant roles in transporting nitrogen from the land to waterbodies. The dead zone at the outlet of the Mississippi River is a consequence of nitrates from fertilizer being carried off agricultural fields and funnelled down the river system to the Gulf of Mexico. Runoff also plays a part in the carbon cycle, again through the transport of eroded rock and soil. Slow loss over geologic time The hydrodynamic wind within the upper portion of a planet's atmosphere allows light chemical elements such as Hydrogen to move up to the exobase, the lower limit of the exosphere, where the gases can then reach escape velocity, entering outer space without impacting other particles of gas. This type of gas loss from a planet into space is known as planetary wind. Planets with hot lower atmospheres could result in humid upper atmospheres that accelerate the loss of hydrogen. Historical interpretations In ancient times, it was widely thought that the land mass floated on a body of water, and that most of the water in rivers has its origin under the earth. Examples of this belief can be found in the works of Homer. In Works and Days (ca. 700 BC), the Greek poet Hesiod outlines the idea of the water cycle: "[Vapour] is drawn from the ever-flowing rivers and is raised high above the earth by windstorm, and sometimes it turns to rain towards evening, and sometimes to wind when Thracian Boreas huddles the thick clouds." In the ancient Near East, Hebrew scholars observed that even though the rivers ran into the sea, the sea never became full. Some scholars conclude that the water cycle was described completely during this time in this passage: "The wind goeth toward the south, and turneth about unto the north; it whirleth about continually, and the wind returneth again according to its circuits. All the rivers run into the sea, yet the sea is not full; unto the place from whence the rivers come, thither they return again" (Ecclesiastes 1:6-7). 
Furthermore, it was also observed that when the clouds were full, they emptied rain on the earth (Ecclesiastes 11:3). In the Adityahridayam (a devotional hymn to the Sun God) of Ramayana, a Hindu epic dated to the 4th century BCE, it is mentioned in the 22nd verse that the Sun heats up water and sends it down as rain. By roughly 500 BCE, Greek scholars were speculating that much of the water in rivers can be attributed to rain. The origin of rain was also known by then. These scholars maintained the belief, however, that water rising up through the earth contributed a great deal to rivers. Examples of this thinking included Anaximander (570 BCE) (who also speculated about the evolution of land animals from fish) and Xenophanes of Colophon (530 BCE). Warring States period Chinese scholars such as Chi Ni Tzu (320 BCE) and Lu Shih Ch'un Ch'iu (239 BCE) had similar thoughts. The idea that the water cycle is a closed cycle can be found in the works of Anaxagoras of Clazomenae (460 BCE) and Diogenes of Apollonia (460 BCE). Both Plato (390 BCE) and Aristotle (350 BCE) speculated about percolation as part of the water cycle. Aristotle correctly hypothesized that the sun played a role in the Earth's hydraulic cycle in his book Meteorology, writing "By it [the sun's] agency the finest and sweetest water is everyday carried up and is dissolved into vapor and rises to the upper regions, where it is condensed again by the cold and so returns to the earth.", and believed that clouds were composed of cooled and condensed water vapor. Much like the earlier Aristotle, the Eastern Han Chinese scientist Wang Chong (27–100 AD) accurately described the water cycle of Earth in his Lunheng but was dismissed by his contemporaries. Up to the time of the Renaissance, it was wrongly assumed that precipitation alone was insufficient to feed rivers, for a complete water cycle, and that underground water pushing upwards from the oceans were the main contributors to river water. Bartholomew of England held this view (1240 CE), as did Leonardo da Vinci (1500 CE) and Athanasius Kircher (1644 CE). Discovery of the correct theory The first published thinker to assert that rainfall alone was sufficient for the maintenance of rivers was Bernard Palissy (1580 CE), who is often credited as the discoverer of the modern theory of the water cycle. Palissy's theories were not tested scientifically until 1674, in a study commonly attributed to Pierre Perrault. Even then, these beliefs were not accepted in mainstream science until the early nineteenth century. See also References External links The Water Cycle, United States Geological Survey The Water Cycle for Kids, United States Geological Survey The Water Cycle: Following The Water (NASA Visualization Explorer with videos) Biogeochemical cycle Forms of water Hydrology Soil physics Water Articles containing video clips Limnology Oceanography
Incompressible flow
In fluid mechanics, or more generally continuum mechanics, incompressible flow (isochoric flow) refers to a flow in which the material density of each fluid parcel — an infinitesimal volume that moves with the flow velocity — is time-invariant. An equivalent statement that implies incompressible flow is that the divergence of the flow velocity is zero (see the derivation below, which illustrates why these conditions are equivalent). Incompressible flow does not imply that the fluid itself is incompressible. It is shown in the derivation below that under the right conditions even the flow of compressible fluids can, to a good approximation, be modelled as incompressible flow. Derivation The fundamental requirement for incompressible flow is that the density, , is constant within a small element volume, dV, which moves at the flow velocity u. Mathematically, this constraint implies that the material derivative (discussed below) of the density must vanish to ensure incompressible flow. Before introducing this constraint, we must apply the conservation of mass to generate the necessary relations. The mass is calculated by a volume integral of the density, : The conservation of mass requires that the time derivative of the mass inside a control volume be equal to the mass flux, J, across its boundaries. Mathematically, we can represent this constraint in terms of a surface integral: The negative sign in the above expression ensures that outward flow results in a decrease in the mass with respect to time, using the convention that the surface area vector points outward. Now, using the divergence theorem we can derive the relationship between the flux and the partial time derivative of the density: therefore: The partial derivative of the density with respect to time need not vanish to ensure incompressible flow. When we speak of the partial derivative of the density with respect to time, we refer to this rate of change within a control volume of fixed position. By letting the partial time derivative of the density be non-zero, we are not restricting ourselves to incompressible fluids, because the density can change as observed from a fixed position as fluid flows through the control volume. This approach maintains generality, and not requiring that the partial time derivative of the density vanish illustrates that compressible fluids can still undergo incompressible flow. What interests us is the change in density of a control volume that moves along with the flow velocity, u. The flux is related to the flow velocity through the following function: So that the conservation of mass implies that: The previous relation (where we have used the appropriate product rule) is known as the continuity equation. Now, we need the following relation about the total derivative of the density (where we apply the chain rule): So if we choose a control volume that is moving at the same rate as the fluid (i.e. (dx/dt, dy/dt, dz/dt) = u), then this expression simplifies to the material derivative: And so using the continuity equation derived above, we see that: A change in the density over time would imply that the fluid had either compressed or expanded (or that the mass contained in our constant volume, dV, had changed), which we have prohibited. 
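The product-rule and chain-rule steps used in this derivation can be verified symbolically. The sketch below uses arbitrary, made-up density and velocity fields purely to exercise the identity ∂ρ/∂t + ∇·(ρu) = Dρ/Dt + ρ∇·u; it is an illustration, not part of the derivation.

```python
import sympy as sp

t, x, y, z = sp.symbols('t x y z')

# Arbitrary smooth fields, chosen only to exercise the identity.
rho = sp.exp(-t) * sp.sin(x) * sp.cos(y) + 2
u = sp.Matrix([y*z + sp.sin(t), x*sp.cos(z), t*x*y])

coords = (x, y, z)
div_u = sum(u[i].diff(coords[i]) for i in range(3))
div_rho_u = sum((rho * u[i]).diff(coords[i]) for i in range(3))

# Material derivative: D(rho)/Dt = d(rho)/dt + u . grad(rho)
Drho_Dt = rho.diff(t) + sum(u[i] * rho.diff(coords[i]) for i in range(3))

# Product rule used above: d(rho)/dt + div(rho u) = D(rho)/Dt + rho div(u)
lhs = rho.diff(t) + div_rho_u
rhs = Drho_Dt + rho * div_u
print(sp.simplify(lhs - rhs))   # prints 0
```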
We must then require that the material derivative of the density vanishes, and equivalently (for non-zero density) so must the divergence of the flow velocity: And so beginning with the conservation of mass and the constraint that the density within a moving volume of fluid remains constant, it has been shown that an equivalent condition required for incompressible flow is that the divergence of the flow velocity vanishes. Relation to compressibility In some fields, a measure of the incompressibility of a flow is the change in density as a result of the pressure variations. This is best expressed in terms of the compressibility If the compressibility is acceptably small, the flow is considered incompressible. Relation to solenoidal field An incompressible flow is described by a solenoidal flow velocity field. But a solenoidal field, besides having a zero divergence, also has the additional connotation of having non-zero curl (i.e., rotational component). Otherwise, if an incompressible flow also has a curl of zero, so that it is also irrotational, then the flow velocity field is actually Laplacian. Difference from material As defined earlier, an incompressible (isochoric) flow is one in which This is equivalent to saying that i.e. the material derivative of the density is zero. Thus if one follows a material element, its mass density remains constant. Note that the material derivative consists of two terms. The first term describes how the density of the material element changes with time. This term is also known as the unsteady term. The second term describes the changes in the density as the material element moves from one point to another. This is the advection term (convection term for scalar field). For a flow to be regarded as incompressible, the sum of these terms must vanish. On the other hand, a homogeneous, incompressible material is one that has constant density throughout. For such a material, . This implies that, and independently. From the continuity equation it follows that Thus homogeneous materials always undergo flow that is incompressible, but the converse is not true. That is, compressible materials might not experience compression in the flow. Related flow constraints In fluid dynamics, a flow is considered incompressible if the divergence of the flow velocity is zero. However, related formulations can sometimes be used, depending on the flow system being modelled. Some versions are described below: Incompressible flow: . This can assume either constant density (strict incompressible) or varying density flow. The varying-density case accepts solutions involving small perturbations in density, pressure and/or temperature fields, and can allow for pressure stratification in the domain. Anelastic flow: . Principally used in the field of atmospheric sciences, the anelastic constraint extends incompressible flow validity to stratified density and/or temperature as well as pressure. This allows the thermodynamic variables to relax to an 'atmospheric' base state seen in the lower atmosphere when used in the field of meteorology, for example. This condition can also be used for various astrophysical systems. Low Mach-number flow, or pseudo-incompressibility: . The low Mach-number constraint can be derived from the compressible Euler equations using scale analysis of non-dimensional quantities.
This constraint, like the previous ones in this section, allows for the removal of acoustic waves, but also allows for large perturbations in density and/or temperature. The assumption is that the flow remains within a Mach number limit (normally less than 0.3) for any solution using such a constraint to be valid. Again, in accordance with all incompressible flows, the pressure deviation must be small in comparison to the pressure base state. These methods make differing assumptions about the flow, but all take into account the general form of the constraint for general flow dependent functions and . Numerical approximations The stringent nature of incompressible flow equations means that specific mathematical techniques have been devised to solve them. Some of these methods include: The projection method (both approximate and exact) Artificial compressibility technique (approximate) Compressibility pre-conditioning See also Bernoulli's principle Euler equations (fluid dynamics) Navier–Stokes equations References Fluid mechanics
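As a rough illustration of the projection idea listed above (a minimal sketch, not any particular solver's implementation), the following removes the divergence of an assumed intermediate velocity field on a doubly periodic grid by solving a Poisson equation in Fourier space:

```python
import numpy as np

# One pressure-projection step: given an intermediate velocity (u*, v*) from the
# momentum equation, subtract the gradient of phi with laplacian(phi) = div(u*),
# so that the corrected field is divergence-free.
N, L = 64, 2 * np.pi
x = np.linspace(0, L, N, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")

# Made-up intermediate velocity with nonzero divergence (illustrative only).
u_star = np.cos(X) * np.cos(Y)
v_star = np.sin(X) * np.sin(Y) + 0.3 * np.cos(Y)

k = np.fft.fftfreq(N, d=L / N) * 2 * np.pi
KX, KY = np.meshgrid(k, k, indexing="ij")
K2 = KX**2 + KY**2
K2[0, 0] = 1.0                        # avoid dividing by zero for the mean mode

div_hat = 1j * KX * np.fft.fft2(u_star) + 1j * KY * np.fft.fft2(v_star)
phi_hat = div_hat / (-K2)             # spectral Poisson solve
phi_hat[0, 0] = 0.0

u = u_star - np.real(np.fft.ifft2(1j * KX * phi_hat))
v = v_star - np.real(np.fft.ifft2(1j * KY * phi_hat))

div_after = np.real(np.fft.ifft2(1j * KX * np.fft.fft2(u) + 1j * KY * np.fft.fft2(v)))
print("max |div u| after projection:", np.abs(div_after).max())   # ~ machine precision
```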
Particle accelerator
A particle accelerator is a machine that uses electromagnetic fields to propel charged particles to very high speeds and energies, and to contain them in well-defined beams. Large accelerators are used for fundamental research in particle physics. Accelerators are also used as synchrotron light sources for the study of condensed matter physics. Smaller particle accelerators are used in a wide variety of applications, including particle therapy for oncological purposes, radioisotope production for medical diagnostics, ion implanters for the manufacture of semiconductors, and accelerator mass spectrometers for measurements of rare isotopes such as radiocarbon. Large accelerators include the Relativistic Heavy Ion Collider at Brookhaven National Laboratory in New York and the largest accelerator, the Large Hadron Collider near Geneva, Switzerland, operated by CERN. It is a collider accelerator, which can accelerate two beams of protons to an energy of 6.5 TeV and cause them to collide head-on, creating center-of-mass energies of 13 TeV. There are more than 30,000 accelerators in operation around the world. There are two basic classes of accelerators: electrostatic and electrodynamic (or electromagnetic) accelerators. Electrostatic particle accelerators use static electric fields to accelerate particles. The most common types are the Cockcroft–Walton generator and the Van de Graaff generator. A small-scale example of this class is the cathode-ray tube in an ordinary old television set. The achievable kinetic energy for particles in these devices is determined by the accelerating voltage, which is limited by electrical breakdown. Electrodynamic or electromagnetic accelerators, on the other hand, use changing electromagnetic fields (either magnetic induction or oscillating radio frequency fields) to accelerate particles. Since in these types the particles can pass through the same accelerating field multiple times, the output energy is not limited by the strength of the accelerating field. This class, which was first developed in the 1920s, is the basis for most modern large-scale accelerators. Rolf Widerøe, Gustav Ising, Leó Szilárd, Max Steenbeck, and Ernest Lawrence are considered pioneers of this field, having conceived and built the first operational linear particle accelerator, the betatron, as well as the cyclotron. Because the target of the particle beams of early accelerators was usually the atoms of a piece of matter, with the goal being to create collisions with their nuclei in order to investigate nuclear structure, accelerators were commonly referred to as atom smashers in the 20th century. The term persists despite the fact that many modern accelerators create collisions between two subatomic particles, rather than a particle and an atomic nucleus.
These typically entail particle energies of many GeV, and interactions of the simplest kinds of particles: leptons (e.g. electrons and positrons) and quarks for the matter, or photons and gluons for the field quanta. Since isolated quarks are experimentally unavailable due to color confinement, the simplest available experiments involve the interactions of, first, leptons with each other, and second, of leptons with nucleons, which are composed of quarks and gluons. To study the collisions of quarks with each other, scientists resort to collisions of nucleons, which at high energy may be usefully considered as essentially 2-body interactions of the quarks and gluons of which they are composed. Thus elementary particle physicists tend to use machines creating beams of electrons, positrons, protons, and antiprotons, interacting with each other or with the simplest nuclei (e.g., hydrogen or deuterium) at the highest possible energies, generally hundreds of GeV or more. The largest and highest-energy particle accelerator used for elementary particle physics is the Large Hadron Collider (LHC) at CERN, operating since 2009. Nuclear physics and isotope production Nuclear physicists and cosmologists may use beams of bare atomic nuclei, stripped of electrons, to investigate the structure, interactions, and properties of the nuclei themselves, and of condensed matter at extremely high temperatures and densities, such as might have occurred in the first moments of the Big Bang. These investigations often involve collisions of heavy nuclei of atoms like iron or gold at energies of several GeV per nucleon. The largest such particle accelerator is the Relativistic Heavy Ion Collider (RHIC) at Brookhaven National Laboratory. Particle accelerators can also produce proton beams, which can produce proton-rich medical or research isotopes as opposed to the neutron-rich ones made in fission reactors; however, recent work has shown how to make 99Mo, usually made in reactors, by accelerating isotopes of hydrogen, although this method still requires a reactor to produce tritium. An example of this type of machine is LANSCE at Los Alamos National Laboratory. Synchrotron radiation Electrons propagating through a magnetic field emit very bright and coherent photon beams via synchrotron radiation. It has numerous uses in the study of atomic structure, chemistry, condensed matter physics, biology, and technology. A large number of synchrotron light sources exist worldwide. Examples in the U.S. are SSRL at SLAC National Accelerator Laboratory, APS at Argonne National Laboratory, ALS at Lawrence Berkeley National Laboratory, and NSLS-II at Brookhaven National Laboratory. In Europe, there are MAX IV in Lund, Sweden, BESSY in Berlin, Germany, Diamond in Oxfordshire, UK, and ESRF in Grenoble, France; the latter has been used to extract detailed 3-dimensional images of insects trapped in amber. Free-electron lasers (FELs) are a special class of light sources based on synchrotron radiation that provides shorter pulses with higher temporal coherence. A specially designed FEL is the most brilliant source of x-rays in the observable universe. The most prominent examples are the LCLS in the U.S. and European XFEL in Germany. More attention is being drawn towards soft x-ray lasers, which together with pulse shortening open up new methods for attosecond science. Apart from x-rays, FELs are used to emit terahertz light, e.g. FELIX in Nijmegen, Netherlands, TELBE in Dresden, Germany and NovoFEL in Novosibirsk, Russia.
Thus there is a great demand for electron accelerators of moderate (GeV) energy, high intensity and high beam quality to drive light sources. Low-energy machines and particle therapy Everyday examples of particle accelerators are cathode ray tubes found in television sets and X-ray generators. These low-energy accelerators use a single pair of electrodes with a DC voltage of a few thousand volts between them. In an X-ray generator, the target itself is one of the electrodes. A low-energy particle accelerator called an ion implanter is used in the manufacture of integrated circuits. At lower energies, beams of accelerated nuclei are also used in medicine as particle therapy, for the treatment of cancer. DC accelerator types capable of accelerating particles to speeds sufficient to cause nuclear reactions are Cockcroft–Walton generators or voltage multipliers, which convert AC to high voltage DC, or Van de Graaff generators that use static electricity carried by belts. Radiation sterilization of medical devices Electron beam processing is commonly used for sterilization. Electron beams are an on-off technology that provide a much higher dose rate than gamma or X-rays emitted by radioisotopes like cobalt-60 (60Co) or caesium-137 (137Cs). Due to the higher dose rate, less exposure time is required and polymer degradation is reduced. Because electrons carry a charge, electron beams are less penetrating than both gamma and X-rays. Electrostatic particle accelerators Historically, the first accelerators used simple technology of a single static high voltage to accelerate charged particles. The charged particle was accelerated through an evacuated tube with an electrode at either end, with the static potential across it. Since the particle passed only once through the potential difference, the output energy was limited to the accelerating voltage of the machine. While this method is still extremely popular today, with the electrostatic accelerators greatly out-numbering any other type, they are more suited to lower energy studies owing to the practical voltage limit of about 1 MV for air insulated machines, or 30 MV when the accelerator is operated in a tank of pressurized gas with high dielectric strength, such as sulfur hexafluoride. In a tandem accelerator the potential is used twice to accelerate the particles, by reversing the charge of the particles while they are inside the terminal. This is possible with the acceleration of atomic nuclei by using anions (negatively charged ions), and then passing the beam through a thin foil to strip electrons off the anions inside the high voltage terminal, converting them to cations (positively charged ions), which are accelerated again as they leave the terminal. The two main types of electrostatic accelerator are the Cockcroft–Walton accelerator, which uses a diode-capacitor voltage multiplier to produce high voltage, and the Van de Graaff accelerator, which uses a moving fabric belt to carry charge to the high voltage electrode. Although electrostatic accelerators accelerate particles along a straight line, the term linear accelerator is more often used for accelerators that employ oscillating rather than static electric fields. Electrodynamic (electromagnetic) particle accelerators Due to the high voltage ceiling imposed by electrical discharge, in order to accelerate particles to higher energies, techniques involving dynamic fields rather than static fields are used. 
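To put the voltage ceiling in perspective, the kinetic energy gained in a single electrostatic pass is simply the particle's charge times the accelerating voltage. The sketch below uses the rough voltage limits quoted above and an assumed tandem example (the ion species, charge state and terminal voltage are illustrative choices, not machine specifications).

```python
# Energy gained crossing a potential difference:  E [eV] = q [units of e] * V [volts].
q = 1                        # singly charged particle
V_air = 1e6                  # ~1 MV practical limit for air-insulated machines (from the text)
V_gas = 30e6                 # ~30 MV with pressurized insulating gas (from the text)

print("single pass, 1 MV :", q * V_air / 1e6, "MeV")
print("single pass, 30 MV:", q * V_gas / 1e6, "MeV")

# Tandem accelerator: an anion (charge -1) is accelerated to the terminal, stripped
# to charge state +q_out, and accelerated again, so E = (1 + q_out) * V.
q_out = 6                    # e.g. a carbon ion stripped to C6+ (assumed example)
V_terminal = 10e6            # assumed 10 MV terminal voltage
print("tandem total      :", (1 + q_out) * V_terminal / 1e6, "MeV")
```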
Electrodynamic acceleration can arise from either of two mechanisms: non-resonant magnetic induction, or resonant circuits or cavities excited by oscillating radio frequency (RF) fields. Electrodynamic accelerators can be linear, with particles accelerating in a straight line, or circular, using magnetic fields to bend particles in a roughly circular orbit. Magnetic induction accelerators Magnetic induction accelerators accelerate particles by induction from an increasing magnetic field, as if the particles were the secondary winding in a transformer. The increasing magnetic field creates a circulating electric field which can be configured to accelerate the particles. Induction accelerators can be either linear or circular. Linear induction accelerators Linear induction accelerators utilize ferrite-loaded, non-resonant induction cavities. Each cavity can be thought of as two large washer-shaped disks connected by an outer cylindrical tube. Between the disks is a ferrite toroid. A voltage pulse applied between the two disks causes an increasing magnetic field which inductively couples power into the charged particle beam. The linear induction accelerator was invented by Christofilos in the 1960s. Linear induction accelerators are capable of accelerating very high beam currents (>1000 A) in a single short pulse. They have been used to generate X-rays for flash radiography (e.g. DARHT at LANL), and have been considered as particle injectors for magnetic confinement fusion and as drivers for free electron lasers. Betatrons The Betatron is a circular magnetic induction accelerator, invented by Donald Kerst in 1940 for accelerating electrons. The concept originates ultimately from Norwegian-German scientist Rolf Widerøe. These machines, like synchrotrons, use a donut-shaped ring magnet (see below) with a cyclically increasing B field, but accelerate the particles by induction from the increasing magnetic field, as if they were the secondary winding in a transformer, due to the changing magnetic flux through the orbit. Achieving constant orbital radius while supplying the proper accelerating electric field requires that the magnetic flux linking the orbit be somewhat independent of the magnetic field on the orbit, bending the particles into a constant radius curve. These machines have in practice been limited by the large radiative losses suffered by the electrons moving at nearly the speed of light in a relatively small radius orbit. Linear accelerators In a linear particle accelerator (linac), particles are accelerated in a straight line with a target of interest at one end. They are often used to provide an initial low-energy kick to particles before they are injected into circular accelerators. The longest linac in the world is the Stanford Linear Accelerator, SLAC, which is long. SLAC was originally an electron–positron collider but is now a X-ray Free-electron laser. Linear high-energy accelerators use a linear array of plates (or drift tubes) to which an alternating high-energy field is applied. As the particles approach a plate they are accelerated towards it by an opposite polarity charge applied to the plate. As they pass through a hole in the plate, the polarity is switched so that the plate now repels them and they are now accelerated by it towards the next plate. Normally a stream of "bunches" of particles are accelerated, so a carefully controlled AC voltage is applied to each plate to continuously repeat this process for each bunch. 
As the particles approach the speed of light the switching rate of the electric fields becomes so high that they operate at radio frequencies, and so microwave cavities are used in higher energy machines instead of simple plates. Linear accelerators are also widely used in medicine, for radiotherapy and radiosurgery. Medical grade linacs accelerate electrons using a klystron and a complex bending magnet arrangement which produces a beam of energy . The electrons can be used directly or they can be collided with a target to produce a beam of X-rays. The reliability, flexibility and accuracy of the radiation beam produced has largely supplanted the older use of cobalt-60 therapy as a treatment tool. Circular or cyclic RF accelerators In the circular accelerator, particles move in a circle until they reach enough energy. The particle track is typically bent into a circle using electromagnets. The advantage of circular accelerators over linear accelerators (linacs) is that the ring topology allows continuous acceleration, as the particle can transit indefinitely. Another advantage is that a circular accelerator is smaller than a linear accelerator of comparable power (i.e. a linac would have to be extremely long to have the equivalent power of a circular accelerator). Depending on the energy and the particle being accelerated, circular accelerators suffer a disadvantage in that the particles emit synchrotron radiation. When any charged particle is accelerated, it emits electromagnetic radiation and secondary emissions. As a particle traveling in a circle is always accelerating towards the center of the circle, it continuously radiates towards the tangent of the circle. This radiation is called synchrotron light and depends highly on the mass of the accelerating particle. For this reason, many high energy electron accelerators are linacs. Certain accelerators (synchrotrons) are however built specially for producing synchrotron light (X-rays). Since the special theory of relativity requires that matter always travels slower than the speed of light in vacuum, in high-energy accelerators, as the energy increases the particle speed approaches the speed of light as a limit, but never attains it. Therefore, particle physicists do not generally think in terms of speed, but rather in terms of a particle's energy or momentum, usually measured in electron volts (eV). An important principle for circular accelerators, and particle beams in general, is that the curvature of the particle trajectory is proportional to the particle charge and to the magnetic field, but inversely proportional to the (typically relativistic) momentum. Cyclotrons The earliest operational circular accelerators were cyclotrons, invented in 1929 by Ernest Lawrence at the University of California, Berkeley. Cyclotrons have a single pair of hollow D-shaped plates to accelerate the particles and a single large dipole magnet to bend their path into a circular orbit. It is a characteristic property of charged particles in a uniform and constant magnetic field B that they orbit with a constant period, at a frequency called the cyclotron frequency, so long as their speed is small compared to the speed of light c. This means that the accelerating D's of a cyclotron can be driven at a constant frequency by a RF accelerating power source, as the beam spirals outwards continuously. The particles are injected in the center of the magnet and are extracted at the outer edge at their maximum energy. 
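A short sketch of the cyclotron-frequency relation f = qB/(2πm) for protons; the field value is chosen purely for illustration, and the relativistic correction it hints at is discussed next.

```python
import math

q = 1.602176634e-19      # proton charge, C
m = 1.67262192e-27       # proton rest mass, kg
B = 1.5                  # magnetic flux density, T (assumed illustrative value)

f0 = q * B / (2 * math.pi * m)
print(f"revolution frequency ~ {f0/1e6:.1f} MHz")   # about 23 MHz at 1.5 T

# Relativistically the revolution frequency falls to f0/gamma.  At 15 MeV kinetic
# energy, gamma = 1 + T/(m c^2) ~ 1.016, i.e. the frequency is already ~1.6% below
# the fixed RF frequency; that slip accumulates over thousands of turns, which is
# the classic cyclotron limit discussed below.
T_MeV, rest_MeV = 15.0, 938.272
gamma = 1 + T_MeV / rest_MeV
print(f"gamma at 15 MeV ~ {gamma:.4f}  (frequency reduced by {100*(1 - 1/gamma):.1f} %)")
```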
Cyclotrons reach an energy limit because of relativistic effects whereby the particles effectively become more massive, so that their cyclotron frequency drops out of sync with the accelerating RF. Therefore, simple cyclotrons can accelerate protons only to an energy of around 15 million electron volts (15 MeV, corresponding to a speed of roughly 10% of c), because the protons get out of phase with the driving electric field. If accelerated further, the beam would continue to spiral outward to a larger radius but the particles would no longer gain enough speed to complete the larger circle in step with the accelerating RF. To accommodate relativistic effects the magnetic field needs to be increased to higher radii as is done in isochronous cyclotrons. An example of an isochronous cyclotron is the PSI Ring cyclotron in Switzerland, which provides protons at the energy of 590 MeV which corresponds to roughly 80% of the speed of light. The advantage of such a cyclotron is the maximum achievable extracted proton current which is currently 2.2 mA. The energy and current correspond to 1.3 MW beam power which is the highest of any accelerator currently existing. Synchrocyclotrons and isochronous cyclotrons A classic cyclotron can be modified to increase its energy limit. The historically first approach was the synchrocyclotron, which accelerates the particles in bunches. It uses a constant magnetic field , but reduces the accelerating field's frequency so as to keep the particles in step as they spiral outward, matching their mass-dependent cyclotron resonance frequency. This approach suffers from low average beam intensity due to the bunching, and again from the need for a huge magnet of large radius and constant field over the larger orbit demanded by high energy. The second approach to the problem of accelerating relativistic particles is the isochronous cyclotron. In such a structure, the accelerating field's frequency (and the cyclotron resonance frequency) is kept constant for all energies by shaping the magnet poles so to increase magnetic field with radius. Thus, all particles get accelerated in isochronous time intervals. Higher energy particles travel a shorter distance in each orbit than they would in a classical cyclotron, thus remaining in phase with the accelerating field. The advantage of the isochronous cyclotron is that it can deliver continuous beams of higher average intensity, which is useful for some applications. The main disadvantages are the size and cost of the large magnet needed, and the difficulty in achieving the high magnetic field values required at the outer edge of the structure. Synchrocyclotrons have not been built since the isochronous cyclotron was developed. Synchrotrons To reach still higher energies, with relativistic mass approaching or exceeding the rest mass of the particles (for protons, billions of electron volts or GeV), it is necessary to use a synchrotron. This is an accelerator in which the particles are accelerated in a ring of constant radius. An immediate advantage over cyclotrons is that the magnetic field need only be present over the actual region of the particle orbits, which is much narrower than that of the ring. (The largest cyclotron built in the US had a magnet pole, whereas the diameter of synchrotrons such as the LEP and LHC is nearly 10 km. The aperture of the two beams of the LHC is of the order of a centimeter.) 
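The bending relation noted earlier (the curvature is proportional to charge and magnetic field and inversely proportional to momentum, equivalently p = qBr) fixes the dipole field a synchrotron needs. A rough sketch with approximately LHC-like numbers follows; the bending radius is an approximate assumed value, used only to show the arithmetic.

```python
# Bending field for a singly charged particle of momentum p on radius r:
#   p [GeV/c] ≈ 0.2998 * B [T] * r [m],  so  B = p / (0.2998 * r).
p_GeV = 6500.0            # ~6.5 TeV/c proton momentum (beam energy from the text)
bend_radius_m = 2800.0    # approximate effective bending radius of the LHC dipoles (assumed)

B = p_GeV / (0.299792458 * bend_radius_m)
print(f"required dipole field ~ {B:.1f} T")   # of order 8 T, hence superconducting magnets
```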
The LHC contains 16 RF cavities, 1232 superconducting dipole magnets for beam steering, and 24 quadrupoles for beam focusing. Even at this size, the LHC is limited by its ability to steer the particles without them going adrift. This limit is theorized to occur at 14 TeV. However, since the particle momentum increases during acceleration, it is necessary to turn up the magnetic field B in proportion to maintain constant curvature of the orbit. In consequence, synchrotrons cannot accelerate particles continuously, as cyclotrons can, but must operate cyclically, supplying particles in bunches, which are delivered to a target or an external beam in beam "spills" typically every few seconds. Since high energy synchrotrons do most of their work on particles that are already traveling at nearly the speed of light c, the time to complete one orbit of the ring is nearly constant, as is the frequency of the RF cavity resonators used to drive the acceleration. In modern synchrotrons, the beam aperture is small and the magnetic field does not cover the entire area of the particle orbit as it does for a cyclotron, so several necessary functions can be separated. Instead of one huge magnet, one has a line of hundreds of bending magnets, enclosing (or enclosed by) vacuum connecting pipes. The design of synchrotrons was revolutionized in the early 1950s with the discovery of the strong focusing concept. The focusing of the beam is handled independently by specialized quadrupole magnets, while the acceleration itself is accomplished in separate RF sections, rather similar to short linear accelerators. Also, there is no necessity that cyclic machines be circular, but rather the beam pipe may have straight sections between magnets where beams may collide, be cooled, etc. This has developed into an entire separate subject, called "beam physics" or "beam optics". More complex modern synchrotrons such as the Tevatron, LEP, and LHC may deliver the particle bunches into storage rings of magnets with a constant magnetic field, where they can continue to orbit for long periods for experimentation or further acceleration. The highest-energy machines such as the Tevatron and LHC are actually accelerator complexes, with a cascade of specialized elements in series, including linear accelerators for initial beam creation, one or more low energy synchrotrons to reach intermediate energy, storage rings where beams can be accumulated or "cooled" (reducing the magnet aperture required and permitting tighter focusing; see beam cooling), and a last large ring for final acceleration and experimentation. Electron synchrotrons Circular electron accelerators fell somewhat out of favor for particle physics around the time that SLAC's linear particle accelerator was constructed, because their synchrotron losses were considered economically prohibitive and because their beam intensity was lower than for the unpulsed linear machines. The Cornell Electron Synchrotron, built at low cost in the late 1970s, was the first in a series of high-energy circular electron accelerators built for fundamental particle physics, the last being LEP, built at CERN, which was used from 1989 until 2000. A large number of electron synchrotrons have been built in the past two decades, as part of synchrotron light sources that emit ultraviolet light and X rays; see below. 
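The synchrotron-radiation penalty for circular electron machines mentioned above can be estimated with the standard per-turn loss formula; the sketch below uses roughly LEP-like numbers, which are assumed illustrative values rather than machine specifications.

```python
# Standard practical formula for the energy an electron radiates per turn in a ring
# of bending radius rho:  U0 [keV] ≈ 88.5 * E^4 [GeV] / rho [m].
def loss_per_turn_keV(E_GeV, rho_m):
    return 88.5 * E_GeV**4 / rho_m

E, rho = 100.0, 3100.0          # roughly LEP-like beam energy and bending radius (assumed)
U0_keV = loss_per_turn_keV(E, rho)
print(f"electron loss per turn ~ {U0_keV/1e6:.1f} GeV")   # a few GeV every turn

# At the same energy and radius the loss scales as 1/m^4, so for protons it is
# suppressed by (m_e/m_p)^4 ~ 1e-13 and is negligible by comparison.
print(f"proton loss per turn   ~ {U0_keV/1e6 * (0.511/938.3)**4:.1e} GeV")
```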
Synchrotron radiation sources Some circular accelerators have been built to deliberately generate radiation (called synchrotron light) in the form of X-rays, also called synchrotron radiation, for example the Diamond Light Source, which has been built at the Rutherford Appleton Laboratory in England, or the Advanced Photon Source at Argonne National Laboratory in Illinois, USA. High-energy X-rays are useful for X-ray spectroscopy of proteins or X-ray absorption fine structure (XAFS), for example. Synchrotron radiation is more powerfully emitted by lighter particles, so these accelerators are invariably electron accelerators. Synchrotron radiation allows for better imaging as researched and developed at SLAC's SPEAR. Fixed-field alternating gradient accelerators Fixed-Field Alternating Gradient accelerators (FFAs), in which the magnetic field is fixed in time but has a radial variation to achieve strong focusing, allow the beam to be accelerated with a high repetition rate but in a much smaller radial spread than in the cyclotron case. Isochronous FFAs, like isochronous cyclotrons, achieve continuous beam operation, but without the need for a huge dipole bending magnet covering the entire radius of the orbits. Some new developments in FFAs are covered in the literature. Rhodotron A Rhodotron is an industrial electron accelerator first proposed in 1987 by J. Pottier of the French Atomic Energy Agency (CEA), manufactured by the Belgian company Ion Beam Applications. It accelerates electrons by recirculating them across the diameter of a cylinder-shaped radiofrequency cavity. A Rhodotron has an electron gun, which emits an electron beam that is attracted to a pillar in the center of the cavity. The pillar has holes the electrons can pass through. The electron beam passes through the pillar via one of these holes, then travels through a hole in the wall of the cavity and meets a bending magnet; the beam is then bent and sent back into the cavity, to another hole in the pillar. The electrons then again go across the pillar, pass through another part of the wall of the cavity and into another bending magnet, and so on, gradually increasing the energy of the beam until it is allowed to exit the cavity for use. The cylinder and pillar may be lined with copper on the inside. History Ernest Lawrence's first cyclotron was a mere 4 inches (100 mm) in diameter. Later, in 1939, he built a machine with a 60-inch diameter pole face, and planned one with a 184-inch diameter in 1942, which was, however, taken over for World War II-related work connected with uranium isotope separation; after the war it continued in service for research and medicine over many years. The first large proton synchrotron was the Cosmotron at Brookhaven National Laboratory, which accelerated protons to about 3 GeV (1953–1968). The Bevatron at Berkeley, completed in 1954, was specifically designed to accelerate protons to enough energy to create antiprotons, and verify the particle–antiparticle symmetry of nature, then only theorized. The Alternating Gradient Synchrotron (AGS) at Brookhaven (1960–) was the first large synchrotron with alternating gradient, "strong focusing" magnets, which greatly reduced the required aperture of the beam, and correspondingly the size and cost of the bending magnets. The Proton Synchrotron, built at CERN (1959–), was the first major European particle accelerator and generally similar to the AGS.
The Stanford Linear Accelerator, SLAC, became operational in 1966, accelerating electrons to 30 GeV in a 3 km long waveguide, buried in a tunnel and powered by hundreds of large klystrons. It is still the largest linear accelerator in existence, and has been upgraded with the addition of storage rings and an electron-positron collider facility. It is also an X-ray and UV synchrotron photon source. The Fermilab Tevatron has a ring with a beam path of . It has received several upgrades, and has functioned as a proton-antiproton collider until it was shut down due to budget cuts on September 30, 2011. The largest circular accelerator ever built was the LEP synchrotron at CERN with a circumference 26.6 kilometers, which was an electron/positron collider. It achieved an energy of 209 GeV before it was dismantled in 2000 so that the tunnel could be used for the Large Hadron Collider (LHC). The LHC is a proton collider, and currently the world's largest and highest-energy accelerator, achieving 6.5 TeV energy per beam (13 TeV in total). The aborted Superconducting Super Collider (SSC) in Texas would have had a circumference of 87 km. Construction was started in 1991, but abandoned in 1993. Very large circular accelerators are invariably built in tunnels a few metres wide to minimize the disruption and cost of building such a structure on the surface, and to provide shielding against intense secondary radiations that occur, which are extremely penetrating at high energies. Current accelerators such as the Spallation Neutron Source, incorporate superconducting cryomodules. The Relativistic Heavy Ion Collider, and Large Hadron Collider also make use of superconducting magnets and RF cavity resonators to accelerate particles. Targets The output of a particle accelerator can generally be directed towards multiple lines of experiments, one at a given time, by means of a deviating electromagnet. This makes it possible to operate multiple experiments without needing to move things around or shutting down the entire accelerator beam. Except for synchrotron radiation sources, the purpose of an accelerator is to generate high-energy particles for interaction with matter. This is usually a fixed target, such as the phosphor coating on the back of the screen in the case of a television tube; a piece of uranium in an accelerator designed as a neutron source; or a tungsten target for an X-ray generator. In a linac, the target is simply fitted to the end of the accelerator. The particle track in a cyclotron is a spiral outwards from the centre of the circular machine, so the accelerated particles emerge from a fixed point as for a linear accelerator. For synchrotrons, the situation is more complex. Particles are accelerated to the desired energy. Then, a fast acting dipole magnet is used to switch the particles out of the circular synchrotron tube and towards the target. A variation commonly used for particle physics research is a collider, also called a storage ring collider. Two circular synchrotrons are built in close proximityusually on top of each other and using the same magnets (which are then of more complicated design to accommodate both beam tubes). Bunches of particles travel in opposite directions around the two accelerators and collide at intersections between them. This can increase the energy enormously; whereas in a fixed-target experiment the energy available to produce new particles is proportional to the square root of the beam energy, in a collider the available energy is linear. 
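A small sketch of the fixed-target versus collider comparison just stated, using the invariant mass √s available for particle production (proton rest energy ≈ 0.938 GeV; the beam energies are arbitrary illustrative values):

```python
import math

# For protons of total beam energy E (GeV):
#   collider, equal head-on beams:  sqrt(s) = 2*E
#   fixed target:                   sqrt(s) = sqrt(2*E*m + 2*m**2)  ~ sqrt(2*E*m) for E >> m
m = 0.938  # proton rest energy, GeV

for E in (10.0, 100.0, 6500.0):
    s_collider = 2 * E
    s_fixed = math.sqrt(2 * E * m + 2 * m**2)
    print(f"E = {E:7.0f} GeV   collider sqrt(s) = {s_collider:8.0f} GeV   "
          f"fixed target sqrt(s) = {s_fixed:6.1f} GeV")
```

The collider figure grows linearly with beam energy while the fixed-target figure grows only as its square root, which is the advantage described above.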
Detectors The detectors gather information about the particles produced, including their speed and charge, from which scientists can identify and study them. Detection is a complex process: it requires strong electromagnets and extensive instrumentation to generate enough usable information. Higher energies At present the highest energy accelerators are all circular colliders, but both hadron accelerators and electron accelerators are running into limits. Higher energy hadron and ion cyclic accelerators will require accelerator tunnels of larger physical size due to the increased beam rigidity. For cyclic electron accelerators, a limit on practical bend radius is placed by synchrotron radiation losses, and the next generation will probably be linear accelerators 10 times the current length. An example of such a next-generation electron accelerator is the proposed 40 km long International Linear Collider. It is believed that plasma wakefield acceleration in the form of electron-beam "afterburners" and standalone laser pulsers might be able to provide dramatic increases in efficiency over RF accelerators within two to three decades. In plasma wakefield accelerators, the beam cavity is filled with a plasma (rather than vacuum). A short pulse of electrons or laser light either constitutes or immediately precedes the particles that are being accelerated. The pulse disrupts the plasma, causing the charged particles in the plasma to integrate into and move toward the rear of the bunch of particles that are being accelerated. This process transfers energy to the particle bunch, accelerating it further, and continues as long as the pulse is coherent. Energy gradients as steep as 200 GeV/m have been achieved over millimeter-scale distances using laser pulsers, and gradients approaching 1 GeV/m are being produced on the multi-centimeter scale with electron-beam systems, in contrast to a limit of about 0.1 GeV/m for radio-frequency acceleration alone. Existing electron accelerators such as SLAC could use electron-beam afterburners to greatly increase the energy of their particle beams, at the cost of beam intensity. Electron systems in general can provide tightly collimated, reliable beams; laser systems may offer more power and compactness. Thus, plasma wakefield accelerators could be used – if technical issues can be resolved – to both increase the maximum energy of the largest accelerators and to bring high energies into university laboratories and medical centres. Gradients higher than 0.25 GeV/m have been achieved by a dielectric laser accelerator, which may present another viable approach to building compact high-energy accelerators. Using femtosecond-duration laser pulses, an electron accelerating gradient of 0.69 GeV/m was recorded for dielectric laser accelerators. Higher gradients are anticipated after further optimizations. Advanced Accelerator Concepts Advanced Accelerator Concepts encompasses methods of beam acceleration with gradients beyond the state of the art in operational facilities. This includes diagnostic methods, timing technology, special needs for injectors, beam matching, beam dynamics and the development of adequate simulations. Workshops dedicated to this subject are being held in the US (alternating locations) and in Europe, mostly on Isola d'Elba. The series of Advanced Accelerator Concepts Workshops, held in the US, started as an international series in 1982. The European Advanced Accelerator Concepts Workshop series started in 2019.
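To put the accelerating gradients quoted above in perspective, the sketch below simply divides an arbitrary illustrative target energy by each quoted gradient; it ignores staging and injection, and the highest gradients have so far been sustained only over millimetre-to-centimetre distances.

target_energy_gev = 250.0  # illustrative per-beam target energy, not taken from the text
gradients_gev_per_m = {
    "conventional RF (about 0.1 GeV/m)": 0.1,
    "electron-beam wakefield (about 1 GeV/m)": 1.0,
    "laser wakefield (about 200 GeV/m)": 200.0,
}
for name, gradient in gradients_gev_per_m.items():
    length_m = target_energy_gev / gradient
    print(f"{name:40s} -> {length_m:10.1f} m of active acceleration")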
Topics related to Advanced Accelerator Concepts:
Laser Plasma Acceleration of electrons and positrons
Laser and High-Gradient Structure-Based Acceleration
Beam-Driven Acceleration
Laser-Plasma Acceleration of Ions
Beam Sources such as electron guns, Monitoring, and Control
See Accelerator physics, Computer simulation for Accelerator Physics, Laser technology for particle acceleration, Electromagnetic radiation generation, and Muon collider.
According to the inverse scattering problem, any mechanism by which a particle produces radiation (where kinetic energy of the particle is transferred to the electromagnetic field) can be inverted such that the same radiation mechanism leads to the acceleration of the particle (energy of the radiation field is transferred to kinetic energy of the particle). The opposite is also true: any acceleration mechanism can be inverted to deposit the energy of the particle into a decelerating field, as in a kinetic energy recovery system. This is the idea enabling an energy recovery linac. This principle, which is also behind the plasma and dielectric wakefield accelerators, led to a few other interesting developments in advanced accelerator concepts: Cherenkov radiation led to the inverse Cherenkov radiation accelerator, the free-electron laser led to the inverse free-electron laser accelerator, and a laser can also be inverted to produce acceleration of electrons. Black hole production and public safety concerns In the future, the possibility of black hole production at the highest-energy accelerators may arise if certain predictions of superstring theory are accurate. This and other possibilities have led to public safety concerns that have been widely reported in connection with the LHC, which began operation in 2008. The various possible dangerous scenarios have been assessed as presenting "no conceivable danger" in the latest risk assessment produced by the LHC Safety Assessment Group. If black holes are produced, it is theoretically predicted that such small black holes should evaporate extremely quickly via Bekenstein–Hawking radiation, a prediction that is as yet experimentally unconfirmed. If colliders can produce black holes, cosmic rays (and particularly ultra-high-energy cosmic rays, UHECRs) must have been producing them for eons, but they have yet to harm anybody. It has been argued that to conserve energy and momentum, any black holes created in a collision between a UHECR and local matter would necessarily be produced moving at relativistic speed with respect to the Earth, and should escape into space, as their accretion and growth rate should be very slow, while black holes produced in colliders (with components of equal mass) would have some chance of having a velocity less than Earth escape velocity, 11.2 km/s, and would be liable to capture and subsequent growth. Yet even in such scenarios the collisions of UHECRs with white dwarfs and neutron stars would lead to their rapid destruction, but these bodies are observed to be common astronomical objects. Thus if stable micro black holes should be produced, they must grow far too slowly to cause any noticeable macroscopic effects within the natural lifetime of the solar system. Accelerator operator The use of advanced technologies such as superconductivity, cryogenics, and high-powered radiofrequency amplifiers, as well as the presence of ionizing radiation, poses challenges for the safe operation of accelerator facilities.
An accelerator operator controls the operation of a particle accelerator, adjusts operating parameters such as aspect ratio, current intensity, and position on target. They communicate with and assist accelerator maintenance personnel to ensure readiness of support systems, such as vacuum, magnets, magnetic and radiofrequency power supplies and controls, and cooling systems. Additionally, the accelerator operator maintains a record of accelerator related events. See also Accelerator physics Atom smasher (disambiguation) Compact Linear Collider Dielectric wall accelerator Future Circular Collider International Linear Collider KALI Linear particle accelerator List of accelerators in particle physics Momentum compaction Nuclear transmutation Rolf Widerøe Superconducting Super Collider References External links What are particle accelerators used for? Stanley Humphries (1999) Principles of Charged Particle Acceleration Particle Accelerators around the world Wolfgang K. H. Panofsky: The Evolution of Particle Accelerators & Colliders, (PDF), Stanford, 1997 P.J. Bryant, A Brief History and Review of Accelerators (PDF), CERN, 1994. David Kestenbaum, Massive Particle Accelerator Revving Up NPR's Morning Edition article on 9 April 2007 Annotated bibliography for particle accelerators from the Alsos Digital Library for Nuclear Issues Accelerators-for-Society.org, to know more about applications of accelerators for Research and Development, energy and environment, health and medicine, industry, material characterization.
Hazen–Williams equation
The Hazen–Williams equation is an empirical relationship that relates the flow of water in a pipe to the physical properties of the pipe and the pressure drop caused by friction. It is used in the design of water pipe systems such as fire sprinkler systems, water supply networks, and irrigation systems. It is named after Allen Hazen and Gardner Stewart Williams. The Hazen–Williams equation has the advantage that the coefficient C is not a function of the Reynolds number, but it has the disadvantage that it is only valid for water. Also, it does not account for the temperature or viscosity of the water, and therefore is only valid at room temperature and conventional velocities. General form Henri Pitot discovered in the early 18th century that the velocity of a fluid was proportional to the square root of its head. It takes energy to push a fluid through a pipe, and Antoine de Chézy discovered that the hydraulic head loss was proportional to the velocity squared. Consequently, the Chézy formula relates hydraulic slope S (head loss per unit length) to the fluid velocity V and hydraulic radius R: The variable C expresses the proportionality, but the value of C is not a constant. In 1838 and 1839, Gotthilf Hagen and Jean Léonard Marie Poiseuille independently determined a head loss equation for laminar flow, the Hagen–Poiseuille equation. Around 1845, Julius Weisbach and Henry Darcy developed the Darcy–Weisbach equation. The Darcy–Weisbach equation was difficult to use because the friction factor was difficult to estimate. In 1906, Hazen and Williams provided an empirical formula that was easy to use. The general form of the equation relates the mean velocity of water in a pipe with the geometric properties of the pipe and the slope of the energy line, V = k C R^0.63 S^0.54,
where:
V is velocity (in ft/s for US customary units, in m/s for SI units)
k is a conversion factor for the unit system (k = 1.318 for US customary units, k = 0.849 for SI units)
C is a roughness coefficient
R is the hydraulic radius (in ft for US customary units, in m for SI units)
S is the slope of the energy line (head loss per length of pipe or hf/L)
The equation is similar to the Chézy formula but the exponents have been adjusted to better fit data from typical engineering situations. A result of adjusting the exponents is that the value of C appears more like a constant over a wide range of the other parameters. The conversion factor k was chosen so that the values for C were the same as in the Chézy formula for the typical hydraulic slope of S = 0.001. The value of k is 0.001^(−0.04) ≈ 1.318 (in US customary units). Typical C factors used in design, which take into account some increase in roughness as pipe ages, are as follows: Pipe equation The general form can be specialized for full pipe flows. Taking the general form and exponentiating each side by gives (rounding exponents to 3–4 decimals) Rearranging gives The flow rate , so The hydraulic radius (which is different from the geometric radius ) for a full pipe of geometric diameter is ; the pipe's cross-sectional area is , so
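A minimal sketch of the general form given above, V = k C R^0.63 S^0.54, together with the SI head-loss form discussed in the following subsection. The pipe values are arbitrary examples, and the SI constant 10.67 is the commonly quoted value, reconstructed here because the displayed equations did not survive extraction.

def hazen_williams_velocity(c, hydraulic_radius, slope, si_units=True):
    # General form V = k * C * R**0.63 * S**0.54 with the unit-system factors
    # quoted in the variable list above (k = 0.849 for SI, k = 1.318 for US customary).
    k = 0.849 if si_units else 1.318
    return k * c * hydraulic_radius**0.63 * slope**0.54

def hazen_williams_headloss_si(q_m3_s, d_m, length_m, c):
    # SI head-loss form hf = 10.67 * L * Q**1.852 / (C**1.852 * d**4.87),
    # in metres of water; 10.67 is the commonly quoted constant (a hedged
    # reconstruction, since the displayed SI equation below was lost).
    return 10.67 * length_m * q_m3_s**1.852 / (c**1.852 * d_m**4.87)

# Example: a 0.3 m pipe flowing full (R = D/4 = 0.075 m), C = 130, S = 0.005
print(hazen_williams_velocity(130, 0.075, 0.005))            # mean velocity, m/s
# Example: 100 m of 150 mm pipe carrying 20 L/s with C = 130
print(hazen_williams_headloss_si(0.020, 0.150, 100.0, 130))  # head loss, m of water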
U.S. customary units (Imperial) When used to calculate the pressure drop using the US customary units system, the equation is:
where:
Spsi per foot = frictional resistance (pressure drop per foot of pipe) in psig/ft (pounds per square inch gauge pressure per foot)
Sfoot of water per foot of pipe (alternative form of the slope; see the note below)
Pd = pressure drop over the length of pipe in psig (pounds per square inch gauge pressure)
L = length of pipe in feet
Q = flow, gpm (gallons per minute)
C = pipe roughness coefficient
d = inside pipe diameter, in (inches)
Note: Caution with U.S. customary units is advised. The equation for head loss in pipes, also referred to as slope, S, expressed in "feet per foot of length" vs. in "psi per foot of length" as described above, with the inside pipe diameter, d, being entered in feet vs. inches, and the flow rate, Q, being entered in cubic feet per second, cfs, vs. gallons per minute, gpm, appears very similar. However, the constant is 4.73 vs. the 4.52 constant as shown above in the formula as arranged by NFPA for sprinkler system design. The exponents and the Hazen–Williams "C" values are unchanged. SI units When used to calculate the head loss with the International System of Units, the equation becomes:
where:
S = hydraulic slope
hf = head loss in meters (of water) over the length of pipe
L = length of pipe in meters
Q = volumetric flow rate, m3/s (cubic meters per second)
C = pipe roughness coefficient
d = inside pipe diameter, m (meters)
Note: pressure drop can be computed from head loss as hf × the unit weight of water (e.g., 9810 N/m3 at 4 °C). See also Darcy–Weisbach equation and Prony equation for alternatives Fluid dynamics Friction Minor losses in pipe flow Plumbing Pressure Volumetric flow rate References Further reading Williams and Hazen, Second edition, 1909 External links Engineering Toolbox reference Engineering toolbox Hazen–Williams coefficients Online Hazen–Williams calculator for gravity-fed pipes. Online Hazen–Williams calculator for pressurized pipes. https://books.google.com/books?id=DxoMAQAAIAAJ&pg=PA736 https://books.google.com/books?id=RAMX5xuXSrUC&pg=PA145 States pocket calculators and computers make calculations easier. H-W is good for smooth pipes, but Manning better for rough pipes (compared to D-W model). Eponymous equations of physics Equations of fluid dynamics Piping Plumbing Hydraulics Hydrodynamics Irrigation
Euler equations (fluid dynamics)
In fluid dynamics, the Euler equations are a set of partial differential equations governing adiabatic and inviscid flow. They are named after Leonhard Euler. In particular, they correspond to the Navier–Stokes equations with zero viscosity and zero thermal conductivity. The Euler equations can be applied to incompressible and compressible flows. The incompressible Euler equations consist of Cauchy equations for conservation of mass and balance of momentum, together with the incompressibility condition that the flow velocity is a solenoidal field. The compressible Euler equations consist of equations for conservation of mass, balance of momentum, and balance of energy, together with a suitable constitutive equation for the specific energy density of the fluid. Historically, only the equations of conservation of mass and balance of momentum were derived by Euler. However, fluid dynamics literature often refers to the full set of the compressible Euler equations – including the energy equation – as "the compressible Euler equations". The mathematical characters of the incompressible and compressible Euler equations are rather different. For constant fluid density, the incompressible equations can be written as a quasilinear advection equation for the fluid velocity together with an elliptic Poisson's equation for the pressure. On the other hand, the compressible Euler equations form a quasilinear hyperbolic system of conservation equations. The Euler equations can be formulated in a "convective form" (also called the "Lagrangian form") or a "conservation form" (also called the "Eulerian form"). The convective form emphasizes changes to the state in a frame of reference moving with the fluid. The conservation form emphasizes the mathematical interpretation of the equations as conservation equations for a control volume fixed in space (which is useful from a numerical point of view). History The Euler equations first appeared in published form in Euler's article "Principes généraux du mouvement des fluides", published in Mémoires de l'Académie des Sciences de Berlin in 1757 (although Euler had previously presented his work to the Berlin Academy in 1752). Prior work included contributions from the Bernoulli family as well as from Jean le Rond d'Alembert. The Euler equations were among the first partial differential equations to be written down, after the wave equation. In Euler's original work, the system of equations consisted of the momentum and continuity equations, and thus was underdetermined except in the case of an incompressible flow. An additional equation, which was called the adiabatic condition, was supplied by Pierre-Simon Laplace in 1816. During the second half of the 19th century, it was found that the equation related to the balance of energy must at all times be kept for compressible flows, and the adiabatic condition is a consequence of the fundamental laws in the case of smooth solutions. With the discovery of the special theory of relativity, the concepts of energy density, momentum density, and stress were unified into the concept of the stress–energy tensor, and energy and momentum were likewise unified into a single concept, the energy–momentum vector. 
Incompressible Euler equations with constant and uniform density In convective form (i.e., the form with the convective operator made explicit in the momentum equation), the incompressible Euler equations in case of density constant in time and uniform in space are: where: is the flow velocity vector, with components in an N-dimensional space , , for a generic function (or field) denotes its material derivative in time with respect to the advective field and is the gradient of the specific (with the sense of per unit mass) thermodynamic work, the internal source term, and is the flow velocity divergence. represents body accelerations (per unit mass) acting on the continuum, for example gravity, inertial accelerations, electric field acceleration, and so on. The first equation is the Euler momentum equation with uniform density (for this equation it could also not be constant in time). By expanding the material derivative, the equations become: In fact for a flow with uniform density the following identity holds: where is the mechanic pressure. The second equation is the incompressible constraint, stating the flow velocity is a solenoidal field (the order of the equations is not causal, but underlines the fact that the incompressible constraint is not a degenerate form of the continuity equation, but rather of the energy equation, as it will become clear in the following). Notably, the continuity equation would be required also in this incompressible case as an additional third equation in case of density varying in time or varying in space. For example, with density nonuniform in space but constant in time, the continuity equation to be added to the above set would correspond to: So the case of constant and uniform density is the only one not requiring the continuity equation as additional equation regardless of the presence or absence of the incompressible constraint. In fact, the case of incompressible Euler equations with constant and uniform density discussed here is a toy model featuring only two simplified equations, so it is ideal for didactical purposes even if with limited physical relevance. The equations above thus represent respectively conservation of mass (1 scalar equation) and momentum (1 vector equation containing scalar components, where is the physical dimension of the space of interest). Flow velocity and pressure are the so-called physical variables. In a coordinate system given by the velocity and external force vectors and have components and , respectively. Then the equations may be expressed in subscript notation as: where the and subscripts label the N-dimensional space components, and is the Kroenecker delta. The use of Einstein notation (where the sum is implied by repeated indices instead of sigma notation) is also frequent. Properties Although Euler first presented these equations in 1755, many fundamental questions or concepts about them remain unanswered. In three space dimensions, in certain simplified scenarios, the Euler equations produce singularities. Smooth solutions of the free (in the sense of without source term: g=0) equations satisfy the conservation of specific kinetic energy: In the one-dimensional case without the source term (both pressure gradient and external force), the momentum equation becomes the inviscid Burgers' equation: This model equation gives many insights into Euler equations. Nondimensionalisation In order to make the equations dimensionless, a characteristic length , and a characteristic velocity , need to be defined. 
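Because the displayed equations of this subsection did not survive extraction, the following is a hedged LaTeX reconstruction of the standard convective form, with u the flow velocity, w = p/ρ the specific thermodynamic work and g the body acceleration, as defined in the surrounding text:

\[
\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\,\mathbf{u} = -\nabla w + \mathbf{g},
\qquad w = \frac{p}{\rho},
\qquad \nabla\cdot\mathbf{u} = 0 .
\]

In the same notation, the one-dimensional, source-free momentum equation mentioned at the end of the paragraph is the inviscid Burgers equation, \( \partial_t u + u\,\partial_x u = 0 \). The characteristic length and velocity introduced just above are used to nondimensionalise exactly these equations.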
These should be chosen such that the dimensionless variables are all of order one. The following dimensionless variables are thus obtained: and of the field unit vector: Substitution of these inversed relations in Euler equations, defining the Froude number, yields (omitting the * at apix): Euler equations in the Froude limit (no external field) are named free equations and are conservative. The limit of high Froude numbers (low external field) is thus notable and can be studied with perturbation theory. Conservation form The conservation form emphasizes the mathematical properties of Euler equations, and especially the contracted form is often the most convenient one for computational fluid dynamics simulations. Computationally, there are some advantages in using the conserved variables. This gives rise to a large class of numerical methods called conservative methods. The free Euler equations are conservative, in the sense they are equivalent to a conservation equation: or simply in Einstein notation: where the conservation quantity in this case is a vector, and is a flux matrix. This can be simply proved. At last Euler equations can be recast into the particular equation: Spatial dimensions For certain problems, especially when used to analyze compressible flow in a duct or in case the flow is cylindrically or spherically symmetric, the one-dimensional Euler equations are a useful first approximation. Generally, the Euler equations are solved by Riemann's method of characteristics. This involves finding curves in plane of independent variables (i.e., and ) along which partial differential equations (PDEs) degenerate into ordinary differential equations (ODEs). Numerical solutions of the Euler equations rely heavily on the method of characteristics. Incompressible Euler equations In convective form the incompressible Euler equations in case of density variable in space are: where the additional variables are: is the fluid mass density, is the pressure, . The first equation, which is the new one, is the incompressible continuity equation. In fact the general continuity equation would be: but here the last term is identically zero for the incompressibility constraint. Conservation form The incompressible Euler equations in the Froude limit are equivalent to a single conservation equation with conserved quantity and associated flux respectively: Here has length and has size . In general (not only in the Froude limit) Euler equations are expressible as: Conservation variables The variables for the equations in conservation form are not yet optimised. In fact we could define: where is the momentum density, a conservation variable. where is the force density, a conservation variable. Euler equations In differential convective form, the compressible (and most general) Euler equations can be written shortly with the material derivative notation: where the additional variables here is: is the specific internal energy (internal energy per unit mass). The equations above thus represent conservation of mass, momentum, and energy: the energy equation expressed in the variable internal energy allows to understand the link with the incompressible case, but it is not in the simplest form. Mass density, flow velocity and pressure are the so-called convective variables (or physical variables, or lagrangian variables), while mass density, momentum density and total energy density are the so-called conserved variables (also called eulerian, or mathematical variables). 
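As in the previous subsections, the displayed system referred to at the end of the paragraph was lost in extraction; a hedged reconstruction of the compressible Euler equations in material-derivative (convective) form, with ρ the density, u the flow velocity, p the pressure, e the specific internal energy and g the body acceleration, is:

\[
\frac{D\rho}{Dt} = -\rho\,\nabla\cdot\mathbf{u},
\qquad
\frac{D\mathbf{u}}{Dt} = -\frac{\nabla p}{\rho} + \mathbf{g},
\qquad
\frac{De}{Dt} = -\frac{p}{\rho}\,\nabla\cdot\mathbf{u},
\qquad
\frac{D}{Dt} \equiv \frac{\partial}{\partial t} + \mathbf{u}\cdot\nabla .
\]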
If one expands the material derivative the equations above are: Incompressible constraint (revisited) Coming back to the incompressible case, it now becomes apparent that the incompressible constraint typical of the former cases actually is a particular form valid for incompressible flows of the energy equation, and not of the mass equation. In particular, the incompressible constraint corresponds to the following very simple energy equation: Thus for an incompressible inviscid fluid the specific internal energy is constant along the flow lines, also in a time-dependent flow. The pressure in an incompressible flow acts like a Lagrange multiplier, being the multiplier of the incompressible constraint in the energy equation, and consequently in incompressible flows it has no thermodynamic meaning. In fact, thermodynamics is typical of compressible flows and degenerates in incompressible flows. Basing on the mass conservation equation, one can put this equation in the conservation form: meaning that for an incompressible inviscid nonconductive flow a continuity equation holds for the internal energy. Enthalpy conservation Since by definition the specific enthalpy is: The material derivative of the specific internal energy can be expressed as: Then by substituting the momentum equation in this expression, one obtains: And by substituting the latter in the energy equation, one obtains that the enthalpy expression for the Euler energy equation: In a reference frame moving with an inviscid and nonconductive flow, the variation of enthalpy directly corresponds to a variation of pressure. Thermodynamics of ideal fluids In thermodynamics the independent variables are the specific volume, and the specific entropy, while the specific energy is a function of state of these two variables. For a thermodynamic fluid, the compressible Euler equations are consequently best written as: where: is the specific volume is the flow velocity vector is the specific entropy In the general case and not only in the incompressible case, the energy equation means that for an inviscid thermodynamic fluid the specific entropy is constant along the flow lines, also in a time-dependent flow. Basing on the mass conservation equation, one can put this equation in the conservation form: meaning that for an inviscid nonconductive flow a continuity equation holds for the entropy. On the other hand, the two second-order partial derivatives of the specific internal energy in the momentum equation require the specification of the fundamental equation of state of the material considered, i.e. of the specific internal energy as function of the two variables specific volume and specific entropy: The fundamental equation of state contains all the thermodynamic information about the system (Callen, 1985), exactly like the couple of a thermal equation of state together with a caloric equation of state. Conservation form The Euler equations in the Froude limit are equivalent to a single conservation equation with conserved quantity and associated flux respectively: where: is the momentum density, a conservation variable. is the total energy density (total energy per unit volume). Here has length N + 2 and has size N(N + 2). In general (not only in the Froude limit) Euler equations are expressible as: where is the force density, a conservation variable. We remark that also the Euler equation even when conservative (no external field, Froude limit) have no Riemann invariants in general. 
Some further assumptions are required However, we already mentioned that for a thermodynamic fluid the equation for the total energy density is equivalent to the conservation equation: Then the conservation equations in the case of a thermodynamic fluid are more simply expressed as: where is the entropy density, a thermodynamic conservation variable. Another possible form for the energy equation, being particularly useful for isobarics, is: where is the total enthalpy density. Quasilinear form and characteristic equations Expanding the fluxes can be an important part of constructing numerical solvers, for example by exploiting (approximate) solutions to the Riemann problem. In regions where the state vector y varies smoothly, the equations in conservative form can be put in quasilinear form: where are called the flux Jacobians defined as the matrices: Obviously this Jacobian does not exist in discontinuity regions (e.g. contact discontinuities, shock waves in inviscid nonconductive flows). If the flux Jacobians are not functions of the state vector , the equations reveals linear. Characteristic equations The compressible Euler equations can be decoupled into a set of N+2 wave equations that describes sound in Eulerian continuum if they are expressed in characteristic variables instead of conserved variables. In fact the tensor A is always diagonalizable. If the eigenvalues (the case of Euler equations) are all real the system is defined hyperbolic, and physically eigenvalues represent the speeds of propagation of information. If they are all distinguished, the system is defined strictly hyperbolic (it will be proved to be the case of one-dimensional Euler equations). Furthermore, diagonalisation of compressible Euler equation is easier when the energy equation is expressed in the variable entropy (i.e. with equations for thermodynamic fluids) than in other energy variables. This will become clear by considering the 1D case. If is the right eigenvector of the matrix corresponding to the eigenvalue , by building the projection matrix: One can finally find the characteristic variables as: Since A is constant, multiplying the original 1-D equation in flux-Jacobian form with P−1 yields the characteristic equations: The original equations have been decoupled into N+2 characteristic equations each describing a simple wave, with the eigenvalues being the wave speeds. The variables wi are called the characteristic variables and are a subset of the conservative variables. The solution of the initial value problem in terms of characteristic variables is finally very simple. In one spatial dimension it is: Then the solution in terms of the original conservative variables is obtained by transforming back: this computation can be explicited as the linear combination of the eigenvectors: Now it becomes apparent that the characteristic variables act as weights in the linear combination of the jacobian eigenvectors. The solution can be seen as superposition of waves, each of which is advected independently without change in shape. Each i-th wave has shape wipi and speed of propagation λi. In the following we show a very simple example of this solution procedure. 
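As a quick cross-check of the characteristic speeds before the worked example that follows, the sketch below computes the eigenvalues of the one-dimensional flux Jacobian symbolically. It uses the primitive variables (ρ, u, p) of an ideal gas, an assumption made only for this illustration (the text itself works in specific volume, velocity and entropy); the three wave speeds come out as u and u ± a either way.

import sympy as sp

rho, u, p, gamma = sp.symbols('rho u p gamma', positive=True)

# Quasilinear form  dy/dt + A(y) dy/dx = 0  of the 1D Euler equations in
# primitive variables y = (rho, u, p), assuming an ideal gas so that the
# sound speed is a = sqrt(gamma*p/rho).
A = sp.Matrix([
    [u,       rho,     0    ],
    [0,       u,       1/rho],
    [0,       gamma*p, u    ],
])

a = sp.sqrt(gamma * p / rho)
eigenvalues = list(A.eigenvals())
print([sp.simplify(lam - u) for lam in eigenvalues])  # [0, -a, +a] up to ordering
print(sp.simplify(a))                                 # the ideal-gas sound speed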
Waves in 1D inviscid, nonconductive thermodynamic fluid If one considers Euler equations for a thermodynamic fluid with the two further assumptions of one spatial dimension and free (no external field: g = 0): If one defines the vector of variables: recalling that is the specific volume, the flow speed, the specific entropy, the corresponding jacobian matrix is: At first one must find the eigenvalues of this matrix by solving the characteristic equation: that is explicitly: This determinant is very simple: the fastest computation starts on the last row, since it has the highest number of zero elements. Now by computing the determinant 2×2: by defining the parameter: or equivalently in mechanical variables, as: This parameter is always real according to the second law of thermodynamics. In fact the second law of thermodynamics can be expressed by several postulates. The most elementary of them in mathematical terms is the statement of convexity of the fundamental equation of state, i.e. the hessian matrix of the specific energy expressed as function of specific volume and specific entropy: is defined positive. This statement corresponds to the two conditions: The first condition is the one ensuring the parameter a is defined real. The characteristic equation finally results: That has three real solutions: Then the matrix has three real eigenvalues all distinguished: the 1D Euler equations are a strictly hyperbolic system. At this point one should determine the three eigenvectors: each one is obtained by substituting one eigenvalue in the eigenvalue equation and then solving it. By substituting the first eigenvalue λ1 one obtains: Basing on the third equation that simply has solution s1=0, the system reduces to: The two equations are redundant as usual, then the eigenvector is defined with a multiplying constant. We choose as right eigenvector: The other two eigenvectors can be found with analogous procedure as: Then the projection matrix can be built: Finally it becomes apparent that the real parameter a previously defined is the speed of propagation of the information characteristic of the hyperbolic system made of Euler equations, i.e. it is the wave speed. It remains to be shown that the sound speed corresponds to the particular case of an isentropic transformation: Compressibility and sound speed Sound speed is defined as the wavespeed of an isentropic transformation: by the definition of the isoentropic compressibility: the soundspeed results always the square root of ratio between the isentropic compressibility and the density: Ideal gas The sound speed in an ideal gas depends only on its temperature: Since the specific enthalpy in an ideal gas is proportional to its temperature: the sound speed in an ideal gas can also be made dependent only on its specific enthalpy: Bernoulli's theorem for steady inviscid flow Bernoulli's theorem is a direct consequence of the Euler equations. Incompressible case and Lamb's form The vector calculus identity of the cross product of a curl holds: where the Feynman subscript notation is used, which means the subscripted gradient operates only on the factor . 
Lamb in his famous classical book Hydrodynamics (1895), still in print, used this identity to change the convective term of the flow velocity in rotational form: the Euler momentum equation in Lamb's form becomes: Now, basing on the other identity: the Euler momentum equation assumes a form that is optimal to demonstrate Bernoulli's theorem for steady flows: In fact, in case of an external conservative field, by defining its potential φ: In case of a steady flow the time derivative of the flow velocity disappears, so the momentum equation becomes: And by projecting the momentum equation on the flow direction, i.e. along a streamline, the cross product disappears because its result is always perpendicular to the velocity: In the steady incompressible case the mass equation is simply: that is the mass conservation for a steady incompressible flow states that the density along a streamline is constant. Then the Euler momentum equation in the steady incompressible case becomes: The convenience of defining the total head for an inviscid liquid flow is now apparent: which may be simply written as: That is, the momentum balance for a steady inviscid and incompressible flow in an external conservative field states that the total head along a streamline is constant. Compressible case In the most general steady (compressible) case the mass equation in conservation form is: Therefore, the previous expression is rather The right-hand side appears on the energy equation in convective form, which on the steady state reads: The energy equation therefore becomes: so that the internal specific energy now features in the head. Since the external field potential is usually small compared to the other terms, it is convenient to group the latter ones in the total enthalpy: and the Bernoulli invariant for an inviscid gas flow is: which can be written as: That is, the energy balance for a steady inviscid flow in an external conservative field states that the sum of the total enthalpy and the external potential is constant along a streamline. In the usual case of small potential field, simply: Friedmann form and Crocco form By substituting the pressure gradient with the entropy and enthalpy gradient, according to the first law of thermodynamics in the enthalpy form: in the convective form of Euler momentum equation, one arrives to: Friedmann deduced this equation for the particular case of a perfect gas and published it in 1922. However, this equation is general for an inviscid nonconductive fluid and no equation of state is implicit in it. On the other hand, by substituting the enthalpy form of the first law of thermodynamics in the rotational form of Euler momentum equation, one obtains: and by defining the specific total enthalpy: one arrives to the Crocco–Vazsonyi form (Crocco, 1937) of the Euler momentum equation: In the steady case the two variables entropy and total enthalpy are particularly useful since Euler equations can be recast into the Crocco's form: Finally if the flow is also isothermal: by defining the specific total Gibbs free energy: the Crocco's form can be reduced to: From these relationships one deduces that the specific total free energy is uniform in a steady, irrotational, isothermal, isoentropic, inviscid flow. Discontinuities The Euler equations are quasilinear hyperbolic equations and their general solutions are waves. Under certain assumptions they can be simplified leading to Burgers equation. 
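For a calorically perfect gas, the Rankine–Hugoniot jump relations developed below in this section admit the familiar closed-form normal-shock solutions. The sketch below should be read as that standard ideal-gas specialization, assumed here for illustration rather than taken from the text.

def normal_shock(m1, gamma=1.4):
    # Closed-form solution of the Rankine-Hugoniot jump relations for a
    # calorically perfect gas with upstream Mach number m1 > 1.
    # Returns (p2/p1, rho2/rho1, downstream Mach number M2).
    if m1 <= 1.0:
        raise ValueError("a shock requires supersonic upstream flow (M1 > 1)")
    p_ratio = 1.0 + 2.0 * gamma / (gamma + 1.0) * (m1**2 - 1.0)
    rho_ratio = (gamma + 1.0) * m1**2 / ((gamma - 1.0) * m1**2 + 2.0)
    m2 = ((1.0 + 0.5 * (gamma - 1.0) * m1**2)
          / (gamma * m1**2 - 0.5 * (gamma - 1.0))) ** 0.5
    return p_ratio, rho_ratio, m2

for m1 in (1.5, 2.0, 3.0):
    p21, r21, m2 = normal_shock(m1)
    print(f"M1 = {m1:.1f}   p2/p1 = {p21:5.2f}   rho2/rho1 = {r21:4.2f}   M2 = {m2:4.2f}")

For an upstream Mach number of 2 and γ = 1.4 this yields p2/p1 = 4.5 and ρ2/ρ1 ≈ 2.67, the textbook values.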
Much like the familiar oceanic waves, waves described by the Euler Equations 'break' and so-called shock waves are formed; this is a nonlinear effect and represents the solution becoming multi-valued. Physically this represents a breakdown of the assumptions that led to the formulation of the differential equations, and to extract further information from the equations we must go back to the more fundamental integral form. Then, weak solutions are formulated by working in 'jumps' (discontinuities) into the flow quantities – density, velocity, pressure, entropy – using the Rankine–Hugoniot equations. Physical quantities are rarely discontinuous; in real flows, these discontinuities are smoothed out by viscosity and by heat transfer. (See Navier–Stokes equations) Shock propagation is studied – among many other fields – in aerodynamics and rocket propulsion, where sufficiently fast flows occur. To properly compute the continuum quantities in discontinuous zones (for example shock waves or boundary layers) from the local forms (all the above forms are local forms, since the variables being described are typical of one point in the space considered, i.e. they are local variables) of Euler equations through finite difference methods generally too many space points and time steps would be necessary for the memory of computers now and in the near future. In these cases it is mandatory to avoid the local forms of the conservation equations, passing some weak forms, like the finite volume one. Rankine–Hugoniot equations Starting from the simplest case, one consider a steady free conservation equation in conservation form in the space domain: where in general F is the flux matrix. By integrating this local equation over a fixed volume Vm, it becomes: Then, basing on the divergence theorem, we can transform this integral in a boundary integral of the flux: This global form simply states that there is no net flux of a conserved quantity passing through a region in the case steady and without source. In 1D the volume reduces to an interval, its boundary being its extrema, then the divergence theorem reduces to the fundamental theorem of calculus: that is the simple finite difference equation, known as the jump relation: That can be made explicit as: where the notation employed is: Or, if one performs an indefinite integral: On the other hand, a transient conservation equation: brings to a jump relation: For one-dimensional Euler equations the conservation variables and the flux are the vectors: where: is the specific volume, is the mass flux. In the one dimensional case the correspondent jump relations, called the Rankine–Hugoniot equations, are:< In the steady one dimensional case the become simply: Thanks to the mass difference equation, the energy difference equation can be simplified without any restriction: where is the specific total enthalpy. These are the usually expressed in the convective variables: where: is the flow speed is the specific internal energy. The energy equation is an integral form of the Bernoulli equation in the compressible case. The former mass and momentum equations by substitution lead to the Rayleigh equation: Since the second term is a constant, the Rayleigh equation always describes a simple line in the pressure volume plane not dependent of any equation of state, i.e. the Rayleigh line. By substitution in the Rankine–Hugoniot equations, that can be also made explicit as: One can also obtain the kinetic equation and to the Hugoniot equation. 
The analytical passages are not shown here for brevity. These are respectively: The Hugoniot equation, coupled with the fundamental equation of state of the material: describes in general in the pressure volume plane a curve passing by the conditions (v0, p0), i.e. the Hugoniot curve, whose shape strongly depends on the type of material considered. It is also customary to define a Hugoniot function: allowing to quantify deviations from the Hugoniot equation, similarly to the previous definition of the hydraulic head, useful for the deviations from the Bernoulli equation. Finite volume form On the other hand, by integrating a generic conservation equation: on a fixed volume Vm, and then basing on the divergence theorem, it becomes: By integrating this equation also over a time interval: Now by defining the node conserved quantity: we deduce the finite volume form: In particular, for Euler equations, once the conserved quantities have been determined, the convective variables are deduced by back substitution: Then the explicit finite volume expressions of the original convective variables are: Constraints It has been shown that Euler equations are not a complete set of equations, but they require some additional constraints to admit a unique solution: these are the equation of state of the material considered. To be consistent with thermodynamics these equations of state should satisfy the two laws of thermodynamics. On the other hand, by definition non-equilibrium system are described by laws lying outside these laws. In the following we list some very simple equations of state and the corresponding influence on Euler equations. Ideal polytropic gas For an ideal polytropic gas the fundamental equation of state is: where is the specific energy, is the specific volume, is the specific entropy, is the molecular mass, here is considered a constant (polytropic process), and can be shown to correspond to the heat capacity ratio. This equation can be shown to be consistent with the usual equations of state employed by thermodynamics. From this equation one can derive the equation for pressure by its thermodynamic definition: By inverting it one arrives to the mechanical equation of state: Then for an ideal gas the compressible Euler equations can be simply expressed in the mechanical or primitive variables specific volume, flow velocity and pressure, by taking the set of the equations for a thermodynamic system and modifying the energy equation into a pressure equation through this mechanical equation of state. At last, in convective form they result: and in one-dimensional quasilinear form they results: where the conservative vector variable is: and the corresponding jacobian matrix is: Steady flow in material coordinates In the case of steady flow, it is convenient to choose the Frenet–Serret frame along a streamline as the coordinate system for describing the steady momentum Euler equation: where , and denote the flow velocity, the pressure and the density, respectively. Let be a Frenet–Serret orthonormal basis which consists of a tangential unit vector, a normal unit vector, and a binormal unit vector to the streamline, respectively. Since a streamline is a curve that is tangent to the velocity vector of the flow, the left-hand side of the above equation, the convective derivative of velocity, can be described as follows: where and is the radius of curvature of the streamline. 
Therefore, the momentum part of the Euler equations for a steady flow is found to have a simple form: For barotropic flow , Bernoulli's equation is derived from the first equation: The second equation expresses that, in the case the streamline is curved, there should exist a pressure gradient normal to the streamline because the centripetal acceleration of the fluid parcel is only generated by the normal pressure gradient. The third equation expresses that pressure is constant along the binormal axis. Streamline curvature theorem Let be the distance from the center of curvature of the streamline, then the second equation is written as follows: where This equation states:In a steady flow of an inviscid fluid without external forces, the center of curvature of the streamline lies in the direction of decreasing radial pressure. Although this relationship between the pressure field and flow curvature is very useful, it doesn't have a name in the English-language scientific literature. Japanese fluid-dynamicists call the relationship the "Streamline curvature theorem". This "theorem" explains clearly why there are such low pressures in the centre of vortices, which consist of concentric circles of streamlines. This also is a way to intuitively explain why airfoils generate lift forces. Exact solutions All potential flow solutions are also solutions of the Euler equations, and in particular the incompressible Euler equations when the potential is harmonic. Solutions to the Euler equations with vorticity are: parallel shear flows – where the flow is unidirectional, and the flow velocity only varies in the cross-flow directions, e.g. in a Cartesian coordinate system the flow is for instance in the -direction – with the only non-zero velocity component being only dependent on and and not on Arnold–Beltrami–Childress flow – an exact solution of the incompressible Euler equations. Two solutions of the three-dimensional Euler equations with cylindrical symmetry have been presented by Gibbon, Moore and Stuart in 2003. These two solutions have infinite energy; they blow up everywhere in space in finite time. See also Bernoulli's theorem Kelvin's circulation theorem Cauchy equations Froude number Madelung equations Navier–Stokes equations Burgers equation Jeans equations Perfect fluid D'Alembert's paradox References Notes Citations Sources Further reading Eponymous equations of physics Equations of fluid dynamics Leonhard Euler
Derivation of the Navier–Stokes equations
The derivation of the Navier–Stokes equations as well as their application and formulation for different families of fluids, is an important exercise in fluid dynamics with applications in mechanical engineering, physics, chemistry, heat transfer, and electrical engineering. A proof explaining the properties and bounds of the equations, such as Navier–Stokes existence and smoothness, is one of the important unsolved problems in mathematics. Basic assumptions The Navier–Stokes equations are based on the assumption that the fluid, at the scale of interest, is a continuum – a continuous substance rather than discrete particles. Another necessary assumption is that all the fields of interest including pressure, flow velocity, density, and temperature are at least weakly differentiable. The equations are derived from the basic principles of continuity of mass, conservation of momentum, and conservation of energy. Sometimes it is necessary to consider a finite arbitrary volume, called a control volume, over which these principles can be applied. This finite volume is denoted by and its bounding surface . The control volume can remain fixed in space or can move with the fluid. The material derivative Changes in properties of a moving fluid can be measured in two different ways. One can measure a given property by either carrying out the measurement on a fixed point in space as particles of the fluid pass by, or by following a parcel of fluid along its streamline. The derivative of a field with respect to a fixed position in space is called the Eulerian derivative, while the derivative following a moving parcel is called the advective or material (or Lagrangian) derivative. The material derivative is defined as the nonlinear operator: where is the flow velocity. The first term on the right-hand side of the equation is the ordinary Eulerian derivative (the derivative on a fixed reference frame, representing changes at a point with respect to time) whereas the second term represents changes of a quantity with respect to position (see advection). This "special" derivative is in fact the ordinary derivative of a function of many variables along a path following the fluid motion; it may be derived through application of the chain rule in which all independent variables are checked for change along the path (which is to say, the total derivative). For example, the measurement of changes in wind velocity in the atmosphere can be obtained with the help of an anemometer in a weather station or by observing the movement of a weather balloon. The anemometer in the first case is measuring the velocity of all the moving particles passing through a fixed point in space, whereas in the second case the instrument is measuring changes in velocity as it moves with the flow. Continuity equations The Navier–Stokes equation is a special continuity equation. A continuity equation may be derived from conservation principles of: mass, momentum, energy. A continuity equation (or conservation law) is an integral relation stating that the rate of change of some integrated property defined over a control volume must be equal to the rate at which it is lost or gained through the boundaries of the volume plus the rate at which it is created or consumed by sources and sinks inside the volume. This is expressed by the following integral continuity equation: where is the flow velocity of the fluid, is the outward-pointing unit normal vector, and represents the sources and sinks in the flow, taking the sinks as positive. 
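The displayed operator and integral statement referred to above were lost in extraction. A hedged reconstruction follows, writing Ω for the control volume, ∂Ω for its bounding surface, n for the outward unit normal and Q for the source/sink density with sinks taken as positive; these symbols are assumptions, since the original notation did not survive:

\[
\frac{D\varphi}{Dt} = \frac{\partial \varphi}{\partial t} + \mathbf{u}\cdot\nabla\varphi ,
\qquad
\frac{d}{dt}\int_{\Omega} \varphi \, dV
  = -\oint_{\partial\Omega} \varphi\,\mathbf{u}\cdot\mathbf{n}\, dS
    - \int_{\Omega} Q \, dV .
\]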
The divergence theorem may be applied to the surface integral, changing it into a volume integral: Applying the Reynolds transport theorem to the integral on the left and then combining all of the integrals: The integral must be zero for any control volume; this can only be true if the integrand itself is zero, so that: From this valuable relation (a very generic continuity equation), three important concepts may be concisely written: conservation of mass, conservation of momentum, and conservation of energy. Validity is retained if is a vector, in which case the vector-vector product in the second term will be a dyad. Conservation of mass Mass may be considered also. When the intensive property is considered as the mass, by substitution into the general continuity equation, and taking (no sources or sinks of mass): where is the mass density (mass per unit volume), and is the flow velocity. This equation is called the mass continuity equation, or simply the continuity equation. This equation generally accompanies the Navier–Stokes equation. In the case of an incompressible fluid, (the density following the path of a fluid element is constant) and the equation reduces to: which is in fact a statement of the conservation of volume. Conservation of momentum A general momentum equation is obtained when the conservation relation is applied to momentum. When the intensive property is considered as the mass flux (also momentum density), that is, the product of mass density and flow velocity , by substitution into the general continuity equation: where is a dyad, a special case of tensor product, which results in a second rank tensor; the divergence of a second rank tensor is again a vector (a first-rank tensor). Using the formula for the divergence of a dyad, we then have Note that the gradient of a vector is a special case of the covariant derivative, the operation results in second rank tensors; except in Cartesian coordinates, it is important to understand that this is not simply an element by element gradient. Rearranging : The leftmost expression enclosed in parentheses is, by mass continuity (shown before), equal to zero. Noting that what remains on the left side of the equation is the material derivative of flow velocity: This appears to simply be an expression of Newton's second law in terms of body forces instead of point forces. Each term in any case of the Navier–Stokes equations is a body force. A shorter though less rigorous way to arrive at this result would be the application of the chain rule to acceleration: where . The reason why this is "less rigorous" is that we haven't shown that the choice of is correct; however it does make sense since with that choice of path the derivative is "following" a fluid "particle", and in order for Newton's second law to work, forces must be summed following a particle. For this reason the convective derivative is also known as the particle derivative. Cauchy momentum equation The generic density of the momentum source seen previously is made specific first by breaking it up into two new terms, one to describe internal stresses and one for external forces, such as gravity. By examining the forces acting on a small cube in a fluid, it may be shown that where is the Cauchy stress tensor, and accounts for body forces present. This equation is called the Cauchy momentum equation and describes the non-relativistic momentum conservation of any continuum that conserves mass. is a rank two symmetric tensor given by its covariant components. 
In orthogonal coordinates in three dimensions it is represented as the 3 × 3 matrix: where the are normal stresses and shear stresses. This matrix is split up into two terms: where is the 3 × 3 identity matrix and is the deviatoric stress tensor. Note that the mechanical pressure is equal to the negative of the mean normal stress: The motivation for doing this is that pressure is typically a variable of interest, and also this simplifies application to specific fluid families later on since the rightmost tensor in the equation above must be zero for a fluid at rest. Note that is traceless. The Cauchy equation may now be written in another more explicit form: This equation is still incomplete. For completion, one must make hypotheses on the forms of and , that is, one needs a constitutive law for the stress tensor which can be obtained for specific fluid families and on the pressure. Some of these hypotheses lead to the Euler equations (fluid dynamics), other ones lead to the Navier–Stokes equations. Additionally, if the flow is assumed compressible an equation of state will be required, which will likely further require a conservation of energy formulation. Application to different fluids The general form of the equations of motion is not "ready for use", the stress tensor is still unknown so that more information is needed; this information is normally some knowledge of the viscous behavior of the fluid. For different types of fluid flow this results in specific forms of the Navier–Stokes equations. Newtonian fluid Compressible Newtonian fluid The formulation for Newtonian fluids stems from an observation made by Newton that, for most fluids, In order to apply this to the Navier–Stokes equations, three assumptions were made by Stokes: The stress tensor is a linear function of the strain rate tensor or equivalently the velocity gradient. The fluid is isotropic. For a fluid at rest, must be zero (so that hydrostatic pressure results). The above list states the classic argument that the shear strain rate tensor (the (symmetric) shear part of the velocity gradient) is a pure shear tensor and does not include any inflow/outflow part (any compression/expansion part). This means that its trace is zero, and this is achieved by subtracting in a symmetric way from the diagonal elements of the tensor. The compressional contribution to viscous stress is added as a separate diagonal tensor. Applying these assumptions will lead to : or in tensor form That is, the deviatoric of the deformation rate tensor is identified to the deviatoric of the stress tensor, up to a factor . is the Kronecker delta. and are proportionality constants associated with the assumption that stress depends on strain linearly; is called the first coefficient of viscosity or shear viscosity (usually just called "viscosity") and is the second coefficient of viscosity or volume viscosity (and it is related to bulk viscosity). The value of , which produces a viscous effect associated with volume change, is very difficult to determine, not even its sign is known with absolute certainty. Even in compressible flows, the term involving is often negligible; however it can occasionally be important even in nearly incompressible flows and is a matter of controversy. When taken nonzero, the most common approximation is . 
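The split of the Cauchy stress into a pressure part and a traceless deviatoric part described above can be checked numerically; the tensor entries below are arbitrary sample values, not data from the text.

import numpy as np

# Split a (symmetric) Cauchy stress tensor into -p*I plus a deviatoric part tau.
sigma = np.array([[-3.0,  0.5,  0.2],
                  [ 0.5, -2.0,  0.1],
                  [ 0.2,  0.1, -4.0]])

p = -np.trace(sigma) / 3.0        # mechanical pressure = minus the mean normal stress
tau = sigma + p * np.eye(3)       # deviatoric stress, so that sigma = -p*I + tau

print("pressure:", p)
print("trace of tau (should be 0):", np.trace(tau))
print("sigma reconstructed:", np.allclose(sigma, -p * np.eye(3) + tau))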
A straightforward substitution of into the momentum conservation equation will yield the Navier–Stokes equations, describing a compressible Newtonian fluid: The body force has been decomposed into density and external acceleration, that is, . The associated mass continuity equation is: In addition to this equation, an equation of state and an equation for the conservation of energy are needed. The equation of state to use depends on context (often the ideal gas law); the conservation of energy will read: Here, is the specific enthalpy, is the temperature, and is a function representing the dissipation of energy due to viscous effects: With a good equation of state and good functions for the dependence of parameters (such as viscosity) on the variables, this system of equations seems to properly model the dynamics of all known gases and most liquids. Incompressible Newtonian fluid For the special (but very common) case of incompressible flow, the momentum equations simplify significantly. Using the following assumptions: viscosity is now a constant, the second viscosity effect drops out, and the simplified mass continuity equation holds. This gives the incompressible Navier–Stokes equations, describing an incompressible Newtonian fluid: Then, looking at the viscous terms of the momentum equation, for example, we have: Similarly, for the and momentum directions we have and . The above solution is key to deriving the Navier–Stokes equations from the equation of motion in fluid dynamics when density and viscosity are constant. Non-Newtonian fluids A non-Newtonian fluid is a fluid whose flow properties differ in any way from those of Newtonian fluids. Most commonly the viscosity of non-Newtonian fluids is a function of shear rate or shear rate history. However, there are some non-Newtonian fluids with shear-independent viscosity that nonetheless exhibit normal stress differences or other non-Newtonian behaviour. Many salt solutions and molten polymers are non-Newtonian fluids, as are many commonly found substances such as ketchup, custard, toothpaste, starch suspensions, paint, blood, and shampoo. In a Newtonian fluid, the relation between the shear stress and the shear rate is linear, passing through the origin, the constant of proportionality being the coefficient of viscosity. In a non-Newtonian fluid, the relation between the shear stress and the shear rate is different, and can even be time-dependent. The study of non-Newtonian fluids is usually called rheology. A few examples are given here. Bingham fluid In Bingham fluids, the situation is slightly different: These are fluids capable of bearing some stress before they start flowing. Some common examples are toothpaste and clay. Power-law fluid A power law fluid is an idealised fluid for which the shear stress, , is given by This form is useful for approximating all sorts of general fluids, including shear thinning (such as latex paint) and shear thickening (such as a corn starch and water mixture). Stream function formulation In the analysis of a flow, it is often desirable to reduce the number of equations and/or the number of variables. The incompressible Navier–Stokes equation with mass continuity (four equations in four unknowns) can be reduced to a single equation with a single dependent variable in 2D, or one vector equation in 3D. This is enabled by two vector calculus identities: for any differentiable scalar and vector . 
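The two vector calculus identities just invoked, together with the incompressible system they are applied to, are presumably the standard ones; a sketch in assumed notation (φ any differentiable scalar field, A any differentiable vector field, ν = μ/ρ the kinematic viscosity):

    \nabla \times (\nabla \phi) = \mathbf{0}, \qquad \nabla \cdot (\nabla \times \mathbf{A}) = 0
    \frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u} \cdot \nabla)\,\mathbf{u} = -\nabla\!\left(\frac{p}{\rho}\right) + \nu\,\nabla^{2} \mathbf{u} + \mathbf{g}, \qquad \nabla \cdot \mathbf{u} = 0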
The first identity implies that any term in the Navier–Stokes equation that may be represented as the gradient of a scalar will disappear when the curl of the equation is taken. Commonly, pressure and external acceleration will be eliminated, resulting in (this is true in 2D as well as 3D): where it is assumed that all body forces are describable as gradients (for example it is true for gravity), and density has been divided out so that viscosity becomes kinematic viscosity. The second vector calculus identity above states that the divergence of the curl of a vector field is zero. Since the (incompressible) mass continuity equation specifies the divergence of flow velocity being zero, we can replace the flow velocity with the curl of some vector so that mass continuity is always satisfied: So, as long as flow velocity is represented through , mass continuity is unconditionally satisfied. With this new dependent vector variable, the Navier–Stokes equation (with curl taken as above) becomes a single fourth order vector equation, no longer containing the unknown pressure variable and no longer dependent on a separate mass continuity equation: Apart from containing fourth order derivatives, this equation is fairly complicated, and is thus uncommon. Note that if the cross differentiation is left out, the result is a third order vector equation containing an unknown vector field (the gradient of pressure) that may be determined from the same boundary conditions that one would apply to the fourth order equation above. 2D flow in orthogonal coordinates The true utility of this formulation is seen when the flow is two dimensional in nature and the equation is written in a general orthogonal coordinate system, in other words a system where the basis vectors are orthogonal. Note that this by no means limits application to Cartesian coordinates; in fact, most of the common coordinate systems are orthogonal, including familiar ones like cylindrical and obscure ones like toroidal. The 3D flow velocity is expressed as (note that the discussion has not used coordinates so far): where are basis vectors, not necessarily constant and not necessarily normalized, and are flow velocity components; let also the coordinates of space be . Now suppose that the flow is 2D. This does not mean the flow is in a plane; rather, it means that the component of flow velocity in one direction is zero and the remaining components are independent of the same direction. In that case (take component 3 to be zero): The vector function is still defined via: but this must simplify in some way also since the flow is assumed 2D. If orthogonal coordinates are assumed, the curl takes on a fairly simple form, and the equation above expanded becomes: Examining this equation shows that we can set and retain equality with no loss of generality, so that: the significance here is that only one component of remains, so that 2D flow becomes a problem with only one dependent variable. The cross differentiated Navier–Stokes equation becomes two trivial equations and one meaningful equation. The remaining component is called the stream function. The equation for can simplify since a variety of quantities will now equal zero, for example: if the scale factors and also are independent of . Also, from the definition of the vector Laplacian: Manipulating the cross differentiated Navier–Stokes equation using the above two equations and a variety of identities will eventually yield the 1D scalar equation for the stream function: where is the biharmonic operator. 
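In the simplest case of Cartesian coordinates (all scale factors equal to one), the quantities just described reduce to the familiar forms below; this is a hedged sketch in standard notation, not text recovered from the original:

    u = \frac{\partial \psi}{\partial y}, \qquad v = -\frac{\partial \psi}{\partial x}, \qquad \omega = -\nabla^{2} \psi
    \frac{\partial}{\partial t}\!\left(\nabla^{2} \psi\right) + \frac{\partial \psi}{\partial y}\,\frac{\partial}{\partial x}\!\left(\nabla^{2} \psi\right) - \frac{\partial \psi}{\partial x}\,\frac{\partial}{\partial y}\!\left(\nabla^{2} \psi\right) = \nu\,\nabla^{4} \psi

where ∇⁴ is the biharmonic operator and body forces derivable from a potential have already been eliminated by taking the curl.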
This is very useful because it is a single self-contained scalar equation that describes both momentum and mass conservation in 2D. The only other equations that this partial differential equation needs are initial and boundary conditions. Derivation of the scalar stream function equation: Distributing the curl: Replacing curl of the curl with the Laplacian and expanding convection and viscosity: Above, the curl of a gradient is zero, and the divergence of is zero. Negating: Expanding the curl of the cross product into four terms: Only one of the four terms of the expanded curl is nonzero. The second is zero because it is the dot product of orthogonal vectors, the third is zero because it contains the divergence of flow velocity, and the fourth is zero because the divergence of a vector with only component three is zero (since it is assumed that nothing (except maybe ) depends on component three). This vector equation is one meaningful scalar equation and two trivial equations. The assumptions for the stream function equation are: The flow is incompressible and Newtonian. Coordinates are orthogonal. Flow is 2D: The first two scale factors of the coordinate system are independent of the last coordinate: , otherwise extra terms appear. The stream function has some useful properties: Since , the vorticity of the flow is just the negative of the Laplacian of the stream function. The level curves of the stream function are streamlines. The stress tensor The derivation of the Navier–Stokes equation involves the consideration of forces acting on fluid elements, so that a quantity called the stress tensor appears naturally in the Cauchy momentum equation. Since the divergence of this tensor is taken, it is customary to write out the equation fully simplified, so that the original appearance of the stress tensor is lost. However, the stress tensor still has some important uses, especially in formulating boundary conditions at fluid interfaces. Recalling that , for a Newtonian fluid the stress tensor is: If the fluid is assumed to be incompressible, the tensor simplifies significantly. In 3D Cartesian coordinates, for example: is the strain rate tensor, by definition (a sketch of these expressions follows the list below): See also Derivation of Navier–Stokes equation from discrete LBE First law of thermodynamics (fluid mechanics)
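A hedged sketch of the incompressible Newtonian stress tensor expressions referred to just before the See also list (standard notation; the original display is not preserved):

    \boldsymbol{\sigma} = -p\,\mathbf{I} + 2\mu\,\mathbf{e}, \qquad e_{ij} = \frac{1}{2}\left(\frac{\partial u_i}{\partial x_j} + \frac{\partial u_j}{\partial x_i}\right)
    \text{e.g.}\quad \sigma_{xx} = -p + 2\mu\,\frac{\partial u}{\partial x}, \qquad \sigma_{xy} = \sigma_{yx} = \mu\left(\frac{\partial u}{\partial y} + \frac{\partial v}{\partial x}\right)

valid for an incompressible flow, where the ∇·u contributions drop out.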
Hydrostatic equilibrium
In fluid mechanics, hydrostatic equilibrium (hydrostatic balance, hydrostasy) is the condition of a fluid or plastic solid at rest, which occurs when external forces, such as gravity, are balanced by a pressure-gradient force. In the planetary physics of Earth, the pressure-gradient force prevents gravity from collapsing the planetary atmosphere into a thin, dense shell, whereas gravity prevents the pressure-gradient force from diffusing the atmosphere into outer space. In general, it is what causes objects in space to be spherical. Hydrostatic equilibrium is the distinguishing criterion between dwarf planets and small solar system bodies, and features in astrophysics and planetary geology. Said qualification of equilibrium indicates that the shape of the object is symmetrically rounded, mostly due to rotation, into an ellipsoid, where any irregular surface features are consequent to a relatively thin solid crust. In addition to the Sun, there are a dozen or so equilibrium objects confirmed to exist in the Solar System. Mathematical consideration For a hydrostatic fluid on Earth: Derivation from force summation Newton's laws of motion state that a volume of a fluid that is not in motion or that is in a state of constant velocity must have zero net force on it. This means the sum of the forces in a given direction must be opposed by an equal sum of forces in the opposite direction. This force balance is called a hydrostatic equilibrium. The fluid can be split into a large number of cuboid volume elements; by considering a single element, the action of the fluid can be derived. There are three forces: the force downwards onto the top of the cuboid from the pressure, P, of the fluid above it is, from the definition of pressure, Similarly, the force on the volume element from the pressure of the fluid below pushing upwards is Finally, the weight of the volume element causes a force downwards. If the density is ρ, the volume is V and g the standard gravity, then: The volume of this cuboid is equal to the area of the top or bottom, times the height – the formula for finding the volume of a cube. By balancing these forces, the total force on the fluid is This sum equals zero if the fluid's velocity is constant. Dividing by A, Or, Ptop − Pbottom is a change in pressure, and h is the height of the volume element—a change in the distance above the ground. By saying these changes are infinitesimally small, the equation can be written in differential form. Density changes with pressure, and gravity changes with height, so the equation would be: Derivation from Navier–Stokes equations Note finally that this last equation can be derived by solving the three-dimensional Navier–Stokes equations for the equilibrium situation where Then the only non-trivial equation is the -equation, which now reads Thus, hydrostatic balance can be regarded as a particularly simple equilibrium solution of the Navier–Stokes equations. Derivation from general relativity By plugging the energy–momentum tensor for a perfect fluid into the Einstein field equations and using the conservation condition one can derive the Tolman–Oppenheimer–Volkoff equation for the structure of a static, spherically symmetric relativistic star in isotropic coordinates: In practice, Ρ and ρ are related by an equation of state of the form f(Ρ,ρ) = 0, with f specific to makeup of the star. 
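The displayed equations in the two derivations above are not preserved here. As a hedged sketch in standard notation: the Newtonian force balance yields the hydrostatic relation below, and the Tolman–Oppenheimer–Volkoff equation is quoted in its usual Schwarzschild (areal-radius) form, which differs in appearance from the isotropic-coordinate version mentioned in the text:

    \frac{dP}{dh} = -\rho(h)\, g(h)                                              (Newtonian hydrostatic equilibrium, h measured upward)
    \frac{dP}{dr} = -\,\frac{G \left(\rho + \dfrac{P}{c^{2}}\right)\left(M(r) + \dfrac{4 \pi r^{3} P}{c^{2}}\right)}{r^{2}\left(1 - \dfrac{2 G M(r)}{r c^{2}}\right)}        (TOV equation)

These forms are assumptions for orientation only, not text recovered from the source.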
M(r) is a foliation of spheres weighted by the mass density ρ(r), with the largest sphere having radius r: Per standard procedure in taking the nonrelativistic limit, we let , so that the factor Therefore, in the nonrelativistic limit the Tolman–Oppenheimer–Volkoff equation reduces to Newton's hydrostatic equilibrium: (we have made the trivial notation change h = r and have used f(Ρ,ρ) = 0 to express ρ in terms of P). A similar equation can be computed for rotating, axially symmetric stars, which in its gauge independent form reads: Unlike the TOV equilibrium equation, these are two equations (for instance, if as usual when treating stars, one chooses spherical coordinates as basis coordinates , the index i runs for the coordinates r and ). Applications Fluids The hydrostatic equilibrium pertains to hydrostatics and the principles of equilibrium of fluids. A hydrostatic balance is a particular balance for weighing substances in water. Hydrostatic balance allows the discovery of their specific gravities. This equilibrium is strictly applicable when an ideal fluid is in steady horizontal laminar flow, and when any fluid is at rest or in vertical motion at constant speed. It can also be a satisfactory approximation when flow speeds are low enough that acceleration is negligible. Astrophysics and planetary science From the time of Isaac Newton much work has been done on the subject of the equilibrium attained when a fluid rotates in space. This has application to both stars and objects like planets, which may have been fluid in the past or in which the solid material deforms like a fluid when subjected to very high stresses. In any given layer of a star there is a hydrostatic equilibrium between the outward-pushing pressure gradient and the weight of the material above pressing inward. One can also study planets under the assumption of hydrostatic equilibrium. A rotating star or planet in hydrostatic equilibrium is usually an oblate spheroid, that is, an ellipsoid in which two of the principal axes are equal and longer than the third. An example of this phenomenon is the star Vega, which has a rotation period of 12.5 hours. Consequently, Vega is about 20% larger at the equator than from pole to pole. In his 1687 Philosophiæ Naturalis Principia Mathematica Newton correctly stated that a rotating fluid of uniform density under the influence of gravity would take the form of a spheroid and that the gravity (including the effect of centrifugal force) would be weaker at the equator than at the poles by an amount equal (at least asymptotically) to five fourths the centrifugal force at the equator. In 1742, Colin Maclaurin published his treatise on fluxions, in which he showed that the spheroid was an exact solution. If we designate the equatorial radius by the polar radius by and the eccentricity by with he found that the gravity at the poles is where is the gravitational constant, is the (uniform) density, and is the total mass. The ratio of this to the gravity if the fluid is not rotating, is asymptotic to as goes to zero, where is the flattening: The gravitational attraction on the equator (not including centrifugal force) is Asymptotically we have: Maclaurin showed (still in the case of uniform density) that the component of gravity toward the axis of rotation depended only on the distance from the axis and was proportional to that distance, and the component in the direction toward the plane of the equator depended only on the distance from that plane and was proportional to that distance. 
Newton had already pointed out that the gravity felt on the equator (including the lightening due to centrifugal force) has to be in order to have the same pressure at the bottom of channels from the pole or from the equator to the centre, so the centrifugal force at the equator must be Defining the latitude to be the angle between a tangent to the meridian and axis of rotation, the total gravity felt at latitude (including the effect of centrifugal force) is This spheroid solution is stable up to a certain (critical) angular momentum (normalized by ), but in 1834 Carl Jacobi showed that it becomes unstable once the eccentricity reaches 0.81267 (or reaches 0.3302). Above the critical value the solution becomes a Jacobi, or scalene, ellipsoid (one with all three axes different). Henri Poincaré in 1885 found that at still higher angular momentum it will no longer be ellipsoidal but piriform or oviform. The symmetry drops from the 8-fold D point group to the 4-fold C, with its axis perpendicular to the axis of rotation. Other shapes satisfy the equations beyond that, but are not stable, at least not near the point of bifurcation. Poincaré was unsure what would happen at higher angular momentum, but concluded that eventually the blob would split in two. The assumption of uniform density may apply more or less to a molten planet or a rocky planet, but does not apply to a star or to a planet like the earth which has a dense metallic core. In 1737 Alexis Clairaut studied the case of density varying with depth. Clairaut's theorem states that the variation of the gravity (including centrifugal force) is proportional to the square of the sine of the latitude, with the proportionality depending linearly on the flattening and the ratio at the equator of centrifugal force to gravitational attraction. (Compare with the exact relation above for the case of uniform density.) Clairaut's theorem is a special case, for an oblate spheroid, of a connexion found later by Pierre-Simon Laplace between the shape and the variation of gravity. If the star has a massive nearby companion object then tidal forces come into play as well, distorting the star into a scalene shape when rotation alone would make it a spheroid. An example of this is Beta Lyrae. Hydrostatic equilibrium is also important for the intracluster medium, where it restricts the amount of fluid that can be present in the core of a cluster of galaxies. We can also use the principle of hydrostatic equilibrium to estimate the velocity dispersion of dark matter in clusters of galaxies. Only baryonic matter (or, rather, the collisions thereof) emits X-ray radiation. The absolute X-ray luminosity per unit volume takes the form where and are the temperature and density of the baryonic matter, and is some function of temperature and fundamental constants. The baryonic density satisfies the above equation The integral is a measure of the total mass of the cluster, with being the proper distance to the center of the cluster. 
Using the ideal gas law ( is the Boltzmann constant and is a characteristic mass of the baryonic gas particles) and rearranging, we arrive at Multiplying by and differentiating with respect to yields If we make the assumption that cold dark matter particles have an isotropic velocity distribution, then the same derivation applies to these particles, and their density satisfies the non-linear differential equation With perfect X-ray and distance data, we could calculate the baryon density at each point in the cluster and thus the dark matter density. We could then calculate the velocity dispersion of the dark matter, which is given by The central density ratio is dependent on the redshift of the cluster and is given by where is the angular width of the cluster and the proper distance to the cluster. Values for the ratio range from 0.11 to 0.14 for various surveys. Planetary geology The concept of hydrostatic equilibrium has also become important in determining whether an astronomical object is a planet, dwarf planet, or small Solar System body. According to the definition of planet adopted by the International Astronomical Union in 2006, one defining characteristic of planets and dwarf planets is that they are objects that have sufficient gravity to overcome their own rigidity and assume hydrostatic equilibrium. Such a body will often have the differentiated interior and geology of a world (a planemo), though near-hydrostatic or formerly hydrostatic bodies such as the proto-planet 4 Vesta may also be differentiated and some hydrostatic bodies (notably Callisto) have not thoroughly differentiated since their formation. Often the equilibrium shape is an oblate spheroid, as is the case with Earth. However, in the cases of moons in synchronous orbit, nearly unidirectional tidal forces create a scalene ellipsoid. Also, the purported dwarf planet is scalene due to its rapid rotation, though it may not currently be in equilibrium. Icy objects were previously believed to need less mass to attain hydrostatic equilibrium than rocky objects. The smallest object that appears to have an equilibrium shape is the icy moon Mimas at 396 km, whereas the largest icy object known to have an obviously non-equilibrium shape is the icy moon Proteus at 420 km, and the largest rocky bodies in an obviously non-equilibrium shape are the asteroids Pallas and Vesta at about 520 km. However, Mimas is not actually in hydrostatic equilibrium for its current rotation. The smallest body confirmed to be in hydrostatic equilibrium is the dwarf planet Ceres, which is icy, at 945 km, whereas the largest known body to have a noticeable deviation from hydrostatic equilibrium is Iapetus being made of mostly permeable ice and almost no rock. At 1,469 km Iapetus is neither spherical nor ellipsoid. Instead, it is rather in a strange walnut-like shape due to its unique equatorial ridge. Some icy bodies may be in equilibrium at least partly due to a subsurface ocean, which is not the definition of equilibrium used by the IAU (gravity overcoming internal rigid-body forces). Even larger bodies deviate from hydrostatic equilibrium, although they are ellipsoidal: examples are Earth's Moon at 3,474 km (mostly rock), and the planet Mercury at 4,880 km (mostly metal). In 2024, Kiss et al. found that has an ellipsoidal shape incompatible with hydrostatic equilibrium for its current spin. 
They hypothesised that Quaoar originally had a rapid rotation and was in hydrostatic equilibrium, but that its shape became "frozen in" and did not change as it spun down due to tidal forces from its moon Weywot. If so, this would resemble the situation of Iapetus, which is too oblate for its current spin. Iapetus is generally still considered a planetary-mass moon nonetheless, though not always. Solid bodies have irregular surfaces, but local irregularities may be consistent with global equilibrium. For example, the massive base of the tallest mountain on Earth, Mauna Kea, has deformed and depressed the level of the surrounding crust, so that the overall distribution of mass approaches equilibrium. Atmospheric modeling In the atmosphere, the pressure of the air decreases with increasing altitude. This pressure difference causes an upward force called the pressure-gradient force. The force of gravity balances this out, keeping the atmosphere bound to Earth and maintaining pressure differences with altitude (a minimal numerical sketch of this balance follows the list below). See also List of gravitationally rounded objects of the Solar System; a list of objects that have a rounded, ellipsoidal shape due to their own gravity (but are not necessarily in hydrostatic equilibrium) Statics Two-balloon experiment
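As promised above, a minimal numerical sketch (not from the article; every value below is an illustrative assumption) of the hydrostatic balance dP/dh = -ρg applied to an isothermal ideal-gas atmosphere, compared against the analytic barometric formula:

    # Integrate dP/dh = -rho(h) * g for an isothermal ideal-gas atmosphere and
    # compare with the barometric formula P(h) = P0 * exp(-h / H).
    import math

    g  = 9.81        # m/s^2, gravitational acceleration (assumed constant)
    T  = 288.0       # K, assumed constant temperature
    M  = 0.02896     # kg/mol, molar mass of dry air (assumed)
    R  = 8.314       # J/(mol K), gas constant
    P0 = 101325.0    # Pa, assumed surface pressure

    H = R * T / (M * g)          # pressure scale height of the isothermal column

    def density(P):
        """Ideal-gas density at pressure P and fixed temperature T."""
        return P * M / (R * T)

    # Simple forward-Euler integration of the hydrostatic relation
    dh, P, h = 10.0, P0, 0.0
    while h < 10_000.0:
        P -= density(P) * g * dh
        h += dh

    analytic = P0 * math.exp(-h / H)
    print(f"numerical P(10 km) = {P:.0f} Pa, barometric formula = {analytic:.0f} Pa")

The two values agree closely, which restates the fact that gravity and the pressure-gradient force balance in this model atmosphere.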
Mathematical physics
Mathematical physics refers to the development of mathematical methods for application to problems in physics. The Journal of Mathematical Physics defines the field as "the application of mathematics to problems in physics and the development of mathematical methods suitable for such applications and for the formulation of physical theories". An alternative definition would also include those mathematics that are inspired by physics, known as physical mathematics. Scope There are several distinct branches of mathematical physics, and these roughly correspond to particular historical parts of our world. Classical mechanics Applying the techniques of mathematical physics to classical mechanics typically involves the rigorous, abstract, and advanced reformulation of Newtonian mechanics in terms of Lagrangian mechanics and Hamiltonian mechanics (including both approaches in the presence of constraints). Both formulations are embodied in analytical mechanics and lead to an understanding of the deep interplay between the notions of symmetry and conserved quantities during the dynamical evolution of mechanical systems, as embodied within the most elementary formulation of Noether's theorem. These approaches and ideas have been extended to other areas of physics, such as statistical mechanics, continuum mechanics, classical field theory, and quantum field theory. Moreover, they have provided multiple examples and ideas in differential geometry (e.g., several notions in symplectic geometry and vector bundles). Partial differential equations Within mathematics proper, the theory of partial differential equation, variational calculus, Fourier analysis, potential theory, and vector analysis are perhaps most closely associated with mathematical physics. These fields were developed intensively from the second half of the 18th century (by, for example, D'Alembert, Euler, and Lagrange) until the 1930s. Physical applications of these developments include hydrodynamics, celestial mechanics, continuum mechanics, elasticity theory, acoustics, thermodynamics, electricity, magnetism, and aerodynamics. Quantum theory The theory of atomic spectra (and, later, quantum mechanics) developed almost concurrently with some parts of the mathematical fields of linear algebra, the spectral theory of operators, operator algebras and, more broadly, functional analysis. Nonrelativistic quantum mechanics includes Schrödinger operators, and it has connections to atomic and molecular physics. Quantum information theory is another subspecialty. Relativity and quantum relativistic theories The special and general theories of relativity require a rather different type of mathematics. This was group theory, which played an important role in both quantum field theory and differential geometry. This was, however, gradually supplemented by topology and functional analysis in the mathematical description of cosmological as well as quantum field theory phenomena. In the mathematical description of these physical areas, some concepts in homological algebra and category theory are also important. Statistical mechanics Statistical mechanics forms a separate field, which includes the theory of phase transitions. It relies upon the Hamiltonian mechanics (or its quantum version) and it is closely related with the more mathematical ergodic theory and some parts of probability theory. There are increasing interactions between combinatorics and physics, in particular statistical physics. 
Usage The usage of the term "mathematical physics" is sometimes idiosyncratic. Certain parts of mathematics that initially arose from the development of physics are not, in fact, considered parts of mathematical physics, while other closely related fields are. For example, ordinary differential equations and symplectic geometry are generally viewed as purely mathematical disciplines, whereas dynamical systems and Hamiltonian mechanics belong to mathematical physics. John Herapath used the term for the title of his 1847 text on "mathematical principles of natural philosophy", the scope at that time being "the causes of heat, gaseous elasticity, gravitation, and other great phenomena of nature". Mathematical vs. theoretical physics The term "mathematical physics" is sometimes used to denote research aimed at studying and solving problems in physics or thought experiments within a mathematically rigorous framework. In this sense, mathematical physics covers a very broad academic realm distinguished only by the blending of some mathematical aspect and theoretical physics aspect. Although related to theoretical physics, mathematical physics in this sense emphasizes the mathematical rigour of the similar type as found in mathematics. On the other hand, theoretical physics emphasizes the links to observations and experimental physics, which often requires theoretical physicists (and mathematical physicists in the more general sense) to use heuristic, intuitive, or approximate arguments. Such arguments are not considered rigorous by mathematicians. Such mathematical physicists primarily expand and elucidate physical theories. Because of the required level of mathematical rigour, these researchers often deal with questions that theoretical physicists have considered to be already solved. However, they can sometimes show that the previous solution was incomplete, incorrect, or simply too naïve. Issues about attempts to infer the second law of thermodynamics from statistical mechanics are examples. Other examples concern the subtleties involved with synchronisation procedures in special and general relativity (Sagnac effect and Einstein synchronisation). The effort to put physical theories on a mathematically rigorous footing not only developed physics but also has influenced developments of some mathematical areas. For example, the development of quantum mechanics and some aspects of functional analysis parallel each other in many ways. The mathematical study of quantum mechanics, quantum field theory, and quantum statistical mechanics has motivated results in operator algebras. The attempt to construct a rigorous mathematical formulation of quantum field theory has also brought about some progress in fields such as representation theory. Prominent mathematical physicists Before Newton There is a tradition of mathematical analysis of nature that goes back to the ancient Greeks; examples include Euclid (Optics), Archimedes (On the Equilibrium of Planes, On Floating Bodies), and Ptolemy (Optics, Harmonics). Later, Islamic and Byzantine scholars built on these works, and these ultimately were reintroduced or became available to the West in the 12th century and during the Renaissance. In the first decade of the 16th century, amateur astronomer Nicolaus Copernicus proposed heliocentrism, and published a treatise on it in 1543. He retained the Ptolemaic idea of epicycles, and merely sought to simplify astronomy by constructing simpler sets of epicyclic orbits. Epicycles consist of circles upon circles. 
According to Aristotelian physics, the circle was the perfect form of motion, and was the intrinsic motion of Aristotle's fifth element—the quintessence or universal essence known in Greek as aether for the English pure air—that was the pure substance beyond the sublunary sphere, and thus was celestial entities' pure composition. The German Johannes Kepler [1571–1630], Tycho Brahe's assistant, modified Copernican orbits to ellipses, formalized in the equations of Kepler's laws of planetary motion. An enthusiastic atomist, Galileo Galilei in his 1623 book The Assayer asserted that the "book of nature is written in mathematics". His 1632 book, about his telescopic observations, supported heliocentrism. Having introduced experimentation, Galileo then refuted geocentric cosmology by refuting Aristotelian physics itself. Galileo's 1638 book Discourse on Two New Sciences established the law of equal free fall as well as the principles of inertial motion, founding the central concepts of what would become today's classical mechanics. By the Galilean law of inertia as well as the principle of Galilean invariance, also called Galilean relativity, for any object experiencing inertia, there is empirical justification for knowing only that it is at relative rest or relative motion—rest or motion with respect to another object. René Descartes famously developed a complete system of heliocentric cosmology anchored on the principle of vortex motion, Cartesian physics, whose widespread acceptance brought the demise of Aristotelian physics. Descartes sought to formalize mathematical reasoning in science, and developed Cartesian coordinates for geometrically plotting locations in 3D space and marking their progressions along the flow of time. An older contemporary of Newton, Christiaan Huygens, was the first to idealize a physical problem by a set of parameters and the first to fully mathematize a mechanistic explanation of unobservable physical phenomena, and for these reasons Huygens is considered the first theoretical physicist and one of the founders of modern mathematical physics. Descartes, Newtonian physics and post-Newtonian Descartes sought to formalize mathematical reasoning in science, and developed Cartesian coordinates for geometrically plotting locations in 3D space and marking their progressions along the flow of time. Before Descartes, geometry and the description of space followed the constructive model of the ancient Greek mathematicians. In that sense, geometrical shapes formed the building blocks for describing and thinking about space, with time being a separate entity. Descartes introduced a new way to describe space using algebra, until then a mathematical tool used mostly for commercial transactions. Cartesian coordinates also introduced the idea of time on a par with space, as just another coordinate axis. This essential mathematical framework lies at the base of all modern physics and is used in all the further mathematical frameworks developed in subsequent centuries. In this era, important concepts in calculus such as the fundamental theorem of calculus (proved in 1668 by Scottish mathematician James Gregory) and finding extrema and minima of functions via differentiation using Fermat's theorem (by French mathematician Pierre de Fermat) were already known before Leibniz and Newton. Isaac Newton (1642–1727) developed some concepts in calculus (although Gottfried Wilhelm Leibniz developed similar concepts outside the context of physics) and Newton's method to solve problems in physics. 
He was extremely successful in his application of calculus to the theory of motion. Newton's theory of motion, shown in his Mathematical Principles of Natural Philosophy, published in 1687, modeled three Galilean laws of motion along with Newton's law of universal gravitation on a framework of absolute space—hypothesized by Newton as a physically real entity of Euclidean geometric structure extending infinitely in all directions—while presuming absolute time, supposedly justifying knowledge of absolute motion, the object's motion with respect to absolute space. The principle of Galilean invariance/relativity was merely implicit in Newton's theory of motion. Having ostensibly reduced the Keplerian celestial laws of motion as well as Galilean terrestrial laws of motion to a unifying force, Newton achieved great mathematical rigor, but with theoretical laxity. In the 18th century, the Swiss Daniel Bernoulli (1700–1782) made contributions to fluid dynamics and vibrating strings. The Swiss Leonhard Euler (1707–1783) did special work in variational calculus, dynamics, fluid dynamics, and other areas. Also notable was the Italian-born Frenchman Joseph-Louis Lagrange (1736–1813) for work in analytical mechanics: he formulated Lagrangian mechanics and variational methods. A major contribution to the formulation of analytical dynamics called Hamiltonian dynamics was also made by the Irish physicist, astronomer and mathematician William Rowan Hamilton (1805–1865). Hamiltonian dynamics has played an important role in the formulation of modern theories in physics, including field theory and quantum mechanics. The French mathematical physicist Joseph Fourier (1768–1830) introduced the notion of Fourier series to solve the heat equation, giving rise to a new approach to solving partial differential equations by means of integral transforms. Into the early 19th century, the following mathematicians in France, Germany and England contributed to mathematical physics. The French Pierre-Simon Laplace (1749–1827) made paramount contributions to mathematical astronomy and potential theory. Siméon Denis Poisson (1781–1840) worked in analytical mechanics and potential theory. In Germany, Carl Friedrich Gauss (1777–1855) made key contributions to the theoretical foundations of electricity, magnetism, mechanics, and fluid dynamics. In England, George Green (1793–1841) published An Essay on the Application of Mathematical Analysis to the Theories of Electricity and Magnetism in 1828, which in addition to its significant contributions to mathematics made early progress towards laying down the mathematical foundations of electricity and magnetism. A couple of decades ahead of Newton's publication of a particle theory of light, the Dutch Christiaan Huygens (1629–1695) developed the wave theory of light, published in 1690. By 1804, Thomas Young's double-slit experiment revealed an interference pattern, as though light were a wave, and thus Huygens's wave theory of light, as well as Huygens's inference that light waves were vibrations of the luminiferous aether, was accepted. Jean-Augustin Fresnel modeled hypothetical behavior of the aether. The English physicist Michael Faraday introduced the theoretical concept of a field—not action at a distance. Mid-19th century, the Scottish James Clerk Maxwell (1831–1879) reduced electricity and magnetism to Maxwell's electromagnetic field theory, whittled down by others to the four Maxwell's equations. Initially, optics was found consequent of Maxwell's field. 
Later, radiation and then today's known electromagnetic spectrum were found also consequent of this electromagnetic field. The English physicist Lord Rayleigh [1842–1919] worked on sound. The Irishmen William Rowan Hamilton (1805–1865), George Gabriel Stokes (1819–1903) and Lord Kelvin (1824–1907) produced several major works: Stokes was a leader in optics and fluid dynamics; Kelvin made substantial discoveries in thermodynamics; Hamilton did notable work on analytical mechanics, discovering a new and powerful approach nowadays known as Hamiltonian mechanics. Very relevant contributions to this approach are due to his German colleague mathematician Carl Gustav Jacobi (1804–1851) in particular referring to canonical transformations. The German Hermann von Helmholtz (1821–1894) made substantial contributions in the fields of electromagnetism, waves, fluids, and sound. In the United States, the pioneering work of Josiah Willard Gibbs (1839–1903) became the basis for statistical mechanics. Fundamental theoretical results in this area were achieved by the German Ludwig Boltzmann (1844–1906). Together, these individuals laid the foundations of electromagnetic theory, fluid dynamics, and statistical mechanics. Relativistic By the 1880s, there was a prominent paradox that an observer within Maxwell's electromagnetic field measured it at approximately constant speed, regardless of the observer's speed relative to other objects within the electromagnetic field. Thus, although the observer's speed was continually lost relative to the electromagnetic field, it was preserved relative to other objects in the electromagnetic field. And yet no violation of Galilean invariance within physical interactions among objects was detected. As Maxwell's electromagnetic field was modeled as oscillations of the aether, physicists inferred that motion within the aether resulted in aether drift, shifting the electromagnetic field, explaining the observer's missing speed relative to it. The Galilean transformation had been the mathematical process used to translate the positions in one reference frame to predictions of positions in another reference frame, all plotted on Cartesian coordinates, but this process was replaced by Lorentz transformation, modeled by the Dutch Hendrik Lorentz [1853–1928]. In 1887, experimentalists Michelson and Morley failed to detect aether drift, however. It was hypothesized that motion into the aether prompted aether's shortening, too, as modeled in the Lorentz contraction. It was hypothesized that the aether thus kept Maxwell's electromagnetic field aligned with the principle of Galilean invariance across all inertial frames of reference, while Newton's theory of motion was spared. Austrian theoretical physicist and philosopher Ernst Mach criticized Newton's postulated absolute space. Mathematician Jules-Henri Poincaré (1854–1912) questioned even absolute time. In 1905, Pierre Duhem published a devastating criticism of the foundation of Newton's theory of motion. Also in 1905, Albert Einstein (1879–1955) published his special theory of relativity, newly explaining both the electromagnetic field's invariance and Galilean invariance by discarding all hypotheses concerning aether, including the existence of aether itself. Refuting the framework of Newton's theory—absolute space and absolute time—special relativity refers to relative space and relative time, whereby length contracts and time dilates along the travel pathway of an object. 
Cartesian coordinates used arbitrarily chosen rectilinear axes. Gauss, inspired by Descartes' work, introduced curved geometry, replacing rectilinear axes with curved ones. Gauss also introduced another key tool of modern physics, curvature. Gauss's work was limited to two dimensions. Extending it to three or more dimensions introduced a lot of complexity, with the need for (not yet invented) tensors. It was Riemann who extended curved geometry to N dimensions. In 1908, Einstein's former mathematics professor Hermann Minkowski applied the curved-geometry construction to model 3D space together with the 1D axis of time by treating the temporal axis like a fourth spatial dimension—altogether 4D spacetime—and declared the imminent demise of the separation of space and time. Einstein initially called this "superfluous learnedness", but later used Minkowski spacetime with great elegance in his general theory of relativity, extending invariance to all reference frames—whether perceived as inertial or as accelerated—and credited this to Minkowski, by then deceased. General relativity replaces Cartesian coordinates with Gaussian coordinates, and replaces Newton's claimed empty yet Euclidean space traversed instantly by Newton's vector of hypothetical gravitational force—an instant action at a distance—with a gravitational field. The gravitational field is Minkowski spacetime itself, the 4D topology of Einstein aether modeled on a Lorentzian manifold that "curves" geometrically, according to the Riemann curvature tensor. The concept of Newton's gravity, "two masses attract each other", is replaced by the geometrical argument that mass transforms the curvature of spacetime and that free-falling particles with mass move along a geodesic curve in the spacetime, in the vicinity of either mass or energy (Riemannian geometry already existed before the 1850s, developed by the mathematicians Carl Friedrich Gauss and Bernhard Riemann in search of intrinsic geometry and non-Euclidean geometry). (Under special relativity—a special case of general relativity—even massless energy exerts gravitational effect by its mass equivalence locally "curving" the geometry of the four, unified dimensions of space and time.) Quantum Another revolutionary development of the 20th century was quantum theory, which emerged from the seminal contributions of Max Planck (1856–1947) (on black-body radiation) and Einstein's work on the photoelectric effect. In 1912, the mathematician Henri Poincaré published Sur la théorie des quanta. He introduced the first non-naïve definition of quantization in this paper. The development of early quantum physics was followed by a heuristic framework devised by Arnold Sommerfeld (1868–1951) and Niels Bohr (1885–1962), but this was soon replaced by the quantum mechanics developed by Max Born (1882–1970), Louis de Broglie (1892–1987), Werner Heisenberg (1901–1976), Paul Dirac (1902–1984), Erwin Schrödinger (1887–1961), Satyendra Nath Bose (1894–1974), and Wolfgang Pauli (1900–1958). This revolutionary theoretical framework is based on a probabilistic interpretation of states, and evolution and measurements in terms of self-adjoint operators on an infinite-dimensional vector space. 
That is called Hilbert space (introduced by mathematicians David Hilbert (1862–1943), Erhard Schmidt (1876–1959) and Frigyes Riesz (1880–1956) in search of generalization of Euclidean space and study of integral equations), and rigorously defined within the axiomatic modern version by John von Neumann in his celebrated book Mathematical Foundations of Quantum Mechanics, where he built up a relevant part of modern functional analysis on Hilbert spaces, the spectral theory (introduced by David Hilbert who investigated quadratic forms with infinitely many variables. Many years later, it had been revealed that his spectral theory is associated with the spectrum of the hydrogen atom. He was surprised by this application.) in particular. Paul Dirac used algebraic constructions to produce a relativistic model for the electron, predicting its magnetic moment and the existence of its antiparticle, the positron. List of prominent contributors to mathematical physics in the 20th century Prominent contributors to the 20th century's mathematical physics include (ordered by birth date): William Thomson (Lord Kelvin) (1824–1907) Oliver Heaviside (1850–1925) Jules Henri Poincaré (1854–1912) David Hilbert (1862–1943) Arnold Sommerfeld (1868–1951) Constantin Carathéodory (1873–1950) Albert Einstein (1879–1955) Emmy Noether (1882–1935) Max Born (1882–1970) George David Birkhoff (1884–1944) Hermann Weyl (1885–1955) Satyendra Nath Bose (1894–1974) Louis de Broglie (1892–1987) Norbert Wiener (1894–1964) John Lighton Synge (1897–1995) Mário Schenberg (1914–1990) Wolfgang Pauli (1900–1958) Paul Dirac (1902–1984) Eugene Wigner (1902–1995) Andrey Kolmogorov (1903–1987) Lars Onsager (1903–1976) John von Neumann (1903–1957) Sin-Itiro Tomonaga (1906–1979) Hideki Yukawa (1907–1981) Nikolay Nikolayevich Bogolyubov (1909–1992) Subrahmanyan Chandrasekhar (1910–1995) Mark Kac (1914–1984) Julian Schwinger (1918–1994) Richard Phillips Feynman (1918–1988) Irving Ezra Segal (1918–1998) Ryogo Kubo (1920–1995) Arthur Strong Wightman (1922–2013) Chen-Ning Yang (1922–) Rudolf Haag (1922–2016) Freeman John Dyson (1923–2020) Martin Gutzwiller (1925–2014) Abdus Salam (1926–1996) Jürgen Moser (1928–1999) Michael Francis Atiyah (1929–2019) Joel Louis Lebowitz (1930–) Roger Penrose (1931–) Elliott Hershel Lieb (1932–) Yakir Aharonov (1932–) Sheldon Glashow (1932–) Steven Weinberg (1933–2021) Ludvig Dmitrievich Faddeev (1934–2017) David Ruelle (1935–) Yakov Grigorevich Sinai (1935–) Vladimir Igorevich Arnold (1937–2010) Arthur Michael Jaffe (1937–) Roman Wladimir Jackiw (1939–) Leonard Susskind (1940–) Rodney James Baxter (1940–) Michael Victor Berry (1941–) Giovanni Gallavotti (1941–) Stephen William Hawking (1942–2018) Jerrold Eldon Marsden (1942–2010) Michael C. Reed (1942–) John Michael Kosterlitz (1943–) Israel Michael Sigal (1945–) Alexander Markovich Polyakov (1945–) Barry Simon (1946–) Herbert Spohn (1946–) John Lawrence Cardy (1947–) Giorgio Parisi (1948-) Abhay Ashtekar (1949-) Edward Witten (1951–) F. 
Duncan Haldane (1951–) Ashoke Sen (1956–) Juan Martín Maldacena (1968–) See also International Association of Mathematical Physics Notable publications in mathematical physics List of mathematical physics journals Gauge theory (mathematics) Relationship between mathematics and physics Theoretical, computational and philosophical physics
Boussinesq approximation (buoyancy)
In fluid dynamics, the Boussinesq approximation (named for Joseph Valentin Boussinesq) is used in the field of buoyancy-driven flow (also known as natural convection). It ignores density differences except where they appear in terms multiplied by , the acceleration due to gravity. The essence of the Boussinesq approximation is that the difference in inertia is negligible but gravity is sufficiently strong to make the specific weight appreciably different between the two fluids. The existence of sound waves in a Boussinesq fluid is not possible, as sound is the result of density fluctuations within a fluid. Boussinesq flows are common in nature (such as atmospheric fronts, oceanic circulation, katabatic winds), industry (dense gas dispersion, fume cupboard ventilation), and the built environment (natural ventilation, central heating). The approximation can be used to simplify the equations describing such flows, whilst still describing the flow behaviour to a high degree of accuracy. Formulation The Boussinesq approximation is applied to problems where the fluid varies in temperature (or composition) from one place to another, driving a flow of fluid and heat transfer (or mass transfer). The fluid satisfies conservation of mass, conservation of momentum and conservation of energy. In the Boussinesq approximation, variations in fluid properties other than density are ignored, and density only appears when it is multiplied by , the gravitational acceleration. If is the local velocity of a parcel of fluid, the continuity equation for conservation of mass is If density variations are ignored, this reduces to The general expression for conservation of momentum of an incompressible, Newtonian fluid (the Navier–Stokes equations) is where (nu) is the kinematic viscosity and is the sum of any body forces such as gravity. In this equation, density variations are assumed to have a fixed part and another part that has a linear dependence on temperature: where is the coefficient of thermal expansion. The Boussinesq approximation states that the density variation is only important in the buoyancy term. If is the gravitational body force, the resulting conservation equation is In the equation for heat flow in a temperature gradient, the heat capacity per unit volume, , is assumed constant and the dissipation term is ignored. The resulting equation is where is the rate per unit volume of internal heat production and is the thermal conductivity. The three numbered equations are the basic convection equations in the Boussinesq approximation (a sketch of their standard forms is given at the end of this article). Advantages The advantage of the approximation arises because when considering a flow of, say, warm and cold water of density and one needs only to consider a single density : the difference is negligible. Dimensional analysis shows that, under these circumstances, the only sensible way that acceleration due to gravity should enter into the equations of motion is in the reduced gravity where (Note that the denominator may be either density without affecting the result because the change would be of order .) The most generally used dimensionless numbers are the Richardson number and the Rayleigh number. The mathematics of the flow is therefore simpler because the density ratio , a dimensionless number, does not affect the flow; the Boussinesq approximation states that it may be assumed to be exactly one. Inversions One feature of Boussinesq flows is that they look the same when viewed upside-down, provided that the identities of the fluids are reversed. 
The Boussinesq approximation is inaccurate when the dimensionless density difference is approximately 1, i.e. . For example, consider an open window in a warm room. The warm air inside is less dense than the cold air outside, which flows into the room and down towards the floor. Now imagine the opposite: a cold room exposed to warm outside air. Here the air flowing in moves up toward the ceiling. If the flow is Boussinesq (and the room is otherwise symmetrical), then viewing the cold room upside down is exactly the same as viewing the warm room right-way-round. This is because the only way density enters the problem is via the reduced gravity which undergoes only a sign change when changing from the warm room flow to the cold room flow. An example of a non-Boussinesq flow is bubbles rising in water. The behaviour of air bubbles rising in water is very different from the behaviour of water falling in air: in the former case rising bubbles tend to form hemispherical shells, while water falling in air splits into raindrops (at small length scales surface tension enters the problem and confuses the issue).
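The displayed equations in the Formulation section above are not preserved; as a hedged sketch in standard notation (ρ0 a reference density, α the thermal expansion coefficient, T0 a reference temperature, C the heat capacity per unit volume, k the thermal conductivity, J the internal heating rate per unit volume; all assumed symbols), the Boussinesq set and the reduced gravity read approximately:

    \nabla \cdot \mathbf{u} = 0
    \rho_0\left(\frac{\partial \mathbf{u}}{\partial t} + \mathbf{u} \cdot \nabla \mathbf{u}\right) = -\nabla p + \mu\,\nabla^{2}\mathbf{u} + \rho\,\mathbf{g}, \qquad \rho = \rho_0\left[1 - \alpha\,(T - T_0)\right]
    C\left(\frac{\partial T}{\partial t} + \mathbf{u} \cdot \nabla T\right) = k\,\nabla^{2} T + J
    g' = g\,\frac{\rho_1 - \rho_2}{\rho_1}

A small Python check of when the approximation is reasonable (the densities are illustrative values, not numbers from the article):

    g = 9.81  # m/s^2

    cases = {
        "cold vs warm water": (1000.0, 997.0),   # kg/m^3, roughly 5 C vs 25 C (assumed)
        "water vs air bubble": (1000.0, 1.2),    # kg/m^3
    }

    for name, (rho1, rho2) in cases.items():
        eps = (rho1 - rho2) / rho1      # dimensionless density difference
        g_reduced = g * eps             # reduced gravity
        verdict = "Boussinesq reasonable" if eps < 0.1 else "Boussinesq questionable"
        print(f"{name}: eps = {eps:.4f}, g' = {g_reduced:.3f} m/s^2 -> {verdict}")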
Non-contact force
A non-contact force is a force which acts on an object without coming physically in contact with it. The most familiar non-contact force is gravity, which confers weight. In contrast, a contact force is a force which acts on an object coming physically in contact with it. All four known fundamental interactions are non-contact forces: Gravity, the force of attraction that exists among all bodies that have mass. The force exerted on each body by the other through weight is proportional to the mass of the first body times the mass of the second body divided by the square of the distance between them (a numerical sketch follows the list below). Electromagnetism is the force that causes the interaction between electrically charged particles; the areas in which this happens are called electromagnetic fields. Examples of this force include: electricity, magnetism, radio waves, microwaves, infrared, visible light, X-rays and gamma rays. Electromagnetism mediates all chemical, biological, electrical and electronic processes. Strong nuclear force: Unlike gravity and electromagnetism, the strong nuclear force is a short distance force that takes place between fundamental particles within a nucleus. It is charge independent and acts equally between a proton and a proton, a neutron and a neutron, and a proton and a neutron. The strong nuclear force is the strongest force in nature; however, its range is small (acting only over distances of the order of 10⁻¹⁵ m). The strong nuclear force mediates both nuclear fission and fusion reactions. Weak nuclear force: The weak nuclear force mediates the β decay of a neutron, in which the neutron decays into a proton and in the process emits a β particle and an uncharged particle called a neutrino. As a result of mediating the β decay process, the weak nuclear force plays a key role in supernovas. Both the strong and weak forces form an important part of quantum mechanics. The Casimir effect could also be thought of as a non-contact force. See also Tension Body force Surface force Action at a distance
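As referenced in the list above, a minimal numerical sketch comparing the two long-range non-contact forces between a proton and an electron; the constants are standard values quoted to a few digits, and the separation is an assumed, illustrative one:

    # Compare Newtonian gravity with the electrostatic (Coulomb) force.
    G   = 6.674e-11    # m^3 kg^-1 s^-2, gravitational constant
    k_e = 8.988e9      # N m^2 C^-2, Coulomb constant
    m_p = 1.673e-27    # kg, proton mass
    m_e = 9.109e-31    # kg, electron mass
    q   = 1.602e-19    # C, elementary charge
    r   = 5.29e-11     # m, assumed separation (about one Bohr radius)

    F_gravity = G * m_p * m_e / r**2    # F = G m1 m2 / r^2
    F_coulomb = k_e * q * q / r**2      # F = k q1 q2 / r^2

    print(f"gravity:       {F_gravity:.3e} N")
    print(f"electrostatic: {F_coulomb:.3e} N")
    print(f"ratio (electromagnetic / gravitational): {F_coulomb / F_gravity:.3e}")

The ratio, of order 10^39, illustrates why gravity is only noticeable for bodies of very large mass even though both forces follow an inverse-square law.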
James Prescott Joule
James Prescott Joule (24 December 1818 – 11 October 1889) was an English physicist, mathematician and brewer, born in Salford, Lancashire. Joule studied the nature of heat, and discovered its relationship to mechanical work. This led to the law of conservation of energy, which in turn led to the development of the first law of thermodynamics. The SI derived unit of energy, the joule, is named after him. He worked with Lord Kelvin to develop an absolute thermodynamic temperature scale, which came to be called the Kelvin scale. Joule also made observations of magnetostriction, and he found the relationship between the current through a resistor and the heat dissipated, which is also called Joule's first law. His experiments about energy transformations were first published in 1843. Early years James Joule was born in 1818, the son of Benjamin Joule (1784–1858), a wealthy brewer, and his wife, Alice Prescott, on New Bailey Street in Salford. Joule was tutored as a young man by the famous scientist John Dalton and was strongly influenced by chemist William Henry and Manchester engineers Peter Ewart and Eaton Hodgkinson. He was fascinated by electricity, and he and his brother experimented by giving electric shocks to each other and to the family's servants. As an adult, Joule managed the brewery. Science was merely a serious hobby. Sometime around 1840, he started to investigate the feasibility of replacing the brewery's steam engines with the newly invented electric motor. His first scientific papers on the subject were contributed to William Sturgeon's Annals of Electricity. Joule was a member of the London Electrical Society, established by Sturgeon and others. Motivated in part by a businessman's desire to quantify the economics of the choice, and in part by his scientific inquisitiveness, he set out to determine which prime mover was more efficient. He discovered Joule's first law in 1841, that "the heat which is evolved by the proper action of any voltaic current is proportional to the square of the intensity of that current, multiplied by the resistance to conduction which it experiences". He went on to realize that burning a pound of coal in a steam engine was more economical than a costly pound of zinc consumed in an electric battery. Joule captured the output of the alternative methods in terms of a common standard, the ability to raise a mass weighing one pound to a height of one foot, the foot-pound. However, Joule's interest diverted from the narrow financial question to that of how much work could be extracted from a given source, leading him to speculate about the convertibility of energy. In 1843 he published results of experiments showing that the heating effect he had quantified in 1841 was due to generation of heat in the conductor and not its transfer from another part of the equipment. This was a direct challenge to the caloric theory, which held that heat could neither be created nor destroyed. Caloric theory had dominated thinking in the science of heat since it was introduced by Antoine Lavoisier in 1783. Lavoisier's prestige and the practical success of Sadi Carnot's caloric theory of the heat engine since 1824 ensured that the young Joule, working outside either academia or the engineering profession, had a difficult road ahead. Supporters of the caloric theory readily pointed to the symmetry of the Peltier–Seebeck effect to claim that heat and current were convertible in an, at least approximately, reversible process. 
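Joule's first law as quoted above states that the heat evolved in a conductor is proportional to the square of the current times the resistance, i.e. Q = I^2 R t for a steady current flowing for a time t. A minimal numerical sketch (the current, resistance and duration are illustrative assumptions, not Joule's data):

    # Joule heating: Q = I^2 * R * t
    I = 2.0      # A, assumed current
    R = 5.0      # ohm, assumed resistance
    t = 60.0     # s, duration of the experiment

    Q = I**2 * R * t             # heat dissipated, in joules
    calories = Q / 4.1868        # converted with the mechanical equivalent of heat

    print(f"Heat dissipated: {Q:.1f} J ({calories:.1f} cal)")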
The mechanical equivalent of heat Further experiments and measurements with his electric motor led Joule to estimate the mechanical equivalent of heat as 4.1868 joules per calorie of work to raise the temperature of one gram of water by one kelvin. He announced his results at a meeting of the chemical section of the British Association for the Advancement of Science in Cork in August 1843 and was met by silence. Joule was undaunted and started to seek a purely mechanical demonstration of the conversion of work into heat. By forcing water through a perforated cylinder, he could measure the slight viscous heating of the fluid. He obtained a mechanical equivalent of . The fact that the values obtained both by electrical and purely mechanical means were in agreement to at least two significant digits was, to Joule, compelling evidence of the reality of the convertibility of work into heat. Joule now tried a third route. He measured the heat generated against the work done in compressing a gas. He obtained a mechanical equivalent of . In many ways, this experiment offered the easiest target for Joule's critics but Joule disposed of the anticipated objections by clever experimentation. Joule read his paper to the Royal Society on 20 June 1844, but his paper was rejected for publication by the Royal Society and he had to be content with publishing in the Philosophical Magazine in 1845. In the paper he was forthright in his rejection of the caloric reasoning of Carnot and Émile Clapeyron, a rejection partly theologically driven: Joule here adopts the language of vis viva (energy), possibly because Hodgkinson had read a review of Ewart's On the measure of moving force to the Literary and Philosophical Society in April 1844. In June 1845, Joule read his paper On the Mechanical Equivalent of Heat to the British Association meeting in Cambridge. In this work, he reported his best-known experiment, involving the use of a falling weight, in which gravity does the mechanical work, to spin a paddle wheel in an insulated barrel of water which increased the temperature. He now estimated a mechanical equivalent of . He wrote a letter to the Philosophical Magazine, published in September 1845 describing his experiment. In 1850, Joule published a refined measurement of , closer to twentieth century estimates. Reception and priority Much of the initial resistance to Joule's work stemmed from its dependence upon extremely precise measurements. He claimed to be able to measure temperatures to within of a degree Fahrenheit (3 mK). Such precision was certainly uncommon in contemporary experimental physics but his doubters may have neglected his experience in the art of brewing and his access to its practical technologies. He was also ably supported by scientific instrument-maker John Benjamin Dancer. Joule's experiments complemented the theoretical work of Rudolf Clausius, who is considered by some to be the coinventor of the energy concept. Joule was proposing a kinetic theory of heat (he believed it to be a form of rotational, rather than translational, kinetic energy), and this required a conceptual leap: if heat was a form of molecular motion, why did the motion of the molecules not gradually die out? Joule's ideas required one to believe that the collisions of molecules were perfectly elastic. Importantly, the very existence of atoms and molecules was not widely accepted for another 50 years. 
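Using the figure of 4.1868 joules per calorie quoted above, the energy bookkeeping behind the paddle-wheel experiment can be sketched as follows. The weight, drop height and quantity of water are assumed example values, not the dimensions of Joule's actual apparatus, and the sketch idealises away all losses.

```python
# Sketch of the energy bookkeeping behind the paddle-wheel experiment:
# the work done by a falling weight (m * g * h) is turned into heat, and
# 4.1868 J of work warms one gram of water by one kelvin.

MECHANICAL_EQUIVALENT = 4.1868  # joules per calorie (one gram of water, one kelvin)
G = 9.81                        # gravitational acceleration, m/s^2


def paddle_wheel_temperature_rise(weight_kg: float, drop_m: float,
                                  water_g: float) -> float:
    """Return the idealised temperature rise of the water in kelvin,
    assuming all of the mechanical work ends up as heat in the water."""
    work_joules = weight_kg * G * drop_m
    calories = work_joules / MECHANICAL_EQUIVALENT
    return calories / water_g


if __name__ == "__main__":
    # Assumed example: a 10 kg weight falling 2 m into 1000 g of water.
    print(paddle_wheel_temperature_rise(10.0, 2.0, 1000.0))  # ~0.047 K
```

The tiny temperature rise in this example also hints at why the precision of Joule's thermometry was so central to the reception of his results.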
Although it may be hard today to understand the allure of the caloric theory, at the time it seemed to have some clear advantages. Carnot's successful theory of heat engines had also been based on the caloric assumption, and only later was it proved by Lord Kelvin that Carnot's mathematics were equally valid without assuming a caloric fluid. However, in Germany, Hermann Helmholtz became aware both of Joule's work and the similar 1842 work of Julius Robert von Mayer. Though both men had been neglected since their respective publications, Helmholtz's definitive 1847 declaration of the conservation of energy credited them both. Also in 1847, another of Joule's presentations at the British Association in Oxford was attended by George Gabriel Stokes, Michael Faraday, and the precocious and maverick William Thomson, later to become Lord Kelvin, who had just been appointed professor of natural philosophy at the University of Glasgow. Stokes was "inclined to be a Joulite" and Faraday was "much struck with it" though he harboured doubts. Thomson was intrigued but sceptical. Unanticipated, Thomson and Joule met later that year in Chamonix. Joule married Amelia Grimes on 18 August and the couple went on honeymoon. Marital enthusiasm notwithstanding, Joule and Thomson arranged to attempt an experiment a few days later to measure the temperature difference between the top and bottom of the Cascade de Sallanches waterfall, though this subsequently proved impractical. Though Thomson felt that Joule's results demanded theoretical explanation, he retreated into a spirited defence of the Carnot–Clapeyron school. In his 1848 account of absolute temperature, Thomson wrote that "the conversion of heat (or caloric) into mechanical effect is probably impossible, certainly undiscovered" – but a footnote signalled his first doubts about the caloric theory, referring to Joule's "very remarkable discoveries". Surprisingly, Thomson did not send Joule a copy of his paper but when Joule eventually read it he wrote to Thomson on 6 October, claiming that his studies had demonstrated conversion of heat into work but that he was planning further experiments. Thomson replied on the 27th, revealing that he was planning his own experiments and hoping for a reconciliation of their two views. Though Thomson conducted no new experiments, over the next two years he became increasingly dissatisfied with Carnot's theory and convinced of Joule's. In his 1851 paper, Thomson was willing to go no further than a compromise and declared "the whole theory of the motive power of heat is founded on two propositions, due respectively to Joule, and to Carnot and Clausius". As soon as Joule read the paper he wrote to Thomson with his comments and questions. Thus began a fruitful, though largely epistolary, collaboration between the two men, Joule conducting experiments, Thomson analysing the results and suggesting further experiments. The collaboration lasted from 1852 to 1856, its discoveries including the Joule–Thomson effect, and the published results did much to bring about general acceptance of Joule's work and the kinetic theory. Kinetic theory Kinetics is the science of motion. Joule was a pupil of Dalton and it is no surprise that he had learned a firm belief in the atomic theory, even though there were many scientists of his time who were still skeptical. He had also been one of the few people receptive to the neglected work of John Herapath on the kinetic theory of gases. 
He was further profoundly influenced by Peter Ewart's 1813 paper "On the measure of moving force". Joule perceived the relationship between his discoveries and the kinetic theory of heat. His laboratory notebooks reveal that he believed heat to be a form of rotational, rather than translational motion. Joule could not resist finding antecedents of his views in Francis Bacon, Sir Isaac Newton, John Locke, Benjamin Thompson (Count Rumford) and Sir Humphry Davy. Though such views are justified, Joule went on to estimate a value for the mechanical equivalent of heat of 1,034 foot-pound from Rumford's publications. Some modern writers have criticised this approach on the grounds that Rumford's experiments in no way represented systematic quantitative measurements. In one of his personal notes, Joule contends that Mayer's measurement was no more accurate than Rumford's, perhaps in the hope that Mayer had not anticipated his own work. Joule has been attributed with explaining the sunset green flash phenomenon in a letter to the Manchester Literary and Philosophical Society in 1869; actually, he merely noted (with a sketch) the last glimpse as bluish green, without attempting to explain the cause of the phenomenon. Published work Read before the British Association at Cambridge, June 1845. Honours Joule died at home in Sale and is buried in Brooklands cemetery there. His gravestone is inscribed with the number "772.55", his climacteric 1878 measurement of the mechanical equivalent of heat, in which he found that this amount of foot-pounds of work must be expended at sea level to raise the temperature of one pound of water from to . There is also a quotation from the Gospel of John: "I must work the work of him that sent me, while it is day: the night cometh, when no man can work". The Wetherspoon's pub in Sale, the town of his death, is named "The J. P. Joule" after him. Joule's many honours and commendations include: Fellow of the Royal Society (1850) Royal Medal (1852), 'For his paper on the mechanical equivalent of heat, printed in the Philosophical Transactions for 1850' Copley Medal (1870), 'For his experimental researches on the dynamical theory of heat' President of Manchester Literary and Philosophical Society (1860) President of the British Association for the Advancement of Science (1872, 1887) Honorary Membership of the Institution of Engineers and Shipbuilders in Scotland (1857) Honorary degrees: LL.D., Trinity College, Dublin (1857) DCL, University of Oxford (1860) LL.D., University of Edinburgh (1871) Joule received a civil list pension of £200 per annum in 1878 for services to science Albert Medal of the Royal Society of Arts (1880), 'for having established, after most laborious research, the true relation between heat, electricity and mechanical work, thus affording to the engineer a sure guide in the application of science to industrial pursuits' There is a memorial to Joule in the north choir aisle of Westminster Abbey, though he is not buried there, contrary to what some biographies state. A statue of Joule by Alfred Gilbert stands in Manchester Town Hall, opposite that of Dalton. Family Joule married Amelia Grimes in 1847. She died in 1854, seven years after their wedding. They had three children together: a son, Benjamin Arthur Joule (1850–1922), a daughter, Alice Amelia (1852–1899), and a second son, Joe (born 1854, died three weeks later). 
See also Latent heat Sensible heat Internal energy References Footnotes Citations Sources Further reading Fox, R, "James Prescott Joule, 1818–1889", in External links The scientific papers of James Prescott Joule (1884) – annotated by Joule The joint scientific papers of James Prescott Joule (1887) – annotated by Joule Classic papers of 1845 and 1847 at ChemTeam website On the Mechanical Equivalent of Heat and On the Existence of an Equivalent Relation between Heat and the ordinary Forms of Mechanical Power Joule's water friction apparatus at London Science Museum Some Remarks on Heat and the Constitution of Elastic Fluids, Joule's 1851 estimate of the speed of a gas molecule Joule Manuscripts at the University of Manchester Library. University of Manchester material on Joule – includes photographs of Joule's house and gravesite Joule Physics Laboratory at the University of Salford 1818 births 1889 deaths 19th-century British physicists English physicists Fellows of the American Academy of Arts and Sciences Fellows of the Royal Society Foreign associates of the National Academy of Sciences Fluid dynamicists History of Greater Manchester People associated with electricity People associated with energy Scientists from Salford Recipients of the Copley Medal Royal Medal winners Thermodynamicists Manchester Literary and Philosophical Society
Velocity
Velocity is the speed in combination with the direction of motion of an object. Velocity is a fundamental concept in kinematics, the branch of classical mechanics that describes the motion of bodies. Velocity is a physical vector quantity: both magnitude and direction are needed to define it. The scalar absolute value (magnitude) of velocity is called , being a coherent derived unit whose quantity is measured in the SI (metric system) as metres per second (m/s or m⋅s−1). For example, "5 metres per second" is a scalar, whereas "5 metres per second east" is a vector. If there is a change in speed, direction or both, then the object is said to be undergoing an acceleration. Definition Average velocity The average velocity of an object over a period of time is its change in position, , divided by the duration of the period, , given mathematically as Instantaneous velocity The instantaneous velocity of an object is the limit average velocity as the time interval approaches zero. At any particular time , it can be calculated as the derivative of the position with respect to time: From this derivative equation, in the one-dimensional case it can be seen that the area under a velocity vs. time ( vs. graph) is the displacement, . In calculus terms, the integral of the velocity function is the displacement function . In the figure, this corresponds to the yellow area under the curve. Although the concept of an instantaneous velocity might at first seem counter-intuitive, it may be thought of as the velocity that the object would continue to travel at if it stopped accelerating at that moment. Difference between speed and velocity While the terms speed and velocity are often colloquially used interchangeably to connote how fast an object is moving, in scientific terms they are different. Speed, the scalar magnitude of a velocity vector, denotes only how fast an object is moving, while velocity indicates both an object's speed and direction. To have a constant velocity, an object must have a constant speed in a constant direction. Constant direction constrains the object to motion in a straight path thus, a constant velocity means motion in a straight line at a constant speed. For example, a car moving at a constant 20 kilometres per hour in a circular path has a constant speed, but does not have a constant velocity because its direction changes. Hence, the car is considered to be undergoing an acceleration. Units Since the derivative of the position with respect to time gives the change in position (in metres) divided by the change in time (in seconds), velocity is measured in metres per second (m/s). Equation of motion Average velocity Velocity is defined as the rate of change of position with respect to time, which may also be referred to as the instantaneous velocity to emphasize the distinction from the average velocity. In some applications the average velocity of an object might be needed, that is to say, the constant velocity that would provide the same resultant displacement as a variable velocity in the same time interval, , over some time period . Average velocity can be calculated as: The average velocity is always less than or equal to the average speed of an object. This can be seen by realizing that while distance is always strictly increasing, displacement can increase or decrease in magnitude as well as change direction. In terms of a displacement-time ( vs. 
) graph, the instantaneous velocity (or, simply, velocity) can be thought of as the slope of the tangent line to the curve at any point, and the average velocity as the slope of the secant line between two points with coordinates equal to the boundaries of the time period for the average velocity. Special cases When a particle moves with different uniform speeds v1, v2, v3, ..., vn in different time intervals t1, t2, t3, ..., tn respectively, then average speed over the total time of journey is given as If , then average speed is given by the arithmetic mean of the speeds When a particle moves different distances s1, s2, s3,..., sn with speeds v1, v2, v3,..., vn respectively, then the average speed of the particle over the total distance is given as If , then average speed is given by the harmonic mean of the speeds Relationship to acceleration Although velocity is defined as the rate of change of position, it is often common to start with an expression for an object's acceleration. As seen by the three green tangent lines in the figure, an object's instantaneous acceleration at a point in time is the slope of the line tangent to the curve of a graph at that point. In other words, instantaneous acceleration is defined as the derivative of velocity with respect to time: From there, velocity is expressed as the area under an acceleration vs. time graph. As above, this is done using the concept of the integral: Constant acceleration In the special case of constant acceleration, velocity can be studied using the suvat equations. By considering a as being equal to some arbitrary constant vector, this shows with as the velocity at time and as the velocity at time . By combining this equation with the suvat equation , it is possible to relate the displacement and the average velocity by It is also possible to derive an expression for the velocity independent of time, known as the Torricelli equation, as follows: where etc. The above equations are valid for both Newtonian mechanics and special relativity. Where Newtonian mechanics and special relativity differ is in how different observers would describe the same situation. In particular, in Newtonian mechanics, all observers agree on the value of t and the transformation rules for position create a situation in which all non-accelerating observers would describe the acceleration of an object with the same values. Neither is true for special relativity. In other words, only relative velocity can be calculated. Quantities that are dependent on velocity Momentum In classical mechanics, Newton's second law defines momentum, p, as a vector that is the product of an object's mass and velocity, given mathematically aswhere m is the mass of the object. Kinetic energy The kinetic energy of a moving object is dependent on its velocity and is given by the equationwhere Ek is the kinetic energy. Kinetic energy is a scalar quantity as it depends on the square of the velocity. Drag (fluid resistance) In fluid dynamics, drag is a force acting opposite to the relative motion of any object moving with respect to a surrounding fluid. The drag force, , is dependent on the square of velocity and is given aswhere is the density of the fluid, is the speed of the object relative to the fluid, is the cross sectional area, and is the drag coefficient – a dimensionless number. Escape velocity Escape velocity is the minimum speed a ballistic object needs to escape from a massive body such as Earth. 
It represents the kinetic energy that, when added to the object's gravitational potential energy (which is always negative), is equal to zero. The general formula for the escape velocity of an object at a distance r from the center of a planet with mass M iswhere G is the gravitational constant and g is the gravitational acceleration. The escape velocity from Earth's surface is about 11 200 m/s, and is irrespective of the direction of the object. This makes "escape velocity" somewhat of a misnomer, as the more correct term would be "escape speed": any object attaining a velocity of that magnitude, irrespective of atmosphere, will leave the vicinity of the base body as long as it does not intersect with something in its path. The Lorentz factor of special relativity In special relativity, the dimensionless Lorentz factor appears frequently, and is given bywhere γ is the Lorentz factor and c is the speed of light. Relative velocity Relative velocity is a measurement of velocity between two objects as determined in a single coordinate system. Relative velocity is fundamental in both classical and modern physics, since many systems in physics deal with the relative motion of two or more particles. Consider an object A moving with velocity vector v and an object B with velocity vector w; these absolute velocities are typically expressed in the same inertial reference frame. Then, the velocity of object A object B is defined as the difference of the two velocity vectors: Similarly, the relative velocity of object B moving with velocity w, relative to object A moving with velocity v is: Usually, the inertial frame chosen is that in which the latter of the two mentioned objects is in rest. In Newtonian mechanics, the relative velocity is independent of the chosen inertial reference frame. This is not the case anymore with special relativity in which velocities depend on the choice of reference frame. Scalar velocities In the one-dimensional case, the velocities are scalars and the equation is either: if the two objects are moving in opposite directions, or: if the two objects are moving in the same direction. Coordinate systems Cartesian coordinates In multi-dimensional Cartesian coordinate systems, velocity is broken up into components that correspond with each dimensional axis of the coordinate system. In a two-dimensional system, where there is an x-axis and a y-axis, corresponding velocity components are defined as The two-dimensional velocity vector is then defined as . The magnitude of this vector represents speed and is found by the distance formula as In three-dimensional systems where there is an additional z-axis, the corresponding velocity component is defined as The three-dimensional velocity vector is defined as with its magnitude also representing speed and being determined by While some textbooks use subscript notation to define Cartesian components of velocity, others use , , and for the -, -, and -axes respectively. Polar coordinates In polar coordinates, a two-dimensional velocity is described by a radial velocity, defined as the component of velocity away from or toward the origin, and a transverse velocity, perpendicular to the radial one. Both arise from angular velocity, which is the rate of rotation about the origin (with positive quantities representing counter-clockwise rotation and negative quantities representing clockwise rotation, in a right-handed coordinate system). 
The radial and traverse velocities can be derived from the Cartesian velocity and displacement vectors by decomposing the velocity vector into radial and transverse components. The transverse velocity is the component of velocity along a circle centered at the origin. where is the transverse velocity is the radial velocity. The radial speed (or magnitude of the radial velocity) is the dot product of the velocity vector and the unit vector in the radial direction. where is position and is the radial direction. The transverse speed (or magnitude of the transverse velocity) is the magnitude of the cross product of the unit vector in the radial direction and the velocity vector. It is also the dot product of velocity and transverse direction, or the product of the angular speed and the radius (the magnitude of the position). such that Angular momentum in scalar form is the mass times the distance to the origin times the transverse velocity, or equivalently, the mass times the distance squared times the angular speed. The sign convention for angular momentum is the same as that for angular velocity. where is mass The expression is known as moment of inertia. If forces are in the radial direction only with an inverse square dependence, as in the case of a gravitational orbit, angular momentum is constant, and transverse speed is inversely proportional to the distance, angular speed is inversely proportional to the distance squared, and the rate at which area is swept out is constant. These relations are known as Kepler's laws of planetary motion. See also Notes Robert Resnick and Jearl Walker, Fundamentals of Physics, Wiley; 7 Sub edition (June 16, 2004). . References External links Velocity and Acceleration Introduction to Mechanisms (Carnegie Mellon University) Motion (physics) Kinematics Temporal rates
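Two of the quantities discussed above can be illustrated with a short Python sketch: the escape speed, written here in the standard form √(2GM/r), and the decomposition of a planar velocity into radial and transverse components using the dot and cross products described above. The planetary mass, radius and the example position and velocity vectors are assumed values for illustration.

```python
# Two velocity-related quantities from the article, sketched numerically.
import math

G = 6.674e-11  # gravitational constant, N m^2 / kg^2


def escape_velocity(mass_kg: float, r_m: float) -> float:
    """Escape speed at distance r from a body of mass M: sqrt(2 G M / r)."""
    return math.sqrt(2.0 * G * mass_kg / r_m)


def radial_transverse(position, velocity):
    """Decompose a 2-D velocity into radial and transverse components.

    Radial speed is the dot product of velocity with the radial unit vector;
    transverse speed is the component perpendicular to it (signed, with
    positive meaning counter-clockwise motion about the origin).
    """
    x, y = position
    vx, vy = velocity
    r = math.hypot(x, y)
    v_radial = (x * vx + y * vy) / r      # dot(v, r_hat)
    v_transverse = (x * vy - y * vx) / r  # cross(r_hat, v), signed
    return v_radial, v_transverse


if __name__ == "__main__":
    # Assumed Earth-like values: M = 5.972e24 kg, surface radius 6.371e6 m.
    print(escape_velocity(5.972e24, 6.371e6))          # ~11,190 m/s
    # Assumed example state: position (3, 4) m, velocity (1, 2) m/s.
    print(radial_transverse((3.0, 4.0), (1.0, 2.0)))   # (2.2, 0.4)
```

With these Earth-like values the escape speed comes out close to the 11 200 m/s quoted above, and the squares of the radial and transverse components sum to the squared speed, as expected for an orthogonal decomposition.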
A Short History of Nearly Everything
A Short History of Nearly Everything by American-British author Bill Bryson is a popular science book that explains some areas of science, using easily accessible language that appeals more to the general public than many other books dedicated to the subject. It was one of the bestselling popular science books of 2005 in the United Kingdom, selling over 300,000 copies. A Short History deviates from Bryson's popular travel book genre, instead describing general sciences such as chemistry, paleontology, astronomy, and particle physics. In it, he explores time from the Big Bang to the discovery of quantum mechanics, via evolution and geology. Background Bill Bryson wrote this book because he was dissatisfied with his scientific knowledge—that was, not much at all. He writes that science was a distant, unexplained subject at school. Textbooks and teachers alike did not ignite the passion for knowledge in him, mainly because they never delved into the whys, hows, and whens. Contents Bryson describes graphically and in layperson's terms the size of the universe and that of atoms and subatomic particles. He then explores the history of geology and biology and traces life from its first appearance to today's modern humans, emphasizing the development of the modern Homo sapiens. Furthermore, he discusses the possibility of the Earth being struck by a meteorite and reflects on human capabilities of spotting a meteor before it impacts the Earth, and the extensive damage that such an event would cause. He also describes some of the most recent destructive disasters of volcanic origin in the history of our planet, including Krakatoa and Yellowstone National Park. A large part of the book is devoted to relating humorous stories about the scientists behind the research and discoveries and their sometimes eccentric behaviours. Bryson also speaks about modern scientific views on human effects on the Earth's climate and livelihood of other species, and the magnitude of natural disasters such as earthquakes, volcanoes, tsunamis, hurricanes, and the mass extinctions caused by some of these events. An illustrated edition of the book was released in November 2005. A few editions in audiobook form are also available, including an abridged version read by the author, and at least three unabridged versions. Awards and reviews The book received generally favourable reviews, with reviewers citing the book as informative, well-written, and entertaining. In 2004, this book won Bryson The Aventis Prizes for Science Books for best general science book. Bryson later donated the GBP£10,000 prize to the Great Ormond Street Hospital children's charity. In 2005, the book won the EU Descartes Prize for science communication. It was shortlisted for the Samuel Johnson Prize for the same year. See also Big History References External links Bill Bryson – A short history of nearly everything presentation at the Royal Society Interview with Mariella Frostrup (BBC Radio 4) A list of errata in A Short History of Nearly Everything 2003 non-fiction books Books by Bill Bryson Books about the history of science
Geomorphology
Geomorphology (from Ancient Greek: , , 'earth'; , , 'form'; and , , 'study') is the scientific study of the origin and evolution of topographic and bathymetric features generated by physical, chemical or biological processes operating at or near Earth's surface. Geomorphologists seek to understand why landscapes look the way they do, to understand landform and terrain history and dynamics and to predict changes through a combination of field observations, physical experiments and numerical modeling. Geomorphologists work within disciplines such as physical geography, geology, geodesy, engineering geology, archaeology, climatology, and geotechnical engineering. This broad base of interests contributes to many research styles and interests within the field. Overview Earth's surface is modified by a combination of surface processes that shape landscapes, and geologic processes that cause tectonic uplift and subsidence, and shape the coastal geography. Surface processes comprise the action of water, wind, ice, wildfire, and life on the surface of the Earth, along with chemical reactions that form soils and alter material properties, the stability and rate of change of topography under the force of gravity, and other factors, such as (in the very recent past) human alteration of the landscape. Many of these factors are strongly mediated by climate. Geologic processes include the uplift of mountain ranges, the growth of volcanoes, isostatic changes in land surface elevation (sometimes in response to surface processes), and the formation of deep sedimentary basins where the surface of the Earth drops and is filled with material eroded from other parts of the landscape. The Earth's surface and its topography therefore are an intersection of climatic, hydrologic, and biologic action with geologic processes, or alternatively stated, the intersection of the Earth's lithosphere with its hydrosphere, atmosphere, and biosphere. The broad-scale topographies of the Earth illustrate this intersection of surface and subsurface action. Mountain belts are uplifted due to geologic processes. Denudation of these high uplifted regions produces sediment that is transported and deposited elsewhere within the landscape or off the coast. On progressively smaller scales, similar ideas apply, where individual landforms evolve in response to the balance of additive processes (uplift and deposition) and subtractive processes (subsidence and erosion). Often, these processes directly affect each other: ice sheets, water, and sediment are all loads that change topography through flexural isostasy. Topography can modify the local climate, for example through orographic precipitation, which in turn modifies the topography by changing the hydrologic regime in which it evolves. Many geomorphologists are particularly interested in the potential for feedbacks between climate and tectonics, mediated by geomorphic processes. In addition to these broad-scale questions, geomorphologists address issues that are more specific or more local. Glacial geomorphologists investigate glacial deposits such as moraines, eskers, and proglacial lakes, as well as glacial erosional features, to build chronologies of both small glaciers and large ice sheets and understand their motions and effects upon the landscape. Fluvial geomorphologists focus on rivers, how they transport sediment, migrate across the landscape, cut into bedrock, respond to environmental and tectonic changes, and interact with humans. 
Soils geomorphologists investigate soil profiles and chemistry to learn about the history of a particular landscape and understand how climate, biota, and rock interact. Other geomorphologists study how hillslopes form and change. Still others investigate the relationships between ecology and geomorphology. Because geomorphology is defined to comprise everything related to the surface of the Earth and its modification, it is a broad field with many facets. Geomorphologists use a wide range of techniques in their work. These may include fieldwork and field data collection, the interpretation of remotely sensed data, geochemical analyses, and the numerical modelling of the physics of landscapes. Geomorphologists may rely on geochronology, using dating methods to measure the rate of changes to the surface. Terrain measurement techniques are vital to quantitatively describe the form of the Earth's surface, and include differential GPS, remotely sensed digital terrain models and laser scanning, to quantify, study, and to generate illustrations and maps. Practical applications of geomorphology include hazard assessment (such as landslide prediction and mitigation), river control and stream restoration, and coastal protection. Planetary geomorphology studies landforms on other terrestrial planets such as Mars. Indications of effects of wind, fluvial, glacial, mass wasting, meteor impact, tectonics and volcanic processes are studied. This effort not only helps better understand the geologic and atmospheric history of those planets but also extends geomorphological study of the Earth. Planetary geomorphologists often use Earth analogues to aid in their study of surfaces of other planets. History Other than some notable exceptions in antiquity, geomorphology is a relatively young science, growing along with interest in other aspects of the earth sciences in the mid-19th century. This section provides a very brief outline of some of the major figures and events in its development. Ancient geomorphology The study of landforms and the evolution of the Earth's surface can be dated back to scholars of Classical Greece. In the 5th century BC, Greek historian Herodotus argued from observations of soils that the Nile delta was actively growing into the Mediterranean Sea, and estimated its age. In the 4th century BC, Greek philosopher Aristotle speculated that due to sediment transport into the sea, eventually those seas would fill while the land lowered. He claimed that this would mean that land and water would eventually swap places, whereupon the process would begin again in an endless cycle. The Encyclopedia of the Brethren of Purity published in Arabic at Basra during the 10th century also discussed the cyclical changing positions of land and sea with rocks breaking down and being washed into the sea, their sediment eventually rising to form new continents. The medieval Persian Muslim scholar Abū Rayhān al-Bīrūnī (973–1048), after observing rock formations at the mouths of rivers, hypothesized that the Indian Ocean once covered all of India. In his De Natura Fossilium of 1546, German metallurgist and mineralogist Georgius Agricola (1494–1555) wrote about erosion and natural weathering. Another early theory of geomorphology was devised by Song dynasty Chinese scientist and statesman Shen Kuo (1031–1095). This was based on his observation of marine fossil shells in a geological stratum of a mountain hundreds of miles from the Pacific Ocean. 
Noticing bivalve shells running in a horizontal span along the cut section of a cliffside, he theorized that the cliff was once the pre-historic location of a seashore that had shifted hundreds of miles over the centuries. He inferred that the land was reshaped and formed by soil erosion of the mountains and by deposition of silt, after observing strange natural erosions of the Taihang Mountains and the Yandang Mountain near Wenzhou. Furthermore, he promoted the theory of gradual climate change over centuries of time once ancient petrified bamboos were found to be preserved underground in the dry, northern climate zone of Yanzhou, which is now modern day Yan'an, Shaanxi province. Previous Chinese authors also presented ideas about changing landforms. Scholar-official Du Yu (222–285) of the Western Jin dynasty predicted that two monumental stelae recording his achievements, one buried at the foot of a mountain and the other erected at the top, would eventually change their relative positions over time as would hills and valleys. Daoist alchemist Ge Hong (284–364) created a fictional dialogue where the immortal Magu explained that the territory of the East China Sea was once a land filled with mulberry trees. Early modern geomorphology The term geomorphology seems to have been first used by Laumann in an 1858 work written in German. Keith Tinkler has suggested that the word came into general use in English, German and French after John Wesley Powell and W. J. McGee used it during the International Geological Conference of 1891. John Edward Marr in his The Scientific Study of Scenery considered his book as, 'an Introductory Treatise on Geomorphology, a subject which has sprung from the union of Geology and Geography'. An early popular geomorphic model was the geographical cycle or cycle of erosion model of broad-scale landscape evolution developed by William Morris Davis between 1884 and 1899. It was an elaboration of the uniformitarianism theory that had first been proposed by James Hutton (1726–1797). With regard to valley forms, for example, uniformitarianism posited a sequence in which a river runs through a flat terrain, gradually carving an increasingly deep valley, until the side valleys eventually erode, flattening the terrain again, though at a lower elevation. It was thought that tectonic uplift could then start the cycle over. In the decades following Davis's development of this idea, many of those studying geomorphology sought to fit their findings into this framework, known today as "Davisian". Davis's ideas are of historical importance, but have been largely superseded today, mainly due to their lack of predictive power and qualitative nature. In the 1920s, Walther Penck developed an alternative model to Davis's. Penck thought that landform evolution was better described as an alternation between ongoing processes of uplift and denudation, as opposed to Davis's model of a single uplift followed by decay. He also emphasised that in many landscapes slope evolution occurs by backwearing of rocks, not by Davisian-style surface lowering, and his science tended to emphasise surface process over understanding in detail the surface history of a given locality. Penck was German, and during his lifetime his ideas were at times rejected vigorously by the English-speaking geomorphology community. His early death, Davis' dislike for his work, and his at-times-confusing writing style likely all contributed to this rejection. 
Both Davis and Penck were trying to place the study of the evolution of the Earth's surface on a more generalized, globally relevant footing than it had been previously. In the early 19th century, authors – especially in Europe – had tended to attribute the form of landscapes to local climate, and in particular to the specific effects of glaciation and periglacial processes. In contrast, both Davis and Penck were seeking to emphasize the importance of evolution of landscapes through time and the generality of the Earth's surface processes across different landscapes under different conditions. During the early 1900s, the study of regional-scale geomorphology was termed "physiography". Physiography later was considered to be a contraction of "physical" and "geography", and therefore synonymous with physical geography, and the concept became embroiled in controversy surrounding the appropriate concerns of that discipline. Some geomorphologists held to a geological basis for physiography and emphasized a concept of physiographic regions while a conflicting trend among geographers was to equate physiography with "pure morphology", separated from its geological heritage. In the period following World War II, the emergence of process, climatic, and quantitative studies led to a preference by many earth scientists for the term "geomorphology" in order to suggest an analytical approach to landscapes rather than a descriptive one. Climatic geomorphology During the age of New Imperialism in the late 19th century European explorers and scientists traveled across the globe bringing descriptions of landscapes and landforms. As geographical knowledge increased over time these observations were systematized in a search for regional patterns. Climate emerged thus as prime factor for explaining landform distribution at a grand scale. The rise of climatic geomorphology was foreshadowed by the work of Wladimir Köppen, Vasily Dokuchaev and Andreas Schimper. William Morris Davis, the leading geomorphologist of his time, recognized the role of climate by complementing his "normal" temperate climate cycle of erosion with arid and glacial ones. Nevertheless, interest in climatic geomorphology was also a reaction against Davisian geomorphology that was by the mid-20th century considered both un-innovative and dubious. Early climatic geomorphology developed primarily in continental Europe while in the English-speaking world the tendency was not explicit until L.C. Peltier's 1950 publication on a periglacial cycle of erosion. Climatic geomorphology was criticized in a 1969 review article by process geomorphologist D.R. Stoddart. The criticism by Stoddart proved "devastating" sparking a decline in the popularity of climatic geomorphology in the late 20th century. Stoddart criticized climatic geomorphology for applying supposedly "trivial" methodologies in establishing landform differences between morphoclimatic zones, being linked to Davisian geomorphology and by allegedly neglecting the fact that physical laws governing processes are the same across the globe. In addition some conceptions of climatic geomorphology, like that which holds that chemical weathering is more rapid in tropical climates than in cold climates proved to not be straightforwardly true. Quantitative and process geomorphology Geomorphology was started to be put on a solid quantitative footing in the middle of the 20th century. 
Following the early work of Grove Karl Gilbert around the turn of the 20th century, a group of mainly American natural scientists, geologists and hydraulic engineers including William Walden Rubey, Ralph Alger Bagnold, Hans Albert Einstein, Frank Ahnert, John Hack, Luna Leopold, A. Shields, Thomas Maddock, Arthur Strahler, Stanley Schumm, and Ronald Shreve began to research the form of landscape elements such as rivers and hillslopes by taking systematic, direct, quantitative measurements of aspects of them and investigating the scaling of these measurements. These methods began to allow prediction of the past and future behavior of landscapes from present observations, and were later to develop into the modern trend of a highly quantitative approach to geomorphic problems. Many groundbreaking and widely cited early geomorphology studies appeared in the Bulletin of the Geological Society of America, and received only few citations prior to 2000 (they are examples of "sleeping beauties") when a marked increase in quantitative geomorphology research occurred. Quantitative geomorphology can involve fluid dynamics and solid mechanics, geomorphometry, laboratory studies, field measurements, theoretical work, and full landscape evolution modeling. These approaches are used to understand weathering and the formation of soils, sediment transport, landscape change, and the interactions between climate, tectonics, erosion, and deposition. In Sweden Filip Hjulström's doctoral thesis, "The River Fyris" (1935), contained one of the first quantitative studies of geomorphological processes ever published. His students followed in the same vein, making quantitative studies of mass transport (Anders Rapp), fluvial transport (Åke Sundborg), delta deposition (Valter Axelsson), and coastal processes (John O. Norrman). This developed into "the Uppsala School of Physical Geography". Contemporary geomorphology Today, the field of geomorphology encompasses a very wide range of different approaches and interests. Modern researchers aim to draw out quantitative "laws" that govern Earth surface processes, but equally, recognize the uniqueness of each landscape and environment in which these processes operate. Particularly important realizations in contemporary geomorphology include: 1) that not all landscapes can be considered as either "stable" or "perturbed", where this perturbed state is a temporary displacement away from some ideal target form. Instead, dynamic changes of the landscape are now seen as an essential part of their nature. 2) that many geomorphic systems are best understood in terms of the stochasticity of the processes occurring in them, that is, the probability distributions of event magnitudes and return times. This in turn has indicated the importance of chaotic determinism to landscapes, and that landscape properties are best considered statistically. The same processes in the same landscapes do not always lead to the same end results. According to Karna Lidmar-Bergström, regional geography is since the 1990s no longer accepted by mainstream scholarship as a basis for geomorphological studies. Albeit having its importance diminished, climatic geomorphology continues to exist as field of study producing relevant research. More recently concerns over global warming have led to a renewed interest in the field. Despite considerable criticism, the cycle of erosion model has remained part of the science of geomorphology. The model or theory has never been proved wrong, but neither has it been proven. 
The inherent difficulties of the model have instead made geomorphological research to advance along other lines. In contrast to its disputed status in geomorphology, the cycle of erosion model is a common approach used to establish denudation chronologies, and is thus an important concept in the science of historical geology. While acknowledging its shortcomings, modern geomorphologists Andrew Goudie and Karna Lidmar-Bergström have praised it for its elegance and pedagogical value respectively. Processes Geomorphically relevant processes generally fall into (1) the production of regolith by weathering and erosion, (2) the transport of that material, and (3) its eventual deposition. Primary surface processes responsible for most topographic features include wind, waves, chemical dissolution, mass wasting, groundwater movement, surface water flow, glacial action, tectonism, and volcanism. Other more exotic geomorphic processes might include periglacial (freeze-thaw) processes, salt-mediated action, changes to the seabed caused by marine currents, seepage of fluids through the seafloor or extraterrestrial impact. Aeolian processes Aeolian processes pertain to the activity of the winds and more specifically, to the winds' ability to shape the surface of the Earth. Winds may erode, transport, and deposit materials, and are effective agents in regions with sparse vegetation and a large supply of fine, unconsolidated sediments. Although water and mass flow tend to mobilize more material than wind in most environments, aeolian processes are important in arid environments such as deserts. Biological processes The interaction of living organisms with landforms, or biogeomorphologic processes, can be of many different forms, and is probably of profound importance for the terrestrial geomorphic system as a whole. Biology can influence very many geomorphic processes, ranging from biogeochemical processes controlling chemical weathering, to the influence of mechanical processes like burrowing and tree throw on soil development, to even controlling global erosion rates through modulation of climate through carbon dioxide balance. Terrestrial landscapes in which the role of biology in mediating surface processes can be definitively excluded are extremely rare, but may hold important information for understanding the geomorphology of other planets, such as Mars. Fluvial processes Rivers and streams are not only conduits of water, but also of sediment. The water, as it flows over the channel bed, is able to mobilize sediment and transport it downstream, either as bed load, suspended load or dissolved load. The rate of sediment transport depends on the availability of sediment itself and on the river's discharge. Rivers are also capable of eroding into rock and forming new sediment, both from their own beds and also by coupling to the surrounding hillslopes. In this way, rivers are thought of as setting the base level for large-scale landscape evolution in nonglacial environments. Rivers are key links in the connectivity of different landscape elements. As rivers flow across the landscape, they generally increase in size, merging with other rivers. The network of rivers thus formed is a drainage system. These systems take on four general patterns: dendritic, radial, rectangular, and trellis. Dendritic happens to be the most common, occurring when the underlying stratum is stable (without faulting). 
Drainage systems have four primary components: drainage basin, alluvial valley, delta plain, and receiving basin. Some geomorphic examples of fluvial landforms are alluvial fans, oxbow lakes, and fluvial terraces. Glacial processes Glaciers, while geographically restricted, are effective agents of landscape change. The gradual movement of ice down a valley causes abrasion and plucking of the underlying rock. Abrasion produces fine sediment, termed glacial flour. The debris transported by the glacier, when the glacier recedes, is termed a moraine. Glacial erosion is responsible for U-shaped valleys, as opposed to the V-shaped valleys of fluvial origin. The way glacial processes interact with other landscape elements, particularly hillslope and fluvial processes, is an important aspect of Plio-Pleistocene landscape evolution and its sedimentary record in many high mountain environments. Environments that have been relatively recently glaciated but are no longer may still show elevated landscape change rates compared to those that have never been glaciated. Nonglacial geomorphic processes which nevertheless have been conditioned by past glaciation are termed paraglacial processes. This concept contrasts with periglacial processes, which are directly driven by formation or melting of ice or frost. Hillslope processes Soil, regolith, and rock move downslope under the force of gravity via creep, slides, flows, topples, and falls. Such mass wasting occurs on both terrestrial and submarine slopes, and has been observed on Earth, Mars, Venus, Titan and Iapetus. Ongoing hillslope processes can change the topology of the hillslope surface, which in turn can change the rates of those processes. Hillslopes that steepen up to certain critical thresholds are capable of shedding extremely large volumes of material very quickly, making hillslope processes an extremely important element of landscapes in tectonically active areas. On the Earth, biological processes such as burrowing or tree throw may play important roles in setting the rates of some hillslope processes. Igneous processes Both volcanic (eruptive) and plutonic (intrusive) igneous processes can have important impacts on geomorphology. The action of volcanoes tends to rejuvenize landscapes, covering the old land surface with lava and tephra, releasing pyroclastic material and forcing rivers through new paths. The cones built by eruptions also build substantial new topography, which can be acted upon by other surface processes. Plutonic rocks intruding then solidifying at depth can cause both uplift or subsidence of the surface, depending on whether the new material is denser or less dense than the rock it displaces. Tectonic processes Tectonic effects on geomorphology can range from scales of millions of years to minutes or less. The effects of tectonics on landscape are heavily dependent on the nature of the underlying bedrock fabric that more or less controls what kind of local morphology tectonics can shape. Earthquakes can, in terms of minutes, submerge large areas of land forming new wetlands. Isostatic rebound can account for significant changes over hundreds to thousands of years, and allows erosion of a mountain belt to promote further erosion as mass is removed from the chain and the belt uplifts. 
Long-term plate tectonic dynamics give rise to orogenic belts, large mountain chains with typical lifetimes of many tens of millions of years, which form focal points for high rates of fluvial and hillslope processes and thus long-term sediment production. Features of deeper mantle dynamics such as plumes and delamination of the lower lithosphere have also been hypothesised to play important roles in the long term (> million year), large scale (thousands of km) evolution of the Earth's topography (see dynamic topography). Both can promote surface uplift through isostasy as hotter, less dense, mantle rocks displace cooler, denser, mantle rocks at depth in the Earth. Marine processes Marine processes are those associated with the action of waves, marine currents and seepage of fluids through the seafloor. Mass wasting and submarine landsliding are also important processes for some aspects of marine geomorphology. Because ocean basins are the ultimate sinks for a large fraction of terrestrial sediments, depositional processes and their related forms (e.g., sediment fans, deltas) are particularly important as elements of marine geomorphology. Overlap with other fields There is a considerable overlap between geomorphology and other fields. Deposition of material is extremely important in sedimentology. Weathering is the chemical and physical disruption of earth materials in place on exposure to atmospheric or near surface agents, and is typically studied by soil scientists and environmental chemists, but is an essential component of geomorphology because it is what provides the material that can be moved in the first place. Civil and environmental engineers are concerned with erosion and sediment transport, especially related to canals, slope stability (and natural hazards), water quality, coastal environmental management, transport of contaminants, and stream restoration. Glaciers can cause extensive erosion and deposition in a short period of time, making them extremely important entities in the high latitudes and meaning that they set the conditions in the headwaters of mountain-born streams; glaciology therefore is important in geomorphology. See also Bioerosion Biogeology Biogeomorphology Biorhexistasy British Society for Geomorphology Coastal biogeomorphology Coastal erosion Concepts and Techniques in Modern Geography Drainage system (geomorphology) Erosion prediction Geologic modelling Geomorphometry Geotechnics Hack's law Hydrologic modeling, behavioral modeling in hydrology List of landforms Orogeny Physiographic regions of the world Sediment transport Soil morphology Soils retrogression and degradation Stream capture Thermochronology References Further reading Ialenti, Vincent. "Envisioning Landscapes of Our Very Distant Future" NPR Cosmos & Culture. 9/2014. Bierman, P.R.; Montgomery, D.R. Key Concepts in Geomorphology. New York: W. H. Freeman, 2013. . Ritter, D.F.; Kochel, R.C.; Miller, J.R.. Process Geomorphology. London: Waveland Pr Inc, 2011. . Hargitai H., Page D., Canon-Tapia E. and Rodrigue C.M..; Classification and Characterization of Planetary Landforms. in: Hargitai H, Kereszturi Á, eds, Encyclopedia of Planetary Landforms. Cham: Springer 2015 External links The Geographical Cycle, or the Cycle of Erosion (1899) Geomorphology from Space (NASA) British Society for Geomorphology Earth sciences Geology Geological processes Gravity Physical geography Planetary science Seismology Topography
Anaerobic exercise
Anaerobic exercise is a type of exercise that breaks down glucose in the body without using oxygen; anaerobic means "without oxygen". This type of exercise leads to a buildup of lactic acid. In practical terms, this means that anaerobic exercise is more intense, but shorter in duration than aerobic exercise. The biochemistry of anaerobic exercise involves a process called glycolysis, in which glucose is converted to adenosine triphosphate (ATP), the primary source of energy for cellular reactions. Anaerobic exercise may be used to help build endurance, muscle strength, and power. Metabolism Anaerobic metabolism is a natural part of metabolic energy expenditure. Fast twitch muscles (as compared to slow twitch muscles) operate using anaerobic metabolic systems, such that any use of fast twitch muscle fibers leads to increased anaerobic energy expenditure. Intense exercise lasting upwards of four minutes (e.g. a mile race) may still have considerable anaerobic energy expenditure. An example is high-intensity interval training, an exercise strategy that is performed under anaerobic conditions at intensities that reach an excess of 90% of the maximum heart rate. Anaerobic energy expenditure is difficult to accurately quantify. Some methods estimate the anaerobic component of an exercise by determining the maximum accumulated oxygen deficit or measuring the lactic acid formation in muscle mass. In contrast, aerobic exercise includes lower intensity activities performed for longer periods of time. Activities such as walking, jogging, rowing, and cycling require oxygen to generate the energy needed for prolonged exercise (i.e., aerobic energy expenditure). For sports that require repeated short bursts of exercise, the aerobic system acts to replenish and store energy during recovery periods to fuel the next energy burst. Therefore, training strategies for many sports demand that both aerobic and anaerobic systems be developed. The benefits of adding anaerobic exercise include improving cardiovascular endurance as well as build and maintaining muscle strength and losing weight. The anaerobic energy systems are: The alactic anaerobic system, which consists of high energy phosphates, adenosine triphosphate, and creatine phosphate; and The lactic anaerobic system, which features anaerobic glycolysis. High energy phosphates are stored in limited quantities within muscle cells. Anaerobic glycolysis exclusively uses glucose (and glycogen) as a fuel in the absence of oxygen, or more specifically, when ATP is needed at rates that exceed those provided by aerobic metabolism. The consequence of such rapid glucose breakdown is the formation of lactic acid (or more appropriately, its conjugate base lactate at biological pH levels). Physical activities that last up to about thirty seconds rely primarily on the former ATP-CP phosphagen system. Beyond this time, both aerobic and anaerobic glycolysis-based metabolic systems are used. The by-product of anaerobic glycolysis—lactate—has traditionally been thought to be detrimental to muscle function. However, this appears likely only when lactate levels are very high. Elevated lactate levels are only one of many changes that occur within and around muscle cells during intense exercise that can lead to fatigue. Fatigue, which is muscle failure, is a complex subject that depends on more than just changes to lactate concentration. Energy availability, oxygen delivery, perception to pain, and other psychological factors all contribute to muscular fatigue. 
Elevated muscle and blood lactate concentrations are a natural consequence of any physical exertion. The effectiveness of anaerobic activity can be improved through training. Anaerobic exercise also increases an individual's basal metabolic rate (BMR). Examples Anaerobic exercises are high-intensity workouts completed over shorter durations, while aerobic exercises include variable-intensity workouts completed over longer durations. Some examples of anaerobic exercises include sprints, high-intensity interval training (HIIT), and strength training. See also Aerobic exercise Bioenergetic systems Margaria-Kalamen power test Strength training Weight training Cori cycle Citric acid cycle References Exercise biochemistry Exercise physiology Physical exercise Bodybuilding
Potential energy surface
A potential energy surface (PES) or energy landscape describes the energy of a system, especially a collection of atoms, in terms of certain parameters, normally the positions of the atoms. The surface might define the energy as a function of one or more coordinates; if there is only one coordinate, the surface is called a potential energy curve or energy profile. An example is the Morse/Long-range potential. It is helpful to use the analogy of a landscape: for a system with two degrees of freedom (e.g. two bond lengths), the value of the energy (analogy: the height of the land) is a function of two bond lengths (analogy: the coordinates of the position on the ground). The PES concept finds application in fields such as physics, chemistry and biochemistry, especially in the theoretical sub-branches of these subjects. It can be used to theoretically explore properties of structures composed of atoms, for example, finding the minimum energy shape of a molecule or computing the rates of a chemical reaction. It can be used to describe all possible conformations of a molecular entity, or the spatial positions of interacting molecules in a system, or parameters and their corresponding energy levels, typically Gibbs free energy. Geometrically, the energy landscape is the graph of the energy function across the configuration space of the system. The term is also used more generally in geometric perspectives on mathematical optimization, when the domain of the loss function is the parameter space of some system. Mathematical definition and computation The geometry of a set of atoms can be described by a vector, r, whose elements represent the atom positions. The vector r could be the set of the Cartesian coordinates of the atoms, or could also be a set of inter-atomic distances and angles. Given r, the energy as a function of the positions, E(r), is the value of E(r) for all values of r of interest. Using the landscape analogy from the introduction, E gives the height on the "energy landscape" so that the concept of a potential energy surface arises. To study a chemical reaction using the PES as a function of atomic positions, it is necessary to calculate the energy for every atomic arrangement of interest. Methods of calculating the energy of a particular arrangement of atoms are well described in the computational chemistry article, and the emphasis here will be on finding approximations of E(r) that yield fine-grained energy-position information. For very simple chemical systems, or when simplifying approximations are made about inter-atomic interactions, it is sometimes possible to use an analytically derived expression for the energy as a function of the atomic positions. An example is the London-Eyring-Polanyi-Sato potential for the system H + H2 as a function of the three H-H distances. For more complicated systems, calculation of the energy of a particular arrangement of atoms is often too computationally expensive for large scale representations of the surface to be feasible. For these systems a possible approach is to calculate only a reduced set of points on the PES and then use a computationally cheaper interpolation method, for example Shepard interpolation, to fill in the gaps. Application A PES is a conceptual tool for aiding the analysis of molecular geometry and chemical reaction dynamics. Once the necessary points are evaluated on a PES, the points can be classified according to the first and second derivatives of the energy with respect to position, which respectively are the gradient and the curvature.
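As a minimal illustration of this gradient-and-curvature classification (a sketch only, using an assumed toy double-well potential rather than a real molecular PES), the following snippet labels candidate points as minima, maxima or saddle points from a numerical gradient and Hessian.

```python
# A minimal sketch (not from the article): classifying points on a toy 2-D
# potential energy surface by its gradient and Hessian, assuming the simple
# double-well model potential V(x, y) = (x**2 - 1)**2 + y**2.
import numpy as np

def V(p):
    x, y = p
    return (x**2 - 1.0)**2 + y**2          # toy PES: two minima and one saddle

def gradient(f, p, h=1e-5):
    """Central-difference gradient of f at point p."""
    p = np.asarray(p, dtype=float)
    g = np.zeros_like(p)
    for i in range(p.size):
        dp = np.zeros_like(p); dp[i] = h
        g[i] = (f(p + dp) - f(p - dp)) / (2 * h)
    return g

def hessian(f, p, h=1e-4):
    """Central-difference Hessian of f at point p."""
    p = np.asarray(p, dtype=float)
    n = p.size
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            di = np.zeros(n); di[i] = h
            dj = np.zeros(n); dj[j] = h
            H[i, j] = (f(p + di + dj) - f(p + di - dj)
                       - f(p - di + dj) + f(p - di - dj)) / (4 * h * h)
    return H

def classify(f, p, tol=1e-3):
    """Label a point as minimum, maximum, saddle, or non-stationary."""
    if np.linalg.norm(gradient(f, p)) > tol:
        return "not stationary"
    eigvals = np.linalg.eigvalsh(hessian(f, p))
    if np.all(eigvals > 0):
        return "minimum (stable species)"
    if np.all(eigvals < 0):
        return "maximum"
    return "saddle point (transition-state candidate)"

for point in [(1.0, 0.0), (-1.0, 0.0), (0.0, 0.0), (0.5, 0.2)]:
    print(point, classify(V, point))
```

For a realistic surface the same test would be applied to points obtained from electronic-structure calculations or an interpolated PES rather than to a closed-form toy function.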
Stationary points (or points with a zero gradient) have physical meaning: energy minima correspond to physically stable chemical species and saddle points correspond to transition states, the highest energy point on the reaction coordinate (which is the lowest energy pathway connecting a chemical reactant to a chemical product). The term is useful when examining protein folding; while a protein can theoretically exist in a nearly infinite number of conformations along its energy landscape, in reality proteins fold (or "relax") into secondary and tertiary structures that possess the lowest possible free energy. The key concept in the energy landscape approach to protein folding is the folding funnel hypothesis. In catalysis, when designing new catalysts or refining existing ones, energy landscapes are considered to avoid low-energy or high-energy intermediates that could halt the reaction or demand excessive energy to reach the final products. In glassing models, the local minima of an energy landscape correspond to metastable low temperature states of a thermodynamic system. In machine learning, artificial neural networks may be analyzed using analogous approaches. For example, a neural network may be able to perfectly fit the training set, corresponding to a global minimum of zero loss, but overfitting the model ("learning the noise" or "memorizing the training set"). Understanding when this happens can be studied using the geometry of the corresponding energy landscape. Attractive and repulsive surfaces Potential energy surfaces for chemical reactions can be classified as attractive or repulsive by comparing the extensions of the bond lengths in the activated complex relative to those of the reactants and products. For a reaction of type A + B—C → A—B + C, the bond length extension for the newly formed A—B bond is defined as R*AB = RAB − R0AB, where RAB is the A—B bond length in the transition state and R0AB in the product molecule. Similarly for the bond which is broken in the reaction, R*BC = RBC − R0BC, where R0BC refers to the reactant molecule. For exothermic reactions, a PES is classified as attractive (or early-downhill) if R*AB > R*BC, so that the transition state is reached while the reactants are approaching each other. After the transition state, the A—B bond length continues to decrease, so that much of the liberated reaction energy is converted into vibrational energy of the A—B bond. An example is the harpoon reaction K + Br2 → K—Br + Br, in which the initial long-range attraction of the reactants leads to an activated complex resembling K+•••Br−•••Br. The vibrationally excited populations of product molecules can be detected by infrared chemiluminescence. In contrast the PES for the reaction H + Cl2 → HCl + Cl is repulsive (or late-downhill) because R*HCl < R*ClCl and the transition state is reached when the products are separating. For this reaction in which the atom A (here H) is lighter than B and C, the reaction energy is released primarily as translational kinetic energy of the products. For a reaction such as F + H2 → HF + H in which atom A is heavier than B and C, there is mixed energy release, both vibrational and translational, even though the PES is repulsive. For endothermic reactions, the type of surface determines the type of energy which is most effective in bringing about reaction. 
Translational energy of the reactants is most effective at inducing reactions with an attractive surface, while vibrational excitation (to higher vibrational quantum number v) is more effective for reactions with a repulsive surface. As an example of the latter case, the reaction F + HCl(v=1) → Cl + HF is about five times faster than F + HCl(v=0) → Cl + HF for the same total energy of HCl. History The concept of a potential energy surface for chemical reactions was first suggested by the French physicist René Marcelin in 1913. The first semi-empirical calculation of a potential energy surface was proposed for the H + H2 reaction by Henry Eyring and Michael Polanyi in 1931. Eyring used potential energy surfaces to calculate reaction rate constants in the transition state theory in 1935. H + H2 two-dimensional PES Potential energy surfaces are commonly shown as three-dimensional graphs, but they can also be represented by two-dimensional graphs, in which the advancement of the reaction is plotted by the use of isoenergetic lines. The collinear system H + H2 is a simple reaction that allows a two-dimension PES to be plotted in an easy and understandable way. In this reaction, a hydrogen atom (H) reacts with a dihydrogen molecule (H2) by forming a new bond with one atom from the molecule, which in turn breaks the bond of the original molecule. This is symbolized as Ha + Hb–Hc → Ha–Hb + Hc. The progression of the reaction from reactants (H+H₂) to products (H-H-H), as well as the energy of the species that take part in the reaction, are well defined in the corresponding potential energy surface. Energy profiles describe potential energy as a function of geometrical variables (PES in any dimension are independent of time and temperature). We have different relevant elements in the 2-D PES: The 2-D plot shows the minima points where we find reactants, the products and the saddle point or transition state. The transition state is a maximum in the reaction coordinate and a minimum in the coordinate perpendicular to the reaction path. The advance of time describes a trajectory in every reaction. Depending on the conditions of the reaction the process will show different ways to get to the product formation plotted between the 2 axes. See also Computational chemistry Energy minimization (or geometry optimization) Energy profile (chemistry) Potential well Reaction coordinate References Quantum mechanics Potential theory Quantum chemistry
Classical field theory
A classical field theory is a physical theory that predicts how one or more fields in physics interact with matter through field equations, without considering effects of quantization; theories that incorporate quantum mechanics are called quantum field theories. In most contexts, 'classical field theory' is specifically intended to describe electromagnetism and gravitation, two of the fundamental forces of nature. A physical field can be thought of as the assignment of a physical quantity at each point of space and time. For example, in a weather forecast, the wind velocity during a day over a country is described by assigning a vector to each point in space. Each vector represents the direction of the movement of air at that point, so the set of all wind vectors in an area at a given point in time constitutes a vector field. As the day progresses, the directions in which the vectors point change as the directions of the wind change. The first field theories, Newtonian gravitation and Maxwell's equations of electromagnetic fields, were developed in classical physics before the advent of relativity theory in 1905, and had to be revised to be consistent with that theory. Consequently, classical field theories are usually categorized as non-relativistic and relativistic. Modern field theories are usually expressed using the mathematics of tensor calculus. A more recent alternative mathematical formalism describes classical fields as sections of mathematical objects called fiber bundles. History Michael Faraday coined the term "field" and lines of force to explain electric and magnetic phenomena. Lord Kelvin in 1851 formalized the concept of field in different areas of physics. Non-relativistic field theories Some of the simplest physical fields are vector force fields. Historically, the first time that fields were taken seriously was with Faraday's lines of force when describing the electric field. The gravitational field was then similarly described. Newtonian gravitation The first field theory of gravity was Newton's theory of gravitation, in which the mutual interaction between two masses obeys an inverse square law. This was very useful for predicting the motion of planets around the Sun. Any massive body M has a gravitational field g which describes its influence on other massive bodies. The gravitational field of M at a point r in space is found by determining the force F that M exerts on a small test mass m located at r, and then dividing by m: g(r) = F(r)/m. Stipulating that m is much smaller than M ensures that the presence of m has a negligible influence on the behavior of M. According to Newton's law of universal gravitation, F(r) is given by F(r) = −(GMm/r²) r̂, where r̂ is a unit vector pointing along the line from M to m, and G is Newton's gravitational constant. Therefore, the gravitational field of M is g(r) = F(r)/m = −(GM/r²) r̂. The experimental observation that inertial mass and gravitational mass are equal to unprecedented levels of accuracy leads to the identification of the gravitational field strength as identical to the acceleration experienced by a particle. This is the starting point of the equivalence principle, which leads to general relativity. For a discrete collection of masses, Mi, located at points, ri, the gravitational field at a point r due to the masses is g(r) = −G Σi Mi (r − ri)/|r − ri|³. If we have a continuous mass distribution ρ instead, the sum is replaced by an integral, g(r) = −G ∫ ρ(r′)(r − r′)/|r − r′|³ d³r′. Note that the direction of the field points from the position r to the position of the masses ri; this is ensured by the minus sign. In a nutshell, this means all masses attract.
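As a small worked sketch of the superposition formula just given (the masses, positions and field point below are arbitrary example values, not data from the text), the gravitational field of a discrete set of point masses can be evaluated directly:

```python
# A minimal sketch (illustrative, not from the text): the gravitational field
# g(r) of a discrete collection of point masses by direct superposition,
# g(r) = -G * sum_i M_i (r - r_i) / |r - r_i|**3.
import numpy as np

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def gravitational_field(r, masses, positions):
    """Return the field vector g at point r due to a set of point masses."""
    r = np.asarray(r, dtype=float)
    g = np.zeros(3)
    for M, r_i in zip(masses, positions):
        d = r - np.asarray(r_i, dtype=float)     # vector from mass i to the field point
        g += -G * M * d / np.linalg.norm(d)**3   # minus sign: all masses attract
    return g

# Example: two unequal masses on the x-axis, field evaluated above the origin.
masses = [5.0e24, 7.0e22]                          # kg (arbitrary example values)
positions = [(0.0, 0.0, 0.0), (3.8e8, 0.0, 0.0)]   # m (arbitrary example values)
print(gravitational_field((0.0, 0.0, 1.0e7), masses, positions))
```

For a continuous mass distribution the loop would be replaced by a numerical quadrature of the corresponding integral.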
In the integral form Gauss's law for gravity is while in differential form it is Therefore, the gravitational field g can be written in terms of the gradient of a gravitational potential : This is a consequence of the gravitational force F being conservative. Electromagnetism Electrostatics A charged test particle with charge q experiences a force F based solely on its charge. We can similarly describe the electric field E generated by the source charge Q so that : Using this and Coulomb's law the electric field due to a single charged particle is The electric field is conservative, and hence is given by the gradient of a scalar potential, Gauss's law for electricity is in integral form while in differential form Magnetostatics A steady current I flowing along a path ℓ will exert a force on nearby charged particles that is quantitatively different from the electric field force described above. The force exerted by I on a nearby charge q with velocity v is where B(r) is the magnetic field, which is determined from I by the Biot–Savart law: The magnetic field is not conservative in general, and hence cannot usually be written in terms of a scalar potential. However, it can be written in terms of a vector potential, A(r): Gauss's law for magnetism in integral form is while in differential form it is The physical interpretation is that there are no magnetic monopoles. Electrodynamics In general, in the presence of both a charge density ρ(r, t) and current density J(r, t), there will be both an electric and a magnetic field, and both will vary in time. They are determined by Maxwell's equations, a set of differential equations which directly relate E and B to the electric charge density (charge per unit volume) ρ and current density (electric current per unit area) J. Alternatively, one can describe the system in terms of its scalar and vector potentials V and A. A set of integral equations known as retarded potentials allow one to calculate V and A from ρ and J, and from there the electric and magnetic fields are determined via the relations Continuum mechanics Fluid dynamics Fluid dynamics has fields of pressure, density, and flow rate that are connected by conservation laws for energy and momentum. The mass continuity equation is a continuity equation, representing the conservation of mass and the Navier–Stokes equations represent the conservation of momentum in the fluid, found from Newton's laws applied to the fluid, if the density , pressure , deviatoric stress tensor of the fluid, as well as external body forces b, are all given. The velocity field u is the vector field to solve for. Other examples In 1839, James MacCullagh presented field equations to describe reflection and refraction in "An essay toward a dynamical theory of crystalline reflection and refraction". Potential theory The term "potential theory" arises from the fact that, in 19th century physics, the fundamental forces of nature were believed to be derived from scalar potentials which satisfied Laplace's equation. Poisson addressed the question of the stability of the planetary orbits, which had already been settled by Lagrange to the first degree of approximation from the perturbation forces, and derived the Poisson's equation, named after him. The general form of this equation is where σ is a source function (as a density, a quantity per unit volume) and ø the scalar potential to solve for. In Newtonian gravitation; masses are the sources of the field so that field lines terminate at objects that have mass. 
Similarly, charges are the sources and sinks of electrostatic fields: positive charges emanate electric field lines, and field lines terminate at negative charges. These field concepts are also illustrated in the general divergence theorem, specifically Gauss's law's for gravity and electricity. For the cases of time-independent gravity and electromagnetism, the fields are gradients of corresponding potentials so substituting these into Gauss' law for each case obtains where ρg is the mass density, ρe the charge density, G the gravitational constant and ke = 1/4πε0 the electric force constant. Incidentally, this similarity arises from the similarity between Newton's law of gravitation and Coulomb's law. In the case where there is no source term (e.g. vacuum, or paired charges), these potentials obey Laplace's equation: For a distribution of mass (or charge), the potential can be expanded in a series of spherical harmonics, and the nth term in the series can be viewed as a potential arising from the 2n-moments (see multipole expansion). For many purposes only the monopole, dipole, and quadrupole terms are needed in calculations. Relativistic field theory Modern formulations of classical field theories generally require Lorentz covariance as this is now recognised as a fundamental aspect of nature. A field theory tends to be expressed mathematically by using Lagrangians. This is a function that, when subjected to an action principle, gives rise to the field equations and a conservation law for the theory. The action is a Lorentz scalar, from which the field equations and symmetries can be readily derived. Throughout we use units such that the speed of light in vacuum is 1, i.e. c = 1. Lagrangian dynamics Given a field tensor , a scalar called the Lagrangian density can be constructed from and its derivatives. From this density, the action functional can be constructed by integrating over spacetime, Where is the volume form in curved spacetime. Therefore, the Lagrangian itself is equal to the integral of the Lagrangian density over all space. Then by enforcing the action principle, the Euler–Lagrange equations are obtained Relativistic fields Two of the most well-known Lorentz-covariant classical field theories are now described. Electromagnetism Historically, the first (classical) field theories were those describing the electric and magnetic fields (separately). After numerous experiments, it was found that these two fields were related, or, in fact, two aspects of the same field: the electromagnetic field. Maxwell's theory of electromagnetism describes the interaction of charged matter with the electromagnetic field. The first formulation of this field theory used vector fields to describe the electric and magnetic fields. With the advent of special relativity, a more complete formulation using tensor fields was found. Instead of using two vector fields describing the electric and magnetic fields, a tensor field representing these two fields together is used. The electromagnetic four-potential is defined to be , and the electromagnetic four-current . The electromagnetic field at any point in spacetime is described by the antisymmetric (0,2)-rank electromagnetic field tensor The Lagrangian To obtain the dynamics for this field, we try and construct a scalar from the field. 
In the vacuum, we have We can use gauge field theory to get the interaction term, and this gives us The equations To obtain the field equations, the electromagnetic tensor in the Lagrangian density needs to be replaced by its definition in terms of the 4-potential A, and it's this potential which enters the Euler-Lagrange equations. The EM field F is not varied in the EL equations. Therefore, Evaluating the derivative of the Lagrangian density with respect to the field components and the derivatives of the field components obtains Maxwell's equations in vacuum. The source equations (Gauss' law for electricity and the Maxwell-Ampère law) are while the other two (Gauss' law for magnetism and Faraday's law) are obtained from the fact that F is the 4-curl of A, or, in other words, from the fact that the Bianchi identity holds for the electromagnetic field tensor. where the comma indicates a partial derivative. Gravitation After Newtonian gravitation was found to be inconsistent with special relativity, Albert Einstein formulated a new theory of gravitation called general relativity. This treats gravitation as a geometric phenomenon ('curved spacetime') caused by masses and represents the gravitational field mathematically by a tensor field called the metric tensor. The Einstein field equations describe how this curvature is produced. Newtonian gravitation is now superseded by Einstein's theory of general relativity, in which gravitation is thought of as being due to a curved spacetime, caused by masses. The Einstein field equations, describe how this curvature is produced by matter and radiation, where Gab is the Einstein tensor, written in terms of the Ricci tensor Rab and Ricci scalar , is the stress–energy tensor and is a constant. In the absence of matter and radiation (including sources) the 'vacuum field equations, can be derived by varying the Einstein–Hilbert action, with respect to the metric, where g is the determinant of the metric tensor gab''. Solutions of the vacuum field equations are called vacuum solutions. An alternative interpretation, due to Arthur Eddington, is that is fundamental, is merely one aspect of , and is forced by the choice of units. Further examples Further examples of Lorentz-covariant classical field theories are Klein-Gordon theory for real or complex scalar fields Dirac theory for a Dirac spinor field Yang–Mills theory for a non-abelian gauge field Unification attempts Attempts to create a unified field theory based on classical physics are classical unified field theories. During the years between the two World Wars, the idea of unification of gravity with electromagnetism was actively pursued by several mathematicians and physicists like Albert Einstein, Theodor Kaluza, Hermann Weyl, Arthur Eddington, Gustav Mie and Ernst Reichenbacher. Early attempts to create such theory were based on incorporation of electromagnetic fields into the geometry of general relativity. In 1918, the case for the first geometrization of the electromagnetic field was proposed in 1918 by Hermann Weyl. In 1919, the idea of a five-dimensional approach was suggested by Theodor Kaluza. From that, a theory called Kaluza-Klein Theory was developed. It attempts to unify gravitation and electromagnetism, in a five-dimensional space-time. There are several ways of extending the representational framework for a unified field theory which have been considered by Einstein and other researchers. These extensions in general are based in two options. 
The first option is based on relaxing the conditions imposed on the original formulation, and the second is based on introducing other mathematical objects into the theory. An example of the first option is relaxing the restriction to four-dimensional space-time by considering higher-dimensional representations. That is used in Kaluza-Klein Theory. For the second, the most prominent example arises from the concept of the affine connection that was introduced into the theory of general relativity mainly through the work of Tullio Levi-Civita and Hermann Weyl. Further development of quantum field theory changed the focus of the search for a unified field theory from classical to quantum description. Because of that, many theoretical physicists gave up looking for a classical unified field theory. Quantum field theory would include unification of two other fundamental forces of nature, the strong and weak nuclear forces, which act on the subatomic level. See also Relativistic wave equations Quantum field theory Classical unified field theories Variational methods in general relativity Higgs field (classical) Lagrangian (field theory) Hamiltonian field theory Covariant Hamiltonian field theory Notes References Citations Sources External links Mathematical physics Lagrangian mechanics Equations
Accelerating change
In futures studies and the history of technology, accelerating change is the observed exponential nature of the rate of technological change in recent history, which may suggest faster and more profound change in the future and may or may not be accompanied by equally profound social and cultural change. Early observations In 1910, during the town planning conference of London, Daniel Burnham noted, "But it is not merely in the number of facts or sorts of knowledge that progress lies: it is still more in the geometric ratio of sophistication, in the geometric widening of the sphere of knowledge, which every year is taking in a larger percentage of people as time goes on." And later on, "It is the argument with which I began, that a mighty change having come about in fifty years, and our pace of development having immensely accelerated, our sons and grandsons are going to demand and get results that would stagger us." In 1938, Buckminster Fuller introduced the word ephemeralization to describe the trends of "doing more with less" in chemistry, health and other areas of industrial development. In 1946, Fuller published a chart of the discoveries of the chemical elements over time to highlight the development of accelerating acceleration in human knowledge acquisition. In 1958, Stanislaw Ulam wrote in reference to a conversation with John von Neumann: Moravec's Mind Children In a series of published articles from 1974 to 1979, and then in his 1988 book Mind Children, computer scientist and futurist Hans Moravec generalizes Moore's law to make predictions about the future of artificial life. Moore's law describes an exponential growth pattern in the complexity of integrated semiconductor circuits. Moravec extends this to include technologies from long before the integrated circuit to future forms of technology. Moravec outlines a timeline and a scenario in which robots will evolve into a new series of artificial species, starting around 2030–2040. In Robot: Mere Machine to Transcendent Mind, published in 1998, Moravec further considers the implications of evolving robot intelligence, generalizing Moore's law to technologies predating the integrated circuit, and also plotting the exponentially increasing computational power of the brains of animals in evolutionary history. Extrapolating these trends, he speculates about a coming "mind fire" of rapidly expanding superintelligence similar to the explosion of intelligence predicted by Vinge. James Burke's Connections In his TV series Connections (1978)—and sequels Connections² (1994) and Connections³ (1997)—James Burke explores an "Alternative View of Change" (the subtitle of the series) that rejects the conventional linear and teleological view of historical progress. Burke contends that one cannot consider the development of any particular piece of the modern world in isolation. Rather, the entire gestalt of the modern world is the result of a web of interconnected events, each one consisting of a person or group acting for reasons of their own motivations (e.g., profit, curiosity, religious) with no concept of the final, modern result to which the actions of either them or their contemporaries would lead. The interplay of the results of these isolated events is what drives history and innovation, and is also the main focus of the series and its sequels. Burke also explores three corollaries to his initial thesis. 
The first is that, if history is driven by individuals who act only on what they know at the time, and not because of any idea as to where their actions will eventually lead, then predicting the future course of technological progress is merely conjecture. Therefore, if we are astonished by the connections Burke is able to weave among past events, then we will be equally surprised to what the events of today eventually will lead, especially events we were not even aware of at the time. The second and third corollaries are explored most in the introductory and concluding episodes, and they represent the downside of an interconnected history. If history progresses because of the synergistic interaction of past events and innovations, then as history does progress, the number of these events and innovations increases. This increase in possible connections causes the process of innovation to not only continue, but to accelerate. Burke poses the question of what happens when this rate of innovation, or more importantly change itself, becomes too much for the average person to handle, and what this means for individual power, liberty, and privacy. Gerald Hawkins' Mindsteps In his book Mindsteps to the Cosmos (HarperCollins, August 1983), Gerald S. Hawkins elucidated his notion of mindsteps, dramatic and irreversible changes to paradigms or world views. He identified five distinct mindsteps in human history, and the technology that accompanied these "new world views": the invention of imagery, writing, mathematics, printing, the telescope, rocket, radio, TV, computer... "Each one takes the collective mind closer to reality, one stage further along in its understanding of the relation of humans to the cosmos." He noted: "The waiting period between the mindsteps is getting shorter. One can't help noticing the acceleration." Hawkins' empirical 'mindstep equation' quantified this, and gave dates for (to him) future mindsteps. The date of the next mindstep (5; the series begins at 0) he cited as 2021, with two further, successively closer mindsteps in 2045 and 2051, until the limit of the series in 2053. His speculations ventured beyond the technological: Vinge's exponentially accelerating change The mathematician Vernor Vinge popularized his ideas about exponentially accelerating technological change in the science fiction novel Marooned in Realtime (1986), set in a world of rapidly accelerating progress leading to the emergence of more and more sophisticated technologies separated by shorter and shorter time intervals, until a point beyond human comprehension is reached. His subsequent Hugo award-winning novel A Fire Upon the Deep (1992) starts with an imaginative description of the evolution of a superintelligence passing through exponentially accelerating developmental stages ending in a transcendent, almost omnipotent power unfathomable by mere humans. His already mentioned influential 1993 paper on the technological singularity compactly summarizes the basic ideas. Kurzweil's Law of Accelerating Returns In his 1999 book The Age of Spiritual Machines, Ray Kurzweil proposed "The Law of Accelerating Returns", according to which the rate of change in a wide variety of evolutionary systems (including but not limited to the growth of technologies) tends to increase exponentially. He gave further focus to this issue in a 2001 essay entitled "The Law of Accelerating Returns". 
In it, Kurzweil, after Moravec, argued for extending Moore's Law to describe exponential growth of diverse forms of technological progress. Whenever a technology approaches some kind of a barrier, according to Kurzweil, a new technology will be invented to allow us to cross that barrier. He cites numerous past examples of this to substantiate his assertions. He predicts that such paradigm shifts have and will continue to become increasingly common, leading to "technological change so rapid and profound it represents a rupture in the fabric of human history". He believes the Law of Accelerating Returns implies that a technological singularity will occur before the end of the 21st century, around 2045. The essay begins: The Law of Accelerating Returns has in many ways altered public perception of Moore's law. It is a common (but mistaken) belief that Moore's law makes predictions regarding all forms of technology, when really it only concerns semiconductor circuits. Many futurists still use the term "Moore's law" to describe ideas like those put forth by Moravec, Kurzweil and others. According to Kurzweil, since the beginning of evolution, more complex life forms have been evolving exponentially faster, with shorter and shorter intervals between the emergence of radically new life forms, such as human beings, who have the capacity to engineer (i.e. intentionally design with efficiency) a new trait which replaces relatively blind evolutionary mechanisms of selection for efficiency. By extension, the rate of technical progress amongst humans has also been exponentially increasing: as we discover more effective ways to do things, we also discover more effective ways to learn, e.g. language, numbers, written language, philosophy, scientific method, instruments of observation, tallying devices, mechanical calculators, computers; each of these major advances in our ability to account for information occurs increasingly close to the previous. Already within the past sixty years, life in the industrialized world has changed almost beyond recognition except for living memories from the first half of the 20th century. This pattern will culminate in unimaginable technological progress in the 21st century, leading to a singularity. Kurzweil elaborates on his views in his books The Age of Spiritual Machines and The Singularity Is Near. Limits of accelerating change In the natural sciences, it is typical that processes characterized by exponential acceleration in their initial stages go into the saturation phase. This clearly makes it possible to realize that if an increase with acceleration is observed over a certain period of time, this does not mean an endless continuation of this process. On the contrary, in many cases this means an early exit to the plateau of speed. The processes occurring in natural science allow us to suggest that the observed picture of accelerating scientific and technological progress, after some time (in physical processes, as a rule, is short) will be replaced by a slowdown and a complete stop. Despite the possible termination / attenuation of the acceleration of the progress of science and technology in the foreseeable future, progress itself, and as a result, social transformations, will not stop or even slow down - it will continue with the achieved (possibly huge) speed, which has become constant. Accelerating change may not be restricted to the Anthropocene Epoch, but a general and predictable developmental feature of the universe. 
The physical processes that generate an acceleration such as Moore's law are positive feedback loops giving rise to exponential or superexponential technological change. These dynamics lead to increasingly efficient and dense configurations of Space, Time, Energy, and Matter (STEM efficiency and density, or STEM "compression"). At the physical limit, this developmental process of accelerating change leads to black hole density organizations, a conclusion also reached by studies of the ultimate physical limits of computation in the universe. Applying this vision to the search for extraterrestrial intelligence leads to the idea that advanced intelligent life reconfigures itself into a black hole. Such advanced life forms would be interested in inner space, rather than outer space and interstellar expansion. They would thus in some way transcend reality, not be observable and it would be a solution to Fermi's paradox called the "transcension hypothesis". Another solution is that the black holes we observe could actually be interpreted as intelligent super-civilizations feeding on stars, or "stellivores". This dynamics of evolution and development is an invitation to study the universe itself as evolving, developing. If the universe is a kind of superorganism, it may possibly tend to reproduce, naturally or artificially, with intelligent life playing a role. Other estimates Dramatic changes in the rate of economic growth have occurred in the past because of some technological advancement. Based on population growth, the economy doubled every 250,000 years from the Paleolithic era until the Neolithic Revolution. The new agricultural economy doubled every 900 years, a remarkable increase. In the current era, beginning with the Industrial Revolution, the world's economic output doubles every fifteen years, sixty times faster than during the agricultural era. If the rise of superhuman intelligence causes a similar revolution, argues Robin Hanson, then one would expect the economy to double at least quarterly and possibly on a weekly basis. In his 1981 book Critical Path, futurist and inventor R. Buckminster Fuller estimated that if we took all the knowledge that mankind had accumulated and transmitted by the year One CE as equal to one unit of information, it probably took about 1500 years (or until the sixteenth century) for that amount of knowledge to double. The next doubling of knowledge from two to four 'knowledge units' took only 250 years, until about 1750 CE. By 1900, one hundred and fifty years later, knowledge had doubled again to 8 units. The observed speed at which information doubled was getting faster and faster. In modern times, exponential knowledge progressions therefore change at an ever-increasing rate. Depending on the progression, this tends to lead toward explosive growth at some point. A simple exponential curve that represents this accelerating change phenomenon could be modeled by a doubling function. This fast rate of knowledge doubling leads up to the basic proposed hypothesis of the technological singularity: the rate at which technology progression surpasses human biological evolution. Criticisms Both Theodore Modis and Jonathan Huebner have argued—each from different perspectives—that the rate of technological innovation has not only ceased to rise, but is actually now declining. See also Notes References TechCast Article Series, Al Leedahl, Accelerating Change History & Mathematics: Historical Dynamics and Development of Complex Societies. 
Edited by Peter Turchin, Leonid Grinin, Andrey Korotayev, and Victor C. de Munck. Moscow: KomKniga, 2006. Kurzweil, Ray (2001), Essay: The Law of Accelerating Returns . Further reading Link, Stefan J. Forging Global Fordism: Nazi Germany, Soviet Russia, and the Contest over the Industrial Order (2020) excerpt External links Accelerating Change, TechCast Article Series, Al Leedahl. Kurzweil's official site The Law of Accelerating Returns by Ray Kurzweil Is History Converging? Again? by Juergen Schmidhuber: singularity predictions as a side-effect of memory compression? Secular Cycles and Millennial Trends The Royal Mail Coach: Metaphor for a Changing World Evolution Futures studies Social change History of technology Sociological theories Technological change Linear theories
Entropy (order and disorder)
In thermodynamics, entropy is often associated with the amount of order or disorder in a thermodynamic system. This stems from Rudolf Clausius' 1862 assertion that any thermodynamic process always "admits to being reduced [reduction] to the alteration in some way or another of the arrangement of the constituent parts of the working body" and that internal work associated with these alterations is quantified energetically by a measure of "entropy" change, according to the following differential expression: dS = δQ/T, where δQ = motional energy ("heat") that is transferred reversibly to the system from the surroundings and T = the absolute temperature at which the transfer occurs. In the years to follow, Ludwig Boltzmann translated these 'alterations of arrangement' into a probabilistic view of order and disorder in gas-phase molecular systems. In the context of entropy, "perfect internal disorder" has often been regarded as describing thermodynamic equilibrium, but since the thermodynamic concept is so far from everyday thinking, the use of the term in physics and chemistry has caused much confusion and misunderstanding. In recent years, to interpret the concept of entropy, by further describing the 'alterations of arrangement', there has been a shift away from the words 'order' and 'disorder', to words such as 'spread' and 'dispersal'. History This "molecular ordering" entropy perspective traces its origins to molecular movement interpretations developed by Rudolf Clausius in the 1850s, particularly with his 1862 visual conception of molecular disgregation. Similarly, in 1859, after reading a paper on the diffusion of molecules by Clausius, Scottish physicist James Clerk Maxwell formulated the Maxwell distribution of molecular velocities, which gave the proportion of molecules having a certain velocity in a specific range. This was the first-ever statistical law in physics. In 1864, Ludwig Boltzmann, a young student in Vienna, came across Maxwell's paper and was so inspired by it that he spent much of his long and distinguished life developing the subject further. Later, Boltzmann, in efforts to develop a kinetic theory for the behavior of a gas, applied the laws of probability to Maxwell's and Clausius' molecular interpretation of entropy so as to begin to interpret entropy in terms of order and disorder. Similarly, in 1882 Hermann von Helmholtz used the word "Unordnung" (disorder) to describe entropy. Overview To highlight the fact that order and disorder are commonly understood to be measured in terms of entropy, below are current science encyclopedia and science dictionary definitions of entropy: A measure of the unavailability of a system's energy to do work; also a measure of disorder; the higher the entropy the greater the disorder. A measure of disorder; the higher the entropy the greater the disorder. In thermodynamics, a parameter representing the state of disorder of a system at the atomic, ionic, or molecular level; the greater the disorder the higher the entropy. A measure of disorder in the universe or of the unavailability of the energy in a system to do work. Entropy and disorder also have associations with equilibrium. Technically, entropy, from this perspective, is defined as a thermodynamic property which serves as a measure of how close a system is to equilibrium—that is, to perfect internal disorder. Likewise, the value of the entropy of a distribution of atoms and molecules in a thermodynamic system is a measure of the disorder in the arrangements of its particles.
In a stretched out piece of rubber, for example, the arrangement of the molecules of its structure has an "ordered" distribution and has zero entropy, while the "disordered" kinky distribution of the atoms and molecules in the rubber in the non-stretched state has positive entropy. Similarly, in a gas, the order is perfect and the measure of entropy of the system has its lowest value when all the molecules are in one place, whereas when more points are occupied the gas is all the more disorderly and the measure of the entropy of the system has its largest value. In systems ecology, as another example, the entropy of a collection of items comprising a system is defined as a measure of their disorder or equivalently the relative likelihood of the instantaneous configuration of the items. Moreover, according to theoretical ecologist and chemical engineer Robert Ulanowicz, "that entropy might provide a quantification of the heretofore subjective notion of disorder has spawned innumerable scientific and philosophical narratives." In particular, many biologists have taken to speaking in terms of the entropy of an organism, or about its antonym negentropy, as a measure of the structural order within an organism. The mathematical basis with respect to the association entropy has with order and disorder began, essentially, with the famous Boltzmann formula, , which relates entropy S to the number of possible states W in which a system can be found. As an example, consider a box that is divided into two sections. What is the probability that a certain number, or all of the particles, will be found in one section versus the other when the particles are randomly allocated to different places within the box? If you only have one particle, then that system of one particle can subsist in two states, one side of the box versus the other. If you have more than one particle, or define states as being further locational subdivisions of the box, the entropy is larger because the number of states is greater. The relationship between entropy, order, and disorder in the Boltzmann equation is so clear among physicists that according to the views of thermodynamic ecologists Sven Jorgensen and Yuri Svirezhev, "it is obvious that entropy is a measure of order or, most likely, disorder in the system." In this direction, the second law of thermodynamics, as famously enunciated by Rudolf Clausius in 1865, states that: Thus, if entropy is associated with disorder and if the entropy of the universe is headed towards maximal entropy, then many are often puzzled as to the nature of the "ordering" process and operation of evolution in relation to Clausius' most famous version of the second law, which states that the universe is headed towards maximal "disorder". In the recent 2003 book SYNC – the Emerging Science of Spontaneous Order by Steven Strogatz, for example, we find "Scientists have often been baffled by the existence of spontaneous order in the universe. The laws of thermodynamics seem to dictate the opposite, that nature should inexorably degenerate toward a state of greater disorder, greater entropy. Yet all around us we see magnificent structures—galaxies, cells, ecosystems, human beings—that have all somehow managed to assemble themselves." The common argument used to explain this is that, locally, entropy can be lowered by external action, e.g. 
solar heating action, and that this applies to machines, such as a refrigerator, where the entropy in the cold chamber is being reduced, to growing crystals, and to living organisms. This local increase in order is, however, only possible at the expense of an entropy increase in the surroundings; here more disorder must be created. This statement holds because living systems are open systems, in which heat, mass, and/or work may transfer into or out of the system. Unlike temperature, the putative entropy of a living system would drastically change if the organism were thermodynamically isolated. If an organism were in this type of "isolated" situation, its entropy would increase markedly as the once-living components of the organism decayed to an unrecognizable mass. Phase change Owing to these early developments, the typical example of entropy change ΔS is that associated with phase change. Solids, for example, which are typically ordered on the molecular scale, usually have smaller entropy than liquids; liquids have smaller entropy than gases, and colder gases have smaller entropy than hotter gases. Moreover, according to the third law of thermodynamics, at absolute zero temperature, crystalline structures are approximated to have perfect "order" and zero entropy. This correlation occurs because the numbers of different microscopic quantum energy states available to an ordered system are usually much smaller than the number of states available to a system that appears to be disordered. In his famous 1896 Lectures on Gas Theory, Boltzmann diagrams the structure of a solid body by postulating that each molecule in the body has a "rest position". According to Boltzmann, if it approaches a neighbor molecule it is repelled by it, but if it moves farther away there is an attraction. This, of course, was a revolutionary perspective in its time; many, during these years, did not believe in the existence of either atoms or molecules (see: history of the molecule). According to these early views, and others such as those developed by William Thomson, if energy in the form of heat is added to a solid, so as to make it into a liquid or a gas, a common depiction is that the ordering of the atoms and molecules becomes more random and chaotic with an increase in temperature. Thus, according to Boltzmann, owing to increases in thermal motion, whenever heat is added to a working substance, the rest positions of molecules will be pushed apart, the body will expand, and this will create more molar-disordered distributions and arrangements of molecules. These disordered arrangements, subsequently, correlate, via probability arguments, to an increase in the measure of entropy. Entropy-driven order Entropy has been historically, e.g. by Clausius and Helmholtz, associated with disorder. However, in common speech, order is used to describe organization, structural regularity, or form, like that found in a crystal compared with a gas. This commonplace notion of order is described quantitatively by Landau theory. In Landau theory, the development of order in the everyday sense coincides with the change in the value of a mathematical quantity, a so-called order parameter. An example of an order parameter for crystallization is "bond orientational order" describing the development of preferred directions (the crystallographic axes) in space. For many systems, phases with more structural (e.g.
crystalline) order exhibit less entropy than fluid phases under the same thermodynamic conditions. In these cases, labeling phases as ordered or disordered according to the relative amount of entropy (per the Clausius/Helmholtz notion of order/disorder) or via the existence of structural regularity (per the Landau notion of order/disorder) produces matching labels. However, there is a broad class of systems that manifest entropy-driven order, in which phases with organization or structural regularity, e.g. crystals, have higher entropy than structurally disordered (e.g. fluid) phases under the same thermodynamic conditions. In these systems phases that would be labeled as disordered by virtue of their higher entropy (in the sense of Clausius or Helmholtz) are ordered in both the everyday sense and in Landau theory. Under suitable thermodynamic conditions, entropy has been predicted or discovered to induce systems to form ordered liquid-crystals, crystals, and quasicrystals. In many systems, directional entropic forces drive this behavior. More recently, it has been shown that it is possible to precisely engineer particles for target ordered structures. Adiabatic demagnetization In the quest for ultra-cold temperatures, a temperature-lowering technique called adiabatic demagnetization is used, in which atomic entropy considerations that can be described in order-disorder terms are exploited. In this process, a sample of a solid such as chrome-alum salt, whose molecules are equivalent to tiny magnets, is inside an insulated enclosure cooled to a low temperature, typically 2 or 4 kelvins, with a strong magnetic field being applied to the container using a powerful external magnet, so that the tiny molecular magnets are aligned, forming a well-ordered "initial" state at that low temperature. This magnetic alignment means that the magnetic energy of each molecule is minimal. The external magnetic field is then reduced, a removal that is considered to be closely reversible. Following this reduction, the atomic magnets then assume random, less-ordered orientations, owing to thermal agitations, in the "final" state. The "disorder" and hence the entropy associated with the change in the atomic alignments has clearly increased. In terms of energy flow, the movement from a magnetically aligned state requires energy from the thermal motion of the molecules, converting thermal energy into magnetic energy. Yet, according to the second law of thermodynamics, because no heat can enter or leave the container, due to its adiabatic insulation, the system should exhibit no change in entropy, i.e. ΔS = 0. The increase in disorder associated with the randomizing directions of the atomic magnets, however, does represent an entropy increase. To compensate for this, the disorder (entropy) associated with the temperature of the specimen must decrease by the same amount. The temperature thus falls as a result of this process of thermal energy being converted into magnetic energy. If the magnetic field is then increased, the temperature rises and the magnetic salt has to be cooled again using a cold material such as liquid helium. Difficulties with the term "disorder" In recent years the long-standing use of the term "disorder" to discuss entropy has met with some criticism. Critics of the terminology state that entropy is not a measure of 'disorder' or 'chaos', but rather a measure of energy's diffusion or dispersal to more microstates.
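As a small numerical illustration of the counting picture behind the Boltzmann formula discussed above (an illustrative sketch only; the particle number N = 20 is an arbitrary choice), the entropy S = k ln W can be evaluated for different splits of distinguishable particles between the two halves of a box:

```python
# A small illustrative calculation (not from the article): Boltzmann-style
# counting S = k ln W for N distinguishable particles split between the two
# halves of a box.  W(n) is the number of arrangements with n particles on
# the left; the entropy is largest for the evenly spread-out split.
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K

def entropy_of_split(N, n_left):
    """S = k ln W for n_left of N particles in the left half of the box."""
    W = math.comb(N, n_left)      # number of microstates with this split
    return k_B * math.log(W)

N = 20
for n_left in (0, 5, 10):
    print(f"{n_left:2d} of {N} particles on the left: "
          f"W = {math.comb(N, n_left):6d}, S = {entropy_of_split(N, n_left):.3e} J/K")
```

The evenly spread split has by far the most microstates and hence the largest entropy, which is the sense in which "dispersal" and "disorder" coincide for this simple model.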
Shannon's use of the term 'entropy' in information theory refers to the most compressed, or least dispersed, amount of code needed to encompass the content of a signal. See also Entropy Entropy production Entropy rate History of entropy Entropy of mixing Entropy (information theory) Entropy (computing) Entropy (energy dispersal) Second law of thermodynamics Entropy (statistical thermodynamics) Entropy (classical thermodynamics) References External links Lambert, F. L. Entropy Sites — A Guide Lambert, F. L. Shuffled Cards, Messy Desks, and Disorderly Dorm Rooms – Examples of Entropy Increase? Nonsense! Journal of Chemical Education Thermodynamic entropy State functions
Ballistics
Ballistics is the field of mechanics concerned with the launching, flight behaviour and impact effects of projectiles, especially ranged weapon munitions such as bullets, unguided bombs, rockets or the like; the science or art of designing and accelerating projectiles so as to achieve a desired performance. A ballistic body is a free-moving body with momentum which can be subject to forces such as the forces exerted by pressurized gases from a gun barrel or a propelling nozzle, normal force by rifling, and gravity and air drag during flight. A ballistic missile is a missile that is guided only during the relatively brief initial phase of powered flight and the trajectory is subsequently governed by the laws of classical mechanics; in contrast to (for example) a cruise missile which is aerodynamically guided in powered flight like a fixed-wing aircraft. History and prehistory The earliest known ballistic projectiles were stones and spears, and the throwing stick. The oldest evidence of stone-tipped projectiles, which may or may not have been propelled by a bow (c.f. atlatl), dating to c. 280,000 years ago, were found in Ethiopia, present day-East-Africa. The oldest evidence of the use of bows to shoot arrows dates to about 10,000 years ago; it is based on pinewood arrows found in the Ahrensburg valley north of Hamburg. They had shallow grooves on the base, indicating that they were shot from a bow. The oldest bow so far recovered is about 8,000 years old, found in the Holmegård swamp in Denmark. Archery seems to have arrived in the Americas with the Arctic small tool tradition, about 4,500 years ago. The first devices identified as guns appeared in China around 1000 AD, and by the 12th century the technology was spreading through the rest of Asia, and into Europe by the 13th century. After millennia of empirical development, the discipline of ballistics was initially studied and developed by Italian mathematician Niccolò Tartaglia in 1531, although he continued to use segments of straight-line motion, conventions established by the Greek philosopher Aristotle and Albert of Saxony, but with the innovation that he connected the straight lines by a circular arc. Galileo established the principle of compound motion in 1638, using the principle to derive the parabolic form of the ballistic trajectory. Ballistics was put on a solid scientific and mathematical basis by Isaac Newton, with the publication of Philosophiæ Naturalis Principia Mathematica in 1687. This gave mathematical laws of motion and gravity which for the first time made it possible to successfully predict trajectories. The word ballistics comes from the Greek , meaning "to throw". Projectiles A projectile is any object projected into space (empty or not) by the exertion of a force. Although any object in motion through space (for example a thrown baseball) is a projectile, the term most commonly refers to a ranged weapon. Mathematical equations of motion are used to analyze projectile trajectory. Examples of projectiles include balls, arrows, bullets, artillery shells, wingless rockets, etc. Projectile launchers Throwing Throwing is the launching of a projectile by hand. Although some other animals can throw, humans are unusually good throwers due to their high dexterity and good timing capabilities, and it is believed that this is an evolved trait. Evidence of human throwing dates back 2 million years. 
The 90 mph throwing speed found in many athletes far exceeds the speed at which chimpanzees can throw things, which is about 20 mph. This ability reflects the ability of the human shoulder muscles and tendons to store elasticity until it is needed to propel an object. Sling A sling is a projectile weapon typically used to throw a blunt projectile such as a stone, clay or lead "sling-bullet". A sling has a small cradle or pouch in the middle of two lengths of cord. The sling stone is placed in the pouch. The middle finger or thumb is placed through a loop on the end of one cord, and a tab at the end of the other cord is placed between the thumb and forefinger. The sling is swung in an arc, and the tab released at a precise moment. This frees the projectile to fly to the target. Bow A bow is a flexible piece of material which shoots aerodynamic projectiles called arrows. The arrow is perhaps the first lethal projectile ever described in discussion of ballistics. A string joins the two ends and when the string is drawn back, the ends of the stick are flexed. When the string is released, the potential energy of the flexed stick is transformed into the velocity of the arrow. Archery is the art or sport of shooting arrows from bows. Catapult A catapult is a device used to launch a projectile a great distance without the aid of explosive devices – particularly various types of ancient and medieval siege engines. The catapult has been used since ancient times, because it was proven to be one of the most effective mechanisms during warfare. The word "catapult" comes from the Latin , which in turn comes from the Greek , itself from , "against” and , "to toss, to hurl". Catapults were invented by the ancient Greeks. Gun A gun is a normally tubular weapon or other device designed to discharge projectiles or other material. The projectile may be solid, liquid, gas, or energy and may be free, as with bullets and artillery shells, or captive as with Taser probes and whaling harpoons. The means of projection varies according to design but is usually effected by the action of gas pressure, either produced through the rapid combustion of a propellant or compressed and stored by mechanical means, operating on the projectile inside an open-ended tube in the fashion of a piston. The confined gas accelerates the movable projectile down the length of the tube imparting sufficient velocity to sustain the projectile's travel once the action of the gas ceases at the end of the tube or muzzle. Alternatively, acceleration via electromagnetic field generation may be employed in which case the tube may be dispensed with and a guide rail substituted. A weapons engineer or armourer who applies the scientific principles of ballistics to design cartridges are often called a ballistician. Rocket A rocket is a missile, spacecraft, aircraft or other vehicle that obtains thrust from a rocket engine. Rocket engine exhaust is formed entirely from propellants carried within the rocket before use. Rocket engines work by action and reaction. Rocket engines push rockets forward simply by throwing their exhaust backwards extremely fast. While comparatively inefficient for low speed use, rockets are relatively lightweight and powerful, capable of generating large accelerations and of attaining extremely high speeds with reasonable efficiency. Rockets are not reliant on the atmosphere and work very well in space. Rockets for military and recreational uses date back to at least 13th century China. 
Significant scientific, interplanetary and industrial use did not occur until the 20th century, when rocketry was the enabling technology for the Space Age, including setting foot on the Moon. Rockets are now used for fireworks, weaponry, ejection seats, launch vehicles for artificial satellites, human spaceflight, and space exploration. Chemical rockets are the most common type of high performance rocket and they typically create their exhaust by the combustion of rocket propellant. Chemical rockets store a large amount of energy in an easily released form, and can be very dangerous. However, careful design, testing, construction and use minimizes risks. Subfields Ballistics is often broken down into the following four categories: Internal ballistics the study of the processes originally accelerating projectiles Transition ballistics the study of projectiles as they transition to unpowered flight External ballistics the study of the passage of the projectile (the trajectory) in flight Terminal ballistics the study of the projectile and its effects as it ends its flight Internal ballistics Internal ballistics (also interior ballistics), a sub-field of ballistics, is the study of the propulsion of a projectile. In guns internal ballistics covers the time from the propellant's ignition until the projectile exits the gun barrel. The study of internal ballistics is important to designers and users of firearms of all types, from small-bore rifles and pistols, to high-tech artillery. For rocket propelled projectiles, internal ballistics covers the period during which a rocket engine is providing thrust. Transitional ballistics Transitional ballistics, also known as intermediate ballistics, is the study of a projectile's behavior from the time it leaves the muzzle until the pressure behind the projectile is equalized, so it lies between internal ballistics and external ballistics. External ballistics External ballistics is the part of the science of ballistics that deals with the behaviour of a non-powered projectile in flight. External ballistics is frequently associated with firearms, and deals with the unpowered free-flight phase of the bullet after it exits the gun barrel and before it hits the target, so it lies between transitional ballistics and terminal ballistics. However, external ballistics is also concerned with the free-flight of rockets and other projectiles, such as balls, arrows etc. Terminal ballistics Terminal ballistics is the study of the behavior and effects of a projectile when it hits its target. Terminal ballistics is relevant both for small caliber projectiles as well as for large caliber projectiles (fired from artillery). The study of extremely high velocity impacts is still very new and is as yet mostly applied to spacecraft design. Applications Forensic ballistics Forensic ballistics involves analysis of bullets and bullet impacts to determine information of use to a court or other part of a legal system. Separately from ballistics information, firearm and tool mark examinations ("ballistic fingerprinting") involve analyzing firearm, ammunition, and tool mark evidence in order to establish whether a certain firearm or tool was used in the commission of a crime. Astrodynamics Astrodynamics is the application of ballistics and celestial mechanics to the practical problems concerning the motion of rockets and other spacecraft. The motion of these objects is usually calculated from Newton's laws of motion and Newton's law of universal gravitation. 
It is a core discipline within space mission design and control.
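The trajectory calculations described in the external ballistics and astrodynamics sections above reduce, in the simplest case, to integrating Newton's second law for a point mass. The following minimal Python sketch is illustrative only (the function name, step size and lumped drag coefficient are arbitrary choices, not taken from any ballistics standard); it compares a drag-free launch, which recovers Galileo's parabola, with the same launch under quadratic air drag:

```python
import math

def trajectory(v0, angle_deg, drag_coeff=0.0, mass=1.0, dt=0.001, g=9.81):
    """Integrate a point-mass trajectory with optional quadratic air drag.

    drag_coeff is a lumped coefficient k in F_drag = -k * |v| * v (units kg/m).
    Returns (horizontal range, time of flight, maximum height).
    """
    vx = v0 * math.cos(math.radians(angle_deg))
    vy = v0 * math.sin(math.radians(angle_deg))
    x = y = t = 0.0
    max_h = 0.0
    while True:
        speed = math.hypot(vx, vy)
        ax = -(drag_coeff / mass) * speed * vx          # drag only
        ay = -g - (drag_coeff / mass) * speed * vy      # gravity plus drag
        vx += ax * dt
        vy += ay * dt
        x += vx * dt
        y += vy * dt
        t += dt
        max_h = max(max_h, y)
        if y < 0.0:                                     # projectile has returned to launch height
            return x, t, max_h

# Drag-free flight reproduces the parabolic trajectory; adding drag shortens the range.
print(trajectory(300.0, 45.0))                    # vacuum
print(trajectory(300.0, 45.0, drag_coeff=1e-4))   # with quadratic drag
```

A simple Euler step is used here for brevity; practical exterior-ballistics solvers use higher-order integrators and empirical drag models.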
Conjugate variables
Conjugate variables are pairs of variables mathematically defined in such a way that they become Fourier transform duals, or more generally are related through Pontryagin duality. The duality relations lead naturally to an uncertainty relation—in physics called the Heisenberg uncertainty principle—between them. In mathematical terms, conjugate variables are part of a symplectic basis, and the uncertainty relation corresponds to the symplectic form. Also, conjugate variables are related by Noether's theorem, which states that if the laws of physics are invariant with respect to a change in one of the conjugate variables, then the other conjugate variable will not change with time (i.e. it will be conserved). Conjugate variables are also widely used in thermodynamics.
Examples
There are many types of conjugate variables, depending on the type of work a certain system is doing (or is being subjected to). Examples of canonically conjugate variables include the following:
Time and frequency: the longer a musical note is sustained, the more precisely we know its frequency, but it spans a longer duration and is thus a more-distributed event or 'instant' in time. Conversely, a very short musical note becomes just a click, and so is more temporally localized, but one can't determine its frequency very accurately.
Doppler and range: the more we know about how far away a radar target is, the less we can know about the exact velocity of approach or retreat, and vice versa. In this case, the two-dimensional function of Doppler and range is known as a radar ambiguity function or radar ambiguity diagram.
Surface energy: γ dA (γ = surface tension; A = surface area).
Elastic stretching: F dL (F = elastic force; L = length stretched).
Energy and time: their product has the units of action, kg·m²·s⁻¹ (equivalently, joule-seconds).
Derivatives of action
In classical physics, the derivatives of action are conjugate variables to the quantity with respect to which one is differentiating. In quantum mechanics, these same pairs of variables are related by the Heisenberg uncertainty principle.
The energy of a particle at a certain event is the negative of the derivative of the action along a trajectory of that particle ending at that event with respect to the time of the event.
The linear momentum of a particle is the derivative of its action with respect to its position.
The angular momentum of a particle is the derivative of its action with respect to its orientation (angular position).
The mass-moment of a particle is the negative of the derivative of its action with respect to its rapidity.
The electric potential (φ, voltage) at an event is the negative of the derivative of the action of the electromagnetic field with respect to the density of (free) electric charge at that event.
The magnetic potential (A) at an event is the derivative of the action of the electromagnetic field with respect to the density of (free) electric current at that event.
The electric field (E) at an event is the derivative of the action of the electromagnetic field with respect to the electric polarization density at that event.
The magnetic induction (B) at an event is the derivative of the action of the electromagnetic field with respect to the magnetization at that event.
The Newtonian gravitational potential at an event is the negative of the derivative of the action of the Newtonian gravitation field with respect to the mass density at that event.
Quantum theory
In quantum mechanics, conjugate variables are realized as pairs of observables whose operators do not commute.
In conventional terminology, they are said to be incompatible observables. Consider, as an example, the measurable quantities given by position $x$ and momentum $p$. In the quantum-mechanical formalism, the two observables $x$ and $p$ correspond to operators $\hat{x}$ and $\hat{p}$, which necessarily satisfy the canonical commutation relation $[\hat{x}, \hat{p}] = \hat{x}\hat{p} - \hat{p}\hat{x} = i\hbar$. For every non-zero commutator of two operators, there exists an "uncertainty principle", which in our present example may be expressed in the form $\Delta x \, \Delta p \geq \hbar/2$. In this ill-defined notation, $\Delta x$ and $\Delta p$ denote "uncertainty" in the simultaneous specification of $x$ and $p$. A more precise, and statistically complete, statement involving the standard deviation $\sigma$ reads $\sigma_x \sigma_p \geq \hbar/2$. More generally, for any two observables $A$ and $B$ corresponding to operators $\hat{A}$ and $\hat{B}$, the generalized uncertainty principle is given by $\sigma_A^2 \, \sigma_B^2 \geq \left( \tfrac{1}{2i} \langle [\hat{A}, \hat{B}] \rangle \right)^2$. Now suppose we were to explicitly define two particular operators, assigning each a specific mathematical form, such that the pair satisfies the aforementioned commutation relation. It is important to remember that our particular "choice" of operators would merely reflect one of many equivalent, or isomorphic, representations of the general algebraic structure that fundamentally characterizes quantum mechanics. The generalization is provided formally by the Heisenberg Lie algebra, with a corresponding group called the Heisenberg group.
Fluid mechanics
In Hamiltonian fluid mechanics and quantum hydrodynamics, the action itself (or velocity potential) is the conjugate variable of the density (or probability density).
See also: Canonical coordinates
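To make the Fourier-duality and uncertainty statements above concrete, here is a small numerical sketch in plain NumPy (the grid size and Gaussian width are arbitrary illustrative choices). It builds a Gaussian wave packet, obtains its momentum-space amplitude with an FFT, and checks that the product of the position and wave-number spreads sits at the Heisenberg lower bound of 1/2 (i.e. ħ/2 in units where ħ = 1):

```python
import numpy as np

# Position grid and a normalized Gaussian wave packet.
N, L = 4096, 200.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]
sigma = 3.0
psi = np.exp(-x**2 / (4 * sigma**2))
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)

# Momentum-space amplitude via FFT; the k-grid follows NumPy's FFT convention.
phi = np.fft.fftshift(np.fft.fft(psi)) * dx / np.sqrt(2 * np.pi)
k = np.fft.fftshift(np.fft.fftfreq(N, d=dx)) * 2 * np.pi

def spread(grid, amplitude, step):
    """Standard deviation of |amplitude|^2 treated as a probability density on grid."""
    prob = np.abs(amplitude)**2
    prob /= np.sum(prob) * step
    mean = np.sum(grid * prob) * step
    return np.sqrt(np.sum((grid - mean)**2 * prob) * step)

sigma_x = spread(x, psi, dx)
sigma_k = spread(k, phi, k[1] - k[0])
print(sigma_x * sigma_k)   # ~0.5, the Heisenberg lower bound (hbar/2 with hbar = 1)
```

Any non-Gaussian packet evaluated the same way gives a product strictly above 1/2, since the Gaussian is the minimum-uncertainty state.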
Inertial measurement unit
An inertial measurement unit (IMU) is an electronic device that measures and reports a body's specific force, angular rate, and sometimes the orientation of the body, using a combination of accelerometers, gyroscopes, and sometimes magnetometers. When the magnetometer is included, IMUs are referred to as IMMUs. IMUs are typically used to maneuver modern vehicles including motorcycles, missiles, aircraft (an attitude and heading reference system), including uncrewed aerial vehicles (UAVs), among many others, and spacecraft, including satellites and landers. Recent developments allow for the production of IMU-enabled GPS devices. An IMU allows a GPS receiver to work when GPS-signals are unavailable, such as in tunnels, inside buildings, or when electronic interference is present. IMUs are used in VR headsets and smartphones, and also in motion tracked game controllers like the Wii Remote. Operational principles An inertial measurement unit works by detecting linear acceleration using one or more accelerometers and rotational rate using one or more gyroscopes. Some also include a magnetometer which is commonly used as a heading reference. Some IMUs, like Adafruit's 9-DOF IMU, include additional sensors like temperature. Typical configurations contain one accelerometer, gyro, and magnetometer per axis for each of the three principal axes: pitch, roll and yaw. Uses IMUs are often incorporated into Inertial Navigation Systems, which utilize the raw IMU measurements to calculate attitude, angular rates, linear velocity, and position relative to a global reference frame. The IMU equipped INS forms the backbone for the navigation and control of many commercial and military vehicles, such as crewed aircraft, missiles, ships, submarines, and satellites. IMUs are also essential components in the guidance and control of uncrewed systems such as UAVs, UGVs, and UUVs. Simpler versions of INSs termed Attitude and Heading Reference Systems utilize IMUs to calculate vehicle attitude with heading relative to magnetic north. The data collected from the IMU's sensors allows a computer to track craft's position, using a method known as dead reckoning. This data is usually presented in Euler vectors representing the angles of rotation in the three primary axis or a quaternion. In land vehicles, an IMU can be integrated into GPS based automotive navigation systems or vehicle tracking systems, giving the system a dead reckoning capability and the ability to gather as much accurate data as possible about the vehicle's current speed, turn rate, heading, inclination and acceleration, in combination with the vehicle's wheel speed sensor output and, if available, reverse gear signal, for purposes such as better traffic collision analysis. Besides navigational purposes, IMUs serve as orientation sensors in many consumer products. Almost all smartphones and tablets contain IMUs as orientation sensors. Fitness trackers and other wearables may also include IMUs to measure motion, such as running. IMUs also have the ability to determine developmental levels of individuals when in motion by identifying specificity and sensitivity of specific parameters associated with running. Some gaming systems such as the remote controls for the Nintendo Wii use IMUs to measure motion. Low-cost IMUs have enabled the proliferation of the consumer drone industry. They are also frequently used for sports technology (technique training), and animation applications. They are a competing technology for use in motion capture technology. 
An IMU is at the heart of the balancing technology used in the Segway Personal Transporter.
In navigation
In a navigation system, the data reported by the IMU is fed into a processor which calculates altitude, velocity and position. A typical implementation, referred to as a strapdown inertial system, integrates angular rate from the gyroscope to calculate angular position. This is fused with the gravity vector measured by the accelerometers in a Kalman filter to estimate attitude. The attitude estimate is used to transform acceleration measurements into an inertial reference frame (hence the term inertial navigation) where they are integrated once to get linear velocity, and twice to get linear position. For example, if an IMU installed in an aeroplane moving along a certain direction vector were to measure a plane's acceleration as 5 m/s² for 1 second, then after that 1 second the guidance computer would deduce that the plane must be traveling at 5 m/s and must be 2.5 m from its initial position (assuming v₀ = 0 and known starting position coordinates x₀, y₀, z₀). If combined with a mechanical paper map or a digital map archive (systems whose output is generally known as a moving map display since the guidance system position output is often taken as the reference point, resulting in a moving map), the guidance system could use this method to show a pilot where the plane is located geographically at a certain moment, as with a GPS navigation system, but without the need to communicate with or receive communication from any outside components such as satellites or land radio transponders, though external sources are still used to correct drift errors; and since the position update frequency allowed by inertial navigation systems can be higher, the vehicle motion on the map display can be perceived as smooth. This method of navigation is called dead reckoning. One of the earliest units was designed and built by Ford Instrument Company for the USAF to help aircraft navigate in flight without any input from outside the aircraft. Called the Ground-Position Indicator, once the pilot entered the aircraft's longitude and latitude at takeoff, the unit would show the pilot the longitude and latitude of the aircraft in relation to the ground. Positional tracking systems like GPS can be used to continually correct drift errors (an application of the Kalman filter). A major disadvantage of using IMUs for navigation is that they typically suffer from accumulated error. Because the guidance system is continually integrating acceleration with respect to time to calculate velocity and position (see dead reckoning), any measurement errors, however small, are accumulated over time. This leads to 'drift': an ever-increasing difference between where the system thinks it is located and the actual location. Due to integration, a constant error in acceleration results in a linear error growth in velocity and a quadratic error growth in position. A constant error in attitude rate (gyro) results in a quadratic error growth in velocity and a cubic error growth in position.
Performance
A very wide variety of IMUs exists, depending on application types, with performance ranging:
from 0.1°/s to 0.001°/h for gyroscopes
from 100 mg to 10 μg for accelerometers.
To get a rough idea, this means that, for a single, uncorrected accelerometer, the cheapest (at 100 mg) loses its ability to give 50-meter accuracy after around 10 seconds, while the best accelerometer (at 10 μg) loses its 50-meter accuracy after around 17 minutes. The accuracy of the inertial sensors inside a modern inertial measurement unit (IMU) has a more complex impact on the performance of an inertial navigation system (INS). Gyroscope and accelerometer sensor behavior is often represented by a model based on the following errors, assuming they have the proper measurement range and bandwidth: Offset error: this error can be split between stability performance (drift while the sensor remains in invariant conditions) and repeatability (error between two measurements in similar conditions separated by varied conditions in between) Scale factor error: errors on first order sensitivity due to non repeatabilities and nonlinearities Misalignment error: due to imperfect mechanical mounting Cross axis sensitivity: parasitic measurement induced by solicitation along an axis orthogonal to sensor axis Noise: dependent on desired dynamic performance Environment sensitivity: primarily sensitivity to thermal gradients and accelerations All these errors depend on various physical phenomena specific to each sensor technology. Depending on the targeted applications and to be able to make the proper sensor choice, it is very important to consider the needs regarding stability, repeatability, and environment sensitivity (mainly thermal and mechanical environments), on both short and long terms. Targeted performance for applications is, most of the time, better than a sensor's absolute performance. However, sensor performance is repeatable over time, with more or less accuracy, and therefore can be assessed and compensated to enhance its performance. This real-time performance enhancement is based on both sensors and IMU models. Complexity for these models will then be chosen according to the needed performance and the type of application considered. Ability to define this model is part of sensors and IMU manufacturers know-how. Sensors and IMU models are computed in factories through a dedicated calibration sequence using multi-axis turntables and climatic chambers. They can either be computed for each individual product or generic for the whole production. Calibration will typically improve a sensor's raw performance by at least two decades. Assembly High performance IMUs, or IMUs designed to operate under harsh conditions, are very often suspended by shock absorbers. These shock absorbers are required to master three effects: reduce sensor errors due to mechanical environment solicitations protect sensors as they can be damaged by shocks or vibrations contain parasitic IMU movement within a limited bandwidth, where processing will be able to compensate for them. Suspended IMUs can offer very high performance, even when submitted to harsh environments. However, to reach such performance, it is necessary to compensate for three main resulting behaviors: coning: a parasitic effect induced by two orthogonal rotations sculling: a parasitic effect induced by an acceleration orthogonal to a rotation centrifugal accelerations effects. Decreasing these errors tends to push IMU designers to increase processing frequencies, which becomes easier using recent digital technologies. 
However, developing algorithms able to cancel these errors requires deep inertial-navigation knowledge and close familiarity with sensor and IMU design. On the other hand, while suspension can improve IMU performance, it comes at a cost in size and mass. A wireless IMU is known as a WIMU.
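The quadratic error growth described above can be illustrated with a few lines of arithmetic. The sketch below is a deliberate simplification (it treats a constant accelerometer bias as the only error source and ignores attitude and gyro errors), but it reproduces the rough figures quoted in the Performance section for 100 mg and 10 μg accelerometers:

```python
import math

G = 9.80665  # standard gravity, m/s^2

def time_to_position_error(bias_g, error_m=50.0):
    """Time for an uncorrected constant accelerometer bias (given in g) to
    accumulate a position error of error_m, using x_err = 0.5 * b * t**2."""
    bias = bias_g * G
    return math.sqrt(2.0 * error_m / bias)

print(time_to_position_error(100e-3))  # ~10 s for a 100 mg-class accelerometer
print(time_to_position_error(10e-6))   # ~1010 s (~17 min) for a 10 ug-class accelerometer
```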
Mechanical, electrical, and plumbing
Mechanical, electrical and plumbing (MEP) refers to the installation of services which provide a functional and comfortable space for the building occupants. In residential and commercial buildings, these elements are often designed by specialized MEP engineers. MEP's design is important for planning, decision-making, accurate documentation, performance- and cost-estimation, construction, and operating/maintaining the resulting facilities. MEP specifically encompasses the in-depth design and selection of these systems, as opposed to a tradesperson simply installing equipment. For example, a plumber may select and install a commercial hot water system based on common practice and regulatory codes. A team of MEP engineers will research the best design according to the principles of engineering, and supply installers with the specifications they develop. As a result, engineers working in the MEP field must understand a broad range of disciplines, including dynamics, mechanics, fluids, thermodynamics, heat transfer, chemistry, electricity, and computers. Design and documentation As with other aspect of buildings, MEP drafting, design and documentation were traditionally done manually. Computer-aided design has some advantages over this, and often incorporates 3D modeling which is otherwise impractical. Building information modeling provides holistic design and parametric change management of the MEP design. Maintaining documentation of MEP services may also require the use of a geographical information system or asset management system. Components of MEP Mechanical The mechanical component of MEP is an important superset of HVAC services. Thus, it incorporates the control of environmental factors (psychrometrics), either for human comfort or for the operation of machines. Heating, cooling, ventilation and exhaustion are all key areas to consider in the mechanical planning of a building. In special cases, water cooling/heating, humidity control or air filtration may also be incorporated. For example, Google's data centres make extensive use of heat exchangers to cool their servers. This system creates an additional overhead of 12% of initial energy consumption. This is a vast improvement from traditional active cooling units which have an overhead of 30-70%. However, this novel and complicated method requires careful and expensive planning from mechanical engineers, who must work closely with the engineers designing the electrical and plumbing systems for a building. A major concern for people designing HVAC systems is the efficiency, i.e., the consumption of electricity and water. Efficiency is optimised by changing the design of the system on both large and small scales. Heat pumps and evaporative cooling are efficient alternatives to traditional systems, however they may be more expensive or harder to implement. The job of an MEP engineer is to compare these requirements and choose the most suitable design for the task. Electricians and plumbers usually have little to do with each other, other than keeping services out of each other's way. The introduction of mechanical systems requires the integration of the two so that plumbing may be controlled by electrics and electrics may be serviced by plumbing. Thus, the mechanical component of MEP unites the three fields. Electrical Alternating current Virtually all modern buildings integrate some form of AC mains electricity for powering domestic and everyday appliances. 
Such systems typically run between 100 and 500 volts, however their classifications and specifications vary greatly by geographical area (see Mains electricity by country). Mains power is typically distributed through insulated copper wire concealed in the building's subfloor, wall cavities and ceiling cavity. These cables are terminated into sockets mounted to walls, floors or ceilings. Similar techniques are used for lights ("luminaires"), however the two services are usually separated into different circuits with different protection devices at the distribution board. Whilst the wiring for lighting is exclusively managed by electricians, the selection of luminaires or light fittings may be left to building owners or interior designers in some cases. Three-phase power is commonly used for industrial machines, particularly motors and high-load devices. Provision for three-phase power must be considered early in the design stage of a building because it has different regulations to domestic power supplies, and may affect aspects such as cable routes, switchboard location, large external transformers and connection from the street. Information technology Advances in technology and the advent of computer networking have led to the emergence of a new facet of electrical systems incorporating data and telecommunications wiring. As of 2019, several derivative acronyms have been suggested for this area, including MEPIT (mechanical, electrical, plumbing and information technology) and MEPI (an abbreviation of MEPIT). Equivalent names are "low voltage", "data", and "telecommunications" or "comms". A low voltage system used for telecommunications networking is not the same as a low voltage network. The information technology sector of electrical installations is used for computer networking, telephones, television, security systems, audio distribution, healthcare systems, robotics, and more. These services are typically installed by different tradespeople to the higher-voltage mains wiring and are often contracted out to very specific trades, e.g. security installers or audio integrators. Regulations on low voltage wiring are often less strict or less important to human safety. As a result, it is more common for this wiring to be installed or serviced by competent amateurs, despite constant attempts from the electrical industry to discourage this. Plumbing Competent design of plumbing systems is necessary to prevent conflicts with other trades, and to avoid expensive rework or surplus supplies. The scope of standard residential plumbing usually covers mains pressure potable water, heated water (in conjunction with mechanical and/or electrical engineers), sewerage, stormwater, natural gas, and sometimes rainwater collection and storage. In commercial environments, these distribution systems expand to accommodate many more users, as well as the addition of other plumbing services such as hydroponics, irrigation, fuels, oxygen, vacuum/compressed air, solids transfer, and more. Plumbing systems also service air distribution/control, and therefore contribute to the mechanical part of MEP. Plumbing for HVAC systems involves the transfer of coolant, pressurized air, water, and occasionally other substances. Ducting for air transfer may also be consider plumbing, but is generally installed by different tradespeople. 
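As a rough illustration of the cooling-overhead comparison given in the Mechanical section above, the sketch below computes total facility power for an assumed 1 MW of server load (the load figure is purely illustrative). The resulting ratio of total to IT power is similar in spirit to, though not identical with, the industry's power usage effectiveness (PUE) metric:

```python
def total_power(it_load_kw, cooling_overhead):
    """Facility power as IT load plus cooling overhead (fraction of IT load)."""
    return it_load_kw * (1.0 + cooling_overhead)

it_load = 1000.0  # kW of server load, illustrative figure only
for label, overhead in [("heat-exchanger design", 0.12),
                        ("traditional active cooling, low end", 0.30),
                        ("traditional active cooling, high end", 0.70)]:
    total = total_power(it_load, overhead)
    print(f"{label}: {total:.0f} kW total, ratio {total / it_load:.2f}")
```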
See also: Architectural engineering; Drainage; Electrical wiring; Heating, ventilation, and air conditioning; Plumbing; Telecommunication; Fire protection engineering
Photon
A photon is an elementary particle that is a quantum of the electromagnetic field, including electromagnetic radiation such as light and radio waves, and the force carrier for the electromagnetic force. Photons are massless particles that always move at the speed of light measured in vacuum. The photon belongs to the class of boson particles. As with other elementary particles, photons are best explained by quantum mechanics and exhibit wave–particle duality, their behavior featuring properties of both waves and particles. The modern photon concept originated during the first two decades of the 20th century with the work of Albert Einstein, who built upon the research of Max Planck. While Planck was trying to explain how matter and electromagnetic radiation could be in thermal equilibrium with one another, he proposed that the energy stored within a material object should be regarded as composed of an integer number of discrete, equal-sized parts. To explain the photoelectric effect, Einstein introduced the idea that light itself is made of discrete units of energy. In 1926, Gilbert N. Lewis popularized the term photon for these energy units. Subsequently, many other experiments validated Einstein's approach. In the Standard Model of particle physics, photons and other elementary particles are described as a necessary consequence of physical laws having a certain symmetry at every point in spacetime. The intrinsic properties of particles, such as charge, mass, and spin, are determined by gauge symmetry. The photon concept has led to momentous advances in experimental and theoretical physics, including lasers, Bose–Einstein condensation, quantum field theory, and the probabilistic interpretation of quantum mechanics. It has been applied to photochemistry, high-resolution microscopy, and measurements of molecular distances. Moreover, photons have been studied as elements of quantum computers, and for applications in optical imaging and optical communication such as quantum cryptography. Nomenclature The word quanta (singular quantum, Latin for how much) was used before 1900 to mean particles or amounts of different quantities, including electricity. In 1900, the German physicist Max Planck was studying black-body radiation, and he suggested that the experimental observations, specifically at shorter wavelengths, would be explained if the energy stored within a molecule was a "discrete quantity composed of an integral number of finite equal parts", which he called "energy elements". In 1905, Albert Einstein published a paper in which he proposed that many light-related phenomena—including black-body radiation and the photoelectric effect—would be better explained by modelling electromagnetic waves as consisting of spatially localized, discrete wave-packets. He called such a wave-packet a light quantum (German: ein Lichtquant). The name photon derives from the Greek word for light, (transliterated phôs). Arthur Compton used photon in 1928, referring to Gilbert N. Lewis, who coined the term in a letter to Nature on 18 December 1926. The same name was used earlier but was never widely adopted before Lewis: in 1916 by the American physicist and psychologist Leonard T. Troland, in 1921 by the Irish physicist John Joly, in 1924 by the French physiologist René Wurmser (1890–1993), and in 1926 by the French physicist Frithiof Wolfers (1891–1971). 
The name was suggested initially as a unit related to the illumination of the eye and the resulting sensation of light and was used later in a physiological context. Although Wolfers's and Lewis's theories were contradicted by many experiments and never accepted, the new name was adopted by most physicists very soon after Compton used it. In physics, a photon is usually denoted by the symbol (the Greek letter gamma). This symbol for the photon probably derives from gamma rays, which were discovered in 1900 by Paul Villard, named by Ernest Rutherford in 1903, and shown to be a form of electromagnetic radiation in 1914 by Rutherford and Edward Andrade. In chemistry and optical engineering, photons are usually symbolized by , which is the photon energy, where is the Planck constant and the Greek letter (nu) is the photon's frequency. Physical properties The photon has no electric charge, is generally considered to have zero rest mass and is a stable particle. The experimental upper limit on the photon mass is very small, on the order of 10−50 kg; its lifetime would be more than 1018 years. For comparison the age of the universe is about years. In a vacuum, a photon has two possible polarization states. The photon is the gauge boson for electromagnetism, and therefore all other quantum numbers of the photon (such as lepton number, baryon number, and flavour quantum numbers) are zero. Also, the photon obeys Bose–Einstein statistics, and not Fermi–Dirac statistics. That is, they do not obey the Pauli exclusion principle and more than one can occupy the same bound quantum state. Photons are emitted in many natural processes. For example, when a charge is accelerated it emits synchrotron radiation. During a molecular, atomic or nuclear transition to a lower energy level, photons of various energy will be emitted, ranging from radio waves to gamma rays. Photons can also be emitted when a particle and its corresponding antiparticle are annihilated (for example, electron–positron annihilation). Relativistic energy and momentum In empty space, the photon moves at (the speed of light) and its energy and momentum are related by , where is the magnitude of the momentum vector . This derives from the following relativistic relation, with : The energy and momentum of a photon depend only on its frequency or inversely, its wavelength: where is the wave vector, where   is the wave number, and   is the angular frequency, and   is the reduced Planck constant. Since points in the direction of the photon's propagation, the magnitude of its momentum is Polarization and spin angular momentum The photon also carries spin angular momentum, which is related to photon polarization. (Beams of light also exhibit properties described as orbital angular momentum of light). The angular momentum of the photon has two possible values, either or . These two possible values correspond to the two possible pure states of circular polarization. Collections of photons in a light beam may have mixtures of these two values; a linearly polarized light beam will act as if it were composed of equal numbers of the two possible angular momenta. The spin angular momentum of light does not depend on its frequency, and was experimentally verified by C. V. Raman and S. Bhagavantam in 1931. Antiparticle annihilation The collision of a particle with its antiparticle can create photons. 
In free space at least two photons must be created since, in the center of momentum frame, the colliding antiparticles have no net momentum, whereas a single photon always has momentum (determined by the photon's frequency or wavelength, which cannot be zero). Hence, conservation of momentum (or equivalently, translational invariance) requires that at least two photons are created, with zero net momentum. The energy of the two photons, or, equivalently, their frequency, may be determined from conservation of four-momentum. Seen another way, the photon can be considered as its own antiparticle (thus an "antiphoton" is simply a normal photon with opposite momentum, equal polarization, and 180° out of phase). The reverse process, pair production, is the dominant mechanism by which high-energy photons such as gamma rays lose energy while passing through matter. That process is the reverse of "annihilation to one photon" allowed in the electric field of an atomic nucleus. The classical formulae for the energy and momentum of electromagnetic radiation can be re-expressed in terms of photon events. For example, the pressure of electromagnetic radiation on an object derives from the transfer of photon momentum per unit time and unit area to that object, since pressure is force per unit area and force is the change in momentum per unit time. Experimental checks on photon mass Current commonly accepted physical theories imply or assume the photon to be strictly massless. If photons were not purely massless, their speeds would vary with frequency, with lower-energy (redder) photons moving slightly slower than higher-energy photons. Relativity would be unaffected by this; the so-called speed of light, c, would then not be the actual speed at which light moves, but a constant of nature which is the upper bound on speed that any object could theoretically attain in spacetime. Thus, it would still be the speed of spacetime ripples (gravitational waves and gravitons), but it would not be the speed of photons. If a photon did have non-zero mass, there would be other effects as well. Coulomb's law would be modified and the electromagnetic field would have an extra physical degree of freedom. These effects yield more sensitive experimental probes of the photon mass than the frequency dependence of the speed of light. If Coulomb's law is not exactly valid, then that would allow the presence of an electric field to exist within a hollow conductor when it is subjected to an external electric field. This provides a means for precision tests of Coulomb's law. A null result of such an experiment has set a limit of . Sharper upper limits on the mass of light have been obtained in experiments designed to detect effects caused by the galactic vector potential. Although the galactic vector potential is large because the galactic magnetic field exists on great length scales, only the magnetic field would be observable if the photon is massless. In the case that the photon has mass, the mass term mAA would affect the galactic plasma. The fact that no such effects are seen implies an upper bound on the photon mass of . The galactic vector potential can also be probed directly by measuring the torque exerted on a magnetized ring. Such methods were used to obtain the sharper upper limit of (the equivalent of ) given by the Particle Data Group. These sharp limits from the non-observation of the effects caused by the galactic vector potential have been shown to be model-dependent. 
If the photon mass is generated via the Higgs mechanism then the upper limit of from the test of Coulomb's law is valid. Historical development In most theories up to the eighteenth century, light was pictured as being made of particles. Since particle models cannot easily account for the refraction, diffraction and birefringence of light, wave theories of light were proposed by René Descartes (1637), Robert Hooke (1665), and Christiaan Huygens (1678); however, particle models remained dominant, chiefly due to the influence of Isaac Newton. In the early 19th century, Thomas Young and August Fresnel clearly demonstrated the interference and diffraction of light, and by 1850 wave models were generally accepted. James Clerk Maxwell's 1865 prediction that light was an electromagnetic wave – which was confirmed experimentally in 1888 by Heinrich Hertz's detection of radio waves – seemed to be the final blow to particle models of light. The Maxwell wave theory, however, does not account for all properties of light. The Maxwell theory predicts that the energy of a light wave depends only on its intensity, not on its frequency; nevertheless, several independent types of experiments show that the energy imparted by light to atoms depends only on the light's frequency, not on its intensity. For example, some chemical reactions are provoked only by light of frequency higher than a certain threshold; light of frequency lower than the threshold, no matter how intense, does not initiate the reaction. Similarly, electrons can be ejected from a metal plate by shining light of sufficiently high frequency on it (the photoelectric effect); the energy of the ejected electron is related only to the light's frequency, not to its intensity. At the same time, investigations of black-body radiation carried out over four decades (1860–1900) by various researchers culminated in Max Planck's hypothesis that the energy of any system that absorbs or emits electromagnetic radiation of frequency is an integer multiple of an energy quantum As shown by Albert Einstein, some form of energy quantization must be assumed to account for the thermal equilibrium observed between matter and electromagnetic radiation; for this explanation of the photoelectric effect, Einstein received the 1921 Nobel Prize in physics. Since the Maxwell theory of light allows for all possible energies of electromagnetic radiation, most physicists assumed initially that the energy quantization resulted from some unknown constraint on the matter that absorbs or emits the radiation. In 1905, Einstein was the first to propose that energy quantization was a property of electromagnetic radiation itself. Although he accepted the validity of Maxwell's theory, Einstein pointed out that many anomalous experiments could be explained if the energy of a Maxwellian light wave were localized into point-like quanta that move independently of one another, even if the wave itself is spread continuously over space. In 1909 and 1916, Einstein showed that, if Planck's law regarding black-body radiation is accepted, the energy quanta must also carry momentum making them full-fledged particles. This photon momentum was observed experimentally by Arthur Compton, for which he received the Nobel Prize in 1927. The pivotal question then, was how to unify Maxwell's wave theory of light with its experimentally observed particle nature. 
The answer to this question occupied Albert Einstein for the rest of his life, and was solved in quantum electrodynamics and its successor, the Standard Model. (See and , below.) Einstein's 1905 predictions were verified experimentally in several ways in the first two decades of the 20th century, as recounted in Robert Millikan's Nobel lecture. However, before Compton's experiment showed that photons carried momentum proportional to their wave number (1922), most physicists were reluctant to believe that electromagnetic radiation itself might be particulate. (See, for example, the Nobel lectures of Wien, Planck and Millikan.) Instead, there was a widespread belief that energy quantization resulted from some unknown constraint on the matter that absorbed or emitted radiation. Attitudes changed over time. In part, the change can be traced to experiments such as those revealing Compton scattering, where it was much more difficult not to ascribe quantization to light itself to explain the observed results. Even after Compton's experiment, Niels Bohr, Hendrik Kramers and John Slater made one last attempt to preserve the Maxwellian continuous electromagnetic field model of light, the so-called BKS theory. An important feature of the BKS theory is how it treated the conservation of energy and the conservation of momentum. In the BKS theory, energy and momentum are only conserved on the average across many interactions between matter and radiation. However, refined Compton experiments showed that the conservation laws hold for individual interactions. Accordingly, Bohr and his co-workers gave their model "as honorable a funeral as possible". Nevertheless, the failures of the BKS model inspired Werner Heisenberg in his development of matrix mechanics. A few physicists persisted in developing semiclassical models in which electromagnetic radiation is not quantized, but matter appears to obey the laws of quantum mechanics. Although the evidence from chemical and physical experiments for the existence of photons was overwhelming by the 1970s, this evidence could not be considered as absolutely definitive; since it relied on the interaction of light with matter, and a sufficiently complete theory of matter could in principle account for the evidence. Nevertheless, all semiclassical theories were refuted definitively in the 1970s and 1980s by photon-correlation experiments. Hence, Einstein's hypothesis that quantization is a property of light itself is considered to be proven. Wave–particle duality and uncertainty principles Photons obey the laws of quantum mechanics, and so their behavior has both wave-like and particle-like aspects. When a photon is detected by a measuring instrument, it is registered as a single, particulate unit. However, the probability of detecting a photon is calculated by equations that describe waves. This combination of aspects is known as wave–particle duality. For example, the probability distribution for the location at which a photon might be detected displays clearly wave-like phenomena such as diffraction and interference. A single photon passing through a double slit has its energy received at a point on the screen with a probability distribution given by its interference pattern determined by Maxwell's wave equations. However, experiments confirm that the photon is not a short pulse of electromagnetic radiation; a photon's Maxwell waves will diffract, but photon energy does not spread out as it propagates, nor does this energy divide when it encounters a beam splitter. 
Rather, the received photon acts like a point-like particle since it is absorbed or emitted as a whole by arbitrarily small systems, including systems much smaller than its wavelength, such as an atomic nucleus (≈10−15 m across) or even the point-like electron. While many introductory texts treat photons using the mathematical techniques of non-relativistic quantum mechanics, this is in some ways an awkward oversimplification, as photons are by nature intrinsically relativistic. Because photons have zero rest mass, no wave function defined for a photon can have all the properties familiar from wave functions in non-relativistic quantum mechanics. In order to avoid these difficulties, physicists employ the second-quantized theory of photons described below, quantum electrodynamics, in which photons are quantized excitations of electromagnetic modes. Another difficulty is finding the proper analogue for the uncertainty principle, an idea frequently attributed to Heisenberg, who introduced the concept in analyzing a thought experiment involving an electron and a high-energy photon. However, Heisenberg did not give precise mathematical definitions of what the "uncertainty" in these measurements meant. The precise mathematical statement of the position–momentum uncertainty principle is due to Kennard, Pauli, and Weyl. The uncertainty principle applies to situations where an experimenter has a choice of measuring either one of two "canonically conjugate" quantities, like the position and the momentum of a particle. According to the uncertainty principle, no matter how the particle is prepared, it is not possible to make a precise prediction for both of the two alternative measurements: if the outcome of the position measurement is made more certain, the outcome of the momentum measurement becomes less so, and vice versa. A coherent state minimizes the overall uncertainty as far as quantum mechanics allows. Quantum optics makes use of coherent states for modes of the electromagnetic field. There is a tradeoff, reminiscent of the position–momentum uncertainty relation, between measurements of an electromagnetic wave's amplitude and its phase. This is sometimes informally expressed in terms of the uncertainty in the number of photons present in the electromagnetic wave, , and the uncertainty in the phase of the wave, . However, this cannot be an uncertainty relation of the Kennard–Pauli–Weyl type, since unlike position and momentum, the phase cannot be represented by a Hermitian operator. Bose–Einstein model of a photon gas In 1924, Satyendra Nath Bose derived Planck's law of black-body radiation without using any electromagnetism, but rather by using a modification of coarse-grained counting of phase space. Einstein showed that this modification is equivalent to assuming that photons are rigorously identical and that it implied a "mysterious non-local interaction", now understood as the requirement for a symmetric quantum mechanical state. This work led to the concept of coherent states and the development of the laser. In the same papers, Einstein extended Bose's formalism to material particles (bosons) and predicted that they would condense into their lowest quantum state at low enough temperatures; this Bose–Einstein condensation was observed experimentally in 1995. It was later used by Lene Hau to slow, and then completely stop, light in 1999 and 2001. The modern view on this is that photons are, by virtue of their integer spin, bosons (as opposed to fermions with half-integer spin). 
By the spin-statistics theorem, all bosons obey Bose–Einstein statistics (whereas all fermions obey Fermi–Dirac statistics). Stimulated and spontaneous emission In 1916, Albert Einstein showed that Planck's radiation law could be derived from a semi-classical, statistical treatment of photons and atoms, which implies a link between the rates at which atoms emit and absorb photons. The condition follows from the assumption that functions of the emission and absorption of radiation by the atoms are independent of each other, and that thermal equilibrium is made by way of the radiation's interaction with the atoms. Consider a cavity in thermal equilibrium with all parts of itself and filled with electromagnetic radiation and that the atoms can emit and absorb that radiation. Thermal equilibrium requires that the energy density of photons with frequency (which is proportional to their number density) is, on average, constant in time; hence, the rate at which photons of any particular frequency are emitted must equal the rate at which they are absorbed. Einstein began by postulating simple proportionality relations for the different reaction rates involved. In his model, the rate for a system to absorb a photon of frequency and transition from a lower energy to a higher energy is proportional to the number of atoms with energy and to the energy density of ambient photons of that frequency, where is the rate constant for absorption. For the reverse process, there are two possibilities: spontaneous emission of a photon, or the emission of a photon initiated by the interaction of the atom with a passing photon and the return of the atom to the lower-energy state. Following Einstein's approach, the corresponding rate for the emission of photons of frequency and transition from a higher energy to a lower energy is where is the rate constant for emitting a photon spontaneously, and is the rate constant for emissions in response to ambient photons (induced or stimulated emission). In thermodynamic equilibrium, the number of atoms in state and those in state must, on average, be constant; hence, the rates and must be equal. Also, by arguments analogous to the derivation of Boltzmann statistics, the ratio of and is where and are the degeneracy of the state and that of , respectively, and their energies, the Boltzmann constant and the system's temperature. From this, it is readily derived that and The and are collectively known as the Einstein coefficients. Einstein could not fully justify his rate equations, but claimed that it should be possible to calculate the coefficients , and once physicists had obtained "mechanics and electrodynamics modified to accommodate the quantum hypothesis". Not long thereafter, in 1926, Paul Dirac derived the rate constants by using a semiclassical approach, and, in 1927, succeeded in deriving all the rate constants from first principles within the framework of quantum theory. Dirac's work was the foundation of quantum electrodynamics, i.e., the quantization of the electromagnetic field itself. Dirac's approach is also called second quantization or quantum field theory; earlier quantum mechanical treatments only treat material particles as quantum mechanical, not the electromagnetic field. Einstein was troubled by the fact that his theory seemed incomplete, since it did not determine the direction of a spontaneously emitted photon. 
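For reference, the absorption and emission rates sketched in the paragraph above can be written out explicitly. The level labels 1 (lower) and 2 (upper) and the symbols below are the conventional textbook ones, reconstructed here rather than quoted from this article:

```latex
% Absorption and emission rates (level 1 = lower, level 2 = upper,
% ambient spectral energy density \rho(\nu))
R_{\text{abs}} = B_{12} N_1 \rho(\nu), \qquad
R_{\text{em}} = A_{21} N_2 + B_{21} N_2 \rho(\nu)

% Equilibrium population ratio (degeneracies g_i, energies E_i,
% Boltzmann constant k_B, temperature T)
\frac{N_1}{N_2} = \frac{g_1}{g_2} \, e^{(E_2 - E_1)/k_B T}

% Relations among the Einstein coefficients that follow from
% R_abs = R_em together with Planck's law
g_1 B_{12} = g_2 B_{21}, \qquad A_{21} = \frac{8 \pi h \nu^3}{c^3} B_{21}
```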
A probabilistic nature of light-particle motion was first considered by Newton in his treatment of birefringence and, more generally, of the splitting of light beams at interfaces into a transmitted beam and a reflected beam. Newton hypothesized that hidden variables in the light particle determined which of the two paths a single photon would take. Similarly, Einstein hoped for a more complete theory that would leave nothing to chance, beginning his separation from quantum mechanics. Ironically, Max Born's probabilistic interpretation of the wave function was inspired by Einstein's later work searching for a more complete theory. Quantum field theory Quantization of the electromagnetic field In 1910, Peter Debye derived Planck's law of black-body radiation from a relatively simple assumption. He decomposed the electromagnetic field in a cavity into its Fourier modes, and assumed that the energy in any mode was an integer multiple of , where is the frequency of the electromagnetic mode. Planck's law of black-body radiation follows immediately as a geometric sum. However, Debye's approach failed to give the correct formula for the energy fluctuations of black-body radiation, which were derived by Einstein in 1909. In 1925, Born, Heisenberg and Jordan reinterpreted Debye's concept in a key way. As may be shown classically, the Fourier modes of the electromagnetic field—a complete set of electromagnetic plane waves indexed by their wave vector k and polarization state—are equivalent to a set of uncoupled simple harmonic oscillators. Treated quantum mechanically, the energy levels of such oscillators are known to be , where is the oscillator frequency. The key new step was to identify an electromagnetic mode with energy as a state with photons, each of energy . This approach gives the correct energy fluctuation formula. Dirac took this one step further. He treated the interaction between a charge and an electromagnetic field as a small perturbation that induces transitions in the photon states, changing the numbers of photons in the modes, while conserving energy and momentum overall. Dirac was able to derive Einstein's and coefficients from first principles, and showed that the Bose–Einstein statistics of photons is a natural consequence of quantizing the electromagnetic field correctly (Bose's reasoning went in the opposite direction; he derived Planck's law of black-body radiation by assuming B–E statistics). In Dirac's time, it was not yet known that all bosons, including photons, must obey Bose–Einstein statistics. Dirac's second-order perturbation theory can involve virtual photons, transient intermediate states of the electromagnetic field; the static electric and magnetic interactions are mediated by such virtual photons. In such quantum field theories, the probability amplitude of observable events is calculated by summing over all possible intermediate steps, even ones that are unphysical; hence, virtual photons are not constrained to satisfy , and may have extra polarization states; depending on the gauge used, virtual photons may have three or four polarization states, instead of the two states of real photons. Although these transient virtual photons can never be observed, they contribute measurably to the probabilities of observable events. Indeed, such second-order and higher-order perturbation calculations can give apparently infinite contributions to the sum. Such unphysical results are corrected for using the technique of renormalization. 
Other virtual particles may contribute to the summation as well; for example, two photons may interact indirectly through virtual electron–positron pairs. Such photon–photon scattering (see two-photon physics), as well as electron–photon scattering, is meant to be one of the modes of operations of the planned particle accelerator, the International Linear Collider. In modern physics notation, the quantum state of the electromagnetic field is written as a Fock state, a tensor product of the states for each electromagnetic mode where represents the state in which photons are in the mode . In this notation, the creation of a new photon in mode (e.g., emitted from an atomic transition) is written as . This notation merely expresses the concept of Born, Heisenberg and Jordan described above, and does not add any physics. As a gauge boson The electromagnetic field can be understood as a gauge field, i.e., as a field that results from requiring that a gauge symmetry holds independently at every position in spacetime. For the electromagnetic field, this gauge symmetry is the Abelian U(1) symmetry of complex numbers of absolute value 1, which reflects the ability to vary the phase of a complex field without affecting observables or real valued functions made from it, such as the energy or the Lagrangian. The quanta of an Abelian gauge field must be massless, uncharged bosons, as long as the symmetry is not broken; hence, the photon is predicted to be massless, and to have zero electric charge and integer spin. The particular form of the electromagnetic interaction specifies that the photon must have spin ±1; thus, its helicity must be . These two spin components correspond to the classical concepts of right-handed and left-handed circularly polarized light. However, the transient virtual photons of quantum electrodynamics may also adopt unphysical polarization states. In the prevailing Standard Model of physics, the photon is one of four gauge bosons in the electroweak interaction; the other three are denoted W+, W− and Z0 and are responsible for the weak interaction. Unlike the photon, these gauge bosons have mass, owing to a mechanism that breaks their SU(2) gauge symmetry. The unification of the photon with W and Z gauge bosons in the electroweak interaction was accomplished by Sheldon Glashow, Abdus Salam and Steven Weinberg, for which they were awarded the 1979 Nobel Prize in physics. Physicists continue to hypothesize grand unified theories that connect these four gauge bosons with the eight gluon gauge bosons of quantum chromodynamics; however, key predictions of these theories, such as proton decay, have not been observed experimentally. Hadronic properties Measurements of the interaction between energetic photons and hadrons show that the interaction is much more intense than expected by the interaction of merely photons with the hadron's electric charge. Furthermore, the interaction of energetic photons with protons is similar to the interaction of photons with neutrons in spite of the fact that the electrical charge structures of protons and neutrons are substantially different. A theory called Vector Meson Dominance (VMD) was developed to explain this effect. According to VMD, the photon is a superposition of the pure electromagnetic photon, which interacts only with electric charges, and vector mesons, which mediate the residual nuclear force. 
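The Fock-state notation described earlier in this section (the tensor product over modes and the creation of one extra photon) was elided from the text; a standard rendering is sketched below, using the conventional creation operator a†.

```latex
% Multi-mode Fock state: n_{k_i} photons occupy the mode with wave vector k_i
|n_{k_1}\rangle \otimes |n_{k_2}\rangle \otimes |n_{k_3}\rangle \otimes \cdots

% Creation of one additional photon in mode k_i (e.g. by an atomic transition):
|n_{k_i}\rangle \;\longrightarrow\; |n_{k_i} + 1\rangle,
\qquad
a^{\dagger}_{k_i}\, |n_{k_i}\rangle = \sqrt{n_{k_i} + 1}\; |n_{k_i} + 1\rangle
```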
However, if experimentally probed at very short distances, the intrinsic structure of the photon appears to have as components a charge-neutral flux of quarks and gluons, quasi-free according to asymptotic freedom in QCD. That flux is described by the photon structure function. Comprehensive reviews have compared the data with theoretical predictions. Contributions to the mass of a system The energy of a system that emits a photon is decreased by the energy of the photon as measured in the rest frame of the emitting system, which may result in a reduction in mass in the amount E/c². Similarly, the mass of a system that absorbs a photon is increased by a corresponding amount. As an application, the energy balance of nuclear reactions involving photons is commonly written in terms of the masses of the nuclei involved, and terms of the form E/c² for the gamma photons (and for other relevant energies, such as the recoil energy of nuclei). This concept is applied in key predictions of quantum electrodynamics (QED, see above). In that theory, the mass of electrons (or, more generally, leptons) is modified by including the mass contributions of virtual photons, in a technique known as renormalization. Such "radiative corrections" contribute to a number of predictions of QED, such as the magnetic dipole moment of leptons, the Lamb shift, and the hyperfine structure of bound lepton pairs, such as muonium and positronium. Since photons contribute to the stress–energy tensor, they exert a gravitational attraction on other objects, according to the theory of general relativity. Conversely, photons are themselves affected by gravity; their normally straight trajectories may be bent by warped spacetime, as in gravitational lensing, and their frequencies may be lowered by moving to a higher gravitational potential, as in the Pound–Rebka experiment. However, these effects are not specific to photons; exactly the same effects would be predicted for classical electromagnetic waves. In matter Light that travels through transparent matter does so at a lower speed than c, the speed of light in vacuum. The factor by which the speed is decreased is called the refractive index of the material. In a classical wave picture, the slowing can be explained by the light inducing electric polarization in the matter, the polarized matter radiating new light, and that new light interfering with the original light wave to form a delayed wave. In a particle picture, the slowing can instead be described as a blending of the photon with quantum excitations of the matter to produce quasi-particles known as polaritons. Polaritons have a nonzero effective mass, which means that they cannot travel at c. Light of different frequencies may travel through matter at different speeds; this is called dispersion (not to be confused with scattering). In some cases, it can result in extremely slow speeds of light in matter. The effects of photon interactions with other quasi-particles may be observed directly in Raman scattering and Brillouin scattering. Photons can be scattered by matter. For example, photons engage in so many collisions on the way from the core of the Sun that radiant energy can take about a million years to reach the surface; however, once in open space, a photon takes only 8.3 minutes to reach Earth. Photons can also be absorbed by nuclei, atoms or molecules, provoking transitions between their energy levels.
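The "8.3 minutes" figure quoted above is easy to verify; the short sketch below simply divides the astronomical unit by the vacuum speed of light.

```python
# Rough check of the Sun-to-Earth light travel time quoted in the text.
AU = 1.495978707e11   # astronomical unit, metres (IAU value)
c = 2.99792458e8      # speed of light, m/s

travel_time_s = AU / c
print(f"Sun-Earth light travel time: {travel_time_s:.0f} s "
      f"= {travel_time_s / 60:.1f} min")   # ~499 s, i.e. about 8.3 minutes
```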
A classic example is the molecular transition of retinal (C20H28O), which is responsible for vision, as discovered in 1958 by Nobel laureate biochemist George Wald and co-workers. The absorption provokes a cis–trans isomerization that, in combination with other such transitions, is transduced into nerve impulses. The absorption of photons can even break chemical bonds, as in the photodissociation of chlorine; this is the subject of photochemistry. Technological applications Photons have many applications in technology. These examples are chosen to illustrate applications of photons per se, rather than general optical devices such as lenses, etc. that could operate under a classical theory of light. The laser is an important application and is discussed above under stimulated emission. Individual photons can be detected by several methods. The classic photomultiplier tube exploits the photoelectric effect: a photon of sufficient energy strikes a metal plate and knocks free an electron, initiating an ever-amplifying avalanche of electrons. Semiconductor charge-coupled device chips use a similar effect: an incident photon generates a charge on a microscopic capacitor that can be detected. Other detectors such as Geiger counters use the ability of photons to ionize gas molecules contained in the device, causing a detectable change of conductivity of the gas. Planck's energy formula is often used by engineers and chemists in design, both to compute the change in energy resulting from a photon absorption and to determine the frequency of the light emitted from a given photon emission. For example, the emission spectrum of a gas-discharge lamp can be altered by filling it with (mixtures of) gases with different electronic energy level configurations. Under some conditions, an energy transition can be excited by "two" photons that individually would be insufficient. This allows for higher resolution microscopy, because the sample absorbs energy only in the spectrum where two beams of different colors overlap significantly, which can be made much smaller than the excitation volume of a single beam (see two-photon excitation microscopy). Moreover, these photons cause less damage to the sample, since they are of lower energy. In some cases, two energy transitions can be coupled so that, as one system absorbs a photon, another nearby system "steals" its energy and re-emits a photon of a different frequency. This is the basis of fluorescence resonance energy transfer, a technique that is used in molecular biology to study the interaction of suitable proteins. Several different kinds of hardware random number generators involve the detection of single photons. In one example, for each bit in the random sequence that is to be produced, a photon is sent to a beam-splitter. In such a situation, there are two possible outcomes of equal probability. The actual outcome is used to determine whether the next bit in the sequence is "0" or "1". Quantum optics and computation Much research has been devoted to applications of photons in the field of quantum optics. Photons seem well-suited to be elements of an extremely fast quantum computer, and the quantum entanglement of photons is a focus of research. Nonlinear optical processes are another active research area, with topics such as two-photon absorption, self-phase modulation, modulational instability and optical parametric oscillators. 
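As an illustration of the design use of Planck's energy formula mentioned above, the sketch below converts a wavelength into photon energy via E = hc/λ; the 532 nm value is an arbitrary example, not taken from the text.

```python
# Photon energy from Planck's relation E = h*nu = h*c/lambda.
h = 6.62607015e-34    # Planck constant, J*s
c = 2.99792458e8      # speed of light, m/s
eV = 1.602176634e-19  # joules per electronvolt

wavelength = 532e-9   # example wavelength (green light), metres
energy_J = h * c / wavelength
print(f"E = {energy_J:.3e} J = {energy_J / eV:.2f} eV")  # ~2.33 eV
```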
However, such processes generally do not require the assumption of photons per se; they may often be modeled by treating atoms as nonlinear oscillators. The nonlinear process of spontaneous parametric down conversion is often used to produce single-photon states. Finally, photons are essential in some aspects of optical communication, especially for quantum cryptography. Two-photon physics studies interactions between photons, which are rare. In 2018, Massachusetts Institute of Technology researchers announced the discovery of bound photon triplets, which may involve polaritons.
Solutions of the Einstein field equations
Solutions of the Einstein field equations are metrics of spacetimes that result from solving the Einstein field equations (EFE) of general relativity. Solving the field equations gives a Lorentz manifold. Solutions are broadly classed as exact or non-exact. The Einstein field equations are where is the Einstein tensor, is the cosmological constant (sometimes taken to be zero for simplicity), is the metric tensor, is a constant, and is the stress–energy tensor. The Einstein field equations relate the Einstein tensor to the stress–energy tensor, which represents the distribution of energy, momentum and stress in the spacetime manifold. The Einstein tensor is built up from the metric tensor and its partial derivatives; thus, given the stress–energy tensor, the Einstein field equations are a system of ten partial differential equations in which the metric tensor can be solved for. Where appropriate, this article will use the abstract index notation. Solving the equations It is important to realize that the Einstein field equations alone are not enough to determine the evolution of a gravitational system in many cases. They depend on the stress–energy tensor, which depends on the dynamics of matter and energy (such as trajectories of moving particles), which in turn depends on the gravitational field. If one is only interested in the weak field limit of the theory, the dynamics of matter can be computed using special relativity methods and/or Newtonian laws of gravity and then the resulting stress–energy tensor can be plugged into the Einstein field equations. But if the exact solution is required or a solution describing strong fields, the evolution of the metric and the stress–energy tensor must be solved for together. To obtain solutions, the relevant equations are the above quoted EFE (in either form) plus the continuity equation (to determine evolution of the stress–energy tensor): This is clearly not enough, as there are only 14 equations (10 from the field equations and 4 from the continuity equation) for 20 unknowns (10 metric components and 10 stress–energy tensor components). Equations of state are missing. In the most general case, it's easy to see that at least 6 more equations are required, possibly more if there are internal degrees of freedom (such as temperature) which may vary throughout spacetime. In practice, it is usually possible to simplify the problem by replacing the full set of equations of state with a simple approximation. Some common approximations are: Vacuum: Perfect fluid: where Here is the mass–energy density measured in a momentary co-moving frame, is the fluid's 4-velocity vector field, and is the pressure. Non-interacting dust ( a special case of perfect fluid ): For a perfect fluid, another equation of state relating density and pressure must be added. This equation will often depend on temperature, so a heat transfer equation is required or the postulate that heat transfer can be neglected. Next, notice that only 10 of the original 14 equations are independent, because the continuity equation is a consequence of Einstein's equations. This reflects the fact that the system is gauge invariant (in general, absent some symmetry, any choice of a curvilinear coordinate net on the same system would correspond to a numerically different solution.) A "gauge fixing" is needed, i.e. we need to impose 4 (arbitrary) constraints on the coordinate system in order to obtain unequivocal results. These constraints are known as coordinate conditions. 
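The field equations and the stress–energy forms listed above lost their displayed formulas in transcription; a standard rendering is sketched below (sign and metric-signature conventions vary between textbooks, so this is one common choice rather than the form used by the original author).

```latex
% Einstein field equations with cosmological constant \Lambda
G_{\mu\nu} + \Lambda g_{\mu\nu} = \kappa\, T_{\mu\nu},
\qquad \kappa = \frac{8\pi G}{c^4}

% Continuity equation (local conservation of energy-momentum)
\nabla_{\mu} T^{\mu\nu} = 0

% Common matter models, with mass-energy density \rho, pressure p, 4-velocity u^\mu:
T^{\mu\nu} = 0                                                   \quad \text{(vacuum)}
T^{\mu\nu} = \Bigl(\rho + \tfrac{p}{c^2}\Bigr) u^{\mu} u^{\nu} + p\, g^{\mu\nu} \quad \text{(perfect fluid)}
T^{\mu\nu} = \rho\, u^{\mu} u^{\nu}                              \quad \text{(non-interacting dust, } p = 0\text{)}
```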
A popular choice of gauge is the so-called "De Donder gauge", also known as the harmonic condition or harmonic gauge. In numerical relativity, the preferred gauge is the so-called "3+1 decomposition", based on the ADM formalism. In this decomposition, the metric is written in terms of a spatial metric on 3-hypersurfaces of constant time together with a lapse function and a shift vector, which are functions of the spacetime coordinates and can be chosen arbitrarily at each point (both the harmonic condition and the ADM form of the metric are written out in the sketch at the end of this article). The remaining physical degrees of freedom are contained in the spatial metric, which represents the Riemannian metric on the 3-hypersurfaces. For example, the naive choice of lapse equal to one and vanishing shift would correspond to a so-called synchronous coordinate system: one where the t-coordinate coincides with proper time for any comoving observer (a particle that moves along a fixed trajectory). Once equations of state are chosen and the gauge is fixed, the complete set of equations can be solved. Unfortunately, even in the simplest case of a gravitational field in the vacuum (vanishing stress–energy tensor), the problem is too complex to be exactly solvable. To get physical results, we can either turn to numerical methods, try to find exact solutions by imposing symmetries, or try middle-ground approaches such as perturbation methods or linear approximations of the Einstein tensor. Exact solutions Exact solutions are Lorentz metrics that are conformable to a physically realistic stress–energy tensor and which are obtained by solving the EFE exactly in closed form. Non-exact solutions The solutions that are not exact are called non-exact solutions. Such solutions mainly arise due to the difficulty of solving the EFE in closed form and often take the form of approximations to ideal systems. Many non-exact solutions may be devoid of physical content, but serve as useful counterexamples to theoretical conjectures. Applications There are practical as well as theoretical reasons for studying solutions of the Einstein field equations. From a purely mathematical viewpoint, it is interesting to know the set of solutions of the Einstein field equations. Some of these solutions are parametrised by one or more parameters. From a physical standpoint, knowing the solutions of the Einstein field equations allows highly precise modelling of astrophysical phenomena, including black holes, neutron stars, and stellar systems. Predictions can be made analytically about the system analyzed; such predictions include the perihelion precession of Mercury, the existence of a co-rotating region inside spinning black holes, and the orbits of objects around massive bodies.
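The gauge conditions referred to above can be written out explicitly; the sketch below uses one common convention (lapse N, shift Nⁱ, spatial metric γᵢⱼ), since the symbols were dropped from the text.

```latex
% De Donder (harmonic) gauge condition
\partial_{\mu}\!\left(\sqrt{-g}\; g^{\mu\nu}\right) = 0

% ADM (3+1) form of the metric, with lapse N and shift N^i (geometric units, c = 1)
ds^2 = -\bigl(N^2 - N_i N^i\bigr)\, dt^2 + 2 N_i\, dx^i\, dt + \gamma_{ij}\, dx^i\, dx^j

% Synchronous coordinates: the naive choice N = 1, N^i = 0 gives
ds^2 = -dt^2 + \gamma_{ij}\, dx^i\, dx^j
```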
Introduction to general relativity
General relativity is a theory of gravitation developed by Albert Einstein between 1907 and 1915. The theory of general relativity says that the observed gravitational effect between masses results from their warping of spacetime. By the beginning of the 20th century, Newton's law of universal gravitation had been accepted for more than two hundred years as a valid description of the gravitational force between masses. In Newton's model, gravity is the result of an attractive force between massive objects. Although even Newton was troubled by the unknown nature of that force, the basic framework was extremely successful at describing motion. Experiments and observations show that Einstein's description of gravitation accounts for several effects that are unexplained by Newton's law, such as minute anomalies in the orbits of Mercury and other planets. General relativity also predicts novel effects of gravity, such as gravitational waves, gravitational lensing and an effect of gravity on time known as gravitational time dilation. Many of these predictions have been confirmed by experiment or observation, most recently gravitational waves. General relativity has developed into an essential tool in modern astrophysics. It provides the foundation for the current understanding of black holes, regions of space where the gravitational effect is strong enough that even light cannot escape. Their strong gravity is thought to be responsible for the intense radiation emitted by certain types of astronomical objects (such as active galactic nuclei or microquasars). General relativity is also part of the framework of the standard Big Bang model of cosmology. Although general relativity is not the only relativistic theory of gravity, it is the simplest one that is consistent with the experimental data. Nevertheless, a number of open questions remain, the most fundamental of which is how general relativity can be reconciled with the laws of quantum physics to produce a complete and self-consistent theory of quantum gravity. From special to general relativity In September 1905, Albert Einstein published his theory of special relativity, which reconciles Newton's laws of motion with electrodynamics (the interaction between objects with electric charge). Special relativity introduced a new framework for all of physics by proposing new concepts of space and time. Some then-accepted physical theories were inconsistent with that framework; a key example was Newton's theory of gravity, which describes the mutual attraction experienced by bodies due to their mass. Several physicists, including Einstein, searched for a theory that would reconcile Newton's law of gravity and special relativity. Only Einstein's theory proved to be consistent with experiments and observations. To understand the theory's basic ideas, it is instructive to follow Einstein's thinking between 1907 and 1915, from his simple thought experiment involving an observer in free fall to his fully geometric theory of gravity. Equivalence principle A person in a free-falling elevator experiences weightlessness; objects either float motionless or drift at constant speed. Since everything in the elevator is falling together, no gravitational effect can be observed. In this way, the experiences of an observer in free fall are indistinguishable from those of an observer in deep space, far from any significant source of gravity. 
Such observers are the privileged ("inertial") observers Einstein described in his theory of special relativity: observers for whom light travels along straight lines at constant speed. Einstein hypothesized that the similar experiences of weightless observers and inertial observers in special relativity represented a fundamental property of gravity, and he made this the cornerstone of his theory of general relativity, formalized in his equivalence principle. Roughly speaking, the principle states that a person in a free-falling elevator cannot tell that they are in free fall. Every experiment in such a free-falling environment has the same results as it would for an observer at rest or moving uniformly in deep space, far from all sources of gravity. Gravity and acceleration Most effects of gravity vanish in free fall, but effects that seem the same as those of gravity can be produced by an accelerated frame of reference. An observer in a closed room cannot tell which of the following two scenarios is true: Objects are falling to the floor because the room is resting on the surface of the Earth and the objects are being pulled down by gravity. Objects are falling to the floor because the room is aboard a rocket in space, which is accelerating at 9.81 m/s2, the standard gravity on Earth, and is far from any source of gravity. The objects are being pulled towards the floor by the same "inertial force" that presses the driver of an accelerating car into the back of their seat. Conversely, any effect observed in an accelerated reference frame should also be observed in a gravitational field of corresponding strength. This principle allowed Einstein to predict several novel effects of gravity in 1907 . An observer in an accelerated reference frame must introduce what physicists call fictitious forces to account for the acceleration experienced by the observer and objects around them. In the example of the driver being pressed into their seat, the force felt by the driver is one example; another is the force one can feel while pulling the arms up and out if attempting to spin around like a top. Einstein's master insight was that the constant, familiar pull of the Earth's gravitational field is fundamentally the same as these fictitious forces. The apparent magnitude of the fictitious forces always appears to be proportional to the mass of any object on which they actfor instance, the driver's seat exerts just enough force to accelerate the driver at the same rate as the car. By analogy, Einstein proposed that an object in a gravitational field should feel a gravitational force proportional to its mass, as embodied in Newton's law of gravitation. Physical consequences In 1907, Einstein was still eight years away from completing the general theory of relativity. Nonetheless, he was able to make a number of novel, testable predictions that were based on his starting point for developing his new theory: the equivalence principle. The first new effect is the gravitational frequency shift of light. Consider two observers aboard an accelerating rocket-ship. Aboard such a ship, there is a natural concept of "up" and "down": the direction in which the ship accelerates is "up", and free-floating objects accelerate in the opposite direction, falling "downward". Assume that one of the observers is "higher up" than the other. 
When the lower observer sends a light signal to the higher observer, the acceleration of the ship causes the light to be red-shifted, as may be calculated from special relativity; the second observer will measure a lower frequency for the light than the first sent out. Conversely, light sent from the higher observer to the lower is blue-shifted, that is, shifted towards higher frequencies. Einstein argued that such frequency shifts must also be observed in a gravitational field. This is illustrated in the figure at left, which shows a light wave that is gradually red-shifted as it works its way upwards against the gravitational acceleration. This effect has been confirmed experimentally, as described below. This gravitational frequency shift corresponds to a gravitational time dilation: Since the "higher" observer measures the same light wave to have a lower frequency than the "lower" observer, time must be passing faster for the higher observer. Thus, time runs more slowly for observers the lower they are in a gravitational field. It is important to stress that, for each observer, there are no observable changes of the flow of time for events or processes that are at rest in his or her reference frame. Five-minute-eggs as timed by each observer's clock have the same consistency; as one year passes on each clock, each observer ages by that amount; each clock, in short, is in perfect agreement with all processes happening in its immediate vicinity. It is only when the clocks are compared between separate observers that one can notice that time runs more slowly for the lower observer than for the higher. This effect is minute, but it too has been confirmed experimentally in multiple experiments, as described below. In a similar way, Einstein predicted the gravitational deflection of light: in a gravitational field, light is deflected downward, to the center of the gravitational field. Quantitatively, his results were off by a factor of two; the correct derivation requires a more complete formulation of the theory of general relativity, not just the equivalence principle. Tidal effects The equivalence between gravitational and inertial effects does not constitute a complete theory of gravity. When it comes to explaining gravity near our own location on the Earth's surface, noting that our reference frame is not in free fall, so that fictitious forces are to be expected, provides a suitable explanation. But a freely falling reference frame on one side of the Earth cannot explain why the people on the opposite side of the Earth experience a gravitational pull in the opposite direction. A more basic manifestation of the same effect involves two bodies that are falling side by side towards the Earth, with a similar position and velocity. In a reference frame that is in free fall alongside these bodies, they appear to hover weightlessly – but not exactly so. These bodies are not falling in precisely the same direction, but towards a single point in space: namely, the Earth's center of gravity. Consequently, there is a component of each body's motion towards the other (see the figure). In a small environment such as a freely falling lift, this relative acceleration is minuscule, while for skydivers on opposite sides of the Earth, the effect is large. Such differences in force are also responsible for the tides in the Earth's oceans, so the term "tidal effect" is used for this phenomenon. 
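A minimal numerical sketch of how small the frequency shift discussed above is near the Earth's surface, using the weak-field estimate Δν/ν ≈ gΔh/c²; the 22.5 m height is an illustrative, tower-scale value (of the order used in the laboratory test mentioned later in this article), not a number taken from the text.

```python
# Weak-field gravitational frequency shift between two observers separated
# by a height difference dh near the Earth's surface: dnu/nu ≈ g*dh/c^2.
g = 9.81              # gravitational acceleration, m/s^2
c = 2.99792458e8      # speed of light, m/s
dh = 22.5             # height difference, metres (illustrative tower-scale value)

fractional_shift = g * dh / c**2
print(f"dnu/nu ≈ {fractional_shift:.2e}")   # ~2.5e-15
```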
The equivalence between inertia and gravity cannot explain tidal effects – it cannot explain variations in the gravitational field. For that, a theory is needed which describes the way that matter (such as the large mass of the Earth) affects the inertial environment around it. From acceleration to geometry While Einstein was exploring the equivalence of gravity and acceleration as well as the role of tidal forces, he discovered several analogies with the geometry of surfaces. An example is the transition from an inertial reference frame (in which free particles coast along straight paths at constant speeds) to a rotating reference frame (in which fictitious forces have to be introduced in order to explain particle motion): this is analogous to the transition from a Cartesian coordinate system (in which the coordinate lines are straight lines) to a curved coordinate system (where coordinate lines need not be straight). A deeper analogy relates tidal forces with a property of surfaces called curvature. For gravitational fields, the absence or presence of tidal forces determines whether or not the influence of gravity can be eliminated by choosing a freely falling reference frame. Similarly, the absence or presence of curvature determines whether or not a surface is equivalent to a plane. In the summer of 1912, inspired by these analogies, Einstein searched for a geometric formulation of gravity. The elementary objects of geometry – points, lines, triangles – are traditionally defined in three-dimensional space or on two-dimensional surfaces. In 1907, Hermann Minkowski, Einstein's former mathematics professor at the Swiss Federal Polytechnic, introduced Minkowski space, a geometric formulation of Einstein's special theory of relativity where the geometry included not only space but also time. The basic entity of this new geometry is four-dimensional spacetime. The orbits of moving bodies are curves in spacetime; the orbits of bodies moving at constant speed without changing direction correspond to straight lines. The geometry of general curved surfaces was developed in the early 19th century by Carl Friedrich Gauss. This geometry had in turn been generalized to higher-dimensional spaces in Riemannian geometry introduced by Bernhard Riemann in the 1850s. With the help of Riemannian geometry, Einstein formulated a geometric description of gravity in which Minkowski's spacetime is replaced by distorted, curved spacetime, just as curved surfaces are a generalization of ordinary plane surfaces. Embedding Diagrams are used to illustrate curved spacetime in educational contexts. After he had realized the validity of this geometric analogy, it took Einstein a further three years to find the missing cornerstone of his theory: the equations describing how matter influences spacetime's curvature. Having formulated what are now known as Einstein's equations (or, more precisely, his field equations of gravity), he presented his new theory of gravity at several sessions of the Prussian Academy of Sciences in late 1915, culminating in his final presentation on November 25, 1915. Geometry and gravitation Paraphrasing John Wheeler, Einstein's geometric theory of gravity can be summarized as: spacetime tells matter how to move; matter tells spacetime how to curve. 
What this means is addressed in the following three sections, which explore the motion of so-called test particles, examine which properties of matter serve as a source for gravity, and, finally, introduce Einstein's equations, which relate these matter properties to the curvature of spacetime. Probing the gravitational field In order to map a body's gravitational influence, it is useful to think about what physicists call probe or test particles: particles that are influenced by gravity, but are so small and light that we can neglect their own gravitational effect. In the absence of gravity and other external forces, a test particle moves along a straight line at a constant speed. In the language of spacetime, this is equivalent to saying that such test particles move along straight world lines in spacetime. In the presence of gravity, spacetime is non-Euclidean, or curved, and in curved spacetime straight world lines may not exist. Instead, test particles move along lines called geodesics, which are "as straight as possible", that is, they follow the shortest path between starting and ending points, taking the curvature into consideration. A simple analogy is the following: In geodesy, the science of measuring Earth's size and shape, a geodesic is the shortest route between two points on the Earth's surface. Approximately, such a route is a segment of a great circle, such as a line of longitude or the equator. These paths are certainly not straight, simply because they must follow the curvature of the Earth's surface. But they are as straight as is possible subject to this constraint. The properties of geodesics differ from those of straight lines. For example, on a plane, parallel lines never meet, but this is not so for geodesics on the surface of the Earth: for example, lines of longitude are parallel at the equator, but intersect at the poles. Analogously, the world lines of test particles in free fall are spacetime geodesics, the straightest possible lines in spacetime. But still there are crucial differences between them and the truly straight lines that can be traced out in the gravity-free spacetime of special relativity. In special relativity, parallel geodesics remain parallel. In a gravitational field with tidal effects, this will not, in general, be the case. If, for example, two bodies are initially at rest relative to each other, but are then dropped in the Earth's gravitational field, they will move towards each other as they fall towards the Earth's center. Compared with planets and other astronomical bodies, the objects of everyday life (people, cars, houses, even mountains) have little mass. Where such objects are concerned, the laws governing the behavior of test particles are sufficient to describe what happens. Notably, in order to deflect a test particle from its geodesic path, an external force must be applied. A chair someone is sitting on applies an external upwards force preventing the person from falling freely towards the center of the Earth and thus following a geodesic, which they would otherwise be doing without the chair there, or any other matter in between them and the center point of the Earth. In this way, general relativity explains the daily experience of gravity on the surface of the Earth not as the downwards pull of a gravitational force, but as the upwards push of external forces. These forces deflect all bodies resting on the Earth's surface from the geodesics they would otherwise follow. 
For objects massive enough that their own gravitational influence cannot be neglected, the laws of motion are somewhat more complicated than for test particles, although it remains true that spacetime tells matter how to move. Sources of gravity In Newton's description of gravity, the gravitational force is caused by matter. More precisely, it is caused by a specific property of material objects: their mass. In Einstein's theory and related theories of gravitation, curvature at every point in spacetime is also caused by whatever matter is present. Here, too, mass is a key property in determining the gravitational influence of matter. But in a relativistic theory of gravity, mass cannot be the only source of gravity. Relativity links mass with energy, and energy with momentum. The equivalence between mass and energy, as expressed by the formula E = mc², is the most famous consequence of special relativity. In relativity, mass and energy are two different ways of describing one physical quantity. If a physical system has energy, it also has the corresponding mass, and vice versa. In particular, all properties of a body that are associated with energy, such as its temperature or the binding energy of systems such as nuclei or molecules, contribute to that body's mass, and hence act as sources of gravity. Energy is also closely connected to momentum. In special relativity, just as space and time are different aspects of a more comprehensive entity called spacetime, energy and momentum are merely different aspects of a unified, four-dimensional quantity that physicists call four-momentum. In consequence, if energy is a source of gravity, momentum must be a source as well. The same is true for quantities that are directly related to energy and momentum, namely internal pressure and tension. Taken together, in general relativity it is mass, energy, momentum, pressure and tension that serve as sources of gravity: they are how matter tells spacetime how to curve. In the theory's mathematical formulation, all these quantities are but aspects of a more general physical quantity called the energy–momentum tensor. Einstein's equations Einstein's equations are the centerpiece of general relativity. They provide a precise formulation of the relationship between spacetime geometry and the properties of matter, using the language of mathematics. More concretely, they are formulated using the concepts of Riemannian geometry, in which the geometric properties of a space (or a spacetime) are described by a quantity called a metric. The metric encodes the information needed to compute the fundamental geometric notions of distance and angle in a curved space (or spacetime). A spherical surface like that of the Earth provides a simple example. The location of any point on the surface can be described by two coordinates: the geographic latitude and longitude. Unlike the Cartesian coordinates of the plane, coordinate differences are not the same as distances on the surface, as shown in the diagram on the right: for someone at the equator, moving 30 degrees of longitude westward (magenta line) corresponds to a distance of roughly 3,300 kilometers, while for someone at a latitude of 55 degrees, moving 30 degrees of longitude westward (blue line) covers a distance of merely about 1,900 kilometers. Coordinates therefore do not provide enough information to describe the geometry of a spherical surface, or indeed the geometry of any more complicated space or spacetime.
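The approximate distances in the latitude/longitude example above can be reproduced with a two-line calculation; the sketch below treats the Earth as a sphere of mean radius 6371 km, so the results are only approximate.

```python
import math

# Distance covered by 30 degrees of longitude at the equator and at 55 degrees
# latitude, treating the Earth as a sphere of mean radius R.
R = 6371.0                  # mean Earth radius, km
dlon = math.radians(30)     # 30 degrees of longitude, in radians

for lat_deg in (0, 55):
    d = R * math.cos(math.radians(lat_deg)) * dlon
    print(f"latitude {lat_deg:2d} deg: {d:6.0f} km")
# ~3336 km at the equator, ~1913 km at 55 degrees latitude
```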
That information is precisely what is encoded in the metric, which is a function defined at each point of the surface (or space, or spacetime) and relates coordinate differences to differences in distance. All other quantities that are of interest in geometry, such as the length of any given curve, or the angle at which two curves meet, can be computed from this metric function. The metric function and its rate of change from point to point can be used to define a geometrical quantity called the Riemann curvature tensor, which describes exactly how the Riemannian manifold, the spacetime in the theory of relativity, is curved at each point. As has already been mentioned, the matter content of the spacetime defines another quantity, the energy–momentum tensor T, and the principle that "spacetime tells matter how to move, and matter tells spacetime how to curve" means that these quantities must be related to each other. Einstein formulated this relation by using the Riemann curvature tensor and the metric to define another geometrical quantity G, now called the Einstein tensor, which describes some aspects of the way spacetime is curved. Einstein's equation then states that i.e., up to a constant multiple, the quantity G (which measures curvature) is equated with the quantity T (which measures matter content). Here, G is the gravitational constant of Newtonian gravity, and c is the speed of light from special relativity. This equation is often referred to in the plural as Einstein's equations, since the quantities G and T are each determined by several functions of the coordinates of spacetime, and the equations equate each of these component functions. A solution of these equations describes a particular geometry of spacetime; for example, the Schwarzschild solution describes the geometry around a spherical, non-rotating mass such as a star or a black hole, whereas the Kerr solution describes a rotating black hole. Still other solutions can describe a gravitational wave or, in the case of the Friedmann–Lemaître–Robertson–Walker solution, an expanding universe. The simplest solution is the uncurved Minkowski spacetime, the spacetime described by special relativity. Experiments No scientific theory is self-evidently true; each is a model that must be checked by experiment. Newton's law of gravity was accepted because it accounted for the motion of planets and moons in the Solar System with considerable accuracy. As the precision of experimental measurements gradually improved, some discrepancies with Newton's predictions were observed, and these were accounted for in the general theory of relativity. Similarly, the predictions of general relativity must also be checked with experiment, and Einstein himself devised three tests now known as the classical tests of the theory: Newtonian gravity predicts that the orbit which a single planet traces around a perfectly spherical star should be an ellipse. Einstein's theory predicts a more complicated curve: the planet behaves as if it were travelling around an ellipse, but at the same time, the ellipse as a whole is rotating slowly around the star. In the diagram on the right, the ellipse predicted by Newtonian gravity is shown in red, and part of the orbit predicted by Einstein in blue. For a planet orbiting the Sun, this deviation from Newton's orbits is known as the anomalous perihelion shift. The first measurement of this effect, for the planet Mercury, dates back to 1859. 
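The displayed equation referred to above ("Einstein's equation then states that ...") did not survive transcription; in the notation of that passage it reads as follows.

```latex
\mathbf{G} = \frac{8\pi G}{c^4}\, \mathbf{T}
```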
The most accurate results for Mercury and for other planets to date are based on measurements which were undertaken between 1966 and 1990, using radio telescopes. General relativity predicts the correct anomalous perihelion shift for all planets where this can be measured accurately (Mercury, Venus and the Earth). According to general relativity, light does not travel along straight lines when it propagates in a gravitational field. Instead, it is deflected in the presence of massive bodies. In particular, starlight is deflected as it passes near the Sun, leading to apparent shifts of up to 1.75 arc seconds in the stars' positions in the sky (an arc second is equal to 1/3600 of a degree). In the framework of Newtonian gravity, a heuristic argument can be made that leads to light deflection by half that amount. The different predictions can be tested by observing stars that are close to the Sun during a solar eclipse. In this way, a British expedition to West Africa in 1919, directed by Arthur Eddington, confirmed that Einstein's prediction was correct, and the Newtonian predictions wrong, via observation of the May 1919 eclipse. Eddington's results were not very accurate; subsequent observations of the deflection of the light of distant quasars by the Sun, which utilize highly accurate techniques of radio astronomy, have confirmed Eddington's results with significantly better precision (the first such measurements date from 1967, the most recent comprehensive analysis from 2004). Gravitational redshift was first measured in a laboratory setting in 1959 by Pound and Rebka. It is also seen in astrophysical measurements, notably for light escaping the white dwarf Sirius B. The related gravitational time dilation effect has been measured by transporting atomic clocks to altitudes of between tens and tens of thousands of kilometers (first by Hafele and Keating in 1971; most accurately to date by Gravity Probe A launched in 1976). Of these tests, only the perihelion advance of Mercury was known prior to Einstein's final publication of general relativity in 1916. The subsequent experimental confirmation of his other predictions, especially the first measurements of the deflection of light by the sun in 1919, catapulted Einstein to international stardom. These three experiments justified adopting general relativity over Newton's theory and, incidentally, over a number of alternatives to general relativity that had been proposed. Further tests of general relativity include precision measurements of the Shapiro effect or gravitational time delay for light, measured in 2002 by the Cassini space probe. One set of tests focuses on effects predicted by general relativity for the behavior of gyroscopes travelling through space. One of these effects, geodetic precession, has been tested with the Lunar Laser Ranging Experiment (high-precision measurements of the orbit of the Moon). Another, which is related to rotating masses, is called frame-dragging. The geodetic and frame-dragging effects were both tested by the Gravity Probe B satellite experiment launched in 2004, with results confirming relativity to within 0.5% and 15%, respectively, as of December 2008. By cosmic standards, gravity throughout the solar system is weak. Since the differences between the predictions of Einstein's and Newton's theories are most pronounced when gravity is strong, physicists have long been interested in testing various relativistic effects in a setting with comparatively strong gravitational fields. 
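The 1.75 arc-second figure quoted above, and the Newtonian half-value, can be checked with the standard weak-field deflection formula 4GM/(c²R) for a light ray grazing the solar limb; the solar mass and radius used below are standard reference values, not taken from the text.

```python
import math

# Deflection of light grazing the Sun: general relativity predicts 4*G*M/(c^2 * R);
# a naive Newtonian argument gives half of that.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30     # solar mass, kg
R_sun = 6.957e8      # solar radius, m
c = 2.99792458e8     # speed of light, m/s

deflection_rad = 4 * G * M_sun / (c**2 * R_sun)
arcsec = math.degrees(deflection_rad) * 3600
print(f"GR deflection: {arcsec:.2f} arcsec (Newtonian estimate: {arcsec/2:.2f})")
# ~1.75 arcsec versus ~0.87 arcsec
```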
This has become possible thanks to precision observations of binary pulsars. In such a star system, two highly compact neutron stars orbit each other. At least one of them is a pulsar – an astronomical object that emits a tight beam of radiowaves. These beams strike the Earth at very regular intervals, similarly to the way that the rotating beam of a lighthouse means that an observer sees the lighthouse blink, and can be observed as a highly regular series of pulses. General relativity predicts specific deviations from the regularity of these radio pulses. For instance, at times when the radio waves pass close to the other neutron star, they should be deflected by the star's gravitational field. The observed pulse patterns are impressively close to those predicted by general relativity. One particular set of observations is related to eminently useful practical applications, namely to satellite navigation systems such as the Global Positioning System that are used for both precise positioning and timekeeping. Such systems rely on two sets of atomic clocks: clocks aboard satellites orbiting the Earth, and reference clocks stationed on the Earth's surface. General relativity predicts that these two sets of clocks should tick at slightly different rates, due to their different motions (an effect already predicted by special relativity) and their different positions within the Earth's gravitational field. In order to ensure the system's accuracy, either the satellite clocks are slowed down by a relativistic factor, or that same factor is made part of the evaluation algorithm. In turn, tests of the system's accuracy (especially the very thorough measurements that are part of the definition of universal coordinated time) are testament to the validity of the relativistic predictions. A number of other tests have probed the validity of various versions of the equivalence principle; strictly speaking, all measurements of gravitational time dilation are tests of the weak version of that principle, not of general relativity itself. So far, general relativity has passed all observational tests. Astrophysical applications Models based on general relativity play an important role in astrophysics; the success of these models is further testament to the theory's validity. Gravitational lensing Since light is deflected in a gravitational field, it is possible for the light of a distant object to reach an observer along two or more paths. For instance, light of a very distant object such as a quasar can pass along one side of a massive galaxy and be deflected slightly so as to reach an observer on Earth, while light passing along the opposite side of that same galaxy is deflected as well, reaching the same observer from a slightly different direction. As a result, that particular observer will see one astronomical object in two different places in the night sky. This kind of focussing is well known when it comes to optical lenses, and hence the corresponding gravitational effect is called gravitational lensing. Observational astronomy uses lensing effects as an important tool to infer properties of the lensing object. Even in cases where that object is not directly visible, the shape of a lensed image provides information about the mass distribution responsible for the light deflection. In particular, gravitational lensing provides one way to measure the distribution of dark matter, which does not give off light and can be observed only by its gravitational effects. 
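The different clock rates mentioned in the satellite-navigation paragraph above can be estimated with a deliberately simplified model that ignores the Earth's rotation and oblateness; the orbital radius below is the nominal GPS value, and the familiar net offset of roughly +38 microseconds per day emerges from two competing terms.

```python
# Simplified estimate of the daily rate offset of a GPS satellite clock relative
# to a ground clock (ignoring Earth's rotation and oblateness).
GM = 3.986004e14      # Earth's gravitational parameter, m^3/s^2
R_earth = 6.378e6     # Earth's equatorial radius, m
r_gps = 2.656e7       # GPS orbital radius (semi-major axis), m
c = 2.99792458e8      # speed of light, m/s
day = 86400.0         # seconds per day

v = (GM / r_gps) ** 0.5                     # orbital speed, ~3.87 km/s
grav = GM * (1/R_earth - 1/r_gps) / c**2    # gravitational (general-relativistic) term
kin = -v**2 / (2 * c**2)                    # velocity time dilation (special-relativistic)

print(f"gravitational: {grav * day * 1e6:+.1f} us/day")        # ~ +45.7
print(f"velocity     : {kin * day * 1e6:+.1f} us/day")         # ~ -7.2
print(f"net          : {(grav + kin) * day * 1e6:+.1f} us/day")  # ~ +38.5
```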
One particularly interesting application are large-scale observations, where the lensing masses are spread out over a significant fraction of the observable universe, and can be used to obtain information about the large-scale properties and evolution of our cosmos. Gravitational waves Gravitational waves, a direct consequence of Einstein's theory, are distortions of geometry that propagate at the speed of light, and can be thought of as ripples in spacetime. They should not be confused with the gravity waves of fluid dynamics, which are a different concept. In February 2016, the Advanced LIGO team announced that they had directly observed gravitational waves from a black hole merger. Indirectly, the effect of gravitational waves had been detected in observations of specific binary stars. Such pairs of stars orbit each other and, as they do so, gradually lose energy by emitting gravitational waves. For ordinary stars like the Sun, this energy loss would be too small to be detectable, but this energy loss was observed in 1974 in a binary pulsar called PSR1913+16. In such a system, one of the orbiting stars is a pulsar. This has two consequences: a pulsar is an extremely dense object known as a neutron star, for which gravitational wave emission is much stronger than for ordinary stars. Also, a pulsar emits a narrow beam of electromagnetic radiation from its magnetic poles. As the pulsar rotates, its beam sweeps over the Earth, where it is seen as a regular series of radio pulses, just as a ship at sea observes regular flashes of light from the rotating light in a lighthouse. This regular pattern of radio pulses functions as a highly accurate "clock". It can be used to time the double star's orbital period, and it reacts sensitively to distortions of spacetime in its immediate neighborhood. The discoverers of PSR1913+16, Russell Hulse and Joseph Taylor, were awarded the Nobel Prize in Physics in 1993. Since then, several other binary pulsars have been found. The most useful are those in which both stars are pulsars, since they provide accurate tests of general relativity. Currently, a number of land-based gravitational wave detectors are in operation, and a mission to launch a space-based detector, LISA, is currently under development, with a precursor mission (LISA Pathfinder) which was launched in 2015. Gravitational wave observations can be used to obtain information about compact objects such as neutron stars and black holes, and also to probe the state of the early universe fractions of a second after the Big Bang. Black holes When mass is concentrated into a sufficiently compact region of space, general relativity predicts the formation of a black hole – a region of space with a gravitational effect so strong that not even light can escape. Certain types of black holes are thought to be the final state in the evolution of massive stars. On the other hand, supermassive black holes with the mass of millions or billions of Suns are assumed to reside in the cores of most galaxies, and they play a key role in current models of how galaxies have formed over the past billions of years. Matter falling onto a compact object is one of the most efficient mechanisms for releasing energy in the form of radiation, and matter falling onto black holes is thought to be responsible for some of the brightest astronomical phenomena imaginable. Notable examples of great interest to astronomers are quasars and other types of active galactic nuclei. 
Under the right conditions, falling matter accumulating around a black hole can lead to the formation of jets, in which focused beams of matter are flung away into space at speeds near that of light. There are several properties that make black holes the most promising sources of gravitational waves. One reason is that black holes are the most compact objects that can orbit each other as part of a binary system; as a result, the gravitational waves emitted by such a system are especially strong. Another reason follows from what are called black-hole uniqueness theorems: over time, black holes retain only a minimal set of distinguishing features (these theorems have become known as "no-hair" theorems), regardless of the starting geometric shape. For instance, in the long term, the collapse of a hypothetical matter cube will not result in a cube-shaped black hole. Instead, the resulting black hole will be indistinguishable from a black hole formed by the collapse of a spherical mass. In its transition to a spherical shape, the black hole formed by the collapse of a more complicated shape will emit gravitational waves. Cosmology One of the most important aspects of general relativity is that it can be applied to the universe as a whole. A key point is that, on large scales, our universe appears to be constructed along very simple lines: all current observations suggest that, on average, the structure of the cosmos should be approximately the same, regardless of an observer's location or direction of observation: the universe is approximately homogeneous and isotropic. Such comparatively simple universes can be described by simple solutions of Einstein's equations. The current cosmological models of the universe are obtained by combining these simple solutions to general relativity with theories describing the properties of the universe's matter content, namely thermodynamics, nuclear- and particle physics. According to these models, our present universe emerged from an extremely dense high-temperature state – the Big Bang – roughly 14 billion years ago and has been expanding ever since. Einstein's equations can be generalized by adding a term called the cosmological constant. When this term is present, empty space itself acts as a source of attractive (or, less commonly, repulsive) gravity. Einstein originally introduced this term in his pioneering 1917 paper on cosmology, with a very specific motivation: contemporary cosmological thought held the universe to be static, and the additional term was required for constructing static model universes within the framework of general relativity. When it became apparent that the universe is not static, but expanding, Einstein was quick to discard this additional term. Since the end of the 1990s, however, astronomical evidence indicating an accelerating expansion consistent with a cosmological constant – or, equivalently, with a particular and ubiquitous kind of dark energy – has steadily been accumulating. Modern research General relativity is very successful in providing a framework for accurate models which describe an impressive array of physical phenomena. On the other hand, there are many interesting open questions, and in particular, the theory as a whole is almost certainly incomplete. In contrast to all other modern theories of fundamental interactions, general relativity is a classical theory: it does not include the effects of quantum physics. 
The quest for a quantum version of general relativity addresses one of the most fundamental open questions in physics. While there are promising candidates for such a theory of quantum gravity, notably string theory and loop quantum gravity, there is at present no consistent and complete theory. It has long been hoped that a theory of quantum gravity would also eliminate another problematic feature of general relativity: the presence of spacetime singularities. These singularities are boundaries ("sharp edges") of spacetime at which geometry becomes ill-defined, with the consequence that general relativity itself loses its predictive power. Furthermore, there are so-called singularity theorems which predict that such singularities must exist within the universe if the laws of general relativity were to hold without any quantum modifications. The best-known examples are the singularities associated with the model universes that describe black holes and the beginning of the universe. Other attempts to modify general relativity have been made in the context of cosmology. In the modern cosmological models, most energy in the universe is in forms that have never been detected directly, namely dark energy and dark matter. There have been several controversial proposals to remove the need for these enigmatic forms of matter and energy, by modifying the laws governing gravity and the dynamics of cosmic expansion, for example modified Newtonian dynamics. Beyond the challenges of quantum effects and cosmology, research on general relativity is rich with possibilities for further exploration: mathematical relativists explore the nature of singularities and the fundamental properties of Einstein's equations, and ever more comprehensive computer simulations of specific spacetimes (such as those describing merging black holes) are run. More than one hundred years after the theory was first published, research is more active than ever.
0.779029
0.989882
0.771146
Gauss's law
In physics (specifically electromagnetism), Gauss's law, also known as Gauss's flux theorem (or sometimes Gauss's theorem), is one of Maxwell's equations. It is an application of the divergence theorem, and it relates the distribution of electric charge to the resulting electric field. Definition In its integral form, it states that the flux of the electric field out of an arbitrary closed surface is proportional to the electric charge enclosed by the surface, irrespective of how that charge is distributed. Even though the law alone is insufficient to determine the electric field across a surface enclosing any charge distribution, this may be possible in cases where symmetry mandates uniformity of the field. Where no such symmetry exists, Gauss's law can be used in its differential form, which states that the divergence of the electric field is proportional to the local density of charge. The law was first formulated by Joseph-Louis Lagrange in 1773, followed by Carl Friedrich Gauss in 1835, both in the context of the attraction of ellipsoids. It is one of Maxwell's equations, which forms the basis of classical electrodynamics. Gauss's law can be used to derive Coulomb's law, and vice versa. Qualitative description In words, Gauss's law states: The net electric flux through any hypothetical closed surface is equal to times the net electric charge enclosed within that closed surface. The closed surface is also referred to as Gaussian surface. Gauss's law has a close mathematical similarity with a number of laws in other areas of physics, such as Gauss's law for magnetism and Gauss's law for gravity. In fact, any inverse-square law can be formulated in a way similar to Gauss's law: for example, Gauss's law itself is essentially equivalent to the Coulomb's law, and Gauss's law for gravity is essentially equivalent to the Newton's law of gravity, both of which are inverse-square laws. The law can be expressed mathematically using vector calculus in integral form and differential form; both are equivalent since they are related by the divergence theorem, also called Gauss's theorem. Each of these forms in turn can also be expressed two ways: In terms of a relation between the electric field and the total electric charge, or in terms of the electric displacement field and the free electric charge. Equation involving the field Gauss's law can be stated using either the electric field or the electric displacement field . This section shows some of the forms with ; the form with is below, as are other forms with . Integral form Gauss's law may be expressed as: where is the electric flux through a closed surface enclosing any volume , is the total charge enclosed within , and is the electric constant. The electric flux is defined as a surface integral of the electric field: where is the electric field, is a vector representing an infinitesimal element of area of the surface, and represents the dot product of two vectors. In a curved spacetime, the flux of an electromagnetic field through a closed surface is expressed as where is the speed of light; denotes the time components of the electromagnetic tensor; is the determinant of metric tensor; is an orthonormal element of the two-dimensional surface surrounding the charge ; indices and do not match each other. Since the flux is defined as an integral of the electric field, this expression of Gauss's law is called the integral form. 
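Because the inline mathematical symbols in the passage above did not survive extraction, the integral form and the flux definition it describes are restated below in standard textbook notation; the symbol choices (Φ_E, Q, ε₀, S) are conventional rather than taken from the article's original markup.

```latex
% Integral form of Gauss's law: the electric flux through a closed surface S
% equals the enclosed charge divided by the electric constant.
\[
  \Phi_E \;=\; \oint_{S} \mathbf{E}\cdot \mathrm{d}\mathbf{A}
          \;=\; \frac{Q}{\varepsilon_0},
\]
% where E is the electric field, dA is an outward-pointing infinitesimal
% element of area of S, Q is the total charge enclosed by S, and
% \varepsilon_0 is the vacuum permittivity.
```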
In problems involving conductors set at known potentials, the potential away from them is obtained by solving Laplace's equation, either analytically or numerically. The electric field is then calculated as the potential's negative gradient. Gauss's law makes it possible to find the distribution of electric charge: The charge in any given region of the conductor can be deduced by integrating the electric field to find the flux through a small box whose sides are perpendicular to the conductor's surface and by noting that the electric field is perpendicular to the surface, and zero inside the conductor. The reverse problem, when the electric charge distribution is known and the electric field must be computed, is much more difficult. The total flux through a given surface gives little information about the electric field, and can go in and out of the surface in arbitrarily complicated patterns. An exception is if there is some symmetry in the problem, which mandates that the electric field passes through the surface in a uniform way. Then, if the total flux is known, the field itself can be deduced at every point. Common examples of symmetries which lend themselves to Gauss's law include: cylindrical symmetry, planar symmetry, and spherical symmetry. See the article Gaussian surface for examples where these symmetries are exploited to compute electric fields. Differential form By the divergence theorem, Gauss's law can alternatively be written in the differential form: where is the divergence of the electric field, is the vacuum permittivity and is the total volume charge density (charge per unit volume). Equivalence of integral and differential forms The integral and differential forms are mathematically equivalent, by the divergence theorem. Here is the argument more specifically. Equation involving the field Free, bound, and total charge The electric charge that arises in the simplest textbook situations would be classified as "free charge"—for example, the charge which is transferred in static electricity, or the charge on a capacitor plate. In contrast, "bound charge" arises only in the context of dielectric (polarizable) materials. (All materials are polarizable to some extent.) When such materials are placed in an external electric field, the electrons remain bound to their respective atoms, but shift a microscopic distance in response to the field, so that they're more on one side of the atom than the other. All these microscopic displacements add up to give a macroscopic net charge distribution, and this constitutes the "bound charge". Although microscopically all charge is fundamentally the same, there are often practical reasons for wanting to treat bound charge differently from free charge. The result is that the more fundamental Gauss's law, in terms of (above), is sometimes put into the equivalent form below, which is in terms of and the free charge only. Integral form This formulation of Gauss's law states the total charge form: where is the -field flux through a surface which encloses a volume , and is the free charge contained in . The flux is defined analogously to the flux of the electric field through : Differential form The differential form of Gauss's law, involving free charge only, states: where is the divergence of the electric displacement field, and is the free electric charge density. 
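As a concrete check of the integral form and the symmetry argument discussed above, the short Python sketch below numerically integrates the Coulomb field of a point charge over spheres of several radii and confirms that the flux always comes out to q/ε₀. It is an illustrative script only; the function and variable names are invented here, not drawn from the article.

```python
import numpy as np

EPS0 = 8.8541878128e-12  # vacuum permittivity in F/m

def flux_point_charge(q, radius, n_theta=500, n_phi=500):
    """Numerically integrate E . dA over a sphere of the given radius
    for a point charge q sitting at the origin (midpoint rule)."""
    theta, dtheta = np.linspace(0, np.pi, n_theta, endpoint=False, retstep=True)
    phi, dphi = np.linspace(0, 2 * np.pi, n_phi, endpoint=False, retstep=True)
    theta += dtheta / 2  # midpoints in the polar angle
    # The Coulomb field is radial, so on the sphere E . dA = E_r * dA.
    E_r = q / (4 * np.pi * EPS0 * radius**2)
    dA = radius**2 * np.outer(np.sin(theta), np.ones_like(phi)) * dtheta * dphi
    return float(np.sum(E_r * dA))

if __name__ == "__main__":
    q = 1e-9  # 1 nC
    for r in (0.1, 1.0, 10.0):
        print(f"r = {r:5.1f} m   flux = {flux_point_charge(q, r):.4f} V*m")
    print(f"q / eps0        = {q / EPS0:.4f} V*m  (independent of radius)")
```

The flux is the same for every radius, which is exactly the statement that the total flux depends only on the enclosed charge, not on the choice of enclosing surface.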
Equivalence of total and free charge statements Equation for linear materials In homogeneous, isotropic, nondispersive, linear materials, there is a simple relationship between and : where is the permittivity of the material. For the case of vacuum (aka free space), . Under these circumstances, Gauss's law modifies to for the integral form, and for the differential form. Relation to Coulomb's law Deriving Gauss's law from Coulomb's law Strictly speaking, Gauss's law cannot be derived from Coulomb's law alone, since Coulomb's law gives the electric field due to an individual, electrostatic point charge only. However, Gauss's law can be proven from Coulomb's law if it is assumed, in addition, that the electric field obeys the superposition principle. The superposition principle states that the resulting field is the vector sum of fields generated by each particle (or the integral, if the charges are distributed smoothly in space). Since Coulomb's law only applies to stationary charges, there is no reason to expect Gauss's law to hold for moving charges based on this derivation alone. In fact, Gauss's law does hold for moving charges, and, in this respect, Gauss's law is more general than Coulomb's law. Deriving Coulomb's law from Gauss's law Strictly speaking, Coulomb's law cannot be derived from Gauss's law alone, since Gauss's law does not give any information regarding the curl of (see Helmholtz decomposition and Faraday's law). However, Coulomb's law can be proven from Gauss's law if it is assumed, in addition, that the electric field from a point charge is spherically symmetric (this assumption, like Coulomb's law itself, is exactly true if the charge is stationary, and approximately true if the charge is in motion). See also Method of image charges Uniqueness theorem for Poisson's equation List of examples of Stigler's law Notes Citations References Digital version David J. Griffiths (6th ed.) External links MIT Video Lecture Series (30 x 50 minute lectures)- Electricity and Magnetism Taught by Professor Walter Lewin. section on Gauss's law in an online textbook MISN-0-132 Gauss's Law for Spherical Symmetry (PDF file) by Peter Signell for Project PHYSNET. MISN-0-133 Gauss's Law Applied to Cylindrical and Planar Charge Distributions (PDF file) by Peter Signell for Project PHYSNET. Electrostatics Eponymous laws of physics Vector calculus Maxwell's equations Law Electromagnetism
0.772291
0.9985
0.771133
Physical geography
Physical geography (also known as physiography) is one of the three main branches of geography. Physical geography is the branch of natural science which deals with the processes and patterns in the natural environment such as the atmosphere, hydrosphere, biosphere, and geosphere. This focus is in contrast with the branch of human geography, which focuses on the built environment, and technical geography, which focuses on using, studying, and creating tools to obtain, analyze, interpret, and understand spatial information. The three branches have significant overlap, however. Sub-branches Physical geography can be divided into several branches or related fields, as follows: Geomorphology is concerned with understanding the surface of the Earth and the processes by which it is shaped, both at the present as well as in the past. Geomorphology as a field has several sub-fields that deal with the specific landforms of various environments, e.g. desert geomorphology and fluvial geomorphology; however, these sub-fields are united by the core processes which cause them, mainly tectonic or climatic processes. Geomorphology seeks to understand landform history and dynamics, and predict future changes through a combination of field observation, physical experiment, and numerical modeling (Geomorphometry). Early studies in geomorphology are the foundation for pedology, one of two main branches of soil science. Hydrology is predominantly concerned with the amounts and quality of water moving and accumulating on the land surface and in the soils and rocks near the surface and is typified by the hydrological cycle. Thus the field encompasses water in rivers, lakes, aquifers and to an extent glaciers, in which the field examines the process and dynamics involved in these bodies of water. Hydrology has historically had an important connection with engineering and has thus developed a largely quantitative method in its research; however, it does have an earth science side that embraces the systems approach. Similar to most fields of physical geography it has sub-fields that examine the specific bodies of water or their interaction with other spheres e.g. limnology and ecohydrology. Glaciology is the study of glaciers and ice sheets, or more commonly the cryosphere or ice and phenomena that involve ice. Glaciology groups the latter (ice sheets) as continental glaciers and the former (glaciers) as alpine glaciers. Although research in the areas is similar to research undertaken into both the dynamics of ice sheets and glaciers, the former tends to be concerned with the interaction of ice sheets with the present climate and the latter with the impact of glaciers on the landscape. Glaciology also has a vast array of sub-fields examining the factors and processes involved in ice sheets and glaciers e.g. snow hydrology and glacial geology. Biogeography is the science which deals with geographic patterns of species distribution and the processes that result in these patterns. Biogeography emerged as a field of study as a result of the work of Alfred Russel Wallace, although the field prior to the late twentieth century had largely been viewed as historic in its outlook and descriptive in its approach. The main stimulus for the field since its founding has been that of evolution, plate tectonics and the theory of island biogeography. The field can largely be divided into five sub-fields: island biogeography, paleobiogeography, phylogeography, zoogeography and phytogeography. 
Climatology is the study of the climate, scientifically defined as weather conditions averaged over a long period of time. Climatology examines both the nature of micro (local) and macro (global) climates and the natural and anthropogenic influences on them. The field is also sub-divided largely into the climates of various regions and the study of specific phenomena or time periods e.g. tropical cyclone rainfall climatology and paleoclimatology. Soil geography deals with the distribution of soils across the terrain. This discipline, between geography and soil science, is fundamental to both physical geography and pedology. Pedology is the study of soils in their natural environment. It deals with pedogenesis, soil morphology, and soil classification. Soil geography studies the spatial distribution of soils as it relates to topography, climate (water, air, temperature), soil life (micro-organisms, plants, animals) and mineral materials within soils (biogeochemical cycles). Palaeogeography is a cross-disciplinary study that examines the preserved material in the stratigraphic record to determine the distribution of the continents through geologic time. Almost all the evidence for the positions of the continents comes from geology in the form of fossils or paleomagnetism. The use of these data has resulted in evidence for continental drift, plate tectonics, and supercontinents. This, in turn, has supported palaeogeographic theories such as the Wilson cycle. Coastal geography is the study of the dynamic interface between the ocean and the land, incorporating both the physical geography (i.e. coastal geomorphology, geology, and oceanography) and the human geography of the coast. It involves an understanding of coastal weathering processes, particularly wave action, sediment movement and weathering, and also the ways in which humans interact with the coast. Coastal geography, although predominantly geomorphological in its research, is not just concerned with coastal landforms, but also the causes and influences of sea level change. Oceanography is the branch of physical geography that studies the Earth's oceans and seas. It covers a wide range of topics, including marine organisms and ecosystem dynamics (biological oceanography); ocean currents, waves, and geophysical fluid dynamics (physical oceanography); plate tectonics and the geology of the sea floor (geological oceanography); and fluxes of various chemical substances and physical properties within the ocean and across its boundaries (chemical oceanography). These diverse topics reflect multiple disciplines that oceanographers blend to further knowledge of the world ocean and understanding of processes within it. Quaternary science is an interdisciplinary field of study focusing on the Quaternary period, which encompasses the last 2.6 million years. The field studies the last ice age and the recent interstadial, the Holocene, and uses proxy evidence to reconstruct the past environments during this period to infer the climatic and environmental changes that have occurred. Landscape ecology is a sub-discipline of ecology and geography that addresses how spatial variation in the landscape affects ecological processes such as the distribution and flow of energy, materials, and individuals in the environment (which, in turn, may influence the distribution of landscape "elements" themselves such as hedgerows). The field was largely founded by the German geographer Carl Troll. 
Landscape ecology typically deals with problems in an applied and holistic context. The main difference between biogeography and landscape ecology is that the latter is concerned with how flows of energy and material are changed and their impacts on the landscape, whereas the former is concerned with the spatial patterns of species and chemical cycles. Geomatics is the field of gathering, storing, processing, and delivering geographic information, or spatially referenced information. Geomatics includes geodesy (scientific discipline that deals with the measurement and representation of the earth, its gravitational field, and other geodynamic phenomena, such as crustal motion, oceanic tides, and polar motion), cartography, geographical information science (GIS) and remote sensing (the short or large-scale acquisition of information of an object or phenomenon, by the use of either recording or real-time sensing devices that are not in physical or intimate contact with the object). Environmental geography is a branch of geography that analyzes the spatial aspects of interactions between humans and the natural world. The branch bridges the divide between human and physical geography and thus requires an understanding of the dynamics of geology, meteorology, hydrology, biogeography, and geomorphology, as well as the ways in which human societies conceptualize the environment. Although the branch was previously more visible in research than at present, with theories such as environmental determinism linking society with the environment, it has largely become the domain of the study of environmental management or anthropogenic influences. Journals and literature Physical geography and earth science journals communicate and document the results of research carried out in universities and various other research institutions. Most journals cover a specific field and publish research within that field; however, unlike human geographers, physical geographers tend to publish in inter-disciplinary journals rather than predominantly geography journals, and the research is normally expressed in the form of a scientific paper. Additionally, textbooks and other popular works communicate research to laypeople, although these tend to focus on environmental issues or cultural dilemmas. Historical evolution of the discipline From the birth of geography as a science during the Greek classical period and until the late nineteenth century with the birth of anthropogeography (human geography), geography was almost exclusively a natural science: the study of location and descriptive gazetteer of all places of the known world. Several works among the best known during this long period could be cited as an example, from Strabo (Geography), Eratosthenes (Geographika) or Dionysius Periegetes (Periegesis Oiceumene) in the Ancient Age. In more modern times, these works include Kosmos by Alexander von Humboldt in the nineteenth century, in which geography is regarded as a physical and natural science, and the Summa de Geografía of Martín Fernández de Enciso from the early sixteenth century, which described the New World for the first time. During the eighteenth and nineteenth centuries, a controversy exported from geology, between supporters of James Hutton (uniformitarianism thesis) and Georges Cuvier (catastrophism), strongly influenced the field of geography, because geography at this time was a natural science. 
Two historical events during the nineteenth century had a great effect on the further development of physical geography. The first was the European colonial expansion in Asia, Africa, Australia and even America in search of raw materials required by industries during the Industrial Revolution. This fostered the creation of geography departments in the universities of the colonial powers and the birth and development of national geographical societies, thus giving rise to the process identified by Horacio Capel as the institutionalization of geography. The exploration of Siberia is an example. In the mid-eighteenth century, many geographers were sent to perform geographical surveys in the area of Arctic Siberia. Among these was Mikhail Lomonosov, who is considered the patriarch of Russian geography. In the mid-1750s Lomonosov began working in the Department of Geography of the Academy of Sciences to conduct research in Siberia. This work showed the organic origin of soil and developed a comprehensive law on the movement of ice, thereby founding a new branch of geography: glaciology. In 1755, Moscow University was founded on his initiative, and there he promoted the study of geography and the training of geographers. In 1758 he was appointed director of the Department of Geography, Academy of Sciences, a post from which he would develop a working methodology for geographical surveys that guided the most important long expeditions and geographical studies in Russia. The contributions of the Russian school became more frequent through his disciples, and in the nineteenth century the school produced great geographers such as Vasily Dokuchaev, who performed works of great importance, such as the "principle of comprehensive analysis of the territory" and "Russian Chernozem". In the latter, he introduced the geographical concept of soil, as distinct from a simple geological stratum, and thus founded a new geographic area of study: pedology. Climatology also received a strong boost from the Russian school through Wladimir Köppen, whose main contribution, climate classification, is still valid today. This great geographer also contributed to paleogeography through his work "The climates of the geological past", for which he is considered the father of paleoclimatology. Other Russian geographers who made great contributions to the discipline in this period were N.M. Sibirtsev, Pyotr Semyonov, K.D. Glinka, and Neustrayev, among others. The second important development was Darwin's theory of evolution in mid-century (which decisively influenced the work of Friedrich Ratzel, who had academic training as a zoologist and was a follower of Darwin's ideas), which gave an important impetus to the development of biogeography. Another major event in the late nineteenth and early twentieth centuries took place in the United States. William Morris Davis not only made important contributions to the establishment of the discipline in his country but revolutionized the field by developing the cycle of erosion theory, which he proposed as a paradigm for geography in general, although it actually served as a paradigm for physical geography. His theory explained that mountains and other landforms are shaped by factors that are manifested cyclically. He explained that the cycle begins with the lifting of the relief by geological processes (faults, volcanism, tectonic upheaval, etc.). Factors such as rivers and runoff begin to create V-shaped valleys between the mountains (the stage called "youth"). During this first stage, the terrain is steeper and more irregular. 
Over time, the currents can carve wider valleys ("maturity") and then start to wind, leaving only towering hills ("senescence"). Finally, everything is reduced to a flat plain at the lowest elevation possible (called the "baseline"). Davis called this plain a "peneplain", meaning "almost a plain". Then river rejuvenation occurs: there is another mountain uplift and the cycle continues. Although Davis's theory is not entirely accurate, it was absolutely revolutionary and unique in its time and helped to modernize geography and to create the subfield of geomorphology. Its implications prompted a myriad of research efforts in various branches of physical geography. In the case of paleogeography, this theory provided a model for understanding the evolution of the landscape. For hydrology, glaciology, and climatology, it provided a boost by prompting investigation of how geographic factors shape the landscape and affect the cycle. The bulk of the work of William Morris Davis led to the development of a new branch of physical geography: geomorphology, whose contents until then did not differ from the rest of geography. Shortly afterwards this branch would undergo major development. Some of his disciples made significant contributions to various branches of physical geography, such as Curtis Marbut and his invaluable legacy for pedology, Mark Jefferson, and Isaiah Bowman, among others. Notable physical geographers Eratosthenes (276–194 BC), who invented the discipline of geography. He made the first known reliable estimation of the Earth's size. He is considered the father of mathematical geography and geodesy. Ptolemy (c. 90 – c. 168), who compiled Greek and Roman knowledge to produce the book Geographia. Abū Rayhān Bīrūnī (973–1048 AD), considered the father of geodesy. Ibn Sina (Avicenna, 980–1037), who formulated the law of superposition and concept of uniformitarianism in Kitāb al-Šifāʾ (also called The Book of Healing). Muhammad al-Idrisi (Dreses, 1100), who drew the Tabula Rogeriana, the most accurate world map in pre-modern times. Piri Reis (1465 – c. 1554), whose Piri Reis map is the oldest surviving world map to include the Americas and possibly Antarctica. Gerardus Mercator (1512–1594), an innovative cartographer and originator of the Mercator projection. Bernhardus Varenius (1622–1650), who wrote the important work "General Geography" (1650), the first overview of the discipline and a foundation of modern geography. Mikhail Lomonosov (1711–1765), father of Russian geography and founder of the study of glaciology. Alexander von Humboldt (1769–1859), considered the father of modern geography. He published Cosmos and founded the study of biogeography. Arnold Henry Guyot (1807–1884), who noted the structure of glaciers and advanced the understanding of glacial motion, especially in fast ice flow. Louis Agassiz (1807–1873), the author of a glacial theory which disputed the notion of a steady-cooling Earth. Alfred Russel Wallace (1823–1913), founder of modern biogeography and the Wallace line. Vasily Dokuchaev (1840–1903), patriarch of Russian geography and founder of pedology. Wladimir Peter Köppen (1846–1940), developer of the most important climate classification and founder of paleoclimatology. William Morris Davis (1850–1934), father of American geography, founder of geomorphology and developer of the geographical cycle theory. John Francon Williams FRGS (1854–1911), who wrote the seminal work Geography of the Oceans, published in 1881. Walther Penck (1888–1923), proponent of the cycle of erosion and the simultaneous occurrence of uplift and denudation. 
Sir Ernest Shackleton (1874–1922), Antarctic explorer during the Heroic Age of Antarctic Exploration. Robert E. Horton (1875–1945), founder of modern hydrology and concepts such as infiltration capacity and overland flow. J Harlen Bretz (1882–1981), pioneer of research into the shaping of landscapes by catastrophic floods, most notably the Bretz (Missoula) floods. Luis García Sáinz (1894–1965), pioneer of physical geography in Spain. Willi Dansgaard (1922–2011), palaeoclimatologist and quaternary scientist, instrumental in the use of oxygen-isotope dating and co-identifier of Dansgaard-Oeschger events. Hans Oeschger (1927–1998), palaeoclimatologist and pioneer in ice core research, co-identifier of Dansgaard-Oeschger events. Richard Chorley (1927–2002), a key contributor to the quantitative revolution and the use of systems theory in geography. Sir Nicholas Shackleton (1937–2006), who demonstrated that oscillations in climate over the past few million years could be correlated with variations in the orbital and positional relationship between the Earth and the Sun. See also Areography Atmosphere of Earth Concepts and Techniques in Modern Geography Earth system science Environmental science Environmental studies Geographic information science Geographic information system Geophysics Geostatistics Global Positioning System Planetary science Physiographic regions of the world Selenography Technical geography References Further reading Pidwirny, Michael. (2014). Glossary of Terms for Physical Geography. Planet Earth Publishing, Kelowna, Canada. Available on Google Play. Pidwirny, Michael. (2014). Understanding Physical Geography. Planet Earth Publishing, Kelowna, Canada. Available on Google Play. Reynolds, Stephen J. et al. (2015). Exploring Physical Geography. [A Visual Textbook, Featuring more than 2500 Photographs & Illustrations]. McGraw-Hill Education, New York. External links Physiography by T.H. Huxley, 1878, full text, physical geography of the Thames River Basin Fundamentals of Physical Geography, 2nd Edition, by M. Pidwirny, 2006, full text Physical Geography for Students and Teachers, UK National Grid For Learning Earth sciences
0.772931
0.997665
0.771126
Integrated electric propulsion
Integrated electric propulsion (IEP), full electric propulsion (FEP) or integrated full electric propulsion (IFEP) is an arrangement of marine propulsion systems such that gas turbines or diesel generators or both generate three-phase electricity which is then used to power electric motors turning either propellers or waterjet impellors. It is a modification of the combined diesel-electric and gas propulsion system for ships which eliminates the need for clutches and reduces or eliminates the need for gearboxes by using electrical transmission rather than mechanical transmission of energy, so it is a series hybrid electric propulsion, instead of parallel. Some newer nuclear-powered warships also use a form of IEP. A nuclear power plant produces the steam to operate turbine generators; these in turn power electric propulsion motors. Integrated system Eliminating the mechanical connection between the engines and the propulsion has several advantages including increased freedom of placement of the engines, acoustical decoupling of the engines from the hull which makes the ship less noisy, and a reduction of weight and volume. Reducing acoustic signature is particularly important to naval vessels seeking to avoid detection and to cruise ships seeking to provide passengers with a pleasant voyage, but is of less benefit to cargo ships. Because ships require electricity even when not underway, having all of the engines produce electricity reduces the number of engines needed compared to more traditional arrangements in which one pool of engines provides electricity and another pool of engines provides propulsion, reducing capital costs and maintenance costs. A typical integrated electric propulsion arrangement on larger (e.g. cruise ships) and naval vessels includes both diesel generators and gas turbines. On smaller vessels (which make up the majority of IEP vessels) the engines are typically just diesel. The advantages of gas turbines include much lower weight and smaller size than diesels of similar power, and much less noise and vibration, but they are efficient only at or near maximum power. Diesel generators have the advantage of high efficiency over a wide range of power levels. Using them in combination allows for the benefits of a full range of operational efficiency, a low-vibration quiet mode of operation, and some reduction in weight and volume relative to a diesel-only arrangement. In naval vessels, a pool of diesel generators is typically used to provide a base load and enough power to achieve cruise speed. The gas turbines are used to provide peak power for higher speeds and may be required to operate weapon systems with high power demands. In passenger ships, one or more gas turbines are used for fast cruising. The diesels provide reliable redundancy and an efficient source of electricity when in port, at anchor, or drifting. A diesel-electric system is an integrated electric propulsion system in which no gas turbines are used and all of the engines are diesel. A turbine-electric system is also possible using gas turbine generators. Some yachts use only gas turbines for integrated electric propulsion without any diesel engines. If electric propulsion is applied via an electric motor on the shaft, or integrated into the main reduction gear driving the shaft, greater available power is realized more quickly than by using diesels alone. 
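The division of labour just described (diesel generator sets carrying the base load, gas turbines brought in only for peaks) can be illustrated with a toy dispatch rule. The sketch below is not based on any real ship's power-management system; the engine ratings, load cases, and names are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Genset:
    name: str
    rating_mw: float  # maximum electrical output in MW (invented figures)

def dispatch(load_mw, diesels, turbines):
    """Toy IEP dispatch: bring diesel gensets online to cover the base load,
    and start gas turbines only when the diesels cannot meet the demand."""
    online, remaining = [], load_mw
    for unit in diesels + turbines:  # diesels first, turbines only for peaks
        if remaining <= 0:
            break
        online.append(unit.name)
        remaining -= unit.rating_mw
    return online, max(remaining, 0.0)

# Hypothetical plant: four 2 MW diesel gensets and two 20 MW gas turbine alternators.
diesels = [Genset(f"DG{i}", 2.0) for i in range(1, 5)]
turbines = [Genset(f"GT{i}", 20.0) for i in range(1, 3)]

for load in (3.0, 7.5, 30.0):  # hotel load, cruise, sprint (all invented, in MW)
    units, shortfall = dispatch(load, diesels, turbines)
    print(f"{load:5.1f} MW -> {units}  (unmet: {shortfall:.1f} MW)")
```

Under this rule the quiet, efficient diesels cover everyday loads, and the large turbines are started only for the sprint case, which mirrors the operating pattern described in the text.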
In addition, an on-shaft permanent magnet motor drive system that also utilizes gas turbine prime movers on the main reduction gear can provide electricity when driven by the prime movers. The on-shaft permanent magnet electric motors provide propulsion at lower speeds via on-board electrical power generation (gas turbine or diesel), at significant fuel savings. If fleet-wide usage is analyzed, significant logistical advantages are realized over time. Compared to diesel, it increases flexibility, versatility and efficiency, with the capability of switching to provide propulsion or electrical power more rapidly, whichever the situation dictates. Reducing pollution In Norway, gas-electric hybrid propulsion is being used on a fleet of fully electric and plug-in hybrid ferries that cross fjords. Capable of travelling at 17–18 knots, these ships reduce total NOx by 8,000 tonnes per year and CO2 emissions by 300,000 tonnes per annum. Saving a million litres of diesel per year per ferry, the ferries recharge their batteries overnight and top them up from shorepower at each port of call. List of IEP ships DDG(X) (US Navy) Type 45 destroyer (Royal Navy) (US Navy) (Cunard Line) (US Navy) (Royal Navy) LHD (Spanish Navy) Multi Role Combat Vessel (Republic of Singapore Navy) LHD (Royal Australian Navy) (Japan Maritime Self-Defense Force) Nichinan-class oceanographic survey ship (Japan Maritime Self Defence Force) Shounan-class oceanographic survey ship (Japan Maritime Self Defence Force) Leeuwin-class Hydrographic Ship (Royal Australian Navy) Type 076 landing helicopter dock (People's Liberation Army Navy) INS Anvesh (A41) (Indian Navy & Defence Research and Development Organisation) Project 18 class destroyers (Indian Navy) See also Electric boat DC distribution system (ship propulsion) References Electric boats Marine propulsion Hybrid electric vehicles
0.77914
0.98968
0.7711
Oculesics
Oculesics, a subcategory of kinesics, is the study of eye movement, behavior, gaze, and eye-related nonverbal communication. The term's precise meaning varies slightly depending on the field of study (e.g., medicine or social science). Communication scholars use the term "oculesics" to refer to the study of how visual attention, gaze, and other expressive behaviors of the eyes vary across cultures. By comparison, medical professionals may use the same term for the measurement of a patient's ocular function, especially following a cerebral or other injury (e.g., a concussion). Nonverbal communication Oculesics is one form of nonverbal communication, which is the transmission and reception of meaning between communicators without the use of words. Nonverbal communication can include the environment around the communicators, the physical attributes or characteristics of the communicators, and the behavior of the communicators. The four nonverbal communication cues are known as spatial, temporal, visual, and vocal. Each cue relates to one or more forms of nonverbal communication: Chronemics – the study of time Haptics – the study of touch Kinesics – the study of movement Oculesics – the study of eye behavior Olfactics – the study of scent Paralanguage – the study of voice communication outside of language Proxemics – the study of space Dimensions of oculesics There are four aspects involved with oculesics: Dimension 1: eye contact There are two methods of assessing eye contact: Direct assessment Indirect assessment Dimension 2: eye movement Eye movement can occur either voluntarily or involuntarily. Various types of eye movement include changing eye direction, changing focus, or following objects with the eyes. The five types of this movement are saccades, smooth pursuit, vergence, vestibulo-ocular, and optokinetic movements. Dimension 3: pupil dilation Pupillary response refers to the voluntary or involuntary change in the size of the pupil. The pupils may enlarge or dilate in response to the appearance of real or perceived new objects of focus, or at the real or perceived indication of such appearances. Dimension 4: gaze direction Gazing deals with communicating and feeling intense desire with the eye, voluntarily or involuntarily. Theorists and studies Many theorists and studies are associated with nonverbal communication, including the study of oculesics. Ray Birdwhistell Professor Ray Birdwhistell was one of the earliest theorists of nonverbal communication. As an anthropologist, he coined the term kinesics, and defined it as communication and perceived meaning from facial expressions and body gestures. Birdwhistell spent over fifty years analyzing kinesics. He wrote two books on the subject: Introduction to Kinesics (1952) and Kinesics and Context (1970). He also created films of people communicating and studied their methods of nonverbal communication in slow motion. He published his results in an attempt to make general translations of gestures and expressions, although he later acknowledged it was impossible to equate each form of body language with a specific meaning. Birdwhistell's study of oculesics was greatly enhanced by his use of film. In one study, he filmed which directions and at what objects children looked as they learned activities from their parents. Paul Ekman Dr. 
Paul Ekman is a psychologist with over five decades of experience researching nonverbal communication, especially with facial expressions. He has written, co-authored, and edited over a dozen books and published over 100 articles on oculesics. He also served as an advisor for the television show Lie to Me, and currently works with the Dalai Lama on increasing awareness of the influence of emotion on behavior to help people achieve peace of mind. Ekman's work in facial expressions includes studies looking for connections between oculesics and other facial movements, eye behavior and physically covering the eyes when recalling personal traumatic events, and on his self-coined phrase, "the Duchenne smile" (named after Guillaume Duchenne), which relates to involuntary movements of the orbicularis oculi, pars orbitalis when smiling sincerely. Most prominently, oculesics play a major role in the Facial Action Coding System (FACS), which is a micro-expression database created by Dr. Ekman and his colleagues. Robert Plutchik Professor Robert Plutchik was a psychologist who specialized in communicating emotion with expressions and gestures. Many of his articles and books discuss the influence of emotion on nonverbal communication as well as the effect of those expressions and gestures on emotions. Professor Plutchik's work on oculesics includes studies on the "synthesis of facial expressions," which look for connections between expressions in the eye along with expressions from the forehead and mouth. Eye Movement Desensitization and Reprocessing Dr. Francine Shapiro developed Eye Movement Desensitization and Reprocessing (EMDR) treatment to address diseases such as Post-traumatic Stress Disorder (PTSD). EMDR communicates with the subject through eye movement in an attempt to re-create meaning and processing of prior traumatic events. Theory of Non-Competitive Stare Theory proposed by psychologist and psychotherapist Carlos Prada which suggests the existence of specific pathways in the visual system through which dominance is transmitted and processed. These pathways go from the dominant eye to the visual cortex and from there to the specific cognitive module for processing. More precisely and depending on the specific lateralization of brain function: In right-handed: right eye → optic nerve (through optic chiasm) → visual cortex → cognitive module. In left-handed: left eye → optic nerve → visual cortex → cognitive module. In ambidextrous: through any two pathways, according lateralization of brain function. Despite the scientific nature of the proposal, the author emphasises the benefits for interpersonal relationships of avoiding looking directly at the dominant eye through which the transmission of ocular dominance is initiated (as proposed by the theory). As a struggle for power and dominance is established through eye contact, and at the same time, as maintaining eye contact is considered to be a proof of sincerity, self-confidence and credibility, he suggests that eye contact should be maintained staring at the non-dominant eye, thus avoiding the specific routes of dominance transmission. This will mean a substantial improvement in interpersonal relationships. From the experience that empirical evidence provides and valuing the characteristics of lateralization of brain function between individuals, he proposes that the appropriate technique consists in staring at the left eye (non-dominant) of right-handed people, and at the right eye (non-dominant) of left-handed people. 
The improvement in interpersonal relationships would take place as much in the case of establishing new relations as in already established ones. Communicating emotions In the book Human Emotions, author Carroll Ellis Izard says "a complete definition of emotion must take into account all three of these aspects or components: (a) the experience or conscious feeling of emotion; (b) the processes that occur in the brain and nervous system; and (c) the observable expressive patterns of emotion, particularly those on the face" (p.4). This third component is where oculesics plays a role in nonverbal communication of emotion. Oculesics is a primary form of communicating emotion. The pseudoscientific study of neuro-linguistic programming (NLP) established three main types of thinking regarding what someone sees, hears, or feels. According to this pseudoscience, oculesics can show which type of thinking someone is using when they are communicating. A person thinking visually might physically turn their eyes away, as if to look at an imagined presentation of what they are thinking, even to the point of changing the focus of their eyes. Someone thinking in terms of hearing might turn their eyes as much as possible to one of their ears. A person thinking in terms of what they feel could look downwards as if looking toward their emotions coming from their body. Whether or not someone intends to send a particular meaning or someone else perceives meaning correctly, the exchange of communication happens and can initiate emotion. It is important to understand these dynamics because we often establish relationships (on small and grand scales) with oculesics. Lists of emotions There are many theories on how to annotate a specific list of emotions. Two prominent methodologies come from Dr. Paul Ekman and Dr. Robert Plutchik. Dr. Ekman states there are 15 basic emotions – amusement, anger, contempt, contentment, disgust, embarrassment, excitement, fear, guilt, pride in achievement, relief, sadness/distress, satisfaction, sensory pleasure, and shame – with each of these fifteen stemming out to similar and related sub-emotions. Dr. Plutchik says there are eight basic emotions, which have eight opposite emotions, all of which create human feelings (which also have opposites). He created Plutchik's Wheel of Emotions to demonstrate this theory. Perceptions and displays of emotions vary across time and culture. Some theorists say that even with these differences, there can be generally accepted "truths" about oculesics, such as the theory that constant eye contact between two people is physically and mentally uncomfortable. Emotions with eye summary: Anxiety – wetness or moisture in the eyes Anger – eyes glaring and wide open Boredom – eyes not focused, or focused on something else Desire – eyes wide, dilation of pupils Disgust – rapid turning away of eyes Envy – glaring Fear – eyes wide, or looking downward; may also be closed Happiness – "glittery" look to eyes, wrinkled at the sides Interest – intense focus, perhaps squinting Pity – heavy gaze to eyes, moisture in eyes Sadness – tears in eyes, looking downward; may have a sleepless appearance Shame – eyes looking down while head is turned down Surprise – eyes wide open Eye behaviors with emotional summaries: Eyes up – Different people look up for different reasons. Some look up when they are thinking. Others look upward in an effort to recall something from their memory. It may also indicate a person's subconscious display boredom. 
The head position is also considered - for example, an upwards look with a lowered head can be a coy, suggestive action. Eyes down – Avoiding eye contact, or looking down, can be a sign of submission or fear. It may also indicate that someone feels guilty. However, depending on the culture of the person, it may also be a sign of respect. Lateral movement of eyes – Looking away from the person from whom one is speaking could be a sign that something else has taken their interest. It may also mean that a person is easily distracted. Looking to the left can mean that a person is trying to remember a sound while looking to the right can mean that the person is actually imagining the sound. Side-to-side movement, however, can indicate that a person is lying. Gazing - Staring at someone means that a person shows sincere interest. For instance, staring at a person's lips can indicate that someone wants to kiss another person. The subject of someone's gaze can communicate what that person wants. Glancing – Glancing can show a person's true desires. For example, glancing at a door might mean that someone wants to leave, while glancing at a glass of water might mean that a person is thirsty. Eye contact – Eye contact is powerful and shows sincere interest if it is unbroken. A softening of the stare can indicate sexual desire. Breaking that eye contact can be threatening to the person who does not break eye contact. Staring – Staring is more than just eye contact; it usually involves eyes wider than normal. A lack of blinking may indicate more interest, but it may also indicate a stronger feeling than a person may intend. Prolonged eye contact can be aggressive, affectionate, or deceptive. Following with the eyes – Eyes follow movement naturally. If a person is interested in someone, then their eyes will naturally follow that person. Squinting – Squinting of the eyes may mean a person is trying to obtain a closer look. It may also mean that a person is considering whether something is true or not. Liars may use squinting as a tool to keep others from detecting their dishonesty. Squinting may also be just a result of a bright sun. Blinking – Blinking is a natural response that can occur for no other reason than having dry eyes. It can also be the result of a person feeling greater levels of stress. Rapid blinking can indicate arrogance while reduced blinking can move towards a stare. Winking – Winking can indicate that two people are non-verbally communicating a shared understanding. It can mean "hello" or it can be a sign of flirtation. Closing of eyes – Closing the eyes may be a response to fear or embarrassment. Others may close their eyes as a way to think more sincerely about a particular subject. Eye moisture – Tears can indicate sadness, but they are also used to wash and clean the eyes. Damp eyes can be suppressed by crying or an expression of extreme happiness or laughter. In many cultures, men are not expected to cry but may experience damp eyes in place of crying. Pupil dilation – Pupil dilation may be harder to detect by most people. Sexual desire may be a cause of such dilation. It may also be an indication of attraction. Physiologically, the eyes dilate when it is darker to let in more light. Rubbing of eyes – Eyes may water, causing a person to rub their own eyes. This can happen when a person feels uncomfortable or tired. It may also happen when a person simply has something in their eyes. 
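The voluntary and involuntary movements catalogued above are, in practice, often measured with eye trackers, and a standard minimal way of separating rapid saccades from steadier fixations is a velocity-threshold classifier (commonly abbreviated I-VT). The sketch below assumes gaze samples expressed in degrees of visual angle at a fixed sampling rate; the 30 degrees-per-second threshold and all names are illustrative assumptions, not values taken from this article.

```python
import math

def classify_ivt(samples, sample_rate_hz, threshold_deg_per_s=30.0):
    """Velocity-threshold identification (I-VT): label each gaze sample as part
    of a saccade (fast movement) or a fixation (slow, steady gaze).
    `samples` is a sequence of (x, y) positions in degrees of visual angle."""
    labels = ["fixation"]  # the first sample has no velocity estimate
    dt = 1.0 / sample_rate_hz
    for (x0, y0), (x1, y1) in zip(samples, samples[1:]):
        velocity = math.hypot(x1 - x0, y1 - y0) / dt  # degrees per second
        labels.append("saccade" if velocity > threshold_deg_per_s else "fixation")
    return labels

# Tiny synthetic trace: steady gaze, a rapid jump to a new target, steady gaze again.
gaze = [(0.0, 0.0), (0.05, 0.0), (0.1, 0.0), (5.0, 0.0), (10.0, 0.0), (10.05, 0.0)]
print(classify_ivt(gaze, sample_rate_hz=60))
# -> ['fixation', 'fixation', 'fixation', 'saccade', 'saccade', 'fixation']
```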
Cultural impact Cultural differences in nonverbal communication In his essay The Coordinated Management of Meaning (CMM), Dr. W. Barnett Pearce discusses how people derive meaning in communication based on reference points gained or passed down to them culturally. Winston Bremback said, "To know another's language and not his culture is a good way to make a fluent fool of oneself." Culture in this sense, includes all of the nonverbal communication, customs, thought, speech and artifacts that make a group of people unique. Brembeck knew of the significant role that communication plays besides language. While most nonverbal communication is conveyed subconsciously, there are cultural similarities that enable us to understand the difference between what is being said and what is actually meant. But generalizing non-verbal communication between cultures can be tricky since there are as many cultural differences in nonverbal communication as there are different languages in the world. While growing up, a child will typically spend a couple of years learning to communicate verbally while simultaneously learning the idiosyncrasies of nonverbal communication of their culture. In fact, the first couple of years of a child's life is spent learning most of these nonverbals. The differences between cultures are thus ingrained at the very earliest points of development. Projected similarity Anthropologists have proven for years that nonverbal communication styles vary by culture. Most people, however, are not only oblivious to the differences in these nonverbal communication styles within their own culture, but they also assume that individuals from other cultures also communicate in the same way that they do. This is a phenomenon called projected similarity. The result of projected similarity is that misperceptions, misinterpretations, and misunderstandings occur in cross-cultural interactions when a person interprets another's nonverbal communication in the light of his or her own cultural norms. While all nonverbal communication differs greatly among cultures, perhaps none is so obviously different as the movement and study of eye contact. A particular nonverbal interaction between two individuals can have completely different meanings in different cultures. Even within that same culture, oculesics plays a tremendous role in obtaining meaning from other nonverbal cues. This is why, even in the same culture, humans still have trouble sometimes understanding each other because of their varying eye behavior, nonverbal cues, and cultural and personal differences. Stereotypes in cultural differences It is because of these personal differences, that in studying cultural communication patterns we sometimes find it necessary to speak in stereotypes and generalizations. Just as one might say that Puerto Ricans who speak Spanish tend to use a louder voice than others communicating at the same distance, it would not be fair to say that all Puerto Ricans exhibit the same qualities. There are obviously enormous variations within each culture. These variations can depend on age, gender, geographical location, race, socioeconomic status, and personality. Because there are so many factors to study, most are generally glossed over in favor of stereotypes and generalizations. Some oculesic findings from around the world As previously discussed, the effect that eye movement has on human behavior has been widely studied. 
In some cultures, however, this study actually allows for insights into individuals whose only way of communication is by nonverbal means. Studies show that eye behavior shows special patterns in psychiatric patients, autistic children, and persons from diverse cultures. In some countries, doctors use the study of oculesics to test stimulation among patients and interest levels in children who are not as expressive verbally. While lack of eye contact in many cultures can signal either disinterest or respect, depending on the culture of the individual, it may be an insight into a patient's brain functions at the time of observation. Latin American culture vs. Anglo Saxon culture There are several differences between Anglo Saxon culture and Latino/Latin American cultures, both in the way the two groups interact with each other as well as the way they interact with members of other cultural groups. Besides the obvious language differences, nonverbal communication is the most noticeable difference between the two groups. Specifically, within nonverbal communication, eye contact and eye behavior can actually help one differentiate between the cultural backgrounds of two individuals by looking at nothing but their eyes. Sociologists have found that Anglo-Saxons tend to look steadily and intently into the eyes of the person to whom they are speaking. Latinos will look into the eyes of the person to whom they are speaking, but only in a fleeting way. Latinos tend to look into the other person's eyes, and then immediately their eyes to wander when speaking. In traditional Anglo-Saxon culture, averting the eyes in such a way usually portrays a lack of confidence, certainty, or truthfulness. In the Latino culture, direct or prolonged eye contact can also indicate that you are challenging the individual with whom you are speaking, or that you have a romantic interest in the person. Muslim culture In the Islamic faith, most Muslims lower their heads and try not to focus on the opposite sex's features save for the hands and face. This is a show of respect but also a cultural rule which enforces Islamic law. Lustful glances at those of the opposite sex are also prohibited. Western Pacific Nations Many western Pacific nations share much of the same cultural customs. Children, for instance, are taught in school to direct their eyes to their teacher's Adam's apple or tie knot. This continues through adulthood, as most Asian cultures lower their eyes when speaking to a superior as a gesture of respect. East Asia and Northern Africa In many East Asian and north African cultures such as Nigeria,[6] it is also respectful not to look the dominant person in the eye. The seeking of constant unbroken eye contact by the other participant in a conversation can often be considered overbearing or distracting- even in Western cultures. United States In the United States, eye contact may serve as a regulating gesture and is typically associated with respect, attentiveness, and honesty. Americans associate direct eye contact with forthrightness and trustworthiness. Dealing with cultural differences Across all cultures, communicators and leaders become successful because they observe the unconscious actions of others. Sometimes an individual's actions are the result of their culture or upbringing and sometimes they are the result of the emotion they are portraying. Keen communicators are able to tell the difference between the two and effectively communicate based on their observations. 
Oculesics is not a standalone science. Combining the information obtained from eye movements and behaviors with other nonverbal cues such as Haptics, Kinesics, or Olfactics will lend the observer a much more well-rounded and accurate portrait of an individual's behavior. According to social scientists, individuals need to first become consciously aware of their own culture before being able to interpret differences among other cultures. In learning about our own culture, we learn how we are different from the cultures of those around us. Only then, will we become aware of the differences among the cultures of others. Finally, we should undergo acculturation, that is, borrow attributes from other cultures that will help us function effectively without in any way having to relinquish our own cultural identities. In Nonverbal Communication, Nine-Curt stresses that "we should develop, refine, and constantly practice the skill of switching cultural channels, as on a TV set, in order to be able to interact with people from other cultures, and often with people from subcultures within our own, more effectively. This is indispensable if we are to avoid the pain, frustration, and discomfort that usually accompany trying to move and live in a culture different from our own. As we become proficient in this skill, we will find it less difficult and highly satisfying to accept others and their styles of living. See also Jacques Lacan Orthoptics Visual perception Vision therapy References Further reading Eyes for Lies (2012). Articles on truth wizards. Eyes for Lies: Deception Expert. Ekman, P., Friesen, W. V., & Ellsworth, P. (1982). What emotion categories or dimensions can observers judge from facial behavior? In P. Ekman (Ed.), Emotion in the human face. New York: Cambridge University Press. Guerrero, L.K., & Hecht, M.L. (2008). The nonverbal communication reader: Classic and contemporary readings (3rd ed.) (pp. 511–520). Long Grove, IL: Waveland Press. Oatley, K., & Johnson-Laird, P. N. (1987). Towards a cognitive theory of emotions. Cognition & Emotion. 1(29-50). Pazian, Maggie. (2010). The Wizards Project: People with exceptional skills in lie detection. Examiner.com. Plutchik, R. (1980). A general psychoevolutionary theory of emotion. In R. Plutchik & H. Kellerman (Eds.) Emotion: Theory, research, and experience: Vol. 1. Theories of emotion. New York: Academic. Visual perception
First law of thermodynamics
The first law of thermodynamics is a formulation of the law of conservation of energy in the context of thermodynamic processes. The law distinguishes two principal forms of energy transfer, heat and thermodynamic work, that modify a thermodynamic system containing a constant amount of matter. The law also defines the internal energy of a system, an extensive property that keeps account of the balance of heat and work in the system. Energy cannot be created or destroyed, but it can be transformed from one form to another. In an isolated system the sum of all forms of energy is constant. An equivalent statement is that perpetual motion machines of the first kind are impossible: work done by a system on its surroundings draws on the system's internal energy, so the internal energy lost in doing that work must be resupplied, as heat from an external energy source or as work by an external machine acting on the system, if the system is to continue doing work indefinitely. The ideal isolated system, of which the entire universe is an example, is used in practice only as a model. Many systems in practical applications require the consideration of internal chemical or nuclear reactions, as well as transfers of matter into or out of the system. For such considerations, thermodynamics also defines the concepts of open systems, closed systems, and other types.

Definition
For thermodynamic processes of energy transfer without transfer of matter, the first law of thermodynamics is often expressed as the algebraic sum of the contributions to the internal energy, U, from all work, W, done on or by the system, and from the quantity of heat, Q, supplied to or withdrawn from the system. The historical sign convention for the terms has been that heat supplied to the system is positive, but work done by the system is subtracted. This was the convention of Rudolf Clausius, so that a change in the internal energy, ΔU, is written ΔU = Q − W. Modern formulations, such as those of Max Planck and of IUPAC, often replace the subtraction with addition, counting all net energy transfers to the system as positive and all net energy transfers from the system as negative, so that ΔU = Q + W with W now denoting work done on the system, irrespective of the use of the system, for example as an engine. When a system expands in an isobaric process, the thermodynamic work, W, done by the system on the surroundings is the product, P ΔV, of system pressure, P, and system volume change, ΔV, whereas −P ΔV is said to be the thermodynamic work done on the system by the surroundings. The change in internal energy of the system is then ΔU = Q − P ΔV, where Q denotes the quantity of heat supplied to the system from its surroundings. Work and heat express physical processes of supply or removal of energy, while the internal energy is a mathematical abstraction that keeps account of the changes of energy that befall the system. The term Q is the quantity of energy added or removed as heat in the thermodynamic sense, not referring to a form of energy within the system. Likewise, W denotes the quantity of energy gained or lost through thermodynamic work. Internal energy is a property of the system, while work and heat describe the process, not the system. Thus, a given internal energy change, ΔU, can be achieved by different combinations of heat and work. Heat and work are said to be path dependent, while the change in internal energy depends only on the initial and final states of the system, not on the path between.
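As a check on the sign conventions just described, the following minimal Python sketch applies the Clausius form ΔU = Q − W to the isobaric heating of an ideal monatomic gas; the gas model and all numerical values are illustrative assumptions introduced here, not data from the article.

# Illustrative check of the first law, dU = Q - W (Clausius convention),
# for isobaric heating of an ideal monatomic gas. All values are assumed.
R = 8.314                  # gas constant, J/(mol*K)
n = 1.0                    # amount of substance, mol (assumed)
dT = 50.0                  # temperature rise, K (assumed)

Cv = 1.5 * R               # molar heat capacity at constant volume (monatomic ideal gas)
Cp = Cv + R                # molar heat capacity at constant pressure

Q = n * Cp * dT            # heat supplied to the system at constant pressure
W = n * R * dT             # work done by the system: P*dV = n*R*dT for an ideal gas
dU_first_law = Q - W       # first law, Clausius convention
dU_direct = n * Cv * dT    # internal energy of an ideal gas depends on temperature only

print(round(dU_first_law, 2), round(dU_direct, 2))  # both give 623.55 J

Computing ΔU both ways gives the same number, which is the content of the statement that ΔU is path independent while Q and W are not.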
Thermodynamic work is measured by change in the system, and is not necessarily the same as work measured by forces and distances in the surroundings, though, ideally, such can sometimes be arranged; this distinction is noted in the term 'isochoric work', at constant system volume, with ΔV = 0, which is not a form of thermodynamic work.

History
In the first half of the eighteenth century, the French philosopher and mathematician Émilie du Châtelet made notable contributions to the emerging theoretical framework of energy, for example by emphasising Leibniz's concept of 'vis viva', mv², as distinct from Newton's momentum, mv. Empirical developments of the early ideas, in the century following, wrestled with contravening concepts such as the caloric theory of heat. In the few years of his life (1796–1832) after the 1824 publication of his book Reflections on the Motive Power of Fire, Sadi Carnot came to understand that the caloric theory of heat was restricted to mere calorimetry, and that heat and "motive power" are interconvertible. This is known only from his posthumously published notes, in which he wrote to this effect. At that time, the concept of mechanical work had not been formulated. Carnot was aware that heat could be produced by friction and by percussion, as forms of dissipation of "motive power". As late as 1847, Lord Kelvin believed in the caloric theory of heat, being unaware of Carnot's notes. In 1840, Germain Hess stated a conservation law (Hess's law) for the heat of reaction during chemical transformations. This law was later recognized as a consequence of the first law of thermodynamics, but Hess's statement was not explicitly concerned with the relation between energy exchanges by heat and work. In 1842, Julius Robert von Mayer made a statement that was rendered by Clifford Truesdell (1980) as "in a process at constant pressure, the heat used to produce expansion is universally interconvertible with work", but this is not a general statement of the first law, for it does not express the concept of the thermodynamic state variable, the internal energy. Also in 1842, Mayer measured a temperature rise caused by friction in a body of paper pulp. This was near the time of the 1842–1845 work of James Prescott Joule, who measured the mechanical equivalent of heat. In 1845, Joule published a paper entitled The Mechanical Equivalent of Heat, in which he specified a numerical value for the amount of mechanical work required to "produce a unit of heat", based on heat production by friction in the passage of electricity through a resistor and in the rotation of a paddle in a vat of water. The first full statements of the law came in 1850 from Rudolf Clausius and from William Rankine. Some scholars consider Rankine's statement less distinct than that of Clausius.

Original statements: the "thermodynamic approach"
The original 19th-century statements of the first law appeared in a conceptual framework in which transfer of energy as heat was taken as a primitive notion, defined by calorimetry. It was presupposed as logically prior to the theoretical development of thermodynamics. Jointly primitive with this notion of heat were the notions of empirical temperature and thermal equilibrium. This framework also took as primitive the notion of transfer of energy as work. It did not presume a concept of energy in general, but regarded energy as derived or synthesized from the prior notions of heat and work. By one author, this framework has been called the "thermodynamic" approach.
The first explicit statement of the first law of thermodynamics, by Rudolf Clausius in 1850, referred to cyclic thermodynamic processes, and to the existence of a function of state of the system, the internal energy. He expressed it in terms of a differential equation for the increments of a thermodynamic process. This equation may be described as follows: Reflecting the experimental work of Mayer and of Joule, Clausius wrote: Because of its definition in terms of increments, the value of the internal energy of a system is not uniquely defined. It is defined only up to an arbitrary additive constant of integration, which can be adjusted to give arbitrary reference zero levels. This non-uniqueness is in keeping with the abstract mathematical nature of the internal energy. The internal energy is customarily stated relative to a conventionally chosen standard reference state of the system. The concept of internal energy is considered by Bailyn to be of "enormous interest". Its quantity cannot be immediately measured, but can only be inferred, by differencing actual immediate measurements. Bailyn likens it to the energy states of an atom that were revealed by Bohr's energy relation ΔE = hν. In each case, an unmeasurable quantity (the internal energy, the atomic energy level) is revealed by considering the difference of measured quantities (increments of internal energy, quantities of emitted or absorbed radiative energy).

Conceptual revision: the "mechanical approach"
In 1907, George H. Bryan wrote about systems between which there is no transfer of matter (closed systems): "Definition. When energy flows from one system or part of a system to another otherwise than by the performance of mechanical work, the energy so transferred is called heat." This definition may be regarded as expressing a conceptual revision, as follows. The reinterpretation was systematically expounded in 1909 by Constantin Carathéodory, whose attention had been drawn to it by Max Born. Largely through Born's influence, this revised conceptual approach to the definition of heat came to be preferred by many twentieth-century writers. It might be called the "mechanical approach". Energy can also be transferred from one thermodynamic system to another in association with transfer of matter. Born points out that in general such energy transfer is not resolvable uniquely into work and heat moieties. In general, when there is transfer of energy associated with matter transfer, work and heat transfers can be distinguished only when they pass through walls physically separate from those for matter transfer. The "mechanical" approach postulates the law of conservation of energy. It also postulates that energy can be transferred from one thermodynamic system to another adiabatically as work, and that energy can be held as the internal energy of a thermodynamic system. It also postulates that energy can be transferred from one thermodynamic system to another by a path that is non-adiabatic and unaccompanied by matter transfer. Initially, it "cleverly" (according to Martin Bailyn) refrains from labelling as 'heat' such non-adiabatic, unaccompanied transfer of energy. It rests on the primitive notion of walls, especially adiabatic walls and non-adiabatic walls, defined as follows. Temporarily, only for the purpose of this definition, one can prohibit transfer of energy as work across a wall of interest.
Then walls of interest fall into two classes: (a) those such that arbitrary systems separated by them remain independently in their own previously established respective states of internal thermodynamic equilibrium, which are defined as adiabatic; and (b) those without such independence, which are defined as non-adiabatic. This approach derives the notions of transfer of energy as heat, and of temperature, as theoretical developments, not taking them as primitives. It regards calorimetry as a derived theory. It has an early origin in the nineteenth century, for example in the work of Hermann von Helmholtz, but also in the work of many others.

Conceptually revised statement, according to the mechanical approach
The revised statement of the first law postulates that a change in the internal energy of a system due to any arbitrary process that takes the system from a given initial thermodynamic state to a given final equilibrium thermodynamic state can be determined through the physical existence, for those given states, of a reference process that occurs purely through stages of adiabatic work. The revised statement is then: For a closed system, in any arbitrary process of interest that takes it from an initial to a final state of internal thermodynamic equilibrium, the change of internal energy is the same as that for a reference adiabatic work process that links those two states. This is so regardless of the path of the process of interest, and regardless of whether it is an adiabatic or a non-adiabatic process. The reference adiabatic work process may be chosen arbitrarily from amongst the class of all such processes. This statement is much less close to the empirical basis than are the original statements, but is often regarded as conceptually parsimonious in that it rests only on the concepts of adiabatic work and of non-adiabatic processes, not on the concepts of transfer of energy as heat and of empirical temperature that are presupposed by the original statements. Largely through the influence of Max Born, it is often regarded as theoretically preferable because of this conceptual parsimony. Born particularly observes that the revised approach avoids thinking in terms of what he calls the "imported engineering" concept of heat engines. Basing his thinking on the mechanical approach, Born in 1921, and again in 1949, proposed to revise the definition of heat. In particular, he referred to the work of Constantin Carathéodory, who had in 1909 stated the first law without defining quantity of heat. Born's definition was specifically for transfers of energy without transfer of matter, and it has been widely followed in textbooks. Born observes that a transfer of matter between two systems is accompanied by a transfer of internal energy that cannot be resolved into heat and work components. There can be pathways to other systems, spatially separate from that of the matter transfer, that allow heat and work transfer independent of and simultaneous with the matter transfer. Energy is conserved in such transfers.

Description
Cyclic processes
The first law of thermodynamics for a closed system was expressed in two ways by Clausius. One way referred to cyclic processes and the inputs and outputs of the system, but did not refer to increments in the internal state of the system. The other way referred to an incremental change in the internal state of the system, and did not expect the process to be cyclic.
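In modern notation, with δ marking the inexact differentials of heat and work and with W counted, as earlier in this article, as work done by the system, the two Clausius forms are commonly rendered as follows; this is an editorial paraphrase in LaTeX, not a quotation from Clausius.

\oint \delta Q \;=\; \oint \delta W \qquad \text{(cyclic form: over a complete cycle, the net heat taken in equals the net work done)}

dU \;=\; \delta Q \;-\; \delta W \qquad \text{(incremental form, introducing the state function } U\text{)}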
A cyclic process is one that can be repeated indefinitely often, returning the system to its initial state. Of particular interest for a single cycle of a cyclic process are the net work done, and the net heat taken in (or 'consumed', in Clausius' statement), by the system. In a cyclic process in which the system does net work on its surroundings, it is observed to be physically necessary not only that heat be taken into the system, but also, importantly, that some heat leave the system. The difference is the heat converted by the cycle into work. In each repetition of a cyclic process, the net work done by the system, measured in mechanical units, is proportional to the heat consumed, measured in calorimetric units. The constant of proportionality is universal and independent of the system, and in 1845 and 1847 it was measured by James Joule, who described it as the mechanical equivalent of heat.

Various statements of the law for closed systems
The law is of great importance and generality and is consequently thought of from several points of view. Most careful textbook statements of the law express it for closed systems. It is stated in several ways, sometimes even by the same author. For the thermodynamics of closed systems, the distinction between transfers of energy as work and as heat is central and is within the scope of the present article. For the thermodynamics of open systems, such a distinction is beyond the scope of the present article, but some limited comments are made on it in the section below headed 'First law of thermodynamics for open systems'. There are two main ways of stating a law of thermodynamics, physically or mathematically. They should be logically coherent and consistent with one another. An example of a physical statement is that of Planck (1897/1903): It is in no way possible, either by mechanical, thermal, chemical, or other devices, to obtain perpetual motion, i.e. it is impossible to construct an engine which will work in a cycle and produce continuous work, or kinetic energy, from nothing. This physical statement is restricted neither to closed systems nor to systems with states that are strictly defined only for thermodynamic equilibrium; it has meaning also for open systems and for systems with states that are not in thermodynamic equilibrium. An example of a mathematical statement is that of Crawford (1963): For a given system we let Ekin = large-scale mechanical energy, Epot = large-scale potential energy, and Etot = total energy. The first two quantities are specifiable in terms of appropriate mechanical variables, and by definition Etot = Ekin + Epot + U. For any finite process, whether reversible or irreversible, ΔEtot = ΔEkin + ΔEpot + ΔU. The first law in a form that involves the principle of conservation of energy more generally is ΔEtot = Q + W. Here Q and W are heat and work added, with no restrictions as to whether the process is reversible, quasistatic, or irreversible.[Warner, Am. J. Phys., 29, 124 (1961)] This statement by Crawford, for W, uses the sign convention of IUPAC, not that of Clausius. Though it does not explicitly say so, this statement refers to closed systems. Internal energy is evaluated for bodies in states of thermodynamic equilibrium, which possess well-defined temperatures, relative to a reference state. The history of statements of the law for closed systems has two main periods, before and after the work of George H. Bryan (1907) and of Carathéodory (1909), and the approval of Carathéodory's work given by Born (1921).
The earlier traditional versions of the law for closed systems are nowadays often considered to be out of date. Carathéodory's celebrated presentation of equilibrium thermodynamics refers to closed systems, which are allowed to contain several phases connected by internal walls of various kinds of impermeability and permeability (explicitly including walls that are permeable only to heat). Carathéodory's 1909 version of the first law of thermodynamics was stated in an axiom which refrained from defining or mentioning temperature or quantity of heat transferred. That axiom stated that the internal energy of a phase in equilibrium is a function of state, that the sum of the internal energies of the phases is the total internal energy of the system, and that the value of the total internal energy of the system is changed by the amount of work done adiabatically on it, considering work as a form of energy. That article considered this statement to be an expression of the law of conservation of energy for such systems. This version is nowadays widely accepted as authoritative, but is stated in slightly varied ways by different authors. Such statements of the first law for closed systems assert the existence of internal energy as a function of state defined in terms of adiabatic work. Thus heat is not defined calorimetrically or as due to temperature difference. It is defined as a residual difference between change of internal energy and work done on the system, when that work does not account for the whole of the change of internal energy and the system is not adiabatically isolated. The 1909 Carathéodory statement of the law in axiomatic form does not mention heat or temperature, but the equilibrium states to which it refers are explicitly defined by variable sets that necessarily include "non-deformation variables", such as pressures, which, within reasonable restrictions, can be rightly interpreted as empirical temperatures, and the walls connecting the phases of the system are explicitly defined as possibly impermeable to heat or permeable only to heat. According to A. Münster (1970), "A somewhat unsatisfactory aspect of Carathéodory's theory is that a consequence of the Second Law must be considered at this point [in the statement of the first law], i.e. that it is not always possible to reach any state 2 from any other state 1 by means of an adiabatic process." Münster instances that no adiabatic process can reduce the internal energy of a system at constant volume. Carathéodory's paper asserts that its statement of the first law corresponds exactly to Joule's experimental arrangement, regarded as an instance of adiabatic work. It does not point out that Joule's experimental arrangement performed essentially irreversible work, through friction of paddles in a liquid, or passage of electric current through a resistance inside the system, driven by motion of a coil and inductive heating, or by an external current source, which can access the system only by the passage of electrons, and so is not strictly adiabatic, because electrons are a form of matter, which cannot penetrate adiabatic walls. The paper goes on to base its main argument on the possibility of quasi-static adiabatic work, which is essentially reversible. The paper asserts that it will avoid reference to Carnot cycles, and then proceeds to base its argument on cycles of forward and backward quasi-static adiabatic stages, with isothermal stages of zero magnitude. 
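Münster's instance quoted above can be made concrete by a short argument; the following gloss is editorial and is not taken from the source. For a simple system whose only configuration work is pressure–volume work, no such work is possible at constant volume, so any adiabatic work done on the system (stirring, or resistive electrical heating) is dissipative and can only raise the internal energy:

\Delta U \;=\; W^{\mathrm{ad}}_{\text{on system}} \;\ge\; 0 \qquad (V \text{ constant, adiabatic process})

Hence a state of lower internal energy at the same volume is not adiabatically reachable, which is exactly the asymmetry that, as Münster notes, has to be imported from the second law.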
Sometimes the concept of internal energy is not made explicit in the statement. Sometimes the existence of the internal energy is made explicit but work is not explicitly mentioned in the statement of the first postulate of thermodynamics. Heat supplied is then defined as the residual change in internal energy after work has been taken into account, in a non-adiabatic process. A respected modern author states the first law of thermodynamics as "Heat is a form of energy", which explicitly mentions neither internal energy nor adiabatic work. Heat is defined as energy transferred by thermal contact with a reservoir, which has a temperature, and is generally so large that addition and removal of heat do not alter its temperature. A current student text on chemistry defines heat thus: "heat is the exchange of thermal energy between a system and its surroundings caused by a temperature difference." The author then explains how heat is defined or measured by calorimetry, in terms of heat capacity, specific heat capacity, molar heat capacity, and temperature. A respected text disregards Carathéodory's exclusion of mention of heat from the statement of the first law for closed systems, and admits heat calorimetrically defined along with work and internal energy. Another respected text defines heat exchange as determined by temperature difference, but also mentions that the Born (1921) version is "completely rigorous". These versions follow the traditional approach that is now considered out of date, exemplified by that of Planck (1897/1903).

Evidence for the first law of thermodynamics for closed systems
The first law of thermodynamics for closed systems was originally induced from empirically observed evidence, including calorimetric evidence. It is nowadays, however, taken to provide the definition of heat via the law of conservation of energy and the definition of work in terms of changes in the external parameters of a system. The original discovery of the law was gradual, over a period of perhaps half a century or more, and some early studies were in terms of cyclic processes. The following is an account in terms of changes of state of a closed system through compound processes that are not necessarily cyclic. This account first considers processes for which the first law is easily verified because of their simplicity, namely adiabatic processes (in which there is no transfer as heat) and adynamic processes (in which there is no transfer as work).

Adiabatic processes
In an adiabatic process, there is transfer of energy as work but not as heat. For all adiabatic processes that take a system from a given initial state to a given final state, irrespective of how the work is done, the respective eventual total quantities of energy transferred as work are one and the same, determined just by the given initial and final states. The work done on the system is defined and measured by changes in mechanical or quasi-mechanical variables external to the system. Physically, adiabatic transfer of energy as work requires the existence of adiabatic enclosures. For instance, in Joule's experiment, the initial system is a tank of water with a paddle wheel inside. If we isolate the tank thermally, and move the paddle wheel with a pulley and a weight, we can relate the increase in temperature to the distance descended by the mass.
Next, the system is returned to its initial state, isolated again, and the same amount of work is done on the tank using different devices (an electric motor, a chemical battery, a spring,...). In every case, the amount of work can be measured independently. The return to the initial state is not conducted by doing adiabatic work on the system. The evidence shows that the final state of the water (in particular, its temperature and volume) is the same in every case. It is irrelevant if the work is electrical, mechanical, chemical,... or if done suddenly or slowly, as long as it is performed in an adiabatic way, that is to say, without heat transfer into or out of the system. Evidence of this kind shows that to increase the temperature of the water in the tank, the qualitative kind of adiabatically performed work does not matter. No qualitative kind of adiabatic work has ever been observed to decrease the temperature of the water in the tank. A change from one state to another, for example an increase of both temperature and volume, may be conducted in several stages, for example by externally supplied electrical work on a resistor in the body, and adiabatic expansion allowing the body to do work on the surroundings. It needs to be shown that the time order of the stages, and their relative magnitudes, does not affect the amount of adiabatic work that needs to be done for the change of state. According to one respected scholar: "Unfortunately, it does not seem that experiments of this kind have ever been carried out carefully. ... We must therefore admit that the statement which we have enunciated here, and which is equivalent to the first law of thermodynamics, is not well founded on direct experimental evidence." Another expression of this view is "no systematic precise experiments to verify this generalization directly have ever been attempted". This kind of evidence, of independence of sequence of stages, combined with the above-mentioned evidence, of independence of qualitative kind of work, would show the existence of an important state variable that corresponds with adiabatic work, but not that such a state variable represented a conserved quantity. For the latter, another step of evidence is needed, which may be related to the concept of reversibility, as mentioned below. That important state variable was first recognized and denoted by Clausius in 1850, but he did not then name it, and he defined it in terms not only of work but also of heat transfer in the same process. It was also independently recognized in 1850 by Rankine, who also denoted it ; and in 1851 by Kelvin who then called it "mechanical energy", and later "intrinsic energy". In 1865, after some hesitation, Clausius began calling his state function "energy". In 1882 it was named as the internal energy by Helmholtz. If only adiabatic processes were of interest, and heat could be ignored, the concept of internal energy would hardly arise or be needed. The relevant physics would be largely covered by the concept of potential energy, as was intended in the 1847 paper of Helmholtz on the principle of conservation of energy, though that did not deal with forces that cannot be described by a potential, and thus did not fully justify the principle. Moreover, that paper was critical of the early work of Joule that had by then been performed. A great merit of the internal energy concept is that it frees thermodynamics from a restriction to cyclic processes, and allows a treatment in terms of thermodynamic states. 
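The paddle-wheel arrangement described above amounts to simple energy bookkeeping: the potential energy given up by the descending weight is delivered to the water as adiabatic work and appears as a temperature rise. The following minimal Python sketch illustrates the arithmetic; the masses, the descent, and the use of a modern value for the specific heat of water are assumptions made purely for illustration, not data from the article.

# Temperature rise of thermally isolated water when a falling weight drives a
# paddle wheel, i.e. when adiabatic work is done on the water. Assumed values.
g = 9.81             # gravitational acceleration, m/s^2
m_weight = 10.0      # mass of the descending weight, kg (assumed)
h = 2.0              # total distance descended, m (assumed)
m_water = 0.5        # mass of water in the tank, kg (assumed)
c_water = 4186.0     # specific heat capacity of water, J/(kg*K), modern value

W_adiabatic = m_weight * g * h           # work done on the water, J
dT = W_adiabatic / (m_water * c_water)   # resulting temperature rise, K

print(f"work = {W_adiabatic:.1f} J, temperature rise = {dT:.3f} K")
# With these assumed numbers: about 196.2 J and roughly 0.094 K.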
In an adiabatic process, adiabatic work takes the system either from a reference state with known internal energy to an arbitrary state, or from one such state to another. Except under the special and, strictly speaking, fictional condition of reversibility, only one of the two directions of such a process is empirically feasible by a simple application of externally supplied work. The reason for this is given by the second law of thermodynamics and is not considered in the present article. The fact of such irreversibility may be dealt with in two main ways, according to different points of view. Since the work of Bryan (1907), the most accepted way to deal with it nowadays, followed by Carathéodory, is to rely on the previously established concept of quasi-static processes (Planck, M. (1897/1903), Section 71, p. 52), as follows. Actual physical processes of transfer of energy as work are always at least to some degree irreversible. The irreversibility is often due to mechanisms known as dissipative, which transform bulk kinetic energy into internal energy. Examples are friction and viscosity. If the process is performed more slowly, the frictional or viscous dissipation is less. In the limit of infinitely slow performance, the dissipation tends to zero and then the limiting process, though fictional rather than actual, is notionally reversible, and is called quasi-static. Throughout the course of the fictional limiting quasi-static process, the internal intensive variables of the system are equal to the external intensive variables, those that describe the reactive forces exerted by the surroundings. This can be taken to justify the formula defining internal energy changes by adiabatic work. Another way to deal with it is to allow that experiments with processes of heat transfer to or from the system may be used to justify that formula. Moreover, this way deals to some extent with the problem of lack of direct experimental evidence that the time order of stages of a process does not matter in the determination of internal energy. It does not provide theoretical purity in terms of adiabatic work processes, but it is empirically feasible, and is in accord with experiments actually done, such as the Joule experiments mentioned just above, and with older traditions. The adiabatic-work formula allows that, to go by processes of quasi-static adiabatic work from one state to another, we can take a path that passes through the reference state, since the quasi-static adiabatic work is independent of the path. This kind of empirical evidence, coupled with theory of this kind, largely justifies the following statement: For all adiabatic processes between two specified states of a closed system of any nature, the net work done is the same regardless of the details of the process, and determines a state function called internal energy, U.

Adynamic processes
A complementary observable aspect of the first law concerns heat transfer. Adynamic transfer of energy as heat can be measured empirically by changes in the surroundings of the system of interest, by calorimetry. This again requires the existence of adiabatic enclosure of the entire process, system and surroundings, though the separating wall between the surroundings and the system is thermally conductive or radiatively permeable, not adiabatic.
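The relations whose symbols are missing in the passage above presumably express the definition of internal energy differences by adiabatic work. Written in LaTeX, with state labels O (reference), A and B chosen editorially rather than taken from the source, and with W counted, as earlier in the article, as work done by the system, they would read:

U(A) - U(O) = -W^{\mathrm{ad}}_{O \to A}, \qquad U(B) - U(O) = -W^{\mathrm{ad}}_{O \to B},

\text{so that}\quad U(B) - U(A) = -\left( W^{\mathrm{ad}}_{O \to B} - W^{\mathrm{ad}}_{O \to A} \right) = -W^{\mathrm{ad}}_{A \to B}.

The last step is exactly the path independence of quasi-static adiabatic work invoked in the text: a path from A to B may be taken through the reference state O.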
A calorimeter can rely on measurement of sensible heat, which requires the existence of thermometers and measurement of temperature change in bodies of known sensible heat capacity under specified conditions; or it can rely on the measurement of latent heat, through measurement of masses of material that change phase, at temperatures fixed by the occurrence of phase changes under specified conditions in bodies of known latent heat of phase change. The calorimeter can be calibrated by transferring an externally determined amount of heat into it, for instance from a resistive electrical heater inside the calorimeter through which a precisely known electric current is passed at a precisely known voltage for a precisely measured period of time. The calibration allows comparison of the calorimetric measurement of quantity of heat transferred with the quantity of energy transferred as (surroundings-based) work. According to one textbook, "The most common device for measuring ΔU is an adiabatic bomb calorimeter." According to another textbook, "Calorimetry is widely used in present day laboratories." According to one opinion, "Most thermodynamic data come from calorimetry...". When the system evolves with transfer of energy as heat, without energy being transferred as work, in an adynamic process, the heat transferred to the system is equal to the increase in its internal energy: ΔU = Q.

General case for reversible processes
Heat transfer is practically reversible when it is driven by practically negligibly small temperature gradients. Work transfer is practically reversible when it occurs so slowly that there are no frictional effects within the system; frictional effects outside the system should also be zero if the process is to be reversible in the strict thermodynamic sense. For a particular reversible process in general, the work done reversibly on the system, W, and the heat transferred reversibly to the system, Q, are not required to occur respectively adiabatically or adynamically, but they must belong to the same particular process defined by its particular reversible path through the space of thermodynamic states. Then the work and heat transfers can occur and be calculated simultaneously. Putting the two complementary aspects together, the first law for a particular reversible process can be written ΔU = Q + W, with W here counted, as just stated, as work done on the system. This combined statement is the expression of the first law of thermodynamics for reversible processes for closed systems. In particular, if no work is done on a thermally isolated closed system we have ΔU = 0. This is one aspect of the law of conservation of energy and can be stated: the internal energy of an isolated system remains constant.

General case for irreversible processes
If, in a process of change of state of a closed system, the energy transfer is not under a practically zero temperature gradient, practically frictionless, and with nearly balanced forces, then the process is irreversible. Then the heat and work transfers may be difficult to calculate with high accuracy, although the simple equations for reversible processes still hold to a good approximation in the absence of composition changes. Importantly, the first law still holds and provides a check on the measurements and calculations of the work done irreversibly on the system, W, and the heat transferred irreversibly to the system, Q, which belong to the same particular process defined by its particular irreversible path through the space of thermodynamic states.
This means that the internal energy is a function of state and that the internal energy change between two states is a function only of the two states.

Overview of the weight of evidence for the law
The first law of thermodynamics is so general that its predictions cannot all be directly tested. In many properly conducted experiments it has been precisely supported, and never violated. Indeed, within its scope of applicability, the law is so reliably established that, nowadays, rather than experiment being considered as testing the accuracy of the law, it is more practical and realistic to think of the law as testing the accuracy of experiment. An experimental result that seems to violate the law may be assumed to be inaccurate or wrongly conceived, for example due to failure to account for an important physical factor. Thus, some may regard it as a principle more abstract than a law.

State functional formulation for infinitesimal processes
When the heat and work transfers in the equations above are infinitesimal in magnitude, they are often denoted by δ, as in δQ and δW, rather than by the symbol d used for exact differentials, as a reminder that heat and work do not describe the state of any system. The integral of an inexact differential depends upon the particular path taken through the space of thermodynamic parameters, while the integral of an exact differential depends only upon the initial and final states. If the initial and final states are the same, then the integral of an inexact differential may or may not be zero, but the integral of an exact differential is always zero. The path taken by a thermodynamic system through a chemical or physical change is known as a thermodynamic process. The first law for a closed homogeneous system may be stated in terms that include concepts that are established in the second law. The internal energy may then be expressed as a function of the system's defining state variables S, entropy, and V, volume: U = U(S, V). In these terms, T, the system's temperature, and P, its pressure, are partial derivatives of U with respect to S and V. These variables are important throughout thermodynamics, though not necessary for the statement of the first law. Rigorously, they are defined only when the system is in its own state of internal thermodynamic equilibrium. For some purposes, the concepts provide good approximations for scenarios sufficiently near to the system's internal thermodynamic equilibrium. The first law requires that dU = δQ − δW, with δW here counted, in the Clausius convention, as work done by the system. Then, for the fictive case of a reversible process, dU can be written in terms of exact differentials. One may imagine reversible changes, such that there is at each instant negligible departure from thermodynamic equilibrium within the system and between system and surroundings. Then, mechanical work is given by δW = P dV and the quantity of heat added can be expressed as δQ = T dS. For these conditions, dU = T dS − P dV. While this has been shown here for reversible changes, it is valid more generally in the absence of chemical reactions or phase transitions, as U can be considered as a thermodynamic state function of the defining state variables S and V. The equation dU = T dS − P dV is known as the fundamental thermodynamic relation for a closed system in the energy representation, for which the defining state variables are S and V, with respect to which T and P are partial derivatives of U. It is only in the reversible case, or for a quasistatic process without composition change, that the work done and heat transferred are given by P dV and T dS.
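The path dependence of heat and work, in contrast with the path independence of the internal energy change, can be checked numerically. The following Python sketch is an editorial illustration assuming an ideal monatomic gas and two arbitrarily chosen paths between the same pair of states; none of the numbers come from the article.

from math import log

R, n = 8.314, 1.0                 # gas constant J/(mol*K); amount of gas, mol (assumed)
Cv = 1.5 * R                      # molar heat capacity at constant volume, monatomic ideal gas
V1, T1 = 0.010, 300.0             # initial state: volume m^3, temperature K (assumed)
V2, T2 = 0.020, 400.0             # final state (assumed)

dU = n * Cv * (T2 - T1)           # internal energy change, the same for every path

# Path A: heat at constant volume V1 up to T2, then expand isothermally at T2.
W_A = n * R * T2 * log(V2 / V1)   # work done by the gas (the isochoric step does none)
Q_A = dU + W_A                    # first law in the Clausius convention: Q = dU + W

# Path B: expand isothermally at T1, then heat at constant volume V2 up to T2.
W_B = n * R * T1 * log(V2 / V1)
Q_B = dU + W_B

print(round(dU), round(W_A), round(Q_A))   # about 1247, 2305, 3552 (joules)
print(round(dU), round(W_B), round(Q_B))   # about 1247, 1729, 2976 (same dU, different Q and W)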
In the case of a closed system in which the particles of the system are of different types and, because chemical reactions may occur, their respective numbers are not necessarily constant, the fundamental thermodynamic relation for dU becomes dU = T dS − P dV + Σi μi dNi, where dNi is the (small) increase in the number of type-i particles in the reaction, and μi is known as the chemical potential of the type-i particles in the system. If dNi is expressed in mol then μi is expressed in J/mol. If the system has more external mechanical variables than just the volume that can change, the fundamental thermodynamic relation generalizes further, with one additional term for each such variable, in which a generalized force multiplies the differential of the corresponding external variable. Here the Xi are the generalized forces corresponding to the external variables xi. The parameters Xi are independent of the size of the system and are called intensive parameters, and the xi are proportional to the size and are called extensive parameters. For an open system, there can be transfers of particles as well as energy into or out of the system during a process. For this case, the first law of thermodynamics still holds, in the form that the internal energy is a function of state and the change of internal energy in a process is a function only of its initial and final states, as noted in the section below headed First law of thermodynamics for open systems. A useful idea from mechanics is that the energy gained by a particle is equal to the force applied to the particle multiplied by the displacement of the particle while that force is applied. Now consider the first law without the heating term: dU = −P dV. The pressure P can be viewed as a force (and in fact has units of force per unit area) while dV is the displacement (with units of distance times area). We may say, with respect to this work term, that a pressure difference forces a transfer of volume, and that the product of the two (work) is the amount of energy transferred out of the system as a result of the process. If one were to make this term negative then this would be the work done on the system. It is useful to view the T dS term in the same light: here the temperature is known as a "generalized" force (rather than an actual mechanical force) and the entropy is a generalized displacement. Similarly, a difference in chemical potential between groups of particles in the system drives a chemical reaction that changes the numbers of particles, and the corresponding product is the amount of chemical potential energy transformed in the process. For example, consider a system consisting of two phases: liquid water and water vapor. There is a generalized "force" of evaporation that drives water molecules out of the liquid. There is a generalized "force" of condensation that drives vapor molecules out of the vapor. Only when these two "forces" (or chemical potentials) are equal is there equilibrium, and the net rate of transfer zero. The two thermodynamic parameters that form a generalized force–displacement pair are called "conjugate variables". The two most familiar pairs are, of course, pressure–volume and temperature–entropy.

Fluid dynamics
In fluid dynamics, the first law of thermodynamics is written in a corresponding local form for a moving fluid element.

Spatially inhomogeneous systems
Classical thermodynamics is initially focused on closed homogeneous systems (e.g. Planck 1897/1903), which might be regarded as 'zero-dimensional' in the sense that they have no spatial variation. But it is desired to study also systems with distinct internal motion and spatial inhomogeneity.
For such systems, the principle of conservation of energy is expressed in terms not only of internal energy as defined for homogeneous systems, but also in terms of kinetic energy and potential energies of parts of the inhomogeneous system with respect to each other and with respect to long-range external forces. How the total energy of a system is allocated between these three more specific kinds of energy varies according to the purposes of different writers; this is because these components of energy are to some extent mathematical artefacts rather than actually measured physical quantities. For any closed homogeneous component of an inhomogeneous closed system, if E denotes the total energy of that component system, one may write E = Ekin + Epot + U, where Ekin and Epot denote respectively the total kinetic energy and the total potential energy of the component closed homogeneous system, and U denotes its internal energy. Potential energy can be exchanged with the surroundings of the system when the surroundings impose a force field, such as gravitational or electromagnetic, on the system. A compound system consisting of two interacting closed homogeneous component subsystems has a potential energy of interaction Epot,12 between the subsystems. Thus, in an obvious notation, one may write E = Ekin,1 + Epot,1 + U1 + Ekin,2 + Epot,2 + U2 + Epot,12. The quantity Epot,12 in general lacks an assignment to either subsystem in a way that is not arbitrary, and this stands in the way of a general non-arbitrary definition of transfer of energy as work. On occasions, authors make their various respective arbitrary assignments. The distinction between internal and kinetic energy is hard to make in the presence of turbulent motion within the system, as friction gradually dissipates the macroscopic kinetic energy of localised bulk flow into random molecular motion, which is classified as internal energy. The rate of dissipation by friction of kinetic energy of localised bulk flow into internal energy, whether in turbulent or in streamlined flow, is an important quantity in non-equilibrium thermodynamics. This is a serious difficulty for attempts to define entropy for time-varying spatially inhomogeneous systems.

First law of thermodynamics for open systems
For the first law of thermodynamics, there is no trivial passage of physical conception from the closed system view to an open system view. For closed systems, the concepts of an adiabatic enclosure and of an adiabatic wall are fundamental. Matter and internal energy cannot permeate or penetrate such a wall. For an open system, there is a wall that allows penetration by matter. In general, matter in diffusive motion carries with it some internal energy, and some microscopic potential energy changes accompany the motion. An open system is not adiabatically enclosed. There are some cases in which a process for an open system can, for particular purposes, be considered as if it were for a closed system. In an open system, by definition hypothetically or potentially, matter can pass between the system and its surroundings. But when, in a particular case, the process of interest involves only hypothetical or potential but no actual passage of matter, the process can be considered as if it were for a closed system.
Internal energy for an open system
Since the revised and more rigorous definition of the internal energy of a closed system rests upon the possibility of processes by which adiabatic work takes the system from one state to another, this leaves a problem for the definition of internal energy for an open system, for which adiabatic work is not in general possible. According to Max Born, the transfer of matter and energy across an open connection "cannot be reduced to mechanics". In contrast to the case of closed systems, for open systems, in the presence of diffusion, there is no unconstrained and unconditional physical distinction between convective transfer of internal energy by bulk flow of matter, the transfer of internal energy without transfer of matter (usually called heat conduction and work transfer), and change of various potential energies. The older traditional way and the conceptually revised (Carathéodory) way agree that there is no physically unique definition of heat and work transfer processes between open systems. In particular, between two otherwise isolated open systems an adiabatic wall is by definition impossible. This problem is solved by recourse to the principle of conservation of energy. This principle allows a composite isolated system to be derived from two other component non-interacting isolated systems, in such a way that the total energy of the composite isolated system is equal to the sum of the total energies of the two component isolated systems. Two previously isolated systems can be subjected to the thermodynamic operation of placement between them of a wall permeable to matter and energy, followed by a time for establishment of a new thermodynamic state of internal equilibrium in the new single unpartitioned system. The internal energies of the initial two systems and of the final new system, considered respectively as closed systems as above, can be measured. Then the law of conservation of energy requires that ΔU + ΔUsurr = 0, where ΔU and ΔUsurr denote the changes in internal energy of the system and of its surroundings respectively. This is a statement of the first law of thermodynamics for a transfer between two otherwise isolated open systems, and it fits well with the conceptually revised and rigorous statement of the law given above. For the thermodynamic operation of adding two systems with internal energies U1 and U2, to produce a new system with internal energy U, one may write U = U1 + U2; the reference states for U, U1 and U2 should be specified accordingly, maintaining also that the internal energy of a system be proportional to its mass, so that the internal energies are extensive variables. There is a sense in which this kind of additivity expresses a fundamental postulate that goes beyond the simplest ideas of classical closed system thermodynamics; the extensivity of some variables is not obvious, and needs explicit expression; indeed one author goes so far as to say that it could be recognized as a fourth law of thermodynamics, though this is not repeated by other authors. Also, of course, ΔN + ΔNsurr = 0, where ΔN and ΔNsurr denote the changes in mole number of a component substance of the system and of its surroundings respectively. This is a statement of the law of conservation of mass.

Process of transfer of matter between an open system and its surroundings
A system connected to its surroundings only through contact by a single permeable wall, but otherwise isolated, is an open system.
If it is initially in a state of contact equilibrium with a surrounding subsystem, a thermodynamic process of transfer of matter can be made to occur between them if the surrounding subsystem is subjected to some thermodynamic operation, for example, removal of a partition between it and some further surrounding subsystem. The removal of the partition in the surroundings initiates a process of exchange between the system and its contiguous surrounding subsystem. An example is evaporation. One may consider an open system consisting of a collection of liquid, enclosed except where it is allowed to evaporate into, or to receive condensate from, its vapor above it, which may be considered as its contiguous surrounding subsystem, and subject to control of its volume and temperature. A thermodynamic process might be initiated by a thermodynamic operation in the surroundings that mechanically increases the controlled volume of the vapor. Some mechanical work will be done within the surroundings by the vapor, but also some of the parent liquid will evaporate and enter the vapor collection, which is the contiguous surrounding subsystem. Some internal energy will accompany the vapor that leaves the system, but it will not make sense to try to uniquely identify part of that internal energy as heat and part of it as work. Consequently, the energy transfer that accompanies the transfer of matter between the system and its surrounding subsystem cannot be uniquely split into heat and work transfers to or from the open system. The component of total energy transfer that accompanies the transfer of vapor into the surrounding subsystem is customarily called the 'latent heat of evaporation', but this use of the word heat is a quirk of customary historical language, not in strict compliance with the thermodynamic definition of transfer of energy as heat. In this example, kinetic energy of bulk flow and potential energy with respect to long-range external forces such as gravity are both considered to be zero. The first law of thermodynamics refers to the change of internal energy of the open system, between its initial and final states of internal equilibrium.

Open system with multiple contacts
An open system can be in contact equilibrium with several other systems at once. This includes cases in which there is contact equilibrium between the system and several subsystems in its surroundings, including separate connections with subsystems through walls that are permeable to the transfer of matter and of internal energy as heat, allowing friction during the passage of the transferred matter, but immovable; separate connections through adiabatic walls with others; and separate connections through diathermic walls impermeable to matter with yet others. Because there are physically separate connections that are permeable to energy but impermeable to matter, between the system and its surroundings, energy transfers between them can occur with definite heat and work characters. Conceptually essential here is that the internal energy transferred with the transfer of matter is measured by a variable that is mathematically independent of the variables that measure heat and work.
With such independence of variables, the total increase of internal energy in the process is then determined as the sum of the internal energy transferred from the surroundings with the transfer of matter through the walls that are permeable to it, the internal energy transferred to the system as heat through the diathermic walls, and the energy transferred to the system as work through the adiabatic walls, including the energy transferred to the system by long-range forces. These simultaneously transferred quantities of energy are defined by events in the surroundings of the system. Because the internal energy transferred with matter is not in general uniquely resolvable into heat and work components, the total energy transfer cannot in general be uniquely resolved into heat and work components. Under these conditions, the following formula can describe the process in terms of externally defined thermodynamic variables, as a statement of the first law of thermodynamics (equation (3)): ΔU0 = −Σi ΔUi + Q − W, where ΔU0 denotes the change of internal energy of the system, ΔUi denotes the change of internal energy of the i-th of the surrounding subsystems that are in open contact with the system, due to transfer between the system and that surrounding subsystem, Q denotes the internal energy transferred as heat from the heat reservoir of the surroundings to the system, and W denotes the energy transferred from the system to the surrounding subsystems that are in adiabatic connection with it. The case of a wall that is permeable to matter and can move so as to allow transfer of energy as work is not considered here.

Combination of first and second laws
If the system is described by the energetic fundamental equation, U0 = U0(S, V, Nj), and if the process can be described in the quasi-static formalism, in terms of the internal state variables of the system, then the process can also be described by a combination of the first and second laws of thermodynamics, by the formula (equation (4)) dU0 = T dS − P dV + Σj μj dNj, where there are n chemical constituents of the system and permeably connected surrounding subsystems, and where T, S, P, V, Nj, and μj are defined as above. For a general natural process, there is no immediate term-wise correspondence between equations (3) and (4), because they describe the process in different conceptual frames. Nevertheless, a conditional correspondence exists. There are three relevant kinds of wall here: purely diathermal, adiabatic, and permeable to matter. If two of those kinds of wall are sealed off, leaving only one that permits transfers of energy, as work, as heat, or with matter, then the remaining permitted terms correspond precisely. If two of the kinds of wall are left unsealed, then energy transfer can be shared between them, so that the two remaining permitted terms do not correspond precisely. For the special fictive case of quasi-static transfers, there is a simple correspondence. For this, it is supposed that the system has multiple areas of contact with its surroundings. There are pistons that allow adiabatic work, purely diathermal walls, and open connections with surrounding subsystems of completely controllable chemical potential (or equivalent controls for charged species). Then, for a suitable fictive quasi-static transfer, one can write δQ = T dS − Σj T sj dNj, where dNj is the added amount of species j and sj is the corresponding molar entropy.
For fictive quasi-static transfers for which the chemical potentials in the connected surrounding subsystems are suitably controlled, these can be put into equation (4) to yield dU0 = δQ − P dV + Σj hj dNj, where hj is the molar enthalpy of species j.

Non-equilibrium transfers
The transfer of energy between an open system and a single contiguous subsystem of its surroundings is considered also in non-equilibrium thermodynamics. The problem of definition arises also in this case. It may be allowed that the wall between the system and the subsystem is not only permeable to matter and to internal energy, but also may be movable so as to allow work to be done when the two systems have different pressures. In this case, the transfer of energy as heat is not defined. The first law of thermodynamics for any process on the specification of equation (3) can be defined as ΔU = Q − W + Σj hj ΔNj (equation (6)), where ΔU denotes the change of internal energy of the system, Q denotes the internal energy transferred as heat from the heat reservoir of the surroundings to the system, W denotes the work done by the system, and hj is the molar enthalpy of species j coming into the system from the surrounding subsystem that is in contact with the system. Formula (6) is valid in the general case, both for quasi-static and for irreversible processes. The situation of the quasi-static process is considered in the previous section, which in our terms defines the heat transferred in that case. To describe deviation of the thermodynamic system from equilibrium, in addition to the fundamental variables that are used to fix the equilibrium state, as described above, a set of variables called internal variables has been introduced, which allows the law to be formulated for the general case. Methods for the study of non-equilibrium processes mostly deal with spatially continuous flow systems. In this case, the open connection between system and surroundings is usually taken to fully surround the system, so that there are no separate connections impermeable to matter but permeable to heat. Except for the special case mentioned above, when there is no actual transfer of matter, which can be treated as if for a closed system, in strictly defined thermodynamic terms it follows that transfer of energy as heat is not defined. In this sense, there is no such thing as 'heat flow' for a continuous-flow open system. Properly, for closed systems, one speaks of transfer of internal energy as heat, but in general, for open systems, one can speak safely only of transfer of internal energy. A factor here is that there are often cross-effects between distinct transfers, for example that transfer of one substance may cause transfer of another even when the latter has zero chemical potential gradient. Usually, transfer between a system and its surroundings applies to transfer of a state variable, and obeys a balance law, that the amount lost by the donor system is equal to the amount gained by the receptor system. Heat is not a state variable. For his 1947 definition of "heat transfer" for discrete open systems, the author Prigogine carefully explains at some length that his definition of it does not obey a balance law. He describes this as paradoxical. The situation is clarified by Gyarmati, who shows that his definition of "heat transfer", for continuous-flow systems, really refers not specifically to heat, but rather to transfer of internal energy, as follows. He considers a conceptual small cell in a situation of continuous flow as a system defined in the so-called Lagrangian way, moving with the local center of mass.
The flow of matter across the boundary is zero when considered as a flow of total mass. Nevertheless, if the material constitution is of several chemically distinct components that can diffuse with respect to one another, the system is considered to be open, the diffusive flows of the components being defined with respect to the center of mass of the system, and balancing one another as to mass transfer. Still there can be a distinction between bulk flow of internal energy and diffusive flow of internal energy in this case, because the internal energy density does not have to be constant per unit mass of material, and allowing for non-conservation of internal energy because of local conversion of kinetic energy of bulk flow to internal energy by viscosity. Gyarmati shows that his definition of "the heat flow vector" is strictly speaking a definition of flow of internal energy, not specifically of heat, and so it turns out that his use here of the word heat is contrary to the strict thermodynamic definition of heat, though it is more or less compatible with historical custom, that often enough did not clearly distinguish between heat and internal energy; he writes "that this relation must be considered to be the exact definition of the concept of heat flow, fairly loosely used in experimental physics and heat technics". Apparently in a different frame of thinking from that of the above-mentioned paradoxical usage in the earlier sections of the historic 1947 work by Prigogine, about discrete systems, this usage of Gyarmati is consistent with the later sections of the same 1947 work by Prigogine, about continuous-flow systems, which use the term "heat flux" in just this way. This usage is also followed by Glansdorff and Prigogine in their 1971 text about continuous-flow systems. They write: "Again the flow of internal energy may be split into a convection flow and a conduction flow. This conduction flow is by definition the heat flow . Therefore: where denotes the [internal] energy per unit mass. [These authors actually use the symbols and to denote internal energy but their notation has been changed here to accord with the notation of the present article. These authors actually use the symbol to refer to total energy, including kinetic energy of bulk flow.]" This usage is followed also by other writers on non-equilibrium thermodynamics such as Lebon, Jou, and Casas-Vásquez, and de Groot and Mazur. This usage is described by Bailyn as stating the non-convective flow of internal energy, and is listed as his definition number 1, according to the first law of thermodynamics. This usage is also followed by workers in the kinetic theory of gases. This is not the ad hoc definition of "reduced heat flux" of Rolf Haase. In the case of a flowing system of only one chemical constituent, in the Lagrangian representation, there is no distinction between bulk flow and diffusion of matter. Moreover, the flow of matter is zero into or out of the cell that moves with the local center of mass. In effect, in this description, one is dealing with a system effectively closed to the transfer of matter. But still one can validly talk of a distinction between bulk flow and diffusive flow of internal energy, the latter driven by a temperature gradient within the flowing material, and being defined with respect to the local center of mass of the bulk flow. 
In this case of a virtually closed system, because of the zero matter transfer, as noted above, one can safely distinguish between transfer of energy as work, and transfer of internal energy as heat. See also Laws of thermodynamics Perpetual motion Microstate (statistical mechanics) – includes microscopic definitions of internal energy, heat and work Entropy production Relativistic heat conduction References Cited sources Adkins, C. J. (1968/1983). Equilibrium Thermodynamics, (first edition 1968), third edition 1983, Cambridge University Press. Aston, J. G., Fritz, J. J. (1959). Thermodynamics and Statistical Thermodynamics, John Wiley & Sons, New York. Balian, R. (1991/2007). From Microphysics to Macrophysics: Methods and Applications of Statistical Physics, volume 1, translated by D. ter Haar, J. F. Gregg, Springer, Berlin. Bailyn, M. (1994). A Survey of Thermodynamics, American Institute of Physics Press, New York. Born, M. (1949). Natural Philosophy of Cause and Chance, Oxford University Press, London. Bryan, G. H. (1907). Thermodynamics. An Introductory Treatise dealing mainly with First Principles and their Direct Applications, B. G. Teubner, Leipzig. Balescu, R. (1997). Statistical Dynamics; Matter out of Equilibrium, Imperial College Press, London. Buchdahl, H. A. (1966). The Concepts of Classical Thermodynamics, Cambridge University Press, London. Callen, H. B. (1960/1985). Thermodynamics and an Introduction to Thermostatistics, (first edition 1960), second edition 1985, John Wiley & Sons, New York. Clausius, R. (1850). Ueber die bewegende Kraft der Wärme und die Gesetze, welche sich daraus für die Wärmelehre selbst ableiten lassen, Annalen der Physik, 79: 368–397, 500–524; a mostly reliable translation is to be found in Kestin, J. (1976). The Second Law of Thermodynamics, Dowden, Hutchinson & Ross, Stroudsburg PA; English translation: On the Moving Force of Heat, and the Laws regarding the Nature of Heat itself which are deducible therefrom, Phil. Mag. (1851), series 4, 2, 1–21, 102–119. Crawford, F. H. (1963). Heat, Thermodynamics, and Statistical Physics, Rupert Hart-Davis, London, Harcourt, Brace & World, Inc. de Groot, S. R., Mazur, P. (1962). Non-equilibrium Thermodynamics, North-Holland, Amsterdam. Reprinted (1984), Dover Publications Inc., New York. Denbigh, K. G. (1951). The Thermodynamics of the Steady State, Methuen, London, Wiley, New York. Denbigh, K. (1954/1981). The Principles of Chemical Equilibrium. With Applications in Chemistry and Chemical Engineering, fourth edition, Cambridge University Press, Cambridge UK. Eckart, C. (1940). The thermodynamics of irreversible processes. The simple fluid, Phys. Rev. 58: 267–269. Fitts, D. D. (1962). Nonequilibrium Thermodynamics. Phenomenological Theory of Irreversible Processes in Fluid Systems, McGraw-Hill, New York. Glansdorff, P., Prigogine, I. (1971). Thermodynamic Theory of Structure, Stability and Fluctuations, Wiley, London. Gyarmati, I. (1967/1970). Non-equilibrium Thermodynamics. Field Theory and Variational Principles, translated from the 1967 Hungarian by E. Gyarmati and W. F. Heinz, Springer-Verlag, New York. Haase, R. (1963/1969). Thermodynamics of Irreversible Processes, English translation, Addison-Wesley Publishing, Reading MA. Haase, R. (1971). Survey of Fundamental Laws, chapter 1 of Thermodynamics, pages 1–97 of volume 1, ed. W. Jost, of Physical Chemistry. An Advanced Treatise, ed. H. Eyring, D. Henderson, W. Jost, Academic Press, New York, lcn 73–117081. Helmholtz, H. (1847). Ueber die Erhaltung der Kraft. Eine physikalische Abhandlung, G.
Reimer (publisher), Berlin, read on 23 July in a session of the Physikalischen Gesellschaft zu Berlin. Reprinted in Helmholtz, H. von (1882), Wissenschaftliche Abhandlungen, Band 1, J. A. Barth, Leipzig. Translated and edited by J. Tyndall, in Scientific Memoirs, Selected from the Transactions of Foreign Academies of Science and from Foreign Journals. Natural Philosophy (1853), volume 7, edited by J. Tyndall, W. Francis, published by Taylor and Francis, London, pp. 114–162, reprinted as volume 7 of Series 7, The Sources of Science, edited by H. Woolf, (1966), Johnson Reprint Corporation, New York, and again in Brush, S. G., The Kinetic Theory of Gases. An Anthology of Classic Papers with Historical Commentary, volume 1 of History of Modern Physical Sciences, edited by N. S. Hall, Imperial College Press, London, pp. 89–110. Kestin, J. (1966). A Course in Thermodynamics, Blaisdell Publishing Company, Waltham MA. Kirkwood, J. G., Oppenheim, I. (1961). Chemical Thermodynamics, McGraw-Hill Book Company, New York. Landsberg, P. T. (1961). Thermodynamics with Quantum Statistical Illustrations, Interscience, New York. Landsberg, P. T. (1978). Thermodynamics and Statistical Mechanics, Oxford University Press, Oxford UK. Lebon, G., Jou, D., Casas-Vázquez, J. (2008). Understanding Non-equilibrium Thermodynamics, Springer, Berlin. Münster, A. (1970). Classical Thermodynamics, translated by E. S. Halberstadt, Wiley–Interscience, London. Partington, J. R. (1949). An Advanced Treatise on Physical Chemistry, volume 1, Fundamental Principles. The Properties of Gases, Longmans, Green and Co., London. Pippard, A. B. (1957/1966). Elements of Classical Thermodynamics for Advanced Students of Physics, original publication 1957, reprint 1966, Cambridge University Press, Cambridge UK. Planck, M. (1897/1903). Treatise on Thermodynamics, translated by A. Ogg, Longmans, Green & Co., London. Prigogine, I. (1947). Étude Thermodynamique des Phénomènes irréversibles, Dunod, Paris, and Desoers, Liège. Prigogine, I. (1955/1967). Introduction to Thermodynamics of Irreversible Processes, third edition, Interscience Publishers, New York. Reif, F. (1965). Fundamentals of Statistical and Thermal Physics, McGraw-Hill Book Company, New York. Tisza, L. (1966). Generalized Thermodynamics, M.I.T. Press, Cambridge MA. Truesdell, C. A. (1980). The Tragicomical History of Thermodynamics, 1822–1854, Springer, New York. Truesdell, C. A., Muncaster, R. G. (1980). Fundamentals of Maxwell's Kinetic Theory of a Simple Monatomic Gas, Treated as a branch of Rational Mechanics, Academic Press, New York. Tschoegl, N. W. (2000). Fundamentals of Equilibrium and Steady-State Thermodynamics, Elsevier, Amsterdam. External links MISN-0-158, The First Law of Thermodynamics (PDF file) by Jerzy Borysowicz for Project PHYSNET. First law of thermodynamics in the MIT Course Unified Thermodynamics and Propulsion from Prof. Z. S. Spakovszky Equations of physics
Frenet–Serret formulas
In differential geometry, the Frenet–Serret formulas describe the kinematic properties of a particle moving along a differentiable curve in three-dimensional Euclidean space , or the geometric properties of the curve itself irrespective of any motion. More specifically, the formulas describe the derivatives of the so-called tangent, normal, and binormal unit vectors in terms of each other. The formulas are named after the two French mathematicians who independently discovered them: Jean Frédéric Frenet, in his thesis of 1847, and Joseph Alfred Serret, in 1851. Vector notation and linear algebra currently used to write these formulas were not yet available at the time of their discovery. The tangent, normal, and binormal unit vectors, often called T, N, and B, or collectively the Frenet–Serret frame (TNB frame or TNB basis), together form an orthonormal basis spanning and are defined as follows: T is the unit vector tangent to the curve, pointing in the direction of motion. N is the normal unit vector, the derivative of T with respect to the arclength parameter of the curve, divided by its length. B is the binormal unit vector, the cross product of T and N. The Frenet–Serret formulas are: where d/ds is the derivative with respect to arclength, κ is the curvature, and τ is the torsion of the space curve. (Intuitively, curvature measures the failure of a curve to be a straight line, while torsion measures the failure of a curve to be planar.) The TNB basis combined with the two scalars, κ and τ, is called collectively the Frenet–Serret apparatus. Definitions Let r(t) be a curve in Euclidean space, representing the position vector of the particle as a function of time. The Frenet–Serret formulas apply to curves which are non-degenerate, which roughly means that they have nonzero curvature. More formally, in this situation the velocity vector r′(t) and the acceleration vector r′′(t) are required not to be proportional. Let s(t) represent the arc length which the particle has moved along the curve in time t. The quantity s is used to give the curve traced out by the trajectory of the particle a natural parametrization by arc length (i.e. arc-length parametrization), since many different particle paths may trace out the same geometrical curve by traversing it at different rates. In detail, s is given by Moreover, since we have assumed that r′ ≠ 0, it follows that s(t) is a strictly monotonically increasing function. Therefore, it is possible to solve for t as a function of s, and thus to write r(s) = r(t(s)). The curve is thus parametrized in a preferred manner by its arc length. With a non-degenerate curve r(s), parameterized by its arc length, it is now possible to define the Frenet–Serret frame (or TNB frame): The tangent unit vector T is defined as The normal unit vector N is defined as from which it follows, since T always has unit magnitude, that N (the change of T) is always perpendicular to T, since there is no change in length of T. Note that by calling curvature we automatically obtain the first relation.The binormal unit vector B is defined as the cross product of T and N: from which it follows that B is always perpendicular to both T and N. Thus, the three unit vectors T, N, and B are all perpendicular to each other. The Frenet–Serret formulas are: where is the curvature and is the torsion. The Frenet–Serret formulas are also known as Frenet–Serret theorem, and can be stated more concisely using matrix notation: This matrix is skew-symmetric. 
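For reference, the definitions and formulas referred to in this section can be written out explicitly. These are the standard statements for a curve parametrized by arclength s, supplied here in LaTeX notation rather than recovered from the original markup:

\mathbf{T} = \frac{d\mathbf{r}}{ds}, \qquad \mathbf{N} = \frac{d\mathbf{T}/ds}{\left\| d\mathbf{T}/ds \right\|}, \qquad \mathbf{B} = \mathbf{T} \times \mathbf{N},

and the Frenet–Serret formulas in the matrix form mentioned just above, with its skew-symmetric coefficient matrix:

\frac{d}{ds} \begin{pmatrix} \mathbf{T} \\ \mathbf{N} \\ \mathbf{B} \end{pmatrix} = \begin{pmatrix} 0 & \kappa & 0 \\ -\kappa & 0 & \tau \\ 0 & -\tau & 0 \end{pmatrix} \begin{pmatrix} \mathbf{T} \\ \mathbf{N} \\ \mathbf{B} \end{pmatrix}.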
Formulas in n dimensions The Frenet–Serret formulas were generalized to higher-dimensional Euclidean spaces by Camille Jordan in 1874. Suppose that r(s) is a smooth curve in , and that the first n derivatives of r are linearly independent. The vectors in the Frenet–Serret frame are an orthonormal basis constructed by applying the Gram-Schmidt process to the vectors (r′(s), r′′(s), ..., r(n)(s)). In detail, the unit tangent vector is the first Frenet vector e1(s) and is defined as where The normal vector, sometimes called the curvature vector, indicates the deviance of the curve from being a straight line. It is defined as Its normalized form, the unit normal vector, is the second Frenet vector e2(s) and defined as The tangent and the normal vector at point s define the osculating plane at point r(s). The remaining vectors in the frame (the binormal, trinormal, etc.) are defined similarly by The last vector in the frame is defined by the cross-product of the first vectors: The real valued functions used below χi(s) are called generalized curvature and are defined as The Frenet–Serret formulas, stated in matrix language, are Notice that as defined here, the generalized curvatures and the frame may differ slightly from the convention found in other sources. The top curvature (also called the torsion, in this context) and the last vector in the frame , differ by a sign (the orientation of the basis) from the usual torsion. The Frenet–Serret formulas are invariant under flipping the sign of both and , and this change of sign makes the frame positively oriented. As defined above, the frame inherits its orientation from the jet of . Proof of the Frenet-Serret formulas The first Frenet-Serret formula holds by the definition of the normal N and the curvature κ, and the third Frenet-Serret formula holds by the definition of the torsion τ. Thus what is needed is to show the second Frenet-Serret formula. Since T, N, and B are orthogonal unit vectors with B = T × N, one also has T = N × B and N = B × T. Differentiating the last equation with respect to s gives ∂N / ∂s = (∂B / ∂s) × T + B × (∂T / ∂s) Using that ∂B / ∂s = -τN and ∂T / ∂s = κN, this becomes ∂N / ∂s = -τ (N × T) + κ (B × N) = τB - κT This is exactly the second Frenet-Serret formula. Applications and interpretation Kinematics of the frame The Frenet–Serret frame consisting of the tangent T, normal N, and binormal B collectively forms an orthonormal basis of 3-space. At each point of the curve, this attaches a frame of reference or rectilinear coordinate system (see image). The Frenet–Serret formulas admit a kinematic interpretation. Imagine that an observer moves along the curve in time, using the attached frame at each point as their coordinate system. The Frenet–Serret formulas mean that this coordinate system is constantly rotating as an observer moves along the curve. Hence, this coordinate system is always non-inertial. The angular momentum of the observer's coordinate system is proportional to the Darboux vector of the frame. Concretely, suppose that the observer carries an (inertial) top (or gyroscope) with them along the curve. If the axis of the top points along the tangent to the curve, then it will be observed to rotate about its axis with angular velocity -τ relative to the observer's non-inertial coordinate system. If, on the other hand, the axis of the top points in the binormal direction, then it is observed to rotate with angular velocity -κ. 
This is easily visualized in the case when the curvature is a positive constant and the torsion vanishes. The observer is then in uniform circular motion. If the top points in the direction of the binormal, then by conservation of angular momentum it must rotate in the opposite direction of the circular motion. In the limiting case when the curvature vanishes, the observer's normal precesses about the tangent vector, and similarly the top will rotate in the opposite direction of this precession. The general case is illustrated below. There are further illustrations on Wikimedia. Applications The kinematics of the frame have many applications in the sciences. In the life sciences, particularly in models of microbial motion, considerations of the Frenet–Serret frame have been used to explain the mechanism by which a moving organism in a viscous medium changes its direction. In physics, the Frenet–Serret frame is useful when it is impossible or inconvenient to assign a natural coordinate system for a trajectory. Such is often the case, for instance, in relativity theory. Within this setting, Frenet–Serret frames have been used to model the precession of a gyroscope in a gravitational well. Graphical Illustrations Example of a moving Frenet basis (T in blue, N in green, B in purple) along Viviani's curve. On the example of a torus knot, the tangent vector T, the normal vector N, and the binormal vector B, along with the curvature κ(s), and the torsion τ(s) are displayed. At the peaks of the torsion function the rotation of the Frenet–Serret frame (T,N,B) around the tangent vector is clearly visible. The kinematic significance of the curvature is best illustrated with plane curves (having constant torsion equal to zero). See the page on curvature of plane curves. Frenet–Serret formulas in calculus The Frenet–Serret formulas are frequently introduced in courses on multivariable calculus as a companion to the study of space curves such as the helix. A helix can be characterized by the height 2πh and radius r of a single turn. The curvature and torsion of a helix (with constant radius) are given by the formulas The sign of the torsion is determined by the right-handed or left-handed sense in which the helix twists around its central axis. Explicitly, the parametrization of a single turn of a right-handed helix with height 2πh and radius r is x = r cos t y = r sin t z = h t (0 ≤ t ≤ 2 π) and, for a left-handed helix, x = r cos t y = −r sin t z = h t (0 ≤ t ≤ 2 π). Note that these are not the arc length parametrizations (in which case, each of x, y, and z would need to be divided by .) In his expository writings on the geometry of curves, Rudy Rucker employs the model of a slinky to explain the meaning of the torsion and curvature. The slinky, he says, is characterized by the property that the quantity remains constant if the slinky is vertically stretched out along its central axis. (Here 2πh is the height of a single twist of the slinky, and r the radius.) In particular, curvature and torsion are complementary in the sense that the torsion can be increased at the expense of curvature by stretching out the slinky. 
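For the parametrization of a single turn given above, x = r cos t, y = r sin t, z = h t, the helix formulas referred to in this section take the standard form

\kappa = \frac{r}{r^{2} + h^{2}}, \qquad \tau = \frac{h}{r^{2} + h^{2}}.

These values can be checked numerically against the general-parameter expressions given later in the article (T from r′, B from r′ × r″, κ = |r′ × r″| / |r′|³, τ = (r′ × r″) · r‴ / |r′ × r″|²). The following short Python sketch does so; the helper name frenet_apparatus and the parameter values are illustrative, not part of the article:

import numpy as np

def frenet_apparatus(rp, rpp, rppp):
    # Frenet frame, curvature and torsion from the first three derivatives of r
    # with respect to an arbitrary parameter t (not necessarily arclength).
    cross = np.cross(rp, rpp)
    T = rp / np.linalg.norm(rp)
    B = cross / np.linalg.norm(cross)
    N = np.cross(B, T)
    kappa = np.linalg.norm(cross) / np.linalg.norm(rp) ** 3
    tau = np.dot(cross, rppp) / np.dot(cross, cross)
    return T, N, B, kappa, tau

# Right-handed helix r(t) = (r cos t, r sin t, h t): analytic derivatives at t
r, h, t = 2.0, 0.5, 1.3
rp   = np.array([-r * np.sin(t),  r * np.cos(t), h])    # r'(t)
rpp  = np.array([-r * np.cos(t), -r * np.sin(t), 0.0])  # r''(t)
rppp = np.array([ r * np.sin(t), -r * np.cos(t), 0.0])  # r'''(t)

T, N, B, kappa, tau = frenet_apparatus(rp, rpp, rppp)
print(kappa, r / (r ** 2 + h ** 2))  # both approximately 0.4706
print(tau,   h / (r ** 2 + h ** 2))  # both approximately 0.1176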
Taylor expansion Repeatedly differentiating the curve and applying the Frenet–Serret formulas gives the following Taylor approximation to the curve near s = 0 if the curve is parameterized by arclength: For a generic curve with nonvanishing torsion, the projection of the curve onto various coordinate planes in the T, N, B coordinate system at have the following interpretations: The osculating plane is the plane containing T and N. The projection of the curve onto this plane has the form:This is a parabola up to terms of order , whose curvature at 0 is equal to κ(0). The osculating plane has the special property that the distance from the curve to the osculating plane is , while the distance from the curve to any other plane is no better than . This can be seen from the above Taylor expansion. Thus in a sense the osculating plane is the closest plane to the curve at a given point. The normal plane is the plane containing N and B. The projection of the curve onto this plane has the form:which is a cuspidal cubic to order o(s3). The rectifying plane is the plane containing T and B. The projection of the curve onto this plane is:which traces out the graph of a cubic polynomial to order o(s3). Ribbons and tubes The Frenet–Serret apparatus allows one to define certain optimal ribbons and tubes centered around a curve. These have diverse applications in materials science and elasticity theory, as well as to computer graphics. The Frenet ribbon along a curve C is the surface traced out by sweeping the line segment [−N,N] generated by the unit normal along the curve. This surface is sometimes confused with the tangent developable, which is the envelope E of the osculating planes of C. This is perhaps because both the Frenet ribbon and E exhibit similar properties along C. Namely, the tangent planes of both sheets of E, near the singular locus C where these sheets intersect, approach the osculating planes of C; the tangent planes of the Frenet ribbon along C are equal to these osculating planes. The Frenet ribbon is in general not developable. Congruence of curves In classical Euclidean geometry, one is interested in studying the properties of figures in the plane which are invariant under congruence, so that if two figures are congruent then they must have the same properties. The Frenet–Serret apparatus presents the curvature and torsion as numerical invariants of a space curve. Roughly speaking, two curves C and C′ in space are congruent if one can be rigidly moved to the other. A rigid motion consists of a combination of a translation and a rotation. A translation moves one point of C to a point of C′. The rotation then adjusts the orientation of the curve C to line up with that of C′. Such a combination of translation and rotation is called a Euclidean motion. In terms of the parametrization r(t) defining the first curve C, a general Euclidean motion of C is a composite of the following operations: (Translation) r(t) → r(t) + v, where v is a constant vector. (Rotation) r(t) + v → M(r(t) + v), where M is the matrix of a rotation. The Frenet–Serret frame is particularly well-behaved with regard to Euclidean motions. First, since T, N, and B can all be given as successive derivatives of the parametrization of the curve, each of them is insensitive to the addition of a constant vector to r(t). Intuitively, the TNB frame attached to r(t) is the same as the TNB frame attached to the new curve . This leaves only the rotations to consider. 
Intuitively, if we apply a rotation M to the curve, then the TNB frame also rotates. More precisely, the matrix Q whose rows are the TNB vectors of the Frenet–Serret frame changes by the matrix of a rotation A fortiori, the matrix QT is unaffected by a rotation: since for the matrix of a rotation. Hence the entries κ and τ of QT are invariants of the curve under Euclidean motions: if a Euclidean motion is applied to a curve, then the resulting curve has the same curvature and torsion. Moreover, using the Frenet–Serret frame, one can also prove the converse: any two curves having the same curvature and torsion functions must be congruent by a Euclidean motion. Roughly speaking, the Frenet–Serret formulas express the Darboux derivative of the TNB frame. If the Darboux derivatives of two frames are equal, then a version of the fundamental theorem of calculus asserts that the curves are congruent. In particular, the curvature and torsion are a complete set of invariants for a curve in three-dimensions. Other expressions of the frame The formulas given above for T, N, and B depend on the curve being given in terms of the arclength parameter. This is a natural assumption in Euclidean geometry, because the arclength is a Euclidean invariant of the curve. In the terminology of physics, the arclength parametrization is a natural choice of gauge. However, it may be awkward to work with in practice. A number of other equivalent expressions are available. Suppose that the curve is given by r(t), where the parameter t need no longer be arclength. Then the unit tangent vector T may be written as The normal vector N takes the form The binormal B is then An alternative way to arrive at the same expressions is to take the first three derivatives of the curve r′(t), r′′(t), r′′′(t), and to apply the Gram-Schmidt process. The resulting ordered orthonormal basis is precisely the TNB frame. This procedure also generalizes to produce Frenet frames in higher dimensions. In terms of the parameter t, the Frenet–Serret formulas pick up an additional factor of ||r′(t)|| because of the chain rule: Explicit expressions for the curvature and torsion may be computed. For example, The torsion may be expressed using a scalar triple product as follows, Special cases If the curvature is always zero then the curve will be a straight line. Here the vectors N, B and the torsion are not well defined. If the torsion is always zero then the curve will lie in a plane. A curve may have nonzero curvature and zero torsion. For example, the circle of radius R given by r(t)=(R cos t, R sin t, 0) in the z=0 plane has zero torsion and curvature equal to 1/R. The converse, however, is false. That is, a regular curve with nonzero torsion must have nonzero curvature. This is just the contrapositive of the fact that zero curvature implies zero torsion. A helix has constant curvature and constant torsion. Plane curves If a curve is contained in the -plane, then its tangent vector and principal unit normal vector will also lie in the -plane. As a result, the unit binormal vector is perpendicular to the plane and thus must be either or . By the right-hand rule will be if, when viewed from above, the curve's trajectory is turning leftward, and will be if it is turning rightward. As a result, the torsion will always be zero and the formula for the curvature becomes See also Affine geometry of curves Differentiable curve Darboux frame Kinematics Moving frame Tangential and normal components Radial, transverse, normal Notes References . 
Abstract in Journal de Mathématiques Pures et Appliquées 17, 1852. External links Create your own animated illustrations of moving Frenet-Serret frames, curvature and torsion functions (Maple Worksheet) Rudy Rucker's KappaTau Paper. Very nice visual representation for the trihedron Differential geometry Multivariable calculus Curves Curvature (mathematics)
Theory U
Theory U is a change management method and the title of a book by Otto Scharmer. Scharmer with colleagues at MIT conducted 150 interviews with entrepreneurs and innovators in science, business, and society and then extended the basic principles into a theory of learning and management, which he calls Theory U. The principles of Theory U are suggested to help political leaders, civil servants, and managers break through past unproductive patterns of behavior that prevent them from empathizing with their clients' perspectives and often lock them into ineffective patterns of decision-making. Some notes about theory U Fields of attention Thinking (individual) Conversing (group) Structuring (institutions) Ecosystem coordination (global systems) Presencing The author of the theory U concept expresses it as a process or journey, which is also described as Presencing, as indicated in the diagram (for which there are numerous variants). At the core of the "U" theory is presencing: sensing + presence. According to The Learning Exchange, Presencing is a journey with five movements: On that journey, at the bottom of the U, lies an inner gate that requires us to drop everything that isn't essential. This process of letting-go (of our old ego and self) and letting-come (our highest future possibility: our Self) establishes a subtle connection to a deeper source of knowing. The essence of presencing is that these two selves – our current self and our best future self – meet at the bottom of the U and begin to listen and resonate with each other. Once a group crosses this threshold, nothing remains the same. Individual members and the group as a whole begin to operate with a heightened level of energy and sense of future possibility. Often they then begin to function as an intentional vehicle for an emerging future. The core elements are shown below. "Moving down the left side of the U is about opening up and dealing with the resistance of thought, emotion, and will; moving up the right side is about intentionally reintegrating the intelligence of the head, the heart, and the hand in the context of practical applications". Leadership capacities According to Scharmer, a value created by journeying through the "U" is to develop seven essential leadership capacities: Holding the space: listen to what life calls you to do (listen to oneself, to others and make sure that there is space where people can talk) Observing: Attend with your mind wide open (observe without your voice of judgment, effectively suspending past cognitive schema) Sensing: Connect with your heart and facilitate the opening process (i.e. see things as interconnected wholes) Presencing: Connect to the deepest source of your self and will and act from the emerging whole Crystallizing: Access the power of intention (ensure a small group of key people commits itself to the purpose and outcomes of the project) Prototyping: Integrating head, heart, and hand (one should act and learn by doing, avoiding the paralysis of inaction, reactive action, over-analysis, etc.) Performing: Playing the "macro violin" (i.e. find the right leaders, find appropriate social technology to get a multi-stakeholder project going). The sources of Theory U include interviews with 150 innovators and thought leaders on management and change. Particularly the work of Brian Arthur, Francisco Varela, Peter Senge, Ed Schein, Joseph Jaworski, Arawana Hayashi, Eleanor Rosch, Friedrich Glasl, Martin Buber, Rudolf Steiner and Johann Wolfgang von Goethe have been critical. 
Artists are represented in the project from 2001 to 2010 by Andrew Campbell, whose work was given a separate index page linked to the original project site. https://web.archive.org/web/20050404033150/http://www.dialogonleadership.org/indexPaintings.html Today, Theory U constitutes a body of leadership and management praxis drawing from a variety of sources and more than 20 years of elaboration by Scharmer and colleagues. Theory U has been translated into 20 languages and is used in change processes worldwide. Meditation teacher Arawana Hayashi has explained how she considers Theory U relevant to "the feminine principle". Earlier work: U-procedure The earlier work by Glasl involved a sociotechnical, Goethean and anthroposophical process involving a few or many co-workers, managers and/or policymakers. It proceeded from phenomenological diagnosis of the present state of the organisation to plans for the future. They described a process in a U formation consisting of three levels (technical and instrumental subsystem, social subsystem and cultural subsystem) and seven stages beginning with the observation of organisational phenomena, workflows, resources etc., and concluding with specific decisions about desired future processes and phenomena. The method draws on the Goethean techniques described by Rudolf Steiner, transforming observations into intuitions and judgements about the present state of the organisation and decisions about the future. The three stages represent explicitly recursive reappraisals at progressively advanced levels of reflective, creative and intuitive insight (epistemologies), thereby enabling more radically systemic intervention and redesign. The stages are: phenomena – picture (a qualitative metaphoric visual representation) – idea (the organising idea or formative principle) – and judgement (does this fit?). The first three then are reflexively replaced by better alternatives (new idea --> new image --> new phenomena) to form the new design. Glasl published the method in Dutch (1975), German (1975, 1994) and English (1997). The seven stages are shown below. In contrast to that earlier work on the U procedure, which assumes a set of three subsystems in the organization that need to be analyzed in a specific sequence, Theory U starts from a different epistemological view that is grounded in Varela's approach to neurophenomenology. It focuses on the process of becoming aware and applies to all levels of systems change. Theory U contributed to advancing organizational learning and systems thinking tools towards an awareness-based view of systems change that blends systems thinking with systems sensing. On the left-hand side of the U the process is going through the three main "gestures" of becoming aware that Francisco Varela spelled out in his work (suspension, redirection, letting-go). On the right-hand side of the U this process extends towards actualizing the future that is wanting to emerge (letting come, enacting, embodying). Criticism Sociologist Stefan Kühl criticizes Theory U as a management fashion on three main points: First of all, while Theory U purports to create change on all levels, including the level of the individual "self" and the institutional level, case studies mainly focus on clarifying the positions of individuals in groups or teams. Apart from the idea of participating in online courses on Theory U, the theory remains silent on how broad organisational or societal changes may take place.
Secondly, Theory U, like many management fashions, neglects structural conflicts of interest, for instance between groups, organisations and class. While it makes sense for top management to emphasize common values, visions and the community of all staff externally, Kühl believes this to be problematic if organisations internally believe too strongly in this community, as this may prevent the articulation of conflicting interests and therefore organisational learning processes. Finally, the 5 phase model of Theory U, like other cyclical (but less esoteric) management models, such as PDCA, are a gross simplification of decision-making processes in organisation that are often wilder, less structured and more complex. Kühl argues that Theory U may be useful as it allows management to make decisions despite unsure knowledge and encourages change, but expects that Theory U will lose its glamour. See also Appreciative inquiry Art of Hosting Decision cycle Learning cycle OODA loop V-Model References External links C. Otto Scharmer Home Page Presencing Home Page The U-Process for Discovery Change management
Elastic pendulum
In physics and mathematics, in the area of dynamical systems, an elastic pendulum (also called spring pendulum or swinging spring) is a physical system where a mass is connected to a spring so that the resulting motion contains elements of both a simple pendulum and a one-dimensional spring-mass system. For specific energy values, the system demonstrates all the hallmarks of chaotic behavior and is sensitive to initial conditions. At very low and very high energy, there also appears to be regular motion. The motion of an elastic pendulum is governed by a set of coupled ordinary differential equations. This behavior suggests a complex interplay between energy states and system dynamics. Analysis and interpretation The system is much more complex than a simple pendulum, as the properties of the spring add an extra dimension of freedom to the system. For example, when the spring compresses, the shorter radius causes the spring to move faster due to the conservation of angular momentum. It is also possible that the spring has a range that is overtaken by the motion of the pendulum, making it practically neutral to the motion of the pendulum. Lagrangian The spring has the rest length and can be stretched by a length . The angle of oscillation of the pendulum is . The Lagrangian is: where is the kinetic energy and is the potential energy. By Hooke's law, the potential energy of the spring itself is: where is the spring constant. The potential energy from gravity, on the other hand, is determined by the height of the mass. For a given angle and displacement, the potential energy is: where is the gravitational acceleration. The kinetic energy is given by: where is the velocity of the mass. To relate to the other variables, the velocity is written as a combination of a movement along and perpendicular to the spring: So the Lagrangian becomes: Equations of motion With two degrees of freedom, for and , the equations of motion can be found using two Euler-Lagrange equations: For : isolated: And for : isolated: The elastic pendulum is now described with two coupled ordinary differential equations. These can be solved numerically. Furthermore, one can use analytical methods to study the intriguing phenomenon of order-chaos-order in this system. See also Double pendulum Duffing oscillator Pendulum (mathematics) Spring-mass system References Further reading External links Holovatsky V., Holovatska Y. (2019) "Oscillations of an elastic pendulum" (interactive animation), Wolfram Demonstrations Project, published February 19, 2019. Holovatsky V., Holovatskyi I., Holovatska Ya., Struk Ya. Oscillations of the resonant elastic pendulum. Physics and Educational Technology, 2023, 1, 10–17, https://doi.org/10.32782/pet-2023-1-2 http://journals.vnu.volyn.ua/index.php/physics/article/view/1093 Chaotic maps Dynamical systems Mathematical physics Pendulums
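As the article above notes, the two coupled equations of motion can be solved numerically. The following minimal Python sketch (assuming NumPy and SciPy are available) integrates the standard elastic-pendulum equations obtained from a Lagrangian of the kind described above; the symbol choices are assumptions made for this example rather than recovered from the article: spring rest length l0, extension x beyond the rest length, angle theta measured from the downward vertical, mass m, spring constant k, gravitational acceleration g. The parameter values and the helper name rhs are likewise illustrative.

import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (not taken from the article)
m, k, l0, g = 1.0, 40.0, 1.0, 9.81

def rhs(t, state):
    # state = [x, x_dot, theta, theta_dot]; x is the spring extension beyond
    # the rest length l0, theta is measured from the downward vertical.
    x, x_dot, theta, theta_dot = state
    r = l0 + x  # current spring length
    x_ddot = r * theta_dot ** 2 - (k / m) * x + g * np.cos(theta)
    theta_ddot = -(g * np.sin(theta) + 2.0 * x_dot * theta_dot) / r
    return [x_dot, x_ddot, theta_dot, theta_ddot]

# Release from a slightly stretched spring at a 30-degree angle
sol = solve_ivp(rhs, (0.0, 20.0), [0.2, 0.0, np.pi / 6, 0.0],
                rtol=1e-9, atol=1e-9)
print(sol.y[:, -1])  # state at t = 20 s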
Introduction to quantum mechanics
Quantum mechanics is the study of matter and its interactions with energy on the scale of atomic and subatomic particles. By contrast, classical physics explains matter and energy only on a scale familiar to human experience, including the behavior of astronomical bodies such as the moon. Classical physics is still used in much of modern science and technology. However, towards the end of the 19th century, scientists discovered phenomena in both the large (macro) and the small (micro) worlds that classical physics could not explain. The desire to resolve inconsistencies between observed phenomena and classical theory led to a revolution in physics, a shift in the original scientific paradigm: the development of quantum mechanics. Many aspects of quantum mechanics are counterintuitive and can seem paradoxical because they describe behavior quite different from that seen at larger scales. In the words of quantum physicist Richard Feynman, quantum mechanics deals with "nature as She is—absurd". Features of quantum mechanics often defy simple explanations in everyday language. One example of this is the uncertainty principle: precise measurements of position cannot be combined with precise measurements of velocity. Another example is entanglement: a measurement made on one particle (such as an electron that is measured to have spin 'up') will correlate with a measurement on a second particle (an electron will be found to have spin 'down') if the two particles have a shared history. This will apply even if it is impossible for the result of the first measurement to have been transmitted to the second particle before the second measurement takes place. Quantum mechanics helps us understand chemistry, because it explains how atoms interact with each other and form molecules. Many remarkable phenomena can be explained using quantum mechanics, like superfluidity. For example, if liquid helium cooled to a temperature near absolute zero is placed in a container, it spontaneously flows up and over the rim of its container; this is an effect which cannot be explained by classical physics. History James C. Maxwell's unification of the equations governing electricity, magnetism, and light in the late 19th century led to experiments on the interaction of light and matter. Some of these experiments had aspects which could not be explained until quantum mechanics emerged in the early part of the 20th century. Evidence of quanta from the photoelectric effect The seeds of the quantum revolution appear in the discovery by J.J. Thomson in 1897 that cathode rays were not continuous but "corpuscles" (electrons). Electrons had been named just six years earlier as part of the emerging theory of atoms. In 1900, Max Planck, unconvinced by the atomic theory, discovered that he needed discrete entities like atoms or electrons to explain black-body radiation. Very hot – red hot or white hot – objects look similar when heated to the same temperature. This look results from a common curve of light intensity at different frequencies (colors), which is called black-body radiation. White hot objects have intensity across many colors in the visible range. The lowest frequencies above visible colors are infrared light, which also give off heat. Continuous wave theories of light and matter cannot explain the black-body radiation curve. Planck spread the heat energy among individual "oscillators" of an undefined character but with discrete energy capacity; this model explained black-body radiation. 
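In standard notation (supplied here for reference, not recovered from the original markup), Planck's hypothesis restricts each oscillator of frequency ν to the discrete energies

E_n = n\,h\nu, \qquad n = 0, 1, 2, \ldots

where h is the Planck constant; it is this discreteness of the oscillator energies that reproduces the observed black-body curve.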
At the time, electrons, atoms, and discrete oscillators were all exotic ideas to explain exotic phenomena. But in 1905 Albert Einstein proposed that light was also corpuscular, consisting of "energy quanta", in contradiction to the established science of light as a continuous wave, stretching back a hundred years to Thomas Young's work on diffraction. Einstein's revolutionary proposal started by reanalyzing Planck's black-body theory, arriving at the same conclusions by using the new "energy quanta". Einstein then showed how energy quanta connected to Thomson's electron. In 1902, Philipp Lenard directed light from an arc lamp onto freshly cleaned metal plates housed in an evacuated glass tube. He measured the electric current coming off the metal plate, at higher and lower intensities of light and for different metals. Lenard showed that the amount of current – the number of electrons – depended on the intensity of the light, but that the velocity of these electrons did not depend on intensity. This is the photoelectric effect. The continuous wave theories of the time predicted that more light intensity would accelerate the same amount of current to higher velocity, contrary to this experiment. Einstein's energy quanta explained the increase in current with intensity: one electron is ejected for each quantum; more quanta mean more electrons. Einstein then predicted that the electron velocity would increase in direct proportion to the light frequency above a fixed value that depended upon the metal. Here the idea is that energy in energy-quanta depends upon the light frequency; the energy transferred to the electron comes in proportion to the light frequency. The type of metal gives a barrier, the fixed value, that the electrons must climb over to exit their atoms, to be emitted from the metal surface and be measured. Ten years elapsed before Millikan's definitive experiment verified Einstein's prediction. During that time many scientists rejected the revolutionary idea of quanta. But Planck's and Einstein's concept was in the air and soon began to affect other physics and quantum theories. Quantization of bound electrons in atoms Experiments with light and matter in the late 1800s uncovered a reproducible but puzzling regularity. When light was shone through purified gases, certain frequencies (colors) did not pass. These dark absorption 'lines' followed a distinctive pattern: the gaps between the lines decreased steadily. By 1889, the Rydberg formula predicted the lines for hydrogen gas using only a constant number and the integers to index the lines. The origin of this regularity was unknown. Solving this mystery would eventually become the first major step toward quantum mechanics. Throughout the 19th century evidence grew for the atomic nature of matter. With Thomson's discovery of the electron in 1897, scientists began the search for a model of the interior of the atom. Thomson proposed negative electrons swimming in a pool of positive charge. Between 1908 and 1911, Rutherford showed that the positive part was only 1/3000th of the diameter of the atom. Models of "planetary" electrons orbiting a nuclear "Sun" were proposed, but could not explain why the electron does not simply fall into the positive charge. In 1913 Niels Bohr and Ernest Rutherford connected the new atom models to the mystery of the Rydberg formula: the orbital radii of the electrons were constrained and the resulting energy differences matched the energy differences in the absorption lines.
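For reference, the two relations described in this passage can be written explicitly; these are the standard forms, not recovered from the original markup. Einstein's photoelectric relation, with φ the metal-dependent barrier (the work function), and the Rydberg formula for the hydrogen lines, with R_H the Rydberg constant:

E_{\text{kin,max}} = h\nu - \varphi

\frac{1}{\lambda} = R_{\mathrm{H}} \left( \frac{1}{n_1^{2}} - \frac{1}{n_2^{2}} \right), \qquad n_2 > n_1.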
This matching meant that absorption and emission of light from atoms were energy quantized: only specific energies that matched the difference in orbital energy would be emitted or absorbed. Trading one mystery – the regular pattern of the Rydberg formula – for another mystery – constraints on electron orbits – might not seem like a big advance, but the new atom model summarized many other experimental findings. The quantization of the photoelectric effect and now the quantization of the electron orbits set the stage for the final revolution. Throughout both the first and the modern era of quantum mechanics, the concept that classical mechanics must be valid macroscopically constrained possible quantum models. This concept was formalized by Bohr in 1923 as the correspondence principle. It requires quantum theory to converge to classical limits. A related concept is Ehrenfest's theorem, which shows that the average values obtained from quantum mechanics (e.g. position and momentum) obey classical laws. Quantization of spin In 1922 Otto Stern and Walther Gerlach demonstrated that the magnetic properties of silver atoms defy classical explanation, the work contributing to Stern's 1943 Nobel Prize in Physics. They fired a beam of silver atoms through a magnetic field. According to classical physics, the atoms should have emerged in a spray, with a continuous range of directions. Instead, the beam separated into two, and only two, diverging streams of atoms. Unlike the other quantum effects known at the time, this striking result involves the state of a single atom. In 1927, T.E. Phipps and J.B. Taylor obtained a similar, but less pronounced, effect using hydrogen atoms in their ground state, thereby eliminating any doubts that may have been caused by the use of silver atoms. In 1924, Wolfgang Pauli called it "two-valuedness not describable classically" and associated it with electrons in the outermost shell. The experiments led in 1925 to the theory that the effect arises from the spin of the electron, formulated by Samuel Goudsmit and George Uhlenbeck under the advice of Paul Ehrenfest. Quantization of matter In 1924 Louis de Broglie proposed that electrons in an atom are constrained not in "orbits" but as standing waves. In detail his solution did not work, but his hypothesis – that the electron "corpuscle" moves in the atom as a wave – spurred Erwin Schrödinger to develop a wave equation for electrons; when applied to hydrogen the Rydberg formula was accurately reproduced. Max Born's 1924 paper "Zur Quantenmechanik" was the first use of the words "quantum mechanics" in print. His later work included developing quantum collision models; in a footnote to a 1926 paper he proposed the Born rule connecting theoretical models to experiment. In 1927 at Bell Labs, Clinton Davisson and Lester Germer fired slow-moving electrons at a crystalline nickel target, which produced a diffraction pattern indicating the wave nature of the electron; the theory of this effect was fully explained by Hans Bethe. A similar experiment by George Paget Thomson and Alexander Reid, firing electrons at thin celluloid foils and later metal films and observing rings, independently demonstrated the matter-wave nature of electrons. Further developments In 1928 Paul Dirac published his relativistic wave equation, simultaneously incorporating relativity, predicting antimatter, and providing a complete theory for the Stern–Gerlach result. These successes launched a new fundamental understanding of our world at small scale: quantum mechanics.
Planck and Einstein started the revolution with quanta that broke down the continuous models of matter and light. Twenty years later "corpuscles" like electrons came to be modeled as continuous waves. This result came to be called wave-particle duality, one iconic idea along with the uncertainty principle that sets quantum mechanics apart from older models of physics. Quantum radiation, quantum fields In 1923 Compton demonstrated that the Planck-Einstein energy quanta from light also had momentum; three years later the "energy quanta" got a new name "photon" Despite its role in almost all stages of the quantum revolution, no explicit model for light quanta existed until 1927 when Paul Dirac began work on a quantum theory of radiation that became quantum electrodynamics. Over the following decades this work evolved into quantum field theory, the basis for modern quantum optics and particle physics. Wave–particle duality The concept of wave–particle duality says that neither the classical concept of "particle" nor of "wave" can fully describe the behavior of quantum-scale objects, either photons or matter. Wave–particle duality is an example of the principle of complementarity in quantum physics. An elegant example of wave-particle duality is the double-slit experiment. In the double-slit experiment, as originally performed by Thomas Young in 1803, and then Augustin Fresnel a decade later, a beam of light is directed through two narrow, closely spaced slits, producing an interference pattern of light and dark bands on a screen. The same behavior can be demonstrated in water waves: the double-slit experiment was seen as a demonstration of the wave nature of light. Variations of the double-slit experiment have been performed using electrons, atoms, and even large molecules, and the same type of interference pattern is seen. Thus it has been demonstrated that all matter possesses wave characteristics. If the source intensity is turned down, the same interference pattern will slowly build up, one "count" or particle (e.g. photon or electron) at a time. The quantum system acts as a wave when passing through the double slits, but as a particle when it is detected. This is a typical feature of quantum complementarity: a quantum system acts as a wave in an experiment to measure its wave-like properties, and like a particle in an experiment to measure its particle-like properties. The point on the detector screen where any individual particle shows up is the result of a random process. However, the distribution pattern of many individual particles mimics the diffraction pattern produced by waves. Uncertainty principle Suppose it is desired to measure the position and speed of an object—for example, a car going through a radar speed trap. It can be assumed that the car has a definite position and speed at a particular moment in time. How accurately these values can be measured depends on the quality of the measuring equipment. If the precision of the measuring equipment is improved, it provides a result closer to the true value. It might be assumed that the speed of the car and its position could be operationally defined and measured simultaneously, as precisely as might be desired. In 1927, Heisenberg proved that this last assumption is not correct. 
Quantum mechanics shows that certain pairs of physical properties, for example, position and speed, cannot be simultaneously measured, nor defined in operational terms, to arbitrary precision: the more precisely one property is measured, or defined in operational terms, the less precisely can the other be thus treated. This statement is known as the uncertainty principle. The uncertainty principle is not only a statement about the accuracy of our measuring equipment but, more deeply, is about the conceptual nature of the measured quantities—the assumption that the car had simultaneously defined position and speed does not work in quantum mechanics. On a scale of cars and people, these uncertainties are negligible, but when dealing with atoms and electrons they become critical. Heisenberg gave, as an illustration, the measurement of the position and momentum of an electron using a photon of light. In measuring the electron's position, the higher the frequency of the photon, the more accurate is the measurement of the position of the impact of the photon with the electron, but the greater is the disturbance of the electron. This is because from the impact with the photon, the electron absorbs a random amount of energy, rendering the measurement obtained of its momentum increasingly uncertain, for one is necessarily measuring its post-impact disturbed momentum from the collision products and not its original momentum (momentum which should be simultaneously measured with position). With a photon of lower frequency, the disturbance (and hence uncertainty) in the momentum is less, but so is the accuracy of the measurement of the position of the impact. At the heart of the uncertainty principle is a fact that for any mathematical analysis in the position and velocity domains, achieving a sharper (more precise) curve in the position domain can only be done at the expense of a more gradual (less precise) curve in the speed domain, and vice versa. More sharpness in the position domain requires contributions from more frequencies in the speed domain to create the narrower curve, and vice versa. It is a fundamental tradeoff inherent in any such related or complementary measurements, but is only really noticeable at the smallest (Planck) scale, near the size of elementary particles. The uncertainty principle shows mathematically that the product of the uncertainty in the position and momentum of a particle (momentum is velocity multiplied by mass) could never be less than a certain value, and that this value is related to the Planck constant. Wave function collapse Wave function collapse means that a measurement has forced or converted a quantum (probabilistic or potential) state into a definite measured value. This phenomenon is only seen in quantum mechanics rather than classical mechanics. For example, before a photon actually "shows up" on a detection screen it can be described only with a set of probabilities for where it might show up. When it does appear, for instance in the CCD of an electronic camera, the time and space where it interacted with the device are known within very tight limits. However, the photon has disappeared in the process of being captured (measured), and its quantum wave function has disappeared with it. In its place, some macroscopic physical change in the detection screen has appeared, e.g., an exposed spot in a sheet of photographic film, or a change in electric potential in some cell of a CCD. 
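The quantitative statement alluded to at the end of this section is usually written, in terms of the standard deviations of position and momentum and the reduced Planck constant ħ = h/2π (standard form, supplied here for reference):

\sigma_x\,\sigma_p \ge \frac{\hbar}{2} = \frac{h}{4\pi}.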
Eigenstates and eigenvalues Because of the uncertainty principle, statements about both the position and momentum of particles can assign only a probability that the position or momentum has some numerical value. Therefore, it is necessary to formulate clearly the difference between the state of something indeterminate, such as an electron in a probability cloud, and the state of something having a definite value. When an object can definitely be "pinned-down" in some respect, it is said to possess an eigenstate. In the Stern–Gerlach experiment discussed above, the quantum model predicts two possible values of spin for the atom compared to the magnetic axis. These two eigenstates are named arbitrarily 'up' and 'down'. The quantum model predicts these states will be measured with equal probability, but no intermediate values will be seen. This is what the Stern–Gerlach experiment shows. The eigenstates of spin about the vertical axis are not simultaneously eigenstates of spin about the horizontal axis, so this atom has an equal probability of being found to have either value of spin about the horizontal axis. As described in the section above, measuring the spin about the horizontal axis can allow an atom that was spun up to spin down: measuring its spin about the horizontal axis collapses its wave function into one of the eigenstates of this measurement, which means it is no longer in an eigenstate of spin about the vertical axis, so can take either value. The Pauli exclusion principle In 1924, Wolfgang Pauli proposed a new quantum degree of freedom (or quantum number), with two possible values, to resolve inconsistencies between observed molecular spectra and the predictions of quantum mechanics. In particular, the spectrum of atomic hydrogen had a doublet, or pair of lines differing by a small amount, where only one line was expected. Pauli formulated his exclusion principle, stating, "There cannot exist an atom in such a quantum state that two electrons within [it] have the same set of quantum numbers." A year later, Uhlenbeck and Goudsmit identified Pauli's new degree of freedom with the property called spin whose effects were observed in the Stern–Gerlach experiment. Dirac wave equation In 1928, Paul Dirac extended the Pauli equation, which described spinning electrons, to account for special relativity. The result was a theory that dealt properly with events, such as the speed at which an electron orbits the nucleus, occurring at a substantial fraction of the speed of light. By using the simplest electromagnetic interaction, Dirac was able to predict the value of the magnetic moment associated with the electron's spin and found the experimentally observed value, which was too large to be that of a spinning charged sphere governed by classical physics. He was able to solve for the spectral lines of the hydrogen atom and to reproduce from physical first principles Sommerfeld's successful formula for the fine structure of the hydrogen spectrum. Dirac's equations sometimes yielded a negative value for energy, for which he proposed a novel solution: he posited the existence of an antielectron and a dynamical vacuum. This led to the many-particle quantum field theory. Quantum entanglement In quantum physics, a group of particles can interact or be created together in such a way that the quantum state of each particle of the group cannot be described independently of the state of the others, including when the particles are separated by a large distance. 
This is known as quantum entanglement. An early landmark in the study of entanglement was the Einstein–Podolsky–Rosen (EPR) paradox, a thought experiment proposed by Albert Einstein, Boris Podolsky and Nathan Rosen which argues that the description of physical reality provided by quantum mechanics is incomplete. In a 1935 paper titled "Can Quantum-Mechanical Description of Physical Reality be Considered Complete?", they argued for the existence of "elements of reality" that were not part of quantum theory, and speculated that it should be possible to construct a theory containing these hidden variables. The thought experiment involves a pair of particles prepared in what would later become known as an entangled state. Einstein, Podolsky, and Rosen pointed out that, in this state, if the position of the first particle were measured, the result of measuring the position of the second particle could be predicted. If instead the momentum of the first particle were measured, then the result of measuring the momentum of the second particle could be predicted. They argued that no action taken on the first particle could instantaneously affect the other, since this would involve information being transmitted faster than light, which is forbidden by the theory of relativity. They invoked a principle, later known as the "EPR criterion of reality", positing that: "If, without in any way disturbing a system, we can predict with certainty (i.e., with probability equal to unity) the value of a physical quantity, then there exists an element of reality corresponding to that quantity." From this, they inferred that the second particle must have a definite value of both position and of momentum prior to either quantity being measured. But quantum mechanics considers these two observables incompatible and thus does not associate simultaneous values for both to any system. Einstein, Podolsky, and Rosen therefore concluded that quantum theory does not provide a complete description of reality. In the same year, Erwin Schrödinger used the word "entanglement" and declared: "I would not call that one but rather the characteristic trait of quantum mechanics." The Irish physicist John Stewart Bell carried the analysis of quantum entanglement much further. He deduced that if measurements are performed independently on the two separated particles of an entangled pair, then the assumption that the outcomes depend upon hidden variables within each half implies a mathematical constraint on how the outcomes on the two measurements are correlated. This constraint would later be named the Bell inequality. Bell then showed that quantum physics predicts correlations that violate this inequality. Consequently, the only way that hidden variables could explain the predictions of quantum physics is if they are "nonlocal", which is to say that somehow the two particles are able to interact instantaneously no matter how widely they ever become separated. Performing experiments like those that Bell suggested, physicists have found that nature obeys quantum mechanics and violates Bell inequalities. In other words, the results of these experiments are incompatible with any local hidden variable theory. Quantum field theory The idea of quantum field theory began in the late 1920s with British physicist Paul Dirac, when he attempted to quantize the energy of the electromagnetic field; just as in quantum mechanics the energy of an electron in the hydrogen atom was quantized. 
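As a numerical illustration of the Bell-inequality argument described in the entanglement discussion above (a sketch added here, not part of the original text; the measurement angles are the standard CHSH choices and are assumed for the example), the following snippet evaluates the CHSH combination of correlations for a spin singlet, for which quantum mechanics predicts E(a, b) = −cos(a − b). Local hidden-variable models require |S| ≤ 2, while the quantum prediction reaches 2√2 ≈ 2.83.

```python
import math

def E(a, b):
    """Quantum-mechanical correlation for spin measurements at angles a and b (singlet state)."""
    return -math.cos(a - b)

# Standard CHSH measurement angles, in radians (assumed for illustration)
a1, a2 = 0.0, math.pi / 2
b1, b2 = math.pi / 4, 3 * math.pi / 4

# CHSH combination; any local hidden-variable model obeys |S| <= 2
S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)

print(f"quantum prediction |S| = {abs(S):.3f}")  # ~2.828, i.e. 2*sqrt(2)
print("local hidden-variable bound = 2")
```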
Quantization is a procedure for constructing a quantum theory starting from a classical theory. Merriam-Webster defines a field in physics as "a region or space in which a given effect (such as magnetism) exists". Other effects that manifest themselves as fields are gravitation and static electricity. In 2008, physicist Richard Hammond wrote: Sometimes we distinguish between quantum mechanics (QM) and quantum field theory (QFT). QM refers to a system in which the number of particles is fixed, and the fields (such as the electromechanical field) are continuous classical entities. QFT ... goes a step further and allows for the creation and annihilation of particles ... He added, however, that quantum mechanics is often used to refer to "the entire notion of quantum view". In 1931, Dirac proposed the existence of particles that later became known as antimatter. Dirac shared the Nobel Prize in Physics for 1933 with Schrödinger "for the discovery of new productive forms of atomic theory". Quantum electrodynamics Quantum electrodynamics (QED) is the name of the quantum theory of the electromagnetic force. Understanding QED begins with understanding electromagnetism. Electromagnetism can be called "electrodynamics" because it is a dynamic interaction between electrical and magnetic forces. Electromagnetism begins with the electric charge. Electric charges are the sources of and create, electric fields. An electric field is a field that exerts a force on any particles that carry electric charges, at any point in space. This includes the electron, proton, and even quarks, among others. As a force is exerted, electric charges move, a current flows, and a magnetic field is produced. The changing magnetic field, in turn, causes electric current (often moving electrons). The physical description of interacting charged particles, electrical currents, electrical fields, and magnetic fields is called electromagnetism. In 1928 Paul Dirac produced a relativistic quantum theory of electromagnetism. This was the progenitor to modern quantum electrodynamics, in that it had essential ingredients of the modern theory. However, the problem of unsolvable infinities developed in this relativistic quantum theory. Years later, renormalization largely solved this problem. Initially viewed as a provisional, suspect procedure by some of its originators, renormalization eventually was embraced as an important and self-consistent tool in QED and other fields of physics. Also, in the late 1940s Feynman diagrams provided a way to make predictions with QED by finding a probability amplitude for each possible way that an interaction could occur. The diagrams showed in particular that the electromagnetic force is the exchange of photons between interacting particles. The Lamb shift is an example of a quantum electrodynamics prediction that has been experimentally verified. It is an effect whereby the quantum nature of the electromagnetic field makes the energy levels in an atom or ion deviate slightly from what they would otherwise be. As a result, spectral lines may shift or split. Similarly, within a freely propagating electromagnetic wave, the current can also be just an abstract displacement current, instead of involving charge carriers. In QED, its full description makes essential use of short-lived virtual particles. There, QED again validates an earlier, rather mysterious concept. 
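The amplitude-summing rule behind Feynman diagrams, mentioned above, can be illustrated with a toy calculation. This is only a schematic sketch with assumed numbers, not an actual QED computation: when a process can happen in two indistinguishable ways, the complex amplitudes are added before squaring, which is what produces interference.

```python
import cmath

# Two assumed complex amplitudes for two indistinguishable ways a process can occur
amp1 = 0.5 * cmath.exp(1j * 0.0)              # path 1
amp2 = 0.5 * cmath.exp(1j * cmath.pi / 3)     # path 2, with a relative phase

p_quantum   = abs(amp1 + amp2) ** 2           # add amplitudes, then square
p_classical = abs(amp1) ** 2 + abs(amp2) ** 2 # adding probabilities instead (no interference)

print(f"amplitudes added first : {p_quantum:.3f}")    # 0.750
print(f"probabilities added    : {p_classical:.3f}")  # 0.500
```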
Standard Model The Standard Model of particle physics is the quantum field theory that describes three of the four known fundamental forces (electromagnetic, weak and strong interactions – excluding gravity) in the universe and classifies all known elementary particles. It was developed in stages throughout the latter half of the 20th century, through the work of many scientists worldwide, with the current formulation being finalized in the mid-1970s upon experimental confirmation of the existence of quarks. Since then, proof of the top quark (1995), the tau neutrino (2000), and the Higgs boson (2012) have added further credence to the Standard Model. In addition, the Standard Model has predicted various properties of weak neutral currents and the W and Z bosons with great accuracy. Although the Standard Model is believed to be theoretically self-consistent and has demonstrated success in providing experimental predictions, it leaves some physical phenomena unexplained and so falls short of being a complete theory of fundamental interactions. For example, it does not fully explain baryon asymmetry, incorporate the full theory of gravitation as described by general relativity, or account for the universe's accelerating expansion as possibly described by dark energy. The model does not contain any viable dark matter particle that possesses all of the required properties deduced from observational cosmology. It also does not incorporate neutrino oscillations and their non-zero masses. Accordingly, it is used as a basis for building more exotic models that incorporate hypothetical particles, extra dimensions, and elaborate symmetries (such as supersymmetry) to explain experimental results at variance with the Standard Model, such as the existence of dark matter and neutrino oscillations. Interpretations The physical measurements, equations, and predictions pertinent to quantum mechanics are all consistent and hold a very high level of confirmation. However, the question of what these abstract models say about the underlying nature of the real world has received competing answers. These interpretations are widely varying and sometimes somewhat abstract. For instance, the Copenhagen interpretation states that before a measurement, statements about a particle's properties are completely meaningless, while the many-worlds interpretation describes the existence of a multiverse made up of every possible universe. Light behaves in some aspects like particles and in other aspects like waves. Matter—the "stuff" of the universe consisting of particles such as electrons and atoms—exhibits wavelike behavior too. Some light sources, such as neon lights, give off only certain specific frequencies of light, a small set of distinct pure colors determined by neon's atomic structure. Quantum mechanics shows that light, along with all other forms of electromagnetic radiation, comes in discrete units, called photons, and predicts its spectral energies (corresponding to pure colors), and the intensities of its light beams. A single photon is a quantum, or smallest observable particle, of the electromagnetic field. A partial photon is never experimentally observed. More broadly, quantum mechanics shows that many properties of objects, such as position, speed, and angular momentum, that appeared continuous in the zoomed-out view of classical mechanics, turn out to be (in the very tiny, zoomed-in scale of quantum mechanics) quantized. 
Such properties of elementary particles are required to take on one of a set of small, discrete allowable values, and since the gap between these values is also small, the discontinuities are only apparent at very tiny (atomic) scales. Applications Everyday applications The relationship between the frequency of electromagnetic radiation and the energy of each photon is why ultraviolet light can cause sunburn, but visible or infrared light cannot. A photon of ultraviolet light delivers a high amount of energy—enough to contribute to cellular damage such as occurs in a sunburn. A photon of infrared light delivers less energy—only enough to warm one's skin. So, an infrared lamp can warm a large surface, perhaps large enough to keep people comfortable in a cold room, but it cannot give anyone a sunburn. Technological applications Applications of quantum mechanics include the laser, the transistor, the electron microscope, and magnetic resonance imaging. A special class of quantum mechanical applications is related to macroscopic quantum phenomena such as superfluid helium and superconductors. The study of semiconductors led to the invention of the diode and the transistor, which are indispensable for modern electronics. In even a simple light switch, quantum tunneling is absolutely vital, as otherwise the electrons in the electric current could not penetrate the potential barrier made up of a layer of oxide. Flash memory chips found in USB drives also use quantum tunneling, to erase their memory cells. See also Einstein's thought experiments Macroscopic quantum phenomena Philosophy of physics Quantum computing Virtual particle Teaching quantum mechanics List of textbooks on classical and quantum mechanics Notes Notes are in the main script References Bibliography Scientific American Reader, 1953. ; cited in: Van Vleck, J. H.,1928, "The Correspondence Principle in the Statistical Interpretation of Quantum Mechanics", Proc. Natl. Acad. Sci. 14: 179. Further reading The following titles, all by working physicists, attempt to communicate quantum theory to laypeople, using a minimum of technical apparatus. Jim Al-Khalili (2003). Quantum: A Guide for the Perplexed. Weidenfeld & Nicolson. . Chester, Marvin (1987). Primer of Quantum Mechanics. John Wiley. . Brian Cox and Jeff Forshaw (2011) The Quantum Universe. Allen Lane. . Richard Feynman (1985). QED: The Strange Theory of Light and Matter. Princeton University Press. . Ford, Kenneth (2005). The Quantum World. Harvard Univ. Press. Includes elementary particle physics. Ghirardi, GianCarlo (2004). Sneaking a Look at God's Cards, Gerald Malsbary, trans. Princeton Univ. Press. The most technical of the works cited here. Passages using algebra, trigonometry, and bra–ket notation can be passed over on a first reading. Tony Hey and Walters, Patrick (2003). The New Quantum Universe. Cambridge Univ. Press. Includes much about the technologies quantum theory has made possible. . Vladimir G. Ivancevic, Tijana T. Ivancevic (2008). Quantum leap: from Dirac and Feynman, Across the universe, to human body and mind. World Scientific Publishing Company. Provides an intuitive introduction in non-mathematical terms and an introduction in comparatively basic mathematical terms. . J. P. McEvoy and Oscar Zarate (2004). Introducing Quantum Theory. Totem Books. ' N. David Mermin (1990). "Spooky actions at a distance: mysteries of the QT" in his Boojums all the way through. Cambridge Univ. Press: 110–76. 
The author is a rare physicist who tries to communicate to philosophers and humanists.
Roland Omnès (1999). Understanding Quantum Mechanics. Princeton Univ. Press.
Victor Stenger (2000). Timeless Reality: Symmetry, Simplicity, and Multiple Universes. Buffalo NY: Prometheus Books. Chpts. 5–8.
Martinus Veltman (2003). Facts and Mysteries in Elementary Particle Physics. World Scientific Publishing Company.
External links
"Microscopic World – Introduction to Quantum Mechanics" by Takada, Kenjiro, emeritus professor at Kyushu University
The Quantum Exchange (tutorials and open-source learning software)
Atoms and the Periodic Table
Single and double slit interference
Time-Evolution of a Wavepacket in a Square Well. An animated demonstration of a wave packet dispersion over time.
Articles containing video clips
Great Acceleration
The Great Acceleration is the dramatic, continuous and roughly simultaneous surge across a large range of measures of human activity, first recorded in the mid-20th century and continuing into the early 21st century. Within the concept of the proposed epoch of the Anthropocene, these measures are specifically those of humanity's impact on Earth's geology and its ecosystems. Within the Anthropocene epoch, the Great Acceleration can be variously classified as its only age to date, one of its many ages (depending on the epoch's proposed start date), or its defining feature that is thus not an age, as well as other classifications. Environmental historian J. R. McNeill has argued that the Great Acceleration is idiosyncratic of the current age and is set to halt in the near future; that it has never happened before and will never happen again. However, climate scientist and chemist Will Steffen and his team have found the evidence inconclusive, neither confirming nor refuting such a claim. Related to the Great Acceleration is the concept of accelerating change. While not explicitly commenting on whether the Great Acceleration as a whole is set to continue into the near future, the common implication is that the particular trend of accelerating progress will not cease until technological singularity is achieved, at which point technological growth becomes uncontrollable and irreversible, resulting in unfathomable changes to the Earth and possibly even the universe itself. Therefore, while adherents of the theory of accelerating change do not comment on the short-term fate of the Great Acceleration, they do hold that its eventual fate is continuation, which also contradicts McNeill's conclusions. In gauging the effects of human activity on Earth's geology, a number of socioeconomic and earth system parameters are utilized, including population, economics, water usage, food production, transportation, technology, greenhouse gases, surface temperature, and natural resource usage. Since 1950, these trends have been increasing significantly, often at an exponential rate.
Data classification categories
The International Geosphere-Biosphere Programme (IGBP) has divided and analyzed data from the years 1750 to 2010 into two broad categories, each with 12 subcategories. The first category of socioeconomic trend data illustrates the impact on the second, the earth system trend data.
Socioeconomic trends
Population
Real GDP
Foreign direct investment
Urban population
Primary energy use
Fertilizer consumption
Large dams
Water use
Paper production
Transportation
Telecommunications
International tourism
Technology
Earth system trends
Carbon dioxide
Nitrous oxide
Methane
Stratospheric ozone
Surface temperature
Ocean acidification
Marine fish capture
Shrimp aquaculture
Nitrogen to coastal zone
Tropical forest loss
Domesticated land
Terrestrial biosphere degradation
See also
References
Holocene
Rapidity
In special relativity, the classical concept of velocity is converted to rapidity to accommodate the limit determined by the speed of light. Velocities must be combined by Einstein's velocity-addition formula. For low speeds, rapidity and velocity are almost exactly proportional but, for higher velocities, rapidity takes a larger value, with the rapidity of light being infinite. Mathematically, rapidity can be defined as the hyperbolic angle that differentiates two frames of reference in relative motion, each frame being associated with distance and time coordinates. Using the inverse hyperbolic function , the rapidity corresponding to velocity is where is the speed of light. For low speeds, is approximately . Since in relativity any velocity is constrained to the interval the ratio satisfies . The inverse hyperbolic tangent has the unit interval for its domain and the whole real line for its image; that is, the interval maps onto . History In 1908 Hermann Minkowski explained how the Lorentz transformation could be seen as simply a hyperbolic rotation of the spacetime coordinates, i.e., a rotation through an imaginary angle. This angle therefore represents (in one spatial dimension) a simple additive measure of the velocity between frames. The rapidity parameter replacing velocity was introduced in 1910 by Vladimir Varićak and by E. T. Whittaker. The parameter was named rapidity by Alfred Robb (1911) and this term was adopted by many subsequent authors, such as Ludwik Silberstein (1914), Frank Morley (1936) and Wolfgang Rindler (2001). Area of a hyperbolic sector The quadrature of the hyperbola by Grégoire de Saint-Vincent established the natural logarithm as the area of a hyperbolic sector or an equivalent area against an asymptote. In spacetime theory, the connection of events by light divides the universe into Past, Future, or Elsewhere based on a Here and Now . On any line in space, a light beam may be directed left or right. Take the as the events passed by the right beam and the as the events of the left beam. Then a resting frame has time along the diagonal . The rectangular hyperbola can be used to gauge velocities (in the first quadrant). Zero velocity corresponds to . Any point on the hyperbola has light-cone coordinates where is the rapidity, and is equal to the area of the hyperbolic sector from to these coordinates. Many authors refer instead to the unit hyperbola , using rapidity for a parameter, as in the standard spacetime diagram. There the axes are measured by clock and meter-stick, more familiar benchmarks, and the basis of spacetime theory. So the delineation of rapidity as a hyperbolic parameter of beam-space is a reference to the seventeenth-century origin of our precious transcendental functions, and a supplement to spacetime diagramming. Lorentz boost The rapidity arises in the linear representation of a Lorentz boost as a vector-matrix product The matrix is of the type with and satisfying , so that lies on the unit hyperbola. Such matrices form the indefinite orthogonal group O(1,1) with one-dimensional Lie algebra spanned by the anti-diagonal unit matrix, showing that the rapidity is the coordinate on this Lie algebra. This action may be depicted in a spacetime diagram. 
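As a concrete check of the relations sketched above (an illustration added here; the chosen speed and event coordinates are arbitrary assumed values), the snippet below computes the rapidity w = artanh(v/c) for a velocity v and verifies that the corresponding boost matrix, built from cosh w and sinh w, leaves the combination (ct)² − x² unchanged.

```python
import math

c = 299_792_458.0  # speed of light, m/s

def rapidity(v):
    """Rapidity w = artanh(v / c) for a velocity v."""
    return math.atanh(v / c)

def boost(w, ct, x):
    """Apply the (1+1)-dimensional Lorentz boost of rapidity w to the event (ct, x)."""
    ch, sh = math.cosh(w), math.sinh(w)
    return ch * ct - sh * x, -sh * ct + ch * x

w = rapidity(0.6 * c)
print(f"v = 0.6c  ->  w = {w:.4f}")   # ~0.6931, slightly larger than v/c = 0.6

ct, x = 5.0, 3.0                       # an event, ct and x in the same length unit (assumed values)
ct2, x2 = boost(w, ct, x)
print(ct**2 - x**2, ct2**2 - x2**2)    # both ~16: the quantity (ct)^2 - x^2 is preserved
```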
In matrix exponential notation, can be expressed as , where is the negative of the anti-diagonal unit matrix A key property of the matrix exponential is from which immediately follows that This establishes the useful additive property of rapidity: if , and are frames of reference, then where denotes the rapidity of a frame of reference relative to a frame of reference . The simplicity of this formula contrasts with the complexity of the corresponding velocity-addition formula. As we can see from the Lorentz transformation above, the Lorentz factor identifies with so the rapidity is implicitly used as a hyperbolic angle in the Lorentz transformation expressions using and . We relate rapidities to the velocity-addition formula by recognizing and so Proper acceleration (the acceleration 'felt' by the object being accelerated) is the rate of change of rapidity with respect to proper time (time as measured by the object undergoing acceleration itself). Therefore, the rapidity of an object in a given frame can be viewed simply as the velocity of that object as would be calculated non-relativistically by an inertial guidance system on board the object itself if it accelerated from rest in that frame to its given speed. The product of and appears frequently, and is from the above arguments Exponential and logarithmic relations From the above expressions we have and thus or explicitly The Doppler-shift factor associated with rapidity is . In experimental particle physics The energy and scalar momentum of a particle of non-zero (rest) mass are given by: With the definition of and thus with the energy and scalar momentum can be written as: So, rapidity can be calculated from measured energy and momentum by However, experimental particle physicists often use a modified definition of rapidity relative to a beam axis where is the component of momentum along the beam axis. This is the rapidity of the boost along the beam axis which takes an observer from the lab frame to a frame in which the particle moves only perpendicular to the beam. Related to this is the concept of pseudorapidity. Rapidity relative to a beam axis can also be expressed as See also Bondi k-calculus Lorentz transformation Pseudorapidity Proper velocity Theory of relativity Notes and references Vladimir Varićak (1910, 1912, 1924), see Vladimir Varićak#Publications Émile Borel (1913), La théorie de la relativité et la cinématique (in French), Comptes rendus de l'Académie des Sciences, Paris: volume 156, pages 215-218; volume 157, pages 703-705 Vladimir Karapetoff (1936), "Restricted relativity in terms of hyperbolic functions of rapidities", American Mathematical Monthly, volume 43, page 70. Frank Morley (1936), "When and Where", The Criterion, edited by Thomas Stearns Eliot, volume 15, pages 200-209. Wolfgang Rindler (2001) Relativity: Special, General, and Cosmological, page 53, Oxford University Press. Shaw, Ronald (1982) Linear Algebra and Group Representations, volume 1, page 229, Academic Press . (see page 17 of e-link) Special relativity Velocity Hyperbolic functions
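The additive property of rapidity and the beam-axis rapidity used by experimentalists, both discussed above, can be checked numerically. The sketch below is illustrative only; the speeds and the particle's energy and momentum are assumed values, in units where c = 1.

```python
import math

def rapidity(v):
    """Rapidity for a speed v given as a fraction of the speed of light."""
    return math.atanh(v)

def add_velocities(v1, v2):
    """Einstein velocity addition (c = 1)."""
    return (v1 + v2) / (1 + v1 * v2)

v1, v2 = 0.6, 0.7                      # assumed example speeds
w_total = rapidity(v1) + rapidity(v2)  # rapidities simply add
v_total = add_velocities(v1, v2)       # velocities combine non-linearly
print(math.isclose(w_total, rapidity(v_total)))   # True

# Beam-axis rapidity from measured energy and longitudinal momentum:
# y = 0.5 * ln((E + p_z) / (E - p_z)), here with assumed values and c = 1
E, p_z = 10.0, 6.0
y = 0.5 * math.log((E + p_z) / (E - p_z))
print(f"beam rapidity y = {y:.4f}")    # ~0.6931
```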
Ponderomotive force
In physics, a ponderomotive force is a nonlinear force that a charged particle experiences in an inhomogeneous oscillating electromagnetic field. It causes the particle to move towards the area of the weaker field strength, rather than oscillating around an initial point as happens in a homogeneous field. This occurs because the particle sees a greater magnitude of force during the half of the oscillation period while it is in the area with the stronger field. The net force during its period in the weaker area in the second half of the oscillation does not offset the net force of the first half, and so over a complete cycle this makes the particle move towards the area of lesser force. The ponderomotive force Fp is expressed by which has units of newtons (in SI units) and where e is the electrical charge of the particle, m is its mass, ω is the angular frequency of oscillation of the field, and E is the amplitude of the electric field. At low enough amplitudes the magnetic field exerts very little force. This equation means that a charged particle in an inhomogeneous oscillating field not only oscillates at the frequency of ω of the field, but is also accelerated by Fp toward the weak field direction. This is a rare case in which the direction of the force does not depend on whether the particle is positively or negatively charged. Etymology The term ponderomotive comes from the Latin ponder- (meaning weight) and the english motive (having to do with motion). Derivation The derivation of the ponderomotive force expression proceeds as follows. Consider a particle under the action of a non-uniform electric field oscillating at frequency in the x-direction. The equation of motion is given by: neglecting the effect of the associated oscillating magnetic field. If the length scale of variation of is large enough, then the particle trajectory can be divided into a slow time (secular) motion and a fast time (micro)motion: where is the slow drift motion and represents fast oscillations. Now, let us also assume that . Under this assumption, we can use Taylor expansion on the force equation about , to get: , and because is small, , so On the time scale on which oscillates, is essentially a constant. Thus, the above can be integrated to get: Substituting this in the force equation and averaging over the timescale, we get, Thus, we have obtained an expression for the drift motion of a charged particle under the effect of a non-uniform oscillating field. Time averaged density Instead of a single charged particle, there could be a gas of charged particles confined by the action of such a force. Such a gas of charged particles is called plasma. The distribution function and density of the plasma will fluctuate at the applied oscillating frequency and to obtain an exact solution, we need to solve the Vlasov Equation. But, it is usually assumed that the time averaged density of the plasma can be directly obtained from the expression for the force expression for the drift motion of individual charged particles: where is the ponderomotive potential and is given by Generalized ponderomotive force Instead of just an oscillating field, a permanent field could also be present. In such a situation, the force equation of a charged particle becomes: To solve the above equation, we can make a similar assumption as we did for the case when . 
This gives a generalized expression for the drift motion of the particle: Applications The idea of a ponderomotive description of particles under the action of a time-varying field has applications in areas like: High harmonic generation Plasma acceleration of particles Plasma propulsion engine especially the Electrodeless plasma thruster Quadrupole ion trap Terahertz time-domain spectroscopy as a source of high energy THz radiation in laser-induced air plasmas The quadrupole ion trap uses a linear function along its principal axes. This gives rise to a harmonic oscillator in the secular motion with the so-called trapping frequency , where are the charge and mass of the ion, the peak amplitude and the frequency of the radiofrequency (rf) trapping field, and the ion-to-electrode distance respectively. Note that a larger rf frequency lowers the trapping frequency. The ponderomotive force also plays an important role in laser induced plasmas as a major density lowering factor. Often, however, the assumed slow-time independency of is too restrictive, an example being the ultra-short, intense laser pulse-plasma(target) interaction. Here a new ponderomotive effect comes into play, the ponderomotive memory effect. The result is a weakening of the ponderomotive force and the generation of wake fields and ponderomotive streamers. In this case the fast-time averaged density becomes for a Maxwellian plasma: , where and . References General Citations Journals Electrodynamics Force
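To put numbers to the force expression described at the start of the article, the sketch below (added as an illustration; the Gaussian field profile and the laser parameters are assumed, not taken from the text) evaluates the standard form F_p = −e²/(4mω²) d(E²)/dx for an electron across a focused spot, and confirms that the force points from the strong-field region toward the weak-field region.

```python
import numpy as np

e = 1.602176634e-19     # elementary charge, C
m = 9.1093837015e-31    # electron mass, kg

# Assumed illustrative parameters: a 1-micron-wavelength laser with a Gaussian focus
omega = 2 * np.pi * 2.998e8 / 1.0e-6    # angular frequency, rad/s
E0 = 1.0e11                             # peak field amplitude, V/m
w0 = 5.0e-6                             # spot size, m

x = np.linspace(-15e-6, 15e-6, 2001)    # positions across the focus, m
E_amp = E0 * np.exp(-(x / w0) ** 2)     # field amplitude profile E(x)

# Ponderomotive force F_p = -(e^2 / (4 m omega^2)) * d(E^2)/dx
F_p = -(e**2 / (4 * m * omega**2)) * np.gradient(E_amp**2, x)

print(f"peak |F_p| ~ {np.max(np.abs(F_p)):.2e} N")

# On the +x side of the focus the amplitude falls off, so the force points toward +x,
# i.e. away from the intense region and toward weaker field.
j = np.argmin(np.abs(x - 3e-6))
print("F_p at x = +3 um points toward weaker field:", F_p[j] > 0)
```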
Spacetime
In physics, spacetime, also called the space-time continuum, is a mathematical model that fuses the three dimensions of space and the one dimension of time into a single four-dimensional continuum. Spacetime diagrams are useful in visualizing and understanding relativistic effects, such as how different observers perceive where and when events occur. Until the turn of the 20th century, the assumption had been that the three-dimensional geometry of the universe (its description in terms of locations, shapes, distances, and directions) was distinct from time (the measurement of when events occur within the universe). However, space and time took on new meanings with the Lorentz transformation and special theory of relativity. In 1908, Hermann Minkowski presented a geometric interpretation of special relativity that fused time and the three spatial dimensions of space into a single four-dimensional continuum now known as Minkowski space. This interpretation proved vital to the general theory of relativity, wherein spacetime is curved by mass and energy. Fundamentals Definitions Non-relativistic classical mechanics treats time as a universal quantity of measurement that is uniform throughout, is separate from space, and is agreed on by all observers. Classical mechanics assumes that time has a constant rate of passage, independent of the observer's state of motion, or anything external. It assumes that space is Euclidean: it assumes that space follows the geometry of common sense. In the context of special relativity, time cannot be separated from the three dimensions of space, because the observed rate at which time passes for an object depends on the object's velocity relative to the observer. General relativity provides an explanation of how gravitational fields can slow the passage of time for an object as seen by an observer outside the field. In ordinary space, a position is specified by three numbers, known as dimensions. In the Cartesian coordinate system, these are often called x, y and z. A point in spacetime is called an event, and requires four numbers to be specified: the three-dimensional location in space, plus the position in time (Fig. 1). An event is represented by a set of coordinates x, y, z and t. Spacetime is thus four-dimensional. Unlike the analogies used in popular writings to explain events, such as firecrackers or sparks, mathematical events have zero duration and represent a single point in spacetime. Although it is possible to be in motion relative to the popping of a firecracker or a spark, it is not possible for an observer to be in motion relative to an event. The path of a particle through spacetime can be considered to be a sequence of events. The series of events can be linked together to form a curve that represents the particle's progress through spacetime. That path is called the particle's world line. Mathematically, spacetime is a manifold, which is to say, it appears locally "flat" near each point in the same way that, at small enough scales, the surface of a globe appears to be flat. A scale factor, (conventionally called the speed-of-light) relates distances measured in space to distances measured in time. 
The magnitude of this scale factor (nearly in space being equivalent to one second in time), along with the fact that spacetime is a manifold, implies that at ordinary, non-relativistic speeds and at ordinary, human-scale distances, there is little that humans might observe that is noticeably different from what they might observe if the world were Euclidean. It was only with the advent of sensitive scientific measurements in the mid-1800s, such as the Fizeau experiment and the Michelson–Morley experiment, that puzzling discrepancies began to be noted between observation versus predictions based on the implicit assumption of Euclidean space. In special relativity, an observer will, in most cases, mean a frame of reference from which a set of objects or events is being measured. This usage differs significantly from the ordinary English meaning of the term. Reference frames are inherently nonlocal constructs, and according to this usage of the term, it does not make sense to speak of an observer as having a location. In Fig. 1-1, imagine that the frame under consideration is equipped with a dense lattice of clocks, synchronized within this reference frame, that extends indefinitely throughout the three dimensions of space. Any specific location within the lattice is not important. The latticework of clocks is used to determine the time and position of events taking place within the whole frame. The term observer refers to the whole ensemble of clocks associated with one inertial frame of reference. In this idealized case, every point in space has a clock associated with it, and thus the clocks register each event instantly, with no time delay between an event and its recording. A real observer, will see a delay between the emission of a signal and its detection due to the speed of light. To synchronize the clocks, in the data reduction following an experiment, the time when a signal is received will be corrected to reflect its actual time were it to have been recorded by an idealized lattice of clocks. In many books on special relativity, especially older ones, the word "observer" is used in the more ordinary sense of the word. It is usually clear from context which meaning has been adopted. Physicists distinguish between what one measures or observes, after one has factored out signal propagation delays, versus what one visually sees without such corrections. Failing to understand the difference between what one measures and what one sees is the source of much confusion among students of relativity. History By the mid-1800s, various experiments such as the observation of the Arago spot and differential measurements of the speed of light in air versus water were considered to have proven the wave nature of light as opposed to a corpuscular theory. Propagation of waves was then assumed to require the existence of a waving medium; in the case of light waves, this was considered to be a hypothetical luminiferous aether. The various attempts to establish the properties of this hypothetical medium yielded contradictory results. For example, the Fizeau experiment of 1851, conducted by French physicist Hippolyte Fizeau, demonstrated that the speed of light in flowing water was less than the sum of the speed of light in air plus the speed of the water by an amount dependent on the water's index of refraction. 
Among other issues, the dependence of the partial aether-dragging implied by this experiment on the index of refraction (which is dependent on wavelength) led to the unpalatable conclusion that aether simultaneously flows at different speeds for different colors of light. The Michelson–Morley experiment of 1887 (Fig. 1-2) showed no differential influence of Earth's motions through the hypothetical aether on the speed of light, and the most likely explanation, complete aether dragging, was in conflict with the observation of stellar aberration. George Francis FitzGerald in 1889, and Hendrik Lorentz in 1892, independently proposed that material bodies traveling through the fixed aether were physically affected by their passage, contracting in the direction of motion by an amount that was exactly what was necessary to explain the negative results of the Michelson–Morley experiment. No length changes occur in directions transverse to the direction of motion. By 1904, Lorentz had expanded his theory such that he had arrived at equations formally identical with those that Einstein was to derive later, i.e. the Lorentz transformation. As a theory of dynamics (the study of forces and torques and their effect on motion), his theory assumed actual physical deformations of the physical constituents of matter. Lorentz's equations predicted a quantity that he called local time, with which he could explain the aberration of light, the Fizeau experiment and other phenomena. Henri Poincaré was the first to combine space and time into spacetime. He argued in 1898 that the simultaneity of two events is a matter of convention. In 1900, he recognized that Lorentz's "local time" is actually what is indicated by moving clocks by applying an explicitly operational definition of clock synchronization assuming constant light speed. In 1900 and 1904, he suggested the inherent undetectability of the aether by emphasizing the validity of what he called the principle of relativity. In 1905/1906 he mathematically perfected Lorentz's theory of electrons in order to bring it into accordance with the postulate of relativity. While discussing various hypotheses on Lorentz invariant gravitation, he introduced the innovative concept of a 4-dimensional spacetime by defining various four vectors, namely four-position, four-velocity, and four-force. He did not pursue the 4-dimensional formalism in subsequent papers, however, stating that this line of research seemed to "entail great pain for limited profit", ultimately concluding "that three-dimensional language seems the best suited to the description of our world". Even as late as 1909, Poincaré continued to describe the dynamical interpretation of the Lorentz transform. In 1905, Albert Einstein analyzed special relativity in terms of kinematics (the study of moving bodies without reference to forces) rather than dynamics. His results were mathematically equivalent to those of Lorentz and Poincaré. He obtained them by recognizing that the entire theory can be built upon two postulates: the principle of relativity and the principle of the constancy of light speed. His work was filled with vivid imagery involving the exchange of light signals between clocks in motion, careful measurements of the lengths of moving rods, and other such examples. 
Einstein in 1905 superseded previous attempts of an electromagnetic mass–energy relation by introducing the general equivalence of mass and energy, which was instrumental for his subsequent formulation of the equivalence principle in 1907, which declares the equivalence of inertial and gravitational mass. By using the mass–energy equivalence, Einstein showed that the gravitational mass of a body is proportional to its energy content, which was one of the early results in developing general relativity. While it would appear that he did not at first think geometrically about spacetime, in the further development of general relativity, Einstein fully incorporated the spacetime formalism. When Einstein published in 1905, another of his competitors, his former mathematics professor Hermann Minkowski, had also arrived at most of the basic elements of special relativity. Max Born recounted a meeting he had made with Minkowski, seeking to be Minkowski's student/collaborator: Minkowski had been concerned with the state of electrodynamics after Michelson's disruptive experiments at least since the summer of 1905, when Minkowski and David Hilbert led an advanced seminar attended by notable physicists of the time to study the papers of Lorentz, Poincaré et al. Minkowski saw Einstein's work as an extension of Lorentz's, and was most directly influenced by Poincaré. On 5 November 1907 (a little more than a year before his death), Minkowski introduced his geometric interpretation of spacetime in a lecture to the Göttingen Mathematical society with the title, The Relativity Principle (Das Relativitätsprinzip). On 21 September 1908, Minkowski presented his talk, Space and Time (Raum und Zeit), to the German Society of Scientists and Physicians. The opening words of Space and Time include Minkowski's statement that "Henceforth, space for itself, and time for itself shall completely reduce to a mere shadow, and only some sort of union of the two shall preserve independence." Space and Time included the first public presentation of spacetime diagrams (Fig. 1-4), and included a remarkable demonstration that the concept of the invariant interval (discussed below), along with the empirical observation that the speed of light is finite, allows derivation of the entirety of special relativity.{{refn|group=note|(In the following, the group G∞ is the Galilean group and the group Gc the Lorentz group.) "With respect to this it is clear that the group Gc in the limit for The spacetime concept and the Lorentz group are closely connected to certain types of sphere, hyperbolic, or conformal geometries and their transformation groups already developed in the 19th century, in which invariant intervals analogous to the spacetime interval are used. Einstein, for his part, was initially dismissive of Minkowski's geometric interpretation of special relativity, regarding it as überflüssige Gelehrsamkeit (superfluous learnedness). However, in order to complete his search for general relativity that started in 1907, the geometric interpretation of relativity proved to be vital. In 1916, Einstein fully acknowledged his indebtedness to Minkowski, whose interpretation greatly facilitated the transition to general relativity. Since there are other types of spacetime, such as the curved spacetime of general relativity, the spacetime of special relativity is today known as Minkowski spacetime. 
Spacetime in special relativity Spacetime interval In three dimensions, the distance between two points can be defined using the Pythagorean theorem: Although two viewers may measure the x, y, and z position of the two points using different coordinate systems, the distance between the points will be the same for both, assuming that they are measuring using the same units. The distance is "invariant". In special relativity, however, the distance between two points is no longer the same if measured by two different observers, when one of the observers is moving, because of Lorentz contraction. The situation is even more complicated if the two points are separated in time as well as in space. For example, if one observer sees two events occur at the same place, but at different times, a person moving with respect to the first observer will see the two events occurring at different places, because the moving point of view sees itself as stationary, and the position of the event as receding or approaching. Thus, a different measure must be used to measure the effective "distance" between two events. In four-dimensional spacetime, the analog to distance is the interval. Although time comes in as a fourth dimension, it is treated differently than the spatial dimensions. Minkowski space hence differs in important respects from four-dimensional Euclidean space. The fundamental reason for merging space and time into spacetime is that space and time are separately not invariant, which is to say that, under the proper conditions, different observers will disagree on the length of time between two events (because of time dilation) or the distance between the two events (because of length contraction). Special relativity provides a new invariant, called the spacetime interval, which combines distances in space and in time. All observers who measure the time and distance between any two events will end up computing the same spacetime interval. Suppose an observer measures two events as being separated in time by and a spatial distance Then the squared spacetime interval between the two events that are separated by a distance in space and by in the -coordinate is: or for three space dimensions, The constant the speed of light, converts time units (like seconds) into space units (like meters). The squared interval is a measure of separation between events A and B that are time separated and in addition space separated either because there are two separate objects undergoing events, or because a single object in space is moving inertially between its events. The separation interval is the difference between the square of the spatial distance separating event B from event A and the square of the spatial distance traveled by a light signal in that same time interval . If the event separation is due to a light signal, then this difference vanishes and . When the event considered is infinitesimally close to each other, then we may write In a different inertial frame, say with coordinates , the spacetime interval can be written in a same form as above. Because of the constancy of speed of light, the light events in all inertial frames belong to zero interval, . For any other infinitesimal event where , one can prove that which in turn upon integration leads to . The invariance of the spacetime interval between the same events for all inertial frames of reference is one of the fundamental results of special theory of relativity. 
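The invariance claim can be checked directly with a Lorentz transformation. The snippet below is a sketch added for illustration (the event coordinates and the relative speed are arbitrary assumed values): it transforms an event's (t, x) coordinates into a frame moving at speed v along x and verifies that (cΔt)² − (Δx)², measured from the origin event, comes out the same in both frames.

```python
import math

c = 299_792_458.0  # speed of light, m/s

def lorentz_boost(t, x, v):
    """Coordinates of the event (t, x) in a frame moving at speed v along +x."""
    gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
    return gamma * (t - v * x / c**2), gamma * (x - v * t)

def interval_squared(t, x):
    """Squared spacetime interval (c t)^2 - x^2 relative to the origin event."""
    return (c * t) ** 2 - x ** 2

t, x = 2.0e-6, 450.0   # assumed event: 2 microseconds, 450 metres from the origin event
v = 0.8 * c            # assumed relative speed of the primed frame

t_p, x_p = lorentz_boost(t, x, v)
s2, s2_p = interval_squared(t, x), interval_squared(t_p, x_p)

print(f"s^2 in S : {s2:.6e} m^2")
print(f"s^2 in S': {s2_p:.6e} m^2")
print("invariant:", math.isclose(s2, s2_p, rel_tol=1e-9))   # True; positive value -> timelike separation
```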
Although for brevity, one frequently sees interval expressions expressed without deltas, including in most of the following discussion, it should be understood that in general, means , etc. We are always concerned with differences of spatial or temporal coordinate values belonging to two events, and since there is no preferred origin, single coordinate values have no essential meaning. The equation above is similar to the Pythagorean theorem, except with a minus sign between the and the terms. The spacetime interval is the quantity not itself. The reason is that unlike distances in Euclidean geometry, intervals in Minkowski spacetime can be negative. Rather than deal with square roots of negative numbers, physicists customarily regard as a distinct symbol in itself, rather than the square of something. Note: There are two sign conventions in use in the relativity literature: and These sign conventions are associated with the metric signatures and A minor variation is to place the time coordinate last rather than first. Both conventions are widely used within the field of study. In the following discussion, we use the first convention. In general can assume any real number value. If is positive, the spacetime interval is referred to as timelike. Since spatial distance traversed by any massive object is always less than distance traveled by the light for the same time interval, positive intervals are always timelike. If is negative, the spacetime interval is said to be spacelike. Spacetime intervals are equal to zero when In other words, the spacetime interval between two events on the world line of something moving at the speed of light is zero. Such an interval is termed lightlike or null. A photon arriving in our eye from a distant star will not have aged, despite having (from our perspective) spent years in its passage. A spacetime diagram is typically drawn with only a single space and a single time coordinate. Fig. 2-1 presents a spacetime diagram illustrating the world lines (i.e. paths in spacetime) of two photons, A and B, originating from the same event and going in opposite directions. In addition, C illustrates the world line of a slower-than-light-speed object. The vertical time coordinate is scaled by so that it has the same units (meters) as the horizontal space coordinate. Since photons travel at the speed of light, their world lines have a slope of ±1. In other words, every meter that a photon travels to the left or right requires approximately 3.3 nanoseconds of time. Reference frames To gain insight in how spacetime coordinates measured by observers in different reference frames compare with each other, it is useful to work with a simplified setup with frames in a standard configuration. With care, this allows simplification of the math with no loss of generality in the conclusions that are reached. In Fig. 2-2, two Galilean reference frames (i.e. conventional 3-space frames) are displayed in relative motion. Frame S belongs to a first observer O, and frame S′ (pronounced "S prime") belongs to a second observer O′. The x, y, z axes of frame S are oriented parallel to the respective primed axes of frame S′. Frame S′ moves in the x-direction of frame S with a constant velocity v as measured in frame S. The origins of frames S and S′ are coincident when time t = 0 for frame S and t′ = 0 for frame S′. Fig. 2-3a redraws Fig. 2-2 in a different orientation. Fig. 2-3b illustrates a relativistic spacetime diagram from the viewpoint of observer O. 
Since S and S′ are in standard configuration, their origins coincide at times t = 0 in frame S and t′ = 0 in frame S′. The ct′ axis passes through the events in frame S′ which have x′ = 0. But the points with x′ = 0 are moving in the x-direction of frame S with velocity v, so that they are not coincident with the ct axis at any time other than zero. Therefore, the ct′ axis is tilted with respect to the ct axis by an angle θ given by The x′ axis is also tilted with respect to the x axis. To determine the angle of this tilt, we recall that the slope of the world line of a light pulse is always ±1. Fig. 2-3c presents a spacetime diagram from the viewpoint of observer O′. Event P represents the emission of a light pulse at x′ = 0, ct′ = −a. The pulse is reflected from a mirror situated a distance a from the light source (event Q), and returns to the light source at x′ = 0, ct′ = a (event R). The same events P, Q, R are plotted in Fig. 2-3b in the frame of observer O. The light paths have slopes = 1 and −1, so that △PQR forms a right triangle with PQ and QR both at 45 degrees to the x and ct axes. Since OP = OQ = OR, the angle between x′ and x must also be θ. While the rest frame has space and time axes that meet at right angles, the moving frame is drawn with axes that meet at an acute angle. The frames are actually equivalent. The asymmetry is due to unavoidable distortions in how spacetime coordinates can map onto a Cartesian plane, and should be considered no stranger than the manner in which, on a Mercator projection of the Earth, the relative sizes of land masses near the poles (Greenland and Antarctica) are highly exaggerated relative to land masses near the Equator. Light cone In Fig. 2–4, event O is at the origin of a spacetime diagram, and the two diagonal lines represent all events that have zero spacetime interval with respect to the origin event. These two lines form what is called the light cone of the event O, since adding a second spatial dimension (Fig. 2-5) makes the appearance that of two right circular cones meeting with their apices at O. One cone extends into the future (t>0), the other into the past (t<0). A light (double) cone divides spacetime into separate regions with respect to its apex. The interior of the future light cone consists of all events that are separated from the apex by more time (temporal distance) than necessary to cross their spatial distance at lightspeed; these events comprise the timelike future of the event O. Likewise, the timelike past comprises the interior events of the past light cone. So in timelike intervals Δct is greater than Δx, making timelike intervals positive. The region exterior to the light cone consists of events that are separated from the event O by more space than can be crossed at lightspeed in the given time. These events comprise the so-called spacelike region of the event O, denoted "Elsewhere" in Fig. 2-4. Events on the light cone itself are said to be lightlike (or null separated) from O. Because of the invariance of the spacetime interval, all observers will assign the same light cone to any given event, and thus will agree on this division of spacetime. The light cone has an essential role within the concept of causality. It is possible for a not-faster-than-light-speed signal to travel from the position and time of O to the position and time of D (Fig. 2-4). It is hence possible for event O to have a causal influence on event D. The future light cone contains all the events that could be causally influenced by O. 
Likewise, it is possible for a not-faster-than-light-speed signal to travel from the position and time of A, to the position and time of O. The past light cone contains all the events that could have a causal influence on O. In contrast, assuming that signals cannot travel faster than the speed of light, any event, like e.g. B or C, in the spacelike region (Elsewhere), cannot either affect event O, nor can they be affected by event O employing such signalling. Under this assumption any causal relationship between event O and any events in the spacelike region of a light cone is excluded. Relativity of simultaneity All observers will agree that for any given event, an event within the given event's future light cone occurs after the given event. Likewise, for any given event, an event within the given event's past light cone occurs before the given event. The before–after relationship observed for timelike-separated events remains unchanged no matter what the reference frame of the observer, i.e. no matter how the observer may be moving. The situation is quite different for spacelike-separated events. Fig. 2-4 was drawn from the reference frame of an observer moving at From this reference frame, event C is observed to occur after event O, and event B is observed to occur before event O. From a different reference frame, the orderings of these non-causally-related events can be reversed. In particular, one notes that if two events are simultaneous in a particular reference frame, they are necessarily separated by a spacelike interval and thus are noncausally related. The observation that simultaneity is not absolute, but depends on the observer's reference frame, is termed the relativity of simultaneity. Fig. 2-6 illustrates the use of spacetime diagrams in the analysis of the relativity of simultaneity. The events in spacetime are invariant, but the coordinate frames transform as discussed above for Fig. 2-3. The three events are simultaneous from the reference frame of an observer moving at From the reference frame of an observer moving at the events appear to occur in the order From the reference frame of an observer moving at , the events appear to occur in the order . The white line represents a plane of simultaneity being moved from the past of the observer to the future of the observer, highlighting events residing on it. The gray area is the light cone of the observer, which remains invariant. A spacelike spacetime interval gives the same distance that an observer would measure if the events being measured were simultaneous to the observer. A spacelike spacetime interval hence provides a measure of proper distance, i.e. the true distance = Likewise, a timelike spacetime interval gives the same measure of time as would be presented by the cumulative ticking of a clock that moves along a given world line. A timelike spacetime interval hence provides a measure of the proper time = Invariant hyperbola In Euclidean space (having spatial dimensions only), the set of points equidistant (using the Euclidean metric) from some point form a circle (in two dimensions) or a sphere (in three dimensions). In Minkowski spacetime (having one temporal and one spatial dimension), the points at some constant spacetime interval away from the origin (using the Minkowski metric) form curves given by the two equations with some positive real constant. These equations describe two families of hyperbolae in an x–ct spacetime diagram, which are termed invariant hyperbolae. In Fig. 
2-7a, each magenta hyperbola connects all events having some fixed spacelike separation from the origin, while the green hyperbolae connect events of equal timelike separation. The magenta hyperbolae, which cross the x axis, are timelike curves, which is to say that these hyperbolae represent actual paths that can be traversed by (constantly accelerating) particles in spacetime: Between any two events on one hyperbola a causality relation is possible, because the inverse of the slope—representing the necessary speed—for all secants is less than . On the other hand, the green hyperbolae, which cross the ct axis, are spacelike curves because all intervals along these hyperbolae are spacelike intervals: No causality is possible between any two points on one of these hyperbolae, because all secants represent speeds larger than . Fig. 2-7b reflects the situation in Minkowski spacetime (one temporal and two spatial dimensions) with the corresponding hyperboloids. The invariant hyperbolae displaced by spacelike intervals from the origin generate hyperboloids of one sheet, while the invariant hyperbolae displaced by timelike intervals from the origin generate hyperboloids of two sheets. The (1+2)-dimensional boundary between space- and time-like hyperboloids, established by the events forming a zero spacetime interval to the origin, is made up by degenerating the hyperboloids to the light cone. In (1+1)-dimensions the hyperbolae degenerate to the two grey 45°-lines depicted in Fig. 2-7a. Time dilation and length contraction Fig. 2-8 illustrates the invariant hyperbola for all events that can be reached from the origin in a proper time of 5 meters (approximately ). Different world lines represent clocks moving at different speeds. A clock that is stationary with respect to the observer has a world line that is vertical, and the elapsed time measured by the observer is the same as the proper time. For a clock traveling at 0.3 c, the elapsed time measured by the observer is 5.24 meters, while for a clock traveling at 0.7 c, the elapsed time measured by the observer is 7.00 meters. This illustrates the phenomenon known as time dilation. Clocks that travel faster take longer (in the observer frame) to tick out the same amount of proper time, and they travel further along the x–axis within that proper time than they would have without time dilation. The measurement of time dilation by two observers in different inertial reference frames is mutual. If observer O measures the clocks of observer O′ as running slower in his frame, observer O′ in turn will measure the clocks of observer O as running slower. Length contraction, like time dilation, is a manifestation of the relativity of simultaneity. Measurement of length requires measurement of the spacetime interval between two events that are simultaneous in one's frame of reference. But events that are simultaneous in one frame of reference are, in general, not simultaneous in other frames of reference. Fig. 2-9 illustrates the motions of a 1 m rod that is traveling at 0.5 c along the x axis. The edges of the blue band represent the world lines of the rod's two endpoints. The invariant hyperbola illustrates events separated from the origin by a spacelike interval of 1 m. The endpoints O and B measured when  = 0 are simultaneous events in the S′ frame. But to an observer in frame S, events O and B are not simultaneous. To measure length, the observer in frame S measures the endpoints of the rod as projected onto the x-axis along their world lines. 
Returning to Fig. 2-9: the projection of the rod's world sheet onto the x axis yields the foreshortened length OC. Drawing a vertical line through A so that it intersects the x′ axis (not illustrated) demonstrates that, even as OB is foreshortened from the point of view of observer O, OA is likewise foreshortened from the point of view of observer O′. In the same way that each observer measures the other's clocks as running slow, each observer measures the other's rulers as being contracted. With regard to mutual length contraction, Fig. 2-9 illustrates that the primed and unprimed frames are mutually rotated by a hyperbolic angle (analogous to ordinary angles in Euclidean geometry). Because of this rotation, the projection of a primed meter-stick onto the unprimed x-axis is foreshortened, while the projection of an unprimed meter-stick onto the primed x′-axis is likewise foreshortened.

Mutual time dilation and the twin paradox

Mutual time dilation

Mutual time dilation and length contraction tend to strike beginners as inherently self-contradictory concepts. If an observer in frame S measures a clock, at rest in frame S′, as running slower than his, while S′ is moving at speed v in S, then the principle of relativity requires that an observer in frame S′ likewise measures a clock in frame S, moving at speed −v in S′, as running slower than hers. How two clocks can each run slower than the other is an important question that "goes to the heart of understanding special relativity." This apparent contradiction stems from not correctly taking into account the different settings of the necessary, related measurements. These settings allow for a consistent explanation of the merely apparent contradiction. It is not about the abstract ticking of two identical clocks, but about how to measure in one frame the temporal distance between two ticks of a moving clock. It turns out that in mutually observing the duration between ticks of clocks, each moving in the respective frame, different sets of clocks must be involved. In order to measure in frame S the tick duration of a moving clock W′ (at rest in S′), one uses two additional, synchronized clocks W1 and W2 at rest at two arbitrarily fixed points in S with the spatial distance d. Two events can be defined by the condition "two clocks are simultaneously at one place", i.e., when W′ passes each of W1 and W2. For both events the two readings of the collocated clocks are recorded. The difference of the two readings of W1 and W2 is the temporal distance of the two events in S, and their spatial distance is d. The difference of the two readings of W′ is the temporal distance of the two events in S′. In S′ these events are separated only in time; they happen at the same place in S′. Because of the invariance of the spacetime interval spanned by these two events, and the nonzero spatial separation d in S, the temporal distance in S′ must be smaller than the one in S: the smaller temporal distance between the two events, resulting from the readings of the moving clock W′, belongs to the slower-running clock W′. Conversely, for judging in frame S′ the temporal distance of two events on a moving clock W (at rest in S), one needs two clocks at rest in S′. In this comparison the clock W is moving with velocity −v. Recording again the four readings for the events, defined by "two clocks simultaneously at one place", results in the analogous temporal distances of the two events, now temporally and spatially separated in S′, and only temporally separated but collocated in S.
To keep the spacetime interval invariant, the temporal distance in S must be smaller than in S′, because of the spatial separation of the events in S′: now clock W is observed to run slower. The necessary recordings for the two judgements, with "one moving clock" and "two clocks at rest" in S and S′ respectively, involve two different sets, each with three clocks. Since different sets of clocks are involved in the two measurements, there is no inherent necessity that the measurements be reciprocally "consistent" in the sense that, if one observer measures the moving clock to be slow, the other observer must measure the first observer's clock to be fast. Fig. 2-10 illustrates the previous discussion of mutual time dilation with Minkowski diagrams. The upper picture reflects the measurements as seen from frame S "at rest" with unprimed, rectangular axes, and frame S′ "moving with v > 0", coordinatized by primed, oblique axes, slanted to the right; the lower picture shows frame S′ "at rest" with primed, rectangular coordinates, and frame S "moving with −v < 0", with unprimed, oblique axes, slanted to the left. Each line drawn parallel to a spatial axis (x, x′) represents a line of simultaneity. All events on such a line have the same time value (ct, ct′). Likewise, each line drawn parallel to a temporal axis (ct, ct′) represents a line of equal spatial coordinate values (x, x′). One may designate in both pictures the origin O as the event where the respective "moving clock" is collocated with the "first clock at rest" in both comparisons. Obviously, for this event the readings on both clocks in both comparisons are zero. As a consequence, the worldlines of the moving clocks are the ct′-axis slanted to the right (upper picture, clock W′) and the ct-axis slanted to the left (lower picture, clock W). The worldlines of W1 and W′1 are the corresponding vertical time axes (ct in the upper picture, and ct′ in the lower picture). In the upper picture the place for W2 is taken to be Ax > 0, and thus the worldline (not shown in the pictures) of this clock intersects the worldline of the moving clock (the ct′-axis) in the event labelled A, where "two clocks are simultaneously at one place". In the lower picture the place for W′2 is taken to be Cx′ < 0, and so in this measurement the moving clock W passes W′2 in the event C. In the upper picture the ct-coordinate At of the event A (the reading of W2) is labeled B, thus giving the elapsed time between the two events, measured with W1 and W2, as OB. For a comparison, the length of the time interval OA, measured with W′, must be transformed to the scale of the ct-axis. This is done by the invariant hyperbola (see also Fig. 2-8) through A, connecting all events with the same spacetime interval from the origin as A. This yields the event C on the ct-axis, and obviously OC < OB: the "moving" clock W′ runs slower. To show the mutual time dilation immediately in the upper picture, the event D may be constructed as the event at x′ = 0 (the location of clock W′ in S′) that is simultaneous to C (OC has the same spacetime interval as OA) in S′. This shows that the time interval OD is longer than OA, showing that the "moving" clock runs slower. In the lower picture the frame S is moving with velocity −v in the frame S′ at rest. The worldline of clock W is the ct-axis (slanted to the left), the worldline of W′1 is the vertical ct′-axis, and the worldline of W′2 is the vertical through event C, with ct′-coordinate D.
The invariant hyperbola through event C scales the time interval OC to OA, which is shorter than OD; also, B is constructed (similarly to D in the upper picture) as simultaneous to A in S, at x = 0. The result OB > OC again corresponds to the case above. The word "measure" is important. In classical physics an observer cannot affect an observed object, but the object's state of motion can affect the observer's observations of the object.

Twin paradox

Many introductions to special relativity illustrate the differences between Galilean relativity and special relativity by posing a series of "paradoxes". These paradoxes are, in fact, ill-posed problems, resulting from our unfamiliarity with velocities comparable to the speed of light. The remedy is to solve many problems in special relativity and to become familiar with its so-called counter-intuitive predictions. The geometrical approach to studying spacetime is considered one of the best methods for developing a modern intuition. The twin paradox is a thought experiment involving identical twins, one of whom makes a journey into space in a high-speed rocket, returning home to find that the twin who remained on Earth has aged more. This result appears puzzling because each twin observes the other twin as moving, and so at first glance, it would appear that each should find the other to have aged less. The twin paradox sidesteps the justification for mutual time dilation presented above by avoiding the requirement for a third clock. Nevertheless, the twin paradox is not a true paradox because it is easily understood within the context of special relativity. The impression that a paradox exists stems from a misunderstanding of what special relativity states. Special relativity does not declare all frames of reference to be equivalent, only inertial frames. The traveling twin's frame is not inertial during periods when she is accelerating. Furthermore, the difference between the twins is observationally detectable: the traveling twin needs to fire her rockets to be able to return home, while the stay-at-home twin does not. Even without acceleration or deceleration (i.e. using one inertial frame O for the constant, high-velocity outward journey and another inertial frame I for the constant, high-velocity inward journey), the sum of the elapsed times in those frames (O and I) is shorter than the elapsed time in the stationary inertial frame S. Acceleration and deceleration are therefore not the cause of the shorter elapsed time during the outward and inward journeys; rather, the use of two different constant, high-velocity inertial frames for the outward and inward legs accounts for the shorter total elapsed time. Granted, if the same twin is to travel both legs of the journey and to switch safely from the outward to the inward leg, acceleration and deceleration are required; but if the traveling twin could ride the high-velocity outward inertial frame and instantaneously switch to the high-velocity inward inertial frame, the example would still work. The essential asymmetry is the comparison of the sum of elapsed times in two different inertial frames (O and I) with the elapsed time in a single inertial frame S. This asymmetry results in a difference in the twins' ages. The spacetime diagram of Fig. 2-11 presents the simple case of a twin going straight out along the x axis and immediately turning back. From the standpoint of the stay-at-home twin, there is nothing puzzling about the twin paradox at all.
The proper time measured along the traveling twin's world line from O to C, plus the proper time measured from C to B, is less than the stay-at-home twin's proper time measured from O to A to B. More complex trajectories require integrating the proper time between the respective events along the curve (i.e. the path integral) to calculate the total amount of proper time experienced by the traveling twin. Complications arise if the twin paradox is analyzed from the traveling twin's point of view. Weiss's nomenclature, designating the stay-at-home twin as Terence and the traveling twin as Stella, is hereafter used. Stella is not in an inertial frame. Given this fact, it is sometimes incorrectly stated that full resolution of the twin paradox requires general relativity: Although general relativity is not required to analyze the twin paradox, application of the Equivalence Principle of general relativity does provide some additional insight into the subject. Stella is not stationary in an inertial frame. Analyzed in Stella's rest frame, she is motionless for the entire trip. When she is coasting, her rest frame is inertial, and Terence's clock will appear to run slow. But when she fires her rockets for the turnaround, her rest frame is an accelerated frame and she experiences a force which pushes her as if she were in a gravitational field. Terence will appear to be high up in that field and, because of gravitational time dilation, his clock will appear to run fast, so much so that the net result will be that Terence has aged more than Stella when they are back together. The theoretical arguments predicting gravitational time dilation are not exclusive to general relativity. Any theory of gravity will predict gravitational time dilation if it respects the principle of equivalence, including Newton's theory.

Gravitation

This introductory section has focused on the spacetime of special relativity, since it is the easiest to describe. Minkowski spacetime is flat, takes no account of gravity, is uniform throughout, and serves as nothing more than a static background for the events that take place in it. The presence of gravity greatly complicates the description of spacetime. In general relativity, spacetime is no longer a static background, but actively interacts with the physical systems that it contains. Spacetime curves in the presence of matter, can propagate waves, bends light, and exhibits a host of other phenomena. A few of these phenomena are described in the later sections of this article.

Basic mathematics of spacetime

Galilean transformations

A basic goal is to be able to compare measurements made by observers in relative motion. Consider an observer O in frame S who has measured the time and space coordinates of an event, assigning this event three Cartesian coordinates and the time as measured on his lattice of synchronized clocks (see Fig. 1-1). A second observer O′ in a different frame S′ measures the same event in her coordinate system and her lattice of synchronized clocks. With inertial frames, neither observer is under acceleration, and a simple set of equations allows us to relate the two sets of coordinates. Given that the two coordinate systems are in standard configuration, meaning that their coordinate axes are parallel and that their origins coincide at time zero, the coordinate transformation is the Galilean transformation x′ = x − vt, y′ = y, z′ = z, t′ = t. Fig. 3-1 illustrates that in Newton's theory, time is universal, not the velocity of light.
Consider the following thought experiment: The red arrow illustrates a train that is moving at 0.4 c with respect to the platform. Within the train, a passenger shoots a bullet with a speed of 0.4 c in the frame of the train. The blue arrow illustrates that a person standing on the train tracks measures the bullet as traveling at 0.8 c. This is in accordance with our naive expectations. More generally, assuming that frame S′ is moving at velocity v with respect to frame S, then within frame S′, observer O′ measures an object moving with velocity u′. Its velocity u with respect to frame S, since x = x′ + vt and t = t′, can be written as u = dx/dt = dx′/dt′ + v. This leads to u = u′ + v, which is the common-sense Galilean law for the addition of velocities.

Relativistic composition of velocities

The composition of velocities is quite different in relativistic spacetime. To reduce the complexity of the equations slightly, we introduce a common shorthand for the ratio of the speed of an object relative to light, β = v/c. Fig. 3-2a illustrates a red train that is moving forward at a speed given by . From the primed frame of the train, a passenger shoots a bullet with a speed given by , where the distance is measured along a line parallel to the red axis rather than parallel to the black x axis. What is the composite velocity u of the bullet relative to the platform, as represented by the blue arrow? Referring to Fig. 3-2b: From the platform, the composite speed of the bullet is given by . The two yellow triangles are similar because they are right triangles that share a common angle α. In the large yellow triangle, the ratio . The ratios of corresponding sides of the two yellow triangles are constant, so that = . So and . Substituting the expressions for b and r into the expression for u in step 1 yields Einstein's formula for the addition of velocities: u = (u′ + v)/(1 + u′v/c²). The relativistic formula for addition of velocities presented above exhibits several important features: If u′ and v are both very small compared with the speed of light, then the product u′v/c² becomes vanishingly small, and the overall result becomes indistinguishable from the Galilean formula (Newton's formula) for the addition of velocities: u = u′ + v. The Galilean formula is a special case of the relativistic formula applicable to low velocities. If u′ is set equal to c, then the formula yields u = c regardless of the starting value of v. The velocity of light is the same for all observers regardless of their motions relative to the emitting source.

Time dilation and length contraction revisited

It is straightforward to obtain quantitative expressions for time dilation and length contraction. Fig. 3-3 is a composite image containing individual frames taken from two previous animations, simplified and relabeled for the purposes of this section. To reduce the complexity of the equations slightly, there are a variety of different shorthand notations for ct: w = ct and x⁰ = ct are common. One also sees very frequently the use of the convention c = 1. In Fig. 3-3a, segments OA and OK represent equal spacetime intervals. Time dilation is represented by the ratio OB/OK. The invariant hyperbola has the equation w² − x² = k², where k = OK, and the red line representing the world line of a particle in motion has the equation w = x/β = xc/v.
A bit of algebraic manipulation yields OB/OK = 1/√(1 − v²/c²). The expression involving the square root appears very frequently in relativity, and its reciprocal is called the Lorentz factor, denoted by the Greek letter gamma: γ = 1/√(1 − v²/c²). If v is greater than or equal to c, the expression for γ becomes physically meaningless, implying that c is the maximum possible speed in nature. For any v greater than zero, the Lorentz factor will be greater than one, although the shape of the curve is such that for low speeds, the Lorentz factor is extremely close to one. In Fig. 3-3b, segments OA and OK represent equal spacetime intervals. Length contraction is represented by the ratio OB/OK. The invariant hyperbola has the equation x² − w² = k², where k = OK, and the edges of the blue band representing the world lines of the endpoints of a rod in motion have slope 1/β = c/v. Event A has coordinates (x, w) = (γk, γβk). Since the tangent line through A and B has the equation w = (x − OB)/β, we have γβk = (γk − OB)/β, and hence OB = γk(1 − β²) = k/γ.

Lorentz transformations

The Galilean transformations and their consequent commonsense law of addition of velocities work well in our ordinary low-speed world of planes, cars and balls. Beginning in the mid-1800s, however, sensitive scientific instrumentation began finding anomalies that did not fit well with the ordinary addition of velocities. Lorentz transformations are used to transform the coordinates of an event from one frame to another in special relativity. The Lorentz factor appears in the Lorentz transformations: t′ = γ(t − vx/c²), x′ = γ(x − vt), y′ = y, z′ = z. The inverse Lorentz transformations are: t = γ(t′ + vx′/c²), x = γ(x′ + vt′), y = y′, z = z′. When v ≪ c and x is small enough, the v²/c² and vx/c² terms approach zero, and the Lorentz transformations approximate to the Galilean transformations. Although for brevity the Lorentz transformation equations are written without deltas, x means Δx, etc. We are, in general, always concerned with the space and time differences between events. Calling one set of transformations the normal Lorentz transformations and the other the inverse transformations is misleading, since there is no intrinsic difference between the frames. Different authors call one or the other set of transformations the "inverse" set. The forwards and inverse transformations are trivially related to each other, since the S frame can only be moving forwards or in reverse with respect to S′. So inverting the equations simply entails switching the primed and unprimed variables and replacing v with −v. Example: Terence and Stella are taking part in an Earth-to-Mars space race. Terence is an official at the starting line, while Stella is a participant. At time t = 0, Stella's spaceship accelerates instantaneously to a speed of 0.5 c. The distance from Earth to Mars is 300 light-seconds. Terence observes Stella crossing the finish-line clock at t = 600 s. But Stella observes the time on her ship chronometer to be 519.62 s as she passes the finish line, and she calculates the distance between the starting and finish lines, as measured in her frame, to be 259.81 light-seconds.

Deriving the Lorentz transformations

There have been many dozens of derivations of the Lorentz transformations since Einstein's original work in 1905, each with its particular focus. Although Einstein's derivation was based on the invariance of the speed of light, there are other physical principles that may serve as starting points. (A numerical check of these transformations, using the Earth-to-Mars example above, is sketched below.)
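The Earth-to-Mars numbers in the example above can be checked directly from the Lorentz transformations. The following short Python sketch is illustrative only (the function and variable names are not from any standard library); it transforms the finish-line event from Terence's frame into Stella's:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def lorentz(t, x, v):
    """Transform event coordinates (t, x) from frame S into a frame S' moving at +v along x."""
    g = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
    return g * (t - v * x / C ** 2), g * (x - v * t)

v = 0.5 * C
t_finish = 600.0        # s: Terence sees Stella cover 300 light-seconds at 0.5 c
x_finish = 300.0 * C    # m: position of the finish line in Terence's frame

t_prime, x_prime = lorentz(t_finish, x_finish, v)
print(round(t_prime, 2))        # 519.62 s on Stella's chronometer
print(round(x_prime / C, 6))    # 0.0: Stella sits at her own spatial origin at the finish
print(round(300.0 * math.sqrt(1 - 0.5 ** 2), 2))  # 259.81 light-seconds, the contracted course
```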
Returning to the derivations: ultimately, these alternative starting points can be considered different expressions of the underlying principle of locality, which states that the influence that one particle exerts on another cannot be transmitted instantaneously. The derivation given here and illustrated in Fig. 3-5 is based on one presented by Bais and makes use of previous results from the Relativistic Composition of Velocities, Time Dilation, and Length Contraction sections. Event P has coordinates (w, x) in the black "rest system" and coordinates (w′, x′) in the red frame that is moving with velocity parameter β. To determine w′ and x′ in terms of w and x (or the other way around) it is easier at first to derive the inverse Lorentz transformation. There can be no such thing as length expansion/contraction in the transverse directions: y′ must equal y and z′ must equal z, since otherwise whether a fast-moving 1 m ball could fit through a 1 m circular hole would depend on the observer. The first postulate of relativity states that all inertial frames are equivalent, and transverse expansion/contraction would violate this postulate. From the drawing, w = a + b and . From previous results using similar triangles, we know that . Because of time dilation, . Substituting equation (4) into yields . Length contraction and similar triangles give us and . Substituting the expressions for s, a, r and b into the equations in Step 2 immediately yields . The above equations are alternate expressions for the t and x equations of the inverse Lorentz transformation, as can be seen by substituting ct for w, ct′ for w′, and v/c for β. From the inverse transformation, the equations of the forwards transformation can be derived by solving for w′ and x′.

Linearity of the Lorentz transformations

The Lorentz transformations have a mathematical property called linearity, since x′ and t′ are obtained as linear combinations of x and t, with no higher powers involved. The linearity of the transformation reflects a fundamental property of spacetime that was tacitly assumed in the derivation, namely, that the properties of inertial frames of reference are independent of location and time. In the absence of gravity, spacetime looks the same everywhere. All inertial observers will agree on what constitutes accelerating and non-accelerating motion. Any one observer can use her own measurements of space and time, but there is nothing absolute about them. Another observer's conventions will do just as well. A result of linearity is that if two Lorentz transformations are applied sequentially, the result is also a Lorentz transformation. Example: Terence observes Stella speeding away from him at 0.500 c, and he can use the Lorentz transformations with v = 0.500 c to relate Stella's measurements to his own. Stella, in her frame, observes Ursula traveling away from her at 0.250 c, and she can use the Lorentz transformations with v = 0.250 c to relate Ursula's measurements with her own. Because of the linearity of the transformations and the relativistic composition of velocities, Terence can use the Lorentz transformations with v = 0.667 c (the relativistic composition of 0.500 c and 0.250 c) to relate Ursula's measurements with his own.

Doppler effect

The Doppler effect is the change in frequency or wavelength of a wave for a receiver and source in relative motion. For simplicity, we consider here two basic scenarios: (1) the motions of the source and/or receiver are exactly along the line connecting them (longitudinal Doppler effect), and (2) the motions are at right angles to that line (transverse Doppler effect). We are ignoring scenarios where they move along intermediate angles.
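Einstein's velocity-addition formula and the linearity example above can be verified numerically. The sketch below is illustrative Python with made-up function names; it reproduces both the train-and-bullet result and the Terence–Stella–Ursula composition:

```python
C = 299_792_458.0  # speed of light, m/s

def galilean_add(u_prime, v):
    """Common-sense velocity addition, valid only for speeds much less than c."""
    return u_prime + v

def relativistic_add(u_prime, v):
    """Einstein's composition of collinear velocities: (u' + v) / (1 + u'v/c^2)."""
    return (u_prime + v) / (1.0 + u_prime * v / C ** 2)

# Train-and-bullet thought experiment: a 0.4 c bullet inside a train moving at 0.4 c.
print(galilean_add(0.4 * C, 0.4 * C) / C)      # 0.8    (the naive expectation)
print(relativistic_add(0.4 * C, 0.4 * C) / C)  # ~0.690 (the relativistic result)

# Light composes to light: setting u' = c returns c for any train speed v.
print(relativistic_add(C, 0.4 * C) / C)        # 1.0

# Sequential boosts: Stella at 0.500 c relative to Terence and Ursula at 0.250 c
# relative to Stella compose to ~0.667 c, as quoted in the linearity example.
print(relativistic_add(0.250 * C, 0.500 * C) / C)  # ~0.667
```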
Longitudinal Doppler effect

The classical Doppler analysis deals with waves that are propagating in a medium, such as sound waves or water ripples, and which are transmitted between sources and receivers that are moving towards or away from each other. The analysis of such waves depends on whether the source, the receiver, or both are moving relative to the medium. Given the scenario where the receiver is stationary with respect to the medium, and the source is moving directly away from the receiver at a speed of vs for a velocity parameter of βs, the wavelength is increased, and the observed frequency f is given by f = f₀/(1 + βs), where f₀ is the emitted frequency. On the other hand, given the scenario where the source is stationary, and the receiver is moving directly away from the source at a speed of vr for a velocity parameter of βr, the wavelength is not changed, but the transmission velocity of the waves relative to the receiver is decreased, and the observed frequency f is given by f = (1 − βr)f₀. Light, unlike sound or water ripples, does not propagate through a medium, and there is no distinction between a source moving away from the receiver or a receiver moving away from the source. Fig. 3-6 illustrates a relativistic spacetime diagram showing a source separating from the receiver with a velocity parameter β, so that the separation between source and receiver at time is . Because of time dilation, . Since the slope of the green light ray is −1, . Hence, the relativistic Doppler effect is given by f = f₀ √((1 − β)/(1 + β)).

Transverse Doppler effect

Suppose that a source and a receiver, both approaching each other in uniform inertial motion along non-intersecting lines, are at their closest approach to each other. It would appear that the classical analysis predicts that the receiver detects no Doppler shift. Due to subtleties in the analysis, that expectation is not necessarily true. Nevertheless, when appropriately defined, transverse Doppler shift is a relativistic effect that has no classical analog. The subtleties are these: In scenario (a), the point of closest approach is frame-independent and represents the moment where there is no change in distance versus time (i.e. dr/dt = 0 where r is the distance between receiver and source) and hence no longitudinal Doppler shift. The source observes the receiver as being illuminated by light of frequency , but also observes the receiver as having a time-dilated clock. In frame S, the receiver is therefore illuminated by blueshifted light of frequency . In scenario (b) the illustration shows the receiver being illuminated by light from when the source was closest to the receiver, even though the source has moved on. Because the source's clocks are time dilated as measured in frame S, and since dr/dt was equal to zero at this point, the light from the source, emitted from this closest point, is redshifted with frequency . Scenarios (c) and (d) can be analyzed by simple time dilation arguments. In (c), the receiver observes light from the source as being blueshifted by a factor of , and in (d), the light is redshifted. The only seeming complication is that the orbiting objects are in accelerated motion. However, if an inertial observer looks at an accelerating clock, only the clock's instantaneous speed is important when computing time dilation. (The converse, however, is not true.) Most reports of transverse Doppler shift refer to the effect as a redshift and analyze the effect in terms of scenarios (b) or (d).
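The relativistic formula treats the frequency shift as a pure function of the separation speed, with no reference to a medium. A minimal Python illustration (the 500 THz emission frequency is an arbitrary example value, not taken from the text):

```python
import math

def doppler_ratio(beta):
    """Relativistic longitudinal Doppler ratio f_received / f_emitted.

    beta > 0 means source and receiver are separating (redshift);
    beta < 0 means they are approaching (blueshift).
    """
    return math.sqrt((1.0 - beta) / (1.0 + beta))

f_emitted = 500e12  # Hz, roughly green light; illustrative only

for beta in (0.1, 0.5, -0.5):
    print(beta, doppler_ratio(beta) * f_emitted / 1e12, "THz")
# At beta = 0.5 the received frequency is about 0.577 of the emitted value
# (a redshift); at beta = -0.5 it is about 1.732 times the emitted value.
```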
Energy and momentum Extending momentum to four dimensions In classical mechanics, the state of motion of a particle is characterized by its mass and its velocity. Linear momentum, the product of a particle's mass and velocity, is a vector quantity, possessing the same direction as the velocity: . It is a conserved quantity, meaning that if a closed system is not affected by external forces, its total linear momentum cannot change. In relativistic mechanics, the momentum vector is extended to four dimensions. Added to the momentum vector is a time component that allows the spacetime momentum vector to transform like the spacetime position vector . In exploring the properties of the spacetime momentum, we start, in Fig. 3-8a, by examining what a particle looks like at rest. In the rest frame, the spatial component of the momentum is zero, i.e. , but the time component equals mc. We can obtain the transformed components of this vector in the moving frame by using the Lorentz transformations, or we can read it directly from the figure because we know that and , since the red axes are rescaled by gamma. Fig. 3-8b illustrates the situation as it appears in the moving frame. It is apparent that the space and time components of the four-momentum go to infinity as the velocity of the moving frame approaches c. We will use this information shortly to obtain an expression for the four-momentum. Momentum of light Light particles, or photons, travel at the speed of c, the constant that is conventionally known as the speed of light. This statement is not a tautology, since many modern formulations of relativity do not start with constant speed of light as a postulate. Photons therefore propagate along a lightlike world line and, in appropriate units, have equal space and time components for every observer. A consequence of Maxwell's theory of electromagnetism is that light carries energy and momentum, and that their ratio is a constant: . Rearranging, , and since for photons, the space and time components are equal, E/c must therefore be equated with the time component of the spacetime momentum vector. Photons travel at the speed of light, yet have finite momentum and energy. For this to be so, the mass term in γmc must be zero, meaning that photons are massless particles. Infinity times zero is an ill-defined quantity, but E/c is well-defined. By this analysis, if the energy of a photon equals E in the rest frame, it equals in a moving frame. This result can be derived by inspection of Fig. 3-9 or by application of the Lorentz transformations, and is consistent with the analysis of Doppler effect given previously. Mass–energy relationship Consideration of the interrelationships between the various components of the relativistic momentum vector led Einstein to several important conclusions. In the low speed limit as approaches zero, approaches 1, so the spatial component of the relativistic momentum approaches mv, the classical term for momentum. Following this perspective, γm can be interpreted as a relativistic generalization of m. Einstein proposed that the relativistic mass of an object increases with velocity according to the formula . Likewise, comparing the time component of the relativistic momentum with that of the photon, , so that Einstein arrived at the relationship . Simplified to the case of zero velocity, this is Einstein's equation relating energy and mass. 
Another way of looking at the relationship between mass and energy is to consider a series expansion of γmc² at low velocity: E = γmc² ≈ mc² + ½mv² + … The second term is just an expression for the kinetic energy of the particle. Mass indeed appears to be another form of energy. The concept of relativistic mass that Einstein introduced in 1905, m_rel, although amply validated every day in particle accelerators around the globe (or indeed in any instrumentation whose use depends on high-velocity particles, such as electron microscopes, old-fashioned color television sets, etc.), has nevertheless not proven to be a fruitful concept in physics in the sense that it is not a concept that has served as a basis for other theoretical development. Relativistic mass, for instance, plays no role in general relativity. For this reason, as well as for pedagogical concerns, most physicists currently prefer a different terminology when referring to the relationship between mass and energy. "Relativistic mass" is a deprecated term. The term "mass" by itself refers to the rest mass or invariant mass, and is equal to the invariant length of the relativistic momentum vector. Expressed as a formula, (mc²)² = E² − (pc)². This formula applies to all particles, massless as well as massive. For photons, where the rest mass equals zero, it yields E = pc.

Four-momentum

Because of the close relationship between mass and energy, the four-momentum (also called 4-momentum) is also called the energy–momentum 4-vector. Using an uppercase P to represent the four-momentum and a lowercase p to denote the spatial momentum, the four-momentum may be written as P = (E/c, p), or alternatively, using the convention that c = 1, as P = (E, p).

Conservation laws

In physics, conservation laws state that certain particular measurable properties of an isolated physical system do not change as the system evolves over time. In 1915, Emmy Noether discovered that underlying each conservation law is a fundamental symmetry of nature. The fact that physical processes do not care where in space they take place (space translation symmetry) yields conservation of momentum, the fact that such processes do not care when they take place (time translation symmetry) yields conservation of energy, and so on. In this section, we examine the Newtonian views of conservation of mass, momentum and energy from a relativistic perspective.

Total momentum

To understand how the Newtonian view of conservation of momentum needs to be modified in a relativistic context, we examine the problem of two colliding bodies limited to a single dimension. In Newtonian mechanics, two extreme cases of this problem may be distinguished yielding mathematics of minimum complexity: (1) the two bodies rebound from each other in a completely elastic collision; (2) the two bodies stick together and continue moving as a single particle. This second case is the case of completely inelastic collision. For both cases (1) and (2), momentum, mass, and total energy are conserved. However, kinetic energy is not conserved in cases of inelastic collision: a certain fraction of the initial kinetic energy is converted to heat. In case (2), two masses with given momenta collide to produce a single particle of conserved mass traveling at the center-of-mass velocity of the original system. The total momentum is conserved. Fig. 3-10 illustrates the inelastic collision of two particles from a relativistic perspective. The time components of the two four-momenta add up to the total E/c of the resultant vector, meaning that energy is conserved. Likewise, the space components add up to form the p of the resultant vector.
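The invariant-length property of the four-momentum can be checked numerically: however fast the frame, E² − (pc)² always returns the same rest mass. A small illustrative sketch (the electron mass value is approximate and is merely an example):

```python
import math

C = 299_792_458.0  # m/s

def four_momentum(m, v):
    """Return (E/c, p) for a particle of rest mass m moving at speed v along x."""
    g = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
    return g * m * C, g * m * v          # E/c = gamma*m*c,  p = gamma*m*v

def invariant_mass(E_over_c, p):
    """Rest mass recovered from the invariant length of the four-momentum."""
    return math.sqrt(E_over_c ** 2 - p ** 2) / C

m_electron = 9.109e-31   # kg, approximate
for beta in (0.1, 0.5, 0.99):
    E_over_c, p = four_momentum(m_electron, beta * C)
    print(beta, invariant_mass(E_over_c, p) / m_electron)   # always ~1.0
```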
As Fig. 3-10 suggests, the four-momentum is, as expected, a conserved quantity. However, the invariant mass of the fused particle, given by the point where the invariant hyperbola of the total momentum intersects the energy axis, is not equal to the sum of the invariant masses of the individual particles that collided. Indeed, it is larger than the sum of the individual masses. Looking at the events of this scenario in reverse sequence, we see that non-conservation of mass is a common occurrence: when an unstable elementary particle spontaneously decays into two lighter particles, total energy is conserved, but the mass is not. Part of the mass is converted into kinetic energy.

Choice of reference frames

The freedom to choose any frame in which to perform an analysis allows us to pick one which may be particularly convenient. For analysis of momentum and energy problems, the most convenient frame is usually the "center-of-momentum frame" (also called the zero-momentum frame, or COM frame). This is the frame in which the space component of the system's total momentum is zero. Fig. 3-11 illustrates the breakup of a high-speed particle into two daughter particles. In the lab frame, the daughter particles are preferentially emitted in a direction oriented along the original particle's trajectory. In the COM frame, however, the two daughter particles are emitted in opposite directions, although their masses and the magnitudes of their velocities are generally not the same.

Energy and momentum conservation

In a Newtonian analysis of interacting particles, transformation between frames is simple because all that is necessary is to apply the Galilean transformation to all velocities. Since velocities transform as u′ = u − v, the momentum of each particle transforms as p′ = mu′ = m(u − v). If the total momentum of an interacting system of particles is observed to be conserved in one frame, it will likewise be observed to be conserved in any other frame. Conservation of momentum in the COM frame amounts to the requirement that the total momentum be zero both before and after collision. In the Newtonian analysis, conservation of mass dictates that the total mass be unchanged. In the simplified, one-dimensional scenarios that we have been considering, only one additional constraint is necessary before the outgoing momenta of the particles can be determined: an energy condition. In the one-dimensional case of a completely elastic collision with no loss of kinetic energy, the outgoing velocities of the rebounding particles in the COM frame will be precisely equal and opposite to their incoming velocities. In the case of a completely inelastic collision with total loss of kinetic energy, the outgoing velocities of the rebounding particles will be zero. Newtonian momenta, calculated as p = mv, fail to behave properly under Lorentz transformation. The linear transformation of velocities is replaced by the highly nonlinear relativistic composition of velocities, so that a calculation demonstrating conservation of momentum in one frame will be invalid in other frames. Einstein was faced with either having to give up conservation of momentum or having to change the definition of momentum. He chose the second option. The relativistic conservation law for energy and momentum replaces the three classical conservation laws for energy, momentum and mass. Mass is no longer conserved independently, because it has been subsumed into the total relativistic energy. This makes the relativistic conservation of energy a simpler concept than in nonrelativistic mechanics, because the total energy is conserved without any qualifications.
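The statement that the fused particle's invariant mass exceeds the sum of the rest masses can be made concrete with a toy calculation in the COM frame. This is an illustrative sketch in units where c = 1; the particle masses and speed are invented for the example:

```python
import math

def four_momentum(m, beta):
    """(E, p) of a particle with rest mass m and velocity beta, in units where c = 1."""
    g = 1.0 / math.sqrt(1.0 - beta ** 2)
    return g * m, g * m * beta

# Two unit-mass particles colliding head-on at 0.6 c and sticking together.
E1, p1 = four_momentum(1.0, +0.6)
E2, p2 = four_momentum(1.0, -0.6)

E_total, p_total = E1 + E2, p1 + p2         # energy and momentum are conserved
M = math.sqrt(E_total ** 2 - p_total ** 2)  # invariant mass of the fused particle

print(p_total)   # 0.0  (we are already in the center-of-momentum frame)
print(M)         # ~2.5 > 2.0: the kinetic energy shows up as extra rest mass
```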
As the sketch above illustrates, kinetic energy converted into heat or internal potential energy shows up as an increase in mass.

Introduction to curved spacetime

Technical topics

Is spacetime really curved?

In Poincaré's conventionalist view, the essential criteria according to which one should select a Euclidean versus non-Euclidean geometry would be economy and simplicity. A realist would say that Einstein discovered spacetime to be non-Euclidean. A conventionalist would say that Einstein merely found it more convenient to use non-Euclidean geometry. The conventionalist would maintain that Einstein's analysis said nothing about what the geometry of spacetime really is. That being said, is it possible to represent general relativity in terms of flat spacetime? And are there any situations where a flat spacetime interpretation of general relativity may be more convenient than the usual curved spacetime interpretation? In response to the first question, a number of authors, including Deser, Grishchuk, Rosen, and Weinberg, have provided various formulations of gravitation as a field in a flat manifold. Those theories are variously called "bimetric gravity", the "field-theoretical approach to general relativity", and so forth. Kip Thorne has provided a popular review of these theories. The flat spacetime paradigm posits that matter creates a gravitational field that causes rulers to shrink when they are turned from circumferential orientation to radial, and that causes the ticking rates of clocks to dilate. The flat spacetime paradigm is fully equivalent to the curved spacetime paradigm in that they both represent the same physical phenomena. However, their mathematical formulations are entirely different. Working physicists routinely switch between using curved and flat spacetime techniques depending on the requirements of the problem. The flat spacetime paradigm is convenient when performing approximate calculations in weak fields. Hence, flat spacetime techniques tend to be used when solving gravitational wave problems, while curved spacetime techniques tend to be used in the analysis of black holes.

Asymptotic symmetries

The spacetime symmetry group for special relativity is the Poincaré group, which is a ten-dimensional group of three Lorentz boosts, three rotations, and four spacetime translations. It is logical to ask what symmetries, if any, might apply in general relativity. A tractable case might be to consider the symmetries of spacetime as seen by observers located far away from all sources of the gravitational field. The naive expectation for asymptotically flat spacetime symmetries might be simply to extend and reproduce the symmetries of flat spacetime of special relativity, viz., the Poincaré group. In 1962 Hermann Bondi, M. G. van der Burg, A. W. Metzner and Rainer K. Sachs addressed this asymptotic symmetry problem in order to investigate the flow of energy at infinity due to propagating gravitational waves. Their first step was to decide on some physically sensible boundary conditions to place on the gravitational field at lightlike infinity to characterize what it means to say a metric is asymptotically flat, making no a priori assumptions about the nature of the asymptotic symmetry group, not even the assumption that such a group exists.
Then after designing what they considered to be the most sensible boundary conditions, they investigated the nature of the resulting asymptotic symmetry transformations that leave invariant the form of the boundary conditions appropriate for asymptotically flat gravitational fields. What they found was that the asymptotic symmetry transformations actually do form a group and the structure of this group does not depend on the particular gravitational field that happens to be present. This means that, as expected, one can separate the kinematics of spacetime from the dynamics of the gravitational field at least at spatial infinity. The puzzling surprise in 1962 was their discovery of a rich infinite-dimensional group (the so-called BMS group) as the asymptotic symmetry group, instead of the finite-dimensional Poincaré group, which is a subgroup of the BMS group. Not only are the Lorentz transformations asymptotic symmetry transformations, there are also additional transformations that are not Lorentz transformations but are asymptotic symmetry transformations. In fact, they found an additional infinity of transformation generators known as supertranslations. This implies the conclusion that General Relativity (GR) does not reduce to special relativity in the case of weak fields at long distances. Riemannian geometry Curved manifolds For physical reasons, a spacetime continuum is mathematically defined as a four-dimensional, smooth, connected Lorentzian manifold . This means the smooth Lorentz metric has signature . The metric determines the , as well as determining the geodesics of particles and light beams. About each point (event) on this manifold, coordinate charts are used to represent observers in reference frames. Usually, Cartesian coordinates are used. Moreover, for simplicity's sake, units of measurement are usually chosen such that the speed of light is equal to 1. A reference frame (observer) can be identified with one of these coordinate charts; any such observer can describe any event . Another reference frame may be identified by a second coordinate chart about . Two observers (one in each reference frame) may describe the same event but obtain different descriptions. Usually, many overlapping coordinate charts are needed to cover a manifold. Given two coordinate charts, one containing (representing an observer) and another containing (representing another observer), the intersection of the charts represents the region of spacetime in which both observers can measure physical quantities and hence compare results. The relation between the two sets of measurements is given by a non-singular coordinate transformation on this intersection. The idea of coordinate charts as local observers who can perform measurements in their vicinity also makes good physical sense, as this is how one actually collects physical data—locally. For example, two observers, one of whom is on Earth, but the other one who is on a fast rocket to Jupiter, may observe a comet crashing into Jupiter (this is the event ). In general, they will disagree about the exact location and timing of this impact, i.e., they will have different 4-tuples (as they are using different coordinate systems). Although their kinematic descriptions will differ, dynamical (physical) laws, such as momentum conservation and the first law of thermodynamics, will still hold. 
In fact, relativity theory requires more than this in the sense that it stipulates that these (and all other physical) laws must take the same form in all coordinate systems. This introduces tensors into relativity, by which all physical quantities are represented. Geodesics are said to be timelike, null, or spacelike if the tangent vector to one point of the geodesic is of this nature. Paths of particles and light beams in spacetime are represented by timelike and null (lightlike) geodesics, respectively.

Privileged character of 3+1 spacetime

See also
Basic introduction to the mathematics of curved spacetime
Complex spacetime
Einstein's thought experiments
Four-dimensionalism
Geography
Global spacetime structure
List of spacetimes
Metric space
Philosophy of space and time
Present
Time geography
Rheology
Rheology is the study of the flow of matter, primarily in a fluid (liquid or gas) state but also as "soft solids" or solids under conditions in which they respond with plastic flow rather than deforming elastically in response to an applied force. Rheology is the branch of physics that deals with the deformation and flow of materials, both solids and liquids. The term rheology was coined by Eugene C. Bingham, a professor at Lafayette College, in 1920 from a suggestion by a colleague, Markus Reiner. The term was inspired by the aphorism of Heraclitus (often mistakenly attributed to Simplicius), panta rhei ('everything flows'), and was first used to describe the flow of liquids and the deformation of solids. It applies to substances that have a complex microstructure, such as muds, sludges, suspensions, and polymers and other glass formers (e.g., silicates), as well as many foods and additives, bodily fluids (e.g., blood) and other biological materials, and other materials that belong to the class of soft matter. Newtonian fluids can be characterized by a single coefficient of viscosity for a specific temperature. Although this viscosity will change with temperature, it does not change with the strain rate. Only a small group of fluids exhibit such constant viscosity. The large class of fluids whose viscosity changes with the strain rate (the relative flow velocity) are called non-Newtonian fluids. Rheology generally accounts for the behavior of non-Newtonian fluids by characterizing the minimum number of functions that are needed to relate stresses with the rate of change of strain, or strain rate. For example, ketchup can have its viscosity reduced by shaking (or other forms of mechanical agitation, where the relative movement of different layers in the material actually causes the reduction in viscosity), but water cannot. Ketchup is a shear-thinning material, like yogurt and emulsion paint (known in US terminology as latex paint or acrylic paint), exhibiting thixotropy, where an increase in relative flow velocity causes a reduction in viscosity, for example, by stirring. Some other non-Newtonian materials show the opposite behavior, rheopecty (viscosity increasing with relative deformation), and are called shear-thickening or dilatant materials. Since Sir Isaac Newton originated the concept of viscosity, the study of liquids with strain-rate-dependent viscosity is also often called non-Newtonian fluid mechanics. The experimental characterisation of a material's rheological behaviour is known as rheometry, although the term rheology is frequently used synonymously with rheometry, particularly by experimentalists. Theoretical aspects of rheology concern the relation between the flow/deformation behaviour of a material and its internal structure (e.g., the orientation and elongation of polymer molecules), and the flow/deformation behaviour of materials that cannot be described by classical fluid mechanics or elasticity.

Scope

In practice, rheology is principally concerned with extending continuum mechanics to characterize the flow of materials that exhibit a combination of elastic, viscous and plastic behavior by properly combining elasticity and (Newtonian) fluid mechanics. It is also concerned with predicting mechanical behavior (on the continuum mechanical scale) based on the micro- or nanostructure of the material, e.g. the molecular size and architecture of polymers in solution or the particle size distribution in a solid suspension.
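One common empirical way to characterize the shear-thinning behavior described above (for ketchup, for example) is the power-law, or Ostwald–de Waele, model, in which the apparent viscosity falls as a power of the shear rate. The model is not introduced in the text itself, and the parameter values below are invented for illustration; this is only a sketch:

```python
def power_law_viscosity(shear_rate, K=10.0, n=0.4):
    """Apparent viscosity (Pa*s) of a power-law fluid: eta = K * shear_rate**(n - 1).

    n < 1 gives shear thinning (ketchup-like behavior), n = 1 recovers a
    Newtonian fluid with constant viscosity K, and n > 1 gives shear thickening.
    K = 10 and n = 0.4 are illustrative values, not measured data.
    """
    return K * shear_rate ** (n - 1.0)

for rate in (0.1, 1.0, 10.0, 100.0):   # shear rates in 1/s
    print(rate, round(power_law_viscosity(rate), 3))
# The apparent viscosity falls by roughly a factor of 4 for every tenfold
# increase in shear rate: stirring or shaking "thins" the material.
```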
Materials with the characteristics of a fluid will flow when subjected to a stress, which is defined as the force per area. There are different sorts of stress (e.g. shear, torsional, etc.), and materials can respond differently under different stresses. Much of theoretical rheology is concerned with associating external forces and torques with internal stresses, internal strain gradients, and flow velocities. Rheology unites the seemingly unrelated fields of plasticity and non-Newtonian fluid dynamics by recognizing that materials undergoing these types of deformation are unable to support a stress (particularly a shear stress, since it is easier to analyze shear deformation) in static equilibrium. In this sense, a solid undergoing plastic deformation is a fluid, although no viscosity coefficient is associated with this flow. Granular rheology refers to the continuum mechanical description of granular materials. One of the major tasks of rheology is to establish by measurement the relationships between strains (or rates of strain) and stresses, although a number of theoretical developments (such as assuring frame invariants) are also required before using the empirical data. These experimental techniques are known as rheometry and are concerned with the determination of well-defined rheological material functions. Such relationships are then amenable to mathematical treatment by the established methods of continuum mechanics. The characterization of flow or deformation originating from a simple shear stress field is called shear rheometry (or shear rheology). The study of extensional flows is called extensional rheology. Shear flows are much easier to study and thus much more experimental data are available for shear flows than for extensional flows. Viscoelasticity Fluid and solid character are relevant at long times:We consider the application of a constant stress (a so-called creep experiment): if the material, after some deformation, eventually resists further deformation, it is considered a solid if, by contrast, the material flows indefinitely, it is considered a fluid By contrast, elastic and viscous (or intermediate, viscoelastic) behaviour is relevant at short times (transient behaviour):We again consider the application of a constant stress: if the material deformation strain increases linearly with increasing applied stress, then the material is linear elastic within the range it shows recoverable strains. Elasticity is essentially a time independent processes, as the strains appear the moment the stress is applied, without any time delay. if the material deformation strain rate increases linearly with increasing applied stress, then the material is viscous in the Newtonian sense. These materials are characterized due to the time delay between the applied constant stress and the maximum strain. if the materials behaves as a combination of viscous and elastic components, then the material is viscoelastic. Theoretically such materials can show both instantaneous deformation as elastic material and a delayed time dependent deformation as in fluids. Plasticity is the behavior observed after the material is subjected to a yield stress:A material that behaves as a solid under low applied stresses may start to flow above a certain level of stress, called the yield stress of the material. The term plastic solid is often used when this plasticity threshold is rather high, while yield stress fluid is used when the threshold stress is rather low. 
However, there is no fundamental difference between the two concepts.

Dimensionless numbers

Deborah number

On one end of the spectrum we have an inviscid or a simple Newtonian fluid and on the other end, a rigid solid; thus the behavior of all materials falls somewhere in between these two ends. The difference in material behavior is characterized by the level and nature of the elasticity present in the material when it deforms, which takes the material behavior into the non-Newtonian regime. The non-dimensional Deborah number is designed to account for the degree of non-Newtonian behavior in a flow. The Deborah number is defined as the ratio of the characteristic time of relaxation (which depends purely on the material and other conditions such as the temperature) to the characteristic time of the experiment or observation. Small Deborah numbers represent Newtonian flow, while non-Newtonian behavior (with both viscous and elastic effects present) occurs for intermediate Deborah numbers, and high Deborah numbers indicate an elastic/rigid solid. Since the Deborah number is a ratio, either the numerator or the denominator can alter its value: a very small Deborah number can be obtained for a fluid with an extremely small relaxation time or a very large experimental time, for example.

Reynolds number

In fluid mechanics, the Reynolds number is a measure of the ratio of inertial forces to viscous forces, and consequently it quantifies the relative importance of these two types of effect for given flow conditions. At low Reynolds numbers viscous effects dominate and the flow is laminar, whereas at high Reynolds numbers inertia predominates and the flow may be turbulent. However, since rheology is concerned with fluids which do not have a fixed viscosity, but one which can vary with flow and time, calculation of the Reynolds number can be complicated. It is one of the most important dimensionless numbers in fluid dynamics and is used, usually along with other dimensionless numbers, to provide a criterion for determining dynamic similitude. When two geometrically similar flow patterns, in perhaps different fluids with possibly different flow rates, have the same values for the relevant dimensionless numbers, they are said to be dynamically similar. Typically the Reynolds number is given as Re = ρ u_s L / μ = u_s L / ν, where:
u_s – mean flow velocity, [m s⁻¹]
L – characteristic length, [m]
μ – (absolute) dynamic fluid viscosity, [N s m⁻²] or [Pa s]
ν – kinematic fluid viscosity, ν = μ/ρ, [m² s⁻¹]
ρ – fluid density, [kg m⁻³].
(A numerical illustration appears below.)

Measurement

Rheometers are instruments used to characterize the rheological properties of materials, typically fluids that are melts or solutions. These instruments impose a specific stress field or deformation on the fluid, and monitor the resultant deformation or stress. Instruments can be run in steady flow or oscillatory flow, in both shear and extension.

Applications

Rheology has applications in materials science, engineering, geophysics, physiology, human biology and pharmaceutics. In materials science, it is utilized in the production of many industrially important substances, such as cement, paint, and chocolate, which have complex flow characteristics. In addition, plasticity theory has been similarly important for the design of metal-forming processes. The science of rheology and the characterization of viscoelastic properties in the production and use of polymeric materials have been critical for the production of many products for use in both the industrial and military sectors.
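As a concrete illustration of the Reynolds number defined above, the sketch below compares a low-viscosity and a high-viscosity fluid in the same geometry. The numbers are rough, invented example values, and for a non-Newtonian fluid the viscosity entered here would itself depend on the shear rate, which is exactly the complication noted above:

```python
def reynolds_number(density, velocity, length, dynamic_viscosity):
    """Re = rho * u_s * L / mu, the ratio of inertial to viscous forces."""
    return density * velocity * length / dynamic_viscosity

# Water-like fluid in a 5 cm pipe at 1 m/s (illustrative values):
re_water = reynolds_number(density=1000.0, velocity=1.0, length=0.05,
                           dynamic_viscosity=1.0e-3)
# Honey-like fluid in the same pipe at the same speed:
re_honey = reynolds_number(density=1400.0, velocity=1.0, length=0.05,
                           dynamic_viscosity=10.0)

print(re_water)  # ~50,000: inertia dominates; the flow is likely turbulent
print(re_honey)  # ~7: viscous effects dominate; the flow is laminar
```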
Study of the flow properties of liquids is important for pharmacists working in the manufacture of several dosage forms, such as simple liquids, ointments, creams, pastes, etc. The flow behavior of liquids under applied stress is of great relevance in the field of pharmacy. Flow properties are used as important quality control tools to maintain product quality and reduce batch-to-batch variations. Materials science Polymers Examples may be given to illustrate the potential applications of these principles to practical problems in the processing and use of rubbers, plastics, and fibers. Polymers constitute the basic materials of the rubber and plastic industries and are of vital importance to the textile, petroleum, automobile, paper, and pharmaceutical industries. Their viscoelastic properties determine the mechanical performance of the final products of these industries, and also the success of processing methods at intermediate stages of production. In viscoelastic materials, such as most polymers and plastics, the presence of liquid-like behaviour depends on, and so varies with, the rate of applied load, i.e., how quickly a force is applied. The silicone toy 'Silly Putty' behaves quite differently depending on the time rate of applying a force. Pull on it slowly and it exhibits continuous flow, similar to that evidenced in a highly viscous liquid. Alternatively, when hit hard and directly, it shatters like a silicate glass. In addition, conventional rubber undergoes a glass transition (often called a rubber-glass transition). For example, the Space Shuttle Challenger disaster was caused by rubber O-rings that were being used well below their glass transition temperature on an unusually cold Florida morning, and thus could not flex adequately to form proper seals between sections of the two solid-fuel rocket boosters. Biopolymers Sol-gel With the viscosity of a sol adjusted into a proper range, both optical-quality glass fiber and refractory ceramic fiber can be drawn; these are used for fiber-optic sensors and thermal insulation, respectively. The mechanisms of hydrolysis and condensation, and the rheological factors that bias the structure toward linear or branched structures, are the most critical issues of sol-gel science and technology. Geophysics The scientific discipline of geophysics includes the study of the flow of molten lava and of debris flows (fluid mudslides). This disciplinary branch also deals with solid Earth materials which only exhibit flow over extended time scales. Those that display viscous behaviour are known as rheids. For example, granite can flow plastically with a negligible yield stress at room temperature (i.e. a viscous flow). Long-term creep experiments (~10 years) indicate that the viscosity of granite and of glass under ambient conditions is on the order of 10^20 poise. Physiology Physiology includes the study of many bodily fluids that have complex structure and composition, and thus exhibit a wide range of viscoelastic flow characteristics. In particular, there is a specialist study of blood flow called hemorheology. This is the study of the flow properties of blood and its elements (plasma and formed elements, including red blood cells, white blood cells and platelets). Blood viscosity is determined by plasma viscosity, hematocrit (the volume fraction of red blood cells, which constitute 99.9% of the cellular elements) and the mechanical behaviour of red blood cells.
Therefore, red blood cell mechanics is the major determinant of the flow properties of blood. (The ocular vitreous humor is also subject to rheologic observation, particularly during studies of age-related vitreous liquefaction, or synaeresis.) The leading characteristic of blood in hemorheology is shear thinning in steady shear flow; a minimal power-law sketch of shear thinning is given at the end of this section. Other non-Newtonian rheological characteristics that blood can demonstrate include pseudoplasticity, viscoelasticity, and thixotropy. Red blood cell aggregation There are two current major hypotheses that attempt to explain blood flow predictions and the shear-thinning response. The two models also attempt to account for the driving force behind reversible red blood cell aggregation, although the mechanism is still being debated. Red blood cell aggregation has a direct effect on blood viscosity and circulation, and the foundations of hemorheology can also inform the modeling of other biofluids. The bridging or "cross-bridging" hypothesis suggests that macromolecules physically crosslink adjacent red blood cells into rouleaux structures. This occurs through adsorption of macromolecules onto the red blood cell surfaces. The depletion layer hypothesis suggests the opposite mechanism: the surfaces of the red blood cells are pushed together by an osmotic pressure gradient created by overlapping depletion layers. The tendency toward rouleaux aggregation in whole-blood rheology can be explained in terms of hematocrit and fibrinogen concentration. Techniques that researchers use to measure cell interaction in vitro include optical trapping and microfluidics. Disease and diagnostics Changes in blood viscosity have been shown to be linked with conditions such as hyperviscosity, hypertension, sickle cell anemia, and diabetes. Hemorheological measurements and genomic testing technologies act as preventative measures and diagnostic tools. Hemorheology has also been correlated with aging effects, especially impaired blood fluidity, and studies have shown that physical activity may improve blood fluidity and counteract this thickening. Zoology Many animals make use of rheological phenomena, for example sandfish that exploit the granular rheology of dry sand to "swim" in it, or land gastropods that use snail slime for adhesive locomotion. Certain animals produce specialized endogenous complex fluids, such as the sticky slime produced by velvet worms to immobilize prey or the fast-gelling underwater slime secreted by hagfish to deter predators. Food rheology Food rheology is important in the manufacture and processing of food products, such as cheese and gelato. An adequate rheology is important for the enjoyment of many common foods, particularly in the case of sauces, dressings, yogurt, or fondue. Thickening agents, or thickeners, are substances which, when added to an aqueous mixture, increase its viscosity without substantially modifying its other properties, such as taste. They provide body, increase stability, and improve the suspension of added ingredients. Thickening agents are often used as food additives and in cosmetics and personal hygiene products. Some thickening agents are gelling agents, which form a gel. These agents are materials used to thicken and stabilize liquid solutions, emulsions, and suspensions. They dissolve in the liquid phase as a colloid mixture that forms a weakly cohesive internal structure. Food thickeners are frequently based on either polysaccharides (starches, vegetable gums, and pectin) or proteins.
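As promised above, here is a minimal power-law (Ostwald–de Waele) sketch of shear thinning of the kind exhibited by blood and by many sauces and dressings; the power-law model is a generic textbook description rather than anything specific to this article, and the consistency and flow-behaviour indices below are assumed values.

```python
# Ostwald–de Waele (power-law) model: tau = K * gamma_dot**n,
# apparent viscosity eta = K * gamma_dot**(n - 1).
# K (consistency index) and n (flow-behaviour index) are made-up
# illustrative values; n < 1 corresponds to shear thinning.

def apparent_viscosity(shear_rate_per_s: float, K: float = 0.1, n: float = 0.6) -> float:
    """Apparent viscosity (Pa·s) of a power-law fluid at a given shear rate (1/s)."""
    return K * shear_rate_per_s ** (n - 1.0)

if __name__ == "__main__":
    for rate in (0.1, 1.0, 10.0, 100.0):
        print(f"shear rate {rate:7.1f} 1/s -> apparent viscosity "
              f"{apparent_viscosity(rate):.4f} Pa·s")
```

The viscosity drops as the shear rate rises, which is the qualitative behaviour described for blood in steady shear flow.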
Concrete rheology The workability of concrete and mortar is related to the rheological properties of the fresh cement paste. The mechanical properties of hardened concrete increase if less water is used in the concrete mix design; however, reducing the water-to-cement ratio may decrease the ease of mixing and application. To avoid these undesired effects, superplasticizers are typically added to decrease the apparent yield stress and the viscosity of the fresh paste. Their addition greatly improves concrete and mortar properties. Filled polymer rheology The incorporation of various types of fillers into polymers is a common means of reducing cost and of imparting certain desirable mechanical, thermal, electrical and magnetic properties to the resulting material. The advantages that filled polymer systems have to offer come with an increased complexity in the rheological behavior. Usually, when the use of fillers is considered, a compromise has to be made between the improved mechanical properties in the solid state on one side and, on the other, the increased difficulty of melt processing, the problem of achieving uniform dispersion of the filler in the polymer matrix, and the economics of the process due to the added compounding step. The rheological properties of filled polymers are determined not only by the type and amount of filler, but also by the shape, size and size distribution of its particles. The viscosity of filled systems generally increases with increasing filler fraction. This can be partially ameliorated by using broad particle size distributions (the Farris effect). An additional factor is the stress transfer at the filler–polymer interface. The interfacial adhesion can be substantially enhanced via a coupling agent that adheres well to both the polymer and the filler particles. The type and amount of surface treatment on the filler are thus additional parameters affecting the rheological and material properties of filled polymeric systems. It is important to take wall slip into consideration when performing the rheological characterization of highly filled materials, as there can be a large difference between the actual strain and the measured strain. Rheologist A rheologist is an interdisciplinary scientist or engineer who studies the flow of complex liquids or the deformation of soft solids. It is not a primary degree subject; there is no qualification of rheologist as such. Most rheologists have a qualification in mathematics, the physical sciences (e.g. chemistry, physics, geology, biology), engineering (e.g. mechanical, chemical, materials science, plastics engineering or civil engineering), medicine, or certain technologies, notably materials or food. Typically, a small amount of rheology may be studied when obtaining a degree, but a person working in rheology will extend this knowledge during postgraduate research or by attending short courses and by joining a professional association. See also Bingham plastic Die swell Fluid dynamics Glass transition Interfacial rheology Liquid List of rheologists Microrheology Nordic Rheology Society Rheological weldability for thermoplastics Rheopectic Solid Transport phenomena μ(I) rheology: one model of the rheology of a granular flow.
References External links "The Origins of Rheology: A short historical excursion" by Deepak Doraiswamy, DuPont iTechnologies RHEOTEST Medingen GmbH – Short history and collection of rheological instruments from the time of Fritz Höppler - On the Rheology of Cats Societies American Society of Rheology Australian Society of Rheology British Society of Rheology European Society of Rheology French Society of Rheology Nordic Rheology Society Romanian Society of Rheology Korean Society of Rheology Journals Applied Rheology Journal of Non-Newtonian Fluid Mechanics Journal of Rheology Rheologica Acta Tribology
Davisson–Germer experiment
The Davisson–Germer experiment was a 1923–1927 experiment by Clinton Davisson and Lester Germer at Western Electric (later Bell Labs), in which electrons, scattered by the surface of a crystal of nickel metal, displayed a diffraction pattern. This confirmed the hypothesis, advanced by Louis de Broglie in 1924, of wave–particle duality, and also the wave mechanics approach of the Schrödinger equation. It was an experimental milestone in the creation of quantum mechanics. History and overview According to Maxwell's equations in the late 19th century, light was thought to consist of waves of electromagnetic fields and matter was thought to consist of localized particles. However, this was challenged in Albert Einstein's 1905 paper on the photoelectric effect, which described light as discrete and localized quanta of energy (now called photons); this work won him the Nobel Prize in Physics in 1921. In 1924 Louis de Broglie presented his thesis concerning the wave–particle duality theory, which proposed the idea that all matter displays the wave–particle duality of photons. According to de Broglie, for all matter and for radiation alike, the energy E of the particle is related to the frequency ν of its associated wave by the Planck relation E = hν, and the momentum p of the particle is related to its wavelength λ by what is now known as the de Broglie relation λ = h/p, where h is the Planck constant. An important contribution to the Davisson–Germer experiment was made by Walter M. Elsasser in Göttingen in the 1920s, who remarked that the wave-like nature of matter might be investigated by electron scattering experiments on crystalline solids, just as the wave-like nature of X-rays had been confirmed through Barkla's X-ray scattering experiments on crystalline solids. This suggestion of Elsasser was then communicated by his senior colleague (and later Nobel Prize recipient) Max Born to physicists in England. When the Davisson and Germer experiment was performed, the results were explained by Elsasser's proposition. However, the initial intention of the Davisson and Germer experiment was not to confirm the de Broglie hypothesis, but rather to study the surface of nickel. In 1927 at Bell Labs, Clinton Davisson and Lester Germer fired slow-moving electrons at a crystalline nickel target. The angular dependence of the reflected electron intensity was measured and was determined to have a diffraction pattern similar to those predicted by Bragg for X-rays; some small but significant differences were due to the average crystal potential, as Hans Bethe showed in his more complete analysis. At the same time, George Paget Thomson and his student Alexander Reid independently demonstrated the same effect by firing electrons through celluloid films to produce a diffraction pattern, and Davisson and Thomson shared the Nobel Prize in Physics in 1937. The exclusion of Germer from sharing the prize has puzzled physicists ever since. The Davisson–Germer experiment confirmed the de Broglie hypothesis that matter has wave-like behavior. This, in combination with the Compton effect discovered by Arthur Compton (who won the Nobel Prize in Physics in 1927), established the wave–particle duality hypothesis, which was a fundamental step in quantum theory. Early experiments Davisson began work in 1921 to study electron bombardment and secondary electron emissions. A series of experiments continued through 1925. Prior to 1923, Davisson had been working with Charles H.
Kunsman on detecting the effects of electron bombardment on tungsten when they noticed that 1% of the electrons bounced straight back to the electron gun in elastic scattering. This small but unexpected result led Davisson to theorize that he could examine the electron configuration of the atom in an analogous manner to how the Rutherford alpha particle scattering had examined the nucleus. They changed to a high vacuum and used nickel along with various other metals with unimpressive results. In October 1924 when Germer joined the experiment, Davisson’s actual objective was to study the surface of a piece of nickel by directing a beam of electrons at the surface and observing how many electrons bounced off at various angles. They expected that because of the small size of electrons, even the smoothest crystal surface would be too rough and thus the electron beam would experience diffused reflection. The experiment consisted of firing an electron beam (from an electron gun, an electrostatic particle accelerator) at a nickel crystal, perpendicular to the surface of the crystal, and measuring how the number of reflected electrons varied as the angle between the detector and the nickel surface varied. The electron gun was a heated tungsten filament that released thermally excited electrons which were then accelerated through an electric potential difference, giving them a certain amount of kinetic energy, towards the nickel crystal. To avoid collisions of the electrons with other atoms on their way towards the surface, the experiment was conducted in a vacuum chamber. To measure the number of electrons that were scattered at different angles, a faraday cup electron detector that could be moved on an arc path about the crystal was used. The detector was designed to accept only elastically scattered electrons. During the experiment, air accidentally entered the chamber, producing an oxide film on the nickel surface. To remove the oxide, Davisson and Germer heated the specimen in a high temperature oven, not knowing that this caused the formerly polycrystalline structure of the nickel to form large single crystal areas with crystal planes continuous over the width of the electron beam. When they started the experiment again and the electrons hit the surface, they were scattered by nickel atoms in crystal planes (so the atoms were regularly spaced) of the crystal. This, in 1925, generated a diffraction pattern with unexpected and uncorrelated peaks due to the heating causing a ten crystal faceted area. They changed the experiment to a single crystal and started again. Breakthrough On his second honeymoon, Davisson attended the Oxford meeting of the British Association for the Advancement of Science in summer 1926. At this meeting, he learned of the recent advances in quantum mechanics. To Davisson's surprise, Max Born gave a lecture that used the uncorrelated diffraction curves from Davisson's 1923 research on platinum with Kunsman, using the data as confirmation of the de Broglie hypothesis of which Davisson was unaware. Davisson then learned that in prior years, other scientists – Walter Elsasser, E. G. Dymond, and Blackett, James Chadwick, and Charles Ellis – had attempted similar diffraction experiments, but were unable to generate low enough vacuums or detect the low-intensity beams needed. Returning to the United States, Davisson made modifications to the tube design and detector mounting, adding azimuth in addition to colatitude. 
Following experiments generated a strong signal peak at 65 V and an angle . He published a note to Nature titled, "The Scattering of Electrons by a Single Crystal of Nickel". Questions still needed to be answered and experimentation continued through 1927, because Davisson was now familiar with the de Broglie formula and had designed the test to see if any effect could be discerned for a changed electron wavelength , according to the de Broglie relationship, which they knew should create a peak at 78 and not 65 V as their paper had shown. Because of their failure to correlate with the de Broglie formula, their paper introduced an ad hoc contraction factor of 0.7, which, however, could only explain eight of the thirteen beams. By varying the applied voltage to the electron gun, the maximum intensity of electrons diffracted by the atomic surface was found at different angles. The highest intensity was observed at an angle with a voltage of 54 V, giving the electrons a kinetic energy of . As Max von Laue proved in 1912, the periodic crystal structure serves as a type of three-dimensional diffraction grating. The angles of maximum reflection are given by Bragg's condition for constructive interference from an array, Bragg's law for , , and for the spacing of the crystalline planes of nickel obtained from previous X-ray scattering experiments on crystalline nickel. According to the de Broglie relation, electrons with kinetic energy of have a wavelength of . The experimental outcome was via Bragg's law, which closely matched the predictions. As Davisson and Germer state in their 1928 follow-up paper to their Nobel prize winning paper, "These results, including the failure of the data to satisfy the Bragg formula, are in accord with those previously obtained in our experiments on electron diffraction. The reflection data fail to satisfy the Bragg relation for the same reason that the electron diffraction beams fail to coincide with their Laue beam analogues." However, they add, "The calculated wave-lengths are in excellent agreement with the theoretical values of h/mv as shown in the accompanying table." So although electron energy diffraction does not follow the Bragg law, it did confirm de Broglie's theory that particles behave like waves. The full explanation was provided by Hans Bethe who solved Schrödinger equation for the case of electron diffraction. Davisson and Germer's accidental discovery of the diffraction of electrons was the first direct evidence confirming de Broglie's hypothesis that particles can have wave properties as well. Davisson's attention to detail, his resources for conducting basic research, the expertise of colleagues, and luck all contributed to the experimental success. Practical applications The specific approach used by Davisson and Germer used low energy electrons, what is now called low-energy electron diffraction (LEED). It wasn't until much later that development of experimental methods exploiting ultra-high vacuum technologies (e.g. the approach described by Alpert in 1953) enabled the extensive use of LEED diffraction to explore the surfaces of crystallized elements and the spacing between atoms. Methods where higher energy electrons are used for diffraction in many different ways developed much earlier. References External links Foundational quantum physics Physics experiments 1927 in science
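As a numerical companion to the account above, the following Python sketch computes the non-relativistic de Broglie wavelength of electrons accelerated through the voltages quoted in the article (54, 65 and 78 V); the physical constants are standard values, and the calculation is mine rather than a reproduction of Davisson and Germer's own analysis.

```python
import math

# Non-relativistic de Broglie wavelength of an electron accelerated
# through a potential difference V: lambda = h / sqrt(2 * m_e * e * V).
H = 6.626e-34         # Planck constant, J s
M_E = 9.109e-31       # electron mass, kg
E_CHARGE = 1.602e-19  # elementary charge, C

def de_broglie_wavelength_m(accelerating_voltage_v: float) -> float:
    momentum = math.sqrt(2.0 * M_E * E_CHARGE * accelerating_voltage_v)
    return H / momentum

if __name__ == "__main__":
    for volts in (54.0, 65.0, 78.0):  # voltages mentioned in the article
        lam = de_broglie_wavelength_m(volts)
        print(f"{volts:5.1f} V -> lambda = {lam * 1e9:.4f} nm")
```

At 54 V this gives λ ≈ 0.167 nm, comparable to the atomic spacing in a nickel crystal, which is why diffraction maxima appear at accessible angles.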
Vacuum energy
Vacuum energy is an underlying background energy that exists in space throughout the entire universe. The vacuum energy is a special case of zero-point energy that relates to the quantum vacuum. The effects of vacuum energy can be experimentally observed in various phenomena such as spontaneous emission, the Casimir effect, and the Lamb shift, and are thought to influence the behavior of the Universe on cosmological scales. Using the upper limit of the cosmological constant, the vacuum energy of free space has been estimated to be 10^−9 joules (10^−2 ergs) per cubic meter, or ~5 GeV per cubic meter. However, in quantum electrodynamics, consistency with the principle of Lorentz covariance and with the magnitude of the Planck constant suggests a much larger value of 10^113 joules per cubic meter. This huge discrepancy is known as the cosmological constant problem or, colloquially, the "vacuum catastrophe". Origin Quantum field theory states that all fundamental fields, such as the electromagnetic field, must be quantized at every point in space. A field in physics may be envisioned as if space were filled with interconnected vibrating balls and springs, and the strength of the field is like the displacement of a ball from its rest position. The theory requires "vibrations" in, or more accurately changes in the strength of, such a field to propagate as per the appropriate wave equation for the particular field in question. The second quantization of quantum field theory requires that each such ball–spring combination be quantized, that is, that the strength of the field be quantized at each point in space. Canonically, if the field at each point in space is a simple harmonic oscillator, its quantization places a quantum harmonic oscillator at each point. Excitations of the field correspond to the elementary particles of particle physics. Thus, according to the theory, even the vacuum has a vastly complex structure and all calculations of quantum field theory must be made in relation to this model of the vacuum. The theory considers the vacuum to implicitly have the same properties as a particle, such as spin or polarization in the case of light, energy, and so on. According to the theory, most of these properties cancel out on average, leaving the vacuum empty in the literal sense of the word. One important exception, however, is the vacuum energy, or the vacuum expectation value of the energy. The quantization of a simple harmonic oscillator requires the lowest possible energy, or zero-point energy, of such an oscillator to be E = ħω/2, where ω is the oscillator's angular frequency. Summing over all possible oscillators at all points in space gives an infinite quantity. To remove this infinity, one may argue that only differences in energy are physically measurable, much as the concept of potential energy has been treated in classical mechanics for centuries. This argument is the underpinning of the theory of renormalization. In all practical calculations, this is how the infinity is handled. Vacuum energy can also be thought of in terms of virtual particles (also known as vacuum fluctuations) which are created and destroyed out of the vacuum. These particles are always created out of the vacuum in particle–antiparticle pairs, which in most cases shortly annihilate each other and disappear. However, these particles and antiparticles may interact with others before disappearing, a process which can be mapped using Feynman diagrams.
Note that this method of computing vacuum energy is mathematically equivalent to having a quantum harmonic oscillator at each point and, therefore, suffers the same renormalization problems. Additional contributions to the vacuum energy come from spontaneous symmetry breaking in quantum field theory. Implications Vacuum energy has a number of consequences. In 1948, Dutch physicists Hendrik B. G. Casimir and Dirk Polder predicted the existence of a tiny attractive force between closely placed metal plates due to resonances in the vacuum energy in the space between them. This is now known as the Casimir effect and has since been extensively experimentally verified. It is therefore believed that the vacuum energy is "real" in the same sense that more familiar conceptual objects such as electrons, magnetic fields, etc., are real. However, alternative explanations for the Casimir effect have since been proposed. Other predictions are harder to verify. Vacuum fluctuations are always created as particle–antiparticle pairs. The creation of these virtual particles near the event horizon of a black hole has been hypothesized by physicist Stephen Hawking to be a mechanism for the eventual "evaporation" of black holes. If one of the pair is pulled into the black hole before this, then the other particle becomes "real" and energy/mass is essentially radiated into space from the black hole. This loss is cumulative and could result in the black hole's disappearance over time. The time required is dependent on the mass of the black hole (the equations indicate that the smaller the black hole, the more rapidly it evaporates) but could be on the order of 1060 years for large solar-mass black holes. The vacuum energy also has important consequences for physical cosmology. General relativity predicts that energy is equivalent to mass, and therefore, if the vacuum energy is "really there", it should exert a gravitational force. Essentially, a non-zero vacuum energy is expected to contribute to the cosmological constant, which affects the expansion of the universe. Field strength of vacuum energy The field strength of vacuum energy is a concept proposed in a theoretical study that explores the nature of the vacuum and its relationship to gravitational interactions. The study derived a mathematical framework that uses the field strength of vacuum energy as an indicator of the bulk (spacetime) resistance to localized curvature. It illustrates the association of the field strength of vacuum energy to the curvature of the background, where this concept challenges the traditional understanding of gravity and suggests that the gravitational constant, G, may not be a universal constant, but rather a parameter dependent on the field strength of vacuum energy. Determination of the value of G has been a topic of extensive research, with numerous experiments conducted over the years in an attempt to measure its precise value. These experiments, often employing high-precision techniques, have aimed to provide accurate measurements of G and establish a consensus on its exact value. However, the outcomes of these experiments have shown significant inconsistencies, making it difficult to reach a definitive conclusion regarding the value of G. This lack of consensus has puzzled scientists and called for alternative explanations. To test the theoretical predictions regarding the field strength of vacuum energy, specific experimental conditions involving the position of the moon are recommended in the theoretical study. 
These conditions aim to achieve consistent outcomes in precision measurements of G. The ultimate goal of such experiments is to either falsify or provide confirmations to the proposed theoretical framework. The significance of exploring the field strength of vacuum energy lies in its potential to revolutionize our understanding of gravity and its interactions. History In 1934, Georges Lemaître used an unusual perfect-fluid equation of state to interpret the cosmological constant as due to vacuum energy. In 1948, the Casimir effect provided an experimental method for a verification of the existence of vacuum energy; in 1955, however, Evgeny Lifshitz offered a different origin for the Casimir effect. In 1957, Lee and Yang proved the concepts of broken symmetry and parity violation, for which they won the Nobel prize. In 1973, Edward Tryon proposed the zero-energy universe hypothesis: that the Universe may be a large-scale quantum-mechanical vacuum fluctuation where positive mass–energy is balanced by negative gravitational potential energy. During the 1980s, there were many attempts to relate the fields that generate the vacuum energy to specific fields that were predicted by attempts at a Grand Unified Theory and to use observations of the Universe to confirm one or another version. However, the exact nature of the particles (or fields) that generate vacuum energy, with a density such as that required by inflation theory, remains a mystery. Vacuum energy in fiction Arthur C. Clarke's novel The Songs of Distant Earth features a starship powered by a "quantum drive" based on aspects of this theory. In the sci-fi television/film franchise Stargate, a Zero Point Module (ZPM) is a power source that extracts zero-point energy from a micro parallel universe. The book Star Trek: Deep Space Nine Technical Manual describes the operating principle of the so-called quantum torpedo. In this fictional weapon, an antimatter reaction is used to create a multi-dimensional membrane in a vacuum that releases at its decomposition more energy than was needed to produce it. The missing energy is removed from the vacuum. Usually about twice as much energy is released in the explosion as would correspond to the initial antimatter matter annihilation. In the video game Half-Life 2, the item generally known as the "gravity gun" is referred to as both the "zero point field energy manipulator" and the "zero point energy field manipulator." See also Cosmic microwave background Dark energy False vacuum Normal ordering Quantum fluctuation Sunyaev–Zeldovich effect Vacuum state References External articles and references Free PDF copy of The Structured Vacuum – thinking about nothing by Johann Rafelski and Berndt Muller (1985); . Saunders, S., & Brown, H. R. (1991). The Philosophy of Vacuum. Oxford [England]: Clarendon Press. Poincaré Seminar, Duplantier, B., & Rivasseau, V. (2003). "Poincaré Seminar 2002: vacuum energy-renormalization". Progress in mathematical physics, v. 30. Basel: Birkhäuser Verlag. Futamase & Yoshida Possible measurement of vacuum energy. Study of Vacuum Energy Physics for Breakthrough Propulsion 2004, NASA Glenn Technical Reports Server (PDF, 57 pages, Retrieved 2013-09-18). Dark energy Energy (physics) Physical cosmological concepts Quantum field theory Vacuum
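To give a sense of scale for the Casimir effect discussed above, the following minimal Python sketch evaluates the textbook ideal-parallel-plate result P = π²ħc/(240 a⁴); the plate separations are illustrative assumptions, and real experiments involve corrections for finite conductivity, surface roughness and temperature.

```python
import math

# Ideal parallel-plate Casimir pressure: P = pi**2 * hbar * c / (240 * a**4),
# an attractive pressure between perfectly conducting plates separated by a.
HBAR = 1.055e-34  # reduced Planck constant, J s
C = 2.998e8       # speed of light, m/s

def casimir_pressure_pa(separation_m: float) -> float:
    return math.pi ** 2 * HBAR * C / (240.0 * separation_m ** 4)

if __name__ == "__main__":
    for a_nm in (100.0, 500.0, 1000.0):  # illustrative plate separations
        a = a_nm * 1e-9
        print(f"a = {a_nm:6.1f} nm -> Casimir pressure ≈ {casimir_pressure_pa(a):.3e} Pa")
```

At a separation of one micrometre the predicted pressure is on the order of millipascals, which is why the effect is only measurable with very closely spaced surfaces.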
Geophysics
Geophysics is a subject of natural science concerned with the physical processes and physical properties of the Earth and its surrounding space environment, and the use of quantitative methods for their analysis. Geophysicists, who usually study geophysics, physics, or one of the Earth sciences at the graduate level, complete investigations across a wide range of scientific disciplines. The term geophysics classically refers to solid earth applications: Earth's shape; its gravitational, magnetic fields, and electromagnetic fields ; its internal structure and composition; its dynamics and their surface expression in plate tectonics, the generation of magmas, volcanism and rock formation. However, modern geophysics organizations and pure scientists use a broader definition that includes the water cycle including snow and ice; fluid dynamics of the oceans and the atmosphere; electricity and magnetism in the ionosphere and magnetosphere and solar-terrestrial physics; and analogous problems associated with the Moon and other planets. Although geophysics was only recognized as a separate discipline in the 19th century, its origins date back to ancient times. The first magnetic compasses were made from lodestones, while more modern magnetic compasses played an important role in the history of navigation. The first seismic instrument was built in 132 AD. Isaac Newton applied his theory of mechanics to the tides and the precession of the equinox; and instruments were developed to measure the Earth's shape, density and gravity field, as well as the components of the water cycle. In the 20th century, geophysical methods were developed for remote exploration of the solid Earth and the ocean, and geophysics played an essential role in the development of the theory of plate tectonics. Geophysics is applied to societal needs, such as mineral resources, mitigation of natural hazards and environmental protection. In exploration geophysics, geophysical survey data are used to analyze potential petroleum reservoirs and mineral deposits, locate groundwater, find archaeological relics, determine the thickness of glaciers and soils, and assess sites for environmental remediation. Physical phenomena Geophysics is a highly interdisciplinary subject, and geophysicists contribute to every area of the Earth sciences, while some geophysicists conduct research in the planetary sciences. To provide a more clear idea on what constitutes geophysics, this section describes phenomena that are studied in physics and how they relate to the Earth and its surroundings. Geophysicists also investigate the physical processes and properties of the Earth, its fluid layers, and magnetic field along with the near-Earth environment in the Solar System, which includes other planetary bodies. Gravity The gravitational pull of the Moon and Sun gives rise to two high tides and two low tides every lunar day, or every 24 hours and 50 minutes. Therefore, there is a gap of 12 hours and 25 minutes between every high tide and between every low tide. Gravitational forces make rocks press down on deeper rocks, increasing their density as the depth increases. Measurements of gravitational acceleration and gravitational potential at the Earth's surface and above it can be used to look for mineral deposits (see gravity anomaly and gravimetry). The surface gravitational field provides information on the dynamics of tectonic plates. The geopotential surface called the geoid is one definition of the shape of the Earth. 
The geoid would be the global mean sea level if the oceans were in equilibrium and could be extended through the continents (such as with very narrow canals). Heat flow The Earth is cooling, and the resulting heat flow generates the Earth's magnetic field through the geodynamo and plate tectonics through mantle convection. The main sources of heat are: primordial heat due to Earth's cooling and radioactivity in the planets upper crust. There is also some contributions from phase transitions. Heat is mostly carried to the surface by thermal convection, although there are two thermal boundary layers – the core–mantle boundary and the lithosphere – in which heat is transported by conduction. Some heat is carried up from the bottom of the mantle by mantle plumes. The heat flow at the Earth's surface is about , and it is a potential source of geothermal energy. Vibrations Seismic waves are vibrations that travel through the Earth's interior or along its surface. The entire Earth can also oscillate in forms that are called normal modes or free oscillations of the Earth. Ground motions from waves or normal modes are measured using seismographs. If the waves come from a localized source such as an earthquake or explosion, measurements at more than one location can be used to locate the source. The locations of earthquakes provide information on plate tectonics and mantle convection. Recording of seismic waves from controlled sources provides information on the region that the waves travel through. If the density or composition of the rock changes, waves are reflected. Reflections recorded using Reflection Seismology can provide a wealth of information on the structure of the earth up to several kilometers deep and are used to increase our understanding of the geology as well as to explore for oil and gas. Changes in the travel direction, called refraction, can be used to infer the deep structure of the Earth. Earthquakes pose a risk to humans. Understanding their mechanisms, which depend on the type of earthquake (e.g., intraplate or deep focus), can lead to better estimates of earthquake risk and improvements in earthquake engineering. Electricity Although we mainly notice electricity during thunderstorms, there is always a downward electric field near the surface that averages 120 volts per meter. Relative to the solid Earth, the ionization of the planet's atmosphere is a result of the galactic cosmic rays penetrating it, which leaves it with a net positive charge. A current of about 1800 amperes flows in the global circuit. It flows downward from the ionosphere over most of the Earth and back upwards through thunderstorms. The flow is manifested by lightning below the clouds and sprites above. A variety of electric methods are used in geophysical survey. Some measure spontaneous potential, a potential that arises in the ground because of human-made or natural disturbances. Telluric currents flow in Earth and the oceans. They have two causes: electromagnetic induction by the time-varying, external-origin geomagnetic field and motion of conducting bodies (such as seawater) across the Earth's permanent magnetic field. The distribution of telluric current density can be used to detect variations in electrical resistivity of underground structures. Geophysicists can also provide the electric current themselves (see induced polarization and electrical resistivity tomography). Electromagnetic waves Electromagnetic waves occur in the ionosphere and magnetosphere as well as in Earth's outer core. 
Dawn chorus is believed to be caused by high-energy electrons that get caught in the Van Allen radiation belt. Whistlers are produced by lightning strikes. Hiss may be generated by both. Electromagnetic waves may also be generated by earthquakes (see seismo-electromagnetics). In the highly conductive liquid iron of the outer core, magnetic fields are generated by electric currents through electromagnetic induction. Alfvén waves are magnetohydrodynamic waves in the magnetosphere or the Earth's core. In the core, they probably have little observable effect on the Earth's magnetic field, but slower waves such as magnetic Rossby waves may be one source of geomagnetic secular variation. Electromagnetic methods that are used for geophysical survey include transient electromagnetics, magnetotellurics, surface nuclear magnetic resonance and electromagnetic seabed logging. Magnetism The Earth's magnetic field protects the Earth from the deadly solar wind and has long been used for navigation. It originates in the fluid motions of the outer core. The magnetic field in the upper atmosphere gives rise to the auroras. The Earth's field is roughly like a tilted dipole, but it changes over time (a phenomenon called geomagnetic secular variation). Mostly the geomagnetic pole stays near the geographic pole, but at random intervals averaging 440,000 to a million years or so, the polarity of the Earth's field reverses. These geomagnetic reversals, analyzed within a Geomagnetic Polarity Time Scale, contain 184 polarity intervals in the last 83 million years, with change in frequency over time, with the most recent brief complete reversal of the Laschamp event occurring 41,000 years ago during the last glacial period. Geologists observed geomagnetic reversal recorded in volcanic rocks, through magnetostratigraphy correlation (see natural remanent magnetization) and their signature can be seen as parallel linear magnetic anomaly stripes on the seafloor. These stripes provide quantitative information on seafloor spreading, a part of plate tectonics. They are the basis of magnetostratigraphy, which correlates magnetic reversals with other stratigraphies to construct geologic time scales. In addition, the magnetization in rocks can be used to measure the motion of continents. Radioactivity Radioactive decay accounts for about 80% of the Earth's internal heat, powering the geodynamo and plate tectonics. The main heat-producing isotopes are potassium-40, uranium-238, uranium-235, and thorium-232. Radioactive elements are used for radiometric dating, the primary method for establishing an absolute time scale in geochronology. Unstable isotopes decay at predictable rates, and the decay rates of different isotopes cover several orders of magnitude, so radioactive decay can be used to accurately date both recent events and events in past geologic eras. Radiometric mapping using ground and airborne gamma spectrometry can be used to map the concentration and distribution of radioisotopes near the Earth's surface, which is useful for mapping lithology and alteration. Fluid dynamics Fluid motions occur in the magnetosphere, atmosphere, ocean, mantle and core. Even the mantle, though it has an enormous viscosity, flows like a fluid over long time intervals. This flow is reflected in phenomena such as isostasy, post-glacial rebound and mantle plumes. The mantle flow drives plate tectonics and the flow in the Earth's core drives the geodynamo. 
Geophysical fluid dynamics is a primary tool in physical oceanography and meteorology. The rotation of the Earth has profound effects on the Earth's fluid dynamics, often due to the Coriolis effect. In the atmosphere, it gives rise to large-scale patterns like Rossby waves and determines the basic circulation patterns of storms. In the ocean, they drive large-scale circulation patterns as well as Kelvin waves and Ekman spirals at the ocean surface. In the Earth's core, the circulation of the molten iron is structured by Taylor columns. Waves and other phenomena in the magnetosphere can be modeled using magnetohydrodynamics. Mineral physics The physical properties of minerals must be understood to infer the composition of the Earth's interior from seismology, the geothermal gradient and other sources of information. Mineral physicists study the elastic properties of minerals; their high-pressure phase diagrams, melting points and equations of state at high pressure; and the rheological properties of rocks, or their ability to flow. Deformation of rocks by creep make flow possible, although over short times the rocks are brittle. The viscosity of rocks is affected by temperature and pressure, and in turn, determines the rates at which tectonic plates move. Water is a very complex substance and its unique properties are essential for life. Its physical properties shape the hydrosphere and are an essential part of the water cycle and climate. Its thermodynamic properties determine evaporation and the thermal gradient in the atmosphere. The many types of precipitation involve a complex mixture of processes such as coalescence, supercooling and supersaturation. Some precipitated water becomes groundwater, and groundwater flow includes phenomena such as percolation, while the conductivity of water makes electrical and electromagnetic methods useful for tracking groundwater flow. Physical properties of water such as salinity have a large effect on its motion in the oceans. The many phases of ice form the cryosphere and come in forms like ice sheets, glaciers, sea ice, freshwater ice, snow, and frozen ground (or permafrost). Regions of the Earth Size and form of the Earth Contrary to popular belief, the earth is not entirely spherical but instead generally exhibits an ellipsoid shape- which is a result of the centrifugal forces the planet generates due to its constant motion. These forces cause the planets diameter to bulge towards the Equator and results in the ellipsoid shape. Earth's shape is constantly changing, and different factors including glacial isostatic rebound (large ice sheets melting causing the Earth's crust to the rebound due to the release of the pressure), geological features such as mountains or ocean trenches, tectonic plate dynamics, and natural disasters can further distort the planet's shape. Structure of the interior Evidence from seismology, heat flow at the surface, and mineral physics is combined with the Earth's mass and moment of inertia to infer models of the Earth's interior – its composition, density, temperature, pressure. For example, the Earth's mean specific gravity is far higher than the typical specific gravity of rocks at the surface, implying that the deeper material is denser. This is also implied by its low moment of inertia (, compared to for a sphere of constant density). However, some of the density increase is compression under the enormous pressures inside the Earth. The effect of pressure can be calculated using the Adams–Williamson equation. 
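For reference, the Adams–Williamson equation just mentioned is usually written (in standard textbook notation, which is introduced here rather than taken from this article) as

$$\frac{d\rho}{dr} = -\,\frac{\rho(r)\,g(r)}{\Phi(r)}, \qquad \Phi(r) = \frac{K_S}{\rho} = V_P^2 - \frac{4}{3}V_S^2,$$

where ρ(r) is the density at radius r, g(r) the local gravitational acceleration, K_S the adiabatic bulk modulus, and V_P and V_S the seismic P- and S-wave velocities. It expresses how density would increase with depth under self-compression alone.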
The conclusion is that pressure alone cannot account for the increase in density. Instead, we know that the Earth's core is composed of an alloy of iron and other minerals. Reconstructions of seismic waves in the deep interior of the Earth show that there are no S-waves in the outer core. This indicates that the outer core is liquid, because liquids cannot support shear. The outer core is liquid, and the motion of this highly conductive fluid generates the Earth's field. Earth's inner core, however, is solid because of the enormous pressure. Reconstruction of seismic reflections in the deep interior indicates some major discontinuities in seismic velocities that demarcate the major zones of the Earth: inner core, outer core, mantle, lithosphere and crust. The mantle itself is divided into the upper mantle, transition zone, lower mantle and D′′ layer. Between the crust and the mantle is the Mohorovičić discontinuity. The seismic model of the Earth does not by itself determine the composition of the layers. For a complete model of the Earth, mineral physics is needed to interpret seismic velocities in terms of composition. The mineral properties are temperature-dependent, so the geotherm must also be determined. This requires physical theory for thermal conduction and convection and the heat contribution of radioactive elements. The main model for the radial structure of the interior of the Earth is the preliminary reference Earth model (PREM). Some parts of this model have been updated by recent findings in mineral physics (see post-perovskite) and supplemented by seismic tomography. The mantle is mainly composed of silicates, and the boundaries between layers of the mantle are consistent with phase transitions. The mantle acts as a solid for seismic waves, but under high pressures and temperatures, it deforms so that over millions of years it acts like a liquid. This makes plate tectonics possible. Magnetosphere If a planet's magnetic field is strong enough, its interaction with the solar wind forms a magnetosphere. Early space probes mapped out the gross dimensions of the Earth's magnetic field, which extends about 10 Earth radii towards the Sun. The solar wind, a stream of charged particles, streams out and around the terrestrial magnetic field, and continues behind the magnetic tail, hundreds of Earth radii downstream. Inside the magnetosphere, there are relatively dense regions of solar wind particles called the Van Allen radiation belts. Methods Geodesy Geophysical measurements are generally at a particular time and place. Accurate measurements of position, along with earth deformation and gravity, are the province of geodesy. While geodesy and geophysics are separate fields, the two are so closely connected that many scientific organizations such as the American Geophysical Union, the Canadian Geophysical Union and the International Union of Geodesy and Geophysics encompass both. Absolute positions are most frequently determined using the global positioning system (GPS). A three-dimensional position is calculated using messages from four or more visible satellites and referred to the 1980 Geodetic Reference System. An alternative, optical astronomy, combines astronomical coordinates and the local gravity vector to get geodetic coordinates. This method only provides the position in two coordinates and is more difficult to use than GPS. However, it is useful for measuring motions of the Earth such as nutation and Chandler wobble. 
Relative positions of two or more points can be determined using very-long-baseline interferometry. Gravity measurements became part of geodesy because they were needed to relate measurements at the surface of the Earth to the reference coordinate system. Gravity measurements on land can be made using gravimeters deployed either on the surface or in helicopter flyovers. Since the 1960s, the Earth's gravity field has been measured by analyzing the motion of satellites. Sea level can also be measured by satellites using radar altimetry, contributing to a more accurate geoid. In 2002, NASA launched the Gravity Recovery and Climate Experiment (GRACE), wherein twin satellites map variations in Earth's gravity field by making measurements of the distance between the two satellites using GPS and a microwave ranging system. Gravity variations detected by GRACE include those caused by changes in ocean currents, runoff and groundwater depletion, and melting ice sheets and glaciers. Satellites and space probes Satellites in space have made it possible to collect data not only from the visible-light region, but also from other areas of the electromagnetic spectrum. The planets can be characterized by their force fields: gravity and their magnetic fields, which are studied through geophysics and space physics. Measuring the changes in acceleration experienced by spacecraft as they orbit has allowed fine details of the gravity fields of the planets to be mapped. For example, in the 1970s, the gravity field disturbances above lunar maria were measured through lunar orbiters, which led to the discovery of concentrations of mass, mascons, beneath the Imbrium, Serenitatis, Crisium, Nectaris and Humorum basins. Global positioning systems (GPS) and geographical information systems (GIS) Since geophysics is concerned with the shape of the Earth, and by extension the mapping of features around and in the planet, geophysical measurements include high-accuracy GPS measurements. These measurements are processed to increase their accuracy through differential GPS processing. Once the geophysical measurements have been processed and inverted, the interpreted results are plotted using GIS. Programs such as ArcGIS and Geosoft were built to meet these needs and include many built-in geophysical functions, such as upward continuation and the calculation of measurement derivatives such as the first vertical derivative (a minimal numerical sketch of upward continuation is given at the end of this article). Many geophysics companies have designed in-house geophysics programs that pre-date ArcGIS and Geosoft in order to meet the visualization requirements of a geophysical dataset. Remote sensing Exploration geophysics is a branch of applied geophysics that involves the development and utilization of different seismic or electromagnetic methods with the aim of investigating different energy, mineral and water resources. This is done through the use of various remote sensing platforms such as satellites, aircraft, boats, drones, borehole sensing equipment and seismic receivers. This equipment is often used in conjunction with different geophysical methods, such as magnetic, gravimetric, electromagnetic, radiometric and barometric methods, in order to gather the data. The remote sensing platforms used in exploration geophysics are not perfect and need adjustments in order to accurately account for the effects that the platform itself may have on the collected data.
For example, when gathering aeromagnetic data (aircraft-gathered magnetic data) using a conventional fixed-wing aircraft, the platform has to be adjusted to account for the electromagnetic currents that it may generate as it passes through the Earth's magnetic field. There are also corrections related to changes in measured potential field intensity as the Earth rotates, as the Earth orbits the Sun, and as the Moon orbits the Earth. Signal processing Geophysical measurements are often recorded as time series with GPS locations. Signal processing involves the correction of time-series data for unwanted noise or errors introduced by the measurement platform, such as aircraft vibrations in gravity data. It also involves the reduction of sources of noise, such as applying diurnal corrections to magnetic data. In seismic data, electromagnetic data, and gravity data, processing continues after error corrections to include computational geophysics, which results in the final interpretation of the geophysical data in terms of geology. History Geophysics emerged as a separate discipline only in the 19th century, from the intersection of physical geography, geology, astronomy, meteorology, and physics. The first known use of the word geophysics was in German ("Geophysik") by Julius Fröbel in 1834. However, many geophysical phenomena – such as the Earth's magnetic field and earthquakes – have been investigated since the ancient era. Ancient and classical eras The magnetic compass existed in China as far back as the fourth century BC. It was used as much for feng shui as for navigation on land. It was not until good steel needles could be forged that compasses were used for navigation at sea; before that, they could not retain their magnetism long enough to be useful. The first mention of a compass in Europe was in 1190 AD.
In it, Newton both laid the foundations for classical mechanics and gravitation, as well as explained different geophysical phenomena such as the precession of the equinox (the orbit of whole star patterns along an ecliptic axis. Newton's theory of gravity had gained so much success, that it resulted in changing the main objective of physics in that era to unravel natures fundamental forces, and their characterizations in laws. The first seismometer, an instrument capable of keeping a continuous record of seismic activity, was built by James Forbes in 1844. See also International Union of Geodesy and Geophysics (IUGG) Sociedade Brasileira de Geofísica Geological Engineering Physics Space physics Geosciences Geodesy Notes References External links A reference manual for near-surface geophysics techniques and applications Commission on Geophysical Risk and Sustainability (GeoRisk), International Union of Geodesy and Geophysics (IUGG) Study of the Earth's Deep Interior, a Committee of IUGG Union Commissions (IUGG) USGS Geomagnetism Program Career crate: Seismic processor Society of Exploration Geophysicists Earth sciences Subfields of geology Applied and interdisciplinary physics
Paramagnetism
Paramagnetism is a form of magnetism whereby some materials are weakly attracted by an externally applied magnetic field, and form internal, induced magnetic fields in the direction of the applied magnetic field. In contrast with this behavior, diamagnetic materials are repelled by magnetic fields and form induced magnetic fields in the direction opposite to that of the applied magnetic field. Paramagnetic materials include most chemical elements and some compounds; they have a relative magnetic permeability slightly greater than 1 (i.e., a small positive magnetic susceptibility) and hence are attracted to magnetic fields. The magnetic moment induced by the applied field is linear in the field strength and rather weak. It typically requires a sensitive analytical balance to detect the effect and modern measurements on paramagnetic materials are often conducted with a SQUID magnetometer. Paramagnetism is due to the presence of unpaired electrons in the material, so most atoms with incompletely filled atomic orbitals are paramagnetic, although exceptions such as copper exist. Due to their spin, unpaired electrons have a magnetic dipole moment and act like tiny magnets. An external magnetic field causes the electrons' spins to align parallel to the field, causing a net attraction. Paramagnetic materials include aluminium, oxygen, titanium, and iron oxide (FeO). Therefore, a simple rule of thumb is used in chemistry to determine whether a particle (atom, ion, or molecule) is paramagnetic or diamagnetic: if all electrons in the particle are paired, then the substance made of this particle is diamagnetic; if it has unpaired electrons, then the substance is paramagnetic. Unlike ferromagnets, paramagnets do not retain any magnetization in the absence of an externally applied magnetic field because thermal motion randomizes the spin orientations. (Some paramagnetic materials retain spin disorder even at absolute zero, meaning they are paramagnetic in the ground state, i.e. in the absence of thermal motion.) Thus the total magnetization drops to zero when the applied field is removed. Even in the presence of the field there is only a small induced magnetization because only a small fraction of the spins will be oriented by the field. This fraction is proportional to the field strength and this explains the linear dependency. The attraction experienced by ferromagnetic materials is non-linear and much stronger, so that it is easily observed, for instance, in the attraction between a refrigerator magnet and the iron of the refrigerator itself. Relation to electron spins Constituent atoms or molecules of paramagnetic materials have permanent magnetic moments (dipoles), even in the absence of an applied field. The permanent moment generally is due to the spin of unpaired electrons in atomic or molecular electron orbitals (see Magnetic moment). In pure paramagnetism, the dipoles do not interact with one another and are randomly oriented in the absence of an external field due to thermal agitation, resulting in zero net magnetic moment. When a magnetic field is applied, the dipoles will tend to align with the applied field, resulting in a net magnetic moment in the direction of the applied field. In the classical description, this alignment can be understood to occur due to a torque being provided on the magnetic moments by an applied field, which tries to align the dipoles parallel to the applied field. 
However, the true origins of the alignment can only be understood via the quantum-mechanical properties of spin and angular momentum. If there is sufficient energy exchange between neighbouring dipoles, they will interact, and may spontaneously align or anti-align and form magnetic domains, resulting in ferromagnetism (permanent magnets) or antiferromagnetism, respectively. Paramagnetic behavior can also be observed in ferromagnetic materials that are above their Curie temperature, and in antiferromagnets above their Néel temperature. At these temperatures, the available thermal energy simply overcomes the interaction energy between the spins. In general, paramagnetic effects are quite small: the magnetic susceptibility is of the order of 10−3 to 10−5 for most paramagnets, but may be as high as 10−1 for synthetic paramagnets such as ferrofluids. Delocalization In conductive materials, the electrons are delocalized, that is, they travel through the solid more or less as free electrons. Conductivity can be understood in a band structure picture as arising from the incomplete filling of energy bands. In an ordinary nonmagnetic conductor the conduction band is identical for both spin-up and spin-down electrons. When a magnetic field is applied, the conduction band splits apart into a spin-up and a spin-down band due to the difference in magnetic potential energy for spin-up and spin-down electrons. Since the Fermi level must be identical for both bands, this means that there will be a small surplus of the type of spin in the band that moved downwards. This effect is a weak form of paramagnetism known as Pauli paramagnetism. The effect always competes with a diamagnetic response of opposite sign due to all the core electrons of the atoms. Stronger forms of magnetism usually require localized rather than itinerant electrons. However, in some cases a band structure can result in which there are two delocalized sub-bands with states of opposite spins that have different energies. If one subband is preferentially filled over the other, one can have itinerant ferromagnetic order. This situation usually only occurs in relatively narrow (d-)bands, which are poorly delocalized. s and p electrons Generally, strong delocalization in a solid due to large overlap with neighboring wave functions means that there will be a large Fermi velocity; this means that the number of electrons in a band is less sensitive to shifts in that band's energy, implying a weak magnetism. This is why s- and p-type metals are typically either Pauli-paramagnetic or, as in the case of gold, even diamagnetic. In the latter case the diamagnetic contribution from the closed shell inner electrons simply wins over the weak paramagnetic term of the almost free electrons. d and f electrons Stronger magnetic effects are typically only observed when d or f electrons are involved. In particular, the latter are usually strongly localized. Moreover, the size of the magnetic moment on a lanthanide atom can be quite large as it can carry up to 7 unpaired electrons in the case of gadolinium(III) (hence its use in MRI). The high magnetic moments associated with lanthanides are one reason why superstrong magnets are typically based on elements like neodymium or samarium. Molecular localization The above picture is a generalization as it pertains to materials with an extended lattice rather than a molecular structure. Molecular structure can also lead to localization of electrons. 
Although there are usually energetic reasons why a molecular structure does not exhibit partly filled orbitals (i.e. unpaired spins), some non-closed shell moieties do occur in nature. Molecular oxygen is a good example. Even in the frozen solid it contains di-radical molecules resulting in paramagnetic behavior. The unpaired spins reside in orbitals derived from oxygen p wave functions, but the overlap is limited to the one neighbor in the O2 molecules. The distances to other oxygen atoms in the lattice remain too large to lead to delocalization and the magnetic moments remain unpaired. Theory The Bohr–Van Leeuwen theorem proves that there cannot be any diamagnetism or paramagnetism in a purely classical system. The paramagnetic response has then two possible quantum origins, either coming from permanent magnetic moments of the ions or from the spatial motion of the conduction electrons inside the material. Both descriptions are given below. Curie's law For low levels of magnetization, the magnetization of paramagnets follows what is known as Curie's law, at least approximately. This law indicates that the susceptibility, χ, of paramagnetic materials is inversely proportional to their temperature, i.e. that materials become more magnetic at lower temperatures. The mathematical expression is: M = χH = (C/T)H, where: M is the resulting magnetization, measured in amperes/meter (A/m), χ is the volume magnetic susceptibility (dimensionless), H is the auxiliary magnetic field (A/m), T is absolute temperature, measured in kelvins (K), and C is a material-specific Curie constant (K). Curie's law is valid under the commonly encountered conditions of low magnetization (μBH ≲ kBT), but does not apply in the high-field/low-temperature regime where saturation of magnetization occurs (μBH ≳ kBT) and magnetic dipoles are all aligned with the applied field. When the dipoles are aligned, increasing the external field will not increase the total magnetization since there can be no further alignment. For a paramagnetic ion with noninteracting magnetic moments with angular momentum J, the Curie constant is related to the individual ions' magnetic moments by C = μ0nμeff²/(3kB), where n is the number of atoms per unit volume. The parameter μeff is interpreted as the effective magnetic moment per paramagnetic ion. If one uses a classical treatment with molecular magnetic moments represented as discrete magnetic dipoles, μ, a Curie Law expression of the same form will emerge with μ appearing in place of μeff. When orbital angular momentum contributions to the magnetic moment are small, as occurs for most organic radicals or for octahedral transition metal complexes with d3 or high-spin d5 configurations, the effective magnetic moment takes the form μeff ≈ √(Nu(Nu + 2)) μB (with g-factor ge = 2.0023... ≈ 2), where Nu is the number of unpaired electrons. In other transition metal complexes this yields a useful, if somewhat cruder, estimate. When the Curie constant is null, second order effects that couple the ground state with the excited states can also lead to a paramagnetic susceptibility independent of the temperature, known as Van Vleck susceptibility. Pauli paramagnetism For some alkali metals and noble metals, conduction electrons are weakly interacting and delocalized in space, forming a Fermi gas. For these materials one contribution to the magnetic response comes from the interaction between the electron spins and the magnetic field known as Pauli paramagnetism. 
For a small magnetic field , the additional energy per electron from the interaction between an electron spin and the magnetic field is given by: where is the vacuum permeability, is the electron magnetic moment, is the Bohr magneton, is the reduced Planck constant, and the g-factor cancels with the spin . The ± indicates that the sign is positive (negative) when the electron spin component in the direction of is parallel (antiparallel) to the magnetic field. For low temperatures with respect to the Fermi temperature (around 10,000 kelvins for metals), the number density of electrons pointing parallel (antiparallel) to the magnetic field can be written as: with the total free-electrons density and the electronic density of states (number of states per energy per volume) at the Fermi energy . In this approximation the magnetization is given as the magnetic moment of one electron times the difference in densities: which yields a positive paramagnetic susceptibility independent of temperature: χPauli = μ0μB²g(EF). The Pauli paramagnetic susceptibility is a macroscopic effect and has to be contrasted with Landau diamagnetic susceptibility which is equal to minus one third of Pauli's and also comes from delocalized electrons. The Pauli susceptibility comes from the spin interaction with the magnetic field while the Landau susceptibility comes from the spatial motion of the electrons and it is independent of the spin. In doped semiconductors the ratio between Landau's and Pauli's susceptibilities changes as the effective mass of the charge carriers can differ from the electron mass . The magnetic response calculated for a gas of electrons is not the full picture as the magnetic susceptibility coming from the ions has to be included. Additionally, these formulas may break down for confined systems that differ from the bulk, like quantum dots, or for high fields, as demonstrated in the De Haas–Van Alphen effect. Pauli paramagnetism is named after the physicist Wolfgang Pauli. Before Pauli's theory, the lack of a strong Curie paramagnetism in metals was an open problem as the leading Drude model could not account for this contribution without the use of quantum statistics. Pauli paramagnetism and Landau diamagnetism are essentially applications of the spin and the free electron model: the first is due to the intrinsic spin of electrons; the second is due to their orbital motion. Examples of paramagnets Materials that are called "paramagnets" are most often those that exhibit, at least over an appreciable temperature range, magnetic susceptibilities that adhere to the Curie or Curie–Weiss laws. In principle any system that contains atoms, ions, or molecules with unpaired spins can be called a paramagnet, but the interactions between them need to be carefully considered. Systems with minimal interactions The narrowest definition would be: a system with unpaired spins that do not interact with each other. In this narrowest sense, the only pure paramagnet is a dilute gas of monatomic hydrogen atoms. Each atom has one non-interacting unpaired electron. A gas of lithium atoms already possesses two paired core electrons per atom, which produce a diamagnetic response of opposite sign. Strictly speaking, Li is therefore a mixed system, although admittedly the diamagnetic component is weak and often neglected. In the case of heavier elements the diamagnetic contribution becomes more important and in the case of metallic gold it dominates the properties. 
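To make the temperature dependence concrete, the following minimal Python sketch evaluates the spin-only effective moment and the Curie-law volume susceptibility for a dilute system of non-interacting spin-1/2 centres, in the spirit of the "pure paramagnet" just described. The particle density of 1e28 per cubic metre and the chosen temperatures are illustrative assumptions, not values taken from this article.

```python
import numpy as np

# Physical constants (SI)
MU_0 = 4e-7 * np.pi        # vacuum permeability, T*m/A
MU_B = 9.2740100783e-24    # Bohr magneton, J/T
K_B = 1.380649e-23         # Boltzmann constant, J/K

def spin_only_moment(n_unpaired):
    """Spin-only effective moment in units of the Bohr magneton: sqrt(N_u * (N_u + 2))."""
    return np.sqrt(n_unpaired * (n_unpaired + 2))

def curie_susceptibility(n_per_m3, mu_eff_in_mu_B, temperature):
    """Dimensionless volume susceptibility chi = mu_0 * n * mu_eff^2 / (3 * k_B * T)."""
    mu_eff = mu_eff_in_mu_B * MU_B
    return MU_0 * n_per_m3 * mu_eff**2 / (3 * K_B * temperature)

mu_eff = spin_only_moment(1)       # one unpaired electron -> ~1.73 Bohr magnetons
for T in (300.0, 77.0, 4.2):       # room temperature, liquid nitrogen, liquid helium
    chi = curie_susceptibility(1e28, mu_eff, T)
    print(f"T = {T:6.1f} K   chi ~ {chi:.2e}")
```

At 300 K this gives a susceptibility of order 10−4, consistent with the order-of-magnitude range quoted earlier, and lowering the temperature raises the susceptibility in proportion to 1/T, as Curie's law requires.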
The element hydrogen is virtually never called 'paramagnetic' because the monatomic gas is stable only at extremely high temperature; H atoms combine to form molecular H2 and in so doing, the magnetic moments are lost (quenched), because the spins pair. Hydrogen is therefore diamagnetic and the same holds true for many other elements. Although the electronic configuration of the individual atoms (and ions) of most elements contains unpaired spins, they are not necessarily paramagnetic, because at ambient temperature quenching is very much the rule rather than the exception. The quenching tendency is weakest for f-electrons because f (especially 4f) orbitals are radially contracted and they overlap only weakly with orbitals on adjacent atoms. Consequently, the lanthanide elements with incompletely filled 4f-orbitals are paramagnetic or magnetically ordered. Thus, condensed phase paramagnets are only possible if the interactions of the spins that lead either to quenching or to ordering are kept at bay by structural isolation of the magnetic centers. There are two classes of materials for which this holds: Molecular materials with an (isolated) paramagnetic center. Good examples are coordination complexes of d- or f-metals or proteins with such centers, e.g. myoglobin. In such materials the organic part of the molecule acts as an envelope shielding the spins from their neighbors. Small molecules can be stable in radical form; oxygen O2 is a good example. Such systems are quite rare because they tend to be rather reactive. Dilute systems. Dissolving a paramagnetic species in a diamagnetic lattice at small concentrations, e.g. Nd3+ in CaCl2, will separate the neodymium ions at large enough distances that they do not interact. Such systems are of prime importance for what can be considered the most sensitive method to study paramagnetic systems: EPR. Systems with interactions As stated above, many materials that contain d- or f-elements do retain unquenched spins. Salts of such elements often show paramagnetic behavior but at low enough temperatures the magnetic moments may order. It is not uncommon to call such materials 'paramagnets' when referring to their paramagnetic behavior above their Curie or Néel-points, particularly if such temperatures are very low or have never been properly measured. Even for iron it is not uncommon to say that iron becomes a paramagnet above its relatively high Curie-point. In that case the Curie-point is seen as a phase transition between a ferromagnet and a 'paramagnet'. The word paramagnet now merely refers to the linear response of the system to an applied field, the temperature dependence of which requires an amended version of Curie's law, known as the Curie–Weiss law: χ = C/(T − θ). This amended law includes a term θ that describes the exchange interaction that is present albeit overcome by thermal motion. The sign of θ depends on whether ferro- or antiferromagnetic interactions dominate and it is seldom exactly zero, except in the dilute, isolated cases mentioned above. Obviously, the paramagnetic Curie–Weiss description above TN or TC is a rather different interpretation of the word "paramagnet" as it does not imply the absence of interactions, but rather that the magnetic structure is random in the absence of an external field at these sufficiently high temperatures. Even if θ is close to zero this does not mean that there are no interactions, just that the aligning ferro- and the anti-aligning antiferromagnetic ones cancel. 
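In practice the Curie–Weiss form is most often used by plotting the inverse susceptibility against temperature: well above any ordering temperature, 1/χ is a straight line with slope 1/C and x-intercept θ. The short Python sketch below generates synthetic data with made-up values of C and θ (purely illustrative assumptions) and recovers them from such a linear fit.

```python
import numpy as np

def curie_weiss_chi(T, C, theta):
    """Curie-Weiss susceptibility chi = C / (T - theta); meaningful only well above the ordering temperature."""
    return C / (T - theta)

# Synthetic "measurement": C and theta are illustrative, not taken from any real material.
C_true, theta_true = 1.5, 25.0           # Curie constant and Weiss temperature (both in K here)
T = np.linspace(100.0, 300.0, 21)        # temperatures well above theta_true
chi = curie_weiss_chi(T, C_true, theta_true)

# 1/chi = T/C - theta/C, so a straight-line fit returns both parameters.
slope, intercept = np.polyfit(T, 1.0 / chi, 1)
print("C recovered:    ", 1.0 / slope)            # ~1.5
print("theta recovered:", -intercept / slope)     # ~25.0 (positive -> ferromagnetic interactions dominate)
```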
An additional complication is that the interactions are often different in different directions of the crystalline lattice (anisotropy), leading to complicated magnetic structures once ordered. Randomness of the structure also applies to the many metals that show a net paramagnetic response over a broad temperature range. They do not follow a Curie-type law as a function of temperature, however; often they are more or less temperature independent. This type of behavior is of an itinerant nature and is better called Pauli paramagnetism, but it is not unusual to see, for example, the metal aluminium called a "paramagnet", even though interactions are strong enough to give this element very good electrical conductivity. Superparamagnets Some materials show induced magnetic behavior that follows a Curie type law but with exceptionally large values for the Curie constants. These materials are known as superparamagnets. They are characterized by a strong ferromagnetic or ferrimagnetic type of coupling into domains of a limited size that behave independently from one another. The bulk properties of such a system resemble those of a paramagnet, but on a microscopic level they are ordered. The materials do show an ordering temperature above which the behavior reverts to ordinary paramagnetism (with interaction). Ferrofluids are a good example, but the phenomenon can also occur inside solids, e.g., when dilute paramagnetic centers are introduced in a strong itinerant medium of ferromagnetic coupling such as when Fe is substituted in TlCu2Se2 or the alloy AuFe. Such systems contain ferromagnetically coupled clusters that freeze out at lower temperatures. They are also called mictomagnets. See also Magnetochemistry References Further reading The Feynman Lectures on Physics Vol. II, Ch. 35: "Paramagnetism and Magnetic Resonance", https://feynmanlectures.caltech.edu/II_35.html Charles Kittel, Introduction to Solid State Physics (Wiley: New York, 1996). John David Jackson, Classical Electrodynamics (Wiley: New York, 1999). External links "Magnetism: Models and Mechanisms" in E. Pavarini, E. Koch, and U. Schollwöck: Emergent Phenomena in Correlated Matter, Jülich, 2013. Electric and magnetic fields in matter Magnetism Physical phenomena Quantum phases
Chilton and Colburn J-factor analogy
Chilton–Colburn J-factor analogy (also known as the modified Reynolds analogy) is a successful and widely used analogy between heat, momentum, and mass transfer. The basic mechanisms and mathematics of heat, mass, and momentum transport are essentially the same. Among many analogies (like Reynolds analogy, Prandtl–Taylor analogy) developed to directly relate heat transfer coefficients, mass transfer coefficients and friction factors, Chilton and Colburn J-factor analogy proved to be the most accurate. It is written as follows: J_H = St·Pr^(2/3) = f/2 = J_M = St_m·Sc^(2/3), where St = h/(ρ·cp·v) is the Stanton number for heat transfer, St_m = k_c/v is the corresponding Stanton number for mass transfer, Pr is the Prandtl number, Sc is the Schmidt number, and f is the Fanning friction factor. This equation permits the prediction of an unknown transfer coefficient when one of the other coefficients is known, as illustrated in the sketch at the end of this entry. The analogy is valid for fully developed turbulent flow in conduits with Re > 10000, 0.7 < Pr < 160, and tubes where L/d > 60 (the same constraints as the Sieder–Tate correlation). The wider range of data can be correlated by Friend–Metzner analogy. Relationship between Heat and Mass See also Reynolds analogy Thomas H. Chilton References Geankoplis, C.J. Transport processes and separation process principles (2003). Fourth Edition, p. 475. External links Lecture notes on mass transfer coefficients: http://facstaff.cbu.edu/rprice/lectures/mtcoeff.html Transport phenomena Analogy
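As a brief, hedged illustration of how the analogy described above is applied in practice, the Python sketch below estimates a heat transfer coefficient and a mass transfer coefficient from an assumed Fanning friction factor. All property values are illustrative assumptions (roughly air-like), not data from the cited reference.

```python
def h_from_friction(f_fanning, rho, cp, v, Pr):
    """Heat transfer coefficient from J_H = St * Pr**(2/3) = f/2, with St = h / (rho * cp * v)."""
    stanton = (f_fanning / 2.0) * Pr ** (-2.0 / 3.0)
    return stanton * rho * cp * v          # W/(m^2 K)

def kc_from_friction(f_fanning, v, Sc):
    """Mass transfer coefficient from J_M = St_m * Sc**(2/3) = f/2, with St_m = k_c / v."""
    return (f_fanning / 2.0) * Sc ** (-2.0 / 3.0) * v   # m/s

# Illustrative turbulent pipe-flow numbers (assumptions, not measured data)
f = 0.005                          # Fanning friction factor
rho, cp, v = 1.2, 1005.0, 10.0     # kg/m^3, J/(kg K), m/s
print("h   ~", round(h_from_friction(f, rho, cp, v, Pr=0.7), 1), "W/(m^2 K)")
print("k_c ~", round(kc_from_friction(f, v, Sc=0.6), 4), "m/s")
```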
Crystal momentum
In solid-state physics, crystal momentum or quasimomentum is a momentum-like vector associated with electrons in a crystal lattice. It is defined by the associated wave vectors of this lattice, according to p = ħk (where ħ is the reduced Planck constant and k is the wave vector). Frequently, crystal momentum is conserved like mechanical momentum, making it useful to physicists and materials scientists as an analytical tool. Lattice symmetry origins A common method of modeling crystal structure and behavior is to view electrons as quantum mechanical particles traveling through a fixed infinite periodic potential V(r) such that V(r + a) = V(r), where a is an arbitrary lattice vector. Such a model is sensible because crystal ions that form the lattice structure are typically on the order of tens of thousands of times more massive than electrons, making it safe to replace them with a fixed potential structure, and the macroscopic dimensions of a crystal are typically far greater than a single lattice spacing, making edge effects negligible. A consequence of this potential energy function is that it is possible to shift the initial position of an electron by any lattice vector without changing any aspect of the problem, thereby defining a discrete symmetry. Technically, an infinite periodic potential implies that the lattice translation operator commutes with the Hamiltonian, assuming a simple kinetic-plus-potential form. These conditions imply Bloch's theorem, which states ψ(r) = e^(ik·r)u(r), or that an electron in a lattice, which can be modeled as a single particle wave function ψ(r), finds its stationary state solutions in the form of a plane wave e^(ik·r) multiplied by a periodic function u(r). The theorem arises as a direct consequence of the aforementioned fact that the lattice symmetry translation operator commutes with the system's Hamiltonian. One of the notable aspects of Bloch's theorem is that it shows directly that steady state solutions may be identified with a wave vector k, meaning that this quantum number remains a constant of motion. Crystal momentum is then conventionally defined by multiplying this wave vector by the reduced Planck constant: p_crystal = ħk. While this is in fact identical to the definition one might give for regular momentum (for example, by treating the effects of the translation operator by the effects of a particle in free space), there are important theoretical differences. For example, while regular momentum is completely conserved, crystal momentum is only conserved to within a lattice vector. For example, an electron can be described not only by the wave vector k, but also with any other wave vector k′ such that k′ = k + K, where K is an arbitrary reciprocal lattice vector. This is a consequence of the fact that the lattice symmetry is discrete as opposed to continuous, and thus its associated conservation law cannot be derived using Noether's theorem. Physical significance The phase modulation of the Bloch state is the same as that of a free particle with momentum ħk, i.e. k gives the state's periodicity, which is not the same as that of the lattice. This modulation contributes to the kinetic energy of the particle (whereas for a free particle the modulation is entirely responsible for the kinetic energy). In regions where the band is approximately parabolic the crystal momentum is equal to the momentum of a free particle with momentum ħk if we assign the particle an effective mass that is related to the curvature of the parabola. 
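Because crystal momentum is defined only modulo a reciprocal lattice vector, any wave vector can be folded back into the first Brillouin zone. The short Python sketch below does this for a one-dimensional lattice with an illustrative (assumed) lattice constant and prints the associated crystal momentum ħk.

```python
import numpy as np

HBAR = 1.054571817e-34   # reduced Planck constant, J*s

def reduce_to_first_bz(k, a):
    """Fold a 1D wave vector into the first Brillouin zone [-pi/a, pi/a) by
    subtracting the appropriate reciprocal lattice vector G = 2*pi/a."""
    G = 2.0 * np.pi / a
    return (k + G / 2.0) % G - G / 2.0

a = 5.0e-10                     # illustrative lattice constant, m
k = 1.8 * np.pi / a             # a wave vector lying outside the first zone
k_red = reduce_to_first_bz(k, a)

print("equivalent reduced wave vector:", k_red, "1/m")     # equals k - 2*pi/a = -0.2*pi/a
print("crystal momentum hbar*k:", HBAR * k_red, "kg m/s")
```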
Relation to velocity Crystal momentum corresponds to the physically measurable concept of velocity according to v(k) = (1/ħ)∇kE(k). This is the same formula as the group velocity of a wave. More specifically, due to the Heisenberg uncertainty principle, an electron in a crystal cannot have both an exactly-defined k and an exact position in the crystal. It can, however, form a wave packet centered on momentum k (with slight uncertainty), and centered on a certain position (with slight uncertainty). The center position of this wave packet changes as the wave propagates, moving through the crystal at the velocity v given by the formula above. In a real crystal, an electron moves in this way—traveling in a certain direction at a certain speed—for only a short period of time, before colliding with an imperfection in the crystal that causes it to move in a different, random direction. These collisions, called electron scattering, are most commonly caused by crystallographic defects, the crystal surface, and random thermal vibrations of the atoms in the crystal (phonons). Response to electric and magnetic fields Crystal momentum also plays a seminal role in the semiclassical model of electron dynamics, where it follows from the acceleration theorem that it obeys the equations of motion (in cgs units): ħ dk/dt = −e(E + (1/c)v × B). Here perhaps the analogy between crystal momentum and true momentum is at its most powerful, for these are precisely the equations that a free space electron obeys in the absence of any crystal structure. Crystal momentum also earns its chance to shine in these types of calculations, for, in order to calculate an electron's trajectory of motion using the above equations, one need only consider external fields, while attempting the calculation from a set of equations of motion based on true momentum would require taking into account individual Coulomb and Lorentz forces of every single lattice ion in addition to the external field. Applications Angle-resolved photo-emission spectroscopy (ARPES) In angle-resolved photo-emission spectroscopy (ARPES), irradiating light on a crystal sample results in the ejection of an electron away from the crystal. Throughout the course of the interaction, one is allowed to conflate the two concepts of crystal and true momentum and thereby gain direct knowledge of a crystal's band structure. That is to say, an electron's crystal momentum inside the crystal becomes its true momentum after it leaves, and the true momentum may be subsequently inferred from the equation p∥ = ħk∥ = √(2mEkin)·sin θ by measuring the angle θ and kinetic energy Ekin at which the electron exits the crystal, where m is a single electron's mass. Because crystal symmetry in the direction normal to the crystal surface is lost at the crystal boundary, crystal momentum in this direction is not conserved. Consequently, the only directions in which useful ARPES data can be gleaned are directions parallel to the crystal surface. References Electronic band structures Moment (physics) Momentum
Planck's law
In physics, Planck's law (also Planck radiation law) describes the spectral density of electromagnetic radiation emitted by a black body in thermal equilibrium at a given temperature T, when there is no net flow of matter or energy between the body and its environment. At the end of the 19th century, physicists were unable to explain why the observed spectrum of black-body radiation, which by then had been accurately measured, diverged significantly at higher frequencies from that predicted by existing theories. In 1900, German physicist Max Planck heuristically derived a formula for the observed spectrum by assuming that a hypothetical electrically charged oscillator in a cavity that contained black-body radiation could only change its energy in a minimal increment, E, that was proportional to the frequency of its associated electromagnetic wave. While Planck originally regarded the hypothesis of dividing energy into increments as a mathematical artifice, introduced merely to get the correct answer, other physicists including Albert Einstein built on his work, and Planck's insight is now recognized to be of fundamental importance to quantum theory. The law Every physical body spontaneously and continuously emits electromagnetic radiation and the spectral radiance of a body, Bν, describes the spectral emissive power per unit area, per unit solid angle and per unit frequency for particular radiation frequencies. The relationship given by Planck's radiation law, given below, shows that with increasing temperature, the total radiated energy of a body increases and the peak of the emitted spectrum shifts to shorter wavelengths. According to Planck's distribution law, the spectral energy density (energy per unit volume per unit frequency) at given temperature T is given by: uν(ν, T) = (8πhν³/c³)·1/(e^(hν/kBT) − 1); alternatively, the law can be expressed for the spectral radiance of a body for frequency ν at absolute temperature T, given as: Bν(ν, T) = (2hν³/c²)·1/(e^(hν/kBT) − 1), where kB is the Boltzmann constant, h is the Planck constant, and c is the speed of light in the medium, whether material or vacuum. The cgs units of spectral radiance are erg·s−1·sr−1·cm−2·Hz−1. The terms Bν and uν are related to each other by a factor of 4π/c, since Bν is independent of direction and radiation travels at speed c. The spectral radiance can also be expressed per unit wavelength instead of per unit frequency. In addition, the law may be expressed in other terms, such as the number of photons emitted at a certain wavelength, or the energy density in a volume of radiation. In the limit of low frequencies (i.e. long wavelengths), Planck's law tends to the Rayleigh–Jeans law, while in the limit of high frequencies (i.e. small wavelengths) it tends to the Wien approximation. Max Planck developed the law in 1900 with only empirically determined constants, and later showed that, expressed as an energy distribution, it is the unique stable distribution for radiation in thermodynamic equilibrium. As an energy distribution, it is one of a family of thermal equilibrium distributions which include the Bose–Einstein distribution, the Fermi–Dirac distribution and the Maxwell–Boltzmann distribution. Black-body radiation A black-body is an idealised object which absorbs and emits all radiation frequencies. Near thermodynamic equilibrium, the emitted radiation is closely described by Planck's law and because of its dependence on temperature, Planck radiation is said to be thermal radiation, such that the higher the temperature of a body the more radiation it emits at every wavelength. 
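The following minimal Python sketch evaluates the frequency and wavelength forms of the law stated above at an assumed, illustrative temperature; the function and variable names are mine, not standard library APIs.

```python
import numpy as np

H = 6.62607015e-34     # Planck constant, J*s
C = 2.99792458e8       # speed of light in vacuum, m/s
K_B = 1.380649e-23     # Boltzmann constant, J/K

def planck_nu(nu, T):
    """Spectral radiance B_nu(nu, T) = (2 h nu^3 / c^2) / (exp(h nu / (k_B T)) - 1),
    in W m^-2 sr^-1 Hz^-1."""
    return (2.0 * H * nu**3 / C**2) / np.expm1(H * nu / (K_B * T))

def planck_lambda(lam, T):
    """Spectral radiance per unit wavelength, (2 h c^2 / lam^5) / (exp(h c / (lam k_B T)) - 1),
    in W m^-2 sr^-1 m^-1."""
    return (2.0 * H * C**2 / lam**5) / np.expm1(H * C / (lam * K_B * T))

T = 5778.0            # an illustrative temperature, roughly that of the solar surface, K
lam = 500e-9          # 500 nm, in the visible range
print("B_lambda(500 nm):", planck_lambda(lam, T), "W m^-3 sr^-1")
print("B_nu at the same frequency:", planck_nu(C / lam, T), "W m^-2 sr^-1 Hz^-1")
# Consistency check of the change of variables B_lambda = (c / lam^2) * B_nu:
print("ratio check:", planck_lambda(lam, T) / (C / lam**2 * planck_nu(C / lam, T)))   # ~1.0
```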
Planck radiation has a maximum intensity at a wavelength that depends on the temperature of the body. For example, at room temperature (~), a body emits thermal radiation that is mostly infrared and invisible. At higher temperatures the amount of infrared radiation increases and can be felt as heat, and more visible radiation is emitted so the body glows visibly red. At higher temperatures, the body is bright yellow or blue-white and emits significant amounts of short wavelength radiation, including ultraviolet and even x-rays. The surface of the Sun (~) emits large amounts of both infrared and ultraviolet radiation; its emission is peaked in the visible spectrum. This shift due to temperature is called Wien's displacement law. Planck radiation is the greatest amount of radiation that any body at thermal equilibrium can emit from its surface, whatever its chemical composition or surface structure. The passage of radiation across an interface between media can be characterized by the emissivity of the interface (the ratio of the actual radiance to the theoretical Planck radiance), usually denoted by the symbol . It is in general dependent on chemical composition and physical structure, on temperature, on the wavelength, on the angle of passage, and on the polarization. The emissivity of a natural interface is always between and 1. A body that interfaces with another medium which both has and absorbs all the radiation incident upon it is said to be a black body. The surface of a black body can be modelled by a small hole in the wall of a large enclosure which is maintained at a uniform temperature with opaque walls that, at every wavelength, are not perfectly reflective. At equilibrium, the radiation inside this enclosure is described by Planck's law, as is the radiation leaving the small hole. Just as the Maxwell–Boltzmann distribution is the unique maximum entropy energy distribution for a gas of material particles at thermal equilibrium, so is Planck's distribution for a gas of photons. By contrast to a material gas where the masses and number of particles play a role, the spectral radiance, pressure and energy density of a photon gas at thermal equilibrium are entirely determined by the temperature. If the photon gas is not Planckian, the second law of thermodynamics guarantees that interactions (between photons and other particles or even, at sufficiently high temperatures, between the photons themselves) will cause the photon energy distribution to change and approach the Planck distribution. In such an approach to thermodynamic equilibrium, photons are created or annihilated in the right numbers and with the right energies to fill the cavity with a Planck distribution until they reach the equilibrium temperature. It is as if the gas is a mixture of sub-gases, one for every band of wavelengths, and each sub-gas eventually attains the common temperature. The quantity is the spectral radiance as a function of temperature and frequency. It has units of W·m−2·sr−1·Hz−1 in the SI system. An infinitesimal amount of power is radiated in the direction described by the angle from the surface normal from infinitesimal surface area into infinitesimal solid angle in an infinitesimal frequency band of width centered on frequency . The total power radiated into any solid angle is the integral of over those three quantities, and is given by the Stefan–Boltzmann law. 
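As a numerical check of the statement above that integrating the spectral radiance over frequency and over the hemisphere yields the Stefan–Boltzmann law, the sketch below performs the integral after the substitution x = hν/(kBT) and compares the result to σT⁴. It reuses the same constants as the previous sketch and is only an illustration, not a derivation.

```python
import numpy as np
from scipy.integrate import quad

H, C, K_B = 6.62607015e-34, 2.99792458e8, 1.380649e-23
SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

def radiant_exitance(T):
    """M(T) = pi * integral of B_nu over all frequencies.  With x = h*nu/(k_B*T) this becomes
    (2*pi*k_B^4*T^4 / (h^3*c^2)) times the integral of x^3/(e^x - 1), which equals pi^4/15."""
    prefactor = 2.0 * np.pi * K_B**4 * T**4 / (H**3 * C**2)
    integral, _ = quad(lambda x: x**3 / np.expm1(x), 1e-12, 50.0)   # tail beyond x = 50 is negligible
    return prefactor * integral

T = 300.0
print("numerical integral:", radiant_exitance(T), "W/m^2")   # ~459
print("sigma * T^4       :", SIGMA * T**4, "W/m^2")          # ~459
```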
The spectral radiance of Planckian radiation from a black body has the same value for every direction and angle of polarization, and so the black body is said to be a Lambertian radiator. Different forms Planck's law can be encountered in several forms depending on the conventions and preferences of different scientific fields. The various forms of the law for spectral radiance are summarized in the table below. Forms on the left are most often encountered in experimental fields, while those on the right are most often encountered in theoretical fields. In the fractional bandwidth formulation, and the integration is with respect to Planck's law can also be written in terms of the spectral energy density by multiplying by : These distributions represent the spectral radiance of blackbodies—the power emitted from the emitting surface, per unit projected area of emitting surface, per unit solid angle, per spectral unit (frequency, wavelength, wavenumber or their angular equivalents, or fractional frequency or wavelength). Since the radiance is isotropic (i.e. independent of direction), the power emitted at an angle to the normal is proportional to the projected area, and therefore to the cosine of that angle as per Lambert's cosine law, and is unpolarized. Correspondence between spectral variable forms Different spectral variables require different corresponding forms of expression of the law. In general, one may not convert between the various forms of Planck's law simply by substituting one variable for another, because this would not take into account that the different forms have different units. Wavelength and frequency units are reciprocal. Corresponding forms of expression are related because they express one and the same physical fact: for a particular physical spectral increment, a corresponding particular physical energy increment is radiated. This is so whether it is expressed in terms of an increment of frequency, , or, correspondingly, of wavelength, , or of fractional bandwidth, or . Introduction of a minus sign can indicate that an increment of frequency corresponds with decrement of wavelength. In order to convert the corresponding forms so that they express the same quantity in the same units we multiply by the spectral increment. Then, for a particular spectral increment, the particular physical energy increment may be written which leads to Also, , so that . Substitution gives the correspondence between the frequency and wavelength forms, with their different dimensions and units. Consequently, Evidently, the location of the peak of the spectral distribution for Planck's law depends on the choice of spectral variable. Nevertheless, in a manner of speaking, this formula means that the shape of the spectral distribution is independent of temperature, according to Wien's displacement law, as detailed below in § Properties §§ Percentiles. The fractional bandwidth form is related to the other forms by . First and second radiation constants In the above variants of Planck's law, the wavelength and wavenumber variants use the terms and which comprise physical constants only. Consequently, these terms can be considered as physical constants themselves, and are therefore referred to as the first radiation constant and the second radiation constant with and Using the radiation constants, the wavelength variant of Planck's law can be simplified to and the wavenumber variant can be simplified correspondingly. is used here instead of because it is the SI symbol for spectral radiance. 
The in refers to that. This reference is necessary because Planck's law can be reformulated to give spectral radiant exitance rather than spectral radiance , in which case replaces , with so that Planck's law for spectral radiant exitance can be written as As measuring techniques have improved, the General Conference on Weights and Measures has revised its estimate of ; see for details. Physics Planck's law describes the unique and characteristic spectral distribution for electromagnetic radiation in thermodynamic equilibrium, when there is no net flow of matter or energy. Its physics is most easily understood by considering the radiation in a cavity with rigid opaque walls. Motion of the walls can affect the radiation. If the walls are not opaque, then the thermodynamic equilibrium is not isolated. It is of interest to explain how the thermodynamic equilibrium is attained. There are two main cases: (a) when the approach to thermodynamic equilibrium is in the presence of matter, when the walls of the cavity are imperfectly reflective for every wavelength or when the walls are perfectly reflective while the cavity contains a small black body (this was the main case considered by Planck); or (b) when the approach to equilibrium is in the absence of matter, when the walls are perfectly reflective for all wavelengths and the cavity contains no matter. For matter not enclosed in such a cavity, thermal radiation can be approximately explained by appropriate use of Planck's law. Classical physics led, via the equipartition theorem, to the ultraviolet catastrophe, a prediction that the total blackbody radiation intensity was infinite. If supplemented by the classically unjustifiable assumption that for some reason the radiation is finite, classical thermodynamics provides an account of some aspects of the Planck distribution, such as the Stefan–Boltzmann law, and the Wien displacement law. For the case of the presence of matter, quantum mechanics provides a good account, as found below in the section headed Einstein coefficients. This was the case considered by Einstein, and is nowadays used for quantum optics. For the case of the absence of matter, quantum field theory is necessary, because non-relativistic quantum mechanics with fixed particle numbers does not provide a sufficient account. Photons Quantum theoretical explanation of Planck's law views the radiation as a gas of massless, uncharged, bosonic particles, namely photons, in thermodynamic equilibrium. Photons are viewed as the carriers of the electromagnetic interaction between electrically charged elementary particles. Photon numbers are not conserved. Photons are created or annihilated in the right numbers and with the right energies to fill the cavity with the Planck distribution. For a photon gas in thermodynamic equilibrium, the internal energy density is entirely determined by the temperature; moreover, the pressure is entirely determined by the internal energy density. This is unlike the case of thermodynamic equilibrium for material gases, for which the internal energy is determined not only by the temperature, but also, independently, by the respective numbers of the different molecules, and independently again, by the specific characteristics of the different molecules. For different material gases at given temperature, the pressure and internal energy density can vary independently, because different molecules can carry independently different excitation energies. 
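To illustrate the claim above that temperature alone fixes the properties of an equilibrium photon gas, the sketch below evaluates the standard closed-form expressions for the photon number density and energy density (these formulas come from the usual photon-gas results rather than being derived in this article) and shows that the mean photon energy works out to about 2.7 kBT, independent of temperature.

```python
import numpy as np
from scipy.special import zeta

HBAR = 1.054571817e-34   # reduced Planck constant, J*s
C = 2.99792458e8         # speed of light, m/s
K_B = 1.380649e-23       # Boltzmann constant, J/K

def photon_number_density(T):
    """n = (2 zeta(3) / pi^2) * (k_B T / (hbar c))^3, photons per cubic metre."""
    return 2.0 * zeta(3) / np.pi**2 * (K_B * T / (HBAR * C))**3

def photon_energy_density(T):
    """u = (pi^2 / 15) * (k_B T)^4 / (hbar c)^3, joules per cubic metre."""
    return np.pi**2 / 15.0 * (K_B * T)**4 / (HBAR * C)**3

T = 300.0   # an illustrative temperature, K
n, u = photon_number_density(T), photon_energy_density(T)
print("photon number density:", n, "m^-3")
print("energy density       :", u, "J m^-3")
print("mean photon energy / (k_B T):", u / (n * K_B * T))   # ~2.70, independent of T
```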
Planck's law arises as a limit of the Bose–Einstein distribution, the energy distribution describing non-interactive bosons in thermodynamic equilibrium. In the case of massless bosons such as photons and gluons, the chemical potential is zero and the Bose–Einstein distribution reduces to the Planck distribution. There is another fundamental equilibrium energy distribution: the Fermi–Dirac distribution, which describes fermions, such as electrons, in thermal equilibrium. The two distributions differ because multiple bosons can occupy the same quantum state, while multiple fermions cannot. At low densities, the number of available quantum states per particle is large, and this difference becomes irrelevant. In the low density limit, the Bose–Einstein and the Fermi–Dirac distribution each reduce to the Maxwell–Boltzmann distribution. Kirchhoff's law of thermal radiation Kirchhoff's law of thermal radiation is a succinct and brief account of a complicated physical situation. The following is an introductory sketch of that situation, and is very far from being a rigorous physical argument. The purpose here is only to summarize the main physical factors in the situation, and the main conclusions. Spectral dependence of thermal radiation There is a difference between conductive heat transfer and radiative heat transfer. Radiative heat transfer can be filtered to pass only a definite band of radiative frequencies. It is generally known that the hotter a body becomes, the more heat it radiates at every frequency. In a cavity in an opaque body with rigid walls that are not perfectly reflective at any frequency, in thermodynamic equilibrium, there is only one temperature, and it must be shared in common by the radiation of every frequency. One may imagine two such cavities, each in its own isolated radiative and thermodynamic equilibrium. One may imagine an optical device that allows radiative heat transfer between the two cavities, filtered to pass only a definite band of radiative frequencies. If the values of the spectral radiances of the radiations in the cavities differ in that frequency band, heat may be expected to pass from the hotter to the colder. One might propose to use such a filtered transfer of heat in such a band to drive a heat engine. If the two bodies are at the same temperature, the second law of thermodynamics does not allow the heat engine to work. It may be inferred that for a temperature common to the two bodies, the values of the spectral radiances in the pass-band must also be common. This must hold for every frequency band. This became clear to Balfour Stewart and later to Kirchhoff. Balfour Stewart found experimentally that of all surfaces, one of lamp-black emitted the greatest amount of thermal radiation for every quality of radiation, judged by various filters. Thinking theoretically, Kirchhoff went a little further and pointed out that this implied that the spectral radiance, as a function of radiative frequency, of any such cavity in thermodynamic equilibrium must be a unique universal function of temperature. He postulated an ideal black body that interfaced with its surrounds in just such a way as to absorb all the radiation that falls on it. By the Helmholtz reciprocity principle, radiation from the interior of such a body would pass unimpeded directly to its surroundings without reflection at the interface. In thermodynamic equilibrium, the thermal radiation emitted from such a body would have that unique universal spectral radiance as a function of temperature. 
This insight is the root of Kirchhoff's law of thermal radiation. Relation between absorptivity and emissivity One may imagine a small homogeneous spherical material body labeled at a temperature , lying in a radiation field within a large cavity with walls of material labeled at a temperature . The body emits its own thermal radiation. At a particular frequency , the radiation emitted from a particular cross-section through the centre of in one sense in a direction normal to that cross-section may be denoted , characteristically for the material of . At that frequency , the radiative power from the walls into that cross-section in the opposite sense in that direction may be denoted , for the wall temperature . For the material of , defining the absorptivity as the fraction of that incident radiation absorbed by , that incident energy is absorbed at a rate . The rate of accumulation of energy in one sense into the cross-section of the body can then be expressed Kirchhoff's seminal insight, mentioned just above, was that, at thermodynamic equilibrium at temperature , there exists a unique universal radiative distribution, nowadays denoted , that is independent of the chemical characteristics of the materials and , that leads to a very valuable understanding of the radiative exchange equilibrium of any body at all, as follows. When there is thermodynamic equilibrium at temperature , the cavity radiation from the walls has that unique universal value, so that . Further, one may define the emissivity of the material of the body just so that at thermodynamic equilibrium at temperature , one has . When thermal equilibrium prevails at temperature , the rate of accumulation of energy vanishes so that . It follows that in thermodynamic equilibrium, when , Kirchhoff pointed out that it follows that in thermodynamic equilibrium, when , Introducing the special notation for the absorptivity of material at thermodynamic equilibrium at temperature (justified by a discovery of Einstein, as indicated below), one further has the equality at thermodynamic equilibrium. The equality of absorptivity and emissivity here demonstrated is specific for thermodynamic equilibrium at temperature and is in general not to be expected to hold when conditions of thermodynamic equilibrium do not hold. The emissivity and absorptivity are each separately properties of the molecules of the material but they depend differently upon the distributions of states of molecular excitation on the occasion, because of a phenomenon known as "stimulated emission", that was discovered by Einstein. On occasions when the material is in thermodynamic equilibrium or in a state known as local thermodynamic equilibrium, the emissivity and absorptivity become equal. Very strong incident radiation or other factors can disrupt thermodynamic equilibrium or local thermodynamic equilibrium. Local thermodynamic equilibrium in a gas means that molecular collisions far outweigh light emission and absorption in determining the distributions of states of molecular excitation. Kirchhoff pointed out that he did not know the precise character of , but he thought it important that it should be found out. Four decades after Kirchhoff's insight of the general principles of its existence and character, Planck's contribution was to determine the precise mathematical expression of that equilibrium distribution . 
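A minimal numerical restatement of the balance argument sketched above: for a small body exchanging radiation with cavity walls at a common temperature, the net spectral gain vanishes only when the absorptivity equals the emissivity. The frequency, temperature and coefficients below are arbitrary illustrative values, not quantities taken from this article.

```python
import numpy as np

H, C, K_B = 6.62607015e-34, 2.99792458e8, 1.380649e-23

def planck_nu(nu, T):
    """Equilibrium spectral radiance B_nu(nu, T), W m^-2 sr^-1 Hz^-1."""
    return (2.0 * H * nu**3 / C**2) / np.expm1(H * nu / (K_B * T))

def net_spectral_gain(nu, T_body, T_wall, absorptivity, emissivity):
    """Absorbed minus emitted spectral radiance for the small body:
    q = a * B_nu(T_wall) - eps * B_nu(T_body)."""
    return absorptivity * planck_nu(nu, T_wall) - emissivity * planck_nu(nu, T_body)

nu = 1.0e14   # test frequency, Hz
# Equal temperatures: the gain vanishes only if absorptivity == emissivity.
print(net_spectral_gain(nu, 500.0, 500.0, absorptivity=0.6, emissivity=0.6))   # 0.0
print(net_spectral_gain(nu, 500.0, 500.0, absorptivity=0.6, emissivity=0.4))   # > 0: not an equilibrium
```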
Black body In physics, one considers an ideal black body, here labeled , defined as one that completely absorbs all of the electromagnetic radiation falling upon it at every frequency (hence the term "black"). According to Kirchhoff's law of thermal radiation, this entails that, for every frequency , at thermodynamic equilibrium at temperature , one has , so that the thermal radiation from a black body is always equal to the full amount specified by Planck's law. No physical body can emit thermal radiation that exceeds that of a black body, since if it were in equilibrium with a radiation field, it would be emitting more energy than was incident upon it. Though perfectly black materials do not exist, in practice a black surface can be accurately approximated. As to its material interior, a body of condensed matter, liquid, solid, or plasma, with a definite interface with its surroundings, is completely black to radiation if it is completely opaque. That means that it absorbs all of the radiation that penetrates the interface of the body with its surroundings, and enters the body. This is not too difficult to achieve in practice. On the other hand, a perfectly black interface is not found in nature. A perfectly black interface reflects no radiation, but transmits all that falls on it, from either side. The best practical way to make an effectively black interface is to simulate an 'interface' by a small hole in the wall of a large cavity in a completely opaque rigid body of material that does not reflect perfectly at any frequency, with its walls at a controlled temperature. Beyond these requirements, the component material of the walls is unrestricted. Radiation entering the hole has almost no possibility of escaping the cavity without being absorbed by multiple impacts with its walls. Lambert's cosine law As explained by Planck, a radiating body has an interior consisting of matter, and an interface with its contiguous neighbouring material medium, which is usually the medium from within which the radiation from the surface of the body is observed. The interface is not composed of physical matter but is a theoretical conception, a mathematical two-dimensional surface, a joint property of the two contiguous media, strictly speaking belonging to neither separately. Such an interface can neither absorb nor emit, because it is not composed of physical matter; but it is the site of reflection and transmission of radiation, because it is a surface of discontinuity of optical properties. The reflection and transmission of radiation at the interface obey the Stokes–Helmholtz reciprocity principle. At any point in the interior of a black body located inside a cavity in thermodynamic equilibrium at temperature the radiation is homogeneous, isotropic and unpolarized. A black body absorbs all and reflects none of the electromagnetic radiation incident upon it. According to the Helmholtz reciprocity principle, radiation from the interior of a black body is not reflected at its surface, but is fully transmitted to its exterior. Because of the isotropy of the radiation in the body's interior, the spectral radiance of radiation transmitted from its interior to its exterior through its surface is independent of direction. This is expressed by saying that radiation from the surface of a black body in thermodynamic equilibrium obeys Lambert's cosine law. 
This means that the spectral flux from a given infinitesimal element of area of the actual emitting surface of the black body, detected from a given direction that makes an angle with the normal to the actual emitting surface at , into an element of solid angle of detection centred on the direction indicated by , in an element of frequency bandwidth , can be represented as where denotes the flux, per unit area per unit frequency per unit solid angle, that area would show if it were measured in its normal direction . The factor is present because the area to which the spectral radiance refers directly is the projection, of the actual emitting surface area, onto a plane perpendicular to the direction indicated by . This is the reason for the name cosine law. Taking into account the independence of direction of the spectral radiance of radiation from the surface of a black body in thermodynamic equilibrium, one has and so Thus Lambert's cosine law expresses the independence of direction of the spectral radiance of the surface of a black body in thermodynamic equilibrium. Stefan–Boltzmann law The total power emitted per unit area at the surface of a black body may be found by integrating the black body spectral flux found from Lambert's law over all frequencies, and over the solid angles corresponding to a hemisphere above the surface. The infinitesimal solid angle can be expressed in spherical polar coordinates: So that: where is known as the Stefan–Boltzmann constant. Radiative transfer The equation of radiative transfer describes the way in which radiation is affected as it travels through a material medium. For the special case in which the material medium is in thermodynamic equilibrium in the neighborhood of a point in the medium, Planck's law is of special importance. For simplicity, we can consider the linear steady state, without scattering. The equation of radiative transfer states that for a beam of light going through a small distance , energy is conserved: The change in the (spectral) radiance of that beam is equal to the amount removed by the material medium plus the amount gained from the material medium. If the radiation field is in equilibrium with the material medium, these two contributions will be equal. The material medium will have a certain emission coefficient and absorption coefficient. The absorption coefficient is the fractional change in the intensity of the light beam as it travels the distance , and has units of length−1. It is composed of two parts, the decrease due to absorption and the increase due to stimulated emission. Stimulated emission is emission by the material body which is caused by and is proportional to the incoming radiation. It is included in the absorption term because, like absorption, it is proportional to the intensity of the incoming radiation. Since the amount of absorption will generally vary linearly as the density of the material, we may define a "mass absorption coefficient" which is a property of the material itself. The change in intensity of a light beam due to absorption as it traverses a small distance will then be The "mass emission coefficient" is equal to the radiance per unit volume of a small volume element divided by its mass (since, as for the mass absorption coefficient, the emission is proportional to the emitting mass) and has units of power⋅solid angle−1⋅frequency−1⋅density−1. Like the mass absorption coefficient, it too is a property of the material itself. 
The change in a light beam as it traverses a small distance will then be The equation of radiative transfer will then be the sum of these two contributions: If the radiation field is in equilibrium with the material medium, then the radiation will be homogeneous (independent of position) so that and: which is another statement of Kirchhoff's law, relating two material properties of the medium, and which yields the radiative transfer equation at a point around which the medium is in thermodynamic equilibrium: Einstein coefficients The principle of detailed balance states that, at thermodynamic equilibrium, each elementary process is equilibrated by its reverse process. In 1916, Albert Einstein applied this principle on an atomic level to the case of an atom radiating and absorbing radiation due to transitions between two particular energy levels, giving a deeper insight into the equation of radiative transfer and Kirchhoff's law for this type of radiation. If level 1 is the lower energy level with energy , and level 2 is the upper energy level with energy , then the frequency of the radiation radiated or absorbed will be determined by Bohr's frequency condition: If and are the number densities of the atom in states 1 and 2 respectively, then the rate of change of these densities in time will be due to three processes: Spontaneous emission Stimulated emission Photo-absorption where is the spectral energy density of the radiation field. The three parameters , and , known as the Einstein coefficients, are associated with the photon frequency produced by the transition between two energy levels (states). As a result, each line in a spectrum has its own set of associated coefficients. When the atoms and the radiation field are in equilibrium, the radiance will be given by Planck's law and, by the principle of detailed balance, the sum of these rates must be zero: Since the atoms are also in equilibrium, the populations of the two levels are related by the Boltzmann factor: where and are the multiplicities of the respective energy levels. Combining the above two equations with the requirement that they be valid at any temperature yields two relationships between the Einstein coefficients: so that knowledge of one coefficient will yield the other two. For the case of isotropic absorption and emission, the emission coefficient and absorption coefficient defined in the radiative transfer section above, can be expressed in terms of the Einstein coefficients. The relationships between the Einstein coefficients will yield the expression of Kirchhoff's law expressed in the Radiative transfer section above, namely that These coefficients apply to both atoms and molecules. Properties Peaks The distributions , , and peak at a photon energy ofwhere is the Lambert W function and is Euler's number. However, the distribution peaks at a different energyThe reason for this is that, as mentioned above, one cannot go from (for example) to simply by substituting by . In addition, one must also multiply by , which shifts the peak of the distribution to higher energies. These peaks are the mode energy of a photon, when binned using equal-size bins of frequency or wavelength, respectively. Dividing by these energy expression gives the wavelength of the peak. The spectral radiance at these peaks is given by: with andwith Meanwhile, the average energy of a photon from a blackbody iswhere is the Riemann zeta function. Approximations In the limit of low frequencies (i.e. 
long wavelengths), Planck's law becomes the Rayleigh–Jeans law or The radiance increases as the square of the frequency, illustrating the ultraviolet catastrophe. In the limit of high frequencies (i.e. small wavelengths) Planck's law tends to the Wien approximation: or Percentiles Wien's displacement law in its stronger form states that the shape of Planck's law is independent of temperature. It is therefore possible to list the percentile points of the total radiation as well as the peaks for wavelength and frequency, in a form which gives the wavelength when divided by temperature . The second column of the following table lists the corresponding values of , that is, those values of for which the wavelength is micrometers at the radiance percentile point given by the corresponding entry in the first column. That is, 0.01% of the radiation is at a wavelength below  μm, 20% below , etc. The wavelength and frequency peaks are in bold and occur at 25.0% and 64.6% respectively. The 41.8% point is the wavelength-frequency-neutral peak (i.e. the peak in power per unit change in logarithm of wavelength or frequency). These are the points at which the respective Planck-law functions , and , respectively, divided by attain their maxima. The much smaller gap in ratio of wavelengths between 0.1% and 0.01% (1110 is 22% more than 910) than between 99.9% and 99.99% (113374 is 120% more than 51613) reflects the exponential decay of energy at short wavelengths (left end) and polynomial decay at long. Which peak to use depends on the application. The conventional choice is the wavelength peak at 25.0% given by Wien's displacement law in its weak form. For some purposes the median or 50% point dividing the total radiation into two-halves may be more suitable. The latter is closer to the frequency peak than to the wavelength peak because the radiance drops exponentially at short wavelengths and only polynomially at long. The neutral peak occurs at a shorter wavelength than the median for the same reason. Comparison to solar spectrum Solar radiation can be compared to black-body radiation at about 5778 K (but see graph). The table on the right shows how the radiation of a black body at this temperature is partitioned, and also how sunlight is partitioned for comparison. Also for comparison a planet modeled as a black body is shown, radiating at a nominal 288 K (15 °C) as a representative value of the Earth's highly variable temperature. Its wavelengths are more than twenty times that of the Sun, tabulated in the third column in micrometers (thousands of nanometers). That is, only 1% of the Sun's radiation is at wavelengths shorter than 296 nm, and only 1% at longer than 3728 nm. Expressed in micrometers this puts 98% of the Sun's radiation in the range from 0.296 to 3.728 μm. The corresponding 98% of energy radiated from a 288 K planet is from 5.03 to 79.5 μm, well above the range of solar radiation (or below if expressed in terms of frequencies instead of wavelengths ). A consequence of this more-than-order-of-magnitude difference in wavelength between solar and planetary radiation is that filters designed to pass one and block the other are easy to construct. 
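The partition percentages tabulated above, and the glass-window example in the next paragraph, can be reproduced by integrating Planck's law over a wavelength band. The following is a minimal sketch, not from the original text: substituting x = hc/(λkT) reduces the fraction of total black-body power emitted shortward of a wavelength λ to a standard integral, evaluated here with constants taken from scipy.constants.

```python
# Minimal sketch: fraction of total black-body power emitted at wavelengths
# shorter than lam, via  F(<lam) = (15/pi^4) * Integral_{x_c}^inf x^3/(e^x - 1) dx
# with x_c = h*c/(lam*k*T).
import numpy as np
from scipy.integrate import quad
from scipy.constants import h, c, k, pi

def fraction_below(lam, T):
    """Fraction of black-body power at wavelengths < lam (metres), temperature T (K)."""
    x_c = h * c / (lam * k * T)
    integral, _ = quad(lambda x: x**3 / np.expm1(x), x_c, np.inf)
    return integral / (pi**4 / 15)

print(f"Sun   (5778 K), below 1.2 um: {fraction_below(1.2e-6, 5778):.3f}")    # ~0.80
print(f"Earth  (288 K), below 5.0 um: {fraction_below(5.0e-6, 288):.3f}")     # ~0.01
print(f"Earth  (288 K), above 5.0 um: {1 - fraction_below(5.0e-6, 288):.3f}") # ~0.99
```

The two non-trivial fractions are, approximately, the 80% of solar power shortward of 1.2 μm and the 99% of 288 K thermal power longward of 5 μm that appear in the window example below.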
For example, windows fabricated of ordinary glass or transparent plastic pass at least 80% of the incoming 5778 K solar radiation, which is below 1.2 μm in wavelength, while blocking over 99% of the outgoing 288 K thermal radiation from 5 μm upwards, wavelengths at which most kinds of glass and plastic of construction-grade thickness are effectively opaque. The Sun's radiation is that arriving at the top of the atmosphere (TOA). As can be read from the table, radiation below 400 nm, or ultraviolet, is about 8%, while that above 700 nm, or infrared, starts at about the 48% point and so accounts for 52% of the total. Hence only 40% of the TOA insolation is visible to the human eye. The atmosphere shifts these percentages substantially in favor of visible light as it absorbs most of the ultraviolet and significant amounts of infrared. Derivations Photon gas Consider a cube of side with conducting walls filled with electromagnetic radiation in thermal equilibrium at temperature . If there is a small hole in one of the walls, the radiation emitted from the hole will be characteristic of a perfect black body. We will first calculate the spectral energy density within the cavity and then determine the spectral radiance of the emitted radiation. At the walls of the cube, the parallel component of the electric field and the orthogonal component of the magnetic field must vanish. Analogous to the wave function of a particle in a box, one finds that the fields are superpositions of periodic functions. The three wavelengths , , and , in the three directions orthogonal to the walls can be:where the are positive integers. For each set of integers there are two linearly independent solutions (known as modes). The two modes for each set of these correspond to the two polarization states of the photon which has a spin of 1. According to quantum theory, the total energy of a mode is given by: The number can be interpreted as the number of photons in the mode. For the energy of the mode is not zero. This vacuum energy of the electromagnetic field is responsible for the Casimir effect. In the following we will calculate the internal energy of the box at absolute temperature . According to statistical mechanics, the equilibrium probability distribution over the energy levels of a particular mode is given by:where we use the reciprocal temperatureThe denominator , is the partition function of a single mode. It makes properly normalized, and can be evaluated aswith being the energy of a single photon. The average energy in a mode can be obtained from the partition function:This formula, apart from the first vacuum energy term, is a special case of the general formula for particles obeying Bose–Einstein statistics. Since there is no restriction on the total number of photons, the chemical potential is zero. If we measure the energy relative to the ground state, the total energy in the box follows by summing over all allowed single photon states. This can be done exactly in the thermodynamic limit as approaches infinity. In this limit, becomes continuous and we can then integrate over this parameter. To calculate the energy in the box in this way, we need to evaluate how many photon states there are in a given energy range. If we write the total number of single photon states with energies between and as , where is the density of states (which is evaluated below), then the total energy is given by To calculate the density of states we rewrite equation as follows:where is the norm of the vector . 
For every vector with integer components larger than or equal to zero, there are two photon states. This means that the number of photon states in a certain region of -space is twice the volume of that region. An energy range of corresponds to shell of thickness in -space. Because the components of have to be positive, this shell spans an octant of a sphere. The number of photon states , in an energy range , is thus given by:Inserting this in Eq. and dividing by volume gives the total energy densitywhere the frequency-dependent spectral energy density is given bySince the radiation is the same in all directions, and propagates at the speed of light, the spectral radiance of radiation exiting the small hole iswhich yields the Planck's lawOther forms of the law can be obtained by change of variables in the total energy integral. The above derivation is based on . Dipole approximation and Einstein Coefficients For the non-degenerate case, A and B coefficients can be calculated using dipole approximation in time dependent perturbation theory in quantum mechanics. Calculation of A also requires second quantization since semi-classical theory cannot explain spontaneous emission which does not go to zero as perturbing field goes to zero. The transition rates hence calculated are (in SI units): Note that the rate of transition formula depends on dipole moment operator. For higher order approximations, it involves quadrupole moment and other similar terms. The A and B coefficients (which correspond to angular frequency energy distribution) are hence: where and A and B coefficients satisfy the given ratios for non degenerate case: and . Another useful ratio is that from maxwell distribution which says that the number of particles in an energy level is proportional to the exponent of . Mathematically: where and are number of occupied energy levels of and respectively, where . Then, using: Solving for for equilibrium condition , and using the derived ratios, we get Planck's Law: . History Balfour Stewart In 1858, Balfour Stewart described his experiments on the thermal radiative emissive and absorptive powers of polished plates of various substances, compared with the powers of lamp-black surfaces, at the same temperature. Stewart chose lamp-black surfaces as his reference because of various previous experimental findings, especially those of Pierre Prevost and of John Leslie. He wrote "Lamp-black, which absorbs all the rays that fall upon it, and therefore possesses the greatest possible absorbing power, will possess also the greatest possible radiating power." Stewart measured radiated power with a thermo-pile and sensitive galvanometer read with a microscope. He was concerned with selective thermal radiation, which he investigated with plates of substances that radiated and absorbed selectively for different qualities of radiation rather than maximally for all qualities of radiation. He discussed the experiments in terms of rays which could be reflected and refracted, and which obeyed the Helmholtz reciprocity principle (though he did not use an eponym for it). He did not in this paper mention that the qualities of the rays might be described by their wavelengths, nor did he use spectrally resolving apparatus such as prisms or diffraction gratings. His work was quantitative within these constraints. 
He made his measurements in a room temperature environment, and quickly so as to catch his bodies in a condition near the thermal equilibrium in which they had been prepared by heating to equilibrium with boiling water. His measurements confirmed that substances that emit and absorb selectively respect the principle of selective equality of emission and absorption at thermal equilibrium. Stewart offered a theoretical proof that this should be the case separately for every selected quality of thermal radiation, but his mathematics was not rigorously valid. According to historian D. M. Siegel: "He was not a practitioner of the more sophisticated techniques of nineteenth-century mathematical physics; he did not even make use of the functional notation in dealing with spectral distributions." He made no mention of thermodynamics in this paper, though he did refer to conservation of vis viva. He proposed that his measurements implied that radiation was both absorbed and emitted by particles of matter throughout depths of the media in which it propagated. He applied the Helmholtz reciprocity principle to account for the material interface processes as distinct from the processes in the interior material. He concluded that his experiments showed that, in the interior of an enclosure in thermal equilibrium, the radiant heat, reflected and emitted combined, leaving any part of the surface, regardless of its substance, was the same as would have left that same portion of the surface if it had been composed of lamp-black. He did not mention the possibility of ideally perfectly reflective walls; in particular he noted that highly polished real physical metals absorb very slightly. Gustav Kirchhoff In 1859, not knowing of Stewart's work, Gustav Robert Kirchhoff reported the coincidence of the wavelengths of spectrally resolved lines of absorption and of emission of visible light. Importantly for thermal physics, he also observed that bright lines or dark lines were apparent depending on the temperature difference between emitter and absorber. Kirchhoff then went on to consider bodies that emit and absorb heat radiation, in an opaque enclosure or cavity, in equilibrium at temperature . Here is used a notation different from Kirchhoff's. Here, the emitting power denotes a dimensioned quantity, the total radiation emitted by a body labeled by index at temperature . The total absorption ratio of that body is dimensionless, the ratio of absorbed to incident radiation in the cavity at temperature . (In contrast with Balfour Stewart's, Kirchhoff's definition of his absorption ratio did not refer in particular to a lamp-black surface as the source of the incident radiation.) Thus the ratio of emitting power to absorption ratio is a dimensioned quantity, with the dimensions of emitting power, because is dimensionless. Also here the wavelength-specific emitting power of the body at temperature is denoted by and the wavelength-specific absorption ratio by . Again, the ratio of emitting power to absorption ratio is a dimensioned quantity, with the dimensions of emitting power. In a second report made in 1859, Kirchhoff announced a new general principle or law for which he offered a theoretical and mathematical proof, though he did not offer quantitative measurements of radiation powers. His theoretical proof was and still is considered by some writers to be invalid. 
His principle, however, has endured: it was that for heat rays of the same wavelength, in equilibrium at a given temperature, the wavelength-specific ratio of emitting power to absorption ratio has one and the same common value for all bodies that emit and absorb at that wavelength. In symbols, the law stated that the wavelength-specific ratio has one and the same value for all bodies, that is for all values of index . In this report there was no mention of black bodies. In 1860, still not knowing of Stewart's measurements for selected qualities of radiation, Kirchhoff pointed out that it was long established experimentally that for total heat radiation, of unselected quality, emitted and absorbed by a body in equilibrium, the dimensioned total radiation ratio , has one and the same value common to all bodies, that is, for every value of the material index . Again without measurements of radiative powers or other new experimental data, Kirchhoff then offered a fresh theoretical proof of his new principle of the universality of the value of the wavelength-specific ratio at thermal equilibrium. His fresh theoretical proof was and still is considered by some writers to be invalid. But more importantly, it relied on a new theoretical postulate of "perfectly black bodies", which is the reason why one speaks of Kirchhoff's law. Such black bodies showed complete absorption in their infinitely thin most superficial surface. They correspond to Balfour Stewart's reference bodies, with internal radiation, coated with lamp-black. They were not the more realistic perfectly black bodies later considered by Planck. Planck's black bodies radiated and absorbed only by the material in their interiors; their interfaces with contiguous media were only mathematical surfaces, capable neither of absorption nor emission, but only of reflecting and transmitting with refraction. Kirchhoff's proof considered an arbitrary non-ideal body labeled as well as various perfect black bodies labeled . It required that the bodies be kept in a cavity in thermal equilibrium at temperature . His proof intended to show that the ratio was independent of the nature of the non-ideal body, however partly transparent or partly reflective it was. His proof first argued that for wavelength and at temperature , at thermal equilibrium, all perfectly black bodies of the same size and shape have the one and the same common value of emissive power , with the dimensions of power. His proof noted that the dimensionless wavelength-specific absorption ratio of a perfectly black body is by definition exactly 1. Then for a perfectly black body, the wavelength-specific ratio of emissive power to absorption ratio is again just , with the dimensions of power. Kirchhoff considered, successively, thermal equilibrium with the arbitrary non-ideal body, and with a perfectly black body of the same size and shape, in place in his cavity in equilibrium at temperature . He argued that the flows of heat radiation must be the same in each case. Thus he argued that at thermal equilibrium the ratio was equal to , which may now be denoted , a continuous function, dependent only on at fixed temperature , and an increasing function of at fixed wavelength , at low temperatures vanishing for visible but not for longer wavelengths, with positive values for visible wavelengths at higher temperatures, which does not depend on the nature of the arbitrary non-ideal body. (Geometrical factors, taken into detailed account by Kirchhoff, have been ignored in the foregoing.) 
Thus Kirchhoff's law of thermal radiation can be stated: For any material at all, radiating and absorbing in thermodynamic equilibrium at any given temperature , for every wavelength , the ratio of emissive power to absorptive ratio has one universal value, which is characteristic of a perfect black body, and is an emissive power which we here represent by . (For our notation , Kirchhoff's original notation was simply .) Kirchhoff announced that the determination of the function was a problem of the highest importance, though he recognized that there would be experimental difficulties to be overcome. He supposed that like other functions that do not depend on the properties of individual bodies, it would be a simple function. That function has occasionally been called 'Kirchhoff's (emission, universal) function', though its precise mathematical form would not be known for another forty years, till it was discovered by Planck in 1900. The theoretical proof for Kirchhoff's universality principle was worked on and debated by various physicists over the same time, and later. Kirchhoff stated later in 1860 that his theoretical proof was better than Balfour Stewart's, and in some respects it was so. Kirchhoff's 1860 paper did not mention the second law of thermodynamics, and of course did not mention the concept of entropy which had not at that time been established. In a more considered account in a book in 1862, Kirchhoff mentioned the connection of his law with "Carnot's principle", which is a form of the second law. According to Helge Kragh, "Quantum theory owes its origin to the study of thermal radiation, in particular to the "blackbody" radiation that Robert Kirchhoff had first defined in 1859–1860." Empirical and theoretical ingredients for the scientific induction of Planck's law In 1860, Kirchhoff predicted experimental difficulties for the empirical determination of the function that described the dependence of the black-body spectrum as a function only of temperature and wavelength. And so it turned out. It took some forty years of development of improved methods of measurement of electromagnetic radiation to get a reliable result. In 1865, John Tyndall described radiation from electrically heated filaments and from carbon arcs as visible and invisible. Tyndall spectrally decomposed the radiation by use of a rock salt prism, which passed heat as well as visible rays, and measured the radiation intensity by means of a thermopile. In 1880, André-Prosper-Paul Crova published a diagram of the three-dimensional appearance of the graph of the strength of thermal radiation as a function of wavelength and temperature. He determined the spectral variable by use of prisms. He analyzed the surface through what he called "isothermal" curves, sections for a single temperature, with a spectral variable on the abscissa and a power variable on the ordinate. He put smooth curves through his experimental data points. They had one peak at a spectral value characteristic for the temperature, and fell either side of it towards the horizontal axis. Such spectral sections are widely shown even today. In a series of papers from 1881 to 1886, Langley reported measurements of the spectrum of heat radiation, using diffraction gratings and prisms, and the most sensitive detectors that he could make. 
He reported that there was a peak intensity that increased with temperature, that the shape of the spectrum was not symmetrical about the peak, that there was a strong fall-off of intensity when the wavelength was shorter than an approximate cut-off value for each temperature, that the approximate cut-off wavelength decreased with increasing temperature, and that the wavelength of the peak intensity decreased with temperature, so that the intensity increased strongly with temperature for short wavelengths that were longer than the approximate cut-off for the temperature. Having read Langley, in 1888, Russian physicist V.A. Michelson published a consideration of the idea that the unknown Kirchhoff radiation function could be explained physically and stated mathematically in terms of "complete irregularity of the vibrations of ... atoms". At this time, Planck was not studying radiation closely, and believed in neither atoms nor statistical physics. Michelson produced a formula for the spectrum for temperature: where denotes specific radiative intensity at wavelength and temperature , and where and are empirical constants. In 1898, Otto Lummer and Ferdinand Kurlbaum published an account of their cavity radiation source. Their design has been used largely unchanged for radiation measurements to the present day. It was a platinum box, divided by diaphragms, with its interior blackened with iron oxide. It was an important ingredient for the progressively improved measurements that led to the discovery of Planck's law. A version described in 1901 had its interior blackened with a mixture of chromium, nickel, and cobalt oxides. The importance of the Lummer and Kurlbaum cavity radiation source was that it was an experimentally accessible source of black-body radiation, as distinct from radiation from a simply exposed incandescent solid body, which had been the nearest available experimental approximation to black-body radiation over a suitable range of temperatures. The simply exposed incandescent solid bodies, that had been used before, emitted radiation with departures from the black-body spectrum that made it impossible to find the true black-body spectrum from experiments. Planck's views before the empirical facts led him to find his eventual law Planck first turned his attention to the problem of black-body radiation in 1897. Theoretical and empirical progress enabled Lummer and Pringsheim to write in 1899 that available experimental evidence was approximately consistent with the specific intensity law where and denote empirically measurable constants, and where and denote wavelength and temperature respectively. For theoretical reasons, Planck at that time accepted this formulation, which has an effective cut-off of short wavelengths. Gustav Kirchhoff was Max Planck's teacher and surmised that there was a universal law for blackbody radiation and this was called "Kirchhoff's challenge". Planck, a theorist, believed that Wilhelm Wien had discovered this law and Planck expanded on Wien's work presenting it in 1899 to the meeting of the German Physical Society. Experimentalists Otto Lummer, Ferdinand Kurlbaum, Ernst Pringsheim Sr., and Heinrich Rubens did experiments that appeared to support Wien's law especially at higher frequency short wavelengths which Planck so wholly endorsed at the German Physical Society that it began to be called the Wien-Planck Law. However, by September 1900, the experimentalists had proven beyond a doubt that the Wien-Planck law failed at the longer wavelengths. 
They would present their data on October 19. Planck was informed by his friend Rubens and quickly created a formula within a few days. In June of that same year, Lord Rayleigh had created a formula that would work for long wavelengths (lower frequencies), based on the widely accepted theory of equipartition. So Planck submitted a formula combining both Rayleigh's law (or a similar equipartition theory) and Wien's law, weighted towards one or the other depending on wavelength, to match the experimental data. However, although this equation worked, Planck himself said that unless he could explain the formula, derived from a "lucky intuition", in terms of one of "true meaning" in physics, it did not have true significance. Planck explained that thereafter followed the hardest work of his life. Planck did not believe in atoms, nor did he think the second law of thermodynamics should be statistical, because probability does not provide an absolute answer, and Boltzmann's entropy law rested on the hypothesis of atoms and was statistical. But Planck was unable to find a way to reconcile his blackbody equation with continuous laws such as Maxwell's wave equations. So in what Planck called "an act of desperation", he turned to Boltzmann's atomic law of entropy as it was the only one that made his equation work. Therefore, he used the Boltzmann constant k and his new constant h to explain the blackbody radiation law, which became widely known through his published paper. Finding the empirical law Max Planck produced his law on 19 October 1900 as an improvement upon the Wien approximation, published in 1896 by Wilhelm Wien, which fit the experimental data at short wavelengths (high frequencies) but deviated from it at long wavelengths (low frequencies). In June 1900, based on heuristic theoretical considerations, Rayleigh had suggested a formula that he proposed might be checked experimentally. The suggestion was that the Stewart–Kirchhoff universal function might be of the form . This was not the celebrated Rayleigh–Jeans formula , which did not emerge until 1905, though it did reduce to the latter for long wavelengths, which are the relevant ones here. According to Klein, one may speculate that it is likely that Planck had seen this suggestion though he did not mention it in his papers of 1900 and 1901. Planck would have been aware of various other proposed formulas which had been offered. On 7 October 1900, Rubens told Planck that in the complementary domain (long wavelength, low frequency), and only there, Rayleigh's 1900 formula fitted the observed data well. For long wavelengths, Rayleigh's 1900 heuristic formula approximately meant that energy was proportional to temperature, . It is known that and this leads to and thence to for long wavelengths. But for short wavelengths, the Wien formula leads to and thence to for short wavelengths. Planck perhaps patched together these two heuristic formulas, for long and for short wavelengths, to produce a formula This led Planck to the formula where Planck used the symbols and to denote empirical fitting constants. Planck sent this result to Rubens, who compared it with his and Kurlbaum's observational data and found that it fitted for all wavelengths remarkably well. On 19 October 1900, Rubens and Kurlbaum briefly reported the fit to the data, and Planck added a short presentation to give a theoretical sketch to account for his formula. Within a week, Rubens and Kurlbaum gave a fuller report of their measurements confirming Planck's law. 
Their technique for spectral resolution of the longer wavelength radiation was called the residual ray method. The rays were repeatedly reflected from polished crystal surfaces, and the rays that made it all the way through the process were 'residual', and were of wavelengths preferentially reflected by crystals of suitably specific materials. Trying to find a physical explanation of the law Once Planck had discovered the empirically fitting function, he constructed a physical derivation of this law. His thinking revolved around entropy rather than being directly about temperature. Planck considered a cavity with perfectly reflective walls; inside the cavity, there are finitely many distinct but identically constituted resonant oscillatory bodies of definite magnitude, with several such oscillators at each of finitely many characteristic frequencies. These hypothetical oscillators were for Planck purely imaginary theoretical investigative probes, and he said of them that such oscillators do not need to "really exist somewhere in nature, provided their existence and their properties are consistent with the laws of thermodynamics and electrodynamics.". Planck did not attribute any definite physical significance to his hypothesis of resonant oscillators but rather proposed it as a mathematical device that enabled him to derive a single expression for the black body spectrum that matched the empirical data at all wavelengths. He tentatively mentioned the possible connection of such oscillators with atoms. In a sense, the oscillators corresponded to Planck's speck of carbon; the size of the speck could be small regardless of the size of the cavity, provided the speck effectively transduced energy between radiative wavelength modes. Partly following a heuristic method of calculation pioneered by Boltzmann for gas molecules, Planck considered the possible ways of distributing electromagnetic energy over the different modes of his hypothetical charged material oscillators. This acceptance of the probabilistic approach, following Boltzmann, for Planck was a radical change from his former position, which till then had deliberately opposed such thinking proposed by Boltzmann. In Planck's words, "I considered the [quantum hypothesis] a purely formal assumption, and I did not give it much thought except for this: that I had obtained a positive result under any circumstances and at whatever cost." Heuristically, Boltzmann had distributed the energy in arbitrary merely mathematical quanta , which he had proceeded to make tend to zero in magnitude, because the finite magnitude had served only to allow definite counting for the sake of mathematical calculation of probabilities, and had no physical significance. Referring to a new universal constant of nature, , Planck supposed that, in the several oscillators of each of the finitely many characteristic frequencies, the total energy was distributed to each in an integer multiple of a definite physical unit of energy, , characteristic of the respective characteristic frequency. His new universal constant of nature, , is now known as the Planck constant. Planck explained further that the respective definite unit, , of energy should be proportional to the respective characteristic oscillation frequency of the hypothetical oscillator, and in 1901 he expressed this with the constant of proportionality : Planck did not propose that light propagating in free space is quantized. 
The idea of quantization of the free electromagnetic field was developed later, and eventually incorporated into what we now know as quantum field theory. In 1906, Planck acknowledged that his imaginary resonators, having linear dynamics, did not provide a physical explanation for energy transduction between frequencies. Present-day physics explains the transduction between frequencies in the presence of atoms by their quantum excitability, following Einstein. Planck believed that in a cavity with perfectly reflecting walls and with no matter present, the electromagnetic field cannot exchange energy between frequency components. This is because of the linearity of Maxwell's equations. Present-day quantum field theory predicts that, in the absence of matter, the electromagnetic field obeys nonlinear equations and in that sense does self-interact. Such interaction in the absence of matter has not yet been directly measured because it would require very high intensities and very sensitive and low-noise detectors, which are still in the process of being constructed. Planck believed that a field with no interactions neither obeys nor violates the classical principle of equipartition of energy, and instead remains exactly as it was when introduced, rather than evolving into a black body field. Thus, the linearity of his mechanical assumptions precluded Planck from having a mechanical explanation of the maximization of the entropy of the thermodynamic equilibrium thermal radiation field. This is why he had to resort to Boltzmann's probabilistic arguments. Planck's law may be regarded as fulfilling the prediction of Gustav Kirchhoff that his law of thermal radiation was of the highest importance. In his mature presentation of his own law, Planck offered a thorough and detailed theoretical proof for Kirchhoff's law, theoretical proof of which until then had been sometimes debated, partly because it was said to rely on unphysical theoretical objects, such as Kirchhoff's perfectly absorbing infinitely thin black surface. Subsequent events It was not until five years after Planck made his heuristic assumption of abstract elements of energy or of action that Albert Einstein conceived of really existing quanta of light in 1905 as a revolutionary explanation of black-body radiation, of photoluminescence, of the photoelectric effect, and of the ionization of gases by ultraviolet light. In 1905, "Einstein believed that Planck's theory could not be made to agree with the idea of light quanta, a mistake he corrected in 1906." Contrary to Planck's beliefs of the time, Einstein proposed a model and formula whereby light was emitted, absorbed, and propagated in free space in energy quanta localized in points of space. As an introduction to his reasoning, Einstein recapitulated Planck's model of hypothetical resonant material electric oscillators as sources and sinks of radiation, but then he offered a new argument, disconnected from that model, but partly based on a thermodynamic argument of Wien, in which Planck's formula played no role. Einstein gave the energy content of such quanta in the form . Thus Einstein was contradicting the undulatory theory of light held by Planck. In 1910, criticizing a manuscript sent to him by Planck, knowing that Planck was a steady supporter of Einstein's theory of special relativity, Einstein wrote to Planck: "To me it seems absurd to have energy continuously distributed in space without assuming an aether." 
According to Thomas Kuhn, it was not till 1908 that Planck more or less accepted part of Einstein's arguments for physical as distinct from abstract mathematical discreteness in thermal radiation physics. Still in 1908, considering Einstein's proposal of quantal propagation, Planck opined that such a revolutionary step was perhaps unnecessary. Until then, Planck had been consistent in thinking that discreteness of action quanta was to be found neither in his resonant oscillators nor in the propagation of thermal radiation. Kuhn wrote that, in Planck's earlier papers and in his 1906 monograph, there is no "mention of discontinuity, [nor] of talk of a restriction on oscillator energy, [nor of] any formula like ." Kuhn pointed out that his study of Planck's papers of 1900 and 1901, and of his monograph of 1906, had led him to "heretical" conclusions, contrary to the widespread assumptions of others who saw Planck's writing only from the perspective of later, anachronistic, viewpoints. Kuhn's conclusions, finding a period till 1908, when Planck consistently held his 'first theory', have been accepted by other historians. In the second edition of his monograph, in 1912, Planck sustained his dissent from Einstein's proposal of light quanta. He proposed in some detail that absorption of light by his virtual material resonators might be continuous, occurring at a constant rate in equilibrium, as distinct from quantal absorption. Only emission was quantal. This has at times been called Planck's "second theory". It was not till 1919 that Planck in the third edition of his monograph more or less accepted his 'third theory', that both emission and absorption of light were quantal. The colourful term "ultraviolet catastrophe" was given by Paul Ehrenfest in 1911 to the paradoxical result that the total energy in the cavity tends to infinity when the equipartition theorem of classical statistical mechanics is (mistakenly) applied to black-body radiation. But this had not been part of Planck's thinking, because he had not tried to apply the doctrine of equipartition: when he made his discovery in 1900, he had not noticed any sort of "catastrophe". It was first noted by Lord Rayleigh in 1900, and then in 1901 by Sir James Jeans; and later, in 1905, by Einstein when he wanted to support the idea that light propagates as discrete packets, later called 'photons', and by Rayleigh and by Jeans. In 1913, Bohr gave another formula with a further different physical meaning to the quantity . In contrast to Planck's and Einstein's formulas, Bohr's formula referred explicitly and categorically to energy levels of atoms. Bohr's formula was where and denote the energy levels of quantum states of an atom, with quantum numbers and . The symbol denotes the frequency of a quantum of radiation that can be emitted or absorbed as the atom passes between those two quantum states. In contrast to Planck's model, the frequency has no immediate relation to frequencies that might describe those quantum states themselves. Later, in 1924, Satyendra Nath Bose developed the theory of the statistical mechanics of photons, which allowed a theoretical derivation of Planck's law. The actual word 'photon' was invented still later, by G.N. Lewis in 1926, who mistakenly believed that photons were conserved, contrary to Bose–Einstein statistics; nevertheless the word 'photon' was adopted to express the Einstein postulate of the packet nature of light propagation. 
In an electromagnetic field isolated in a vacuum in a vessel with perfectly reflective walls, such as was considered by Planck, indeed the photons would be conserved according to Einstein's 1905 model, but Lewis was referring to a field of photons considered as a system closed with respect to ponderable matter but open to exchange of electromagnetic energy with a surrounding system of ponderable matter, and he mistakenly imagined that still the photons were conserved, being stored inside atoms. Ultimately, Planck's law of black-body radiation contributed to Einstein's concept of quanta of light carrying linear momentum, which became the fundamental basis for the development of quantum mechanics. The above-mentioned linearity of Planck's mechanical assumptions, not allowing for energetic interactions between frequency components, was superseded in 1925 by Heisenberg's original quantum mechanics. In his paper submitted on 29 July 1925, Heisenberg's theory accounted for Bohr's above-mentioned formula of 1913. It admitted non-linear oscillators as models of atomic quantum states, allowing energetic interaction between their own multiple internal discrete Fourier frequency components, on the occasions of emission or absorption of quanta of radiation. The frequency of a quantum of radiation was that of a definite coupling between internal atomic meta-stable oscillatory quantum states. At that time, Heisenberg knew nothing of matrix algebra, but Max Born read the manuscript of Heisenberg's paper and recognized the matrix character of Heisenberg's theory. Then Born and Jordan published an explicitly matrix theory of quantum mechanics, based on, but in form distinctly different from, Heisenberg's original quantum mechanics; it is the Born and Jordan matrix theory that is today called matrix mechanics. Heisenberg's explanation of the Planck oscillators, as non-linear effects apparent as Fourier modes of transient processes of emission or absorption of radiation, showed why Planck's oscillators, viewed as enduring physical objects such as might be envisaged by classical physics, did not give an adequate explanation of the phenomena. Nowadays, as a statement of the energy of a light quantum, often one finds the formula , where , and denotes angular frequency, and less often the equivalent formula . This statement about a really existing and propagating light quantum, based on Einstein's, has a physical meaning different from that of Planck's above statement about the abstract energy units to be distributed amongst his hypothetical resonant material oscillators. An article by Helge Kragh published in Physics World gives an account of this history.
See also: Emissivity, Radiance, Sakuma–Hattori equation
Dimensional analysis
In engineering and science, dimensional analysis is the analysis of the relationships between different physical quantities by identifying their base quantities (such as length, mass, time, and electric current) and units of measurement (such as metres and grams) and tracking these dimensions as calculations or comparisons are performed. The term dimensional analysis is also used to refer to conversion of units from one dimensional unit to another, which can be used to evaluate scientific formulae. Commensurable physical quantities are of the same kind and have the same dimension, and can be directly compared to each other, even if they are expressed in differing units of measurement; e.g., metres and feet, grams and pounds, seconds and years. Incommensurable physical quantities are of different kinds and have different dimensions, and can not be directly compared to each other, no matter what units they are expressed in, e.g. metres and grams, seconds and grams, metres and seconds. For example, asking whether a gram is larger than an hour is meaningless. Any physically meaningful equation, or inequality, must have the same dimensions on its left and right sides, a property known as dimensional homogeneity. Checking for dimensional homogeneity is a common application of dimensional analysis, serving as a plausibility check on derived equations and computations. It also serves as a guide and constraint in deriving equations that may describe a physical system in the absence of a more rigorous derivation. The concept of physical dimension or quantity dimension, and of dimensional analysis, was introduced by Joseph Fourier in 1822. Formulation The Buckingham π theorem describes how every physically meaningful equation involving variables can be equivalently rewritten as an equation of dimensionless parameters, where m is the rank of the dimensional matrix. Furthermore, and most importantly, it provides a method for computing these dimensionless parameters from the given variables. A dimensional equation can have the dimensions reduced or eliminated through nondimensionalization, which begins with dimensional analysis, and involves scaling quantities by characteristic units of a system or physical constants of nature. This may give insight into the fundamental properties of the system, as illustrated in the examples below. The dimension of a physical quantity can be expressed as a product of the base physical dimensions such as length, mass and time, each raised to an integer (and occasionally rational) power. The dimension of a physical quantity is more fundamental than some scale or unit used to express the amount of that physical quantity. For example, mass is a dimension, while the kilogram is a particular reference quantity chosen to express a quantity of mass. The choice of unit is arbitrary, and its choice is often based on historical precedent. Natural units, being based on only universal constants, may be thought of as being "less arbitrary". There are many possible choices of base physical dimensions. The SI standard selects the following dimensions and corresponding dimension symbols: time (T), length (L), mass (M), electric current (I), absolute temperature (Θ), amount of substance (N) and luminous intensity (J). The symbols are by convention usually written in roman sans serif typeface. Mathematically, the dimension of the quantity is given by where , , , , , , are the dimensional exponents. 
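The exponent notation just introduced is easy to mechanize. The sketch below is illustrative only (not a standard library): it stores a dimension as the tuple of exponents over the SI base dimensions, taken in the order (T, L, M, I, Θ, N, J); multiplying quantities adds exponents, dividing subtracts them, and raising to a power scales them, which is all that the simple cases listed further on require.

```python
# Minimal sketch (illustrative only): a dimension as a tuple of exponents over
# the SI base dimensions, in the order (T, L, M, I, Theta, N, J).
from dataclasses import dataclass

BASE = ("T", "L", "M", "I", "Theta", "N", "J")

@dataclass(frozen=True)
class Dim:
    exps: tuple  # one integer (occasionally rational) exponent per base dimension

    def __mul__(self, other):      # multiplying quantities adds exponents
        return Dim(tuple(a + b for a, b in zip(self.exps, other.exps)))

    def __truediv__(self, other):  # dividing quantities subtracts exponents
        return Dim(tuple(a - b for a, b in zip(self.exps, other.exps)))

    def __pow__(self, n):          # raising to a power scales exponents
        return Dim(tuple(a * n for a in self.exps))

    def __str__(self):
        parts = [f"{s}^{e}" for s, e in zip(BASE, self.exps) if e != 0]
        return " ".join(parts) or "1"   # all exponents zero: dimension one

T = Dim((1, 0, 0, 0, 0, 0, 0))   # time
L = Dim((0, 1, 0, 0, 0, 0, 0))   # length
M = Dim((0, 0, 1, 0, 0, 0, 0))   # mass

speed        = L / T
acceleration = speed / T
force        = M * acceleration
pressure     = force / L**2
energy       = force * L

print("force:   ", force)     # T^-2 L^1 M^1
print("pressure:", pressure)  # T^-2 L^-1 M^1
print("energy:  ", energy)    # T^-2 L^2 M^1
```

Dividing two quantities of the same kind gives the dimension one, which is the sense in which a conversion factor such as 2.54 cm/in is dimensionless.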
Other physical quantities could be defined as the base quantities, as long as they form a basis – for instance, one could replace the dimension (I) of electric current of the SI basis with a dimension (Q) of electric charge, since . A quantity that has only (with all other exponents zero) is known as a geometric quantity. A quantity that has only both and is known as a kinematic quantity. A quantity that has only all of , , and is known as a dynamic quantity. A quantity that has all exponents null is said to have dimension one. The unit chosen to express a physical quantity and its dimension are related, but not identical concepts. The units of a physical quantity are defined by convention and related to some standard; e.g., length may have units of metres, feet, inches, miles or micrometres; but any length always has a dimension of L, no matter what units of length are chosen to express it. Two different units of the same physical quantity have conversion factors that relate them. For example, ; in this case 2.54 cm/in is the conversion factor, which is itself dimensionless. Therefore, multiplying by that conversion factor does not change the dimensions of a physical quantity. There are also physicists who have cast doubt on the very existence of incompatible fundamental dimensions of physical quantity, although this does not invalidate the usefulness of dimensional analysis. Simple cases As examples, the dimension of the physical quantity speed is The dimension of the physical quantity acceleration is The dimension of the physical quantity force is The dimension of the physical quantity pressure is The dimension of the physical quantity energy is The dimension of the physical quantity power is The dimension of the physical quantity electric charge is The dimension of the physical quantity voltage is The dimension of the physical quantity capacitance is Rayleigh's method In dimensional analysis, Rayleigh's method is a conceptual tool used in physics, chemistry, and engineering. It expresses a functional relationship of some variables in the form of an exponential equation. It was named after Lord Rayleigh. The method involves the following steps: Gather all the independent variables that are likely to influence the dependent variable. If is a variable that depends upon independent variables , , , ..., , then the functional equation can be written as . Write the above equation in the form , where is a dimensionless constant and , , , ..., are arbitrary exponents. Express each of the quantities in the equation in some base units in which the solution is required. By using dimensional homogeneity, obtain a set of simultaneous equations involving the exponents , , , ..., . Solve these equations to obtain the values of the exponents , , , ..., . Substitute the values of exponents in the main equation, and form the non-dimensional parameters by grouping the variables with like exponents. As a drawback, Rayleigh's method does not provide any information regarding number of dimensionless groups to be obtained as a result of dimensional analysis. Concrete numbers and base units Many parameters and measurements in the physical sciences and engineering are expressed as a concrete number—a numerical quantity and a corresponding dimensional unit. Often a quantity is expressed in terms of several other quantities; for example, speed is a combination of length and time, e.g. 60 kilometres per hour or 1.4 kilometres per second. Compound relations with "per" are expressed with division, e.g. 60 km/h. 
Other relations can involve multiplication (often shown with a centered dot or juxtaposition), powers (like m2 for square metres), or combinations thereof. A set of base units for a system of measurement is a conventionally chosen set of units, none of which can be expressed as a combination of the others and in terms of which all the remaining units of the system can be expressed. For example, units for length and time are normally chosen as base units. Units for volume, however, can be factored into the base units of length (m3), thus they are considered derived or compound units. Sometimes the names of units obscure the fact that they are derived units. For example, a newton (N) is a unit of force, which may be expressed as the product of mass (with unit kg) and acceleration (with unit m⋅s−2). The newton is defined as . Percentages, derivatives and integrals Percentages are dimensionless quantities, since they are ratios of two quantities with the same dimensions. In other words, the % sign can be read as "hundredths", since . Taking a derivative with respect to a quantity divides the dimension by the dimension of the variable that is differentiated with respect to. Thus: position has the dimension L (length); derivative of position with respect to time (, velocity) has dimension T−1L—length from position, time due to the gradient; the second derivative (, acceleration) has dimension . Likewise, taking an integral adds the dimension of the variable one is integrating with respect to, but in the numerator. force has the dimension (mass multiplied by acceleration); the integral of force with respect to the distance the object has travelled (, work) has dimension . In economics, one distinguishes between stocks and flows: a stock has a unit (say, widgets or dollars), while a flow is a derivative of a stock, and has a unit of the form of this unit divided by one of time (say, dollars/year). In some contexts, dimensional quantities are expressed as dimensionless quantities or percentages by omitting some dimensions. For example, debt-to-GDP ratios are generally expressed as percentages: total debt outstanding (dimension of currency) divided by annual GDP (dimension of currency)—but one may argue that, in comparing a stock to a flow, annual GDP should have dimensions of currency/time (dollars/year, for instance) and thus debt-to-GDP should have the unit year, which indicates that debt-to-GDP is the number of years needed for a constant GDP to pay the debt, if all GDP is spent on the debt and the debt is otherwise unchanged. Dimensional homogeneity (commensurability) The most basic rule of dimensional analysis is that of dimensional homogeneity. However, the dimensions form an abelian group under multiplication, so: For example, it makes no sense to ask whether 1 hour is more, the same, or less than 1 kilometre, as these have different dimensions, nor to add 1 hour to 1 kilometre. However, it makes sense to ask whether 1 mile is more, the same, or less than 1 kilometre, being the same dimension of physical quantity even though the units are different. On the other hand, if an object travels 100 km in 2 hours, one may divide these and conclude that the object's average speed was 50 km/h. The rule implies that in a physically meaningful expression only quantities of the same dimension can be added, subtracted, or compared. 
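This rule is easy to check mechanically. The sketch below is illustrative only: it carries a numerical value together with a dictionary of dimensional exponents, lets multiplication combine dimensions freely, and refuses to add quantities whose dimensions differ, which is exactly the check that the worked example in the next paragraph passes or fails.

```python
# Minimal sketch (illustrative only): addition is allowed only between
# quantities whose dimensional exponents agree; multiplication always is.
def mul(q1, q2):
    (v1, d1), (v2, d2) = q1, q2
    dims = {key: d1.get(key, 0) + d2.get(key, 0) for key in set(d1) | set(d2)}
    return v1 * v2, {key: e for key, e in dims.items() if e != 0}

def add(q1, q2):
    (v1, d1), (v2, d2) = q1, q2
    if d1 != d2:
        raise ValueError(f"dimensionally heterogeneous: {d1} vs {d2}")
    return v1 + v2, d1

hour      = (3600.0, {"T": 1})     # 1 hour, expressed in seconds
kilometre = (1000.0, {"L": 1})     # 1 km, expressed in metres
mile      = (1609.344, {"L": 1})   # 1 mile, expressed in metres

print(add(mile, kilometre))                     # fine: both are lengths
print(mul(kilometre, (1 / 3600.0, {"T": -1})))  # 1 km per hour, a speed in m/s
try:
    add(hour, kilometre)                        # meaningless: time plus length
except ValueError as err:
    print("rejected:", err)
```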
For example, if , and denote, respectively, the mass of some man, the mass of a rat and the length of that man, the dimensionally homogeneous expression is meaningful, but the heterogeneous expression is meaningless. However, is fine. Thus, dimensional analysis may be used as a sanity check of physical equations: the two sides of any equation must be commensurable or have the same dimensions. Even when two physical quantities have identical dimensions, it may nevertheless be meaningless to compare or add them. For example, although torque and energy share the dimension , they are fundamentally different physical quantities. To compare, add, or subtract quantities with the same dimensions but expressed in different units, the standard procedure is first to convert them all to the same unit. For example, to compare 32 metres with 35 yards, use to convert 35 yards to 32.004 m. A related principle is that any physical law that accurately describes the real world must be independent of the units used to measure the physical variables. For example, Newton's laws of motion must hold true whether distance is measured in miles or kilometres. This principle gives rise to the form that a conversion factor between two units that measure the same dimension must take multiplication by a simple constant. It also ensures equivalence; for example, if two buildings are the same height in feet, then they must be the same height in metres. Conversion factor In dimensional analysis, a ratio which converts one unit of measure into another without changing the quantity is called a conversion factor. For example, kPa and bar are both units of pressure, and . The rules of algebra allow both sides of an equation to be divided by the same expression, so this is equivalent to . Since any quantity can be multiplied by 1 without changing it, the expression "" can be used to convert from bars to kPa by multiplying it with the quantity to be converted, including the unit. For example, because , and bar/bar cancels out, so . Applications Dimensional analysis is most often used in physics and chemistry – and in the mathematics thereof – but finds some applications outside of those fields as well. Mathematics A simple application of dimensional analysis to mathematics is in computing the form of the volume of an -ball (the solid ball in n dimensions), or the area of its surface, the -sphere: being an -dimensional figure, the volume scales as , while the surface area, being -dimensional, scales as . Thus the volume of the -ball in terms of the radius is , for some constant . Determining the constant takes more involved mathematics, but the form can be deduced and checked by dimensional analysis alone. Finance, economics, and accounting In finance, economics, and accounting, dimensional analysis is most commonly referred to in terms of the distinction between stocks and flows. More generally, dimensional analysis is used in interpreting various financial ratios, economics ratios, and accounting ratios. For example, the P/E ratio has dimensions of time (unit: year), and can be interpreted as "years of earnings to earn the price paid". In economics, debt-to-GDP ratio also has the unit year (debt has a unit of currency, GDP has a unit of currency/year). Velocity of money has a unit of 1/years (GDP/money supply has a unit of currency/year over currency): how often a unit of currency circulates per year. 
Annual continuously compounded interest rates and simple interest rates are often expressed as a percentage (adimensional quantity) while time is expressed as an adimensional quantity consisting of the number of years. However, if the time includes year as the unit of measure, the dimension of the rate is 1/year. Of course, there is nothing special (apart from the usual convention) about using year as a unit of time: any other time unit can be used. Furthermore, if rate and time include their units of measure, the use of different units for each is not problematic. In contrast, rate and time need to refer to a common period if they are adimensional. (Note that effective interest rates can only be defined as adimensional quantities.) In financial analysis, bond duration can be defined as , where is the value of a bond (or portfolio), is the continuously compounded interest rate and is a derivative. From the previous point, the dimension of is 1/time. Therefore, the dimension of duration is time (usually expressed in years) because is in the "denominator" of the derivative. Fluid mechanics In fluid mechanics, dimensional analysis is performed to obtain dimensionless pi terms or groups. According to the principles of dimensional analysis, any prototype can be described by a series of these terms or groups that describe the behaviour of the system. Using suitable pi terms or groups, it is possible to develop a similar set of pi terms for a model that has the same dimensional relationships. In other words, pi terms provide a shortcut to developing a model representing a certain prototype. Common dimensionless groups in fluid mechanics include: Reynolds number, generally important in all types of fluid problems: Froude number, modeling flow with a free surface: Euler number, used in problems in which pressure is of interest: Mach number, important in high speed flows where the velocity approaches or exceeds the local speed of sound: where is the local speed of sound. History The origins of dimensional analysis have been disputed by historians. The first written application of dimensional analysis has been credited to François Daviet, a student of Lagrange, in a 1799 article at the Turin Academy of Science. This led to the conclusion that meaningful laws must be homogeneous equations in their various units of measurement, a result which was eventually later formalized in the Buckingham π theorem. Simeon Poisson also treated the same problem of the parallelogram law by Daviet, in his treatise of 1811 and 1833 (vol I, p. 39). In the second edition of 1833, Poisson explicitly introduces the term dimension instead of the Daviet homogeneity. In 1822, the important Napoleonic scientist Joseph Fourier made the first credited important contributions based on the idea that physical laws like should be independent of the units employed to measure the physical variables. James Clerk Maxwell played a major role in establishing modern use of dimensional analysis by distinguishing mass, length, and time as fundamental units, while referring to other units as derived. Although Maxwell defined length, time and mass to be "the three fundamental units", he also noted that gravitational mass can be derived from length and time by assuming a form of Newton's law of universal gravitation in which the gravitational constant is taken as unity, thereby defining . 
By assuming a form of Coulomb's law in which the Coulomb constant ke is taken as unity, Maxwell then determined that the dimensions of an electrostatic unit of charge were , which, after substituting his equation for mass, results in charge having the same dimensions as mass, viz. . Dimensional analysis is also used to derive relationships between the physical quantities that are involved in a particular phenomenon that one wishes to understand and characterize. It was used for the first time in this way in 1872 by Lord Rayleigh, who was trying to understand why the sky is blue. Rayleigh first published the technique in his 1877 book The Theory of Sound. The original meaning of the word dimension, in Fourier's Theorie de la Chaleur, was the numerical value of the exponents of the base units. For example, acceleration was considered to have the dimension 1 with respect to the unit of length, and the dimension −2 with respect to the unit of time. This was slightly changed by Maxwell, who said the dimensions of acceleration are T−2L, instead of just the exponents. Examples A simple example: period of a harmonic oscillator What is the period of oscillation of a mass attached to an ideal linear spring with spring constant suspended in gravity of strength ? That period is the solution for of some dimensionless equation in the variables , , , and . The four quantities have the following dimensions: [T]; [M]; [M/T2]; and [L/T2]. From these we can form only one dimensionless product of powers of our chosen variables, , and putting for some dimensionless constant gives the dimensionless equation sought. The dimensionless product of powers of variables is sometimes referred to as a dimensionless group of variables; here the term "group" means "collection" rather than mathematical group. They are often called dimensionless numbers as well. The variable does not occur in the group. It is easy to see that it is impossible to form a dimensionless product of powers that combines with , , and , because is the only quantity that involves the dimension L. This implies that in this problem the is irrelevant. Dimensional analysis can sometimes yield strong statements about the irrelevance of some quantities in a problem, or the need for additional parameters. If we have chosen enough variables to properly describe the problem, then from this argument we can conclude that the period of the mass on the spring is independent of : it is the same on the earth or the moon. The equation demonstrating the existence of a product of powers for our problem can be written in an entirely equivalent way: , for some dimensionless constant (equal to from the original dimensionless equation). When faced with a case where dimensional analysis rejects a variable (, here) that one intuitively expects to belong in a physical description of the situation, another possibility is that the rejected variable is in fact relevant, but that some other relevant variable has been omitted, which might combine with the rejected variable to form a dimensionless quantity. That is, however, not the case here. When dimensional analysis yields only one dimensionless group, as here, there are no unknown functions, and the solution is said to be "complete" – although it still may involve unknown dimensionless constants, such as . A more complex example: energy of a vibrating wire Consider the case of a vibrating wire of length (L) vibrating with an amplitude (L). 
The wire has a linear density (M/L) and is under tension (LM/T2), and we want to know the energy (L2M/T2) in the wire. Let and be two dimensionless products of powers of the variables chosen, given by The linear density of the wire is not involved. The two groups found can be combined into an equivalent form as an equation where is some unknown function, or, equivalently as where is some other unknown function. Here the unknown function implies that our solution is now incomplete, but dimensional analysis has given us something that may not have been obvious: the energy is proportional to the first power of the tension. Barring further analytical analysis, we might proceed to experiments to discover the form for the unknown function . But our experiments are simpler than in the absence of dimensional analysis. We'd perform none to verify that the energy is proportional to the tension. Or perhaps we might guess that the energy is proportional to , and so infer that . The power of dimensional analysis as an aid to experiment and forming hypotheses becomes evident. The power of dimensional analysis really becomes apparent when it is applied to situations, unlike those given above, that are more complicated, the set of variables involved are not apparent, and the underlying equations hopelessly complex. Consider, for example, a small pebble sitting on the bed of a river. If the river flows fast enough, it will actually raise the pebble and cause it to flow along with the water. At what critical velocity will this occur? Sorting out the guessed variables is not so easy as before. But dimensional analysis can be a powerful aid in understanding problems like this, and is usually the very first tool to be applied to complex problems where the underlying equations and constraints are poorly understood. In such cases, the answer may depend on a dimensionless number such as the Reynolds number, which may be interpreted by dimensional analysis. A third example: demand versus capacity for a rotating disc Consider the case of a thin, solid, parallel-sided rotating disc of axial thickness (L) and radius (L). The disc has a density (M/L3), rotates at an angular velocity (T−1) and this leads to a stress (T−2L−1M) in the material. There is a theoretical linear elastic solution, given by Lame, to this problem when the disc is thin relative to its radius, the faces of the disc are free to move axially, and the plane stress constitutive relations can be assumed to be valid. As the disc becomes thicker relative to the radius then the plane stress solution breaks down. If the disc is restrained axially on its free faces then a state of plane strain will occur. However, if this is not the case then the state of stress may only be determined though consideration of three-dimensional elasticity and there is no known theoretical solution for this case. An engineer might, therefore, be interested in establishing a relationship between the five variables. Dimensional analysis for this case leads to the following non-dimensional groups: demand/capacity = thickness/radius or aspect ratio = Through the use of numerical experiments using, for example, the finite element method, the nature of the relationship between the two non-dimensional groups can be obtained as shown in the figure. As this problem only involves two non-dimensional groups, the complete picture is provided in a single plot and this can be used as a design/assessment chart for rotating discs. 
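The pi-group bookkeeping behind the wire and disc examples can be mechanized: write the dimensional matrix of the chosen variables over the bases T, L and M, and take its null space; each null-space vector is a set of exponents that yields a dimensionless group. The SymPy sketch below is an added illustration (the symbols E, l, A, s and rho are assumptions standing for energy, length, amplitude, tension and linear density) and recovers one valid pair of groups, equivalent to A/l and l·s/E, with the linear density dropping out as stated above; the same machinery shows g dropping out of the oscillator example.

```python
# Added illustration: Buckingham-pi bookkeeping for the vibrating-wire example
# via the null space of its dimensional matrix.
from sympy import Matrix

# Columns: E (energy), l (length), A (amplitude), s (tension), rho (linear density)
# Rows:    exponents of T, L, M for each variable.
#            E   l   A   s  rho
D = Matrix([[-2,  0,  0, -2,  0],   # T
            [ 2,  1,  1,  1, -1],   # L
            [ 1,  0,  0,  1,  1]])  # M

for v in D.nullspace():
    print(v.T)
# One valid basis (up to ordering and sign):
#   [ 0, -1, 1, 0, 0]  ->  A / l
#   [-1,  1, 0, 1, 0]  ->  l * s / E
# The exponent of rho is zero in every null-space vector, so the linear
# density cannot enter any dimensionless group, and E = s * l * f(A/l):
# the energy is proportional to the first power of the tension.
```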
Properties Mathematical properties The dimensions that can be formed from a given collection of basic physical dimensions, such as T, L, and M, form an abelian group: The identity is written as 1; , and the inverse of L is 1/L or L−1. L raised to any integer power is a member of the group, having an inverse of L or 1/L. The operation of the group is multiplication, having the usual rules for handling exponents. Physically, 1/L can be interpreted as reciprocal length, and 1/T as reciprocal time (see reciprocal second). An abelian group is equivalent to a module over the integers, with the dimensional symbol corresponding to the tuple . When physical measured quantities (be they like-dimensioned or unlike-dimensioned) are multiplied or divided by one other, their dimensional units are likewise multiplied or divided; this corresponds to addition or subtraction in the module. When measurable quantities are raised to an integer power, the same is done to the dimensional symbols attached to those quantities; this corresponds to scalar multiplication in the module. A basis for such a module of dimensional symbols is called a set of base quantities, and all other vectors are called derived units. As in any module, one may choose different bases, which yields different systems of units (e.g., choosing whether the unit for charge is derived from the unit for current, or vice versa). The group identity, the dimension of dimensionless quantities, corresponds to the origin in this module, . In certain cases, one can define fractional dimensions, specifically by formally defining fractional powers of one-dimensional vector spaces, like . However, it is not possible to take arbitrary fractional powers of units, due to representation-theoretic obstructions. One can work with vector spaces with given dimensions without needing to use units (corresponding to coordinate systems of the vector spaces). For example, given dimensions and , one has the vector spaces and , and can define as the tensor product. Similarly, the dual space can be interpreted as having "negative" dimensions. This corresponds to the fact that under the natural pairing between a vector space and its dual, the dimensions cancel, leaving a dimensionless scalar. The set of units of the physical quantities involved in a problem correspond to a set of vectors (or a matrix). The nullity describes some number (e.g., ) of ways in which these vectors can be combined to produce a zero vector. These correspond to producing (from the measurements) a number of dimensionless quantities, . (In fact these ways completely span the null subspace of another different space, of powers of the measurements.) Every possible way of multiplying (and exponentiating) together the measured quantities to produce something with the same unit as some derived quantity can be expressed in the general form Consequently, every possible commensurate equation for the physics of the system can be rewritten in the form Knowing this restriction can be a powerful tool for obtaining new insight into the system. Mechanics The dimension of physical quantities of interest in mechanics can be expressed in terms of base dimensions T, L, and M – these form a 3-dimensional vector space. This is not the only valid choice of base dimensions, but it is the one most commonly used. 
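A minimal illustration of this module structure (a sketch with made-up names, not a library API) stores each dimensional symbol as an integer exponent tuple: multiplying quantities adds the tuples, dividing subtracts them, integer powers act by scalar multiplication, and the dimensionless identity is the zero tuple.

```python
# Sketch of the abelian group / integer module of dimensional symbols.
# Exponent tuples are ordered (T, L, M); all names are illustrative.

DIMENSIONLESS = (0, 0, 0)                  # group identity, origin of the module

def mul(a, b):                             # multiplying quantities adds exponents
    return tuple(x + y for x, y in zip(a, b))

def div(a, b):                             # dividing quantities subtracts exponents
    return tuple(x - y for x, y in zip(a, b))

def power(a, n):                           # integer powers = scalar multiplication
    return tuple(n * x for x in a)

TIME, LENGTH, MASS = (1, 0, 0), (0, 1, 0), (0, 0, 1)

VELOCITY = div(LENGTH, TIME)               # (-1, 1, 0): L T^-1
FORCE    = mul(MASS, div(VELOCITY, TIME))  # (-2, 1, 1): T^-2 L M
ENERGY   = mul(FORCE, LENGTH)              # (-2, 2, 1): T^-2 L^2 M

# The inverse of L is L^-1, and their product is the identity:
assert mul(LENGTH, power(LENGTH, -1)) == DIMENSIONLESS
```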
For example, one might choose force, length and mass as the base dimensions (as some have done), with associated dimensions F, L, M; this corresponds to a different basis, and one may convert between these representations by a change of basis. The choice of the base set of dimensions is thus a convention, with the benefit of increased utility and familiarity. The choice of base dimensions is not entirely arbitrary, because they must form a basis: they must span the space, and be linearly independent. For example, F, L, M form a set of fundamental dimensions because they form a basis that is equivalent to T, L, M: the former can be expressed as [F = LM/T2], L, M, while the latter can be expressed as [T = (LM/F)1/2], L, M. On the other hand, length, velocity and time (T, L, V) do not form a set of base dimensions for mechanics, for two reasons: There is no way to obtain mass – or anything derived from it, such as force – without introducing another base dimension (thus, they do not span the space). Velocity, being expressible in terms of length and time, is redundant (the set is not linearly independent). Other fields of physics and chemistry Depending on the field of physics, it may be advantageous to choose one or another extended set of dimensional symbols. In electromagnetism, for example, it may be useful to use dimensions of T, L, M and Q, where Q represents the dimension of electric charge. In thermodynamics, the base set of dimensions is often extended to include a dimension for temperature, Θ. In chemistry, the amount of substance (the number of molecules divided by the Avogadro constant, ≈ ) is also defined as a base dimension, N. In the interaction of relativistic plasma with strong laser pulses, a dimensionless relativistic similarity parameter, connected with the symmetry properties of the collisionless Vlasov equation, is constructed from the plasma-, electron- and critical-densities in addition to the electromagnetic vector potential. The choice of the dimensions or even the number of dimensions to be used in different fields of physics is to some extent arbitrary, but consistency in use and ease of communications are common and necessary features. Polynomials and transcendental functions Bridgman's theorem restricts the type of function that can be used to define a physical quantity from general (dimensionally compounded) quantities to only products of powers of the quantities, unless some of the independent quantities are algebraically combined to yield dimensionless groups, whose functions are grouped together in the dimensionless numeric multiplying factor. This excludes polynomials of more than one term or transcendental functions not of that form. Scalar arguments to transcendental functions such as exponential, trigonometric and logarithmic functions, or to inhomogeneous polynomials, must be dimensionless quantities. (Note: this requirement is somewhat relaxed in Siano's orientational analysis described below, in which the square of certain dimensioned quantities are dimensionless.) While most mathematical identities about dimensionless numbers translate in a straightforward manner to dimensional quantities, care must be taken with logarithms of ratios: the identity , where the logarithm is taken in any base, holds for dimensionless numbers and , but it does not hold if and are dimensional, because in this case the left-hand side is well-defined but the right-hand side is not. 
Similarly, while one can evaluate monomials of dimensional quantities, one cannot evaluate polynomials of mixed degree with dimensionless coefficients on dimensional quantities: for , the expression makes sense (as an area), while for , the expression does not make sense. However, polynomials of mixed degree can make sense if the coefficients are suitably chosen physical quantities that are not dimensionless. For example, This is the height to which an object rises in time  if the acceleration of gravity is 9.8 and the initial upward speed is 500 . It is not necessary for to be in seconds. For example, suppose  = 0.01 minutes. Then the first term would be Combining units and numerical values The value of a dimensional physical quantity is written as the product of a unit [] within the dimension and a dimensionless numerical value or numerical factor, . When like-dimensioned quantities are added or subtracted or compared, it is convenient to express them in the same unit so that the numerical values of these quantities may be directly added or subtracted. But, in concept, there is no problem adding quantities of the same dimension expressed in different units. For example, 1 metre added to 1 foot is a length, but one cannot derive that length by simply adding 1 and 1. A conversion factor, which is a ratio of like-dimensioned quantities and is equal to the dimensionless unity, is needed: is identical to The factor 0.3048 m/ft is identical to the dimensionless 1, so multiplying by this conversion factor changes nothing. Then when adding two quantities of like dimension, but expressed in different units, the appropriate conversion factor, which is essentially the dimensionless 1, is used to convert the quantities to the same unit so that their numerical values can be added or subtracted. Only in this manner is it meaningful to speak of adding like-dimensioned quantities of differing units. Quantity equations A quantity equation, also sometimes called a complete equation, is an equation that remains valid independently of the unit of measurement used when expressing the physical quantities. In contrast, in a numerical-value equation, just the numerical values of the quantities occur, without units. Therefore, it is only valid when each numerical values is referenced to a specific unit. For example, a quantity equation for displacement as speed multiplied by time difference would be: for = 5 m/s, where and may be expressed in any units, converted if necessary. In contrast, a corresponding numerical-value equation would be: where is the numeric value of when expressed in seconds and is the numeric value of when expressed in metres. Generally, the use of numerical-value equations is discouraged. Dimensionless concepts Constants The dimensionless constants that arise in the results obtained, such as the in the Poiseuille's Law problem and the in the spring problems discussed above, come from a more detailed analysis of the underlying physics and often arise from integrating some differential equation. Dimensional analysis itself has little to say about these constants, but it is useful to know that they very often have a magnitude of order unity. This observation can allow one to sometimes make "back of the envelope" calculations about the phenomenon of interest, and therefore be able to more efficiently design experiments to measure it, or to judge whether it is important, etc. 
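The rules just described, namely that only like-dimensioned quantities may be added or compared and that units are converted by multiplying with a factor equal to the dimensionless 1, can be packaged as a small value-plus-dimension wrapper. The class below is a hypothetical sketch rather than a real units library, and it hard-codes the 0.3048 m/ft factor used above.

```python
# Hypothetical sketch of a quantity as a (numerical value, dimension) pair.

class Quantity:
    def __init__(self, value, dim):
        self.value = value              # numerical factor
        self.dim = dim                  # exponent tuple over (T, L, M)

    def __add__(self, other):
        if self.dim != other.dim:       # e.g. metres + seconds is meaningless
            raise ValueError("cannot add quantities of different dimensions")
        return Quantity(self.value + other.value, self.dim)

    def __mul__(self, other):
        return Quantity(self.value * other.value,
                        tuple(a + b for a, b in zip(self.dim, other.dim)))

LENGTH, DIMLESS = (0, 1, 0), (0, 0, 0)

one_metre = Quantity(1.0, LENGTH)
one_foot  = Quantity(1.0, LENGTH)       # same dimension, different unit

# The factor 0.3048 m/ft is the dimensionless 1, so multiplying by it changes
# only the numerical value, allowing the two lengths to be added:
ft_to_m = Quantity(0.3048, DIMLESS)
total = one_metre + one_foot * ft_to_m
print(total.value)                      # 1.3048 (metres)

# Quantity(1.0, LENGTH) + Quantity(1.0, (1, 0, 0))  -> raises ValueError
```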
Formalisms Paradoxically, dimensional analysis can be a useful tool even if all the parameters in the underlying theory are dimensionless, e.g., lattice models such as the Ising model can be used to study phase transitions and critical phenomena. Such models can be formulated in a purely dimensionless way. As we approach the critical point closer and closer, the distance over which the variables in the lattice model are correlated (the so-called correlation length, ) becomes larger and larger. Now, the correlation length is the relevant length scale related to critical phenomena, so one can, e.g., surmise on "dimensional grounds" that the non-analytical part of the free energy per lattice site should be , where is the dimension of the lattice. It has been argued by some physicists, e.g., Michael J. Duff, that the laws of physics are inherently dimensionless. The fact that we have assigned incompatible dimensions to Length, Time and Mass is, according to this point of view, just a matter of convention, borne out of the fact that before the advent of modern physics, there was no way to relate mass, length, and time to each other. The three independent dimensionful constants: , , and , in the fundamental equations of physics must then be seen as mere conversion factors to convert Mass, Time and Length into each other. Just as in the case of critical properties of lattice models, one can recover the results of dimensional analysis in the appropriate scaling limit; e.g., dimensional analysis in mechanics can be derived by reinserting the constants , , and (but we can now consider them to be dimensionless) and demanding that a nonsingular relation between quantities exists in the limit , and . In problems involving a gravitational field the latter limit should be taken such that the field stays finite. Dimensional equivalences Following are tables of commonly occurring expressions in physics, related to the dimensions of energy, momentum, and force. SI units Programming languages Dimensional correctness as part of type checking has been studied since 1977. Implementations for Ada and C++ were described in 1985 and 1988. Kennedy's 1996 thesis describes an implementation in Standard ML, and later in F#. There are implementations for Haskell, OCaml, and Rust, Python, and a code checker for Fortran. Griffioen's 2019 thesis extended Kennedy's Hindley–Milner type system to support Hart's matrices. McBride and Nordvall-Forsberg show how to use dependent types to extend type systems for units of measure. Mathematica 13.2 has a function for transformations with quantities named NondimensionalizationTransform that applies a nondimensionalization transform to an equation. Mathematica also has a function to find the dimensions of a unit such as 1 J named UnitDimensions. Mathematica also has a function that will find dimensionally equivalent combinations of a subset of physical quantities named DimensionalCombations. Mathematica can also factor out certain dimension with UnitDimensions by specifying an argument to the function UnityDimensions. For example, you can use UnityDimensions to factor out angles. In addition to UnitDimensions, Mathematica can find the dimensions of a QuantityVariable with the function QuantityVariableDimensions. Geometry: position vs. displacement Affine quantities Some discussions of dimensional analysis implicitly describe all quantities as mathematical vectors. 
In mathematics scalars are considered a special case of vectors; vectors can be added to or subtracted from other vectors, and, inter alia, multiplied or divided by scalars. If a vector is used to define a position, this assumes an implicit point of reference: an origin. While this is useful and often perfectly adequate, allowing many important errors to be caught, it can fail to model certain aspects of physics. A more rigorous approach requires distinguishing between position and displacement (or moment in time versus duration, or absolute temperature versus temperature change). Consider points on a line, each with a position with respect to a given origin, and distances among them. Positions and displacements all have units of length, but their meaning is not interchangeable: adding two displacements should yield a new displacement (walking ten paces then twenty paces gets you thirty paces forward), adding a displacement to a position should yield a new position (walking one block down the street from an intersection gets you to the next intersection), subtracting two positions should yield a displacement, but one may not add two positions. This illustrates the subtle distinction between affine quantities (ones modeled by an affine space, such as position) and vector quantities (ones modeled by a vector space, such as displacement). Vector quantities may be added to each other, yielding a new vector quantity, and a vector quantity may be added to a suitable affine quantity (a vector space acts on an affine space), yielding a new affine quantity. Affine quantities cannot be added, but may be subtracted, yielding relative quantities which are vectors, and these relative differences may then be added to each other or to an affine quantity. Properly then, positions have dimension of affine length, while displacements have dimension of vector length. To assign a number to an affine unit, one must not only choose a unit of measurement, but also a point of reference, while to assign a number to a vector unit only requires a unit of measurement. Thus some physical quantities are better modeled by vectorial quantities while others tend to require affine representation, and the distinction is reflected in their dimensional analysis. This distinction is particularly important in the case of temperature, for which the numeric value of absolute zero is not the origin 0 in some scales. For absolute zero, −273.15 °C ≘ 0 K = 0 °R ≘ −459.67 °F, where the symbol ≘ means corresponds to, since although these values on the respective temperature scales correspond, they represent distinct quantities in the same way that the distances from distinct starting points to the same end point are distinct quantities, and cannot in general be equated. For temperature differences, 1 K = 1 °C ≠ 1 °F = 1 °R. (Here °R refers to the Rankine scale, not the Réaumur scale). Unit conversion for temperature differences is simply a matter of multiplying by, e.g., 1 °F / 1 K (although the ratio is not a constant value). But because some of these scales have origins that do not correspond to absolute zero, conversion from one temperature scale to another requires accounting for that. As a result, simple dimensional analysis can lead to errors if it is ambiguous whether 1 K means the absolute temperature equal to −272.15 °C, or the temperature difference equal to 1 °C. 
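The affine/vector distinction can be made mechanical for the temperature case: absolute temperatures may be subtracted from one another, yielding a temperature difference, and shifted by a difference, but two absolute temperatures may not be added. The classes below are a hypothetical sketch that stores absolute temperatures in kelvins; converting the difference needs only the fixed scale ratio 1 K = 1.8 °F.

```python
# Hypothetical sketch of affine (absolute temperature) vs. vector
# (temperature difference) quantities; values are stored in kelvins.

class TempDifference:                       # vector-like quantity
    def __init__(self, kelvins):
        self.k = kelvins
    def __add__(self, other):               # difference + difference = difference
        if isinstance(other, TempDifference):
            return TempDifference(self.k + other.k)
        return NotImplemented

class AbsoluteTemperature:                  # affine quantity
    def __init__(self, kelvins):
        self.k = kelvins
    def __sub__(self, other):               # position - position = displacement
        return TempDifference(self.k - other.k)
    def __add__(self, other):               # position + displacement = position
        if isinstance(other, TempDifference):
            return AbsoluteTemperature(self.k + other.k)
        raise TypeError("adding two absolute temperatures is meaningless")

boiling  = AbsoluteTemperature(373.15)
freezing = AbsoluteTemperature(273.15)

delta  = boiling - freezing                 # a 100 K (= 100 degC) difference
warmer = freezing + delta                   # back to 373.15 K
print(round(delta.k * 1.8, 2))              # 180.0: the same difference in degF
# boiling + freezing                        # -> TypeError, as it should be
```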
Orientation and frame of reference Similar to the issue of a point of reference is the issue of orientation: a displacement in 2 or 3 dimensions is not just a length, but is a length together with a direction. (In 1 dimension, this issue is equivalent to the distinction between positive and negative.) Thus, to compare or combine two dimensional quantities in multi-dimensional Euclidean space, one also needs a bearing: they need to be compared to a frame of reference. This leads to the extensions discussed below, namely Huntley's directed dimensions and Siano's orientational analysis. Huntley's extensions Huntley has pointed out that a dimensional analysis can become more powerful by discovering new independent dimensions in the quantities under consideration, thus increasing the rank of the dimensional matrix. He introduced two approaches: The magnitudes of the components of a vector are to be considered dimensionally independent. For example, rather than an undifferentiated length dimension L, we may have Lx represent dimension in the x-direction, and so forth. This requirement stems ultimately from the requirement that each component of a physically meaningful equation (scalar, vector, or tensor) must be dimensionally consistent. Mass as a measure of the quantity of matter is to be considered dimensionally independent from mass as a measure of inertia. Directed dimensions As an example of the usefulness of the first approach, suppose we wish to calculate the distance a cannonball travels when fired with a vertical velocity component and a horizontal velocity component , assuming it is fired on a flat surface. Assuming no use of directed lengths, the quantities of interest are then , the distance travelled, with dimension L, , , both dimensioned as T−1L, and the downward acceleration of gravity, with dimension T−2L. With these four quantities, we may conclude that the equation for the range may be written: Or dimensionally from which we may deduce that and , which leaves one exponent undetermined. This is to be expected since we have two fundamental dimensions T and L, and four parameters, with one equation. However, if we use directed length dimensions, then will be dimensioned as T−1L, as T−1L, as L and as T−2L. The dimensional equation becomes: and we may solve completely as , and . The increase in deductive power gained by the use of directed length dimensions is apparent. Huntley's concept of directed length dimensions however has some serious limitations: It does not deal well with vector equations involving the cross product, nor does it handle well the use of angles as physical variables. It also is often quite difficult to assign the L, L, L, L, symbols to the physical variables involved in the problem of interest. He invokes a procedure that involves the "symmetry" of the physical problem. This is often very difficult to apply reliably: It is unclear as to what parts of the problem that the notion of "symmetry" is being invoked. Is it the symmetry of the physical body that forces are acting upon, or to the points, lines or areas at which forces are being applied? What if more than one body is involved with different symmetries? Consider the spherical bubble attached to a cylindrical tube, where one wants the flow rate of air as a function of the pressure difference in the two parts. What are the Huntley extended dimensions of the viscosity of the air contained in the connected parts? What are the extended dimensions of the pressure of the two parts? 
Are they the same or different? These difficulties are responsible for the limited application of Huntley's directed length dimensions to real problems. Quantity of matter In Huntley's second approach, he holds that it is sometimes useful (e.g., in fluid mechanics and thermodynamics) to distinguish between mass as a measure of inertia (inertial mass), and mass as a measure of the quantity of matter. Quantity of matter is defined by Huntley as a quantity only to inertial mass, while not implicating inertial properties. No further restrictions are added to its definition. For example, consider the derivation of Poiseuille's Law. We wish to find the rate of mass flow of a viscous fluid through a circular pipe. Without drawing distinctions between inertial and substantial mass, we may choose as the relevant variables: There are three fundamental variables, so the above five equations will yield two independent dimensionless variables: If we distinguish between inertial mass with dimension and quantity of matter with dimension , then mass flow rate and density will use quantity of matter as the mass parameter, while the pressure gradient and coefficient of viscosity will use inertial mass. We now have four fundamental parameters, and one dimensionless constant, so that the dimensional equation may be written: where now only is an undetermined constant (found to be equal to by methods outside of dimensional analysis). This equation may be solved for the mass flow rate to yield Poiseuille's law. Huntley's recognition of quantity of matter as an independent quantity dimension is evidently successful in the problems where it is applicable, but his definition of quantity of matter is open to interpretation, as it lacks specificity beyond the two requirements he postulated for it. For a given substance, the SI dimension amount of substance, with unit mole, does satisfy Huntley's two requirements as a measure of quantity of matter, and could be used as a quantity of matter in any problem of dimensional analysis where Huntley's concept is applicable. Siano's extension: orientational analysis Angles are, by convention, considered to be dimensionless quantities (although the wisdom of this is contested ) . As an example, consider again the projectile problem in which a point mass is launched from the origin at a speed and angle above the x-axis, with the force of gravity directed along the negative y-axis. It is desired to find the range , at which point the mass returns to the x-axis. Conventional analysis will yield the dimensionless variable , but offers no insight into the relationship between and . Siano has suggested that the directed dimensions of Huntley be replaced by using orientational symbols to denote vector directions, and an orientationless symbol 10. Thus, Huntley's L becomes L1 with L specifying the dimension of length, and specifying the orientation. Siano further shows that the orientational symbols have an algebra of their own. Along with the requirement that , the following multiplication table for the orientation symbols results: The orientational symbols form a group (the Klein four-group or "Viergruppe"). In this system, scalars always have the same orientation as the identity element, independent of the "symmetry of the problem". Physical quantities that are vectors have the orientation expected: a force or a velocity in the z-direction has the orientation of . For angles, consider an angle that lies in the z-plane. 
Form a right triangle in the z-plane with being one of the acute angles. The side of the right triangle adjacent to the angle then has an orientation and the side opposite has an orientation . Since (using to indicate orientational equivalence) we conclude that an angle in the xy-plane must have an orientation , which is not unreasonable. Analogous reasoning forces the conclusion that has orientation while has orientation 10. These are different, so one concludes (correctly), for example, that there are no solutions of physical equations that are of the form , where and are real scalars. An expression such as is not dimensionally inconsistent since it is a special case of the sum of angles formula and should properly be written: which for and yields . Siano distinguishes between geometric angles, which have an orientation in 3-dimensional space, and phase angles associated with time-based oscillations, which have no spatial orientation, i.e. the orientation of a phase angle is . The assignment of orientational symbols to physical quantities and the requirement that physical equations be orientationally homogeneous can actually be used in a way that is similar to dimensional analysis to derive more information about acceptable solutions of physical problems. In this approach, one solves the dimensional equation as far as one can. If the lowest power of a physical variable is fractional, both sides of the solution is raised to a power such that all powers are integral, putting it into normal form. The orientational equation is then solved to give a more restrictive condition on the unknown powers of the orientational symbols. The solution is then more complete than the one that dimensional analysis alone gives. Often, the added information is that one of the powers of a certain variable is even or odd. As an example, for the projectile problem, using orientational symbols, , being in the xy-plane will thus have dimension and the range of the projectile will be of the form: Dimensional homogeneity will now correctly yield and , and orientational homogeneity requires that . In other words, that must be an odd integer. In fact, the required function of theta will be which is a series consisting of odd powers of . It is seen that the Taylor series of and are orientationally homogeneous using the above multiplication table, while expressions like and are not, and are (correctly) deemed unphysical. Siano's orientational analysis is compatible with the conventional conception of angular quantities as being dimensionless, and within orientational analysis, the radian may still be considered a dimensionless unit. The orientational analysis of a quantity equation is carried out separately from the ordinary dimensional analysis, yielding information that supplements the dimensional analysis. See also Buckingham π theorem Dimensionless numbers in fluid mechanics Fermi estimate – used to teach dimensional analysis Numerical-value equation Rayleigh's method of dimensional analysis Similitude – an application of dimensional analysis System of measurement Related areas of mathematics Covariance and contravariance of vectors Exterior algebra Geometric algebra Quantity calculus Notes References As postscript , (5): 147, (6): 101, (7): 129 Wilson, Edwin B. 
(1920) "Theory of Dimensions", chapter XI of Aeronautics, via Internet Archive Further reading External links List of dimensions for variety of physical quantities Unicalc Live web calculator doing units conversion by dimensional analysis A C++ implementation of compile-time dimensional analysis in the Boost open-source libraries Buckingham's pi-theorem Quantity System calculator for units conversion based on dimensional approach Units, quantities, and fundamental constants project dimensional analysis maps Measurement Conversion of units of measurement Chemical engineering Mechanical engineering Environmental engineering
Hawking radiation
Hawking radiation is the theoretical emission released outside a black hole's event horizon. This is counterintuitive because once ordinary electromagnetic radiation is inside the event horizon, it cannot escape. It is named after the physicist Stephen Hawking, who developed a theoretical argument for its existence in 1974. Hawking radiation is predicted to be extremely faint and is many orders of magnitude below the current best telescopes' detecting ability. Hawking radiation reduces the mass and rotational energy of black holes and is therefore also theorized to cause black hole evaporation. Because of this, black holes that do not gain mass through other means are expected to shrink and ultimately vanish. For all except the smallest black holes, this happens extremely slowly. The radiation temperature, called Hawking temperature, is inversely proportional to the black hole's mass, so micro black holes are predicted to be larger emitters of radiation than larger black holes and should dissipate faster per their mass. As such, if small black holes exist such as permitted by the hypothesis of primordial black holes, they ought to lose mass more rapidly as they shrink, leading to a final cataclysm of high energy radiation alone. Such radiation bursts have not yet been detected. Overview Modern black holes were first predicted by Einstein's 1915 theory of general relativity. Evidence for the astrophysical objects termed black holes began to mount half a century later, and these objects are of current interest primarily because of their compact size and immense gravitational attraction. Early research into black holes were done by individuals such as Karl Schwarzschild and John Wheeler who modeled black holes as having zero entropy. A black hole can form when enough matter or energy is compressed into a volume small enough that the escape velocity is greater than the speed of light. Nothing can travel that fast, so nothing within a certain distance, proportional to the mass of the black hole, can escape beyond that distance. The region beyond which not even light can escape is the event horizon; an observer outside it cannot observe, become aware of, or be affected by events within the event horizon. Alternatively, using a set of infalling coordinates in general relativity, one can conceptualize the event horizon as the region beyond which space is infalling faster than the speed of light. (Although nothing can travel through space faster than light, space itself can infall at any speed.) Once matter is inside the event horizon, all of the matter inside falls inevitably into a gravitational singularity, a place of infinite curvature and zero size, leaving behind a warped spacetime devoid of any matter; a classical black hole is pure empty spacetime, and the simplest (nonrotating and uncharged) is characterized just by its mass and event horizon. Our current understanding of quantum physics can be used to investigate what may happen in the region around the event horizon. In 1974, British physicist Stephen Hawking used quantum field theory in curved spacetime to show that in theory, instead of cancelling each other out normally, the antimatter and matter fields were disrupted by the black hole, causing antimatter and matter particles to "blip" into existence as a result of the imbalanced matter fields, and drawing energy from the disruptor itself: the black holes (to escape), effectively draining energy from the black hole. 
In addition, not all of the particles were close to the event horizon, and the ones that were could not escape. In effect, this energy acted as if the black hole itself was slowly evaporating (although it actually came from outside it). However, according to the conjectured gauge-gravity duality (also known as the AdS/CFT correspondence), black holes in certain cases (and perhaps in general) are equivalent to solutions of quantum field theory at a non-zero temperature. This means that no information loss is expected in black holes (since the theory permits no such loss) and the radiation emitted by a black hole is probably the usual thermal radiation. If this is correct, then Hawking's original calculation should be corrected, though it is not known how (see below). A black hole of one solar mass has a temperature of only 60 nanokelvins (60 billionths of a kelvin); in fact, such a black hole would absorb far more cosmic microwave background radiation than it emits. A black hole of (about the mass of the Moon, or about across) would be in equilibrium at 2.7 K, absorbing as much radiation as it emits. Formulation In 1972, Jacob Bekenstein developed a theory and reported that the black holes should have an entropy. Bekenstein's theory and report came to Stephen Hawking's attention, leading him to think about radiation due to this formalism. Hawking's subsequent theory and report followed a visit to Moscow in 1973, where Soviet scientists Yakov Zeldovich and Alexei Starobinsky convinced him that rotating black holes ought to create and emit particles. Hawking would find aspects of both of these arguments true once he did the calculation himself. Due to Bekenstein's contribution to black hole entropy, it is also known as Bekenstein-Hawking radiation. Emission process Hawking radiation is dependent on the Unruh effect and the equivalence principle applied to black-hole horizons. Close to the event horizon of a black hole, a local observer must accelerate to keep from falling in. An accelerating observer sees a thermal bath of particles that pop out of the local acceleration horizon, turn around, and free-fall back in. The condition of local thermal equilibrium implies that the consistent extension of this local thermal bath has a finite temperature at infinity, which implies that some of these particles emitted by the horizon are not reabsorbed and become outgoing Hawking radiation. A Schwarzschild black hole has a metric The black hole is the background spacetime for a quantum field theory. The field theory is defined by a local path integral, so if the boundary conditions at the horizon are determined, the state of the field outside will be specified. To find the appropriate boundary conditions, consider a stationary observer just outside the horizon at position The local metric to lowest order is which is Rindler in terms of . The metric describes a frame that is accelerating to keep from falling into the black hole. The local acceleration, , diverges as . The horizon is not a special boundary, and objects can fall in. So the local observer should feel accelerated in ordinary Minkowski space by the principle of equivalence. The near-horizon observer must see the field excited at a local temperature which is the Unruh effect. The gravitational redshift is given by the square root of the time component of the metric. 
So for the field theory state to consistently extend, there must be a thermal background everywhere with the local temperature redshift-matched to the near horizon temperature: The inverse temperature redshifted to at infinity is and is the near-horizon position, near , so this is really Thus a field theory defined on a black-hole background is in a thermal state whose temperature at infinity is From the black-hole temperature, it is straightforward to calculate the black-hole entropy . The change in entropy when a quantity of heat is added is The heat energy that enters serves to increase the total mass, so The radius of a black hole is twice its mass in Planck units, so the entropy of a black hole is proportional to its surface area: Assuming that a small black hole has zero entropy, the integration constant is zero. Forming a black hole is the most efficient way to compress mass into a region, and this entropy is also a bound on the information content of any sphere in space time. The form of the result strongly suggests that the physical description of a gravitating theory can be somehow encoded onto a bounding surface. Black hole evaporation When particles escape, the black hole loses a small amount of its energy and therefore some of its mass (mass and energy are related by Einstein's equation ). Consequently, an evaporating black hole will have a finite lifespan. By dimensional analysis, the life span of a black hole can be shown to scale as the cube of its initial mass, and Hawking estimated that any black hole formed in the early universe with a mass of less than approximately 1012 kg would have evaporated completely by the present day. In 1976, Don Page refined this estimate by calculating the power produced, and the time to evaporation, for a non-rotating, non-charged Schwarzschild black hole of mass . The time for the event horizon or entropy of a black hole to halve is known as the Page time. The calculations are complicated by the fact that a black hole, being of finite size, is not a perfect black body; the absorption cross section goes down in a complicated, spin-dependent manner as frequency decreases, especially when the wavelength becomes comparable to the size of the event horizon. Page concluded that primordial black holes could survive to the present day only if their initial mass were roughly or larger. Writing in 1976, Page using the understanding of neutrinos at the time erroneously worked on the assumption that neutrinos have no mass and that only two neutrino flavors exist, and therefore his results of black hole lifetimes do not match the modern results which take into account 3 flavors of neutrinos with nonzero masses. A 2008 calculation using the particle content of the Standard Model and the WMAP figure for the age of the universe yielded a mass bound of . Some pre-1998 calculations, using outdated assumptions about neutrinos, were as follows: If black holes evaporate under Hawking radiation, a solar mass black hole will evaporate over 1064 years which is vastly longer than the age of the universe. A supermassive black hole with a mass of 1011 (100 billion) will evaporate in around . Some monster black holes in the universe are predicted to continue to grow up to perhaps 1014 during the collapse of superclusters of galaxies. Even these would evaporate over a timescale of up to 2 × 10106 years. Post-1998 science modifies these results slightly; for example, the modern estimate of a solar-mass black hole lifetime is 1067 years. 
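To attach numbers to the timescales quoted above, the short calculation below (an illustrative sketch, not part of the original article) evaluates the standard Schwarzschild expressions T_H = ħc³/(8πGMk_B) for the Hawking temperature and, under the pure-photon blackbody assumption, t ≈ 5120πG²M³/(ħc⁴) for the evaporation time; for one solar mass it reproduces a temperature of a few tens of nanokelvins and a lifetime of order 10⁶⁷ years.

```python
# Illustrative estimate of Hawking temperature and photon-only evaporation
# time for a Schwarzschild black hole (standard formulas, SI units).
import math

hbar  = 1.054571817e-34     # J s
c     = 2.99792458e8        # m/s
G     = 6.67430e-11         # m^3 kg^-1 s^-2
k_B   = 1.380649e-23        # J/K
M_sun = 1.989e30            # kg
year  = 3.156e7             # s

def hawking_temperature(M):
    """T_H = hbar c^3 / (8 pi G M k_B): inversely proportional to the mass."""
    return hbar * c**3 / (8 * math.pi * G * M * k_B)

def evaporation_time(M):
    """t = 5120 pi G^2 M^3 / (hbar c^4): pure-photon, blackbody estimate."""
    return 5120 * math.pi * G**2 * M**3 / (hbar * c**4)

print(f"T_H    ~ {hawking_temperature(M_sun):.1e} K")          # ~6e-08 K
print(f"t_evap ~ {evaporation_time(M_sun) / year:.1e} years")  # ~2e+67 years
```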
The power emitted by a black hole in the form of Hawking radiation can be estimated for the simplest case of a nonrotating, non-charged Schwarzschild black hole of mass . Combining the formulas for the Schwarzschild radius of the black hole, the Stefan–Boltzmann law of blackbody radiation, the above formula for the temperature of the radiation, and the formula for the surface area of a sphere (the black hole's event horizon), several equations can be derived. The Hawking radiation temperature is: The Bekenstein–Hawking luminosity of a black hole, under the assumption of pure photon emission (i.e. that no other particles are emitted) and under the assumption that the horizon is the radiating surface is: where is the luminosity, i.e., the radiated power, is the reduced Planck constant, is the speed of light, is the gravitational constant and is the mass of the black hole. It is worth mentioning that the above formula has not yet been derived in the framework of semiclassical gravity. The time that the black hole takes to dissipate is: where and are the mass and (Schwarzschild) volume of the black hole, and are Planck mass and Planck time. A black hole of one solar mass ( = ) takes more than to evaporate—much longer than the current age of the universe at . But for a black hole of , the evaporation time is . This is why some astronomers are searching for signs of exploding primordial black holes. However, since the universe contains the cosmic microwave background radiation, in order for the black hole to dissipate, the black hole must have a temperature greater than that of the present-day blackbody radiation of the universe of 2.7 K. A study suggests that must be less than 0.8% of the mass of the Earth – approximately the mass of the Moon. Black hole evaporation has several significant consequences: Black hole evaporation produces a more consistent view of black hole thermodynamics by showing how black holes interact thermally with the rest of the universe. Unlike most objects, a black hole's temperature increases as it radiates away mass. The rate of temperature increase is exponential, with the most likely endpoint being the dissolution of the black hole in a violent burst of gamma rays. A complete description of this dissolution requires a model of quantum gravity, however, as it occurs when the black hole's mass approaches 1 Planck mass, its radius will also approach two Planck lengths. The simplest models of black hole evaporation lead to the black hole information paradox. The information content of a black hole appears to be lost when it dissipates, as under these models the Hawking radiation is random (it has no relation to the original information). A number of solutions to this problem have been proposed, including suggestions that Hawking radiation is perturbed to contain the missing information, that the Hawking evaporation leaves some form of remnant particle containing the missing information, and that information is allowed to be lost under these conditions. Problems and extensions Trans-Planckian problem The trans-Planckian problem is the issue that Hawking's original calculation includes quantum particles where the wavelength becomes shorter than the Planck length near the black hole's horizon. This is due to the peculiar behavior there, where time stops as measured from far away. A particle emitted from a black hole with a finite frequency, if traced back to the horizon, must have had an infinite frequency, and therefore a trans-Planckian wavelength. 
The Unruh effect and the Hawking effect both talk about field modes in the superficially stationary spacetime that change frequency relative to other coordinates that are regular across the horizon. This is necessarily so, since to stay outside a horizon requires acceleration that constantly Doppler shifts the modes. An outgoing photon of Hawking radiation, if the mode is traced back in time, has a frequency that diverges from that which it has at great distance, as it gets closer to the horizon, which requires the wavelength of the photon to "scrunch up" infinitely at the horizon of the black hole. In a maximally extended external Schwarzschild solution, that photon's frequency stays regular only if the mode is extended back into the past region where no observer can go. That region seems to be unobservable and is physically suspect, so Hawking used a black hole solution without a past region that forms at a finite time in the past. In that case, the source of all the outgoing photons can be identified: a microscopic point right at the moment that the black hole first formed. The quantum fluctuations at that tiny point, in Hawking's original calculation, contain all the outgoing radiation. The modes that eventually contain the outgoing radiation at long times are redshifted by such a huge amount by their long sojourn next to the event horizon that they start off as modes with a wavelength much shorter than the Planck length. Since the laws of physics at such short distances are unknown, some find Hawking's original calculation unconvincing. The trans-Planckian problem is nowadays mostly considered a mathematical artifact of horizon calculations. The same effect occurs for regular matter falling onto a white hole solution. Matter that falls on the white hole accumulates on it, but has no future region into which it can go. Tracing the future of this matter, it is compressed onto the final singular endpoint of the white hole evolution, into a trans-Planckian region. The reason for these types of divergences is that modes that end at the horizon from the point of view of outside coordinates are singular in frequency there. The only way to determine what happens classically is to extend in some other coordinates that cross the horizon. There exist alternative physical pictures that give the Hawking radiation in which the trans-Planckian problem is addressed. The key point is that similar trans-Planckian problems occur when the modes occupied with Unruh radiation are traced back in time. In the Unruh effect, the magnitude of the temperature can be calculated from ordinary Minkowski field theory, and is not controversial. Large extra dimensions The formulas from the previous section are applicable only if the laws of gravity are approximately valid all the way down to the Planck scale. In particular, for black holes with masses below the Planck mass (~), they result in impossible lifetimes below the Planck time (~). This is normally seen as an indication that the Planck mass is the lower limit on the mass of a black hole. In a model with large extra dimensions (10 or 11), the values of Planck constants can be radically different, and the formulas for Hawking radiation have to be modified as well. In particular, the lifetime of a micro black hole with a radius below the scale of the extra dimensions is given by equation 9 in Cheung (2002) and equations 25 and 26 in Carr (2005). where is the low-energy scale, which could be as low as a few TeV, and is the number of large extra dimensions. 
This formula is now consistent with black holes as light as a few TeV, with lifetimes on the order of the "new Planck time" ~. In loop quantum gravity A detailed study of the quantum geometry of a black hole event horizon has been made using loop quantum gravity. Loop-quantization does not reproduce the result for black hole entropy originally discovered by Bekenstein and Hawking unless the value of a free parameter is set to cancel out various constants such that the Bekenstein–Hawking entropy formula is reproduced. However, quantum gravitational corrections to the entropy and radiation of black holes have been computed based on the theory. Based on the fluctuations of the horizon area, a quantum black hole exhibits deviations from the Hawking radiation spectrum that would be observable were X-rays from Hawking radiation of evaporating primordial black holes to be observed. The quantum effects are centered at a set of discrete and unblended frequencies highly pronounced on top of the Hawking spectrum. Experimental observation Astronomical search In June 2008, NASA launched the Fermi space telescope, which is searching for the terminal gamma-ray flashes expected from evaporating primordial black holes. As of January 2023, none had been detected. Heavy-ion collider physics If speculative large extra dimension theories are correct, then CERN's Large Hadron Collider may be able to create micro black holes and observe their evaporation. No such micro black hole has been observed at CERN. Experimental Under experimentally achievable conditions for gravitational systems, this effect is too small to be observed directly. It was predicted that Hawking radiation could be studied by analogy using sonic black holes, in which sound perturbations are analogous to light in a gravitational black hole and the flow of an approximately perfect fluid is analogous to gravity (see Analog models of gravity). Observations of Hawking radiation have been reported in sonic black holes employing Bose–Einstein condensates. In September 2010, an experimental set-up created a laboratory "white hole event horizon" that the experimenters claimed radiated an optical analog to Hawking radiation. However, the results remain unverified and debatable, and its status as a genuine confirmation remains in doubt. See also Black hole information paradox Black hole thermodynamics Black hole starship Blandford–Znajek process and Penrose process, other extractions of black-hole energy Gibbons–Hawking effect Thorne–Hawking–Preskill bet Unruh effect
Energy profile (chemistry)
In theoretical chemistry, an energy profile is a theoretical representation of a chemical reaction or process as a single energetic pathway as the reactants are transformed into products. This pathway runs along the reaction coordinate, which is a parametric curve that follows the pathway of the reaction and indicates its progress; thus, energy profiles are also called reaction coordinate diagrams. They are derived from the corresponding potential energy surface (PES), which is used in computational chemistry to model chemical reactions by relating the energy of a molecule(s) to its structure (within the Born–Oppenheimer approximation). Qualitatively, the reaction coordinate diagrams (one-dimensional energy surfaces) have numerous applications. Chemists use reaction coordinate diagrams as both an analytical and pedagogical aid for rationalizing and illustrating kinetic and thermodynamic events. The purpose of energy profiles and surfaces is to provide a qualitative representation of how potential energy varies with molecular motion for a given reaction or process. Potential energy surfaces In simplest terms, a potential energy surface or PES is a mathematical or graphical representation of the relation between energy of a molecule and its geometry. The methods for describing the potential energy are broken down into a classical mechanics interpretation (molecular mechanics) and a quantum mechanical interpretation. In the quantum mechanical interpretation an exact expression for energy can be obtained for any molecule derived from quantum principles (although an infinite basis set may be required) but ab initio calculations/methods will often use approximations to reduce computational cost. Molecular mechanics is empirically based and potential energy is described as a function of component terms that correspond to individual potential functions such as torsion, stretches, bends, Van der Waals energies, electrostatics and cross terms. Each component potential function is fit to experimental data or properties predicted by ab initio calculations. Molecular mechanics is useful in predicting equilibrium geometries and transition states as well as relative conformational stability. As a reaction occurs the atoms of the molecules involved will generally undergo some change in spatial orientation through internal motion as well as its electronic environment. Distortions in the geometric parameters result in a deviation from the equilibrium geometry (local energy minima). These changes in geometry of a molecule or interactions between molecules are dynamic processes which call for understanding all the forces operating within the system. Since these forces can be mathematically derived as first derivative of potential energy with respect to a displacement, it makes sense to map the potential energy of the system as a function of geometric parameters , , and so on. The potential energy at given values of the geometric parameters is represented as a hyper-surface (when ) or a surface (when ). Mathematically, it can be written as For the quantum mechanical interpretation, a PES is typically defined within the Born–Oppenheimer approximation (in order to distinguish between nuclear and electronic motion and energy) which states that the nuclei are stationary relative to the electrons. 
In other words, the approximation allows the kinetic energy of the nuclei (or movement of the nuclei) to be neglected and therefore the nuclei repulsion is a constant value (as static point charges) and is only considered when calculating the total energy of the system. The electronic energy is then taken to depend parametrically on the nuclear coordinates, meaning a new electronic energy must be calculated for each corresponding atomic configuration. PES is an important concept in computational chemistry and greatly aids in geometry and transition state optimization. Degrees of freedom An -atom system is defined by coordinates: for each atom. These degrees of freedom can be broken down to include 3 overall translational and 3 (or 2) overall rotational degrees of freedom for a non-linear system (for a linear system). However, overall translational or rotational degrees do not affect the potential energy of the system, which only depends on its internal coordinates. Thus an -atom system will be defined by (non-linear) or (linear) coordinates. These internal coordinates may be represented by simple stretch, bend, torsion coordinates, or symmetry-adapted linear combinations, or redundant coordinates, or normal modes coordinates, etc. For a system described by -internal coordinates a separate potential energy function can be written with respect to each of these coordinates by holding the other parameters at a constant value allowing the potential energy contribution from a particular molecular motion (or interaction) to be monitored while the other parameters are defined. Consider a diatomic molecule AB which can macroscopically visualized as two balls (which depict the two atoms A and B) connected through a spring which depicts the bond. As this spring (or bond) is stretched or compressed, the potential energy of the ball-spring system (AB molecule) changes and this can be mapped on a 2-dimensional plot as a function of distance between A and B, i.e. bond length. The concept can be expanded to a tri-atomic molecule such as water where we have two bonds and bond angle as variables on which the potential energy of a water molecule will depend. We can safely assume the two bonds to be equal. Thus, a PES can be drawn mapping the potential energy E of a water molecule as a function of two geometric parameters, bond length and bond angle. The lowest point on such a PES will define the equilibrium structure of a water molecule. The same concept is applied to organic compounds like ethane, butane etc. to define their lowest energy and most stable conformations. Characterizing a PES The most important points on a PES are the stationary points where the surface is flat, i.e. parallel to a horizontal line corresponding to one geometric parameter, a plane corresponding to two such parameters or even a hyper-plane corresponding to more than two geometric parameters. The energy values corresponding to the transition states and the ground state of the reactants and products can be found using the potential energy function by calculating the function's critical points or the stationary points. Stationary points occur when the 1st partial derivative of the energy with respect to each geometric parameter is equal to zero. Using analytical derivatives of the derived expression for energy, one can find and characterize a stationary point as minimum, maximum or a saddle point. The ground states are represented by local energy minima and the transition states by saddle points. 
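As a concrete, deliberately crude illustration of the two-parameter water surface just described, the sketch below (Python with NumPy; the functional form and force constants are invented placeholders, not a fitted force field) builds a toy potential from a harmonic stretch for the two equal O–H bonds plus a harmonic bend, locates the lowest point on a grid, and confirms that it is a minimum by checking that the numerical Hessian has only positive eigenvalues.

```python
# Toy 2-D potential energy surface E(r, theta) for a water-like molecule:
# a harmonic stretch (the two O-H bonds assumed equal) plus a harmonic bend.
# Parameters are illustrative placeholders, not a real force field.
import numpy as np

k_r, r0 = 500.0, 0.96                # stretch force constant and equilibrium bond length
k_t, t0 = 35.0, np.radians(104.5)    # bend force constant and equilibrium angle (rad)

def energy(r, theta):
    return 2 * 0.5 * k_r * (r - r0) ** 2 + 0.5 * k_t * (theta - t0) ** 2

# Scan the surface on a grid and locate the lowest grid point.
r_vals = np.linspace(0.8, 1.2, 201)
t_vals = np.radians(np.linspace(90.0, 120.0, 201))
R, T = np.meshgrid(r_vals, t_vals, indexing="ij")
E = energy(R, T)
i, j = np.unravel_index(np.argmin(E), E.shape)
print(f"grid minimum near r = {r_vals[i]:.3f}, theta = {np.degrees(t_vals[j]):.1f} deg")

# Characterize the stationary point: a minimum has all Hessian eigenvalues > 0,
# a first-order saddle point (transition state) has exactly one negative eigenvalue.
h = 1e-4
def hessian(r, theta):
    f = energy
    d2r = (f(r + h, theta) - 2 * f(r, theta) + f(r - h, theta)) / h**2
    d2t = (f(r, theta + h) - 2 * f(r, theta) + f(r, theta - h)) / h**2
    drt = (f(r + h, theta + h) - f(r + h, theta - h)
           - f(r - h, theta + h) + f(r - h, theta - h)) / (4 * h**2)
    return np.array([[d2r, drt], [drt, d2t]])

eigvals = np.linalg.eigvalsh(hessian(r0, t0))
print("Hessian eigenvalues at the reference geometry:", eigvals,
      "-> minimum" if np.all(eigvals > 0) else "")
```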
Minima represent stable or quasi-stable species, i.e. reactants and products with finite lifetime. Mathematically, a minimum point is given as A point may be local minimum when it is lower in energy compared to its surrounding only or a global minimum which is the lowest energy point on the entire potential energy surface. Saddle point represents a maximum along only one direction (that of the reaction coordinate) and is a minimum along all other directions. In other words, a saddle point represents a transition state along the reaction coordinate. Mathematically, a saddle point occurs when for all except along the reaction coordinate and along the reaction coordinate. Reaction coordinate diagrams The intrinsic reaction coordinate (IRC), derived from the potential energy surface, is a parametric curve that connects two energy minima in the direction that traverses the minimum energy barrier (or shallowest ascent) passing through one or more saddle point(s). However, in reality if reacting species attains enough energy it may deviate from the IRC to some extent. The energy values (points on the hyper-surface) along the reaction coordinate result in a 1-D energy surface (a line) and when plotted against the reaction coordinate (energy vs reaction coordinate) gives what is called a reaction coordinate diagram (or energy profile). Another way of visualizing an energy profile is as a cross section of the hyper surface, or surface, long the reaction coordinate. Figure 5 shows an example of a cross section, represented by the plane, taken along the reaction coordinate and the potential energy is represented as a function or composite of two geometric variables to form a 2-D energy surface. In principle, the potential energy function can depend on N variables but since an accurate visual representation of a function of 3 or more variables cannot be produced (excluding level hypersurfaces) a 2-D surface has been shown. The points on the surface that intersect the plane are then projected onto the reaction coordinate diagram (shown on the right) to produce a 1-D slice of the surface along the IRC. The reaction coordinate is described by its parameters, which are frequently given as a composite of several geometric parameters, and can change direction as the reaction progresses so long as the smallest energy barrier (or activation energy (Ea)) is traversed. The saddle point represents the highest energy point lying on the reaction coordinate connecting the reactant and product; this is known as the transition state. A reaction coordinate diagram may also have one or more transient intermediates which are shown by high energy wells connected via a transition state peak. Any chemical structure that lasts longer than the time for typical bond vibrations (10−13 – 10−14s) can be considered as intermediate. A reaction involving more than one elementary step has one or more intermediates being formed which, in turn, means there is more than one energy barrier to overcome. In other words, there is more than one transition state lying on the reaction pathway. As it is intuitive that pushing over an energy barrier or passing through a transition state peak would entail the highest energy, it becomes clear that it would be the slowest step in a reaction pathway. However, when more than one such barrier is to be crossed, it becomes important to recognize the highest barrier which will determine the rate of the reaction. 
This step of the reaction whose rate determines the overall rate of reaction is known as rate determining step or rate limiting step. The height of energy barrier is always measured relative to the energy of the reactant or starting material. Different possibilities have been shown in figure 6. Reaction coordinate diagrams also give information about the equilibrium between a reactant or a product and an intermediate. If the barrier energy for going from intermediate to product is much higher than the one for reactant to intermediate transition, it can be safely concluded that a complete equilibrium is established between the reactant and intermediate. However, if the two energy barriers for reactant-to-intermediate and intermediate-to-product transformation are nearly equal, then no complete equilibrium is established and steady state approximation is invoked to derive the kinetic rate expressions for such a reaction. Drawing a reaction coordinate diagram Although a reaction coordinate diagram is essentially derived from a potential energy surface, it is not always feasible to draw one from a PES. A chemist draws a reaction coordinate diagram for a reaction based on the knowledge of free energy or enthalpy change associated with the transformation which helps him to place the reactant and product into perspective and whether any intermediate is formed or not. One guideline for drawing diagrams for complex reactions is the principle of least motion which says that a favored reaction proceeding from a reactant to an intermediate or from one intermediate to another or product is one which has the least change in nuclear position or electronic configuration. Thus, it can be said that the reactions involving dramatic changes in position of nuclei actually occur through a series of simple chemical reactions. Hammond postulate is another tool which assists in drawing the energy of a transition state relative to a reactant, an intermediate or a product. It states that the transition state resembles the reactant, intermediate or product that it is closest in energy to, as long the energy difference between the transition state and the adjacent structure is not too large. This postulate helps to accurately predict the shape of a reaction coordinate diagram and also gives an insight into the molecular structure at the transition state. Kinetic and thermodynamic considerations A chemical reaction can be defined by two important parameters- the Gibbs free energy associated with a chemical transformation and the rate of such a transformation. These parameters are independent of each other. While free energy change describes the stability of products relative to reactants, the rate of any reaction is defined by the energy of the transition state relative to the starting material. Depending on these parameters, a reaction can be favorable or unfavorable, fast or slow and reversible or irreversible, as shown in figure 8. A favorable reaction is one in which the change in free energy ∆G° is negative (exergonic) or in other words, the free energy of product, G°product, is less than the free energy of the starting materials, G°reactant. ∆G°> 0 (endergonic) corresponds to an unfavorable reaction. The ∆G° can be written as a function of change in enthalpy (∆H°) and change in entropy (∆S°) as ∆G°= ∆H° – T∆S°. 
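These relationships can be made quantitative with two standard expressions: the equilibrium constant follows from the standard free energy change as K = exp(−ΔG°/RT), and, in transition state theory, the rate constant scales as exp(−ΔG‡/RT). The short Python sketch below evaluates both for illustrative, made-up energy values, simply to show how strongly modest free energy differences translate into equilibrium and rate ratios.

```python
# How free energy differences map onto equilibrium constants and relative rates.
# Energy values below are illustrative, not data for any particular reaction.
import math

R = 8.314          # gas constant, J/(mol K)
T = 298.15         # temperature, K

def equilibrium_constant(dG0_kJ):
    """K = exp(-dG0 / RT) for a standard free energy change given in kJ/mol."""
    return math.exp(-dG0_kJ * 1000 / (R * T))

def relative_rate(dG_act_kJ, dG_act_ref_kJ):
    """Rate ratio implied by two activation free energies (transition state theory)."""
    return math.exp(-(dG_act_kJ - dG_act_ref_kJ) * 1000 / (R * T))

for dG0 in (-20.0, -5.0, 0.0, +5.0):
    print(f"dG0 = {dG0:+5.1f} kJ/mol  ->  K = {equilibrium_constant(dG0):.3e}")

# A barrier only ~6 kJ/mol higher already slows the step by an order of magnitude.
print("rate(barrier 80 kJ/mol) / rate(barrier 74 kJ/mol) =",
      f"{relative_rate(80.0, 74.0):.3f}")
```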
Practically, enthalpies, not free energies, are used to determine whether a reaction is favorable or unfavorable, because ∆H° is easier to measure and T∆S° is usually too small to be of any significance (for T < 100 °C). A reaction with ∆H° < 0 is called an exothermic reaction, while one with ∆H° > 0 is endothermic. The relative stability of reactant and product does not by itself define the feasibility of a reaction. For any reaction to proceed, the starting material must have enough energy to cross over an energy barrier. This energy barrier is known as the activation energy (∆G≠), and the rate of reaction depends on the height of this barrier. A low energy barrier corresponds to a fast reaction and a high energy barrier corresponds to a slow reaction. A reaction is in equilibrium when the rate of the forward reaction is equal to the rate of the reverse reaction. Such a reaction is said to be reversible. If the starting material and product(s) are in equilibrium then their relative abundance is decided by the difference in free energy between them. In principle, all elementary steps are reversible, but in many cases the equilibrium lies so far towards the product side that the starting material is effectively no longer observable or present in sufficient concentration to have an effect on reactivity. Practically speaking, the reaction is considered to be irreversible. While most reversible processes will have a reasonably small K of about 10³ or less, this is not a hard and fast rule, and a number of chemical processes require reversibility of even very favorable reactions. For instance, the reaction of a carboxylic acid with amines to form a salt takes place with K of 10⁵–10⁶, and at ordinary temperatures this process is regarded as irreversible. Yet, with sufficient heating, the reverse reaction takes place to allow formation of the tetrahedral intermediate and, ultimately, amide and water. (For an extreme example requiring reversibility of a step with K > 10¹¹, see demethylation.) A reaction can also be rendered irreversible if a subsequent, faster step takes place to consume the initial product(s), or if a gas is evolved in an open system. Thus, there is no value of K that serves as a "dividing line" between reversible and irreversible processes. Instead, reversibility depends on timescale, temperature, the reaction conditions, and the overall energy landscape. When a reactant can form two different products depending on the reaction conditions, it becomes important to choose the right conditions to favor the desired product. If a reaction is carried out at a relatively low temperature, then the product formed is the one lying across the smaller energy barrier. This is called kinetic control, and the ratio of the products formed depends on the relative energy barriers leading to the products; the relative stabilities of the products do not matter. However, at higher temperatures the molecules have enough energy to cross over both energy barriers leading to the products. In such a case, the product ratio is determined solely by the energies of the products, and the energies of the barriers do not matter. This is known as thermodynamic control, and it can only be achieved when the products can inter-convert and equilibrate under the reaction conditions. A reaction coordinate diagram can also be used to qualitatively illustrate kinetic and thermodynamic control in a reaction. Applications Following are a few examples of how to interpret reaction coordinate diagrams and use them in analyzing reactions. 
Solvent Effect: In general, if the transition state for the rate determining step corresponds to a more charged species relative to the starting material, then increasing the polarity of the solvent will increase the rate of the reaction, since a more polar solvent will be more effective at stabilizing the transition state (ΔG‡ would decrease). If the transition state structure corresponds to a less charged species, then increasing the solvent's polarity would decrease the reaction rate, since a more polar solvent would be more effective at stabilizing the starting material (ΔG° would decrease, which in turn increases ΔG‡). SN1 vs SN2 The SN1 and SN2 mechanisms are used as an example to demonstrate how solvent effects can be indicated in reaction coordinate diagrams. SN1: Figure 10 shows the rate determining step for an SN1 mechanism, formation of the carbocation intermediate, and the corresponding reaction coordinate diagram. For an SN1 mechanism the transition state structure shows a partial charge density relative to the neutral ground state structure. Therefore, increasing the solvent polarity, for example from hexanes (shown as blue) to ether (shown in red), would increase the rate of the reaction. As shown in figure 9, the starting material has approximately the same stability in both solvents (therefore ΔΔG° = ΔG°polar – ΔG°non-polar is small) and the transition state is stabilized more in ether, meaning ΔΔG‡ = ΔG‡polar – ΔG‡non-polar is large. SN2: For an SN2 mechanism a strongly basic nucleophile (i.e. a charged nucleophile) is favorable. In figure 11 below the rate determining step for the Williamson ether synthesis is shown. The starting material is methyl chloride and an ethoxide ion, which has a localized negative charge, meaning it is more stable in polar solvents. The figure shows a transition state structure as the methyl chloride undergoes nucleophilic attack. In the transition state structure the charge is distributed between the Cl and the O atoms, and the more polar solvent is less effective at stabilizing the transition state structure relative to the starting materials. In other words, the energy difference between the polar and non-polar solvent is greater for the ground state (for the starting material) than for the transition state. Catalysts: There are two types of catalysts, positive and negative. Positive catalysts increase the reaction rate and negative catalysts (or inhibitors) slow down a reaction and possibly cause the reaction not to occur at all. The purpose of a catalyst is to alter the activation energy. Figure 12 illustrates the purpose of a catalyst in that only the activation energy is changed and not the relative thermodynamic stabilities, shown in the figure as ΔH, of the products and reactants. This means that a catalyst will not alter the equilibrium concentrations of the products and reactants but will only allow the reaction to reach equilibrium faster. Figure 13 shows the catalyzed pathway occurring in multiple steps, which is a more realistic depiction of a catalyzed process. The new catalyzed pathway can occur through the same mechanism as the uncatalyzed reaction or through an alternate mechanism. An enzyme is a biological catalyst that increases the rate for many vital biochemical reactions. Figure 13 shows a common way to illustrate the effect of an enzyme on a given biochemical reaction. See also Gibbs free energy Enthalpy Entropy Computational chemistry Molecular mechanics Born–Oppenheimer approximation References
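As a closing numerical illustration of the catalyst discussion above, the sketch below (Python; the A ⇌ B model and all rate constants are invented for illustration) integrates a reversible first-order reaction with and without a barrier-lowering catalyst, showing that the equilibrium composition is unchanged while equilibrium is reached much sooner.

```python
# A reversible first-order reaction A <=> B integrated with and without a catalyst.
# The catalyst multiplies k_forward and k_reverse by the same factor, so
# K = k_f / k_r (and the final concentrations) is unchanged; only the approach
# to equilibrium becomes faster. All numbers are illustrative.
def simulate(k_f, k_r, A0=1.0, B0=0.0, t_end=50.0, dt=0.001):
    A, B = A0, B0
    for _ in range(int(t_end / dt)):      # simple explicit Euler integration
        rate = k_f * A - k_r * B
        A -= rate * dt
        B += rate * dt
    return A, B

k_f, k_r = 0.05, 0.01                     # uncatalyzed rate constants (1/s)
speedup = 20.0                            # lower barrier -> both constants scale up equally

for label, (kf, kr) in [("uncatalyzed", (k_f, k_r)),
                        ("catalyzed  ", (k_f * speedup, k_r * speedup))]:
    A, B = simulate(kf, kr, t_end=50.0)
    print(f"{label}: after 50 s  [A] = {A:.3f}, [B] = {B:.3f}, K = kf/kr = {kf/kr:.1f}")
```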
Biot–Savart law
In physics, specifically electromagnetism, the Biot–Savart law ( or ) is an equation describing the magnetic field generated by a constant electric current. It relates the magnetic field to the magnitude, direction, length, and proximity of the electric current. The Biot–Savart law is fundamental to magnetostatics. It is valid in the magnetostatic approximation and consistent with both Ampère's circuital law and Gauss's law for magnetism. When magnetostatics does not apply, the Biot–Savart law should be replaced by Jefimenko's equations. The law is named after Jean-Baptiste Biot and Félix Savart, who discovered this relationship in 1820. Equation In the following equations, it is assumed that the medium is not magnetic (e.g., vacuum). This allows for straightforward derivation of magnetic field B, while the fundamental vector here is H. Electric currents (along a closed curve/wire) The Biot–Savart law is used for computing the resultant magnetic flux density B at position r in 3D-space generated by a filamentary current I (for example due to a wire). A steady (or stationary) current is a continual flow of charges which does not change with time and the charge neither accumulates nor depletes at any point. The law is a physical example of a line integral, being evaluated over the path C in which the electric currents flow (e.g. the wire). The equation in SI units teslas (T) is where is a vector along the path whose magnitude is the length of the differential element of the wire in the direction of conventional current, is a point on path , and is the full displacement vector from the wire element at point to the point at which the field is being computed, and μ0 is the magnetic constant. Alternatively: where is the unit vector of . The symbols in boldface denote vector quantities. The integral is usually around a closed curve, since stationary electric currents can only flow around closed paths when they are bounded. However, the law also applies to infinitely long wires (this concept was used in the definition of the SI unit of electric current—the Ampere—until 20 May 2019). To apply the equation, the point in space where the magnetic field is to be calculated is arbitrarily chosen. Holding that point fixed, the line integral over the path of the electric current is calculated to find the total magnetic field at that point. The application of this law implicitly relies on the superposition principle for magnetic fields, i.e. the fact that the magnetic field is a vector sum of the field created by each infinitesimal section of the wire individually. For example, consider the magnetic field of a loop of radius carrying a current For a point a distance along the center line of the loop, the magnetic field vector at that point is:where is the unit vector of along the center-line of the loop (and the loop is taken to be centered at the origin). Loops such as the one described appear in devices like the Helmholtz coil, the solenoid, and the Magsail spacecraft propulsion system. Calculation of the magnetic field at points off the center line requires more complex mathematics involving elliptic integrals that require numerical solution or approximations. Electric current density (throughout conductor volume) The formulations given above work well when the current can be approximated as running through an infinitely-narrow wire. 
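As a numerical sanity check on the current-loop case just mentioned, the sketch below (Python with NumPy) integrates the Biot–Savart law over a circular loop discretized into short segments and compares the on-axis result with the standard closed-form expression B_z = μ₀IR²/(2(R² + z²)^(3/2)); it is a minimal illustration rather than an optimized field solver.

```python
# Numerical Biot-Savart integration for a circular current loop, compared with
# the standard closed-form on-axis field B_z = mu0*I*R^2 / (2*(R^2 + z^2)**1.5).
import numpy as np

mu0 = 4e-7 * np.pi     # vacuum permeability, T m / A
I   = 1.0              # current, A
R   = 0.1              # loop radius, m

def loop_field(point, n_seg=2000):
    """B at `point` from a loop of radius R in the xy-plane, centered at the origin."""
    phi = np.linspace(0.0, 2.0 * np.pi, n_seg, endpoint=False)
    seg_pos = np.stack([R * np.cos(phi), R * np.sin(phi), np.zeros_like(phi)], axis=1)
    dphi = 2.0 * np.pi / n_seg
    dl = np.stack([-R * np.sin(phi), R * np.cos(phi), np.zeros_like(phi)], axis=1) * dphi
    r = point - seg_pos                           # from each source segment to the field point
    r_norm = np.linalg.norm(r, axis=1, keepdims=True)
    dB = mu0 * I / (4.0 * np.pi) * np.cross(dl, r) / r_norm**3
    return dB.sum(axis=0)

z = 0.05
numeric = loop_field(np.array([0.0, 0.0, z]))
analytic = mu0 * I * R**2 / (2.0 * (R**2 + z**2) ** 1.5)
print("numeric  B_z =", numeric[2])
print("analytic B_z =", analytic)
```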
If the conductor has some thickness, the proper formulation of the Biot–Savart law (again in SI units) is: where is the vector from dV to the observation point , is the volume element, and is the current density vector in that volume (in SI in units of A/m2). In terms of unit vector Constant uniform current In the special case of a uniform constant current I, the magnetic field is i.e., the current can be taken out of the integral. Point charge at constant velocity In the case of a point charged particle q moving at a constant velocity v, Maxwell's equations give the following expression for the electric field and magnetic field: where is the unit vector pointing from the current (non-retarded) position of the particle to the point at which the field is being measured, is the speed in units of and is the angle between and . Alternatively, these can be derived by considering Lorentz transformation of Coulomb's force (in four-force form) in the source charge's inertial frame. When , the electric field and magnetic field can be approximated as These equations were first derived by Oliver Heaviside in 1888. Some authors call the above equation for the "Biot–Savart law for a point charge" due to its close resemblance to the standard Biot–Savart law. However, this language is misleading as the Biot–Savart law applies only to steady currents and a point charge moving in space does not constitute a steady current. Magnetic responses applications The Biot–Savart law can be used in the calculation of magnetic responses even at the atomic or molecular level, e.g. chemical shieldings or magnetic susceptibilities, provided that the current density can be obtained from a quantum mechanical calculation or theory. Aerodynamics applications The Biot–Savart law is also used in aerodynamic theory to calculate the velocity induced by vortex lines. In the aerodynamic application, the roles of vorticity and current are reversed in comparison to the magnetic application. In Maxwell's 1861 paper 'On Physical Lines of Force', magnetic field strength H was directly equated with pure vorticity (spin), whereas B was a weighted vorticity that was weighted for the density of the vortex sea. Maxwell considered magnetic permeability μ to be a measure of the density of the vortex sea. Hence the relationship, Magnetic induction current was essentially a rotational analogy to the linear electric current relationship, Electric convection current where ρ is electric charge density. B was seen as a kind of magnetic current of vortices aligned in their axial planes, with H being the circumferential velocity of the vortices. The electric current equation can be viewed as a convective current of electric charge that involves linear motion. By analogy, the magnetic equation is an inductive current involving spin. There is no linear motion in the inductive current along the direction of the B vector. The magnetic inductive current represents lines of force. In particular, it represents lines of inverse square law force. In aerodynamics the induced air currents form solenoidal rings around a vortex axis. Analogy can be made that the vortex axis is playing the role that electric current plays in magnetism. This puts the air currents of aerodynamics (fluid velocity field) into the equivalent role of the magnetic induction vector B in electromagnetism. 
In electromagnetism the B lines form solenoidal rings around the source electric current, whereas in aerodynamics, the air currents (velocity) form solenoidal rings around the source vortex axis. Hence in electromagnetism, the vortex plays the role of 'effect' whereas in aerodynamics, the vortex plays the role of 'cause'. Yet when we look at the B lines in isolation, we see exactly the aerodynamic scenario insomuch as B is the vortex axis and H is the circumferential velocity as in Maxwell's 1861 paper. In two dimensions, for a vortex line of infinite length, the induced velocity at a point is given by where is the strength of the vortex and r is the perpendicular distance between the point and the vortex line. This is similar to the magnetic field produced on a plane by an infinitely long straight thin wire normal to the plane. This is a limiting case of the formula for vortex segments of finite length (similar to a finite wire): where A and B are the (signed) angles between the point and the two ends of the segment. The Biot–Savart law, Ampère's circuital law, and Gauss's law for magnetism In a magnetostatic situation, the magnetic field B as calculated from the Biot–Savart law will always satisfy Gauss's law for magnetism and Ampère's circuital law: In a non-magnetostatic situation, the Biot–Savart law ceases to be true (it is superseded by Jefimenko's equations), while Gauss's law for magnetism and the Maxwell–Ampère law are still true. Theoretical background Initially, the Biot–Savart law was discovered experimentally, then this law was derived in different ways theoretically. In The Feynman Lectures on Physics, at first, the similarity of expressions for the electric potential outside the static distribution of charges and the magnetic vector potential outside the system of continuously distributed currents is emphasized, and then the magnetic field is calculated through the curl from the vector potential. Another approach involves a general solution of the inhomogeneous wave equation for the vector potential in the case of constant currents. The magnetic field can also be calculated as a consequence of the Lorentz transformations for the electromagnetic force acting from one charged particle on another particle. Two other ways of deriving the Biot–Savart law include: 1) Lorentz transformation of the electromagnetic tensor components from a moving frame of reference, where there is only an electric field of some distribution of charges, into a stationary frame of reference, in which these charges move. 2) the use of the method of retarded potentials. See also People André-Marie Ampère James Clerk Maxwell Pierre-Simon Laplace Electromagnetism Darwin Lagrangian Notes References Further reading Electricity and Modern Physics (2nd Edition), G.A.G. Bennet, Edward Arnold (UK), 1974, Essential Principles of Physics, P.M. Whelan, M.J. Hodgeson, 2nd Edition, 1978, John Murray, The Cambridge Handbook of Physics Formulas, G. Woan, Cambridge University Press, 2010, . Physics for Scientists and Engineers - with Modern Physics (6th Edition), P. A. Tipler, G. Mosca, Freeman, 2008, Encyclopaedia of Physics (2nd Edition), R.G. Lerner, G.L. Trigg, VHC publishers, 1991, ISBN (Verlagsgesellschaft) 3-527-26954-1, ISBN (VHC Inc.) 0-89573-752-3 McGraw Hill Encyclopaedia of Physics (2nd Edition), C.B. Parker, 1994, External links MISN-0-125 The Ampère–Laplace–Biot–Savart Law by Orilla McHarris and Peter Signell for Project PHYSNET. Aerodynamics Electromagnetism Eponymous laws of physics
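To make the analogy of this section concrete, the short Python sketch below evaluates the induced speed around an infinite straight vortex line, v = Γ/(2πr), alongside the field magnitude of an infinite straight wire, B = μ₀I/(2πr); both fall off as 1/r and differ only in the physical constants. The values of Γ and I are arbitrary.

```python
# The induced velocity of an infinite straight vortex line and the magnetic field
# of an infinite straight wire share the same 1/r form; only the constants differ.
# Gamma (circulation) and I (current) below are arbitrary illustrative values.
import numpy as np

mu0   = 4e-7 * np.pi   # vacuum permeability, T m / A
Gamma = 2.0            # vortex strength (circulation), m^2/s
I     = 5.0            # wire current, A

r = np.array([0.1, 0.2, 0.5, 1.0, 2.0])      # perpendicular distances, m
v = Gamma / (2.0 * np.pi * r)                # induced speed around the vortex line
B = mu0 * I / (2.0 * np.pi * r)              # field magnitude around the wire

for ri, vi, Bi in zip(r, v, B):
    print(f"r = {ri:4.2f} m   v = {vi:8.4f} m/s   B = {Bi:.3e} T")
# Both columns fall off as 1/r, which is the point of the aerodynamic analogy.
```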
Joule-second
The joule-second (symbol J⋅s or J s) is the unit of action and of angular momentum in the International System of Units (SI), equal to the product of an SI derived unit, the joule (J), and an SI base unit, the second (s). The joule-second also appears in quantum mechanics within the definition of the Planck constant. Angular momentum is the product of an object's moment of inertia, in units of kg⋅m², and its angular velocity, in units of rad⋅s⁻¹. This product of moment of inertia and angular velocity yields kg⋅m²⋅s⁻¹, or the joule-second. The Planck constant represents the energy of a wave, in units of joules, divided by the frequency of that wave, in units of s⁻¹. This quotient of energy and frequency also yields the joule-second (J⋅s). Base units In SI base units the joule-second becomes the kilogram meter squared per second, kg⋅m²⋅s⁻¹. Dimensional analysis of the joule-second yields M L² T⁻¹. Note that the second appears with a negative exponent (that is, in the denominator) when the joule-second is expressed in base units. Confusion with joules per second The joule-second (J⋅s) should not be confused with joules per second (J/s), i.e. watts (W). In physical processes, when the unit of time appears in the denominator of a ratio, the described process occurs at a rate. For example, in discussions about speed, an object like a car travels a known distance in kilometers over a known time, and the car's speed is measured in the unit kilometer per hour (km/h). In physics, work per time describes a system's power, with the unit watt (W), which is equal to joules per second (J/s). See also Orders of magnitude (angular momentum) Action (physics) References
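A small dimensional-bookkeeping sketch of the statements above (plain Python; the numerical values are arbitrary): multiplying a moment of inertia by an angular velocity, or dividing an energy by a frequency, both land on kg⋅m²⋅s⁻¹, i.e. the joule-second, whereas dividing an energy by a time gives kg⋅m²⋅s⁻³, the watt.

```python
# Track SI base-unit exponents (kg, m, s) through the operations described above.
# A "quantity" is a value plus a dict of base-unit exponents; values are arbitrary.
def mul(a, b):
    units = {k: a[1].get(k, 0) + b[1].get(k, 0) for k in set(a[1]) | set(b[1])}
    return (a[0] * b[0], {k: v for k, v in units.items() if v != 0})

def div(a, b):
    return mul(a, (1.0 / b[0], {k: -v for k, v in b[1].items()}))

moment_of_inertia = (3.0, {"kg": 1, "m": 2})            # kg m^2
angular_velocity  = (2.0, {"s": -1})                    # rad/s (radian is dimensionless)
energy            = (6.0, {"kg": 1, "m": 2, "s": -2})   # joule
frequency         = (2.0, {"s": -1})                    # hertz
time              = (2.0, {"s": 1})                     # second

print("I * omega :", mul(moment_of_inertia, angular_velocity))  # -> kg m^2 s^-1 = J s
print("E / f     :", div(energy, frequency))                    # -> kg m^2 s^-1 = J s
print("E / t     :", div(energy, time))                         # -> kg m^2 s^-3 = W, not J s
```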
Agronomy
Agronomy is the science and technology of producing and using plants by agriculture for food, fuel, fiber, chemicals, recreation, or land conservation. Agronomy has come to include research of plant genetics, plant physiology, meteorology, and soil science. It is the application of a combination of sciences such as biology, chemistry, economics, ecology, earth science, and genetics. Professionals of agronomy are termed agronomists. History Agronomy has a long and rich history dating to the Neolithic Revolution. Some of the earliest practices of agronomy are found in ancient civilizations, including Ancient Egypt, Mesopotamia, China and India. They developed various techniques for the management of soil fertility, irrigation and crop rotation. During the 18th and 19th centuries, advances in science led to the development of modern agronomy. German chemist Justus von Liebig and John Bennett Lawes, an English entrepreneur, contributed to the understanding of plant nutrition and soil chemistry. Their work laid for the establishment of modern fertilizers and agricultural practices. Agronomy continued to evolve with the development of new technology and practices in the 20th century. From the 1960s, the Green Revolution saw the introduction of high-yield variety of crops, modern fertilizers and improvement of agricultural practices. It led to an increase of global food production to help reduce hunger and poverty in many parts of the world. Plant breeding This topic of agronomy involves selective breeding of plants to produce the best crops for various conditions. Plant breeding has increased crop yields and has improved the nutritional value of numerous crops, including corn, soybeans, and wheat. It has also resulted in the development of new types of plants. For example, a hybrid grain named triticale was produced by crossbreeding rye and wheat. Triticale contains more usable protein than does either rye or wheat. Agronomy has also been instrumental for fruit and vegetable production research. Furthermore, the application of plant breeding for turfgrass development has resulted in a reduction in the demand for fertilizer and water inputs (requirements), as well as turf-types with higher disease resistance. Biotechnology Agronomists use biotechnology to extend and expedite the development of desired characteristics. Biotechnology is often a laboratory activity requiring field testing of new crop varieties that are developed. In addition to increasing crop yields agronomic biotechnology is being applied increasingly for novel uses other than food. For example, oilseed is at present used mainly for margarine and other food oils, but it can be modified to produce fatty acids for detergents, substitute fuels and petrochemicals. Soil science Agronomists study sustainable ways to make soils more productive and profitable. They classify soils and analyze them to determine whether they contain nutrients vital for plant growth. Common macronutrients analyzed include compounds of nitrogen, phosphorus, potassium, calcium, magnesium, and sulfur. Soil is also assessed for several micronutrients, like zinc and boron. The percentage of organic matter, soil pH, and nutrient holding capacity (cation exchange capacity) are tested in a regional laboratory. Agronomists will interpret these laboratory reports and make recommendations to modify soil nutrients for optimal plant growth. Soil conservation Additionally, agronomists develop methods to preserve soil and decrease the effects of [erosion] by wind and water. 
For example, a technique known as contour plowing may be used to prevent soil erosion and conserve rainfall. Researchers of agronomy also seek ways to use the soil more effectively for solving other problems. Such problems include the disposal of human and animal manure, water pollution, and pesticide accumulation in the soil, as well as preserving the soil for future generations such as the burning of paddocks after crop production. Pasture management techniques include no-till farming, planting of soil-binding grasses along contours on steep slopes, and using contour drains of depths as much as 1 metre. Agroecology Agroecology is the management of agricultural systems with an emphasis on ecological and environmental applications. This topic is associated closely with work for sustainable agriculture, organic farming, and alternative food systems and the development of alternative cropping systems. Theoretical modeling Theoretical production ecology is the quantitative study of the growth of crops. The plant is treated as a kind of biological factory, which processes light, carbon dioxide, water, and nutrients into harvestable products. The main parameters considered are temperature, sunlight, standing crop biomass, plant production distribution, and nutrient and water supply. See also Agricultural engineering Agricultural policy Agroecology Agrology Agrophysics Crop farming Food systems Horticulture Green Revolution Vegetable farming References Bibliography Wendy B. Murphy, The Future World of Agriculture, Watts, 1984. Antonio Saltini, Storia delle scienze agrarie, 4 vols, Bologna 1984–89, , , , External links The American Society of Agronomy (ASA) Crop Science Society of America (CSSA) Soil Science Society of America (SSSA) European Society for Agronomy The National Agricultural Library (NAL) – Comprehensive agricultural library. Information System for Agriculture and Food Research . Applied sciences Plant agriculture
Quaternions and spatial rotation
Unit quaternions, known as versors, provide a convenient mathematical notation for representing spatial orientations and rotations of elements in three dimensional space. Specifically, they encode information about an axis-angle rotation about an arbitrary axis. Rotation and orientation quaternions have applications in computer graphics, computer vision, robotics, navigation, molecular dynamics, flight dynamics, orbital mechanics of satellites, and crystallographic texture analysis. When used to represent rotation, unit quaternions are also called rotation quaternions as they represent the 3D rotation group. When used to represent an orientation (rotation relative to a reference coordinate system), they are called orientation quaternions or attitude quaternions. A spatial rotation around a fixed point of radians about a unit axis that denotes the Euler axis is given by the quaternion , where and . Compared to rotation matrices, quaternions are more compact, efficient, and numerically stable. Compared to Euler angles, they are simpler to compose. However, they are not as intuitive and easy to understand and, due to the periodic nature of sine and cosine, rotation angles differing precisely by the natural period will be encoded into identical quaternions and recovered angles in radians will be limited to . Using quaternions as rotations In 3-dimensional space, according to Euler's rotation theorem, any rotation or sequence of rotations of a rigid body or coordinate system about a fixed point is equivalent to a single rotation by a given angle about a fixed axis (called the Euler axis) that runs through the fixed point. The Euler axis is typically represented by a unit vector  ( in the picture). Therefore, any rotation in three dimensions can be represented as via a vector  and an angle . Quaternions give a simple way to encode this axis–angle representation using four real numbers, and can be used to apply (calculate) the corresponding rotation to a position vector , representing a point relative to the origin in R3. Euclidean vectors such as or can be rewritten as or , where , , are unit vectors representing the three Cartesian axes (traditionally , , ), and also obey the multiplication rules of the fundamental quaternion units by interpreting the Euclidean vector as the vector part of the pure quaternion . A rotation of angle around the axis defined by the unit vector can be represented by conjugation by a unit quaternion . Since the quaternion product gives 1, using the Taylor series of the exponential function, the extension of Euler's formula results: It can be shown that the desired rotation can be applied to an ordinary vector in 3-dimensional space, considered as the vector part of the pure quaternion , by evaluating the conjugation of  by , given by: using the Hamilton product, where the vector part of the pure quaternion is the new position vector of the point after the rotation. In a programmatic implementation, the conjugation is achieved by constructing a pure quaternion whose vector part is , and then performing the quaternion conjugation. The vector part of the resulting pure quaternion is the desired vector . Clearly, provides a linear transformation of the quaternion space to itself; also, since is unitary, the transformation is an isometry. Also, and so leaves vectors parallel to invariant. 
So, by decomposing as a vector parallel to the vector part of and a vector normal to the vector part of and showing that the application of to the normal component of rotates it, the claim is shown. So let be the component of orthogonal to the vector part of and let . It turns out that the vector part of is given by . The conjugation of by can be expressed with fewer arithmetic operations as: A geometric fact independent of quaternions is the existence of a two-to-one mapping from physical rotations to rotational transformation matrices. If 0 ⩽ ⩽ , a physical rotation about by and a physical rotation about by both achieve the same final orientation by disjoint paths through intermediate orientations. By inserting those vectors and angles into the formula for above, one finds that if represents the first rotation, represents the second rotation. This is a geometric proof that conjugation by and by must produce the same rotational transformation matrix. That fact is confirmed algebraically by noting that the conjugation is quadratic in , so the sign of cancels, and does not affect the result. (See 2:1 mapping of SU(2) to SO(3)) If both rotations are a half-turn , both and will have a real coordinate equal to zero. Otherwise, one will have a positive real part, representing a rotation by an angle less than , and the other will have a negative real part, representing a rotation by an angle greater than . Mathematically, this operation carries the set of all "pure" quaternions (those with real part equal to zero)—which constitute a 3-dimensional space among the quaternions—into itself, by the desired rotation about the axis u, by the angle θ. (Each real quaternion is carried into itself by this operation. But for the purpose of rotations in 3-dimensional space, we ignore the real quaternions.) The rotation is clockwise if our line of sight points in the same direction as . In this (which?) instance, is a unit quaternion and It follows that conjugation by the product of two quaternions is the composition of conjugations by these quaternions: If and are unit quaternions, then rotation (conjugation) by  is , which is the same as rotating (conjugating) by  and then by . The scalar component of the result is necessarily zero. The quaternion inverse of a rotation is the opposite rotation, since . The square of a quaternion rotation is a rotation by twice the angle around the same axis. More generally is a rotation by  times the angle around the same axis as . This can be extended to arbitrary real , allowing for smooth interpolation between spatial orientations; see Slerp. Two rotation quaternions can be combined into one equivalent quaternion by the relation: in which corresponds to the rotation followed by the rotation . Thus, an arbitrary number of rotations can be composed together and then applied as a single rotation. (Note that quaternion multiplication is not commutative.) Example conjugation operation Conjugating by refers to the operation . Consider the rotation around the axis , with a rotation angle of 120°, or  radians. The length of is , the half angle is (60°) with cosine , and sine ,. We are therefore dealing with a conjugation by the unit quaternion If is the rotation function, It can be proven that the inverse of a unit quaternion is obtained simply by changing the sign of its imaginary components. 
As a consequence, and This can be simplified, using the ordinary rules for quaternion arithmetic, to As expected, the rotation corresponds to keeping a cube held fixed at one point, and rotating it 120° about the long diagonal through the fixed point (observe how the three axes are permuted cyclically). Quaternion-derived rotation matrix A quaternion rotation (with ) can be algebraically manipulated into a matrix rotation , where is the rotation matrix given by: Here and if is a unit quaternion, . This can be obtained by using vector calculus and linear algebra if we express and as scalar and vector parts and use the formula for the multiplication operation in the equation . If we write as , as and as , where , our equation turns into . By using the formula for multiplication of two quaternions that are expressed as scalar and vector parts, this equation can be rewritten as where denotes the outer product, is the identity matrix and is the transformation matrix that when multiplied from the right with a vector gives the cross product . Since , we can identify as , which upon expansion should result in the expression written in matrix form above. Recovering the axis-angle representation The expression rotates any vector quaternion around an axis given by the vector by the angle , where and depends on the quaternion . and can be found from the following equations: where is the two-argument arctangent. While works, it is numerically unstable (inaccurate) near for numbers with finite precision. Care should be taken when the quaternion approaches a scalar, since due to degeneracy the axis of an identity rotation is not well-defined. The composition of spatial rotations A benefit of the quaternion formulation of the composition of two rotations RB and RA is that it yields directly the rotation axis and angle of the composite rotation RC = RBRA. Let the quaternion associated with a spatial rotation R be constructed from its rotation axis S with the rotation angle around this axis. The associated quaternion is given by Then the composition of the rotation RB with RA is the rotation RC = RBRA with rotation axis and angle defined by the product of the quaternions that is Expand this product to obtain Divide both sides of this equation by the identity, which is the law of cosines on a sphere, and compute This is Rodrigues' formula for the axis of a composite rotation defined in terms of the axes of the two rotations. He derived this formula in 1840 (see page 408). The three rotation axes A, B, and C form a spherical triangle and the dihedral angles between the planes formed by the sides of this triangle are defined by the rotation angles. Hamilton presented the component form of these equations showing that the quaternion product computes the third vertex of a spherical triangle from two given vertices and their associated arc-lengths, which is also defines an algebra for points in Elliptic geometry. Axis–angle composition The normalized rotation axis, removing the from the expanded product, leaves the vector which is the rotation axis, times some constant. Care should be taken normalizing the axis vector when is or where the vector is near ; which is identity, or 0 rotation around any axis. Or with angle addition trigonometric substitutions... finally normalizing the rotation axis: or . Differentiation with respect to the rotation quaternion The rotated quaternion needs to be differentiated with respect to the rotating quaternion , when the rotation is estimated from numerical optimization. 
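Before continuing with the derivative expressions, here is a minimal sketch (Python with NumPy, Hamilton convention) of the operations described so far in this section: the quaternion product, rotation of a vector by conjugation, conversion to a rotation matrix, and recovery of the axis and angle with atan2. It is checked against the 120° rotation about (1,1,1)/√3 worked above, which permutes the coordinate axes cyclically.

```python
# Minimal quaternion toolkit (Hamilton convention, q = (w, x, y, z)), checked on the
# 120-degree rotation about (1,1,1)/sqrt(3) from the text, which sends x -> y -> z.
import numpy as np

def qmul(a, b):
    w1, v1 = a[0], np.asarray(a[1:])
    w2, v2 = b[0], np.asarray(b[1:])
    return np.concatenate(([w1 * w2 - v1 @ v2], w1 * v2 + w2 * v1 + np.cross(v1, v2)))

def qconj(q):
    return np.array([q[0], -q[1], -q[2], -q[3]])

def rotate(q, v):
    """Rotate 3-vector v by unit quaternion q via conjugation q * (0, v) * q^-1."""
    p = np.concatenate(([0.0], v))
    return qmul(qmul(q, p), qconj(q))[1:]

def to_matrix(q):
    w, x, y, z = q
    return np.array([[1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
                     [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
                     [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)]])

def axis_angle(q):
    """Recover (axis, angle) using atan2, which is stable for small vector parts."""
    w, v = q[0], q[1:]
    n = np.linalg.norm(v)
    angle = 2.0 * np.arctan2(n, w)
    axis = v / n if n > 1e-12 else np.array([1.0, 0.0, 0.0])  # axis undefined at identity
    return axis, angle

axis = np.array([1.0, 1.0, 1.0]) / np.sqrt(3.0)
theta = np.radians(120.0)
q = np.concatenate(([np.cos(theta / 2)], np.sin(theta / 2) * axis))

print("q =", np.round(q, 3))                                                  # (0.5, 0.5, 0.5, 0.5)
print("rotate x-axis:", np.round(rotate(q, np.array([1.0, 0.0, 0.0])), 6))    # -> y-axis
print("matrix @ x   :", np.round(to_matrix(q) @ np.array([1.0, 0.0, 0.0]), 6))
ax, ang = axis_angle(q)
print("recovered axis/angle:", np.round(ax, 3), np.degrees(ang))
```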
The estimation of rotation angle is an essential procedure in 3D object registration or camera calibration. For unitary and pure imaginary , that is for a rotation in 3D space, the derivatives of the rotated quaternion can be represented using matrix calculus notation as A derivation can be found in. Background Quaternions The complex numbers can be defined by introducing an abstract symbol which satisfies the usual rules of algebra and additionally the rule . This is sufficient to reproduce all of the rules of complex number arithmetic: for example: In the same way the quaternions can be defined by introducing abstract symbols , , which satisfy the rules and the usual algebraic rules except the commutative law of multiplication (a familiar example of such a noncommutative multiplication is matrix multiplication). From this all of the rules of quaternion arithmetic follow, such as the rules on multiplication of quaternion basis elements. Using these rules, one can show that: The imaginary part of a quaternion behaves like a vector in three-dimensional vector space, and the real part behaves like a scalar in . When quaternions are used in geometry, it is more convenient to define them as a scalar plus a vector: Some might find it strange to add a number to a vector, as they are objects of very different natures, or to multiply two vectors together, as this operation is usually undefined. However, if one remembers that it is a mere notation for the real and imaginary parts of a quaternion, it becomes more legitimate. In other words, the correct reasoning is the addition of two quaternions, one with zero vector/imaginary part, and another one with zero scalar/real part: We can express quaternion multiplication in the modern language of vector cross and dot products (which were actually inspired by the quaternions in the first place). When multiplying the vector/imaginary parts, in place of the rules we have the quaternion multiplication rule: where: is the resulting quaternion, is vector cross product (a vector), is vector scalar product (a scalar). Quaternion multiplication is noncommutative (because of the cross product, which anti-commutes), while scalar–scalar and scalar–vector multiplications commute. From these rules it follows immediately that (see ): The (left and right) multiplicative inverse or reciprocal of a nonzero quaternion is given by the conjugate-to-norm ratio (see details): as can be verified by direct calculation (note the similarity to the multiplicative inverse of complex numbers). Rotation identity Let be a unit vector (the rotation axis) and let . Our goal is to show that yields the vector rotated by an angle around the axis . Expanding out (and bearing in mind that ), we have If we let and equal the components of perpendicular and parallel to respectively, then and , leading to Using the trigonometric pythagorean and double-angle identities, we then have This is the formula of a rotation by around the axis. Quaternion rotation operations A very formal explanation of the properties used in this section is given by Altman. The hypersphere of rotations Visualizing the space of rotations Unit quaternions represent the group of Euclidean rotations in three dimensions in a very straightforward way. The correspondence between rotations and quaternions can be understood by first visualizing the space of rotations itself. In order to visualize the space of rotations, it helps to consider a simpler case. 
Any rotation in three dimensions can be described by a rotation by some angle about some axis; for our purposes, we will use an axis vector to establish handedness for our angle. Consider the special case in which the axis of rotation lies in the xy plane. We can then specify the axis of one of these rotations by a point on a circle through which the vector crosses, and we can select the radius of the circle to denote the angle of rotation. Similarly, a rotation whose axis of rotation lies in the xy plane can be described as a point on a sphere of fixed radius in three dimensions. Beginning at the north pole of a sphere in three-dimensional space, we specify the point at the north pole to be the identity rotation (a zero angle rotation). Just as in the case of the identity rotation, no axis of rotation is defined, and the angle of rotation (zero) is irrelevant. A rotation having a very small rotation angle can be specified by a slice through the sphere parallel to the xy plane and very near the north pole. The circle defined by this slice will be very small, corresponding to the small angle of the rotation. As the rotation angles become larger, the slice moves in the negative z direction, and the circles become larger until the equator of the sphere is reached, which will correspond to a rotation angle of 180 degrees. Continuing southward, the radii of the circles now become smaller (corresponding to the absolute value of the angle of the rotation considered as a negative number). Finally, as the south pole is reached, the circles shrink once more to the identity rotation, which is also specified as the point at the south pole. Notice that a number of characteristics of such rotations and their representations can be seen by this visualization. The space of rotations is continuous, each rotation has a neighborhood of rotations which are nearly the same, and this neighborhood becomes flat as the neighborhood shrinks. Also, each rotation is actually represented by two antipodal points on the sphere, which are at opposite ends of a line through the center of the sphere. This reflects the fact that each rotation can be represented as a rotation about some axis, or, equivalently, as a negative rotation about an axis pointing in the opposite direction (a so-called double cover). The "latitude" of a circle representing a particular rotation angle will be half of the angle represented by that rotation, since as the point is moved from the north to south pole, the latitude ranges from zero to 180 degrees, while the angle of rotation ranges from 0 to 360 degrees. (the "longitude" of a point then represents a particular axis of rotation.) Note however that this set of rotations is not closed under composition. Two successive rotations with axes in the xy plane will not necessarily give a rotation whose axis lies in the xy plane, and thus cannot be represented as a point on the sphere. This will not be the case with a general rotation in 3-space, in which rotations do form a closed set under composition. This visualization can be extended to a general rotation in 3-dimensional space. The identity rotation is a point, and a small angle of rotation about some axis can be represented as a point on a sphere with a small radius. As the angle of rotation grows, the sphere grows, until the angle of rotation reaches 180 degrees, at which point the sphere begins to shrink, becoming a point as the angle approaches 360 degrees (or zero degrees from the negative direction). 
This set of expanding and contracting spheres represents a hypersphere in four dimensional space (a 3-sphere). Just as in the simpler example above, each rotation represented as a point on the hypersphere is matched by its antipodal point on that hypersphere. The "latitude" on the hypersphere will be half of the corresponding angle of rotation, and the neighborhood of any point will become "flatter" (i.e. be represented by a 3-D Euclidean space of points) as the neighborhood shrinks. This behavior is matched by the set of unit quaternions: A general quaternion represents a point in a four dimensional space, but constraining it to have unit magnitude yields a three-dimensional space equivalent to the surface of a hypersphere. The magnitude of the unit quaternion will be unity, corresponding to a hypersphere of unit radius. The vector part of a unit quaternion represents the radius of the 2-sphere corresponding to the axis of rotation, and its magnitude is the sine of half the angle of rotation. Each rotation is represented by two unit quaternions of opposite sign, and, as in the space of rotations in three dimensions, the quaternion product of two unit quaternions will yield a unit quaternion. Also, the space of unit quaternions is "flat" in any infinitesimal neighborhood of a given unit quaternion. Parameterizing the space of rotations We can parameterize the surface of a sphere with two coordinates, such as latitude and longitude. But latitude and longitude are ill-behaved (degenerate as described by the hairy ball theorem) at the north and south poles, though the poles are not intrinsically different from any other points on the sphere. At the poles (latitudes +90° and −90°), the longitude becomes meaningless. It can be shown that no two-parameter coordinate system can avoid such degeneracy. We can avoid such problems by embedding the sphere in three-dimensional space and parameterizing it with three Cartesian coordinates , placing the north pole at , the south pole at , and the equator at , . Points on the sphere satisfy the constraint , so we still have just two degrees of freedom though there are three coordinates. A point on the sphere represents a rotation in the ordinary space around the horizontal axis directed by the vector by an angle . In the same way the hyperspherical space of 3D rotations can be parameterized by three angles (Euler angles), but any such parameterization is degenerate at some points on the hypersphere, leading to the problem of gimbal lock. We can avoid this by using four Euclidean coordinates , with . The point represents a rotation around the axis directed by the vector by an angle Explaining quaternions' properties with rotations Non-commutativity The multiplication of quaternions is non-commutative. This fact explains how the formula can work at all, having by definition. Since the multiplication of unit quaternions corresponds to the composition of three-dimensional rotations, this property can be made intuitive by showing that three-dimensional rotations are not commutative in general. The figure to the right illustrates this with dice. Use your right hand to create a pair of 90 degree rotations. Both dice are initially configured as shown in the upper left-hand corner (with 1 dot on the top face.) Path A begins with a rotation about the –y axis (using the right-hand rule.), followed by a rotation about the +z axis, resulting in the configuration shown in the lower left corner (5 dots on the top face.) 
Path B reverses the order of operations, resulting with 3 dots on top. If you don't have dice, set two books next to each other. Rotate one of them 90 degrees clockwise around the z axis, then flip it 180 degrees around the x axis. Take the other book, flip it 180° around x axis first, and 90° clockwise around z later. The two books do not end up parallel. This shows that, in general, the composition of two different rotations around two distinct spatial axes will not commute. Orientation The vector cross product, used to define the axis–angle representation, does confer an orientation ("handedness") to space: in a three-dimensional vector space, the three vectors in the equation will always form a right-handed set (or a left-handed set, depending on how the cross product is defined), thus fixing an orientation in the vector space. Alternatively, the dependence on orientation is expressed in referring to such that specifies a rotation as to axial vectors. In quaternionic formalism the choice of an orientation of the space corresponds to order of multiplication: but . If one reverses the orientation, then the formula above becomes , i.e., a unit is replaced with the conjugate quaternion – the same behaviour as of axial vectors. Alternative conventions It is reported that the existence and continued usage of an alternative quaternion convention in the aerospace and, to a lesser extent, robotics community is incurring a significant and ongoing cost. This alternative convention is proposed by Shuster M.D. in and departs from tradition by reversing the definition for multiplying quaternion basis elements such that under Shuster's convention, whereas Hamilton's definition is . This convention is also referred to as "JPL convention" for its use in some parts of NASA's Jet Propulsion Laboratory. Under Shuster's convention, the formula for multiplying two quaternions is altered such that The formula for rotating a vector by a quaternion is altered to be To identify the changes under Shuster's convention, see that the sign before the cross product is flipped from plus to minus. Finally, the formula for converting a quaternion to a rotation matrix is altered to be which is exactly the transpose of the rotation matrix converted under the traditional convention. Software applications by convention used The table below groups applications by their adherence to either quaternion convention: While use of either convention does not impact the capability or correctness of applications thus created, the authors of argued that the Shuster convention should be abandoned because it departs from the much older quaternion multiplication convention by Hamilton and may never be adopted by the mathematical or theoretical physics areas. Comparison with other representations of rotations Advantages of quaternions The representation of a rotation as a quaternion (4 numbers) is more compact than the representation as an orthogonal matrix (9 numbers). Furthermore, for a given axis and angle, one can easily construct the corresponding quaternion, and conversely, for a given quaternion one can easily read off the axis and the angle. Both of these are much harder with matrices or Euler angles. In video games and other applications, one is often interested in "smooth rotations", meaning that the scene should slowly rotate and not in a single step. 
This can be accomplished by choosing a curve such as the spherical linear interpolation in the quaternions, with one endpoint being the identity transformation 1 (or some other initial rotation) and the other being the intended final rotation. This is more problematic with other representations of rotations. When composing several rotations on a computer, rounding errors necessarily accumulate. A quaternion that is slightly off still represents a rotation after being normalized: a matrix that is slightly off may not be orthogonal any more and is harder to convert back to a proper orthogonal matrix. Quaternions also avoid a phenomenon called gimbal lock which can result when, for example in pitch/yaw/roll rotational systems, the pitch is rotated 90° up or down, so that yaw and roll then correspond to the same motion, and a degree of freedom of rotation is lost. In a gimbal-based aerospace inertial navigation system, for instance, this could have disastrous results if the aircraft is in a steep dive or ascent. Conversion to and from the matrix representation From a quaternion to an orthogonal matrix The orthogonal matrix corresponding to a rotation by the unit quaternion (with ) when post-multiplying with a column vector is given by This rotation matrix is used on vector as . The quaternion representation of this rotation is given by: where is the conjugate of the quaternion , given by Also, quaternion multiplication is defined as (assuming a and b are quaternions, like z above): where the order a, b is important since the cross product of two vectors is not commutative. A more efficient calculation in which the quaternion does not need to be unit normalized is given by where the following intermediate quantities have been defined: From an orthogonal matrix to a quaternion One must be careful when converting a rotation matrix to a quaternion, as several straightforward methods tend to be unstable when the trace (sum of the diagonal elements) of the rotation matrix is zero or very small. For a stable method of converting an orthogonal matrix to a quaternion, see the Rotation matrix#Quaternion. Fitting quaternions The above section described how to recover a quaternion from a rotation matrix . Suppose, however, that we have some matrix that is not a pure rotation—due to round-off errors, for example—and we wish to find the quaternion that most accurately represents . In that case we construct a symmetric matrix and find the eigenvector corresponding to the largest eigenvalue (that value will be 1 if and only if is a pure rotation). The quaternion so obtained will correspond to the rotation closest to the original matrix . Performance comparisons This section discusses the performance implications of using quaternions versus other methods (axis/angle or rotation matrices) to perform rotations in 3D. Results Only three of the quaternion components are independent, as a rotation is represented by a unit quaternion. For further calculation one usually needs all four elements, so all calculations would suffer additional expense from recovering the fourth component. Likewise, angle–axis can be stored in a three-component vector by multiplying the unit direction by the angle (or a function thereof), but this comes at additional computational cost when using it for calculations. 
Similarly, a rotation matrix requires orthogonal basis vectors, so in 3D space the third vector can unambiguously be calculated from the first two vectors with a cross product (though there is ambiguity in the sign of the third vector if improper rotations are allowed). * Quaternions can be implicitly converted to a rotation-like matrix (12 multiplications and 12 additions/subtractions), which levels the following vectors rotating cost with the rotation matrix method. Used methods There are three basic approaches to rotating a vector : Compute the matrix product of a rotation matrix and the original column matrix representing . This requires 3 × (3 multiplications + 2 additions) = 9 multiplications and 6 additions, the most efficient method for rotating a vector. A rotation can be represented by a unit-length quaternion with scalar (real) part and vector (imaginary) part . The rotation can be applied to a 3D vector via the formula . This requires only 15 multiplications and 15 additions to evaluate (or 18 multiplications and 12 additions if the factor of 2 is done via multiplication.) This formula, originally thought to be used with axis/angle notation (Rodrigues' formula), can also be applied to quaternion notation. This yields the same result as the less efficient but more compact formula of quaternion multiplication . Use the angle/axis formula to convert an angle/axis to a rotation matrix then multiplying with a vector, or, similarly, use a formula to convert quaternion notation to a rotation matrix, then multiplying with a vector. Converting the angle/axis to costs 12 multiplications, 2 function calls (sin, cos), and 10 additions/subtractions; from item 1, rotating using adds an additional 9 multiplications and 6 additions for a total of 21 multiplications, 16 add/subtractions, and 2 function calls (sin, cos). Converting a quaternion to costs 12 multiplications and 12 additions/subtractions; from item 1, rotating using adds an additional 9 multiplications and 6 additions for a total of 21 multiplications and 18 additions/subtractions. Pairs of unit quaternions as rotations in 4D space A pair of unit quaternions and can represent any rotation in 4D space. Given a four-dimensional vector , and assuming that it is a quaternion, we can rotate the vector like this: The pair of matrices represents a rotation of . Note that since , the two matrices must commute. Therefore, there are two commuting subgroups of the group of four dimensional rotations. Arbitrary four-dimensional rotations have 6 degrees of freedom; each matrix represents 3 of those 6 degrees of freedom. Since the generators of the four-dimensional rotations can be represented by pairs of quaternions (as follows), all four-dimensional rotations can also be represented. See also Anti-twister mechanism Binary polyhedral group Biquaternion Charts on SO(3) Clifford algebras Conversion between quaternions and Euler angles Covering space Dual quaternion Applications of dual quaternions to 2D geometry Elliptic geometry Rotation formalisms in three dimensions Rotation (mathematics) Spin group Slerp, spherical linear interpolation Olinde Rodrigues William Rowan Hamilton References Further reading External links and resources on Rosetta Code Quaternions Rotation in three dimensions Rigid bodies mechanics 3D computer graphics
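To make the conventions and formulas above concrete, the following is a minimal Python/NumPy sketch (an illustration only, not any particular library's API; it assumes the Hamilton convention ij = k with the scalar component stored first). It builds unit quaternions from an axis and an angle, shows that the quaternion product does not commute, rotates a vector both by the sandwich formula q v q* and by the converted rotation matrix, and checks that q and −q describe the same rotation.

```python
import numpy as np

def quat_from_axis_angle(axis, angle):
    """Unit quaternion (w, x, y, z) for a rotation of `angle` radians about `axis`."""
    axis = np.asarray(axis, dtype=float)
    axis = axis / np.linalg.norm(axis)
    return np.concatenate(([np.cos(angle / 2.0)], np.sin(angle / 2.0) * axis))

def quat_mul(a, b):
    """Hamilton product a*b, scalar part first (convention: i*j = k)."""
    aw, av = a[0], a[1:]
    bw, bv = b[0], b[1:]
    w = aw * bw - np.dot(av, bv)
    v = aw * bv + bw * av + np.cross(av, bv)
    return np.concatenate(([w], v))

def quat_conj(q):
    return np.concatenate(([q[0]], -q[1:]))

def rotate_by_quat(q, v):
    """Rotate 3-vector v by the unit quaternion q using q * (0, v) * q*."""
    p = np.concatenate(([0.0], v))
    return quat_mul(quat_mul(q, p), quat_conj(q))[1:]

def quat_to_matrix(q):
    """3x3 rotation matrix equivalent to the unit quaternion q = (w, x, y, z)."""
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

q1 = quat_from_axis_angle([0, 0, 1], np.pi / 2)   # 90 degrees about z
q2 = quat_from_axis_angle([1, 0, 0], np.pi / 2)   # 90 degrees about x

# Quaternion multiplication (and hence rotation composition) is not commutative.
print(np.allclose(quat_mul(q1, q2), quat_mul(q2, q1)))               # False

v = np.array([1.0, 2.0, 3.0])
# The sandwich formula and the converted matrix give the same rotated vector.
print(np.allclose(rotate_by_quat(q1, v), quat_to_matrix(q1) @ v))    # True
# q and -q represent the same rotation (the double cover noted above).
print(np.allclose(rotate_by_quat(-q1, v), rotate_by_quat(q1, v)))    # True
```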
0.772148
0.997977
0.770586
Monte Carlo method
Monte Carlo methods, or Monte Carlo experiments, are a broad class of computational algorithms that rely on repeated random sampling to obtain numerical results. The underlying concept is to use randomness to solve problems that might be deterministic in principle. The name comes from the Monte Carlo Casino in Monaco, where the primary developer of the method, mathematician Stanislaw Ulam, was inspired by his uncle's gambling habits. Monte Carlo methods are mainly used in three distinct problem classes: optimization, numerical integration, and generating draws from a probability distribution. They can also be used to model phenomena with significant uncertainty in inputs, such as calculating the risk of a nuclear power plant failure. Monte Carlo methods are often implemented using computer simulations, and they can provide approximate solutions to problems that are otherwise intractable or too complex to analyze mathematically. Monte Carlo methods are widely used in various fields of science, engineering, and mathematics, such as physics, chemistry, biology, statistics, artificial intelligence, finance, and cryptography. They have also been applied to social sciences, such as sociology, psychology, and political science. Monte Carlo methods have been recognized as one of the most important and influential ideas of the 20th century, and they have enabled many scientific and technological breakthroughs. Monte Carlo methods also have some limitations and challenges, such as the trade-off between accuracy and computational cost, the curse of dimensionality, the reliability of random number generators, and the verification and validation of the results. Overview Monte Carlo methods vary, but tend to follow a particular pattern: Define a domain of possible inputs Generate inputs randomly from a probability distribution over the domain Perform a deterministic computation of the outputs Aggregate the results For example, consider a quadrant (circular sector) inscribed in a unit square. Given that the ratio of their areas is , the value of can be approximated using a Monte Carlo method: Draw a square, then inscribe a quadrant within it Uniformly scatter a given number of points over the square Count the number of points inside the quadrant, i.e. having a distance from the origin of less than 1 The ratio of the inside-count and the total-sample-count is an estimate of the ratio of the two areas, . Multiply the result by 4 to estimate . In this procedure the domain of inputs is the square that circumscribes the quadrant. One can generate random inputs by scattering grains over the square then perform a computation on each input (test whether it falls within the quadrant). Aggregating the results yields our final result, the approximation of . There are two important considerations: If the points are not uniformly distributed, then the approximation will be poor. The approximation is generally poor if only a few points are randomly placed in the whole square. On average, the approximation improves as more points are placed. Uses of Monte Carlo methods require large amounts of random numbers, and their use benefitted greatly from pseudorandom number generators, which are far quicker to use than the tables of random numbers that had been previously used for statistical sampling. Application Monte Carlo methods are often used in physical and mathematical problems and are most useful when it is difficult or impossible to use other approaches. 
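As a concrete illustration of the four-step pattern and the π example in the overview above, here is a minimal Python sketch (plain NumPy; the function name and sample counts are illustrative choices, not part of the original description):

```python
import numpy as np

def estimate_pi(n_samples, seed=0):
    """Estimate pi by uniformly scattering points over the unit square
    and counting how many land inside the inscribed quarter circle."""
    rng = np.random.default_rng(seed)
    x = rng.random(n_samples)           # steps 1-2: random inputs over the domain
    y = rng.random(n_samples)
    inside = (x * x + y * y) < 1.0      # step 3: deterministic test per sample
    return 4.0 * inside.mean()          # step 4: aggregate (area ratio times 4)

for n in (100, 10_000, 1_000_000):
    print(n, estimate_pi(n))            # the estimate tends to improve as n grows
```

The two considerations above are visible here: the points must be drawn uniformly over the square, and small sample counts give poor estimates.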
Monte Carlo methods are mainly used in three problem classes: optimization, numerical integration, and generating draws from a probability distribution. In physics-related problems, Monte Carlo methods are useful for simulating systems with many coupled degrees of freedom, such as fluids, disordered materials, strongly coupled solids, and cellular structures (see cellular Potts model, interacting particle systems, McKean–Vlasov processes, kinetic models of gases). Other examples include modeling phenomena with significant uncertainty in inputs such as the calculation of risk in business and, in mathematics, evaluation of multidimensional definite integrals with complicated boundary conditions. In application to systems engineering problems (space, oil exploration, aircraft design, etc.), Monte Carlo–based predictions of failure, cost overruns and schedule overruns are routinely better than human intuition or alternative "soft" methods. In principle, Monte Carlo methods can be used to solve any problem having a probabilistic interpretation. By the law of large numbers, integrals described by the expected value of some random variable can be approximated by taking the empirical mean ( the 'sample mean') of independent samples of the variable. When the probability distribution of the variable is parameterized, mathematicians often use a Markov chain Monte Carlo (MCMC) sampler. The central idea is to design a judicious Markov chain model with a prescribed stationary probability distribution. That is, in the limit, the samples being generated by the MCMC method will be samples from the desired (target) distribution. By the ergodic theorem, the stationary distribution is approximated by the empirical measures of the random states of the MCMC sampler. In other problems, the objective is generating draws from a sequence of probability distributions satisfying a nonlinear evolution equation. These flows of probability distributions can always be interpreted as the distributions of the random states of a Markov process whose transition probabilities depend on the distributions of the current random states (see McKean–Vlasov processes, nonlinear filtering equation). In other instances, a flow of probability distributions with an increasing level of sampling complexity arise (path spaces models with an increasing time horizon, Boltzmann–Gibbs measures associated with decreasing temperature parameters, and many others). These models can also be seen as the evolution of the law of the random states of a nonlinear Markov chain. A natural way to simulate these sophisticated nonlinear Markov processes is to sample multiple copies of the process, replacing in the evolution equation the unknown distributions of the random states by the sampled empirical measures. In contrast with traditional Monte Carlo and MCMC methodologies, these mean-field particle techniques rely on sequential interacting samples. The terminology mean field reflects the fact that each of the samples ( particles, individuals, walkers, agents, creatures, or phenotypes) interacts with the empirical measures of the process. When the size of the system tends to infinity, these random empirical measures converge to the deterministic distribution of the random states of the nonlinear Markov chain, so that the statistical interaction between particles vanishes. Simple Monte Carlo Suppose one wants to know the expected value μ of a population (and knows that μ exists), but does not have a formula available to compute it. 
The simple Monte Carlo method gives an estimate for μ by running n simulations and averaging the simulations’ results. It has no restrictions on the probability distribution of the inputs to the simulations, requiring only that the inputs are randomly generated and are independent of each other and that μ exists. A sufficiently large n will produce a value for m that is arbitrarily close to μ; more formally, for any ε > 0, the probability that |μ – m| ≤ ε approaches one as n grows. Typically, the algorithm to obtain m is
    s = 0;
    for i = 1 to n do
        run the simulation for the ith time, giving result r_i;
        s = s + r_i;
    repeat
    m = s / n;
An example
Suppose we want to know how many times we should expect to throw three eight-sided dice for the total of the dice throws to be at least T. We know the expected value exists. The dice throws are randomly distributed and independent of each other. So simple Monte Carlo is applicable:
    s = 0;
    for i = 1 to n do
        throw the three dice until T is met or first exceeded;
        r_i = the number of throws;
        s = s + r_i;
    repeat
    m = s / n;
If n is large enough, m will be within ε of μ with high probability, for any given ε > 0.
Determining a sufficiently large n
General formula
Let ε > 0 be the maximum allowed difference between μ and m. Choose the desired confidence level – the percent chance that, when the Monte Carlo algorithm completes, m is indeed within ε of μ. Let z be the z-score corresponding to that confidence level. Let s² be the estimated variance, sometimes called the “sample” variance; it is the variance of the results obtained from a relatively small number k of “sample” simulations. Choose a k; Driels and Shin observe that “even for sample sizes an order of magnitude lower than the number required, the calculation of that number is quite stable.” The following algorithm computes s² in one pass while minimizing the possibility that accumulated numerical error produces erroneous results:
    s_1 = 0;
    run the simulation for the first time, producing result r_1;
    m_1 = r_1;  // m_i is the mean of the first i simulations
    for i = 2 to k do
        run the simulation for the ith time, producing result r_i;
        δ_i = r_i − m_(i−1);
        m_i = m_(i−1) + (1/i)δ_i;
        s_i = s_(i−1) + ((i − 1)/i)(δ_i)²;
    repeat
    s² = s_k / (k − 1);
Note that, when the algorithm completes, m_k is the mean of the k results. n is sufficiently large when
    n ≥ z²s²/ε².
If n ≤ k, then m_k = m; sufficient sample simulations were done to ensure that m_k is within ε of μ. If n > k, then n simulations can be run “from scratch,” or, since k simulations have already been done, one can just run n – k more simulations and add their results into those from the sample simulations:
    s = m_k * k;
    for i = k + 1 to n do
        run the simulation for the ith time, giving result r_i;
        s = s + r_i;
    repeat
    m = s / n;
A formula when simulations' results are bounded
An alternate formula can be used in the special case where all simulation results are bounded above and below. Choose a value for ε that is twice the maximum allowed difference between μ and m. Let 0 < δ < 100 be the desired confidence level, expressed as a percentage. Let every simulation result r_1, r_2, …, r_i, …, r_n be such that a ≤ r_i ≤ b for finite a and b. To have confidence of at least δ that |μ – m| < ε/2, use a value for n such that
    n ≥ 2(b − a)² ln(2/(1 − δ/100))/ε².
For example, if δ = 99%, then n ≥ 2(b − a)² ln(2/0.01)/ε² ≈ 10.6(b − a)²/ε².
Computational costs
Despite its conceptual and algorithmic simplicity, the computational cost associated with a Monte Carlo simulation can be staggeringly high.
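A runnable version of the dice example and the sample-size check above, sketched in Python (the helper names, the target T = 100, the tolerance ε = 0.1 and the 95% confidence level are illustrative assumptions, not taken from the text):

```python
import numpy as np

def throws_until_total(target, rng):
    """One simulation: throw three eight-sided dice repeatedly and return how
    many throws were needed for the running total to reach at least `target`."""
    total, throws = 0, 0
    while total < target:
        total += rng.integers(1, 9, size=3).sum()   # three dice with faces 1..8
        throws += 1
    return throws

rng = np.random.default_rng(42)
target = 100          # T in the example above (illustrative value)
eps = 0.1             # maximum allowed difference between m and mu
z = 1.96              # z-score for a 95% confidence level (assumed here)

# "Sample" simulations: estimate the variance s^2 from a modest k.
k = 1000
sample = np.array([throws_until_total(target, rng) for _ in range(k)])
s2 = sample.var(ddof=1)

# Sufficient number of simulations for the chosen confidence and tolerance.
n = int(np.ceil(z * z * s2 / (eps * eps)))
print("estimated variance:", s2, "-> required n:", n)

# Run any remaining simulations and average everything.
extra = np.array([throws_until_total(target, rng) for _ in range(max(n - k, 0))])
m = np.concatenate([sample, extra]).mean()
print("estimated expected number of throws:", m)
```

Note how n grows as z²s²/ε²: halving the tolerance roughly quadruples the number of simulations, which is one reason the cost noted above can become high.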
In general the method requires many samples to get a good approximation, which may incur an arbitrarily large total runtime if the processing time of a single sample is high. Although this is a severe limitation in very complex problems, the embarrassingly parallel nature of the algorithm allows this large cost to be reduced (perhaps to a feasible level) through parallel computing strategies in local processors, clusters, cloud computing, GPU, FPGA, etc. History Before the Monte Carlo method was developed, simulations tested a previously understood deterministic problem, and statistical sampling was used to estimate uncertainties in the simulations. Monte Carlo simulations invert this approach, solving deterministic problems using probabilistic metaheuristics (see simulated annealing). An early variant of the Monte Carlo method was devised to solve the Buffon's needle problem, in which can be estimated by dropping needles on a floor made of parallel equidistant strips. In the 1930s, Enrico Fermi first experimented with the Monte Carlo method while studying neutron diffusion, but he did not publish this work. In the late 1940s, Stanislaw Ulam invented the modern version of the Markov Chain Monte Carlo method while he was working on nuclear weapons projects at the Los Alamos National Laboratory. In 1946, nuclear weapons physicists at Los Alamos were investigating neutron diffusion in the core of a nuclear weapon. Despite having most of the necessary data, such as the average distance a neutron would travel in a substance before it collided with an atomic nucleus and how much energy the neutron was likely to give off following a collision, the Los Alamos physicists were unable to solve the problem using conventional, deterministic mathematical methods. Ulam proposed using random experiments. He recounts his inspiration as follows: Being secret, the work of von Neumann and Ulam required a code name. A colleague of von Neumann and Ulam, Nicholas Metropolis, suggested using the name Monte Carlo, which refers to the Monte Carlo Casino in Monaco where Ulam's uncle would borrow money from relatives to gamble. Monte Carlo methods were central to the simulations required for further postwar development of nuclear weapons, including the design of the H-bomb, though severely limited by the computational tools at the time. Von Neumann, Nicholas Metropolis and others programmed the ENIAC computer to perform the first fully automated Monte Carlo calculations, of a fission weapon core, in the spring of 1948. In the 1950s Monte Carlo methods were used at Los Alamos for the development of the hydrogen bomb, and became popularized in the fields of physics, physical chemistry, and operations research. The Rand Corporation and the U.S. Air Force were two of the major organizations responsible for funding and disseminating information on Monte Carlo methods during this time, and they began to find a wide application in many different fields. The theory of more sophisticated mean-field type particle Monte Carlo methods had certainly started by the mid-1960s, with the work of Henry P. McKean Jr. on Markov interpretations of a class of nonlinear parabolic partial differential equations arising in fluid mechanics. An earlier pioneering article by Theodore E. Harris and Herman Kahn, published in 1951, used mean-field genetic-type Monte Carlo methods for estimating particle transmission energies. Mean-field genetic type Monte Carlo methodologies are also used as heuristic natural search algorithms (a.k.a. 
metaheuristic) in evolutionary computing. The origins of these mean-field computational techniques can be traced to 1950 and 1954 with the work of Alan Turing on genetic type mutation-selection learning machines and the articles by Nils Aall Barricelli at the Institute for Advanced Study in Princeton, New Jersey. Quantum Monte Carlo, and more specifically diffusion Monte Carlo methods can also be interpreted as a mean-field particle Monte Carlo approximation of Feynman–Kac path integrals. The origins of Quantum Monte Carlo methods are often attributed to Enrico Fermi and Robert Richtmyer who developed in 1948 a mean-field particle interpretation of neutron-chain reactions, but the first heuristic-like and genetic type particle algorithm (a.k.a. Resampled or Reconfiguration Monte Carlo methods) for estimating ground state energies of quantum systems (in reduced matrix models) is due to Jack H. Hetherington in 1984. In molecular chemistry, the use of genetic heuristic-like particle methodologies (a.k.a. pruning and enrichment strategies) can be traced back to 1955 with the seminal work of Marshall N. Rosenbluth and Arianna W. Rosenbluth. The use of Sequential Monte Carlo in advanced signal processing and Bayesian inference is more recent. It was in 1993, that Gordon et al., published in their seminal work the first application of a Monte Carlo resampling algorithm in Bayesian statistical inference. The authors named their algorithm 'the bootstrap filter', and demonstrated that compared to other filtering methods, their bootstrap algorithm does not require any assumption about that state-space or the noise of the system. Another pioneering article in this field was Genshiro Kitagawa's, on a related "Monte Carlo filter", and the ones by Pierre Del Moral and Himilcon Carvalho, Pierre Del Moral, André Monin and Gérard Salut on particle filters published in the mid-1990s. Particle filters were also developed in signal processing in 1989–1992 by P. Del Moral, J. C. Noyer, G. Rigal, and G. Salut in the LAAS-CNRS in a series of restricted and classified research reports with STCAN (Service Technique des Constructions et Armes Navales), the IT company DIGILOG, and the LAAS-CNRS (the Laboratory for Analysis and Architecture of Systems) on radar/sonar and GPS signal processing problems. These Sequential Monte Carlo methodologies can be interpreted as an acceptance-rejection sampler equipped with an interacting recycling mechanism. From 1950 to 1996, all the publications on Sequential Monte Carlo methodologies, including the pruning and resample Monte Carlo methods introduced in computational physics and molecular chemistry, present natural and heuristic-like algorithms applied to different situations without a single proof of their consistency, nor a discussion on the bias of the estimates and on genealogical and ancestral tree based algorithms. The mathematical foundations and the first rigorous analysis of these particle algorithms were written by Pierre Del Moral in 1996. Branching type particle methodologies with varying population sizes were also developed in the end of the 1990s by Dan Crisan, Jessica Gaines and Terry Lyons, and by Dan Crisan, Pierre Del Moral and Terry Lyons. Further developments in this field were described in 1999 to 2001 by P. Del Moral, A. Guionnet and L. Miclo. Definitions There is no consensus on how Monte Carlo should be defined. 
For example, Ripley defines most probabilistic modeling as stochastic simulation, with Monte Carlo being reserved for Monte Carlo integration and Monte Carlo statistical tests. Sawilowsky distinguishes between a simulation, a Monte Carlo method, and a Monte Carlo simulation: a simulation is a fictitious representation of reality, a Monte Carlo method is a technique that can be used to solve a mathematical or statistical problem, and a Monte Carlo simulation uses repeated sampling to obtain the statistical properties of some phenomenon (or behavior). Here are the examples: Simulation: Drawing one pseudo-random uniform variable from the interval [0,1] can be used to simulate the tossing of a coin: If the value is less than or equal to 0.50 designate the outcome as heads, but if the value is greater than 0.50 designate the outcome as tails. This is a simulation, but not a Monte Carlo simulation. Monte Carlo method: Pouring out a box of coins on a table, and then computing the ratio of coins that land heads versus tails is a Monte Carlo method of determining the behavior of repeated coin tosses, but it is not a simulation. Monte Carlo simulation: Drawing a large number of pseudo-random uniform variables from the interval [0,1] at one time, or once at many different times, and assigning values less than or equal to 0.50 as heads and greater than 0.50 as tails, is a Monte Carlo simulation of the behavior of repeatedly tossing a coin. Kalos and Whitlock point out that such distinctions are not always easy to maintain. For example, the emission of radiation from atoms is a natural stochastic process. It can be simulated directly, or its average behavior can be described by stochastic equations that can themselves be solved using Monte Carlo methods. "Indeed, the same computer code can be viewed simultaneously as a 'natural simulation' or as a solution of the equations by natural sampling." Convergence of the Monte Carlo simulation can be checked with the Gelman-Rubin statistic. Monte Carlo and random numbers The main idea behind this method is that the results are computed based on repeated random sampling and statistical analysis. The Monte Carlo simulation is, in fact, random experimentations, in the case that, the results of these experiments are not well known. Monte Carlo simulations are typically characterized by many unknown parameters, many of which are difficult to obtain experimentally. Monte Carlo simulation methods do not always require truly random numbers to be useful (although, for some applications such as primality testing, unpredictability is vital). Many of the most useful techniques use deterministic, pseudorandom sequences, making it easy to test and re-run simulations. The only quality usually necessary to make good simulations is for the pseudo-random sequence to appear "random enough" in a certain sense. What this means depends on the application, but typically they should pass a series of statistical tests. Testing that the numbers are uniformly distributed or follow another desired distribution when a large enough number of elements of the sequence are considered is one of the simplest and most common ones. Weak correlations between successive samples are also often desirable/necessary. Sawilowsky lists the characteristics of a high-quality Monte Carlo simulation: the (pseudo-random) number generator has certain characteristics (e.g. 
a long "period" before the sequence repeats) the (pseudo-random) number generator produces values that pass tests for randomness there are enough samples to ensure accurate results the proper sampling technique is used the algorithm used is valid for what is being modeled it simulates the phenomenon in question. Pseudo-random number sampling algorithms are used to transform uniformly distributed pseudo-random numbers into numbers that are distributed according to a given probability distribution. Low-discrepancy sequences are often used instead of random sampling from a space as they ensure even coverage and normally have a faster order of convergence than Monte Carlo simulations using random or pseudorandom sequences. Methods based on their use are called quasi-Monte Carlo methods. In an effort to assess the impact of random number quality on Monte Carlo simulation outcomes, astrophysical researchers tested cryptographically secure pseudorandom numbers generated via Intel's RDRAND instruction set, as compared to those derived from algorithms, like the Mersenne Twister, in Monte Carlo simulations of radio flares from brown dwarfs. No statistically significant difference was found between models generated with typical pseudorandom number generators and RDRAND for trials consisting of the generation of 107 random numbers. Monte Carlo simulation versus "what if" scenarios There are ways of using probabilities that are definitely not Monte Carlo simulations – for example, deterministic modeling using single-point estimates. Each uncertain variable within a model is assigned a "best guess" estimate. Scenarios (such as best, worst, or most likely case) for each input variable are chosen and the results recorded. By contrast, Monte Carlo simulations sample from a probability distribution for each variable to produce hundreds or thousands of possible outcomes. The results are analyzed to get probabilities of different outcomes occurring. For example, a comparison of a spreadsheet cost construction model run using traditional "what if" scenarios, and then running the comparison again with Monte Carlo simulation and triangular probability distributions shows that the Monte Carlo analysis has a narrower range than the "what if" analysis. This is because the "what if" analysis gives equal weight to all scenarios (see quantifying uncertainty in corporate finance), while the Monte Carlo method hardly samples in the very low probability regions. The samples in such regions are called "rare events". Applications Monte Carlo methods are especially useful for simulating phenomena with significant uncertainty in inputs and systems with many coupled degrees of freedom. Areas of application include: Physical sciences Monte Carlo methods are very important in computational physics, physical chemistry, and related applied fields, and have diverse applications from complicated quantum chromodynamics calculations to designing heat shields and aerodynamic forms as well as in modeling radiation transport for radiation dosimetry calculations. In statistical physics, Monte Carlo molecular modeling is an alternative to computational molecular dynamics, and Monte Carlo methods are used to compute statistical field theories of simple particle and polymer systems. Quantum Monte Carlo methods solve the many-body problem for quantum systems. In radiation materials science, the binary collision approximation for simulating ion implantation is usually based on a Monte Carlo approach to select the next colliding atom. 
In experimental particle physics, Monte Carlo methods are used for designing detectors, understanding their behavior and comparing experimental data to theory. In astrophysics, they are used in such diverse manners as to model both galaxy evolution and microwave radiation transmission through a rough planetary surface. Monte Carlo methods are also used in the ensemble models that form the basis of modern weather forecasting. Engineering Monte Carlo methods are widely used in engineering for sensitivity analysis and quantitative probabilistic analysis in process design. The need arises from the interactive, co-linear and non-linear behavior of typical process simulations. For example, In microelectronics engineering, Monte Carlo methods are applied to analyze correlated and uncorrelated variations in analog and digital integrated circuits. In geostatistics and geometallurgy, Monte Carlo methods underpin the design of mineral processing flowsheets and contribute to quantitative risk analysis. In fluid dynamics, in particular rarefied gas dynamics, where the Boltzmann equation is solved for finite Knudsen number fluid flows using the direct simulation Monte Carlo method in combination with highly efficient computational algorithms. In autonomous robotics, Monte Carlo localization can determine the position of a robot. It is often applied to stochastic filters such as the Kalman filter or particle filter that forms the heart of the SLAM (simultaneous localization and mapping) algorithm. In telecommunications, when planning a wireless network, the design must be proven to work for a wide variety of scenarios that depend mainly on the number of users, their locations and the services they want to use. Monte Carlo methods are typically used to generate these users and their states. The network performance is then evaluated and, if results are not satisfactory, the network design goes through an optimization process. In reliability engineering, Monte Carlo simulation is used to compute system-level response given the component-level response. In signal processing and Bayesian inference, particle filters and sequential Monte Carlo techniques are a class of mean-field particle methods for sampling and computing the posterior distribution of a signal process given some noisy and partial observations using interacting empirical measures. Climate change and radiative forcing The Intergovernmental Panel on Climate Change relies on Monte Carlo methods in probability density function analysis of radiative forcing. Computational biology Monte Carlo methods are used in various fields of computational biology, for example for Bayesian inference in phylogeny, or for studying biological systems such as genomes, proteins, or membranes. The systems can be studied in the coarse-grained or ab initio frameworks depending on the desired accuracy. Computer simulations allow monitoring of the local environment of a particular molecule to see if some chemical reaction is happening for instance. In cases where it is not feasible to conduct a physical experiment, thought experiments can be conducted (for instance: breaking bonds, introducing impurities at specific sites, changing the local/global structure, or introducing external fields). Computer graphics Path tracing, occasionally referred to as Monte Carlo ray tracing, renders a 3D scene by randomly tracing samples of possible light paths. 
Repeated sampling of any given pixel will eventually cause the average of the samples to converge on the correct solution of the rendering equation, making it one of the most physically accurate 3D graphics rendering methods in existence. Applied statistics The standards for Monte Carlo experiments in statistics were set by Sawilowsky. In applied statistics, Monte Carlo methods may be used for at least four purposes: To compare competing statistics for small samples under realistic data conditions. Although type I error and power properties of statistics can be calculated for data drawn from classical theoretical distributions (e.g., normal curve, Cauchy distribution) for asymptotic conditions (i. e, infinite sample size and infinitesimally small treatment effect), real data often do not have such distributions. To provide implementations of hypothesis tests that are more efficient than exact tests such as permutation tests (which are often impossible to compute) while being more accurate than critical values for asymptotic distributions. To provide a random sample from the posterior distribution in Bayesian inference. This sample then approximates and summarizes all the essential features of the posterior. To provide efficient random estimates of the Hessian matrix of the negative log-likelihood function that may be averaged to form an estimate of the Fisher information matrix. Monte Carlo methods are also a compromise between approximate randomization and permutation tests. An approximate randomization test is based on a specified subset of all permutations (which entails potentially enormous housekeeping of which permutations have been considered). The Monte Carlo approach is based on a specified number of randomly drawn permutations (exchanging a minor loss in precision if a permutation is drawn twice—or more frequently—for the efficiency of not having to track which permutations have already been selected). Artificial intelligence for games Monte Carlo methods have been developed into a technique called Monte-Carlo tree search that is useful for searching for the best move in a game. Possible moves are organized in a search tree and many random simulations are used to estimate the long-term potential of each move. A black box simulator represents the opponent's moves. The Monte Carlo tree search (MCTS) method has four steps: Starting at root node of the tree, select optimal child nodes until a leaf node is reached. Expand the leaf node and choose one of its children. Play a simulated game starting with that node. Use the results of that simulated game to update the node and its ancestors. The net effect, over the course of many simulated games, is that the value of a node representing a move will go up or down, hopefully corresponding to whether or not that node represents a good move. Monte Carlo Tree Search has been used successfully to play games such as Go, Tantrix, Battleship, Havannah, and Arimaa. Design and visuals Monte Carlo methods are also efficient in solving coupled integral differential equations of radiation fields and energy transport, and thus these methods have been used in global illumination computations that produce photo-realistic images of virtual 3D models, with applications in video games, architecture, design, computer generated films, and cinematic special effects. 
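As a small illustration of the Monte Carlo alternative to exact permutation tests described under applied statistics above, the following Python sketch estimates a p-value for a difference in group means from randomly drawn relabelings (the data values and the number of resamples are arbitrary assumptions):

```python
import numpy as np

def monte_carlo_permutation_test(x, y, n_resamples=10_000, seed=0):
    """Two-sample test of a mean difference: instead of enumerating every
    permutation, draw `n_resamples` random relabelings and count how often
    the permuted difference is at least as extreme as the observed one."""
    rng = np.random.default_rng(seed)
    observed = abs(x.mean() - y.mean())
    pooled = np.concatenate([x, y])
    count = 0
    for _ in range(n_resamples):
        rng.shuffle(pooled)                      # random relabeling of the pooled data
        diff = abs(pooled[:len(x)].mean() - pooled[len(x):].mean())
        if diff >= observed:
            count += 1
    # The +1 corrections keep the estimate valid even when count == 0.
    return (count + 1) / (n_resamples + 1)

x = np.array([4.2, 5.1, 6.3, 5.8, 4.9, 5.5])     # illustrative data
y = np.array([3.9, 4.4, 4.1, 4.8, 4.0, 4.6])
print("Monte Carlo p-value:", monte_carlo_permutation_test(x, y))
```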
Search and rescue The US Coast Guard utilizes Monte Carlo methods within its computer modeling software SAROPS in order to calculate the probable locations of vessels during search and rescue operations. Each simulation can generate as many as ten thousand data points that are randomly distributed based upon provided variables. Search patterns are then generated based upon extrapolations of these data in order to optimize the probability of containment (POC) and the probability of detection (POD), which together will equal an overall probability of success (POS). Ultimately this serves as a practical application of probability distribution in order to provide the swiftest and most expedient method of rescue, saving both lives and resources. Finance and business Monte Carlo simulation is commonly used to evaluate the risk and uncertainty that would affect the outcome of different decision options. Monte Carlo simulation allows the business risk analyst to incorporate the total effects of uncertainty in variables like sales volume, commodity and labor prices, interest and exchange rates, as well as the effect of distinct risk events like the cancellation of a contract or the change of a tax law. Monte Carlo methods in finance are often used to evaluate investments in projects at a business unit or corporate level, or other financial valuations. They can be used to model project schedules, where simulations aggregate estimates for worst-case, best-case, and most likely durations for each task to determine outcomes for the overall project. Monte Carlo methods are also used in option pricing, default risk analysis. Additionally, they can be used to estimate the financial impact of medical interventions. Law A Monte Carlo approach was used for evaluating the potential value of a proposed program to help female petitioners in Wisconsin be successful in their applications for harassment and domestic abuse restraining orders. It was proposed to help women succeed in their petitions by providing them with greater advocacy thereby potentially reducing the risk of rape and physical assault. However, there were many variables in play that could not be estimated perfectly, including the effectiveness of restraining orders, the success rate of petitioners both with and without advocacy, and many others. The study ran trials that varied these variables to come up with an overall estimate of the success level of the proposed program as a whole. Library science Monte Carlo approach had also been used to simulate the number of book publications based on book genre in Malaysia. The Monte Carlo simulation utilized previous published National Book publication data and book's price according to book genre in the local market. The Monte Carlo results were used to determine what kind of book genre that Malaysians are fond of and was used to compare book publications between Malaysia and Japan. Other Nassim Nicholas Taleb writes about Monte Carlo generators in his 2001 book Fooled by Randomness as a real instance of the reverse Turing test: a human can be declared unintelligent if their writing cannot be told apart from a generated one. Use in mathematics In general, the Monte Carlo methods are used in mathematics to solve various problems by generating suitable random numbers (see also Random number generation) and observing that fraction of the numbers that obeys some property or properties. The method is useful for obtaining numerical solutions to problems too complicated to solve analytically. 
The most common application of the Monte Carlo method is Monte Carlo integration. Integration Deterministic numerical integration algorithms work well in a small number of dimensions, but encounter two problems when the functions have many variables. First, the number of function evaluations needed increases rapidly with the number of dimensions. For example, if 10 evaluations provide adequate accuracy in one dimension, then 10¹⁰⁰ points are needed for 100 dimensions—far too many to be computed. This is called the curse of dimensionality. Second, the boundary of a multidimensional region may be very complicated, so it may not be feasible to reduce the problem to an iterated integral. 100 dimensions is by no means unusual, since in many physical problems, a "dimension" is equivalent to a degree of freedom. Monte Carlo methods provide a way out of this exponential increase in computation time. As long as the function in question is reasonably well-behaved, it can be estimated by randomly selecting points in 100-dimensional space, and taking some kind of average of the function values at these points. By the central limit theorem, this method displays 1/√N convergence—i.e., quadrupling the number of sampled points halves the error, regardless of the number of dimensions. A refinement of this method, known as importance sampling in statistics, involves sampling the points randomly, but more frequently where the integrand is large. To do this precisely one would have to already know the integral, but one can approximate the integral by an integral of a similar function or use adaptive routines such as stratified sampling, recursive stratified sampling, adaptive umbrella sampling or the VEGAS algorithm. A similar approach, the quasi-Monte Carlo method, uses low-discrepancy sequences. These sequences "fill" the area better and sample the most important points more frequently, so quasi-Monte Carlo methods can often converge on the integral more quickly. Another class of methods for sampling points in a volume is to simulate random walks over it (Markov chain Monte Carlo). Such methods include the Metropolis–Hastings algorithm, Gibbs sampling, Wang and Landau algorithm, and interacting type MCMC methodologies such as the sequential Monte Carlo samplers. Simulation and optimization Another powerful and very popular application for random numbers in numerical simulation is in numerical optimization. The problem is to minimize (or maximize) functions of some vector that often has many dimensions. Many problems can be phrased in this way: for example, a computer chess program could be seen as trying to find the set of, say, 10 moves that produces the best evaluation function at the end. In the traveling salesman problem the goal is to minimize distance traveled. There are also applications to engineering design, such as multidisciplinary design optimization. It has been applied with quasi-one-dimensional models to solve particle dynamics problems by efficiently exploring large configuration space. Reference is a comprehensive review of many issues related to simulation and optimization. The traveling salesman problem is what is called a conventional optimization problem. That is, all the facts (distances between each destination point) needed to determine the optimal path to follow are known with certainty and the goal is to run through the possible travel choices to come up with the one with the lowest total distance.
If instead of the goal being to minimize the total distance traveled to visit each desired destination but rather to minimize the total time needed to reach each destination, this goes beyond conventional optimization since travel time is inherently uncertain (traffic jams, time of day, etc.). As a result, to determine the optimal path a different simulation is required: optimization to first understand the range of potential times it could take to go from one point to another (represented by a probability distribution in this case rather than a specific distance) and then optimize the travel decisions to identify the best path to follow taking that uncertainty into account. Inverse problems Probabilistic formulation of inverse problems leads to the definition of a probability distribution in the model space. This probability distribution combines prior information with new information obtained by measuring some observable parameters (data). As, in the general case, the theory linking data with model parameters is nonlinear, the posterior probability in the model space may not be easy to describe (it may be multimodal, some moments may not be defined, etc.). When analyzing an inverse problem, obtaining a maximum likelihood model is usually not sufficient, as normally information on the resolution power of the data is desired. In the general case many parameters are modeled, and an inspection of the marginal probability densities of interest may be impractical, or even useless. But it is possible to pseudorandomly generate a large collection of models according to the posterior probability distribution and to analyze and display the models in such a way that information on the relative likelihoods of model properties is conveyed to the spectator. This can be accomplished by means of an efficient Monte Carlo method, even in cases where no explicit formula for the a priori distribution is available. The best-known importance sampling method, the Metropolis algorithm, can be generalized, and this gives a method that allows analysis of (possibly highly nonlinear) inverse problems with complex a priori information and data with an arbitrary noise distribution. Philosophy Popular exposition of the Monte Carlo Method was conducted by McCracken. The method's general philosophy was discussed by Elishakoff and Grüne-Yanoff and Weirich. See also Auxiliary-field Monte Carlo Biology Monte Carlo method Direct simulation Monte Carlo Dynamic Monte Carlo method Ergodicity Genetic algorithms Kinetic Monte Carlo List of software for Monte Carlo molecular modeling Mean-field particle methods Monte Carlo method for photon transport Monte Carlo methods for electron transport Monte Carlo N-Particle Transport Code Morris method Multilevel Monte Carlo method Quasi-Monte Carlo method Sobol sequence Temporal difference learning References Citations Sources External links Numerical analysis Statistical mechanics Computational physics Sampling techniques Statistical approximations Stochastic simulation Randomized algorithms Risk analysis methodologies
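The Metropolis algorithm mentioned above admits a very short sketch. The following illustrative Python example (the target density, proposal scale and chain length are arbitrary choices, not from the text) samples from an unnormalized one-dimensional density, the setting in which only ratios of the target are needed:

```python
import numpy as np

def metropolis(log_target, x0, n_steps, proposal_scale=1.0, seed=0):
    """Random-walk Metropolis sampler: propose x' = x + N(0, scale^2) and accept
    with probability min(1, target(x')/target(x)); only the unnormalized
    target density is required."""
    rng = np.random.default_rng(seed)
    samples = np.empty(n_steps)
    x = x0
    lp = log_target(x)
    for i in range(n_steps):
        proposal = x + proposal_scale * rng.normal()
        lp_prop = log_target(proposal)
        if np.log(rng.random()) < lp_prop - lp:   # accept/reject step
            x, lp = proposal, lp_prop
        samples[i] = x
    return samples

# Unnormalized target: a mixture of two Gaussian bumps centred at -2 and +2.
log_target = lambda x: np.logaddexp(-0.5 * (x - 2.0) ** 2, -0.5 * (x + 2.0) ** 2)
chain = metropolis(log_target, x0=0.0, n_steps=50_000)
burned = chain[5_000:]                            # discard burn-in
print("mean (should be near 0):", burned.mean(), " spread:", burned.std())
```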
0.770932
0.999505
0.77055
Symmetry (physics)
The symmetry of a physical system is a physical or mathematical feature of the system (observed or intrinsic) that is preserved or remains unchanged under some transformation. A family of particular transformations may be continuous (such as rotation of a circle) or discrete (e.g., reflection of a bilaterally symmetric figure, or rotation of a regular polygon). Continuous and discrete transformations give rise to corresponding types of symmetries. Continuous symmetries can be described by Lie groups while discrete symmetries are described by finite groups (see Symmetry group). These two concepts, Lie and finite groups, are the foundation for the fundamental theories of modern physics. Symmetries are frequently amenable to mathematical formulations such as group representations and can, in addition, be exploited to simplify many problems. Arguably the most important example of a symmetry in physics is that the speed of light has the same value in all frames of reference, which is described in special relativity by a group of transformations of the spacetime known as the Poincaré group. Another important example is the invariance of the form of physical laws under arbitrary differentiable coordinate transformations, which is an important idea in general relativity. As a kind of invariance Invariance is specified mathematically by transformations that leave some property (e.g. quantity) unchanged. This idea can apply to basic real-world observations. For example, temperature may be homogeneous throughout a room. Since the temperature does not depend on the position of an observer within the room, we say that the temperature is invariant under a shift in an observer's position within the room. Similarly, a uniform sphere rotated about its center will appear exactly as it did before the rotation. The sphere is said to exhibit spherical symmetry. A rotation about any axis of the sphere will preserve how the sphere "looks". Invariance in force The above ideas lead to the useful idea of invariance when discussing observed physical symmetry; this can be applied to symmetries in forces as well. For example, an electric field due to an electrically charged wire of infinite length is said to exhibit cylindrical symmetry, because the electric field strength at a given distance r from the wire will have the same magnitude at each point on the surface of a cylinder (whose axis is the wire) with radius r. Rotating the wire about its own axis does not change its position or charge density, hence it will preserve the field. The field strength at a rotated position is the same. This is not true in general for an arbitrary system of charges. In Newton's theory of mechanics, given two bodies, each with mass m, starting at the origin and moving along the x-axis in opposite directions, one with speed v1 and the other with speed v2 the total kinetic energy of the system (as calculated from an observer at the origin) is and remains the same if the velocities are interchanged. The total kinetic energy is preserved under a reflection in the y-axis. The last example above illustrates another way of expressing symmetries, namely through the equations that describe some aspect of the physical system. The above example shows that the total kinetic energy will be the same if v1 and v2 are interchanged. Local and global Symmetries may be broadly classified as global or local. 
A global symmetry is one that keeps a property invariant for a transformation that is applied simultaneously at all points of spacetime, whereas a local symmetry is one that keeps a property invariant when a possibly different symmetry transformation is applied at each point of spacetime; specifically a local symmetry transformation is parameterised by the spacetime co-ordinates, whereas a global symmetry is not. This implies that a global symmetry is also a local symmetry. Local symmetries play an important role in physics as they form the basis for gauge theories. Continuous The two examples of rotational symmetry described above – spherical and cylindrical – are each instances of continuous symmetry. These are characterised by invariance following a continuous change in the geometry of the system. For example, the wire may be rotated through any angle about its axis and the field strength will be the same on a given cylinder. Mathematically, continuous symmetries are described by transformations that change continuously as a function of their parameterization. An important subclass of continuous symmetries in physics are spacetime symmetries. Spacetime Continuous spacetime symmetries are symmetries involving transformations of space and time. These may be further classified as spatial symmetries, involving only the spatial geometry associated with a physical system; temporal symmetries, involving only changes in time; or spatio-temporal symmetries, involving changes in both space and time. Time translation: A physical system may have the same features over a certain interval of time Δt; this is expressed mathematically as invariance under the transformation for any real parameters t and in the interval. For example, in classical mechanics, a particle solely acted upon by gravity will have gravitational potential energy mgh when suspended from a height h above the Earth's surface. Assuming no change in the height of the particle, this will be the total gravitational potential energy of the particle at all times. In other words, by considering the state of the particle at some time t and also at , the particle's total gravitational potential energy will be preserved. Spatial translation: These spatial symmetries are represented by transformations of the form and describe those situations where a property of the system does not change with a continuous change in location. For example, the temperature in a room may be independent of where the thermometer is located in the room. Spatial rotation: These spatial symmetries are classified as proper rotations and improper rotations. The former are just the 'ordinary' rotations; mathematically, they are represented by square matrices with unit determinant. The latter are represented by square matrices with determinant −1 and consist of a proper rotation combined with a spatial reflection (inversion). For example, a sphere has proper rotational symmetry. Other types of spatial rotations are described in the article Rotation symmetry. Poincaré transformations: These are spatio-temporal symmetries which preserve distances in Minkowski spacetime, i.e. they are isometries of Minkowski space. They are studied primarily in special relativity. Those isometries that leave the origin fixed are called Lorentz transformations and give rise to the symmetry known as Lorentz covariance. Projective symmetries: These are spatio-temporal symmetries which preserve the geodesic structure of spacetime. 
They may be defined on any smooth manifold, but find many applications in the study of exact solutions in general relativity. Inversion transformations: These are spatio-temporal symmetries which generalise Poincaré transformations to include other conformal one-to-one transformations on the space-time coordinates. Lengths are not invariant under inversion transformations but there is a cross-ratio on four points that is invariant. Mathematically, spacetime symmetries are usually described by smooth vector fields on a smooth manifold. The underlying local diffeomorphisms associated with the vector fields correspond more directly to the physical symmetries, but the vector fields themselves are more often used when classifying the symmetries of the physical system. Some of the most important vector fields are Killing vector fields which are those spacetime symmetries that preserve the underlying metric structure of a manifold. In rough terms, Killing vector fields preserve the distance between any two points of the manifold and often go by the name of isometries. Discrete A discrete symmetry is a symmetry that describes non-continuous changes in a system. For example, a square possesses discrete rotational symmetry, as only rotations by multiples of right angles will preserve the square's original appearance. Discrete symmetries sometimes involve some type of 'swapping', these swaps usually being called reflections or interchanges. Time reversal: Many laws of physics describe real phenomena when the direction of time is reversed. Mathematically, this is represented by the transformation, . For example, Newton's second law of motion still holds if, in the equation , is replaced by . This may be illustrated by recording the motion of an object thrown up vertically (neglecting air resistance) and then playing it back. The object will follow the same parabolic trajectory through the air, whether the recording is played normally or in reverse. Thus, position is symmetric with respect to the instant that the object is at its maximum height. Spatial inversion: These are represented by transformations of the form and indicate an invariance property of a system when the coordinates are 'inverted'. Stated another way, these are symmetries between a certain object and its mirror image. Glide reflection: These are represented by a composition of a translation and a reflection. These symmetries occur in some crystals and in some planar symmetries, known as wallpaper symmetries. C, P, and T The Standard Model of particle physics has three related natural near-symmetries. These state that the universe in which we live should be indistinguishable from one where a certain type of change is introduced. C-symmetry (charge symmetry), a universe where every particle is replaced with its antiparticle. P-symmetry (parity symmetry), a universe where everything is mirrored along the three physical axes. This excludes weak interactions as demonstrated by Chien-Shiung Wu. T-symmetry (time reversal symmetry), a universe where the direction of time is reversed. T-symmetry is counterintuitive (the future and the past are not symmetrical) but explained by the fact that the Standard Model describes local properties, not global ones like entropy. To properly reverse the direction of time, one would have to put the Big Bang and the resulting low-entropy state in the "future". 
Since we perceive the "past" ("future") as having lower (higher) entropy than the present, the inhabitants of this hypothetical time-reversed universe would perceive the future in the same way as we perceive the past, and vice versa. These symmetries are near-symmetries because each is broken in the present-day universe. However, the Standard Model predicts that the combination of the three (that is, the simultaneous application of all three transformations) must be a symmetry, called CPT symmetry. CP violation, the violation of the combination of C- and P-symmetry, is necessary for the presence of significant amounts of baryonic matter in the universe. CP violation is a fruitful area of current research in particle physics. Supersymmetry A type of symmetry known as supersymmetry has been used to try to make theoretical advances in the Standard Model. Supersymmetry is based on the idea that there is another physical symmetry beyond those already developed in the Standard Model, specifically a symmetry between bosons and fermions. Supersymmetry asserts that each type of boson has, as a supersymmetric partner, a fermion, called a superpartner, and vice versa. Supersymmetry has not yet been experimentally verified: no known particle has the correct properties to be a superpartner of any other known particle. Currently LHC is preparing for a run which tests supersymmetry. Generalized symmetries Generalized symmetries encompass a number of recently recognized generalizations of the concept of a global symmetry. These include higher form symmetries, higher group symmetries, non-invertible symmetries, and subsystem symmetries. Mathematics of physical symmetry The transformations describing physical symmetries typically form a mathematical group. Group theory is an important area of mathematics for physicists. Continuous symmetries are specified mathematically by continuous groups (called Lie groups). Many physical symmetries are isometries and are specified by symmetry groups. Sometimes this term is used for more general types of symmetries. The set of all proper rotations (about any angle) through any axis of a sphere form a Lie group called the special orthogonal group SO(3). (The '3' refers to the three-dimensional space of an ordinary sphere.) Thus, the symmetry group of the sphere with proper rotations is SO(3). Any rotation preserves distances on the surface of the ball. The set of all Lorentz transformations form a group called the Lorentz group (this may be generalised to the Poincaré group). Discrete groups describe discrete symmetries. For example, the symmetries of an equilateral triangle are characterized by the symmetric group S. A type of physical theory based on local symmetries is called a gauge theory and the symmetries natural to such a theory are called gauge symmetries. Gauge symmetries in the Standard Model, used to describe three of the fundamental interactions, are based on the SU(3) × SU(2) × U(1) group. (Roughly speaking, the symmetries of the SU(3) group describe the strong force, the SU(2) group describes the weak interaction and the U(1) group describes the electromagnetic force.) Also, the reduction by symmetry of the energy functional under the action by a group and spontaneous symmetry breaking of transformations of symmetric groups appear to elucidate topics in particle physics (for example, the unification of electromagnetism and the weak force in physical cosmology). 
Conservation laws and symmetry The symmetry properties of a physical system are intimately related to the conservation laws characterizing that system. Noether's theorem gives a precise description of this relation. The theorem states that each continuous symmetry of a physical system implies that some physical property of that system is conserved. Conversely, each conserved quantity has a corresponding symmetry. For example, spatial translation symmetry (i.e. homogeneity of space) gives rise to conservation of (linear) momentum, and temporal translation symmetry (i.e. homogeneity of time) gives rise to conservation of energy. Other fundamental correspondences of the same kind are rotational symmetry (isotropy of space), which gives rise to conservation of angular momentum, and gauge (phase) invariance, which gives rise to conservation of electric charge. Mathematics Continuous symmetries in physics are implemented by families of transformations that vary continuously with their parameters. One can specify a symmetry by showing how a very small (infinitesimal) transformation affects various particle fields. The commutator of two of these infinitesimal transformations is equivalent to a third infinitesimal transformation of the same kind; hence they form a Lie algebra. A general coordinate transformation, described by a general vector field h^μ(x) (also known as a diffeomorphism), has an infinitesimal effect on a scalar φ(x), spinor ψ(x) or vector field A(x) that can be expressed (using the Einstein summation convention); for a scalar field, for example, δφ(x) = h^μ(x) ∂_μ φ(x), with analogous expressions (containing additional spin terms) for spinor and vector fields. Without gravity only the Poincaré symmetries are preserved, which restricts h^μ(x) to be of the form h^μ(x) = M^{μν} x_ν + P^μ, where M is an antisymmetric matrix (giving the Lorentz and rotational symmetries) and P is a general vector (giving the translational symmetries). Other symmetries affect multiple fields simultaneously. For example, local gauge transformations, parameterised by functions λ^a(x), apply to both a vector and a spinor field, schematically δψ(x) = i λ^a(x) T_a ψ(x) and δA^a_μ(x) = ∂_μ λ^a(x) + f^{abc} A^b_μ(x) λ^c(x), where T_a are generators of a particular Lie group and f^{abc} are its structure constants. So far the transformations on the right have only included fields of the same type. Supersymmetries are defined according to how they mix fields of different types. Another symmetry which is part of some theories of physics and not in others is scale invariance, which involves Weyl transformations of the following kind: a local rescaling of the metric, g_{μν}(x) → Ω(x)² g_{μν}(x), with the fields rescaled by their conformal weights. If the fields have this symmetry then it can be shown that the field theory is almost certainly conformally invariant also. This means that in the absence of gravity h^μ(x) would be restricted to the form h^μ(x) = M^{μν} x_ν + P^μ + D x^μ + K^μ x² − 2 x^μ K_ν x^ν, with D generating scale transformations and K generating special conformal transformations. For example, super-Yang–Mills theory has this symmetry while general relativity does not, although other theories of gravity such as conformal gravity do. The 'action' of a field theory is an invariant under all the symmetries of the theory. Much of modern theoretical physics has to do with speculating on the various symmetries the Universe may have and finding the invariants to construct field theories as models. In string theories, since a string can be decomposed into an infinite number of particle fields, the symmetries on the string world sheet are equivalent to special transformations which mix an infinite number of fields. See also Conserved current & Charge Coordinate-free Covariance and contravariance Fictitious force Galilean invariance Principle of covariance General covariance Harmonic coordinate condition Inertial frame of reference List of mathematical topics in relativity Standard Model (mathematical formulation) Wheeler–Feynman absorber theory
External links The Feynman Lectures on Physics Vol. I Ch. 52: Symmetry in Physical Laws Stanford Encyclopedia of Philosophy: "Symmetry"—by K. Brading and E. Castellani. Pedagogic Aids to Quantum Field Theory Click on link to Chapter 6: Symmetry, Invariance, and Conservation for a simplified, step-by-step introduction to symmetry in physics. Concepts in physics Conservation laws Diffeomorphisms Differential geometry Symmetry
Trajectory
A trajectory or flight path is the path that an object with mass in motion follows through space as a function of time. In classical mechanics, a trajectory is defined by Hamiltonian mechanics via canonical coordinates; hence, a complete trajectory is defined by position and momentum, simultaneously. The mass might be a projectile or a satellite. For example, it can be an orbit — the path of a planet, asteroid, or comet as it travels around a central mass. In control theory, a trajectory is a time-ordered set of states of a dynamical system (see e.g. Poincaré map). In discrete mathematics, a trajectory is a sequence of values calculated by the iterated application of a mapping to an element of its source. Physics of trajectories A familiar example of a trajectory is the path of a projectile, such as a thrown ball or rock. In a significantly simplified model, the object moves only under the influence of a uniform gravitational force field. This can be a good approximation for a rock that is thrown for short distances, for example at the surface of the Moon. In this simple approximation, the trajectory takes the shape of a parabola. Generally when determining trajectories, it may be necessary to account for nonuniform gravitational forces and air resistance (drag and aerodynamics). This is the focus of the discipline of ballistics. One of the remarkable achievements of Newtonian mechanics was the derivation of Kepler's laws of planetary motion. In the gravitational field of a point mass or a spherically-symmetrical extended mass (such as the Sun), the trajectory of a moving object is a conic section, usually an ellipse or a hyperbola. This agrees with the observed orbits of planets, comets, and artificial spacecraft to a reasonably good approximation, although if a comet passes close to the Sun, then it is also influenced by other forces such as the solar wind and radiation pressure, which modify the orbit and cause the comet to eject material into space. Newton's theory later developed into the branch of theoretical physics known as classical mechanics. It employs the mathematics of differential calculus (which was also initiated by Newton in his youth). Over the centuries, countless scientists have contributed to the development of these two disciplines. Classical mechanics became a most prominent demonstration of the power of rational thought, i.e. reason, in science as well as technology. It helps to understand and predict an enormous range of phenomena; trajectories are but one example. Consider a particle of mass , moving in a potential field . Physically speaking, mass represents inertia, and the field represents external forces of a particular kind known as "conservative". Given at every relevant position, there is a way to infer the associated force that would act at that position, say from gravity. Not all forces can be expressed in this way, however. The motion of the particle is described by the second-order differential equation On the right-hand side, the force is given in terms of , the gradient of the potential, taken at positions along the trajectory. This is the mathematical form of Newton's second law of motion: force equals mass times acceleration, for such situations. Examples Uniform gravity, neither drag nor wind The ideal case of motion of a projectile in a uniform gravitational field in the absence of other forces (such as air drag) was first investigated by Galileo Galilei. 
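Before the historical discussion of this idealized case continues, here is a minimal numerical sketch, not from the original text, of the general equation of motion described above (mass times acceleration equals minus the gradient of the potential), specialized to uniform gravity V = m g y so the mass cancels. It integrates the motion with a velocity-Verlet step and compares the landing distance with the closed-form parabolic range derived in the next part; the function name, step size, and parameter values are illustrative.

```python
import math

def simulate_range(v0=20.0, angle_deg=45.0, g=9.81, dt=1e-3):
    """Integrate the uniform-gravity equation of motion (acceleration = (0, -g),
    i.e. minus the gradient of V = m*g*y with the mass cancelling) using a
    velocity-Verlet step, and return the horizontal distance at touchdown."""
    vx = v0 * math.cos(math.radians(angle_deg))
    vy = v0 * math.sin(math.radians(angle_deg))
    x, y = 0.0, 0.0
    while True:
        x += vx * dt
        y += vy * dt - 0.5 * g * dt * dt   # position update with constant acceleration
        vy -= g * dt                        # velocity update
        if y < 0.0:                         # projectile has returned to ground level
            return x

v0, g, theta = 20.0, 9.81, math.radians(45.0)
print("numerical range  :", round(simulate_range(v0, 45.0, g), 2))
print("parabolic formula:", round(v0 * v0 * math.sin(2 * theta) / g, 2))
```

The two printed numbers agree to within the integration step, which is the sense in which the simplified parabolic trajectory is "essentially correct" in the absence of drag.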
To neglect the action of the atmosphere in shaping a trajectory would have been considered a futile hypothesis by practical-minded investigators all through the Middle Ages in Europe. Nevertheless, by anticipating the existence of the vacuum, later to be demonstrated on Earth by his collaborator Evangelista Torricelli, Galileo was able to initiate the future science of mechanics. In a near vacuum, as it turns out for instance on the Moon, his simplified parabolic trajectory proves essentially correct. In the analysis that follows, we derive the equation of motion of a projectile as measured from an inertial frame at rest with respect to the ground. Associated with the frame is a right-hand coordinate system with its origin at the point of launch of the projectile. The x-axis is tangent to the ground, and the y-axis is perpendicular to it (parallel to the gravitational field lines). Let g be the acceleration of gravity. Relative to the flat terrain, let the initial horizontal speed be v_h = v cos(θ) and the initial vertical speed be v_v = v sin(θ), where v is the initial speed and θ the angle of elevation. It will also be shown that the range is R = 2 v_h v_v / g, and the maximum altitude is v_v² / (2g). The maximum range for a given initial speed v is obtained when v_h = v_v, i.e. the initial angle is 45°. This range is v²/g, and the maximum altitude at the maximum range is v²/(4g). Derivation of the equation of motion Assume the motion of the projectile is being measured from a free fall frame which happens to be at (x,y) = (0,0) at t = 0. The equation of motion of the projectile in this frame (by the equivalence principle) would be a straight line traversed at constant velocity, (x′, y′) = (v t cos θ, v t sin θ). The co-ordinates of this free-fall frame, with respect to our inertial frame, would be (0, −g t²/2). That is, the falling frame is displaced downward by g t²/2. Now translating back to the inertial frame, the co-ordinates of the projectile become x = v t cos θ, y = v t sin θ − g t²/2. That is: y = x tan θ − g x² / (2 v² cos² θ) (where v, also written v0, is the initial velocity, θ is the angle of elevation, and g is the acceleration due to gravity). Range and height The range, R, is the greatest distance the object travels along the x-axis (in the first, or I, sector). The initial velocity, vi, is the speed at which said object is launched from the point of origin. The initial angle, θi, is the angle at which said object is released. The quantity g is the gravitational acceleration acting on the object in a vacuum (no resisting medium). The height, h, is the greatest parabolic height said object reaches within its trajectory. Angle of elevation In terms of angle of elevation θ and initial speed v: v_h = v cos θ and v_v = v sin θ, giving the range as R = 2 v² cos θ sin θ / g = v² sin(2θ) / g. This equation can be rearranged to find the angle for a required range: θ = ½ arcsin(g R / v²) (Equation II: angle of projectile launch). Note that the sine function is such that there are two solutions for θ for a given range R (the two angles are complementary, summing to 90°). The angle giving the maximum range can be found by considering the derivative of R with respect to θ and setting it to zero: dR/dθ = (2 v²/g) cos(2θ) = 0, which has a nontrivial solution at 2θ = π/2, or θ = 45°. The maximum range is then R_max = v²/g. At this angle sin(2θ) = 1, so the maximum height obtained is v²/(4g). To find the angle giving the maximum height for a given speed, calculate the derivative of the maximum height h = v² sin²θ / (2g) with respect to θ, that is dh/dθ = (v²/g) sin θ cos θ, which is zero when cos θ = 0, i.e. θ = 90°. So the maximum height, v²/(2g), is obtained when the projectile is fired straight up. Orbiting objects If instead of a uniform downwards gravitational force we consider two bodies orbiting with the mutual gravitation between them, we obtain Kepler's laws of planetary motion. The derivation of these was one of the major works of Isaac Newton and provided much of the motivation for the development of differential calculus.
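A short numerical check of the closed-form results just derived (range, the two launch angles for a given range, the optimal 45° angle, and maximum height); the function names and sample values are illustrative, and the formulas are the ones given in the text.

```python
import math

def range_of(v0, theta_deg, g=9.81):
    """Range on flat ground: R = v0^2 * sin(2*theta) / g."""
    return v0**2 * math.sin(2 * math.radians(theta_deg)) / g

def max_height(v0, theta_deg, g=9.81):
    """Apex height: h = (v0*sin(theta))^2 / (2*g)."""
    return (v0 * math.sin(math.radians(theta_deg)))**2 / (2 * g)

def launch_angles_for_range(v0, R, g=9.81):
    """The two launch angles (degrees) giving range R: theta = 0.5*arcsin(g*R/v0^2)."""
    s = g * R / v0**2           # must be <= 1, otherwise R is unreachable
    low = 0.5 * math.degrees(math.asin(s))
    return low, 90.0 - low      # the complementary angle gives the same range

v0, g = 30.0, 9.81
print(round(range_of(v0, 45.0), 2), round(v0**2 / g, 2))        # max range equals v0^2/g
print(round(max_height(v0, 90.0), 2), round(v0**2 / (2*g), 2))  # straight up: v0^2/(2g)
print(launch_angles_for_range(v0, 60.0))                         # two solutions, summing to 90
```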
Catching balls If a projectile, such as a baseball or cricket ball, travels in a parabolic path, with negligible air resistance, and if a player is positioned so as to catch it as it descends, he sees its angle of elevation increasing continuously throughout its flight. The tangent of the angle of elevation is proportional to the time since the ball was sent into the air, usually by being struck with a bat. Even when the ball is really descending, near the end of its flight, its angle of elevation seen by the player continues to increase. The player therefore sees it as if it were ascending vertically at constant speed. Finding the place from which the ball appears to rise steadily helps the player to position himself correctly to make the catch. If he is too close to the batsman who has hit the ball, it will appear to rise at an accelerating rate. If he is too far from the batsman, it will appear to slow rapidly, and then to descend. Notes See also Aft-crossing trajectory Displacement (geometry) Galilean invariance Orbit (dynamics) Orbit (group theory) Orbital trajectory Phugoid Planetary orbit Porkchop plot Projectile motion Range of a projectile Rigid body World line References External links Projectile Motion Flash Applet :) Trajectory calculator An interactive simulation on projectile motion Projectile Lab, JavaScript trajectory simulator Parabolic Projectile Motion: Shooting a Harmless Tranquilizer Dart at a Falling Monkey by Roberto Castilla-Meléndez, Roxana Ramírez-Herrera, and José Luis Gómez-Muñoz, The Wolfram Demonstrations Project. Trajectory, ScienceWorld. Java projectile-motion simulation, with first-order air resistance. Java projectile-motion simulation; targeting solutions, parabola of safety. Ballistics Mechanics
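Returning to the "catching balls" observation above: for a drag-free parabolic flight, the tangent of the elevation angle seen from the catch point grows linearly in time, as can be verified numerically. The sketch below, which is not part of the original article, places the catcher at the landing point and samples the apparent elevation during the flight; the launch parameters are illustrative.

```python
import math

v0, theta, g = 25.0, math.radians(50.0), 9.81
vx, vy = v0 * math.cos(theta), v0 * math.sin(theta)
T = 2 * vy / g                 # total time of flight
R = vx * T                     # landing point, where the catcher stands

for frac in (0.2, 0.4, 0.6, 0.8):
    t = frac * T
    x = vx * t
    y = vy * t - 0.5 * g * t * t
    tan_elev = y / (R - x)     # tangent of the elevation angle seen from the catch point
    # Analytically tan_elev = g*t / (2*vx): proportional to t, as claimed in the text.
    print(round(tan_elev, 4), round(g * t / (2 * vx), 4))
```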
Momentum transfer
In particle physics, wave mechanics, and optics, momentum transfer is the amount of momentum that one particle gives to another particle. It is also called the scattering vector as it describes the transfer of wavevector in wave mechanics. In the simplest example of scattering of two colliding particles with initial momenta p_i1 and p_i2, resulting in final momenta p_f1 and p_f2, the momentum transfer is given by q = p_i1 − p_f1 = p_f2 − p_i2, where the last identity expresses momentum conservation. Momentum transfer is an important quantity because Δx = ħ/|q| is a better measure for the typical distance resolution of the reaction than the momenta themselves. Wave mechanics and optics A wave has a momentum p = ħk and is a vectorial quantity. The difference of the momentum of the scattered wave to the incident wave is called momentum transfer. The wave number k is the absolute value of the wave vector and is related to the wavelength λ by k = 2π/λ. Momentum transfer is given in wavenumber units in reciprocal space: Q = k_f − k_i. Diffraction The momentum transfer plays an important role in the evaluation of neutron, X-ray, and electron diffraction for the investigation of condensed matter. Laue–Bragg diffraction occurs on the atomic crystal lattice, conserves the wave energy and thus is called elastic scattering, where the wave numbers of the final and incident particles, k_f and k_i, respectively, are equal and just the direction changes, by a reciprocal lattice vector G with the relation |G| = 2π/d to the lattice spacing d. As momentum is conserved, the transfer of momentum occurs to crystal momentum. The presentation in reciprocal (Q) space is generic and does not depend on the type of radiation and wavelength used but only on the sample system, which allows results obtained from many different methods to be compared. Some established communities such as powder diffraction employ the diffraction angle 2θ as the independent variable, which worked fine in the early years when only a few characteristic wavelengths such as Cu-Kα were available. The relationship to Q-space is Q = 4π sin(θ)/λ, with θ half the diffraction angle 2θ, and basically states that a larger diffraction angle corresponds to a larger Q, as illustrated numerically below. See also Atomic form factor Mandelstam variables Momentum-transfer cross section Impulse (physics) Diffraction Momentum Neutron-related techniques Synchrotron-related techniques
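As referenced above, a rough numerical illustration of the Q–θ relation in the common crystallographic convention Q = 4π sin(θ)/λ, using the Cu-Kα wavelength and the real-space correspondence d = 2π/Q; the function name and the sampled angles are illustrative choices, not from the original text.

```python
import math

CU_K_ALPHA = 1.5406  # X-ray wavelength in angstroms (Cu K-alpha 1)

def q_from_two_theta(two_theta_deg, wavelength=CU_K_ALPHA):
    """Momentum transfer magnitude Q = 4*pi*sin(theta)/lambda, in 1/angstrom,
    for elastic scattering at diffraction angle 2*theta."""
    theta = math.radians(two_theta_deg / 2.0)
    return 4.0 * math.pi * math.sin(theta) / wavelength

for two_theta in (10, 30, 60, 120):
    q = q_from_two_theta(two_theta)
    d = 2.0 * math.pi / q   # corresponding real-space length scale (lattice spacing probed)
    print(f"2theta = {two_theta:3d} deg   Q = {q:5.2f} 1/A   d = {d:5.2f} A")
```

Larger diffraction angles indeed map to larger Q and therefore to shorter real-space distances, which is why Q is the natural, radiation-independent variable for comparing diffraction data.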
Carnot cycle
A Carnot cycle is an ideal thermodynamic cycle proposed by French physicist Sadi Carnot in 1824 and expanded upon by others in the 1830s and 1840s. By Carnot's theorem, it provides an upper limit on the efficiency of any classical thermodynamic engine during the conversion of heat into work, or conversely, the efficiency of a refrigeration system in creating a temperature difference through the application of work to the system. In a Carnot cycle, a system or engine transfers energy in the form of heat between two thermal reservoirs at temperatures T_H and T_C (referred to as the hot and cold reservoirs, respectively), and a part of this transferred energy is converted to the work done by the system. The cycle is reversible, and entropy is conserved, merely transferred between the thermal reservoirs and the system without gain or loss. When work is applied to the system, heat moves from the cold to hot reservoir (heat pump or refrigeration). When heat moves from the hot to the cold reservoir, the system applies work to the environment. The work done by the system or engine to the environment per Carnot cycle depends on the temperatures of the thermal reservoirs and the entropy ΔS transferred from the hot reservoir to the system per cycle, such as W = (T_H − T_C) ΔS = (T_H − T_C) Q_H / T_H, where Q_H is the heat transferred from the hot reservoir to the system per cycle. Stages A Carnot cycle, as an idealized thermodynamic cycle performed by a Carnot heat engine, consists of the following steps: (1) isothermal expansion at T_H, during which the working gas, in contact with the hot reservoir, expands and absorbs heat Q_H; (2) isentropic (reversible adiabatic) expansion, during which the gas continues to expand without heat exchange and cools from T_H to T_C; (3) isothermal compression at T_C, during which the gas, in contact with the cold reservoir, is compressed and rejects heat |Q_C|; and (4) isentropic compression, during which the gas is compressed without heat exchange, warming from T_C back to T_H and returning to its initial state. In this case, since it is a reversible thermodynamic cycle (no net change in the system and its surroundings per cycle), the entropy received and given up must balance, Q_H/T_H + Q_C/T_C = 0, or, equivalently, Q_C/Q_H = −T_C/T_H. This is true as Q_C and T_C are both smaller in magnitude than Q_H and T_H and in fact are in the same ratio, |Q_C|/Q_H = T_C/T_H. The pressure–volume graph When a Carnot cycle is plotted on a pressure–volume diagram, the isothermal stages follow the isotherm lines for the working fluid, the adiabatic stages move between isotherms, and the area bounded by the complete cycle path represents the total work that can be done during one cycle. From point 1 to 2 and point 3 to 4 the temperature is constant (isothermal process). Heat transfer from point 4 to 1 and point 2 to 3 is equal to zero (adiabatic process). Properties and significance The temperature–entropy diagram The behavior of a Carnot engine or refrigerator is best understood by using a temperature–entropy diagram (T–S diagram), in which the thermodynamic state is specified by a point on a graph with entropy (S) as the horizontal axis and temperature (T) as the vertical axis. For a simple closed system (control mass analysis), any point on the graph represents a particular state of the system. A thermodynamic process is represented by a curve connecting an initial state (A) and a final state (B). The area under the curve is Q = ∫ from A to B of T dS, which is the amount of heat transferred in the process. If the process moves the system to greater entropy, the area under the curve is the amount of heat absorbed by the system in that process; otherwise, it is the amount of heat removed from or leaving the system. For any cyclic process, there is an upper portion of the cycle and a lower portion. In T-S diagrams for a clockwise cycle, the area under the upper portion will be the energy absorbed by the system during the cycle, while the area under the lower portion will be the energy removed from the system during the cycle.
The area inside the cycle is then the difference between the two (the absorbed net heat energy), but since the internal energy of the system must have returned to its initial value, this difference must be the amount of work done by the system per cycle. Referring to the first law of thermodynamics for a reversible process, dU = T dS − P dV, we may write the amount of work done over a cyclic process as: W = ∮ P dV = ∮ (T dS − dU). Since dU is an exact differential, its integral over any closed loop is zero and it follows that the area inside the loop on a T–S diagram, ∮ T dS, is (a) equal to the total work performed by the system on the surroundings if the loop is traversed in a clockwise direction, and (b) is equal to the total work done on the system by the surroundings as the loop is traversed in a counterclockwise direction. The Carnot cycle Evaluation of the above integral is particularly simple for a Carnot cycle. The amount of energy transferred as work is W = ∮ T dS = (T_H − T_C)(S_B − S_A). The total amount of heat transferred from the hot reservoir to the system (in the isothermal expansion) will be Q_H = T_H (S_B − S_A), and the total amount of heat transferred from the system to the cold reservoir (in the isothermal compression) will be |Q_C| = T_C (S_B − S_A), i.e. Q_C = T_C (S_A − S_B) < 0 with the convention that heat entering the system is counted as positive. Due to energy conservation, the net heat transferred, Q = Q_H + Q_C, is equal to the work performed, W = Q_H + Q_C. The efficiency is defined to be: η = W / Q_H = 1 − T_C / T_H, where W is the work done by the system (energy exiting the system as work), Q_C < 0 is the heat taken from the system (heat energy leaving the system), Q_H > 0 is the heat put into the system (heat energy entering the system), T_C is the absolute temperature of the cold reservoir, and T_H is the absolute temperature of the hot reservoir; S_B is the maximum system entropy and S_A is the minimum system entropy. The expression with the temperatures can be derived from the expressions above with the entropy: Q_H = T_H (S_B − S_A) and Q_C = T_C (S_A − S_B). Since S_A − S_B < 0, a minus sign appears in the final expression for η. This is the Carnot heat engine working efficiency definition as the fraction of the work done by the system to the thermal energy received by the system from the hot reservoir per cycle. This thermal energy is the cycle initiator. Reversed Carnot cycle The Carnot heat-engine cycle described above is a totally reversible cycle. That is, all the processes that compose it can be reversed, in which case it becomes the Carnot heat pump and refrigeration cycle. This time, the cycle remains exactly the same except that the directions of any heat and work interactions are reversed. Heat is absorbed from the low-temperature reservoir, heat is rejected to a high-temperature reservoir, and a work input is required to accomplish all this. The P–V diagram of the reversed Carnot cycle is the same as for the Carnot heat-engine cycle except that the directions of the processes are reversed. Carnot's theorem It can be seen from the above that for any cycle operating between temperatures T_H and T_C, none can exceed the efficiency of a Carnot cycle. Carnot's theorem is a formal statement of this fact: No engine operating between two heat reservoirs can be more efficient than a Carnot engine operating between those same reservoirs. Thus, the efficiency expression above gives the maximum efficiency possible for any engine using the corresponding temperatures. A corollary to Carnot's theorem states that: All reversible engines operating between the same heat reservoirs are equally efficient.
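A minimal numerical sketch of the relations above for a monatomic ideal gas working substance; the reservoir temperatures and volumes are illustrative choices, not from the original text. It computes the heat exchanged on the two isotherms, checks the entropy bookkeeping Q_H/T_H + Q_C/T_C = 0, and confirms that W/Q_H equals 1 − T_C/T_H.

```python
import math

R = 8.314            # gas constant, J/(mol K)
n = 1.0              # moles of working gas
gamma = 5.0 / 3.0    # heat-capacity ratio of a monatomic ideal gas
T_hot, T_cold = 500.0, 300.0   # reservoir temperatures in kelvin
V1, V2 = 1.0e-3, 2.0e-3        # volumes (m^3) at the start/end of the hot isotherm

# Adiabats connecting the isotherms: T * V^(gamma-1) = constant
V3 = V2 * (T_hot / T_cold) ** (1.0 / (gamma - 1.0))
V4 = V1 * (T_hot / T_cold) ** (1.0 / (gamma - 1.0))

Q_hot = n * R * T_hot * math.log(V2 / V1)    # heat absorbed on the hot isotherm (> 0)
Q_cold = n * R * T_cold * math.log(V4 / V3)  # heat exchanged on the cold isotherm (< 0)
W = Q_hot + Q_cold                           # net work per cycle (adiabats exchange no heat)

print("entropy check Q_H/T_H + Q_C/T_C =", round(Q_hot / T_hot + Q_cold / T_cold, 12))
print("efficiency W/Q_H       =", round(W / Q_hot, 6))
print("Carnot limit 1 - Tc/Th =", round(1.0 - T_cold / T_hot, 6))
```

With these numbers the entropy check returns zero and both efficiency lines print 0.4, i.e. the reversible ideal-gas cycle attains exactly the Carnot limit, as the corollary above requires.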
Rearranging the right side of the equation gives what may be a more easily understood form of the equation, namely that the theoretical maximum efficiency of a heat engine equals the difference in temperature between the hot and cold reservoir divided by the absolute temperature of the hot reservoir. Looking at this formula an interesting fact becomes apparent: Lowering the temperature of the cold reservoir will have more effect on the ceiling efficiency of a heat engine than raising the temperature of the hot reservoir by the same amount. In the real world, this may be difficult to achieve since the cold reservoir is often an existing ambient temperature. In other words, the maximum efficiency is achieved if and only if entropy does not change per cycle. An entropy change per cycle is made, for example, if there is friction leading to dissipation of work into heat. In that case, the cycle is not reversible and the Clausius theorem becomes an inequality rather than an equality. Otherwise, since entropy is a state function, the required dumping of heat into the environment to dispose of excess entropy leads to a (minimal) reduction in efficiency. So Equation gives the efficiency of any reversible heat engine. In mesoscopic heat engines, work per cycle of operation in general fluctuates due to thermal noise. If the cycle is performed quasi-statically, the fluctuations vanish even on the mesoscale. However, if the cycle is performed faster than the relaxation time of the working medium, the fluctuations of work are inevitable. Nevertheless, when work and heat fluctuations are counted, an exact equality relates the exponential average of work performed by any heat engine to the heat transfer from the hotter heat bath. Efficiency of real heat engines Carnot realized that, in reality, it is not possible to build a thermodynamically reversible engine. So, real heat engines are even less efficient than indicated by Equation . In addition, real engines that operate along the Carnot cycle style (isothermal expansion / isentropic expansion / isothermal compression / isentropic compression) are rare. Nevertheless, Equation is extremely useful for determining the maximum efficiency that could ever be expected for a given set of thermal reservoirs. Although Carnot's cycle is an idealization, Equation as the expression of the Carnot efficiency is still useful. Consider the average temperatures, at which the first integral is over a part of a cycle where heat goes into the system and the second integral is over a cycle part where heat goes out from the system. Then, replace TH and TC in Equation by 〈TH〉 and 〈TC〉, respectively, to estimate the efficiency a heat engine. For the Carnot cycle, or its equivalent, the average value 〈TH〉 will equal the highest temperature available, namely TH, and 〈TC〉 the lowest, namely TC. For other less efficient thermodynamic cycles, 〈TH〉 will be lower than TH, and 〈TC〉 will be higher than TC. This can help illustrate, for example, why a reheater or a regenerator can improve the thermal efficiency of steam power plants and why the thermal efficiency of combined-cycle power plants (which incorporate gas turbines operating at even higher temperatures) exceeds that of conventional steam plants. The first prototype of the diesel engine was based on the principles of the Carnot cycle. As a macroscopic construct The Carnot heat engine is, ultimately, a theoretical construct based on an idealized thermodynamic system. 
On a practical human-scale level the Carnot cycle has proven a valuable model, as in advancing the development of the diesel engine. However, on a macroscopic scale limitations placed by the model's assumptions prove it impractical, and, ultimately, incapable of doing any work. As such, per Carnot's theorem, the Carnot engine may be thought as the theoretical limit of macroscopic scale heat engines rather than any practical device that could ever be built. See also Carnot heat engine Reversible process (thermodynamics) References Notes Sources Carnot, Sadi, Reflections on the Motive Power of Fire Ewing, J. A. (1910) The Steam-Engine and Other Engines edition 3, page 62, via Internet Archive American Institute of Physics, 2011. . Abstract at: . Full article (24 pages ), also at . External links Hyperphysics article on the Carnot cycle. S. M. Blinder Carnot Cycle on Ideal Gas powered by Wolfram Mathematica Thermodynamic cycles
Rossby wave
Rossby waves, also known as planetary waves, are a type of inertial wave naturally occurring in rotating fluids. They were first identified by Sweden-born American meteorologist Carl-Gustaf Arvid Rossby in the Earth's atmosphere in 1939. They are observed in the atmospheres and oceans of Earth and other planets, owing to the rotation of Earth or of the planet involved. Atmospheric Rossby waves on Earth are giant meanders in high-altitude winds that have a major influence on weather. These waves are associated with pressure systems and the jet stream (especially around the polar vortices). Oceanic Rossby waves move along the thermocline: the boundary between the warm upper layer and the cold deeper part of the ocean. Rossby wave types Atmospheric waves Atmospheric Rossby waves result from the conservation of potential vorticity and are influenced by the Coriolis force and pressure gradient. The image on the left sketches fundamental principles of the wave, e.g., its restoring force and westward phase velocity. The rotation causes fluids to turn to the right as they move in the northern hemisphere and to the left in the southern hemisphere. For example, a fluid that moves from the equator toward the north pole will deviate toward the east; a fluid moving toward the equator from the north will deviate toward the west. These deviations are caused by the Coriolis force and conservation of potential vorticity which leads to changes of relative vorticity. This is analogous to conservation of angular momentum in mechanics. In planetary atmospheres, including Earth, Rossby waves are due to the variation in the Coriolis effect with latitude. One can identify a terrestrial Rossby wave as its phase velocity, marked by its wave crest, always has a westward component. However, the collected set of Rossby waves may appear to move in either direction with what is known as its group velocity. In general, shorter waves have an eastward group velocity and long waves a westward group velocity. The terms "barotropic" and "baroclinic" are used to distinguish the vertical structure of Rossby waves. Barotropic Rossby waves do not vary in the vertical, and have the fastest propagation speeds. The baroclinic wave modes, on the other hand, do vary in the vertical. They are also slower, with speeds of only a few centimeters per second or less. Most investigations of Rossby waves have been done on those in Earth's atmosphere. Rossby waves in the Earth's atmosphere are easy to observe as (usually 4–6) large-scale meanders of the jet stream. When these deviations become very pronounced, masses of cold or warm air detach, and become low-strength cyclones and anticyclones, respectively, and are responsible for day-to-day weather patterns at mid-latitudes. The action of Rossby waves partially explains why eastern continental edges in the Northern Hemisphere, such as the Northeast United States and Eastern Canada, are colder than Western Europe at the same latitudes, and why the Mediterranean is dry during summer (Rodwell–Hoskins mechanism). Poleward-propagating atmospheric waves Deep convection (heat transfer) to the troposphere is enhanced over very warm sea surfaces in the tropics, such as during El Niño events. This tropical forcing generates atmospheric Rossby waves that have a poleward and eastward migration. Poleward-propagating Rossby waves explain many of the observed statistical connections between low- and high-latitude climates. One such phenomenon is sudden stratospheric warming. 
Poleward-propagating Rossby waves are an important and unambiguous part of the variability in the Northern Hemisphere, as expressed in the Pacific North America pattern. Similar mechanisms apply in the Southern Hemisphere and partly explain the strong variability in the Amundsen Sea region of Antarctica. In 2011, a Nature Geoscience study using general circulation models linked Pacific Rossby waves generated by increasing central tropical Pacific temperatures to warming of the Amundsen Sea region, leading to winter and spring continental warming of Ellsworth Land and Marie Byrd Land in West Antarctica via an increase in advection. Rossby waves on other planets Atmospheric Rossby waves, like Kelvin waves, can occur on any rotating planet with an atmosphere. The Y-shaped cloud feature on Venus is attributed to Kelvin and Rossby waves. Oceanic waves Oceanic Rossby waves are large-scale waves within an ocean basin. They have a low amplitude, in the order of centimetres (at the surface) to metres (at the thermocline), compared with atmospheric Rossby waves which are in the order of hundreds of kilometres. They may take months to cross an ocean basin. They gain momentum from wind stress at the ocean surface layer and are thought to communicate climatic changes due to variability in forcing, due to both the wind and buoyancy. Off-equatorial Rossby waves are believed to propagate through eastward-propagating Kelvin waves that upwell against Eastern Boundary Currents, while equatorial Kelvin waves are believed to derive some of their energy from the reflection of Rossby waves against Western Boundary Currents. Both barotropic and baroclinic waves cause variations of the sea surface height, although the length of the waves made them difficult to detect until the advent of satellite altimetry. Satellite observations have confirmed the existence of oceanic Rossby waves. Baroclinic waves also generate significant displacements of the oceanic thermocline, often of tens of meters. Satellite observations have revealed the stately progression of Rossby waves across all the ocean basins, particularly at low- and mid-latitudes. Due to the beta effect, transit times of Rossby waves increase with latitude. In a basin like the Pacific, waves travelling at the equator may take months, while closer to the poles transit may take decades. Rossby waves have been suggested as an important mechanism to account for the heating of the ocean on Europa, a moon of Jupiter. Waves in astrophysical discs Rossby wave instabilities are also thought to be found in astrophysical discs, for example, around newly forming stars. Amplification of Rossby waves It has been proposed that a number of regional weather extremes in the Northern Hemisphere associated with blocked atmospheric circulation patterns may have been caused by quasiresonant amplification of Rossby waves. Examples include the 2013 European floods, the 2012 China floods, the 2010 Russian heat wave, the 2010 Pakistan floods and the 2003 European heat wave. Even taking global warming into account, the 2003 heat wave would have been highly unlikely without such a mechanism. Normally freely travelling synoptic-scale Rossby waves and quasistationary planetary-scale Rossby waves exist in the mid-latitudes with only weak interactions. The hypothesis, proposed by Vladimir Petoukhov, Stefan Rahmstorf, Stefan Petri, and Hans Joachim Schellnhuber, is that under some circumstances these waves interact to produce the static pattern. 
For this to happen, they suggest, the zonal (east-west) wave number of both types of wave should be in the range 6–8, the synoptic waves should be arrested within the troposphere (so that energy does not escape to the stratosphere) and mid-latitude waveguides should trap the quasistationary components of the synoptic waves. In this case the planetary-scale waves may respond unusually strongly to orography and thermal sources and sinks because of "quasiresonance". A 2017 study by Mann, Rahmstorf, et al. connected the phenomenon of anthropogenic Arctic amplification to planetary wave resonance and extreme weather events. Mathematical definitions Free barotropic Rossby waves under a zonal flow with linearized vorticity equation To start with, a zonal mean flow, U, can be considered to be perturbed, where U is constant in time and space. Let (u, v) be the total horizontal wind field, where u and v are the components of the wind in the x- and y-directions, respectively. The total wind field can be written as a mean flow with a small superimposed perturbation: u = U + u′ and v = v′. The perturbation is assumed to be much smaller than the mean zonal flow. The relative vorticity ζ and the perturbations u′ and v′ can be written in terms of the stream function ψ (assuming non-divergent flow, for which the stream function completely describes the flow): u′ = −∂ψ/∂y, v′ = ∂ψ/∂x, ζ = ∇²ψ. Considering a parcel of air that has no relative vorticity before perturbation (uniform U has no vorticity) but with planetary vorticity f as a function of the latitude, perturbation will lead to a slight change of latitude, so the perturbed relative vorticity must change in order to conserve potential vorticity. Also the above approximation U ≫ u′ ensures that the perturbation flow does not advect relative vorticity. The linearized vorticity equation is then (∂/∂t + U ∂/∂x) ζ + β v′ = 0, with β = ∂f/∂y. Plug in the definition of stream function to obtain: (∂/∂t + U ∂/∂x) ∇²ψ + β ∂ψ/∂x = 0. Using the method of undetermined coefficients one can consider a traveling wave solution with zonal and meridional wavenumbers k and ℓ, respectively, and frequency ω: ψ = ψ₀ cos(kx + ℓy − ωt). This yields the dispersion relation: ω = Uk − βk/(k² + ℓ²). The zonal (x-direction) phase speed and group velocity of the Rossby wave are then given by c = ω/k = U − β/(k² + ℓ²) and cg = ∂ω/∂k = U + β(k² − ℓ²)/(k² + ℓ²)², where c is the phase speed, cg is the group speed, U is the mean westerly flow, β is the Rossby parameter, k is the zonal wavenumber, and ℓ is the meridional wavenumber. It is noted that the zonal phase speed of Rossby waves is always westward (traveling east to west) relative to mean flow U, but the zonal group speed of Rossby waves can be eastward or westward depending on wavenumber. Rossby parameter The Rossby parameter is defined as the rate of change of the Coriolis frequency f = 2ω sin φ along the meridional direction: β = ∂f/∂y = (1/a) d(2ω sin φ)/dφ = 2ω cos φ / a, where φ is the latitude, ω is here the angular speed of the Earth's rotation, and a is the mean radius of the Earth. If β = 0, there will be no Rossby waves; Rossby waves owe their origin to the gradient of the tangential speed of the planetary rotation (planetary vorticity). A "cylinder" planet has no Rossby waves. It also means that at the equator of any rotating, sphere-like planet, including Earth, one will still have Rossby waves, despite the fact that f = 0 there, because β = 2ω/a is nonzero (indeed maximal) at the equator. These are known as Equatorial Rossby waves.
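To put numbers on the dispersion relation above, the sketch below evaluates the Rossby parameter at 45° latitude and the zonal phase and group speeds for two barotropic waves in a 10 m/s westerly flow; the chosen wavelengths and the flow speed are illustrative, while the formulas are the ones derived above.

```python
import math

OMEGA = 7.2921e-5      # Earth's rotation rate, rad/s
A_EARTH = 6.371e6      # mean Earth radius, m

def beta(lat_deg):
    """Rossby parameter: beta = 2*Omega*cos(lat)/a, in 1/(m s)."""
    return 2.0 * OMEGA * math.cos(math.radians(lat_deg)) / A_EARTH

def phase_and_group_speed(U, wavelength_x, wavelength_y, lat_deg=45.0):
    """Zonal phase speed c = U - beta/(k^2 + l^2) and group speed
    cg = U + beta*(k^2 - l^2)/(k^2 + l^2)^2 for a barotropic Rossby wave."""
    k = 2.0 * math.pi / wavelength_x
    l = 2.0 * math.pi / wavelength_y
    b = beta(lat_deg)
    K2 = k * k + l * l
    c = U - b / K2
    cg = U + b * (k * k - l * l) / (K2 * K2)
    return c, cg

print("beta(45 deg) =", beta(45.0))          # about 1.6e-11 per metre per second
# A long planetary wave (10,000 km zonal and meridional wavelength) in a 10 m/s flow:
print(phase_and_group_speed(10.0, 1.0e7, 1.0e7))   # phase speed is negative: westward
# A shorter wave (3,000 km zonal wavelength): phase speed near U, group speed exceeds U.
print(phase_and_group_speed(10.0, 3.0e6, 6.0e6))
```

The long wave propagates westward relative to the ground even in a westerly flow, while the shorter wave's group speed is eastward relative to the mean flow, matching the qualitative statements earlier in the article.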
See also Atmospheric wave Equatorial wave Equatorial Rossby wave – mathematical treatment Harmonic Kelvin wave Polar vortex Rossby whistle References Bibliography External links Description of Rossby Waves from the American Meteorological Society An introduction to oceanic Rossby waves and their study with satellite data Rossby waves and extreme weather (Video) Physical oceanography Atmospheric dynamics Fluid mechanics Waves