Absement
In kinematics, absement (or absition) is a measure of sustained displacement of an object from its initial position, i.e. a measure of how far away and for how long. The word absement is a portmanteau of the words absence and displacement. Similarly, its synonym absition is a portmanteau of the words absence and position. Absement changes as long as an object remains displaced and stays constant while the object resides at the initial position. It is the first time-integral of the displacement (i.e. absement is the area under a displacement vs. time graph), so the displacement is the rate of change (first time-derivative) of the absement. The dimension of absement is length multiplied by time. Its SI unit is the meter second (m·s), which corresponds to an object having been displaced by 1 meter for 1 second. This is not to be confused with the meter per second (m/s), a unit of velocity, the time-derivative of position. For example, opening the gate of a gate valve (of rectangular cross section) by 1 mm for 10 seconds yields the same absement of 10 mm·s as opening it by 5 mm for 2 seconds. The amount of water having flowed through the valve is linearly proportional to the absement of the gate, so it is also the same in both cases.
Occurrence in nature
Whenever the rate of change of a quantity is proportional to the displacement of an object, that quantity is a linear function of the object's absement. For example, when the fuel flow rate is proportional to the position of the throttle lever, the total amount of fuel consumed is proportional to the lever's absement. The first published paper on the topic of absement introduced and motivated it as a way to study flow-based musical instruments, such as the hydraulophone, and to model empirical observations of some hydraulophones in which obstruction of a water jet for a longer period of time resulted in a buildup in sound level, as water accumulates in a sounding mechanism (reservoir), up to a certain maximum filling point beyond which the sound level reached a maximum or fell off (along with a slow decay when the water jet was unblocked). Absement has also been used to model artificial muscles, as well as real muscle interaction in a physical fitness context, and to model human posture. As the displacement can be seen as a mechanical analogue of electric charge, the absement can be seen as a mechanical analogue of the time-integrated charge, a quantity useful for modelling some types of memory elements.
Applications
In addition to modeling fluid flow and Lagrangian modeling of electric circuits, absement is used in physical fitness and kinesiology to model muscle bandwidth, and as a new form of physical fitness training. In this context, it gives rise to a new quantity called actergy, which is to energy as energy is to power. Actergy has the same units as action (joule-seconds) but is the time-integral of total energy (the time-integral of the Hamiltonian rather than of the Lagrangian). Just as displacement and its derivatives form kinematics, displacement and its integrals form "integral kinematics".
Fluid flow in a throttle: Relation to PID controllers
PID controllers work on a signal that is proportional to a physical quantity (e.g. displacement, proportional to position) together with its integral(s) and derivative(s), thus defining PID in terms of the integrals and derivatives of the position of a control element, in the Bratland sense. In Bratland's (2014) example of such a controller, P is position, I is absement, and D is velocity.
Strain absement
Strain absement is the time-integral of strain. It is used extensively in the modeling of mechanical systems and memsprings, where it has been described as "a quantity called absement which allows mem-spring models to display hysteretic response in great abundance".
Anglement
Absement originally arose in situations involving valves and fluid flow, in which the opening of a valve was controlled by a long, T-shaped handle that actually varied in angle rather than in position. The time-integral of angle is called "anglement"; it is approximately equal, or proportional, to absement for small angles, because the sine of an angle is approximately equal to the angle itself for small angles.
Phase space: Absement and momentement
In regard to a conjugate variable for absement, the time-integral of momentum, known as momentement, has been proposed. This is consistent with Jeltsema's 2012 treatment with charge and flux as the base units rather than current and voltage.
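Because absement is just the running time-integral of displacement, it is straightforward to compute numerically from sampled position data. The sketch below, in Python with NumPy (an illustrative example, not drawn from the sources above), reproduces the gate-valve figures quoted earlier: a 1 mm opening held for 10 s and a 5 mm opening held for 2 s both integrate to an absement of 10 mm·s.

```python
import numpy as np

def absement(t, x):
    """Absement: the time-integral of displacement x(t) from the initial
    position, evaluated with the trapezoidal rule.
    t : sample times in seconds, x : displacements in meters.
    Returns metre-seconds (m*s)."""
    return float(np.sum(0.5 * (x[1:] + x[:-1]) * np.diff(t)))

# Gate-valve example from the text: both openings give 0.010 m*s = 10 mm*s.
t1 = np.linspace(0.0, 10.0, 1001)
x1 = np.full_like(t1, 0.001)      # gate open by 1 mm for 10 s
t2 = np.linspace(0.0, 2.0, 1001)
x2 = np.full_like(t2, 0.005)      # gate open by 5 mm for 2 s

print(absement(t1, x1), absement(t2, x2))   # both ~0.010
```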
History of entropy
The concept of entropy developed in response to the observation that a certain amount of functional energy released from combustion reactions is always lost to dissipation or friction and is thus not transformed into useful work. Early heat-powered engines such as Thomas Savery's (1698), the Newcomen engine (1712) and the Cugnot steam tricycle (1769) were inefficient, converting less than two percent of the input energy into useful work output; a great deal of useful energy was dissipated or lost. Over the next two centuries, physicists investigated this puzzle of lost energy; the result was the concept of entropy. In the early 1850s, Rudolf Clausius set forth the concept of the thermodynamic system and posited the argument that in any irreversible process a small amount of heat energy δQ is incrementally dissipated across the system boundary. Clausius continued to develop his ideas of lost energy, and coined the term entropy. Since the mid-20th century the concept of entropy has found application in the field of information theory, describing an analogous loss of data in information transmission systems. Classical thermodynamic views In 1803, mathematician Lazare Carnot published a work entitled Fundamental Principles of Equilibrium and Movement. This work includes a discussion on the efficiency of fundamental machines, i.e. pulleys and inclined planes. Carnot saw through all the details of the mechanisms to develop a general discussion on the conservation of mechanical energy. Over the next three decades, Carnot's theorem was taken as a statement that in any machine the accelerations and shocks of the moving parts all represent losses of moment of activity, i.e. the useful work done. From this Carnot drew the inference that perpetual motion was impossible. This loss of moment of activity was the first-ever rudimentary statement of the second law of thermodynamics and the concept of 'transformation-energy' or entropy, i.e. energy lost to dissipation and friction. Carnot died in exile in 1823. During the following year his son Sadi Carnot, having graduated from the École Polytechnique training school for engineers, but now living on half-pay with his brother Hippolyte in a small apartment in Paris, wrote Reflections on the Motive Power of Fire. In this book, Sadi visualized an ideal engine in which any heat (i.e., caloric) converted into work, could be reinstated by reversing the motion of the cycle, a concept subsequently known as thermodynamic reversibility. Building on his father's work, Sadi postulated the concept that "some caloric is always lost" in the conversion into work, even in his idealized reversible heat engine, which excluded frictional losses and other losses due to the imperfections of any real machine. He also discovered that this idealized efficiency was dependent only on the temperatures of the heat reservoirs between which the engine was working, and not on the types of working fluids. Any real heat engine could not realize the Carnot cycle's reversibility, and was condemned to be even less efficient. This loss of usable caloric was a precursory form of the increase in entropy as we now know it. Though formulated in terms of caloric, rather than entropy, this was an early insight into the second law of thermodynamics. 1854 definition In his 1854 memoir, Clausius first develops the concepts of interior work, i.e. that "which the atoms of the body exert upon each other", and exterior work, i.e. 
that "which arise from foreign influences [to] which the body may be exposed", which may act on a working body of fluid or gas, typically functioning to work a piston. He then discusses the three categories into which heat Q may be divided: heat employed in increasing the heat actually existing in the body; heat employed in producing the interior work; and heat employed in producing the exterior work. Building on this logic, and following a mathematical presentation of the first fundamental theorem, Clausius then presented the first-ever mathematical formulation of entropy, although at this point in the development of his theories he called it "equivalence-value", perhaps referring to the concept of the mechanical equivalent of heat which was developing at the time rather than entropy, a term which was to come into use later. He stated that the second fundamental theorem in the mechanical theory of heat may thus be enunciated: if two transformations which, without necessitating any other permanent change, can mutually replace one another, be called equivalent, then the generation of the quantity of heat Q from work at the temperature T has the equivalence-value Q/T, and the passage of the quantity of heat Q from the temperature T1 to the temperature T2 has the equivalence-value Q(1/T2 − 1/T1), wherein T is a function of the temperature, independent of the nature of the process by which the transformation is effected. In modern terminology, that is, the terminology introduced by Clausius himself in 1865, we think of this equivalence-value as "entropy", symbolized by S. Thus, using the above description, we can calculate the entropy change ΔS for the passage of the quantity of heat Q from the temperature T1, through the "working body" of fluid (which was typically a body of steam), to the temperature T2. If we make the assignment S = Q/T, then the entropy change or "equivalence-value" for this transformation is the difference between the final and initial values, ΔS = Q/T2 − Q/T1, and by factoring out Q we obtain the form derived by Clausius: ΔS = Q(1/T2 − 1/T1).
1856 definition
In 1856, Clausius stated what he called the "second fundamental theorem in the mechanical theory of heat" in a mathematical form built around a quantity N, the "equivalence-value" of all uncompensated transformations involved in a cyclical process. This equivalence-value was a precursory formulation of entropy.
1862 definition
In 1862, Clausius stated what he calls the "theorem respecting the equivalence-values of the transformations", now known as the second law of thermodynamics, and gave a quantitative mathematical expression for it. This was an early formulation of the second law and one of the original forms of the concept of entropy.
1865 definition
In 1865, Clausius gave irreversible heat loss, or what he had previously been calling "equivalence-value", a name: entropy. Clausius did not specify why he chose the symbol "S" to represent entropy, and it is almost certainly untrue that Clausius chose "S" in honor of Sadi Carnot; the given names of scientists are rarely if ever used this way.
Later developments
In 1876, physicist J. Willard Gibbs, building on the work of Clausius, Hermann von Helmholtz and others, proposed that the measurement of "available energy" ΔG in a thermodynamic system could be mathematically accounted for by subtracting the "energy loss" TΔS from the total energy change of the system, ΔH. These concepts were further developed by James Clerk Maxwell (1871) and Max Planck (1903).
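Clausius's 1854 equivalence-value lends itself to a one-line calculation. The following Python sketch (illustrative numbers only) evaluates ΔS = Q(1/T2 − 1/T1) for heat falling from a hot reservoir to a cold one, and shows that the value comes out positive, as the second law requires.

```python
def equivalence_value(Q, T1, T2):
    """Clausius's 1854 "equivalence-value" (entropy change) for heat Q
    passing from temperature T1 to temperature T2 through a working body.
    Q in joules, T1 and T2 in kelvin; result in J/K."""
    return Q * (1.0 / T2 - 1.0 / T1)

# 1000 J of heat passing from a 500 K reservoir to a 300 K reservoir:
dS = equivalence_value(1000.0, 500.0, 300.0)
print(round(dS, 3), "J/K")   # about 1.333 J/K, and positive
```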
Statistical thermodynamic views
In 1877, Ludwig Boltzmann developed a statistical mechanical evaluation of the entropy S of a body in its own given macrostate of internal thermodynamic equilibrium. It may be written as S = k ln W, where k denotes the Boltzmann constant and W denotes the number of microstates consistent with the given equilibrium macrostate. Boltzmann himself did not actually write this formula expressed with the named constant k, which is due to Planck's reading of Boltzmann. Boltzmann saw entropy as a measure of statistical "mixedupness" or disorder. This concept was soon refined by J. Willard Gibbs, and is now regarded as one of the cornerstones of the theory of statistical mechanics. Erwin Schrödinger made use of Boltzmann's work in his book What is Life? to explain why living systems have far fewer replication errors than would be predicted from statistical thermodynamics. Schrödinger used the Boltzmann equation in a different form, entropy = k log D, to show the increase of entropy, where D is the number of possible energy states in the system that can be randomly filled with energy. He postulated a local decrease of entropy for living systems when (1/D) represents the number of states that are prevented from randomly distributing, such as occurs in replication of the genetic code. Without this correction Schrödinger claimed that statistical thermodynamics would predict one thousand mutations per million replications, and ten mutations per hundred replications, following the rule for the square root of n, far more mutations than actually occur. Schrödinger's separation of random and non-random energy states is one of the few explanations for why entropy could be low in the past, but continually increasing now. It has been proposed as an explanation of localized decreases of entropy in radiant energy focusing in parabolic reflectors and during dark current in diodes, which would otherwise violate statistical thermodynamics.
Information theory
An analog to thermodynamic entropy is information entropy. In 1948, while working at Bell Telephone Laboratories, electrical engineer Claude Shannon set out to mathematically quantify the statistical nature of "lost information" in phone-line signals. To do this, Shannon developed the very general concept of information entropy, a fundamental cornerstone of information theory. Although the story varies, initially it seems that Shannon was not particularly aware of the close similarity between his new quantity and earlier work in thermodynamics. In 1939, however, when Shannon had been working on his equations for some time, he happened to visit the mathematician John von Neumann. During their discussions about what Shannon should call the "measure of uncertainty" or attenuation in phone-line signals with reference to his new information theory, von Neumann is said to have suggested the name "entropy"; accounts of the exchange differ, but in the best-known version von Neumann pointed out both that Shannon's function already existed in statistical mechanics under that name and that, since nobody really knows what entropy is, the name would give Shannon the advantage in any debate. In 1948 Shannon published his seminal paper A Mathematical Theory of Communication, in which he devoted a section to what he calls Choice, Uncertainty, and Entropy. In this section, Shannon introduces an H function of the form H = −K Σ pi log pi, where K is a positive constant and the sum runs over a set of probabilities pi. Shannon then states that "any quantity of this form, where K merely amounts to a choice of a unit of measurement, plays a central role in information theory as measures of information, choice, and uncertainty."
Then, as an example of how this expression applies in a number of different fields, he references R.C. Tolman's 1938 Principles of Statistical Mechanics, stating that "the form of H will be recognized as that of entropy as defined in certain formulations of statistical mechanics where pi is the probability of a system being in cell i of its phase space ... H is then, for example, the H in Boltzmann's famous H theorem." As such, over the last fifty years, ever since this statement was made, people have been overlapping the two concepts or even stating that they are exactly the same. Shannon's information entropy is a much more general concept than statistical thermodynamic entropy. Information entropy is present whenever there are unknown quantities that can be described only by a probability distribution. In a series of papers by E. T. Jaynes starting in 1957, the statistical thermodynamic entropy can be seen as just a particular application of Shannon's information entropy to the probabilities of particular microstates of a system occurring in order to produce a particular macrostate. Popular use The term entropy is often used in popular language to denote a variety of unrelated phenomena. One example is the concept of corporate entropy as put forward somewhat humorously by authors Tom DeMarco and Timothy Lister in their 1987 classic publication Peopleware, a book on growing and managing productive teams and successful software projects. Here, they view energy waste as red tape and business team inefficiency as a form of entropy, i.e. energy lost to waste. This concept has caught on and is now common jargon in business schools. In another example, entropy is the central theme in Isaac Asimov's short story The Last Question (first copyrighted in 1956). The story plays with the idea that the most important question is how to stop the increase of entropy. Terminology overlap When necessary, to disambiguate between the statistical thermodynamic concept of entropy, and entropy-like formulae put forward by different researchers, the statistical thermodynamic entropy is most properly referred to as the Gibbs entropy. The terms Boltzmann–Gibbs entropy or BG entropy, and Boltzmann–Gibbs–Shannon entropy or BGS entropy are also seen in the literature. See also Entropy Enthalpy History of thermodynamics Thermodynamic free energy References Thermodynamic entropy History of thermodynamics
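The relationship sketched above between Shannon's H and the statistical-mechanical entropies can be illustrated in a few lines of Python (an illustrative sketch, not drawn from the cited works): for W equally likely microstates, H with K set to the Boltzmann constant reduces to Boltzmann's S = k ln W.

```python
import math

def shannon_H(probs, K=1.0):
    """Shannon's H = -K * sum_i p_i * log(p_i), with 0*log(0) taken as 0.
    K sets the unit: K=1 with natural logs gives nats; divide by ln(2) for bits."""
    return -K * sum(p * math.log(p) for p in probs if p > 0.0)

# A fair coin is maximally uncertain; a biased coin less so (values in bits):
print(shannon_H([0.5, 0.5]) / math.log(2))   # 1.0
print(shannon_H([0.9, 0.1]) / math.log(2))   # ~0.47

# With K = Boltzmann's constant and W equally likely microstates,
# H reduces to the Boltzmann entropy S = k * ln(W):
k = 1.380649e-23   # J/K
W = 10
print(shannon_H([1.0 / W] * W, K=k), k * math.log(W))   # the two agree
```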
Flux
Flux describes any effect that appears to pass or travel (whether it actually moves or not) through a surface or substance. Flux is a concept in applied mathematics and vector calculus which has many applications to physics. For transport phenomena, flux is a vector quantity, describing the magnitude and direction of the flow of a substance or property. In vector calculus flux is a scalar quantity, defined as the surface integral of the perpendicular component of a vector field over a surface. Terminology The word flux comes from Latin: fluxus means "flow", and fluere is "to flow". As fluxion, this term was introduced into differential calculus by Isaac Newton. The concept of heat flux was a key contribution of Joseph Fourier, in the analysis of heat transfer phenomena. His seminal treatise Théorie analytique de la chaleur (The Analytical Theory of Heat), defines fluxion as a central quantity and proceeds to derive the now well-known expressions of flux in terms of temperature differences across a slab, and then more generally in terms of temperature gradients or differentials of temperature, across other geometries. One could argue, based on the work of James Clerk Maxwell, that the transport definition precedes the definition of flux used in electromagnetism. The specific quote from Maxwell is: According to the transport definition, flux may be a single vector, or it may be a vector field / function of position. In the latter case flux can readily be integrated over a surface. By contrast, according to the electromagnetism definition, flux is the integral over a surface; it makes no sense to integrate a second-definition flux for one would be integrating over a surface twice. Thus, Maxwell's quote only makes sense if "flux" is being used according to the transport definition (and furthermore is a vector field rather than single vector). This is ironic because Maxwell was one of the major developers of what we now call "electric flux" and "magnetic flux" according to the electromagnetism definition. Their names in accordance with the quote (and transport definition) would be "surface integral of electric flux" and "surface integral of magnetic flux", in which case "electric flux" would instead be defined as "electric field" and "magnetic flux" defined as "magnetic field". This implies that Maxwell conceived of these fields as flows/fluxes of some sort. Given a flux according to the electromagnetism definition, the corresponding flux density, if that term is used, refers to its derivative along the surface that was integrated. By the Fundamental theorem of calculus, the corresponding flux density is a flux according to the transport definition. Given a current such as electric current—charge per time, current density would also be a flux according to the transport definition—charge per time per area. Due to the conflicting definitions of flux, and the interchangeability of flux, flow, and current in nontechnical English, all of the terms used in this paragraph are sometimes used interchangeably and ambiguously. Concrete fluxes in the rest of this article will be used in accordance to their broad acceptance in the literature, regardless of which definition of flux the term corresponds to. Flux as flow rate per unit area In transport phenomena (heat transfer, mass transfer and fluid dynamics), flux is defined as the rate of flow of a property per unit area, which has the dimensions [quantity]·[time]−1·[area]−1. The area is of the surface the property is flowing "through" or "across". 
For example, the amount of water that flows through a cross section of a river each second divided by the area of that cross section, or the amount of sunlight energy that lands on a patch of ground each second divided by the area of the patch, are kinds of flux. General mathematical definition (transport) Here are 3 definitions in increasing order of complexity. Each is a special case of the following. In all cases the frequent symbol j, (or J) is used for flux, q for the physical quantity that flows, t for time, and A for area. These identifiers will be written in bold when and only when they are vectors. First, flux as a (single) scalar: where In this case the surface in which flux is being measured is fixed and has area A. The surface is assumed to be flat, and the flow is assumed to be everywhere constant with respect to position and perpendicular to the surface. Second, flux as a scalar field defined along a surface, i.e. a function of points on the surface: As before, the surface is assumed to be flat, and the flow is assumed to be everywhere perpendicular to it. However the flow need not be constant. q is now a function of p, a point on the surface, and A, an area. Rather than measure the total flow through the surface, q measures the flow through the disk with area A centered at p along the surface. Finally, flux as a vector field: In this case, there is no fixed surface we are measuring over. q is a function of a point, an area, and a direction (given by a unit vector ), and measures the flow through the disk of area A perpendicular to that unit vector. I is defined picking the unit vector that maximizes the flow around the point, because the true flow is maximized across the disk that is perpendicular to it. The unit vector thus uniquely maximizes the function when it points in the "true direction" of the flow. (Strictly speaking, this is an abuse of notation because the "argmax" cannot directly compare vectors; we take the vector with the biggest norm instead.) Properties These direct definitions, especially the last, are rather unwieldy. For example, the argmax construction is artificial from the perspective of empirical measurements, when with a weathervane or similar one can easily deduce the direction of flux at a point. Rather than defining the vector flux directly, it is often more intuitive to state some properties about it. Furthermore, from these properties the flux can uniquely be determined anyway. If the flux j passes through the area at an angle θ to the area normal , then the dot product That is, the component of flux passing through the surface (i.e. normal to it) is jcosθ, while the component of flux passing tangential to the area is jsinθ, but there is no flux actually passing through the area in the tangential direction. The only component of flux passing normal to the area is the cosine component. For vector flux, the surface integral of j over a surface S, gives the proper flowing per unit of time through the surface: where A (and its infinitesimal) is the vector area combination of the magnitude of the area A through which the property passes and a unit vector normal to the area. Unlike in the second set of equations, the surface here need not be flat. 
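For a flat surface in a uniform flow, the "component of flux passing through the surface" described above reduces to a dot product: the flow rate through the surface is j·n̂ A = jA cos θ. A minimal Python/NumPy sketch with made-up numbers:

```python
import numpy as np

# Flow rate of a uniform flux density j through a flat surface of area A
# whose unit normal n makes an angle theta with j: only the normal
# component, j*cos(theta), passes through the surface.
j = np.array([0.0, 0.0, 3.0])     # flux density: 3 units per m^2 per s along +z
A = 2.0                           # surface area in m^2
theta = np.deg2rad(60.0)          # tilt of the surface normal away from +z
n = np.array([np.sin(theta), 0.0, np.cos(theta)])   # unit normal vector

flow_rate = np.dot(j, n) * A      # quantity per second crossing the surface
print(flow_rate)                  # 3 * 2 * cos(60 deg) = 3.0
```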
Finally, we can integrate again over the time duration t1 to t2, getting the total amount of the property flowing through the surface in that time (t2 − t1): Transport fluxes Eight of the most common forms of flux from the transport phenomena literature are defined as follows: Momentum flux, the rate of transfer of momentum across a unit area (N·s·m−2·s−1). (Newton's law of viscosity) Heat flux, the rate of heat flow across a unit area (J·m−2·s−1). (Fourier's law of conduction) (This definition of heat flux fits Maxwell's original definition.) Diffusion flux, the rate of movement of molecules across a unit area (mol·m−2·s−1). (Fick's law of diffusion) Volumetric flux, the rate of volume flow across a unit area (m3·m−2·s−1). (Darcy's law of groundwater flow) Mass flux, the rate of mass flow across a unit area (kg·m−2·s−1). (Either an alternate form of Fick's law that includes the molecular mass, or an alternate form of Darcy's law that includes the density.) Radiative flux, the amount of energy transferred in the form of photons at a certain distance from the source per unit area per second (J·m−2·s−1). Used in astronomy to determine the magnitude and spectral class of a star. Also acts as a generalization of heat flux, which is equal to the radiative flux when restricted to the electromagnetic spectrum. Energy flux, the rate of transfer of energy through a unit area (J·m−2·s−1). The radiative flux and heat flux are specific cases of energy flux. Particle flux, the rate of transfer of particles through a unit area ([number of particles] m−2·s−1) These fluxes are vectors at each point in space, and have a definite magnitude and direction. Also, one can take the divergence of any of these fluxes to determine the accumulation rate of the quantity in a control volume around a given point in space. For incompressible flow, the divergence of the volume flux is zero. Chemical diffusion As mentioned above, chemical molar flux of a component A in an isothermal, isobaric system is defined in Fick's law of diffusion as: where the nabla symbol ∇ denotes the gradient operator, DAB is the diffusion coefficient (m2·s−1) of component A diffusing through component B, cA is the concentration (mol/m3) of component A. This flux has units of mol·m−2·s−1, and fits Maxwell's original definition of flux. For dilute gases, kinetic molecular theory relates the diffusion coefficient D to the particle density n = N/V, the molecular mass m, the collision cross section , and the absolute temperature T by where the second factor is the mean free path and the square root (with the Boltzmann constant k) is the mean velocity of the particles. In turbulent flows, the transport by eddy motion can be expressed as a grossly increased diffusion coefficient. Quantum mechanics In quantum mechanics, particles of mass m in the quantum state ψ(r, t) have a probability density defined as So the probability of finding a particle in a differential volume element d3r is Then the number of particles passing perpendicularly through unit area of a cross-section per unit time is the probability flux; This is sometimes referred to as the probability current or current density, or probability flux density. Flux as a surface integral General mathematical definition (surface integral) As a mathematical concept, flux is represented by the surface integral of a vector field, where F is a vector field, and dA is the vector area of the surface A, directed as the surface normal. 
For the second, n is the outward pointed unit normal vector to the surface. The surface has to be orientable, i.e. two sides can be distinguished: the surface does not fold back onto itself. Also, the surface has to be actually oriented, i.e. we use a convention as to flowing which way is counted positive; flowing backward is then counted negative. The surface normal is usually directed by the right-hand rule. Conversely, one can consider the flux the more fundamental quantity and call the vector field the flux density. Often a vector field is drawn by curves (field lines) following the "flow"; the magnitude of the vector field is then the line density, and the flux through a surface is the number of lines. Lines originate from areas of positive divergence (sources) and end at areas of negative divergence (sinks). See also the image at right: the number of red arrows passing through a unit area is the flux density, the curve encircling the red arrows denotes the boundary of the surface, and the orientation of the arrows with respect to the surface denotes the sign of the inner product of the vector field with the surface normals. If the surface encloses a 3D region, usually the surface is oriented such that the influx is counted positive; the opposite is the outflux. The divergence theorem states that the net outflux through a closed surface, in other words the net outflux from a 3D region, is found by adding the local net outflow from each point in the region (which is expressed by the divergence). If the surface is not closed, it has an oriented curve as boundary. Stokes' theorem states that the flux of the curl of a vector field is the line integral of the vector field over this boundary. This path integral is also called circulation, especially in fluid dynamics. Thus the curl is the circulation density. We can apply the flux and these theorems to many disciplines in which we see currents, forces, etc., applied through areas. Electromagnetism Electric flux An electric "charge," such as a single proton in space, has a magnitude defined in coulombs. Such a charge has an electric field surrounding it. In pictorial form, the electric field from a positive point charge can be visualized as a dot radiating electric field lines (sometimes also called "lines of force"). Conceptually, electric flux can be thought of as "the number of field lines" passing through a given area. Mathematically, electric flux is the integral of the normal component of the electric field over a given area. Hence, units of electric flux are, in the MKS system, newtons per coulomb times meters squared, or N m2/C. (Electric flux density is the electric flux per unit area, and is a measure of strength of the normal component of the electric field averaged over the area of integration. Its units are N/C, the same as the electric field in MKS units.) Two forms of electric flux are used, one for the E-field: and one for the D-field (called the electric displacement): This quantity arises in Gauss's law – which states that the flux of the electric field E out of a closed surface is proportional to the electric charge QA enclosed in the surface (independent of how that charge is distributed), the integral form is: where ε0 is the permittivity of free space. 
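Gauss's law as stated above can be checked numerically for a point charge: the flux of E through any sphere centred on the charge equals q/ε0, independent of the radius. A small Python sketch with illustrative values:

```python
import math

# Flux of the electric field of a point charge q through concentric spheres.
# Gauss's law says the flux is q / eps0 regardless of the sphere's radius.
eps0 = 8.8541878128e-12      # permittivity of free space, F/m
q = 1.0e-9                   # a 1 nC point charge

for r in (0.1, 1.0, 10.0):                        # sphere radii in metres
    E = q / (4.0 * math.pi * eps0 * r**2)         # radial field magnitude at r
    flux = E * 4.0 * math.pi * r**2               # E is normal to the sphere
    print(f"r = {r:5.1f} m   flux = {flux:.6e}   q/eps0 = {q / eps0:.6e}")
```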
If one considers the flux of the electric field vector, E, for a tube near a point charge in the field of the charge but not containing it with sides formed by lines tangent to the field, the flux for the sides is zero and there is an equal and opposite flux at both ends of the tube. This is a consequence of Gauss's Law applied to an inverse square field. The flux for any cross-sectional surface of the tube will be the same. The total flux for any surface surrounding a charge q is q/ε0. In free space the electric displacement is given by the constitutive relation D = ε0 E, so for any bounding surface the D-field flux equals the charge QA within it. Here the expression "flux of" indicates a mathematical operation and, as can be seen, the result is not necessarily a "flow", since nothing actually flows along electric field lines. Magnetic flux The magnetic flux density (magnetic field) having the unit Wb/m2 (Tesla) is denoted by B, and magnetic flux is defined analogously: with the same notation above. The quantity arises in Faraday's law of induction, where the magnetic flux is time-dependent either because the boundary is time-dependent or magnetic field is time-dependent. In integral form: where d is an infinitesimal vector line element of the closed curve , with magnitude equal to the length of the infinitesimal line element, and direction given by the tangent to the curve , with the sign determined by the integration direction. The time-rate of change of the magnetic flux through a loop of wire is minus the electromotive force created in that wire. The direction is such that if current is allowed to pass through the wire, the electromotive force will cause a current which "opposes" the change in magnetic field by itself producing a magnetic field opposite to the change. This is the basis for inductors and many electric generators. Poynting flux Using this definition, the flux of the Poynting vector S over a specified surface is the rate at which electromagnetic energy flows through that surface, defined like before: The flux of the Poynting vector through a surface is the electromagnetic power, or energy per unit time, passing through that surface. This is commonly used in analysis of electromagnetic radiation, but has application to other electromagnetic systems as well. Confusingly, the Poynting vector is sometimes called the power flux, which is an example of the first usage of flux, above. It has units of watts per square metre (W/m2). SI radiometry units See also AB magnitude Explosively pumped flux compression generator Eddy covariance flux (aka, eddy correlation, eddy flux) Fast Flux Test Facility Fluence (flux of the first sort for particle beams) Fluid dynamics Flux footprint Flux pinning Flux quantization Gauss's law Inverse-square law Jansky (non SI unit of spectral flux density) Latent heat flux Luminous flux Magnetic flux Magnetic flux quantum Neutron flux Poynting flux Poynting theorem Radiant flux Rapid single flux quantum Sound energy flux Volumetric flux (flux of the first sort for fluids) Volumetric flow rate (flux of the second sort for fluids) Notes Further reading External links Physical quantities Vector calculus Rates
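As a companion to the magnetic flux definition above, the sketch below (Python with NumPy, illustrative values only) evaluates Faraday's law for a flat loop in a spatially uniform, sinusoidally varying field perpendicular to the loop: Φ(t) = B(t)A, and the induced electromotive force is −dΦ/dt.

```python
import numpy as np

A = 0.01                                   # loop area, m^2
t = np.linspace(0.0, 0.1, 10001)           # time, s
B = 0.5 * np.sin(2.0 * np.pi * 50.0 * t)   # a 50 Hz, 0.5 T peak field (illustrative)

Phi = B * A                                # magnetic flux through the loop, Wb
emf = -np.gradient(Phi, t)                 # induced EMF, volts
print(round(emf.max(), 3))                 # peak ~ 2*pi*50 * 0.5 * 0.01 ~ 1.571 V
```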
Dark energy
In physical cosmology and astronomy, dark energy is a proposed form of energy that affects the universe on the largest scales. Its primary effect is to drive the accelerating expansion of the universe. Assuming that the lambda-CDM model of cosmology is correct, dark energy dominates the universe, contributing 68% of the total energy in the present-day observable universe, while dark matter and ordinary (baryonic) matter contribute 26% and 5%, respectively, and other components such as neutrinos and photons are nearly negligible. Dark energy's density is very low, much less than the density of ordinary matter or dark matter within galaxies. However, it dominates the universe's mass–energy content because it is uniform across space. The first observational evidence for dark energy's existence came from measurements of supernovae. Type Ia supernovae have constant luminosity, which means that they can be used as accurate distance measures. Comparing this distance to the redshift (which measures the speed at which the supernova is receding) shows that the universe's expansion is accelerating. Prior to this observation, scientists thought that the gravitational attraction of matter and energy in the universe would cause the universe's expansion to slow over time. Since the discovery of accelerating expansion, several independent lines of evidence have been discovered that support the existence of dark energy. The exact nature of dark energy remains a mystery, and possible explanations abound. The main candidates are a cosmological constant (representing a constant energy density filling space homogeneously) and scalar fields (dynamic quantities having energy densities that vary in time and space) such as quintessence or moduli. A cosmological constant would remain constant across time and space, while scalar fields can vary. Yet other possibilities are interacting dark energy, an observational effect, and cosmological coupling, all discussed in the sections below.
History of discovery and previous speculation
Einstein's cosmological constant
The "cosmological constant" is a constant term that can be added to the Einstein field equations of general relativity. If considered as a "source term" in the field equation, it can be viewed as equivalent to the mass of empty space (which conceptually could be either positive or negative), or "vacuum energy". The cosmological constant was first proposed by Einstein as a mechanism to obtain a solution to the gravitational field equation that would lead to a static universe, effectively using dark energy to balance gravity. Einstein gave the cosmological constant the symbol Λ (capital lambda). Einstein stated that the cosmological constant required that 'empty space takes the role of gravitating negative masses which are distributed all over the interstellar space'. The mechanism was an example of fine-tuning, and it was later realized that Einstein's static universe would not be stable: local inhomogeneities would ultimately lead to either the runaway expansion or contraction of the universe. The equilibrium is unstable: if the universe expands slightly, then the expansion releases vacuum energy, which causes yet more expansion. Likewise, a universe which contracts slightly will continue contracting. According to Einstein, "empty space" can possess its own energy. Because this energy is a property of space itself, it would not be diluted as space expands.
As more space comes into existence, more of this energy-of-space would appear, thereby causing accelerated expansion. These sorts of disturbances are inevitable, due to the uneven distribution of matter throughout the universe. Further, observations made by Edwin Hubble in 1929 showed that the universe appears to be expanding and is not static. Einstein reportedly referred to his failure to predict the idea of a dynamic universe, in contrast to a static universe, as his greatest blunder. Inflationary dark energy Alan Guth and Alexei Starobinsky proposed in 1980 that a negative pressure field, similar in concept to dark energy, could drive cosmic inflation in the very early universe. Inflation postulates that some repulsive force, qualitatively similar to dark energy, resulted in an enormous and exponential expansion of the universe slightly after the Big Bang. Such expansion is an essential feature of most current models of the Big Bang. However, inflation must have occurred at a much higher (negative) energy density than the dark energy we observe today, and inflation is thought to have completely ended when the universe was just a fraction of a second old. It is unclear what relation, if any, exists between dark energy and inflation. Even after inflationary models became accepted, the cosmological constant was thought to be irrelevant to the current universe. Nearly all inflation models predict that the total (matter+energy) density of the universe should be very close to the critical density. During the 1980s, most cosmological research focused on models with critical density in matter only, usually 95% cold dark matter (CDM) and 5% ordinary matter (baryons). These models were found to be successful at forming realistic galaxies and clusters, but some problems appeared in the late 1980s: in particular, the model required a value for the Hubble constant lower than preferred by observations, and the model under-predicted observations of large-scale galaxy clustering. These difficulties became stronger after the discovery of anisotropy in the cosmic microwave background by the COBE spacecraft in 1992, and several modified CDM models came under active study through the mid-1990s: these included the Lambda-CDM model and a mixed cold/hot dark matter model. The first direct evidence for dark energy came from supernova observations in 1998 of accelerated expansion in Riess et al. and in Perlmutter et al., and the Lambda-CDM model then became the leading model. Soon after, dark energy was supported by independent observations: in 2000, the BOOMERanG and Maxima cosmic microwave background experiments observed the first acoustic peak in the cosmic microwave background, showing that the total (matter+energy) density is close to 100% of critical density. Then in 2001, the 2dF Galaxy Redshift Survey gave strong evidence that the matter density is around 30% of critical. The large difference between these two supports a smooth component of dark energy making up the difference. Much more precise measurements from WMAP in 2003–2010 have continued to support the standard model and give more accurate measurements of the key parameters. The term "dark energy", echoing Fritz Zwicky's "dark matter" from the 1930s, was coined by Michael S. Turner in 1998. Change in expansion over time High-precision measurements of the expansion of the universe are required to understand how the expansion rate changes over time and space. 
In general relativity, the evolution of the expansion rate is estimated from the curvature of the universe and the cosmological equation of state (the relationship between temperature, pressure, and combined matter, energy, and vacuum energy density for any region of space). Measuring the equation of state for dark energy is one of the biggest efforts in observational cosmology today. Adding the cosmological constant to cosmology's standard FLRW metric leads to the Lambda-CDM model, which has been referred to as the "standard model of cosmology" because of its precise agreement with observations. As of 2013, the Lambda-CDM model is consistent with a series of increasingly rigorous cosmological observations, including the Planck spacecraft and the Supernova Legacy Survey. First results from the SNLS reveal that the average behavior (i.e., equation of state) of dark energy behaves like Einstein's cosmological constant to a precision of 10%. Recent results from the Hubble Space Telescope Higher-Z Team indicate that dark energy has been present for at least 9 billion years and during the period preceding cosmic acceleration. Nature The nature of dark energy is more hypothetical than that of dark matter, and many things about it remain in the realm of speculation. Dark energy is thought to be very homogeneous and not dense, and is not known to interact through any of the fundamental forces other than gravity. Since it is rarefied and un-massive—roughly 10−27 kg/m3—it is unlikely to be detectable in laboratory experiments. The reason dark energy can have such a profound effect on the universe, making up 68% of universal density in spite of being so dilute, is that it is believed to uniformly fill otherwise empty space. The vacuum energy, that is, the particle-antiparticle pairs generated and mutually annihilated within a time frame in accord with Heisenberg's uncertainty principle in the energy-time formulation, has been often invoked as the main contribution to dark energy. The mass–energy equivalence postulated by general relativity implies that the vacuum energy should exert a gravitational force. Hence, the vacuum energy is expected to contribute to the cosmological constant, which in turn impinges on the accelerated expansion of the universe. However, the cosmological constant problem asserts that there is a huge disagreement between the observed values of vacuum energy density and the theoretical large value of zero-point energy obtained by quantum field theory; the problem remains unresolved. Independently of its actual nature, dark energy would need to have a strong negative pressure to explain the observed acceleration of the expansion of the universe. According to general relativity, the pressure within a substance contributes to its gravitational attraction for other objects just as its mass density does. This happens because the physical quantity that causes matter to generate gravitational effects is the stress–energy tensor, which contains both the energy (or matter) density of a substance and its pressure. In the Friedmann–Lemaître–Robertson–Walker metric, it can be shown that a strong constant negative pressure (i.e., tension) in all the universe causes an acceleration in the expansion if the universe is already expanding, or a deceleration in contraction if the universe is already contracting. This accelerating expansion effect is sometimes labeled "gravitational repulsion". 
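The claim that a sufficiently negative pressure produces accelerated expansion can be checked directly against the acceleration equation of Friedmann–Lemaître–Robertson–Walker cosmology. The Python sketch below (illustrative values; the density is only roughly of the order of the critical density) evaluates ä/a = −(4πG/3)(ρ + 3p/c²) for a few equations of state w = p/(ρc²), showing that the sign flips once w drops below −1/3.

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s

def accel_over_a(rho, w):
    """a_ddot / a from the FLRW acceleration equation,
    -(4*pi*G/3) * (rho + 3*p/c^2), with pressure p = w * rho * c^2.
    A positive value means the expansion accelerates."""
    p = w * rho * c**2
    return -(4.0 * math.pi * G / 3.0) * (rho + 3.0 * p / c**2)

rho = 8.5e-27      # kg/m^3, of the order of the critical density (illustrative)
for w in (0.0, -1.0 / 3.0, -1.0):
    print(f"w = {w:+.2f}  ->  a_ddot/a = {accel_over_a(rho, w):+.2e} s^-2")
# w = 0 (matter) decelerates, w = -1/3 is the dividing line,
# and w = -1 (a cosmological constant) accelerates.
```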
Technical definition In standard cosmology, there are three components of the universe: matter, radiation, and dark energy. This matter is anything whose energy density scales with the inverse cube of the scale factor, i.e., , while radiation is anything whose energy density scales to the inverse fourth power of the scale factor. This can be understood intuitively: for an ordinary particle in a cube-shaped box, doubling the length of an edge of the box decreases the density (and hence energy density) by a factor of eight (23). For radiation, the decrease in energy density is greater, because an increase in spatial distance also causes a redshift. The final component is dark energy: it is an intrinsic property of space and has a constant energy density, regardless of the dimensions of the volume under consideration. Thus, unlike ordinary matter, it is not diluted by the expansion of space. Evidence of existence The evidence for dark energy is indirect but comes from three independent sources: Distance measurements and their relation to redshift, which suggest the universe has expanded more in the latter half of its life. The theoretical need for a type of additional energy that is not matter or dark matter to form the observationally flat universe (absence of any detectable global curvature). Measurements of large-scale wave patterns of mass density in the universe. Supernovae In 1998, the High-Z Supernova Search Team published observations of Type Ia ("one-A") supernovae. In 1999, the Supernova Cosmology Project followed by suggesting that the expansion of the universe is accelerating. The 2011 Nobel Prize in Physics was awarded to Saul Perlmutter, Brian P. Schmidt, and Adam G. Riess for their leadership in the discovery. Since then, these observations have been corroborated by several independent sources. Measurements of the cosmic microwave background, gravitational lensing, and the large-scale structure of the cosmos, as well as improved measurements of supernovae, have been consistent with the Lambda-CDM model. Some people argue that the only indications for the existence of dark energy are observations of distance measurements and their associated redshifts. Cosmic microwave background anisotropies and baryon acoustic oscillations serve only to demonstrate that distances to a given redshift are larger than would be expected from a "dusty" Friedmann–Lemaître universe and the local measured Hubble constant. Supernovae are useful for cosmology because they are excellent standard candles across cosmological distances. They allow researchers to measure the expansion history of the universe by looking at the relationship between the distance to an object and its redshift, which gives how fast it is receding from us. The relationship is roughly linear, according to Hubble's law. It is relatively easy to measure redshift, but finding the distance to an object is more difficult. Usually, astronomers use standard candles: objects for which the intrinsic brightness, or absolute magnitude, is known. This allows the object's distance to be measured from its actual observed brightness, or apparent magnitude. Type Ia supernovae are the best-known standard candles across cosmological distances because of their extreme and consistent luminosity. Recent observations of supernovae are consistent with a universe made up 71.3% of dark energy and 27.4% of a combination of dark matter and baryonic matter. 
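The scaling rules in the technical definition above (matter diluting as the inverse cube of the scale factor, radiation as the inverse fourth power, dark energy staying constant) can be tabulated with a short Python sketch. The present-day density fractions used here are approximate Lambda-CDM values assumed for illustration.

```python
# Energy densities relative to today's critical density, as a function of the
# scale factor a (a = 1 today). Present-day fractions are illustrative.
Omega_m, Omega_r, Omega_L = 0.31, 9e-5, 0.69   # matter, radiation, dark energy

def densities(a):
    rho_m = Omega_m * a**-3   # matter dilutes with volume
    rho_r = Omega_r * a**-4   # radiation dilutes faster because it also redshifts
    rho_L = Omega_L           # dark energy (cosmological constant) is undiluted
    return rho_m, rho_r, rho_L

for a in (0.1, 0.5, 0.77, 1.0):
    m, r, L = densities(a)
    print(f"a = {a:4.2f}   matter = {m:8.3f}   radiation = {r:8.5f}   dark energy = {L:4.2f}")
# Matter and dark energy densities are equal near a = (Omega_m/Omega_L)**(1/3) ~ 0.77;
# at earlier times matter dominates, at later times dark energy does.
```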
Large-scale structure
The theory of large-scale structure, which governs the formation of structures in the universe (stars, quasars, galaxies and galaxy groups and clusters), also suggests that the density of matter in the universe is only 30% of the critical density. A 2011 survey, the WiggleZ galaxy survey of more than 200,000 galaxies, provided further evidence towards the existence of dark energy, although the exact physics behind it remains unknown. The WiggleZ survey from the Australian Astronomical Observatory scanned the galaxies to determine their redshift. Then, by exploiting the fact that baryon acoustic oscillations have left regularly spaced voids of roughly 150 Mpc diameter, surrounded by the galaxies, the voids were used as standard rulers to estimate distances to galaxies as far as 2,000 Mpc (redshift 0.6), allowing for an accurate estimate of the speeds of galaxies from their redshift and distance. The data confirmed cosmic acceleration up to half of the age of the universe (7 billion years) and constrained its inhomogeneity to 1 part in 10. This provides a confirmation of cosmic acceleration independent of supernovae.
Cosmic microwave background
The existence of dark energy, in whatever form, is needed to reconcile the measured geometry of space with the total amount of matter in the universe. Measurements of cosmic microwave background anisotropies indicate that the universe is close to flat. For the shape of the universe to be flat, the mass–energy density of the universe must be equal to the critical density. The total amount of matter in the universe (including baryons and dark matter), as measured from the cosmic microwave background spectrum, accounts for only about 30% of the critical density. This implies the existence of an additional form of energy to account for the remaining 70%. The Wilkinson Microwave Anisotropy Probe (WMAP) spacecraft seven-year analysis estimated a universe made up of 72.8% dark energy, 22.7% dark matter, and 4.5% ordinary matter. Work done in 2013 based on the Planck spacecraft observations of the cosmic microwave background gave a more accurate estimate of 68.3% dark energy, 26.8% dark matter, and 4.9% ordinary matter.
Late-time integrated Sachs–Wolfe effect
Accelerated cosmic expansion causes gravitational potential wells and hills to flatten as photons pass through them, producing cold spots and hot spots on the cosmic microwave background aligned with vast supervoids and superclusters. This so-called late-time Integrated Sachs–Wolfe effect (ISW) is a direct signal of dark energy in a flat universe. It was reported at high significance in 2008 by Ho et al. and Giannantonio et al.
Observational Hubble constant data
A newer approach to testing for dark energy uses observational Hubble constant data (OHD), also known as cosmic chronometers, and has gained significant attention in recent years. The Hubble constant, H(z), is measured as a function of cosmological redshift. OHD directly tracks the expansion history of the universe by taking passively evolving early-type galaxies as "cosmic chronometers", which thereby provide standard clocks in the universe. The core of this idea is the measurement of the differential age evolution of these cosmic chronometers as a function of redshift. It thus provides a direct estimate of the Hubble parameter, H(z) = −1/(1 + z) dz/dt. The reliance on a differential quantity brings more information and is appealing for computation: it can minimize many common issues and systematic effects.
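A minimal numerical sketch of the cosmic-chronometer relation H(z) = −1/(1 + z) dz/dt is given below in Python. The redshifts and mean galaxy ages are invented for illustration; real analyses use measured ages of passively evolving galaxies and a careful treatment of systematics.

```python
# Cosmic-chronometer estimate of the Hubble parameter from the differential
# ages of passively evolving galaxies in two nearby redshift bins.
# All numbers below are illustrative, not measured values.
z1, z2 = 0.40, 0.45
t1, t2 = 9.50e9, 9.12e9          # mean stellar ages at z1 and z2, in years

z_mid = 0.5 * (z1 + z2)
dz_dt = (z2 - z1) / (t2 - t1)    # per year (negative: age decreases as z increases)
H = -dz_dt / (1.0 + z_mid)       # Hubble parameter in yr^-1

km_per_Mpc = 3.0857e19
s_per_yr = 3.1557e7
print(f"H(z = {z_mid:.3f}) ~ {H * km_per_Mpc / s_per_yr:.0f} km/s/Mpc")   # ~90
```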
Analyses of supernovae and baryon acoustic oscillations (BAO) are based on integrals of the Hubble parameter, whereas OHD measures it directly. For these reasons, this method has been widely used to examine the accelerated cosmic expansion and study properties of dark energy.
Theories of dark energy
Dark energy's status as a hypothetical force with unknown properties makes it an active target of research. The problem is attacked from a variety of angles, such as modifying the prevailing theory of gravity (general relativity), attempting to pin down the properties of dark energy, and finding alternative ways to explain the observational data.
Cosmological constant
The simplest explanation for dark energy is that it is an intrinsic, fundamental energy of space. This is the cosmological constant, usually represented by the Greek letter Λ (Lambda, hence the name Lambda-CDM model). Since energy and mass are related according to the equation E = mc², Einstein's theory of general relativity predicts that this energy will have a gravitational effect. It is sometimes called vacuum energy because it is the energy density of empty space – of vacuum. A major outstanding problem is that quantum field theories predict a huge cosmological constant, about 120 orders of magnitude too large. This would need to be almost, but not exactly, cancelled by an equally large term of the opposite sign. Some supersymmetric theories require a cosmological constant that is exactly zero. Also, it is unknown whether there is a metastable vacuum state in string theory with a positive cosmological constant, and it has been conjectured by Ulf Danielsson et al. that no such state exists. This conjecture would not rule out other models of dark energy, such as quintessence, that could be compatible with string theory.
Quintessence
In quintessence models of dark energy, the observed acceleration of the scale factor is caused by the potential energy of a dynamical field, referred to as the quintessence field. Quintessence differs from the cosmological constant in that it can vary in space and time. In order for it not to clump and form structure like matter, the field must be very light so that it has a large Compton wavelength. In the simplest scenarios, the quintessence field has a canonical kinetic term, is minimally coupled to gravity, and does not feature higher-order operators in its Lagrangian. No evidence of quintessence is yet available, nor has it been ruled out. It generally predicts a slightly slower acceleration of the expansion of the universe than the cosmological constant. Some scientists think that the best evidence for quintessence would come from violations of Einstein's equivalence principle and variation of the fundamental constants in space or time. Scalar fields are predicted by the Standard Model of particle physics and string theory, but an analogous problem to the cosmological constant problem (or the problem of constructing models of cosmological inflation) occurs: renormalization theory predicts that scalar fields should acquire large masses. The coincidence problem asks why the acceleration of the Universe began when it did. If acceleration began earlier in the universe, structures such as galaxies would never have had time to form, and life, at least as we know it, would never have had a chance to exist. Proponents of the anthropic principle view this as support for their arguments. However, many models of quintessence have a so-called "tracker" behavior, which solves this problem.
In these models, the quintessence field has a density which closely tracks (but is less than) the radiation density until matter–radiation equality, which triggers quintessence to start behaving as dark energy, eventually dominating the universe. This naturally sets the low energy scale of the dark energy. In 2004, when scientists fit the evolution of dark energy with the cosmological data, they found that the equation of state had possibly crossed the cosmological constant boundary (w = −1) from above to below. A no-go theorem has been proved that this scenario requires models with at least two types of quintessence. This scenario is the so-called Quintom scenario. Some special cases of quintessence are phantom energy, in which the energy density of quintessence actually increases with time, and k-essence (short for kinetic quintessence) which has a non-standard form of kinetic energy such as a negative kinetic energy. They can have unusual properties: phantom energy, for example, can cause a Big Rip. A group of researchers argued in 2021 that observations of the Hubble tension may imply that only quintessence models with a nonzero coupling constant are viable. Interacting dark energy This class of theories attempts to come up with an all-encompassing theory of both dark matter and dark energy as a single phenomenon that modifies the laws of gravity at various scales. This could, for example, treat dark energy and dark matter as different facets of the same unknown substance, or postulate that cold dark matter decays into dark energy. Another class of theories that unifies dark matter and dark energy are suggested to be covariant theories of modified gravities. These theories alter the dynamics of spacetime such that the modified dynamics stems to what have been assigned to the presence of dark energy and dark matter. Dark energy could in principle interact not only with the rest of the dark sector, but also with ordinary matter. However, cosmology alone is not sufficient to effectively constrain the strength of the coupling between dark energy and baryons, so that other indirect techniques or laboratory searches have to be adopted. It was briefly theorized in the early 2020s that excess observed in the XENON1T detector in Italy may have been caused by a chameleon model of dark energy, but further experiments disproved this possibility. Variable dark energy models The density of dark energy might have varied in time during the history of the universe. Modern observational data allows us to estimate the present density of dark energy. Using baryon acoustic oscillations, it is possible to investigate the effect of dark energy in the history of the universe, and constrain parameters of the equation of state of dark energy. To that end, several models have been proposed. One of the most popular models is the Chevallier–Polarski–Linder model (CPL). Some other common models are Barboza & Alcaniz (2008), Jassal et al. (2005), Wetterich. (2004), and Oztas et al. (2018). Possibly decreasing levels Researchers using the Dark Energy Spectroscopic Instrument (DESI) to make the largest 3-D map of the universe as of 2024, have obtained an expansion history that has greater than 1% precision. From this level of detail, DESI Director Michael Levi stated:We're also seeing some potentially interesting differences that could indicate that dark energy is evolving over time. Those may or may not go away with more data, so we're excited to start analyzing our three-year dataset soon. 
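For variable dark energy models, the Chevallier–Polarski–Linder (CPL) parametrization mentioned above writes the equation of state as w(a) = w0 + wa(1 − a). Under that assumption the dark energy density evolves as ρ(a)/ρ0 = a^(−3(1 + w0 + wa)) exp(−3 wa (1 − a)), which the Python sketch below evaluates for a cosmological constant and for one arbitrarily chosen evolving model (the parameter values are illustrative, not fits to data).

```python
import math

def w_cpl(a, w0, wa):
    """CPL equation of state w(a) = w0 + wa*(1 - a), with a = 1 today."""
    return w0 + wa * (1.0 - a)

def rho_ratio(a, w0, wa):
    """Dark energy density relative to today for the CPL parametrization."""
    return a ** (-3.0 * (1.0 + w0 + wa)) * math.exp(-3.0 * wa * (1.0 - a))

for a in (0.5, 0.8, 1.0):
    lam = rho_ratio(a, w0=-1.0, wa=0.0)     # cosmological constant: always 1
    ev = rho_ratio(a, w0=-0.9, wa=-0.3)     # an illustrative evolving model
    print(f"a = {a:3.1f}   Lambda: {lam:.3f}   evolving: {ev:.3f}")
# A cosmological constant keeps a fixed density; an evolving w(a) does not,
# which is the kind of departure surveys such as DESI look for.
```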
Observational skepticism Some alternatives to dark energy, such as inhomogeneous cosmology, aim to explain the observational data by a more refined use of established theories. In this scenario, dark energy does not actually exist, and is merely a measurement artifact. For example, if we are located in an emptier-than-average region of space, the observed cosmic expansion rate could be mistaken for a variation in time, or acceleration. A different approach uses a cosmological extension of the equivalence principle to show how space might appear to be expanding more rapidly in the voids surrounding our local cluster. While weak, such effects considered cumulatively over billions of years could become significant, creating the illusion of cosmic acceleration, and making it appear as if we live in a Hubble bubble. Yet other possibilities are that the accelerated expansion of the universe is an illusion caused by our motion relative to the rest of the universe, or that the statistical methods employed were flawed. A laboratory direct detection attempt failed to detect any force associated with dark energy. Observational skepticism explanations of dark energy have generally not gained much traction among cosmologists. For example, a paper that suggested that the anisotropy of the local Universe has been misrepresented as dark energy was quickly countered by another paper claiming errors in the original paper. Another study questioning the essential assumption that the luminosity of Type Ia supernovae does not vary with stellar population age was also swiftly rebutted by other cosmologists. As a general relativistic effect due to black holes This theory was formulated by researchers of the University of Hawaiʻi at Mānoa in February 2023. The idea is that if one requires the Kerr metric (which describes rotating black holes) to asymptote to the Friedmann–Robertson–Walker metric (which describes the isotropic and homogeneous universe that is the basic assumption of modern cosmology), then one finds that black holes gain mass as the universe expands. The rate is measured to be proportional to a³, where a is the scale factor. This particular rate means that the energy density of black holes remains constant over time, mimicking dark energy (see the technical definition of dark energy). The theory is called "cosmological coupling" because the black holes' masses are coupled to the expansion of the universe. Other astrophysicists are skeptical, with a variety of papers claiming that the theory fails to explain other observations. Other mechanisms driving acceleration Modified gravity The evidence for dark energy is heavily dependent on the theory of general relativity. Therefore, it is conceivable that a modification to general relativity also eliminates the need for dark energy. There are many such theories, and research is ongoing. The measurement of the speed of gravity in the first gravitational-wave event also observed by non-gravitational means (GW170817) ruled out many modified gravity theories as explanations of dark energy. Astrophysicist Ethan Siegel states that, while such alternatives gain mainstream press coverage, almost all professional astrophysicists are confident that dark energy exists and that none of the competing theories successfully explain observations to the same level of precision as standard dark energy. Non-linearities of the equations of general relativity The GRSI model explains the accelerating expansion of the universe as a suppression of gravity at large distances.
Such suppression is a consequence of an increased binding energy within a galaxy due to General Relativity's field self-interaction. The increased binding requires, by energy conservation, a suppression of gravitational attraction outside said galaxy. The suppression is in lieu of dark energy. This is analogous to the central phenomenology of Strong Nuclear Force where the gluons field self-interaction dramatically strengthens the binding of quarks, ultimately leading to their confinement. This in turn suppresses the Strong Nuclear Force outside hadrons. Implications for the fate of the universe Cosmologists estimate that the acceleration began roughly 5 billion years ago. Before that, it is thought that the expansion was decelerating, due to the attractive influence of matter. The density of dark matter in an expanding universe decreases more quickly than dark energy, and eventually the dark energy dominates. Specifically, when the volume of the universe doubles, the density of dark matter is halved, but the density of dark energy is nearly unchanged (it is exactly constant in the case of a cosmological constant). Projections into the future can differ radically for different models of dark energy. For a cosmological constant, or any other model that predicts that the acceleration will continue indefinitely, the ultimate result will be that galaxies outside the Local Group will have a line-of-sight velocity that continually increases with time, eventually far exceeding the speed of light. This is not a violation of special relativity because the notion of "velocity" used here is different from that of velocity in a local inertial frame of reference, which is still constrained to be less than the speed of light for any massive object (see Uses of the proper distance for a discussion of the subtleties of defining any notion of relative velocity in cosmology). Because the Hubble parameter is decreasing with time, there can actually be cases where a galaxy that is receding from us faster than light does manage to emit a signal which reaches us eventually. However, because of the accelerating expansion, it is projected that most galaxies will eventually cross a type of cosmological event horizon where any light they emit past that point will never be able to reach us at any time in the infinite future because the light never reaches a point where its "peculiar velocity" toward us exceeds the expansion velocity away from us (these two notions of velocity are also discussed in Uses of the proper distance). Assuming the dark energy is constant (a cosmological constant), the current distance to this cosmological event horizon is about 16 billion light years, meaning that a signal from an event happening at present would eventually be able to reach us in the future if the event were less than 16 billion light years away, but the signal would never reach us if the event were more than 16 billion light years away. As galaxies approach the point of crossing this cosmological event horizon, the light from them will become more and more redshifted, to the point where the wavelength becomes too large to detect in practice and the galaxies appear to vanish completely (see Future of an expanding universe). Planet Earth, the Milky Way, and the Local Group of galaxies of which the Milky Way is a part, would all remain virtually undisturbed as the rest of the universe recedes and disappears from view. 
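As a rough numerical illustration of the scaling argument above, the sketch below compares how matter and a cosmological constant dilute with the scale factor. The present-day density fractions (0.3 for matter, 0.7 for dark energy) are assumed round values, so the quoted redshifts are only indicative.

# Illustrative only: matter density scales as a**-3, while a cosmological
# constant stays fixed. Densities are in units of today's critical density.
OMEGA_M, OMEGA_L = 0.3, 0.7   # assumed present-day fractions

# Scale factor at which the matter and dark energy densities are equal
a_equal = (OMEGA_M / OMEGA_L) ** (1 / 3)

# For w = -1, acceleration begins earlier, once rho_matter drops below 2 * rho_L
a_accel = (OMEGA_M / (2 * OMEGA_L)) ** (1 / 3)

print(f"densities equal at a ~ {a_equal:.2f} (redshift z ~ {1/a_equal - 1:.2f})")
print(f"acceleration begins at a ~ {a_accel:.2f} (redshift z ~ {1/a_accel - 1:.2f})")

With these numbers the crossover falls at redshifts of roughly 0.3–0.7, several billion years ago, consistent with the rough five-billion-year figure quoted above.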
In this scenario, the Local Group would ultimately suffer heat death, just as was hypothesized for the flat, matter-dominated universe before measurements of cosmic acceleration. There are other, more speculative ideas about the future of the universe. The phantom energy model of dark energy results in divergent expansion, which would imply that the effective force of dark energy continues growing until it dominates all other forces in the universe. Under this scenario, dark energy would ultimately tear apart all gravitationally bound structures, including galaxies and solar systems, and eventually overcome the electrical and nuclear forces to tear apart atoms themselves, ending the universe in a "Big Rip". On the other hand, dark energy might dissipate with time or even become attractive. Such uncertainties leave open the possibility of gravity eventually prevailing and lead to a universe that contracts in on itself in a "Big Crunch", or that there may even be a dark energy cycle, which implies a cyclic model of the universe in which every iteration (Big Bang then eventually a Big Crunch) takes about a trillion (1012) years. While none of these are supported by observations, they are not ruled out. In philosophy of science The astrophysicist David Merritt identifies dark energy as an example of an "auxiliary hypothesis", an ad hoc postulate that is added to a theory in response to observations that falsify it. He argues that the dark energy hypothesis is a conventionalist hypothesis, that is, a hypothesis that adds no empirical content and hence is unfalsifiable in the sense defined by Karl Popper. However, his opinion is not accepted by a majority of physicists. See also Conformal gravity Dark Energy Spectroscopic Instrument Dark matter De Sitter invariant special relativity Illustris project Inhomogeneous cosmology Joint Dark Energy Mission Negative mass Quintessence: The Search for Missing Mass in the Universe Dark Energy Survey Quantum vacuum state Notes References External links Euclid ESA Satellite, a mission to map the geometry of the dark universe "Surveying the dark side" by Roberto Trotta and Richard Bower, Astron.Geophys. 1998 neologisms Concepts in astronomy Dark concepts in astrophysics Energy (physics) Physical cosmological concepts Unsolved problems in astronomy Unsolved problems in physics
Hitting the wall
In endurance sports such as road cycling and long-distance running, hitting the wall or the bonk is a condition of sudden fatigue and loss of energy which is caused by the depletion of glycogen stores in the liver and muscles. Milder instances can be remedied by brief rest and the ingestion of food or drinks containing carbohydrates. Otherwise, it can be remedied by attaining second wind by either resting for approximately 10 minutes or by slowing down considerably and increasing speed slowly over a period of 10 minutes. Ten minutes is approximately the time that it takes for free fatty acids to sufficiently produce ATP in response to increased demand. During a marathon, for instance, runners typically hit the wall around kilometer 30 (mile 20). The condition can usually be avoided by ensuring that glycogen levels are high when the exercise begins, maintaining glucose levels during exercise by eating or drinking carbohydrate-rich substances, or by reducing exercise intensity. Skeletal muscle relies predominantly on glycogenolysis for the first few minutes as it transitions from rest to activity, as well as throughout high-intensity aerobic activity and all anaerobic activity. The lack of glycogen causes a low ATP reservoir within the exercising muscle cells. Until second wind is achieved (increased ATP production primarily from free fatty acids), the symptoms of a low ATP reservoir in exercising muscle due to depleted glycogen include: muscle fatigue, muscle cramping, muscle pain (myalgia), inappropriate rapid heart rate response to exercise (tachycardia), breathlessness (dyspnea) or rapid breathing (tachypnea), exaggerated cardiorespiratory response to exercise (tachycardia & dyspnea/tachypnea). The heart tries to compensate for the energy shortage by increasing heart rate to maximize delivery of oxygen and blood borne fuels to the muscle cells for oxidative phosphorylation. Without muscle glycogen, it is important to get into second wind without going too fast, too soon nor trying to push through the pain. Going too fast, too soon encourages protein metabolism over fat metabolism, and the muscle pain in this circumstance is a result of muscle damage due to a severely low ATP reservoir. Protein metabolism occurs through amino acid degradation which converts amino acids into pyruvate, the breakdown of protein to maintain the amino acid pool, the myokinase (adenylate kinase) reaction and purine nucleotide cycle. Amino acids are vital to the purine nucleotide cycle as they are precursors for purines, nucleotides, and nucleosides; as well as branch-chained amino acids are converted into glutamate and aspartate for use in the cycle (see Aspartate and glutamate synthesis). Severe breakdown of muscle leads to rhabdomyolysis and myoglobinuria. Excessive use of the myokinase reaction and purine nucleotide cycle leads to myogenic hyperuricemia. In muscle glycogenoses (muscle GSDs), an inborn error of carbohydrate metabolism impairs either the formation or utilization of muscle glycogen. As such, those with muscle glycogenoses do not need to do prolonged exercise to experience hitting the wall. Instead, signs of exercise intolerance, such as an inappropriate rapid heart rate response to exercise, are experienced from the beginning of activity. Etymology, usage, and synonyms The term bonk for fatigue is presumably derived from the original meaning "to hit", and dates back at least half a century. Its earliest citation in the Oxford English Dictionary is a 1952 article in the Daily Mail. 
The term is used colloquially as a noun ("hitting the bonk") and as a verb ("to bonk halfway through the race"). The condition is also known to long-distance (marathon) runners, who usually refer to it as "hitting the wall". The British may refer to it as "hunger knock," while "hunger bonk" was used by South African cyclists in the 1960s. It can also be referred to as "blowing up" or a "weak attack". In other languages In German, hitting the wall is known as "der Mann mit dem Hammer" ("the man with the hammer"); the phenomenon is thus likened to a man with the hammer coming after the athlete, catching up, and eventually hitting the athlete, causing a sudden drop in performance. In French, marathoners in particular use "frapper le mur (du marathon)", literally hitting the (marathon) wall, just like in English. One may also hear "avoir un coup de barre" (getting smacked by a bar), which means experiencing sudden, incredible fatigue. This expression is used in a wider set of contexts. Mechanisms Athletes engaged in exercise over a long period of time produce energy via two mechanisms, both facilitated by oxygen: via fat metabolism and via breakdown of glycogen into glucose-1-phosphate, followed by glycolysis. How much energy comes from either source depends on the intensity of the exercise. During intense exercise that approaches one's VO2 max, most of the energy comes from glycogen. A typical untrained individual on an average diet is able to store about 380 grams of glycogen, or 1500 kcal, in the body, though much of that amount is spread throughout the muscular system and may not be available for any specific type of exercise. Intense cycling or running can easily consume 600–800 or more kcal per hour. Unless glycogen stores are replenished during exercise, glycogen stores in such an individual will be depleted after less than 2 hours of continuous cycling or 15 miles (24 km) of running. Training and carbohydrate loading can raise these reserves as high as 880 g (3600 kcal), correspondingly raising the potential for uninterrupted exercise. Effects In one study of five male subjects, "reduction in preexercise muscle glycogen from 59.1 to 17.1 μmol × g−1 (n = 3) was associated with a 14% reduction in maximum power output but no change in maximum O2 intake; at any given power output O2 intake, heart rate, and ventilation (VE) were significantly higher, CO2 output (VCO2) was similar, and the respiratory exchange ratio was lower during glycogen depletion compared with control." Five is an extremely small sample size, so this study may not be representative of the general population. Avoidance There are several approaches to prevent glycogen depletion: Carbohydrate loading is used to ensure that the initial glycogen levels are maximized, thus prolonging the exercise. This technique amounts to increasing complex carbohydrate intake during the last few days before the event. Consuming food or drinks containing carbohydrates during the exercise. This is an absolute must for very long distances; it is estimated that Tour de France competitors receive up to 50% of their daily caloric intake from on-the-bike supplements. Lowering the intensity of the exercise to the so-called 'fat max' level (aerobic threshold or "AeT") will lower the fraction of the energy that comes from glycogen as well as the amount of energy burned per unit of time. See also Exercise intolerance Second wind (exercise phenomenon) McArdle Disease (GSD-V) Metabolic myopathy References Sports terminology Endurance games
Compressibility
In thermodynamics and fluid mechanics, the compressibility (also known as the coefficient of compressibility or, if the temperature is held constant, the isothermal compressibility) is a measure of the instantaneous relative volume change of a fluid or solid as a response to a pressure (or mean stress) change. In its simple form, the compressibility, commonly denoted β (or κ in some fields), may be expressed as β = −(1/V) ∂V/∂p, where V is volume and p is pressure. The choice to define compressibility as the negative of the fraction makes compressibility positive in the (usual) case that an increase in pressure induces a reduction in volume. The reciprocal of compressibility at fixed temperature is called the isothermal bulk modulus. Definition The specification above is incomplete, because for any object or system the magnitude of the compressibility depends strongly on whether the process is isentropic or isothermal. Accordingly, the isothermal compressibility is defined as β_T = −(1/V) (∂V/∂p)_T, where the subscript T indicates that the partial derivative is to be taken at constant temperature. The isentropic compressibility is defined as β_S = −(1/V) (∂V/∂p)_S, where S is entropy. For a solid, the distinction between the two is usually negligible. Since the density ρ of a material is inversely proportional to its volume, it can be shown that in both cases β = (1/ρ) ∂ρ/∂p, with the derivative taken at constant temperature or constant entropy as appropriate. Relation to speed of sound The speed of sound c is defined in classical mechanics as c² = (∂p/∂ρ)_S. It follows, by replacing the partial derivatives, that the isentropic compressibility can be expressed as β_S = 1/(ρc²). Relation to bulk modulus The inverse of the compressibility is called the bulk modulus, often denoted K (sometimes B). The compressibility equation relates the isothermal compressibility (and indirectly the pressure) to the structure of the liquid. Thermodynamics The isothermal compressibility is generally related to the isentropic (or adiabatic) compressibility by a few relations: β_T/β_S = γ and β_T − β_S = T α²/(ρ c_p), where γ is the heat capacity ratio, α is the volumetric coefficient of thermal expansion, ρ = N/V is the particle density, c_p is the heat capacity per particle at constant pressure, and the thermal pressure coefficient is λ = (∂p/∂T)_V = α/β_T. In an extensive thermodynamic system, the application of statistical mechanics shows that the isothermal compressibility is also related to the relative size of fluctuations in particle density: β_T = V ⟨(ΔN)²⟩ / (⟨N⟩² k_B T) = (1/ρ²) (∂ρ/∂μ)_T, where μ is the chemical potential. The term "compressibility" is also used in thermodynamics to describe deviations of the thermodynamic properties of a real gas from those expected from an ideal gas. The compressibility factor is defined as Z = pV_m/(RT), where p is the pressure of the gas, T is its temperature, V_m is its molar volume and R is the gas constant, all measured independently of one another. In the case of an ideal gas, the compressibility factor Z is equal to unity, and the familiar ideal gas law is recovered: pV_m = RT. Z can, in general, be either greater or less than unity for a real gas. The deviation from ideal gas behavior tends to become particularly significant (or, equivalently, the compressibility factor strays far from unity) near the critical point, or in the case of high pressure or low temperature. In these cases, a generalized compressibility chart or an alternative equation of state better suited to the problem must be utilized to produce accurate results. Earth science The Earth sciences use compressibility to quantify the ability of a soil or rock to reduce in volume under applied pressure. This concept is important for specific storage, when estimating groundwater reserves in confined aquifers. Geologic materials are made up of two portions: solids and voids (the latter also referred to as porosity). The void space can be full of liquid or gas.
Geologic materials reduce in volume only when the void spaces are reduced, which expel the liquid or gas from the voids. This can happen over a period of time, resulting in settlement. It is an important concept in geotechnical engineering in the design of certain structural foundations. For example, the construction of high-rise structures over underlying layers of highly compressible bay mud poses a considerable design constraint, and often leads to use of driven piles or other innovative techniques. Fluid dynamics The degree of compressibility of a fluid has strong implications for its dynamics. Most notably, the propagation of sound is dependent on the compressibility of the medium. Aerodynamics Compressibility is an important factor in aerodynamics. At low speeds, the compressibility of air is not significant in relation to aircraft design, but as the airflow nears and exceeds the speed of sound, a host of new aerodynamic effects become important in the design of aircraft. These effects, often several of them at a time, made it very difficult for World War II era aircraft to reach speeds much beyond . Many effects are often mentioned in conjunction with the term "compressibility", but regularly have little to do with the compressible nature of air. From a strictly aerodynamic point of view, the term should refer only to those side-effects arising as a result of the changes in airflow from an incompressible fluid (similar in effect to water) to a compressible fluid (acting as a gas) as the speed of sound is approached. There are two effects in particular, wave drag and critical mach. One complication occurs in hypersonic aerodynamics, where dissociation causes an increase in the "notional" molar volume because a mole of oxygen, as O2, becomes 2 moles of monatomic oxygen and N2 similarly dissociates to 2 N. Since this occurs dynamically as air flows over the aerospace object, it is convenient to alter the compressibility factor , defined for an initial 30 gram moles of air, rather than track the varying mean molecular weight, millisecond by millisecond. This pressure dependent transition occurs for atmospheric oxygen in the 2,500–4,000 K temperature range, and in the 5,000–10,000 K range for nitrogen. In transition regions, where this pressure dependent dissociation is incomplete, both beta (the volume/pressure differential ratio) and the differential, constant pressure heat capacity greatly increases. For moderate pressures, above 10,000 K the gas further dissociates into free electrons and ions. for the resulting plasma can similarly be computed for a mole of initial air, producing values between 2 and 4 for partially or singly ionized gas. Each dissociation absorbs a great deal of energy in a reversible process and this greatly reduces the thermodynamic temperature of hypersonic gas decelerated near the aerospace object. Ions or free radicals transported to the object surface by diffusion may release this extra (nonthermal) energy if the surface catalyzes the slower recombination process. Negative compressibility For ordinary materials, the bulk compressibility (sum of the linear compressibilities on the three axes) is positive, that is, an increase in pressure squeezes the material to a smaller volume. This condition is required for mechanical stability. However, under very specific conditions, materials can exhibit a compressibility that can be negative. 
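As a numerical check on the definitions given earlier, the sketch below evaluates the ideal-gas case, for which the isothermal and isentropic compressibilities reduce to 1/p and 1/(γp). The pressure, density and heat capacity ratio are assumed round values for air near room conditions.

from math import sqrt

p = 101_325.0   # pressure, Pa (about 1 atm, assumed)
gamma = 1.4     # heat capacity ratio of a diatomic gas
rho = 1.2       # mass density of air, kg/m^3 (approximate)

beta_T = 1.0 / p             # isothermal compressibility of an ideal gas
beta_S = 1.0 / (gamma * p)   # isentropic compressibility, beta_T / gamma

# Speed of sound from the isentropic compressibility: c = 1 / sqrt(rho * beta_S)
c = 1.0 / sqrt(rho * beta_S)

print(f"beta_T ~ {beta_T:.2e} 1/Pa, beta_S ~ {beta_S:.2e} 1/Pa")
print(f"speed of sound ~ {c:.0f} m/s")   # roughly 340 m/s, as expected for air

For liquids and solids the compressibilities are many orders of magnitude smaller, which is one reason the distinction between the isothermal and isentropic values is often neglected there.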
See also Mach number Mach tuck Poisson ratio Prandtl–Glauert singularity, associated with supersonic flight Shear strength References Thermodynamic properties Fluid dynamics Mechanical quantities
Radial velocity
The radial velocity or line-of-sight velocity of a target with respect to an observer is the rate of change of the vector displacement between the two points. It is formulated as the vector projection of the target–observer relative velocity onto the relative direction or line-of-sight (LOS) connecting the two points. The radial speed or range rate is the temporal rate of the distance or range between the two points. It is a signed scalar quantity, formulated as the scalar projection of the relative velocity vector onto the LOS direction. Equivalently, radial speed equals the norm of the radial velocity, modulo the sign. In astronomy, the point is usually taken to be the observer on Earth, so the radial velocity then denotes the speed with which the object moves away from the Earth (or approaches it, for a negative radial velocity). Formulation Let r(t) be a differentiable vector defining the instantaneous relative position of a target with respect to an observer, and let the instantaneous relative velocity of the target with respect to the observer be v(t) = dr(t)/dt. The magnitude of the position vector is defined in terms of the inner product as r(t) = |r(t)| = (r(t) · r(t))^(1/2). The range rate is the time derivative of this magnitude (norm), dr/dt. Substituting the expression for r(t) and evaluating the derivative of the right-hand side by the chain rule gives dr/dt = (r · v + v · r) / (2|r|). By the symmetry (reciprocity) of the inner product, r · v = v · r, so dr/dt = (r · v)/|r|. Defining the unit relative position vector (or LOS direction) r̂ = r/|r|, the range rate is simply expressed as dr/dt = r̂ · v, i.e., the projection of the relative velocity vector onto the LOS direction. Further defining the velocity direction v̂ = v/|v|, with the relative speed v = |v|, we have dr/dt = v (r̂ · v̂), where the inner product r̂ · v̂ equals the cosine of the angle between the two directions, taking the values +1 and −1 for parallel and antiparallel vectors, respectively. A singularity exists for a coincident observer and target, i.e., |r| = 0; in this case, the range rate is undefined. Applications in astronomy In astronomy, radial velocity is often measured to the first order of approximation by Doppler spectroscopy. The quantity obtained by this method may be called the barycentric radial-velocity measure or spectroscopic radial velocity. However, due to relativistic and cosmological effects over the great distances that light typically travels to reach the observer from an astronomical object, this measure cannot be accurately transformed to a geometric radial velocity without additional assumptions about the object and the space between it and the observer. By contrast, astrometric radial velocity is determined by astrometric observations (for example, a secular change in the annual parallax). Spectroscopic radial velocity Light from an object with a substantial relative radial velocity at emission will be subject to the Doppler effect, so the frequency of the light decreases for objects that were receding (redshift) and increases for objects that were approaching (blueshift). The radial velocity of a star or other luminous distant objects can be measured accurately by taking a high-resolution spectrum and comparing the measured wavelengths of known spectral lines to wavelengths from laboratory measurements. A positive radial velocity indicates the distance between the objects is or was increasing; a negative radial velocity indicates the distance between the source and observer is or was decreasing. William Huggins ventured in 1868 to estimate the radial velocity of Sirius with respect to the Sun, based on observed redshift of the star's light. In many binary stars, the orbital motion usually causes radial velocity variations of several kilometres per second (km/s).
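A minimal numerical sketch of the projection formula derived above, with hypothetical position and velocity vectors (any consistent units work; here metres and metres per second):

import math

# Hypothetical relative position (m) and relative velocity (m/s) of a target
r = (4000.0, 3000.0, 0.0)
v = (10.0, -20.0, 5.0)

def range_rate(r, v):
    """Range rate = projection of the relative velocity onto the LOS direction."""
    norm_r = math.sqrt(sum(x * x for x in r))
    if norm_r == 0.0:
        raise ValueError("range rate undefined for coincident observer and target")
    return sum(ri * vi for ri, vi in zip(r, v)) / norm_r

print(f"range rate = {range_rate(r, v):.2f} m/s")  # negative: target approaching

The radial velocity variations of a few kilometres per second seen in binary stars, discussed next, correspond to the same projection applied to orbital motion.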
As the spectra of these stars vary due to the Doppler effect, they are called spectroscopic binaries. Radial velocity can be used to estimate the ratio of the masses of the stars, and some orbital elements, such as eccentricity and semimajor axis. The same method has also been used to detect planets around stars, in the way that the movement's measurement determines the planet's orbital period, while the resulting radial-velocity amplitude allows the calculation of the lower bound on a planet's mass using the binary mass function. Radial velocity methods alone may only reveal a lower bound, since a large planet orbiting at a very high angle to the line of sight will perturb its star radially as much as a much smaller planet with an orbital plane on the line of sight. It has been suggested that planets with high eccentricities calculated by this method may in fact be two-planet systems of circular or near-circular resonant orbit. Detection of exoplanets The radial velocity method to detect exoplanets is based on the detection of variations in the velocity of the central star, due to the changing direction of the gravitational pull from an (unseen) exoplanet as it orbits the star. When the star moves towards us, its spectrum is blueshifted, while it is redshifted when it moves away from us. By regularly looking at the spectrum of a star—and so, measuring its velocity—it can be determined if it moves periodically due to the influence of an exoplanet companion. Data reduction From the instrumental perspective, velocities are measured relative to the telescope's motion. So an important first step of the data reduction is to remove the contributions of the Earth's elliptic motion around the Sun at approximately ± 30 km/s, a monthly rotation of ± 13 m/s of the Earth around the center of gravity of the Earth-Moon system, the daily rotation of the telescope with the Earth crust around the Earth axis, which is up to ±460 m/s at the equator and proportional to the cosine of the telescope's geographic latitude, small contributions from the Earth polar motion at the level of mm/s, contributions of 230 km/s from the motion around the Galactic Center and associated proper motions. in the case of spectroscopic measurements corrections of the order of ±20 cm/s with respect to aberration. Sin i degeneracy is the impact caused by not being in the plane of the motion. See also Bistatic range rate Doppler effect Inner product Orbit determination Lp space Notes References Further reading Renze, John; Stover, Christopher; and Weisstein, Eric W. "Inner Product." From MathWorld—A Wolfram Web Resource.http://mathworld.wolfram.com/InnerProduct.html External links The Radial Velocity Equation in the Search for Exoplanets ( The Doppler Spectroscopy or Wobble Method ) Astrometry Concepts in astronomy Orbits Velocity
Photophosphorylation
In the process of photosynthesis, the phosphorylation of ADP to form ATP using the energy of sunlight is called photophosphorylation. Cyclic photophosphorylation occurs in both aerobic and anaerobic conditions, driven by the main primary source of energy available to living organisms, which is sunlight. All organisms produce a phosphate compound, ATP, which is the universal energy currency of life. In photophosphorylation, light energy is used to pump protons across a biological membrane, mediated by flow of electrons through an electron transport chain. This stores energy in a proton gradient. As the protons flow back through an enzyme called ATP synthase, ATP is generated from ADP and inorganic phosphate. ATP is essential in the Calvin cycle to assist in the synthesis of carbohydrates from carbon dioxide and NADPH. ATP and reactions Both the structure of ATP synthase and its underlying gene are remarkably similar in all known forms of life. ATP synthase is powered by a transmembrane electrochemical potential gradient, usually in the form of a proton gradient. In all living organisms, a series of redox reactions is used to produce a transmembrane electrochemical potential gradient, or a so-called proton motive force (pmf). Redox reactions are chemical reactions in which electrons are transferred from a donor molecule to an acceptor molecule. The underlying force driving these reactions is the Gibbs free energy of the reactants relative to the products. If donor and acceptor (the reactants) are of higher free energy than the reaction products, the electron transfer may occur spontaneously. The Gibbs free energy is the energy available ("free") to do work. Any reaction that decreases the overall Gibbs free energy of a system will proceed spontaneously (given that the system is isobaric and also at constant temperature), although the reaction may proceed slowly if it is kinetically inhibited. The fact that a reaction is thermodynamically possible does not mean that it will actually occur. A mixture of hydrogen gas and oxygen gas does not spontaneously ignite. It is necessary either to supply an activation energy or to lower the intrinsic activation energy of the system, in order to make most biochemical reactions proceed at a useful rate. Living systems use complex macromolecular structures to lower the activation energies of biochemical reactions. It is possible to couple a thermodynamically favorable reaction (a transition from a high-energy state to a lower-energy state) to a thermodynamically unfavorable reaction (such as a separation of charges, or the creation of an osmotic gradient), in such a way that the overall free energy of the system decreases (making it thermodynamically possible), while useful work is done at the same time. The principle that biological macromolecules catalyze a thermodynamically unfavorable reaction if and only if a thermodynamically favorable reaction occurs simultaneously, underlies all known forms of life. The transfer of electrons from a donor molecule to an acceptor molecule can be spatially separated into a series of intermediate redox reactions. This is an electron transport chain (ETC). Electron transport chains often produce energy in the form of a transmembrane electrochemical potential gradient. The gradient can be used to transport molecules across membranes. Its energy can be used to produce ATP or to do useful work, for instance mechanical work of a rotating bacterial flagella. 
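The chemiosmotic argument above can be made concrete with a back-of-the-envelope calculation. The sketch below assumes round illustrative values: a proton motive force of about 0.2 V across the thylakoid membrane and a free-energy cost of roughly 50 kJ/mol to synthesize ATP under cellular conditions; both figures vary with the organism and its metabolic state.

# Rough energetics of chemiosmotic ATP synthesis (illustrative values only)
F = 96_485.0        # Faraday constant, C per mol of protons
pmf = 0.20          # assumed proton motive force, volts
dG_ATP = 50_000.0   # assumed free energy of ATP synthesis in vivo, J/mol

energy_per_mol_H = F * pmf                    # J released per mol of translocated H+
protons_per_ATP = dG_ATP / energy_per_mol_H   # thermodynamic minimum

print(f"energy per mol H+ ~ {energy_per_mol_H / 1000:.1f} kJ")
print(f"thermodynamic minimum ~ {protons_per_ATP:.1f} H+ per ATP")

Real ATP synthases translocate more protons than this thermodynamic minimum (the chloroplast enzyme is commonly quoted at roughly 4–5 H+ per ATP), which keeps the overall reaction well downhill.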
Cyclic photophosphorylation This form of photophosphorylation occurs on the stroma lamella, or fret channels. In cyclic photophosphorylation, the high-energy electron released from P700, a pigment in a complex called photosystem I, flows in a cyclic pathway. The electron starts in photosystem I, passes from the primary electron acceptor to ferredoxin and then to plastoquinone, next to cytochrome b6f (a complex similar to that found in mitochondria), and finally to plastocyanin before returning to photosystem I. This transport chain produces a proton-motive force, pumping H+ ions across the membrane and producing a concentration gradient that can be used to power ATP synthase during chemiosmosis. This pathway is known as cyclic photophosphorylation, and it produces neither O2 nor NADPH. Unlike non-cyclic photophosphorylation, NADP+ does not accept the electrons; they are instead sent back to the cytochrome b6f complex. In bacterial photosynthesis, a single photosystem is used, and therefore is involved in cyclic photophosphorylation. It is favored in anaerobic conditions and conditions of high irradiance and CO2 compensation points. Non-cyclic photophosphorylation The other pathway, non-cyclic photophosphorylation, is a two-stage process involving two different chlorophyll photosystems in the thylakoid membrane. First, a photon is absorbed by chlorophyll pigments surrounding the reaction core center of photosystem II. The light excites an electron in the pigment P680 at the core of photosystem II; the excited electron is transferred to the primary electron acceptor, pheophytin, leaving behind the oxidized P680+. The energy of P680+ is used in two steps to split a water molecule into 2H+ + 1/2 O2 + 2e− (photolysis, or light-splitting). An electron from the water molecule reduces P680+ back to P680, while the H+ ions and oxygen are released. The electron transfers from pheophytin to plastoquinone (PQ), which takes 2e− (in two steps) from pheophytin and two H+ ions from the stroma to form PQH2. This plastoquinol is later oxidized back to PQ, releasing the 2e− to the cytochrome b6f complex and the two H+ ions into the thylakoid lumen. The electrons then pass through Cyt b6 and Cyt f to plastocyanin, providing the energy to pump hydrogen ions (H+) into the thylakoid space. This creates an H+ gradient, making H+ ions flow back into the stroma of the chloroplast and providing the energy for the (re)generation of ATP. The photosystem II complex replaced its lost electrons from H2O, so electrons are not returned to photosystem II as they would be in the analogous cyclic pathway. Instead, they are transferred to the photosystem I complex, which boosts their energy to a higher level using a second solar photon. The excited electrons are transferred to a series of acceptor molecules, but this time are passed on to an enzyme called ferredoxin–NADP+ reductase, which uses them to catalyze the reaction NADP+ + 2H+ + 2e− → NADPH + H+. This consumes the H+ ions produced by the splitting of water, leading to a net production of 1/2 O2, ATP, and NADPH + H+ with the consumption of solar photons and water. The concentration of NADPH in the chloroplast may help regulate which pathway electrons take through the light reactions. When the chloroplast runs low on ATP for the Calvin cycle, NADPH will accumulate and the plant may shift from noncyclic to cyclic electron flow.
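Taken together, the two photosystems of the non-cyclic pathway are often summarized by an overall stoichiometry of roughly the following form; the ATP figure is approximate, since it depends on the H+/ATP ratio of the ATP synthase and on how much cyclic flow runs in parallel:

2 H2O + 2 NADP+ + 3 ADP + 3 Pi + light → O2 + 2 NADPH + 2 H+ + 3 ATP

The NADPH and ATP produced here are the inputs the Calvin cycle consumes when fixing carbon dioxide.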
Early history of research In 1950, first experimental evidence for the existence of photophosphorylation in vivo was presented by Otto Kandler using intact Chlorella cells and interpreting his findings as light-dependent ATP formation. In 1954, Daniel I. Arnon et.al. discovered photophosphorylation in vitro in isolated chloroplasts with the help of P32. His first review on the early research of photophosphorylation was published in 1956. References Professor Luis Gordillo Fenchel T, King GM, Blackburn TH. Bacterial Biogeochemistry: The Ecophysiology of Mineral Cycling. 2nd ed. Elsevier; 1998. Lengeler JW, Drews G, Schlegel HG, editors. Biology of the Prokaryotes. Blackwell Sci; 1999. Nelson DL, Cox MM. Lehninger Principles of Biochemistry. 4th ed. Freeman; 2005. Stumm W, Morgan JJ. Aquatic Chemistry. 3rd ed. Wiley; 1996. Thauer RK, Jungermann K, Decker K. Energy Conservation in Chemotrophic Anaerobic Bacteria. Bacteriol. Rev. 41:100–180; 1977. White D. The Physiology and Biochemistry of Prokaryotes. 2nd ed. Oxford University Press; 2000. Voet D, Voet JG. Biochemistry. 3rd ed. Wiley; 2004. Cj C. Enverg Photosynthesis Light reactions
Electricity
Electricity is the set of physical phenomena associated with the presence and motion of matter possessing an electric charge. Electricity is related to magnetism, both being part of the phenomenon of electromagnetism, as described by Maxwell's equations. Common phenomena are related to electricity, including lightning, static electricity, electric heating, electric discharges and many others. The presence of either a positive or negative electric charge produces an electric field. The motion of electric charges is an electric current and produces a magnetic field. In most applications, Coulomb's law determines the force acting on an electric charge. Electric potential is the work done to move an electric charge from one point to another within an electric field, typically measured in volts. Electricity plays a central role in many modern technologies, serving in electric power where electric current is used to energise equipment, and in electronics dealing with electrical circuits involving active components such as vacuum tubes, transistors, diodes and integrated circuits, and associated passive interconnection technologies. The study of electrical phenomena dates back to antiquity, with theoretical understanding progressing slowly until the 17th and 18th centuries. The development of the theory of electromagnetism in the 19th century marked significant progress, leading to electricity's industrial and residential application by electrical engineers by the century's end. This rapid expansion in electrical technology at the time was the driving force behind the Second Industrial Revolution, with electricity's versatility driving transformations in both industry and society. Electricity is integral to applications spanning transport, heating, lighting, communications, and computation, making it the foundation of modern industrial society. History Long before any knowledge of electricity existed, people were aware of shocks from electric fish. Ancient Egyptian texts dating from 2750 BCE described them as the "protectors" of all other fish. Electric fish were again reported millennia later by ancient Greek, Roman and Arabic naturalists and physicians. Several ancient writers, such as Pliny the Elder and Scribonius Largus, attested to the numbing effect of electric shocks delivered by electric catfish and electric rays, and knew that such shocks could travel along conducting objects. Patients with ailments such as gout or headache were directed to touch electric fish in the hope that the powerful jolt might cure them. Ancient cultures around the Mediterranean knew that certain objects, such as rods of amber, could be rubbed with cat's fur to attract light objects like feathers. Thales of Miletus made a series of observations on static electricity around 600 BCE, from which he believed that friction rendered amber magnetic, in contrast to minerals such as magnetite, which needed no rubbing. Thales was incorrect in believing the attraction was due to a magnetic effect, but later science would prove a link between magnetism and electricity. According to a controversial theory, the Parthians may have had knowledge of electroplating, based on the 1936 discovery of the Baghdad Battery, which resembles a galvanic cell, though it is uncertain whether the artifact was electrical in nature. 
Electricity would remain little more than an intellectual curiosity for millennia until 1600, when the English scientist William Gilbert wrote De Magnete, in which he made a careful study of electricity and magnetism, distinguishing the lodestone effect from static electricity produced by rubbing amber. He coined the Neo-Latin word electricus ("of amber" or "like amber", from ἤλεκτρον, elektron, the Greek word for "amber") to refer to the property of attracting small objects after being rubbed. This association gave rise to the English words "electric" and "electricity", which made their first appearance in print in Thomas Browne's Pseudodoxia Epidemica of 1646. Further work was conducted in the 17th and early 18th centuries by Otto von Guericke, Robert Boyle, Stephen Gray and C. F. du Fay. Later in the 18th century, Benjamin Franklin conducted extensive research in electricity, selling his possessions to fund his work. In June 1752 he is reputed to have attached a metal key to the bottom of a dampened kite string and flown the kite in a storm-threatened sky. A succession of sparks jumping from the key to the back of his hand showed that lightning was indeed electrical in nature. He also explained the apparently paradoxical behavior of the Leyden jar as a device for storing large amounts of electrical charge in terms of electricity consisting of both positive and negative charges. In 1775, Hugh Williamson reported a series of experiments to the Royal Society on the shocks delivered by the electric eel; that same year the surgeon and anatomist John Hunter described the structure of the fish's electric organs. In 1791, Luigi Galvani published his discovery of bioelectromagnetics, demonstrating that electricity was the medium by which neurons passed signals to the muscles. Alessandro Volta's battery, or voltaic pile, of 1800, made from alternating layers of zinc and copper, provided scientists with a more reliable source of electrical energy than the electrostatic machines previously used. The recognition of electromagnetism, the unity of electric and magnetic phenomena, is due to Hans Christian Ørsted and André-Marie Ampère in 1819–1820. Michael Faraday invented the electric motor in 1821, and Georg Ohm mathematically analysed the electrical circuit in 1827. Electricity and magnetism (and light) were definitively linked by James Clerk Maxwell, in particular in his "On Physical Lines of Force" in 1861 and 1862. While the early 19th century had seen rapid progress in electrical science, the late 19th century would see the greatest progress in electrical engineering. Through such people as Alexander Graham Bell, Ottó Bláthy, Thomas Edison, Galileo Ferraris, Oliver Heaviside, Ányos Jedlik, William Thomson, 1st Baron Kelvin, Charles Algernon Parsons, Werner von Siemens, Joseph Swan, Reginald Fessenden, Nikola Tesla and George Westinghouse, electricity turned from a scientific curiosity into an essential tool for modern life. In 1887, Heinrich Hertz discovered that electrodes illuminated with ultraviolet light create electric sparks more easily. In 1905, Albert Einstein published a paper that explained experimental data from the photoelectric effect as being the result of light energy being carried in discrete quantized packets, energising electrons. This discovery led to the quantum revolution. Einstein was awarded the Nobel Prize in Physics in 1921 for "his discovery of the law of the photoelectric effect". 
The photoelectric effect is also employed in photocells such as can be found in solar panels. The first solid-state device was the "cat's-whisker detector" first used in the 1900s in radio receivers. A whisker-like wire is placed lightly in contact with a solid crystal (such as a germanium crystal) to detect a radio signal by the contact junction effect. In a solid-state component, the current is confined to solid elements and compounds engineered specifically to switch and amplify it. Current flow can be understood in two forms: as negatively charged electrons, and as positively charged electron deficiencies called holes. These charges and holes are understood in terms of quantum physics. The building material is most often a crystalline semiconductor. Solid-state electronics came into its own with the emergence of transistor technology. The first working transistor, a germanium-based point-contact transistor, was invented by John Bardeen and Walter Houser Brattain at Bell Labs in 1947, followed by the bipolar junction transistor in 1948. Concepts Electric charge By modern convention, the charge carried by electrons is defined as negative, and that by protons is positive. Before these particles were discovered, Benjamin Franklin had defined a positive charge as being the charge acquired by a glass rod when it is rubbed with a silk cloth. A proton by definition carries a charge of exactly +1.602176634 × 10⁻¹⁹ coulombs. This value is also defined as the elementary charge. No object can have a charge smaller than the elementary charge, and any amount of charge an object may carry is an integer multiple of the elementary charge. An electron has an equal negative charge, i.e. −1.602176634 × 10⁻¹⁹ coulombs. Charge is possessed not just by matter, but also by antimatter, each antiparticle bearing an equal and opposite charge to its corresponding particle. The presence of charge gives rise to an electrostatic force: charges exert a force on each other, an effect that was known, though not understood, in antiquity. A lightweight ball suspended by a fine thread can be charged by touching it with a glass rod that has itself been charged by rubbing with a cloth. If a similar ball is charged by the same glass rod, it is found to repel the first: the charge acts to force the two balls apart. Two balls that are charged with a rubbed amber rod also repel each other. However, if one ball is charged by the glass rod, and the other by an amber rod, the two balls are found to attract each other. These phenomena were investigated in the late eighteenth century by Charles-Augustin de Coulomb, who deduced that charge manifests itself in two opposing forms. This discovery led to the well-known axiom: like-charged objects repel and opposite-charged objects attract. The force acts on the charged particles themselves, hence charge has a tendency to spread itself as evenly as possible over a conducting surface. The magnitude of the electromagnetic force, whether attractive or repulsive, is given by Coulomb's law, which relates the force to the product of the charges and has an inverse-square relation to the distance between them. The electromagnetic force is very strong, second only in strength to the strong interaction, but unlike that force it operates over all distances. In comparison with the much weaker gravitational force, the electromagnetic force pushing two electrons apart is 10⁴² times that of the gravitational attraction pulling them together. Charge originates from certain types of subatomic particles, the most familiar carriers of which are the electron and proton.
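A short numerical sketch of the comparison just made, using standard values for the constants; the ratio is independent of the separation, since both forces fall off as the inverse square of the distance:

# Ratio of electrostatic repulsion to gravitational attraction for two electrons
k = 8.988e9       # Coulomb constant, N m^2 / C^2
G = 6.674e-11     # gravitational constant, N m^2 / kg^2
e = 1.602e-19     # elementary charge, C
m_e = 9.109e-31   # electron mass, kg

ratio = (k * e**2) / (G * m_e**2)   # the separation cancels out of the ratio
print(f"F_electric / F_gravity ~ {ratio:.1e}")   # roughly 4e42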
Electric charge gives rise to and interacts with the electromagnetic force, one of the four fundamental forces of nature. Experiment has shown charge to be a conserved quantity, that is, the net charge within an electrically isolated system will always remain constant regardless of any changes taking place within that system. Within the system, charge may be transferred between bodies, either by direct contact, or by passing along a conducting material, such as a wire. The informal term static electricity refers to the net presence (or 'imbalance') of charge on a body, usually caused when dissimilar materials are rubbed together, transferring charge from one to the other. Charge can be measured by a number of means, an early instrument being the gold-leaf electroscope, which although still in use for classroom demonstrations, has been superseded by the electronic electrometer. Electric current The movement of electric charge is known as an electric current, the intensity of which is usually measured in amperes. Current can consist of any moving charged particles; most commonly these are electrons, but any charge in motion constitutes a current. Electric current can flow through some things, electrical conductors, but will not flow through an electrical insulator. By historical convention, a positive current is defined as having the same direction of flow as any positive charge it contains, or to flow from the most positive part of a circuit to the most negative part. Current defined in this manner is called conventional current. The motion of negatively charged electrons around an electric circuit, one of the most familiar forms of current, is thus deemed positive in the opposite direction to that of the electrons. However, depending on the conditions, an electric current can consist of a flow of charged particles in either direction, or even in both directions at once. The positive-to-negative convention is widely used to simplify this situation. The process by which electric current passes through a material is termed electrical conduction, and its nature varies with that of the charged particles and the material through which they are travelling. Examples of electric currents include metallic conduction, where electrons flow through a conductor such as metal, and electrolysis, where ions (charged atoms) flow through liquids, or through plasmas such as electrical sparks. While the particles themselves can move quite slowly, sometimes with an average drift velocity only fractions of a millimetre per second, the electric field that drives them itself propagates at close to the speed of light, enabling electrical signals to pass rapidly along wires. Current causes several observable effects, which historically were the means of recognising its presence. That water could be decomposed by the current from a voltaic pile was discovered by Nicholson and Carlisle in 1800, a process now known as electrolysis. Their work was greatly expanded upon by Michael Faraday in 1833. Current through a resistance causes localised heating, an effect James Prescott Joule studied mathematically in 1840. One of the most important discoveries relating to current was made accidentally by Hans Christian Ørsted in 1820, when, while preparing a lecture, he witnessed the current in a wire disturbing the needle of a magnetic compass. He had discovered electromagnetism, a fundamental interaction between electricity and magnetics. 
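The slow carrier drift mentioned above can be illustrated with a rough calculation for a copper wire; the carrier density, cross-section and current below are assumed round values:

# Electron drift velocity in a copper wire: v = I / (n * q * A)
I = 1.0          # current, A (assumed)
A = 1.0e-6       # cross-sectional area, m^2 (1 mm^2, assumed)
n = 8.5e28       # free-electron density of copper, per m^3 (approximate)
q = 1.602e-19    # elementary charge, C

v_drift = I / (n * q * A)
print(f"drift velocity ~ {v_drift * 1000:.3f} mm/s")   # a small fraction of a mm/s

Despite this sluggish drift, the signal itself travels at a large fraction of the speed of light, as noted above.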
The level of electromagnetic emissions generated by electric arcing is high enough to produce electromagnetic interference, which can be detrimental to the workings of adjacent equipment. In engineering or household applications, current is often described as being either direct current (DC) or alternating current (AC). These terms refer to how the current varies in time. Direct current, as produced by example from a battery and required by most electronic devices, is a unidirectional flow from the positive part of a circuit to the negative. If, as is most common, this flow is carried by electrons, they will be travelling in the opposite direction. Alternating current is any current that reverses direction repeatedly; almost always this takes the form of a sine wave. Alternating current thus pulses back and forth within a conductor without the charge moving any net distance over time. The time-averaged value of an alternating current is zero, but it delivers energy in first one direction, and then the reverse. Alternating current is affected by electrical properties that are not observed under steady state direct current, such as inductance and capacitance. These properties however can become important when circuitry is subjected to transients, such as when first energised. Electric field The concept of the electric field was introduced by Michael Faraday. An electric field is created by a charged body in the space that surrounds it, and results in a force exerted on any other charges placed within the field. The electric field acts between two charges in a similar manner to the way that the gravitational field acts between two masses, and like it, extends towards infinity and shows an inverse square relationship with distance. However, there is an important difference. Gravity always acts in attraction, drawing two masses together, while the electric field can result in either attraction or repulsion. Since large bodies such as planets generally carry no net charge, the electric field at a distance is usually zero. Thus gravity is the dominant force at distance in the universe, despite being much weaker. An electric field generally varies in space, and its strength at any one point is defined as the force (per unit charge) that would be felt by a stationary, negligible charge if placed at that point. The conceptual charge, termed a 'test charge', must be vanishingly small to prevent its own electric field disturbing the main field and must also be stationary to prevent the effect of magnetic fields. As the electric field is defined in terms of force, and force is a vector, having both magnitude and direction, it follows that an electric field is a vector field. The study of electric fields created by stationary charges is called electrostatics. The field may be visualised by a set of imaginary lines whose direction at any point is the same as that of the field. This concept was introduced by Faraday, whose term 'lines of force' still sometimes sees use. The field lines are the paths that a point positive charge would seek to make as it was forced to move within the field; they are however an imaginary concept with no physical existence, and the field permeates all the intervening space between the lines. Field lines emanating from stationary charges have several key properties: first, that they originate at positive charges and terminate at negative charges; second, that they must enter any good conductor at right angles, and third, that they may never cross nor close in on themselves. 
A hollow conducting body carries all its charge on its outer surface. The field is therefore 0 at all places inside the body. This is the operating principal of the Faraday cage, a conducting metal shell which isolates its interior from outside electrical effects. The principles of electrostatics are important when designing items of high-voltage equipment. There is a finite limit to the electric field strength that may be withstood by any medium. Beyond this point, electrical breakdown occurs and an electric arc causes flashover between the charged parts. Air, for example, tends to arc across small gaps at electric field strengths which exceed 30 kV per centimetre. Over larger gaps, its breakdown strength is weaker, perhaps 1 kV per centimetre. The most visible natural occurrence of this is lightning, caused when charge becomes separated in the clouds by rising columns of air, and raises the electric field in the air to greater than it can withstand. The voltage of a large lightning cloud may be as high as 100 MV and have discharge energies as great as 250 kWh. The field strength is greatly affected by nearby conducting objects, and it is particularly intense when it is forced to curve around sharply pointed objects. This principle is exploited in the lightning conductor, the sharp spike of which acts to encourage the lightning strike to develop there, rather than to the building it serves to protect. Electric potential The concept of electric potential is closely linked to that of the electric field. A small charge placed within an electric field experiences a force, and to have brought that charge to that point against the force requires work. The electric potential at any point is defined as the energy required to bring a unit test charge from an infinite distance slowly to that point. It is usually measured in volts, and one volt is the potential for which one joule of work must be expended to bring a charge of one coulomb from infinity. This definition of potential, while formal, has little practical application, and a more useful concept is that of electric potential difference, and is the energy required to move a unit charge between two specified points. An electric field has the special property that it is conservative, which means that the path taken by the test charge is irrelevant: all paths between two specified points expend the same energy, and thus a unique value for potential difference may be stated. The volt is so strongly identified as the unit of choice for measurement and description of electric potential difference that the term voltage sees greater everyday usage. For practical purposes, it is useful to define a common reference point to which potentials may be expressed and compared. While this could be at infinity, a much more useful reference is the Earth itself, which is assumed to be at the same potential everywhere. This reference point naturally takes the name earth or ground. Earth is assumed to be an infinite source of equal amounts of positive and negative charge, and is therefore electrically uncharged—and unchargeable. Electric potential is a scalar quantity, that is, it has only magnitude and not direction. It may be viewed as analogous to height: just as a released object will fall through a difference in heights caused by a gravitational field, so a charge will 'fall' across the voltage caused by an electric field. 
As relief maps show contour lines marking points of equal height, a set of lines marking points of equal potential (known as equipotentials) may be drawn around an electrostatically charged object. The equipotentials cross all lines of force at right angles. They must also lie parallel to a conductor's surface, since otherwise there would be a force along the surface of the conductor that would move the charge carriers to even the potential across the surface. The electric field was formally defined as the force exerted per unit charge, but the concept of potential allows for a more useful and equivalent definition: the electric field is the local gradient of the electric potential. Usually expressed in volts per metre, the vector direction of the field is the line of greatest slope of potential, and where the equipotentials lie closest together.
Electromagnets
Ørsted's discovery in 1820 that a magnetic field existed around all sides of a wire carrying an electric current indicated that there was a direct relationship between electricity and magnetism. Moreover, the interaction seemed different from gravitational and electrostatic forces, the two forces of nature then known. The force on the compass needle did not direct it to or away from the current-carrying wire, but acted at right angles to it. Ørsted's words were that "the electric conflict acts in a revolving manner." The force also depended on the direction of the current, for if the flow was reversed, then the force did too. Ørsted did not fully understand his discovery, but he observed the effect was reciprocal: a current exerts a force on a magnet, and a magnetic field exerts a force on a current. The phenomenon was further investigated by Ampère, who discovered that two parallel current-carrying wires exerted a force upon each other: two wires conducting currents in the same direction are attracted to each other, while wires carrying currents in opposite directions are forced apart. The interaction is mediated by the magnetic field each current produces and forms the basis for the international definition of the ampere. This relationship between magnetic fields and currents is extremely important, for it led to Michael Faraday's invention of the electric motor in 1821. Faraday's homopolar motor consisted of a permanent magnet sitting in a pool of mercury. A current was allowed through a wire suspended from a pivot above the magnet and dipped into the mercury. The magnet exerted a tangential force on the wire, making it circle around the magnet for as long as the current was maintained. Experimentation by Faraday in 1831 revealed that a wire moving perpendicular to a magnetic field developed a potential difference between its ends. Further analysis of this process, known as electromagnetic induction, enabled him to state the principle, now known as Faraday's law of induction, that the potential difference induced in a closed circuit is proportional to the rate of change of magnetic flux through the loop. Exploitation of this discovery enabled him to invent the first electrical generator in 1831, in which he converted the mechanical energy of a rotating copper disc to electrical energy. Faraday's disc was inefficient and of no use as a practical generator, but it showed the possibility of generating electric power using magnetism, a possibility that would be taken up by those that followed on from his work.
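Faraday's law as stated above can be checked numerically; the following sketch assumes a sinusoidal flux through a loop (the peak value and frequency are arbitrary) and compares a finite-difference derivative of the flux with the analytic EMF.

```python
import numpy as np

# Assume a magnetic flux through a loop varying sinusoidally in time:
#   phi(t) = phi0 * sin(2*pi*f*t), so Faraday's law gives emf(t) = -dphi/dt.
phi0 = 2e-3      # peak flux in webers (arbitrary assumption)
f = 50.0         # frequency in hertz (arbitrary assumption)

t = np.linspace(0.0, 0.1, 100_001)
phi = phi0 * np.sin(2 * np.pi * f * t)

emf_numeric = -np.gradient(phi, t)                             # finite-difference -dΦ/dt
emf_exact = -phi0 * 2 * np.pi * f * np.cos(2 * np.pi * f * t)  # analytic derivative

print("max |difference| =", np.max(np.abs(emf_numeric - emf_exact)))
# The induced EMF tracks the rate of change of flux, as Faraday's law states.
```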
Electric circuits
An electric circuit is an interconnection of electric components such that electric charge is made to flow along a closed path (a circuit), usually to perform some useful task. The components in an electric circuit can take many forms, which can include elements such as resistors, capacitors, switches, transformers and electronics. Electronic circuits contain active components, usually semiconductors, and typically exhibit non-linear behaviour, requiring complex analysis. The simplest electric components are those that are termed passive and linear: while they may temporarily store energy, they contain no sources of it, and exhibit linear responses to stimuli. The resistor is perhaps the simplest of passive circuit elements: as its name suggests, it resists the current through it, dissipating its energy as heat. The resistance is a consequence of the motion of charge through a conductor: in metals, for example, resistance is primarily due to collisions between electrons and ions. Ohm's law is a basic law of circuit theory, stating that the current passing through a resistance is directly proportional to the potential difference across it. The resistance of most materials is relatively constant over a range of temperatures and currents; materials under these conditions are known as 'ohmic'. The ohm, the unit of resistance, was named in honour of Georg Ohm, and is symbolised by the Greek letter Ω. 1 Ω is the resistance that will produce a potential difference of one volt in response to a current of one amp. The capacitor is a development of the Leyden jar and is a device that can store charge, and thereby store electrical energy in the resulting field. It consists of two conducting plates separated by a thin insulating dielectric layer; in practice, thin metal foils are coiled together, increasing the surface area per unit volume and therefore the capacitance. The unit of capacitance is the farad, named after Michael Faraday, and given the symbol F: one farad is the capacitance that develops a potential difference of one volt when it stores a charge of one coulomb. A capacitor connected to a voltage supply initially causes a current as it accumulates charge; this current will however decay in time as the capacitor fills, eventually falling to zero. A capacitor will therefore not permit a steady state current, but instead blocks it. The inductor is a conductor, usually a coil of wire, that stores energy in a magnetic field in response to the current through it. When the current changes, the magnetic field does too, inducing a voltage between the ends of the conductor. The induced voltage is proportional to the time rate of change of the current. The constant of proportionality is termed the inductance. The unit of inductance is the henry, named after Joseph Henry, a contemporary of Faraday. One henry is the inductance that will induce a potential difference of one volt if the current through it changes at a rate of one ampere per second. The inductor's behaviour is in some regards converse to that of the capacitor: it will freely allow an unchanging current, but opposes a rapidly changing one.
Electric power
Electric power is the rate at which electric energy is transferred by an electric circuit. The SI unit of power is the watt, one joule per second. Electric power, like mechanical power, is the rate of doing work, measured in watts, and represented by the letter P. The term wattage is used colloquially to mean "electric power in watts."
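The capacitor behaviour described earlier in this section (an initial charging current that decays towards zero, so that steady direct current is blocked) can be illustrated with a short time-stepped sketch; the supply voltage and component values are arbitrary assumptions.

```python
# Charging a capacitor through a resistor from a DC supply: the current starts at
# V/R and decays towards zero, so no steady-state direct current flows.
V_supply = 5.0     # volts (assumed)
R = 1_000.0        # ohms (assumed)
C = 100e-6         # farads (assumed), giving a time constant RC = 0.1 s
dt = 1e-4          # time step in seconds

v_cap = 0.0
for step in range(0, 3001):
    i = (V_supply - v_cap) / R          # Ohm's law across the resistor
    if step % 500 == 0:
        print(f"t = {step * dt * 1000:5.1f} ms   i = {i * 1000:6.3f} mA")
    v_cap += (i / C) * dt               # dV/dt = i/C for the capacitor

# The printed current falls from 5 mA towards zero over a few time constants.
```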
The electric power in watts produced by an electric current I, consisting of a charge of Q coulombs every t seconds, passing through an electric potential (voltage) difference of V is
P = work done per unit time = QV/t = IV
where
Q is electric charge in coulombs
t is time in seconds
I is electric current in amperes
V is electric potential or voltage in volts
Electric power is generally supplied to businesses and homes by the electric power industry. Electricity is usually sold by the kilowatt hour (3.6 MJ), which is the product of power in kilowatts multiplied by running time in hours. Electric utilities measure power using electricity meters, which keep a running total of the electric energy delivered to a customer. Unlike fossil fuels, electricity is a low entropy form of energy and can be converted into motion or many other forms of energy with high efficiency.
Electronics
Electronics deals with electrical circuits that involve active electrical components such as vacuum tubes, transistors, diodes, sensors and integrated circuits, and associated passive interconnection technologies. The nonlinear behaviour of active components and their ability to control electron flows makes digital switching possible, and electronics is widely used in information processing, telecommunications, and signal processing. Interconnection technologies such as circuit boards, electronics packaging technology, and other varied forms of communication infrastructure complete circuit functionality and transform the mixed components into a regular working system. Today, most electronic devices use semiconductor components to perform electron control. The underlying principles that explain how semiconductors work are studied in solid state physics, whereas the design and construction of electronic circuits to solve practical problems are part of electronics engineering.
Electromagnetic wave
Faraday's and Ampère's work showed that a time-varying magnetic field created an electric field, and a time-varying electric field created a magnetic field. Thus, when either field is changing in time, a field of the other is always induced. These variations are an electromagnetic wave. Electromagnetic waves were analysed theoretically by James Clerk Maxwell in 1864. Maxwell developed a set of equations that could unambiguously describe the interrelationship between electric field, magnetic field, electric charge, and electric current. He could moreover prove that in a vacuum such a wave would travel at the speed of light, and thus light itself was a form of electromagnetic radiation. Maxwell's equations, which unify light, fields, and charge, are one of the great milestones of theoretical physics. The work of many researchers enabled the use of electronics to convert signals into high frequency oscillating currents and, via suitably shaped conductors, electricity permits the transmission and reception of these signals via radio waves over very long distances.
Production, storage and uses
Generation and transmission
In the 6th century BC the Greek philosopher Thales of Miletus experimented with amber rods: these were the first studies into the production of electricity. While this method, now known as the triboelectric effect, can lift light objects and generate sparks, it is extremely inefficient. It was not until the invention of the voltaic pile in the eighteenth century that a viable source of electricity became available.
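Applying the power relation above to some assumed example values shows how watts translate into the kilowatt hours in which electricity is sold; the current, voltage, and running time below are arbitrary illustrative choices.

```python
# P = QV/t = IV: an appliance drawing 10 A from a 230 V supply (assumed values).
I = 10.0        # current in amperes
V = 230.0       # potential difference in volts
P = I * V       # power in watts

hours = 3.0
energy_kwh = P * hours / 1000.0          # kilowatt hours = kW × h
energy_mj = energy_kwh * 3.6             # 1 kWh = 3.6 MJ, as noted above

print(f"power  = {P:.0f} W")
print(f"energy = {energy_kwh:.2f} kWh = {energy_mj:.1f} MJ over {hours} h")
```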
The voltaic pile, and its modern descendant, the electrical battery, store energy chemically and make it available on demand in the form of electricity. Electrical power is usually generated by electro-mechanical generators. These can be driven by steam produced from fossil fuel combustion or the heat released from nuclear reactions, but also more directly from the kinetic energy of wind or flowing water. The steam turbine invented by Sir Charles Parsons in 1884 is still used to convert the thermal energy of steam into a rotary motion that can be used by electro-mechanical generators. Such generators bear no resemblance to Faraday's homopolar disc generator of 1831, but they still rely on his electromagnetic principle that a conductor linking a changing magnetic field induces a potential difference across its ends. Electricity generated by solar panels relies on a different mechanism: solar radiation is converted directly into electricity using the photovoltaic effect. Demand for electricity grows with great rapidity as a nation modernises and its economy develops. The United States showed a 12% increase in demand during each year of the first three decades of the twentieth century, a rate of growth that is now being experienced by emerging economies such as those of India or China. Environmental concerns with electricity generation, in particular the contribution of fossil fuel burning to climate change, have led to an increased focus on generation from renewable sources. In the power sector, wind and solar have become cost effective, speeding up an energy transition away from fossil fuels.
Transmission and storage
The invention in the late nineteenth century of the transformer meant that electrical power could be transmitted more efficiently at a higher voltage but lower current. Efficient electrical transmission meant in turn that electricity could be generated at centralised power stations, where it benefited from economies of scale, and then be despatched relatively long distances to where it was needed. Normally, demand for electricity must match the supply, as storage of electricity is difficult. A certain amount of generation must always be held in reserve to cushion an electrical grid against inevitable disturbances and losses. With increasing levels of variable renewable energy (wind and solar energy) in the grid, it has become more challenging to match supply and demand. Storage plays an increasing role in bridging that gap. There are four types of energy storage technologies, each in varying states of technology readiness: batteries (electrochemical storage), chemical storage such as hydrogen, thermal storage, and mechanical storage (such as pumped hydropower).
Applications
Electricity is a very convenient way to transfer energy, and it has been adapted to a huge, and growing, number of uses. The invention of a practical incandescent light bulb in the 1870s led to lighting becoming one of the first publicly available applications of electrical power. Although electrification brought with it its own dangers, replacing the naked flames of gas lighting greatly reduced fire hazards within homes and factories. Public utilities were set up in many cities targeting the burgeoning market for electrical lighting. In the late 20th century and in modern times, the trend has started to flow in the direction of deregulation in the electrical power sector. The resistive Joule heating effect employed in filament light bulbs also sees more direct use in electric heating.
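The Joule heating just mentioned, and the transmission losses described earlier in this section, both follow from the same relation P = I²R = V²/R; the sketch below, with assumed resistances, voltages, and power levels, illustrates the two sides of that coin.

```python
# The same Joule-heating law sets the output of a resistive heater and the losses
# in a transmission line (all component values here are illustrative assumptions).
V_mains = 230.0
R_heater = 26.5                      # ohms, chosen to give roughly a 2 kW heater
P_heater = V_mains**2 / R_heater
print(f"heater: {P_heater:.0f} W from a {R_heater} ohm element at {V_mains:.0f} V")

# For a line of fixed resistance delivering fixed power, raising the voltage lowers
# the current and so cuts the I²R loss - the reason grids transmit at high voltage.
P_delivered = 10e6                   # 10 MW to be delivered (assumed)
R_line = 5.0                         # total line resistance in ohms (assumed)
for V_line in (11e3, 400e3):
    I = P_delivered / V_line
    loss = I**2 * R_line
    print(f"{V_line/1e3:6.0f} kV line: loss = {loss/1e3:9.1f} kW "
          f"({100 * loss / P_delivered:.3f} % of the delivered power)")
```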
While this is versatile and controllable, it can be seen as wasteful, since most electrical generation has already required the production of heat at a power station. A number of countries, such as Denmark, have issued legislation restricting or banning the use of resistive electric heating in new buildings. Electricity is however still a highly practical energy source for heating and refrigeration, with air conditioning/heat pumps representing a growing sector for electricity demand for heating and cooling, the effects of which electricity utilities are increasingly obliged to accommodate. Electrification is expected to play a major role in the decarbonisation of sectors that rely on direct fossil fuel burning, such as transport (using electric vehicles) and heating (using heat pumps). The effects of electromagnetism are most visibly employed in the electric motor, which provides a clean and efficient means of motive power. A stationary motor such as a winch is easily provided with a supply of power, but a motor that moves with its application, such as an electric vehicle, is obliged to either carry along a power source such as a battery, or to collect current from a sliding contact such as a pantograph. Electrically powered vehicles are used in public transportation, such as electric buses and trains, and an increasing number of battery-powered electric cars in private ownership. Electricity is used within telecommunications, and indeed the electrical telegraph, demonstrated commercially in 1837 by Cooke and Wheatstone, was one of its earliest applications. With the construction of first transcontinental, and then transatlantic, telegraph systems in the 1860s, electricity had enabled communications in minutes across the globe. Optical fibre and satellite communication have taken a share of the market for communications systems, but electricity can be expected to remain an essential part of the process. Electronic devices make use of the transistor, perhaps one of the most important inventions of the twentieth century, and a fundamental building block of all modern circuitry. A modern integrated circuit may contain many billions of miniaturised transistors in a region only a few centimetres square. Electricity and the natural world Physiological effects A voltage applied to a human body causes an electric current through the tissues, and although the relationship is non-linear, the greater the voltage, the greater the current. The threshold for perception varies with the supply frequency and with the path of the current, but is about 0.1 mA to 1 mA for mains-frequency electricity, though a current as low as a microamp can be detected as an electrovibration effect under certain conditions. If the current is sufficiently high, it will cause muscle contraction, fibrillation of the heart, and tissue burns. The lack of any visible sign that a conductor is electrified makes electricity a particular hazard. The pain caused by an electric shock can be intense, leading electricity at times to be employed as a method of torture. Death caused by an electric shock—electrocution—is still used for judicial execution in some US states, though its use had become very rare by the end of the 20th century. Electrical phenomena in nature Electricity is not a human invention, and may be observed in several forms in nature, notably lightning. Many interactions familiar at the macroscopic level, such as touch, friction or chemical bonding, are due to interactions between electric fields on the atomic scale. 
The Earth's magnetic field is due to the natural dynamo of circulating currents in the planet's core. Certain crystals, such as quartz, or even sugar, generate a potential difference across their faces when pressed. This phenomenon is known as piezoelectricity, from the Greek piezein (πιέζειν), meaning to press, and was discovered in 1880 by Pierre and Jacques Curie. The effect is reciprocal: when a piezoelectric material is subjected to an electric field it changes size slightly. Some organisms, such as sharks, are able to detect and respond to changes in electric fields, an ability known as electroreception, while others, termed electrogenic, are able to generate voltages themselves to serve as a predatory or defensive weapon; these are electric fish in different orders. The order Gymnotiformes, of which the best known example is the electric eel, detects or stuns its prey via high voltages generated from modified muscle cells called electrocytes. All animals transmit information along their cell membranes with voltage pulses called action potentials, whose functions include communication by the nervous system between neurons and muscles. An electric shock stimulates this system, and causes muscles to contract. Action potentials are also responsible for coordinating activities in certain plants.
Cultural perception
It is said that in the 1850s, British politician William Ewart Gladstone asked the scientist Michael Faraday why electricity was valuable. Faraday answered, "One day sir, you may tax it." However, according to Snopes.com "the anecdote should be considered apocryphal because it isn't mentioned in any accounts by Faraday or his contemporaries (letters, newspapers, or biographies) and only popped up well after Faraday's death." In the 19th and early 20th century, electricity was not part of the everyday life of many people, even in the industrialised Western world. The popular culture of the time accordingly often depicted it as a mysterious, quasi-magical force that can slay the living, revive the dead or otherwise bend the laws of nature. This attitude began with the 1771 experiments of Luigi Galvani in which the legs of dead frogs were shown to twitch on application of animal electricity. "Revitalization" or resuscitation of apparently dead or drowned persons was reported in the medical literature shortly after Galvani's work. These results were known to Mary Shelley when she authored Frankenstein (1818), although she does not name the method of revitalization of the monster. The revitalization of monsters with electricity later became a stock theme in horror films. As public familiarity with electricity as the lifeblood of the Second Industrial Revolution grew, its wielders were more often cast in a positive light, such as the workers who "finger death at their gloves' end as they piece and repiece the living wires" in Rudyard Kipling's 1907 poem Sons of Martha. Electrically powered vehicles of every sort featured large in adventure stories such as those of Jules Verne and the Tom Swift books. The masters of electricity, whether fictional or real—including scientists such as Thomas Edison, Charles Steinmetz or Nikola Tesla—were popularly conceived of as having wizard-like powers. With electricity ceasing to be a novelty and becoming a necessity of everyday life in the latter half of the 20th century, it has attracted particular attention in popular culture only when it stops flowing, an event that usually signals disaster.
The people who keep it flowing, such as the nameless hero of Jimmy Webb's song "Wichita Lineman" (1968), are still often cast as heroic, wizard-like figures. See also Ampère's circuital law, connects the direction of an electric current and its associated magnetic currents. Electric potential energy, the potential energy of a system of charges Electricity market, the sale of electrical energy Etymology of electricity, the origin of the word electricity and its current different usages Hydraulic analogy, an analogy between the flow of water and electric current Notes References External links Basic Concepts of Electricity chapter from Lessons In Electric Circuits Vol 1 DC book and series. "One-Hundred Years of Electricity", May 1931, Popular Mechanics Socket and plug standards Electricity Misconceptions Electricity and Magnetism Understanding Electricity and Electronics in about 10 Minutes
0.770647
0.999555
0.770305
Force field (physics)
In physics, a force field is a vector field corresponding with a non-contact force acting on a particle at various positions in space. Specifically, a force field is a vector field F(r), where F(r) is the force that a particle would feel if it were at the position r.
Examples
Gravity is the force of attraction between two objects. A gravitational force field models this influence that a massive body (or more generally, any quantity of energy) extends into the space around itself. In Newtonian gravity, a particle of mass M creates a gravitational field g = −(GM/r²) r̂, where the radial unit vector r̂ points away from the particle. The gravitational force experienced by a particle of light mass m, close to the surface of Earth, is given by F = mg, where g is Earth's gravity. An electric field E exerts a force on a point charge q, given by F = qE. In a magnetic field B, a point charge moving through it experiences a force perpendicular to its own velocity and to the direction of the field, following the relation F = qv × B.
Work
Work is dependent on the displacement as well as the force acting on an object. As a particle moves through a force field along a path C, the work done by the force is a line integral:
W = ∫_C F · dr
This value is independent of the velocity/momentum with which the particle travels along the path.
Conservative force field
For a conservative force field, it is also independent of the path itself, depending only on the starting and ending points. Therefore, the work for an object travelling in a closed path is zero, since its starting and ending points are the same:
∮_C F · dr = 0
If the field is conservative, the work done can be more easily evaluated by realizing that a conservative vector field can be written as the gradient of some scalar potential function:
F = −∇V
The work done is then simply the difference in the value of this potential in the starting and end points of the path. If these points are given by x = a and x = b, respectively:
W = V(a) − V(b)
See also
Classical mechanics
Field line
Force
Mechanical work
References
External links
Conservative and non-conservative force-fields, Classical Mechanics, University of Texas at Austin
Force
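As an added numerical illustration of the work integral and of path independence in a conservative field (not part of the original article), the sketch below integrates an inverse-square central force along two different paths between the same endpoints and compares both results with the potential difference; the units, endpoints, and detour point are arbitrary choices.

```python
import numpy as np

GM = 1.0  # gravitational parameter in arbitrary units

def force(r_vec):
    """Inverse-square attractive central force F = -GM r_hat / r^2 on a unit test mass."""
    r = np.linalg.norm(r_vec)
    return -GM * r_vec / r**3

def work(path_points):
    """Approximate the line integral W = ∫ F · dr along a polyline of sample points."""
    W = 0.0
    for p0, p1 in zip(path_points[:-1], path_points[1:]):
        mid = 0.5 * (p0 + p1)
        W += np.dot(force(mid), p1 - p0)
    return W

a = np.array([2.0, 0.0])
b = np.array([0.0, 5.0])

t = np.linspace(0.0, 1.0, 20_000)[:, None]
straight = a + t * (b - a)                       # path 1: straight line from a to b
c = np.array([6.0, 6.0])
detour = np.vstack([a + t * (c - a), c + t * (b - c)])   # path 2: detour through c

V = lambda r_vec: -GM / np.linalg.norm(r_vec)    # potential with F = -grad V
print("straight path :", work(straight))
print("detour path   :", work(detour))
print("V(a) - V(b)   :", V(a) - V(b))
# All three agree up to discretisation error: the field is conservative, so the work
# depends only on the endpoints.
```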
0.78395
0.982591
0.770302
Renormalization
Renormalization is a collection of techniques in quantum field theory, statistical field theory, and the theory of self-similar geometric structures, that are used to treat infinities arising in calculated quantities by altering values of these quantities to compensate for effects of their self-interactions. But even if no infinities arose in loop diagrams in quantum field theory, it could be shown that it would be necessary to renormalize the mass and fields appearing in the original Lagrangian. For example, an electron theory may begin by postulating an electron with an initial mass and charge. In quantum field theory a cloud of virtual particles, such as photons, positrons, and others surrounds and interacts with the initial electron. Accounting for the interactions of the surrounding particles (e.g. collisions at different energies) shows that the electron-system behaves as if it had a different mass and charge than initially postulated. Renormalization, in this example, mathematically replaces the initially postulated mass and charge of an electron with the experimentally observed mass and charge. Mathematics and experiments prove that positrons and more massive particles such as protons exhibit precisely the same observed charge as the electron – even in the presence of much stronger interactions and more intense clouds of virtual particles. Renormalization specifies relationships between parameters in the theory when parameters describing large distance scales differ from parameters describing small distance scales. Physically, the pileup of contributions from an infinity of scales involved in a problem may then result in further infinities. When describing spacetime as a continuum, certain statistical and quantum mechanical constructions are not well-defined. To define them, or make them unambiguous, a continuum limit must carefully remove "construction scaffolding" of lattices at various scales. Renormalization procedures are based on the requirement that certain physical quantities (such as the mass and charge of an electron) equal observed (experimental) values. That is, the experimental value of the physical quantity yields practical applications, but due to their empirical nature the observed measurement represents areas of quantum field theory that require deeper derivation from theoretical bases. Renormalization was first developed in quantum electrodynamics (QED) to make sense of infinite integrals in perturbation theory. Initially viewed as a suspect provisional procedure even by some of its originators, renormalization eventually was embraced as an important and self-consistent actual mechanism of scale physics in several fields of physics and mathematics. Despite his later skepticism, it was Paul Dirac who pioneered renormalization. Today, the point of view has shifted: on the basis of the breakthrough renormalization group insights of Nikolay Bogolyubov and Kenneth Wilson, the focus is on variation of physical quantities across contiguous scales, while distant scales are related to each other through "effective" descriptions. All scales are linked in a broadly systematic way, and the actual physics pertinent to each is extracted with the suitable specific computational techniques appropriate for each. Wilson clarified which variables of a system are crucial and which are redundant. Renormalization is distinct from regularization, another technique to control infinities by assuming the existence of new unknown physics at new scales. 
Self-interactions in classical physics
The problem of infinities first arose in the classical electrodynamics of point particles in the 19th and early 20th century. The mass of a charged particle should include the mass–energy in its electrostatic field (electromagnetic mass). Assume that the particle is a charged spherical shell of radius r_e. The mass–energy in the field is
m_em = (1/2) ∫ E² dV = ∫ from r_e to ∞ of (1/2) (q/(4πr²))² 4πr² dr = q²/(8π r_e)
(in rationalised units with c = 1), which becomes infinite as r_e → 0. This implies that the point particle would have infinite inertia and thus cannot be accelerated. Incidentally, the value of r_e that makes m_em equal to the electron mass is called the classical electron radius, which (setting q = e, restoring factors of c and ε₀, and up to a numerical factor of order one) turns out to be
r_e = e²/(4πε₀ m_e c²) = α ħ/(m_e c) ≈ 2.8 × 10⁻¹⁵ m,
where α ≈ 1/137 is the fine-structure constant, and ħ/(m_e c) is the reduced Compton wavelength of the electron. Renormalization: The total effective mass of a spherical charged particle includes the actual bare mass of the spherical shell (in addition to the mass mentioned above associated with its electric field). If the shell's bare mass is allowed to be negative, it might be possible to take a consistent point limit. This was called renormalization, and Lorentz and Abraham attempted to develop a classical theory of the electron this way. This early work was the inspiration for later attempts at regularization and renormalization in quantum field theory. (See also regularization (physics) for an alternative way to remove infinities from this classical problem, assuming new physics exists at small scales.) When calculating the electromagnetic interactions of charged particles, it is tempting to ignore the back-reaction of a particle's own field on itself. (Analogous to the back-EMF of circuit analysis.) But this back-reaction is necessary to explain the friction on charged particles when they emit radiation. If the electron is assumed to be a point, the value of the back-reaction diverges, for the same reason that the mass diverges, because the field is inverse-square. The Abraham–Lorentz theory had a noncausal "pre-acceleration". Sometimes an electron would start moving before the force is applied. This is a sign that the point limit is inconsistent. The trouble was worse in classical field theory than in quantum field theory, because in quantum field theory a charged particle experiences Zitterbewegung due to interference with virtual particle–antiparticle pairs, thus effectively smearing out the charge over a region comparable to the Compton wavelength. In quantum electrodynamics at small coupling, the electromagnetic mass only diverges as the logarithm of the radius of the particle.
Divergences in quantum electrodynamics
When developing quantum electrodynamics in the 1930s, Max Born, Werner Heisenberg, Pascual Jordan, and Paul Dirac discovered that in perturbative corrections many integrals were divergent (see The problem of infinities). One way of describing the perturbation theory corrections' divergences was discovered in 1947–49 by Hans Kramers, Hans Bethe, Julian Schwinger, Richard Feynman, and Shin'ichirō Tomonaga, and systematized by Freeman Dyson in 1949. The divergences appear in radiative corrections involving Feynman diagrams with closed loops of virtual particles in them. While virtual particles obey conservation of energy and momentum, they can have any energy and momentum, even one that is not allowed by the relativistic energy–momentum relation for the observed mass of that particle (that is, the squared four-momentum E² − p²c² need not equal m²c⁴ for the observed mass m; for a photon, for example, it can be nonzero).
Such a particle is called off-shell. When there is a loop, the momentum of the particles involved in the loop is not uniquely determined by the energies and momenta of incoming and outgoing particles. A variation in the energy of one particle in the loop can be balanced by an equal and opposite change in the energy of another particle in the loop, without affecting the incoming and outgoing particles. Thus many variations are possible. So to find the amplitude for the loop process, one must integrate over all possible combinations of energy and momentum that could travel around the loop. These integrals are often divergent, that is, they give infinite answers. The divergences that are significant are the "ultraviolet" (UV) ones. An ultraviolet divergence can be described as one that comes from the region in the integral where all particles in the loop have large energies and momenta, very short wavelengths and high-frequencies fluctuations of the fields, in the path integral for the field, very short proper-time between particle emission and absorption, if the loop is thought of as a sum over particle paths. So these divergences are short-distance, short-time phenomena. Shown in the pictures at the right margin, there are exactly three one-loop divergent loop diagrams in quantum electrodynamics: (a) A photon creates a virtual electron–positron pair, which then annihilates. This is a vacuum polarization diagram. (b) An electron quickly emits and reabsorbs a virtual photon, called a self-energy. (c) An electron emits a photon, emits a second photon, and reabsorbs the first. This process is shown in the section below in figure 2, and it is called a vertex renormalization. The Feynman diagram for this is also called a “penguin diagram” due to its shape remotely resembling a penguin. The three divergences correspond to the three parameters in the theory under consideration: The field normalization Z. The mass of the electron. The charge of the electron. The second class of divergence called an infrared divergence, is due to massless particles, like the photon. Every process involving charged particles emits infinitely many coherent photons of infinite wavelength, and the amplitude for emitting any finite number of photons is zero. For photons, these divergences are well understood. For example, at the 1-loop order, the vertex function has both ultraviolet and infrared divergences. In contrast to the ultraviolet divergence, the infrared divergence does not require the renormalization of a parameter in the theory involved. The infrared divergence of the vertex diagram is removed by including a diagram similar to the vertex diagram with the following important difference: the photon connecting the two legs of the electron is cut and replaced by two on-shell (i.e. real) photons whose wavelengths tend to infinity; this diagram is equivalent to the bremsstrahlung process. This additional diagram must be included because there is no physical way to distinguish a zero-energy photon flowing through a loop as in the vertex diagram and zero-energy photons emitted through bremsstrahlung. From a mathematical point of view, the IR divergences can be regularized by assuming fractional differentiation w.r.t. a parameter, for example: is well defined at but is UV divergent; if we take the -th fractional derivative with respect to , we obtain the IR divergence so we can cure IR divergences by turning them into UV divergences. 
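To indicate schematically what such ultraviolet divergences look like, the following is an added illustration rather than a formula from the article: with a hard momentum cutoff Λ imposed by hand, a one-loop integrand falling off like 1/(q²)² at large Euclidean loop momentum grows only logarithmically with Λ, while one falling off like 1/q² grows quadratically.

```latex
% Schematic behaviour of a UV-divergent one-loop integral with a momentum cutoff
% \Lambda (illustration only; numerical factors and signs suppressed):
\int^{\Lambda} \frac{\mathrm{d}^4 q}{(2\pi)^4}\,\frac{1}{(q^2)^2}
  \;\sim\; \int^{\Lambda} \frac{q^3\,\mathrm{d}q}{q^4}
  \;\sim\; \ln\Lambda ,
\qquad
\int^{\Lambda} \frac{\mathrm{d}^4 q}{(2\pi)^4}\,\frac{1}{q^2}
  \;\sim\; \int^{\Lambda} q\,\mathrm{d}q
  \;\sim\; \Lambda^{2}.
% Both grow without bound as \Lambda \to \infty; how fast depends on the powers of
% loop momentum appearing in the particular diagram.
```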
A loop divergence The diagram in Figure 2 shows one of the several one-loop contributions to electron–electron scattering in QED. The electron on the left side of the diagram, represented by the solid line, starts out with 4-momentum and ends up with 4-momentum . It emits a virtual photon carrying to transfer energy and momentum to the other electron. But in this diagram, before that happens, it emits another virtual photon carrying 4-momentum , and it reabsorbs this one after emitting the other virtual photon. Energy and momentum conservation do not determine the 4-momentum uniquely, so all possibilities contribute equally and we must integrate. This diagram's amplitude ends up with, among other things, a factor from the loop of The various factors in this expression are gamma matrices as in the covariant formulation of the Dirac equation; they have to do with the spin of the electron. The factors of are the electric coupling constant, while the provide a heuristic definition of the contour of integration around the poles in the space of momenta. The important part for our purposes is the dependency on of the three big factors in the integrand, which are from the propagators of the two electron lines and the photon line in the loop. This has a piece with two powers of on top that dominates at large values of (Pokorski 1987, p. 122): This integral is divergent and infinite, unless we cut it off at finite energy and momentum in some way. Similar loop divergences occur in other quantum field theories. Renormalized and bare quantities The solution was to realize that the quantities initially appearing in the theory's formulae (such as the formula for the Lagrangian), representing such things as the electron's electric charge and mass, as well as the normalizations of the quantum fields themselves, did not actually correspond to the physical constants measured in the laboratory. As written, they were bare quantities that did not take into account the contribution of virtual-particle loop effects to the physical constants themselves. Among other things, these effects would include the quantum counterpart of the electromagnetic back-reaction that so vexed classical theorists of electromagnetism. In general, these effects would be just as divergent as the amplitudes under consideration in the first place; so finite measured quantities would, in general, imply divergent bare quantities. To make contact with reality, then, the formulae would have to be rewritten in terms of measurable, renormalized quantities. The charge of the electron, say, would be defined in terms of a quantity measured at a specific kinematic renormalization point or subtraction point (which will generally have a characteristic energy, called the renormalization scale or simply the energy scale). The parts of the Lagrangian left over, involving the remaining portions of the bare quantities, could then be reinterpreted as counterterms, involved in divergent diagrams exactly canceling out the troublesome divergences for other diagrams. Renormalization in QED For example, in the Lagrangian of QED the fields and coupling constant are really bare quantities, hence the subscript above. Conventionally the bare quantities are written so that the corresponding Lagrangian terms are multiples of the renormalized ones: Gauge invariance, via a Ward–Takahashi identity, turns out to imply that we can renormalize the two terms of the covariant derivative piece together (Pokorski 1987, p. 115), which is what happened to ; it is the same as . 
A term in this Lagrangian, for example, the electron–photon interaction pictured in Figure 1, can then be written The physical constant , the electron's charge, can then be defined in terms of some specific experiment: we set the renormalization scale equal to the energy characteristic of this experiment, and the first term gives the interaction we see in the laboratory (up to small, finite corrections from loop diagrams, providing such exotica as the high-order corrections to the magnetic moment). The rest is the counterterm. If the theory is renormalizable (see below for more on this), as it is in QED, the divergent parts of loop diagrams can all be decomposed into pieces with three or fewer legs, with an algebraic form that can be canceled out by the second term (or by the similar counterterms that come from and ). The diagram with the counterterm's interaction vertex placed as in Figure 3 cancels out the divergence from the loop in Figure 2. Historically, the splitting of the "bare terms" into the original terms and counterterms came before the renormalization group insight due to Kenneth Wilson. According to such renormalization group insights, detailed in the next section, this splitting is unnatural and actually unphysical, as all scales of the problem enter in continuous systematic ways. Running couplings To minimize the contribution of loop diagrams to a given calculation (and therefore make it easier to extract results), one chooses a renormalization point close to the energies and momenta exchanged in the interaction. However, the renormalization point is not itself a physical quantity: the physical predictions of the theory, calculated to all orders, should in principle be independent of the choice of renormalization point, as long as it is within the domain of application of the theory. Changes in renormalization scale will simply affect how much of a result comes from Feynman diagrams without loops, and how much comes from the remaining finite parts of loop diagrams. One can exploit this fact to calculate the effective variation of physical constants with changes in scale. This variation is encoded by beta-functions, and the general theory of this kind of scale-dependence is known as the renormalization group. Colloquially, particle physicists often speak of certain physical "constants" as varying with the energy of interaction, though in fact, it is the renormalization scale that is the independent quantity. This running does, however, provide a convenient means of describing changes in the behavior of a field theory under changes in the energies involved in an interaction. For example, since the coupling in quantum chromodynamics becomes small at large energy scales, the theory behaves more like a free theory as the energy exchanged in an interaction becomes large – a phenomenon known as asymptotic freedom. Choosing an increasing energy scale and using the renormalization group makes this clear from simple Feynman diagrams; were this not done, the prediction would be the same, but would arise from complicated high-order cancellations. For example, is ill-defined. To eliminate the divergence, simply change lower limit of integral into and : Making sure , then Regularization Since the quantity is ill-defined, in order to make this notion of canceling divergences precise, the divergences first have to be tamed mathematically using the theory of limits, in a process known as regularization (Weinberg, 1995). 
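The toy integral referred to just above appears to have lost its explicit form in this copy; what follows is a reconstruction of the standard illustration rather than a quotation of the original, with a, b and the lower cutoffs ε_a, ε_b used as the conventional symbols.

```latex
% Reconstruction of the standard toy example of a divergence cured by a cutoff:
I \;=\; \int_{0}^{a}\frac{\mathrm{d}z}{z} \;-\; \int_{0}^{b}\frac{\mathrm{d}z}{z}
\qquad\text{is ill-defined, since each term diverges at } z = 0.
% Regulating both integrals with small lower cutoffs \varepsilon_a and \varepsilon_b:
I \;=\; \ln\frac{a}{\varepsilon_a} \;-\; \ln\frac{b}{\varepsilon_b}
  \;=\; \ln\frac{a}{b} \;+\; \ln\frac{\varepsilon_b}{\varepsilon_a}
  \;\longrightarrow\; \ln\frac{a}{b}
\qquad\text{as } \varepsilon_b/\varepsilon_a \to 1 .
```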
An essentially arbitrary modification to the loop integrands, or regulator, can make them drop off faster at high energies and momenta, in such a manner that the integrals converge. A regulator has a characteristic energy scale known as the cutoff; taking this cutoff to infinity (or, equivalently, the corresponding length/time scale to zero) recovers the original integrals. With the regulator in place, and a finite value for the cutoff, divergent terms in the integrals then turn into finite but cutoff-dependent terms. After canceling out these terms with the contributions from cutoff-dependent counterterms, the cutoff is taken to infinity and finite physical results recovered. If physics on scales we can measure is independent of what happens at the very shortest distance and time scales, then it should be possible to get cutoff-independent results for calculations. Many different types of regulator are used in quantum field theory calculations, each with its advantages and disadvantages. One of the most popular in modern use is dimensional regularization, invented by Gerardus 't Hooft and Martinus J. G. Veltman, which tames the integrals by carrying them into a space with a fictitious fractional number of dimensions. Another is Pauli–Villars regularization, which adds fictitious particles to the theory with very large masses, such that loop integrands involving the massive particles cancel out the existing loops at large momenta. Yet another regularization scheme is the lattice regularization, introduced by Kenneth Wilson, which pretends that hyper-cubical lattice constructs our spacetime with fixed grid size. This size is a natural cutoff for the maximal momentum that a particle could possess when propagating on the lattice. And after doing a calculation on several lattices with different grid size, the physical result is extrapolated to grid size 0, or our natural universe. This presupposes the existence of a scaling limit. A rigorous mathematical approach to renormalization theory is the so-called causal perturbation theory, where ultraviolet divergences are avoided from the start in calculations by performing well-defined mathematical operations only within the framework of distribution theory. In this approach, divergences are replaced by ambiguity: corresponding to a divergent diagram is a term which now has a finite, but undetermined, coefficient. Other principles, such as gauge symmetry, must then be used to reduce or eliminate the ambiguity. Attitudes and interpretation The early formulators of QED and other quantum field theories were, as a rule, dissatisfied with this state of affairs. It seemed illegitimate to do something tantamount to subtracting infinities from infinities to get finite answers. Freeman Dyson argued that these infinities are of a basic nature and cannot be eliminated by any formal mathematical procedures, such as the renormalization method. Dirac's criticism was the most persistent. As late as 1975, he was saying: Most physicists are very satisfied with the situation. They say: 'Quantum electrodynamics is a good theory and we do not have to worry about it any more.' I must say that I am very dissatisfied with the situation because this so-called 'good theory' does involve neglecting infinities which appear in its equations, ignoring them in an arbitrary way. This is just not sensible mathematics. Sensible mathematics involves disregarding a quantity when it is small – not neglecting it just because it is infinitely great and you do not want it! 
Another important critic was Feynman. Despite his crucial role in the development of quantum electrodynamics, he wrote the following in 1985: The shell game that we play to find n and j is technically called 'renormalization'. But no matter how clever the word, it is still what I would call a dippy process! Having to resort to such hocus-pocus has prevented us from proving that the theory of quantum electrodynamics is mathematically self-consistent. It's surprising that the theory still hasn't been proved self-consistent one way or the other by now; I suspect that renormalization is not mathematically legitimate. Feynman was concerned that all field theories known in the 1960s had the property that the interactions become infinitely strong at short enough distance scales. This property called a Landau pole, made it plausible that quantum field theories were all inconsistent. In 1974, Gross, Politzer and Wilczek showed that another quantum field theory, quantum chromodynamics, does not have a Landau pole. Feynman, along with most others, accepted that QCD was a fully consistent theory. The general unease was almost universal in texts up to the 1970s and 1980s. Beginning in the 1970s, however, inspired by work on the renormalization group and effective field theory, and despite the fact that Dirac and various others—all of whom belonged to the older generation—never withdrew their criticisms, attitudes began to change, especially among younger theorists. Kenneth G. Wilson and others demonstrated that the renormalization group is useful in statistical field theory applied to condensed matter physics, where it provides important insights into the behavior of phase transitions. In condensed matter physics, a physical short-distance regulator exists: matter ceases to be continuous on the scale of atoms. Short-distance divergences in condensed matter physics do not present a philosophical problem since the field theory is only an effective, smoothed-out representation of the behavior of matter anyway; there are no infinities since the cutoff is always finite, and it makes perfect sense that the bare quantities are cutoff-dependent. If QFT holds all the way down past the Planck length (where it might yield to string theory, causal set theory or something different), then there may be no real problem with short-distance divergences in particle physics either; all field theories could simply be effective field theories. In a sense, this approach echoes the older attitude that the divergences in QFT speak of human ignorance about the workings of nature, but also acknowledges that this ignorance can be quantified and that the resulting effective theories remain useful. Be that as it may, Salam's remark in 1972 seems still relevant Field-theoretic infinities – first encountered in Lorentz's computation of electron self-mass – have persisted in classical electrodynamics for seventy and in quantum electrodynamics for some thirty-five years. These long years of frustration have left in the subject a curious affection for the infinities and a passionate belief that they are an inevitable part of nature; so much so that even the suggestion of a hope that they may, after all, be circumvented — and finite values for the renormalization constants computed – is considered irrational. Compare Russell's postscript to the third volume of his autobiography The Final Years, 1944–1969 (George Allen and Unwin, Ltd., London 1969), p. 
221: In the modern world, if communities are unhappy, it is often because they have ignorances, habits, beliefs, and passions, which are dearer to them than happiness or even life. I find many men in our dangerous age who seem to be in love with misery and death, and who grow angry when hopes are suggested to them. They think hope is irrational and that, in sitting down to lazy despair, they are merely facing facts. In QFT, the value of a physical constant, in general, depends on the scale that one chooses as the renormalization point, and it becomes very interesting to examine the renormalization group running of physical constants under changes in the energy scale. The coupling constants in the Standard Model of particle physics vary in different ways with increasing energy scale: the coupling of quantum chromodynamics and the weak isospin coupling of the electroweak force tend to decrease, and the weak hypercharge coupling of the electroweak force tends to increase. At the colossal energy scale of 1015 GeV (far beyond the reach of our current particle accelerators), they all become approximately the same size (Grotz and Klapdor 1990, p. 254), a major motivation for speculations about grand unified theory. Instead of being only a worrisome problem, renormalization has become an important theoretical tool for studying the behavior of field theories in different regimes. If a theory featuring renormalization (e.g. QED) can only be sensibly interpreted as an effective field theory, i.e. as an approximation reflecting human ignorance about the workings of nature, then the problem remains of discovering a more accurate theory that does not have these renormalization problems. As Lewis Ryder has put it, "In the Quantum Theory, these [classical] divergences do not disappear; on the contrary, they appear to get worse. And despite the comparative success of renormalisation theory, the feeling remains that there ought to be a more satisfactory way of doing things." Renormalizability From this philosophical reassessment, a new concept follows naturally: the notion of renormalizability. Not all theories lend themselves to renormalization in the manner described above, with a finite supply of counterterms and all quantities becoming cutoff-independent at the end of the calculation. If the Lagrangian contains combinations of field operators of high enough dimension in energy units, the counterterms required to cancel all divergences proliferate to infinite number, and, at first glance, the theory would seem to gain an infinite number of free parameters and therefore lose all predictive power, becoming scientifically worthless. Such theories are called nonrenormalizable. The Standard Model of particle physics contains only renormalizable operators, but the interactions of general relativity become nonrenormalizable operators if one attempts to construct a field theory of quantum gravity in the most straightforward manner (treating the metric in the Einstein–Hilbert Lagrangian as a perturbation about the Minkowski metric), suggesting that perturbation theory is not satisfactory in application to quantum gravity. However, in an effective field theory, "renormalizability" is, strictly speaking, a misnomer. In nonrenormalizable effective field theory, terms in the Lagrangian do multiply to infinity, but have coefficients suppressed by ever-more-extreme inverse powers of the energy cutoff. 
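As a concrete illustration of couplings varying with the energy scale, as described above, here is a sketch of a leading-order, QCD-like running coupling; the one-loop formula and the reference value α_s(M_Z) ≈ 0.118 are standard, but the fixed flavour number and the neglect of higher orders make the numbers indicative only.

```python
import math

# One-loop running of a QCD-like coupling:
#   1/alpha(Q) = 1/alpha(mu) + (b0 / (2*pi)) * ln(Q/mu),  with b0 = 11 - 2*nf/3.
def alpha_s(Q, alpha_ref=0.118, mu=91.19, nf=5):
    """Leading-order strong coupling at scale Q (GeV), referenced to alpha_s(M_Z)."""
    b0 = 11.0 - 2.0 * nf / 3.0
    return 1.0 / (1.0 / alpha_ref + b0 / (2.0 * math.pi) * math.log(Q / mu))

for Q in (5.0, 91.19, 1000.0, 1e15):
    print(f"Q = {Q:10.3g} GeV   alpha_s ≈ {alpha_s(Q):.4f}")
# The coupling shrinks as the energy scale grows (asymptotic freedom). Keeping
# nf fixed at 5 is a simplification: the number of active flavours really changes
# with the scale, and higher-loop corrections shift the numbers.
```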
If the cutoff is a real, physical quantity—that is, if the theory is only an effective description of physics up to some maximum energy or minimum distance scale—then these additional terms could represent real physical interactions. Assuming that the dimensionless constants in the theory do not get too large, one can group calculations by inverse powers of the cutoff, and extract approximate predictions to finite order in the cutoff that still have a finite number of free parameters. It can even be useful to renormalize these "nonrenormalizable" interactions. Nonrenormalizable interactions in effective field theories rapidly become weaker as the energy scale becomes much smaller than the cutoff. The classic example is the Fermi theory of the weak nuclear force, a nonrenormalizable effective theory whose cutoff is comparable to the mass of the W particle. This fact may also provide a possible explanation for why almost all of the particle interactions we see are describable by renormalizable theories. It may be that any others that may exist at the GUT or Planck scale simply become too weak to detect in the realm we can observe, with one exception: gravity, whose exceedingly weak interaction is magnified by the presence of the enormous masses of stars and planets. Renormalization schemes In actual calculations, the counterterms introduced to cancel the divergences in Feynman diagram calculations beyond tree level must be fixed using a set of renormalisation conditions. The common renormalization schemes in use include: Minimal subtraction (MS) scheme and the related modified minimal subtraction (MS-bar) scheme On-shell scheme Besides, there exists a "natural" definition of the renormalized coupling (combined with the photon propagator) as a propagator of dual free bosons, which does not explicitly require introducing the counterterms. In statistical physics History A deeper understanding of the physical meaning and generalization of the renormalization process, which goes beyond the dilatation group of conventional renormalizable theories, came from condensed matter physics. Leo P. Kadanoff's paper in 1966 proposed the "block-spin" renormalization group. The blocking idea is a way to define the components of the theory at large distances as aggregates of components at shorter distances. This approach covered the conceptual point and was given full computational substance in the extensive important contributions of Kenneth Wilson. The power of Wilson's ideas was demonstrated by a constructive iterative renormalization solution of a long-standing problem, the Kondo problem, in 1974, as well as the preceding seminal developments of his new method in the theory of second-order phase transitions and critical phenomena in 1971. He was awarded the Nobel prize for these decisive contributions in 1982. Principles In more technical terms, let us assume that we have a theory described by a certain function of the state variables and a certain set of coupling constants . This function may be a partition function, an action, a Hamiltonian, etc. It must contain the whole description of the physics of the system. Now we consider a certain blocking transformation of the state variables , the number of must be lower than the number of . Now let us try to rewrite the function only in terms of the . If this is achievable by a certain change in the parameters, , then the theory is said to be renormalizable. The possible macroscopic states of the system, at a large scale, are given by this set of fixed points. 
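The blocking transformation described above can be carried out exactly in the simplest setting, the one-dimensional Ising chain, where tracing out every second spin gives a closed recursion for the coupling; the short sketch below, added for illustration, iterates that standard recursion and shows the flow to the weak-coupling fixed point.

```python
import math

def decimate(K):
    """Exact 1D Ising decimation: trace out every other spin, K -> K' = 0.5*ln(cosh(2K))."""
    return 0.5 * math.log(math.cosh(2.0 * K))

K = 2.0   # a fairly strong starting coupling J/kT (arbitrary choice)
for step in range(8):
    print(f"step {step}: K = {K:.6f}")
    K = decimate(K)
# K shrinks under repeated blocking and flows towards the trivial fixed point K* = 0,
# the renormalization-group way of saying that the 1D Ising chain has no
# finite-temperature phase transition (K* = infinity is the other, unstable, fixed point).
```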
Renormalization group fixed points The most important information in the RG flow is its fixed points. A fixed point is defined by the vanishing of the beta function associated to the flow. Then, fixed points of the renormalization group are by definition scale invariant. In many cases of physical interest scale invariance enlarges to conformal invariance. One then has a conformal field theory at the fixed point. The ability of several theories to flow to the same fixed point leads to universality. If these fixed points correspond to free field theory, the theory is said to exhibit quantum triviality. Numerous fixed points appear in the study of lattice Higgs theories, but the nature of the quantum field theories associated with these remains an open question. See also History of quantum field theory Quantum triviality Zeno's paradoxes Nonoblique correction References Further reading General introduction DeDeo, Simon; Introduction to Renormalization (2017). Santa Fe Institute Complexity Explorer MOOC. Renormalization from a complex systems point of view, including Markov Chains, Cellular Automata, the real space Ising model, the Krohn-Rhodes Theorem, QED, and rate distortion theory. Baez, John; Renormalization Made Easy, (2005). A qualitative introduction to the subject. Blechman, Andrew E.; Renormalization: Our Greatly Misunderstood Friend, (2002). Summary of a lecture; has more information about specific regularization and divergence-subtraction schemes. Shirkov, Dmitry; Fifty Years of the Renormalization Group, C.E.R.N. Courrier 41(7) (2001). Full text available at : I.O.P Magazines. E. Elizalde; Zeta regularization techniques with Applications. Mainly: quantum field theory N. N. Bogoliubov, D. V. Shirkov (1959): The Theory of Quantized Fields. New York, Interscience. The first text-book on the renormalization group theory. Ryder, Lewis H.; Quantum Field Theory (Cambridge University Press, 1985), Highly readable textbook, certainly the best introduction to relativistic Q.F.T. for particle physics. Zee, Anthony; Quantum Field Theory in a Nutshell, Princeton University Press (2003) . Another excellent textbook on Q.F.T. Weinberg, Steven; The Quantum Theory of Fields (3 volumes) Cambridge University Press (1995). A monumental treatise on Q.F.T. written by a leading expert, Nobel laureate 1979. Pokorski, Stefan; Gauge Field Theories, Cambridge University Press (1987) . 't Hooft, Gerard; The Glorious Days of Physics – Renormalization of Gauge theories, lecture given at Erice (August/September 1998) by the Nobel laureate 1999 . Full text available at: hep-th/9812203. Rivasseau, Vincent; An introduction to renormalization, Poincaré Seminar (Paris, Oct. 12, 2002), published in : Duplantier, Bertrand; Rivasseau, Vincent (Eds.); Poincaré Seminar 2002, Progress in Mathematical Physics 30, Birkhäuser (2003) . Full text available in PostScript. Rivasseau, Vincent; From perturbative to constructive renormalization, Princeton University Press (1991) . Full text available in PostScript and in PDF (draft version). Iagolnitzer, Daniel & Magnen, J.; Renormalization group analysis, Encyclopaedia of Mathematics, Kluwer Academic Publisher (1996). Full text available in PostScript and pdf here. Scharf, Günter; Finite quantum electrodynamics: The causal approach, Springer Verlag Berlin Heidelberg New York (1995) . A. S. Švarc (Albert Schwarz), Математические основы квантовой теории поля, (Mathematical aspects of quantum field theory), Atomizdat, Moscow, 1975. 368 pp. Mainly: statistical physics A. N. 
Vasil'ev; The Field Theoretic Renormalization Group in Critical Behavior Theory and Stochastic Dynamics (Routledge Chapman & Hall 2004); Nigel Goldenfeld; Lectures on Phase Transitions and the Renormalization Group, Frontiers in Physics 85, Westview Press (June, 1992) . Covering the elementary aspects of the physics of phases transitions and the renormalization group, this popular book emphasizes understanding and clarity rather than technical manipulations. Zinn-Justin, Jean; Quantum Field Theory and Critical Phenomena, Oxford University Press (4th edition – 2002) . A masterpiece on applications of renormalization methods to the calculation of critical exponents in statistical mechanics, following Wilson's ideas (Kenneth Wilson was Nobel laureate 1982). Zinn-Justin, Jean; Phase Transitions & Renormalization Group: from Theory to Numbers, Poincaré Seminar (Paris, Oct. 12, 2002), published in : Duplantier, Bertrand; Rivasseau, Vincent (Eds.); Poincaré Seminar 2002, Progress in Mathematical Physics 30, Birkhäuser (2003) . Full text available in PostScript . Domb, Cyril; The Critical Point: A Historical Introduction to the Modern Theory of Critical Phenomena, CRC Press (March, 1996) . Brown, Laurie M. (Ed.); Renormalization: From Lorentz to Landau (and Beyond), Springer-Verlag (New York-1993) . Cardy, John; Scaling and Renormalization in Statistical Physics, Cambridge University Press (1996) . Miscellaneous Shirkov, Dmitry; The Bogoliubov Renormalization Group, JINR Communication E2-96-15 (1996). Full text available at: hep-th/9602024 Zinn-Justin, Jean; Renormalization and renormalization group: From the discovery of UV divergences to the concept of effective field theories, in: de Witt-Morette C., Zuber J.-B. (eds), Proceedings of the NATO ASI on Quantum Field Theory: Perspective and Prospective, June 15–26, 1998, Les Houches, France, Kluwer Academic Publishers, NATO ASI Series C 530, 375–388 (1999). Full text available in PostScript. Connes, Alain; Symétries Galoisiennes & Renormalisation, Poincaré Seminar (Paris, Oct. 12, 2002), published in : Duplantier, Bertrand; Rivasseau, Vincent (Eds.); Poincaré Seminar 2002, Progress in Mathematical Physics 30, Birkhäuser (2003) . French mathematician Alain Connes (Fields medallist 1982) describe the mathematical underlying structure (the Hopf algebra) of renormalization, and its link to the Riemann-Hilbert problem. Full text (in French) available at . External links Quantum field theory Renormalization group Mathematical physics
Therblig
Therbligs are elemental motions used in the study of workplace motion economy. A workplace task is analyzed by recording each of the therblig units for a process, with the results used for optimization of manual labour by eliminating unneeded movements. Eighteen therbligs have been defined. The word therblig was the creation of Frank Bunker Gilbreth and Lillian Moller Gilbreth, American industrial psychologists who invented the field of time and motion study. It is a reversal of the name Gilbreth, with 'th' transposed. Elements A basic motion element is one of a set of fundamental motions used by a worker to perform a manual operation or task. The set consists of 18 elements, each describing one activity. Transport empty [unloaded] (TE): receiving an object with an empty hand. (Now called "Reach".) Grasp (G): grasping an object with the active hand. Transport loaded (TL): moving an object using a hand motion. Hold (H): holding an object. Release load (RL): releasing control of an object. Pre-position (PP): positioning and/or orienting an object for the next operation and relative to an approximation location. Position (P): positioning and/or orienting an object in the defined location. Use (U): manipulating a tool in the intended way during the course of working. Assemble (A): joining two parts together. Disassemble (DA): separating multiple components that were joined. Search (Sh): attempting to find an object using the eyes and hands. Select (St): Choosing among several objects in a group. Plan (Pn): deciding on a course of action. Inspect (I): determining the quality or the characteristics of an object using the eyes and/or other senses. Unavoidable delay (UD): waiting due to factors beyond the worker's control and included in the work cycle. Avoidable delay (AD): pausing for reasons under the worker's control that is not part of the regular work cycle. Rest (R): resting to overcome a fatigue, consisting of a pause in the motions of the hands and/or body during the work cycles or between them. Find (F): A momentary mental reaction at the end of the Search cycle. Seldom used. Effective and ineffective basic motion elements Effective: Reach Move Grasp Release Load Use Assemble Disassemble Pre-Position Ineffective: Hold Rest Position Search Select Plan Unavoidable Delay Avoidable Delay Inspect Sample usage Here is an example of how therbligs can be used to analyze motion: History In an article published in 1915, Frank Gilbreth wrote of 16 elements: "The elements of a cycle of decisions and motions, either running partly or wholly concurrently with other elements in the same or other cycles, consist of the following, arranged in varying sequences: 1. Search, 2. Find, 3. Select, 4. Grasp, 5. Position, 6. Assemble, 7. Use, 8. Dissemble, or take apart, 9. Inspect, 10. Transport, loaded, 11. Pre-position for next operation, 12. Release load, 13. Transport, empty, 14. Wait (unavoidable delay), 15. Wait (avoidable delay), 16. Rest (for overcoming fatigue)." Notes References * External links The Gilbreth Network: Therbligs Time and motion study
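The kind of analysis referred to in the sample usage section above can be sketched in a few lines: record the therbligs observed for a task, then separate effective from ineffective elements to see where redesign could help. The task sequence below is invented purely for illustration; the codes follow the abbreviations listed earlier.

```python
# Hypothetical therblig record for a simple "pick up a bolt and fasten it" task
# (the sequence is invented for illustration; codes follow the element list above).
EFFECTIVE = {"TE", "TL", "G", "RL", "U", "A", "DA", "PP"}
INEFFECTIVE = {"H", "R", "P", "Sh", "St", "Pn", "UD", "AD", "I"}

observed = ["Sh", "St", "TE", "G", "TL", "P", "A", "U", "RL", "I"]

effective = [t for t in observed if t in EFFECTIVE]
ineffective = [t for t in observed if t in INEFFECTIVE]

print("effective  :", effective)
print("ineffective:", ineffective)
print(f"{len(ineffective)} of {len(observed)} recorded motions are candidates for elimination or redesign")
```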
Electric charge
Electric charge (symbol q, sometimes Q) is the physical property of matter that causes it to experience a force when placed in an electromagnetic field. Electric charge can be positive or negative. Like charges repel each other and unlike charges attract each other. An object with no net charge is referred to as electrically neutral. Early knowledge of how charged substances interact is now called classical electrodynamics, and is still accurate for problems that do not require consideration of quantum effects. Electric charge is a conserved property: the net charge of an isolated system, the quantity of positive charge minus the amount of negative charge, cannot change. Electric charge is carried by subatomic particles. In ordinary matter, negative charge is carried by electrons, and positive charge is carried by the protons in the nuclei of atoms. If there are more electrons than protons in a piece of matter, it will have a negative charge, if there are fewer it will have a positive charge, and if there are equal numbers it will be neutral. Charge is quantized: it comes in integer multiples of individual small units called the elementary charge, e, about which is the smallest charge that can exist freely. Particles called quarks have smaller charges, multiples of e, but they are found only combined in particles that have a charge that is an integer multiple of e. In the Standard Model, charge is an absolutely conserved quantum number. The proton has a charge of +e, and the electron has a charge of −e. Today, a negative charge is defined as the charge carried by an electron and a positive charge is that carried by a proton. Before these particles were discovered, a positive charge was defined by Benjamin Franklin as the charge acquired by a glass rod when it is rubbed with a silk cloth. Electric charges produce electric fields. A moving charge also produces a magnetic field. The interaction of electric charges with an electromagnetic field (a combination of an electric and a magnetic field) is the source of the electromagnetic (or Lorentz) force, which is one of the four fundamental interactions in physics. The study of photon-mediated interactions among charged particles is called quantum electrodynamics. The SI derived unit of electric charge is the coulomb (C) named after French physicist Charles-Augustin de Coulomb. In electrical engineering it is also common to use the ampere-hour (A⋅h). In physics and chemistry it is common to use the elementary charge (e) as a unit. Chemistry also uses the Faraday constant, which is the charge of one mole of elementary charges. Overview Charge is the fundamental property of matter that exhibits electrostatic attraction or repulsion in the presence of other matter with charge. Electric charge is a characteristic property of many subatomic particles. The charges of free-standing particles are integer multiples of the elementary charge e; we say that electric charge is quantized. Michael Faraday, in his electrolysis experiments, was the first to note the discrete nature of electric charge. Robert Millikan's oil drop experiment demonstrated this fact directly, and measured the elementary charge. It has been discovered that one type of particle, quarks, have fractional charges of either − or +, but it is believed they always occur in multiples of integral charge; free-standing quarks have never been observed. By convention, the charge of an electron is negative, −e, while that of a proton is positive, +e. 
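Because free charge comes only in integer multiples of e, any measured macroscopic charge can be expressed as a whole number of elementary charges, which is essentially what Millikan's experiment demonstrated. A minimal sketch, assuming made-up "measured" values rather than real data:

```python
E_CHARGE = 1.602176634e-19  # elementary charge in coulombs (exact in the SI)

# Made-up "measured" droplet charges, each close to an integer multiple of e.
measured = [3.21e-19, 8.02e-19, 4.79e-19, 1.61e-18]

for q in measured:
    n = round(q / E_CHARGE)
    print(f"q = {q:.3e} C  ~  {n} * e  (residual {q - n * E_CHARGE:+.2e} C)")
```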
Charged particles whose charges have the same sign repel one another, and particles whose charges have different signs attract. Coulomb's law quantifies the electrostatic force between two particles by asserting that the force is proportional to the product of their charges, and inversely proportional to the square of the distance between them. The charge of an antiparticle equals that of the corresponding particle, but with opposite sign. The electric charge of a macroscopic object is the sum of the electric charges of the particles that it is made up of. This charge is often small, because matter is made of atoms, and atoms typically have equal numbers of protons and electrons, in which case their charges cancel out, yielding a net charge of zero, thus making the atom neutral. An ion is an atom (or group of atoms) that has lost one or more electrons, giving it a net positive charge (cation), or that has gained one or more electrons, giving it a net negative charge (anion). Monatomic ions are formed from single atoms, while polyatomic ions are formed from two or more atoms that have been bonded together, in each case yielding an ion with a positive or negative net charge. During the formation of macroscopic objects, constituent atoms and ions usually combine to form structures composed of neutral ionic compounds electrically bound to neutral atoms. Thus macroscopic objects tend toward being neutral overall, but macroscopic objects are rarely perfectly net neutral. Sometimes macroscopic objects contain ions distributed throughout the material, rigidly bound in place, giving an overall net positive or negative charge to the object. Also, macroscopic objects made of conductive elements can more or less easily (depending on the element) take on or give off electrons, and then maintain a net negative or positive charge indefinitely. When the net electric charge of an object is non-zero and motionless, the phenomenon is known as static electricity. This can easily be produced by rubbing two dissimilar materials together, such as rubbing amber with fur or glass with silk. In this way, non-conductive materials can be charged to a significant degree, either positively or negatively. Charge taken from one material is moved to the other material, leaving an opposite charge of the same magnitude behind. The law of conservation of charge always applies, giving the object from which a negative charge is taken a positive charge of the same magnitude, and vice versa. Even when an object's net charge is zero, the charge can be distributed non-uniformly in the object (e.g., due to an external electromagnetic field, or bound polar molecules). In such cases, the object is said to be polarized. The charge due to polarization is known as bound charge, while the charge on an object produced by electrons gained or lost from outside the object is called free charge. The motion of electrons in conductive metals in a specific direction is known as electric current. Unit The SI unit of quantity of electric charge is the coulomb (symbol: C). The coulomb is defined as the quantity of charge that passes through the cross section of an electrical conductor carrying one ampere for one second. This unit was proposed in 1946 and ratified in 1948. The lowercase symbol q is often used to denote a quantity of electric charge. The quantity of electric charge can be directly measured with an electrometer, or indirectly measured with a ballistic galvanometer. 
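Coulomb's law and the unit relationships mentioned above are easy to check numerically. The sketch below computes the electrostatic force between two small charges and converts one coulomb into elementary charges and ampere-hours; the particular charges and separation are arbitrary example values.

```python
K_E = 8.9875517923e9        # Coulomb constant, N*m^2/C^2
E_CHARGE = 1.602176634e-19  # elementary charge, C

def coulomb_force(q1_C: float, q2_C: float, r_m: float) -> float:
    """Magnitude of the electrostatic force between two point charges (N)."""
    return K_E * abs(q1_C * q2_C) / r_m**2

q1, q2, r = 2e-6, -3e-6, 0.05            # 2 uC and -3 uC, 5 cm apart (example values)
print(f"force = {coulomb_force(q1, q2, r):.2f} N (attractive, since the signs differ)")

q = 1.0                                   # one coulomb
print(f"1 C = {q / E_CHARGE:.3e} elementary charges")
print(f"1 C = {q / 3600:.6f} A*h")        # 1 A*h = 3600 C
```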
The elementary charge (the electric charge of the proton) is defined as a fundamental constant in the SI. The value for the elementary charge, when expressed in SI units, is exactly 1.602176634 × 10⁻¹⁹ C. After discovering the quantized character of charge, in 1891, George Stoney proposed the unit 'electron' for this fundamental unit of electrical charge. J. J. Thomson subsequently discovered the particle that we now call the electron in 1897. The unit is today referred to as the elementary charge, or simply denoted e, with the charge of an electron being −e. The charge of an isolated system should be a multiple of the elementary charge e, even if at large scales charge seems to behave as a continuous quantity. In some contexts it is meaningful to speak of fractions of an elementary charge; for example, in the fractional quantum Hall effect. The unit faraday is sometimes used in electrochemistry. One faraday is the magnitude of the charge of one mole of elementary charges, i.e. about 96,485 C. History From ancient times, people were familiar with four types of phenomena that today would all be explained using the concept of electric charge: (a) lightning, (b) the torpedo fish (or electric ray), (c) St Elmo's Fire, and (d) that amber rubbed with fur would attract small, light objects. The first account of the amber effect is often attributed to the ancient Greek mathematician Thales of Miletus, who lived from c. 624 to c. 546 BC, but there are doubts about whether Thales left any writings; his account of amber is known only from a report written in the early 200s AD. This account can be taken as evidence that the phenomenon was known since at least c. 600 BC, but Thales explained this phenomenon as evidence for inanimate objects having a soul. In other words, there was no indication of any conception of electric charge. More generally, the ancient Greeks did not understand the connections among these four kinds of phenomena. The Greeks observed that the charged amber buttons could attract light objects such as hair. They also found that if they rubbed the amber for long enough, they could even get an electric spark to jump, although there is also a claim that no mention of electric sparks appeared until the late 17th century. This property derives from the triboelectric effect. In the late 1100s, the substance jet, a compacted form of coal, was noted to have an amber effect, and in the middle of the 1500s, Girolamo Fracastoro discovered that diamond also showed this effect. Some efforts were made by Fracastoro and others, especially Gerolamo Cardano, to develop explanations for this phenomenon. In contrast to astronomy, mechanics, and optics, which had been studied quantitatively since antiquity, the start of ongoing qualitative and quantitative research into electrical phenomena can be marked with the publication of De Magnete by the English scientist William Gilbert in 1600. In this book, there was a small section where Gilbert returned to the amber effect (as he called it) in addressing many of the earlier theories, and coined the Neo-Latin word electrica (from ἤλεκτρον (ēlektron), the Greek word for amber). The Latin word was translated into English as electric. Gilbert is also credited with the term electrical, while the term electricity came later, first attributed to Sir Thomas Browne in his Pseudodoxia Epidemica from 1646. (For more linguistic details see Etymology of electricity.) Gilbert hypothesized that this amber effect could be explained by an effluvium (a small stream of particles that flows from the electric object, without diminishing its bulk or weight) that acts on other objects.
This idea of a material electrical effluvium was influential in the 17th and 18th centuries. It was a precursor to ideas developed in the 18th century about "electric fluid" (Dufay, Nollet, Franklin) and "electric charge". Around 1663 Otto von Guericke invented what was probably the first electrostatic generator, but he did not recognize it primarily as an electrical device and only conducted minimal electrical experiments with it. Other European pioneers were Robert Boyle, who in 1675 published the first book in English that was devoted solely to electrical phenomena. His work was largely a repetition of Gilbert's studies, but he also identified several more "electrics", and noted mutual attraction between two bodies. In 1729 Stephen Gray was experimenting with static electricity, which he generated using a glass tube. He noticed that a cork, used to protect the tube from dust and moisture, also became electrified (charged). Further experiments (e.g., extending the cork by putting thin sticks into it) showed—for the first time—that electrical effluvia (as Gray called it) could be transmitted (conducted) over a distance. Gray managed to transmit charge with twine (765 feet) and wire (865 feet). Through these experiments, Gray discovered the importance of different materials, which facilitated or hindered the conduction of electrical effluvia. John Theophilus Desaguliers, who repeated many of Gray's experiments, is credited with coining the terms conductors and insulators to refer to the effects of different materials in these experiments. Gray also discovered electrical induction (i.e., where charge could be transmitted from one object to another without any direct physical contact). For example, he showed that by bringing a charged glass tube close to, but not touching, a lump of lead that was sustained by a thread, it was possible to make the lead become electrified (e.g., to attract and repel brass filings). He attempted to explain this phenomenon with the idea of electrical effluvia. Gray's discoveries introduced an important shift in the historical development of knowledge about electric charge. The fact that electrical effluvia could be transferred from one object to another, opened the theoretical possibility that this property was not inseparably connected to the bodies that were electrified by rubbing. In 1733 Charles François de Cisternay du Fay, inspired by Gray's work, made a series of experiments (reported in Mémoires de l'Académie Royale des Sciences), showing that more or less all substances could be 'electrified' by rubbing, except for metals and fluids and proposed that electricity comes in two varieties that cancel each other, which he expressed in terms of a two-fluid theory. When glass was rubbed with silk, du Fay said that the glass was charged with vitreous electricity, and, when amber was rubbed with fur, the amber was charged with resinous electricity. In contemporary understanding, positive charge is now defined as the charge of a glass rod after being rubbed with a silk cloth, but it is arbitrary which type of charge is called positive and which is called negative. Another important two-fluid theory from this time was proposed by Jean-Antoine Nollet (1745). Up until about 1745, the main explanation for electrical attraction and repulsion was the idea that electrified bodies gave off an effluvium. 
Benjamin Franklin started electrical experiments in late 1746, and by 1750 had developed a one-fluid theory of electricity, based on an experiment that showed that a rubbed glass received the same, but opposite, charge strength as the cloth used to rub the glass. Franklin imagined electricity as being a type of invisible fluid present in all matter and coined the term charge itself (as well as battery and some others); for example, he believed that it was the glass in a Leyden jar that held the accumulated charge. He posited that rubbing insulating surfaces together caused this fluid to change location, and that a flow of this fluid constitutes an electric current. He also posited that when matter contained an excess of the fluid it was positively charged, and when it had a deficit it was negatively charged. He identified the term positive with vitreous electricity and negative with resinous electricity after performing an experiment with a glass tube he had received from his overseas colleague Peter Collinson. The experiment had participant A charge the glass tube and participant B receive a shock to the knuckle from the charged tube. Franklin identified participant B to be positively charged after having been shocked by the tube. There is some ambiguity about whether William Watson independently arrived at the same one-fluid explanation around the same time (1747). Watson, after seeing Franklin's letter to Collinson, claimed that he had presented the same explanation as Franklin in spring 1747. Franklin had studied some of Watson's works prior to making his own experiments and analysis, which was probably significant for Franklin's own theorizing. One physicist suggests that Watson first proposed a one-fluid theory, which Franklin then elaborated further and more influentially. A historian of science argues that Watson missed a subtle difference between his ideas and Franklin's, so that Watson misinterpreted his ideas as being similar to Franklin's. In any case, there was no animosity between Watson and Franklin, and the Franklin model of electrical action, formulated in early 1747, eventually became widely accepted. After Franklin's work, effluvia-based explanations were rarely put forward. It is now known that the Franklin model was fundamentally correct. There is only one kind of electrical charge, and only one variable is required to keep track of the amount of charge. Until 1800 it was only possible to study conduction of electric charge by using an electrostatic discharge. In 1800 Alessandro Volta was the first to show that charge could be maintained in continuous motion through a closed path. In 1833, Michael Faraday sought to remove any doubt that electricity is identical, regardless of the source by which it is produced. He discussed a variety of known forms, which he characterized as common electricity (e.g., static electricity, piezoelectricity, magnetic induction), voltaic electricity (e.g., electric current from a voltaic pile), and animal electricity (e.g., bioelectricity). In 1838, Faraday raised a question about whether electricity was a fluid or fluids or a property of matter, like gravity. He investigated whether matter could be charged with one kind of charge independently of the other. He came to the conclusion that electric charge was a relation between two or more bodies, because he could not charge one body without having an opposite charge in another body.
In 1838, Faraday also put forth a theoretical explanation of electric force, while expressing neutrality about whether it originates from one, two, or no fluids. He focused on the idea that the normal state of particles is to be nonpolarized, and that when polarized, they seek to return to their natural, nonpolarized state. In developing a field theory approach to electrodynamics (starting in the mid-1850s), James Clerk Maxwell stops considering electric charge as a special substance that accumulates in objects, and starts to understand electric charge as a consequence of the transformation of energy in the field. This pre-quantum understanding considered magnitude of electric charge to be a continuous quantity, even at the microscopic level. The role of charge in static electricity Static electricity refers to the electric charge of an object and the related electrostatic discharge when two objects are brought together that are not at equilibrium. An electrostatic discharge creates a change in the charge of each of the two objects. Electrification by sliding When a piece of glass and a piece of resin—neither of which exhibit any electrical properties—are rubbed together and left with the rubbed surfaces in contact, they still exhibit no electrical properties. When separated, they attract each other. A second piece of glass rubbed with a second piece of resin, then separated and suspended near the former pieces of glass and resin causes these phenomena: The two pieces of glass repel each other. Each piece of glass attracts each piece of resin. The two pieces of resin repel each other. This attraction and repulsion is an electrical phenomenon, and the bodies that exhibit them are said to be electrified, or electrically charged. Bodies may be electrified in many other ways, as well as by sliding. The electrical properties of the two pieces of glass are similar to each other but opposite to those of the two pieces of resin: The glass attracts what the resin repels and repels what the resin attracts. If a body electrified in any manner whatsoever behaves as the glass does, that is, if it repels the glass and attracts the resin, the body is said to be vitreously electrified, and if it attracts the glass and repels the resin it is said to be resinously electrified. All electrified bodies are either vitreously or resinously electrified. An established convention in the scientific community defines vitreous electrification as positive, and resinous electrification as negative. The exactly opposite properties of the two kinds of electrification justify our indicating them by opposite signs, but the application of the positive sign to one rather than to the other kind must be considered as a matter of arbitrary convention—just as it is a matter of convention in mathematical diagram to reckon positive distances towards the right hand. The role of charge in electric current Electric current is the flow of electric charge through an object. The most common charge carriers are the positively charged proton and the negatively charged electron. The movement of any of these charged particles constitutes an electric current. In many situations, it suffices to speak of the conventional current without regard to whether it is carried by positive charges moving in the direction of the conventional current or by negative charges moving in the opposite direction. This macroscopic viewpoint is an approximation that simplifies electromagnetic concepts and calculations. 
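How rough the conventional-current picture really is can be appreciated by estimating how slowly the underlying carriers actually move. A minimal sketch using the textbook relation I = n·e·A·v_d for a copper wire follows; the carrier density, wire diameter, and current are typical illustrative values, not taken from the text.

```python
import math

E_CHARGE = 1.602176634e-19   # C
n_copper = 8.5e28            # free electrons per m^3 in copper (approximate)
current = 1.0                # A, example value
diameter = 1.0e-3            # m, a 1 mm wire

area = math.pi * (diameter / 2) ** 2
v_drift = current / (n_copper * E_CHARGE * area)   # from I = n * e * A * v_d

print(f"cross-section  = {area:.2e} m^2")
print(f"drift velocity = {v_drift:.2e} m/s (a fraction of a millimetre per second)")
# The electrons drift very slowly, and opposite to the conventional current direction,
# even though the current itself is established almost instantly along the wire.
```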
At the opposite extreme, if one looks at the microscopic situation, one sees there are many ways of carrying an electric current, including: a flow of electrons; a flow of electron holes that act like positive particles; and both negative and positive particles (ions or other charged particles) flowing in opposite directions in an electrolytic solution or a plasma. Beware that, in the common and important case of metallic wires, the direction of the conventional current is opposite to the drift velocity of the actual charge carriers; i.e., the electrons. This is a source of confusion for beginners. Conservation of electric charge The total electric charge of an isolated system remains constant regardless of changes within the system itself. This law is inherent to all processes known to physics and can be derived in a local form from gauge invariance of the wave function. The conservation of charge results in the charge-current continuity equation. More generally, the rate of change in charge density ρ within a volume of integration V is equal to the area integral over the current density J through the closed surface S = ∂V, which is in turn equal to the net current I: −(d/dt) ∫_V ρ dV = ∮_S J · dS = I. Thus, the conservation of electric charge, as expressed by the continuity equation, gives the result: I = −dq/dt. The charge transferred between times t_i and t_f is obtained by integrating both sides: ∫_{t_i}^{t_f} I dt = q(t_i) − q(t_f), where I is the net outward current through a closed surface and q is the electric charge contained within the volume defined by the surface. Relativistic invariance Aside from the properties described in articles about electromagnetism, charge is a relativistic invariant. This means that any particle that has charge q has the same charge regardless of how fast it is travelling. This property has been experimentally verified by showing that the charge of one helium nucleus (two protons and two neutrons bound together in a nucleus and moving around at high speeds) is the same as two deuterium nuclei (one proton and one neutron bound together, but moving much more slowly than they would if they were in a helium nucleus). See also SI electromagnetism units Color charge Partial charge Positron or antielectron is an antiparticle or antimatter counterpart of the electron References External links How fast does a charge decay? Chemical properties Conservation laws Electricity Flavour (particle physics) Spintronics Electromagnetic quantities
Probability current
In quantum mechanics, the probability current (sometimes called probability flux) is a mathematical quantity describing the flow of probability. Specifically, if one thinks of probability as a heterogeneous fluid, then the probability current is the rate of flow of this fluid. It is a real vector that changes with space and time. Probability currents are analogous to mass currents in hydrodynamics and electric currents in electromagnetism. As in those fields, the probability current (i.e. the probability current density) is related to the probability density function via a continuity equation. The probability current is invariant under gauge transformation. The concept of probability current is also used outside of quantum mechanics, when dealing with probability density functions that change over time, for instance in Brownian motion and the Fokker–Planck equation. The relativistic equivalent of the probability current is known as the probability four-current. Definition (non-relativistic 3-current) Free spin-0 particle In non-relativistic quantum mechanics, the probability current of the wave function of a particle of mass in one dimension is defined as where is the reduced Planck constant; denotes the complex conjugate of the wave function; denotes the real part; denotes the imaginary part. Note that the probability current is proportional to a Wronskian In three dimensions, this generalizes to where denotes the del or gradient operator. This can be simplified in terms of the kinetic momentum operator, to obtain These definitions use the position basis (i.e. for a wavefunction in position space), but momentum space is possible. Spin-0 particle in an electromagnetic field The above definition should be modified for a system in an external electromagnetic field. In SI units, a charged particle of mass and electric charge includes a term due to the interaction with the electromagnetic field; where is the magnetic vector potential. The term has dimensions of momentum. Note that used here is the canonical momentum and is not gauge invariant, unlike the kinetic momentum operator . In Gaussian units: where is the speed of light. Spin-s particle in an electromagnetic field If the particle has spin, it has a corresponding magnetic moment, so an extra term needs to be added incorporating the spin interaction with the electromagnetic field. According to Landau-Lifschitz's Course of Theoretical Physics the electric current density is in Gaussian units: And in SI units: Hence the probability current (density) is in SI units: where is the spin vector of the particle with corresponding spin magnetic moment and spin quantum number . It is doubtful if this formula is valid for particles with an interior structure. The neutron has zero charge but non-zero magnetic moment, so would be impossible (except would also be zero in this case). For composite particles with a non-zero charge – like the proton which has spin quantum number s=1/2 and μS= 2.7927·μN or the deuteron (H-2 nucleus) which has s=1 and μS=0.8574·μN – it is mathematically possible but doubtful. Connection with classical mechanics The wave function can also be written in the complex exponential (polar) form: where are real functions of and . 
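The non-relativistic definition described in words above amounts, in one dimension, to j = (ħ/m) Im(ψ* ∂ψ/∂x). A minimal numerical sketch, using natural units and an arbitrary Gaussian wave packet, evaluates this expression on a grid and checks that j/ρ equals ħk₀/m for a packet with mean wavenumber k₀:

```python
import numpy as np

hbar, m = 1.0, 1.0                      # natural units for the illustration
x = np.linspace(-20.0, 20.0, 4001)
dx = x[1] - x[0]
x0, sigma, k0 = 0.0, 2.0, 1.5           # packet centre, width, mean wavenumber (example values)

# Normalized Gaussian wave packet with a plane-wave phase factor.
psi = (2 * np.pi * sigma**2) ** -0.25 * np.exp(-(x - x0) ** 2 / (4 * sigma**2) + 1j * k0 * x)

rho = np.abs(psi) ** 2                            # probability density
dpsi_dx = np.gradient(psi, dx)                    # numerical derivative of psi
j = (hbar / m) * np.imag(np.conj(psi) * dpsi_dx)  # j = (hbar/m) Im(psi* dpsi/dx)

print("total probability:", np.sum(rho) * dx)         # ~ 1
centre = np.argmin(np.abs(x - x0))
print("j / rho at centre:", j[centre] / rho[centre])  # ~ hbar * k0 / m = 1.5
```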
Written this way, the probability density is and the probability current is: The exponentials and terms cancel: Finally, combining and cancelling the constants, and replacing with , Hence, the spatial variation of the phase of a wavefunction is said to characterize the probability flux of the wavefunction. If we take the familiar formula for the mass flux in hydrodynamics: where is the mass density of the fluid and is its velocity (also the group velocity of the wave). In the classical limit, we can associate the velocity with which is the same as equating with the classical momentum however, it does not represent a physical velocity or momentum at a point since simultaneous measurement of position and velocity violates uncertainty principle. This interpretation fits with Hamilton–Jacobi theory, in which in Cartesian coordinates is given by , where is Hamilton's principal function. The de Broglie-Bohm theory equates the velocity with in general (not only in the classical limit) so it is always well defined. It is an interpretation of quantum mechanics. Motivation Continuity equation for quantum mechanics The definition of probability current and Schrödinger's equation can be used to derive the continuity equation, which has exactly the same forms as those for hydrodynamics and electromagnetism. For some wave function , let: be the probability density (probability per unit volume, denotes complex conjugate). Then, where is any volume and is the boundary of . This is the conservation law for probability in quantum mechanics. The integral form is stated as: whereis the probability current or probability flux (flow per unit area). Here, equating the terms inside the integral gives the continuity equation for probability:and the integral equation can also be restated using the divergence theorem as: . In particular, if is a wavefunction describing a single particle, the integral in the first term of the preceding equation, sans time derivative, is the probability of obtaining a value within when the position of the particle is measured. The second term is then the rate at which probability is flowing out of the volume . Altogether the equation states that the time derivative of the probability of the particle being measured in is equal to the rate at which probability flows into . By taking the limit of volume integral to include all regions of space, a well-behaved wavefunction that goes to zero at infinities in the surface integral term implies that the time derivative of total probability is zero ie. the normalization condition is conserved. This result is in agreement with the unitary nature of time evolution operators which preserve length of the vector by definition. Transmission and reflection through potentials In regions where a step potential or potential barrier occurs, the probability current is related to the transmission and reflection coefficients, respectively and ; they measure the extent the particles reflect from the potential barrier or are transmitted through it. Both satisfy: where and can be defined by: where are the incident, reflected and transmitted probability currents respectively, and the vertical bars indicate the magnitudes of the current vectors. The relation between and can be obtained from probability conservation: In terms of a unit vector normal to the barrier, these are equivalently: where the absolute values are required to prevent and being negative. 
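For the textbook case of a step potential with E > V₀, the incident, reflected, and transmitted currents give R = ((k₁ − k₂)/(k₁ + k₂))² and T = 4k₁k₂/(k₁ + k₂)². This closed form is a standard result assumed here rather than derived in the text above; the quick check below confirms R + T = 1 for a few arbitrary energies in natural units.

```python
import numpy as np

hbar, m = 1.0, 1.0            # natural units
V0 = 1.0                      # step height (example value)

for E in (1.5, 2.0, 5.0):     # energies above the step
    k1 = np.sqrt(2 * m * E) / hbar
    k2 = np.sqrt(2 * m * (E - V0)) / hbar
    R = ((k1 - k2) / (k1 + k2)) ** 2     # reflected / incident probability current
    T = 4 * k1 * k2 / (k1 + k2) ** 2     # transmitted / incident probability current
    print(f"E = {E}: R = {R:.4f}, T = {T:.4f}, R + T = {R + T:.4f}")
```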
Examples Plane wave For a plane wave propagating in space: the probability density is constant everywhere; (that is, plane waves are stationary states) but the probability current is nonzero – the square of the absolute amplitude of the wave times the particle's speed; illustrating that the particle may be in motion even if its spatial probability density has no explicit time dependence. Particle in a box For a particle in a box, in one spatial dimension and of length , confined to the region , the energy eigenstates are and zero elsewhere. The associated probability currents are since Discrete definition For a particle in one dimension on we have the Hamiltonian where is the discrete Laplacian, with being the right shift operator on Then the probability current is defined as with the velocity operator, equal to and is the position operator on Since is usually a multiplication operator on we get to safely write As a result, we find: References Further reading Quantum mechanics
Binding energy
In physics and chemistry, binding energy is the smallest amount of energy required to remove a particle from a system of particles or to disassemble a system of particles into individual parts. In the former meaning the term is predominantly used in condensed matter physics, atomic physics, and chemistry, whereas in nuclear physics the term separation energy is used. A bound system is typically at a lower energy level than its unbound constituents. According to relativity theory, a decrease in the total energy of a system is accompanied by a decrease in the total mass, where . Types There are several types of binding energy, each operating over a different distance and energy scale. The smaller the size of a bound system, the higher its associated binding energy. Mass–energy relation A bound system is typically at a lower energy level than its unbound constituents because its mass must be less than the total mass of its unbound constituents. For systems with low binding energies, this "lost" mass after binding may be fractionally small, whereas for systems with high binding energies, the missing mass may be an easily measurable fraction. This missing mass may be lost during the process of binding as energy in the form of heat or light, with the removed energy corresponding to the removed mass through Einstein's equation . In the process of binding, the constituents of the system might enter higher energy states of the nucleus/atom/molecule while retaining their mass, and because of this, it is necessary that they are removed from the system before its mass can decrease. Once the system cools to normal temperatures and returns to ground states regarding energy levels, it will contain less mass than when it first combined and was at high energy. This loss of heat represents the "mass deficit", and the heat itself retains the mass that was lost (from the point of view of the initial system). This mass will appear in any other system that absorbs the heat and gains thermal energy. For example, if two objects are attracting each other in space through their gravitational field, the attraction force accelerates the objects, increasing their velocity, which converts their potential energy (gravity) into kinetic energy. When the particles either pass through each other without interaction or elastically repel during the collision, the gained kinetic energy (related to speed) begins to revert into potential energy, driving the collided particles apart. The decelerating particles will return to the initial distance and beyond into infinity, or stop and repeat the collision (oscillation takes place). This shows that the system, which loses no energy, does not combine (bind) into a solid object, parts of which oscillate at short distances. Therefore, to bind the particles, the kinetic energy gained due to the attraction must be dissipated by resistive force. Complex objects in collision ordinarily undergo inelastic collision, transforming some kinetic energy into internal energy (heat content, which is atomic movement), which is further radiated in the form of photonsthe light and heat. Once the energy to escape the gravity is dissipated in the collision, the parts will oscillate at a closer, possibly atomic, distance, thus looking like one solid object. This lost energy, necessary to overcome the potential barrier to separate the objects, is the binding energy. 
If this binding energy were retained in the system as heat, its mass would not decrease, whereas binding energy lost from the system as heat radiation would itself have mass. It directly represents the "mass deficit" of the cold, bound system. Closely analogous considerations apply in chemical and nuclear reactions. Exothermic chemical reactions in closed systems do not change mass, but do become less massive once the heat of reaction is removed, though this mass change is too small to measure with standard equipment. In nuclear reactions, the fraction of mass that may be removed as light or heat, i.e. binding energy, is often a much larger fraction of the system mass. It may thus be measured directly as a mass difference between rest masses of reactants and (cooled) products. This is because nuclear forces are comparatively stronger than the Coulombic forces associated with the interactions between electrons and protons that generate heat in chemistry. Mass change Mass change (decrease) in bound systems, particularly atomic nuclei, has also been termed mass defect, mass deficit, or mass packing fraction. The difference between the unbound system calculated mass and experimentally measured mass of nucleus (mass change) is denoted as Δm. It can be calculated as follows: Mass change = (unbound system calculated mass) − (measured mass of system) e.g. (sum of masses of protons and neutrons) − (measured mass of nucleus) After a nuclear reaction occurs that results in an excited nucleus, the energy that must be radiated or otherwise removed as binding energy in order to decay to the unexcited state may be in one of several forms. This may be electromagnetic waves, such as gamma radiation; the kinetic energy of an ejected particle, such as an electron, in internal conversion decay; or partly as the rest mass of one or more emitted particles, such as the particles of beta decay. No mass deficit can appear, in theory, until this radiation or this energy has been emitted and is no longer part of the system. When nucleons bind together to form a nucleus, they must lose a small amount of mass, i.e. there is a change in mass to stay bound. This mass change must be released as various types of photon or other particle energy as above, according to the relation . Thus, after the binding energy has been removed, binding energy = mass change × . This energy is a measure of the forces that hold the nucleons together. It represents energy that must be resupplied from the environment for the nucleus to be broken up into individual nucleons. For example, an atom of deuterium has a mass defect of 0.0023884 Da, and its binding energy is nearly equal to 2.23 MeV. This means that energy of 2.23 MeV is required to disintegrate an atom of deuterium. The energy given off during either nuclear fusion or nuclear fission is the difference of the binding energies of the "fuel", i.e. the initial nuclide(s), from that of the fission or fusion products. In practice, this energy may also be calculated from the substantial mass differences between the fuel and products, which uses previous measurements of the atomic masses of known nuclides, which always have the same mass for each species. This mass difference appears once evolved heat and radiation have been removed, which is required for measuring the (rest) masses of the (non-excited) nuclides involved in such calculations. 
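The deuterium figure quoted above follows directly from E = Δm·c². A minimal sketch, using standard values for the dalton and the speed of light (variable names are illustrative only):

```python
C = 299_792_458.0              # speed of light, m/s
DA_TO_KG = 1.66053906660e-27   # one dalton in kilograms
MEV_IN_J = 1.602176634e-13     # one MeV in joules

mass_defect_Da = 0.0023884     # deuterium mass defect quoted in the text
delta_m_kg = mass_defect_Da * DA_TO_KG

E_joules = delta_m_kg * C**2   # E = delta_m * c^2
print(f"binding energy ~ {E_joules:.3e} J ~ {E_joules / MEV_IN_J:.2f} MeV")   # ~2.22 MeV
```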
See also Semi-empirical mass formula Separation energy (binding energy of one nucleon) Virial mass Prout's hypothesis, an early model of the atom that did not account for mass defect References External links Nuclear Binding Energy Mass and Nuclide Stability Experimental atomic mass data compiled Nov. 2003 Energy (physics) Mass spectrometry Nuclear physics Forms of energy
Granularity
Granularity (also called graininess) is the degree to which a material or system is composed of distinguishable pieces, "granules" or "grains" (metaphorically). It can either refer to the extent to which a larger entity is subdivided, or the extent to which groups of smaller indistinguishable entities have joined together to become larger distinguishable entities. Precision and ambiguity Coarse-grained materials or systems have fewer, larger discrete components than fine-grained materials or systems. A coarse-grained description of a system regards large subcomponents. A fine-grained description regards smaller components of which the larger ones are composed. The concepts granularity, coarseness, and fineness are relative; and are used when comparing systems or descriptions of systems. An example of increasingly fine granularity: a list of nations in the United Nations, a list of all states/provinces in those nations, a list of all cities in those states, etc. Physics A fine-grained description of a system is a detailed, exhaustive, low-level model of it. A coarse-grained description is a model where some of this fine detail has been smoothed over or averaged out. The replacement of a fine-grained description with a lower-resolution coarse-grained model is called coarse-graining. (See for example the second law of thermodynamics) Molecular dynamics In molecular dynamics, coarse graining consists of replacing an atomistic description of a biological molecule with a lower-resolution coarse-grained model that averages or smooths away fine details. Coarse-grained models have been developed for investigating the longer time- and length-scale dynamics that are critical to many biological processes, such as lipid membranes and proteins. These concepts not only apply to biological molecules but also inorganic molecules. Coarse graining may remove certain degrees of freedom, such as the vibrational modes between two atoms, or represent the two atoms as a single particle. The ends to which systems may be coarse-grained is simply bound by the accuracy in the dynamics and structural properties one wishes to replicate. This modern area of research is in its infancy, and although it is commonly used in biological modeling, the analytic theory behind it is poorly understood. Computing In parallel computing, granularity means the amount of computation in relation to communication, i.e., the ratio of computation to the amount of communication. Fine-grained parallelism means individual tasks are relatively small in terms of code size and execution time. The data is transferred among processors frequently in amounts of one or a few memory words. Coarse-grained is the opposite: data is communicated infrequently, after larger amounts of computation. The finer the granularity, the greater the potential for parallelism and hence speed-up, but the greater the overheads of synchronization and communication. Granularity disintegrators exist as well and are important to understand in order to determine the accurate level of granularity. In order to attain the best parallel performance, the best balance between load and communication overhead needs to be found. If the granularity is too fine, the performance can suffer from the increased communication overhead. On the other side, if the granularity is too coarse, the performance can suffer from load imbalance. Reconfigurable computing and supercomputing In reconfigurable computing and in supercomputing these terms refer to the data path width. 
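The computation-to-communication trade-off described above can be illustrated with a deliberately crude cost model: a fixed amount of work split into n equal tasks on p processors, with each task paying a fixed communication overhead. Every number below is invented solely to show the shape of the trade-off; it is not a measurement of any real system.

```python
import math

def parallel_time(total_work: float, n_tasks: int, n_procs: int, overhead: float) -> float:
    """Crude makespan model: equal tasks of size total_work/n_tasks, each paying a fixed
    communication overhead, scheduled in rounds of n_procs tasks at a time."""
    task_time = total_work / n_tasks + overhead
    rounds = math.ceil(n_tasks / n_procs)
    return rounds * task_time

WORK, PROCS, OVERHEAD = 1000.0, 8, 2.0      # arbitrary illustrative values

for n in (2, 4, 8, 32, 128, 512):
    t = parallel_time(WORK, n, PROCS, OVERHEAD)
    print(f"{n:4d} tasks (task size {WORK / n:7.2f}) -> parallel time {t:7.1f}")

# Too few (very coarse) tasks leave most processors idle; too many (very fine) tasks are
# dominated by the per-task communication overhead. With perfectly equal tasks, the sweet
# spot in this toy model is about one task per processor.
```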
The use of about one-bit wide processing elements like the configurable logic blocks (CLBs) in an FPGA is called fine-grained computing or fine-grained reconfigurability, whereas using wide data paths, such as, for instance, 32 bits wide resources, like microprocessor CPUs or data-stream-driven data path units (DPUs) like in a reconfigurable datapath array (rDPA) is called coarse-grained computing or coarse-grained reconfigurability. Data and information The granularity of data refers to the size in which data fields are sub-divided. For example, a postal address can be recorded, with coarse granularity, as a single field: address = 200 2nd Ave. South #358, St. Petersburg, FL 33701-4313 USA or with fine granularity, as multiple fields: street address = 200 2nd Ave. South #358 city = St. Petersburg state = FL postal code = 33701-4313 country = USA or even finer granularity: street = 2nd Ave. South address number = 200 suite/apartment = #358 city = St. Petersburg state = FL postal-code = 33701 postal-code-add-on = 4313 country = USA Finer granularity has overheads for data input and storage. This manifests itself in a higher number of objects and methods in the object-oriented programming paradigm or more subroutine calls for procedural programming and parallel computing environments. It does however offer benefits in flexibility of data processing in treating each data field in isolation if required. A performance problem caused by excessive granularity may not reveal itself until scalability becomes an issue. Within database design and data warehouse design, data grain can also refer to the smallest combination of columns in a table which makes the rows (also called records) unique. See also Complex systems Complexity Cybernetics Granular computing Granularity (parallel computing) Dennett's three stances High- and low-level Levels of analysis Meta-systems Multiple granularity locking Precision (computer science) Self-organization Specificity (linguistics) Systems thinking Notes References Statistical mechanics Business terms
MIL-STD-810
MIL-STD-810, U.S. Department of Defense Test Method Standard, Environmental Engineering Considerations and Laboratory Tests, is a United States Military Standard that emphasizes tailoring an equipment's environmental design and test limits to the conditions that it will experience throughout its service life, and establishing chamber test methods that replicate the effects of environments on the equipment rather than imitating the environments themselves. Although prepared specifically for U.S. military applications, the standard is often applied for commercial products as well. The standard's guidance and test methods are intended to: define environmental stress sequences, durations, and levels of equipment life cycles; be used to develop analysis and test criteria tailored to the equipment and its environmental life cycle; evaluate equipment's performance when exposed to a life cycle of environmental stresses identify deficiencies, shortcomings, and defects in equipment design, materials, manufacturing processes, packaging techniques, and maintenance methods; and demonstrate compliance with contractual requirements. The document revision as of 2019 is U.S. MIL-STD-810H. It supersedes MIL-STD-810G, Change Notice 1 which was issued in 2014. Cognizant agency MIL-STD-810 is maintained by a Tri-Service partnership that includes the United States Air Force, Army, and Navy. The U.S. Army Test and Evaluation Command, or ATEC, serves as Lead Standardization Activity / Preparing Activity, and is chartered under the Defense Standardization Program (DSP) with maintaining the functional expertise and serving as the DoD-wide technical focal point for the standard. The Institute of Environmental Sciences and Technology is the Administrator for WG-DTE043: MIL-STD-810, the Working Group tasked with reviewing the current environmental testing guidance and recommending improvements to the DOD Tri-Service Working Group. Scope and purpose MIL-STD-810 addresses a broad range of environmental conditions that include: low pressure for altitude testing; exposure to high and low temperatures plus temperature shock (both operating and in storage); rain (including wind blown and freezing rain); humidity, fungus, salt fog for corrosion testing; sand and dust exposure; explosive atmosphere; leakage; acceleration; shock and transport shock; gunfire vibration; and random vibration. The standard describes environmental management and engineering processes that can be of enormous value to generate confidence in the environmental worthiness and overall durability of a system design. The standard contains military acquisition program planning and engineering direction to consider the influences that environmental stresses have on equipment throughout all phases of its service life. The document does not impose design or test specifications. Rather, it describes the environmental tailoring process that results in realistic materiel designs and test methods based on materiel system performance requirements. Finally, there are limitations inherent in laboratory testing that make it imperative to use proper engineering judgment to extrapolate laboratory results to results that may be obtained under actual service conditions. In many cases, real-world environmental stresses (singularly or in combination) cannot be duplicated in test laboratories. Therefore, users should not assume that an item that passes laboratory testing also will pass field/fleet verification tests. 
History and evolution In 1945, the Army Air Force (AAF) released the first specification providing a formal methodology for testing equipment under simulated environmental conditions. That document, entitled AAF Specification 41065, Equipment - General Specification for Environmental Test of, is the direct ancestor of MIL-STD-810. In 1965, the USAF released a technical report with data and information on the origination and development of natural and induced environmental tests intended for aerospace and ground equipment. By using that document, the design engineer obtained a clearer understanding of the interpretation, application, and relationship of environmental testing to military equipment and materiel. The Institute of Environmental Sciences and Technology (IEST), a non-profit technical society, released the publication History and Rationale of MIL-STD-810 to capture the thought process behind the evolution of MIL-STD-810. It also provides a development history of test methods, rationale for many procedural changes, tailoring guidance for many test procedures, and insight into the future direction of the standard. The MIL-STD-810 test series originally addressed generic laboratory environmental testing. The first edition of MIL-STD-810 in 1962 included only a single sentence allowing users to modify tests to reflect environmental conditions. Subsequent editions contained essentially the same phrase, but did not elaborate on the subject until MIL-STD-810D was issued marking one of the more significant revisions of the standard with its focus more on shock and vibration tests that closely mirrored real-world operating environments. MIL-STD-810F further defined test methods while continuing the concept of creating test chambers that simulate conditions likely to be encountered during a product's useful life rather than simply replicating the actual environments. More recently, MIL-STD-810G implements Test Method 527 calling for the use of multiple vibration exciters to perform multi-axis shaking that simultaneously excites all test article resonances and simulates real-world vibrations. This approach replaces the legacy approach of three distinct tests, that is, shaking a load first in its x axis, then its y axis, and finally in its z axis. A matrix of the tests and methods of MIL-STD-810 through Revision G is available on the web and quite useful in comparing the changes among the various revisions . The following table traces the specification's evolution in terms of environmental tailoring to meet a specific user's needs. Part one - General program guidelines Part One of MIL-STD-810 describes management, engineering, and technical roles in the environmental design and test tailoring process. It focuses on the process of tailoring design and test criteria to the specific environmental conditions an equipment item is likely to encounter during its service life. New appendices support the succinctly presented text of Part One. It describes the tailoring process (i.e., systematically considering detrimental effects that various environmental factors may have on a specific equipment throughout its service life) and applies this process throughout the equipment's life cycle to meet user and interoperability needs. Part two - Laboratory test methods Part Two of MIL-STD-810 contains the environmental laboratory test methods to be applied using the test tailoring guidelines described in Part One of the document. 
With the exception of Test Method 528, these methods are not mandatory, but rather the appropriate method is selected and tailored to generate the most relevant test data possible. Each test method in Part Two contains some environmental data and references, and it identifies particular tailoring opportunities. Each test method supports the test engineer by describing preferred laboratory test facilities and methodologies. These environmental management and engineering processes can be of enormous value to generate confidence in the environmental worthiness and overall durability of equipment and materiel. Still, the user must recognize that there are limitations inherent in laboratory testing that make it imperative to use engineering judgment when extrapolating from laboratory results to results that may be obtained under actual service conditions. In many cases, real-world environmental stresses (singularly or in combination) cannot be duplicated practically or reliably in test laboratories. Therefore, users should not assume that a system or component that passes laboratory tests of this standard also would pass field/fleet verification trials. Specific examples of Test Methods called out in MIL-STD-810 are listed below: Test Method 500.6 Low Pressure (Altitude) Test Method 501.6 High Temperature Test Method 502.6 Low Temperature Test Method 503.6 Temperature Shock Test Method 504.2 Contamination by Fluids Test Method 505.6 Solar Radiation (Sunshine) Test Method 506.6 Rain Test Method 507.6 Humidity Test Method 508.7 Fungus Test Method 509.6 Salt Fog Test Method 510.6 Sand and Dust Test Method 511.6 Explosive Atmosphere Test Method 512.5 Immersion Test Method 513.7 Acceleration Test Method 514.7 Vibration Test Method 515.7 Acoustic Noise Test Method 516.7 Shock Test Method 517.2 Pyroshock Test Method 518.2 Acidic Atmosphere Test Method 519.7 Gunfire Shock Test Method 520.4 Temperature, Humidity, Vibration, and Altitude Test Method 521.4 Icing/Freezing Rain Test Method 522.2 Ballistic Shock Test Method 523.4 Vibro-Acoustic/Temperature Test Method 524.1 Freeze / Thaw Test Method 525.1 Time Waveform Replication Test Method 526.1 Rail Impact. Test Method 527.1 Multi-Exciter Test Method 528.1 Mechanical Vibrations of Shipboard Equipment (Type I – Environmental and Type II – Internally Excited) Part three - World climatic regions Part Three contains a compendium of climatic data and guidance assembled from several sources, including AR 70-38, Research, Development, Test and Evaluation of Materiel for Extreme Climatic Conditions (1979), a draft version of AR 70-38 (1990) that was developed using Air Land Battlefield Environment (ALBE) report information, Environmental Factors and Standards for Atmospheric Obscurants, Climate, and Terrain (1987), and MIL-HDBK-310, Global Climatic Data for Developing Military Products. It also provides planning guidance for realistic consideration (i.e., starting points) of climatic conditions in various regions throughout the world. Applicability to "ruggedized" consumer products U.S. MIL-STD-810 is a flexible standard that allows users to tailor test methods to fit the application. As a result, a vendor's claims of "...compliance to U.S. MIL-STD-810..." can be misleading, because no commercial organization or agency certifies compliance, commercial vendors can create the test methods or approaches to fit their product. Suppliers can — and some do — take significant latitude with how they test their products, and how they report the test results. 
Consumers who require rugged products should verify which test methods compliance is claimed against and which parameter limits were selected for testing. In practice, if any testing was actually done, the supplier would have to specify: (i) against which test methods of the standard the compliance is claimed; (ii) to which parameter limits the items were actually tested; and (iii) whether the testing was done internally or externally by an independent testing facility. Related documents Environmental Conditions for Airborne Equipment: The document DO-160G, Environmental Conditions and Test Procedures for Airborne Equipment, outlines a set of minimal standard environmental test conditions (categories) and corresponding test procedures for airborne equipment. It is published by RTCA, Inc., formerly known as the Radio Technical Commission for Aeronautics until its re-incorporation in 1991 as a not-for-profit corporation that functions as a Federal Advisory Committee pursuant to the United States Federal Advisory Committee Act. Environmental Test Methods for Defense Materiel: The Ministry of Defence (United Kingdom) provides requirements for environmental conditions experienced by defence materiel in service via Defence Standard 00-35, Environmental Handbook for Defence Materiel (Part 3) Environmental Test Methods. The document contains environmental descriptions, a range of test procedures, and default test severities representing conditions that may be encountered during the equipment's life. NATO Environmental Guidelines for Defence Equipment: The North Atlantic Treaty Organization (NATO) provides guidance to project managers, programme engineers, and environmental engineering specialists in the planning and implementation of environmental tasks via the Allied Environmental Conditions and Test Publication (AECTP) 100, Environmental Guidelines for Defence Materiel. The current document, AECTP-100 (Edition 3), was released in January 2006. Shock Testing Requirements for Naval Ships: The military specification entitled MIL-DTL-901E, Detail Specification, Shock Tests, H.I. (High-Impact) Shipboard Machinery, Equipment, and Systems, Requirements for (often mistakenly referred to as MIL-STD-901) covers shock testing requirements for shipboard machinery, equipment, systems, and structures, excluding submarine pressure hull penetrations. Compliance with the document verifies the ability of shipboard installations to withstand shock loadings which may be incurred during wartime service due to the effects of nuclear or conventional weapons. The current specification was released on 20 June 2017. IEST Vibration and Shock Testing Recommended Practices: These are peer-reviewed documents that outline how to perform specific tests. They are published by the Institute of Environmental Sciences and Technology. See also IP Code Rugged computer Rugged smartphone EN 62262 Industrial PC References External links DOD MIL-STD-810 standard, Environmental Engineering Considerations and Laboratory Tests. Military of the United States standards Environmental testing
Nuclear binding energy
Nuclear binding energy in experimental physics is the minimum energy that is required to disassemble the nucleus of an atom into its constituent protons and neutrons, known collectively as nucleons. The binding energy for stable nuclei is always a positive number, as the nucleus must gain energy for the nucleons to move apart from each other. Nucleons are attracted to each other by the strong nuclear force. In theoretical nuclear physics, the nuclear binding energy is considered a negative number. In this context it represents the energy of the nucleus relative to the energy of the constituent nucleons when they are infinitely far apart. Both the experimental and theoretical views are equivalent, with slightly different emphasis on what the binding energy means. The mass of an atomic nucleus is less than the sum of the individual masses of the free constituent protons and neutrons. The difference in mass can be calculated by the Einstein equation, E = mc², where E is the nuclear binding energy, c is the speed of light, and m is the difference in mass. This 'missing mass' is known as the mass defect, and represents the energy that was released when the nucleus was formed. The term "nuclear binding energy" may also refer to the energy balance in processes in which the nucleus splits into fragments composed of more than one nucleon. If new binding energy is available when light nuclei fuse (nuclear fusion), or when heavy nuclei split (nuclear fission), either process can result in release of this binding energy. This energy may be made available as nuclear energy and can be used to produce electricity, as in nuclear power, or in a nuclear weapon. When a large nucleus splits into pieces, excess energy is emitted as gamma rays and the kinetic energy of various ejected particles (nuclear fission products). These nuclear binding energies and forces are on the order of one million times greater than the electron binding energies of light atoms like hydrogen. Introduction Nuclear energy An absorption or release of nuclear energy occurs in nuclear reactions or radioactive decay; those that absorb energy are called endothermic reactions and those that release energy are exothermic reactions. Energy is consumed or released because of differences in the nuclear binding energy between the incoming and outgoing products of the nuclear transmutation. The best-known classes of exothermic nuclear transmutations are nuclear fission and nuclear fusion. Nuclear energy may be released by fission, when heavy atomic nuclei (like uranium and plutonium) are broken apart into lighter nuclei. The energy from fission is used to generate electric power in hundreds of locations worldwide. Nuclear energy is also released during fusion, when light nuclei like hydrogen are combined to form heavier nuclei such as helium. The Sun and other stars use nuclear fusion to generate thermal energy which is later radiated from the surface, a type of stellar nucleosynthesis. In any exothermic nuclear process, nuclear mass might ultimately be converted to thermal energy, emitted as heat. In order to quantify the energy released or absorbed in any nuclear transmutation, one must know the nuclear binding energies of the nuclear components involved in the transmutation. The nuclear force Electrons and nuclei are kept together by electrostatic attraction (negative attracts positive).
Furthermore, electrons are sometimes shared by neighboring atoms or transferred to them (by processes of quantum physics); this link between atoms is referred to as a chemical bond and is responsible for the formation of all chemical compounds. The electric force does not hold nuclei together, because all protons carry a positive charge and repel each other. If two protons were touching, their repulsion force would be almost 40 newtons. Because each of the neutrons carries total charge zero, a proton could electrically attract a neutron if the proton could induce the neutron to become electrically polarized. However, having the neutron between two protons (so their mutual repulsion decreases to 10 N) would attract the neutron only for an electric quadrupole arrangement. Higher multipoles, needed to satisfy more protons, cause weaker attraction, and quickly become implausible. After the proton and neutron magnetic moments were measured and verified, it was apparent that their magnetic forces might be 20 or 30 newtons, attractive if properly oriented. A pair of protons would do 10−13 joules of work to each other as they approach – that is, they would need to release energy of 0.5 MeV in order to stick together. On the other hand, once a pair of nucleons magnetically stick, their external fields are greatly reduced, so it is difficult for many nucleons to accumulate much magnetic energy. Therefore, another force, called the nuclear force (or residual strong force) holds the nucleons of nuclei together. This force is a residuum of the strong interaction, which binds quarks into nucleons at an even smaller level of distance. The fact that nuclei do not clump together (fuse) under normal conditions suggests that the nuclear force must be weaker than the electric repulsion at larger distances, but stronger at close range. Therefore, it has short-range characteristics. An analogy to the nuclear force is the force between two small magnets: magnets are very difficult to separate when stuck together, but once pulled a short distance apart, the force between them drops almost to zero. Unlike gravity or electrical forces, the nuclear force is effective only at very short distances. At greater distances, the electrostatic force dominates: the protons repel each other because they are positively charged, and like charges repel. For that reason, the protons forming the nuclei of ordinary hydrogen—for instance, in a balloon filled with hydrogen—do not combine to form helium (a process that also would require some protons to combine with electrons and become neutrons). They cannot get close enough for the nuclear force, which attracts them to each other, to become important. Only under conditions of extreme pressure and temperature (for example, within the core of a star), can such a process take place. Physics of nuclei There are around 94 naturally occurring elements on Earth. The atoms of each element have a nucleus containing a specific number of protons (always the same number for a given element), and some number of neutrons, which is often roughly a similar number. Two atoms of the same element having different numbers of neutrons are known as isotopes of the element. Different isotopes may have different properties – for example one might be stable and another might be unstable, and gradually undergo radioactive decay to become another element. The hydrogen nucleus contains just one proton. Its isotope deuterium, or heavy hydrogen, contains a proton and a neutron. 
The most common isotope of helium contains two protons and two neutrons, and those of carbon, nitrogen and oxygen – six, seven and eight of each particle, respectively. However, a helium nucleus weighs less than the sum of the weights of the two heavy hydrogen nuclei which combine to make it. The same is true for carbon, nitrogen and oxygen. For example, the carbon nucleus is slightly lighter than three helium nuclei, which can combine to make a carbon nucleus. This difference is known as the mass defect. Mass defect Mass defect (also called "mass deficit") is the difference between the mass of an object and the sum of the masses of its constituent particles. It can be explained using Albert Einstein's 1905 formula E = mc², which describes the equivalence of energy and mass. The decrease in mass is equal to the energy emitted in the reaction of an atom's creation divided by c². By this formula, adding energy also increases mass (both weight and inertia), whereas removing energy decreases mass. For example, a helium atom containing four nucleons has a mass about 0.8% less than the total mass of four hydrogen atoms (each containing one nucleon). The helium nucleus has four nucleons bound together, and the binding energy which holds them together is, in effect, the missing 0.8% of mass. For nuclei lighter than iron/nickel, building a nucleus out of smaller pieces lowers the total mass, so energy can be released when such nuclei fuse. For heavier nuclei, more energy is needed to bind them, and that energy may be released by breaking them up into fragments (known as nuclear fission). Nuclear power is generated at present by breaking up uranium nuclei in nuclear power reactors, and capturing the released energy as heat, which is converted to electricity. As a rule, very light elements can fuse comparatively easily, and very heavy elements can break up via fission very easily; elements in the middle are more stable and it is difficult to make them undergo either fusion or fission in an environment such as a laboratory. The reason the trend reverses after iron is the growing positive charge of the nuclei, which tends to force nuclei to break up. It is resisted by the strong nuclear interaction, which holds nucleons together. The electric force may be weaker than the strong nuclear force, but the strong force has a much more limited range: in an iron nucleus, each proton repels the other 25 protons, while the nuclear force only binds close neighbors. So for larger nuclei, the electrostatic forces tend to dominate and the nucleus will tend over time to break up. As nuclei grow bigger still, this disruptive effect becomes steadily more significant. By the time polonium is reached (84 protons), nuclei can no longer accommodate their large positive charge, but emit their excess protons quite rapidly in the process of alpha radioactivity: the emission of helium nuclei, each containing two protons and two neutrons. (Helium nuclei are an especially stable combination.) Because of this process, nuclei with more than 94 protons are not found naturally on Earth (see periodic table). The isotopes beyond uranium (atomic number 92) with the longest half-lives are plutonium-244 (80 million years) and curium-247 (16 million years).
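The "missing mass" of helium described above can be checked with a short calculation. The sketch below uses standard published atomic masses and the usual dalton-to-MeV conversion factor; these constants are assumed here rather than quoted from this article.

```python
# Worked check of the helium mass defect (the constants are standard published
# values, assumed here; they do not appear in the text above).
M_H1  = 1.007825     # atomic mass of hydrogen-1 in daltons (proton + electron)
M_n   = 1.008665     # neutron mass in daltons
M_He4 = 4.002602     # atomic mass of helium-4 in daltons
DA_TO_MEV = 931.494  # energy equivalent of 1 dalton, in MeV

constituents = 2 * M_H1 + 2 * M_n    # two hydrogen atoms plus two neutrons
defect = constituents - M_He4        # mass that "went missing" as binding energy
binding_energy = defect * DA_TO_MEV  # in MeV

print(f"mass defect: {defect:.6f} Da ({100 * defect / constituents:.2f}% of the parts)")
print(f"binding energy: {binding_energy:.1f} MeV "
      f"({binding_energy / 4:.2f} MeV per nucleon)")
# Prints a defect of about 0.030 Da, roughly 0.75% of the constituent mass,
# corresponding to about 28.3 MeV of binding energy.
```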
Nuclear reactions in the Sun The nuclear fusion process works as follows: five billion years ago, the new Sun formed when gravity pulled together a vast cloud of hydrogen and dust, from which the Earth and other planets also arose. The gravitational pull released energy and heated the early Sun, much in the way Helmholtz proposed. Thermal energy appears as the motion of atoms and molecules: the higher the temperature of a collection of particles, the greater is their velocity and the more violent are their collisions. When the temperature at the center of the newly formed Sun became great enough for collisions between hydrogen nuclei to overcome their electric repulsion, and bring them into the short range of the attractive nuclear force, nuclei began to stick together. When this began to happen, protons combined into deuterium and then helium, with some protons changing in the process to neutrons (plus positrons, positive electrons, which combine with electrons and annihilate into gamma-ray photons). This released nuclear energy now keeps up the high temperature of the Sun's core, and the heat also keeps the gas pressure high, keeping the Sun at its present size, and stopping gravity from compressing it any more. There is now a stable balance between gravity and pressure. Different nuclear reactions may predominate at different stages of the Sun's existence, including the proton–proton reaction and the carbon–nitrogen cycle—which involves heavier nuclei, but whose final product is still the combination of protons to form helium. A branch of physics, the study of controlled nuclear fusion, has tried since the 1950s to derive useful power from nuclear fusion reactions that combine small nuclei into bigger ones, typically to heat boilers, whose steam could turn turbines and produce electricity. No earthly laboratory can match one feature of the solar powerhouse: the great mass of the Sun, whose weight keeps the hot plasma compressed and confines the nuclear furnace to the Sun's core. Instead, physicists use strong magnetic fields to confine the plasma, and for fuel they use heavy forms of hydrogen, which burn more easily. Magnetic traps can be rather unstable, and any plasma hot enough and dense enough to undergo nuclear fusion tends to slip out of them after a short time. Even with ingenious tricks, the confinement in most cases lasts only a small fraction of a second. Combining nuclei Small nuclei that are larger than hydrogen can combine into bigger ones and release energy, but in combining such nuclei, the amount of energy released is much smaller compared to hydrogen fusion. The reason is that while the overall process releases energy from letting the nuclear attraction do its work, energy must first be injected to force together positively charged protons, which also repel each other with their electric charge. For elements that weigh more than iron (a nucleus with 26 protons), the fusion process no longer releases energy. In even heavier nuclei energy is consumed, not released, by combining similarly sized nuclei. With such large nuclei, overcoming the electric repulsion (which affects all protons in the nucleus) requires more energy than is released by the nuclear attraction (which is effective mainly between close neighbors). Conversely, energy could actually be released by breaking apart nuclei heavier than iron. 
With the nuclei of elements heavier than lead, the electric repulsion is so strong that some of them spontaneously eject positive fragments, usually nuclei of helium that form stable alpha particles. This spontaneous break-up is one of the forms of radioactivity exhibited by some nuclei. Nuclei heavier than lead (except for bismuth, thorium, and uranium) spontaneously break up too quickly to appear in nature as primordial elements, though they can be produced artificially or as intermediates in the decay chains of heavier elements. Generally, the heavier the nuclei are, the faster they spontaneously decay. Iron nuclei are the most stable nuclei (in particular iron-56), and the best sources of energy are therefore nuclei whose weights are as far removed from iron as possible. One can combine the lightest ones, nuclei of hydrogen (protons), to form nuclei of helium, and that is how the Sun generates its energy. Alternatively, one can break up the heaviest ones, nuclei of uranium or plutonium, into smaller fragments, and that is what nuclear reactors do. Nuclear binding energy An example that illustrates nuclear binding energy is the nucleus of 12C (carbon-12), which contains 6 protons and 6 neutrons. The protons are all positively charged and repel each other, but the nuclear force overcomes the repulsion and causes them to stick together. The nuclear force is a close-range force (it is strongly attractive at a distance of 1.0 fm and becomes extremely small beyond a distance of 2.5 fm), and virtually no effect of this force is observed outside the nucleus. The nuclear force also pulls neutrons together, or neutrons and protons. The energy of the nucleus is negative with regard to the energy of the particles pulled apart to infinite distance (just like the gravitational energy of planets of the Solar System), because energy must be utilized to split a nucleus into its individual protons and neutrons. Mass spectrometers have measured the masses of nuclei, which are always less than the sum of the masses of protons and neutrons that form them, and the difference, by the formula E = Δmc², gives the binding energy of the nucleus. Nuclear fusion The binding energy of helium is the energy source of the Sun and of most stars. The sun is composed of 74 percent hydrogen (measured by mass), an element having a nucleus consisting of a single proton. Energy is released in the Sun when 4 protons combine into a helium nucleus, a process in which two of them are also converted to neutrons. The conversion of protons to neutrons is the result of another nuclear force, known as the weak (nuclear) force. The weak force, like the strong force, has a short range, but is much weaker than the strong force. The weak force tries to make the number of neutrons and protons into the most energetically stable configuration. For nuclei containing less than 40 particles, these numbers are usually about equal. Protons and neutrons are closely related and are collectively known as nucleons. As the number of particles increases toward a maximum of about 209, the number of neutrons to maintain stability begins to outstrip the number of protons, until the ratio of neutrons to protons is about three to two. The protons of hydrogen combine to helium only if they have enough velocity to overcome each other's mutual repulsion sufficiently to get within range of the strong nuclear attraction. This means that fusion only occurs within a very hot gas.
Hydrogen hot enough for combining to helium requires an enormous pressure to keep it confined, but suitable conditions exist in the central regions of the Sun, where such pressure is provided by the enormous weight of the layers above the core, pressed inwards by the Sun's strong gravity. The process of combining protons to form helium is an example of nuclear fusion. Producing helium from normal hydrogen would be practically impossible on earth because of the difficulty in creating deuterium. Research is being undertaken on developing a process using deuterium and tritium. The Earth's oceans contain a large amount of deuterium that could be used and tritium can be made in the reactor itself from lithium, and furthermore the helium product does not harm the environment, so some consider nuclear fusion a good alternative to supply our energy needs. Experiments to carry out this form of fusion have so far only partially succeeded. Sufficiently hot deuterium and tritium must be confined. One technique is to use very strong magnetic fields, because charged particles (like those trapped in the Earth's radiation belt) are guided by magnetic field lines. The binding energy maximum and ways to approach it by decay In the main isotopes of light elements, such as carbon, nitrogen and oxygen, the most stable combination of neutrons and of protons occurs when the numbers are equal (this continues to element 20, calcium). However, in heavier nuclei, the disruptive energy of protons increases, since they are confined to a tiny volume and repel each other. The energy of the strong force holding the nucleus together also increases, but at a slower rate, as if inside the nucleus, only nucleons close to each other are tightly bound, not ones more widely separated. The net binding energy of a nucleus is that of the nuclear attraction, minus the disruptive energy of the electric force. As nuclei get heavier than helium, their net binding energy per nucleon (deduced from the difference in mass between the nucleus and the sum of masses of component nucleons) grows more and more slowly, reaching its peak at iron. As nucleons are added, the total nuclear binding energy always increases—but the total disruptive energy of electric forces (positive protons repelling other protons) also increases, and past iron, the second increase outweighs the first. Iron-56 (56Fe) is the most efficiently bound nucleus meaning that it has the least average mass per nucleon. However, nickel-62 is the most tightly bound nucleus in terms of binding energy per nucleon. (Nickel-62's higher binding energy does not translate to a larger mean mass loss than 56Fe, because 62Ni has a slightly higher ratio of neutrons/protons than does iron-56, and the presence of the heavier neutrons increases nickel-62's average mass per nucleon). To reduce the disruptive energy, the weak interaction allows the number of neutrons to exceed that of protons—for instance, the main isotope of iron has 26 protons and 30 neutrons. Isotopes also exist where the number of neutrons differs from the most stable number for that number of nucleons. If changing one proton into a neutron or one neutron into a proton increases the stability (lowering the mass), then this will happen through beta decay, meaning the nuclide will be radioactive. The two methods for this conversion are mediated by the weak force, and involve types of beta decay. In the simplest beta decay, neutrons are converted to protons by emitting a negative electron and an antineutrino. 
This is always possible outside a nucleus because neutrons are more massive than protons by an equivalent of about 2.5 electrons. In the opposite process, which only happens within a nucleus, and not to free particles, a proton may become a neutron by ejecting a positron and an electron neutrino. This is permitted if enough energy is available between parent and daughter nuclides to do this (the required energy difference is equal to 1.022 MeV, which is the mass of 2 electrons). If the mass difference between parent and daughter is less than this, a proton-rich nucleus may still convert protons to neutrons by the process of electron capture, in which a proton simply captures one of the atom's K-shell electrons, emits a neutrino, and becomes a neutron. Among the heaviest nuclei, starting with tellurium nuclei (element 52) containing 104 or more nucleons, electric forces may be so destabilizing that entire chunks of the nucleus may be ejected, usually as alpha particles, which consist of two protons and two neutrons (alpha particles are fast helium nuclei). (Beryllium-8 also decays, very quickly, into two alpha particles.) This type of decay becomes more and more probable as elements rise in atomic weight past 104. The curve of binding energy is a graph that plots the binding energy per nucleon against atomic mass. This curve has its main peak at iron and nickel and then slowly decreases again, and also a narrow isolated peak at helium, which is more stable than other low-mass nuclides. The heaviest nuclei in more than trace quantities in nature, uranium 238U, are unstable, but having a half-life of 4.5 billion years, close to the age of the Earth, they are still relatively abundant; they (and other nuclei heavier than helium) have formed in stellar evolution events like supernova explosions preceding the formation of the Solar System. The most common isotope of thorium, 232Th, also undergoes alpha particle emission, and its half-life (time over which half a number of atoms decays) is even longer, by several times. In each of these, radioactive decay produces daughter isotopes that are also unstable, starting a chain of decays that ends in some stable isotope of lead. Calculation of nuclear binding energy Calculation can be employed to determine the nuclear binding energy of nuclei. The calculation involves determining the nuclear mass defect, converting it into energy, and expressing the result as energy per mole of atoms, or as energy per nucleon. Conversion of nuclear mass defect into energy Nuclear mass defect is defined as the difference between the sum of the masses of the constituent nucleons and the nuclear mass. It is given by Δm = Z mp + N mn − M, where: Z is the proton number (atomic number). A is the nucleon number (mass number). mp is the mass of a proton. mn is the mass of a neutron. M is the nuclear mass. N is the neutron number (N = A − Z). The nuclear mass defect is usually converted into nuclear binding energy, which is the minimum energy required to disassemble the nucleus into its constituent nucleons. This conversion is done with the mass-energy equivalence E = Δmc². The result is then usually expressed as energy per mole of atoms or as energy per nucleon. Fission and fusion Nuclear energy is released by the splitting (fission) or merging (fusion) of the nuclei of atom(s).
The conversion of nuclear mass–energy to a form of energy, which can remove some mass when the energy is removed, is consistent with the mass–energy equivalence formula: ΔE = Δm c2, where ΔE = energy release, Δm = mass defect, and c = the speed of light in vacuum. Nuclear energy was first discovered by French physicist Henri Becquerel in 1896, when he found that photographic plates stored in the dark near uranium were blackened like X-ray plates (X-rays had recently been discovered in 1895). Nickel-62 has the highest binding energy per nucleon of any isotope. If an atom of lower average binding energy per nucleon is changed into two atoms of higher average binding energy per nucleon, energy is emitted. (The average here is the weighted average.) Also, if two atoms of lower average binding energy fuse into an atom of higher average binding energy, energy is emitted. The chart shows that fusion, or combining, of hydrogen nuclei to form heavier atoms releases energy, as does fission of uranium, the breaking up of a larger nucleus into smaller parts. Nuclear energy is released by three exoenergetic (or exothermic) processes: Radioactive decay, where a neutron or proton in the radioactive nucleus decays spontaneously by emitting either particles, electromagnetic radiation (gamma rays), or both. Note that for radioactive decay, it is not strictly necessary for the binding energy to increase. What is strictly necessary is that the mass decrease. If a neutron turns into a proton and the energy of the decay is less than 0.782343 MeV, the difference between the masses of the neutron and proton multiplied by the speed of light squared, (such as rubidium-87 decaying to strontium-87), the average binding energy per nucleon will actually decrease. Fusion, two atomic nuclei fuse together to form a heavier nucleus Fission, the breaking of a heavy nucleus into two (or more rarely three) lighter nuclei, and some neutrons The energy-producing nuclear interaction of light elements requires some clarification. Frequently, all light element energy-producing nuclear interactions are classified as fusion, however by the given definition above fusion requires that the products include a nucleus that is heavier than the reactants. Light elements can undergo energy-producing nuclear interactions by fusion or fission. All energy-producing nuclear interactions between two hydrogen isotopes and between hydrogen and helium-3 are fusion, as the product of these interactions include a heavier nucleus. However, the energy-producing nuclear interaction of a neutron with lithium–6 produces Hydrogen-3 and Helium-4, each a lighter nucleus. By the definition above, this nuclear interaction is fission, not fusion. When fission is caused by a neutron, as in this case, it is called induced fission. Binding energy for atoms The binding energy of an atom (including its electrons) is not exactly the same as the binding energy of the atom's nucleus. The measured mass deficits of isotopes are always listed as mass deficits of the neutral atoms of that isotope, and mostly in . As a consequence, the listed mass deficits are not a measure of the stability or binding energy of isolated nuclei, but for the whole atoms. There is a very practical reason for this, namely that it is very hard to totally ionize heavy elements, i.e. strip them of all of their electrons. 
This practice is useful for other reasons, too: stripping all the electrons from a heavy unstable nucleus (thus producing a bare nucleus) changes the lifetime of the nucleus, or the nucleus of a stable neutral atom can likewise become unstable after stripping, indicating that the nucleus cannot be treated independently. Examples of this have been shown in bound-state β decay experiments performed at the GSI heavy ion accelerator. This is also evident from phenomena like electron capture. Theoretically, in orbital models of heavy atoms, the electron orbits partially inside the nucleus (it does not orbit in a strict sense, but has a non-vanishing probability of being located inside the nucleus). A nuclear decay happens to the nucleus, meaning that properties ascribed to the nucleus change in the event. In the field of physics the concept of "mass deficit" as a measure for "binding energy" means "mass deficit of the neutral atom" (not just the nucleus) and is a measure for stability of the whole atom. Nuclear binding energy curve In the periodic table of elements, the series of light elements from hydrogen up to sodium is observed to exhibit generally increasing binding energy per nucleon as the atomic mass increases. This increase is generated by increasing forces per nucleon in the nucleus, as each additional nucleon is attracted by other nearby nucleons, and thus more tightly bound to the whole. Helium-4 and oxygen-16 are particularly stable exceptions to the trend (see figure on the right). This is because they are doubly magic, meaning their protons and neutrons both fill their respective nuclear shells. The region of increasing binding energy is followed by a region of relative stability (saturation) in the sequence from about mass 30 through about mass 90. In this region, the nucleus has become large enough that nuclear forces no longer completely extend efficiently across its width. Attractive nuclear forces in this region, as atomic mass increases, are nearly balanced by repellent electromagnetic forces between protons, as the atomic number increases. Finally, in the heavier elements, there is a gradual decrease in binding energy per nucleon as atomic number increases. In this region of nuclear size, electromagnetic repulsive forces are beginning to overcome the strong nuclear force attraction. At the peak of binding energy, nickel-62 is the most tightly bound nucleus (per nucleon), followed by iron-58 and iron-56. This is the approximate basic reason why iron and nickel are very common metals in planetary cores, since they are produced profusely as end products in supernovae and in the final stages of silicon burning in stars. However, it is not binding energy per defined nucleon (as defined above), which controls exactly which nuclei are made, because within stars, neutrons and protons can inter-convert to release even more energy per generic nucleon. In fact, it has been argued that photodisintegration of 62Ni to form 56Fe may be energetically possible in an extremely hot star core, due to this beta decay conversion of neutrons to protons. This favors the creation of 56Fe, the nuclide with the lowest mass per nucleon. However, at high temperatures not all matter will be in the lowest energy state. This energetic maximum should also hold for ambient conditions, say and , for neutral condensed matter consisting of 56Fe atoms—however, in these conditions nuclei of atoms are inhibited from fusing into the most stable and low energy state of matter. 
Elements with high binding energy per nucleon, like iron and nickel, cannot undergo fission, but they can theoretically undergo fusion with hydrogen, deuterium, helium, and carbon, for instance: Ni + C → Se Q = 5.467 MeV It is generally believed that iron-56 is more common than nickel isotopes in the universe for mechanistic reasons, because its unstable progenitor nickel-56 is copiously made by staged build-up of 14 helium nuclei inside supernovas, where it has no time to decay to iron before being released into the interstellar medium in a matter of a few minutes, as the supernova explodes. However, nickel-56 then decays to cobalt-56 within a few weeks, then this radioisotope finally decays to iron-56 with a half life of about 77.3 days. The radioactive decay-powered light curve of such a process has been observed to happen in type II supernovae, such as SN 1987A. In a star, there are no good ways to create nickel-62 by alpha-addition processes, or else there would presumably be more of this highly stable nuclide in the universe. Binding energy and nuclide masses The fact that the maximum binding energy is found in medium-sized nuclei is a consequence of the trade-off in the effects of two opposing forces that have different range characteristics. The attractive nuclear force (strong nuclear force), which binds protons and neutrons equally to each other, has a limited range due to a rapid exponential decrease in this force with distance. However, the repelling electromagnetic force, which acts between protons to force nuclei apart, falls off with distance much more slowly (as the inverse square of distance). For nuclei larger than about four nucleons in diameter, the additional repelling force of additional protons more than offsets any binding energy that results between further added nucleons as a result of additional strong force interactions. Such nuclei become increasingly less tightly bound as their size increases, though most of them are still stable. Finally, nuclei containing more than 209 nucleons (larger than about 6 nucleons in diameter) are all too large to be stable, and are subject to spontaneous decay to smaller nuclei. Nuclear fusion produces energy by combining the very lightest elements into more tightly bound elements (such as hydrogen into helium), and nuclear fission produces energy by splitting the heaviest elements (such as uranium and plutonium) into more tightly bound elements (such as barium and krypton). The nuclear fission of a few light elements (such as Lithium) occurs because Helium-4 is a product and a more tightly bound element than slightly heavier elements. Both processes produce energy as the sum of the masses of the products is less than the sum of the masses of the reacting nuclei. As seen above in the example of deuterium, nuclear binding energies are large enough that they may be easily measured as fractional mass deficits, according to the equivalence of mass and energy. The atomic binding energy is simply the amount of energy (and mass) released, when a collection of free nucleons are joined to form a nucleus. Nuclear binding energy can be computed from the difference in mass of a nucleus, and the sum of the masses of the number of free neutrons and protons that make up the nucleus. Once this mass difference, called the mass defect or mass deficiency, is known, Einstein's mass–energy equivalence formula can be used to compute the binding energy of any nucleus. 
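The distinction drawn above between nickel-62 (highest binding energy per nucleon) and iron-56 (lowest mass per nucleon) can be made concrete with the same mass-defect bookkeeping. The atomic masses and the 931.494 MeV-per-dalton conversion used below are standard published constants, assumed here rather than taken from this article.

```python
# Binding energy per nucleon vs. mass per nucleon for three nuclides.
# Atomic masses (in daltons) are standard published values, assumed here.
M_H1, M_n, DA_TO_MEV = 1.007825, 1.008665, 931.494

nuclides = {            # name: (Z, N, atomic mass in Da)
    "He-4":  (2,  2,  4.002602),
    "Fe-56": (26, 30, 55.934936),
    "Ni-62": (28, 34, 61.928345),
}

for name, (Z, N, M) in nuclides.items():
    A = Z + N
    defect = Z * M_H1 + N * M_n - M          # electron masses cancel with atomic masses
    be_per_nucleon = defect * DA_TO_MEV / A  # MeV per nucleon
    mass_per_nucleon = M / A                 # Da per nucleon
    print(f"{name}: {be_per_nucleon:.3f} MeV/nucleon, {mass_per_nucleon:.6f} Da/nucleon")

# Ni-62 comes out with the highest binding energy per nucleon (about 8.795 MeV),
# yet Fe-56 has the lower mass per nucleon, because of its smaller neutron fraction.
```

Using atomic masses (which include the electrons) is the usual shortcut: the electron masses cancel because hydrogen-1 atoms stand in for bare protons.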
Early nuclear physicists used to refer to computing this value as a "packing fraction" calculation. For example, the dalton (1 Da) is defined as 1/12 of the mass of a 12C atom—but the atomic mass of a 1H atom (which is a proton plus electron) is 1.007825 Da, so each nucleon in 12C has lost, on average, about 0.8% of its mass in the form of binding energy. Semiempirical formula for nuclear binding energy For a nucleus with A nucleons, including Z protons and N neutrons, a semi-empirical formula for the binding energy (EB) per nucleon is: where the coefficients are given by: ; ; ; ; . The first term is called the saturation contribution and ensures that the binding energy per nucleon is the same for all nuclei to a first approximation. The term is a surface tension effect and is proportional to the number of nucleons that are situated on the nuclear surface; it is largest for light nuclei. The term is the Coulomb electrostatic repulsion; this becomes more important as increases. The symmetry correction term takes into account the fact that in the absence of other effects the most stable arrangement has equal numbers of protons and neutrons; this is because the n–p interaction in a nucleus is stronger than either the n−n or p−p interaction. The pairing term is purely empirical; it is + for even–even nuclei and − for odd–odd nuclei. When A is odd, the pairing term is identically zero. Example values deduced from experimentally measured atom nuclide masses The following table lists some binding energies and mass defect values. Notice also that we use 1 Da = . To calculate the binding energy we use the formula Z (mp + me) + N mn − mnuclide where Z denotes the number of protons in the nuclides and N their number of neutrons. We take , and . The letter A denotes the sum of Z and N (number of nucleons in the nuclide). If we assume the reference nucleon has the mass of a neutron (so that all "total" binding energies calculated are maximal) we could define the total binding energy as the difference from the mass of the nucleus, and the mass of a collection of A free neutrons. In other words, it would be (Z + N) mn − mnuclide. The "total binding energy per nucleon" would be this value divided by A. 56Fe has the lowest nucleon-specific mass of the four nuclides listed in this table, but this does not imply it is the strongest bound atom per hadron, unless the choice of beginning hadrons is completely free. Iron releases the largest energy if any 56 nucleons are allowed to build a nuclide—changing one to another if necessary, The highest binding energy per hadron, with the hadrons starting as the same number of protons Z and total nucleons A as in the bound nucleus, is 62Ni. Thus, the true absolute value of the total binding energy of a nucleus depends on what we are allowed to construct the nucleus out of. If all nuclei of mass number A were to be allowed to be constructed of A neutrons, then 56Fe would release the most energy per nucleon, since it has a larger fraction of protons than 62Ni. However, if nuclei are required to be constructed of only the same number of protons and neutrons that they contain, then nickel-62 is the most tightly bound nucleus, per nucleon. In the table above it can be seen that the decay of a neutron, as well as the transformation of tritium into helium-3, releases energy; hence, it manifests a stronger bound new state when measured against the mass of an equal number of neutrons (and also a lighter state per number of total hadrons). 
Such reactions are not driven by changes in binding energies as calculated from previously fixed N and Z numbers of neutrons and protons, but rather by decreases in the total mass of the nuclide per nucleon with the reaction. (Note that the binding energy given above for hydrogen-1 is the atomic binding energy, not the nuclear binding energy, which would be zero.) See also Gravitational binding energy Bond-dissociation energy (binding energy between the atoms in a chemical bond) Electron binding energy (energy required to free an electron from its atomic orbital or from a solid) Atomic binding energy (energy required to disassemble an atom into free electrons and a nucleus) Quantum chromodynamics binding energy (addresses the mass and kinetic energy of the parts that bind the various quarks together inside a hadron) References External links Nuclear physics Nuclear chemistry Nuclear fusion Binding energy
Gravitational wave
Gravitational waves are transient displacements in a gravitational field, generated by the relative motion of gravitating masses, that radiate outward from their source at the speed of light. They were first proposed by Oliver Heaviside in 1893 and then later by Henri Poincaré in 1905 as the gravitational equivalent of electromagnetic waves. In 1916, Albert Einstein demonstrated that gravitational waves result from his general theory of relativity as ripples in spacetime. Gravitational waves transport energy as gravitational radiation, a form of radiant energy similar to electromagnetic radiation. Newton's law of universal gravitation, part of classical mechanics, does not provide for their existence, instead asserting that gravity has instantaneous effect everywhere. Gravitational waves therefore stand as an important relativistic phenomenon that is absent from Newtonian physics. In gravitational-wave astronomy, observations of gravitational waves are used to infer data about the sources of gravitational waves. Sources that can be studied this way include binary star systems composed of white dwarfs, neutron stars, and black holes; events such as supernovae; and the formation of the early universe shortly after the Big Bang. The first indirect evidence for the existence of gravitational waves came in 1974 from the observed orbital decay of the Hulse–Taylor binary pulsar, which matched the decay predicted by general relativity as energy is lost to gravitational radiation. In 1993, Russell A. Hulse and Joseph Hooton Taylor Jr. received the Nobel Prize in Physics for this discovery. The first direct observation of gravitational waves was made in 2015, when a signal generated by the merger of two black holes was received by the LIGO gravitational wave detectors in Livingston, Louisiana, and in Hanford, Washington. The 2017 Nobel Prize in Physics was subsequently awarded to Rainer Weiss, Kip Thorne and Barry Barish for their role in the direct detection of gravitational waves. Introduction In Albert Einstein's general theory of relativity, gravity is treated as a phenomenon resulting from the curvature of spacetime. This curvature is caused by the presence of mass. If the masses move, the curvature of spacetime changes. If the motion is not spherically symmetric, the motion can cause gravitational waves which will propagate away at the speed of light. As a gravitational wave passes an observer, that observer will find spacetime distorted by the effects of strain. Distances between objects increase and decrease rhythmically as the wave passes, at a frequency equal to that of the wave. The magnitude of this effect is inversely proportional to the distance (not distance squared) from the source. Inspiraling binary neutron stars are predicted to be a powerful source of gravitational waves as they coalesce, due to the very large acceleration of their masses as they orbit close to one another. However, due to the astronomical distances to these sources, the effects when measured on Earth are predicted to be very small, having strains of less than 1 part in 10²⁰. Scientists demonstrate the existence of these waves with highly sensitive detectors at multiple observation sites. The LIGO and VIRGO observatories have been the most sensitive detectors to date, operating at resolutions of about one part in . The Japanese detector KAGRA was completed in 2019; its first joint detection with LIGO and VIRGO was reported in 2021. Another European ground-based detector, the Einstein Telescope, is under development.
A space-based observatory, the Laser Interferometer Space Antenna (LISA), is also being developed by the European Space Agency. Gravitational waves do not strongly interact with matter in the way that electromagnetic radiation does. This allows for the observation of events involving exotic objects in the distant universe that cannot be observed with more traditional means such as optical telescopes or radio telescopes; accordingly, gravitational wave astronomy gives new insights into the workings of the universe. In particular, gravitational waves could be of interest to cosmologists as they offer a possible way of observing the very early universe. This is not possible with conventional astronomy, since before recombination the universe was opaque to electromagnetic radiation. Precise measurements of gravitational waves will also allow scientists to test more thoroughly the general theory of relativity. In principle, gravitational waves can exist at any frequency. Very low frequency waves can be detected using pulsar timing arrays. In this technique, the timing of approximately 100 pulsars spread widely across our galaxy is monitored over the course of years. Detectable changes in the arrival time of their signals can result from passing gravitational waves generated by merging supermassive black holes with wavelengths measured in light-years. These timing changes can be used to locate the source of the waves. Using this technique, astronomers have discovered the 'hum' of various SMBH mergers occurring in the universe. Stephen Hawking and Werner Israel list different frequency bands for gravitational waves that could plausibly be detected, ranging from 10⁻⁷ Hz up to 10¹¹ Hz. Speed of gravity The speed of gravitational waves in the general theory of relativity is equal to the speed of light in vacuum, c. Within the theory of special relativity, the constant c is not only about light; instead it is the highest possible speed for any interaction in nature. Formally, c is a conversion factor for changing the unit of time to the unit of space. This makes it the only speed which does not depend either on the motion of an observer or a source of light and/or gravity. Thus, the speed of "light" is also the speed of gravitational waves, and, further, the speed of any massless particle. Such particles include the gluon (carrier of the strong force), the photons that make up light (hence carrier of electromagnetic force), and the hypothetical gravitons (which are the presumptive field particles associated with gravity; however, an understanding of the graviton, if any exist, requires an as-yet unavailable theory of quantum gravity). In August 2017, the LIGO and Virgo detectors received gravitational wave signals at nearly the same time as gamma ray satellites and optical telescopes saw signals from a source located about 130 million light years away. History The possibility of gravitational waves and that those might travel at the speed of light was discussed in 1893 by Oliver Heaviside, using the analogy between the inverse-square law of gravitation and the electrostatic force. In 1905, Henri Poincaré proposed gravitational waves, emanating from a body and propagating at the speed of light, as being required by the Lorentz transformations and suggested that, in analogy to an accelerating electrical charge producing electromagnetic waves, accelerated masses in a relativistic field theory of gravity should produce gravitational waves.
In 1915 Einstein published his general theory of relativity, a complete relativistic theory of gravitation. He conjectured, like Poincare, that the equation would produce gravitational waves, but, as he mentions in a letter to Schwarzschild in February 1916, these could not be similar to electromagnetic waves. Electromagnetic waves can be produced by dipole motion, requiring both a positive and a negative charge. Gravitation has no equivalent to negative charge. Einstein continued to work through the complexity of the equations of general relativity to find an alternative wave model. The result was published in June 1916, and there he came to the conclusion that the gravitational wave must propagate with the speed of light, and there must, in fact, be three types of gravitational waves dubbed longitudinal–longitudinal, transverse–longitudinal, and transverse–transverse by Hermann Weyl. However, the nature of Einstein's approximations led many (including Einstein himself) to doubt the result. In 1922, Arthur Eddington showed that two of Einstein's types of waves were artifacts of the coordinate system he used, and could be made to propagate at any speed by choosing appropriate coordinates, leading Eddington to jest that they "propagate at the speed of thought". This also cast doubt on the physicality of the third (transverse–transverse) type that Eddington showed always propagate at the speed of light regardless of coordinate system. In 1936, Einstein and Nathan Rosen submitted a paper to Physical Review in which they claimed gravitational waves could not exist in the full general theory of relativity because any such solution of the field equations would have a singularity. The journal sent their manuscript to be reviewed by Howard P. Robertson, who anonymously reported that the singularities in question were simply the harmless coordinate singularities of the employed cylindrical coordinates. Einstein, who was unfamiliar with the concept of peer review, angrily withdrew the manuscript, never to publish in Physical Review again. Nonetheless, his assistant Leopold Infeld, who had been in contact with Robertson, convinced Einstein that the criticism was correct, and the paper was rewritten with the opposite conclusion and published elsewhere. In 1956, Felix Pirani remedied the confusion caused by the use of various coordinate systems by rephrasing the gravitational waves in terms of the manifestly observable Riemann curvature tensor. At the time, Pirani's work was overshadowed by the community's focus on a different question: whether gravitational waves could transmit energy. This matter was settled by a thought experiment proposed by Richard Feynman during the first "GR" conference at Chapel Hill in 1957. In short, his argument known as the "sticky bead argument" notes that if one takes a rod with beads then the effect of a passing gravitational wave would be to move the beads along the rod; friction would then produce heat, implying that the passing wave had done work. Shortly after, Hermann Bondi published a detailed version of the "sticky bead argument". This later led to a series of articles (1959 to 1989) by Bondi and Pirani that established the existence of plane wave solutions for gravitational waves. Paul Dirac further postulated the existence of gravitational waves, declaring them to have "physical significance" in his 1959 lecture at the Lindau Meetings. Further, it was Dirac who predicted gravitational waves with a well defined energy density in 1964. 
After the Chapel Hill conference, Joseph Weber started designing and building the first gravitational wave detectors now known as Weber bars. In 1969, Weber claimed to have detected the first gravitational waves, and by 1970 he was "detecting" signals regularly from the Galactic Center; however, the frequency of detection soon raised doubts on the validity of his observations as the implied rate of energy loss of the Milky Way would drain our galaxy of energy on a timescale much shorter than its inferred age. These doubts were strengthened when, by the mid-1970s, repeated experiments from other groups building their own Weber bars across the globe failed to find any signals, and by the late 1970s consensus was that Weber's results were spurious. In the same period, the first indirect evidence of gravitational waves was discovered. In 1974, Russell Alan Hulse and Joseph Hooton Taylor, Jr. discovered the first binary pulsar, which earned them the 1993 Nobel Prize in Physics. Pulsar timing observations over the next decade showed a gradual decay of the orbital period of the Hulse–Taylor pulsar that matched the loss of energy and angular momentum in gravitational radiation predicted by general relativity. This indirect detection of gravitational waves motivated further searches, despite Weber's discredited result. Some groups continued to improve Weber's original concept, while others pursued the detection of gravitational waves using laser interferometers. The idea of using a laser interferometer for this seems to have been floated independently by various people, including M.E. Gertsenshtein and V. I. Pustovoit in 1962, and Vladimir B. Braginskiĭ in 1966. The first prototypes were developed in the 1970s by Robert L. Forward and Rainer Weiss. In the decades that followed, ever more sensitive instruments were constructed, culminating in the construction of GEO600, LIGO, and Virgo. After years of producing null results, improved detectors became operational in 2015. On 11 February 2016, the LIGO-Virgo collaborations announced the first observation of gravitational waves, from a signal (dubbed GW150914) detected at 09:50:45 GMT on 14 September 2015 of two black holes with masses of 29 and 36 solar masses merging about 1.3 billion light-years away. During the final fraction of a second of the merger, it released more than 50 times the power of all the stars in the observable universe combined. The signal increased in frequency from 35 to 250 Hz over 10 cycles (5 orbits) as it rose in strength for a period of 0.2 second. The mass of the new merged black hole was 62 solar masses. Energy equivalent to three solar masses was emitted as gravitational waves. The signal was seen by both LIGO detectors in Livingston and Hanford, with a time difference of 7 milliseconds due to the angle between the two detectors and the source. The signal came from the Southern Celestial Hemisphere, in the rough direction of (but much farther away than) the Magellanic Clouds. The confidence level of this being an observation of gravitational waves was 99.99994%. A year earlier, the BICEP2 collaboration claimed that they had detected the imprint of gravitational waves in the cosmic microwave background. However, they were later forced to retract this result. In 2017, the Nobel Prize in Physics was awarded to Rainer Weiss, Kip Thorne and Barry Barish for their role in the detection of gravitational waves. In 2023, NANOGrav, EPTA, PPTA, and IPTA announced that they found evidence of a universal gravitational wave background. 
The North American Nanohertz Observatory for Gravitational Waves states that these waves were created over cosmological time scales by supermassive black holes, identifying the distinctive Hellings–Downs curve in 15 years of radio observations of 25 pulsars. Similar results were published by the European Pulsar Timing Array, which claimed a -significance. They expect that a -significance will be achieved by 2025 by combining the measurements of several collaborations. Effects of passing Gravitational waves are constantly passing Earth; however, even the strongest have a minuscule effect and their sources are generally at a great distance. For example, the waves given off by the cataclysmic final merger of GW150914 reached Earth after travelling over a billion light-years, as a ripple in spacetime that changed the length of a 4 km LIGO arm by a thousandth of the width of a proton, proportionally equivalent to changing the distance to the nearest star outside the Solar System by one hair's width. This tiny effect from even extreme gravitational waves makes them observable on Earth only with the most sophisticated detectors. The effects of a passing gravitational wave, in an extremely exaggerated form, can be visualized by imagining a perfectly flat region of spacetime with a group of motionless test particles lying in a plane, e.g., the surface of a computer screen. As a gravitational wave passes through the particles along a line perpendicular to the plane of the particles, i.e., following the observer's line of vision into the screen, the particles will follow the distortion in spacetime, oscillating in a "cruciform" manner, as shown in the animations. The area enclosed by the test particles does not change and there is no motion along the direction of propagation. The oscillations depicted in the animation are exaggerated for the purpose of discussion; in reality, a gravitational wave has a very small amplitude (as formulated in linearized gravity). However, they help illustrate the kind of oscillations associated with gravitational waves as produced by a pair of masses in a circular orbit. In this case the amplitude of the gravitational wave is constant, but its plane of polarization changes or rotates at twice the orbital rate, so the time-varying gravitational wave size, or 'periodic spacetime strain', exhibits a variation as shown in the animation. If the orbit of the masses is elliptical then the gravitational wave's amplitude also varies with time according to Einstein's quadrupole formula. As with other waves, there are a number of characteristics used to describe a gravitational wave:
Amplitude: Usually denoted h, this is the size of the wave: the fraction of stretching or squeezing in the animation. The amplitude shown here is roughly h = 0.5 (or 50%). Gravitational waves passing through the Earth are many sextillion times weaker than this, with h ≈ 10⁻²⁰.
Frequency: Usually denoted f, this is the frequency with which the wave oscillates (1 divided by the amount of time between two successive maximum stretches or squeezes).
Wavelength: Usually denoted λ, this is the distance along the wave between points of maximum stretch or squeeze.
Speed: This is the speed at which a point on the wave (for example, a point of maximum stretch or squeeze) travels. For gravitational waves with small amplitudes, this wave speed is equal to the speed of light (c).
The speed, wavelength, and frequency of a gravitational wave are related by the equation c = λf, just like the equation for a light wave.
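As a quick numerical check of this relation, the sketch below evaluates the wavelength for the 0.5 Hz animation frequency discussed next and for roughly 250 Hz, the peak frequency quoted earlier for GW150914; only the speed of light is assumed.

```python
# Quick check of c = lambda * f for two frequencies mentioned in the text:
# the 0.5 Hz animation example and the ~250 Hz reached by GW150914.
c = 299_792_458.0  # speed of light in m/s

for f_hz in (0.5, 250.0):
    wavelength_km = c / f_hz / 1000.0
    print(f"f = {f_hz:>6.1f} Hz  ->  wavelength ≈ {wavelength_km:,.0f} km")

# 0.5 Hz gives roughly 600,000 km (about 47 Earth diameters, as stated below);
# 250 Hz gives roughly 1,200 km.
```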
For example, the animations shown here oscillate roughly once every two seconds. This would correspond to a frequency of 0.5 Hz, and a wavelength of about 600 000 km, or 47 times the diameter of the Earth. In the above example, it is assumed that the wave is linearly polarized with a "plus" polarization, written h+. Polarization of a gravitational wave is just like polarization of a light wave except that the polarizations of a gravitational wave are 45 degrees apart, as opposed to 90 degrees. In particular, in a "cross"-polarized gravitational wave, h×, the effect on the test particles would be basically the same, but rotated by 45 degrees, as shown in the second animation. Just as with light polarization, the polarizations of gravitational waves may also be expressed in terms of circularly polarized waves. Gravitational waves are polarized because of the nature of their source.

Sources
In general terms, gravitational waves are radiated by large, coherent motions of immense mass, especially in regions where gravity is so strong that Newtonian gravity begins to fail. The effect does not occur in a purely spherically symmetric system. A simple example of this principle is a spinning dumbbell. If the dumbbell spins around its axis of symmetry, it will not radiate gravitational waves; if it tumbles end over end, as in the case of two planets orbiting each other, it will radiate gravitational waves. The heavier the dumbbell, and the faster it tumbles, the greater is the gravitational radiation it will give off. In an extreme case, such as when the two weights of the dumbbell are massive stars like neutron stars or black holes, orbiting each other quickly, then significant amounts of gravitational radiation would be given off.

Some more detailed examples:
Two objects orbiting each other, as a planet would orbit the Sun, will radiate.
A spinning non-axisymmetric planetoid, say with a large bump or dimple on the equator, will radiate.
A supernova will radiate except in the unlikely event that the explosion is perfectly symmetric.
An isolated non-spinning solid object moving at a constant velocity will not radiate. This can be regarded as a consequence of the principle of conservation of linear momentum.
A spinning disk will not radiate. This can be regarded as a consequence of the principle of conservation of angular momentum. However, it will show gravitomagnetic effects.
A spherically pulsating spherical star (non-zero monopole moment or mass, but zero quadrupole moment) will not radiate, in agreement with Birkhoff's theorem.
More technically, the second time derivative of the quadrupole moment (or the l-th time derivative of the l-th multipole moment) of an isolated system's stress–energy tensor must be non-zero in order for it to emit gravitational radiation. This is analogous to the changing dipole moment of charge or current that is necessary for the emission of electromagnetic radiation.

Binaries
Gravitational waves carry energy away from their sources and, in the case of orbiting bodies, this is associated with an in-spiral or decrease in orbit. Imagine for example a simple system of two masses such as the Earth–Sun system moving slowly compared to the speed of light in circular orbits. Assume that these two masses orbit each other in a circular orbit in the x–y plane. To a good approximation, the masses follow simple Keplerian orbits. However, such an orbit represents a changing quadrupole moment. That is, the system will give off gravitational waves.
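The text does not give the emitted power explicitly, but the standard quadrupole-formula result for two point masses in a circular orbit is P = (32/5) G^4 (m1 m2)^2 (m1 + m2) / (c^5 r^5). A minimal Python sketch applying it to the Earth–Sun system (masses and orbital radius are standard reference values, added here as inputs) reproduces the roughly 200 W figure quoted in the next paragraph:

```python
# Gravitational-wave luminosity of a circular binary (quadrupole approximation):
#   P = (32/5) * G**4 / c**5 * (m1*m2)**2 * (m1 + m2) / r**5
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
m_sun = 1.989e30     # kg
m_earth = 5.972e24   # kg
r = 1.496e11         # mean Earth-Sun distance, m

P = (32 / 5) * G**4 / c**5 * (m_earth * m_sun)**2 * (m_earth + m_sun) / r**5
print(f"Gravitational-wave power of the Earth-Sun system ≈ {P:.0f} W")  # ~196 W
```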
In theory, the loss of energy through gravitational radiation could eventually drop the Earth into the Sun. However, the total energy of the Earth orbiting the Sun (kinetic energy + gravitational potential energy) is about 1.14 × 10^36 joules, of which only 200 watts (joules per second) is lost through gravitational radiation, leading to a decay in the orbit by about 1 × 10^−15 meters per day, or roughly the diameter of a proton. At this rate, it would take the Earth approximately 3 × 10^13 times more than the current age of the universe to spiral onto the Sun. This estimate overlooks the decrease in r over time, but the radius varies only slowly for most of the time and plunges at later stages, as r(t) = r0 (1 − t/tc)^(1/4), with r0 the initial radius and tc the total time needed to fully coalesce.

More generally, the rate of orbital decay can be approximated by
dr/dt = −(64/5) G^3 m1 m2 (m1 + m2) / (c^5 r^3),
where r is the separation between the bodies, t time, G the gravitational constant, c the speed of light, and m1 and m2 the masses of the bodies. This leads to an expected time to merger of
t = (5/256) c^5 r^4 / [G^3 m1 m2 (m1 + m2)].

Compact binaries
Compact stars like white dwarfs and neutron stars can be constituents of binaries. For example, a pair of solar mass neutron stars in a circular orbit at a separation of 1.89 × 10^8 m (189,000 km) has an orbital period of 1,000 seconds, and an expected lifetime of 1.30 × 10^13 seconds or about 414,000 years. Such a system could be observed by LISA if it were not too far away. A far greater number of white dwarf binaries exist with orbital periods in this range. White dwarf binaries have masses in the order of the Sun, and diameters in the order of the Earth. They cannot get much closer together than 10,000 km before they will merge and explode in a supernova which would also end the emission of gravitational waves. Until then, their gravitational radiation would be comparable to that of a neutron star binary.

When the orbit of a neutron star binary has decayed to 1.89 × 10^6 m (1890 km), its remaining lifetime is about 130,000 seconds or 36 hours. The orbital frequency will vary from 1 orbit per second at the start, to 918 orbits per second when the orbit has shrunk to 20 km at merger. The majority of gravitational radiation emitted will be at twice the orbital frequency. Just before merger, the inspiral could be observed by LIGO if such a binary were close enough. LIGO has only a few minutes to observe this merger out of a total orbital lifetime that may have been billions of years. In August 2017, LIGO and Virgo observed the first binary neutron star inspiral in GW170817, and 70 observatories collaborated to detect the electromagnetic counterpart, a kilonova in the galaxy NGC 4993, 40 megaparsecs away, emitting a short gamma ray burst (GRB 170817A) seconds after the merger, followed by a longer optical transient (AT 2017gfo) powered by r-process nuclei. Advanced LIGO detectors should be able to detect such events up to 200 megaparsecs away; at this range, around 40 detections per year would be expected.

Black hole binaries
Black hole binaries emit gravitational waves during their in-spiral, merger, and ring-down phases. Hence, in the early 1990s the physics community rallied around a concerted effort to predict the waveforms of gravitational waves from these systems with the Binary Black Hole Grand Challenge Alliance. The largest amplitude of emission occurs during the merger phase, which can be modeled with the techniques of numerical relativity. The first direct detection of gravitational waves, GW150914, came from the merger of two black holes.
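As a rough check of the compact-binary figures quoted above, the sketch below evaluates the Keplerian period and the time-to-merger expression for the 189,000 km neutron-star binary. It assumes one solar mass for each star (an assumption consistent with the quoted 1,000 s period and 1.3 × 10^13 s lifetime, but not stated explicitly in the text).

```python
import math

# Keplerian orbital period and gravitational-wave merger time for a circular binary.
G = 6.674e-11          # m^3 kg^-1 s^-2
c = 2.998e8            # m/s
m_sun = 1.989e30       # kg
m1 = m2 = 1.0 * m_sun  # assumed: one solar mass each (see note above)
r = 1.89e8             # separation in metres (189,000 km, from the text)

period = 2 * math.pi * math.sqrt(r**3 / (G * (m1 + m2)))
t_merge = (5 / 256) * c**5 * r**4 / (G**3 * m1 * m2 * (m1 + m2))

print(f"orbital period ≈ {period:.0f} s")           # ~1,000 s
print(f"time to merger ≈ {t_merge:.2e} s "
      f"(≈ {t_merge / 3.156e7:,.0f} years)")        # ~1.3e13 s, roughly 4e5 years
```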
Supernova
A supernova is a transient astronomical event that occurs during the last stellar evolutionary stages of a massive star's life, whose dramatic and catastrophic destruction is marked by one final titanic explosion. This explosion can happen in one of many ways, but in all of them a significant proportion of the matter in the star is blown away into the surrounding space at extremely high velocities (up to 10% of the speed of light). Unless there is perfect spherical symmetry in these explosions (i.e., unless matter is spewed out evenly in all directions), there will be gravitational radiation from the explosion. This is because gravitational waves are generated by a changing quadrupole moment, which can happen only when there is asymmetrical movement of masses. Since the exact mechanism by which supernovae take place is not fully understood, it is not easy to model the gravitational radiation emitted by them.

Spinning neutron stars
As noted above, a mass distribution will emit gravitational radiation only when there is spherically asymmetric motion among the masses. A spinning neutron star will generally emit no gravitational radiation because neutron stars are highly dense objects with a strong gravitational field that keeps them almost perfectly spherical. In some cases, however, there might be slight deformities on the surface called "mountains", which are bumps extending no more than 10 centimeters (4 inches) above the surface, that make the spin spherically asymmetric. This gives the star a quadrupole moment that changes with time, and it will emit gravitational waves until the deformities are smoothed out.

Inflation
Many models of the Universe suggest that there was an inflationary epoch in the early history of the Universe when space expanded by a large factor in a very short amount of time. If this expansion was not symmetric in all directions, it may have emitted gravitational radiation detectable today as a gravitational wave background. This background signal is too weak for any currently operational gravitational wave detector to observe, and it is thought it may be decades before such an observation can be made.

Properties and behaviour
Energy, momentum, and angular momentum
Water waves, sound waves, and electromagnetic waves are able to carry energy, momentum, and angular momentum and by doing so they carry those away from the source. Gravitational waves perform the same function. Thus, for example, a binary system loses angular momentum as the two orbiting objects spiral towards each other; the angular momentum is radiated away by gravitational waves.

The waves can also carry off linear momentum, a possibility that has some interesting implications for astrophysics. After two supermassive black holes coalesce, emission of linear momentum can produce a "kick" with amplitude as large as 4000 km/s. This is fast enough to eject the coalesced black hole completely from its host galaxy. Even if the kick is too small to eject the black hole completely, it can remove it temporarily from the nucleus of the galaxy, after which it will oscillate about the center, eventually coming to rest. A kicked black hole can also carry a star cluster with it, forming a hyper-compact stellar system. Or it may carry gas, allowing the recoiling black hole to appear temporarily as a "naked quasar". The quasar SDSS J092712.65+294344.0 is thought to contain a recoiling supermassive black hole.
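To see why a 4000 km/s kick can unbind a black hole from its host, it can be compared with a galaxy's escape speed. The sketch below uses round, assumed values for the enclosed mass and radius of a large galaxy; neither number comes from the text, and the point is only the order of magnitude.

```python
import math

# Escape speed v_esc = sqrt(2*G*M/r) for an assumed enclosed mass and radius.
G = 6.674e-11               # m^3 kg^-1 s^-2
m_sun = 1.989e30            # kg
M_enclosed = 1e12 * m_sun   # assumed: ~10^12 solar masses of enclosed mass
r = 50 * 3.086e19           # assumed: 50 kpc, converted to metres

v_esc = math.sqrt(2 * G * M_enclosed / r)
print(f"escape speed ≈ {v_esc / 1000:.0f} km/s")   # a few hundred km/s
# A 4000 km/s recoil exceeds this by roughly an order of magnitude,
# so a maximally kicked merger remnant would leave such a galaxy.
```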
Redshifting
Like electromagnetic waves, gravitational waves should exhibit shifting of wavelength and frequency due to the relative velocities of the source and observer (the Doppler effect), but also due to distortions of spacetime, such as cosmic expansion. Redshifting of gravitational waves is different from redshifting due to gravity (gravitational redshift).

Quantum gravity, wave-particle aspects, and graviton
In the framework of quantum field theory, the graviton is the name given to a hypothetical elementary particle speculated to be the force carrier that mediates gravity. However the graviton is not yet proven to exist, and no scientific model yet exists that successfully reconciles general relativity, which describes gravity, and the Standard Model, which describes all other fundamental forces. Attempts, such as quantum gravity, have been made, but are not yet accepted. If such a particle exists, it is expected to be massless (because the gravitational force appears to have unlimited range) and must be a spin-2 boson. It can be shown that any massless spin-2 field would give rise to a force indistinguishable from gravitation, because a massless spin-2 field must couple to (interact with) the stress-energy tensor in the same way that the gravitational field does; therefore if a massless spin-2 particle were ever discovered, it would be likely to be the graviton without further distinction from other massless spin-2 particles. Such a discovery would unite quantum theory with gravity.

Significance for study of the early universe
Due to the weakness of the coupling of gravity to matter, gravitational waves experience very little absorption or scattering, even as they travel over astronomical distances. In particular, gravitational waves are expected to be unaffected by the opacity of the very early universe. In these early phases, space had not yet become "transparent", so observations based upon light, radio waves, and other electromagnetic radiation that far back into time are limited or unavailable. Therefore, gravitational waves are expected in principle to have the potential to provide a wealth of observational data about the very early universe.

Determining direction of travel
The difficulty in directly detecting gravitational waves means it is also difficult for a single detector to identify by itself the direction of a source. Therefore, multiple detectors are used, both to distinguish signals from other "noise" by confirming the signal is not of earthly origin, and also to determine direction by means of triangulation. This technique uses the fact that the waves travel at the speed of light and will reach different detectors at different times depending on their source direction. Although the differences in arrival time may be just a few milliseconds, this is sufficient to identify the direction of the origin of the wave with considerable precision. Only in the case of GW170814 were three detectors operating at the time of the event; therefore, the direction could be precisely determined. The detection by all three instruments led to a very accurate estimate of the position of the source, with a 90% credible region of just 60 square degrees, a factor of 20 more accurate than before.

Gravitational wave astronomy
During the past century, astronomy has been revolutionized by the use of new methods for observing the universe. Astronomical observations were initially made using visible light. Galileo Galilei pioneered the use of telescopes to enhance these observations.
However, visible light is only a small portion of the electromagnetic spectrum, and not all objects in the distant universe shine strongly in this particular band. More information may be found, for example, in radio wavelengths. Using radio telescopes, astronomers have discovered pulsars and quasars, for example. Observations in the microwave band led to the detection of faint imprints of the Big Bang, a discovery Stephen Hawking called the "greatest discovery of the century, if not all time". Similar advances in observations using gamma rays, x-rays, ultraviolet light, and infrared light have also brought new insights to astronomy. As each of these regions of the spectrum has opened, new discoveries have been made that could not have been made otherwise. The astronomy community hopes that the same holds true of gravitational waves.

Gravitational waves have two important and unique properties. First, there is no need for any type of matter to be present nearby in order for the waves to be generated by a binary system of uncharged black holes, which would emit no electromagnetic radiation. Second, gravitational waves can pass through any intervening matter without being scattered significantly. Whereas light from distant stars may be blocked out by interstellar dust, for example, gravitational waves will pass through essentially unimpeded. These two features allow gravitational waves to carry information about astronomical phenomena heretofore never observed by humans.

The sources of gravitational waves described above are in the low-frequency end of the gravitational-wave spectrum (10^−7 to 10^5 Hz). An astrophysical source at the high-frequency end of the gravitational-wave spectrum (above 10^5 Hz and probably 10^10 Hz) generates relic gravitational waves that are theorized to be faint imprints of the Big Bang like the cosmic microwave background. At these high frequencies it is potentially possible that the sources may be "man made", that is, gravitational waves generated and detected in the laboratory.

A supermassive black hole, created from the merger of the black holes at the center of two merging galaxies detected by the Hubble Space Telescope, is theorized to have been ejected from the merger center by gravitational waves.

Detection
Indirect detection
Although the waves from the Earth–Sun system are minuscule, astronomers can point to other sources for which the radiation should be substantial. One important example is the Hulse–Taylor binary: a pair of stars, one of which is a pulsar. The characteristics of their orbit can be deduced from the Doppler shifting of radio signals given off by the pulsar. Each of the stars is about 1.4 solar masses and the size of their orbits is about 1/75 of the Earth–Sun orbit, just a few times larger than the diameter of our own Sun. The combination of greater masses and smaller separation means that the energy given off by the Hulse–Taylor binary will be far greater than the energy given off by the Earth–Sun system, roughly 10^22 times as much.

The information about the orbit can be used to predict how much energy (and angular momentum) would be radiated in the form of gravitational waves. As the binary system loses energy, the stars gradually draw closer to each other, and the orbital period decreases. The resulting trajectory of each star is an inspiral, a spiral with decreasing radius.
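The 10^22 figure quoted above can be checked from the scaling of the quadrupole luminosity, P ∝ (m1 m2)^2 (m1 + m2) / r^5, using the masses and the 1/75 separation ratio given in the text. This is a rough sketch: the Earth mass expressed in solar units is the only added input, and the answer comes out within a factor of a few of 10^22, as expected for such rounded numbers.

```python
# Ratio of gravitational-wave power: Hulse-Taylor binary vs the Earth-Sun system,
# using the scaling P ∝ (m1*m2)**2 * (m1 + m2) / r**5 with masses in solar units
# and separations in units of the Earth-Sun distance.
def lum(m1, m2, r):
    return (m1 * m2)**2 * (m1 + m2) / r**5

m_earth = 3.0e-6   # Earth mass in solar masses (added input, standard value)
ratio = lum(1.4, 1.4, 1 / 75) / lum(m_earth, 1.0, 1.0)
print(f"Hulse-Taylor / Earth-Sun power ratio ≈ {ratio:.1e}")  # a few times 10^21
```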
General relativity precisely describes these trajectories; in particular, the energy radiated in gravitational waves determines the rate of decrease in the period, defined as the time interval between successive periastrons (points of closest approach of the two stars). For the Hulse–Taylor pulsar, the predicted current change in radius is about 3 mm per orbit, and the change in the 7.75 hr period is about 2 seconds per year. Following a preliminary observation showing an orbital energy loss consistent with gravitational waves, careful timing observations by Taylor and Joel Weisberg dramatically confirmed the predicted period decrease to within 10%. With the improved statistics of more than 30 years of timing data since the pulsar's discovery, the observed change in the orbital period currently matches the prediction from gravitational radiation assumed by general relativity to within 0.2 percent. In 1993, spurred in part by this indirect detection of gravitational waves, the Nobel Committee awarded the Nobel Prize in Physics to Hulse and Taylor for "the discovery of a new type of pulsar, a discovery that has opened up new possibilities for the study of gravitation." The lifetime of this binary system, from the present to merger, is estimated to be a few hundred million years.

Inspirals are very important sources of gravitational waves. Any time two compact objects (white dwarfs, neutron stars, or black holes) are in close orbits, they send out intense gravitational waves. As they spiral closer to each other, these waves become more intense. At some point they should become so intense that direct detection by their effect on objects on Earth or in space is possible. This direct detection is the goal of several large-scale experiments. The only difficulty is that most systems like the Hulse–Taylor binary are so far away. The amplitude of waves given off by the Hulse–Taylor binary at Earth would be roughly h ≈ 10^−26. There are some sources, however, that astrophysicists expect to find that produce much greater amplitudes of h ≈ 10^−20. At least eight other binary pulsars have been discovered.

Difficulties
Gravitational waves are not easily detectable. When they reach the Earth, they have a small amplitude with strain approximately 10^−21, meaning that an extremely sensitive detector is needed, and that other sources of noise can overwhelm the signal. Gravitational waves are expected to have frequencies 10^−16 Hz < f < 10^4 Hz.

Ground-based detectors
Though the Hulse–Taylor observations were very important, they give only indirect evidence for gravitational waves. A more conclusive observation would be a direct measurement of the effect of a passing gravitational wave, which could also provide more information about the system that generated it. Any such direct detection is complicated by the extraordinarily small effect the waves would produce on a detector. The amplitude of a spherical wave will fall off as the inverse of the distance from the source (the 1/R term in the formulas for h above). Thus, even waves from extreme systems like merging binary black holes die out to very small amplitudes by the time they reach the Earth. Astrophysicists expect that some gravitational waves passing the Earth may be as large as h ≈ 10^−20, but generally no bigger.

Resonant antennas
A simple device theorised to detect the expected wave motion is called a Weber bar: a large, solid bar of metal isolated from outside vibrations. This type of instrument was the first type of gravitational wave detector.
Strains in space due to an incident gravitational wave excite the bar's resonant frequency and could thus be amplified to detectable levels. Conceivably, a nearby supernova might be strong enough to be seen without resonant amplification. With this instrument, Joseph Weber claimed to have detected daily signals of gravitational waves. His results, however, were contested in 1974 by physicists Richard Garwin and David Douglass. Modern forms of the Weber bar are still operated, cryogenically cooled, with superconducting quantum interference devices to detect vibration. Weber bars are not sensitive enough to detect anything but extremely powerful gravitational waves.

MiniGRAIL is a spherical gravitational wave antenna using this principle. It is based at Leiden University, consisting of an exactingly machined 1,150 kg sphere cryogenically cooled to 20 millikelvins. The spherical configuration allows for equal sensitivity in all directions, and is somewhat experimentally simpler than larger linear devices requiring high vacuum. Events are detected by measuring deformation of the detector sphere. MiniGRAIL is highly sensitive in the 2–4 kHz range, suitable for detecting gravitational waves from rotating neutron star instabilities or small black hole mergers.

There are currently two detectors focused on the higher end of the gravitational wave spectrum (10^−7 to 10^5 Hz): one at University of Birmingham, England, and the other at INFN Genoa, Italy. A third is under development at Chongqing University, China. The Birmingham detector measures changes in the polarization state of a microwave beam circulating in a closed loop about one meter across. Both detectors are expected to be sensitive to periodic spacetime strains of h ~ , given as an amplitude spectral density. The INFN Genoa detector is a resonant antenna consisting of two coupled spherical superconducting harmonic oscillators a few centimeters in diameter. The oscillators are designed to have (when uncoupled) almost equal resonant frequencies. The system is currently expected to have a sensitivity to periodic spacetime strains of h ~ , with an expectation to reach a sensitivity of h ~ . The Chongqing University detector is planned to detect relic high-frequency gravitational waves with the predicted typical parameters ≈ 10^11 Hz (100 GHz) and h ≈ 10^−30 to 10^−32.

Interferometers
A more sensitive class of detector uses a laser Michelson interferometer to measure gravitational-wave induced motion between separated 'free' masses. This allows the masses to be separated by large distances (increasing the signal size); a further advantage is that it is sensitive to a wide range of frequencies (not just those near a resonance as is the case for Weber bars). After years of development ground-based interferometers made the first detection of gravitational waves in 2015.

Currently, the most sensitive is LIGO, the Laser Interferometer Gravitational Wave Observatory. LIGO has three detectors: one in Livingston, Louisiana, one at the Hanford site in Richland, Washington and a third (formerly installed as a second detector at Hanford) that is planned to be moved to India. Each observatory has two light storage arms that are 4 kilometers in length. These are at 90 degree angles to each other, with the light passing through 1 m diameter vacuum tubes running the entire 4 kilometers. A passing gravitational wave will slightly stretch one arm as it shortens the other. This is the motion to which an interferometer is most sensitive.
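To get a feel for the numbers in the next paragraph, the change in arm length is simply the strain times the arm length, ΔL = h · L. A minimal Python sketch, using the 4 km arm length from the text and order-of-magnitude strain values quoted elsewhere in this article:

```python
# Change in a 4 km interferometer arm for representative strain amplitudes.
L_arm = 4_000.0                   # LIGO arm length in metres (from the text)

for h in (1e-20, 1e-21, 1e-22):   # representative strain amplitudes
    dL = h * L_arm
    print(f"h = {h:.0e}  ->  arm length change ≈ {dL:.1e} m")
# Roughly 4e-17 to 4e-19 m, i.e. around 10^-18 m, far smaller than a proton.
```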
Even with such long arms, the strongest gravitational waves will only change the distance between the ends of the arms by at most roughly 10^−18 m. LIGO should be able to detect gravitational waves as small as h ~ . Upgrades to LIGO and Virgo should increase the sensitivity still further. Another highly sensitive interferometer, KAGRA, which is located in the Kamioka Observatory in Japan, has been in operation since February 2020. A key point is that a tenfold increase in sensitivity (radius of 'reach') increases the volume of space accessible to the instrument by one thousand times. This increases the rate at which detectable signals might be seen from one per tens of years of observation, to tens per year.

Interferometric detectors are limited at high frequencies by shot noise, which occurs because the lasers produce photons randomly; one analogy is to rainfall: the rate of rainfall, like the laser intensity, is measurable, but the raindrops, like photons, fall at random times, causing fluctuations around the average value. This leads to noise at the output of the detector, much like radio static. In addition, for sufficiently high laser power, the random momentum transferred to the test masses by the laser photons shakes the mirrors, masking signals of low frequencies. Thermal noise (e.g., Brownian motion) is another limit to sensitivity. In addition to these 'stationary' (constant) noise sources, all ground-based detectors are also limited at low frequencies by seismic noise and other forms of environmental vibration, and other 'non-stationary' noise sources; creaks in mechanical structures, lightning or other large electrical disturbances, etc. may also create noise masking an event or may even imitate an event. All of these must be taken into account and excluded by analysis before detection may be considered a true gravitational wave event.

Einstein@Home
The simplest gravitational waves are those with constant frequency. The waves given off by a spinning, non-axisymmetric neutron star would be approximately monochromatic: a pure tone in acoustics. Unlike signals from supernovae or binary black holes, these signals evolve little in amplitude or frequency over the period it would be observed by ground-based detectors. However, there would be some change in the measured signal, because of Doppler shifting caused by the motion of the Earth. Despite the signals being simple, detection is extremely computationally expensive, because of the long stretches of data that must be analysed. The Einstein@Home project is a distributed computing project similar to SETI@home intended to detect this type of gravitational wave. By taking data from LIGO and GEO, and sending it out in little pieces to thousands of volunteers for parallel analysis on their home computers, Einstein@Home can sift through the data far more quickly than would be possible otherwise.

Space-based interferometers
Space-based interferometers, such as LISA and DECIGO, are also being developed. LISA's design calls for three test masses forming an equilateral triangle, with lasers from each spacecraft to each other spacecraft forming two independent interferometers. LISA is planned to occupy a solar orbit trailing the Earth, with each arm of the triangle being 2.5 million kilometers. This puts the detector in an excellent vacuum far from Earth-based sources of noise, though it will still be susceptible to heat, shot noise, and artifacts caused by cosmic rays and solar wind.

Using pulsar timing arrays
Pulsars are rapidly rotating neutron stars.
A pulsar emits beams of radio waves that, like lighthouse beams, sweep through the sky as the pulsar rotates. The signal from a pulsar can be detected by radio telescopes as a series of regularly spaced pulses, essentially like the ticks of a clock. GWs affect the time it takes the pulses to travel from the pulsar to a telescope on Earth. A pulsar timing array uses millisecond pulsars to seek out perturbations due to GWs in measurements of the time of arrival of pulses to a telescope, in other words, to look for deviations in the clock ticks. To detect GWs, pulsar timing arrays search for a distinct quadrupolar pattern of correlation and anti-correlation between the time of arrival of pulses from different pulsar pairs as a function of their angular separation in the sky. Although pulsar pulses travel through space for hundreds or thousands of years to reach us, pulsar timing arrays are sensitive to perturbations in their travel time of much less than a millionth of a second. The most likely source of GWs to which pulsar timing arrays are sensitive are supermassive black hole binaries, which form from the collision of galaxies. In addition to individual binary systems, pulsar timing arrays are sensitive to a stochastic background of GWs made from the sum of GWs from many galaxy mergers. Other potential signal sources include cosmic strings and the primordial background of GWs from cosmic inflation. Globally there are three active pulsar timing array projects. The North American Nanohertz Observatory for Gravitational Waves uses data collected by the Arecibo Radio Telescope and Green Bank Telescope. The Australian Parkes Pulsar Timing Array uses data from the Parkes radio-telescope. The European Pulsar Timing Array uses data from the four largest telescopes in Europe: the Lovell Telescope, the Westerbork Synthesis Radio Telescope, the Effelsberg Telescope and the Nancay Radio Telescope. These three groups also collaborate under the title of the International Pulsar Timing Array project. In June 2023, NANOGrav published the 15-year data release, which contained the first evidence for a stochastic gravitational wave background. In particular, it included the first measurement of the Hellings-Downs curve, the tell-tale sign of the gravitational wave origin of the observed background. Primordial gravitational wave Primordial gravitational waves are gravitational waves observed in the cosmic microwave background. They were allegedly detected by the BICEP2 instrument, an announcement made on 17 March 2014, which was withdrawn on 30 January 2015 ("the signal can be entirely attributed to dust in the Milky Way"). LIGO and Virgo observations On 11 February 2016, the LIGO collaboration announced the first observation of gravitational waves, from a signal detected at 09:50:45 GMT on 14 September 2015 of two black holes with masses of 29 and 36 solar masses merging about 1.3 billion light-years away. During the final fraction of a second of the merger, it released more than 50 times the power of all the stars in the observable universe combined. The signal increased in frequency from 35 to 250 Hz over 10 cycles (5 orbits) as it rose in strength for a period of 0.2 second. The mass of the new merged black hole was 62 solar masses. Energy equivalent to three solar masses was emitted as gravitational waves. The signal was seen by both LIGO detectors in Livingston and Hanford, with a time difference of 7 milliseconds due to the angle between the two detectors and the source. 
The signal came from the Southern Celestial Hemisphere, in the rough direction of (but much farther away than) the Magellanic Clouds. The gravitational waves were observed with a statistical significance of more than 5 sigma (corresponding to a confidence level of about 99.99997%), the threshold conventionally required to claim a discovery in experimental physics. Since then LIGO and Virgo have reported more gravitational wave observations from merging black hole binaries.

On 16 October 2017, the LIGO and Virgo collaborations announced the first-ever detection of gravitational waves originating from the coalescence of a binary neutron star system. The observation of the GW170817 transient, which occurred on 17 August 2017, allowed the masses of the neutron stars involved to be constrained to between 0.86 and 2.26 solar masses. Further analysis allowed a greater restriction of the mass values to the interval 1.17–1.60 solar masses, with the total system mass measured to be 2.73–2.78 solar masses. The inclusion of the Virgo detector in the observation effort allowed for an improvement of the localization of the source by a factor of 10. This in turn facilitated the electromagnetic follow-up of the event. In contrast to the case of binary black hole mergers, binary neutron star mergers were expected to yield an electromagnetic counterpart, that is, a light signal associated with the event. A gamma-ray burst (GRB 170817A) was detected by the Fermi Gamma-ray Space Telescope, occurring 1.7 seconds after the gravitational wave transient. The signal, originating near the galaxy NGC 4993, was associated with the neutron star merger. This was corroborated by the electromagnetic follow-up of the event (AT 2017gfo), involving 70 telescopes and observatories and yielding observations over a large region of the electromagnetic spectrum which further confirmed the neutron star nature of the merged objects and the associated kilonova.

In 2021, the detection of the first two neutron star-black hole binaries by the LIGO and Virgo detectors was published in the Astrophysical Journal Letters, allowing the first bounds to be set on the abundance of such systems. No neutron star-black hole binary had ever been observed using conventional means before the gravitational observation.

Microscopic sources
In 1964, L. Halpern and B. Laurent theoretically proved that gravitational spin-2 electron transitions are possible in atoms. Compared to electric and magnetic transitions the emission probability is extremely low. Stimulated emission was discussed for increasing the efficiency of the process. Due to the lack of mirrors or resonators for gravitational waves, they determined that a single-pass GASER (a kind of laser emitting gravitational waves) is practically unfeasible. In 1998, the possibility of a different implementation of the above theoretical analysis was proposed by Giorgio Fontana. The required coherence for a practical GASER could be obtained by Cooper pairs in superconductors that are characterized by a macroscopic collective wave-function. Cuprate high temperature superconductors are characterized by the presence of s-wave and d-wave Cooper pairs. Transitions between s-wave and d-wave are gravitational spin-2. Out of equilibrium conditions can be induced by injecting s-wave Cooper pairs from a low temperature superconductor, for instance lead or niobium, which is pure s-wave, by means of a Josephson junction with high critical current.
The amplification mechanism can be described as the effect of superradiance, and 10 cubic centimeters of cuprate high temperature superconductor seem sufficient for the mechanism to properly work. A detailed description of the approach can be found in "High Temperature Superconductors as Quantum Sources of Gravitational Waves: The HTSC GASER", Chapter 3 of this book.

In fiction
An episode of the 1962 Russian science-fiction novel Space Apprentice by Arkady and Boris Strugatsky shows an experiment monitoring the propagation of gravitational waves at the expense of annihilating a chunk of asteroid 15 Eunomia the size of Mount Everest.
In Stanislaw Lem's 1986 novel Fiasco, a "gravity gun" or "gracer" (gravity amplification by collimated emission of resonance) is used to reshape a collapsar, so that the protagonists can exploit the extreme relativistic effects and make an interstellar journey.
In Greg Egan's 1997 novel Diaspora, the analysis of a gravitational wave signal from the inspiral of a nearby binary neutron star reveals that its collision and merger is imminent, implying a large gamma-ray burst is going to impact the Earth.
In Liu Cixin's 2006 Remembrance of Earth's Past series, gravitational waves are used as an interstellar broadcast signal, which serves as a central plot point in the conflict between civilizations within the galaxy.

See also
2017 Nobel Prize in Physics, which was awarded to three individual physicists for their role in the detection of gravitational waves
Anti-gravity
Artificial gravity
First observation of gravitational waves
Gravitational plane wave
Gravitational field
Gravitational-wave astronomy
Gravitational wave background
Gravitational-wave observatory
Gravitomagnetism
Graviton
Hawking radiation, for gravitationally induced electromagnetic radiation from black holes
HM Cancri
LISA, DECIGO and BBO – proposed space-based detectors
LIGO, Virgo interferometer, GEO600, KAGRA, and TAMA 300 – ground-based gravitational-wave detectors
Linearized gravity
Peres metric
pp-wave spacetime, for an important class of exact solutions modelling gravitational radiation
PSR B1913+16, the first binary pulsar discovered and the first experimental evidence for the existence of gravitational waves
Spin-flip, a consequence of gravitational wave emission from binary supermassive black holes
Sticky bead argument, for a physical way to see that gravitational radiation should carry energy
Tidal force

References

Further reading
Bartusiak, Marcia, Einstein's Unfinished Symphony (Joseph Henry Press, Washington, DC, 2000).
Landau, L.D. and Lifshitz, E.M., The Classical Theory of Fields (Pergamon Press, 1987).

Bibliography
Berry, Michael, Principles of Cosmology and Gravitation (Adam Hilger, Philadelphia, 1989).
Collins, Harry, Gravity's Shadow: The Search for Gravitational Waves (University of Chicago Press, 2004).
Collins, Harry, Gravity's Kiss: The Detection of Gravitational Waves (The MIT Press, Cambridge MA, 2017).
Davies, P.C.W., The Search for Gravity Waves (Cambridge University Press, 1980).
Grote, Hartmut, Gravitational Waves: A history of discovery (CRC Press, Taylor & Francis Group, Boca Raton/London/New York, 2020).
Peebles, P. J. E., Principles of Physical Cosmology (Princeton University Press, Princeton, 1993).
Wheeler, John Archibald and Ciufolini, Ignazio, Gravitation and Inertia (Princeton University Press, Princeton, 1995).
Woolf, Harry, ed., Some Strangeness in the Proportion (Addison–Wesley, Reading, MA, 1980).
External links
Laser Interferometer Gravitational Wave Observatory. LIGO Laboratory, operated by the California Institute of Technology and the Massachusetts Institute of Technology
Gravitational Waves – Collected articles at Nature Journal
Gravitational Waves – Collected articles, Scientific American
Video (94:34) – Scientific Talk on Discovery, Barry Barish, CERN (11 February 2016)
Newton's law of universal gravitation
Newton's law of universal gravitation states that every particle attracts every other particle in the universe with a force that is proportional to the product of their masses and inversely proportional to the square of the distance between their centers. Separated objects attract and are attracted as if all their mass were concentrated at their centers. The publication of the law has become known as the "first great unification", as it marked the unification of the previously described phenomena of gravity on Earth with known astronomical behaviors. This is a general physical law derived from empirical observations by what Isaac Newton called inductive reasoning. It is a part of classical mechanics and was formulated in Newton's work Philosophiæ Naturalis Principia Mathematica ("the Principia"), first published on 5 July 1687. The equation for universal gravitation thus takes the form
F = G m1 m2 / r^2,
where F is the gravitational force acting between two objects, m1 and m2 are the masses of the objects, r is the distance between the centers of their masses, and G is the gravitational constant. The first test of Newton's law of gravitation between masses in the laboratory was the Cavendish experiment conducted by the British scientist Henry Cavendish in 1798. It took place 111 years after the publication of Newton's Principia and approximately 71 years after his death. Newton's law of gravitation resembles Coulomb's law of electrical forces, which is used to calculate the magnitude of the electrical force arising between two charged bodies. Both are inverse-square laws, where force is inversely proportional to the square of the distance between the bodies. Coulomb's law has charge in place of mass and a different constant. Newton's law was later superseded by Albert Einstein's theory of general relativity, but the universality of the gravitational constant is intact and the law still continues to be used as an excellent approximation of the effects of gravity in most applications. Relativity is required only when there is a need for extreme accuracy, or when dealing with very strong gravitational fields, such as those found near extremely massive and dense objects, or at small distances (such as Mercury's orbit around the Sun).

History
Around 1600, the scientific method began to take root. René Descartes started over with a more fundamental view, developing ideas of matter and action independent of theology. Galileo Galilei wrote about experimental measurements of falling and rolling objects. Johannes Kepler's laws of planetary motion summarized Tycho Brahe's astronomical observations. Around 1666 Isaac Newton developed the idea that Kepler's laws must also apply to the orbit of the Moon around the Earth and then to all objects on Earth. The analysis required assuming that the gravitational force acted as if all of the mass of the Earth were concentrated at its center, an unproven conjecture at that time. His calculation of the Moon's orbital period was within 16% of the known value. By 1680, new values for the diameter of the Earth improved his calculated orbit time to within 1.6%, but more importantly Newton had found a proof of his earlier conjecture. In 1687 Newton published his Principia, which combined his laws of motion with new mathematical analysis to explain Kepler's empirical results. His explanation was in the form of a law of universal gravitation: any two bodies are attracted by a force proportional to their mass and inversely proportional to their separation squared.
Newton's original formula was F ∝ (m1 m2) / r^2, where the symbol ∝ means "is proportional to". To make this into an equal-sided formula or equation, there needed to be a multiplying factor or constant that would give the correct force of gravity no matter the value of the masses or distance between them (the gravitational constant). Newton would need an accurate measure of this constant to prove his inverse-square law. When Newton presented Book 1 of the unpublished text in April 1686 to the Royal Society, Robert Hooke made a claim that Newton had obtained the inverse square law from him, ultimately a frivolous accusation.

Newton's "causes hitherto unknown"
While Newton was able to formulate his law of gravity in his monumental work, he was deeply uncomfortable with the notion of "action at a distance" that his equations implied. In 1692, in his third letter to Bentley, he wrote: "That one body may act upon another at a distance through a vacuum without the mediation of anything else, by and through which their action and force may be conveyed from one another, is to me so great an absurdity that, I believe, no man who has in philosophic matters a competent faculty of thinking could ever fall into it." He never, in his words, "assigned the cause of this power". In all other cases, he used the phenomenon of motion to explain the origin of various forces acting on bodies, but in the case of gravity, he was unable to experimentally identify the motion that produces the force of gravity (although he invented two mechanical hypotheses in 1675 and 1717). Moreover, he refused to even offer a hypothesis as to the cause of this force on grounds that to do so was contrary to sound science. He lamented that "philosophers have hitherto attempted the search of nature in vain" for the source of the gravitational force, as he was convinced "by many reasons" that there were "causes hitherto unknown" that were fundamental to all the "phenomena of nature". These fundamental phenomena are still under investigation and, though hypotheses abound, the definitive answer has yet to be found. And in Newton's 1713 General Scholium in the second edition of Principia: "I have not yet been able to discover the cause of these properties of gravity from phenomena and I feign no hypotheses.... It is enough that gravity does really exist and acts according to the laws I have explained, and that it abundantly serves to account for all the motions of celestial bodies."

Modern form
In modern language, the law states that every point mass attracts every other point mass with a force F = G m1 m2 / r^2 acting along the line joining the two points. Assuming SI units, F is measured in newtons (N), m1 and m2 in kilograms (kg), r in meters (m), and the constant G is approximately 6.674 × 10^−11 m^3 kg^−1 s^−2. The value of the constant G was first accurately determined from the results of the Cavendish experiment conducted by the British scientist Henry Cavendish in 1798, although Cavendish did not himself calculate a numerical value for G. This experiment was also the first test of Newton's theory of gravitation between masses in the laboratory. It took place 111 years after the publication of Newton's Principia and 71 years after Newton's death, so none of Newton's calculations could use the value of G; instead he could only calculate a force relative to another force.
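A minimal numerical illustration of the modern form of the law; the masses and mean distance below are standard reference values for the Earth and the Moon, added here purely as example inputs.

```python
# F = G * m1 * m2 / r**2 for the Earth-Moon pair.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
m_earth = 5.972e24   # kg
m_moon = 7.348e22    # kg
r = 3.844e8          # mean Earth-Moon distance, m

F = G * m_earth * m_moon / r**2
print(f"Earth-Moon gravitational force ≈ {F:.2e} N")   # ~2 x 10^20 N
```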
Bodies with spatial extent
If the bodies in question have spatial extent (as opposed to being point masses), then the gravitational force between them is calculated by summing the contributions of the notional point masses that constitute the bodies. In the limit, as the component point masses become "infinitely small", this entails integrating the force (in vector form, see below) over the extents of the two bodies. In this way, it can be shown that an object with a spherically symmetric distribution of mass exerts the same gravitational attraction on external bodies as if all the object's mass were concentrated at a point at its center. (This is not generally true for non-spherically symmetrical bodies.) For points inside a spherically symmetric distribution of matter, Newton's shell theorem can be used to find the gravitational force. The theorem tells us how different parts of the mass distribution affect the gravitational force measured at a point located a distance r0 from the center of the mass distribution:
The portion of the mass that is located at radii r < r0 causes the same force at the radius r0 as if all of the mass enclosed within a sphere of radius r0 was concentrated at the center of the mass distribution (as noted above).
The portion of the mass that is located at radii r > r0 exerts no net gravitational force at the radius r0 from the center. That is, the individual gravitational forces exerted on a point at radius r0 by the elements of the mass outside the radius r0 cancel each other.
As a consequence, for example, within a shell of uniform thickness and density there is no net gravitational acceleration anywhere within the hollow sphere.

Vector form
Newton's law of universal gravitation can be written as a vector equation to account for the direction of the gravitational force as well as its magnitude. In this formula, quantities in bold represent vectors:
F21 = −G (m1 m2 / |r21|^2) r̂21,
where F21 is the force applied on body 2 exerted by body 1, G is the gravitational constant, m1 and m2 are respectively the masses of bodies 1 and 2, r21 = r2 − r1 is the displacement vector between bodies 1 and 2, and r̂21 = r21 / |r21| is the unit vector from body 1 to body 2. It can be seen that the vector form of the equation is the same as the scalar form given earlier, except that F is now a vector quantity, and the right hand side is multiplied by the appropriate unit vector. Also, it can be seen that F12 = −F21.

Gravity field
The gravitational field is a vector field that describes the gravitational force that would be applied on an object in any given point in space, per unit mass. It is actually equal to the gravitational acceleration at that point. It is a generalisation of the vector form, which becomes particularly useful if more than two objects are involved (such as a rocket between the Earth and the Moon). For two objects (e.g. object 2 is a rocket, object 1 the Earth), we simply write r instead of r12 and m instead of m2 and define the gravitational field g(r) as
g(r) = −G (m1 / |r|^2) r̂,
so that we can write
F(r) = m g(r).
This formulation is dependent on the objects causing the field. The field has units of acceleration; in SI, this is m/s^2. Gravitational fields are also conservative; that is, the work done by gravity from one position to another is path-independent. This has the consequence that there exists a gravitational potential field V(r) such that
g(r) = −∇V(r).
If m1 is a point mass or the mass of a sphere with homogeneous mass distribution, the force field g(r) outside the sphere is isotropic, i.e., depends only on the distance r from the center of the sphere. In that case
V(r) = −G m1 / r.
As per Gauss's law, the field in a symmetric body can be found by the mathematical equation
∮ g · dA = −4πG M_enc,
where the integral is taken over a closed surface and M_enc is the mass enclosed by the surface. Hence, for a hollow sphere of radius R and total mass M,
|g(r)| = 0 for r < R, and |g(r)| = GM/r^2 for r ≥ R.
For a uniform solid sphere of radius R and total mass M,
|g(r)| = GMr/R^3 for r < R, and |g(r)| = GM/r^2 for r ≥ R.
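A short Python sketch of the piecewise field just described, evaluated for a uniform sphere with Earth-like mass and radius (assumed values; the real Earth is not of uniform density, so this is only an illustration of the shell-theorem result):

```python
# Gravitational acceleration of a uniform solid sphere of mass M and radius R:
#   g(r) = G*M*r/R**3 inside (r < R),  G*M/r**2 outside (r >= R).
G = 6.674e-11        # m^3 kg^-1 s^-2
M = 5.972e24         # sphere mass in kg (Earth-like, assumed uniform)
R = 6.371e6          # sphere radius in m

def g(r: float) -> float:
    if r < R:
        return G * M * r / R**3   # only the mass inside radius r contributes
    return G * M / r**2           # all the mass acts as if concentrated at the centre

for r in (0.5 * R, R, 2.0 * R):
    print(f"r = {r / R:.1f} R  ->  g ≈ {g(r):.2f} m/s^2")
# 0.5 R -> ~4.9, R -> ~9.8, 2 R -> ~2.5 m/s^2
```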
Limitations
Newton's description of gravity is sufficiently accurate for many practical purposes and is therefore widely used. Deviations from it are small when the dimensionless quantities φ/c^2 and (v/c)^2 are both much less than one, where φ is the gravitational potential, v is the velocity of the objects being studied, and c is the speed of light in vacuum. For example, Newtonian gravity provides an accurate description of the Earth/Sun system, since
φ/c^2 = G M_sun / (r_orbit c^2) ≈ 10^−8 and (v_Earth/c)^2 ≈ 10^−8,
where r_orbit is the radius of the Earth's orbit around the Sun. In situations where either dimensionless parameter is large, then general relativity must be used to describe the system. General relativity reduces to Newtonian gravity in the limit of small potential and low velocities, so Newton's law of gravitation is often said to be the low-gravity limit of general relativity.

Observations conflicting with Newton's formula
Newton's theory does not fully explain the precession of the perihelion of the orbits of the planets, especially that of Mercury, which was detected long after the life of Newton. There is a 43 arcsecond per century discrepancy between the Newtonian calculation, which arises only from the gravitational attractions from the other planets, and the observed precession, made with advanced telescopes during the 19th century.
The predicted angular deflection of light rays by gravity (treated as particles travelling at the expected speed) that is calculated by using Newton's theory is only one-half of the deflection that is observed by astronomers. Calculations using general relativity are in much closer agreement with the astronomical observations.
In spiral galaxies, the orbiting of stars around their centers seems to strongly disobey both Newton's law of universal gravitation and general relativity. Astrophysicists, however, explain this marked phenomenon by assuming the presence of large amounts of dark matter.

Einstein's solution
The first two conflicts with observations above were explained by Einstein's theory of general relativity, in which gravitation is a manifestation of curved spacetime instead of being due to a force propagated between bodies. In Einstein's theory, energy and momentum distort spacetime in their vicinity, and other particles move in trajectories determined by the geometry of spacetime. This allowed a description of the motions of light and mass that was consistent with all available observations. In general relativity, the gravitational force is a fictitious force resulting from the curvature of spacetime, because the gravitational acceleration of a body in free fall is due to its world line being a geodesic of spacetime.

Extensions
In recent years, quests for non-inverse square terms in the law of gravity have been carried out by neutron interferometry.

Solutions of Newton's law of universal gravitation
The n-body problem is an ancient, classical problem of predicting the individual motions of a group of celestial objects interacting with each other gravitationally. Solving this problem, from the time of the Greeks and on, has been motivated by the desire to understand the motions of the Sun, planets and the visible stars. In the 20th century, understanding the dynamics of globular cluster star systems became an important n-body problem too. The n-body problem in general relativity is considerably more difficult to solve.
The classical physical problem can be informally stated as: given the quasi-steady orbital properties (instantaneous position, velocity and time) of a group of celestial bodies, predict their interactive forces; and consequently, predict their true orbital motions for all future times. The two-body problem has been completely solved, as has the restricted three-body problem.

See also

References

External links
Newton's Law of Universal Gravitation Javascript calculator
Electromagnetic pulse
An electromagnetic pulse (EMP), also referred to as a transient electromagnetic disturbance (TED), is a brief burst of electromagnetic energy. The origin of an EMP can be natural or artificial, and can occur as an electromagnetic field, as an electric field, as a magnetic field, or as a conducted electric current. The electromagnetic interference caused by an EMP can disrupt communications and damage electronic equipment. An EMP such as a lightning strike can physically damage objects such as buildings and aircraft. The management of EMP effects is a branch of electromagnetic compatibility (EMC) engineering. The first recorded damage from an electromagnetic pulse came with the solar storm of August 1859, or the Carrington Event. In modern warfare, weapons delivering a high energy EMP are designed to disrupt communications equipment, the computers needed to operate modern warplanes, or even put the entire electrical network of a target country out of commission.

General characteristics
An electromagnetic pulse is a short surge of electromagnetic energy. Its short duration means that it will be spread over a range of frequencies. Pulses are typically characterized by:
The mode of energy transfer (radiated, electric, magnetic or conducted).
The range or spectrum of frequencies present.
Pulse waveform: shape, duration and amplitude.
The frequency spectrum and the pulse waveform are interrelated via the Fourier transform, which describes how component waveforms may sum to the observed frequency spectrum.

Types of energy
EMP energy may be transferred in any of four forms:
Electric field
Magnetic field
Electromagnetic radiation
Electrical conduction
According to Maxwell's equations, a pulse of electric energy will always be accompanied by a pulse of magnetic energy. In a typical pulse, either the electric or the magnetic form will dominate. It can be shown that the non-linear Maxwell's equations can have time-dependent self-similar electromagnetic shock wave solutions where the electric and the magnetic field components have a discontinuity. In general, only radiation acts over long distances, with the magnetic and electric fields acting over short distances. There are a few exceptions, such as a solar magnetic flare.

Frequency ranges
A pulse of electromagnetic energy typically comprises many frequencies from very low to some upper limit depending on the source. The range defined as EMP, sometimes referred to as "DC [direct current] to daylight", excludes the highest frequencies comprising the optical (infrared, visible, ultraviolet) and ionizing (X and gamma rays) ranges. Some types of EMP events can leave an optical trail, such as lightning and sparks, but these are side effects of the current flow through the air and are not part of the EMP itself.

Pulse waveforms
The waveform of a pulse describes how its instantaneous amplitude (field strength or current) changes over time. Real pulses tend to be quite complicated, so simplified models are often used. Such a model is typically described either in a diagram or as a mathematical equation. Most electromagnetic pulses have a very sharp leading edge, building up quickly to their maximum level. The classic model is a double-exponential curve which climbs steeply, quickly reaches a peak and then decays more slowly. However, pulses from a controlled switching circuit often approximate the form of a rectangular or "square" pulse. EMP events usually induce a corresponding signal in the surrounding environment or material. Coupling usually occurs most strongly over a relatively narrow frequency band, leading to a characteristic damped sine wave. Visually it is shown as a high frequency sine wave growing and decaying within the longer-lived envelope of the double-exponential curve. A damped sine wave typically has much lower energy and a narrower frequency spread than the original pulse, due to the transfer characteristic of the coupling mode. In practice, EMP test equipment often injects these damped sine waves directly rather than attempting to recreate the high-energy threat pulses. In a pulse train, such as from a digital clock circuit, the waveform is repeated at regular intervals. A single complete pulse cycle is sufficient to characterise such a regular, repetitive train.
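The two model waveforms just described can be generated in a few lines of Python. The rise and decay rates, ringing frequency and damping time used below are arbitrary illustrative values, not the parameters of any particular published EMP standard.

```python
import numpy as np

# Illustrative double-exponential pulse E(t) = E0 * (exp(-a*t) - exp(-b*t)),
# plus a damped sine wave standing in for the narrow-band coupled response.
E0, a, b = 1.0, 4e6, 6e8          # peak scale, decay rate (1/s), rise rate (1/s) - assumed
f_ring, tau = 30e6, 0.5e-6        # assumed ringing frequency (Hz) and damping time (s)

t = np.linspace(0.0, 2e-6, 2000)  # 0 to 2 microseconds
double_exp = E0 * (np.exp(-a * t) - np.exp(-b * t))              # sharp rise, slower decay
damped_sine = np.exp(-t / tau) * np.sin(2 * np.pi * f_ring * t)  # coupled ringing

print("peak of double-exponential pulse:", double_exp.max())
print("time of peak (s):", t[np.argmax(double_exp)])
```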
Coupling usually occurs most strongly over a relatively narrow frequency band, leading to a characteristic damped sine wave. Visually it is shown as a high frequency sine wave growing and decaying within the longer-lived envelope of the double-exponential curve. A damped sinewave typically has much lower energy and a narrower frequency spread than the original pulse, due to the transfer characteristic of the coupling mode. In practice, EMP test equipment often injects these damped sinewaves directly rather than attempting to recreate the high-energy threat pulses. In a pulse train, such as from a digital clock circuit, the waveform is repeated at regular intervals. A single complete pulse cycle is sufficient to characterise such a regular, repetitive train. Types An EMP arises where the source emits a short-duration pulse of energy. The energy is usually broadband by nature, although it often excites a relatively narrow-band damped sine wave response in the surrounding environment. Some types are generated as repetitive and regular pulse trains. Different types of EMP arise from natural, man-made, and weapons effects. Types of natural EMP events include: Lightning electromagnetic pulse (LEMP). The discharge is typically an initial current flow of perhaps millions of amps, followed by a train of pulses of decreasing energy. Electrostatic discharge (ESD), as a result of two charged objects coming into proximity or even contact. Meteoric EMP. The discharge of electromagnetic energy resulting from either the impact of a meteoroid with a spacecraft or the explosive breakup of a meteoroid passing through the Earth's atmosphere. Coronal mass ejection (CME), sometimes referred to as a solar EMP. A burst of plasma and accompanying magnetic field, ejected from the solar corona and released into the solar wind. Types of (civil) man-made EMP events include: Switching action of electrical circuitry, whether isolated or repetitive (as a pulse train). Electric motors can create a train of pulses as the internal electrical contacts make and break connections as the armature rotates. Gasoline engine ignition systems can create a train of pulses as the spark plugs are energized or fired. Continual switching actions of digital electronic circuitry. Power line surges. These can be up to several kilovolts, enough to damage electronic equipment that is insufficiently protected. Types of military EMP include: Nuclear electromagnetic pulse (NEMP), as a result of a nuclear explosion. A variant of this is the high altitude nuclear EMP (HEMP), which produces a secondary pulse due to particle interactions with the Earth's atmosphere and magnetic field. Non-nuclear electromagnetic pulse (NNEMP) weapons. Lightning Lightning is unusual in that it typically has a preliminary "leader" discharge of low energy building up to the main pulse, which in turn may be followed at intervals by several smaller bursts. Electrostatic discharge (ESD) ESD events are characterized by high voltages of many kV, but small currents sometimes cause visible sparks. ESD is treated as a small, localized phenomenon, although technically a lightning flash is a very large ESD event. ESD can also be man-made, as in the shock received from a Van de Graaff generator. An ESD event can damage electronic circuitry by injecting a high-voltage pulse, besides giving people an unpleasant shock. Such an ESD event can also create sparks, which may in turn ignite fires or fuel-vapour explosions. 
For this reason, before refueling an aircraft or exposing any fuel vapor to the air, the fuel nozzle is first connected to the aircraft to safely discharge any static. Switching pulses The switching action of an electrical circuit creates a sharp change in the flow of electricity. This sharp change is a form of EMP. Simple electrical sources include inductive loads such as relays, solenoids, and brush contacts in electric motors. These typically send a pulse down any electrical connections present, as well as radiating a pulse of energy. The amplitude is usually small and the signal may be treated as "noise" or "interference". The switching off or "opening" of a circuit causes an abrupt change in the current flowing. This can in turn cause a large pulse in the electric field across the open contacts, causing arcing and damage. It is often necessary to incorporate design features to limit such effects. Electronic devices such as vacuum tubes or valves, transistors, and diodes can also switch on and off very quickly, causing similar issues. One-off pulses may be caused by solid-state switches and other devices used only occasionally. However, the many millions of transistors in a modern computer may switch repeatedly at frequencies above 1  GHz, causing interference that appears to be continuous. Nuclear electromagnetic pulse (NEMP) A nuclear electromagnetic pulse is the abrupt pulse of electromagnetic radiation resulting from a nuclear explosion. The resulting rapidly changing electric fields and magnetic fields may couple with electrical/electronic systems to produce damaging current and voltage surges. The intense gamma radiation emitted can also ionize the surrounding air, creating a secondary EMP as the atoms of air first lose their electrons and then regain them. NEMP weapons are designed to maximize such EMP effects as the primary damage mechanism, and some are capable of destroying susceptible electronic equipment over a wide area. A high-altitude electromagnetic pulse (HEMP) weapon is a NEMP warhead designed to be detonated far above the Earth's surface. The explosion releases a blast of gamma rays into the mid-stratosphere, which ionizes as a secondary effect and the resultant energetic free electrons interact with the Earth's magnetic field to produce a much stronger EMP than is normally produced in the denser air at lower altitudes. Non-nuclear electromagnetic pulse (NNEMP) Non-nuclear electromagnetic pulse (NNEMP) is a weapon-generated electromagnetic pulse without use of nuclear technology. Devices that can achieve this objective include a large low-inductance capacitor bank discharged into a single-loop antenna, a microwave generator, and an explosively pumped flux compression generator. To achieve the frequency characteristics of the pulse needed for optimal coupling into the target, wave-shaping circuits or microwave generators are added between the pulse source and the antenna. Vircators are vacuum tubes that are particularly suitable for microwave conversion of high-energy pulses. NNEMP generators can be carried as a payload of bombs, cruise missiles (such as the CHAMP missile) and drones, with diminished mechanical, thermal and ionizing radiation effects, but without the consequences of deploying nuclear weapons. The range of NNEMP weapons is much less than nuclear EMP. Nearly all NNEMP devices used as weapons require chemical explosives as their initial energy source, producing only one millionth the energy of nuclear explosives of similar weight. 
The electromagnetic pulse from NNEMP weapons must come from within the weapon, while nuclear weapons generate EMP as a secondary effect. These facts limit the range of NNEMP weapons, but allow finer target discrimination. The effect of small e-bombs has proven to be sufficient for certain terrorist or military operations. Examples of such operations include the destruction of electronic control systems critical to the operation of many ground vehicles and aircraft. The concept of the explosively pumped flux compression generator for generating a non-nuclear electromagnetic pulse was conceived as early as 1951 by Andrei Sakharov in the Soviet Union, but nations kept work on non-nuclear EMP classified until similar ideas emerged in other nations. Effects Minor EMP events, and especially pulse trains, cause low levels of electrical noise or interference which can affect the operation of susceptible devices. For example, a common problem in the mid-twentieth century was interference emitted by the ignition systems of gasoline engines, which caused radio sets to crackle and TV sets to show stripes on the screen. CISPR 25 was established to set threshold standards that vehicles must meet for electromagnetic interference(EMI) emissions. At a high voltage level an EMP can induce a spark, for example from an electrostatic discharge when fuelling a gasoline-engined vehicle. Such sparks have been known to cause fuel-air explosions and precautions must be taken to prevent them. A large and energetic EMP can induce high currents and voltages in the victim unit, temporarily disrupting its function or even permanently damaging it. A powerful EMP can also directly affect magnetic materials and corrupt the data stored on media such as magnetic tape and computer hard drives. Hard drives are usually shielded by heavy metal casings. Some IT asset disposal service providers and computer recyclers use a controlled EMP to wipe such magnetic media. A very large EMP event, such as a lightning strike or an air bursted nuclear weapon, is also capable of damaging objects such as trees, buildings and aircraft directly, either through heating effects or the disruptive effects of the very large magnetic field generated by the current. An indirect effect can be electrical fires caused by heating. Most engineered structures and systems require some form of protection against lightning to be designed in. A good means of protection is a Faraday shield designed to protect certain items from being destroyed. Control Like any electromagnetic interference, the threat from EMP is subject to control measures. This is true whether the threat is natural or man-made. Therefore, most control measures focus on the susceptibility of equipment to EMP effects, and hardening or protecting it from harm. Man-made sources, other than weapons, are also subject to control measures in order to limit the amount of pulse energy emitted. The discipline of ensuring correct equipment operation in the presence of EMP and other RF threats is known as electromagnetic compatibility (EMC). Test simulation To test the effects of EMP on engineered systems and equipment, an EMP simulator may be used. Induced pulse simulation Induced pulses are of much lower energy than threat pulses and so are more practicable to create, but they are less predictable. A common test technique is to use a current clamp in reverse, to inject a range of damped sine wave signals into a cable connected to the equipment under test. 
The damped sine wave generator is able to reproduce the range of induced signals likely to occur. Threat pulse simulation Sometimes the threat pulse itself is simulated in a repeatable way. The pulse may be reproduced at low energy in order to characterise the subject's response prior to damped sinewave injection, or at high energy to recreate the actual threat conditions. A small-scale ESD simulator may be hand-held. Bench- or room-sized simulators come in a range of designs, depending on the type and level of threat to be generated. At the top end of the scale, large outdoor test facilities incorporating high-energy EMP simulators have been built by several countries. The largest facilities are able to test whole vehicles including ships and aircraft for their susceptibility to EMP. Nearly all of these large EMP simulators used a specialized version of a Marx generator. Examples include the huge wooden-structured ATLAS-I simulator (also known as TRESTLE) at Sandia National Labs, New Mexico, which was at one time the world's largest EMP simulator. Papers on this and other large EMP simulators used by the United States during the latter part of the Cold War, along with more general information about electromagnetic pulses, are now in the care of the SUMMA Foundation, which is hosted at the University of New Mexico. The US Navy also has a large facility called the Electro Magnetic Pulse Radiation Environmental Simulator for Ships I (EMPRESS I). Safety High-level EMP signals can pose a threat to human safety. In such circumstances, direct contact with a live electrical conductor should be avoided. Where this occurs, such as when touching a Van de Graaff generator or other highly charged object, care must be taken to release the object and then discharge the body through a high resistance, in order to avoid the risk of a harmful shock pulse when stepping away. Very high electric field strengths can cause breakdown of the air and a potentially lethal arc current similar to lightning to flow, but electric field strengths of up to 200 kV/m are regarded as safe. According to research from Edd Gent, a 2019 report by the Electric Power Research Institute, which is funded by utility companies, found that a large EMP attack would probably cause regional blackouts but not a nationwide grid failure and that recovery times would be similar to those of other large-scale outages. It is not known how long these electrical blackouts would last, or what extent of damage would occur across the country. It is possible that neighboring countries of the U.S. could also be affected by such an attack, depending on the targeted area and people. According to an article from Naureen Malik, with North Korea's increasingly successful missile and warhead tests in mind, Congress moved to renew funding for the Commission to Assess the Threat to the U.S. from Electromagnetic Pulse Attack as part of the National Defense Authorization Act. According to research from Yoshida Reiji, in a 2016 article for the Tokyo-based nonprofit organization Center for Information and Security Trade Control, Onizuka warned that a high-altitude EMP attack would damage or destroy Japan's power, communications and transport systems as well as disable banks, hospitals and nuclear power plants. In popular culture By 1981, a number of articles on electromagnetic pulse in the popular press spread knowledge of the EMP phenomenon into the popular culture. EMP has been subsequently used in a wide variety of fiction and other aspects of popular culture. 
Popular media often depict EMP effects incorrectly, causing misunderstandings among the public and even professionals. Official efforts have been made in the U.S. to remedy these misconceptions. The novel One Second After by William R. Forstchen, and its sequels One Year After, The Final Day and Five Years After, portray the story of a fictional character named John Matherson and his community in Black Mountain, North Carolina, after the US loses a war in which an EMP attack "sends our nation [the US] back to the Dark Ages". See also References Citations Sources Katayev, I. G. (1966). Electromagnetic Shock Waves. London: Iliffe Books Ltd., Dorset House, Stanford Street. External links TRESTLE: Landmark of the Cold War, a short documentary film on the SUMMA Foundation website Electromagnetic compatibility Electromagnetic radiation Electronic warfare Energy weapons Nuclear weapons Pulsed power Nuclear warfare
0.770747
0.999088
0.770044
Vorticity equation
The vorticity equation of fluid dynamics describes the evolution of the vorticity of a particle of a fluid as it moves with its flow; that is, the local rotation of the fluid (in terms of vector calculus this is the curl of the flow velocity). The governing equation is:

\frac{D\boldsymbol\omega}{Dt} = \frac{\partial\boldsymbol\omega}{\partial t} + (\mathbf{u}\cdot\nabla)\boldsymbol\omega = (\boldsymbol\omega\cdot\nabla)\mathbf{u} - \boldsymbol\omega(\nabla\cdot\mathbf{u}) + \frac{1}{\rho^2}\nabla\rho\times\nabla p + \nabla\times\left(\frac{\nabla\cdot\boldsymbol\tau}{\rho}\right) + \nabla\times\left(\frac{\mathbf{B}}{\rho}\right)

where D/Dt is the material derivative operator, \mathbf{u} is the flow velocity, \rho is the local fluid density, p is the local pressure, \boldsymbol\tau is the viscous stress tensor and \mathbf{B} represents the sum of the external body forces. The first source term on the right hand side represents vortex stretching. The equation is valid in the absence of any concentrated torques and line forces for a compressible, Newtonian fluid. In the case of incompressible flow (i.e., low Mach number) and isotropic fluids, with conservative body forces, the equation simplifies to the vorticity transport equation:

\frac{D\boldsymbol\omega}{Dt} = (\boldsymbol\omega\cdot\nabla)\mathbf{u} + \nu\nabla^2\boldsymbol\omega

where \nu is the kinematic viscosity and \nabla^2 is the Laplace operator. Under the further assumption of two-dimensional flow, the equation simplifies to:

\frac{D\omega}{Dt} = \nu\nabla^2\omega

Physical interpretation
The term D\boldsymbol\omega/Dt on the left-hand side is the material derivative of the vorticity vector \boldsymbol\omega. It describes the rate of change of vorticity of the moving fluid particle. This change can be attributed to unsteadiness in the flow (\partial\boldsymbol\omega/\partial t, the unsteady term) or due to the motion of the fluid particle as it moves from one point to another ((\mathbf{u}\cdot\nabla)\boldsymbol\omega, the convection term). The term (\boldsymbol\omega\cdot\nabla)\mathbf{u} on the right-hand side describes the stretching or tilting of vorticity due to the flow velocity gradients. Note that (\boldsymbol\omega\cdot\nabla)\mathbf{u} is a vector quantity, as \boldsymbol\omega\cdot\nabla is a scalar differential operator, while \nabla\mathbf{u} is a nine-element tensor quantity. The term \boldsymbol\omega(\nabla\cdot\mathbf{u}) describes stretching of vorticity due to flow compressibility. It follows from the continuity equation, namely

\frac{\partial\rho}{\partial t} + \nabla\cdot(\rho\mathbf{u}) = 0 \quad\Longleftrightarrow\quad \nabla\cdot\mathbf{u} = \frac{1}{v}\frac{Dv}{Dt}

where v = 1/\rho is the specific volume of the fluid element. One can think of \nabla\cdot\mathbf{u} as a measure of flow compressibility. Sometimes the negative sign is included in the term. The term \frac{1}{\rho^2}\nabla\rho\times\nabla p is the baroclinic term. It accounts for the changes in the vorticity due to the intersection of density and pressure surfaces. The term \nabla\times\left(\frac{\nabla\cdot\boldsymbol\tau}{\rho}\right) accounts for the diffusion of vorticity due to the viscous effects. The term \nabla\times\left(\frac{\mathbf{B}}{\rho}\right) provides for changes due to external body forces. These are forces that are spread over a three-dimensional region of the fluid, such as gravity or electromagnetic forces (as opposed to forces that act only over a surface, like drag on a wall, or a line, like surface tension around a meniscus).

Simplifications
In case of conservative body forces, \nabla\times(\mathbf{B}/\rho) = 0. For a barotropic fluid, \nabla\rho\times\nabla p = 0. This is also true for a constant density fluid (including incompressible fluid) where \nabla\rho = 0. Note that this is not the same as an incompressible flow, for which the barotropic term cannot be neglected. For inviscid fluids, the viscosity tensor \boldsymbol\tau is zero. Thus for an inviscid, barotropic fluid with conservative body forces, the vorticity equation simplifies to

\frac{D\boldsymbol\omega}{Dt} = (\boldsymbol\omega\cdot\nabla)\mathbf{u} - \boldsymbol\omega(\nabla\cdot\mathbf{u})

Alternately, in case of incompressible, inviscid fluid with conservative body forces,

\frac{D\boldsymbol\omega}{Dt} = (\boldsymbol\omega\cdot\nabla)\mathbf{u}

For a brief review of additional cases and simplifications, see the references. For the vorticity equation in turbulence theory, in the context of the flows in oceans and atmosphere, refer to the further reading.

Derivation
The vorticity equation can be derived from the Navier–Stokes equation for the conservation of momentum. In the absence of any concentrated torques and line forces, one obtains:

\frac{\partial\mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\mathbf{u} = -\frac{1}{\rho}\nabla p + \frac{\nabla\cdot\boldsymbol\tau}{\rho} + \frac{\mathbf{B}}{\rho}

Now, vorticity is defined as the curl of the flow velocity vector, \boldsymbol\omega = \nabla\times\mathbf{u}; taking the curl of the momentum equation yields the desired equation. The following identities are useful in derivation of the equation:

(\mathbf{u}\cdot\nabla)\mathbf{u} = \nabla\left(\tfrac{1}{2}|\mathbf{u}|^2\right) - \mathbf{u}\times\boldsymbol\omega
\nabla\times(\mathbf{u}\times\boldsymbol\omega) = (\boldsymbol\omega\cdot\nabla)\mathbf{u} - \boldsymbol\omega(\nabla\cdot\mathbf{u}) - (\mathbf{u}\cdot\nabla)\boldsymbol\omega \qquad (\text{using } \nabla\cdot\boldsymbol\omega = 0)
\nabla\times\nabla\phi = 0

where \phi is any scalar field.
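As an illustration of the two-dimensional transport form above, the following minimal finite-difference sketch advances the equation d(omega)/dt + u d(omega)/dx + v d(omega)/dy = nu * laplacian(omega) by one explicit Euler step on a doubly periodic grid with an externally supplied velocity field. It is meant only to show the structure of the equation, not to serve as a usable CFD solver; grid, time step and velocity field are all assumptions of the sketch:

    import numpy as np

    def step_vorticity_2d(omega, u, v, dx, dy, dt, nu):
        # Advection terms with central differences (periodic boundaries via np.roll).
        domega_dx = (np.roll(omega, -1, axis=1) - np.roll(omega, 1, axis=1)) / (2.0 * dx)
        domega_dy = (np.roll(omega, -1, axis=0) - np.roll(omega, 1, axis=0)) / (2.0 * dy)
        # Diffusion term: five-point Laplacian of the vorticity field.
        laplacian = ((np.roll(omega, -1, axis=1) - 2.0 * omega + np.roll(omega, 1, axis=1)) / dx**2
                     + (np.roll(omega, -1, axis=0) - 2.0 * omega + np.roll(omega, 1, axis=0)) / dy**2)
        # d(omega)/dt = -u d(omega)/dx - v d(omega)/dy + nu * laplacian(omega)
        return omega + dt * (-u * domega_dx - v * domega_dy + nu * laplacian)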
Tensor notation The vorticity equation can be expressed in tensor notation using Einstein's summation convention and the Levi-Civita symbol εijk. In specific sciences Atmospheric sciences In the atmospheric sciences, the vorticity equation can be stated in terms of the absolute vorticity of air with respect to an inertial frame, or of the vorticity with respect to the rotation of the Earth. In the absolute version, ζ denotes the polar component of the vorticity, ρ the atmospheric density, u, v, and w the components of wind velocity, and ∇H the 2-dimensional (i.e. horizontal-component-only) del. See also Vorticity Barotropic vorticity equation Vortex stretching Burgers vortex References Further reading Equations of fluid dynamics Transport phenomena
0.779688
0.987601
0.770021
Forces on sails
Forces on sails result from movement of air that interacts with sails and gives them motive power for sailing craft, including sailing ships, sailboats, windsurfers, ice boats, and sail-powered land vehicles. Similar principles in a rotating frame of reference apply to windmill sails and wind turbine blades, which are also wind-driven. They are differentiated from forces on wings, and propeller blades, the actions of which are not adjusted to the wind. Kites also power certain sailing craft, but do not employ a mast to support the airfoil and are beyond the scope of this article. Forces on sails depend on wind speed and direction and the speed and direction of the craft. The direction that the craft is traveling with respect to the "true wind" (the wind direction and speed over the surface) is called the point of sail. The speed of the craft at a given point of sail contributes to the "apparent wind"—the wind speed and direction as measured on the moving craft. The apparent wind on the sail creates a total aerodynamic force, which may be resolved into drag—the force component in the direction of the apparent wind—and lift—the force component normal (90°) to the apparent wind. Depending on the alignment of the sail with the apparent wind, lift or drag may be the predominant propulsive component. Total aerodynamic force also resolves into a forward, propulsive, driving force—resisted by the medium through or over which the craft is passing (e.g. through water, air, or over ice, sand)—and a lateral force, resisted by the underwater foils, ice runners, or wheels of the sailing craft. For apparent wind angles aligned with the entry point of the sail, the sail acts as an airfoil and lift is the predominant component of propulsion. For apparent wind angles behind the sail, lift diminishes and drag increases as the predominant component of propulsion. For a given true wind velocity over the surface, a sail can propel a craft to a higher speed, on points of sail when the entry point of the sail is aligned with the apparent wind, than it can with the entry point not aligned, because of a combination of the diminished force from airflow around the sail and the diminished apparent wind from the velocity of the craft. Because of limitations on speed through the water, displacement sailboats generally derive power from sails generating lift on points of sail that include close-hauled through broad reach (approximately 40° to 135° off the wind). Because of low friction over the surface and high speeds over the ice that create high apparent wind speeds for most points of sail, iceboats can derive power from lift further off the wind than displacement boats. Various mathematical models address lift and drag by taking into account the density of air, coefficients of lift and drag that result from the shape and area of the sail, and the speed and direction of the apparent wind, among other factors. This knowledge is applied to the design of sails in such a manner that sailors can adjust sails to the strength and direction of the apparent wind in order to provide motive power to sailing craft. Overview The combination of a sailing craft's speed and direction with respect to the wind, together with wind strength, generate an apparent wind velocity. When the craft is aligned in a direction where the sail can be adjusted to align with its leading edge parallel to the apparent wind, the sail acts as an airfoil to generate lift in a direction perpendicular to the apparent wind. 
A component of this lift pushes the craft crosswise to its course, which is resisted by a sailboat's keel, an ice boat's blades or a land-sailing craft's wheels. An important component of lift is directed forward in the direction of travel and propels the craft. Language of velocity and force To understand forces and velocities, discussed here, one must understand what is meant by a "vector" and a "scalar." Velocity (V), denoted as boldface in this article, is an example of a vector, because it implies both direction and speed. The corresponding speed (V ), denoted as italics in this article is a scalar value. Likewise, a force vector, F, denotes direction and strength, whereas its corresponding scalar (F ) denotes strength alone. Graphically, each vector is represented with an arrow that shows direction and a length that shows speed or strength. Vectors of consistent units (e.g. V in m/s or F in N) may be added and subtracted, graphically, by positioning tips and tails of the arrows, representing the input variables and drawing the resulting derived vector. Components of force: lift vs. drag and driving vs. lateral force Lift on a sail (L), acting as an airfoil, occurs in a direction perpendicular to the incident airstream (the apparent wind velocity, VA, for the head sail) and is a result of pressure differences between the windward and leeward surfaces and depends on angle of attack, sail shape, air density, and speed of the apparent wind. Pressure differences result from the normal force per unit area on the sail from the air passing around it. The lift force results from the average pressure on the windward surface of the sail being higher than the average pressure on the leeward side. These pressure differences arise in conjunction with the curved air flow. As air follows a curved path along the windward side of a sail, there is a pressure gradient perpendicular to the flow direction with lower pressure on the outside of the curve and higher pressure on the inside. To generate lift, a sail must present an "angle of attack" (α) between the chord line of the sail and the apparent wind velocity (VA). Angle of attack is a function of both the craft's point of sail and how the sail is adjusted with respect to the apparent wind. As the lift generated by a sail increases, so does lift-induced drag, which together with parasitic drag constitutes total drag, (D). This occurs when the angle of attack increases with sail trim or change of course to cause the lift coefficient to increase up to the point of aerodynamic stall, so does the lift-induced drag coefficient. At the onset of stall, lift is abruptly decreased, as is lift-induced drag, but viscous pressure drag, a component of parasitic drag, increases due to the formation of separated flow on the surface of the sail. Sails with the apparent wind behind them (especially going downwind) operate in a stalled condition. Lift and drag are components of the total aerodynamic force on sail (FT). Since the forces on the sail are resisted by forces in the water (for a boat) or on the traveled surface (for an ice boat or land sailing craft), their corresponding forces can also be decomposed from total aerodynamic force into driving force (FR) and lateral force (FLAT). Driving force overcomes resistance to forward motion. Lateral force is met by lateral resistance from a keel, blade or wheel, but also creates a heeling force. 
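The resolution of lift and drag into driving and lateral components can be written compactly. The sketch below uses the standard textbook decomposition for a craft that is not heeled, with the angle beta measured between the apparent wind and the course; the convention and the example numbers are assumptions made for illustration, not values from the sources cited here:

    import math

    def driving_and_lateral_force(lift, drag, apparent_wind_angle_deg):
        # Lift acts perpendicular to the apparent wind, drag along it. Projecting
        # both onto the course (forward) and across it (lateral) gives:
        #   FR   = L*sin(beta) - D*cos(beta)
        #   FLAT = L*cos(beta) + D*sin(beta)
        beta = math.radians(apparent_wind_angle_deg)
        f_r = lift * math.sin(beta) - drag * math.cos(beta)
        f_lat = lift * math.cos(beta) + drag * math.sin(beta)
        return f_r, f_lat

    # Close-hauled example: apparent wind 25 degrees off the course.
    print(driving_and_lateral_force(lift=800.0, drag=120.0, apparent_wind_angle_deg=25.0))
    # roughly 230 N of driving force and 780 N of lateral (heeling-producing) force

With the apparent wind abeam (beta = 90 degrees) the whole lift becomes driving force and the whole drag becomes lateral force, which matches the limiting cases described in the text.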
Effect of points of sail on forces Apparent wind (VA) is the air velocity acting upon the leading edge of the most forward sail or as experienced by instrumentation or crew on a moving sailing craft. It is the vector sum of true wind velocity and the apparent wind component resulting from boat velocity (VA = -VB + VT). In nautical terminology, wind speeds are normally expressed in knots and wind angles in degrees. The craft's point of sail affects its velocity (VB) for a given true wind velocity (VT). Conventional sailing craft cannot derive power from the wind in a "no-go" zone that is approximately 40° to 50° away from the true wind, depending on the craft. Likewise, the directly downwind speed of all conventional sailing craft is limited to the true wind speed. Effect of apparent wind on sailing craft at three points of sail Boat velocity (in black) generates an equal and opposite apparent wind component (not shown), which adds to the true wind to become apparent wind. Sailing craft A is close-hauled. Sailing craft B is on a beam reach. Sailing craft C is on a broad reach. A sailboat's speed through the water is limited by the resistance that results from hull drag in the water. Sail boats on foils are much less limited. Ice boats typically have the least resistance to forward motion of any sailing craft. Craft with the higher forward resistance achieve lower forward velocities for a given wind velocity than ice boats, which can travel at speeds several multiples of the true wind speed. Consequently, a sailboat experiences a wider range of apparent wind angles than does an ice boat, whose speed is typically great enough to have the apparent wind coming from a few degrees to one side of its course, necessitating sailing with the sail sheeted in for most points of sail. On conventional sail boats, the sails are set to create lift for those points of sail where it's possible to align the leading edge of the sail with the apparent wind. For a sailboat, point of sail affects lateral force significantly. The higher the boat points to the wind under sail, the stronger the lateral force, which requires resistance from a keel or other underwater foils, including daggerboard, centerboard, skeg and rudder. Lateral force also induces heeling in a sailboat, which requires resistance by weight of ballast from the crew or the boat itself and by the shape of the boat, especially with a catamaran. As the boat points off the wind, lateral force and the forces required to resist it become less important. On ice boats, lateral forces are countered by the lateral resistance of the blades on ice and their distance apart, which generally prevents heeling. Forces on sailing craft Each sailing craft is a system that mobilizes wind force through its sails—supported by spars and rigging—which provide motive power and reactive force from the underbody of a sailboat—including the keel, centerboard, rudder or other underwater foils—or the running gear of an ice boat or land craft, which allows it to be kept on a course. Without the ability to mobilize reactive forces in directions different from the wind direction, a craft would simply be adrift before the wind. Accordingly, motive and heeling forces on sailing craft are either components of or reactions to the total aerodynamic force (FT) on sails, which is a function of apparent wind velocity (VA) and varies with point of sail. The forward driving force (FR) component contributes to boat velocity (VB), which is, itself, a determinant of apparent wind velocity. 
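The apparent wind relation VA = -VB + VT can be evaluated directly. The sketch below assumes a flat surface with no leeway and no heel, and measures wind angles off the bow; the example numbers are illustrative only:

    import math

    def apparent_wind(true_speed, true_angle_deg, boat_speed):
        # Vector sum of the true wind and the wind induced by boat motion, in
        # boat coordinates: ax is the component arriving from dead ahead,
        # ay the component arriving from abeam.
        gamma = math.radians(true_angle_deg)
        ax = true_speed * math.cos(gamma) + boat_speed
        ay = true_speed * math.sin(gamma)
        va = math.hypot(ax, ay)
        apparent_angle_deg = math.degrees(math.atan2(ay, ax))
        return va, apparent_angle_deg

    # Example: 10 kn true wind on the beam (90 degrees), boat speed 6 kn.
    print(apparent_wind(10.0, 90.0, 6.0))
    # about 11.7 kn of apparent wind at about 59 degrees off the bow

The example shows the effect described above: the boat's own speed strengthens the apparent wind and moves it forward of the true wind.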
Absent lateral reactive forces to FT from a keel (in water), a skate runner (on ice) or a wheel (on land), a craft would only be able to move downwind and the sail would not be able to develop lift. At a stable angle of heel (for a sailboat) and a steady speed, aerodynamic and hydrodynamic forces are in balance. Integrated over the sailing craft, the total aerodynamic force (FT) is located at the centre of effort (CE), which is a function of the design and adjustment of the sails on a sailing craft. Similarly, the total hydrodynamic force (Fl) is located at the centre of lateral resistance (CLR), which is a function of the design of the hull and its underwater appendages (keel, rudder, foils, etc.). These two forces act in opposition to one another with Fl a reaction to FT. Whereas ice boats and land-sailing craft resist lateral forces with their wide stance and high-friction contact with the surface, sailboats travel through water, which provides limited resistance to side forces. In a sailboat, side forces are resisted in two ways: Leeway: Leeway is the rate of travel perpendicular to the course. It is constant when the lateral force on the sail (FLAT) equals the lateral force on the boat's keel and other underwater appendages (PLAT). This causes the boat to travel through the water on a course that is different from the direction in which the boat is pointed by the angle (λ ), which is called the "leeway angle." Heeling: The heeling angle (θ) is constant when the torque between the centre of effort (CE) on the sail and the centre of resistance on the hull (CR) over moment arm (h) equals the torque between the boat's centre of buoyancy (CB) and its centre of gravity (CG) over moment arm (b), described as heeling moment. All sailing craft reach a constant forward speed (VB) for a given wind speed (VT) and point of sail, when the forward driving force (FR) equals the forward resisting force (Rl). For an ice boat, the dominant forward resisting force is aerodynamic, since the coefficient of friction on smooth ice is as low as 0.02. Accordingly, high-performance ice boats are streamlined to minimize aerodynamic drag. Aerodynamic forces in balance with hydrodynamic forces on a close-hauled sailboat Force components on sails The approximate locus of net aerodynamic force on a craft with a single sail is the centre of effort (CE ) at the geometric centre of the sail. Filled with wind, the sail has a roughly spherical polygon shape and if the shape is stable, then the location of centre of effort is stable. On sailing craft with multiple sails, the position of centre of effort varies with the sail plan. Sail trim or airfoil profile, boat trim and point of sail also affect CE. On a given sail, the net aerodynamic force on the sail is located approximately at the maximum draught intersecting the camber of the sail and passing through a plane intersecting the centre of effort, normal to the leading edge (luff), roughly perpendicular to the chord of the sail (a straight line between the leading edge (luff) and the trailing edge (leech)). Net aerodynamic force with respect to the air stream is usually considered in reference to the direction of the apparent wind (VA) over the surface plane (ocean, land or ice) and is decomposed into lift (L), perpendicular with VA, and drag (D), in line with VA. 
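The heeling balance described above (heeling arm times heeling force equals righting arm times displacement weight) can be rearranged to estimate the righting arm needed for equilibrium. This is a simplified static sketch that treats both arms as fixed with heel angle, which real hulls are not; the numbers are placeholders:

    def heeling_and_righting_moments(heeling_force_n, heeling_arm_m, displacement_n, righting_arm_m):
        # Static balance: heeling moment = F_H * h, righting moment = W * b.
        heeling_moment = heeling_force_n * heeling_arm_m
        righting_moment = displacement_n * righting_arm_m
        return heeling_moment, righting_moment, heeling_moment <= righting_moment

    def required_righting_arm(heeling_force_n, heeling_arm_m, displacement_n):
        # Righting arm b needed so that W * b balances F_H * h.
        return heeling_force_n * heeling_arm_m / displacement_n

    # Illustrative numbers only: 3 kN heeling force, 4 m heeling arm, 30 kN displacement.
    print(required_righting_arm(3000.0, 4.0, 30000.0))   # 0.4 m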
For windsurfers, lift component vertical to the surface plane is important, because in strong winds windsurfer sails are leaned into the wind to create a vertical lifting component ( FVERT) that reduces drag on the board (hull) through the water. Note that FVERT acts downwards for boats heeling away from the wind, but is negligible under normal conditions. The three dimensional vector relationship for net aerodynamic force with respect to apparent wind (VA) is: Likewise, net aerodynamic force may be decomposed into the three translational directions with respect to a boat's course over the surface: surge (forward/astern), sway (starboard/port—relevant to leeway), and heave (up/down). The scalar values and direction of these components can be dynamic, depending on wind and waves (for a boat). In this case, FT is considered in reference to the direction of the boat's course and is decomposed into driving force (FR), in line with the boat's course, and lateral force (FLAT), perpendicular with the boat's course. Again for windsurfers, the lift component vertical to the surface plane ( FVERT) is important. The three dimensional vector relationship for net aerodynamic force with respect to the course over the surface is: The values of driving force (FR ) and lateral force (FLAT ) with apparent wind angle (α), assuming no heeling, relate to the values of lift (L ) and drag (D ), as follows: Reactive forces on sailing craft Reactive forces on sailing craft include forward resistance—sailboat's hydrodynamic resistance (Rl), an ice boat's sliding resistance or a land sailing craft's rolling resistance in the direction of travel—which are to be minimized in order to increase speed, and lateral force, perpendicular to the direction of travel, which is to be made sufficiently strong to minimize sideways motion and to guide the craft on course. Forward resistance comprises the types of drag that impede a sailboat's speed through water (or an ice boat's speed over the surface) include components of parasitic drag, consisting primarily of form drag, which arises because of the shape of the hull, and skin friction, which arises from the friction of the water (for boats) or air (for ice boats and land sailing craft) against the "skin" of the hull that is moving through it. Displacement vessels are also subject to wave resistance from the energy that goes into displacing water into waves and that is limited by hull speed, which is a function of waterline length, Wheeled vehicles' forward speed is subject to rolling friction and ice boats are subject to kinetic or sliding friction. Parasitic drag in water or air increases with the square of speed (VB2 or VA2, respectively); rolling friction increases linearly with velocity; whereas kinetic friction is normally a constant, but on ice may become reduced with speed as it transitions to lubricated friction with melting. Ways to reduce wave-making resistance used on sailing vessels include reduced displacement—through planing or (as with a windsurfer) offsetting vessel weight with a lifting sail—and fine entry, as with catamarans, where a narrow hull minimizes the water displaced into a bow wave. Sailing hydrofoils also substantially reduce forward friction with an underwater foil that lifts the vessel free of the water. Sailing craft with low forward resistance and high lateral resistance. 
Sailing craft with low forward resistance can achieve high velocities with respect to the wind velocity: High-performance catamarans, including the Extreme 40 catamaran and International C-class catamaran can sail at speeds up to twice the speed of the wind. Sailing hydrofoils achieve boat speeds up to twice the speed of the wind, as did the AC72 catamarans used for the 2013 America's Cup. Ice boats can sail up to five times the speed of the wind. Lateral force is a reaction supplied by the underwater shape of a sailboat, the blades of an ice boat and the wheels of a land sailing craft. Sailboats rely on keels, centerboards, and other underwater foils, including rudders, that provide lift in the lateral direction, to provide hydrodynamic lateral force (PLAT) to offset the lateral force component acting on the sail (FLAT) and minimize leeway. Such foils provide hydrodynamic lift and, for keels, ballast to offset heeling. They incorporate a wide variety of design considerations. Rotational forces on sailing craft The forces on sails that contribute to torque and cause rotation with respect to the boat's longitudinal (fore and aft), horizontal (abeam) and vertical (aloft) rotational axes result in: roll (e.g. heeling). pitch (e.g. pitch-poling), and yaw (e.g. broaching). Heeling, which results from the lateral force component (FLAT), is the most significant rotational effect of total aerodynamic force (FT). In stasis, heeling moment from the wind and righting moment from the boat's heel force (FH ) and its opposing hydrodynamic lift force on hull (Fl ), separated by a distance (h = "heeling arm"), versus its hydrostatic displacement weight (W ) and its opposing buoyancy force (Δ), separated by a distance (b = "righting arm") are in balance: (heeling arm × heeling force = righting arm × buoyancy force = heeling arm × hydrodynamic lift force on hull = righting arm × displacement weight) Sails come in a wide variety of configurations that are designed to match the capabilities of the sailing craft to be powered by them. They are designed to stay within the limitations of a craft's stability and power requirements, which are functions of hull (for boats) or chassis (for land craft) design. Sails derive power from wind that varies in time and with height above the surface. In order to do so, they are designed to adjust to the wind force for various points of sail. Both their design and method for control include means to match their lift and drag capabilities to the available apparent wind, by changing surface area, angle of attack, and curvature. Wind variation with elevation Wind speed increases with height above the surface; at the same time, wind speed may vary over short periods of time as gusts. These considerations may be described empirically. Measurements show that wind speed, (V (h ) ) varies, according to a power law with height (h ) above a non-zero measurement height datum (h0 —e.g. at the height of the foot of a sail), using a reference wind speed measured at the datum height (V (h0 ) ), as follows: Where the power law exponent (p) has values that have been empirically determined to range from 0.11 over the ocean to 0.31 over the land. This means that a V (3 m) = 5-m/s (≈10-knot) wind at 3 m above the water would be approximately V (15 m) = 6 m/s (≈12 knots) at 15 m above the water. In hurricane-force winds with V (3 m) = 40-m/s (≈78 knots) the speed at 15 m would be V (15 m) = 49 m/s (≈95 knots) with p = 0.128. 
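The power-law profile and the worked examples above can be checked in a few lines; the exponents are the empirical values quoted in the text, and the function name and reference height default are arbitrary choices for the sketch:

    def wind_at_height(v_ref, h, h_ref=3.0, p=0.11):
        # V(h) = V(h_ref) * (h / h_ref)**p, with p of about 0.11 over open water
        # and up to about 0.31 over land, per the empirical range quoted above.
        return v_ref * (h / h_ref) ** p

    print(wind_at_height(5.0, 15.0))             # ~6.0 m/s at 15 m, as in the example
    print(wind_at_height(40.0, 15.0, p=0.128))   # ~49 m/s in the hurricane-force example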
This suggests that sails that reach higher above the surface can be subject to stronger wind forces that move the centre of effort (CE ) higher above the surface and increase the heeling moment. Additionally, apparent wind direction moves aft with height above water, which may necessitate a corresponding twist in the shape of the sail to achieve attached flow with height. Wind variation with time Hsu gives a simple formula for a gust factor (G ) for winds as a function of the exponent (p ), above, where G is the ratio of the wind gust speed to baseline wind speed at a given height: So, for a given windspeed and Hsu's recommended value of p = 0.126, one can expect G = 1.5 (a 10-knot wind might gust up to 15 knots). This, combined with changes in wind direction suggest the degree to which a sailing craft must adjust to wind gusts on a given course. Forces on sails A sailing craft's motive system comprises one or more sails, supported by spars and rigging, that derive power from the wind and induce reactive force from the underbody of a sailboat or the running gear of an ice boat or land craft. Depending on the angle of attack of a set of sails with respect to the apparent wind, each sail is providing motive force to the sailing craft either from lift-dominant attached flow or drag-dominant separated flow. Additionally, sails may interact with one another to create forces that are different from the sum of the individual contributions each sail, when used alone. Lift predominant (attached flow) Sails allow progress of a sailing craft to windward, thanks to their ability to generate lift (and the craft's ability to resist the lateral forces that result). Each sail configuration has a characteristic coefficient of lift and attendant coefficient of drag, which can be determined experimentally and calculated theoretically. Sailing craft orient their sails with a favorable angle of attack between the entry point of the sail and the apparent wind as their course changes. The ability to generate lift is limited by sailing too close to the wind when no effective angle of attack is available to generate lift (luffing) and sailing sufficiently off the wind that the sail cannot be oriented at a favorable angle of attack (running downwind). Instead, past a critical angle of attack, the sail stalls and promotes flow separation. Effect of angle of attack on coefficients of lift and drag Each type of sail, acting as an airfoil, has characteristic coefficients of lift (CL ) and lift-induced drag (CD ) at a given angle of attack, which follow that same basic form of: Where force (F) equals lift (L) for forces measured perpendicular to the airstream to determine C = CL or force (F) equals drag (D) for forces measured in line with the airstream to determine C = CD on a sail of area (A) and a given aspect ratio (length to average cord width). These coefficients vary with angle of attack (αj for a headsail) with respect to the incident wind (VA for a headsail). This formulation allows determination of CL and CD experimentally for a given sail shape by varying angle of attack at an experimental wind velocity and measuring force on the sail in the direction of the incident wind (D—drag) and perpendicular to it (L—lift). 
As the angle of attack grows larger, the lift reaches a maximum at some angle; increasing the angle of attack beyond this critical angle of attack causes the upper-surface flow to separate from the convex surface of the sail; there is less deflection of air to windward, so the sail as airfoil generates less lift. The sail is said to be stalled. At the same time, induced drag increases with angle of attack (for the headsail: αj ). Determination of coefficients of lift (CL ) and drag (CD ) for angle of attack and aspect ratio Fossati presents polar diagrams that relate coefficients of lift and drag for different angles of attack based on the work of Gustave Eiffel, who pioneered wind tunnel experiments on airfoils, which he published in 1910. Among them were studies of cambered plates. The results shown are for plates of varying camber and aspect ratios, as shown. They show that, as aspect ratio decreases, maximum lift shifts further towards increased drag (rightwards in the diagram). They also show that, for lower angles of attack, a higher aspect ratio generates more lift and less drag than for lower aspect ratios. Effect of coefficients of lift and drag on forces If the lift and drag coefficients (CL and CD) for a sail at a specified angle of attack are known, then the lift (L) and drag (D) forces produced can be determined, using the following equations, which vary as the square of apparent wind speed (VA ): Garrett demonstrates how those diagrams translate into lift and drag, for a given sail, on different points of sail, in diagrams similar to these: Polar diagrams, showing lift (L), drag (D), total aerodynamic force (FT), forward driving force (FR), and lateral force (FLAT) for upwind points of sail In these diagrams the direction of travel changes with respect to the apparent wind (VA), which is constant for the purpose of illustration. In reality, for a constant true wind, apparent wind would vary with point of sail. Constant VA in these examples means that either VT or VB varies with point of sail; this allows the same polar diagram to be used for comparison with the same conversion of coefficients into units of force (in this case Newtons). In the examples for close-hauled and reach (left and right), the sail's angle of attack (α ) is essentially constant, although the boom angle over the boat changes with point of sail to trim the sail close to the highest lift force on the polar curve. In these cases, lift and drag are the same, but the decomposition of total aerodynamic force (FT) into forward driving force (FR) and lateral force (FLAT) vary with point of sail. Forward driving force (FR) increases, as the direction of travel is more aligned with the wind, and lateral force (FLAT) decreases. In reference to the above diagrams relating lift and drag, Garrett explains that for a maximum speed made good to windward, the sail must be trimmed to an angle of attack that is greater than the maximum lift/drag ratio (more lift), while the hull is operated in a manner that is lower than its maximum lift/drag ratio (more drag). Drag predominant (separated flow) When sailing craft are on a course where the angle of attack between the sail and the apparent wind (α ) exceeds the point of maximum lift on the CL–CD polar diagram, separation of flow occurs. The separation becomes more pronounced until at α = 90° lift becomes small and drag predominates. 
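The equations referred to above are the standard aerodynamic force relations, which scale with the square of the apparent wind speed: L = ½ρ·A·CL·VA² and D = ½ρ·A·CD·VA². The sketch below applies them with purely illustrative numbers; the sail area, coefficients and wind speed are placeholders, and the coefficients themselves must come from measurement or published polars:

    RHO_AIR = 1.225  # kg/m^3, sea-level air density (illustrative standard value)

    def sail_lift_and_drag(c_l, c_d, area_m2, v_apparent_ms, rho=RHO_AIR):
        # L = 0.5 * rho * A * C_L * V_A**2 ;  D = 0.5 * rho * A * C_D * V_A**2
        q = 0.5 * rho * v_apparent_ms ** 2   # dynamic pressure
        return q * area_m2 * c_l, q * area_m2 * c_d

    # Purely illustrative: 20 m^2 sail, C_L = 1.0, C_D = 0.15, 8 m/s apparent wind.
    lift_n, drag_n = sail_lift_and_drag(1.0, 0.15, 20.0, 8.0)   # roughly 780 N and 120 N

Inverting the same relation, C = F / (½ρ·A·VA²), is how the coefficients are extracted from the forces measured in the wind-tunnel procedure described earlier.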
In addition to the sails used upwind, spinnakers provide area and curvature appropriate for sailing with separated flow on downwind points of sail. Polar diagrams, showing lift (L), drag (D), total aerodynamic force (FT), forward driving force (FR), and lateral force (FLAT) for downwind points of sail Again, in these diagrams the direction of travel changes with respect to the apparent wind (VA), which is constant for the sake of illustration, but would in reality vary with point of sail for a constant true wind. In the left-hand diagram (broad reach), the boat is on a point of sail, where the sail can no longer be aligned into the apparent wind to create an optimum angle of attack. Instead, the sail is in a stalled condition, creating about 80% of the lift as in the upwind examples and drag has doubled. Total aerodynamic force (FT) has moved away from the maximum lift value. In the right-hand diagram (running before the wind), lift is one-fifth of the upwind cases (for the same strength apparent wind) and drag has almost quadrupled. Downwind sailing with a spinnaker A velocity prediction program can translate sail performance and hull characteristics into a polar diagram, depicting boat speed for various windspeeds at each point of sail. Displacement sailboats exhibit a change in what course has the best velocity made good (VMG), depending on windspeed. For the example given, the sailboat achieves best downwind VMG for windspeed of 10 knots and less at a course about 150° off the wind. For higher windspeed the optimum downwind VMG occurs at more than 170° off the wind. This "downwind cliff" (abrupt change in optimum downwind course) results from the change of balance in drag forces on the hull with speed. Sail interactions Sailboats often have a jib that overlaps the mainsail—called a genoa. Arvel Gentry demonstrated in his series of articles published in "Best of sail trim" published in 1977 (and later reported and republished in summary in 1981) that the genoa and the mainsail interact in a symbiotic manner, owing to the circulation of air between them slowing down in the gap between the two sails (contrary to traditional explanations), which prevents separation of flow along the mainsail. The presence of a jib causes the stagnation line on the mainsail to move forward, which reduces the suction velocities on the main and reduces the potential for boundary layer separation and stalling. This allows higher angles of attack. Likewise, the presence of the mainsail causes the stagnation line on the jib to be shifted aft and allows the boat to point closer to the wind, owing to higher leeward velocities of the air over both sails. The two sails cause an overall larger displacement of air perpendicular to the direction of flow when compared to one sail. They act to form a larger wing, or airfoil, around which the wind must pass. The total length around the outside has also increased and the difference in air speed between windward and leeward sides of the two sails is greater, resulting in more lift. The jib experiences a greater increase in lift with the two sail combination. Sail performance design variables Sails characteristically have a coefficient of lift (CL) and coefficient of drag (CD) for each apparent wind angle. The planform, curvature and area of a given sail are dominant determinants of each coefficient. Sail terminology Sails are classified as "triangular sails", "quadrilateral fore-and-aft sails" (gaff-rigged, etc.), and "square sails". 
The top of a triangular sail, the head, is raised by a halyard. The forward lower corner of the sail, the tack, is shackled to a fixed point on the boat in a manner that allows pivoting about that point—either on a mast, e.g. for a mainsail, or on the deck, e.g. for a jib or staysail. The trailing lower corner, the clew, is positioned with an outhaul on a boom or directly with a sheet, absent a boom. Symmetrical sails have two clews, which may be adjusted forward or back. The windward edge of a sail is called the luff, the trailing edge the leech, and the bottom edge the foot. On symmetrical sails, either vertical edge may be presented to windward and, therefore, there are two leeches. On sails attached to a mast and boom, these edges may be curved, when laid on a flat surface, to promote both horizontal and vertical curvature in the cross-section of the sail, once attached. The use of battens allows a sail to have an arc of material on the leech, beyond a line drawn from the head to the clew, called the roach. Lift variables As with aircraft wings, the two dominant factors affecting sail efficiency are its planform—primarily sail width versus sail height, expressed as an aspect ratio—and cross-sectional curvature or draft. Aspect ratio In aerodynamics, the aspect ratio of a sail is the ratio of its length to its breadth (chord). A high aspect ratio indicates a long, narrow sail, whereas a low aspect ratio indicates a short, wide sail. For most sails, the length of the chord is not a constant but varies along the wing, so the aspect ratio AR is defined as the square of the sail height b divided by the area A of the sail planform: AR = b²/A. Aspect ratio and planform can be used to predict the aerodynamic performance of a sail. For a given sail area, the aspect ratio, which is proportional to the square of the sail height, is of particular significance in determining lift-induced drag, and is used to calculate the induced drag coefficient of a sail: CDi = CL²/(π·e·AR), where e is the Oswald efficiency number that accounts for the variable sail shapes. This formula demonstrates that a sail's induced drag coefficient decreases with increased aspect ratio. Sail curvature The horizontal curvature of a sail is termed "draft" and corresponds to the camber of an airfoil. Increasing the draft generally increases the sail's lift force. The Royal Yachting Association categorizes draft by depth and by the placement of the maximum depth as a percentage of the distance from the luff to the leech. Sail draft is adjusted for wind speed to achieve a flatter sail (less draft) in stronger winds and a fuller sail (more draft) in lighter winds. Staysails and sails attached to a mast (e.g. a mainsail) have different, but similar, controls to achieve draft depth and position. On a staysail, tightening the luff with the halyard helps flatten the sail and adjusts the position of maximum draft. On a mainsail, curving the mast to fit the curvature of the luff helps flatten the sail. Depending on wind strength, Dellenbaugh offers the following advice on setting the draft of a sailboat mainsail: For light air (less than 8 knots), the sail is at its fullest with the depth of draft between 13-16% of the chord and maximum fullness 50% aft from the luff. For medium air (8-15 knots), the mainsail has minimal twist with a depth of draft set between 11-13% of the chord and maximum fullness 45% aft from the luff.
For heavy (greater than15 knots), the sail is flattened and allowed to twist in a manner that dumps lift with a depth of draft set between 9-12% of cord and maximum fullness 45% aft of the luff. Plots by Larsson et al show that draft is a much more significant factor affecting sail propulsive force than the position of maximum draft. Coefficients of propulsive forces and heeling forces as a function of draft (camber) depth or position. The primary tool for adjusting mainsail shape is mast bend; a straight mast increases draft and lift; a curved mast decreases draft and lift—the backstay tensioner is a primary tool for bending the mast. Secondary tools for sail shape adjustment are the mainsheet, traveler, outhaul, and Cunningham. Drag variables Spinnakers have traditionally been optimized to mobilize drag as a more important propulsive component than lift. As sailing craft are able to achieve higher speeds, whether on water, ice or land, the velocity made good (VMG) at a given course off the wind occurs at apparent wind angles that are increasingly further forward with speed. This suggests that the optimum VMG for a given course may be in a regime where a spinnaker may be providing significant lift. Traditional displacement sailboats may at times have optimum VMG courses close to downwind; for these the dominant force on sails is from drag. According to Kimball, CD ≈ 4/3 for most sails with the apparent wind angle astern, so drag force on a downwind sail becomes substantially a function of area and wind speed, approximated as follows: Measurement and computation tools Sail design relies on empirical measurements of pressures and their resulting forces on sails, which validate modern analysis tools, including computational fluid dynamics. Measurement of pressure on the sail Modern sail design and manufacture employs wind tunnel studies, full-scale experiments, and computer models as a basis for efficiently harnessing forces on sails. Instruments for measuring air pressure effects in wind tunnel studies of sails include pitot tubes, which measure air speed and manometers, which measure static pressures and atmospheric pressure (static pressure in undisturbed flow). Researchers plot pressure across the windward and leeward sides of test sails along the chord and calculate pressure coefficients (static pressure difference over wind-induced dynamic pressure). Research results describe airflow around the sail and in the boundary layer. Wilkinson, modelling the boundary layer in two dimensions, described nine regions around the sail: Upper mast attached airflow. Upper separation bubble. Upper reattachment region. Upper aerofoil attached flow region. Trailing edge separation region. Lower mast attached flow region. Lower separation bubble. Lower reattachment region. Lower aerofoil attached flow region. Analysis Sail design differs from wing design in several respects, especially since on a sail air flow varies with wind and boat motion and sails are usually deformable airfoils, sometimes with a mast for a leading edge. Often simplifying assumptions are employed when making design calculations, including: a flat travel surface—water, ice or land, constant wind velocity and unchanging sail adjustment. The analysis of the forces on sails takes into account the aerodynamic surface force, its centre of effort on a sail, its direction, and its variable distribution over the sail. 
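Two of the relations above reduce to one-line calculations: the induced drag coefficient CDi = CL²/(π·e·AR) with AR = b²/A, and the drag-dominated downwind force obtained by applying the standard drag relation ½ρ·CD·A·VA² with the CD ≈ 4/3 figure quoted from Kimball. The sketch below is illustrative only; the Oswald efficiency number e = 0.9 and the example rig dimensions are placeholder assumptions:

    import math

    def induced_drag_coefficient(c_l, sail_height_m, sail_area_m2, oswald_e=0.9):
        # C_Di = C_L**2 / (pi * e * AR), with aspect ratio AR = b**2 / A.
        aspect_ratio = sail_height_m ** 2 / sail_area_m2
        return c_l ** 2 / (math.pi * oswald_e * aspect_ratio)

    def downwind_drag_force(sail_area_m2, v_apparent_ms, c_d=4.0 / 3.0, rho=1.225):
        # Drag-dominated downwind driving force, 0.5 * rho * C_D * A * V_A**2,
        # using C_D ~ 4/3 for sails with the apparent wind astern.
        return 0.5 * rho * c_d * sail_area_m2 * v_apparent_ms ** 2

    print(induced_drag_coefficient(1.0, 12.0, 36.0))   # AR = 4, C_Di ~ 0.09
    print(downwind_drag_force(80.0, 6.0))              # ~2.4 kN from an 80 m^2 spinnaker at 6 m/s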
Modern analysis employs fluid mechanics and aerodynamic airflow calculations for sail design and manufacture, using aeroelasticity models, which combine computational fluid dynamics and structural analysis. Turbulence and separation of the boundary layer are treated as secondary effects. Computational limitations persist. Theoretical results require empirical confirmation with wind tunnel tests on scale models and full-scale testing of sails. Velocity prediction programs combine elements of hydrodynamic forces (mainly drag) and aerodynamic forces (lift and drag) to predict sailboat performance at various wind speeds for all points of sail. See also Sail Sailing Sailcloth Point of sail Polar diagram (sailing) Sail-plan Rigging Wing Sail twist High-performance sailing Stays (nautical) Sheet (sailing) References Aerodynamics Naval architecture Sailing Marine propulsion
Theoretical chemistry
Theoretical chemistry is the branch of chemistry which develops theoretical generalizations that are part of the theoretical arsenal of modern chemistry: for example, the concepts of chemical bonding, chemical reaction, valence, the potential energy surface, molecular orbitals, orbital interactions, and molecule activation. Overview Theoretical chemistry unites principles and concepts common to all branches of chemistry. Within the framework of theoretical chemistry, there is a systematization of chemical laws, principles and rules, their refinement and detailing, and the construction of a hierarchy. The central place in theoretical chemistry is occupied by the doctrine of the interconnection of the structure and properties of molecular systems. It uses mathematical and physical methods to explain the structures and dynamics of chemical systems and to correlate, understand, and predict their thermodynamic and kinetic properties. In the most general sense, it is the explanation of chemical phenomena by the methods of theoretical physics. In contrast to theoretical physics, because chemical systems are highly complex, theoretical chemistry often supplements approximate mathematical methods with semi-empirical and empirical methods. In recent years, it has consisted primarily of quantum chemistry, i.e., the application of quantum mechanics to problems in chemistry. Other major components include molecular dynamics, statistical thermodynamics and theories of electrolyte solutions, reaction networks, polymerization, catalysis, molecular magnetism and spectroscopy. Modern theoretical chemistry may be roughly divided into the study of chemical structure and the study of chemical dynamics. The former includes studies of: electronic structure, potential energy surfaces, and force fields; vibrational-rotational motion; equilibrium properties of condensed-phase systems and macro-molecules. Chemical dynamics includes: bimolecular kinetics and the collision theory of reactions and energy transfer; unimolecular rate theory and metastable states; condensed-phase and macromolecular aspects of dynamics. Branches of theoretical chemistry Quantum chemistry The application of quantum mechanics or fundamental interactions to chemical and physico-chemical problems. Spectroscopic and magnetic properties are among the most frequently modelled. Computational chemistry The application of scientific computing to chemistry, involving approximation schemes such as Hartree–Fock, post-Hartree–Fock, density functional theory, semiempirical methods (such as PM3) or force field methods. Molecular shape is the most frequently predicted property. Computers can also predict vibrational spectra and vibronic coupling, and can acquire and Fourier-transform infrared data into frequency information; the comparison with predicted vibrations supports the predicted shape. Molecular modelling Methods for modelling molecular structures without necessarily referring to quantum mechanics. Examples are molecular docking, protein-protein docking, drug design, and combinatorial chemistry. The fitting of shape and electric potential are the driving factors in this graphical approach. Molecular dynamics Application of classical mechanics to simulate the movement of the nuclei of an assembly of atoms and molecules. The rearrangement of molecules within an ensemble is controlled by van der Waals forces and promoted by temperature.
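To make the molecular dynamics entry above concrete, here is a minimal sketch of the classical propagation step it describes (velocity Verlet) for two atoms interacting through a Lennard-Jones pair potential. The parameter values, units, and function names are illustrative assumptions, not taken from any particular simulation package.

```python
import numpy as np

# Assumed reduced-style units for an argon-like pair: energy EPS, distance SIGMA,
# mass MASS, and timestep DT are all illustrative.
EPS, SIGMA, MASS, DT = 1.0, 1.0, 1.0, 0.005

def lj_force(r_vec):
    """Force on one atom due to the other for V(r) = 4*EPS*((SIGMA/r)**12 - (SIGMA/r)**6)."""
    r = np.linalg.norm(r_vec)
    dV_dr = 4 * EPS * (-12 * SIGMA**12 / r**13 + 6 * SIGMA**6 / r**7)
    return -dV_dr * (r_vec / r)          # force is minus the gradient of V

def velocity_verlet(pos, vel, steps=1000):
    """Propagate two atoms with the velocity Verlet integrator."""
    forces = np.array([lj_force(pos[0] - pos[1]), lj_force(pos[1] - pos[0])])
    for _ in range(steps):
        pos += vel * DT + 0.5 * (forces / MASS) * DT**2
        new_forces = np.array([lj_force(pos[0] - pos[1]), lj_force(pos[1] - pos[0])])
        vel += 0.5 * (forces + new_forces) / MASS * DT
        forces = new_forces
    return pos, vel

# Two atoms slightly farther apart than the potential minimum, initially at rest.
positions = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0]])
velocities = np.zeros((2, 3))
positions, velocities = velocity_verlet(positions, velocities)
print(positions)
```

A production force field would add bonded terms, electrostatics, neighbour lists and a thermostat; the sketch only shows the classical-mechanics propagation of nuclei that the entry refers to.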
Molecular mechanics Modeling of the intra- and inter-molecular interaction potential energy surfaces via potentials. The latter are usually parameterized from ab initio calculations. Mathematical chemistry Discussion and prediction of the molecular structure using mathematical methods without necessarily referring to quantum mechanics. Topology is a branch of mathematics that allows researchers to predict properties of flexible finite size bodies like clusters. Chemical kinetics Theoretical study of the dynamical systems associated to reactive chemicals, the activated complex and their corresponding differential equations. Cheminformatics (also known as chemoinformatics) The use of computer and informational techniques, applied to crop information to solve problems in the field of chemistry. Chemical engineering The application of chemistry to industrial processes to conduct research and development. This allows for development and improvement of new and existing products and manufacturing processes. Chemical thermodynamics The study of the relationship between heat, work, and energy in chemical reactions and processes, with focus on entropy, enthalpy, and Gibbs free energy to understand reaction spontaneity and equilibrium. Statistical mechanics The application of statistical mechanics to predict and explain thermodynamic properties of chemical systems, connecting molecular behavior with macroscopic properties. Closely related disciplines Historically, the major field of application of theoretical chemistry has been in the following fields of research: Atomic physics: The discipline dealing with electrons and atomic nuclei. Molecular physics: The discipline of the electrons surrounding the molecular nuclei and of movement of the nuclei. This term usually refers to the study of molecules made of a few atoms in the gas phase. But some consider that molecular physics is also the study of bulk properties of chemicals in terms of molecules. Physical chemistry and chemical physics: Chemistry investigated via physical methods like laser techniques, scanning tunneling microscope, etc. The formal distinction between both fields is that physical chemistry is a branch of chemistry while chemical physics is a branch of physics. In practice this distinction is quite vague. Many-body theory: The discipline studying the effects which appear in systems with large number of constituents. It is based on quantum physics – mostly second quantization formalism – and quantum electrodynamics. Hence, theoretical chemistry has emerged as a branch of research. With the rise of the density functional theory and other methods like molecular mechanics, the range of application has been extended to chemical systems which are relevant to other fields of chemistry and physics, including biochemistry, condensed matter physics, nanotechnology or molecular biology. See also List of unsolved problems in chemistry Bibliography Attila Szabo and Neil S. Ostlund, Modern Quantum Chemistry: Introduction to Advanced Electronic Structure Theory, Dover Publications; New Ed edition (1996) , Robert G. Parr and Weitao Yang, Density-Functional Theory of Atoms and Molecules, Oxford Science Publications; first published in 1989; , D. J. Tannor, V. Kazakov and V. Orlov, Control of Photochemical Branching: Novel Procedures for Finding Optimal Pulses and Global Upper Bounds, in Time Dependent Quantum Molecular Dynamics, J. Broeckhove and L. Lathouwers, eds., 347-360 (Plenum, 1992) Chemistry Physical chemistry Chemical physics
Computable general equilibrium
Computable general equilibrium (CGE) models are a class of economic models that use actual economic data to estimate how an economy might react to changes in policy, technology or other external factors. CGE models are also referred to as AGE (applied general equilibrium) models. A CGE model consists of equations describing model variables and a database (usually very detailed) consistent with these model equations. The equations tend to be neoclassical in spirit, often assuming cost-minimizing behaviour by producers, average-cost pricing, and household demands based on optimizing behaviour. CGE models are useful whenever we wish to estimate the effect of changes in one part of the economy upon the rest. They have been used widely to analyse trade policy. More recently, CGE has been a popular way to estimate the economic effects of measures to reduce greenhouse gas emissions. Main features A CGE model consists of equations describing model variables and a database (usually very detailed) consistent with these model equations. The equations tend to be neoclassical in spirit, often assuming cost-minimizing behaviour by producers, average-cost pricing, and household demands based on optimizing behaviour. However, most CGE models conform only loosely to the theoretical general equilibrium paradigm. For example, they may allow for: non-market clearing, especially for labour (unemployment) or for commodities (inventories) imperfect competition (e.g., monopoly pricing) demands not influenced by price (e.g., government demands) CGE models always contain more variables than equations—so some variables must be set outside the model. These variables are termed exogenous; the remainder, determined by the model, is called endogenous. The choice of which variables are to be exogenous is called the model closure, and may give rise to controversy. For example, some modelers hold employment and the trade balance fixed; others allow these to vary. Variables defining technology, consumer tastes, and government instruments (such as tax rates) are usually exogenous. A CGE model database consists of: tables of transaction values, showing, for example, the value of coal used by the iron industry. Usually the database is presented as an input-output table or as a social accounting matrix (SAM). In either case, it covers the whole economy of a country (or even the whole world), and distinguishes a number of sectors, commodities, primary factors and perhaps types of households. Sectoral coverage ranges from relatively simple representations of capital, labor and intermediates to highly detailed representations of specific sub-sectors (e.g., the electricity sector in GTAP-Power.) elasticities: dimensionless parameters that capture behavioural response. For example, export demand elasticities specify by how much export volumes might fall if export prices went up. Other elasticities may belong to the constant elasticity of substitution class. Amongst these are Armington elasticities, which show whether products of different countries are close substitutes, and elasticities measuring how easily inputs to production may be substituted for one another. Income elasticity of demand shows how household demands respond to income changes. History CGE models are descended from the input–output models pioneered by Wassily Leontief, but assign a more important role to prices. 
Thus, where Leontief assumed that, say, a fixed amount of labour was required to produce a ton of iron, a CGE model would normally allow wage levels to (negatively) affect labour demands. CGE models derive too from the models for planning the economies of poorer countries constructed (usually by a foreign expert) from 1960 onwards. Compared to the Leontief model, development planning models focused more on constraints or shortages—of skilled labour, capital, or foreign exchange. CGE modelling of richer economies descends from Leif Johansen's 1960 MSG model of Norway, and the static model developed by the Cambridge Growth Project in the UK. Both models were pragmatic in flavour, and traced variables through time. The Australian MONASH model is a modern representative of this class. Perhaps the first CGE model similar to those of today was that of Taylor and Black (1974). Areas of use CGE models are useful whenever we wish to estimate the effect of changes in one part of the economy upon the rest. For example, a tax on flour might affect bread prices, the CPI, and hence perhaps wages and employment. They have been used widely to analyse trade policy. More recently, CGE has been a popular way to estimate the economic effects of measures to reduce greenhouse gas emissions. Trade policy CGE models have been used widely to analyse trade policy. Today there are many CGE models of different countries. One of the most well-known CGE models is global: the GTAP model of world trade. Developing economies CGE models are useful to model the economies of countries for which time series data are scarce or not relevant (perhaps because of disturbances such as regime changes). Here, strong, reasonable, assumptions embedded in the model must replace historical evidence. Thus developing economies are often analysed using CGE models, such as those based on the IFPRI template model. Climate policy CGE models can specify consumer and producer behaviour and ‘simulate’ effects of climate policy on various economic outcomes. They can show economic gains and losses across different groups (e.g., households that differ in income, or in different regions). The equations include assumptions about the behavioural response of different groups. By optimising the prices paid for various outputs the direct burdens are shifted from one taxpayer to another. Comparative-static and dynamic CGE models Many CGE models are comparative static: they model the reactions of the economy at only one point in time. For policy analysis, results from such a model are often interpreted as showing the reaction of the economy in some future period to one or a few external shocks or policy changes. That is, the results show the difference (usually reported in percent change form) between two alternative future states (with and without the policy shock). The process of adjustment to the new equilibrium, in particular the reallocation of labor and capital across sectors, usually is not explicitly represented in such a model. In contrast, long-run models focus on adjustments to the underlying resource base when modeling policy changes. This can include dynamic adjustment to the labor supply, adjustments in installed and overall capital stocks, and even adjustment to overall productivity and market structure. There are two broad approaches followed in the policy literature to such long-run adjustment. One involves what is called "comparative steady state" analysis. 
Under such an approach, long-run or steady-state closure rules are used, under either forward-looking or recursive dynamic behavior, to solve for long-run adjustments. The alternative approach involves explicit modeling of dynamic adjustment paths. These models can seem more realistic, but are more challenging to construct and solve. They require, for instance, that future changes are predicted for all exogenous variables, not just those affected by a possible policy change. The dynamic elements may arise from partial adjustment processes or from stock/flow accumulation relations: between capital stocks and investment, and between foreign debt and trade deficits. However, there is a potential consistency problem because the variables that change from one equilibrium solution to the next are not necessarily consistent with each other during the period of change. The modeling of the path of adjustment may involve forward-looking expectations, where agents' expectations depend on the future state of the economy and it is necessary to solve for all periods simultaneously, leading to full multi-period dynamic CGE models. An alternative is recursive dynamics. Recursive-dynamic CGE models are those that can be solved sequentially (one period at a time). They assume that behaviour depends only on current and past states of the economy. A recursive-dynamic model solved for a single period (comparative steady-state analysis) is a special case of recursive-dynamic modeling over what can be multiple periods. Techniques Early CGE models were often solved by a program custom-written for that particular model. Models were expensive to construct and sometimes appeared as a 'black box' to outsiders. Now, most CGE models are formulated and solved using one of the GAMS or GEMPACK software systems. AMPL, Excel and MATLAB are also used. Use of such systems has lowered the cost of entry to CGE modelling; allowed model simulations to be independently replicated; and increased the transparency of the models. See also Macroeconomic model References Further reading Adelman, Irma and Sherman Robinson (1978). Income Distribution Policy in Developing Countries: A Case Study of Korea, Stanford University Press Baldwin, Richard E., and Joseph F. Francois, eds. Dynamic Issues in Commercial Policy Analysis. Cambridge University Press, 1999. Bouët, Antoine (2008). The Expected Benefits of Trade Liberalization for World Income and Development: Opening the "Black Box" of Global Trade Modeling Burfisher, Mary, Introduction to Computable General Equilibrium Models, Cambridge University Press: Cambridge, 2011, Cardenete, M. Alejandro, Guerra, Ana-Isabel and Sancho, Ferran (2012). Applied General Equilibrium: An Introduction. Springer Corong, Erwin L.; et al. (2017). "The Standard GTAP Model, Version 7". Journal of Global Economic Analysis. 2 (1): 1–119. Dervis, Kemal; Jaime de Melo and Sherman Robinson (1982). General Equilibrium Models for Development Policy. Cambridge University Press Dixon, Peter; Brian Parmenter; John Sutton and Dave Vincent (1982). ORANI: A Multisectoral Model of the Australian Economy, North-Holland Dixon, Peter; Brian Parmenter; Alan Powell and Peter Wilcoxen (1992). Notes and Problems in Applied General Equilibrium Economics, North Holland Dixon, Peter (2006). Evidence-based Trade Policy Decision Making in Australia and the Development of Computable General Equilibrium Modelling, CoPS/IMPACT Working Paper Number G-163 Dixon, Peter and Dale W. Jorgenson, ed. (2013).
Handbook of Computable General Equilibrium Modeling, vols. 1A and 1B, North Holland, Ginsburgh, Victor and Michiel Keyzer (1997). The Structure of Applied General Equilibrium Models, MIT Press Hertel, Thomas, Global Trade Analysis: Modeling and Applications (Modelling and Applications), Cambridge University Press: Cambridge, 1999, Kehoe, Patrick J. and Timothy J. Kehoe (1994) "A Primer on Static Applied General Equilibrium Models", Federal Reserve Bank of Minneapolis Quarterly Review, 18(2) Kehoe, Timothy J. and Edward C. Prescott (1995) Edited volume on "Applied General Equilibrium", Economic Theory, 6 Lanz, Bruno and Rutherford, Thomas F. (2016) "GTAPinGAMS: Multiregional and Small Open Economy Models". Journal of Global Economic Analysis, vol. 1(2):1–77. Reinert, Kenneth A., and Joseph F. Francois, eds. Applied Methods for Trade Policy Analysis: A Handbook. Cambridge University Press, 1997. Shoven, John and John Whalley (1984). "Applied General-Equilibrium Models of Taxation and International Trade: An Introduction and Survey". Journal of Economic Literature, vol. 22(3) 1007–51 Shoven, John and John Whalley (1992). Applying General Equilibrium, Cambridge University Press External links gEcon – software for DSGE and CGE modeling General equilibrium theory Mathematical and quantitative methods (economics)
Wigner effect
The Wigner effect (named for its discoverer, Eugene Wigner), also known as the discomposition effect or Wigner's disease, is the displacement of atoms in a solid caused by neutron radiation. Any solid can display the Wigner effect. The effect is of most concern in neutron moderators, such as graphite, intended to reduce the speed of fast neutrons, thereby turning them into thermal neutrons capable of sustaining a nuclear chain reaction involving uranium-235. Cause To cause the Wigner effect, neutrons that collide with the atoms in a crystal structure must have enough energy to displace them from the lattice. This amount (the threshold displacement energy) is approximately 25 eV. A neutron's energy can vary widely, but it is not uncommon to have energies up to and exceeding 10 MeV (10,000,000 eV) in the centre of a nuclear reactor. A neutron with a significant amount of energy will create a displacement cascade in a matrix via elastic collisions. For example, a 1 MeV neutron striking graphite will create 900 displacements. Not all displacements will create defects, because some of the struck atoms will find and fill the vacancies that were either small pre-existing voids or vacancies newly formed by the other struck atoms. Frenkel defect The atoms that do not find a vacancy come to rest in non-ideal locations; that is, not along the symmetrical lines of the lattice. These interstitial atoms (or simply "interstitials") and their associated vacancies are a Frenkel defect. Because these atoms are not in the ideal location, they have a Wigner energy associated with them, much as a ball at the top of a hill has gravitational potential energy. When a large number of interstitials have accumulated, they risk releasing all of their energy suddenly, creating a rapid, great increase in temperature. Sudden, unplanned increases in temperature can present a large risk for certain types of nuclear reactors with low operating temperatures. One such release was the indirect cause of the Windscale fire. Accumulation of energy in irradiated graphite has been recorded as high as 2.7 kJ/g, but is typically much lower than this. For perspective, since the specific heat of graphite is roughly 0.7 J/(g·K) near room temperature, an uncontrolled release of stored energy on that order could in principle raise the moderator temperature by well over a thousand kelvin. Not linked to Chernobyl disaster Despite some reports, Wigner energy buildup had nothing to do with the cause of the Chernobyl disaster: this reactor, like all contemporary power reactors, operated at a high enough temperature to allow the displaced graphite structure to realign itself before any potential energy could be stored. Wigner energy may have played some part following the prompt critical neutron spike, when the accident entered the graphite fire phase of events. Dissipation of Wigner energy A buildup of Wigner energy can be relieved by heating the material. This process is known as annealing. In graphite this occurs at about 250 °C. Intimate Frenkel pairs In 2003, it was postulated that Wigner energy can be stored by the formation of metastable defect structures in graphite. Notably, the large energy release observed at 200–250 °C has been described in terms of a metastable interstitial-vacancy pair. The interstitial atom becomes trapped on the lip of the vacancy, and there is a barrier for it to recombine to give perfect graphite. Citations General references Glasstone, Samuel, and Alexander Sesonske [1963] (1994). Nuclear Reactor Engineering. Boston: Springer. Condensed matter physics Crystallographic defects Neutron Nuclear technology Physical phenomena Radiation effects
Moving magnet and conductor problem
The moving magnet and conductor problem is a famous thought experiment, originating in the 19th century, concerning the intersection of classical electromagnetism and special relativity. In it, the current in a conductor moving with constant velocity, v, with respect to a magnet is calculated in the frame of reference of the magnet and in the frame of reference of the conductor. The observable quantity in the experiment, the current, is the same in either case, in accordance with the basic principle of relativity, which states: "Only relative motion is observable; there is no absolute standard of rest". However, according to Maxwell's equations, the charges in the conductor experience a magnetic force in the frame of the magnet and an electric force in the frame of the conductor. The same phenomenon would seem to have two different descriptions depending on the frame of reference of the observer. This problem, along with the Fizeau experiment, the aberration of light, and more indirectly the negative aether drift tests such as the Michelson–Morley experiment, formed the basis of Einstein's development of the theory of relativity. Introduction Einstein's 1905 paper that introduced the world to relativity opens with a description of the magnet/conductor problem: An overriding requirement on the descriptions in different frameworks is that they be consistent. Consistency is an issue because Newtonian mechanics predicts one transformation (so-called Galilean invariance) for the forces that drive the charges and cause the current, while electrodynamics as expressed by Maxwell's equations predicts that the fields that give rise to these forces transform differently (according to Lorentz invariance). Observations of the aberration of light, culminating in the Michelson–Morley experiment, established the validity of Lorentz invariance, and the development of special relativity resolved the resulting disagreement with Newtonian mechanics. Special relativity revised the transformation of forces in moving reference frames to be consistent with Lorentz invariance. The details of these transformations are discussed below. In addition to consistency, it would be nice to consolidate the descriptions so they appear to be frame-independent. A clue to a framework-independent description is the observation that magnetic fields in one reference frame become electric fields in another frame. Likewise, the solenoidal portion of electric fields (the portion that is not originated by electric charges) becomes a magnetic field in another frame: that is, the solenoidal electric fields and magnetic fields are aspects of the same thing. That means the paradox of different descriptions may be only semantic. A description that uses scalar and vector potentials φ and A instead of B and E avoids the semantical trap. A Lorentz-invariant four vector Aα = (φ / c, A) replaces E and B and provides a frame-independent description (albeit less visceral than the E– B–description). An alternative unification of descriptions is to think of the physical entity as the electromagnetic field tensor, as described later on. This tensor contains both E and B fields as components, and has the same form in all frames of reference. Background Electromagnetic fields are not directly observable. The existence of classical electromagnetic fields can be inferred from the motion of charged particles, whose trajectories are observable. Electromagnetic fields do explain the observed motions of classical charged particles. 
A strong requirement in physics is that all observers of the motion of a particle agree on the trajectory of the particle. For instance, if one observer notes that a particle collides with the center of a bullseye, then all observers must reach the same conclusion. This requirement places constraints on the nature of electromagnetic fields and on their transformation from one reference frame to another. It also places constraints on the manner in which fields affect the acceleration and, hence, the trajectories of charged particles. Perhaps the simplest example, and one that Einstein referenced in his 1905 paper introducing special relativity, is the problem of a conductor moving in the field of a magnet. In the frame of the magnet, a conductor experiences a magnetic force. In the frame of a conductor moving relative to the magnet, the conductor experiences a force due to an electric field. The magnetic field in the magnet frame and the electric field in the conductor frame must generate consistent results in the conductor. At the time of Einstein in 1905, the field equations as represented by Maxwell's equations were properly consistent. Newton's law of motion, however, had to be modified to provide consistent particle trajectories. Transformation of fields, assuming Galilean transformations Assuming that the magnet frame and the conductor frame are related by a Galilean transformation, it is straightforward to compute the fields and forces in both frames. This will demonstrate that the induced current is indeed the same in both frames. As a byproduct, this argument will also yield a general formula for the electric and magnetic fields in one frame in terms of the fields in another frame. In reality, the frames are not related by a Galilean transformation, but by a Lorentz transformation. Nevertheless, it will be a Galilean transformation to a very good approximation, at velocities much less than the speed of light. Unprimed quantities correspond to the rest frame of the magnet, while primed quantities correspond to the rest frame of the conductor. Let v be the velocity of the conductor, as seen from the magnet frame. Magnet frame In the rest frame of the magnet, the magnetic field is some fixed field B(r), determined by the structure and shape of the magnet. The electric field is zero. In general, the force exerted upon a particle of charge q in the conductor by the electric field and magnetic field is given by (SI units): where is the charge on the particle, is the particle velocity and F is the Lorentz force. Here, however, the electric field is zero, so the force on the particle is Conductor frame In the conductor frame, there is a time-varying magnetic field B′ related to the magnetic field B in the magnet frame according to: where In this frame, there is an electric field, and its curl is given by the Maxwell-Faraday equation: This yields: To make this explicable: if a conductor moves through a B-field with a gradient , along the z-axis with constant velocity , it follows that in the frame of the conductor It can be seen that this equation is consistent with by determining and from this expression and substituting it in the first expression while using that Even in the limit of infinitesimal small gradients these relations hold, and therefore the Lorentz force equation is also valid if the magnetic field in the conductor frame is not varying in time. 
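As a sanity check on the Galilean argument above, the short sketch below compares the magnetic force q v × B computed in the magnet frame with the electric force q E′ = q (v × B) computed in the conductor frame, for an arbitrary, purely illustrative field and velocity. At everyday speeds the two forces agree, and the Lorentz factor that enters the relativistic treatment is indistinguishable from 1.

```python
import numpy as np

c = 3.0e8                       # speed of light, m/s
q = 1.6e-19                     # charge of the carrier, C (electron magnitude)
v = np.array([10.0, 0.0, 0.0])  # conductor velocity in the magnet frame, m/s (illustrative)
B = np.array([0.0, 0.0, 0.5])   # magnet-frame field, T (illustrative)

# Magnet frame: the charge moves with the conductor, E = 0, so F = q v x B.
F_magnet_frame = q * np.cross(v, B)

# Conductor frame (Galilean approximation): the charge is at rest and the
# transformed electric field is E' = v x B, so F' = q E'.
E_prime = np.cross(v, B)
F_conductor_frame = q * E_prime

gamma = 1.0 / np.sqrt(1.0 - np.dot(v, v) / c**2)   # relativistic correction factor

print(F_magnet_frame)        # magnetic force in the magnet frame, N
print(F_conductor_frame)     # identical in the Galilean approximation
print(gamma - 1.0)           # on the order of 1e-16: negligible at 10 m/s
```

The point is not the numbers but the structure: the same observable force arises as q v × B in one frame and as q E′ in the other.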
At relativistic velocities a correction factor is needed, see below and Classical electromagnetism and special relativity and Lorentz transformation. A charge q in the conductor will be at rest in the conductor frame. Therefore, the magnetic force term of the Lorentz force has no effect, and the force on the charge is given by This demonstrates that the force is the same in both frames (as would be expected), and therefore any observable consequences of this force, such as the induced current, would also be the same in both frames. This is despite the fact that the force is seen to be an electric force in the conductor frame, but a magnetic force in the magnet's frame. Galilean transformation formula for fields A similar sort of argument can be made if the magnet's frame also contains electric fields. (The Ampere-Maxwell equation also comes into play, explaining how, in the conductor's frame, this moving electric field will contribute to the magnetic field.) The result is that, in general, with c the speed of light in free space. By plugging these transformation rules into the full Maxwell's equations, it can be seen that if Maxwell's equations are true in one frame, then they are almost true in the other, but contain incorrect terms proportional to the quantity v/c raised to the second or higher power. Accordingly, these are not the exact transformation rules, but are a close approximation at low velocities. At large velocities approaching the speed of light, the Galilean transformation must be replaced by the Lorentz transformation, and the field transformation equations also must be changed, according to the expressions given below. Transformation of fields as predicted by Maxwell's equations In a frame moving at velocity v, the E-field in the moving frame when there is no E-field in the stationary magnet frame Maxwell's equations transform as: where is called the Lorentz factor and c is the speed of light in free space. This result is a consequence of requiring that observers in all inertial frames arrive at the same form for Maxwell's equations. In particular, all observers must see the same speed of light c. That requirement leads to the Lorentz transformation for space and time. Assuming a Lorentz transformation, invariance of Maxwell's equations then leads to the above transformation of the fields for this example. Consequently, the force on the charge is This expression differs from the expression obtained from the nonrelativistic Newton's law of motion by a factor of . Special relativity modifies space and time in a manner such that the forces and fields transform consistently. Modification of dynamics for consistency with Maxwell's equations The Lorentz force has the same form in both frames, though the fields differ, namely: See Figure 1. To simplify, let the magnetic field point in the z-direction and vary with location x, and let the conductor translate in the positive x-direction with velocity v. Consequently, in the magnet frame where the conductor is moving, the Lorentz force points in the negative y-direction, perpendicular to both the velocity, and the B-field. The force on a charge, here due only to the B-field, is while in the conductor frame where the magnet is moving, the force is also in the negative y-direction, and now due only to the E-field with a value: The two forces differ by the Lorentz factor γ. This difference is expected in a relativistic theory, however, due to the change in space-time between frames, as discussed next. 
Relativity takes the Lorentz transformation of space-time suggested by invariance of Maxwell's equations and imposes it upon dynamics as well (a revision of Newton's laws of motion). In this example, the Lorentz transformation affects the x-direction only (the relative motion of the two frames is along the x-direction). The relations connecting time and space are ( primes denote the moving conductor frame ) : These transformations lead to a change in the y-component of a force: That is, within Lorentz invariance, force is not the same in all frames of reference, unlike Galilean invariance. But, from the earlier analysis based upon the Lorentz force law: which agrees completely. So the force on the charge is not the same in both frames, but it transforms as expected according to relativity. See also Annus Mirabilis Papers Darwin Lagrangian Eddy current Electric motor Einstein's thought experiments Faraday's law Faraday paradox Galilean invariance Inertial frame Lenz's law Lorentz transformation Principle of relativity Relativistic electromagnetism Special theory of relativity References and notes Further reading (The relativity of magnetic and electric fields) External links Magnets and conductors in special relativity Electromagnetism Special relativity Thought experiments in physics
Chemiosmosis
Chemiosmosis is the movement of ions across a semipermeable membrane bound structure, down their electrochemical gradient. An important example is the formation of adenosine triphosphate (ATP) by the movement of hydrogen ions (H+) across a membrane during cellular respiration or photosynthesis. Hydrogen ions, or protons, will diffuse from a region of high proton concentration to a region of lower proton concentration, and an electrochemical concentration gradient of protons across a membrane can be harnessed to make ATP. This process is related to osmosis, the movement of water across a selective membrane, which is why it is called "chemiosmosis". ATP synthase is the enzyme that makes ATP by chemiosmosis. It allows protons to pass through the membrane and uses the free energy difference to convert phosphorylate adenosine diphosphate (ADP) into ATP. The ATP synthase contains two parts: CF0 (present in thylakoid membrane) and CF1 (protrudes on the outer surface of thylakoid membrane). The breakdown of the proton gradient leads to conformational change in CF1—providing enough energy in the process to convert ADP to ATP. The generation of ATP by chemiosmosis occurs in mitochondria and chloroplasts, as well as in most bacteria and archaea. For instance, in chloroplasts during photosynthesis, an electron transport chain pumps H+ ions (protons) in the stroma (fluid) through the thylakoid membrane to the thylakoid spaces. The stored energy is used to photophosphorylate ADP, making ATP, as protons move through ATP synthase. The chemiosmotic hypothesis Peter D. Mitchell proposed the chemiosmotic hypothesis in 1961. In brief, the hypothesis was that most adenosine triphosphate (ATP) synthesis in respiring cells comes from the electrochemical gradient across the inner membranes of mitochondria by using the energy of NADH and FADH2 formed during the oxidative breakdown of energy-rich molecules such as glucose. Molecules such as glucose are metabolized to produce acetyl CoA as a fairly energy-rich intermediate. The oxidation of acetyl coenzyme A (acetyl-CoA) in the mitochondrial matrix is coupled to the reduction of a carrier molecule such as nicotinamide adenine dinucleotide (NAD) and flavin adenine dinucleotide (FAD). The carriers pass electrons to the electron transport chain (ETC) in the inner mitochondrial membrane, which in turn pass them to other proteins in the ETC. The energy at every redox transfer step is used to pump protons from the matrix into the intermembrane space, storing energy in the form of a transmembrane electrochemical gradient. The protons move back across the inner membrane through the enzyme ATP synthase. The flow of protons back into the matrix of the mitochondrion via ATP synthase provides enough energy for ADP to combine with inorganic phosphate to form ATP. This was a radical proposal at the time, and was not well accepted. The prevailing view was that the energy of electron transfer was stored as a stable high potential intermediate, a chemically more conservative concept. The problem with the older paradigm is that no high energy intermediate was ever found, and the evidence for proton pumping by the complexes of the electron transfer chain grew too great to be ignored. Eventually the weight of evidence began to favor the chemiosmotic hypothesis, and in 1978 Peter D. Mitchell was awarded the Nobel Prize in Chemistry. Chemiosmotic coupling is important for ATP production in mitochondria, chloroplasts and many bacteria and archaea. 
Proton-motive force The movement of ions across the membrane depends on a combination of two factors: Diffusion force caused by a concentration gradient - all particles tend to diffuse from higher concentration to lower. Electrostatic force caused by electrical potential gradient - cations like protons H+ tend to diffuse down the electrical potential, from the positive (P) side of the membrane to the negative (N) side. Anions diffuse spontaneously in the opposite direction. These two gradients taken together can be expressed as an electrochemical gradient. Lipid bilayers of biological membranes, however, are barriers for ions. This is why energy can be stored as a combination of these two gradients across the membrane. Only special membrane proteins like ion channels can sometimes allow ions to move across the membrane (see also: Membrane transport). In the chemiosmotic hypothesis a transmembrane ATP synthase is central; it converts the energy of the spontaneous flow of protons through it into the chemical energy of ATP bonds. Hence researchers created the term proton-motive force (PMF), derived from the electrochemical gradient mentioned earlier. It can be described as the measure of the potential energy stored (chemiosmotic potential) as a combination of proton and voltage (electrical potential) gradients across a membrane. The electrical gradient is a consequence of the charge separation across the membrane (when the protons H+ move without a counterion, such as chloride Cl−). In most cases the proton-motive force is generated by an electron transport chain which acts as a proton pump, using the Gibbs free energy of redox reactions to pump protons (hydrogen ions) out across the membrane, separating the charge across the membrane. In mitochondria, energy released by the electron transport chain is used to move protons from the mitochondrial matrix (N side) to the intermembrane space (P side). Moving the protons out of the mitochondrion creates a lower concentration of positively charged protons inside it, resulting in excess negative charge on the inside of the membrane. The electrical potential gradient is about −170 mV, negative inside (N). These gradients, the charge difference and the proton concentration difference, together create a combined electrochemical gradient across the membrane, often expressed as the proton-motive force (PMF). In mitochondria, the PMF is almost entirely made up of the electrical component but in chloroplasts the PMF is made up mostly of the pH gradient because the charge of protons H+ is neutralized by the movement of Cl− and other anions. In either case, the PMF needs to be greater than about 460 mV (45 kJ/mol) for the ATP synthase to be able to make ATP. Equations The proton-motive force is derived from the Gibbs free energy. Let N denote the inside of a cell, and P denote the outside. Then ΔG = zFΔψ + RT ln([X]N/[X]P), where ΔG is the Gibbs free energy change per unit amount of cations X^z+ transferred from P to N; z is the charge number of the cation; Δψ is the electric potential of N relative to P; [X]P and [X]N are the cation concentrations at P and N, respectively; F is the Faraday constant; R is the gas constant; and T is the temperature. The molar Gibbs free energy change is frequently interpreted as a molar electrochemical ion potential Δμ = ΔG. For an electrochemical proton gradient z = 1 and as a consequence: Δμ_H = FΔψ + RT ln([H+]N/[H+]P) = FΔψ − (ln 10)RT ΔpH, where ΔpH = pH_N − pH_P. Mitchell defined the proton-motive force (PMF) as Δp = −Δμ_H/F. For example, Δμ_H = 1 kJ·mol−1 implies Δp = 10.4 mV. At T = 298 K this equation takes the form: Δp = −Δψ + (59.1 mV)·ΔpH.
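To make the sign conventions and magnitudes above concrete, here is a small numerical sketch of the proton-motive force for a mitochondrion-like and a chloroplast thylakoid-like membrane. The Δψ and ΔpH values are rough, illustrative assumptions rather than measurements.

```python
import math

R = 8.314       # gas constant, J/(mol*K)
F = 96485.0     # Faraday constant, C/mol
T = 298.0       # temperature, K

def pmf_mV(delta_psi_mV, delta_pH):
    """Proton-motive force in mV: Δp = -Δψ + (2.303*R*T/F)*ΔpH,
    with Δψ = ψ_N - ψ_P and ΔpH = pH_N - pH_P (N = inside, P = outside)."""
    slope_mV_per_pH = 1000.0 * math.log(10) * R * T / F   # about 59.1 mV per pH unit
    return -delta_psi_mV + slope_mV_per_pH * delta_pH

# Mitochondrion-like membrane: mostly electrical (assumed Δψ ≈ -170 mV, ΔpH ≈ +0.5).
print(round(pmf_mV(-170.0, 0.5), 1))   # about 200 mV, dominated by the Δψ term

# Chloroplast thylakoid-like membrane: mostly pH-driven (assumed Δψ ≈ -10 mV, ΔpH ≈ +3).
print(round(pmf_mV(-10.0, 3.0), 1))    # about 187 mV, dominated by the ΔpH term
```

With these assumed inputs the two membranes reach a comparable PMF by very different routes, which is the contrast between mitochondria and chloroplasts drawn in the paragraph above.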
Note that for spontaneous proton import from the P side (relatively more positive and acidic) to the N side (relatively more negative and alkaline), Δμ_H is negative (similar to ΔG) whereas the PMF is positive (similar to a redox cell potential ΔE). It is worth noting that, as with any transmembrane transport process, the PMF is directional. The sign of the transmembrane electric potential difference is chosen to represent the change in potential energy per unit charge flowing into the cell as above. Furthermore, due to redox-driven proton pumping by coupling sites, the proton gradient is always inside-alkaline. For both of these reasons, protons flow in spontaneously, from the P side to the N side; the available free energy is used to synthesize ATP (see below). For this reason, PMF is defined for proton import, which is spontaneous. PMF for proton export, i.e., proton pumping as catalyzed by the coupling sites, is simply the negative of PMF(import). The spontaneity of proton import (from the P to the N side) is universal in all bioenergetic membranes. This fact was not recognized before the 1990s, because the chloroplast thylakoid lumen was interpreted as an interior phase, but in fact it is topologically equivalent to the exterior of the chloroplast. Azzone et al. stressed that the inside phase (N side of the membrane) is the bacterial cytoplasm, mitochondrial matrix, or chloroplast stroma; the outside (P) side is the bacterial periplasmic space, mitochondrial intermembrane space, or chloroplast lumen. Furthermore, 3D tomography of the mitochondrial inner membrane shows its extensive invaginations to be stacked, similar to thylakoid disks; hence the mitochondrial intermembrane space is topologically quite similar to the chloroplast lumen. The energy expressed here as Gibbs free energy, electrochemical proton gradient, or proton-motive force (PMF), is a combination of two gradients across the membrane: the concentration gradient (via ΔpH) and the electric potential gradient Δψ. When a system reaches equilibrium, Δμ_H = 0; nevertheless, the concentrations on either side of the membrane need not be equal. Spontaneous movement across the membrane is determined by both the concentration and electric potential gradients. The molar Gibbs free energy of ATP synthesis, ΔGp, is also called the phosphorylation potential. The equilibrium H+/ATP ratio can be calculated by comparing ΔGp and Δp, for example in the case of the mammalian mitochondrion: H+ / ATP = ΔGp / (Δp / 10.4 mV per kJ·mol−1) = 40.2 kJ·mol−1 / (173.5 mV / 10.4 mV per kJ·mol−1) = 40.2 / 16.7 = 2.4. The actual ratio of the proton-binding c-subunit to the ATP-synthesizing beta-subunit copy numbers is 8/3 = 2.67, showing that under these conditions, the mitochondrion functions at 90% (2.4/2.67) efficiency. In fact, the thermodynamic efficiency is mostly lower in eukaryotic cells because ATP must be exported from the matrix to the cytoplasm, and ADP and phosphate must be imported from the cytoplasm. This "costs" one "extra" proton import per ATP, hence the actual efficiency is only 65% (= 2.4/3.67). In mitochondria The complete breakdown of glucose releasing its energy is called cellular respiration. The last steps of this process occur in mitochondria. The reduced molecules NADH and FADH2 are generated by the Krebs cycle, glycolysis, and pyruvate processing. These molecules pass electrons to an electron transport chain, which uses the energy released by the reduction of oxygen to create a proton gradient across the inner mitochondrial membrane.
ATP synthase then uses the energy stored in this gradient to make ATP. This process is called oxidative phosphorylation because it uses energy released by the oxidation of NADH and FADH2 to phosphorylate ADP into ATP. In plants The light reactions of photosynthesis generate ATP by the action of chemiosmosis. The photons in sunlight are received by the antenna complex of Photosystem II, which excites electrons to a higher energy level. These electrons travel down an electron transport chain, causing protons to be actively pumped across the thylakoid membrane into the thylakoid lumen. These protons then flow down their electrochemical potential gradient through an enzyme called ATP-synthase, creating ATP by the phosphorylation of ADP to ATP. The electrons from the initial light reaction reach Photosystem I, then are raised to a higher energy level by light energy and then received by an electron acceptor and reduce NADP+ to NADPH. The electrons lost from Photosystem II get replaced by the oxidation of water, which is "split" into protons and oxygen by the oxygen-evolving complex (OEC, also known as WOC, or the water-oxidizing complex). To generate one molecule of diatomic oxygen, 10 photons must be absorbed by Photosystems I and II, four electrons must move through the two photosystems, and 2 NADPH are generated (later used for carbon dioxide fixation in the Calvin Cycle). In prokaryotes Bacteria and archaea also can use chemiosmosis to generate ATP. Cyanobacteria, green sulfur bacteria, and purple bacteria synthesize ATP by a process called photophosphorylation. These bacteria use the energy of light to create a proton gradient using a photosynthetic electron transport chain. Non-photosynthetic bacteria such as E. coli also contain ATP synthase. In fact, mitochondria and chloroplasts are the product of endosymbiosis and trace back to incorporated prokaryotes. This process is described in the endosymbiotic theory. The origin of the mitochondrion triggered the origin of eukaryotes, and the origin of the plastid the origin of the Archaeplastida, one of the major eukaryotic supergroups. Chemiosmotic phosphorylation is the third pathway that produces ATP from inorganic phosphate and an ADP molecule. This process is part of oxidative phosphorylation. Emergence of chemiosmosis Thermal cycling model A stepwise model for the emergence of chemiosmosis, a key element in the origin of life on earth, proposes that primordial organisms used thermal cycling as an energy source (thermosynthesis), functioning essentially as a heat engine: self-organized convection in natural waters causing thermal cycling → added β-subunit of F1 ATP Synthase (generated ATP by thermal cycling of subunit during suspension in convection cell: thermosynthesis) → added membrane and Fo ATP Synthase moiety (generated ATP by change in electrical polarization of membrane during thermal cycling: thermosynthesis) → added metastable, light-induced electric dipoles in membrane (primitive photosynthesis) → added quinones and membrane-spanning light-induced electric dipoles (today's bacterial photosynthesis, which makes use of chemiosmosis). External proton gradient model Deep-sea hydrothermal vents, emitting hot acidic or alkaline water, would have created external proton gradients. These provided energy that primordial organisms could have exploited. 
To keep the flows separate, such an organism could have wedged itself in the rock of the hydrothermal vent, exposed to the hydrothermal flow on one side and the more alkaline water on the other. As long as the organism's membrane (or passive ion channels within it) is permeable to protons, the mechanism can function without ion pumps. Such a proto-organism could then have evolved further mechanisms such as ion pumps and ATP synthase. Meteoritic quinones A proposed alternative source to chemiosmotic energy developing across membranous structures is if an electron acceptor, ferricyanide, is within a vesicle and the electron donor is outside, quinones transported by carbonaceous meteorites pick up electrons and protons from the donor. They would release electrons across the lipid membrane by diffusion to ferricyanide within the vesicles and release protons which produces gradients above pH 2, the process is conducive to the development of proton gradients. See also Cellular respiration Citric acid cycle Electrochemical gradient Glycolysis Oxidative phosphorylation References Further reading Biochemistry textbook reference, from the NCBI bookshelf – A set of experiments aiming to test some tenets of the chemiosmotic theory – External links Chemiosmosis (University of Wisconsin) Biochemical reactions Cell biology Cellular respiration
Shock and awe
Shock and awe (technically known as rapid dominance) is a military strategy based on the use of overwhelming power and spectacular displays of force to paralyze the enemy's perception of the battlefield and destroy their will to fight. Though the concept has a variety of historical precedents, the doctrine was explained by Harlan K. Ullman and James P. Wade in 1996 and was developed specifically for application by the US military by the National Defense University of the United States. Doctrine of rapid dominance Rapid dominance is defined by its authors, Harlan K. Ullman and James P. Wade, as attempting Further, rapid dominance will, according to Ullman and Wade, Introducing the doctrine in a report to the United States' National Defense University in 1996, Ullman and Wade describe it as an attempt to develop a post-Cold War military doctrine for the United States. Rapid dominance and shock and awe, they write, may become a "revolutionary change" as the United States military is reduced in size and information technology is increasingly integrated into warfare. Subsequent U.S. military authors have written that rapid dominance exploits the "superior technology, precision engagement, and information dominance" of the United States. Ullman and Wade identify four vital characteristics of rapid dominance: near total or absolute knowledge and understanding of self, adversary, and environment; rapidity and timeliness in application; operational brilliance in execution; and (near) total control and signature management of the entire operational environment. The term "shock and awe" is most consistently used by Ullman and Wade as the effect that rapid dominance seeks to impose upon an adversary. It is the desired state of helplessness and lack of will. It can be induced, they write, by direct force applied to command and control centers, selective denial of information and dissemination of disinformation, overwhelming combat force, and rapidity of action. The doctrine of rapid dominance has evolved from the concept of "decisive force". Ulman and Wade contrast the two concepts in terms of objective, use of force, force size, scope, speed, casualties, and technique. Civilian casualties and destruction of infrastructure Although Ullman and Wade claim that the need to "[m]inimize civilian casualties, loss of life, and collateral damage" is a "political sensitivity [which needs] to be understood up front", their doctrine of rapid dominance requires the capability to disrupt "means of communication, transportation, food production, water supply, and other aspects of infrastructure", and, in practice, "the appropriate balance of Shock and Awe must cause ... the threat and fear of action that may shut down all or part of the adversary's society or render his ability to fight useless short of complete physical destruction." Using as an example a theoretical invasion of Iraq 20 years after Operation Desert Storm, the authors claimed, "Shutting the country down would entail both the physical destruction of appropriate infrastructure and the shutdown and control of the flow of all vital information and associated commerce so rapidly as to achieve a level of national shock akin to the effect that dropping nuclear weapons on Hiroshima and Nagasaki had on the Japanese." Reiterating the example in an interview with CBS News several months before Operation Iraqi Freedom, Ullman stated, "You're sitting in Baghdad and all of a sudden you're the general and 30 of your division headquarters have been wiped out. 
You also take the city down. By that I mean you get rid of their power, water. In 2, 3, 4, 5 days they are physically, emotionally and psychologically exhausted." Historical applications Ullman and Wade argue that there have been military applications that fall within some of the concepts of shock and awe. They enumerate nine examples: Overwhelming force: The "application of massive or overwhelming force" to "disarm, incapacitate, or render the enemy militarily impotent with as few casualties to ourselves and to noncombatants as possible." Hiroshima and Nagasaki: The establishment of shock and awe through "instant, nearly incomprehensible levels of massive destruction directed at influencing society writ large, meaning its leadership and public, rather than targeting directly against military or strategic objectives even with relatively few numbers or systems." Massive bombardment: Described as "precise destructive power largely against military targets and related sectors over time." Blitzkrieg: The "intent was to apply precise, surgical amounts of tightly focused force to achieve maximum leverage but with total economies of scale." Sun Tzu: The "selective, instant beheading of military or societal targets to achieve shock and awe." Haitian example: This example (occasionally referred to as the Potemkin village example) refers to a martial parade staged in Haiti on behalf of the (then) colonial power France in the early 1800s in which the native Haitians marched a small number of battalions in a cyclical manner. This led the colonial power into the belief that the size of the native forces was large enough so as to make any military action infeasible. The Roman legions: "Achieving shock and awe rests in the ability to deter and overpower an adversary through the adversary's perception and fear of his vulnerability and our own invincibility." Decay and default: "The imposition of societal breakdown over a lengthy period, but without the application of massive destruction." First Chechen War Russia's military strategy in the First Chechen War, and particularly the Battle of Grozny, was described as "shock and awe." Iraq War Buildup Before the 2003 invasion of Iraq, United States armed forces officials described their plan as employing shock and awe. But, Tommy Franks, commanding general of the invading forces, "had never cared for the use of the term 'shock and awe' " and "had not seen that effect as the point of the air offensive." Conflicting pre-war assessments Before its implementation, there was dissent within the Bush administration as to whether the shock and awe plan would work. According to a CBS News report, "One senior official called it a bunch of bull, but confirmed it is the concept on which the war plan is based." CBS Correspondent David Martin noted that during Operation Anaconda in Afghanistan in the prior year, the U.S. forces were "badly surprised by the willingness of al Qaeda to fight to the death. If the Iraqis fight, the U.S. would have to throw in reinforcements and win the old fashioned way by crushing the Republican Guards, and that would mean more casualties on both sides." Campaign Continuous bombing began on March 19, 2003, as United States forces unsuccessfully attempted to kill Saddam Hussein with decapitation strikes. Attacks continued against a small number of targets until March 21, 2003, when, at 1700 UTC, the main bombing campaign of the US and their allies began. Its forces launched approximately 1,700 air sorties (504 using cruise missiles). 
Coalition ground forces had begun a "running start" offensive towards Baghdad on the previous day. Coalition ground forces seized Baghdad on April 5, and the United States declared victory on April 15. The term "shock and awe" is typically used to describe only the very beginning of the invasion of Iraq, not the larger war, nor the ensuing insurgency. Conflicting post-war assessments To what extent the United States fought a campaign of shock and awe is unclear as post-war assessments are contradictory. Within two weeks of the United States' victory declaration, on April 27, The Washington Post published an interview with Iraqi military personnel detailing demoralization and lack of command. According to the soldiers, Coalition bombing was surprisingly widespread and had a severely demoralizing effect. When United States tanks passed through the Iraqi military's Republican Guard and Special Republican Guard units outside Baghdad to Saddam's presidential palaces, it caused a shock to troops inside Baghdad. Iraqi soldiers said there was no organization intact by the time the United States entered Baghdad and that resistance crumbled under the presumption that "it wasn't a war, it was suicide." In contrast, in an October 2003 presentation to the United States House Committee on Armed Services, staff of the United States Army War College did not attribute their performance to rapid dominance. Rather, they cited technological superiority and "Iraqi ineptitude". The speed of the coalition's actions ("rapidity"), they said, did not affect Iraqi morale. Further, they said that Iraqi armed forces ceased resistance only after direct force-on-force combat within cities. According to National Geographic researcher Bijal Trivedi, "Even after several days of bombing the Iraqis showed remarkable resilience. Many continued with their daily lives, working and shopping, as bombs continued to fall around them. According to some analysts, the military's attack was perhaps too precise. It did not trigger shock and awe in the Iraqis and, in the end, the city was only captured after close combat on the outskirts of Baghdad." Criticism of execution According to The Guardian correspondent Brian Whitaker in 2003, "To some in the Arab and Muslim countries, Shock and Awe is terrorism by another name; to others, a crime that compares unfavourably with September 11." Anti-war protesters in 2003 also claimed that "the shock and awe pummeling of Baghdad [was] a kind of terrorism." Casualties A dossier released by Iraq Body Count, a project of the U.K. non-governmental non-violent and disarmament organization Oxford Research Group, attributed approximately 6,616 civilian deaths to the actions of U.S.-led forces during the "invasion phase", including the shock-and-awe bombing campaign on Baghdad. These findings were disputed by both the U.S. military and the Iraqi government. Lieutenant Colonel Steve Boylan, the spokesman for the U.S. military in Baghdad, stated, "I don't know how they are doing their methodology and can't talk to how they calculate their numbers," as well as "we do everything we can to avoid civilian casualties in all of our operations." National Geographic researcher Bijal Trivedi stated, "Civilian casualties did occur, but the strikes, for the most part, were surgical." In popular culture Following the 2003 invasion of Iraq by the US, the term "shock and awe" has been used for commercial purposes. 
The United States Patent and Trademark Office received at least 29 trademark applications in 2003 for exclusive use of the term. The first came from a fireworks company on the day the US started bombing Baghdad. Sony registered the trademark the day after the beginning of the operation for use in a video game title but later withdrew the application and described it as "an exercise of regrettable bad judgment." In an interview, Harlan Ullman stated that he believed that using the term to try to sell products was "probably a mistake", and that "the marketing value will be somewhere between slim and none". Shock and awe is the job of Jane Doe, most commonly known as The Soldier from Valve's 2007 multi-player FPS game Team Fortress 2. In the 2009 theatrical movie Avatar, the genocide attack on the Na'vi is described as a "Shock and Awe" campaign by doctor Max Patel. In the 2011 theatrical film Battle: Los Angeles, the invasion by the alien force is described as using "rapid dominance" along the world's coastlines, including indiscriminate use of heavy ordnance. A mission entitled "Shock and Awe" in the video game Call of Duty 4: Modern Warfare concludes with the detonation of a nuclear warhead. In the 2008 video game Command & Conquer: Red Alert 3, one of the songs in the soundtrack of the game is titled "Shock and Awe". In the 2016 video game Hearts of Iron IV, one doctrine the player can select is named “Shock and Awe”, focussing on overwhelming Artillery- and Air support. However, the game is set before Ullman and Wade’s explanation of the terminology. See also Demoralization (military) Hearts and minds (Iraq) Powell Doctrine Psychological warfare Rumsfeld Doctrine Terror (politics) London Blitz Blitzkrieg Notes Further reading Blakesley, Paul J. "Shock and Awe: A Widely Misunderstood Effect". United States Army Command and General Staff College, June 17, 2004. Branigin, William. "A Brief, Bitter War for Iraq's Military Officers". Washington Post, October 27, 2003. Peterson, Scott. "US mulls air strategies in Iraq". Christian Science Monitor, January 30, 2003. Ullman, Harlan K. and Wade, James P. Rapid Dominance: A Force for All Seasons. Royal United Services Institute in Defense Studies, 1998. External links Shock and awe , from SourceWatch Command and Control Research Program 1996 neologisms English phrases Iraq War terminology Military doctrines Military terminology Psychological warfare techniques Warfare of the late modern period
Magnetic field
A magnetic field (sometimes called B-field) is a physical field that describes the magnetic influence on moving electric charges, electric currents, and magnetic materials. A moving charge in a magnetic field experiences a force perpendicular to its own velocity and to the magnetic field. A permanent magnet's magnetic field pulls on ferromagnetic materials such as iron, and attracts or repels other magnets. In addition, a nonuniform magnetic field exerts minuscule forces on "nonmagnetic" materials by three other magnetic effects: paramagnetism, diamagnetism, and antiferromagnetism, although these forces are usually so small they can only be detected by laboratory equipment. Magnetic fields surround magnetized materials, electric currents, and electric fields varying in time. Since both strength and direction of a magnetic field may vary with location, it is described mathematically by a function assigning a vector to each point of space, called a vector field (more precisely, a pseudovector field). In electromagnetics, the term magnetic field is used for two distinct but closely related vector fields denoted by the symbols and . In the International System of Units, the unit of , magnetic flux density, is the tesla (in SI base units: kilogram per second squared per ampere), which is equivalent to newton per meter per ampere. The unit of , magnetic field strength, is ampere per meter (A/m). and differ in how they take the medium and/or magnetization into account. In vacuum, the two fields are related through the vacuum permeability, ; in a magnetized material, the quantities on each side of this equation differ by the magnetization field of the material. Magnetic fields are produced by moving electric charges and the intrinsic magnetic moments of elementary particles associated with a fundamental quantum property, their spin. Magnetic fields and electric fields are interrelated and are both components of the electromagnetic force, one of the four fundamental forces of nature. Magnetic fields are used throughout modern technology, particularly in electrical engineering and electromechanics. Rotating magnetic fields are used in both electric motors and generators. The interaction of magnetic fields in electric devices such as transformers is conceptualized and investigated as magnetic circuits. Magnetic forces give information about the charge carriers in a material through the Hall effect. The Earth produces its own magnetic field, which shields the Earth's ozone layer from the solar wind and is important in navigation using a compass. Description The force on an electric charge depends on its location, speed, and direction; two vector fields are used to describe this force. The first is the electric field, which describes the force acting on a stationary charge and gives the component of the force that is independent of motion. The magnetic field, in contrast, describes the component of the force that is proportional to both the speed and direction of charged particles. The field is defined by the Lorentz force law and is, at each instant, perpendicular to both the motion of the charge and the force it experiences. There are two different, but closely related vector fields which are both sometimes called the "magnetic field" written and . While both the best names for these fields and exact interpretation of what these fields represent has been the subject of long running debate, there is wide agreement about how the underlying physics work. 
Historically, the term "magnetic field" was reserved for while using other terms for , but many recent textbooks use the term "magnetic field" to describe as well as or in place of . There are many alternative names for both (see sidebars). The B-field The magnetic field vector at any point can be defined as the vector that, when used in the Lorentz force law, correctly predicts the force on a charged particle at that point: Here is the force on the particle, is the particle's electric charge, , is the particle's velocity, and × denotes the cross product. The direction of force on the charge can be determined by a mnemonic known as the right-hand rule (see the figure). Using the right hand, pointing the thumb in the direction of the current, and the fingers in the direction of the magnetic field, the resulting force on the charge points outwards from the palm. The force on a negatively charged particle is in the opposite direction. If both the speed and the charge are reversed then the direction of the force remains the same. For that reason a magnetic field measurement (by itself) cannot distinguish whether there is a positive charge moving to the right or a negative charge moving to the left. (Both of these cases produce the same current.) On the other hand, a magnetic field combined with an electric field can distinguish between these, see Hall effect below. The first term in the Lorentz equation is from the theory of electrostatics, and says that a particle of charge in an electric field experiences an electric force: The second term is the magnetic force: Using the definition of the cross product, the magnetic force can also be written as a scalar equation: where , , and are the scalar magnitude of their respective vectors, and is the angle between the velocity of the particle and the magnetic field. The vector is defined as the vector field necessary to make the Lorentz force law correctly describe the motion of a charged particle. In other words, The field can also be defined by the torque on a magnetic dipole, . The SI unit of is tesla (symbol: T). The Gaussian-cgs unit of is the gauss (symbol: G). (The conversion is 1 T ≘ 10000 G.) One nanotesla corresponds to 1 gamma (symbol: γ). The H-field The magnetic field is defined: where is the vacuum permeability, and is the magnetization vector. In a vacuum, and are proportional to each other. Inside a material they are different (see H and B inside and outside magnetic materials). The SI unit of the -field is the ampere per metre (A/m), and the CGS unit is the oersted (Oe). Measurement An instrument used to measure the local magnetic field is known as a magnetometer. Important classes of magnetometers include using induction magnetometers (or search-coil magnetometers) which measure only varying magnetic fields, rotating coil magnetometers, Hall effect magnetometers, NMR magnetometers, SQUID magnetometers, and fluxgate magnetometers. The magnetic fields of distant astronomical objects are measured through their effects on local charged particles. For instance, electrons spiraling around a field line produce synchrotron radiation that is detectable in radio waves. The finest precision for a magnetic field measurement was attained by Gravity Probe B at . Visualization The field can be visualized by a set of magnetic field lines, that follow the direction of the field at each point. The lines can be constructed by measuring the strength and direction of the magnetic field at a large number of points (or at every point in space). 
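For reference, the defining relations just described can be written out explicitly in SI units and standard notation, with F the force, q the charge, v the velocity, E the electric field, B the magnetic flux density, H the magnetic field strength, M the magnetization, and θ the angle between v and B (the symbols themselves do not appear in the prose above, so this notation is a conventional choice rather than the article's own):

\mathbf{F} = q\,\mathbf{E} + q\,\mathbf{v}\times\mathbf{B}
\left|\mathbf{F}_{\mathrm{mag}}\right| = |q|\,v\,B\,\sin\theta
\mathbf{H} = \frac{\mathbf{B}}{\mu_0} - \mathbf{M}

In vacuum M = 0, so B = μ0 H and the two fields differ only by the constant factor μ0, as stated above.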
Then, mark each location with an arrow (called a vector) pointing in the direction of the local magnetic field with its magnitude proportional to the strength of the magnetic field. Connecting these arrows then forms a set of magnetic field lines. The direction of the magnetic field at any point is parallel to the direction of nearby field lines, and the local density of field lines can be made proportional to its strength. Magnetic field lines are like streamlines in fluid flow, in that they represent a continuous distribution, and a different resolution would show more or fewer lines. An advantage of using magnetic field lines as a representation is that many laws of magnetism (and electromagnetism) can be stated completely and concisely using simple concepts such as the "number" of field lines through a surface. These concepts can be quickly "translated" to their mathematical form. For example, the number of field lines through a given surface is the surface integral of the magnetic field. Various phenomena "display" magnetic field lines as though the field lines were physical phenomena. For example, iron filings placed in a magnetic field form lines that correspond to "field lines". Magnetic field "lines" are also visually displayed in polar auroras, in which plasma particle dipole interactions create visible streaks of light that line up with the local direction of Earth's magnetic field. Field lines can be used as a qualitative tool to visualize magnetic forces. In ferromagnetic substances like iron and in plasmas, magnetic forces can be understood by imagining that the field lines exert a tension, (like a rubber band) along their length, and a pressure perpendicular to their length on neighboring field lines. "Unlike" poles of magnets attract because they are linked by many field lines; "like" poles repel because their field lines do not meet, but run parallel, pushing on each other. Magnetic field of permanent magnets Permanent magnets are objects that produce their own persistent magnetic fields. They are made of ferromagnetic materials, such as iron and nickel, that have been magnetized, and they have both a north and a south pole. The magnetic field of permanent magnets can be quite complicated, especially near the magnet. The magnetic field of a small straight magnet is proportional to the magnet's strength (called its magnetic dipole moment ). The equations are non-trivial and depend on the distance from the magnet and the orientation of the magnet. For simple magnets, points in the direction of a line drawn from the south to the north pole of the magnet. Flipping a bar magnet is equivalent to rotating its by 180 degrees. The magnetic field of larger magnets can be obtained by modeling them as a collection of a large number of small magnets called dipoles each having their own . The magnetic field produced by the magnet then is the net magnetic field of these dipoles; any net force on the magnet is a result of adding up the forces on the individual dipoles. There are two simplified models for the nature of these dipoles: the magnetic pole model and the Amperian loop model. These two models produce two different magnetic fields, and . Outside a material, though, the two are identical (to a multiplicative constant) so that in many cases the distinction can be ignored. This is particularly true for magnetic fields, such as those due to electric currents, that are not generated by magnetic materials. 
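Two of the statements above can be made quantitative with standard expressions. The "number of field lines through a surface" is the magnetic flux, and the field of a small straight magnet far from the magnet is the point-dipole field (the delta-function term at the origin is omitted here); both forms are the usual textbook ones, not derived from this article:

\Phi_B = \iint_S \mathbf{B}\cdot d\mathbf{A}
\mathbf{B}_{\mathrm{dipole}}(\mathbf{r}) = \frac{\mu_0}{4\pi}\,\frac{3\,\hat{\mathbf{r}}\,(\hat{\mathbf{r}}\cdot\mathbf{m}) - \mathbf{m}}{r^{3}}

The 1/r3 dependence and the appearance of m on both terms reflect the statement that the field depends on both the distance from the magnet and its orientation.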
A realistic model of magnetism is more complicated than either of these models; neither model fully explains why materials are magnetic. The monopole model has no experimental support. The Amperian loop model explains some, but not all of a material's magnetic moment. The model predicts that the motion of electrons within an atom are connected to those electrons' orbital magnetic dipole moment, and these orbital moments do contribute to the magnetism seen at the macroscopic level. However, the motion of electrons is not classical, and the spin magnetic moment of electrons (which is not explained by either model) is also a significant contribution to the total moment of magnets. Magnetic pole model Historically, early physics textbooks would model the force and torques between two magnets as due to magnetic poles repelling or attracting each other in the same manner as the Coulomb force between electric charges. At the microscopic level, this model contradicts the experimental evidence, and the pole model of magnetism is no longer the typical way to introduce the concept. However, it is still sometimes used as a macroscopic model for ferromagnetism due to its mathematical simplicity. In this model, a magnetic -field is produced by fictitious magnetic charges that are spread over the surface of each pole. These magnetic charges are in fact related to the magnetization field . The -field, therefore, is analogous to the electric field , which starts at a positive electric charge and ends at a negative electric charge. Near the north pole, therefore, all -field lines point away from the north pole (whether inside the magnet or out) while near the south pole all -field lines point toward the south pole (whether inside the magnet or out). Too, a north pole feels a force in the direction of the -field while the force on the south pole is opposite to the -field. In the magnetic pole model, the elementary magnetic dipole is formed by two opposite magnetic poles of pole strength separated by a small distance vector , such that . The magnetic pole model predicts correctly the field both inside and outside magnetic materials, in particular the fact that is opposite to the magnetization field inside a permanent magnet. Since it is based on the fictitious idea of a magnetic charge density, the pole model has limitations. Magnetic poles cannot exist apart from each other as electric charges can, but always come in north–south pairs. If a magnetized object is divided in half, a new pole appears on the surface of each piece, so each has a pair of complementary poles. The magnetic pole model does not account for magnetism that is produced by electric currents, nor the inherent connection between angular momentum and magnetism. The pole model usually treats magnetic charge as a mathematical abstraction, rather than a physical property of particles. However, a magnetic monopole is a hypothetical particle (or class of particles) that physically has only one magnetic pole (either a north pole or a south pole). In other words, it would possess a "magnetic charge" analogous to an electric charge. Magnetic field lines would start or end on magnetic monopoles, so if they exist, they would give exceptions to the rule that magnetic field lines neither start nor end. Some theories (such as Grand Unified Theories) have predicted the existence of magnetic monopoles, but so far, none have been observed. 
Amperian loop model In the model developed by Ampere, the elementary magnetic dipole that makes up all magnets is a sufficiently small Amperian loop with current and loop area . The dipole moment of this loop is . These magnetic dipoles produce a magnetic -field. The magnetic field of a magnetic dipole is depicted in the figure. From outside, the ideal magnetic dipole is identical to that of an ideal electric dipole of the same strength. Unlike the electric dipole, a magnetic dipole is properly modeled as a current loop having a current and an area . Such a current loop has a magnetic moment of where the direction of is perpendicular to the area of the loop and depends on the direction of the current using the right-hand rule. An ideal magnetic dipole is modeled as a real magnetic dipole whose area has been reduced to zero and its current increased to infinity such that the product is finite. This model clarifies the connection between angular momentum and magnetic moment, which is the basis of the Einstein–de Haas effect rotation by magnetization and its inverse, the Barnett effect or magnetization by rotation. Rotating the loop faster (in the same direction) increases the current and therefore the magnetic moment, for example. Interactions with magnets Force between magnets Specifying the force between two small magnets is quite complicated because it depends on the strength and orientation of both magnets and their distance and direction relative to each other. The force is particularly sensitive to rotations of the magnets due to magnetic torque. The force on each magnet depends on its magnetic moment and the magnetic field of the other. To understand the force between magnets, it is useful to examine the magnetic pole model given above. In this model, the -field of one magnet pushes and pulls on both poles of a second magnet. If this -field is the same at both poles of the second magnet then there is no net force on that magnet since the force is opposite for opposite poles. If, however, the magnetic field of the first magnet is nonuniform (such as the near one of its poles), each pole of the second magnet sees a different field and is subject to a different force. This difference in the two forces moves the magnet in the direction of increasing magnetic field and may also cause a net torque. This is a specific example of a general rule that magnets are attracted (or repulsed depending on the orientation of the magnet) into regions of higher magnetic field. Any non-uniform magnetic field, whether caused by permanent magnets or electric currents, exerts a force on a small magnet in this way. The details of the Amperian loop model are different and more complicated but yield the same result: that magnetic dipoles are attracted/repelled into regions of higher magnetic field. Mathematically, the force on a small magnet having a magnetic moment due to a magnetic field is: where the gradient is the change of the quantity per unit distance and the direction is that of maximum increase of . The dot product , where and represent the magnitude of the and vectors and is the angle between them. If is in the same direction as then the dot product is positive and the gradient points "uphill" pulling the magnet into regions of higher -field (more strictly larger ). This equation is strictly only valid for magnets of zero size, but is often a good approximation for not too large magnets. 
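Written out in standard notation, the quantities discussed in this subsection are (I the loop current, A the loop area, n̂ the loop normal given by the right-hand rule, θ the angle between m and B):

\mathbf{m} = I\,A\,\hat{\mathbf{n}}
\mathbf{F} = \nabla\!\left(\mathbf{m}\cdot\mathbf{B}\right), \qquad \mathbf{m}\cdot\mathbf{B} = m\,B\cos\theta

The gradient form makes explicit why a uniform field exerts no net force on a small magnet: if B does not vary with position, m·B is constant and its gradient vanishes.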
The magnetic force on larger magnets is determined by dividing them into smaller regions each having their own then summing up the forces on each of these very small regions. Magnetic torque on permanent magnets If two like poles of two separate magnets are brought near each other, and one of the magnets is allowed to turn, it promptly rotates to align itself with the first. In this example, the magnetic field of the stationary magnet creates a magnetic torque on the magnet that is free to rotate. This magnetic torque tends to align a magnet's poles with the magnetic field lines. A compass, therefore, turns to align itself with Earth's magnetic field. In terms of the pole model, two equal and opposite magnetic charges experiencing the same also experience equal and opposite forces. Since these equal and opposite forces are in different locations, this produces a torque proportional to the distance (perpendicular to the force) between them. With the definition of as the pole strength times the distance between the poles, this leads to , where is a constant called the vacuum permeability, measuring V·s/(A·m) and is the angle between and . Mathematically, the torque on a small magnet is proportional both to the applied magnetic field and to the magnetic moment of the magnet: where × represents the vector cross product. This equation includes all of the qualitative information included above. There is no torque on a magnet if is in the same direction as the magnetic field, since the cross product is zero for two vectors that are in the same direction. Further, all other orientations feel a torque that twists them toward the direction of magnetic field. Interactions with electric currents Currents of electric charges both generate a magnetic field and feel a force due to magnetic B-fields. Magnetic field due to moving charges and electric currents All moving charged particles produce magnetic fields. Moving point charges, such as electrons, produce complicated but well known magnetic fields that depend on the charge, velocity, and acceleration of the particles. Magnetic field lines form in concentric circles around a cylindrical current-carrying conductor, such as a length of wire. The direction of such a magnetic field can be determined by using the "right-hand grip rule" (see figure at right). The strength of the magnetic field decreases with distance from the wire. (For an infinite length wire the strength is inversely proportional to the distance.) Bending a current-carrying wire into a loop concentrates the magnetic field inside the loop while weakening it outside. Bending a wire into multiple closely spaced loops to form a coil or "solenoid" enhances this effect. A device so formed around an iron core may act as an electromagnet, generating a strong, well-controlled magnetic field. An infinitely long cylindrical electromagnet has a uniform magnetic field inside, and no magnetic field outside. A finite length electromagnet produces a magnetic field that looks similar to that produced by a uniform permanent magnet, with its strength and polarity determined by the current flowing through the coil. 
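The torque relation described above, together with the standard result for the solenoid mentioned at the end of this subsection, reads (m the magnetic moment, n the number of turns per unit length, I the current; these are the usual textbook forms):

\boldsymbol{\tau} = \mathbf{m}\times\mathbf{B}, \qquad \left|\boldsymbol{\tau}\right| = m\,B\,\sin\theta
B_{\mathrm{solenoid}} = \mu_0\, n\, I \quad \text{(inside a long solenoid)}

The cross product captures the qualitative behavior stated above: the torque vanishes when m is aligned with B and otherwise twists the magnet toward alignment.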
The magnetic field generated by a steady current (a constant flow of electric charges, in which charge neither accumulates nor is depleted at any point) is described by the Biot–Savart law: where the integral sums over the wire length where vector is the vector line element with direction in the same sense as the current , is the magnetic constant, is the distance between the location of and the location where the magnetic field is calculated, and is a unit vector in the direction of . For example, in the case of a sufficiently long, straight wire, this becomes: where . The direction is tangent to a circle perpendicular to the wire according to the right hand rule. A slightly more general way of relating the current to the -field is through Ampère's law: where the line integral is over any arbitrary loop and is the current enclosed by that loop. Ampère's law is always valid for steady currents and can be used to calculate the -field for certain highly symmetric situations such as an infinite wire or an infinite solenoid. In a modified form that accounts for time varying electric fields, Ampère's law is one of four Maxwell's equations that describe electricity and magnetism. Force on moving charges and current Force on a charged particle A charged particle moving in a -field experiences a sideways force that is proportional to the strength of the magnetic field, the component of the velocity that is perpendicular to the magnetic field and the charge of the particle. This force is known as the Lorentz force, and is given by where is the force, is the electric charge of the particle, is the instantaneous velocity of the particle, and is the magnetic field (in teslas). The Lorentz force is always perpendicular to both the velocity of the particle and the magnetic field that created it. When a charged particle moves in a static magnetic field, it traces a helical path in which the helix axis is parallel to the magnetic field, and in which the speed of the particle remains constant. Because the magnetic force is always perpendicular to the motion, the magnetic field can do no work on an isolated charge. It can only do work indirectly, via the electric field generated by a changing magnetic field. It is often claimed that the magnetic force can do work to a non-elementary magnetic dipole, or to charged particles whose motion is constrained by other forces, but this is incorrect because the work in those cases is performed by the electric forces of the charges deflected by the magnetic field. Force on current-carrying wire The force on a current carrying wire is similar to that of a moving charge as expected since a current carrying wire is a collection of moving charges. A current-carrying wire feels a force in the presence of a magnetic field. The Lorentz force on a macroscopic current is often referred to as the Laplace force. Consider a conductor of length , cross section , and charge due to electric current . If this conductor is placed in a magnetic field of magnitude that makes an angle with the velocity of charges in the conductor, the force exerted on a single charge is so, for charges where the force exerted on the conductor is where . Relation between H and B The formulas derived for the magnetic field above are correct when dealing with the entire current. A magnetic material placed inside a magnetic field, though, generates its own bound current, which can be a challenge to calculate. 
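The laws referred to in this subsection take the following standard SI forms, with dℓ a line element of the wire in the direction of the current I, r̂ the unit vector from the element to the field point, r their separation, I_enc the current enclosed by the Ampèrian loop, and ℓ the length of wire sitting in the field:

\mathbf{B}(\mathbf{r}) = \frac{\mu_0}{4\pi}\int \frac{I\, d\boldsymbol{\ell}\times\hat{\mathbf{r}}}{r^{2}} \qquad \text{(Biot–Savart law)}
B = \frac{\mu_0 I}{2\pi r} \qquad \text{(long straight wire)}
\oint \mathbf{B}\cdot d\boldsymbol{\ell} = \mu_0\, I_{\mathrm{enc}} \qquad \text{(Ampère's law, steady currents)}
\mathbf{F} = I\,\boldsymbol{\ell}\times\mathbf{B}, \qquad F = B\, I\, \ell\,\sin\theta \qquad \text{(force on a straight current-carrying wire)}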
(This bound current is due to the sum of atomic sized current loops and the spin of the subatomic particles such as electrons that make up the material.) The -field as defined above helps factor out this bound current; but to see how, it helps to introduce the concept of magnetization first. Magnetization The magnetization vector field represents how strongly a region of material is magnetized. It is defined as the net magnetic dipole moment per unit volume of that region. The magnetization of a uniform magnet is therefore a material constant, equal to the magnetic moment of the magnet divided by its volume. Since the SI unit of magnetic moment is A⋅m2, the SI unit of magnetization is ampere per meter, identical to that of the -field. The magnetization field of a region points in the direction of the average magnetic dipole moment in that region. Magnetization field lines, therefore, begin near the magnetic south pole and ends near the magnetic north pole. (Magnetization does not exist outside the magnet.) In the Amperian loop model, the magnetization is due to combining many tiny Amperian loops to form a resultant current called bound current. This bound current, then, is the source of the magnetic field due to the magnet. Given the definition of the magnetic dipole, the magnetization field follows a similar law to that of Ampere's law: where the integral is a line integral over any closed loop and is the bound current enclosed by that closed loop. In the magnetic pole model, magnetization begins at and ends at magnetic poles. If a given region, therefore, has a net positive "magnetic pole strength" (corresponding to a north pole) then it has more magnetization field lines entering it than leaving it. Mathematically this is equivalent to: where the integral is a closed surface integral over the closed surface and is the "magnetic charge" (in units of magnetic flux) enclosed by . (A closed surface completely surrounds a region with no holes to let any field lines escape.) The negative sign occurs because the magnetization field moves from south to north. H-field and magnetic materials In SI units, the H-field is related to the B-field by In terms of the H-field, Ampere's law is where represents the 'free current' enclosed by the loop so that the line integral of does not depend at all on the bound currents. For the differential equivalent of this equation see Maxwell's equations. Ampere's law leads to the boundary condition where is the surface free current density and the unit normal points in the direction from medium 2 to medium 1. Similarly, a surface integral of over any closed surface is independent of the free currents and picks out the "magnetic charges" within that closed surface: which does not depend on the free currents. The -field, therefore, can be separated into two independent parts: where is the applied magnetic field due only to the free currents and is the demagnetizing field due only to the bound currents. The magnetic -field, therefore, re-factors the bound current in terms of "magnetic charges". The field lines loop only around "free current" and, unlike the magnetic field, begins and ends near magnetic poles as well. Magnetism Most materials respond to an applied -field by producing their own magnetization and therefore their own -fields. Typically, the response is weak and exists only when the magnetic field is applied. 
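In SI units, the relations sketched in this subsection between B, H, M and the bound and free currents are (I_b the bound current and I_f the free current enclosed by the loop):

\mathbf{B} = \mu_0\left(\mathbf{H} + \mathbf{M}\right)
\oint \mathbf{M}\cdot d\boldsymbol{\ell} = I_{\mathrm{b}}, \qquad \oint \mathbf{H}\cdot d\boldsymbol{\ell} = I_{\mathrm{f}}

Subtracting the M-integral from the B-integral divided by μ0 is exactly the re-factoring described above: the H-field's circulation depends only on the free current.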
The term magnetism describes how materials respond on the microscopic level to an applied magnetic field and is used to categorize the magnetic phase of a material. Materials are divided into groups based upon their magnetic behavior: Diamagnetic materials produce a magnetization that opposes the magnetic field. Paramagnetic materials produce a magnetization in the same direction as the applied magnetic field. Ferromagnetic materials and the closely related ferrimagnetic materials and antiferromagnetic materials can have a magnetization independent of an applied B-field with a complex relationship between the two fields. Superconductors (and ferromagnetic superconductors) are materials that are characterized by perfect conductivity below a critical temperature and magnetic field. They also are highly magnetic and can be perfect diamagnets below a lower critical magnetic field. Superconductors often have a broad range of temperatures and magnetic fields (the so-named mixed state) under which they exhibit a complex hysteretic dependence of on . In the case of paramagnetism and diamagnetism, the magnetization is often proportional to the applied magnetic field such that: where is a material dependent parameter called the permeability. In some cases the permeability may be a second rank tensor so that may not point in the same direction as . These relations between and are examples of constitutive equations. However, superconductors and ferromagnets have a more complex -to- relation; see magnetic hysteresis. Stored energy Energy is needed to generate a magnetic field both to work against the electric field that a changing magnetic field creates and to change the magnetization of any material within the magnetic field. For non-dispersive materials, this same energy is released when the magnetic field is destroyed so that the energy can be modeled as being stored in the magnetic field. For linear, non-dispersive, materials (such that where is frequency-independent), the energy density is: If there are no magnetic materials around then can be replaced by . The above equation cannot be used for nonlinear materials, though; a more general expression given below must be used. In general, the incremental amount of work per unit volume needed to cause a small change of magnetic field is: Once the relationship between and is known this equation is used to determine the work needed to reach a given magnetic state. For hysteretic materials such as ferromagnets and superconductors, the work needed also depends on how the magnetic field is created. For linear non-dispersive materials, though, the general equation leads directly to the simpler energy density equation given above. Appearance in Maxwell's equations Like all vector fields, a magnetic field has two important mathematical properties that relates it to its sources. (For the sources are currents and changing electric fields.) These two properties, along with the two corresponding properties of the electric field, make up Maxwell's Equations. Maxwell's Equations together with the Lorentz force law form a complete description of classical electrodynamics including both electricity and magnetism. The first property is the divergence of a vector field , , which represents how "flows" outward from a given point. As discussed above, a -field line never starts or ends at a point but instead forms a complete loop. This is mathematically equivalent to saying that the divergence of is zero. (Such vector fields are called solenoidal vector fields.) 
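For linear, non-dispersive materials, the constitutive relation and the stored-energy expressions described above take the standard forms (χm the magnetic susceptibility, μ the permeability, u the energy density, δW the incremental work per unit volume):

\mathbf{M} = \chi_m\,\mathbf{H}, \qquad \mathbf{B} = \mu\,\mathbf{H} = \mu_0\left(1+\chi_m\right)\mathbf{H}
u = \tfrac{1}{2}\,\mathbf{B}\cdot\mathbf{H} = \frac{B^{2}}{2\mu}, \qquad \delta W = \mathbf{H}\cdot\delta\mathbf{B}

With no magnetic material present, μ reduces to μ0 and the energy density becomes B2/(2μ0), consistent with the remark above that H can then be replaced by B/μ0.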
This property is called Gauss's law for magnetism and is equivalent to the statement that there are no isolated magnetic poles or magnetic monopoles. The second mathematical property is called the curl, such that represents how curls or "circulates" around a given point. The result of the curl is called a "circulation source". The equations for the curl of and of are called the Ampère–Maxwell equation and Faraday's law respectively. Gauss' law for magnetism One important property of the -field produced this way is that magnetic -field lines neither start nor end (mathematically, is a solenoidal vector field); a field line may only extend to infinity, or wrap around to form a closed curve, or follow a never-ending (possibly chaotic) path. Magnetic field lines exit a magnet near its north pole and enter near its south pole, but inside the magnet -field lines continue through the magnet from the south pole back to the north. If a -field line enters a magnet somewhere it has to leave somewhere else; it is not allowed to have an end point. More formally, since all the magnetic field lines that enter any given region must also leave that region, subtracting the "number" of field lines that enter the region from the number that exit gives identically zero. Mathematically this is equivalent to Gauss's law for magnetism: where the integral is a surface integral over the closed surface (a closed surface is one that completely surrounds a region with no holes to let any field lines escape). Since points outward, the dot product in the integral is positive for -field pointing out and negative for -field pointing in. Faraday's Law A changing magnetic field, such as a magnet moving through a conducting coil, generates an electric field (and therefore tends to drive a current in such a coil). This is known as Faraday's law and forms the basis of many electrical generators and electric motors. Mathematically, Faraday's law is: where is the electromotive force (or EMF, the voltage generated around a closed loop) and is the magnetic flux—the product of the area times the magnetic field normal to that area. (This definition of magnetic flux is why is often referred to as magnetic flux density.) The negative sign represents the fact that any current generated by a changing magnetic field in a coil produces a magnetic field that opposes the change in the magnetic field that induced it. This phenomenon is known as Lenz's law. This integral formulation of Faraday's law can be converted into a differential form, which applies under slightly different conditions. Ampère's Law and Maxwell's correction Similar to the way that a changing magnetic field generates an electric field, a changing electric field generates a magnetic field. This fact is known as Maxwell's correction to Ampère's law and is applied as an additive term to Ampere's law as given above. This additional term is proportional to the time rate of change of the electric flux and is similar to Faraday's law above but with a different and positive constant out front. (The electric flux through an area is proportional to the area times the perpendicular part of the electric field.) The full law including the correction term is known as the Maxwell–Ampère equation. It is not commonly given in integral form because the effect is so small that it can typically be ignored in most cases where the integral form is used. The Maxwell term is critically important in the creation and propagation of electromagnetic waves. 
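In integral form, the two laws described in this subsection read (S a closed surface, 𝓔 the electromotive force around a loop, ΦB the magnetic flux through that loop):

\oint_S \mathbf{B}\cdot d\mathbf{A} = 0 \qquad \text{(Gauss's law for magnetism)}
\mathcal{E} = -\frac{d\Phi_B}{dt} \qquad \text{(Faraday's law of induction)}

The minus sign in Faraday's law is the mathematical statement of Lenz's law mentioned above.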
Maxwell's correction to Ampère's Law together with Faraday's law of induction describes how mutually changing electric and magnetic fields interact to sustain each other and thus to form electromagnetic waves, such as light: a changing electric field generates a changing magnetic field, which generates a changing electric field again. These, though, are usually described using the differential form of this equation given below. where is the complete microscopic current density, and is the vacuum permittivity. As discussed above, materials respond to an applied electric field and an applied magnetic field by producing their own internal "bound" charge and current distributions that contribute to and but are difficult to calculate. To circumvent this problem, and fields are used to re-factor Maxwell's equations in terms of the free current density : These equations are not any more general than the original equations (if the "bound" charges and currents in the material are known). They also must be supplemented by the relationship between and as well as that between and . On the other hand, for simple relationships between these quantities this form of Maxwell's equations can circumvent the need to calculate the bound charges and currents. Formulation in special relativity and quantum electrodynamics Relativistic electrodynamics As different aspects of the same phenomenon According to the special theory of relativity, the partition of the electromagnetic force into separate electric and magnetic components is not fundamental, but varies with the observational frame of reference: An electric force perceived by one observer may be perceived by another (in a different frame of reference) as a magnetic force, or a mixture of electric and magnetic forces. The magnetic field existing as electric field in other frames can be shown by consistency of equations obtained from Lorentz transformation of four force from Coulomb's Law in particle's rest frame with Maxwell's laws considering definition of fields from Lorentz force and for non accelerating condition. The form of magnetic field hence obtained by Lorentz transformation of four-force from the form of Coulomb's law in source's initial frame is given by:where is the charge of the point source, is the vacuum permittivity, is the position vector from the point source to the point in space, is the velocity vector of the charged particle, is the ratio of speed of the charged particle divided by the speed of light and is the angle between and . This form of magnetic field can be shown to satisfy maxwell's laws within the constraint of particle being non accelerating. The above reduces to Biot-Savart law for non relativistic stream of current. Formally, special relativity combines the electric and magnetic fields into a rank-2 tensor, called the electromagnetic tensor. Changing reference frames mixes these components. This is analogous to the way that special relativity mixes space and time into spacetime, and mass, momentum, and energy into four-momentum. Similarly, the energy stored in a magnetic field is mixed with the energy stored in an electric field in the electromagnetic stress–energy tensor. Magnetic vector potential In advanced topics such as quantum mechanics and relativity it is often easier to work with a potential formulation of electrodynamics rather than in terms of the electric and magnetic fields. 
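For reference, the differential ("microscopic") form of Maxwell's equations referred to above, and the macroscopic form re-factored in terms of the free charge density ρf and free current density Jf, are:

\nabla\cdot\mathbf{E} = \frac{\rho}{\varepsilon_0}, \qquad \nabla\cdot\mathbf{B} = 0, \qquad \nabla\times\mathbf{E} = -\frac{\partial\mathbf{B}}{\partial t}, \qquad \nabla\times\mathbf{B} = \mu_0\mathbf{J} + \mu_0\varepsilon_0\frac{\partial\mathbf{E}}{\partial t}
\nabla\cdot\mathbf{D} = \rho_f, \qquad \nabla\times\mathbf{H} = \mathbf{J}_f + \frac{\partial\mathbf{D}}{\partial t}, \qquad \mathbf{D} = \varepsilon_0\mathbf{E} + \mathbf{P}, \qquad \mathbf{B} = \mu_0\left(\mathbf{H} + \mathbf{M}\right)

The supplementary relations on the right of the second line are the ones the text notes must accompany the macroscopic form: D must be related to E (through the polarization P) and B to H (through M).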
In this representation, the magnetic vector potential , and the electric scalar potential , are defined using gauge fixing such that: The vector potential, given by this form may be interpreted as a generalized potential momentum per unit charge just as is interpreted as a generalized potential energy per unit charge. There are multiple choices one can make for the potential fields that satisfy the above condition. However, the choice of potentials is represented by its respective gauge condition. Maxwell's equations when expressed in terms of the potentials in Lorenz gauge can be cast into a form that agrees with special relativity. In relativity, together with forms a four-potential regardless of the gauge condition, analogous to the four-momentum that combines the momentum and energy of a particle. Using the four potential instead of the electromagnetic tensor has the advantage of being much simpler—and it can be easily modified to work with quantum mechanics. Propagation of Electric and Magnetic fields Special theory of relativity imposes the condition for events related by cause and effect to be time-like separated, that is that causal efficacy propagates no faster than light. Maxwell's equations for electromagnetism are found to be in favor of this as electric and magnetic disturbances are found to travel at the speed of light in space. Electric and magnetic fields from classical electrodynamics obey the principle of locality in physics and are expressed in terms of retarded time or the time at which the cause of a measured field originated given that the influence of field travelled at speed of light. The retarded time for a point particle is given as solution of: where is retarded time or the time at which the source's contribution of the field originated, is the position vector of the particle as function of time, is the point in space, is the time at which fields are measured and is the speed of light. The equation subtracts the time taken for light to travel from particle to the point in space from the time of measurement to find time of origin of the fields. The uniqueness of solution for for given , and is valid for charged particles moving slower than speed of light. Magnetic field of arbitrary moving point charge The solution of maxwell's equations for electric and magnetic field of a point charge is expressed in terms of retarded time or the time at which the particle in the past causes the field at the point, given that the influence travels across space at the speed of light. Any arbitrary motion of point charge causes electric and magnetic fields found by solving maxwell's equations using green's function for retarded potentials and hence finding the fields to be as follows: where and are electric scalar potential and magnetic vector potential in Lorentz gauge, is the charge of the point source, is a unit vector pointing from charged particle to the point in space, is the velocity of the particle divided by the speed of light and is the corresponding Lorentz factor. Hence by the principle of superposition, the fields of a system of charges also obey principle of locality. Quantum electrodynamics The classical electromagnetic field incorporated into quantum mechanics forms what is known as the semi-classical theory of radiation. However, it is not able to make experimentally observed predictions such as spontaneous emission process or Lamb shift implying the need for quantization of fields. 
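In the potential formulation described above, the fields and the retarded-time condition take the standard forms (A the magnetic vector potential, φ the electric scalar potential, r_s(t) the source trajectory, c the speed of light):

\mathbf{B} = \nabla\times\mathbf{A}, \qquad \mathbf{E} = -\nabla\varphi - \frac{\partial\mathbf{A}}{\partial t}
t_r = t - \frac{\left|\mathbf{r} - \mathbf{r}_s(t_r)\right|}{c}

For a point charge, the resulting Liénard–Wiechert fields satisfy B(r, t) = (1/c) n̂(t_r) × E(r, t), so the magnetic field is fixed by the electric field and the retarded direction n̂ to the source.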
In modern physics, the electromagnetic field is understood to be not a classical field, but rather a quantum field; it is represented not as a vector of three numbers at each point, but as a vector of three quantum operators at each point. The most accurate modern description of the electromagnetic interaction (and much else) is quantum electrodynamics (QED), which is incorporated into a more complete theory known as the Standard Model of particle physics. In QED, the magnitude of the electromagnetic interactions between charged particles (and their antiparticles) is computed using perturbation theory. These rather complex formulas produce a remarkable pictorial representation as Feynman diagrams in which virtual photons are exchanged. Predictions of QED agree with experiments to an extremely high degree of accuracy: currently about 10−12 (and limited by experimental errors); for details see precision tests of QED. This makes QED one of the most accurate physical theories constructed thus far. All equations in this article are in the classical approximation, which is less accurate than the quantum description mentioned here. However, under most everyday circumstances, the difference between the two theories is negligible. Uses and examples Earth's magnetic field The Earth's magnetic field is produced by convection of a liquid iron alloy in the outer core. In a dynamo process, the movements drive a feedback process in which electric currents create electric and magnetic fields that in turn act on the currents. The field at the surface of the Earth is approximately the same as if a giant bar magnet were positioned at the center of the Earth and tilted at an angle of about 11° off the rotational axis of the Earth (see the figure). The north pole of a magnetic compass needle points roughly north, toward the North Magnetic Pole. However, because a magnetic pole is attracted to its opposite, the North Magnetic Pole is actually the south pole of the geomagnetic field. This confusion in terminology arises because the pole of a magnet is defined by the geographical direction it points. Earth's magnetic field is not constant—the strength of the field and the location of its poles vary. Moreover, the poles periodically reverse their orientation in a process called geomagnetic reversal. The most recent reversal occurred 780,000 years ago. Rotating magnetic fields The rotating magnetic field is a key principle in the operation of alternating-current motors. A permanent magnet in such a field rotates so as to maintain its alignment with the external field. Magnetic torque is used to drive electric motors. In one simple motor design, a magnet is fixed to a freely rotating shaft and subjected to a magnetic field from an array of electromagnets. By continuously switching the electric current through each of the electromagnets, thereby flipping the polarity of their magnetic fields, like poles are kept next to the rotor; the resultant torque is transferred to the shaft. A rotating magnetic field can be constructed using two orthogonal coils with 90 degrees phase difference in their AC currents. However, in practice such a system would be supplied through a three-wire arrangement with unequal currents. This inequality would cause serious problems in standardization of the conductor size and so, to overcome it, three-phase systems are used where the three currents are equal in magnitude and have 120 degrees phase difference. 
Three similar coils having mutual geometrical angles of 120 degrees create the rotating magnetic field in this case. The ability of the three-phase system to create a rotating field, utilized in electric motors, is one of the main reasons why three-phase systems dominate the world's electrical power supply systems. Synchronous motors use DC-voltage-fed rotor windings, which lets the excitation of the machine be controlled—and induction motors use short-circuited rotors (instead of a magnet) following the rotating magnetic field of a multicoiled stator. The short-circuited turns of the rotor develop eddy currents in the rotating field of the stator, and these currents in turn move the rotor by the Lorentz force. The Italian physicist Galileo Ferraris and the Serbian-American electrical engineer Nikola Tesla independently researched the use of rotating magnetic fields in electric motors. In 1888, Ferraris published his research in a paper to the Royal Academy of Sciences in Turin and Tesla gained for his work. Hall effect The charge carriers of a current-carrying conductor placed in a transverse magnetic field experience a sideways Lorentz force; this results in a charge separation in a direction perpendicular to the current and to the magnetic field. The resultant voltage in that direction is proportional to the applied magnetic field. This is known as the Hall effect. The Hall effect is often used to measure the magnitude of a magnetic field. It is used as well to find the sign of the dominant charge carriers in materials such as semiconductors (negative electrons or positive holes). Magnetic circuits An important use of is in magnetic circuits where inside a linear material. Here, is the magnetic permeability of the material. This result is similar in form to Ohm's law , where is the current density, is the conductance and is the electric field. Extending this analogy, the counterpart to the macroscopic Ohm's law is: where is the magnetic flux in the circuit, is the magnetomotive force applied to the circuit, and is the reluctance of the circuit. Here the reluctance is a quantity similar in nature to resistance for the flux. Using this analogy it is straightforward to calculate the magnetic flux of complicated magnetic field geometries, by using all the available techniques of circuit theory. Largest magnetic fields , the largest magnetic field produced over a macroscopic volume outside a lab setting is 2.8 kT (VNIIEF in Sarov, Russia, 1998). As of October 2018, the largest magnetic field produced in a laboratory over a macroscopic volume was 1.2 kT by researchers at the University of Tokyo in 2018. The largest magnetic fields produced in a laboratory occur in particle accelerators, such as RHIC, inside the collisions of heavy ions, where microscopic fields reach 1014 T. Magnetars have the strongest known magnetic fields of any naturally occurring object, ranging from 0.1 to 100 GT (108 to 1011 T). Common formulæ Additional magnetic field values can be found through the magnetic field of a finite beam, for example, that the magnetic field of an arc of angle and radius at the center is , or that the magnetic field at the center of a N-sided regular polygon of side is , both outside of the plane with proper directions as inferred by right hand thumb rule. 
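The claim that three coils at mutual geometric angles of 120 degrees, driven by currents 120 degrees apart in phase, produce a field of constant magnitude rotating at the supply frequency can be checked numerically. The sketch below is illustrative only (unit coil strength, unit electrical period); it is not taken from the article's sources.

import numpy as np

# Three identical coils at geometric angles 0, 120 and 240 degrees.
# Each produces a field along its own axis proportional to cos(wt - phase),
# with the electrical phase equal to the coil's geometric angle.
angles = np.deg2rad([0.0, 120.0, 240.0])
axes = np.stack([np.cos(angles), np.sin(angles)], axis=1)   # coil axis unit vectors, shape (3, 2)

w = 2.0 * np.pi                                   # one electrical revolution per unit time
t = np.linspace(0.0, 1.0, 2001)                   # sample one period
coil_fields = np.cos(w * t[:, None] - angles[None, :])      # shape (time, coil)

B = coil_fields @ axes                            # net field vector at each instant, shape (time, 2)
magnitude = np.linalg.norm(B, axis=1)
angle = np.unwrap(np.arctan2(B[:, 1], B[:, 0]))   # direction of the net field over time

print("magnitude min/max:", magnitude.min(), magnitude.max())            # both ~1.5 (constant)
print("rotation rate:", np.polyfit(t, angle, 1)[0], "rad per unit time")  # ~2*pi, i.e. the supply frequency

The net field magnitude comes out constant at 3/2 of a single coil's peak field, and its direction advances uniformly at the electrical frequency, which is exactly the behavior a three-phase stator exploits.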
History Early developments While magnets and some properties of magnetism were known to ancient societies, the research of magnetic fields began in 1269 when French scholar Petrus Peregrinus de Maricourt mapped out the magnetic field on the surface of a spherical magnet using iron needles. Noting the resulting field lines crossed at two points he named those points "poles" in analogy to Earth's poles. He also articulated the principle that magnets always have both a north and south pole, no matter how finely one slices them. Almost three centuries later, William Gilbert of Colchester replicated Petrus Peregrinus' work and was the first to state explicitly that Earth is a magnet. Published in 1600, Gilbert's work, De Magnete, helped to establish magnetism as a science. Mathematical development In 1750, John Michell stated that magnetic poles attract and repel in accordance with an inverse square law Charles-Augustin de Coulomb experimentally verified this in 1785 and stated explicitly that north and south poles cannot be separated. Building on this force between poles, Siméon Denis Poisson (1781–1840) created the first successful model of the magnetic field, which he presented in 1824. In this model, a magnetic -field is produced by magnetic poles and magnetism is due to small pairs of north–south magnetic poles. Three discoveries in 1820 challenged this foundation of magnetism. Hans Christian Ørsted demonstrated that a current-carrying wire is surrounded by a circular magnetic field. Then André-Marie Ampère showed that parallel wires with currents attract one another if the currents are in the same direction and repel if they are in opposite directions. Finally, Jean-Baptiste Biot and Félix Savart announced empirical results about the forces that a current-carrying long, straight wire exerted on a small magnet, determining the forces were inversely proportional to the perpendicular distance from the wire to the magnet. Laplace later deduced a law of force based on the differential action of a differential section of the wire, which became known as the Biot–Savart law, as Laplace did not publish his findings. Extending these experiments, Ampère published his own successful model of magnetism in 1825. In it, he showed the equivalence of electrical currents to magnets and proposed that magnetism is due to perpetually flowing loops of current instead of the dipoles of magnetic charge in Poisson's model. Further, Ampère derived both Ampère's force law describing the force between two currents and Ampère's law, which, like the Biot–Savart law, correctly described the magnetic field generated by a steady current. Also in this work, Ampère introduced the term electrodynamics to describe the relationship between electricity and magnetism. In 1831, Michael Faraday discovered electromagnetic induction when he found that a changing magnetic field generates an encircling electric field, formulating what is now known as Faraday's law of induction. Later, Franz Ernst Neumann proved that, for a moving conductor in a magnetic field, induction is a consequence of Ampère's force law. In the process, he introduced the magnetic vector potential, which was later shown to be equivalent to the underlying mechanism proposed by Faraday. In 1850, Lord Kelvin, then known as William Thomson, distinguished between two magnetic fields now denoted and . The former applied to Poisson's model and the latter to Ampère's model and induction. Further, he derived how and relate to each other and coined the term permeability. 
Between 1861 and 1865, James Clerk Maxwell developed and published Maxwell's equations, which explained and united all of classical electricity and magnetism. The first set of these equations was published in a paper entitled On Physical Lines of Force in 1861. These equations were valid but incomplete. Maxwell completed his set of equations in his later 1865 paper A Dynamical Theory of the Electromagnetic Field and demonstrated the fact that light is an electromagnetic wave. Heinrich Hertz published papers in 1887 and 1888 experimentally confirming this fact. Modern developments In 1887, Tesla developed an induction motor that ran on alternating current. The motor used polyphase current, which generated a rotating magnetic field to turn the motor (a principle that Tesla claimed to have conceived in 1882). Tesla received a patent for his electric motor in May 1888. In 1885, Galileo Ferraris independently researched rotating magnetic fields and subsequently published his research in a paper to the Royal Academy of Sciences in Turin, just two months before Tesla was awarded his patent, in March 1888. The twentieth century showed that classical electrodynamics is already consistent with special relativity, and extended classical electrodynamics to work with quantum mechanics. Albert Einstein, in his paper of 1905 that established relativity, showed that both the electric and magnetic fields are part of the same phenomena viewed from different reference frames. Finally, the emergent field of quantum mechanics was merged with electrodynamics to form quantum electrodynamics, which first formalized the notion that electromagnetic field energy is quantized in the form of photons. See also General Magnetohydrodynamics – the study of the dynamics of electrically conducting fluids Magnetic hysteresis – application to ferromagnetism Magnetic nanoparticles – extremely small magnetic particles that are tens of atoms wide Magnetic reconnection – an effect that causes solar flares and auroras Magnetic scalar potential SI electromagnetism units – common units used in electromagnetism Orders of magnitude (magnetic field) – list of magnetic field sources and measurement devices from smallest magnetic fields to largest detected Upward continuation Moses Effect Mathematics Magnetic helicity – extent to which a magnetic field wraps around itself Applications Dynamo theory – a proposed mechanism for the creation of the Earth's magnetic field Helmholtz coil – a device for producing a region of nearly uniform magnetic field Magnetic field viewing film – Film used to view the magnetic field of an area Magnetic pistol – a device on torpedoes or naval mines that detect the magnetic field of their target Maxwell coil – a device for producing a large volume of an almost constant magnetic field Stellar magnetic field – a discussion of the magnetic field of stars Teltron tube – device used to display an electron beam and demonstrates effect of electric and magnetic fields on moving charges Notes References Further reading External links Crowell, B., "Electromagnetism ". Nave, R., "Magnetic Field". HyperPhysics. "Magnetism", The Magnetic Field (archived 9 July 2006). theory.uwinnipeg.ca. Hoadley, Rick, "What do magnetic fields look like?" 17 July 2005. Magnetism Electromagnetic quantities
0.770359
0.999324
0.769838
Power density
Power density, defined as the amount of power (the time rate of energy transfer) per unit volume, is a critical parameter used across a spectrum of scientific and engineering disciplines. This metric, typically expressed in watts per cubic meter (W/m³), serves as a fundamental measure for evaluating the efficacy and capability of devices, systems, and materials based on their spatial energy distribution. The concept of power density finds extensive application in physics, engineering, electronics, and energy technologies. It plays a pivotal role in assessing the efficiency and performance of components and systems, particularly in relation to the power they can handle or generate relative to their physical dimensions or volume. In the domain of energy storage and conversion technologies, such as batteries, fuel cells, motors, and power supply units, power density is a crucial consideration. Here, power density usually refers to volume power density, quantifying how much power can be accommodated or delivered within a specific volume (W/m³). For instance, when examining reciprocating internal combustion engines, power density takes on a distinct meaning: it is commonly defined as power per swept volume, or brake horsepower per cubic centimeter. This measure is derived from the internal capacity of the engine, providing insight into its power output relative to its internal volume rather than its external size. Advances in materials science also matter here: materials that can withstand higher power densities allow devices to be made smaller or lighter, or simply to perform better. The significance of power density extends beyond these examples, impacting the design and optimization of a wide range of systems and devices. Advancements in power density often drive innovation in areas ranging from renewable energy technologies to aerospace propulsion systems, and understanding and enhancing power density can lead to substantial improvements in performance and efficiency. Researchers and engineers continually explore ways to push the limits of power density, leveraging advances in materials science, manufacturing techniques, and computational modeling. See also Surface power density, power per unit area Energy density, energy per unit volume Specific energy, energy per unit mass Power-to-weight ratio/specific power, power per unit mass Specific absorption rate (SAR) References Power (physics)
0.780481
0.986236
0.769739
Exertion
Exertion is the physical or perceived use of energy. Exertion traditionally connotes a strenuous or costly effort, resulting in generation of force, initiation of motion, or in the performance of work. It often relates to muscular activity and can be quantified, empirically and by measurable metabolic response. Physical In physics, exertion is the expenditure of energy against, or inductive of, inertia as described by Isaac Newton's third law of motion. In physics, force exerted equivocates work done. The ability to do work can be either positive or negative depending on the direction of exertion relative to gravity. For example, a force exerted upwards, like lifting an object, creates positive work done on that object. Exertion often results in force generated, a contributing dynamic of general motion. In mechanics it describes the use of force against a body in the direction of its motion (see vector). Physiological Exertion, physiologically, can be described by the initiation of exercise, or, intensive and exhaustive physical activity that causes cardiovascular stress or a sympathetic nervous response. This can be continuous or intermittent exertion. Exertion requires, of the body, modified oxygen uptake, increased heart rate, and autonomic monitoring of blood lactate concentrations. Mediators of physical exertion include cardio-respiratory and musculoskeletal strength, as well as metabolic capability. This often correlates to an output of force followed by a refractory period of recovery. Exertion is limited by cumulative load and repetitive motions. Muscular energy reserves, or stores for biomechanical exertion, stem from metabolic, immediate production of ATP and increased oxygen consumption. Muscular exertion generated depends on the muscle length and the velocity at which it is able to shorten, or contract. Perceived exertion can be explained as subjective, perceived experience that mediates response to somatic sensations and mechanisms. A rating of perceived exertion, as measured by the RPE-scale, or Borg scale, is a quantitative measure of physical exertion. Often in health, exertion of oneself resulting in cardiovascular stress showed reduced physiological responses, like cortisol levels and mood, to stressors. Therefore, biological exertion is effective in mediating psychological exertion, responsive to environmental stress. Overexertion causes more than 3.5 million injuries a year. An overexertion injury can include sprains or strains, the stretching and tear of ligaments, tendons, or muscles caused by a load that exceeds the human ability to perform the work. Psychological In sport psychology, the perceived exertion of an exercise is how hard it seems to the person doing it. Perceived exertion is often rated on the Borg scale of 6 to 20, where 6 is complete rest and 20 is the maximum effort that an individual can sustain for any period of time. Although this is a psychological measure of effort, it tends to correspond fairly well to the actual physical exertion of an exercise as well. Additionally, because a high perceived exertion can limit an athlete's ability to perform, some people try to decrease this number through strategies like breathing exercises and listening to music. 
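A minimal numerical illustration of the sign convention mentioned above for work done while lifting; the mass and height are arbitrary round numbers, not values from the text.

```python
# Lifting a mass at constant speed requires an upward applied force of magnitude m*g acting
# through an upward displacement h, so the applied force does positive work on the object,
# while gravity does negative work. Values below are illustrative.
g = 9.81          # m/s^2

def work(force_n, displacement_m):
    """Work done by a constant force acting along a straight-line displacement (1-D)."""
    return force_n * displacement_m

m, h = 10.0, 2.0                                   # kg, m
print(f"applied force: {work(+m * g, +h):+.1f} J")  # +196.2 J
print(f"gravity:       {work(-m * g, +h):+.1f} J")  # -196.2 J
```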
See also Exercise Energy Cost Inertia Volition (psychology) Decision theory Ferdinand Tönnies [as in will (sociology)] Friedrich Nietzsche [as in strong-willed, drive and will (philosophy)] Isaac Newton Bionics Machine Muscular Energy Musculoskeletal Strength Physics Physiological stress Work References External links Measuring Physical Activity Intensity: Perceived Exertion (Borg Rating of Perceived Exertion Scale) RPE-scale (2nd) First principle of mechanics Exertion interfaces allowing exertion by proxy in sporting games Physical exercise Exercise physiology Effects of external causes
0.784699
0.980906
0.769716
Jerk (physics)
Jerk (also known as jolt) is the rate of change of an object's acceleration over time. It is a vector quantity (having both magnitude and direction). Jerk is most commonly denoted by the symbol and expressed in m/s3 (SI units) or standard gravities per second (g0/s). Expressions As a vector, jerk can be expressed as the first time derivative of acceleration, second time derivative of velocity, and third time derivative of position: Where: is acceleration is velocity is position is time Third-order differential equations of the form are sometimes called jerk equations. When converted to an equivalent system of three ordinary first-order non-linear differential equations, jerk equations are the minimal setting for solutions showing chaotic behaviour. This condition generates mathematical interest in jerk systems. Systems involving fourth-order derivatives or higher are accordingly called hyperjerk systems. Physiological effects and human perception Human body position is controlled by balancing the forces of antagonistic muscles. In balancing a given force, such as holding up a weight, the postcentral gyrus establishes a control loop to achieve the desired equilibrium. If the force changes too quickly, the muscles cannot relax or tense fast enough and overshoot in either direction, causing a temporary loss of control. The reaction time for responding to changes in force depends on physiological limitations and the attention level of the brain: an expected change will be stabilized faster than a sudden decrease or increase of load. To avoid vehicle passengers losing control over body motion and getting injured, it is necessary to limit the exposure to both the maximum force (acceleration) and maximum jerk, since time is needed to adjust muscle tension and adapt to even limited stress changes. Sudden changes in acceleration can cause injuries such as whiplash. Excessive jerk may also result in an uncomfortable ride, even at levels that do not cause injury. Engineers expend considerable design effort minimizing "jerky motion" on elevators, trams, and other conveyances. For example, consider the effects of acceleration and jerk when riding in a car: Skilled and experienced drivers can accelerate smoothly, but beginners often provide a jerky ride. When changing gears in a car with a foot-operated clutch, the accelerating force is limited by engine power, but an inexperienced driver can cause severe jerk because of intermittent force closure over the clutch. The feeling of being pressed into the seats in a high-powered sports car is due to the acceleration. As the car launches from rest, there is a large positive jerk as its acceleration rapidly increases. After the launch, there is a small, sustained negative jerk as the force of air resistance increases with the car's velocity, gradually decreasing acceleration and reducing the force pressing the passenger into the seat. When the car reaches its top speed, the acceleration has reached 0 and remains constant, after which there is no jerk until the driver decelerates or changes direction. When braking suddenly or during collisions, passengers whip forward with an initial acceleration that is larger than during the rest of the braking process because muscle tension regains control of the body quickly after the onset of braking or impact. These effects are not modeled in vehicle testing because cadavers and crash test dummies do not have active muscle control. 
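Since jerk is defined above as the third time derivative of position, it can be estimated from sampled positions by repeated numerical differentiation. The sketch below uses a cubic test trajectory, chosen only so that the exact answers are known; it is not data from the text.

```python
# Estimating velocity, acceleration, and jerk from sampled positions by finite differencing.
# For x(t) = t**3 the exact results are v = 3t^2, a = 6t, j = 6 (constant).
import numpy as np

dt = 1e-3
t = np.arange(0.0, 2.0, dt)
x = t**3                               # position samples

v = np.gradient(x, dt)                 # first derivative  -> velocity
a = np.gradient(v, dt)                 # second derivative -> acceleration
j = np.gradient(a, dt)                 # third derivative  -> jerk

print(v[1000], a[1000], j[1000])       # ~3.0, ~6.0, ~6.0 at t = 1 s
```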
To minimize the jerk, curves along roads are designed to be clothoids as are railroad curves and roller coaster loops. Force, acceleration, and jerk For a constant mass , acceleration is directly proportional to force according to Newton's second law of motion: In classical mechanics of rigid bodies, there are no forces associated with the derivatives of acceleration; however, physical systems experience oscillations and deformations as a result of jerk. In designing the Hubble Space Telescope, NASA set limits on both jerk and jounce. The Abraham–Lorentz force is the recoil force on an accelerating charged particle emitting radiation. This force is proportional to the particle's jerk and to the square of its charge. The Wheeler–Feynman absorber theory is a more advanced theory, applicable in a relativistic and quantum environment, and accounting for self-energy. In an idealized setting Discontinuities in acceleration do not occur in real-world environments because of deformation, quantum mechanics effects, and other causes. However, a jump-discontinuity in acceleration and, accordingly, unbounded jerk are feasible in an idealized setting, such as an idealized point mass moving along a piecewise smooth, whole continuous path. The jump-discontinuity occurs at points where the path is not smooth. Extrapolating from these idealized settings, one can qualitatively describe, explain and predict the effects of jerk in real situations. Jump-discontinuity in acceleration can be modeled using a Dirac delta function in jerk, scaled to the height of the jump. Integrating jerk over time across the Dirac delta yields the jump-discontinuity. For example, consider a path along an arc of radius , which tangentially connects to a straight line. The whole path is continuous, and its pieces are smooth. Now assume a point particle moves with constant speed along this path, so its tangential acceleration is zero. The centripetal acceleration given by is normal to the arc and inward. When the particle passes the connection of pieces, it experiences a jump-discontinuity in acceleration given by , and it undergoes a jerk that can be modeled by a Dirac delta, scaled to the jump-discontinuity. For a more tangible example of discontinuous acceleration, consider an ideal spring–mass system with the mass oscillating on an idealized surface with friction. The force on the mass is equal to the vector sum of the spring force and the kinetic frictional force. When the velocity changes sign (at the maximum and minimum displacements), the magnitude of the force on the mass changes by twice the magnitude of the frictional force, because the spring force is continuous and the frictional force reverses direction with velocity. The jump in acceleration equals the force on the mass divided by the mass. That is, each time the mass passes through a minimum or maximum displacement, the mass experiences a discontinuous acceleration, and the jerk contains a Dirac delta until the mass stops. The static friction force adapts to the residual spring force, establishing equilibrium with zero net force and zero velocity. Consider the example of a braking and decelerating car. The brake pads generate kinetic frictional forces and constant braking torques on the disks (or drums) of the wheels. Rotational velocity decreases linearly to zero with constant angular deceleration. The frictional force, torque, and car deceleration suddenly reach zero, which indicates a Dirac delta in physical jerk. 
The Dirac delta is smoothed down by the real environment, the cumulative effects of which are analogous to damping of the physiologically perceived jerk. This example neglects the effects of tire sliding, suspension dipping, real deflection of all ideally rigid mechanisms, etc. Another example of significant jerk, analogous to the first example, is the cutting of a rope with a particle on its end. Assume the particle is oscillating in a circular path with non-zero centripetal acceleration. When the rope is cut, the particle's path changes abruptly to a straight path, and the force in the inward direction changes suddenly to zero. Imagine a monomolecular fiber cut by a laser; the particle would experience very high rates of jerk because of the extremely short cutting time. In rotation Consider a rigid body rotating about a fixed axis in an inertial reference frame. If its angular position as a function of time is , the angular velocity, acceleration, and jerk can be expressed as follows: Angular velocity, , is the time derivative of . Angular acceleration, , is the time derivative of . Angular jerk, , is the time derivative of . Angular acceleration equals the torque acting on the body, divided by the body's moment of inertia with respect to the momentary axis of rotation. A change in torque results in angular jerk. The general case of a rotating rigid body can be modeled using kinematic screw theory, which includes one axial vector, angular velocity , and one polar vector, linear velocity . From this, the angular acceleration is defined as and the angular jerk is given by taking the angular acceleration from Angular acceleration#Particle in three dimensions as , we obtain replacing we can have the last item as , and we finally get or vice versa, replacing with : For example, consider a Geneva drive, a device used for creating intermittent rotation of a driven wheel (the blue wheel in the animation) by continuous rotation of a driving wheel (the red wheel in the animation). During one cycle of the driving wheel, the driven wheel's angular position changes by 90 degrees and then remains constant. Because of the finite thickness of the driving wheel's fork (the slot for the driving pin), this device generates a discontinuity in the angular acceleration , and an unbounded angular jerk in the driven wheel. Jerk does not preclude the Geneva drive from being used in applications such as movie projectors and cams. In movie projectors, the film advances frame-by-frame, but the projector operation has low noise and is highly reliable because of the low film load (only a small section of film weighing a few grams is driven), the moderate speed (2.4 m/s), and the low friction. With cam drive systems, use of a dual cam can avoid the jerk of a single cam; however, the dual cam is bulkier and more expensive. The dual-cam system has two cams on one axle that shifts a second axle by a fraction of a revolution. The graphic shows step drives of one-sixth and one-third rotation per one revolution of the driving axle. There is no radial clearance because two arms of the stepped wheel are always in contact with the double cam. Generally, combined contacts may be used to avoid the jerk (and wear and noise) associated with a single follower (such as a single follower gliding along a slot and changing its contact point from one side of the slot to the other can be avoided by using two followers sliding along the same slot, one side each). 
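The fixed-axis relation stated above, that angular jerk is the time derivative of angular acceleration and hence, for a constant moment of inertia, proportional to the rate of change of torque, can be checked numerically. The torque ramp and the inertia below are arbitrary illustrative values.

```python
# Angular jerk for rotation about a fixed axis: zeta = d(alpha)/dt = (1/I) * d(torque)/dt
# when the moment of inertia I is constant. The torque profile is an assumed ramp-and-hold.
import numpy as np

I = 0.5                                    # kg*m^2, moment of inertia (assumed)
dt = 1e-3
t = np.arange(0.0, 1.0, dt)
torque = np.minimum(t, 0.2) * 10.0         # torque ramps up for 0.2 s, then holds at 2 N*m

alpha = torque / I                         # angular acceleration
zeta = np.gradient(alpha, dt)              # angular jerk
print(zeta[:3], zeta[500])                 # ~20 rad/s^3 during the ramp, ~0 afterwards
```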
In elastically deformable matter An elastically deformable mass deforms under an applied force (or acceleration); the deformation is a function of its stiffness and the magnitude of the force. If the change in force is slow, the jerk is small, and the propagation of deformation is considered instantaneous as compared to the change in acceleration. The distorted body acts as if it were in a quasistatic regime, and only a changing force (nonzero jerk) can cause propagation of mechanical waves (or electromagnetic waves for a charged particle); therefore, for nonzero to high jerk, a shock wave and its propagation through the body should be considered. The propagation of deformation is shown in the graphic "Compression wave patterns" as a compressional plane wave through an elastically deformable material. Also shown, for angular jerk, are the deformation waves propagating in a circular pattern, which causes shear stress and possibly other modes of vibration. The reflection of waves along the boundaries cause constructive interference patterns (not pictured), producing stresses that may exceed the material's limits. The deformation waves may cause vibrations, which can lead to noise, wear, and failure, especially in cases of resonance. The graphic captioned "Pole with massive top" shows a block connected to an elastic pole and a massive top. The pole bends when the block accelerates, and when the acceleration stops, the top will oscillate (damped) under the regime of pole stiffness. One could argue that a greater (periodic) jerk might excite a larger amplitude of oscillation because small oscillations are damped before reinforcement by a shock wave. One can also argue that a larger jerk might increase the probability of exciting a resonant mode because the larger wave components of the shock wave have higher frequencies and Fourier coefficients. To reduce the amplitude of excited stress waves and vibrations, one can limit jerk by shaping motion and making the acceleration continuous with slopes as flat as possible. Due to limitations of abstract models, algorithms for reducing vibrations include higher derivatives, such as jounce, or suggest continuous regimes for both acceleration and jerk. One concept for limiting jerk is to shape acceleration and deceleration sinusoidally with zero acceleration in between (see graphic captioned "Sinusoidal acceleration profile"), making the speed appear sinusoidal with constant maximum speed. The jerk, however, will remain discontinuous at the points where acceleration enters and leaves the zero phases. In the geometric design of roads and tracks Roads and tracks are designed to limit the jerk caused by changes in their curvature. Design standards for high-speed rail vary from 0.2 m/s3 to 0.6 m/s3. Track transition curves limit the jerk when transitioning from a straight line to a curve, or vice versa. Recall that in constant-speed motion along an arc, acceleration is zero in the tangential direction and nonzero in the inward normal direction. Transition curves gradually increase the curvature and, consequently, the centripetal acceleration. An Euler spiral, the theoretically optimum transition curve, linearly increases centripetal acceleration and results in constant jerk (see graphic). In real-world applications, the plane of the track is inclined (cant) along the curved sections. The incline causes vertical acceleration, which is a design consideration for wear on the track and embankment. 
The Wiener Kurve (Viennese Curve) is a patented curve designed to minimize this wear. Rollercoasters are also designed with track transitions to limit jerk. When entering a loop, acceleration values can reach around 4g (40 m/s2), and riding in this high acceleration environment is only possible with track transitions. S-shaped curves, such as figure eights, also use track transitions for smooth rides. In motion control In motion control, the design focus is on straight, linear motion, with the need to move a system from one steady position to another (point-to-point motion). The design concern from a jerk perspective is vertical jerk; the jerk from tangential acceleration is effectively zero since linear motion is non-rotational. Motion control applications include passenger elevators and machining tools. Limiting vertical jerk is considered essential for elevator riding convenience. ISO 8100-34 specifies measurement methods for elevator ride quality with respect to jerk, acceleration, vibration, and noise; however, the standard does not specify levels for acceptable or unacceptable ride quality. It is reported that most passengers rate a vertical jerk of 2 m/s3 as acceptable and 6 m/s3 as intolerable. For hospitals, 0.7 m/s3 is the recommended limit. A primary design goal for motion control is to minimize the transition time without exceeding speed, acceleration, or jerk limits. Consider a third-order motion-control profile with quadratic ramping and deramping phases in velocity (see figure). This motion profile consists of the following seven segments: Acceleration build up — positive jerk limit; linear increase in acceleration to the positive acceleration limit; quadratic increase in velocity Upper acceleration limit — zero jerk; linear increase in velocity Acceleration ramp down — negative jerk limit; linear decrease in acceleration; (negative) quadratic increase in velocity, approaching the desired velocity limit Velocity limit — zero jerk; zero acceleration Deceleration build up — negative jerk limit; linear decrease in acceleration to the negative acceleration limit; (negative) quadratic decrease in velocity Lower deceleration limit — zero jerk; linear decrease in velocity Deceleration ramp down — positive jerk limit; linear increase in acceleration to zero; quadratic decrease in velocity; approaching the desired position at zero speed and zero acceleration Segment four's time period (constant velocity) varies with distance between the two positions. If this distance is so small that omitting segment four would not suffice, then segments two and six (constant acceleration) could be equally reduced, and the constant velocity limit would not be reached. If this modification does not sufficiently reduce the crossed distance, then segments one, three, five, and seven could be shortened by an equal amount, and the constant acceleration limits would not be reached. Other motion profile strategies are used, such as minimizing the square of jerk for a given transition time and, as discussed above, sinusoidal-shaped acceleration profiles. Motion profiles are tailored for specific applications including machines, people movers, chain hoists, automobiles, and robotics. In manufacturing Jerk is an important consideration in manufacturing processes. Rapid changes in acceleration of a cutting tool can lead to premature tool wear and result in uneven cuts; consequently, modern motion controllers include jerk limitation features. 
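The seven-segment, jerk-limited profile described in the motion-control passage above can be sketched by integrating piecewise-constant jerk. The limits and the cruise duration below are hand-picked illustrative values rather than a solution for a prescribed travel distance.

```python
# Seven-segment "S-curve" point-to-point profile: piecewise-constant jerk is integrated up
# to acceleration, velocity, and position. Limits are assumed, not taken from any standard.
j_max, a_max, v_max = 10.0, 5.0, 10.0          # jerk / acceleration / velocity limits (assumed)
t1 = a_max / j_max                             # duration of each jerk-limited ramp
t2 = v_max / a_max - t1                        # constant-acceleration duration
t4 = 1.0                                       # constant-velocity (cruise) duration (assumed)

# The seven segments: +j ramp, a = const, -j ramp, cruise, -j ramp, a = -const, +j ramp.
segments = [(+j_max, t1), (0.0, t2), (-j_max, t1), (0.0, t4),
            (-j_max, t1), (0.0, t2), (+j_max, t1)]

dt, a, v, x = 1e-4, 0.0, 0.0, 0.0
for jerk, duration in segments:
    for _ in range(int(round(duration / dt))):
        a += jerk * dt                         # integrate jerk -> acceleration
        v += a * dt                            # integrate acceleration -> velocity
        x += v * dt                            # integrate velocity -> position
print(f"final: a={a:.3f} m/s^2, v={v:.3f} m/s, x={x:.2f} m")   # a and v return to ~0
```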
In mechanical engineering, jerk, in addition to velocity and acceleration, is considered in the development of cam profiles because of tribological implications and the ability of the actuated body to follow the cam profile without chatter. Jerk is often considered when vibration is a concern. A device that measures jerk is called a "jerkmeter". Further derivatives Further time derivatives have also been named: snap or jounce (fourth derivative), crackle (fifth derivative), and pop (sixth derivative). The seventh, eighth, and ninth derivatives have informally been called "bang", "boom", and "crash", continuing the pattern. However, time derivatives of position of order higher than four rarely appear. The terms snap, crackle, and pop, for the fourth, fifth, and sixth derivatives of position, were inspired by the advertising mascots Snap, Crackle, and Pop. See also Geomagnetic jerk Shock (mechanics) Yank References External links What is the term used for the third derivative of position?, a description of jerk in the Usenet Physics FAQ Mathematics of Motion Control Profiles Elevator-Ride-Quality Elevator manufacturer brochure Patent of Wiener Kurve Description of Wiener Kurve Acceleration Classical mechanics Kinematic properties Temporal rates Time in physics Vector physical quantities
0.771555
0.997617
0.769716
Onsager reciprocal relations
In thermodynamics, the Onsager reciprocal relations express the equality of certain ratios between flows and forces in thermodynamic systems out of equilibrium, but where a notion of local equilibrium exists. "Reciprocal relations" occur between different pairs of forces and flows in a variety of physical systems. For example, consider fluid systems described in terms of temperature, matter density, and pressure. In this class of systems, it is known that temperature differences lead to heat flows from the warmer to the colder parts of the system; similarly, pressure differences will lead to matter flow from high-pressure to low-pressure regions. What is remarkable is the observation that, when both pressure and temperature vary, temperature differences at constant pressure can cause matter flow (as in convection) and pressure differences at constant temperature can cause heat flow. Perhaps surprisingly, the heat flow per unit of pressure difference and the density (matter) flow per unit of temperature difference are equal. This equality was shown to be necessary by Lars Onsager using statistical mechanics as a consequence of the time reversibility of microscopic dynamics (microscopic reversibility). The theory developed by Onsager is much more general than this example and capable of treating more than two thermodynamic forces at once, with the limitation that "the principle of dynamical reversibility does not apply when (external) magnetic fields or Coriolis forces are present", in which case "the reciprocal relations break down". Though the fluid system is perhaps described most intuitively, the high precision of electrical measurements makes experimental realisations of Onsager's reciprocity easier in systems involving electrical phenomena. In fact, Onsager's 1931 paper refers to thermoelectricity and transport phenomena in electrolytes as well known from the 19th century, including "quasi-thermodynamic" theories by Thomson and Helmholtz respectively. Onsager's reciprocity in the thermoelectric effect manifests itself in the equality of the Peltier (heat flow caused by a voltage difference) and Seebeck (electric current caused by a temperature difference) coefficients of a thermoelectric material. Similarly, the so-called "direct piezoelectric" (electric current produced by mechanical stress) and "reverse piezoelectric" (deformation produced by a voltage difference) coefficients are equal. For many kinetic systems, like the Boltzmann equation or chemical kinetics, the Onsager relations are closely connected to the principle of detailed balance and follow from them in the linear approximation near equilibrium. Experimental verifications of the Onsager reciprocal relations were collected and analyzed by D. G. Miller for many classes of irreversible processes, namely for thermoelectricity, electrokinetics, transference in electrolytic solutions, diffusion, conduction of heat and electricity in anisotropic solids, thermomagnetism and galvanomagnetism. In this classical review, chemical reactions are considered as "cases with meager" and inconclusive evidence. Further theoretical analysis and experiments support the reciprocal relations for chemical kinetics with transport. Kirchhoff's law of thermal radiation is another special case of the Onsager reciprocal relations applied to the wavelength-specific radiative emission and absorption by a material body in thermodynamic equilibrium. 
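A concrete instance of the thermoelectric reciprocity mentioned above is the second Thomson (Kelvin) relation, Π = S·T, which ties the Peltier coefficient to the Seebeck coefficient and follows from the Onsager relations. The sketch below uses a rough, illustrative Seebeck coefficient of the order found in good thermoelectric materials; it is not a value from the text.

```python
# Kelvin relation implied by Onsager reciprocity for the thermoelectric effect: Pi = S * T.
# The Seebeck coefficient below is only an assumed order-of-magnitude value.
def peltier_coefficient(seebeck_v_per_k, temperature_k):
    """Second Thomson (Kelvin) relation: Pi = S * T."""
    return seebeck_v_per_k * temperature_k

S = 200e-6    # V/K, assumed Seebeck coefficient
T = 300.0     # K
print(f"Peltier coefficient ~ {peltier_coefficient(S, T):.3f} V")   # ~0.060 V (i.e. J/C)
```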
For his discovery of these reciprocal relations, Lars Onsager was awarded the 1968 Nobel Prize in Chemistry. The presentation speech referred to the three laws of thermodynamics and then added "It can be said that Onsager's reciprocal relations represent a further law making a thermodynamic study of irreversible processes possible." Some authors have even described Onsager's relations as the "Fourth law of thermodynamics". Example: Fluid system The fundamental equation The basic thermodynamic potential is internal energy. In a simple fluid system, neglecting the effects of viscosity the fundamental thermodynamic equation is written: where U is the internal energy, T is temperature, S is entropy, P is the hydrostatic pressure, V is the volume, is the chemical potential, and M mass. In terms of the internal energy density, u, entropy density s, and mass density , the fundamental equation at fixed volume is written: For non-fluid or more complex systems there will be a different collection of variables describing the work term, but the principle is the same. The above equation may be solved for the entropy density: The above expression of the first law in terms of entropy change defines the entropic conjugate variables of and , which are and and are intensive quantities analogous to potential energies; their gradients are called thermodynamic forces as they cause flows of the corresponding extensive variables as expressed in the following equations. The continuity equations The conservation of mass is expressed locally by the fact that the flow of mass density satisfies the continuity equation: where is the mass flux vector. The formulation of energy conservation is generally not in the form of a continuity equation because it includes contributions both from the macroscopic mechanical energy of the fluid flow and of the microscopic internal energy. However, if we assume that the macroscopic velocity of the fluid is negligible, we obtain energy conservation in the following form: where is the internal energy density and is the internal energy flux. Since we are interested in a general imperfect fluid, entropy is locally not conserved and its local evolution can be given in the form of entropy density as where is the rate of increase in entropy density due to the irreversible processes of equilibration occurring in the fluid and is the entropy flux. The phenomenological equations In the absence of matter flows, Fourier's law is usually written: where is the thermal conductivity. However, this law is just a linear approximation, and holds only for the case where , with the thermal conductivity possibly being a function of the thermodynamic state variables, but not their gradients or time rate of change. Assuming that this is the case, Fourier's law may just as well be written: In the absence of heat flows, Fick's law of diffusion is usually written: where D is the coefficient of diffusion. Since this is also a linear approximation and since the chemical potential is monotonically increasing with density at a fixed temperature, Fick's law may just as well be written: where, again, is a function of thermodynamic state parameters, but not their gradients or time rate of change. For the general case in which there are both mass and energy fluxes, the phenomenological equations may be written as: or, more concisely, where the entropic "thermodynamic forces" conjugate to the "displacements" and are and and is the Onsager matrix of transport coefficients. 
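The phenomenological flux-force relations above lost their symbols in extraction; in the usual notation they read J_i = Σ_j L_ij X_j, and the entropy-production rate derived in the next passage is σ = Σ_ij X_i L_ij X_j. A small numerical sketch with an illustrative 2×2 matrix shows the reciprocity (L_12 = L_21) and the non-negativity of σ; the numbers are assumptions, not measured coefficients.

```python
# Linear flux-force relations J = L X and entropy production sigma = X.L.X for an
# illustrative, symmetric, positive-definite Onsager matrix L.
import numpy as np

L = np.array([[2.0, 0.3],       # L_uu, L_ur  (energy-energy, energy-mass couplings)
              [0.3, 1.0]])      # L_ru, L_rr  (values assumed for illustration)

X = np.array([0.05, -0.02])     # thermodynamic forces, e.g. grad(1/T) and grad(-mu/T)

J = L @ X                       # conjugate fluxes (energy flux, mass flux)
sigma = X @ L @ X               # rate of entropy production

print("fluxes:", J)
print("entropy production:", sigma, ">= 0:", sigma >= 0)
print("reciprocity L12 == L21:", np.isclose(L[0, 1], L[1, 0]))
```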
The rate of entropy production From the fundamental equation, it follows that: and Using the continuity equations, the rate of entropy production may now be written: and, incorporating the phenomenological equations: It can be seen that, since the entropy production must be non-negative, the Onsager matrix of phenomenological coefficients is a positive semi-definite matrix. The Onsager reciprocal relations Onsager's contribution was to demonstrate that not only is positive semi-definite, it is also symmetric, except in cases where time-reversal symmetry is broken. In other words, the cross-coefficients and are equal. The fact that they are at least proportional is suggested by simple dimensional analysis (i.e., both coefficients are measured in the same units of temperature times mass density). The rate of entropy production for the above simple example uses only two entropic forces, and a 2×2 Onsager phenomenological matrix. The expression for the linear approximation to the fluxes and the rate of entropy production can very often be expressed in an analogous way for many more general and complicated systems. Abstract formulation Let denote fluctuations from equilibrium values in several thermodynamic quantities, and let be the entropy. Then, Boltzmann's entropy formula gives for the probability distribution function , where A is a constant, since the probability of a given set of fluctuations is proportional to the number of microstates with that fluctuation. Assuming the fluctuations are small, the probability distribution function can be expressed through the second differential of the entropy where we are using Einstein summation convention and is a positive definite symmetric matrix. Using the quasi-stationary equilibrium approximation, that is, assuming that the system is only slightly non-equilibrium, we have Suppose we define thermodynamic conjugate quantities as , which can also be expressed as linear functions (for small fluctuations): Thus, we can write where are called kinetic coefficients The principle of symmetry of kinetic coefficients or the Onsager's principle states that is a symmetric matrix, that is Proof Define mean values and of fluctuating quantities and respectively such that they take given values at . Note that Symmetry of fluctuations under time reversal implies that or, with , we have Differentiating with respect to and substituting, we get Putting in the above equation, It can be easily shown from the definition that , and hence, we have the required result. See also Lars Onsager Langevin equation References Eponymous equations of physics Laws of thermodynamics Non-equilibrium thermodynamics Thermodynamic equations
0.778314
0.988907
0.76968
The Seven Pillars of Life
The Seven Pillars of Life are the essential principles of life described by Daniel E. Koshland in 2002 in order to create a universal definition of life. One stated goal of this universal definition is to aid in understanding and identifying artificial and extraterrestrial life. The seven pillars are Program, Improvisation, Compartmentalization, Energy, Regeneration, Adaptability, and Seclusion. These can be abbreviated as PICERAS. The Seven Pillars Program Koshland defines "Program" as an "organized plan that describes both the ingredients themselves and the kinetics of the interactions among ingredients as the living system persists through time." In natural life as it is known on Earth, the program operates through the mechanisms of nucleic acids and amino acids, but the concept of program can apply to other imagined or undiscovered mechanisms. Improvisation "Improvisation" refers to the living system's ability to change its program in response to the larger environment in which it exists. An example of improvisation on earth is natural selection. Compartmentalization "Compartmentalization" refers to the separation of spaces in the living system that allow for separate environments for necessary chemical processes. Compartmentalization is necessary to protect the concentration of the ingredients for a reaction from outside environments. Energy Because living systems involve net movement in terms of chemical movement or body movement, and lose energy in those movements through entropy, energy is required for a living system to exist. The main source of energy on Earth is the sun, but other sources of energy exist for life on Earth, such as hydrogen gas or methane, used in chemosynthesis. Regeneration "Regeneration" in a living system refers to the general compensation for losses and degradation in the various components and processes in the system. This covers the thermodynamic loss in chemical reactions, the wear and tear of larger parts, and the larger decline of components of the system in ageing. Living systems replace these losses by importing molecules from the outside environment, synthesizing new molecules and components, or creating new generations to start the system over again. Adaptability "Adaptability" is the ability of a living system to respond to needs, dangers, or changes. It is distinguished from improvisation because the response is timely and does not involve a change of the program. Adaptability occurs from a molecular level to a behavioral level through feedback and feedforward systems. For example, an animal seeing a predator will respond to the danger with hormonal changes and escape behavior. Seclusion "Seclusion" is the separation of chemical pathways and the specificity of the effect of molecules, so that processes can function separately within the living system. In organisms on Earth, proteins aid in seclusion because of their individualized structure that are specific for their function, so that they can efficiently act without affecting separate functions. Criticism Y. N. Zhuravlev and V. A. Avetisov have analyzed Koshland's seven pillars from the context of primordial life and, though calling the concept "elegant," point out that the pillars of compartmentalization, program, and seclusion don't apply well to the non-differentiated earliest life. 
See also Artificial life Extraterrestrial life Non-cellular life Organism References External links "The Seven Pillars of Life" in Science Magazine "Biochemist suggests '7 pillars' to define life" in USA Today Life Biological concepts Philosophy of biology
0.797049
0.965638
0.769661
Isolated system
In physical science, an isolated system is either of the following: a physical system so far removed from other systems that it does not interact with them. a thermodynamic system enclosed by rigid immovable walls through which neither mass nor energy can pass. Though subject internally to its own gravity, an isolated system is usually taken to be outside the reach of external gravitational and other long-range forces. This can be contrasted with what (in the more common terminology used in thermodynamics) is called a closed system, being enclosed by selective walls through which energy can pass as heat or work, but not matter; and with an open system, which both matter and energy can enter or exit, though it may have variously impermeable walls in parts of its boundaries. An isolated system obeys the conservation law that its total energy–mass stays constant. Most often, in thermodynamics, mass and energy are treated as separately conserved. Because of the requirement of enclosure, and the near ubiquity of gravity, strictly and ideally isolated systems do not actually occur in experiments or in nature. Though very useful, they are strictly hypothetical. Classical thermodynamics is usually presented as postulating the existence of isolated systems. It is also usually presented as the fruit of experience. Obviously, no experience has been reported of an ideally isolated system. It is, however, the fruit of experience that some physical systems, including isolated ones, do seem to reach their own states of internal thermodynamic equilibrium. Classical thermodynamics postulates the existence of systems in their own states of internal thermodynamic equilibrium. This postulate is a very useful idealization. In the attempt to explain the idea of a gradual approach to thermodynamic equilibrium after a thermodynamic operation, with entropy increasing according to the second law of thermodynamics, Boltzmann’s H-theorem used equations, which assumed a system (for example, a gas) was isolated. That is, all the mechanical degrees of freedom could be specified, treating the enclosing walls simply as mirror boundary conditions. This led to Loschmidt's paradox. If, however, the stochastic behavior of the molecules and thermal radiation in real enclosing walls is considered, then the system is in effect in a heat bath. Then Boltzmann’s assumption of molecular chaos can be justified. The concept of an isolated system can serve as a useful model approximating many real-world situations. It is an acceptable idealization used in constructing mathematical models of certain natural phenomena; e.g., the planets in the Solar System, and the proton and electron in a hydrogen atom are often treated as isolated systems. But, from time to time, a hydrogen atom will interact with electromagnetic radiation and go to an excited state. Radiative isolation For radiative isolation, the walls should be perfectly conductive, so as to perfectly reflect the radiation within the cavity, as for example imagined by Planck. He was considering the internal thermal radiative equilibrium of a thermodynamic system in a cavity initially devoid of substance. He did not mention what he imagined to surround his perfectly reflective and thus perfectly conductive walls. Presumably, since they are perfectly reflective, they isolate the cavity from any external electromagnetic effect. Planck held that for radiative equilibrium within the isolated cavity, it needed to have added to its interior a speck of carbon. 
If the cavity with perfectly reflective walls contains enough radiative energy to sustain a temperature of cosmological magnitude, then the speck of carbon is not needed because the radiation generates particles of substance, such as for example electron-positron pairs, and thereby reaches thermodynamic equilibrium. A different approach is taken by Roger Balian. For quantizing the radiation in the cavity, he imagines his radiatively isolating walls to be perfectly conductive. Though he does not mention mass outside, and it seems from his context that he intends the reader to suppose the interior of the cavity to be devoid of mass, he does imagine that some factor causes currents in the walls. If that factor is internal to the cavity, it can be only the radiation, which would thereby be perfectly reflected. For the thermal equilibrium problem, however, he considers walls that contain charged particles that interact with the radiation inside the cavity; such cavities are of course not isolated, but may be regarded as in a heat bath. See also Closed system Dynamical system Open system Thermodynamic system Open system (thermodynamics) References Thermodynamic systems
0.778538
0.988587
0.769653
Spin (physics)
Spin is an intrinsic form of angular momentum carried by elementary particles, and thus by composite particles such as hadrons, atomic nuclei, and atoms. Spin is quantized, and accurate models for the interaction with spin require relativistic quantum mechanics or quantum field theory. The existence of electron spin angular momentum is inferred from experiments, such as the Stern–Gerlach experiment, in which silver atoms were observed to possess two possible discrete angular momenta despite having no orbital angular momentum. The relativistic spin–statistics theorem connects electron spin quantization to the Pauli exclusion principle: observations of exclusion imply half-integer spin, and observations of half-integer spin imply exclusion. Spin is described mathematically as a vector for some particles such as photons, and as a spinor or bispinor for other particles such as electrons. Spinors and bispinors behave similarly to vectors: they have definite magnitudes and change under rotations; however, they use an unconventional "direction". All elementary particles of a given kind have the same magnitude of spin angular momentum, though its direction may change. These are indicated by assigning the particle a spin quantum number. The SI units of spin are the same as classical angular momentum (i.e., N·m·s, J·s, or kg·m2·s−1). In quantum mechanics, angular momentum and spin angular momentum take discrete values proportional to the Planck constant. In practice, spin is usually given as a dimensionless spin quantum number by dividing the spin angular momentum by the reduced Planck constant . Often, the "spin quantum number" is simply called "spin". Models Rotating charged mass The earliest models for electron spin imagined a rotating charged mass, but this model fails when examined in detail: the required space distribution does not match limits on the electron radius: the required rotation speed exceeds the speed of light. In the Standard Model, the fundamental particles are all considered "point-like": they have their effects through the field that surrounds them. Any model for spin based on mass rotation would need to be consistent with that model. Pauli's "classically non-describable two-valuedness" Wolfgang Pauli, a central figure in the history of quantum spin, initially rejected any idea that the "degree of freedom" he introduced to explain experimental observations was related to rotation. He called it "classically non-describable two-valuedness". Later, he allowed that it is related to angular momentum, but insisted on considering spin an abstract property. This approach allowed Pauli to develop a proof of his fundamental Pauli exclusion principle, a proof now called the spin-statistics theorem. In retrospect, this insistence and the style of his proof initiated the modern particle-physics era, where abstract quantum properties derived from symmetry properties dominate. Concrete interpretation became secondary and optional. Circulation of classical fields The first classical model for spin proposed a small rigid particle rotating about an axis, as ordinary use of the word may suggest. Angular momentum can be computed from a classical field as well. By applying Frederik Belinfante's approach to calculating the angular momentum of a field, Hans C. Ohanian showed that "spin is essentially a wave property ... generated by a circulating flow of charge in the wave field of the electron". 
This same concept of spin can be applied to gravity waves in water: "spin is generated by subwavelength circular motion of water particles". Unlike classical wavefield circulation, which allows continuous values of angular momentum, quantum wavefields allow only discrete values. Consequently, energy transfer to or from spin states always occurs in fixed quantum steps. Only a few steps are allowed: for many qualitative purposes, the complexity of the spin quantum wavefields can be ignored and the system properties can be discussed in terms of "integer" or "half-integer" spin models as discussed in quantum numbers below. Dirac's relativistic electron Quantitative calculations of spin properties for electrons requires the Dirac relativistic wave equation. Relation to orbital angular momentum As the name suggests, spin was originally conceived as the rotation of a particle around some axis. Historically orbital angular momentum related to particle orbits. While the names based on mechanical models have survived, the physical explanation has not. Quantization fundamentally alters the character of both spin and orbital angular momentum. Since elementary particles are point-like, self-rotation is not well-defined for them. However, spin implies that the phase of the particle depends on the angle as for rotation of angle around the axis parallel to the spin . This is equivalent to the quantum-mechanical interpretation of momentum as phase dependence in the position, and of orbital angular momentum as phase dependence in the angular position. For fermions, the picture is less clear: From the Ehrenfest theorem, the angular velocity is equal to the derivative of the Hamiltonian to its conjugate momentum, which is the total angular momentum operator Therefore, if the Hamiltonian has any dependence on the spin , then must be non-zero; consequently, for classical mechanics, the existence of spin in the Hamiltonian will produce an actual angular velocity, and hence an actual physical rotation – that is, a change in the phase-angle, , over time. However, whether this holds true for free electron is ambiguous, since for an electron, ² is a constant and one might decide that since it cannot change, no partial can exist. Therefore it is a matter of interpretation whether the Hamiltonian must include such a term, and whether this aspect of classical mechanics extends into quantum mechanics (any particle's intrinsic spin angular momentum, , is a quantum number arising from a "spinor" in the mathematical solution to the Dirac equation, rather than being a more nearly physical quantity, like orbital angular momentum ). Nevertheless, spin appears in the Dirac equation, and thus the relativistic Hamiltonian of the electron, treated as a Dirac field, can be interpreted as including a dependence in the spin . Quantum number Spin obeys the mathematical laws of angular momentum quantization. The specific properties of spin angular momenta include: Spin quantum numbers may take either half-integer or integer values. Although the direction of its spin can be changed, the magnitude of the spin of an elementary particle cannot be changed. The spin of a charged particle is associated with a magnetic dipole moment with a -factor that differs from 1. (In the classical context, this would imply the internal charge and mass distributions differing for a rotating object.) The conventional definition of the spin quantum number is , where can be any non-negative integer. Hence the allowed values of are 0, , 1, , 2, etc. 
The value of for an elementary particle depends only on the type of particle and cannot be altered in any known way (in contrast to the spin direction described below). The spin angular momentum of any physical system is quantized. The allowed values of are where is the Planck constant, and is the reduced Planck constant. In contrast, orbital angular momentum can only take on integer values of ; i.e., even-numbered values of . Fermions and bosons Those particles with half-integer spins, such as , , , are known as fermions, while those particles with integer spins, such as 0, 1, 2, are known as bosons. The two families of particles obey different rules and broadly have different roles in the world around us. A key distinction between the two families is that fermions obey the Pauli exclusion principle: that is, there cannot be two identical fermions simultaneously having the same quantum numbers (meaning, roughly, having the same position, velocity and spin direction). Fermions obey the rules of Fermi–Dirac statistics. In contrast, bosons obey the rules of Bose–Einstein statistics and have no such restriction, so they may "bunch together" in identical states. Also, composite particles can have spins different from their component particles. For example, a helium-4 atom in the ground state has spin 0 and behaves like a boson, even though the quarks and electrons which make it up are all fermions. This has some profound consequences: Quarks and leptons (including electrons and neutrinos), which make up what is classically known as matter, are all fermions with spin . The common idea that "matter takes up space" actually comes from the Pauli exclusion principle acting on these particles to prevent the fermions from being in the same quantum state. Further compaction would require electrons to occupy the same energy states, and therefore a kind of pressure (sometimes known as degeneracy pressure of electrons) acts to resist the fermions being overly close. Elementary fermions with other spins (, , etc.) are not known to exist. Elementary particles which are thought of as carrying forces are all bosons with spin 1. They include the photon, which carries the electromagnetic force, the gluon (strong force), and the W and Z bosons (weak force). The ability of bosons to occupy the same quantum state is used in the laser, which aligns many photons having the same quantum number (the same direction and frequency), superfluid liquid helium resulting from helium-4 atoms being bosons, and superconductivity, where pairs of electrons (which individually are fermions) act as single composite bosons. Elementary bosons with other spins (0, 2, 3, etc.) were not historically known to exist, although they have received considerable theoretical treatment and are well established within their respective mainstream theories. In particular, theoreticians have proposed the graviton (predicted to exist by some quantum gravity theories) with spin 2, and the Higgs boson (explaining electroweak symmetry breaking) with spin 0. Since 2013, the Higgs boson with spin 0 has been considered proven to exist. It is the first scalar elementary particle (spin 0) known to exist in nature. Atomic nuclei have nuclear spin which may be either half-integer or integer, so that the nuclei may be either fermions or bosons. 
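The quantization rule quoted earlier in this passage lost its formula in extraction; the standard expression is |S| = ħ√(s(s+1)), with s = n/2 for a non-negative integer n. A short sketch evaluating it for a few spin quantum numbers:

```python
# Magnitude of the spin angular momentum, |S| = hbar * sqrt(s(s+1)), for a few values of s.
from math import sqrt

HBAR = 1.054571817e-34   # reduced Planck constant, J*s

def spin_magnitude(s):
    """Magnitude of the spin angular momentum for spin quantum number s."""
    return HBAR * sqrt(s * (s + 1))

for s in (0, 0.5, 1, 1.5, 2):
    kind = "boson" if float(s).is_integer() else "fermion"
    print(f"s = {s:<3}  |S| = {spin_magnitude(s) / HBAR:.3f} hbar   ({kind})")
# Orbital angular momentum, by contrast, is restricted to integer quantum numbers.
```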
Spin–statistics theorem The spin–statistics theorem splits particles into two groups: bosons and fermions, where bosons obey Bose–Einstein statistics, and fermions obey Fermi–Dirac statistics (and therefore the Pauli exclusion principle). Specifically, the theorem requires that particles with half-integer spins obey the Pauli exclusion principle while particles with integer spin do not. As an example, electrons have half-integer spin and are fermions that obey the Pauli exclusion principle, while photons have integer spin and do not. The theorem was derived by Wolfgang Pauli in 1940; it relies on both quantum mechanics and the theory of special relativity. Pauli described this connection between spin and statistics as "one of the most important applications of the special relativity theory". Magnetic moments Particles with spin can possess a magnetic dipole moment, just like a rotating electrically charged body in classical electrodynamics. These magnetic moments can be experimentally observed in several ways, e.g. by the deflection of particles by inhomogeneous magnetic fields in a Stern–Gerlach experiment, or by measuring the magnetic fields generated by the particles themselves. The intrinsic magnetic moment of a spin- particle with charge , mass , and spin angular momentum is where the dimensionless quantity is called the spin -factor. For exclusively orbital rotations, it would be 1 (assuming that the mass and the charge occupy spheres of equal radius). The electron, being a charged elementary particle, possesses a nonzero magnetic moment. One of the triumphs of the theory of quantum electrodynamics is its accurate prediction of the electron -factor, which has been experimentally determined to have the value , with the digits in parentheses denoting measurement uncertainty in the last two digits at one standard deviation. The value of 2 arises from the Dirac equation, a fundamental equation connecting the electron's spin with its electromagnetic properties; and the deviation from arises from the electron's interaction with the surrounding quantum fields, including its own electromagnetic field and virtual particles. Composite particles also possess magnetic moments associated with their spin. In particular, the neutron possesses a non-zero magnetic moment despite being electrically neutral. This fact was an early indication that the neutron is not an elementary particle. In fact, it is made up of quarks, which are electrically charged particles. The magnetic moment of the neutron comes from the spins of the individual quarks and their orbital motions. Neutrinos are both elementary and electrically neutral. The minimally extended Standard Model that takes into account non-zero neutrino masses predicts neutrino magnetic moments of: where the are the neutrino magnetic moments, are the neutrino masses, and is the Bohr magneton. New physics above the electroweak scale could, however, lead to significantly higher neutrino magnetic moments. It can be shown in a model-independent way that neutrino magnetic moments larger than about 10−14  are "unnatural" because they would also lead to large radiative contributions to the neutrino mass. Since the neutrino masses are known to be at most about , fine-tuning would be necessary in order to prevent large contributions to the neutrino mass via radiative corrections. The measurement of neutrino magnetic moments is an active area of research. 
Experimental results have put the neutrino magnetic moment at less than  times the electron's magnetic moment. On the other hand, elementary particles with spin but without electric charge, such as the photon and Z boson, do not have a magnetic moment. Curie temperature and loss of alignment In ordinary materials, the magnetic dipole moments of individual atoms produce magnetic fields that cancel one another, because each dipole points in a random direction, with the overall average being very near zero. Ferromagnetic materials below their Curie temperature, however, exhibit magnetic domains in which the atomic dipole moments spontaneously align locally, producing a macroscopic, non-zero magnetic field from the domain. These are the ordinary "magnets" with which we are all familiar. In paramagnetic materials, the magnetic dipole moments of individual atoms will partially align with an externally applied magnetic field. In diamagnetic materials, on the other hand, the magnetic dipole moments of individual atoms align oppositely to any externally applied magnetic field, even if it requires energy to do so. The study of the behavior of such "spin models" is a thriving area of research in condensed matter physics. For instance, the Ising model describes spins (dipoles) that have only two possible states, up and down, whereas in the Heisenberg model the spin vector is allowed to point in any direction. These models have many interesting properties, which have led to interesting results in the theory of phase transitions. Direction Spin projection quantum number and multiplicity In classical mechanics, the angular momentum of a particle possesses not only a magnitude (how fast the body is rotating), but also a direction (either up or down on the axis of rotation of the particle). Quantum-mechanical spin also contains information about direction, but in a more subtle form. Quantum mechanics states that the component of angular momentum for a spin-s particle measured along any direction can only take on the values where is the spin component along the -th axis (either , , or ), is the spin projection quantum number along the -th axis, and is the principal spin quantum number (discussed in the previous section). Conventionally the direction chosen is the  axis: where is the spin component along the  axis, is the spin projection quantum number along the  axis. One can see that there are possible values of . The number "" is the multiplicity of the spin system. For example, there are only two possible values for a spin- particle: and . These correspond to quantum states in which the spin component is pointing in the +z or −z directions respectively, and are often referred to as "spin up" and "spin down". For a spin- particle, like a delta baryon, the possible values are +, +, −, −. Vector For a given quantum state, one could think of a spin vector whose components are the expectation values of the spin components along each axis, i.e., . This vector then would describe the "direction" in which the spin is pointing, corresponding to the classical concept of the axis of rotation. It turns out that the spin vector is not very useful in actual quantum-mechanical calculations, because it cannot be measured directly: , and cannot possess simultaneous definite values, because of a quantum uncertainty relation between them. 
However, for statistically large collections of particles that have been placed in the same pure quantum state, such as through the use of a Stern–Gerlach apparatus, the spin vector does have a well-defined experimental meaning: It specifies the direction in ordinary space in which a subsequent detector must be oriented in order to achieve the maximum possible probability (100%) of detecting every particle in the collection. For spin- particles, this probability drops off smoothly as the angle between the spin vector and the detector increases, until at an angle of 180°—that is, for detectors oriented in the opposite direction to the spin vector—the expectation of detecting particles from the collection reaches a minimum of 0%. As a qualitative concept, the spin vector is often handy because it is easy to picture classically. For instance, quantum-mechanical spin can exhibit phenomena analogous to classical gyroscopic effects. For example, one can exert a kind of "torque" on an electron by putting it in a magnetic field (the field acts upon the electron's intrinsic magnetic dipole moment—see the following section). The result is that the spin vector undergoes precession, just like a classical gyroscope. This phenomenon is known as electron spin resonance (ESR). The equivalent behaviour of protons in atomic nuclei is used in nuclear magnetic resonance (NMR) spectroscopy and imaging. Mathematically, quantum-mechanical spin states are described by vector-like objects known as spinors. There are subtle differences between the behavior of spinors and vectors under coordinate rotations. For example, rotating a spin- particle by 360° does not bring it back to the same quantum state, but to the state with the opposite quantum phase; this is detectable, in principle, with interference experiments. To return the particle to its exact original state, one needs a 720° rotation. (The plate trick and Möbius strip give non-quantum analogies.) A spin-zero particle can only have a single quantum state, even after torque is applied. Rotating a spin-2 particle 180° can bring it back to the same quantum state, and a spin-4 particle should be rotated 90° to bring it back to the same quantum state. The spin-2 particle can be analogous to a straight stick that looks the same even after it is rotated 180°, and a spin-0 particle can be imagined as sphere, which looks the same after whatever angle it is turned through. Mathematical formulation Operator Spin obeys commutation relations analogous to those of the orbital angular momentum: where is the Levi-Civita symbol. It follows (as with angular momentum) that the eigenvectors of and (expressed as kets in the total basis) are The spin raising and lowering operators acting on these eigenvectors give where . But unlike orbital angular momentum, the eigenvectors are not spherical harmonics. They are not functions of and . There is also no reason to exclude half-integer values of and . All quantum-mechanical particles possess an intrinsic spin (though this value may be equal to zero). The projection of the spin on any axis is quantized in units of the reduced Planck constant, such that the state function of the particle is, say, not , but , where can take only the values of the following discrete set: One distinguishes bosons (integer spin) and fermions (half-integer spin). The total angular momentum conserved in interaction processes is then the sum of the orbital angular momentum and the spin. 
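The commutation relations quoted above, [S_i, S_j] = iħ ε_ijk S_k, hold in every spin representation and are easy to verify numerically. The sketch below uses the standard spin-1 matrices in units of ħ (textbook matrices, not tied to any particular particle) and also checks that S² equals s(s+1)ħ² times the identity for s = 1.

```python
import numpy as np

# Standard spin-1 matrices in units of hbar
Sx = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=complex) / np.sqrt(2)
Sy = np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]], dtype=complex) / np.sqrt(2)
Sz = np.diag([1, 0, -1]).astype(complex)

def comm(a, b):
    """Commutator [a, b] = ab - ba."""
    return a @ b - b @ a

# [Sx, Sy] = i Sz and cyclic permutations (hbar = 1 in these units)
print(np.allclose(comm(Sx, Sy), 1j * Sz))
print(np.allclose(comm(Sy, Sz), 1j * Sx))
print(np.allclose(comm(Sz, Sx), 1j * Sy))

# S^2 = s(s+1) * identity with s = 1
S2 = Sx @ Sx + Sy @ Sy + Sz @ Sz
print(np.allclose(S2, 1 * (1 + 1) * np.eye(3)))
```

The same check works for any s, with the matrices built from the raising and lowering operators mentioned above.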
Pauli matrices The quantum-mechanical operators associated with spin- observables are where in Cartesian components For the special case of spin- particles, , and are the three Pauli matrices: Pauli exclusion principle The Pauli exclusion principle states that the wavefunction for a system of identical particles having spin must change upon interchanges of any two of the particles as Thus, for bosons the prefactor will reduce to +1, for fermions to −1. This permutation postulate for -particle state functions has most important consequences in daily life, e.g. the periodic table of the chemical elements. Rotations As described above, quantum mechanics states that components of angular momentum measured along any direction can only take a number of discrete values. The most convenient quantum-mechanical description of particle's spin is therefore with a set of complex numbers corresponding to amplitudes of finding a given value of projection of its intrinsic angular momentum on a given axis. For instance, for a spin- particle, we would need two numbers , giving amplitudes of finding it with projection of angular momentum equal to and , satisfying the requirement For a generic particle with spin , we would need such parameters. Since these numbers depend on the choice of the axis, they transform into each other non-trivially when this axis is rotated. It is clear that the transformation law must be linear, so we can represent it by associating a matrix with each rotation, and the product of two transformation matrices corresponding to rotations A and B must be equal (up to phase) to the matrix representing rotation AB. Further, rotations preserve the quantum-mechanical inner product, and so should our transformation matrices: Mathematically speaking, these matrices furnish a unitary projective representation of the rotation group SO(3). Each such representation corresponds to a representation of the covering group of SO(3), which is SU(2). There is one -dimensional irreducible representation of SU(2) for each dimension, though this representation is -dimensional real for odd and -dimensional complex for even (hence of real dimension ). For a rotation by angle in the plane with normal vector , where , and is the vector of spin operators. A generic rotation in 3-dimensional space can be built by compounding operators of this type using Euler angles: An irreducible representation of this group of operators is furnished by the Wigner D-matrix: where is Wigner's small d-matrix. Note that for and ; i.e., a full rotation about the  axis, the Wigner D-matrix elements become Recalling that a generic spin state can be written as a superposition of states with definite , we see that if is an integer, the values of are all integers, and this matrix corresponds to the identity operator. However, if is a half-integer, the values of are also all half-integers, giving for all , and hence upon rotation by 2 the state picks up a minus sign. This fact is a crucial element of the proof of the spin–statistics theorem. Lorentz transformations We could try the same approach to determine the behavior of spin under general Lorentz transformations, but we would immediately discover a major obstacle. Unlike SO(3), the group of Lorentz transformations SO(3,1) is non-compact and therefore does not have any faithful, unitary, finite-dimensional representations. 
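The sign change under a 2π rotation derived above for half-integer spin can be checked directly for spin-1/2 using the standard identity exp(-iθσ_z/2) = cos(θ/2) I - i sin(θ/2) σ_z (assumed here; it follows from σ_z² = I).

```python
import numpy as np

sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)
identity = np.eye(2, dtype=complex)

def rotate_z(theta):
    """Spin-1/2 rotation about z: exp(-i theta sigma_z / 2)."""
    return np.cos(theta / 2) * identity - 1j * np.sin(theta / 2) * sigma_z

state = np.array([1, 0], dtype=complex)   # "spin up" along z

for degrees in (360, 720):
    phase = (rotate_z(np.radians(degrees)) @ state)[0]
    print(degrees, "degree rotation multiplies the state by", np.round(phase.real, 6))
# A 360 degree rotation gives -1 (the state changes sign); 720 degrees restores +1.
```

The -1 obtained after 360° is the same factor produced by the Wigner D-matrix argument above, and it disappears only after a full 720° rotation.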
In case of spin- particles, it is possible to find a construction that includes both a finite-dimensional representation and a scalar product that is preserved by this representation. We associate a 4-component Dirac spinor with each particle. These spinors transform under Lorentz transformations according to the law where are gamma matrices, and is an antisymmetric 4 × 4 matrix parametrizing the transformation. It can be shown that the scalar product is preserved. It is not, however, positive-definite, so the representation is not unitary. Measurement of spin along the , , or axes Each of the (Hermitian) Pauli matrices of spin- particles has two eigenvalues, +1 and −1. The corresponding normalized eigenvectors are (Because any eigenvector multiplied by a constant is still an eigenvector, there is ambiguity about the overall sign. In this article, the convention is chosen to make the first element imaginary and negative if there is a sign ambiguity. The present convention is used by software such as SymPy; while many physics textbooks, such as Sakurai and Griffiths, prefer to make it real and positive.) By the postulates of quantum mechanics, an experiment designed to measure the electron spin on the , , or  axis can only yield an eigenvalue of the corresponding spin operator (, or ) on that axis, i.e. or . The quantum state of a particle (with respect to spin), can be represented by a two-component spinor: When the spin of this particle is measured with respect to a given axis (in this example, the  axis), the probability that its spin will be measured as is just . Correspondingly, the probability that its spin will be measured as is just . Following the measurement, the spin state of the particle collapses into the corresponding eigenstate. As a result, if the particle's spin along a given axis has been measured to have a given eigenvalue, all measurements will yield the same eigenvalue (since , etc.), provided that no measurements of the spin are made along other axes. Measurement of spin along an arbitrary axis The operator to measure spin along an arbitrary axis direction is easily obtained from the Pauli spin matrices. Let be an arbitrary unit vector. Then the operator for spin in this direction is simply The operator has eigenvalues of , just like the usual spin matrices. This method of finding the operator for spin in an arbitrary direction generalizes to higher spin states, one takes the dot product of the direction with a vector of the three operators for the three -, -, -axis directions. A normalized spinor for spin- in the direction (which works for all spin states except spin down, where it will give ) is The above spinor is obtained in the usual way by diagonalizing the matrix and finding the eigenstates corresponding to the eigenvalues. In quantum mechanics, vectors are termed "normalized" when multiplied by a normalizing factor, which results in the vector having a length of unity. Compatibility of spin measurements Since the Pauli matrices do not commute, measurements of spin along the different axes are incompatible. This means that if, for example, we know the spin along the  axis, and we then measure the spin along the  axis, we have invalidated our previous knowledge of the  axis spin. This can be seen from the property of the eigenvectors (i.e. eigenstates) of the Pauli matrices that So when physicists measure the spin of a particle along the  axis as, for example, , the particle's spin state collapses into the eigenstate . 
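The eigenstate overlaps that drive this argument can be evaluated with a few lines of code. The sketch below lets numpy choose the eigenvector phases, which may differ from the sign convention described above, but the resulting probabilities are unaffected.

```python
import numpy as np

pauli = {
    "x": np.array([[0, 1], [1, 0]], dtype=complex),
    "y": np.array([[0, -1j], [1j, 0]], dtype=complex),
    "z": np.array([[1, 0], [0, -1]], dtype=complex),
}

def eigenstates(axis):
    """Return (spin-down, spin-up) eigenvectors of the Pauli matrix for the given axis."""
    vals, vecs = np.linalg.eigh(pauli[axis])
    return vecs[:, 0], vecs[:, 1]          # eigh sorts eigenvalues: -1 first, +1 second

z_minus, z_plus = eigenstates("z")
x_minus, x_plus = eigenstates("x")

def prob(outcome, state):
    """Probability of obtaining `outcome` when measuring a system prepared in `state`."""
    return abs(np.vdot(outcome, state)) ** 2

# A particle prepared spin-up along x and then measured along z gives each
# outcome with probability 1/2, so any earlier z-axis information is erased.
print("P(z+ | x+) =", round(prob(z_plus, x_plus), 3))
print("P(z- | x+) =", round(prob(z_minus, x_plus), 3))
```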
When we then subsequently measure the particle's spin along the  axis, the spin state will now collapse into either or , each with probability . Let us say, in our example, that we measure . When we now return to measure the particle's spin along the  axis again, the probabilities that we will measure or are each (i.e. they are and respectively). This implies that the original measurement of the spin along the  axis is no longer valid, since the spin along the  axis will now be measured to have either eigenvalue with equal probability. Higher spins The spin- operator forms the fundamental representation of SU(2). By taking Kronecker products of this representation with itself repeatedly, one may construct all higher irreducible representations. That is, the resulting spin operators for higher-spin systems in three spatial dimensions can be calculated for arbitrarily large using this spin operator and ladder operators. For example, taking the Kronecker product of two spin- yields a four-dimensional representation, which is separable into a 3-dimensional spin-1 (triplet states) and a 1-dimensional spin-0 representation (singlet state). The resulting irreducible representations yield the following spin matrices and eigenvalues in the z-basis: Also useful in the quantum mechanics of multiparticle systems, the general Pauli group is defined to consist of all -fold tensor products of Pauli matrices. The analog formula of Euler's formula in terms of the Pauli matrices for higher spins is tractable, but less simple. Parity In tables of the spin quantum number for nuclei or particles, the spin is often followed by a "+" or "−". This refers to the parity with "+" for even parity (wave function unchanged by spatial inversion) and "−" for odd parity (wave function negated by spatial inversion). For example, see the isotopes of bismuth, in which the list of isotopes includes the column nuclear spin and parity. For Bi-209, the longest-lived isotope, the entry 9/2– means that the nuclear spin is 9/2 and the parity is odd. Measuring spin The nuclear spin of atoms can be determined by sophisticated improvements to the original Stern-Gerlach experiment. A single-energy (monochromatic) molecular beam of atoms in an inhomogeneous magnetic field will split into beams representing each possible spin quantum state. For an atom with electronic spin and nuclear spin , there are spin states. For example, neutral Na atoms, which have , were passed through a series of inhomogeneous magnetic fields that selected one of the two electronic spin states and separated the nuclear spin states, from which four beams were observed. Thus, the nuclear spin for 23Na atoms was found to be . The spin of pions, a type of elementary particle, was determined by the principle of detailed balance applied to those collisions of protons that produced charged pions and deuterium. The known spin values for protons and deuterium allows analysis of the collision cross-section to show that has spin . A different approach is needed for neutral pions. In that case the decay produced two gamma ray photons with spin one: This result supplemented with additional analysis leads to the conclusion that the neutral pion also has spin zero. Applications Spin has important theoretical implications and practical applications. 
Well-established direct applications of spin include: Nuclear magnetic resonance (NMR) spectroscopy in chemistry; Electron spin resonance (ESR or EPR) spectroscopy in chemistry and physics; Magnetic resonance imaging (MRI) in medicine, a type of applied NMR, which relies on proton spin density; Giant magnetoresistive (GMR) drive-head technology in modern hard disks. Electron spin plays an important role in magnetism, with applications for instance in computer memories. The manipulation of nuclear spin by radio-frequency waves (nuclear magnetic resonance) is important in chemical spectroscopy and medical imaging. Spin–orbit coupling leads to the fine structure of atomic spectra, which is used in atomic clocks and in the modern definition of the second. Precise measurements of the -factor of the electron have played an important role in the development and verification of quantum electrodynamics. Photon spin is associated with the polarization of light (photon polarization). An emerging application of spin is as a binary information carrier in spin transistors. The original concept, proposed in 1990, is known as Datta–Das spin transistor. Electronics based on spin transistors are referred to as spintronics. The manipulation of spin in dilute magnetic semiconductor materials, such as metal-doped ZnO or TiO2 imparts a further degree of freedom and has the potential to facilitate the fabrication of more efficient electronics. There are many indirect applications and manifestations of spin and the associated Pauli exclusion principle, starting with the periodic table of chemistry. History Spin was first discovered in the context of the emission spectrum of alkali metals. Starting around 1910, many experiments on different atoms produced a collection of relationships involving quantum numbers for atomic energy levels partially summarized in Bohr's model for the atom Transitions between levels obeyed selection rules and the rules were known to be correlated with even or odd atomic number. Additional information was known from changes to atomic spectra observed in strong magnetic fields, known as the Zeeman effect. In 1924, Wolfgang Pauli used this large collection of empirical observations to propose a new degree of freedom, introducing what he called a "two-valuedness not describable classically" associated with the electron in the outermost shell. The physical interpretation of Pauli's "degree of freedom" was initially unknown. Ralph Kronig, one of Alfred Landé's assistants, suggested in early 1925 that it was produced by the self-rotation of the electron. When Pauli heard about the idea, he criticized it severely, noting that the electron's hypothetical surface would have to be moving faster than the speed of light in order for it to rotate quickly enough to produce the necessary angular momentum. This would violate the theory of relativity. Largely due to Pauli's criticism, Kronig decided not to publish his idea. In the autumn of 1925, the same thought came to Dutch physicists George Uhlenbeck and Samuel Goudsmit at Leiden University. Under the advice of Paul Ehrenfest, they published their results. The young physicists immediately regretted the publication: Hendrik Lorentz and Werner Heisenberg both pointed out problems with the concept of a spinning electron. Pauli was especially unconvinced and continued to pursue his two-valued degree of freedom. 
This allowed him to formulate the Pauli exclusion principle, stating that no two electrons can have the same quantum state in the same quantum system. Fortunately, by February 1926, Llewellyn Thomas managed to resolve a factor-of-two discrepancy between experimental results for the fine structure in the hydrogen spectrum and calculations based on Uhlenbeck and Goudsmit's (and Kronig's unpublished) model. This discrepancy was due to a relativistic effect, the difference between the electron's rotating rest frame and the nuclear rest frame; the effect is now known as Thomas precession. Thomas's result convinced Pauli that electron spin was the correct interpretation of his two-valued degree of freedom, while he continued to insist that the classical rotating-charge model was invalid. In 1927, Pauli formalized the theory of spin using the theory of quantum mechanics invented by Erwin Schrödinger and Werner Heisenberg. He pioneered the use of Pauli matrices as a representation of the spin operators and introduced a two-component spinor wave-function. Pauli's theory of spin was non-relativistic. In 1928, Paul Dirac published his relativistic electron equation, using a four-component spinor (known as a "Dirac spinor") for the electron wave-function. In 1940, Pauli proved the spin–statistics theorem, which states that fermions have half-integer spin and bosons have integer spin. In retrospect, the first direct experimental evidence of the electron spin was the Stern–Gerlach experiment of 1922. However, the correct explanation of this experiment was only given in 1927. The original interpretation assumed the two spots observed in the experiment were due to quantized orbital angular momentum. However, in 1927 Ronald Fraser showed that sodium atoms are isotropic, with no orbital angular momentum, and suggested that the observed magnetic properties were due to electron spin. In the same year, Phipps and Taylor applied the Stern–Gerlach technique to hydrogen atoms; the ground state of hydrogen has zero orbital angular momentum, but the measurements again showed two peaks. Once the quantum theory became established, it became clear that the original interpretation could not have been correct: the number of possible values of orbital angular momentum along any one axis is always odd, which cannot account for the two spots observed. Hydrogen atoms have a single electron with two spin states, giving the two spots observed; silver atoms have closed shells which do not contribute to the magnetic moment, and only the unmatched outer electron's spin responds to the field. See also Chirality (physics) Dynamic nuclear polarization Helicity (particle physics) Holstein–Primakoff transformation Kramers' theorem Pauli equation Pauli–Lubanski pseudovector Rarita–Schwinger equation Representation theory of SU(2) Spin angular momentum of light Spin engineering Spin-flip Spin isomers of hydrogen Spin–orbit interaction Spin tensor Spintronics Spin wave Yrast References Further reading Sin-Itiro Tomonaga, The Story of Spin, 1997 External links Goudsmit on the discovery of electron spin. Nature: "Milestones in 'spin' since 1896." ECE 495N Lecture 36: Spin Online lecture by S. Datta Rotational symmetry Quantum field theory Physical quantities
0.770644
0.998698
0.769641
Tectonics
Tectonics (; ) are the processes that result in the structure and properties of the Earth's crust and its evolution through time. The field of planetary tectonics extends the concept to other planets and moons. These processes include those of mountain-building, the growth and behavior of the strong, old cores of continents known as cratons, and the ways in which the relatively rigid plates that constitute the Earth's outer shell interact with each other. Principles of tectonics also provide a framework for understanding the earthquake and volcanic belts that directly affect much of the global population. Tectonic studies are important as guides for economic geologists searching for fossil fuels and ore deposits of metallic and nonmetallic resources. An understanding of tectonic principles can help geomorphologists to explain erosion patterns and other Earth-surface features. Main types of tectonic regime Extensional tectonics Extensional tectonics is associated with the stretching and thinning of the crust or the lithosphere. This type of tectonics is found at divergent plate boundaries, in continental rifts, during and after a period of continental collision caused by the lateral spreading of the thickened crust formed, at releasing bends in strike-slip faults, in back-arc basins, and on the continental end of passive margin sequences where a detachment layer is present. Thrust (contractional) tectonics Thrust tectonics is associated with the shortening and thickening of the crust, or the lithosphere. This type of tectonics is found at zones of continental collision, at restraining bends in strike-slip faults, and at the oceanward part of passive margin sequences where a detachment layer is present. Strike-slip tectonics Strike-slip tectonics is associated with the relative lateral movement of parts of the crust or the lithosphere. This type of tectonics is found along oceanic and continental transform faults which connect offset segments of mid-ocean ridges. Strike-slip tectonics also occurs at lateral offsets in extensional and thrust fault systems. In areas involved with plate collisions strike-slip deformation occurs in the over-riding plate in zones of oblique collision and accommodates deformation in the foreland to a collisional belt. Plate tectonics In plate tectonics, the outermost part of the Earth known as the lithosphere (the crust and uppermost mantle) act as a single mechanical layer. The lithosphere is divided into separate "plates" that move relative to each other on the underlying, relatively weak asthenosphere in a process ultimately driven by the continuous loss of heat from the Earth's interior. There are three main types of plate boundaries: divergent, where plates move apart from each other and new lithosphere is formed in the process of sea-floor spreading; transform, where plates slide past each other, and convergent, where plates converge and lithosphere is "consumed" by the process of subduction. Convergent and transform boundaries are responsible for most of the world's major (Mw > 7) earthquakes. Convergent and divergent boundaries are also the site of most of the world's volcanoes, such as around the Pacific Ring of Fire. Most of the deformation in the lithosphere is related to the interaction between plates at or near plate boundaries. 
Recent studies, based on the integration of available geological data with satellite imagery and with gravimetric and magnetic anomaly datasets, have shown that the Earth's crust is dissected by thousands of tectonic elements of different types, which define its subdivision into numerous smaller microplates that have amalgamated into the larger plates. Other fields of tectonic studies Salt tectonics Salt tectonics is concerned with the structural geometries and deformation processes associated with the presence of significant thicknesses of rock salt within a sequence of rocks. This is due both to the low density of salt, which does not increase with burial, and to its low strength. Neotectonics Neotectonics is the study of the motions and deformations of the Earth's crust (geological and geomorphological processes) that are current or recent in geological time. The term may also refer to the motions and deformations themselves. The corresponding time frame is referred to as the neotectonic period; accordingly, the preceding time is referred to as the palaeotectonic period. Tectonophysics Tectonophysics is the study of the physical processes associated with deformation of the crust and mantle, from the scale of individual mineral grains up to that of tectonic plates. Seismotectonics Seismotectonics is the study of the relationship between earthquakes, active tectonics, and individual faults in a region. It seeks to understand which faults are responsible for seismic activity in an area by analysing a combination of regional tectonics, recent instrumentally recorded events, accounts of historical earthquakes, and geomorphological evidence. This information can then be used to quantify the seismic hazard of an area. Impact tectonics Impact tectonics is the study of the modification of the lithosphere by high-velocity impact cratering events. Planetary tectonics Techniques used in the analysis of tectonics on Earth have also been applied to the study of the planets and their moons, especially icy moons. See also Tectonophysics Seismology UNESCO world heritage site Glarus Thrust Volcanology Mohorovičić discontinuity References Further reading Edward A. Keller (2001), Active Tectonics: Earthquakes, Uplift, and Landscape, Prentice Hall, 2nd edition. Stanley A. Schumm, Jean F. Dumont and John M. Holbrook (2002), Active Tectonics and Alluvial Rivers, Cambridge University Press, reprint edition. External links The Origin and the Mechanics of the Forces Responsible for Tectonic Plate Movements The Paleomap Project
0.776236
0.991484
0.769626
Similitude
Similitude is a concept applicable to the testing of engineering models. A model is said to have similitude with the real application if the two share geometric similarity, kinematic similarity and dynamic similarity. Similarity and similitude are interchangeable in this context. The term dynamic similitude is often used as a catch-all because it implies that geometric and kinematic similitude have already been met. Similitude's main application is in hydraulic and aerospace engineering to test fluid flow conditions with scaled models. It is also the primary theory behind many textbook formulas in fluid mechanics. The concept of similitude is strongly tied to dimensional analysis. Overview Engineering models are used to study complex fluid dynamics problems where calculations and computer simulations aren't reliable. Models are usually smaller than the final design, but not always. Scale models allow testing of a design prior to building, and in many cases are a critical step in the development process. Construction of a scale model, however, must be accompanied by an analysis to determine what conditions it is tested under. While the geometry may be simply scaled, other parameters, such as pressure, temperature or the velocity and type of fluid may need to be altered. Similitude is achieved when testing conditions are created such that the test results are applicable to the real design. The following criteria are required to achieve similitude; Geometric similarity – the model is the same shape as the application, usually scaled. Kinematic similarity – fluid flow of both the model and real application must undergo similar time rates of change motions. (fluid streamlines are similar) Dynamic similarity – ratios of all forces acting on corresponding fluid particles and boundary surfaces in the two systems are constant. To satisfy the above conditions the application is analyzed; All parameters required to describe the system are identified using principles from continuum mechanics. Dimensional analysis is used to express the system with as few independent variables and as many dimensionless parameters as possible. The values of the dimensionless parameters are held to be the same for both the scale model and application. This can be done because they are dimensionless and will ensure dynamic similitude between the model and the application. The resulting equations are used to derive scaling laws which dictate model testing conditions. It is often impossible to achieve strict similitude during a model test. The greater the departure from the application's operating conditions, the more difficult achieving similitude is. In these cases some aspects of similitude may be neglected, focusing on only the most important parameters. The design of marine vessels remains more of an art than a science in large part because dynamic similitude is especially difficult to attain for a vessel that is partially submerged: a ship is affected by wind forces in the air above it, by hydrodynamic forces within the water under it, and especially by wave motions at the interface between the water and the air. The scaling requirements for each of these phenomena differ, so models cannot replicate what happens to a full sized vessel nearly so well as can be done for an aircraft or submarine—each of which operates entirely within one medium. Similitude is a term used widely in fracture mechanics relating to the strain life approach. 
Under given loading conditions the fatigue damage in an un-notched specimen is comparable to that of a notched specimen. Similitude suggests that the component fatigue life of the two objects will also be similar. An example Consider a submarine modeled at 1/40th scale. The application operates in sea water at 0.5 °C, moving at 5 m/s. The model will be tested in fresh water at 20 °C. Find the power required for the submarine to operate at the stated speed. A free body diagram is constructed and the relevant relationships of force and velocity are formulated using techniques from continuum mechanics. The variables which describe the system are: This example has five independent variables and three fundamental units. The fundamental units are: meter, kilogram, second. Invoking the Buckingham π theorem shows that the system can be described with two dimensionless numbers and one independent variable. Dimensional analysis is used to rearrange the units to form the Reynolds number and pressure coefficient. These dimensionless numbers account for all the variables listed above except F, which will be the test measurement. Since the dimensionless parameters will stay constant for both the test and the real application, they will be used to formulate scaling laws for the test. Scaling laws: The pressure is not one of the five variables, but the force is. The pressure difference (Δ) has thus been replaced with in the pressure coefficient. This gives a required test velocity of: . A model test is then conducted at that velocity and the force that is measured in the model is then scaled to find the force that can be expected for the real application: The power in watts required by the submarine is then: Note that even though the model is scaled smaller, the water velocity needs to be increased for testing. This remarkable result shows how similitude in nature is often counterintuitive. Typical applications Fluid mechanics Similitude has been well documented for a large number of engineering problems and is the basis of many textbook formulas and dimensionless quantities. These formulas and quantities are easy to use without having to repeat the laborious task of dimensional analysis and formula derivation. Simplification of the formulas (by neglecting some aspects of similitude) is common, and needs to be reviewed by the engineer for each application. Similitude can be used to predict the performance of a new design based on data from an existing, similar design. In this case, the model is the existing design. Another use of similitude and models is in validation of computer simulations with the ultimate goal of eliminating the need for physical models altogether. Another application of similitude is to replace the operating fluid with a different test fluid. Wind tunnels, for example, have trouble with air liquefying in certain conditions so helium is sometimes used. Other applications may operate in dangerous or expensive fluids so the testing is carried out in a more convenient substitute. Some common applications of similitude and associated dimensionless numbers; Solid mechanics: structural similitude Similitude analysis is a powerful engineering tool to design the scaled-down structures. Although both dimensional analysis and direct use of the governing equations may be used to derive the scaling laws, the latter results in more specific scaling laws. The design of the scaled-down composite structures can be successfully carried out using the complete and partial similarities. 
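Returning to the fluid-mechanics example above, the two scaling laws can be written out as a short calculation. The fluid properties and the measured model force below are illustrative assumptions rather than the figures used in the original worked example, so the printed numbers are indicative only.

```python
# Fluid properties are typical handbook values and are assumptions for this sketch
RHO_SEA, MU_SEA = 1028.0, 1.88e-3      # sea water near 0.5 C: density (kg/m^3), viscosity (Pa*s)
RHO_FRESH, MU_FRESH = 998.0, 1.00e-3   # fresh water at 20 C

SCALE = 40.0          # prototype length / model length
V_PROTO = 5.0         # m/s, prototype speed

# 1) Reynolds-number matching fixes the model test speed:
#    rho_m * V_m * L_m / mu_m = rho_p * V_p * L_p / mu_p
v_model = V_PROTO * SCALE * (MU_FRESH / RHO_FRESH) * (RHO_SEA / MU_SEA)
print(f"required model test speed: {v_model:.0f} m/s")

# 2) Equality of the force coefficient F / (rho V^2 L^2) scales a measured model
#    force up to the prototype force (F_model below is a made-up measurement).
F_model = 100.0  # N, hypothetical measured drag on the model
F_proto = F_model * (RHO_SEA / RHO_FRESH) * (V_PROTO / v_model) ** 2 * SCALE ** 2
print(f"prototype force: {F_proto:.0f} N, power: {F_proto * V_PROTO / 1000:.1f} kW")
```

As in the worked example, the model must be towed much faster than the full-scale vessel, which is the counterintuitive consequence of holding the Reynolds number fixed.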
In the design of scaled structures under the complete-similarity condition, all of the derived scaling laws must be satisfied between the model and the prototype, which yields perfect similarity between the two scales. However, designing a scaled-down structure that is perfectly similar to its prototype is often impractical, especially for laminated structures. Relaxing some of the scaling laws removes this limitation of design under the complete-similarity condition and yields scaled models that are only partially similar to the prototype. The design of scaled structures under the partial-similarity condition must nevertheless follow a deliberate methodology to ensure that the scaled structure accurately predicts the structural response of the prototype. Scaled models can be designed to replicate the dynamic characteristics (e.g. frequencies, mode shapes and damping ratios) of their full-scale counterparts. However, appropriate response scaling laws need to be derived to predict the dynamic response of the full-scale prototype from the experimental data of the scaled model. See also Similitude of ship models References Further reading External links MIT open courseware lecture notes on Similitude for marine engineering Dimensional analysis Conceptual modelling

0.800053
0.961963
0.769622
Isentropic process
An isentropic process is an idealized thermodynamic process that is both adiabatic and reversible. The work transfers of the system are frictionless, and there is no net transfer of heat or matter. Such an idealized process is useful in engineering as a model of and basis of comparison for real processes. This process is idealized because reversible processes do not occur in reality; thinking of a process as both adiabatic and reversible would show that the initial and final entropies are the same, thus, the reason it is called isentropic (entropy does not change). Thermodynamic processes are named based on the effect they would have on the system (ex. isovolumetric: constant volume, isenthalpic: constant enthalpy). Even though in reality it is not necessarily possible to carry out an isentropic process, some may be approximated as such. The word "isentropic" derives from the process being one in which the entropy of the system remains unchanged. In addition to a process which is both adiabatic and reversible. Background The second law of thermodynamics states that where is the amount of energy the system gains by heating, is the temperature of the surroundings, and is the change in entropy. The equal sign refers to a reversible process, which is an imagined idealized theoretical limit, never actually occurring in physical reality, with essentially equal temperatures of system and surroundings. For an isentropic process, if also reversible, there is no transfer of energy as heat because the process is adiabatic; δQ = 0. In contrast, if the process is irreversible, entropy is produced within the system; consequently, in order to maintain constant entropy within the system, energy must be simultaneously removed from the system as heat. For reversible processes, an isentropic transformation is carried out by thermally "insulating" the system from its surroundings. Temperature is the thermodynamic conjugate variable to entropy, thus the conjugate process would be an isothermal process, in which the system is thermally "connected" to a constant-temperature heat bath. Isentropic processes in thermodynamic systems The entropy of a given mass does not change during a process that is internally reversible and adiabatic. A process during which the entropy remains constant is called an isentropic process, written or . Some examples of theoretically isentropic thermodynamic devices are pumps, gas compressors, turbines, nozzles, and diffusers. Isentropic efficiencies of steady-flow devices in thermodynamic systems Most steady-flow devices operate under adiabatic conditions, and the ideal process for these devices is the isentropic process. The parameter that describes how efficiently a device approximates a corresponding isentropic device is called isentropic or adiabatic efficiency. Isentropic efficiency of turbines: Isentropic efficiency of compressors: Isentropic efficiency of nozzles: For all the above equations: is the specific enthalpy at the entrance state, is the specific enthalpy at the exit state for the actual process, is the specific enthalpy at the exit state for the isentropic process. Isentropic devices in thermodynamic cycles Note: The isentropic assumptions are only applicable with ideal cycles. Real cycles have inherent losses due to compressor and turbine inefficiencies and the second law of thermodynamics. Real systems are not truly isentropic, but isentropic behavior is an adequate approximation for many calculation purposes. 
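As a numerical sketch of the efficiency definitions above: for an adiabatic turbine the isentropic efficiency compares the actual enthalpy drop with the ideal (isentropic) drop, η = (h1 - h2a)/(h1 - h2s). The air properties and operating conditions below are illustrative assumptions, and the isentropic exit temperature uses the ideal-gas relation T2s = T1 (p2/p1)^((γ-1)/γ) derived later in the article.

```python
# Illustrative air-standard turbine calculation (all numbers are assumptions)
CP = 1005.0          # J/(kg*K), specific heat of air at constant pressure
GAMMA = 1.4          # ratio of specific heats for air

T1 = 800.0           # K, turbine inlet temperature
P1, P2 = 10e5, 1e5   # Pa, inlet and outlet pressures
ETA = 0.85           # assumed isentropic efficiency of the turbine

# Isentropic (ideal) exit temperature: T2s = T1 * (P2/P1)**((gamma-1)/gamma)
T2s = T1 * (P2 / P1) ** ((GAMMA - 1) / GAMMA)

# Treating air as an ideal gas with h = cp * T, the definition
# eta = (h1 - h2a) / (h1 - h2s) gives the actual work and exit temperature.
w_ideal = CP * (T1 - T2s)            # J/kg, isentropic specific work
w_actual = ETA * w_ideal             # J/kg, actual specific work
T2a = T1 - w_actual / CP             # K, actual exit temperature

print(f"T2s = {T2s:.1f} K, T2a = {T2a:.1f} K")
print(f"ideal work = {w_ideal / 1000:.1f} kJ/kg, actual work = {w_actual / 1000:.1f} kJ/kg")
```

The compressor and nozzle efficiencies quoted above are applied in the same way, with the ideal and actual enthalpy changes swapped as appropriate.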
Isentropic flow In fluid dynamics, an isentropic flow is a fluid flow that is both adiabatic and reversible. That is, no heat is added to the flow, and no energy transformations occur due to friction or dissipative effects. For an isentropic flow of a perfect gas, several relations can be derived to define the pressure, density and temperature along a streamline. Note that energy can be exchanged with the flow in an isentropic transformation, as long as it does not happen as heat exchange. An example of such an exchange would be an isentropic expansion or compression that entails work done on or by the flow. For an isentropic flow, entropy density can vary between different streamlines. If the entropy density is the same everywhere, then the flow is said to be homentropic. Derivation of the isentropic relations For a closed system, the total change in energy of a system is the sum of the work done and the heat added: dU = δW + δQ. The reversible work done on a system by changing the volume is δW = -p dV, where p is the pressure and V is the volume. The change in enthalpy (H = U + pV) is given by dH = dU + p dV + V dp. Then, for a process that is both reversible and adiabatic (i.e. no heat transfer occurs), δQ = 0, and so dS = δQ/T = 0. All reversible adiabatic processes are isentropic. This leads to two important observations: dU = δW = -p dV, and dH = V dp. Next, a great deal can be computed for isentropic processes of an ideal gas. For any transformation of an ideal gas, it is always true that dU = n Cv dT and dH = n Cp dT. Using the general results derived above for dU and dH, then n Cv dT = -p dV and n Cp dT = V dp. So for an ideal gas, the heat capacity ratio can be written as γ = Cp/Cv = -(dp/p)/(dV/V). For a calorically perfect gas, γ is constant. Hence, integrating the above equation assuming a calorically perfect gas, we get ln(p/p0) = -γ ln(V/V0), that is, p V^γ = p0 V0^γ = constant. Using the equation of state for an ideal gas, pV = nRT, it also follows that T V^(γ-1) = constant. (Proof: p V^γ = (pV) V^(γ-1) = nRT V^(γ-1); but nR is itself constant, so T V^(γ-1) = constant.) Also, eliminating V by means of V = nRT/p gives T^γ p^(1-γ) = constant. Thus, for isentropic processes with an ideal gas, p V^γ = constant, T V^(γ-1) = constant, or T^γ p^(1-γ) = constant. Table of isentropic relations for an ideal gas Derived from p V^γ = constant and pV = m Rs T: T2/T1 = (p2/p1)^((γ-1)/γ) = (V1/V2)^(γ-1) = (ρ2/ρ1)^(γ-1); p2/p1 = (T2/T1)^(γ/(γ-1)) = (V1/V2)^γ = (ρ2/ρ1)^γ; ρ2/ρ1 = (T2/T1)^(1/(γ-1)) = (p2/p1)^(1/γ) = V1/V2; where: p = pressure, V = volume, γ = ratio of specific heats = Cp/Cv, T = temperature, m = mass, Rs = gas constant for the specific gas = R/M, R = universal gas constant, M = molecular weight of the specific gas, ρ = density, Cp = specific heat at constant pressure, Cv = specific heat at constant volume. See also Gas laws Adiabatic process Isenthalpic process Isentropic analysis Polytropic process Notes References Van Wylen, G. J. and Sonntag, R. E. (1965), Fundamentals of Classical Thermodynamics, John Wiley & Sons, Inc., New York. Library of Congress Catalog Card Number: 65-19470 Thermodynamic processes Thermodynamic entropy
0.773557
0.994902
0.769614
General relativity
General relativity, also known as the general theory of relativity, and as Einstein's theory of gravity, is the geometric theory of gravitation published by Albert Einstein in 1915 and is the current description of gravitation in modern physics. General relativity generalizes special relativity and refines Newton's law of universal gravitation, providing a unified description of gravity as a geometric property of space and time, or four-dimensional spacetime. In particular, the curvature of spacetime is directly related to the energy and momentum of whatever present matter and radiation. The relation is specified by the Einstein field equations, a system of second-order partial differential equations. Newton's law of universal gravitation, which describes classical gravity, can be seen as a prediction of general relativity for the almost flat spacetime geometry around stationary mass distributions. Some predictions of general relativity, however, are beyond Newton's law of universal gravitation in classical physics. These predictions concern the passage of time, the geometry of space, the motion of bodies in free fall, and the propagation of light, and include gravitational time dilation, gravitational lensing, the gravitational redshift of light, the Shapiro time delay and singularities/black holes. So far, all tests of general relativity have been shown to be in agreement with the theory. The time-dependent solutions of general relativity enable us to talk about the history of the universe and have provided the modern framework for cosmology, thus leading to the discovery of the Big Bang and cosmic microwave background radiation. Despite the introduction of a number of alternative theories, general relativity continues to be the simplest theory consistent with experimental data. Reconciliation of general relativity with the laws of quantum physics remains a problem, however, as there is a lack of a self-consistent theory of quantum gravity. It is not yet known how gravity can be unified with the three non-gravitational forces: strong, weak and electromagnetic. Einstein's theory has astrophysical implications, including the prediction of black holes—regions of space in which space and time are distorted in such a way that nothing, not even light, can escape from them. Black holes are the end-state for massive stars. Microquasars and active galactic nuclei are believed to be stellar black holes and supermassive black holes. It also predicts gravitational lensing, where the bending of light results in multiple images of the same distant astronomical phenomenon. Other predictions include the existence of gravitational waves, which have been observed directly by the physics collaboration LIGO and other observatories. In addition, general relativity has provided the base of cosmological models of an expanding universe. Widely acknowledged as a theory of extraordinary beauty, general relativity has often been described as the most beautiful of all existing physical theories. History Henri Poincaré's 1905 theory of the dynamics of the electron was a relativistic theory which he applied to all forces, including gravity. While others thought that gravity was instantaneous or of electromagnetic origin, he suggested that relativity was "something due to our methods of measurement". In his theory, he showed that gravitational waves propagate at the speed of light. Soon afterwards, Einstein started thinking about how to incorporate gravity into his relativistic framework. 
In 1907, beginning with a simple thought experiment involving an observer in free fall (FFO), he embarked on what would be an eight-year search for a relativistic theory of gravity. After numerous detours and false starts, his work culminated in the presentation to the Prussian Academy of Science in November 1915 of what are now known as the Einstein field equations, which form the core of Einstein's general theory of relativity. These equations specify how the geometry of space and time is influenced by whatever matter and radiation are present. A version of non-Euclidean geometry, called Riemannian geometry, enabled Einstein to develop general relativity by providing the key mathematical framework on which he fit his physical ideas of gravity. This idea was pointed out by mathematician Marcel Grossmann and published by Grossmann and Einstein in 1913. The Einstein field equations are nonlinear and considered difficult to solve. Einstein used approximation methods in working out initial predictions of the theory. But in 1916, the astrophysicist Karl Schwarzschild found the first non-trivial exact solution to the Einstein field equations, the Schwarzschild metric. This solution laid the groundwork for the description of the final stages of gravitational collapse, and the objects known today as black holes. In the same year, the first steps towards generalizing Schwarzschild's solution to electrically charged objects were taken, eventually resulting in the Reissner–Nordström solution, which is now associated with electrically charged black holes. In 1917, Einstein applied his theory to the universe as a whole, initiating the field of relativistic cosmology. In line with contemporary thinking, he assumed a static universe, adding a new parameter to his original field equations—the cosmological constant—to match that observational presumption. By 1929, however, the work of Hubble and others had shown that the universe is expanding. This is readily described by the expanding cosmological solutions found by Friedmann in 1922, which do not require a cosmological constant. Lemaître used these solutions to formulate the earliest version of the Big Bang models, in which the universe has evolved from an extremely hot and dense earlier state. Einstein later declared the cosmological constant the biggest blunder of his life. During that period, general relativity remained something of a curiosity among physical theories. It was clearly superior to Newtonian gravity, being consistent with special relativity and accounting for several effects unexplained by the Newtonian theory. Einstein showed in 1915 how his theory explained the anomalous perihelion advance of the planet Mercury without any arbitrary parameters ("fudge factors"), and in 1919 an expedition led by Eddington confirmed general relativity's prediction for the deflection of starlight by the Sun during the total solar eclipse of 29 May 1919, instantly making Einstein famous. Yet the theory remained outside the mainstream of theoretical physics and astrophysics until developments between approximately 1960 and 1975, now known as the golden age of general relativity. Physicists began to understand the concept of a black hole, and to identify quasars as one of these objects' astrophysical manifestations. Ever more precise solar system tests confirmed the theory's predictive power, and relativistic cosmology also became amenable to direct observational tests. General relativity has acquired a reputation as a theory of extraordinary beauty. 
Subrahmanyan Chandrasekhar has noted that at multiple levels, general relativity exhibits what Francis Bacon has termed a "strangeness in the proportion" (i.e. elements that excite wonderment and surprise). It juxtaposes fundamental concepts (space and time versus matter and motion) which had previously been considered as entirely independent. Chandrasekhar also noted that Einstein's only guides in his search for an exact theory were the principle of equivalence and his sense that a proper description of gravity should be geometrical at its basis, so that there was an "element of revelation" in the manner in which Einstein arrived at his theory. Other elements of beauty associated with the general theory of relativity are its simplicity and symmetry, the manner in which it incorporates invariance and unification, and its perfect logical consistency. In the preface to Relativity: The Special and the General Theory, Einstein said "The present book is intended, as far as possible, to give an exact insight into the theory of Relativity to those readers who, from a general scientific and philosophical point of view, are interested in the theory, but who are not conversant with the mathematical apparatus of theoretical physics. The work presumes a standard of education corresponding to that of a university matriculation examination, and, despite the shortness of the book, a fair amount of patience and force of will on the part of the reader. The author has spared himself no pains in his endeavour to present the main ideas in the simplest and most intelligible form, and on the whole, in the sequence and connection in which they actually originated." From classical mechanics to general relativity General relativity can be understood by examining its similarities with and departures from classical physics. The first step is the realization that classical mechanics and Newton's law of gravity admit a geometric description. The combination of this description with the laws of special relativity results in a heuristic derivation of general relativity. Geometry of Newtonian gravity At the base of classical mechanics is the notion that a body's motion can be described as a combination of free (or inertial) motion, and deviations from this free motion. Such deviations are caused by external forces acting on a body in accordance with Newton's second law of motion, which states that the net force acting on a body is equal to that body's (inertial) mass multiplied by its acceleration. The preferred inertial motions are related to the geometry of space and time: in the standard reference frames of classical mechanics, objects in free motion move along straight lines at constant speed. In modern parlance, their paths are geodesics, straight world lines in curved spacetime. Conversely, one might expect that inertial motions, once identified by observing the actual motions of bodies and making allowances for the external forces (such as electromagnetism or friction), can be used to define the geometry of space, as well as a time coordinate. However, there is an ambiguity once gravity comes into play. 
According to Newton's law of gravity, and independently verified by experiments such as that of Eötvös and its successors (see Eötvös experiment), there is a universality of free fall (also known as the weak equivalence principle, or the universal equality of inertial and passive-gravitational mass): the trajectory of a test body in free fall depends only on its position and initial speed, but not on any of its material properties. A simplified version of this is embodied in Einstein's elevator experiment, illustrated in the figure on the right: for an observer in an enclosed room, it is impossible to decide, by mapping the trajectory of bodies such as a dropped ball, whether the room is stationary in a gravitational field and the ball accelerating, or in free space aboard a rocket that is accelerating at a rate equal to that of the gravitational field versus the ball which upon release has nil acceleration. Given the universality of free fall, there is no observable distinction between inertial motion and motion under the influence of the gravitational force. This suggests the definition of a new class of inertial motion, namely that of objects in free fall under the influence of gravity. This new class of preferred motions, too, defines a geometry of space and time—in mathematical terms, it is the geodesic motion associated with a specific connection which depends on the gradient of the gravitational potential. Space, in this construction, still has the ordinary Euclidean geometry. However, spacetime as a whole is more complicated. As can be shown using simple thought experiments following the free-fall trajectories of different test particles, the result of transporting spacetime vectors that can denote a particle's velocity (time-like vectors) will vary with the particle's trajectory; mathematically speaking, the Newtonian connection is not integrable. From this, one can deduce that spacetime is curved. The resulting Newton–Cartan theory is a geometric formulation of Newtonian gravity using only covariant concepts, i.e. a description which is valid in any desired coordinate system. In this geometric description, tidal effects—the relative acceleration of bodies in free fall—are related to the derivative of the connection, showing how the modified geometry is caused by the presence of mass. Relativistic generalization As intriguing as geometric Newtonian gravity may be, its basis, classical mechanics, is merely a limiting case of (special) relativistic mechanics. In the language of symmetry: where gravity can be neglected, physics is Lorentz invariant as in special relativity rather than Galilei invariant as in classical mechanics. (The defining symmetry of special relativity is the Poincaré group, which includes translations, rotations, boosts and reflections.) The differences between the two become significant when dealing with speeds approaching the speed of light, and with high-energy phenomena. With Lorentz symmetry, additional structures come into play. They are defined by the set of light cones (see image). The light-cones define a causal structure: for each event , there is a set of events that can, in principle, either influence or be influenced by via signals or interactions that do not need to travel faster than light (such as event in the image), and a set of events for which such an influence is impossible (such as event in the image). These sets are observer-independent. 
In conjunction with the world-lines of freely falling particles, the light-cones can be used to reconstruct the spacetime's semi-Riemannian metric, at least up to a positive scalar factor. In mathematical terms, this defines a conformal structure or conformal geometry. Special relativity is defined in the absence of gravity. For practical applications, it is a suitable model whenever gravity can be neglected. Bringing gravity into play, and assuming the universality of free fall motion, an analogous reasoning as in the previous section applies: there are no global inertial frames. Instead there are approximate inertial frames moving alongside freely falling particles. Translated into the language of spacetime: the straight time-like lines that define a gravity-free inertial frame are deformed to lines that are curved relative to each other, suggesting that the inclusion of gravity necessitates a change in spacetime geometry. A priori, it is not clear whether the new local frames in free fall coincide with the reference frames in which the laws of special relativity hold—that theory is based on the propagation of light, and thus on electromagnetism, which could have a different set of preferred frames. But using different assumptions about the special-relativistic frames (such as their being earth-fixed, or in free fall), one can derive different predictions for the gravitational redshift, that is, the way in which the frequency of light shifts as the light propagates through a gravitational field (cf. below). The actual measurements show that free-falling frames are the ones in which light propagates as it does in special relativity. The generalization of this statement, namely that the laws of special relativity hold to good approximation in freely falling (and non-rotating) reference frames, is known as the Einstein equivalence principle, a crucial guiding principle for generalizing special-relativistic physics to include gravity. The same experimental data shows that time as measured by clocks in a gravitational field—proper time, to give the technical term—does not follow the rules of special relativity. In the language of spacetime geometry, it is not measured by the Minkowski metric. As in the Newtonian case, this is suggestive of a more general geometry. At small scales, all reference frames that are in free fall are equivalent, and approximately Minkowskian. Consequently, we are now dealing with a curved generalization of Minkowski space. The metric tensor that defines the geometry—in particular, how lengths and angles are measured—is not the Minkowski metric of special relativity, it is a generalization known as a semi- or pseudo-Riemannian metric. Furthermore, each Riemannian metric is naturally associated with one particular kind of connection, the Levi-Civita connection, and this is, in fact, the connection that satisfies the equivalence principle and makes space locally Minkowskian (that is, in suitable locally inertial coordinates, the metric is Minkowskian, and its first partial derivatives and the connection coefficients vanish). Einstein's equations Having formulated the relativistic, geometric version of the effects of gravity, the question of gravity's source remains. In Newtonian gravity, the source is mass. In special relativity, mass turns out to be part of a more general quantity called the energy–momentum tensor, which includes both energy and momentum densities as well as stress: pressure and shear. 
Using the equivalence principle, this tensor is readily generalized to curved spacetime. Drawing further upon the analogy with geometric Newtonian gravity, it is natural to assume that the field equation for gravity relates this tensor and the Ricci tensor, which describes a particular class of tidal effects: the change in volume for a small cloud of test particles that are initially at rest, and then fall freely. In special relativity, conservation of energy–momentum corresponds to the statement that the energy–momentum tensor is divergence-free. This formula, too, is readily generalized to curved spacetime by replacing partial derivatives with their curved-manifold counterparts, covariant derivatives studied in differential geometry. With this additional condition—the covariant divergence of the energy–momentum tensor, and hence of whatever is on the other side of the equation, is zero—the simplest nontrivial set of equations are what are called Einstein's (field) equations:

$$G_{\mu\nu} \equiv R_{\mu\nu} - \tfrac{1}{2}R\,g_{\mu\nu} = \kappa\, T_{\mu\nu}.$$

On the left-hand side is the Einstein tensor, $G_{\mu\nu}$, which is symmetric and a specific divergence-free combination of the Ricci tensor $R_{\mu\nu}$ and the metric. In particular, $R = g^{\mu\nu}R_{\mu\nu}$ is the curvature scalar. The Ricci tensor itself is related to the more general Riemann curvature tensor as $R_{\mu\nu} = {R^\alpha}_{\mu\alpha\nu}$. On the right-hand side, $\kappa$ is a constant and $T_{\mu\nu}$ is the energy–momentum tensor. All tensors are written in abstract index notation. Matching the theory's prediction to observational results for planetary orbits or, equivalently, assuring that the weak-gravity, low-speed limit is Newtonian mechanics, the proportionality constant is found to be $\kappa = 8\pi G/c^4$, where $G$ is the Newtonian constant of gravitation and $c$ the speed of light in vacuum. When there is no matter present, so that the energy–momentum tensor vanishes, the results are the vacuum Einstein equations, $R_{\mu\nu} = 0$.

In general relativity, the world line of a particle free from all external, non-gravitational force is a particular type of geodesic in curved spacetime. In other words, a freely moving or falling particle always moves along a geodesic. The geodesic equation is

$$\frac{d^2 x^\mu}{ds^2} + \Gamma^\mu_{\alpha\beta}\,\frac{dx^\alpha}{ds}\frac{dx^\beta}{ds} = 0,$$

where $s$ is a scalar parameter of motion (e.g. the proper time), and $\Gamma^\mu_{\alpha\beta}$ are Christoffel symbols (sometimes called the affine connection coefficients or Levi-Civita connection coefficients), which are symmetric in the two lower indices. Greek indices may take the values 0, 1, 2, 3, and the summation convention is used for the repeated indices $\alpha$ and $\beta$. The quantity on the left-hand side of this equation is the acceleration of a particle, so this equation is analogous to Newton's laws of motion, which likewise provide formulae for the acceleration of a particle. This equation of motion employs the Einstein notation, meaning that repeated indices are summed (i.e. from zero to three). The Christoffel symbols are functions of the four spacetime coordinates and so are independent of the velocity or acceleration or other characteristics of a test particle whose motion is described by the geodesic equation.

Total force in general relativity
In general relativity, the effective gravitational potential energy of an object of mass m revolving around a massive central body M is given by

$$V(r) = -\frac{GMm}{r} + \frac{L^2}{2mr^2} - \frac{GML^2}{mc^2 r^3}.$$

A conservative total force can then be obtained as its negative gradient,

$$F(r) = -\frac{GMm}{r^2} + \frac{L^2}{mr^3} - \frac{3GML^2}{mc^2 r^4},$$

where L is the angular momentum. The first term represents the force of Newtonian gravity, which is described by the inverse-square law. The second term represents the centrifugal force in the circular motion. The third term represents the relativistic effect.
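To give a feeling for the relative size of the three terms in this force law, the following sketch evaluates them for a Mercury-like orbit using rounded, illustrative parameters (the orbital radius, speed, and the near-circular approximation L ≈ mvr are assumptions for the example, not an ephemeris). The relativistic term comes out at roughly 10⁻⁷ of the Newtonian term, which is why the resulting orbital corrections, discussed below, are so small.

```cpp
// Relative size of the three force terms in the effective-potential expression
// above, evaluated for a Mercury-like orbit with rounded parameters.
#include <cmath>
#include <cstdio>

int main() {
    const double G = 6.674e-11;   // gravitational constant [m^3 kg^-1 s^-2]
    const double c = 2.998e8;     // speed of light [m/s]
    const double M = 1.989e30;    // solar mass [kg]
    const double m = 3.301e23;    // Mercury's mass [kg]
    const double r = 5.79e10;     // orbital radius ~ semi-major axis [m]
    const double v = 4.74e4;      // mean orbital speed [m/s]
    const double L = m * v * r;   // angular momentum, near-circular approximation

    const double newtonian    = G * M * m / (r * r);      // inverse-square term
    const double centrifugal  = L * L / (m * r * r * r);  // L^2 / (m r^3)
    const double relativistic = 3.0 * G * M * L * L / (m * c * c * std::pow(r, 4));

    std::printf("Newtonian    : %.3e N\n", newtonian);
    std::printf("Centrifugal  : %.3e N\n", centrifugal);
    std::printf("Relativistic : %.3e N (ratio to Newtonian: %.1e)\n",
                relativistic, relativistic / newtonian);
    return 0;
}
```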
Alternatives to general relativity There are alternatives to general relativity built upon the same premises, which include additional rules and/or constraints, leading to different field equations. Examples are Whitehead's theory, Brans–Dicke theory, teleparallelism, f(R) gravity and Einstein–Cartan theory. Definition and basic applications The derivation outlined in the previous section contains all the information needed to define general relativity, describe its key properties, and address a question of crucial importance in physics, namely how the theory can be used for model-building. Definition and basic properties General relativity is a metric theory of gravitation. At its core are Einstein's equations, which describe the relation between the geometry of a four-dimensional pseudo-Riemannian manifold representing spacetime, and the energy–momentum contained in that spacetime. Phenomena that in classical mechanics are ascribed to the action of the force of gravity (such as free-fall, orbital motion, and spacecraft trajectories), correspond to inertial motion within a curved geometry of spacetime in general relativity; there is no gravitational force deflecting objects from their natural, straight paths. Instead, gravity corresponds to changes in the properties of space and time, which in turn changes the straightest-possible paths that objects will naturally follow. The curvature is, in turn, caused by the energy–momentum of matter. Paraphrasing the relativist John Archibald Wheeler, spacetime tells matter how to move; matter tells spacetime how to curve. While general relativity replaces the scalar gravitational potential of classical physics by a symmetric rank-two tensor, the latter reduces to the former in certain limiting cases. For weak gravitational fields and slow speed relative to the speed of light, the theory's predictions converge on those of Newton's law of universal gravitation. As it is constructed using tensors, general relativity exhibits general covariance: its laws—and further laws formulated within the general relativistic framework—take on the same form in all coordinate systems. Furthermore, the theory does not contain any invariant geometric background structures, i.e. it is background independent. It thus satisfies a more stringent general principle of relativity, namely that the laws of physics are the same for all observers. Locally, as expressed in the equivalence principle, spacetime is Minkowskian, and the laws of physics exhibit local Lorentz invariance. Model-building The core concept of general-relativistic model-building is that of a solution of Einstein's equations. Given both Einstein's equations and suitable equations for the properties of matter, such a solution consists of a specific semi-Riemannian manifold (usually defined by giving the metric in specific coordinates), and specific matter fields defined on that manifold. Matter and geometry must satisfy Einstein's equations, so in particular, the matter's energy–momentum tensor must be divergence-free. The matter must, of course, also satisfy whatever additional equations were imposed on its properties. In short, such a solution is a model universe that satisfies the laws of general relativity, and possibly additional laws governing whatever matter might be present. Einstein's equations are nonlinear partial differential equations and, as such, difficult to solve exactly. Nevertheless, a number of exact solutions are known, although only a few have direct physical applications. 
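As an illustration of what such an exact solution looks like, the simplest of them (the Schwarzschild solution, discussed in the next paragraph) can be written in Schwarzschild coordinates as the line element

$$ds^2 = -\left(1 - \frac{2GM}{c^2 r}\right)c^2\,dt^2 + \left(1 - \frac{2GM}{c^2 r}\right)^{-1}dr^2 + r^2\left(d\theta^2 + \sin^2\theta\,d\varphi^2\right),$$

where M is the mass of the central body; for M → 0 it reduces to the flat Minkowski metric of special relativity.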
The best-known exact solutions, and also those most interesting from a physics point of view, are the Schwarzschild solution, the Reissner–Nordström solution and the Kerr metric, each corresponding to a certain type of black hole in an otherwise empty universe, and the Friedmann–Lemaître–Robertson–Walker and de Sitter universes, each describing an expanding cosmos. Exact solutions of great theoretical interest include the Gödel universe (which opens up the intriguing possibility of time travel in curved spacetimes), the Taub–NUT solution (a model universe that is homogeneous, but anisotropic), and anti-de Sitter space (which has recently come to prominence in the context of what is called the Maldacena conjecture). Given the difficulty of finding exact solutions, Einstein's field equations are also solved frequently by numerical integration on a computer, or by considering small perturbations of exact solutions. In the field of numerical relativity, powerful computers are employed to simulate the geometry of spacetime and to solve Einstein's equations for interesting situations such as two colliding black holes. In principle, such methods may be applied to any system, given sufficient computer resources, and may address fundamental questions such as naked singularities. Approximate solutions may also be found by perturbation theories such as linearized gravity and its generalization, the post-Newtonian expansion, both of which were developed by Einstein. The latter provides a systematic approach to solving for the geometry of a spacetime that contains a distribution of matter that moves slowly compared with the speed of light. The expansion involves a series of terms; the first terms represent Newtonian gravity, whereas the later terms represent ever smaller corrections to Newton's theory due to general relativity. An extension of this expansion is the parametrized post-Newtonian (PPN) formalism, which allows quantitative comparisons between the predictions of general relativity and alternative theories. Consequences of Einstein's theory General relativity has a number of physical consequences. Some follow directly from the theory's axioms, whereas others have become clear only in the course of many years of research that followed Einstein's initial publication. Gravitational time dilation and frequency shift Assuming that the equivalence principle holds, gravity influences the passage of time. Light sent down into a gravity well is blueshifted, whereas light sent in the opposite direction (i.e., climbing out of the gravity well) is redshifted; collectively, these two effects are known as the gravitational frequency shift. More generally, processes close to a massive body run more slowly when compared with processes taking place farther away; this effect is known as gravitational time dilation. Gravitational redshift has been measured in the laboratory and using astronomical observations. Gravitational time dilation in the Earth's gravitational field has been measured numerous times using atomic clocks, while ongoing validation is provided as a side effect of the operation of the Global Positioning System (GPS). Tests in stronger gravitational fields are provided by the observation of binary pulsars. All results are in agreement with general relativity. However, at the current level of accuracy, these observations cannot distinguish between general relativity and other theories in which the equivalence principle is valid. 
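To attach numbers to the GPS remark above, the sketch below estimates the two competing clock-rate effects for a satellite in a GPS-like orbit. The orbital radius is a rounded value, and Earth's rotation and the geoid are ignored, so the figures are only approximate: the gravitational effect makes the satellite clock run fast by roughly 46 μs per day, its orbital speed slows it by roughly 7 μs per day, for a net offset of about +38 μs per day, consistent with the corrections routinely applied in the system.

```cpp
// Approximate gravitational and velocity clock-rate offsets for a GPS satellite.
// Rounded orbital values; Earth's rotation and the geoid are neglected.
#include <cstdio>

int main() {
    const double G  = 6.674e-11;   // gravitational constant [m^3 kg^-1 s^-2]
    const double c2 = 8.988e16;    // c^2 [m^2/s^2]
    const double M  = 5.972e24;    // Earth mass [kg]
    const double rE = 6.371e6;     // Earth radius [m]
    const double rS = 2.656e7;     // GPS orbital radius [m]

    const double gm = G * M;
    const double gravitational = gm / c2 * (1.0 / rE - 1.0 / rS); // higher clocks run fast
    const double velocity      = 0.5 * gm / rS / c2;              // dilation from orbital speed

    const double day = 86400.0;
    std::printf("gravitational blueshift : %+.1f us/day\n",  gravitational * day * 1e6);
    std::printf("velocity time dilation  : %+.1f us/day\n", -velocity * day * 1e6);
    std::printf("net offset              : %+.1f us/day\n", (gravitational - velocity) * day * 1e6);
    return 0;
}
```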
Light deflection and gravitational time delay
General relativity predicts that the path of light will follow the curvature of spacetime as it passes near a star. This effect was initially confirmed by observing the light of stars or distant quasars being deflected as it passes the Sun. This and related predictions follow from the fact that light follows what is called a light-like or null geodesic—a generalization of the straight lines along which light travels in classical physics. Such geodesics are the generalization of the invariance of lightspeed in special relativity. As one examines suitable model spacetimes (either the exterior Schwarzschild solution or, for more than a single mass, the post-Newtonian expansion), several effects of gravity on light propagation emerge. Although the bending of light can also be derived by extending the universality of free fall to light, the angle of deflection resulting from such calculations is only half the value given by general relativity. Closely related to light deflection is the Shapiro time delay, the phenomenon that light signals take longer to move through a gravitational field than they would in the absence of that field. There have been numerous successful tests of this prediction. In the parameterized post-Newtonian formalism (PPN), measurements of both the deflection of light and the gravitational time delay determine a parameter called γ, which encodes the influence of gravity on the geometry of space.

Gravitational waves
Predicted in 1916 by Einstein, gravitational waves are ripples in the metric of spacetime that propagate at the speed of light. They are one of several analogies between weak-field gravity and electromagnetism: they are the gravitational counterpart of electromagnetic waves. On 11 February 2016, the Advanced LIGO team announced that they had directly detected gravitational waves from a pair of merging black holes. The simplest type of such a wave can be visualized by its action on a ring of freely floating particles: a sine wave propagating through such a ring towards the reader distorts it in a characteristic, rhythmic fashion. Since Einstein's equations are non-linear, arbitrarily strong gravitational waves do not obey linear superposition, making their description difficult. However, linear approximations of gravitational waves are sufficiently accurate to describe the exceedingly weak waves that are expected to arrive here on Earth from far-off cosmic events, which typically result in relative distances increasing and decreasing by $10^{-21}$ or less. Data analysis methods routinely make use of the fact that these linearized waves can be Fourier decomposed. Some exact solutions describe gravitational waves without any approximation, e.g., a wave train traveling through empty space or Gowdy universes, varieties of an expanding cosmos filled with gravitational waves. But for gravitational waves produced in astrophysically relevant situations, such as the merger of two black holes, numerical methods are presently the only way to construct appropriate models.

Orbital effects and the relativity of direction
General relativity differs from classical mechanics in a number of predictions concerning orbiting bodies. It predicts an overall rotation (precession) of planetary orbits, as well as orbital decay caused by the emission of gravitational waves and effects related to the relativity of direction.
Precession of apsides
In general relativity, the apsides of any orbit (the point of the orbiting body's closest approach to the system's center of mass) will precess; the orbit is not an ellipse, but akin to an ellipse that rotates on its focus, resulting in a rose curve-like shape. Einstein first derived this result by using an approximate metric representing the Newtonian limit and treating the orbiting body as a test particle. For him, the fact that his theory gave a straightforward explanation of Mercury's anomalous perihelion shift, discovered earlier by Urbain Le Verrier in 1859, was important evidence that he had at last identified the correct form of the gravitational field equations. The effect can also be derived by using either the exact Schwarzschild metric (describing spacetime around a spherical mass) or the much more general post-Newtonian formalism. It is due to the influence of gravity on the geometry of space and to the contribution of self-energy to a body's gravity (encoded in the nonlinearity of Einstein's equations). Relativistic precession has been observed for all planets that allow for accurate precession measurements (Mercury, Venus, and Earth), as well as in binary pulsar systems, where it is larger by five orders of magnitude. In general relativity the perihelion shift $\Delta\phi$, expressed in radians per revolution, is approximately given by

$$\Delta\phi = \frac{24\pi^3 a^2}{T^2 c^2 (1 - e^2)},$$

where $a$ is the semi-major axis, $T$ is the orbital period, $c$ is the speed of light in vacuum, and $e$ is the orbital eccentricity. For Mercury, this formula reproduces the observed anomalous advance of about 43 arcseconds per century.

Orbital decay
According to general relativity, a binary system will emit gravitational waves, thereby losing energy. Due to this loss, the distance between the two orbiting bodies decreases, and so does their orbital period. Within the Solar System or for ordinary double stars, the effect is too small to be observable. This is not the case for a close binary pulsar, a system of two orbiting neutron stars, one of which is a pulsar: from the pulsar, observers on Earth receive a regular series of radio pulses that can serve as a highly accurate clock, which allows precise measurements of the orbital period. Because neutron stars are immensely compact, significant amounts of energy are emitted in the form of gravitational radiation. The first observation of a decrease in orbital period due to the emission of gravitational waves was made by Hulse and Taylor, using the binary pulsar PSR B1913+16 they had discovered in 1974. This was the first detection of gravitational waves, albeit indirect, for which they were awarded the 1993 Nobel Prize in Physics. Since then, several other binary pulsars have been found, in particular the double pulsar PSR J0737−3039, where both stars are pulsars and which was last reported to also be in agreement with general relativity in 2021 after 16 years of observations.

Geodetic precession and frame-dragging
Several relativistic effects are directly related to the relativity of direction. One is geodetic precession: the axis direction of a gyroscope in free fall in curved spacetime will change when compared, for instance, with the direction of light received from distant stars—even though such a gyroscope represents the way of keeping a direction as stable as possible ("parallel transport"). For the Moon–Earth system, this effect has been measured with the help of lunar laser ranging. More recently, it has been measured for test masses aboard the satellite Gravity Probe B to a precision of better than 0.3%. Near a rotating mass, there are gravitomagnetic or frame-dragging effects.
A distant observer will determine that objects close to the mass get "dragged around". This is most extreme for rotating black holes where, for any object entering a zone known as the ergosphere, rotation is inevitable. Such effects can again be tested through their influence on the orientation of gyroscopes in free fall. Somewhat controversial tests have been performed using the LAGEOS satellites, confirming the relativistic prediction. Also the Mars Global Surveyor probe around Mars has been used. Astrophysical applications Gravitational lensing The deflection of light by gravity is responsible for a new class of astronomical phenomena. If a massive object is situated between the astronomer and a distant target object with appropriate mass and relative distances, the astronomer will see multiple distorted images of the target. Such effects are known as gravitational lensing. Depending on the configuration, scale, and mass distribution, there can be two or more images, a bright ring known as an Einstein ring, or partial rings called arcs. The earliest example was discovered in 1979; since then, more than a hundred gravitational lenses have been observed. Even if the multiple images are too close to each other to be resolved, the effect can still be measured, e.g., as an overall brightening of the target object; a number of such "microlensing events" have been observed. Gravitational lensing has developed into a tool of observational astronomy. It is used to detect the presence and distribution of dark matter, provide a "natural telescope" for observing distant galaxies, and to obtain an independent estimate of the Hubble constant. Statistical evaluations of lensing data provide valuable insight into the structural evolution of galaxies. Gravitational-wave astronomy Observations of binary pulsars provide strong indirect evidence for the existence of gravitational waves (see Orbital decay, above). Detection of these waves is a major goal of current relativity-related research. Several land-based gravitational wave detectors are currently in operation, most notably the interferometric detectors GEO 600, LIGO (two detectors), TAMA 300 and VIRGO. Various pulsar timing arrays are using millisecond pulsars to detect gravitational waves in the 10−9 to 10−6 hertz frequency range, which originate from binary supermassive blackholes. A European space-based detector, eLISA / NGO, is currently under development, with a precursor mission (LISA Pathfinder) having launched in December 2015. Observations of gravitational waves promise to complement observations in the electromagnetic spectrum. They are expected to yield information about black holes and other dense objects such as neutron stars and white dwarfs, about certain kinds of supernova implosions, and about processes in the very early universe, including the signature of certain types of hypothetical cosmic string. In February 2016, the Advanced LIGO team announced that they had detected gravitational waves from a black hole merger. Black holes and other compact objects Whenever the ratio of an object's mass to its radius becomes sufficiently large, general relativity predicts the formation of a black hole, a region of space from which nothing, not even light, can escape. In the currently accepted models of stellar evolution, neutron stars of around 1.4 solar masses, and stellar black holes with a few to a few dozen solar masses, are thought to be the final state for the evolution of massive stars. 
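For a rough sense of when "the ratio of an object's mass to its radius becomes sufficiently large", the relevant yardstick is the Schwarzschild radius r_s = 2GM/c². The sketch below evaluates it for the Sun, a ten-solar-mass stellar black hole, and a supermassive black hole of four million solar masses (illustrative round values), giving roughly 3 km, 30 km, and 10⁷ km respectively.

```cpp
// Schwarzschild radius r_s = 2 G M / c^2 for a few representative masses.
#include <cstdio>

int main() {
    const double G    = 6.674e-11;   // gravitational constant [m^3 kg^-1 s^-2]
    const double c    = 2.998e8;     // speed of light [m/s]
    const double Msun = 1.989e30;    // solar mass [kg]

    const double masses[] = {1.0, 10.0, 4.0e6};   // in solar masses
    const char*  labels[] = {"Sun", "10 solar-mass black hole",
                             "4 million solar-mass black hole"};

    for (int i = 0; i < 3; ++i) {
        const double rs = 2.0 * G * masses[i] * Msun / (c * c);
        std::printf("%-32s r_s = %.3e m\n", labels[i], rs);
    }
    return 0;
}
```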
Usually a galaxy has one supermassive black hole with a few million to a few billion solar masses in its center, and its presence is thought to have played an important role in the formation of the galaxy and larger cosmic structures. Astronomically, the most important property of compact objects is that they provide a supremely efficient mechanism for converting gravitational energy into electromagnetic radiation. Accretion, the falling of dust or gaseous matter onto stellar or supermassive black holes, is thought to be responsible for some spectacularly luminous astronomical objects, notably diverse kinds of active galactic nuclei on galactic scales and stellar-size objects such as microquasars. In particular, accretion can lead to relativistic jets, focused beams of highly energetic particles that are being flung into space at almost light speed. General relativity plays a central role in modelling all these phenomena, and observations provide strong evidence for the existence of black holes with the properties predicted by the theory. Black holes are also sought-after targets in the search for gravitational waves (cf. Gravitational waves, above). Merging black hole binaries should lead to some of the strongest gravitational wave signals reaching detectors here on Earth, and the phase directly before the merger ("chirp") could be used as a "standard candle" to deduce the distance to the merger events–and hence serve as a probe of cosmic expansion at large distances. The gravitational waves produced as a stellar black hole plunges into a supermassive one should provide direct information about the supermassive black hole's geometry. Cosmology The current models of cosmology are based on Einstein's field equations, which include the cosmological constant since it has important influence on the large-scale dynamics of the cosmos, where is the spacetime metric. Isotropic and homogeneous solutions of these enhanced equations, the Friedmann–Lemaître–Robertson–Walker solutions, allow physicists to model a universe that has evolved over the past 14 billion years from a hot, early Big Bang phase. Once a small number of parameters (for example the universe's mean matter density) have been fixed by astronomical observation, further observational data can be used to put the models to the test. Predictions, all successful, include the initial abundance of chemical elements formed in a period of primordial nucleosynthesis, the large-scale structure of the universe, and the existence and properties of a "thermal echo" from the early cosmos, the cosmic background radiation. Astronomical observations of the cosmological expansion rate allow the total amount of matter in the universe to be estimated, although the nature of that matter remains mysterious in part. About 90% of all matter appears to be dark matter, which has mass (or, equivalently, gravitational influence), but does not interact electromagnetically and, hence, cannot be observed directly. There is no generally accepted description of this new kind of matter, within the framework of known particle physics or otherwise. Observational evidence from redshift surveys of distant supernovae and measurements of the cosmic background radiation also show that the evolution of our universe is significantly influenced by a cosmological constant resulting in an acceleration of cosmic expansion or, equivalently, by a form of energy with an unusual equation of state, known as dark energy, the nature of which remains unclear. 
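The mean matter density mentioned above is conventionally compared with the critical density of the Friedmann models, ρ_c = 3H²/(8πG). The sketch below evaluates it for an assumed Hubble constant of 70 km/s/Mpc (a round, illustrative value); the result, of order 10⁻²⁶ kg/m³, corresponds to only a handful of hydrogen atoms per cubic metre.

```cpp
// Critical density rho_c = 3 H^2 / (8 pi G) of an FLRW universe,
// for an assumed Hubble constant of 70 km/s/Mpc.
#include <cstdio>

int main() {
    const double pi  = 3.14159265358979;
    const double G   = 6.674e-11;     // gravitational constant [m^3 kg^-1 s^-2]
    const double Mpc = 3.086e22;      // metres per megaparsec
    const double H0  = 70.0e3 / Mpc;  // Hubble constant [1/s]
    const double mH  = 1.67e-27;      // hydrogen-atom mass [kg]

    const double rhoCrit = 3.0 * H0 * H0 / (8.0 * pi * G);

    std::printf("H0       = %.3e 1/s\n", H0);
    std::printf("rho_crit = %.2e kg/m^3\n", rhoCrit);
    std::printf("         ~ %.1f hydrogen atoms per cubic metre\n", rhoCrit / mH);
    return 0;
}
```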
An inflationary phase, an additional phase of strongly accelerated expansion at cosmic times of around 10−33 seconds, was hypothesized in 1980 to account for several puzzling observations that were unexplained by classical cosmological models, such as the nearly perfect homogeneity of the cosmic background radiation. Recent measurements of the cosmic background radiation have resulted in the first evidence for this scenario. However, there is a bewildering variety of possible inflationary scenarios, which cannot be restricted by current observations. An even larger question is the physics of the earliest universe, prior to the inflationary phase and close to where the classical models predict the big bang singularity. An authoritative answer would require a complete theory of quantum gravity, which has not yet been developed (cf. the section on quantum gravity, below). Exotic solutions: time travel, warp drives Kurt Gödel showed that solutions to Einstein's equations exist that contain closed timelike curves (CTCs), which allow for loops in time. The solutions require extreme physical conditions unlikely ever to occur in practice, and it remains an open question whether further laws of physics will eliminate them completely. Since then, other—similarly impractical—GR solutions containing CTCs have been found, such as the Tipler cylinder and traversable wormholes. Stephen Hawking introduced chronology protection conjecture, which is an assumption beyond those of standard general relativity to prevent time travel. Some exact solutions in general relativity such as Alcubierre drive present examples of warp drive but these solutions requires exotic matter distribution, and generally suffers from semiclassical instability. Advanced concepts Asymptotic symmetries The spacetime symmetry group for special relativity is the Poincaré group, which is a ten-dimensional group of three Lorentz boosts, three rotations, and four spacetime translations. It is logical to ask what symmetries if any might apply in General Relativity. A tractable case might be to consider the symmetries of spacetime as seen by observers located far away from all sources of the gravitational field. The naive expectation for asymptotically flat spacetime symmetries might be simply to extend and reproduce the symmetries of flat spacetime of special relativity, viz., the Poincaré group. In 1962 Hermann Bondi, M. G. van der Burg, A. W. Metzner and Rainer K. Sachs addressed this asymptotic symmetry problem in order to investigate the flow of energy at infinity due to propagating gravitational waves. Their first step was to decide on some physically sensible boundary conditions to place on the gravitational field at light-like infinity to characterize what it means to say a metric is asymptotically flat, making no a priori assumptions about the nature of the asymptotic symmetry group—not even the assumption that such a group exists. Then after designing what they considered to be the most sensible boundary conditions, they investigated the nature of the resulting asymptotic symmetry transformations that leave invariant the form of the boundary conditions appropriate for asymptotically flat gravitational fields. What they found was that the asymptotic symmetry transformations actually do form a group and the structure of this group does not depend on the particular gravitational field that happens to be present. 
This means that, as expected, one can separate the kinematics of spacetime from the dynamics of the gravitational field at least at spatial infinity. The puzzling surprise in 1962 was their discovery of a rich infinite-dimensional group (the so-called BMS group) as the asymptotic symmetry group, instead of the finite-dimensional Poincaré group, which is a subgroup of the BMS group. Not only are the Lorentz transformations asymptotic symmetry transformations, there are also additional transformations that are not Lorentz transformations but are asymptotic symmetry transformations. In fact, they found an additional infinity of transformation generators known as supertranslations. This implies the conclusion that General Relativity (GR) does not reduce to special relativity in the case of weak fields at long distances. It turns out that the BMS symmetry, suitably modified, could be seen as a restatement of the universal soft graviton theorem in quantum field theory (QFT), which relates universal infrared (soft) QFT with GR asymptotic spacetime symmetries. Causal structure and global geometry In general relativity, no material body can catch up with or overtake a light pulse. No influence from an event A can reach any other location X before light sent out at A to X. In consequence, an exploration of all light worldlines (null geodesics) yields key information about the spacetime's causal structure. This structure can be displayed using Penrose–Carter diagrams in which infinitely large regions of space and infinite time intervals are shrunk ("compactified") so as to fit onto a finite map, while light still travels along diagonals as in standard spacetime diagrams. Aware of the importance of causal structure, Roger Penrose and others developed what is known as global geometry. In global geometry, the object of study is not one particular solution (or family of solutions) to Einstein's equations. Rather, relations that hold true for all geodesics, such as the Raychaudhuri equation, and additional non-specific assumptions about the nature of matter (usually in the form of energy conditions) are used to derive general results. Horizons Using global geometry, some spacetimes can be shown to contain boundaries called horizons, which demarcate one region from the rest of spacetime. The best-known examples are black holes: if mass is compressed into a sufficiently compact region of space (as specified in the hoop conjecture, the relevant length scale is the Schwarzschild radius), no light from inside can escape to the outside. Since no object can overtake a light pulse, all interior matter is imprisoned as well. Passage from the exterior to the interior is still possible, showing that the boundary, the black hole's horizon, is not a physical barrier. Early studies of black holes relied on explicit solutions of Einstein's equations, notably the spherically symmetric Schwarzschild solution (used to describe a static black hole) and the axisymmetric Kerr solution (used to describe a rotating, stationary black hole, and introducing interesting features such as the ergosphere). Using global geometry, later studies have revealed more general properties of black holes. With time they become rather simple objects characterized by eleven parameters specifying: electric charge, mass–energy, linear momentum, angular momentum, and location at a specified time. This is stated by the black hole uniqueness theorem: "black holes have no hair", that is, no distinguishing marks like the hairstyles of humans. 
Irrespective of the complexity of a gravitating object collapsing to form a black hole, the object that results (having emitted gravitational waves) is very simple. Even more remarkably, there is a general set of laws known as black hole mechanics, which is analogous to the laws of thermodynamics. For instance, by the second law of black hole mechanics, the area of the event horizon of a general black hole will never decrease with time, analogous to the entropy of a thermodynamic system. This limits the energy that can be extracted by classical means from a rotating black hole (e.g. by the Penrose process). There is strong evidence that the laws of black hole mechanics are, in fact, a subset of the laws of thermodynamics, and that the black hole area is proportional to its entropy. This leads to a modification of the original laws of black hole mechanics: for instance, as the second law of black hole mechanics becomes part of the second law of thermodynamics, it is possible for the black hole area to decrease as long as other processes ensure that entropy increases overall. As thermodynamical objects with nonzero temperature, black holes should emit thermal radiation. Semiclassical calculations indicate that indeed they do, with the surface gravity playing the role of temperature in Planck's law. This radiation is known as Hawking radiation (cf. the quantum theory section, below). There are many other types of horizons. In an expanding universe, an observer may find that some regions of the past cannot be observed ("particle horizon"), and some regions of the future cannot be influenced (event horizon). Even in flat Minkowski space, when described by an accelerated observer (Rindler space), there will be horizons associated with a semiclassical radiation known as Unruh radiation. Singularities Another general feature of general relativity is the appearance of spacetime boundaries known as singularities. Spacetime can be explored by following up on timelike and lightlike geodesics—all possible ways that light and particles in free fall can travel. But some solutions of Einstein's equations have "ragged edges"—regions known as spacetime singularities, where the paths of light and falling particles come to an abrupt end, and geometry becomes ill-defined. In the more interesting cases, these are "curvature singularities", where geometrical quantities characterizing spacetime curvature, such as the Ricci scalar, take on infinite values. Well-known examples of spacetimes with future singularities—where worldlines end—are the Schwarzschild solution, which describes a singularity inside an eternal static black hole, or the Kerr solution with its ring-shaped singularity inside an eternal rotating black hole. The Friedmann–Lemaître–Robertson–Walker solutions and other spacetimes describing universes have past singularities on which worldlines begin, namely Big Bang singularities, and some have future singularities (Big Crunch) as well. Given that these examples are all highly symmetric—and thus simplified—it is tempting to conclude that the occurrence of singularities is an artifact of idealization. The famous singularity theorems, proved using the methods of global geometry, say otherwise: singularities are a generic feature of general relativity, and unavoidable once the collapse of an object with realistic matter properties has proceeded beyond a certain stage and also at the beginning of a wide class of expanding universes. 
However, the theorems say little about the properties of singularities, and much of current research is devoted to characterizing these entities' generic structure (hypothesized e.g. by the BKL conjecture). The cosmic censorship hypothesis states that all realistic future singularities (no perfect symmetries, matter with realistic properties) are safely hidden away behind a horizon, and thus invisible to all distant observers. While no formal proof yet exists, numerical simulations offer supporting evidence of its validity. Evolution equations Each solution of Einstein's equation encompasses the whole history of a universe—it is not just some snapshot of how things are, but a whole, possibly matter-filled, spacetime. It describes the state of matter and geometry everywhere and at every moment in that particular universe. Due to its general covariance, Einstein's theory is not sufficient by itself to determine the time evolution of the metric tensor. It must be combined with a coordinate condition, which is analogous to gauge fixing in other field theories. To understand Einstein's equations as partial differential equations, it is helpful to formulate them in a way that describes the evolution of the universe over time. This is done in "3+1" formulations, where spacetime is split into three space dimensions and one time dimension. The best-known example is the ADM formalism. These decompositions show that the spacetime evolution equations of general relativity are well-behaved: solutions always exist, and are uniquely defined, once suitable initial conditions have been specified. Such formulations of Einstein's field equations are the basis of numerical relativity. Global and quasi-local quantities The notion of evolution equations is intimately tied in with another aspect of general relativistic physics. In Einstein's theory, it turns out to be impossible to find a general definition for a seemingly simple property such as a system's total mass (or energy). The main reason is that the gravitational field—like any physical field—must be ascribed a certain energy, but that it proves to be fundamentally impossible to localize that energy. Nevertheless, there are possibilities to define a system's total mass, either using a hypothetical "infinitely distant observer" (ADM mass) or suitable symmetries (Komar mass). If one excludes from the system's total mass the energy being carried away to infinity by gravitational waves, the result is the Bondi mass at null infinity. Just as in classical physics, it can be shown that these masses are positive. Corresponding global definitions exist for momentum and angular momentum. There have also been a number of attempts to define quasi-local quantities, such as the mass of an isolated system formulated using only quantities defined within a finite region of space containing that system. The hope is to obtain a quantity useful for general statements about isolated systems, such as a more precise formulation of the hoop conjecture. Relationship with quantum theory If general relativity were considered to be one of the two pillars of modern physics, then quantum theory, the basis of understanding matter from elementary particles to solid-state physics, would be the other. However, how to reconcile quantum theory with general relativity is still an open question. 
Quantum field theory in curved spacetime Ordinary quantum field theories, which form the basis of modern elementary particle physics, are defined in flat Minkowski space, which is an excellent approximation when it comes to describing the behavior of microscopic particles in weak gravitational fields like those found on Earth. In order to describe situations in which gravity is strong enough to influence (quantum) matter, yet not strong enough to require quantization itself, physicists have formulated quantum field theories in curved spacetime. These theories rely on general relativity to describe a curved background spacetime, and define a generalized quantum field theory to describe the behavior of quantum matter within that spacetime. Using this formalism, it can be shown that black holes emit a blackbody spectrum of particles known as Hawking radiation leading to the possibility that they evaporate over time. As briefly mentioned above, this radiation plays an important role for the thermodynamics of black holes. Quantum gravity The demand for consistency between a quantum description of matter and a geometric description of spacetime, as well as the appearance of singularities (where curvature length scales become microscopic), indicate the need for a full theory of quantum gravity: for an adequate description of the interior of black holes, and of the very early universe, a theory is required in which gravity and the associated geometry of spacetime are described in the language of quantum physics. Despite major efforts, no complete and consistent theory of quantum gravity is currently known, even though a number of promising candidates exist. Attempts to generalize ordinary quantum field theories, used in elementary particle physics to describe fundamental interactions, so as to include gravity have led to serious problems. Some have argued that at low energies, this approach proves successful, in that it results in an acceptable effective (quantum) field theory of gravity. At very high energies, however, the perturbative results are badly divergent and lead to models devoid of predictive power ("perturbative non-renormalizability"). One attempt to overcome these limitations is string theory, a quantum theory not of point particles, but of minute one-dimensional extended objects. The theory promises to be a unified description of all particles and interactions, including gravity; the price to pay is unusual features such as six extra dimensions of space in addition to the usual three. In what is called the second superstring revolution, it was conjectured that both string theory and a unification of general relativity and supersymmetry known as supergravity form part of a hypothesized eleven-dimensional model known as M-theory, which would constitute a uniquely defined and consistent theory of quantum gravity. Another approach starts with the canonical quantization procedures of quantum theory. Using the initial-value-formulation of general relativity (cf. evolution equations above), the result is the Wheeler–deWitt equation (an analogue of the Schrödinger equation) which, regrettably, turns out to be ill-defined without a proper ultraviolet (lattice) cutoff. However, with the introduction of what are now known as Ashtekar variables, this leads to a promising model known as loop quantum gravity. Space is represented by a web-like structure called a spin network, evolving over time in discrete steps. 
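As a quantitative aside on the Hawking radiation mentioned above: the semiclassical calculation assigns a black hole of mass M the temperature T = ħc³/(8πGMk_B). The sketch below evaluates this for one, ten, and four million solar masses (illustrative values); for a solar-mass black hole the result is about 6×10⁻⁸ K, far colder than the cosmic microwave background.

```cpp
// Hawking temperature T = hbar c^3 / (8 pi G M k_B) for a few black-hole masses.
#include <cmath>
#include <cstdio>
#include <initializer_list>

int main() {
    const double pi   = 3.14159265358979;
    const double hbar = 1.055e-34;    // reduced Planck constant [J s]
    const double c    = 2.998e8;      // speed of light [m/s]
    const double G    = 6.674e-11;    // gravitational constant [m^3 kg^-1 s^-2]
    const double kB   = 1.381e-23;    // Boltzmann constant [J/K]
    const double Msun = 1.989e30;     // solar mass [kg]

    for (double solarMasses : {1.0, 10.0, 4.0e6}) {
        const double M = solarMasses * Msun;
        const double T = hbar * std::pow(c, 3) / (8.0 * pi * G * M * kB);
        std::printf("M = %10.2e solar masses  ->  T = %.2e K\n", solarMasses, T);
    }
    return 0;
}
```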
Depending on which features of general relativity and quantum theory are accepted unchanged, and on what level changes are introduced, there are numerous other attempts to arrive at a viable theory of quantum gravity, some examples being the lattice theory of gravity based on the Feynman Path Integral approach and Regge calculus, dynamical triangulations, causal sets, twistor models or the path integral based models of quantum cosmology. All candidate theories still have major formal and conceptual problems to overcome. They also face the common problem that, as yet, there is no way to put quantum gravity predictions to experimental tests (and thus to decide between the candidates where their predictions vary), although there is hope for this to change as future data from cosmological observations and particle physics experiments becomes available.

Current status
General relativity has emerged as a highly successful model of gravitation and cosmology, which has so far passed many unambiguous observational and experimental tests. However, there are strong indications that the theory is incomplete. The problem of quantum gravity and the question of the reality of spacetime singularities remain open. Observational data that is taken as evidence for dark energy and dark matter could indicate the need for new physics. Even taken as is, general relativity is rich with possibilities for further exploration. Mathematical relativists seek to understand the nature of singularities and the fundamental properties of Einstein's equations, while numerical relativists run increasingly powerful computer simulations (such as those describing merging black holes). In February 2016, it was announced that the existence of gravitational waves was directly detected by the Advanced LIGO team on 14 September 2015. A century after its introduction, general relativity remains a highly active area of research.

External links
Einstein Online – Articles on a variety of aspects of relativistic physics for a general audience; hosted by the Max Planck Institute for Gravitational Physics
GEO600 home page, the official website of the GEO600 project.
LIGO Laboratory
NCSA Spacetime Wrinkles – produced by the numerical relativity group at the NCSA, with an elementary introduction to general relativity
Series of lectures on General Relativity given in 2006 at the Institut Henri Poincaré (introductory/advanced).
General Relativity Tutorials by John Baez.
The Feynman Lectures on Physics Vol. II Ch. 42: Curved Space

Concepts in astronomy Albert Einstein 1915 in science
ROOT
ROOT is an object-oriented computer program and library developed by CERN. It was originally designed for particle physics data analysis and contains several features specific to the field, but it is also used in other applications such as astronomy and data mining. The latest minor release is 6.32, as of 2024-05-26.

Description
CERN maintained the CERN Program Library written in FORTRAN for many years. Its development and maintenance were discontinued in 2003 in favour of ROOT, which is written in the C++ programming language. ROOT development was initiated by René Brun and Fons Rademakers in 1994. Some parts are published under the GNU Lesser General Public License (LGPL) and others are based on GNU General Public License (GPL) software, and are thus also published under the terms of the GPL. It provides platform independent access to a computer's graphics subsystem and operating system using abstract layers. Parts of the abstract platform are: a graphical user interface and a GUI builder, container classes, reflection, a C++ script and command line interpreter (CINT in version 5, cling in version 6), and object serialization and persistence.

The packages provided by ROOT include those for:
histogramming and graphing to view and analyze distributions and functions,
curve fitting (regression analysis) and minimization of functionals,
statistics tools used for data analysis,
matrix algebra,
four-vector computations, as used in high energy physics,
standard mathematical functions,
multivariate data analysis, e.g. using neural networks,
image manipulation, used, for instance, to analyze astronomical pictures,
access to distributed data (in the context of the Grid),
distributed computing, to parallelize data analyses,
persistence and serialization of objects, which can cope with changes in class definitions of persistent data,
access to databases,
3D visualizations (geometry),
creating files in various graphics formats, like PDF, PostScript, PNG, SVG, LaTeX, etc.,
interfacing Python code in both directions,
interfacing Monte Carlo event generators.

A key feature of ROOT is a data container called a tree, with its substructures branches and leaves. A tree can be seen as a sliding window onto the raw data, as stored in a file. Data from the next entry in the file can be retrieved by advancing the index in the tree. This avoids memory allocation problems associated with object creation, and allows the tree to act as a lightweight container while handling buffering invisibly. ROOT is designed for high computing efficiency, as it is required to process data from the Large Hadron Collider's experiments, estimated at several petabytes per year. ROOT is mainly used in data analysis and data acquisition in particle physics (high energy physics) experiments, and most experimental plots and results in those subfields are obtained using ROOT. The inclusion of a C++ interpreter (CINT until version 5.34, Cling from version 6.00) makes this package very versatile, as it can be used in interactive, scripted and compiled modes in a manner similar to commercial products like MATLAB. On July 4, 2012, the ATLAS and CMS experiments at the LHC presented the status of the Standard Model Higgs search; all data plotting presented that day used ROOT.

Applications
Several particle physics collaborations have written software based on ROOT, often in preference to more generic solutions (e.g. using ROOT containers instead of the STL).
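Before listing specific experiments, here is a minimal sketch of the kind of ROOT macro such analyses build on. All file, histogram, tree, and branch names are invented for illustration; the macro books a histogram and a tree, fills them with pseudo-random data, performs a Gaussian fit, and writes everything to a ROOT file. It can be run with "root -l -q hist_example.C".

```cpp
// hist_example.C -- minimal ROOT macro (illustrative names only).
#include "TFile.h"
#include "TH1F.h"
#include "TTree.h"
#include "TRandom3.h"

void hist_example() {
    TFile f("example.root", "RECREATE");          // output file

    // A histogram of Gaussian pseudo-data, fitted below with ROOT's built-in "gaus".
    TH1F h("h_mass", "toy mass distribution;m [GeV];entries", 100, 0.0, 10.0);

    // A tree with one branch, the column-wise container described above.
    double m = 0.0;
    TTree tree("events", "toy events");
    tree.Branch("m", &m, "m/D");

    TRandom3 rng(12345);
    for (int i = 0; i < 10000; ++i) {
        m = rng.Gaus(5.0, 0.8);                   // pseudo-random "mass"
        h.Fill(m);
        tree.Fill();
    }

    h.Fit("gaus", "Q");                           // quiet Gaussian fit
    h.Write();
    tree.Write();
    f.Close();
}
```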
Some of the running particle physics experiments using software based on ROOT:
ALICE
ATLAS
BaBar experiment
Belle Experiment (an electron positron collider at KEK (Japan))
Belle II experiment (successor of the Belle experiment)
BES III
CB-ELSA/TAPS
CMS
COMPASS experiment (Common Muon and Proton Apparatus for Structure and Spectroscopy)
CUORE (Cryogenic Underground Observatory for Rare Events)
D0 experiment
GlueX Experiment
GRAPES-3 (Gamma Ray Astronomy PeV EnergieS)
H1 (particle detector) at HERA collider at DESY, Hamburg
LHCb
MINERνA (Main Injector Experiment for ν-A)
MINOS (Main injector neutrino oscillation search)
NA61 experiment (SPS Heavy Ion and Neutrino Experiment)
NOνA
OPERA experiment
PHENIX detector
PHOBOS experiment at Relativistic Heavy Ion Collider
SNO+
STAR detector (Solenoidal Tracker at RHIC)
T2K experiment

Future particle physics experiments currently developing software based on ROOT:
Mu2e
Compressed Baryonic Matter experiment (CBM)
PANDA experiment (antiProton Annihilation at Darmstadt (PANDA))
Deep Underground Neutrino Experiment (DUNE)
Hyper-Kamiokande (HK (Japan))

Astrophysics (X-ray and gamma-ray astronomy, astroparticle physics) projects using ROOT:
AGILE
Alpha Magnetic Spectrometer (AMS)
Antarctic Impulse Transient Antenna (ANITA)
ANTARES neutrino detector
CRESST (Dark Matter Search)
DMTPC
DEAP-3600/Cryogenic Low-Energy Astrophysics with Neon (CLEAN)
Fermi Gamma-ray Space Telescope
ICECUBE
HAWC
High Energy Stereoscopic System (H.E.S.S.)
Hitomi (ASTRO-H)
MAGIC
Milagro
Pierre Auger Observatory
VERITAS
PAMELA
POLAR
PoGOLite

Criticisms
Criticisms of ROOT include its difficulty for beginners, as well as various aspects of its design and implementation. Frequent causes of frustration include extreme code bloat, heavy use of global variables, and an overcomplicated class hierarchy. From time to time these issues are discussed on the ROOT users mailing list. While scientists dissatisfied with ROOT have in the past managed to work around its flaws, some of the shortcomings are regularly addressed by the ROOT team. The CINT interpreter, for example, has been replaced by the Cling interpreter, and numerous bugs are fixed with every release.

See also
Matplotlib – a plotting and analysis system for Python
SciPy – a scientific data analysis system for Python, based on the NumPy classes
Perl Data Language – a set of array programming extensions to the Perl programming language
HippoDraw – an alternative C++-based data analysis system
Java Analysis Studio – a Java-based AIDA-compliant data analysis system
R programming language
AIDA (computing) – open interfaces and formats for particle physics data processing
Geant4 – a platform for the simulation of the passage of particles through matter using Monte Carlo methods
PAW
IGOR Pro
Scientific Linux
Scientific computing
OpenDX
OpenScientist
CERN Program Library – legacy program library written in Fortran77, still available but not updated

External links
The ROOT System Home Page
Image galleries
ROOT User's Guide
ROOT Reference Guide
ROOT Forum
The RooFit Toolkit for Data Modeling, an extension to ROOT to facilitate maximum likelihood fits
The Toolkit for Multivariate Data Analysis with ROOT (TMVA), a ROOT-integrated project providing a machine learning environment for the processing and evaluation of multivariate classification, both binary and multi class, and regression techniques targeting applications in high-energy physics.
C++ libraries Data analysis software Data management software Experimental particle physics Free physics software Free plotting software Free science software Free software programmed in C++ Numerical software Physics software Plotting software CERN software
Reaction–diffusion system
Reaction–diffusion systems are mathematical models that correspond to several physical phenomena. The most common is the change in space and time of the concentration of one or more chemical substances: local chemical reactions in which the substances are transformed into each other, and diffusion which causes the substances to spread out over a surface in space. Reaction–diffusion systems are naturally applied in chemistry. However, the system can also describe dynamical processes of non-chemical nature. Examples are found in biology, geology and physics (neutron diffusion theory) and ecology. Mathematically, reaction–diffusion systems take the form of semi-linear parabolic partial differential equations. They can be represented in the general form $\partial_t \mathbf{q} = \mathbf{D}\,\nabla^2 \mathbf{q} + \mathbf{R}(\mathbf{q})$, where $\mathbf{q}(\mathbf{x},t)$ represents the unknown vector function, $\mathbf{D}$ is a diagonal matrix of diffusion coefficients, and $\mathbf{R}$ accounts for all local reactions. The solutions of reaction–diffusion equations display a wide range of behaviours, including the formation of travelling waves and wave-like phenomena as well as other self-organized patterns like stripes, hexagons or more intricate structures like dissipative solitons. Such patterns have been dubbed "Turing patterns". Each function for which a reaction–diffusion equation holds represents, in fact, a concentration variable. One-component reaction–diffusion equations The simplest reaction–diffusion equation in one spatial dimension in plane geometry, $\partial_t u = D\,\partial_x^2 u + R(u)$, is also referred to as the Kolmogorov–Petrovsky–Piskunov equation. If the reaction term vanishes, then the equation represents a pure diffusion process. The corresponding equation is Fick's second law. The choice $R(u) = u(1-u)$ yields Fisher's equation that was originally used to describe the spreading of biological populations, the Newell–Whitehead-Segel equation with $R(u) = u(1-u^2)$ to describe Rayleigh–Bénard convection, the more general Zeldovich–Frank-Kamenetskii equation with $R(u) = u(1-u)\,\mathrm{e}^{-\beta(1-u)}$ and $\beta$ (Zeldovich number) that arises in combustion theory, and its particular degenerate case with $R(u) = u^2 - u^3$ that is sometimes referred to as the Zeldovich equation as well. The dynamics of one-component systems is subject to certain restrictions as the evolution equation can also be written in the variational form $\partial_t u = -\frac{\delta \mathfrak{L}}{\delta u}$ and therefore describes a permanent decrease of the "free energy" $\mathfrak{L}$ given by the functional $\mathfrak{L} = \int_{-\infty}^{\infty} \left[ \tfrac{D}{2}\,(\partial_x u)^2 - V(u) \right] \mathrm{d}x$ with a potential $V(u)$ such that $R(u) = \frac{\mathrm{d}V(u)}{\mathrm{d}u}$. In systems with more than one stationary homogeneous solution, a typical solution is given by travelling fronts connecting the homogeneous states. These solutions move with constant speed without changing their shape and are of the form $u(x,t) = \hat{u}(\xi)$ with $\xi = x - ct$, where $c$ is the speed of the travelling wave. Note that while travelling waves are generically stable structures, all non-monotonous stationary solutions (e.g. localized domains composed of a front-antifront pair) are unstable. For $c = 0$, there is a simple proof for this statement: if $u_0(x)$ is a stationary solution and $u = u_0(x) + \tilde{u}(x,t)$ is an infinitesimally perturbed solution, linear stability analysis yields the equation $\partial_t \tilde{u} = D\,\partial_x^2 \tilde{u} + R'(u_0(x))\,\tilde{u}$. With the ansatz $\tilde{u} = \psi(x)\,\mathrm{e}^{-\lambda t}$ we arrive at the eigenvalue problem of Schrödinger type $\hat{H}\psi = \lambda\psi$ with $\hat{H} = -D\,\partial_x^2 - R'(u_0(x))$, where negative eigenvalues result in the instability of the solution. Due to translational invariance $\psi = \partial_x u_0(x)$ is a neutral eigenfunction with the eigenvalue $\lambda = 0$, and all other eigenfunctions can be sorted according to an increasing number of nodes, with the magnitude of the corresponding real eigenvalue increasing monotonically with the number of zeros.
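As a concrete illustration of the one-component equation above, the following sketch (added here; it is not part of the original article) integrates Fisher's equation, $\partial_t u = D\,\partial_x^2 u + u(1-u)$, with an explicit finite-difference scheme and reports the position of the developing travelling front; the grid spacing, time step and initial condition are arbitrary choices that satisfy the usual explicit-scheme stability condition $D\,\Delta t/\Delta x^2 \le 1/2$.

```cpp
// fisher_fd.cpp -- explicit finite-difference sketch for Fisher's equation
//   du/dt = D d2u/dx2 + u(1 - u)
// Illustrative parameters only; stability requires D*dt/dx^2 <= 0.5.
#include <cstdio>
#include <vector>

int main() {
    const int    N     = 600;   // grid points
    const double D     = 1.0;   // diffusion coefficient
    const double dx    = 0.5;
    const double dt    = 0.1;   // D*dt/dx^2 = 0.4 < 0.5
    const int    steps = 1000;

    std::vector<double> u(N, 0.0), unew(N);
    for (int i = 0; i < N / 10; ++i) u[i] = 1.0;   // front initially near the left edge

    for (int n = 1; n <= steps; ++n) {
        for (int i = 0; i < N; ++i) {
            // zero-flux boundaries via mirrored neighbours
            double left  = u[i > 0     ? i - 1 : 1];
            double right = u[i < N - 1 ? i + 1 : N - 2];
            double lap   = (left - 2.0 * u[i] + right) / (dx * dx);
            unew[i] = u[i] + dt * (D * lap + u[i] * (1.0 - u[i]));  // reaction term u(1-u)
        }
        u.swap(unew);

        if (n % 250 == 0) {   // report where the front (u = 0.5 crossing) sits
            int pos = 0;
            while (pos < N && u[pos] > 0.5) ++pos;
            std::printf("t = %6.1f  front at x = %6.1f\n", n * dt, pos * dx);
        }
    }
    return 0;
}
```

For these parameters the front advances at roughly the asymptotic Fisher speed $c = 2\sqrt{D}$, about two length units per unit time, which can be read off from the printed positions.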
The eigenfunction should have at least one zero, and for a non-monotonic stationary solution the corresponding eigenvalue cannot be the lowest one, thereby implying instability. To determine the velocity of a moving front, one may go to a moving coordinate system and look at stationary solutions: This equation has a nice mechanical analogue as the motion of a mass with position in the course of the "time" under the force with the damping coefficient c which allows for a rather illustrative access to the construction of different types of solutions and the determination of . When going from one to more space dimensions, a number of statements from one-dimensional systems can still be applied. Planar or curved wave fronts are typical structures, and a new effect arises as the local velocity of a curved front becomes dependent on the local radius of curvature (this can be seen by going to polar coordinates). This phenomenon leads to the so-called curvature-driven instability. Two-component reaction–diffusion equations Two-component systems allow for a much larger range of possible phenomena than their one-component counterparts. An important idea that was first proposed by Alan Turing is that a state that is stable in the local system can become unstable in the presence of diffusion. A linear stability analysis however shows that when linearizing the general two-component system a plane wave perturbation of the stationary homogeneous solution will satisfy Turing's idea can only be realized in four equivalence classes of systems characterized by the signs of the Jacobian of the reaction function. In particular, if a finite wave vector is supposed to be the most unstable one, the Jacobian must have the signs This class of systems is named activator-inhibitor system after its first representative: close to the ground state, one component stimulates the production of both components while the other one inhibits their growth. Its most prominent representative is the FitzHugh–Nagumo equation with which describes how an action potential travels through a nerve. Here, and are positive constants. When an activator-inhibitor system undergoes a change of parameters, one may pass from conditions under which a homogeneous ground state is stable to conditions under which it is linearly unstable. The corresponding bifurcation may be either a Hopf bifurcation to a globally oscillating homogeneous state with a dominant wave number or a Turing bifurcation to a globally patterned state with a dominant finite wave number. The latter in two spatial dimensions typically leads to stripe or hexagonal patterns. For the Fitzhugh–Nagumo example, the neutral stability curves marking the boundary of the linearly stable region for the Turing and Hopf bifurcation are given by If the bifurcation is subcritical, often localized structures (dissipative solitons) can be observed in the hysteretic region where the pattern coexists with the ground state. Other frequently encountered structures comprise pulse trains (also known as periodic travelling waves), spiral waves and target patterns. These three solution types are also generic features of two- (or more-) component reaction–diffusion equations in which the local dynamics have a stable limit cycle Three- and more-component reaction–diffusion equations For a variety of systems, reaction–diffusion equations with more than two components have been proposed, e.g. the Belousov–Zhabotinsky reaction, for blood clotting, fission waves or planar gas discharge systems. 
It is known that systems with more components allow for a variety of phenomena not possible in systems with one or two components (e.g. stable running pulses in more than one spatial dimension without global feedback). An introduction and systematic overview of the possible phenomena in dependence on the properties of the underlying system is given in. Applications and universality In recent times, reaction–diffusion systems have attracted much interest as a prototype model for pattern formation. The above-mentioned patterns (fronts, spirals, targets, hexagons, stripes and dissipative solitons) can be found in various types of reaction–diffusion systems in spite of large discrepancies e.g. in the local reaction terms. It has also been argued that reaction–diffusion processes are an essential basis for processes connected to morphogenesis in biology and may even be related to animal coats and skin pigmentation. Other applications of reaction–diffusion equations include ecological invasions, spread of epidemics, tumour growth, dynamics of fission waves, wound healing and visual hallucinations. Another reason for the interest in reaction–diffusion systems is that although they are nonlinear partial differential equations, there are often possibilities for an analytical treatment. Experiments Well-controllable experiments in chemical reaction–diffusion systems have up to now been realized in three ways. First, gel reactors or filled capillary tubes may be used. Second, temperature pulses on catalytic surfaces have been investigated. Third, the propagation of running nerve pulses is modelled using reaction–diffusion systems. Aside from these generic examples, it has turned out that under appropriate circumstances electric transport systems like plasmas or semiconductors can be described in a reaction–diffusion approach. For these systems various experiments on pattern formation have been carried out. Numerical treatments A reaction–diffusion system can be solved by using methods of numerical mathematics. There are existing several numerical treatments in research literature. Also for complex geometries numerical solution methods are proposed. To highest degree of detail reaction-diffusion systems are described with particle based simulation tools like SRSim or ReaDDy which employ for example reversible interacting-particle reaction dynamics. See also Autowave Diffusion-controlled reaction Chemical kinetics Phase space method Autocatalytic reactions and order creation Pattern formation Patterns in nature Periodic travelling wave Stochastic geometry MClone The Chemical Basis of Morphogenesis Turing pattern Multi-state modeling of biomolecules Examples Fisher's equation Zeldovich–Frank-Kamenetskii equation FitzHugh–Nagumo model Wrinkle paint References External links Reaction–Diffusion by the Gray–Scott Model: Pearson's parameterization a visual map of the parameter space of Gray–Scott reaction diffusion. A thesis on reaction–diffusion patterns with an overview of the field RD Tool: an interactive web application for reaction-diffusion simulation Mathematical modeling Parabolic partial differential equations Reaction mechanisms
Tribology
Tribology is the science and engineering of understanding friction, lubrication and wear phenomena for interacting surfaces in relative motion. It is highly interdisciplinary, drawing on many academic fields, including physics, chemistry, materials science, mathematics, biology and engineering. The fundamental objects of study in tribology are tribosystems, which are physical systems of contacting surfaces. Subfields of tribology include biotribology, nanotribology and space tribology. It is also related to other areas such as the coupling of corrosion and tribology in tribocorrosion and the contact mechanics of how surfaces in contact deform. Approximately 20% of the total energy expenditure of the world is due to the impact of friction and wear in the transportation, manufacturing, power generation, and residential sectors. This section will provide an overview of tribology, with links to many of the more specialized areas. Etymology The word tribology derives from the Greek root τριβ- of the verb , tribo, "I rub" in classic Greek, and the suffix -logy from , -logia "study of", "knowledge of". Peter Jost coined the word in 1966, in the eponymous report which highlighted the cost of friction, wear and corrosion to the UK economy. History Early history Despite the relatively recent naming of the field of tribology, quantitative studies of friction can be traced as far back as 1493, when Leonardo da Vinci first noted the two fundamental 'laws' of friction. According to Leonardo, frictional resistance was the same for two different objects of the same weight but making contact over different widths and lengths. He also observed that the force needed to overcome friction doubles as weight doubles. However, Leonardo's findings remained unpublished in his notebooks. The two fundamental 'laws' of friction were first published (in 1699) by Guillaume Amontons, with whose name they are now usually associated. They state that: the force of friction acting between two sliding surfaces is proportional to the load pressing the surfaces together the force of friction is independent of the apparent area of contact between the two surfaces. Although not universally applicable, these simple statements hold for a surprisingly wide range of systems. These laws were further developed by Charles-Augustin de Coulomb (in 1785), who noticed that static friction force may depend on the contact time and sliding (kinetic) friction may depend on sliding velocity, normal force and contact area. In 1798, Charles Hatchett and Henry Cavendish carried out the first reliable test on frictional wear. In a study commissioned by the Privy Council of the UK, they used a simple reciprocating machine to evaluate the wear rate of gold coins. They found that coins with grit between them wore at a faster rate compared to self-mated coins. In 1860, Theodor Reye proposed . In 1953, John Frederick Archard developed the Archard equation which describes sliding wear and is based on the theory of asperity contact. Other pioneers of tribology research are Australian physicist Frank Philip Bowden and British physicist David Tabor, both of the Cavendish Laboratory at Cambridge University. Together they wrote the seminal textbook The Friction and Lubrication of Solids (Part I originally published in 1950 and Part II in 1964). Michael J. Neale was another leader in the field during the mid-to-late 1900s. He specialized in solving problems in machine design by applying his knowledge of tribology. 
Neale was respected as an educator with a gift for integrating theoretical work with his own practical experience to produce easy-to-understand design guides. The Tribology Handbook, which he first edited in 1973 and updated in 1995, is still used around the world and forms the basis of numerous training courses for engineering designers. Duncan Dowson surveyed the history of tribology in his 1997 book History of Tribology (2nd edition). This covers developments from prehistory, through early civilizations (Mesopotamia, ancient Egypt) and highlights the key developments up to the end of the twentieth century. The Jost report The term tribology became widely used following The Jost Report published in 1966. The report highlighted the huge cost of friction, wear and corrosion to the UK economy (1.1–1.4% of GDP). As a result, the UK government established several national centres to address tribological problems. Since then the term has diffused into the international community, with many specialists now identifying as "tribologists". Significance Despite considerable research since the Jost Report, the global impact of friction and wear on energy consumption, economic expenditure, and carbon dioxide emissions are still considerable. In 2017, Kenneth Holmberg and Ali Erdemir attempted to quantify their impact worldwide. They considered the four main energy consuming sectors: transport, manufacturing, power generation, and residential. The following were concluded: In total, ~23% of the world's energy consumption originates from tribological contacts. Of that, 20% is to overcome friction and 3% to remanufacture worn parts and spare equipment due to wear and wear-related. By taking advantage of the new technologies for friction reduction and wear protection, energy losses due to friction and wear in vehicles, machinery and other equipment worldwide could be reduced by 40% in the long term (15 years) and 18% in the short term (8 years). On a global scale, these savings would amount to 1.4% of GDP annually and 8.7% of total energy consumption in the long term. The largest short term energy savings are envisioned in transport (25%) and in power generation (20%) while the potential savings in the manufacturing and residential sectors are estimated to be ~10%. In the longer term, savings would be 55%, 40%, 25%, and 20%, respectively. Implementing advanced tribological technologies can also reduce global carbon dioxide emissions by as much as 1,460 million tons of carbon dioxide equivalent (MtCO2) and result in 450,000 million Euros cost savings in the short term. In the long term, the reduction could be as large as 3,140 MtCO2 and the cost savings 970,000 million Euros. Classical tribology covering such applications as ball bearings, gear drives, clutches, brakes, etc. was developed in the context of mechanical engineering. But in the last decades tribology expanded to qualitatively new fields of applications, in particular micro- and nanotechnology as well as biology and medicine. Fundamental concepts Tribosystem The concept of tribosystems is used to provide a detailed assessment of relevant inputs, outputs and losses to tribological systems. Knowledge of these parameters allows tribologists to devise test procedures for tribological systems. Tribofilm Tribofilms are thin films that form on tribologically stressed surfaces. They play an important role in reducing friction and wear in tribological systems. 
Stribeck curve The Stribeck curve shows how friction in fluid-lubricated contacts is a non-linear function of lubricant viscosity, entrainment velocity and contact load. Physics Friction The word friction comes from the Latin "frictionem", which means rubbing. This term is used to describe all those dissipative phenomena, capable of producing heat and of opposing the relative motion between two surfaces. There are two main types of friction: Static friction Which occurs between surfaces in a fixed state, or relatively stationary. Dynamic friction Which occurs between surfaces in relative motion. The study of friction phenomena is a predominantly empirical study and does not allow to reach precise results, but only to useful approximate conclusions. This inability to obtain a definite result is due to the extreme complexity of the phenomenon. If it is studied more closely it presents new elements, which, in turn, make the global description even more complex. Laws of friction All the theories and studies on friction can be simplified into three main laws, which are valid in most cases: First Law of Amontons The frictional force is directly proportional to the normal load. Second Law of Amontons Friction is independent of the apparent area of contact. Third Law of Coulomb Dynamic friction is independent of the relative sliding speed. Coulomb later found deviations from Amontons’ laws in some cases. In systems with significant nonuniform stress fields, Amontons’ laws are not satisfied macroscopically because local slip occurs before the entire system slides. Static friction Consider a block of a certain mass m, placed in a quiet position on a horizontal plane. If you want to move the block, an external force must be applied, in this way we observe a certain resistance to the motion given by a force equal to and opposite to the applied force, which is precisely the static frictional force . By continuously increasing the applied force, we obtain a value such that the block starts instantly to move. At this point, also taking into account the first two friction laws stated above, it is possible to define the static friction force as a force equal in modulus to the minimum force required to cause the motion of the block, and the coefficient of static friction as the ratio of the static friction force . and the normal force at block , obtaining Dynamic friction Once the block has been put into motion, the block experiences a friction force with a lesser intensity than the static friction force . The friction force during relative motion is known as the dynamic friction force . In this case it is necessary to take into account not only the first two laws of Amontons, but also of the law of Coulomb, so as to be able to affirm that the relationship between dynamic friction force , coefficient of dynamic friction k and normal force N is the following: Static and dynamic friction coefficient At this point it is possible to summarize the main properties of the static friction coefficients and the dynamic one . These coefficients are dimensionless quantities, given by the ratio between the intensity of the friction force and the intensity of the applied load , depending on the type of surfaces that are involved in a mutual contact, and in any case, the condition is always valid such that: . 
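The definitions just given can be made concrete with a short numerical check. The sketch below (an added illustration, not material from the article) uses the standard relations that the maximum static friction force is $\mu_s N$ and the kinetic friction force is $\mu_k N$ to decide whether a block on an inclined plane starts to slip and, if so, with what acceleration it slides; the mass, angle and coefficients are invented values. The slip threshold $\tan\theta = \mu_s$ is the same relation exploited later in the passage on friction measurements.

```cpp
// incline_friction.cpp -- does a block on an incline slip, and how fast does it slide?
// Uses the standard relations F_s,max = mu_s * N and F_k = mu_k * N; values are illustrative.
#include <cmath>
#include <cstdio>

int main() {
    const double PI    = std::acos(-1.0);
    const double g     = 9.81;                 // m/s^2
    const double mass  = 2.0;                  // kg
    const double theta = 25.0 * PI / 180.0;    // incline angle
    const double mu_s  = 0.50;                 // static coefficient (illustrative value)
    const double mu_k  = 0.40;                 // kinetic coefficient, typically smaller than mu_s

    const double N          = mass * g * std::cos(theta);  // normal force
    const double driving    = mass * g * std::sin(theta);  // gravity component along the plane
    const double static_max = mu_s * N;                    // largest force static friction can supply

    if (driving <= static_max) {
        std::printf("No slip: driving force %.2f N <= max static friction %.2f N\n",
                    driving, static_max);
        std::printf("(slip would start at tan(theta) = mu_s, i.e. theta = %.1f deg)\n",
                    std::atan(mu_s) * 180.0 / PI);
    } else {
        const double a = g * (std::sin(theta) - mu_k * std::cos(theta));  // sliding acceleration
        std::printf("Slips: acceleration down the plane = %.2f m/s^2\n", a);
    }
    return 0;
}
```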
Usually, the value of both coefficients does not exceed the unit and can be considered constant only within certain ranges of forces and velocities, outside of which there are extreme conditions that modify these coefficients and variables. In systems with significant nonuniform stress fields, the macroscopic static friction coefficient depends on the external pressure, system size, or shape because local slip occurs before the system slides. The following table shows the values of the static and dynamic friction coefficients for common materials: Rolling friction In the case of bodies capable of rolling, there is a particular type of friction, in which the sliding phenomenon, typical of dynamic friction, does not occur, but there is also a force that opposes the motion, which also excludes the case of static friction. This type of friction is called rolling friction. Now we want to observe in detail what happens to a wheel that rolls on a horizontal plane. Initially the wheel is immobile and the forces acting on it are the weight force and the normal force given by the response to the weight of the floor. At this point the wheel is set in motion, causing a displacement at the point of application of the normal force which is now applied in front of the center of the wheel, at a distance b, which is equal to the value of the rolling friction coefficient. The opposition to the motion is caused by the separation of the normal force and the weight force at the exact moment in which the rolling starts, so the value of the torque given by the rolling friction force isWhat happens in detail at the microscopic level between the wheel and the supporting surface is described in Figure, where it is possible to observe what is the behavior of the reaction forces of the deformed plane acting on an immobile wheel. Rolling the wheel continuously causes imperceptible deformations of the plane and, once passed to a subsequent point, the plane returns to its initial state. In the compression phase the plane opposes the motion of the wheel, while in the decompression phase it provides a positive contribution to the motion. The force of rolling friction depends, therefore, on the small deformations suffered by the supporting surface and by the wheel itself, and can be expressed as , where it is possible to express b in relation to the sliding friction coefficient as , with r being the wheel radius. The surfaces Going even deeper, it is possible to study not only the most external surface of the metal, but also the immediately more internal states, linked to the history of the metal, its composition and the manufacturing processes undergone by the latter. it is possible to divide the metal into four different layers: Crystalline structure – basic structure of the metal, bulk interior form; Machined layer – layer which may also have inclusions of foreign material and which derives from the processing processes to which the metal has been subjected; Hardened layer – has a crystalline structure of greater hardness than the inner layers, thanks to the rapid cooling to which they are subjected in the working processes; Outer layer or oxide layer – layer that is created due to chemical interaction with the metal's environment and from the deposition of impurities. The layer of oxides and impurities (third body) has a fundamental tribological importance, in fact it usually contributes to reducing friction. 
Another fact of fundamental importance regarding oxides is that if you could clean and smooth the surface in order to obtain a pure "metal surface", what we would observe is the union of the two surfaces in contact. In fact, in the absence of thin layers of contaminants, the atoms of the metal in question, are not able to distinguish one body from another, thus going to form a single body if put in contact. The origin of friction Contact between surfaces is made up of a large number of microscopic regions, in the literature called asperities or junctions of contact, where atom-to-atom contact takes place. The phenomenon of friction, and therefore of the dissipation of energy, is due precisely to the deformations that such regions undergo due to the load and relative movement. Plastic, elastic, or rupture deformations can be observed: Plastic deformations – permanent deformations of the shape of the bumps; Elastic deformations – deformations in which the energy expended in the compression phase is almost entirely recovered in the decompression phase (elastic hysteresis); Break deformations – deformations that lead to the breaking of bumps and the creation of new contact areas. The energy that is dissipated during the phenomenon is transformed into heat, thus increasing the temperature of the surfaces in contact. The increase in temperature also depends on the relative speed and the roughness of the material, it can be so high as to even lead to the fusion of the materials involved. In friction phenomena, temperature is fundamental in many areas of application. For example, a rise in temperature may result in a sharp reduction of the friction coefficient, and consequently, the effectiveness of the brakes. The cohesion theory The adhesion theory states that in the case of spherical asperities in contact with each other, subjected to a load, a deformation is observed, which, as the load increases, passes from an elastic to a plastic deformation. This phenomenon involves an enlargement of the real contact area , which for this reason can be expressed as:where D is the hardness of the material definable as the applied load divided by the area of the contact surface. If at this point the two surfaces are sliding between them, a resistance to shear stress t is observed, given by the presence of adhesive bonds, which were created precisely because of the plastic deformations, and therefore the frictional force will be given byAt this point, since the coefficient of friction is the ratio between the intensity of the frictional force and that of the applied load, it is possible to state thatthus relating to the two material properties: shear strength t and hardness. To obtain low value friction coefficients it is possible to resort to materials which require less shear stress, but which are also very hard. In the case of lubricants, in fact, we use a substrate of material with low cutting stress t, placed on a very hard material. The force acting between two solids in contact will not only have normal components, as implied so far, but will also have tangential components. This further complicates the description of the interactions between roughness, because due to this tangential component plastic deformation comes with a lower load than when ignoring this component. A more realistic description then of the area of each single junction that is created is given bywith constant and a "tangent" force applied to the joint. 
To obtain even more realistic considerations, the phenomenon of the third body should also be considered, i.e., the presence of foreign materials, such as moisture, oxides or lubricants, between the two solids in contact. A coefficient c is then introduced which is able to correlate the shear strength t of the pure "material" and that of the third body with 0 < c < 1. By studying the behavior at the limits it will be that for c = 0, t = 0 and for c = 1 it returns to the condition in which the surfaces are directly in contact and there is no presence of a third body. Keeping in mind what has just been said, it is possible to correct the friction coefficient formula as follows:In conclusion, the case of elastic bodies in interaction with each other is considered. Similarly to what we have just seen, it is possible to define an equation of the typewhere, in this case, K depends on the elastic properties of the materials. Also for the elastic bodies the tangential force depends on the coefficient c seen above, and it will beand therefore a fairly exhaustive description of the friction coefficient can be obtained Friction measurements The simplest and most immediate method for evaluating the friction coefficient of two surfaces is the use of an inclined plane on which a block of material is made to slide. As can be seen in the figure, the normal force of the plane is given by , while the frictional force is equal to . This allows us to state that the coefficient of friction can be calculated very easily, by means of the tangent of the angle in which the block begins to slip. In fact we haveThen from the inclined plane we moved on to more sophisticated systems, which allow us to consider all the possible environmental conditions in which the measurement is made, such as the cross-roller machine or the pin and disk machine. Today there are digital machines such as the "Friction Tester" which allows, by means of a software support, to insert all the desired variables. Another widely used process is the ring compression test. A flat ring of the material to be studied is plastically deformed by means of a press, if the deformation is an expansion in both the inner and the outer circle, then there will be low or zero friction coefficients. Otherwise for a deformation that expands only in the inner circle there will be increasing friction coefficients. Lubrication To reduce friction between surfaces and keep wear under control, materials called lubricants are used. Unlike what you might think, these are not just oils or fats, but any fluid material that is characterized by viscosity, such as air and water. Of course, some lubricants are more suitable than others, depending on the type of use they are intended for: air and water, for example, are readily available, but the former can only be used under limited load and speed conditions, while the second can contribute to the wear of materials. What we try to achieve by means of these materials is a perfect fluid lubrication, or a lubrication such that it is possible to avoid direct contact between the surfaces in question, inserting a lubricant film between them. 
To do this there are two possibilities, depending on the type of application, the costs to address and the level of "perfection" of the lubrication desired to be achieved, there is a choice between: Fluidostatic lubrication (or hydrostatic in the case of mineral oils) – which consists in the insertion of lubricating material under pressure between the surfaces in contact; Fluid fluid lubrication (or hydrodynamics) – which consists in exploiting the relative motion between the surfaces to make the lubricating material penetrate. Viscosity The viscosity is the equivalent of friction in fluids, it describes, in fact, the ability of fluids to resist the forces that cause a change in shape. Thanks to Newton's studies, a deeper understanding of the phenomenon has been achieved. He, in fact, introduced the concept of laminar flow: "a flow in which the velocity changes from layer to layer". It is possible to ideally divide a fluid between two surfaces (, ) of area A, in various layers. The layer in contact with the surface , which moves with a velocity v due to an applied force F, will have the same velocity as v of the slab, while each immediately following layer will vary this velocity of a quantity dv, up to the layer in contact with the immobile surface , which will have zero speed. From what has been said, it is possible to state that the force F, necessary to cause a rolling motion in a fluid contained between two plates, is proportional to the area of the two surfaces and to the speed gradient:At this point we can introduce a proportional constant , which corresponds to the dynamic viscosity coefficient of the fluid, to obtain the following equation, known as Newton's lawThe speed varies by the same amount dv of layer in layer and then the condition occurs so that dv / dy = v / L, where L is the distance between the surfaces and , and then we can simplify the equation by writingThe viscosity is high in fluids that strongly oppose the motion, while it is contained for fluids that flow easily. To determine what kind of flow is in the study, we observe its Reynolds numberThis is a constant that depends on the fluid mass of the fluid, on its viscosity and on the diameter L of the tube in which the fluid flows. If the Reynolds number is relatively low then there is a laminar flow, whereas for the flow becomes turbulent. To conclude we want to underline that it is possible to divide the fluids into two types according to their viscosity: Newtonian fluids, or fluids in which viscosity is a function of temperature and fluid pressure only and not of velocity gradient; Non-Newtonian fluids, or fluids in which viscosity also depends on the velocity gradient. Viscosity as a function of temperature and pressure Temperature and pressure are two fundamental factors to evaluate when choosing a lubricant instead of another. Consider the effects of temperature initially. There are three main causes of temperature variation that can affect the behavior of the lubricant: Weather conditions; Local thermal factors (like for car engines or refrigeration pumps); Energy dissipation due to rubbing between surfaces. In order to classify the various lubricants according to their viscosity behavior as a function of temperature, in 1929 the viscosity index (V.I.) was introduced by Dean and Davis. These assigned the best lubricant then available, namely the oil of Pennsylvania, the viscosity index 100, and at the worst, the American oil of the Gulf Coast, the value 0. 
To determine the value of the intermediate oil index, the following procedure is used: two reference oils are chosen so that the oil in question has the same viscosity at 100 °C, and the following equation is used to determine the viscosity indexThis process has some disadvantages: For mixtures of oils the results are not exact; There is no information if you are outside the fixed temperature range; With the advancement of the technologies, oils with V.I. more than 100, which can not be described by the method above. In the case of oils with V.I. above 100 you can use a different relationship that allows you to get exact resultswhere, in this case, H is the viscosity at of the oil with V.I. = 100 and v is the kinematic viscosity of the study oil at . We can therefore say, in conclusion, that an increase in temperature leads to a decrease in the viscosity of the oil. It is also useful to keep in mind that, in the same way, an increase in pressure implies an increase in viscosity. To evaluate the effects of pressure on viscosity, the following equation is usedwhere is the pressure viscosity coefficient p, is the viscosity coefficient at atmospheric pressure and is a constant that describes the relationship between viscosity and pressure. Viscosity measures To determine the viscosity of a fluid, viscosimeters are used which can be divided into 3 main categories: Capillary viscometers, in which the viscosity of the fluid is measured by sliding it into a capillary tube; Solid drop viscometers, in which viscosity is measured by calculating the velocity of a solid that moves in the fluid; Rotational viscometers, in which viscosity is obtained by evaluating the flow of fluid placed between two surfaces in relative motion. The first two types of viscometers are mainly used for Newtonian fluids, while the third is very versatile. Wear The wear is the progressive involuntary removal of material from a surface in relative motion with another or with a fluid. We can distinguish two different types of wear: moderate wear and severe wear. The first case concerns low loads and smooth surfaces, while the second concerns significantly higher loads and compatible and rough surfaces, in which the wear processes are much more violent. Wear plays a fundamental role in tribological studies, since it causes changes in the shape of the components used in the construction of machinery (for example). These worn parts must be replaced and this entails both a problem of an economic nature, due to the cost of replacement, and a functional problem, since if these components are not replaced in time, more serious damage could occur to the machine in its complex. This phenomenon, however, has not only negative sides, indeed, it is often used to reduce the roughness of some materials, eliminating the asperities. Erroneously we tend to imagine wear in a direct correlation with friction, in reality these two phenomena can not be easily connected. There may be conditions such that low friction can result in significant wear and vice versa. In order for this phenomenon to occur, certain implementation times are required, which may change depending on some variables, such as load, speed, lubrication and environmental conditions, and there are different wear mechanisms, which may occur simultaneously or even combined with each other: Adhesive wear; Abrasive wear; Fatigue wear; Corrosive wear; Rubbing wear or fretting; Erosion wear; Other minor wear phenomena (wear by impact, cavitation, wear-fusion, wear-spreading). 
Adhesive wear As known, the contact between two surfaces occurs through the interaction between asperities. If a shearing force is applied in the contact area, it may be possible to detach a small part of the weaker material, due to its adhesion to the harder surface. What is described is precisely the mechanism of the adhesive wear represented in the figure. This type of wear is very problematic, since it involves high wear speeds, but at the same time it is possible to reduce adhesion by increasing surface roughness and hardness of the surfaces involved, or by inserting layers of contaminants such as oxygen, oxides, water, or oils. In conclusion, the behavior of the adhesive wear volume can be described by means of three main laws Law 1 – Distance The mass involved in wear is proportional to the distance traveled in the rubbing between the surfaces. Law 2 – Load The mass involved in wear is proportional to the applied load. Law 3 – Hardness The mass involved in wear is inversely proportional to the hardness of the less hard material. An important aspect of wear is emission of wear particles into the environment which increasingly threatens human health and ecology. The first researcher who investigated this topic was Ernest Rabinowicz. Abrasive wear The abrasive wear consists of the cutting effort of hard surfaces that act on softer surfaces and can be caused either by the roughness that as tips cut off the material against which they rub (two-body abrasive wear), or from particles of hard material that interpose between two surfaces in relative motion (three-body abrasive wear). At application levels, the two-body wear is easily eliminated by means of an adequate surface finish, while the three-body wear can bring serious problems and must therefore be removed as much as possible by means of suitable filters, even before of a weighted machine design. Fatigue wear The fatigue wear is a type of wear that is caused by alternative loads, which cause local contact forces repeated over time, which in turn lead to deterioration of the materials involved. The most immediate example of this type of wear is that of a comb. If you slide a finger over the teeth of the comb over and over again, it is observed that at some point one or more teeth of the comb come off. This phenomenon can lead to the breaking of the surfaces due to mechanical or thermal causes. The first case is that described above in which a repeated load causes high contact stresses. The second case, however, is caused by the thermal expansion of the materials involved in the process. To reduce this type of wear, therefore, it is good to try to decrease both the contact forces and the thermal cycling, that is the frequency with which different temperatures intervene. For optimal results it is also good to eliminate, as much as possible, impurities between surfaces, local defects and inclusions of foreign materials in the bodies involved. Corrosive wear The corrosive wear occurs in the presence of metals that oxidize or corrode. When the pure metal surfaces come into contact with the surrounding environment, oxide films are created on their surfaces because of the contaminants present in the environment itself, such as water, oxygen or acids. These films are continually removed from the abrasive and adhesive wear mechanisms, continually recreated by pure-contaminating metal interactions. Clearly this type of wear can be reduced by trying to create an 'ad hoc' environment, free of pollutants and sensible to minimal thermal changes. 
Corrosive wear can also be positive in some applications. In fact, the oxides that are created, contribute to decrease the coefficient of friction between the surfaces, or, being in many cases harder than the metal to which they belong, can be used as excellent abrasives. Rubbing wear or fretting The rubbing wear occurs in systems subject to more or less intense vibrations, which cause relative movements between the surfaces in contact within the order of nanometers. These microscopic relative movements cause both adhesive wear, caused by the displacement itself, and abrasive wear, caused by the particles produced in the adhesive phase, which remain trapped between the surfaces. This type of wear can be accelerated by the presence of corrosive substances and the increase in temperature. Erosion wear The erosion wear occurs when free particles, which can be either solid or liquid, hit a surface, causing abrasion. The mechanisms involved are of various kinds and depend on certain parameters, such as the impact angle, the particle size, the impact velocity and the material of which the particles are made up. Factors affecting wear Among the main factors influencing wear we find Hardness Mutual Solubility Crystalline structure It has been verified that the harder a material is, the more it decreases. In the same way, the less two materials are mutually soluble, the more the wear tends to decrease. Finally, as regards the crystalline structure, it is possible to state that some structures are more suitable to resist the wear of others, such as a hexagonal structure with a compact distribution, which can only deform by slipping along the base planes. Wear rate To provide an assessment of the damage caused by wear, we use a dimensionless coefficient called wear rate, given by the ratio between the height change of the body and the length of the relative sliding .This coefficient makes it possible to subdivide, depending on its size, the damage suffered by various materials in different situations, passing from a modest degree of wear, through a medium, to a degree of severe wear. Instead, to express the volume of wear V it is possible to use the Holm equation (for adhesive wear) (for abrasive wear) where W / H represents the real contact area, l the length of the distance traveled and k and are experimental dimensional factors. Wear measurement In experimental measurements of material wear, it is often necessary to recreate fairly small wear rates and to accelerate times. The phenomena, which in reality develop after years, in the laboratory must occur after a few days. A first evaluation of the wear processes is a visual inspection of the superficial profile of the body in the study, including a comparison before and after the occurrence of the wear phenomenon. In this first analysis the possible variations of the hardness and of the superficial geometry of the material are observed. Another method of investigation is that of the radioactive tracer, used to evaluate wear at macroscopic levels. One of the two materials in contact, involved in a wear process, is marked with a radioactive tracer. In this way, the particles of this material, which will be removed, will be easily visible and accessible. Finally, to accelerate wear times, one of the best-known techniques used is that of the high pressure contact tests. In this case, to obtain the desired results it is sufficient to apply the load on a very reduced contact area. 
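The Holm/Archard relation cited above is commonly quoted as $V = k\,W\,l/H$, which matches the statement that $W/H$ plays the role of the real contact area: $k$ is a dimensionless experimental coefficient, $W$ the applied load, $l$ the sliding distance and $H$ the hardness of the softer material, with the abrasive-wear variant taking the same form with a different coefficient. The sketch below turns this into an order-of-magnitude estimate; it is an added illustration with invented numbers, not data from the article.

```cpp
// archard_wear.cpp -- order-of-magnitude wear-volume estimate from the Holm/Archard relation
//   V = k * W * l / H
// All numbers below are illustrative, not measured values.
#include <cstdio>

int main() {
    const double k = 1.0e-4;    // dimensionless wear coefficient (typical adhesive-wear order)
    const double W = 50.0;      // normal load in newtons
    const double l = 1000.0;    // total sliding distance in metres
    const double H = 1.0e9;     // hardness of the softer surface in pascals (~1 GPa)

    const double V = k * W * l / H;            // worn volume in cubic metres
    std::printf("Worn volume: %.3e m^3 (= %.3f mm^3)\n", V, V * 1.0e9);

    // A wear rate in the sense used above (height change per unit sliding length)
    // follows by spreading V over the nominal contact area A.
    const double A = 1.0e-4;                   // nominal contact area: 1 cm^2
    const double wear_rate = (V / A) / l;      // dimensionless: dh / dl
    std::printf("Dimensionless wear rate dh/dl: %.3e\n", wear_rate);
    return 0;
}
```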
Applications Transport and manufacturing Historically, tribology research concentrated on the design and effective lubrication of machine components, particularly for bearings. However, the study of tribology extends into most aspects of modern technology and any system where one material slides over another can be affected by complex tribological interactions. Traditionally, tribology research in the transport industry focused on reliability, ensuring the safe, continuous operation of machine components. Nowadays, due to an increased focus on energy consumption, efficiency has become increasingly important and thus lubricants have become progressively more complex and sophisticated in order to achieve this. Tribology also plays an important role in manufacturing. For example, in metal-forming operations, friction increases tool wear and the power required to work a piece. This results in increased costs due to more frequent tool replacement, loss of tolerance as tool dimensions shift, and greater forces required to shape a piece. The use of lubricants which minimize direct surface contact reduces tool wear and power requirements. It is also necessary to know the effects of manufacturing, all manufacturing methods leave a unique system fingerprint (i.e. surface topography) which will influence the tribocontact (e.g. lubricant film formation). Research Fields Tribology research ranges from macro to nano scales, in areas as diverse as the movement of continental plates and glaciers to the locomotion of animals and insects. Tribology research is traditionally concentrated on transport and manufacturing sectors, but this has considerably diversified. Tribology research can be loosely divided into the following fields (with some overlap): Classical tribology is concerned with friction and wear in machine elements (such as rolling-element bearings, gears, plain bearings, brakes, clutches, wheels and fluid bearings) as well as manufacturing processes (such as metal forming). Biotribology studies friction, wear and lubrication in biological systems. The field is gaining importance as human lifetime expectancy increases. Human hip and knee joints are typical biotribology systems. Green tribology aims to minimize the environmental impact of tribological systems along their entire lifecycle. In particular, green tribology aims to reduce tribological losses (e.g., friction and wear) using technologies with minimal environmental impact. This is in contrast to traditional tribology, where the means of reducing tribological losses are not holistically evaluated. Geotribology studies friction, wear, and lubrication of geological systems, such as glaciers and faults. Nanotribology studies tribological phenomena at nanoscopic scales. The field is becoming increasingly important as devices become smaller (e.g. micro/nanoelectromechanical systems, MEMS/NEMS), and research has been aided by the invention of Atomic Force Microscopy. Computational tribology aims to model the behavior of tribological systems through multiphysics simulations, combining disciplines such as contact mechanics, fracture mechanics and computational fluid dynamics. Space tribology studies tribological systems that can operate under the extreme environmental conditions of outer space. In particular, this requires lubricants with low vapor pressure that can withstand extreme temperature fluctuations. Open system tribology studies tribological systems that are exposed to and affected by the natural environment. 
Triboinformatics is an application of Artificial Intelligence, Machine Learning and Big Data methods to tribological systems. Recently, intensive studies of superlubricity (the phenomenon of vanishing friction) have been sparked by the increasing demand for energy savings. Furthermore, the development of new materials, such as graphene and ionic liquids, allows for fundamentally new approaches to solving tribological problems. Societies There are now numerous national and international societies, including: the Society of Tribologists and Lubrication Engineers (STLE) in the US, the Institution of Mechanical Engineers and Institute of Physics (IMechE Tribology Group, IOP Tribology Group) in the UK, the German Society for Tribology (Gesellschaft für Tribologie), the Korean Tribology Society (KTS), the Malaysian Tribology Society (MYTRIBOS), the Japanese Society of Tribologists (JAST), the Tribology Society of India (TSI), the Chinese Mechanical Engineering Society (Chinese Tribology Institute) and the International Tribology Council. Research approach Tribology research is mostly empirical, which can be explained by the vast number of parameters that influence friction and wear in tribological contacts. Thus, most research fields rely heavily on the use of standardized tribometers and test procedures as well as component-level test rigs. See also Footnotes References External links Friction Engineering mechanics Materials science Materials degradation Metallurgy Mechanical engineering
PEST analysis
In business analysis, PEST analysis (political, economic, social and technological) is a framework of external macro-environmental factors used in strategic management and market research. PEST analysis was developed in 1967 by Francis Aguilar as an environmental scanning framework for businesses to understand the external conditions and relations of a business in order to assist managers in strategic planning. It has also been termed ETPS analysis. PEST analyses give an overview of the different macro-environmental factors to be considered by a business, indicating market growth or decline, business position, as well as the potential of and direction for operations. Components The basic PEST analysis includes four factors: political, economic, social, and technological. Political Political factors relate to how the governments intervene in economies. Specifically, political factors comprise areas including tax policy, labour law, environmental law, trade restrictions, tariffs, and political stability. Other factors include what are considered merit goods and demerit goods by a government, and the impact of governments on health, education, and infrastructure of a nation. Economic Economic factors include economic growth, exchange rates, inflation rate, and interest rates. Social Social factors include cultural aspects and health consciousness, population growth rate, age distribution, career attitudes and safety emphasis. Trends in social factors affect the demand for a company's products and how that company operates. Through analysis of social factors, companies may adopt various management strategies to adapt to social trends. Technological Technological factors include R&D activity, automation, technology incentives and the rate of technological change. These can determine barriers to entry, minimum efficient production level and influence the outsourcing decisions. Technological shifts would also affect costs, quality, and innovation. Variants Many similar frameworks have been constructed, with the addition of other components such as environment and law. These include PESTLE, PMESII-PT, STEPE, STEEP, STEEPLE, STEER, and TELOS. Legal and regulatory Legal factors include discrimination law, consumer law, antitrust law, employment law, and health and safety law, which can affect how a company operates, its costs, and the demand for its products. Regulatory factors have also been analysed as its own pillar. Environment Environmental factors include ecological and environmental aspects such as weather, climate, and climate change, which may especially affect industries such as tourism, farming, and insurance. Environmental analyses often use the PESTLE framework, which allow for the evaluation of factors affecting management decisions for coastal zone and freshwater resources, development of sustainable buildings, sustainable energy solutions, and transportation. Demographic Demographic factors have been considered in frameworks such as STEEPLED. Factors include gender, age, ethnicity, knowledge of languages, disabilities, mobility, home ownership, employment status, religious belief or practice, culture and tradition, living standards and income level. Military Military analyses have used the PMESII-PT framework, which considers political, military, economic, social, information, infrastructure, physical environment and time aspects in a military context. Operational The TELOS framework explores technical, economic, legal, operational, and scheduling factors. 
Limitations PEST analysis can be helpful for explaining past market changes, but it is not always suitable for predicting upcoming ones. See also Enterprise planning systems Macromarketing SWOT analysis VRIO References Strategic management Management theory Analysis
Denudation
Denudation is the geological process in which moving water, ice, wind, and waves erode the Earth's surface, leading to a reduction in elevation and in relief of landforms and landscapes. Although the terms erosion and denudation are used interchangeably, erosion is the transport of soil and rocks from one location to another, and denudation is the sum of processes, including erosion, that result in the lowering of Earth's surface. Endogenous processes such as volcanoes, earthquakes, and tectonic uplift can expose continental crust to the exogenous processes of weathering, erosion, and mass wasting. The effects of denudation have been recorded for millennia but the mechanics behind it have been debated for the past 200 years and have only begun to be understood in the past few decades. Description Denudation incorporates the mechanical, biological, and chemical processes of erosion, weathering, and mass wasting. Denudation can involve the removal of both solid particles and dissolved material. These include sub-processes of cryofracture, insolation weathering, slaking, salt weathering, bioturbation, and anthropogenic impacts. Factors affecting denudation include: Anthropogenic (human) activity, including agriculture, damming, mining, and deforestation; Biosphere, via animals, plants, and microorganisms contributing to chemical and physical weathering; Climate, most directly through chemical weathering from rain, but also because climate dictates what kind of weathering occurs; Lithology or the type of rock; Surface topography and changes to surface topography, such as mass wasting and erosion; and Tectonic activity, such as deformation, the changing of rocks due to stress mainly from tectonic forces, and orogeny, the process that forms mountains. Historical theories The effects of denudation have been written about since antiquity, although the terms "denudation" and "erosion" have been used interchangeably throughout most of history. In the Age of Enlightenment, scholars began trying to understand how denudation and erosion occurred without mythical or biblical explanations. Throughout the 18th century, scientists theorized valleys are formed by streams running through them, not from floods or other cataclysms. In 1785, Scottish physician James Hutton proposed an Earth history based on observable processes over an unlimited amount of time, which marked a shift from assumptions based on faith to reasoning based on logic and observation. In 1802, John Playfair, a friend of Hutton, published a paper clarifying Hutton's ideas, explaining the basic process of water wearing down the Earth's surface, and describing erosion and chemical weathering. Between 1830 and 1833, Charles Lyell published three volumes of Principles of Geology, which describes the shaping of the surface of Earth by ongoing processes, and which endorsed and established gradual denudation in the wider scientific community. As denudation came into the wider conscience, questions of how denudation occurs and what the result is began arising. Hutton and Playfair suggested over a period of time, a landscape would eventually be worn down to erosional planes at or near sea level, which gave the theory the name "planation". Charles Lyell proposed marine planation, oceans, and ancient shallow seas were the primary driving force behind denudation. 
While surprising given the centuries of observation of fluvial and pluvial erosion, this is more understandable given that early geomorphology was largely developed in Britain, where the effects of coastal erosion are more evident and play a larger role in geomorphic processes. There was more evidence against marine planation than there was for it. By the 1860s, marine planation had largely fallen from favor, a move led by Andrew Ramsay, a former proponent of marine planation who recognized that rain and rivers play a more important role in denudation. In North America during the mid-19th century, advancements in identifying fluvial, pluvial, and glacial erosion were made. The work done in the Appalachians and the American West formed the basis for William Morris Davis to hypothesize peneplanation, even though peneplanation, while compatible with the Appalachians, did not work as well in the more active American West. Peneplanation was a cycle in which young landscapes are produced by uplift and denuded down to sea level, which is the base level. The process would be restarted when the old landscape was uplifted again or when the base level was lowered, producing a new, young landscape. Publication of the Davisian cycle of erosion caused many geologists to begin looking for evidence of planation around the world. Unsatisfied with Davis's cycle due to evidence from the Western United States, Grove Karl Gilbert suggested backwearing of slopes would shape landscapes into pediplains, and W.J. McGee named these landscapes pediments. This later gave the concept the name pediplanation when L.C. King applied it on a global scale. The dominance of the Davisian cycle gave rise to several theories to explain planation, such as eolation and glacial planation, although only etchplanation survived time and scrutiny because it was based on observations and measurements done in different climates around the world and it also explained irregularities in landscapes. The majority of these concepts failed, partly because Joseph Jukes, a popular geologist and professor, separated denudation and uplift in an 1862 publication that had a lasting impact on geomorphology. These concepts also failed because the cycles, Davis's in particular, were generalizations and based on broad observations of the landscape rather than detailed measurements; many of the concepts were developed based on local or specific processes, not regional processes, and they assumed long periods of continental stability. Some scientists opposed the Davisian cycle; one was Grove Karl Gilbert, who, based on measurements over time, realized that denudation is nonlinear; he started developing theories based on fluid dynamics and equilibrium concepts. Another was Walther Penck, who devised a more complex theory that denudation and uplift occurred at the same time, and that landscape formation is based on the ratio between denudation and uplift rates. His theory proposed that geomorphology is based on endogenous and exogenous processes. Penck's theory, while ultimately ignored, relied on denudation and uplift occurring simultaneously and on continental mobility, even though Penck rejected continental drift. The Davisian and Penckian models were heavily debated for a few decades until Penck's was ignored and support for Davis's waned after his death as more critiques were made. One critic was John Leighly, who stated that geologists did not know how landforms were developed, so Davis's theory was built upon a shaky foundation.
From 1945 to 1965, a change in geomorphology research saw a shift from mostly deductive work to detailed experimental designs that used improved technologies and techniques, although this led to research on the details of established theories rather than research into new theories. Through the 1950s and 1960s, as improvements were made in ocean geology and geophysics, it became clearer that Wegener's theory of continental drift was correct and that there is constant movement of parts (the plates) of Earth's surface. Improvements were also made in geomorphology to quantify slope forms and drainage networks, and to find relationships between the form and process, and the magnitude and frequency of geomorphic processes. The final blow to peneplanation came in 1964 when a team led by Luna Leopold published Fluvial Processes in Geomorphology, which linked landforms with measurable precipitation-infiltration runoff processes and concluded that no peneplains exist over large areas in modern times, and that any historical peneplains would have to be proven to exist rather than inferred from modern geology. They also stated that pediments could form across all rock types and regions, although through different processes. Through these findings and improvements in geophysics, the study of denudation shifted from planation to studying which relationships affect denudation (including uplift, isostasy, lithology, and vegetation) and measuring denudation rates around the world. Measurement Denudation is measured as the rate of wearing down of Earth's surface, in inches or centimeters per 1,000 years. This rate is intended as an estimate and often assumes uniform erosion, among other things, to simplify calculations. Assumptions made are often only valid for the landscapes being studied. Measurements of denudation over large areas are performed by averaging the rates of subdivisions. Often, no adjustments are made for human impact, which causes the measurements to be inflated. Calculations have suggested that the soil loss caused by human activity will change previously calculated denudation rates by less than 30%. Denudation rates are usually much lower than the rates of uplift, and average orogeny rates can be eight times the maximum average denudation. The only areas at which there could be equal rates of denudation and uplift are active plate margins with an extended period of continuous deformation. Denudation is typically measured at the catchment scale and can draw on other erosion measurements, which are generally split into dating and survey methods. Techniques for measuring erosion and denudation include stream load measurement, cosmogenic exposure and burial dating, erosion tracking, topographic measurements, surveying the deposition in reservoirs, landslide mapping, chemical fingerprinting, thermochronology, and analysis of sedimentary records in deposition areas. The most common way of measuring denudation is from stream load measurements taken at gauging stations. The suspended load, bed load, and dissolved load are included in measurements. The weight of the load is converted to volumetric units and the load volume is divided by the area of the watershed above the gauging station. An issue with this method of measurement is the high annual variation in fluvial erosion, which can be up to a factor of five between successive years.
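The catchment-averaged conversion just described (load mass converted to volume, then divided by basin area) can be written as a short sketch. This is a minimal illustration in Python, assuming an annual combined load in tonnes, a rock density, and a basin area; the function name and the example numbers are illustrative, not measured data.

# Minimal sketch: catchment-averaged denudation rate from a gauged stream load.
# Inputs are illustrative; real studies sum suspended, bed and dissolved load
# measured at a gauging station over many years.
def denudation_rate_mm_per_kyr(annual_load_tonnes, watershed_area_km2,
                               rock_density_t_per_m3=2.7):
    """Convert an annual stream load to an average surface-lowering rate."""
    load_volume_m3 = annual_load_tonnes / rock_density_t_per_m3   # mass -> volume
    area_m2 = watershed_area_km2 * 1e6                            # km^2 -> m^2
    lowering_m_per_yr = load_volume_m3 / area_m2                  # spread over basin
    return lowering_m_per_yr * 1_000_000                          # m/yr -> mm per 1000 yr

# Example: 1.2 million tonnes per year leaving a 10,000 km^2 basin.
print(round(denudation_rate_mm_per_kyr(1.2e6, 1e4), 1), "mm per 1000 years")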
An important equation for denudation is the stream power law, E = K A^m S^n, where E is the erosion rate, K is the erodibility constant, A is the drainage area, S is the channel gradient, and m and n are exponents that are usually given beforehand or assumed based on the location. Most denudation measurements are based on stream load measurements and analysis of the sediment or the water chemistry. A more recent technique is cosmogenic isotope analysis, which is used in conjunction with stream load measurements and sediment analysis. This technique measures chemical weathering intensity by calculating chemical alteration in molecular proportions. Preliminary research into using cosmogenic isotopes to measure weathering was done by studying the weathering of feldspar and volcanic glass, which contain most of the material found in the Earth's upper crust. The most common isotopes used are 26Al and 10Be; however, 10Be is used more often in these analyses. 10Be is used due to its abundance and, while it is not stable, its half-life of 1.39 million years is relatively stable compared to the thousand- or million-year scale on which denudation is measured. 26Al is used because of the low presence of Al in quartz, making it easy to separate, and because there is no risk of contamination by atmospheric 10Be. This technique was developed because previous denudation-rate studies assumed steady rates of erosion even though such uniformity is difficult to verify in the field and may be invalid for many landscapes; its use to help measure denudation and geologically date events was important. On average, the concentration of undisturbed cosmogenic isotopes in sediment leaving a particular basin is inversely related to the rate at which that basin is eroding. In a rapidly-eroding basin, most rock will be exposed to only a small number of cosmic rays before erosion and transport out of the basin; as a result, isotope concentration will be low. In a slowly-eroding basin, integrated cosmic ray exposure is much greater and isotope concentration will be much higher. Measuring isotopic reservoirs in most areas is difficult with this technique, so uniform erosion is assumed. There is also variation in year-to-year measurements, which can be as high as a factor of three. Problems in measuring denudation involve both the technology used and the environment. Landslides can interfere with denudation measurements in mountainous regions, especially the Himalayas. The two main problems with dating methods are uncertainties in the measurements, both with the equipment used and with assumptions made during measurement, and the relationship between the measured ages and histories of the markers. This relates to the problem of making assumptions based on the measurements being made and the area being measured. Environmental factors can also interfere, such as temperature, atmospheric pressure, humidity, elevation, wind, the speed of light at higher elevations (if using lasers or time-of-flight measurements), instrument drift, chemical erosion, and, for cosmogenic isotopes, climate and snow or glacier coverage. When studying denudation, the Stadler effect, which states that measurements over short time periods show higher accumulation rates than measurements over longer time periods, should be considered.
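The stream power law above can be evaluated directly once K, m and n have been calibrated or assumed for a site. A minimal Python sketch follows; the constant K and the exponents used here are illustrative placeholders, not values from the literature.

# Sketch of the stream power law E = K * A^m * S^n with illustrative constants.
# K, m and n are site-specific and must be calibrated or assumed beforehand.
def stream_power_erosion(drainage_area_m2, channel_gradient, K=2e-7, m=0.5, n=1.0):
    """Erosion rate E (m/yr) from drainage area A (m^2) and gradient S (m/m)."""
    return K * drainage_area_m2**m * channel_gradient**n

# Example: a 500 km^2 drainage area upstream of a reach with a 2% gradient.
E = stream_power_erosion(500e6, 0.02)
print(f"{E * 1000:.3f} mm/yr")   # convert m/yr to mm/yr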
In a study by James Gilluly, the presented data suggested that the denudation rate has stayed roughly the same throughout the Cenozoic era based on geological evidence; however, given estimates of denudation rates at the time of Gilluly's study and the United States' elevation, it would take only 11-12 million years to erode North America, far less than the 66 million years of the Cenozoic. The research on denudation is primarily done in river basins and in mountainous regions like the Himalayas because these are very geologically active regions, which allows for research on the relationship between uplift and denudation. There is also research on the effects of denudation on karst because only about 30% of chemical weathering from water occurs on the surface. Denudation has a large impact on karst and landscape evolution because the most rapid changes to landscapes occur when there are changes to subterranean structures. Other research examines the factors that affect denudation rates, mostly how climate and vegetation impact denudation. Research is also being done to find the relationship between denudation and isostasy; the more denudation occurs, the lighter the crust becomes in an area, which allows for uplift. The work is primarily trying to determine a ratio between denudation and uplift so that better estimates can be made of changes in the landscape. In 2016 and 2019, research was conducted that attempted to apply denudation rates to improve the stream power law so it can be used more effectively. Examples Denudation exposes deep subvolcanic structures on the present surface of the area where volcanic activity once occurred. Subvolcanic structures such as volcanic plugs and dikes are exposed by denudation. Other examples include: Earthquakes causing landslides; Haloclasty, the build-up of salt in cracks in rocks leading to erosion and weathering; Ice accumulating in the cracks of rocks; and Microorganisms contributing to weathering through cellular respiration. References Geomorphology Geological processes
VAM (bicycling)
VAM is the abbreviation for the Italian term velocità ascensionale media, translated in English to mean "average ascent speed" or "mean ascent velocity", but usually referred to as VAM. It is also referred to by the English backronym "Vertical Ascent in Meters". The term, which was coined by Italian physician and cycling coach Michele Ferrari, is the speed of elevation gain, usually stated in units of metres per hour. Background VAM is a parameter used in cycling as a measure of fitness and speed; it is useful for relatively objective comparisons of performances and estimating a rider's power output per kilogram of body mass, which is one of the most important qualities of a cyclist who competes in stage races and other mountainous events. Dr. Michele Ferrari also stated that VAM values rise exponentially with every gradient increase. For example, a VAM of 1180 m/h for a 64 kg rider on a 5% gradient is equivalent to a VAM of 1400 m/h on a 10% gradient or a VAM of 1675 m/h on a 13% gradient. Ambient conditions (e.g. friction, air resistance) have less effect on steeper slopes (they absorb less power) since speeds are lower on steeper slopes. The acronym VAM is not truly expanded in English, where many think the V stands in some way for vertical and the M represents metres, for instance "Vertical Ascent Metres/Hour." Ferrari says, "I called this parameter Average Ascent Speed ('VAM' in its Italian abbreviation from Velocità Ascensionale Media)." A direct translation of "velocità ascensionale media" is "mean (average) ascent velocity", leading to an expansion of the acronym in English as Velocity, Ascent, Mean. Definition VAM is calculated the following way: VAM = (metres ascended × 60) / minutes it took to ascend. A standard unit term with the same meaning is Vm/h, vertical metres per hour; the two are used interchangeably. Relationship to relative power output Relative power means power P per body mass m. Without friction and extra mass (the bicycle), the relative power would be the ascent rate times the acceleration of gravity g: P/m = VAM × g (with VAM expressed in metres per second). With g = 9.81 m/s2, this is equivalent to Relative power (watts/kg) = VAM (metres/hour) / 367. Including the power necessary for the extra mass and the power dissipated by friction leads to a lower number in the denominator. An empirical relationship is Relative power (watts/kg) = VAM (metres/hour) / (200 + 10 × % grade). Examples 1800+ Vm/h: Chris Froome. 1650-1800 Vm/h: Top 10 / Tour de France GC or mountain stage winner. 1450-1650 Vm/h: Top 20 / Tour de France GC; top 20 on tough mountain stage. 1300-1450 Vm/h: Finishing Tour de France mountain stages in the peloton. 1100-1300 Vm/h: The Autobus Crew. References Cycle sport Velocity
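The definition and the two power relations above translate directly into a few lines of code. The following Python sketch uses them as given; the climb figures in the example are illustrative.

# Sketch: VAM and estimated relative power for a climb, using the relations
# given above. The example climb figures are illustrative.
def vam(metres_ascended, minutes):
    """Average ascent speed in vertical metres per hour."""
    return metres_ascended * 60.0 / minutes

def relative_power_simple(vam_m_per_h):
    """Gravity-only estimate: P/m = VAM * g, roughly VAM / 367 W/kg."""
    return vam_m_per_h / 367.0

def relative_power_empirical(vam_m_per_h, grade_percent):
    """Empirical relation accounting for bike mass and friction."""
    return vam_m_per_h / (200.0 + 10.0 * grade_percent)

# Example: 850 m of climbing in 30 minutes on an 8% average gradient.
v = vam(850, 30)                                        # 1700 Vm/h
print(round(v), "Vm/h")
print(round(relative_power_simple(v), 2), "W/kg (gravity only)")
print(round(relative_power_empirical(v, 8), 2), "W/kg (empirical)")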
Refraction
In physics, refraction is the redirection of a wave as it passes from one medium to another. The redirection can be caused by the wave's change in speed or by a change in the medium. Refraction of light is the most commonly observed phenomenon, but other waves such as sound waves and water waves also experience refraction. How much a wave is refracted is determined by the change in wave speed and the initial direction of wave propagation relative to the direction of change in speed. For light, refraction follows Snell's law, which states that, for a given pair of media, the ratio of the sines of the angle of incidence and angle of refraction is equal to the ratio of phase velocities in the two media, or equivalently, to the ratio of the refractive indices of the two media: sin θ1 / sin θ2 = v1 / v2 = n2 / n1. Optical prisms and lenses use refraction to redirect light, as does the human eye. The refractive index of materials varies with the wavelength of light, and thus the angle of refraction also varies correspondingly. This is called dispersion and causes prisms and rainbows to divide white light into its constituent spectral colors. General explanation A correct explanation of refraction involves two separate parts, both a result of the wave nature of light. Light slows as it travels through a medium other than vacuum (such as air, glass or water). This is not because of scattering or absorption. Rather it is because, as an electromagnetic oscillation, light itself causes other electrically charged particles, such as electrons, to oscillate. The oscillating electrons emit their own electromagnetic waves which interact with the original light. The resulting "combined" wave has wave packets that pass an observer at a slower rate. The light has effectively been slowed. When light returns to a vacuum and there are no electrons nearby, this slowing effect ends and its speed returns to c. When light enters a slower medium at an angle, one side of the wavefront is slowed before the other. This asymmetrical slowing of the light causes it to change the angle of its travel. Once light is within the new medium with constant properties, it travels in a straight line again. Slowing of light As described above, the speed of light is slower in a medium other than vacuum. This slowing applies to any medium such as air, water, or glass, and is responsible for phenomena such as refraction. When light leaves the medium and returns to a vacuum, and ignoring any effects of gravity, its speed returns to the usual speed of light in vacuum, c. Common explanations for this slowing, based upon the idea of light scattering from, or being absorbed and re-emitted by atoms, are both incorrect. Explanations like these would cause a "blurring" effect in the resulting light, as it would no longer be travelling in just one direction. But this effect is not seen in nature. A correct explanation rests on light's nature as an electromagnetic wave. Because light is an oscillating electrical/magnetic wave, light traveling in a medium causes the electrically charged electrons of the material to also oscillate. (The material's protons also oscillate but, as they are around 2000 times more massive, their movement, and therefore their effect, is far smaller.) A moving electrical charge emits electromagnetic waves of its own. The electromagnetic waves emitted by the oscillating electrons interact with the electromagnetic waves that make up the original light, similar to water waves on a pond, a process known as constructive interference.
When two waves interfere in this way, the resulting "combined" wave may have wave packets that pass an observer at a slower rate. The light has effectively been slowed. When the light leaves the material, this interaction with electrons no longer happens, and therefore the wave packet rate (and therefore its speed) returns to normal. Bending of light Consider a wave going from one material to another where its speed is slower, as in the figure. If it reaches the interface between the materials at an angle, one side of the wave will reach the second material first, and therefore slow down earlier. With one side of the wave going slower, the whole wave will pivot towards that side. This is why a wave will bend away from the surface, or toward the normal, when going into a slower material. In the opposite case of a wave reaching a material where the speed is higher, one side of the wave will speed up and the wave will pivot away from that side. Another way of understanding the same thing is to consider the change in wavelength at the interface. When the wave goes from one material to another where the wave has a different speed, the frequency of the wave will stay the same, but the distance between wavefronts, or wavelength, will change. If the speed is decreased, such as in the figure to the right, the wavelength will also decrease. With a given angle between the wave fronts and the interface, and a change in the distance between the wave fronts, the angle must change over the interface to keep the wave fronts intact. From these considerations the relationship between the angle of incidence θ1, the angle of transmission θ2 and the wave speeds v1 and v2 in the two materials can be derived. This is the law of refraction, or Snell's law, and can be written as sin θ1 / sin θ2 = v1 / v2. The phenomenon of refraction can in a more fundamental way be derived from the 2- or 3-dimensional wave equation. The boundary condition at the interface will then require the tangential component of the wave vector to be identical on the two sides of the interface. Since the magnitude of the wave vector depends on the wave speed, this requires a change in direction of the wave vector. The relevant wave speed in the discussion above is the phase velocity of the wave. This is typically close to the group velocity, which can be seen as the truer speed of a wave, but when they differ it is important to use the phase velocity in all calculations relating to refraction. A wave traveling perpendicular to a boundary, i.e. having its wavefronts parallel to the boundary, will not change direction even if the speed of the wave changes. Dispersion of light Refraction is also responsible for rainbows and for the splitting of white light into a rainbow-spectrum as it passes through a glass prism. Glass and water have higher refractive indexes than air. When a beam of white light passes from air into a material having an index of refraction that varies with frequency (and wavelength), a phenomenon known as dispersion occurs, in which different coloured components of the white light are refracted at different angles, i.e., they bend by different amounts at the interface, so that they become separated. The different colors correspond to different frequencies and different wavelengths. Law For light, the refractive index of a material is more often used than the wave phase speed in the material.
They are directly related through the speed of light in vacuum, c, as n = c/v. In optics, therefore, the law of refraction is typically written as n1 sin θ1 = n2 sin θ2. On water Refraction occurs when light goes through a water surface since water has a refractive index of 1.33 and air has a refractive index of about 1. Looking at a straight object, such as a pencil in the figure here, which is placed at a slant, partially in the water, the object appears to bend at the water's surface. This is due to the bending of light rays as they move from the water to the air. Once the rays reach the eye, the eye traces them back as straight lines (lines of sight). The lines of sight (shown as dashed lines) intersect at a higher position than where the actual rays originated. This causes the pencil to appear higher and the water to appear shallower than it really is. The depth that the water appears to be when viewed from above is known as the apparent depth. This is an important consideration for spearfishing from the surface because it will make the target fish appear to be in a different place, and the fisher must aim lower to catch the fish. Conversely, an object above the water has a higher apparent height when viewed from below the water. The opposite correction must be made by an archer fish. For small angles of incidence (measured from the normal, when sin θ is approximately the same as θ), the ratio of apparent to real depth is the ratio of the refractive index of air to that of water. But, as the angle of incidence approaches 90°, the apparent depth approaches zero, although reflection increases, which limits observation at high angles of incidence. Conversely, the apparent height approaches infinity as the angle of incidence (from below) increases, but even earlier, as the angle of total internal reflection is approached, although the image also fades from view as this limit is approached. Atmospheric The refractive index of air depends on the air density and thus varies with air temperature and pressure. Since the pressure is lower at higher altitudes, the refractive index is also lower, causing light rays to refract towards the Earth's surface when traveling long distances through the atmosphere. This shifts the apparent positions of stars slightly when they are close to the horizon and makes the sun visible before it geometrically rises above the horizon during a sunrise. Temperature variations in the air can also cause refraction of light. This can be seen as a heat haze when hot and cold air is mixed, e.g. over a fire, in engine exhaust, or when opening a window on a cold day. This makes objects viewed through the mixed air appear to shimmer or move around randomly as the hot and cold air moves. This effect is also visible from normal variations in air temperature during a sunny day when using high magnification telephoto lenses, and often limits the image quality in these cases. In a similar way, atmospheric turbulence gives rapidly varying distortions in the images of astronomical telescopes, limiting the resolution of terrestrial telescopes not using adaptive optics or other techniques for overcoming these atmospheric distortions. Air temperature variations close to the surface can give rise to other optical phenomena, such as mirages and Fata Morgana. Most commonly, air heated by a hot road on a sunny day deflects light approaching at a shallow angle towards a viewer. This makes the road appear reflective, giving an illusion of water covering the road.
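The law of refraction and the small-angle apparent-depth ratio just described can be illustrated with a short sketch. This is a minimal Python example using the stated indices for air and water; the 45° incidence angle and 2 m depth are illustrative.

import math

# Sketch of Snell's law (n1*sin θ1 = n2*sin θ2) and the small-angle
# apparent-depth ratio, using the refractive indices given above.
def refraction_angle(theta_incidence_deg, n1, n2):
    """Angle of refraction in degrees; None for total internal reflection."""
    s = n1 * math.sin(math.radians(theta_incidence_deg)) / n2
    if abs(s) > 1.0:
        return None        # only possible when going from a denser to a rarer medium
    return math.degrees(math.asin(s))

n_air, n_water = 1.00, 1.33

# Light entering water at 45 degrees bends toward the normal.
print(round(refraction_angle(45, n_air, n_water), 1), "degrees in water")

# Small-angle apparent depth: real depth scaled by n_air / n_water.
real_depth_m = 2.0
print(round(real_depth_m * n_air / n_water, 2), "m apparent depth")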
Clinical significance In medicine, particularly optometry, ophthalmology and orthoptics, refraction (also known as refractometry) is a clinical test in which a phoropter may be used by the appropriate eye care professional to determine the eye's refractive error and the best corrective lenses to be prescribed. A series of test lenses in graded optical powers or focal lengths are presented to determine which provides the sharpest, clearest vision. Refractive surgery is a medical procedure to treat common vision disorders. Mechanical waves Water Water waves travel slower in shallower water. This can be used to demonstrate refraction in ripple tanks and also explains why waves on a shoreline tend to strike the shore close to a perpendicular angle. As the waves travel from deep water into shallower water near the shore, they are refracted from their original direction of travel to an angle more normal to the shoreline. Sound In underwater acoustics, refraction is the bending or curving of a sound ray that results when the ray passes through a sound speed gradient from a region of one sound speed to a region of a different speed. The amount of ray bending is dependent on the amount of difference between sound speeds, that is, the variation in temperature, salinity, and pressure of the water. Similar acoustics effects are also found in the Earth's atmosphere. The phenomenon of refraction of sound in the atmosphere has been known for centuries. Beginning in the early 1970s, widespread analysis of this effect came into vogue through the designing of urban highways and noise barriers to address the meteorological effects of bending of sound rays in the lower atmosphere. Gallery See also Birefringence (double refraction) Geometrical optics Huygens–Fresnel principle List of indices of refraction Negative refraction Reflection Schlieren photography Seismic refraction Super refraction References External links Reflections and Refractions in Ray Tracing, a simple but thorough discussion of the mathematics behind refraction and reflection. Flash refraction simulation- includes source, Explains refraction and Snell's Law. Physical phenomena Geometrical optics Physical optics
External ballistics
External ballistics or exterior ballistics is the part of ballistics that deals with the behavior of a projectile in flight. The projectile may be powered or un-powered, guided or unguided, spin or fin stabilized, flying through an atmosphere or in the vacuum of space, but most certainly flying under the influence of a gravitational field. Gun-launched projectiles may be unpowered, deriving all their velocity from the propellant's ignition until the projectile exits the gun barrel. However, exterior ballistics analysis also deals with the trajectories of rocket-assisted gun-launched projectiles and gun-launched rockets; and rockets that acquire all their trajectory velocity from the interior ballistics of their on-board propulsion system, either a rocket motor or air-breathing engine, both during their boost phase and after motor burnout. External ballistics is also concerned with the free-flight of other projectiles, such as balls, arrows etc. Forces acting on the projectile When in flight, the main or major forces acting on the projectile are gravity, drag, and if present, wind; if in powered flight, thrust; and if guided, the forces imparted by the control surfaces. In small arms external ballistics applications, gravity imparts a downward acceleration on the projectile, causing it to drop from the line-of-sight. Drag, or the air resistance, decelerates the projectile with a force proportional to the square of the velocity. Wind makes the projectile deviate from its trajectory. During flight, gravity, drag, and wind have a major impact on the path of the projectile, and must be accounted for when predicting how the projectile will travel. For medium to longer ranges and flight times, besides gravity, air resistance and wind, several intermediate or meso variables described in the external factors paragraph have to be taken into account for small arms. Meso variables can become significant for firearms users that have to deal with angled shot scenarios or extended ranges, but are seldom relevant at common hunting and target shooting distances. For long to very long small arms target ranges and flight times, minor effects and forces such as the ones described in the long range factors paragraph become important and have to be taken into account. The practical effects of these minor variables are generally irrelevant for most firearms users, since normal group scatter at short and medium ranges prevails over the influence these effects exert on projectile trajectories. At extremely long ranges, artillery must fire projectiles along trajectories that are not even approximately straight; they are closer to parabolic, although air resistance affects this. Extreme long range projectiles are subject to significant deflections, depending on circumstances, from the line toward the target; and all external factors and long range factors must be taken into account when aiming. In very large-calibre artillery cases, like the Paris Gun, very subtle effects that are not covered in this article can further refine aiming solutions. In the case of ballistic missiles, the altitudes involved have a significant effect as well, with part of the flight taking place in a near-vacuum well above a rotating Earth, steadily moving the target from where it was at launch time. 
Stabilizing non-spherical projectiles during flight Two methods can be employed to stabilize non-spherical projectiles during flight: Projectiles like arrows or arrow like sabots such as the M829 Armor-Piercing, Fin-Stabilized, Discarding Sabot (APFSDS) achieve stability by forcing their center of pressure (CP) behind their center of mass (CM) with tail surfaces. The CP behind the CM condition yields stable projectile flight, meaning the projectile will not overturn during flight through the atmosphere due to aerodynamic forces. Projectiles like small arms bullets and artillery shells must deal with their CP being in front of their CM, which destabilizes these projectiles during flight. To stabilize such projectiles the projectile is spun around its longitudinal (leading to trailing) axis. The spinning mass creates gyroscopic forces that keep the bullet's length axis resistant to the destabilizing overturning torque of the CP being in front of the CM. Main effects in external ballistics Projectile/bullet drop and projectile path The effect of gravity on a projectile in flight is often referred to as projectile drop or bullet drop. It is important to understand the effect of gravity when zeroing the sighting components of a gun. To plan for projectile drop and compensate properly, one must understand parabolic shaped trajectories. Projectile/bullet drop In order for a projectile to impact any distant target, the barrel must be inclined to a positive elevation angle relative to the target. This is due to the fact that the projectile will begin to respond to the effects of gravity the instant it is free from the mechanical constraints of the bore. The imaginary line down the center axis of the bore and out to infinity is called the line of departure and is the line on which the projectile leaves the barrel. Due to the effects of gravity a projectile can never impact a target higher than the line of departure. When a positively inclined projectile travels downrange, it arcs below the line of departure as it is being deflected off its initial path by gravity. Projectile/Bullet drop is defined as the vertical distance of the projectile below the line of departure from the bore. Even when the line of departure is tilted upward or downward, projectile drop is still defined as the distance between the bullet and the line of departure at any point along the trajectory. Projectile drop does not describe the actual trajectory of the projectile. Knowledge of projectile drop however is useful when conducting a direct comparison of two different projectiles regarding the shape of their trajectories, comparing the effects of variables such as velocity and drag behavior. Projectile/bullet path For hitting a distant target an appropriate positive elevation angle is required that is achieved by angling the line of sight from the shooter's eye through the centerline of the sighting system downward toward the line of departure. This can be accomplished by simply adjusting the sights down mechanically, or by securing the entire sighting system to a sloped mounting having a known downward slope, or by a combination of both. This procedure has the effect of elevating the muzzle when the barrel must be subsequently raised to align the sights with the target. A projectile leaving a muzzle at a given elevation angle follows a ballistic trajectory whose characteristics are dependent upon various factors such as muzzle velocity, gravity, and aerodynamic drag. 
This ballistic trajectory is referred to as the bullet path. If the projectile is spin stabilized, aerodynamic forces will also predictably arc the trajectory slightly to the right, if the rifling employs "right-hand twist." Some barrels are cut with left-hand twist, and the bullet will arc to the left, as a result. Therefore, to compensate for this path deviation, the sights also have to be adjusted left or right, respectively. A constant wind also predictably affects the bullet path, pushing it slightly left or right, and a little bit more up and down, depending on the wind direction. The magnitude of these deviations are also affected by whether the bullet is on the upward or downward slope of the trajectory, due to a phenomenon called "yaw of repose," where a spinning bullet tends to steadily and predictably align slightly off center from its point mass trajectory. Nevertheless, each of these trajectory perturbations are predictable once the projectile aerodynamic coefficients are established, through a combination of detailed analytical modeling and test range measurements. Projectile/bullet path analysis is of great use to shooters because it allows them to establish ballistic tables that will predict how much vertical elevation and horizontal deflection corrections must be applied to the sight line for shots at various known distances. The most detailed ballistic tables are developed for long range artillery and are based on six-degree-of-freedom trajectory analysis, which accounts for aerodynamic behavior along the three axial directions—elevation, range, and deflection—and the three rotational directions—pitch, yaw, and spin. For small arms applications, trajectory modeling can often be simplified to calculations involving only four of these degrees-of-freedom, lumping the effects of pitch, yaw and spin into the effect of a yaw-of-repose to account for trajectory deflection. Once detailed range tables are established, shooters can relatively quickly adjust sights based on the range to target, wind, air temperature and humidity, and other geometric considerations, such as terrain elevation differences. Projectile path values are determined by both the sight height, or the distance of the line of sight above the bore centerline, and the range at which the sights are zeroed, which in turn determines the elevation angle. A projectile following a ballistic trajectory has both forward and vertical motion. Forward motion is slowed due to air resistance, and in point mass modeling the vertical motion is dependent on a combination of the elevation angle and gravity. Initially, the projectile is rising with respect to the line of sight or the horizontal sighting plane. The projectile eventually reaches its apex (highest point in the trajectory parabola) where the vertical speed component decays to zero under the effect of gravity, and then begins to descend, eventually impacting the earth. The farther the distance to the intended target, the greater the elevation angle and the higher the apex. The projectile path crosses the horizontal sighting plane two times. The point closest to the gun occurs while the bullet is climbing through the line of sight and is called the near zero. The second point occurs as the projectile is descending through the line of sight. It is called the far zero and defines the current sight in distance for the gun. Projectile path is described numerically as distances above or below the horizontal sighting plane at various points along the trajectory. 
This is in contrast to projectile drop which is referenced to the plane containing the line of departure regardless of the elevation angle. Since each of these two parameters uses a different reference datum, significant confusion can result because even though a projectile is tracking well below the line of departure it can still be gaining actual and significant height with respect to the line of sight as well as the surface of the Earth in the case of a horizontal or near horizontal shot taken over flat terrain. Maximum point-blank range and battle zero Knowledge of the projectile drop and path has some practical uses to shooters even if it does not describe the actual trajectory of the projectile. For example, if the vertical projectile position over a certain range reach is within the vertical height of the target area the shooter wants to hit, the point of aim does not necessarily need to be adjusted over that range; the projectile is considered to have a sufficiently flat point-blank range trajectory for that particular target. Also known as "battle zero", maximum point-blank range is also of importance to the military. Soldiers are instructed to fire at any target within this range by simply placing their weapon's sights on the center of mass of the enemy target. Any errors in range estimation are tactically irrelevant, as a well-aimed shot will hit the torso of the enemy soldier. The current trend for elevated sights and higher-velocity cartridges in assault rifles is in part due to a desire to extend the maximum point-blank range, which makes the rifle easier to use. Drag resistance Mathematical models, such as computational fluid dynamics, are used for calculating the effects of drag or air resistance; they are quite complex and not yet completely reliable, but research is ongoing. The most reliable method, therefore, of establishing the necessary projectile aerodynamic properties to properly describe flight trajectories is by empirical measurement. Fixed drag curve models generated for standard-shaped projectiles Use of ballistics tables or ballistics software based on the Mayevski/Siacci method and G1 drag model, introduced in 1881, are the most common method used to work with external ballistics. Projectiles are described by a ballistic coefficient, or BC, which combines the air resistance of the bullet shape (the drag coefficient) and its sectional density (a function of mass and bullet diameter). The deceleration due to drag that a projectile with mass m, velocity v, and diameter d will experience is proportional to 1/BC, 1/m, v² and d². The BC gives the ratio of ballistic efficiency compared to the standard G1 projectile, which is a fictitious projectile with a flat base, a length of 3.28 calibers/diameters, and a 2 calibers/diameters radius tangential curve for the point. The G1 standard projectile originates from the "C" standard reference projectile defined by the German steel, ammunition and armaments manufacturer Krupp in 1881. The G1 model standard projectile has a BC of 1. The French Gâvre Commission decided to use this projectile as their first reference projectile, giving the G1 name. Sporting bullets, with a calibre d ranging from 0.177 to 0.50 inches (4.50 to 12.7 mm), have G1 BC's in the range 0.12 to slightly over 1.00, with 1.00 being the most aerodynamic, and 0.12 being the least. 
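As a rough illustration of how the quantities in this paragraph combine, the sketch below computes a sectional density and a G1-style BC from an assumed form factor, and shows the v²/BC scaling of drag deceleration. It is a sketch only; the bullet weight, diameter and form factor are illustrative assumptions, and real BC values come from manufacturers or radar data.

# Sketch: G1-style ballistic coefficient from sectional density and an assumed
# form factor, plus the v^2 / BC scaling of drag deceleration noted above.
def sectional_density(weight_grains, diameter_inches):
    """Sectional density in lb/in^2 (7000 grains per pound)."""
    return (weight_grains / 7000.0) / diameter_inches**2

def ballistic_coefficient(weight_grains, diameter_inches, form_factor_i):
    """BC = SD / i, where i compares the bullet's drag to the G1 reference."""
    return sectional_density(weight_grains, diameter_inches) / form_factor_i

# Example: a 175 grain, .308 in diameter bullet with an assumed G1 form factor of 0.90.
bc = ballistic_coefficient(175, 0.308, 0.90)
print(round(bc, 3))                      # about 0.29

# Drag deceleration scales with v^2 / BC: doubling BC halves the velocity-loss
# rate at the same speed.
v = 800.0                                # m/s, illustrative
print((v**2 / bc) / (v**2 / (2 * bc)))   # -> 2.0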
Very-low-drag bullets with BC's ≥ 1.10 can be designed and produced on CNC precision lathes out of mono-metal rods, but they often have to be fired from custom made full bore rifles with special barrels. Sectional density is a very important aspect of a projectile or bullet, and is, for a round projectile like a bullet, the ratio of bullet mass to frontal surface area (half the bullet diameter squared, times pi). Since, for a given bullet shape, frontal surface increases as the square of the calibre, and mass increases as the cube of the diameter, sectional density grows linearly with bore diameter. Since BC combines shape and sectional density, a half scale model of the G1 projectile will have a BC of 0.5, and a quarter scale model will have a BC of 0.25. Since different projectile shapes will respond differently to changes in velocity (particularly between supersonic and subsonic velocities), a BC provided by a bullet manufacturer will be an average BC that represents the common range of velocities for that bullet. For rifle bullets, this will probably be a supersonic velocity; for pistol bullets, it will probably be subsonic. For projectiles that travel through the supersonic, transonic and subsonic flight regimes, BC is not well approximated by a single constant, but is considered to be a function BC(M) of the Mach number M; here M equals the projectile velocity divided by the speed of sound. During the flight of the projectile, M will decrease, and therefore (in most cases) the BC will also decrease. Most ballistic tables or software take for granted that one specific drag function correctly describes the drag and hence the flight characteristics of a bullet related to its ballistic coefficient. Those models do not differentiate between wadcutter, flat-based, spitzer, boat-tail, very-low-drag, etc. bullet types or shapes. They assume one invariable drag function as indicated by the published BC. Several drag curve models optimized for several standard projectile shapes are however available. The resulting fixed drag curve models for several standard projectile shapes or types are referred to as the: G1 or Ingalls (flatbase with 2 caliber (blunt) nose ogive - by far the most popular) G2 (Aberdeen J projectile) G5 (short 7.5° boat-tail, 6.19 calibers long tangent ogive) G6 (flatbase, 6 calibers long secant ogive) G7 (long 7.5° boat-tail, 10 calibers tangent ogive, preferred by some manufacturers for very-low-drag bullets) G8 (flatbase, 10 calibers long secant ogive) GL (blunt lead nose) How different speed regimes affect .338 calibre rifle bullets can be seen in the .338 Lapua Magnum product brochure, which gives Doppler radar-established G1 BC data. The reason for publishing data like those in this brochure is that the Siacci/Mayevski G1 model cannot be tuned for the drag behavior of a specific projectile whose shape significantly deviates from the used reference projectile shape. Some ballistic software designers, who based their programs on the Siacci/Mayevski G1 model, give the user the possibility to enter several different G1 BC constants for different speed regimes to calculate ballistic predictions that more closely match a bullet's flight behavior at longer ranges compared to calculations that use only one BC constant. The above example illustrates the central problem that fixed drag curve models have.
These models will only yield satisfactory accurate predictions as long as the projectile of interest has the same shape as the reference projectile or a shape that closely resembles the reference projectile. Any deviation from the reference projectile shape will result in less accurate predictions. How much a projectile deviates from the applied reference projectile is mathematically expressed by the form factor (i). The form factor can be used to compare the drag experienced by a projectile of interest to the drag experienced by the employed reference projectile at a given velocity (range). The problem that the actual drag curve of a projectile can significantly deviate from the fixed drag curve of any employed reference projectile systematically limits the traditional drag resistance modeling approach. The relative simplicity however makes that it can be explained to and understood by the general shooting public and hence is also popular amongst ballistic software prediction developers and bullet manufacturers that want to market their products. More advanced drag models Pejsa model Another attempt at building a ballistic calculator is the model presented in 1980 by Dr. Arthur J. Pejsa. Dr. Pejsa claims on his website that his method was consistently capable of predicting (supersonic) rifle bullet trajectories within 2.5 mm (0.1 in) and bullet velocities within 0.3 m/s (1 ft/s) out to 914 m (1,000 yd) in theory. The Pejsa model is a closed-form solution. The Pejsa model can predict a projectile within a given flight regime (for example the supersonic flight regime) with only two velocity measurements, a distance between said velocity measurements, and a slope or deceleration constant factor. The model allows the drag curve to change slopes (true/calibrate) or curvature at three different points. Down range velocity measurement data can be provided around key inflection points allowing for more accurate calculations of the projectile retardation rate, very similar to a Mach vs CD table. The Pejsa model allows the slope factor to be tuned to account for subtle differences in the retardation rate of different bullet shapes and sizes. It ranges from 0.1 (flat-nose bullets) to 0.9 (very-low-drag bullets). If this slope or deceleration constant factor is unknown a default value of 0.5 is used. With the help of test firing measurements the slope constant for a particular bullet/rifle system/shooter combination can be determined. These test firings should preferably be executed at 60% and for extreme long range ballistic predictions also at 80% to 90% of the supersonic range of the projectiles of interest, staying away from erratic transonic effects. With this the Pejsa model can easily be tuned. A practical downside of the Pejsa model is that accurate projectile specific down range velocity measurements to provide these better predictions can not be easily performed by the vast majority of shooting enthusiasts. An average retardation coefficient can be calculated for any given slope constant factor if velocity data points are known and distance between said velocity measurements is known. Obviously this is true only within the same flight regime. With velocity actual speed is meant, as velocity is a vector quantity and speed is the magnitude of the velocity vector. Because the power function does not have constant curvature a simple chord average cannot be used. The Pejsa model uses a weighted average retardation coefficient weighted at 0.25 range. The closer velocity is more heavily weighted. 
The retardation coefficient is measured in feet whereas range is measured in yards, hence 0.25 × 3.0 = 0.75; in some places 0.8 rather than 0.75 is used. The 0.8 comes from rounding in order to allow easy entry on hand calculators. Since the Pejsa model does not use a simple chord weighted average, two velocity measurements are used to find the chord average retardation coefficient at midrange between the two velocity measurement points, limiting it to short range accuracy. In order to find the starting retardation coefficient, Dr. Pejsa provides two separate equations in his two books. The first involves the power function. The second equation is identical to the one used to find the weighted average at R / 4; add N × (R/2), where R is the range in feet, to the chord average retardation coefficient at midrange, where N is the slope constant factor. After the starting retardation coefficient is found, the opposite procedure is used in order to find the weighted average at R / 4: the starting retardation coefficient minus N × (R/4). In other words, N is used as the slope of the chord line. Dr. Pejsa states that he expanded his drop formula in a power series in order to prove that the weighted average retardation coefficient at R / 4 was a good approximation. For this Dr. Pejsa compared the power series expansion of his drop formula to some other unnamed drop formula's power expansion to reach his conclusions. The fourth term in both power series matched when the retardation coefficient at 0.25 range was used in Pejsa's drop formula. The fourth term was also the first term to use N. The higher terms involving N were insignificant and disappeared at N = 0.36, which according to Dr. Pejsa was a lucky coincidence making for an exceedingly accurate linear approximation, especially for N's around 0.36. If a retardation coefficient function is used, exact average values for any N can be obtained, because from calculus it is trivial to find the average of any integrable function. Dr. Pejsa states that the retardation coefficient can be modeled by C × V^N, where C is a fitting coefficient which disappears during the derivation of the drop formula and N is the slope constant factor. The retardation coefficient equals the velocity squared divided by the retardation rate A. Using an average retardation coefficient allows the Pejsa model to be a closed-form expression within a given flight regime. In order to allow the use of a G1 ballistic coefficient rather than velocity data, Dr. Pejsa provided two reference drag curves. The first reference drag curve is based purely on the Siacci/Mayevski retardation rate function. The second reference drag curve is adjusted to equal the Siacci/Mayevski retardation rate function at a projectile velocity of 2600 fps (792.5 m/s) using a .30-06 Springfield Cartridge, Ball, Caliber .30 M2 rifle spitzer bullet with a slope or deceleration constant factor of 0.5 in the supersonic flight regime. In other flight regimes the second Pejsa reference drag curve model uses slope constant factors of 0.0 or -4.0. These deceleration constant factors can be verified by backing out Pejsa's formulas (the drag curve segments fit the form V^(2 - N) / C and the retardation coefficient curve segments fit the form V^2 / (V^(2 - N) / C) = C × V^N, where C is a fitting coefficient).
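The bookkeeping described above can be sketched in a few lines. In the Python sketch below, the +N × (R/2) and −N × (R/4) steps follow the description in the text; the way the chord-average retardation coefficient is obtained from the two velocity measurements (as the distance over which velocity decays by a factor of e) is an assumption made here for illustration, as are the example numbers.

import math

# Sketch of the Pejsa-style retardation-coefficient bookkeeping described
# above. All distances are in feet. The chord-average formula below is an
# assumption for illustration, not Pejsa's published equation.
def chord_average_F(v_near_fps, v_far_fps, range_ft):
    """Assumed chord-average retardation coefficient at midrange (feet)."""
    return range_ft / math.log(v_near_fps / v_far_fps)

def starting_F(F_mid_ft, slope_N, range_ft):
    """Starting retardation coefficient: midrange value plus N * (R/2)."""
    return F_mid_ft + slope_N * (range_ft / 2.0)

def weighted_average_F(F_start_ft, slope_N, range_ft):
    """Weighted average at R/4: starting value minus N * (R/4)."""
    return F_start_ft - slope_N * (range_ft / 4.0)

# Illustrative numbers: 2800 fps at the muzzle, 2200 fps at 1800 ft,
# with the default slope constant N = 0.5.
R = 1800.0
F_mid = chord_average_F(2800.0, 2200.0, R)
F_0 = starting_F(F_mid, 0.5, R)
print(round(F_mid), round(F_0), round(weighted_average_F(F_0, 0.5, R)))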
The empirical test data Pejsa used to determine the exact shape of his chosen reference drag curve and the pre-defined mathematical function that returns the retardation coefficient at a given Mach number was provided by the US military for the Cartridge, Ball, Caliber .30 M2 bullet. The calculation of the retardation coefficient function also involves air density, which Pejsa did not mention explicitly. The Siacci/Mayevski G1 model uses a deceleration parametrization established at 60 °F, 30 inHg and 67% humidity (air density ρ = 1.2209 kg/m3). Dr. Pejsa suggests using the second drag curve because the Siacci/Mayevski G1 drag curve does not provide a good fit for modern spitzer bullets. To obtain relevant retardation coefficients for optimal long range modeling, Dr. Pejsa suggested using accurate projectile-specific down range velocity measurement data for a particular projectile to empirically derive the average retardation coefficient rather than using a reference drag curve derived average retardation coefficient. Further, he suggested using ammunition with reduced propellant loads to empirically test actual projectile flight behavior at lower velocities. When working with reduced propellant loads, utmost care must be taken to avoid dangerous or catastrophic conditions (detonations), which can occur when firing experimental loads in firearms. Manges model Although not as well known as the Pejsa model, an additional alternative ballistic model was presented in 1989 by Colonel Duff Manges (U.S. Army, Retired) at the American Defense Preparedness Association (ADPA) 11th International Ballistic Symposium held at the Brussels Congress Center, Brussels, Belgium, May 9–11, 1989. A paper titled "Closed Form Trajectory Solutions for Direct Fire Weapons Systems" appears in the proceedings, Volume 1, Propulsion Dynamics, Launch Dynamics, Flight Dynamics, pages 665–674. Originally conceived to model projectile drag for 120 mm tank gun ammunition, the novel drag coefficient formula has been applied subsequently to ballistic trajectories of center-fired rifle ammunition with results comparable to those claimed for the Pejsa model. The Manges model uses a first principles theoretical approach that eschews "G" curves and "ballistic coefficients" based on the standard G1 and other similarity curves. The theoretical description has three main parts. The first is to develop and solve a formulation of the two dimensional differential equations of motion governing flat trajectories of point mass projectiles by defining mathematically a set of quadratures that permit closed form solutions for the trajectory differential equations of motion. The second is to generate a sequence of successive approximation drag coefficient functions that converge rapidly to actual observed drag data. The vacuum trajectory, simplified aerodynamic, d'Antonio, and Euler drag law models are special cases. The Manges drag law thereby provides a unifying influence with respect to earlier models used to obtain two dimensional closed form solutions to the point-mass equations of motion. The third is to describe a least squares fitting procedure for obtaining the new drag functions from observed experimental data. The author claims that results show excellent agreement with six degree of freedom numerical calculations for modern tank ammunition and available published firing tables for center-fired rifle ammunition having a wide variety of shapes and sizes.
A Microsoft Excel application has been authored that uses least squares fits of wind tunnel acquired tabular drag coefficients. Alternatively, manufacturer-supplied ballistic trajectory data or Doppler-acquired velocity data can be fitted to calibrate the model. The Excel application then employs custom macroinstructions to calculate the trajectory variables of interest. A modified 4th order Runge–Kutta integration algorithm is used. Like Pejsa, Colonel Manges claims center-fired rifle accuracies to the nearest one tenth of an inch for bullet position and the nearest foot per second for the projectile velocity. The Proceedings of the 11th International Ballistic Symposium are available through the National Defense Industrial Association (NDIA) at the website http://www.ndia.org/Resources/Pages/Publication_Catalog.aspx . Six degrees of freedom model There are also advanced professional ballistic models like PRODAS available. These are based on six degrees of freedom (6 DoF) calculations. 6 DoF modeling accounts for x, y, and z position in space along with the projectile's pitch, yaw, and roll rates. 6 DoF modeling needs such elaborate data input, knowledge of the employed projectiles, and expensive data collection and verification methods that it is impractical for non-professional ballisticians, but not impossible for the curious, computer literate, and mathematically inclined. Semi-empirical aeroprediction models have been developed that reduced extensive test range data on a wide variety of projectile shapes, normalizing dimensional input geometries to calibers; accounting for nose length and radius, body length, and boattail size; and allowing the full set of 6-dof aerodynamic coefficients to be estimated. Early research on spin-stabilized aeroprediction software resulted in the SPINNER computer program. The FINNER aeroprediction code calculates 6-dof inputs for fin stabilized projectiles. Solids modeling software that determines the projectile parameters of mass, center of gravity, and axial and transverse moments of inertia necessary for stability analysis is also readily available, and simple to program. Finally, algorithms for 6-dof numerical integration suitable for a 4th order Runge-Kutta method are readily available. All that is required for the amateur ballistician to investigate the finer analytical details of projectile trajectories, along with bullet nutation and precession behavior, is determination and computer programming. Nevertheless, for the small arms enthusiast, aside from academic curiosity, one will discover that being able to predict trajectories to 6-dof accuracy is probably not of practical significance compared to more simplified point mass trajectories based on published bullet ballistic coefficients. 6 DoF is generally used by the aerospace and defense industry and military organizations that study the ballistic behavior of a limited number of (intended) military issue projectiles. Calculated 6 DoF trends can be incorporated as correction tables in more conventional ballistic software applications. Though 6 DoF modeling and software applications have been used by professional, well-equipped organizations for decades, the computing power restrictions of mobile computing devices like (ruggedized) personal digital assistants, tablet computers or smartphones have impaired field use, as calculations generally have to be done on the fly.
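For readers who want to experiment with the simplified point mass approach mentioned above, the sketch below integrates a flat-fire trajectory with a classical 4th-order Runge-Kutta step. It is only an illustration: a single constant drag factor stands in for a proper G1 drag-table lookup, and all numbers (muzzle velocity, drag factor, step size) are assumptions.

import math

# Minimal point-mass trajectory sketch (not 6 DoF): gravity plus quadratic
# drag with a single assumed drag factor standing in for a drag-table lookup.
G = 9.81          # m/s^2
K_DRAG = 0.0004   # 1/m, assumed constant retardation factor (illustrative)

def deriv(state):
    """state = (x, y, vx, vy); returns the time derivatives."""
    x, y, vx, vy = state
    v = math.hypot(vx, vy)
    ax = -K_DRAG * v * vx          # drag opposes motion, magnitude ~ k*v^2
    ay = -G - K_DRAG * v * vy
    return (vx, vy, ax, ay)

def rk4_step(state, dt):
    """Classical 4th-order Runge-Kutta step."""
    k1 = deriv(state)
    k2 = deriv(tuple(s + 0.5 * dt * k for s, k in zip(state, k1)))
    k3 = deriv(tuple(s + 0.5 * dt * k for s, k in zip(state, k2)))
    k4 = deriv(tuple(s + dt * k for s, k in zip(state, k3)))
    return tuple(s + dt / 6.0 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

# Fired horizontally at 800 m/s; report drop and remaining speed every 100 m.
state, dt, next_mark = (0.0, 0.0, 800.0, 0.0), 0.001, 100.0
while state[0] < 500.0:
    state = rk4_step(state, dt)
    if state[0] >= next_mark:
        speed = math.hypot(state[2], state[3])
        print(f"{next_mark:5.0f} m: drop {-state[1] * 100:6.1f} cm, v {speed:5.0f} m/s")
        next_mark += 100.0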
In 2016 the Scandinavian ammunition manufacturer Nammo Lapua Oy released Lapua Ballistics, free ballistic software based on a 6 DoF calculation model. The software is distributed as a mobile app only and is available for Android and iOS devices. The employed 6 DoF model is, however, limited to Lapua bullets, as a 6 DoF solver needs bullet-specific drag coefficient (Cd)/Doppler radar data and the geometric dimensions of the projectile(s) of interest. For other bullets the Lapua Ballistics solver is limited to and based on G1 or G7 ballistic coefficients and the Mayevski/Siacci method. Artillery software suites Military organizations have developed ballistic models like the NATO Armament Ballistic Kernel (NABK) for fire-control systems for artillery like the SG2 Shareable (Fire Control) Software Suite (S4) from the NATO Army Armaments Group (NAAG). The NATO Armament Ballistic Kernel is a 4-DoF modified point mass model. This is a compromise between a simple point mass model and a computationally intensive 6-DoF model. A six- and seven-degree-of-freedom standard called BALCO has also been developed within NATO working groups. BALCO is a trajectory simulation program based on the mathematical model defined by the NATO Standardization Recommendation 4618. The primary goal of BALCO is to compute high-fidelity trajectories for both conventional axisymmetric and precision-guided projectiles featuring control surfaces. The BALCO trajectory model is a FORTRAN 2003 program that implements the following features:
6/7-DoF equations of motion
7th-order Runge-Kutta-Fehlberg integration
Earth models
Atmosphere models
Aerodynamic models
Thrust and Base Burn models
Actuator models
The predictions these models yield are subject to comparison study. Doppler radar measurements For the precise establishment of drag or air resistance effects on projectiles, Doppler radar measurements are required. Weibel 1000e or Infinition BR-1001 Doppler radars are used by governments, professional ballisticians, defence forces and a few ammunition manufacturers to obtain real-world data on the flight behavior of projectiles of their interest. Correctly established state of the art Doppler radar measurements can determine the flight behavior of projectiles as small as airgun pellets in three-dimensional space to within a few millimetres' accuracy. The gathered data regarding the projectile deceleration can be derived and expressed in several ways, such as ballistic coefficients (BC) or drag coefficients (Cd). Because a spinning projectile experiences both precession and nutation about its center of gravity as it flies, further data reduction of Doppler radar measurements is required to separate yaw induced drag and lift coefficients from the zero yaw drag coefficient, in order to make measurements fully applicable to 6-dof trajectory analysis. Doppler radar measurement results for a lathe-turned monolithic solid .50 BMG very-low-drag bullet (Lost River J40 .510-773 grain monolithic solid bullet / twist rate 1:15 in) look like this: The initial rise in the BC value is attributed to a projectile's always present yaw and precession out of the bore. The test results were obtained from many shots, not just a single shot. The bullet was assigned 1.062 for its BC number by the bullet's manufacturer Lost River Ballistic Technologies.
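As a rough illustration of how a measured velocity history is reduced to drag coefficients, the sketch below applies the zero-yaw point-mass relation Cd = −2·m·(dv/dt)/(ρ·v²·A) to a synthetic velocity trace. Gravity, yaw-induced drag and lift, and the shot-to-shot averaging described above are all ignored, and the numbers are placeholders rather than radar data for any real bullet.

```python
import numpy as np

def cd_from_velocity_trace(time_s, velocity_ms, mass_kg, diameter_m,
                           rho=1.225, speed_of_sound=340.3):
    """Reduce a velocity-vs-time trace to approximate zero-yaw drag coefficients.

    Uses m * dv/dt = -0.5 * rho * v^2 * Cd * A, ignoring gravity and yaw.
    """
    area = np.pi * (diameter_m / 2.0) ** 2
    accel = np.gradient(velocity_ms, time_s)            # dv/dt in m/s^2
    cd = -2.0 * mass_kg * accel / (rho * velocity_ms ** 2 * area)
    mach = np.asarray(velocity_ms) / speed_of_sound
    return mach, cd

# Synthetic, smoothly decaying velocity trace (placeholder, not measured data).
t = np.arange(0.0, 0.5, 0.05)
v = 830.0 - 250.0 * t + 60.0 * t ** 2
for m, c in zip(*cd_from_velocity_trace(t, v, mass_kg=0.01944, diameter_m=0.00859)):
    print("Mach %.2f  Cd %.3f" % (m, c))
```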
Doppler radar measurement results for a Lapua GB528 Scenar 19.44 g (300 gr) 8.59 mm (0.338 in) calibre very-low-drag bullet look like this: This tested bullet experiences its maximum drag coefficient when entering the transonic flight regime around Mach 1.200. With the help of Doppler radar measurements, projectile-specific drag models can be established that are most useful when shooting at extended ranges where the bullet speed slows to the transonic speed region near the speed of sound. This is where the projectile drag predicted by mathematical modeling can significantly depart from the actual drag experienced by the projectile. Doppler radar measurements are also used to study subtle in-flight effects of various bullet constructions. Governments, professional ballisticians, defence forces and ammunition manufacturers can supplement Doppler radar measurements with measurements gathered by telemetry probes fitted to larger projectiles. General trends in drag or ballistic coefficient In general, a pointed projectile will have a better drag coefficient (Cd) or ballistic coefficient (BC) than a round-nosed bullet, and a round-nosed bullet will have a better Cd or BC than a flat point bullet. Large radius curves, resulting in a shallower point angle, will produce lower drags, particularly at supersonic velocities. Hollow point bullets behave much like a flat point of the same point diameter. Projectiles designed for supersonic use often have a slightly tapered base at the rear, called a boat tail, which reduces air resistance in flight. The usefulness of a "tapered rear" for long-range firing was well established by the early 1870s, but technological difficulties prevented its wide adoption until well into the 20th century. Cannelures, which are recessed rings around the projectile used to crimp the projectile securely into the case, will cause an increase in drag. Analytical software was developed by the Ballistic Research Laboratory – later called the Army Research Laboratory – which reduced actual test range data to parametric relationships for projectile drag coefficient prediction. Large caliber artillery also employs drag reduction mechanisms in addition to streamlining geometry. Rocket-assisted projectiles employ a small rocket motor that ignites upon muzzle exit, providing additional thrust to overcome aerodynamic drag. Rocket assist is most effective with subsonic artillery projectiles. For supersonic long range artillery, where base drag dominates, base bleed is employed. Base bleed is a form of a gas generator that does not provide significant thrust, but rather fills the low-pressure area behind the projectile with gas, effectively reducing the base drag and the overall projectile drag coefficient. Transonic problem A projectile fired at supersonic muzzle velocity will at some point slow to approach the speed of sound. In the transonic region (about Mach 1.2–0.8) the centre of pressure (CP) of most non-spherical projectiles shifts forward as the projectile decelerates. That CP shift affects the (dynamic) stability of the projectile. If the projectile is not well stabilized, it cannot remain pointing forward through the transonic region (the projectile starts to exhibit an unwanted precession or coning motion called limit cycle yaw that, if not damped out, can eventually end in uncontrollable tumbling along the length axis).
However, even if the projectile has sufficient stability (static and dynamic) to be able to fly through the transonic region and stays pointing forward, it is still affected. The erratic and sudden CP shift and (temporary) decrease of dynamic stability can cause significant dispersion (and hence significant accuracy decay), even if the projectile's flight becomes well behaved again when it enters the subsonic region. This makes accurately predicting the ballistic behavior of projectiles in the transonic region very difficult. Because of this, marksmen normally restrict themselves to engaging targets close enough that the projectile is still supersonic. In 2015, the American ballistician Bryan Litz introduced the "Extended Long Range" concept to define rifle shooting at ranges where supersonically fired (rifle) bullets enter the transonic region. According to Litz, "Extended Long Range starts whenever the bullet slows to its transonic range. As the bullet slows down to approach Mach 1, it starts to encounter transonic effects, which are more complex and difficult to account for, compared to the supersonic range where the bullet is relatively well-behaved." The ambient air density has a significant effect on dynamic stability during transonic transition. Though the ambient air density is a variable environmental factor, adverse transonic transition effects can be negated better by a projectile traveling through less dense air than through denser air. Projectile or bullet length also affects limit cycle yaw. Longer projectiles experience more limit cycle yaw than shorter projectiles of the same diameter. Another feature of projectile design that has been identified as having an effect on the unwanted limit cycle yaw motion is the chamfer at the base of the projectile. At the very base, or heel, of a projectile or bullet there is a chamfer, or radius. The presence of this radius causes the projectile to fly with greater limit cycle yaw angles. Rifling can also have a subtle effect on limit cycle yaw. In general, faster spinning projectiles experience less limit cycle yaw. Research into guided projectiles To circumvent the transonic problems encountered by spin-stabilized projectiles, projectiles can theoretically be guided during flight. Sandia National Laboratories announced in January 2012 that it had researched and test-fired 4-inch (102 mm) long prototype dart-like, self-guided bullets for small-caliber, smooth-bore firearms that could hit laser-designated targets at distances of more than a mile (about 1,610 meters or 1760 yards). These projectiles are not spin stabilized, and the flight path can be steered within limits with an electromagnetic actuator 30 times per second. The researchers also claim they have video of the bullet radically pitching as it exits the barrel and pitching less as it flies down range, a disputed phenomenon known to long-range firearms experts as “going to sleep”. Because the bullet's motions settle the longer it is in flight, accuracy improves at longer ranges, Sandia researcher Red Jones said. “Nobody had ever seen that, but we’ve got high-speed video photography that shows that it’s true,” he said. Recent testing indicates it may be approaching, or may have already achieved, initial operational capability.
Testing the predictive qualities of software Due to the practical inability to know in advance and compensate for all the variables of flight, no software simulation, however advanced, will yield predictions that always perfectly match real world trajectories. It is however possible to obtain predictions that are very close to actual flight behavior. Empirical measurement method Ballistic prediction computer programs intended for (extreme) long ranges can be evaluated by conducting field tests at the supersonic to subsonic transition range (the last 10 to 20% of the supersonic range of the rifle/cartridge/bullet combination). For a typical .338 Lapua Magnum rifle, for example, shooting standard 16.2 gram (250 gr) Lapua Scenar GB488 bullets at 905 m/s (2969 ft/s) muzzle velocity, field testing of the software should be done at ≈ 1200-1300 meters (1312-1422 yd) under International Standard Atmosphere sea level conditions (air density ρ = 1.225 kg/m³). To check how well the software predicts the trajectory at shorter to medium range, field tests at 20, 40 and 60% of the supersonic range have to be conducted. At those shorter to medium ranges, transonic problems and hence erratic bullet flight should not occur, and the BC is less likely to be transient. Testing the predictive qualities of software at (extreme) long ranges is expensive because it consumes ammunition; the actual muzzle velocity of all shots fired must be measured to be able to make statistically dependable statements. Sample groups of fewer than 24 shots may not yield the desired statistically significant confidence interval. Doppler radar measurement method Governments, professional ballisticians, defence forces and a few ammunition manufacturers use Doppler radars and/or telemetry probes fitted to larger projectiles to obtain precise real world data regarding the flight behavior of the specific projectiles of their interest and thereupon compare the gathered real world data against the predictions calculated by ballistic computer programs. The normal shooting or aerodynamics enthusiast, however, has no access to such expensive professional measurement devices. Authorities and projectile manufacturers are generally reluctant to share the results of Doppler radar tests and the test-derived drag coefficients (Cd) of projectiles with the general public. Around 2020 more affordable but less capable (amateur) Doppler radar equipment to determine free flight drag coefficients became available to the general public. In January 2009, the Scandinavian ammunition manufacturer Nammo/Lapua published Doppler radar test-derived drag coefficient data for most of their rifle projectiles. In 2015 the US ammunition manufacturer Berger Bullets announced the use of Doppler radar in unison with PRODAS 6 DoF software to generate trajectory solutions. In 2016 US ammunition manufacturer Hornady announced the use of Doppler radar derived drag data in software utilizing a modified point mass model to generate trajectory solutions. With the measurement-derived Cd data, engineers can create algorithms that utilize both known mathematical ballistic models and test-specific tabular data in unison. When used by predictive software like QuickTARGET Unlimited, Lapua Edition, Lapua Ballistics or Hornady 4DOF, the Doppler radar test-derived drag coefficient data can be used for more accurate external ballistic predictions.
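A minimal sketch of how test-derived tabular Cd(Mach) data can be combined with a standard drag relation inside a point-mass solver is shown below. The table values are placeholders, not Lapua, Berger or Hornady data, and a real implementation would also handle atmosphere, yaw and spin effects.

```python
import numpy as np

# Placeholder Doppler-derived drag table (Mach vs. zero-yaw Cd).
MACH_TABLE = np.array([0.5, 0.8, 0.95, 1.0, 1.05, 1.2, 1.5, 2.0, 2.5])
CD_TABLE = np.array([0.20, 0.21, 0.27, 0.38, 0.40, 0.36, 0.31, 0.27, 0.25])

def drag_deceleration(v, mass, diameter, rho=1.225, speed_of_sound=340.3):
    """Retardation in m/s^2 from a tabular Cd(Mach) curve.

    Linear interpolation inside the table; end values are held constant
    outside it (numpy.interp default behaviour).
    """
    cd = np.interp(v / speed_of_sound, MACH_TABLE, CD_TABLE)
    area = np.pi * (diameter / 2.0) ** 2
    return 0.5 * rho * v ** 2 * cd * area / mass

# Example: retardation of a 19.44 g, 8.59 mm bullet at 600 m/s.
print("retardation at 600 m/s: %.1f m/s^2"
      % drag_deceleration(600.0, 0.01944, 0.00859))
```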
Some of the Lapua-provided drag coefficient data shows drastic increases in the measured drag around or below the Mach 1 flight velocity region. This behavior was observed for most of the measured small calibre bullets, and not so much for the larger calibre bullets. This implies that some (mostly smaller calibre) rifle bullets exhibited more limit cycle yaw (coning and/or tumbling) in the transonic/subsonic flight velocity regime. The information regarding unfavourable transonic/subsonic flight behavior for some of the tested projectiles is important. This is a limiting factor for extended range shooting use, because the effects of limit cycle yaw are not easily predictable and potentially catastrophic for the best ballistic prediction models and software. The presented Cd data cannot simply be used for every gun-ammunition combination, since it was measured for the barrels, rotational (spin) velocities and ammunition lots the Lapua testers used during their test firings. Variables like differences in rifling (number of grooves, depth, width and other dimensional properties), twist rates and/or muzzle velocities impart different rotational (spin) velocities and rifling marks on projectiles. Changes in such variables and projectile production lot variations can yield different downrange interaction with the air the projectile passes through, which can result in (minor) changes in flight behavior. This particular field of external ballistics is currently (2009) neither extensively studied nor well understood. Predictions of several drag resistance modelling and measuring methods The method employed to model and predict external ballistic behavior can yield differing results with increasing range and time of flight. To illustrate this, several external ballistic behavior prediction methods for the Lapua Scenar GB528 19.44 g (300 gr) 8.59 mm (0.338 in) calibre very-low-drag rifle bullet, with a manufacturer stated G1 ballistic coefficient (BC) of 0.785, fired at 830 m/s (2723 ft/s) muzzle velocity under International Standard Atmosphere sea level conditions (air density ρ = 1.225 kg/m³, Mach 1 = 340.3 m/s, Mach 1.2 = 408.4 m/s), predicted this for the projectile velocity and time of flight from 0 to 3,000 m (0 to 3,281 yd): The table shows that the Doppler radar test-derived drag coefficients (Cd) prediction method and the 2017 Lapua Ballistics 6 DoF app produce similar results. The 6 DoF modeling estimates bullet stability ((Sd) and (Sg)) that gravitates to over-stabilization for ranges over for this bullet. At the total drop predictions deviate 47.5 cm (19.7 in) or 0.20 mil (0.68 moa) at 50° latitude and up to the total drop predictions are within 0.30 mil (1 moa) at 50° latitude. The 2016 Lapua Ballistics 6 DoF app version predictions were even closer to the Doppler radar test predictions. The traditional Siacci/Mayevski G1 drag curve model prediction method generally yields more optimistic results compared with the modern Doppler radar test-derived drag coefficients (Cd) prediction method. At range the differences will be hardly noticeable, but at and beyond the differences grow to over 10 m/s (32.8 ft/s) in projectile velocity and gradually become significant. At range the projectile velocity predictions deviate 25 m/s (82.0 ft/s), which equates to a predicted total drop difference of 125.6 cm (49.4 in) or 0.83 mil (2.87 moa) at 50° latitude.
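The mil and moa figures quoted in these comparisons follow from small-angle conversion of a linear drop difference at a given range. A minimal sketch with arbitrary example numbers:

```python
import math

def drop_to_angles(drop_m, range_m):
    """Convert a linear drop difference at a given range to mil and moa."""
    mil = drop_m / range_m * 1000.0                 # milliradians
    moa = math.degrees(drop_m / range_m) * 60.0     # minutes of arc
    return mil, moa

mil, moa = drop_to_angles(0.50, 1000.0)             # 50 cm difference at 1000 m
print("%.2f mil, %.2f moa" % (mil, moa))            # about 0.50 mil, 1.72 moa
```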
The Pejsa drag model closed-form solution prediction method, without slope constant factor fine tuning, yields very similar results in the supersonic flight regime compared with the Doppler radar test-derived drag coefficients (Cd) prediction method. At range the projectile velocity predictions deviate 10 m/s (32.8 ft/s), which equates to a predicted total drop difference of 23.6 cm (9.3 in) or 0.16 mil (0.54 moa) at 50° latitude. The G7 drag curve model prediction method (recommended by some manufacturers for very-low-drag shaped rifle bullets), when using a G7 ballistic coefficient (BC) of 0.377, yields very similar results in the supersonic flight regime compared with the Doppler radar test-derived drag coefficients (Cd) prediction method. At range the projectile velocity predictions have their maximum deviation of 10 m/s (32.8 ft/s). The predicted total drop difference at is 0.4 cm (0.16 in) at 50° latitude. The predicted total drop difference at is 45.0 cm (17.7 in), which equates to 0.25 mil (0.86 moa). Decent prediction models are expected to yield similar results in the supersonic flight regime. The five example models down to all predict supersonic Mach 1.2+ projectile velocities and total drop differences within a 51 cm (20.1 in) bandwidth. In the transonic flight regime at the models predict projectile velocities around Mach 1.0 to Mach 1.1 and total drop differences within a much larger 150 cm (59 in) bandwidth. External factors Wind Wind has a range of effects, the first being that it makes the projectile deviate to the side (horizontal deflection). From a scientific perspective, the "wind pushing on the side of the projectile" is not what causes horizontal wind drift. What causes wind drift is drag. Drag makes the projectile turn into the wind, much like a weather vane, keeping the centre of air pressure on its nose. From the shooter’s perspective, this causes the nose of the projectile to turn into the wind and the tail to turn away from the wind. The result of this turning effect is that the drag pushes the projectile downwind in a nose-to-tail direction. Wind also causes aerodynamic jump, which is the vertical component of crosswind deflection caused by lateral (wind) impulses activated during the free flight of a projectile, or at or very near the muzzle, leading to dynamic imbalance. The amount of aerodynamic jump is dependent on crosswind speed, the gyroscopic stability of the bullet at the muzzle, and whether the barrel twist is clockwise or anti-clockwise. Like reversing the wind direction, reversing the twist direction will reverse the aerodynamic jump direction. A somewhat less obvious effect is caused by head or tailwinds. A headwind will slightly increase the relative velocity of the projectile and increase drag and the corresponding drop. A tailwind will reduce the drag and the projectile/bullet drop. In the real world, pure head or tailwinds are rare, since wind is seldom constant in force and direction and normally interacts with the terrain it is blowing over. This often makes ultra long range shooting in head or tailwind conditions difficult. Vertical angles The vertical angle (or elevation) of a shot will also affect its trajectory. Ballistic tables for small calibre projectiles (fired from pistols or rifles) assume a horizontal line of sight between the shooter and target with gravity acting perpendicular to the earth.
Therefore, if the shooter-to-target angle is up or down (the direction of the gravity component does not change with slope direction), the trajectory-curving acceleration due to gravity will actually be less, in proportion to the cosine of the slant angle. As a result, a projectile fired upward or downward, on a so-called "slant range," will overshoot a target at the same distance on flat ground. The effect is of sufficient magnitude that hunters must adjust their target hold-off accordingly in mountainous terrain. A well-known formula for adjusting slant range to a horizontal-range hold-off is the Rifleman's rule. The Rifleman's rule and the slightly more complex and less well-known Improved Rifleman's rule models produce sufficiently accurate predictions for many small arms applications. Simple prediction models, however, ignore minor gravity effects when shooting uphill or downhill. The only practical way to compensate for this is to use a ballistic computer program. Besides gravity, at very steep angles over long distances the effect of the air density changes the projectile encounters during flight becomes problematic. The mathematical prediction models available for inclined fire scenarios, depending on the amount and direction (uphill or downhill) of the inclination angle and range, yield varying accuracy expectation levels. Less advanced ballistic computer programs predict the same trajectory for uphill and downhill shots at the same vertical angle and range. The more advanced programs factor in the small effect of gravity on uphill and on downhill shots, resulting in slightly differing trajectories at the same vertical angle and range. No publicly available ballistic computer program currently (2017) accounts for the complicated phenomena of differing air densities the projectile encounters during flight. Ambient air density Air pressure, temperature, and humidity variations make up the ambient air density. Humidity has a counter-intuitive impact. Since water vapor has a density of 0.8 grams per litre, while dry air averages about 1.225 grams per litre, higher humidity actually decreases the air density, and therefore decreases the drag. Precipitation Precipitation can cause significant yaw and accompanying deflection when a bullet collides with a raindrop. The further downrange such a coincidental collision occurs, the less the deflection on target will be. The weights of the raindrop and the bullet also influence how much yaw is induced during such a collision. A big heavy raindrop and a light bullet will yield the maximal yaw effect. A heavy bullet colliding with an equal raindrop will experience significantly less yaw effect. Long range factors Gyroscopic drift (spin drift) Gyroscopic drift is an interaction of the bullet's mass and aerodynamics with the atmosphere that it is flying in. Even in completely calm air, with no sideways air movement at all, a spin-stabilized projectile will experience a spin-induced sideways component, due to a gyroscopic phenomenon known as "yaw of repose." For a right hand (clockwise) direction of rotation this component will always be to the right. For a left hand (counterclockwise) direction of rotation this component will always be to the left. This is because the projectile's longitudinal axis (its axis of rotation) and the direction of the velocity vector of the center of gravity (CG) deviate by a small angle, which is said to be the equilibrium yaw or the yaw of repose. The magnitude of the yaw of repose angle is typically less than 0.5 degree.
Since rotating objects react with an angular velocity vector 90 degrees from the applied torque vector, the bullet's axis of symmetry moves with a component in the vertical plane and a component in the horizontal plane; for right-handed (clockwise) spinning bullets, the bullet's axis of symmetry deflects to the right and a little bit upward with respect to the direction of the velocity vector, as the projectile moves along its ballistic arc. As the result of this small inclination, there is a continuous air stream, which tends to deflect the bullet to the right. Thus the occurrence of the yaw of repose is the reason for the bullet drifting to the right (for right-handed spin) or to the left (for left-handed spin). This means that the bullet is "skidding" sideways at any given moment, and thus experiencing a sideways component. The following variables affect the magnitude of gyroscopic drift: Projectile or bullet length: longer projectiles experience more gyroscopic drift because they produce more lateral "lift" for a given yaw angle. Spin rate: faster spin rates will produce more gyroscopic drift because the nose ends up pointing farther to the side. Range, time of flight and trajectory height: gyroscopic drift increases with all of these variables. density of the atmosphere: denser air will increase gyroscopic drift. Doppler radar measurement results for the gyroscopic drift of several US military and other very-low-drag bullets at 1000 yards (914.4 m) look like this: The table shows that the gyroscopic drift cannot be predicted on weight and diameter alone. In order to make accurate predictions on gyroscopic drift several details about both the external and internal ballistics must be considered. Factors such as the twist rate of the barrel, the velocity of the projectile as it exits the muzzle, barrel harmonics, and atmospheric conditions, all contribute to the path of a projectile. Magnus effect Spin stabilized projectiles are affected by the Magnus effect, whereby the spin of the bullet creates a force acting either up or down, perpendicular to the sideways vector of the wind. In the simple case of horizontal wind, and a right hand (clockwise) direction of rotation, the Magnus effect induced pressure differences around the bullet cause a downward (wind from the right) or upward (wind from the left) force viewed from the point of firing to act on the projectile, affecting its point of impact. The vertical deflection value tends to be small in comparison with the horizontal wind induced deflection component, but it may nevertheless be significant in winds that exceed 4 m/s (14.4 km/h or 9 mph). Magnus effect and bullet stability The Magnus effect has a significant role in bullet stability because the Magnus force does not act upon the bullet's center of gravity, but the center of pressure affecting the yaw of the bullet. The Magnus effect will act as a destabilizing force on any bullet with a center of pressure located ahead of the center of gravity, while conversely acting as a stabilizing force on any bullet with the center of pressure located behind the center of gravity. The location of the center of pressure depends on the flow field structure, in other words, depending on whether the bullet is in supersonic, transonic or subsonic flight. What this means in practice depends on the shape and other attributes of the bullet, in any case the Magnus force greatly affects stability because it tries to "twist" the bullet along its flight path. 
Paradoxically, very-low-drag bullets due to their length have a tendency to exhibit greater Magnus destabilizing errors because they have a greater surface area to present to the oncoming air they are travelling through, thereby reducing their aerodynamic efficiency. This subtle effect is one of the reasons why a calculated Cd or BC based on shape and sectional density is of limited use. Poisson effect Another minor cause of drift, which depends on the nose of the projectile being above the trajectory, is the Poisson Effect. This, if it occurs at all, acts in the same direction as the gyroscopic drift and is even less important than the Magnus effect. It supposes that the uptilted nose of the projectile causes an air cushion to build up underneath it. It further supposes that there is an increase of friction between this cushion and the projectile so that the latter, with its spin, will tend to roll off the cushion and move sideways. This simple explanation is quite popular. There is, however, no evidence to show that increased pressure means increased friction and unless this is so, there can be no effect. Even if it does exist it must be quite insignificant compared with the gyroscopic and Coriolis drifts. Both the Poisson and Magnus Effects will reverse their directions of drift if the nose falls below the trajectory. When the nose is off to one side, as in equilibrium yaw, these effects will make minute alterations in range. Coriolis drift The Coriolis effect causes Coriolis drift in a direction perpendicular to the Earth's axis; for most locations on Earth and firing directions, this deflection includes horizontal and vertical components. The deflection is to the right of the trajectory in the northern hemisphere, to the left in the southern hemisphere, upward for eastward shots, and downward for westward shots. The vertical Coriolis deflection is also known as the Eötvös effect. Coriolis drift is not an aerodynamic effect; it is a consequence of the rotation of the Earth. The magnitude of the Coriolis effect is small. For small arms, the magnitude of the Coriolis effect is generally insignificant (for high powered rifles in the order of about at ), but for ballistic projectiles with long flight times, such as extreme long-range rifle projectiles, artillery, and rockets like intercontinental ballistic missiles, it is a significant factor in calculating the trajectory. The magnitude of the drift depends on the firing and target location, azimuth of firing, projectile velocity and time of flight. Horizontal effect Viewed from a non-rotating reference frame (i.e. not one rotating with the Earth) and ignoring the forces of gravity and air resistance, a projectile moves in a straight line. When viewed from a reference frame fixed with respect to the Earth, that straight trajectory appears to curve sideways. The direction of this horizontal curvature is to the right in the northern hemisphere and to the left in the southern hemisphere, and does not depend on the azimuth of the shot. The horizontal curvature is largest at the poles and decreases to zero at the equator. Vertical (Eötvös) effect The Eötvös effect changes the perceived gravitational pull on a moving object based on the relationship between the direction and velocity of movement and the direction of the Earth's rotation. The Eötvös effect is largest at the equator and decreases to zero at the poles. It causes eastward-traveling projectiles to deflect upward, and westward-traveling projectiles to deflect downward. 
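For flat-fire small-arms trajectories, the horizontal Coriolis drift and the vertical Eötvös component are often estimated with simple constant-velocity approximations, Ω·X·t·sin(latitude) and Ω·X·t·cos(latitude)·sin(azimuth) respectively, where Ω is the Earth's rotation rate, X the range and t the time of flight. The sketch below implements these approximations; they are a simplification rather than a full integration of the Coriolis acceleration along the trajectory, and the example numbers are arbitrary.

```python
import math

OMEGA = 7.292e-5   # Earth's angular velocity, rad/s

def coriolis_flat_fire(range_m, time_of_flight_s, latitude_deg, azimuth_deg):
    """Flat-fire Coriolis/Eotvos estimate (constant-velocity assumption).

    Returns (horizontal_m, vertical_m):
      horizontal > 0: deflection to the right (northern hemisphere),
      vertical   > 0: projectile strikes high (e.g. firing due east).
    Azimuth is measured clockwise from true north.
    """
    lat = math.radians(latitude_deg)
    az = math.radians(azimuth_deg)
    horizontal = OMEGA * range_m * time_of_flight_s * math.sin(lat)
    vertical = OMEGA * range_m * time_of_flight_s * math.cos(lat) * math.sin(az)
    return horizontal, vertical

# Example: 1000 m range, 1.5 s time of flight, 50 degrees north, firing due east.
h, v = coriolis_flat_fire(1000.0, 1.5, 50.0, 90.0)
print("horizontal drift %.3f m, vertical effect %.3f m" % (h, v))
```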
The effect is less pronounced for trajectories in other directions, and is zero for trajectories aimed due north or south. In the case of large changes of momentum, such as a spacecraft being launched into Earth orbit, the effect becomes significant. It contributes to the fastest and most fuel-efficient path to orbit: a launch from the equator that curves to a directly eastward heading. Equipment factors Though not forces acting on projectile trajectories there are some equipment related factors that influence trajectories. Since these factors can cause otherwise unexplainable external ballistic flight behavior they have to be briefly mentioned. Lateral jump Lateral jump is caused by a slight lateral and rotational movement of a gun barrel at the instant of firing. It has the effect of a small error in bearing. The effect is ignored, since it is small and varies from round to round. Lateral throw-off Lateral throw-off is caused by mass imbalance in applied spin stabilized projectiles or pressure imbalances during the transitional flight phase when a projectile leaves a gun barrel off axis leading to static imbalance. If present it causes dispersion. The effect is unpredictable, since it is generally small and varies from projectile to projectile, round to round and/or gun barrel to gun barrel. Maximum effective small arms range The maximum practical range of all small arms and especially high-powered sniper rifles depends mainly on the aerodynamic or ballistic efficiency of the spin stabilised projectiles used. Long-range shooters must also collect relevant information to calculate elevation and windage corrections to be able to achieve first shot strikes at point targets. The data to calculate these fire control corrections has a long list of variables including: ballistic coefficient or test derived drag coefficients (Cd)/behavior of the bullets used height of the sighting components above the rifle bore axis the zero range at which the sighting components and rifle combination were sighted in bullet mass actual muzzle velocity (powder temperature affects muzzle velocity, primer ignition is also temperature dependent) range to target supersonic range of the employed gun, cartridge and bullet combination inclination angle in case of uphill/downhill firing target speed and direction wind speed and direction (main cause for horizontal projectile deflection and generally the hardest ballistic variable to measure and judge correctly. Wind effects can also cause vertical deflection.) 
air pressure, temperature, altitude and humidity variations (these make up the ambient air density) Earth's gravity (changes slightly with latitude and altitude) gyroscopic drift (horizontal and vertical plane gyroscopic effect — often known as spin drift - induced by the barrel's twist direction and twist rate) Coriolis effect drift (latitude, direction of fire and northern or southern hemisphere data dictate this effect) Eötvös effect (interrelated with the Coriolis effect, latitude and direction of fire dictate this effect) aerodynamic jump (the vertical component of cross wind deflection caused by lateral (wind) impulses activated during free flight or at or very near the muzzle leading to dynamic imbalance) lateral throw-off (dispersion that is caused by mass imbalance in the applied projectile or it leaving the barrel off axis leading to static imbalance) the inherent potential accuracy and adjustment range of the sighting components the inherent potential accuracy of the rifle the inherent potential accuracy of the ammunition the inherent potential accuracy of the computer program and other firing control components used to calculate the trajectory The ambient air density is at its maximum at Arctic sea level conditions. Cold gunpowder also produces lower pressures and hence lower muzzle velocities than warm powder. This means that the maximum practical range of rifles will be at its shortest at Arctic sea level conditions. The ability to hit a point target at great range has a lot to do with the ability to tackle environmental and meteorological factors and a good understanding of exterior ballistics and the limitations of equipment. Without (computer) support and highly accurate laser rangefinders and meteorological measuring equipment as aids to determine ballistic solutions, long-range shooting beyond 1000 m (1100 yd) at unknown ranges becomes guesswork for even the most expert long-range marksmen. Interesting further reading: Marksmanship Wikibook Using ballistics data Here is an example of a ballistic table for a .30 calibre Speer 169 grain (11 g) pointed boat tail match bullet, with a BC of 0.480. It assumes sights 1.5 inches (38 mm) above the bore line, and sights adjusted to result in point of aim and point of impact matching 200 yards (183 m) and 300 yards (274 m) respectively. This table demonstrates that, even with a fairly aerodynamic bullet fired at high velocity, the "bullet drop" or change in the point of impact is significant. This change in point of impact has two important implications. Firstly, estimating the distance to the target is critical at longer ranges, because the difference in the point of impact between 400 and 500 yd is 25–32 in (depending on zero); in other words, if the shooter estimates that the target is 400 yd away when it is in fact 500 yd away, the shot will impact 25–32 in (635–813 mm) below where it was aimed, possibly missing the target completely. Secondly, the rifle should be zeroed to a distance appropriate to the typical range of targets, because the shooter might have to aim so far above the target to compensate for a large bullet drop that he may lose sight of the target completely (for instance being outside the field of view of a telescopic sight). In the example of the rifle zeroed at , the shooter would have to aim 49 in or more than 4 ft (1.2 m) above the point of impact for a target at 500 yd. See also Internal ballistics - The behavior of the projectile and propellant before it leaves the barrel.
Transitional ballistics - The behavior of the projectile from the time it leaves the muzzle until the pressure behind the projectile is equalized. Terminal ballistics - The behavior of the projectile upon impact with the target. Trajectory of a projectile - Basic external ballistics mathematic formulas. Rifleman's rule - Procedures or "rules" for a rifleman for aiming at targets at a distance either uphill or downhill. Franklin Ware Mann - Early scientific study of external ballistics. Table of handgun and rifle cartridges Sighting in - Calibrating the sights on a ranged weapon so that the point of aim intersects with the trajectory at a given distance, allowing the user to consistently hit the target being aimed at. Notes References External links General external ballistics (Simplified calculation of the motion of a projectile under a drag force proportional to the square of the velocity) - basketball ballistics. Small arms external ballistics Software for calculating ball ballistics How do bullets fly? by Ruprecht Nennstiel, Wiesbaden, Germany Exterior Ballistics.com articles A Short Course in External Ballistics Articles on long range shooting by Bryan Litz Probabalistic Weapon Employment Zone (WEZ) Analysis A Conceptual Overview by Bryan Litz Weite Schüsse - part 4, Basic explanation of the Pejsa model by Lutz Möller Patagonia Ballistics ballistics mathematical software engine JBM Small Arms Ballistics with online ballistics calculators Bison Ballistics Point Mass Online Ballistics Calculator Virtual Wind Tunnel Experiments for Small Caliber Ammunition Aerodynamic Characterization - Paul Weinacht US Army Research Laboratory Aberdeen Proving Ground, MD Artillery external ballistics British Artillery Fire Control - Ballistics & Data Field Artillery, Volume 6, Ballistics and Ammunition The Production of Firing Tables for Cannon Artillery, BRL rapport no. 1371 by Elizabeth R. Dickinson, U.S. Army Materiel Command Ballistic Research Laboratories, November 1967 NABK (NATO Armament Ballistic Kernel) Based Next Generation Ballistic Table Tookit, 23rd International Symposium on Ballistics, Tarragona, Spain 16-20 April 2007 Trajectory Calculator in C++ that can deduce drag function from firing tables Freeware small arms external ballistics software Hawke X-ACT Pro FREE ballistics app. iOS, Android, OSX & Windows. ChairGun Pro free ballistics for rim fire and pellet guns. Ballistic_XLR. (MS Excel spreadsheet)] - A substantial enhancement & modification of the Pejsa spreadsheet (below). GNU Exterior Ballistics Computer (GEBC) - An open source 3DOF ballistics computer for Windows, Linux, and Mac - Supports the G1, G2, G5, G6, G7, and G8 drag models. Created and maintained by Derek Yates. 6mmbr.com ballistics section links to / hosts 4 freeware external ballistics computer programs. 2DOF & 3DOF R.L. McCoy - Gavre exterior ballistics (zip file) - Supports the G1, G2, G5, G6, G7, G8, GS, GL, GI, GB and RA4 drag models PointBlank Ballistics (zip file) - Siacci/Mayevski G1 drag model. Remington Shoot! A ballistic calculator for Remington factory ammunition (based on Pinsoft's Shoot! software). - Siacci/Mayevski G1 drag model. JBM's small-arms ballistics calculators Online trajectory calculators - Supports the G1, G2, G5, G6, G7 (for some projectiles experimentally measured G7 ballistic coefficients), G8, GI, GL and for some projectiles doppler radar-test derived (Cd) drag models. Pejsa Ballistics (MS Excel spreadsheet) - Pejsa model. Sharpshooter Friend (Palm PDA software) - Pejsa model. 
Quick Target Unlimited, Lapua Edition - A version of QuickTARGET Unlimited ballistic software (requires free registration to download) - Supports the G1, G2, G5, G6, G7, G8, GL, GS Spherical 9/16"SAAMI, GS Spherical Don Miller, RA4, Soviet 1943, British 1909 Hatches Notebook and for some Lapua projectiles doppler radar-test derived (Cd) drag models. Lapua Ballistics Exterior ballistic software for Java or Android mobile phones. Based on doppler radar-test derived (Cd) drag models for Lapua projectiles and cartridges. Lapua Ballistics App 6 DoF model limited to Lapua bullets for Android and iOS. BfX - Ballistics for Excel Set of MS Excel add-in functions - Supports the G1, G2, G5, G6, G7, G8 and RA4 and Pejsa drag models as well as one for air rifle pellets. Able to handle user supplied models, e.g. Lapua projectiles doppler radar-test derived (Cd) ones. GunSim "GunSim" free browser-based ballistics simulator program for Windows and Mac. BallisticSimulator "Ballistic Simulator" free ballistics simulator program for Windows. 5H0T Free online web-based ballistics calculator, with data export capability and charting. SAKO Ballistics Free online ballistic calculator by SAKO. The calculator is also available as an Android app (and possibly iOS) under the "SAKO Ballistics" name. py-ballisticcalc LGPL Python library for point-mass ballistic calculations. Ballistics Projectiles Aerodynamics Articles containing video clips
Thinking, Fast and Slow
Thinking, Fast and Slow is a 2011 popular science book by psychologist Daniel Kahneman. The book's main thesis is a differentiation between two modes of thought: "System 1" is fast, instinctive and emotional; "System 2" is slower, more deliberative, and more logical. The book delineates rational and non-rational motivations or triggers associated with each type of thinking process, and how they complement each other, starting with Kahneman's own research on loss aversion. From framing choices to people's tendency to replace a difficult question with one which is easy to answer, the book summarizes several decades of research to suggest that people have too much confidence in human judgment. Kahneman performed his own research, often in collaboration with Amos Tversky, which enriched his experience to write the book. It covers different phases of his career: his early work concerning cognitive biases, his work on prospect theory and happiness, and with the Israel Defense Forces. The book was a New York Times bestseller and was the 2012 winner of the National Academies Communication Award for best creative work that helps the public understanding of topics in behavioral science, engineering and medicine. The integrity of some priming studies cited in the book has been called into question in the midst of the psychological replication crisis. Two systems In the book's first section, Kahneman describes two different ways the brain forms thoughts: System 1: Fast, automatic, frequent, emotional, stereotypic, unconscious. Examples (in order of complexity) of things system 1 can do:
determine that an object is at a greater distance than another
localize the source of a specific sound
complete the phrase "war and ..."
display disgust when seeing a gruesome image
solve 2+2=?
read text on a billboard
drive a car on an empty road
think of a good chess move (if you're a chess master)
understand simple sentences
System 2: Slow, effortful, infrequent, logical, calculating, conscious. Examples of things system 2 can do:
prepare yourself for the start of a sprint
direct your attention towards the clowns at the circus
direct your attention towards someone at a loud party
look for the woman with the grey hair
try to recognize a sound
sustain a faster-than-normal walking rate
determine the appropriateness of a particular behavior in a social setting
count the number of A's in a certain text
give someone your telephone number
park into a tight parking space
determine the price/quality ratio of two washing machines
determine the validity of a complex logical reasoning
solve 17 × 24
Kahneman describes a number of experiments which purport to examine the differences between these two thought systems and how they arrive at different results even given the same inputs. Terms and concepts include coherence, attention, laziness, association, jumping to conclusions, WYSIATI (What you see is all there is), and how one forms judgments. The System 1 vs. System 2 debate includes the reasoning or lack thereof for human decision making, with big implications for many areas including law and market research. Heuristics and biases The second section offers explanations for why humans struggle to think statistically. It begins by documenting a variety of situations in which we either arrive at binary decisions or fail to associate precisely reasonable probabilities with outcomes. Kahneman explains this phenomenon using the theory of heuristics.
Kahneman and Tversky originally discussed this topic in their 1974 article titled Judgment Under Uncertainty: Heuristics and Biases. Kahneman uses heuristics to assert that System 1 thinking involves associating new information with existing patterns, or thoughts, rather than creating new patterns for each new experience. For example, a child who has only seen shapes with straight edges might perceive an octagon when first viewing a circle. As a legal metaphor, a judge limited to heuristic thinking would only be able to think of similar historical cases when presented with a new dispute, rather than considering the unique aspects of that case. In addition to offering an explanation for the statistical problem, the theory also offers an explanation for human biases. Anchoring The "anchoring effect" names a tendency to be influenced by irrelevant numbers. Shown greater/lesser numbers, experimental subjects gave greater/lesser responses. As an example, most people, when asked whether Gandhi was more than 114 years old when he died, will provide a much greater estimate of his age at death than others who were asked whether Gandhi was more or less than 35 years old. Experiments show that people's behavior is influenced, much more than they are aware, by irrelevant information. Availability The availability heuristic is a mental shortcut that occurs when people make judgments about the probability of events on the basis of how easy it is to think of examples. The availability heuristic operates on the notion that, "if you can think of it, it must be important". The availability of consequences associated with an action is related positively to perceptions of the magnitude of the consequences of that action. In other words, the easier it is to recall the consequences of something, the greater we perceive these consequences to be. Sometimes, this heuristic is beneficial, but the frequencies at which events come to mind are usually not accurate representations of the probabilities of such events in real life. Conjunction fallacy System 1 is prone to substituting a simpler question for a difficult one. In what Kahneman terms their "best-known and most controversial" experiment, "the Linda problem," subjects were told about an imaginary Linda, young, single, outspoken, and intelligent, who, as a student, was very concerned with discrimination and social justice. They asked whether it was more probable that Linda is a bank teller or that she is a bank teller and an active feminist. The overwhelming response was that "feminist bank teller" was more likely than "bank teller," violating the laws of probability. (All feminist bank tellers are bank tellers, so the former can't be more likely). In this case System 1 substituted the easier question, "Is Linda a feminist?", neglecting the occupation qualifier. An alternative interpretation is that the subjects added an unstated cultural implicature to the effect that the other answer implied an exclusive or, that Linda was not a feminist. Optimism and loss aversion Kahneman writes of a "pervasive optimistic bias", which "may well be the most significant of the cognitive biases." This bias generates the illusion of control: the illusion that we have substantial control of our lives. A natural experiment reveals the prevalence of one kind of unwarranted optimism. The planning fallacy is the tendency to overestimate benefits and underestimate costs, impelling people to begin risky projects. 
In 2002, American kitchen remodeling was expected on average to cost $18,658, but actually cost $38,769. To explain overconfidence, Kahneman introduces the concept he terms What You See Is All There Is (WYSIATI). This theory states that when the mind makes decisions, it deals primarily with Known Knowns, phenomena it has observed already. It rarely considers Known Unknowns, phenomena that it knows to be relevant but about which it does not have information. Finally it appears oblivious to the possibility of Unknown Unknowns, unknown phenomena of unknown relevance. He explains that humans fail to take into account complexity and that their understanding of the world consists of a small and necessarily un-representative set of observations. Furthermore, the mind generally does not account for the role of chance and therefore falsely assumes that a future event will be similar to a past event. Framing Framing is the context in which choices are presented. Experiment: subjects were asked whether they would opt for surgery if the "survival" rate is 90 percent, while others were told that the mortality rate is 10 percent. The first framing increased acceptance, even though the situation was no different. Sunk cost Rather than consider the odds that an incremental investment would produce a positive return, people tend to "throw good money after bad" and continue investing in projects with poor prospects that have already consumed significant resources. In part this is to avoid feelings of regret. Overconfidence This part (part III, sections 19–24) of the book is dedicated to the undue confidence in what the mind believes it knows. It suggests that people often overestimate how much they understand about the world and underestimate the role of chance in particular. This is related to the excessive certainty of hindsight, when an event seems to be understood after it has occurred or developed. Kahneman's opinions concerning overconfidence are influenced by Nassim Nicholas Taleb. Choices In this section Kahneman returns to economics and expands his seminal work on Prospect Theory. He discusses the tendency for problems to be addressed in isolation and how, when other reference points are considered, the choice of that reference point (called a frame) has a disproportionate effect on the outcome. This section also offers advice on how some of the shortcomings of System 1 thinking can be avoided. Prospect theory Kahneman developed prospect theory, the basis for his Nobel prize, to account for experimental errors he noticed in Daniel Bernoulli's traditional utility theory. According to Kahneman, Utility Theory makes logical assumptions of economic rationality that do not represent people's actual choices, and does not take into account cognitive biases. One example is that people are loss-averse: they are more likely to act to avert a loss than to achieve a gain. Another example is that the value people place on a change in probability (e.g., of winning something) depends on the reference point: people seem to place greater value on a change from 0% to 10% (going from impossibility to possibility) than from, say, 45% to 55%, and they place the greatest value of all on a change from 90% to 100% (going from possibility to certainty). This occurs despite the fact that by traditional utility theory all three changes give the same increase in utility. 
Consistent with loss-aversion, the order of the first and third of those is reversed when the event is presented as losing rather than winning something: there, the greatest value is placed on reducing the probability of a loss to 0. After the book's publication, the Journal of Economic Literature published a discussion of its parts concerning prospect theory, as well as an analysis of the four fundamental factors on which it is based. Two selves The fifth part of the book describes recent evidence which introduces a distinction between two selves, the 'experiencing self' and the 'remembering self'. Kahneman proposed an alternative measure that assessed pleasure or pain sampled from moment to moment, and then summed over time. Kahneman termed this "experienced" well-being and attached it to a separate "self." He distinguished this from the "remembered" well-being that the polls had attempted to measure. He found that these two measures of happiness diverged. Life as a story The author's significant discovery was that the remembering self does not care about the duration of a pleasant or unpleasant experience. Instead, it retrospectively rates an experience by the maximum or minimum of the experience, and by the way it ends. In the patient studies he describes, the remembering self dominated the patients' ultimate conclusions. Experienced well-being Kahneman first began the study of well-being in the 1990s. At the time most happiness research relied on polls about life satisfaction. Having previously studied unreliable memories, the author was doubtful that life satisfaction was a good indicator of happiness. He designed a question that emphasized instead the well-being of the experiencing self. The author proposed that "Helen was happy in the month of March" if she spent most of her time engaged in activities that she would rather continue than stop, little time in situations that she wished to escape, and not too much time in a neutral state in which she would not have preferred either continuing or stopping the activity. Thinking about life Kahneman suggests that emphasizing a life event such as a marriage or a new car can provide a distorted illusion of its true value. This "focusing illusion" revisits earlier ideas of substituting difficult questions and WYSIATI. Awards and honors
2011 Los Angeles Times Book Prize (Current Interest)
National Academy of Sciences Best Book Award in 2012
The New York Times Book Review, one of the best books of 2011
Globe and Mail Best Books of the Year 2011
One of The Economist's 2011 Books of the Year
One of The Wall Street Journal's Best Nonfiction Books of the Year 2011
Reception As of 2012 the book had sold over one million copies. In the year of its publication, it was on the New York Times Bestseller List. The book was reviewed in media including the Huffington Post, The Guardian, The New York Times, The Financial Times, The Independent, Bloomberg and The New York Review of Books. On Book Marks, the book received a "rave" consensus, based on eight critic reviews: six "rave" and two "positive". In its March/April 2012 issue, Bookmarks, a magazine that aggregates critic reviews of books, gave the book a score of 4.00 out of 5, with the critical summary stating, "Either way, it's an enlightening tome on how--fast or slow--we make decisions".
The book was also widely reviewed in academic journals, including the Journal of Economic Literature, American Journal of Education, The American Journal of Psychology, Planning Theory, The American Economist, The Journal of Risk and Insurance, The Michigan Law Review, American Scientist, Contemporary Sociology, Science, Contexts, The Wilson Quarterly, Technical Communication, The University of Toronto Law Journal, A Review of General Semantics and Scientific American Mind. The book was also reviewed in the Observer, a monthly magazine published by the Association for Psychological Science. The book has achieved a large following among baseball scouts and baseball executives. The ways of thinking described in the book are believed to help scouts, who have to make major judgements from little information and can easily fall into prescriptive yet inaccurate patterns of analysis. The last chapter of Paul Bloom's Against Empathy discusses concepts also touched on in Daniel Kahneman's book, Thinking, Fast and Slow, that suggest people make a series of rational and irrational decisions. He criticizes the argument that "regardless of reason's virtues, we just aren't any good at it." His point is that people are not as "stupid as scholars think they are." He explains that people are rational because they make thoughtful decisions in their everyday lives. For example, when someone has to make a big life decision, they critically assess the outcomes, consequences, and alternative options. Nassim Nicholas Taleb has equated the book's importance to that of Adam Smith's “The Wealth of Nations” and Sigmund Freud's “The Interpretation of Dreams.” Replication crisis Part of the book has been swept up in the replication crisis facing psychology and the social sciences. It was discovered that many prominent research findings were difficult or impossible for others to replicate, and thus the original findings were called into question. An analysis of the studies cited in chapter 4, "The Associative Machine", found that their replicability index (R-index) is 14, indicating essentially low to no reliability. Kahneman himself responded to the study in blog comments and acknowledged the chapter's shortcomings: "I placed too much faith in underpowered studies." Others have noted the irony in the fact that Kahneman made a mistake in judgment similar to the ones he studied. A later analysis made a bolder claim that, despite Kahneman's previous contributions to the field of decision making, most of the book's ideas are based on 'scientific literature with shaky foundations'. A general lack of replication in the empirical studies cited in the book was given as a justification. See also Behavioral economics Cognitive reflection test Decision theory Dual process theory List of cognitive biases Outline of thought Peak–end rule References External links How To Think Fast & Slow, excerpt at Penguin Books Australia Daniel Kahneman changed the way we think about thinking. But what do other thinkers think of him? – Various interviews about Kahneman and Thinking, Fast and Slow in an article in The Guardian. 2011 non-fiction books 2011 in economic history Books about bias Books about cognition Books about creativity Choice modelling Cognitive biases Cognition Decision-making Farrar, Straus and Giroux books Heuristics Prospect theory Psychology of learning Risk analysis Self Thought Thought experiments
0.770035
0.99922
0.769434
Temperature dependence of viscosity
Viscosity depends strongly on temperature. In liquids it usually decreases with increasing temperature, whereas, in most gases, viscosity increases with increasing temperature. This article discusses several models of this dependence, ranging from rigorous first-principles calculations for monatomic gases, to empirical correlations for liquids. Understanding the temperature dependence of viscosity is important for many applications, for instance engineering lubricants that perform well under varying temperature conditions (such as in a car engine), since the performance of a lubricant depends in part on its viscosity. Engineering problems of this type fall under the purview of tribology. Here dynamic viscosity is denoted by and kinematic viscosity by . The formulas given are valid only for an absolute temperature scale; therefore, unless stated otherwise temperatures are in kelvins. Physical causes Viscosity in gases arises from molecules traversing layers of flow and transferring momentum between layers. This transfer of momentum can be thought of as a frictional force between layers of flow. Since the momentum transfer is caused by free motion of gas molecules between collisions, increasing thermal agitation of the molecules results in a larger viscosity. Hence, gaseous viscosity increases with temperature. In liquids, viscous forces are caused by molecules exerting attractive forces on each other across layers of flow. Increasing temperature results in a decrease in viscosity because a larger temperature means particles have greater thermal energy and are more easily able to overcome the attractive forces binding them together. An everyday example of this viscosity decrease is cooking oil moving more fluidly in a hot frying pan than in a cold one. Gases The kinetic theory of gases allows accurate calculation of the temperature-variation of gaseous viscosity. The theoretical basis of the kinetic theory is given by the Boltzmann equation and Chapman–Enskog theory, which allow accurate statistical modeling of molecular trajectories. In particular, given a model for intermolecular interactions, one can calculate with high precision the viscosity of monatomic and other simple gases (for more complex gases, such as those composed of polar molecules, additional assumptions must be introduced which reduce the accuracy of the theory). The viscosity predictions for four molecular models are discussed below. The predictions of the first three models (hard-sphere, power-law, and Sutherland) can be simply expressed in terms of elementary functions. The Lennard–Jones model predicts a more complicated -dependence, but is more accurate than the other three models and is widely used in engineering practice. Hard-sphere kinetic theory If one models gas molecules as elastic hard spheres (with mass and diameter ), then elementary kinetic theory predicts that viscosity increases with the square root of absolute temperature : where is the Boltzmann constant. While correctly predicting the increase of gaseous viscosity with temperature, the trend is not accurate; the viscosity of real gases increases more rapidly than this. Capturing the actual dependence requires more realistic models of molecular interactions, in particular the inclusion of attractive interactions which are present in all real gases. 
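As a check on the hard-sphere result, the following minimal Python sketch extrapolates a gas viscosity using only the √T scaling stated above; the reference viscosity value and the choice of Python are illustrative assumptions, not data from the text.

```python
import math

def hard_sphere_viscosity(T, T0, mu0):
    """Hard-sphere kinetic theory: viscosity scales as sqrt(T).

    T, T0 : absolute temperatures in kelvins
    mu0   : known (measured) viscosity at the reference temperature T0
    Returns the predicted viscosity at T, in the same units as mu0.
    """
    return mu0 * math.sqrt(T / T0)

# Example: extrapolate a gas viscosity measured at 300 K up to 600 K.
# The hard-sphere model predicts only a sqrt(2) ~ 1.41x increase; real gases
# typically rise somewhat faster, which is why the power-law, Sutherland and
# Lennard-Jones models below add more realistic intermolecular interactions.
mu_300 = 1.8e-5   # Pa*s, a typical order of magnitude for air (illustrative value)
print(hard_sphere_viscosity(600.0, 300.0, mu_300))   # ~2.55e-5 Pa*s
```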
Power-law force A modest improvement over the hard-sphere model is a repulsive inverse power-law force, where the force between two molecules separated by distance is proportional to , where is an empirical parameter. This is not a realistic model for real-world gases (except possibly at high temperature), but provides a simple illustration of how changing intermolecular interactions affects the predicted temperature dependence of viscosity. In this case, kinetic theory predicts an increase in temperature as , where . More precisely, if is the known viscosity at temperature , then Taking recovers the hard-sphere result, . For finite , corresponding to softer repulsion, is greater than , which results in faster increase of viscosity compared with the hard-sphere model. Fitting to experimental data for hydrogen and helium gives predictions for and shown in the table. The model is modestly accurate for these two gases, but inaccurate for other gases. Sutherland model Another simple model for gaseous viscosity is the Sutherland model, which adds weak intermolecular attractions to the hard-sphere model. If the attractions are small, they can be treated perturbatively, which leads to where , called the Sutherland constant, can be expressed in terms of the parameters of the intermolecular attractive force. Equivalently, if is a known viscosity at temperature , then Values of obtained from fitting to experimental data are shown in the table below for several gases. The model is modestly accurate for a number of gases (nitrogen, oxygen, argon, air, and others), but inaccurate for other gases like hydrogen and helium. In general, it has been argued that the Sutherland model is actually a poor model of intermolecular interactions, and is useful only as a simple interpolation formula for a restricted set of gases over a restricted range of temperatures. Lennard-Jones Under fairly general conditions on the molecular model, the kinetic theory prediction for can be written in the form where is called the collision integral and is a function of temperature as well as the parameters of the intermolecular interaction. It is completely determined by the kinetic theory, being expressed in terms of integrals over collisional trajectories of pairs of molecules. In general, is a complicated function of both temperature and the molecular parameters; the power-law and Sutherland models are unusual in that can be expressed in terms of elementary functions. The Lennard–Jones model assumes an intermolecular pair potential of the form where and are parameters and is the distance separating the centers of mass of the molecules. As such, the model is designed for spherically symmetric molecules. Nevertheless, it is frequently used for non-spherically symmetric molecules provided these do not possess a large dipole moment. The collisional integral for the Lennard-Jones model cannot be expressed exactly in terms of elementary functions. Nevertheless, it can be calculated numerically, and the agreement with experiment is good – not only for spherically symmetric molecules such as the noble gases, but also for many polyatomic gases as well. An approximate form of has also been suggested: where . This equation has an average deviation of only 0.064 percent of the range . Values of and estimated from experimental data are shown in the table below for several common gases. Liquids In contrast with gases, there is no systematic microscopic theory for liquid viscosity. 
However, there are several empirical models which extrapolate a temperature dependence based on available experimental viscosities. Two-parameter exponential A simple and widespread empirical correlation for liquid viscosity is a two-parameter exponential: This equation was first proposed in 1913, and is commonly known as the Andrade equation (named after British physicist Edward Andrade). It accurately describes many liquids over a range of temperatures. Its form can be motivated by modeling momentum transport at the molecular level as an activated rate process, although the physical assumptions underlying such models have been called into question. The table below gives estimated values of and for representative liquids. Comprehensive tables of these parameters for hundreds of liquids can be found in the literature. Three- and four-parameter exponentials One can also find tabulated exponentials with additional parameters, for example and Representative values are given in the tables below. Models for kinematic viscosity The effect of temperature on the kinematic viscosity has also been described by a number of empirical equations. The Walther formula is typically written in the form where is a shift constant, and and are empirical parameters. In lubricant specifications, normally only two temperatures are specified, in which case a standard value of = 0.7 is normally assumed. The Wright model has the form where an additional function , often a polynomial fit to experimental data, has been added to the Walther formula. The Seeton model is based on curve fitting the viscosity dependence of many liquids (refrigerants, hydrocarbons and lubricants) versus temperature and applies over a large temperature and viscosity range: where is absolute temperature in kelvins, is the kinematic viscosity in centistokes, is the zero order modified Bessel function of the second kind, and and are empirical parameters specific to each liquid. For liquid metal viscosity as a function of temperature, Seeton proposed: See also Viscosity index Tribology Transport phenomena Molecular modeling Intermolecular force Force field (chemistry) Joback method Notes References . Non-Newtonian fluids
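To make the two-parameter exponential concrete, here is a small, hedged sketch of how A and B can be obtained from two measured viscosities and then used for interpolation; the water-like numbers are illustrative assumptions, not the tabulated parameters the text refers to.

```python
import math

def fit_andrade(T1, mu1, T2, mu2):
    """Fit the two-parameter exponential mu(T) = A * exp(B / T)
    from two measured (temperature, viscosity) pairs.
    Temperatures must be absolute (kelvins)."""
    B = math.log(mu1 / mu2) / (1.0 / T1 - 1.0 / T2)
    A = mu1 * math.exp(-B / T1)
    return A, B

def andrade(T, A, B):
    """Evaluate the fitted two-parameter exponential at temperature T."""
    return A * math.exp(B / T)

# Illustrative (not tabulated) values for a water-like liquid:
A, B = fit_andrade(293.15, 1.0e-3, 333.15, 0.47e-3)   # Pa*s at roughly 20 C and 60 C
print(andrade(313.15, A, B))    # interpolated viscosity near 40 C, ~6.7e-4 Pa*s
```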
0.778214
0.988653
0.769383
Gyromagnetic ratio
In physics, the gyromagnetic ratio (also sometimes known as the magnetogyric ratio in other disciplines) of a particle or system is the ratio of its magnetic moment to its angular momentum, and it is often denoted by the symbol , gamma. Its SI unit is the radian per second per tesla (rad⋅s−1⋅T−1) or, equivalently, the coulomb per kilogram (C⋅kg−1). The term "gyromagnetic ratio" is often used as a synonym for a different but closely related quantity, the -factor. The -factor only differs from the gyromagnetic ratio in being dimensionless. For a classical rotating body Consider a nonconductive charged body rotating about an axis of symmetry. According to the laws of classical physics, it has both a magnetic dipole moment due to the movement of charge and an angular momentum due to the movement of mass arising from its rotation. It can be shown that as long as its charge and mass density and flow are distributed identically and rotationally symmetric, its gyromagnetic ratio is where is its charge and is its mass. The derivation of this relation is as follows. It suffices to demonstrate this for an infinitesimally narrow circular ring within the body, as the general result then follows from an integration. Suppose the ring has radius , area , mass , charge , and angular momentum . Then the magnitude of the magnetic dipole moment is For an isolated electron An isolated electron has an angular momentum and a magnetic moment resulting from its spin. While an electron's spin is sometimes visualized as a literal rotation about an axis, it cannot be attributed to mass distributed identically to the charge. The above classical relation does not hold, giving the wrong result by the absolute value of the electron's -factor, which is denoted : where is the Bohr magneton. The gyromagnetic ratio due to electron spin is twice that due to the orbiting of an electron. In the framework of relativistic quantum mechanics, where is the fine-structure constant. Here the small corrections to the relativistic result come from the quantum field theory calculations of the anomalous magnetic dipole moment. The electron -factor is known to twelve decimal places by measuring the electron magnetic moment in a one-electron cyclotron: The electron gyromagnetic ratio is The electron -factor and are in excellent agreement with theory; see Precision tests of QED for details. Gyromagnetic factor not as a consequence of relativity Since a gyromagnetic factor equal to 2 follows from Dirac's equation, it is a frequent misconception to think that a -factor 2 is a consequence of relativity; it is not. The factor 2 can be obtained from the linearization of both the Schrödinger equation and the relativistic Klein–Gordon equation (which leads to Dirac's). In both cases a 4-spinor is obtained and for both linearizations the -factor is found to be equal to 2; Therefore, the factor 2 is a consequence of the minimal coupling and of the fact of having the same order of derivatives for space and time. Physical spin particles which cannot be described by the linear gauged Dirac equation satisfy the gauged Klein–Gordon equation extended by the term according to, Here, and stand for the Lorentz group generators in the Dirac space, and the electromagnetic tensor respectively, while is the electromagnetic four-potential. An example for such a particle, is the spin companion to spin in the representation space of the Lorentz group. This particle has been shown to be characterized by and consequently to behave as a truly quadratic fermion. 
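A brief numerical cross-check of the two results above, the classical ratio q/(2m) and the electron-spin value g_e·μ_B/ħ; the CODATA constants used here are standard values assumed for illustration, not quoted from the text.

```python
# Constants (CODATA values, rounded) -- assumed here, not taken from the article.
e    = 1.602176634e-19      # elementary charge, C
m_e  = 9.1093837015e-31     # electron mass, kg
hbar = 1.054571817e-34      # reduced Planck constant, J*s
mu_B = 9.2740100783e-24     # Bohr magneton, J/T
g_e  = 2.00231930436        # magnitude of the electron g-factor

# Classical rotating body with identical charge and mass distributions:
gamma_classical = e / (2.0 * m_e)      # ~8.79e10 rad s^-1 T^-1

# Electron spin: gamma = g_e * mu_B / hbar, roughly twice the classical value.
gamma_electron = g_e * mu_B / hbar     # ~1.76e11 rad s^-1 T^-1

print(gamma_classical, gamma_electron, gamma_electron / gamma_classical)  # ratio ~ g_e
```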
For a nucleus Protons, neutrons, and many nuclei carry nuclear spin, which gives rise to a gyromagnetic ratio as above. The ratio is conventionally written in terms of the proton mass and charge, even for neutrons and for other nuclei, for the sake of simplicity and consistency. The formula is: where is the nuclear magneton, and is the -factor of the nucleon or nucleus in question. The ratio equal to , is 7.622593285(47) MHz/T. The gyromagnetic ratio of a nucleus plays a role in nuclear magnetic resonance (NMR) and magnetic resonance imaging (MRI). These procedures rely on the fact that bulk magnetization due to nuclear spins precess in a magnetic field at a rate called the Larmor frequency, which is simply the product of the gyromagnetic ratio with the magnetic field strength. With this phenomenon, the sign of determines the sense (clockwise vs counterclockwise) of precession. Most common nuclei such as 1H and 13C have positive gyromagnetic ratios. Approximate values for some common nuclei are given in the table below. Larmor precession Any free system with a constant gyromagnetic ratio, such as a rigid system of charges, a nucleus, or an electron, when placed in an external magnetic field (measured in teslas) that is not aligned with its magnetic moment, will precess at a frequency (measured in hertz), that is proportional to the external field: For this reason, values of , in units of hertz per tesla (Hz/T), are often quoted instead of . Heuristic derivation The derivation of this relation is as follows: First we must prove that the torque resulting from subjecting a magnetic moment to a magnetic field is The identity of the functional form of the stationary electric and magnetic fields has led to defining the magnitude of the magnetic dipole moment equally well as , or in the following way, imitating the moment of an electric dipole: The magnetic dipole can be represented by a needle of a compass with fictitious magnetic charges on the two poles and vector distance between the poles under the influence of the magnetic field of earth By classical mechanics the torque on this needle is But as previously stated so the desired formula comes up. is the unit distance vector. The model of the spinning electron we use in the derivation has an evident analogy with a gyroscope. For any rotating body the rate of change of the angular momentum equals the applied torque : Note as an example the precession of a gyroscope. The earth's gravitational attraction applies a force or torque to the gyroscope in the vertical direction, and the angular momentum vector along the axis of the gyroscope rotates slowly about a vertical line through the pivot. In the place of the gyroscope imagine a sphere spinning around the axis and with its center on the pivot of the gyroscope, and along the axis of the gyroscope two oppositely directed vectors both originated in the center of the sphere, upwards and downwards Replace the gravity with a magnetic flux density represents the linear velocity of the pike of the arrow along a circle whose radius is where is the angle between and the vertical. Hence the angular velocity of the rotation of the spin is Consequently, This relationship also explains an apparent contradiction between the two equivalent terms, gyromagnetic ratio versus magnetogyric ratio: whereas it is a ratio of a magnetic property (i.e. dipole moment) to a gyric (rotational, from , "turn") property (i.e. 
angular momentum), it is also, at the same time, a ratio between the angular precession frequency (another gyric property) and the magnetic field. The angular precession frequency has an important physical meaning: It is the angular cyclotron frequency, the resonance frequency of an ionized plasma being under the influence of a static finite magnetic field, when we superimpose a high frequency electromagnetic field. See also Charge-to-mass ratio Chemical shift Landé -factor Larmor equation Proton gyromagnetic ratio References Atomic physics Nuclear magnetic resonance Ratios
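As a worked example of the Larmor-precession relation discussed above, the sketch below converts the quoted μ_N/h value into an NMR frequency; the proton g-factor is a standard CODATA value assumed here, not taken from the text.

```python
# Larmor frequency f = gamma * B / (2*pi). For a nucleus, gamma = g * mu_N / hbar,
# so f = g * (mu_N / h) * B, with mu_N/h = 7.622593285 MHz/T as quoted above.
MU_N_OVER_H_MHZ_PER_T = 7.622593285   # MHz/T (value from the article)
g_proton = 5.5856946893               # proton g-factor (CODATA value, assumed here)

def larmor_frequency_mhz(g_factor, B_tesla):
    """Precession frequency in MHz of a nucleus with the given g-factor in field B."""
    return g_factor * MU_N_OVER_H_MHZ_PER_T * B_tesla

print(larmor_frequency_mhz(g_proton, 1.0))   # ~42.58 MHz/T for 1H, the usual NMR figure
print(larmor_frequency_mhz(g_proton, 3.0))   # ~127.7 MHz in a 3 T MRI scanner
```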
0.773613
0.994526
0.769378
Servomechanism
In mechanical and control engineering, a servomechanism (also called servo system, or simply servo) is a control system for the position and its time derivatives, such as velocity, of a mechanical system. It often includes a servomotor, and uses closed-loop control to reduce steady-state error and improve dynamic response. In closed-loop control, error-sensing negative feedback is used to correct the action of the mechanism. In displacement-controlled applications, it usually includes a built-in encoder or other position feedback mechanism to ensure the output is achieving the desired effect. Following a specified motion trajectory is called servoing, where "servo" is used as a verb. The servo prefix originates from the Latin word servus meaning slave. The term correctly applies only to systems where the feedback or error-correction signals help control mechanical position, speed, attitude or any other measurable variables. For example, an automotive power window control is not a servomechanism, as there is no automatic feedback that controls position—the operator does this by observation. By contrast a car's cruise control uses closed-loop feedback, which classifies it as a servomechanism. Applications Position control A common type of servo provides position control. Commonly, servos are electric, hydraulic, or pneumatic. They operate on the principle of negative feedback, where the control input is compared to the actual position of the mechanical system as measured by some type of transducer at the output. Any difference between the actual and wanted values (an "error signal") is amplified (and converted) and used to drive the system in the direction necessary to reduce or eliminate the error. This procedure is one widely used application of control theory. Typical servos can give a rotary (angular) or linear output. Speed control Speed control via a governor is another type of servomechanism. The steam engine uses mechanical governors; another early application was to govern the speed of water wheels. Prior to World War II the constant speed propeller was developed to control engine speed for maneuvering aircraft. Fuel controls for gas turbine engines employ either hydromechanical or electronic governing. Others Positioning servomechanisms were first used in military fire-control and marine navigation equipment. Today servomechanisms are used in automatic machine tools, satellite-tracking antennas, remote control airplanes, automatic navigation systems on boats and planes, and antiaircraft-gun control systems. Other examples are fly-by-wire systems in aircraft which use servos to actuate the aircraft's control surfaces, and radio-controlled models which use RC servos for the same purpose. Many autofocus cameras also use a servomechanism to accurately move the lens. A hard disk drive has a magnetic servo system with sub-micrometer positioning accuracy. In industrial machines, servos are used to perform complex motion, in many applications. Servomotor A servomotor is a specific type of motor that is combined with a rotary encoder or a potentiometer to form a servomechanism. This assembly may in turn form part of another servomechanism. A potentiometer provides a simple analog signal to indicate position, while an encoder provides position and usually speed feedback, which by the use of a PID controller allow more precise control of position and thus faster achievement of a stable position (for a given motor power). 
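The following is a minimal, illustrative sketch of the closed-loop position control described above: a PID controller drives a simple simulated motor toward a commanded position. The plant model and every gain value are assumptions chosen only to make the loop stable, not parameters of any real servo.

```python
# Minimal discrete-time sketch of servo position control: the error between the
# commanded and measured position drives the motor through a PID law.

def simulate_servo(setpoint=1.0, kp=40.0, ki=5.0, kd=8.0, dt=0.001, steps=3000):
    pos, vel = 0.0, 0.0                        # motor shaft position and velocity
    integral, prev_error = 0.0, setpoint
    for _ in range(steps):
        error = setpoint - pos                 # error signal from the position feedback
        integral += error * dt
        derivative = (error - prev_error) / dt
        prev_error = error
        torque = kp * error + ki * integral + kd * derivative   # PID control effort
        accel = torque - 2.0 * vel             # toy plant: unit inertia, viscous friction
        vel += accel * dt
        pos += vel * dt
    return pos

print(simulate_servo())   # settles close to the 1.0 setpoint
```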
Potentiometers are subject to drift when the temperature changes whereas encoders are more stable and accurate. Servomotors are used for both high-end and low-end applications. On the high end are precision industrial components that use a rotary encoder. On the low end are inexpensive radio control servos (RC servos) used in radio-controlled models which use a free-running motor and a simple potentiometer position sensor with an embedded controller. The term servomotor generally refers to a high-end industrial component while the term servo is most often used to describe the inexpensive devices that employ a potentiometer. Stepper motors are not considered to be servomotors, although they too are used to construct larger servomechanisms. Stepper motors have inherent angular positioning, owing to their construction, and this is generally used in an open-loop manner without feedback. They are generally used for medium-precision applications. RC servos are used to provide actuation for various mechanical systems such as the steering of a car, the control surfaces on a plane, or the rudder of a boat. Due to their affordability, reliability, and simplicity of control by microprocessors, they are often used in small-scale robotics applications. A standard RC receiver (or a microcontroller) sends pulse-width modulation (PWM) signals to the servo. The electronics inside the servo translate the width of the pulse into a position. When the servo is commanded to rotate, the motor is powered until the potentiometer reaches the value corresponding to the commanded position. History James Watt's steam engine governor is generally considered the first powered feedback system. The windmill fantail is an earlier example of automatic control, but since it does not have an amplifier or gain, it is not usually considered a servomechanism. The first feedback position control device was the ship steering engine, used to position the rudder of large ships based on the position of the ship's wheel. John McFarlane Gray was a pioneer. His patented design was used on the SS Great Eastern in 1866. Joseph Farcot may deserve equal credit for the feedback concept, with several patents between 1862 and 1868. The telemotor was invented around 1872 by Andrew Betts Brown, allowing elaborate mechanisms between the control room and the engine to be greatly simplified. Steam steering engines had the characteristics of a modern servomechanism: an input, an output, an error signal, and a means for amplifying the error signal used for negative feedback to drive the error towards zero. The Ragonnet power reverse mechanism was a general purpose air or steam-powered servo amplifier for linear motion patented in 1909. Electrical servomechanisms were used as early as 1888 in Elisha Gray's Telautograph. Electrical servomechanisms require a power amplifier. World War II saw the development of electrical fire-control servomechanisms, using an amplidyne as the power amplifier. Vacuum tube amplifiers were used in the UNISERVO tape drive for the UNIVAC I computer. The Royal Navy began experimenting with Remote Power Control (RPC) on HMS Champion in 1928 and began using RPC to control searchlights in the early 1930s. During WW2 RPC was used to control gun mounts and gun directors. Modern servomechanisms use solid state power amplifiers, usually built from MOSFET or thyristor devices. Small servos may use power transistors. The origin of the word is believed to come from the French "Le Servomoteur" or the slavemotor, first used by J. J. L. 
Farcot in 1868 to describe hydraulic and steam engines for use in ship steering. The simplest kind of servos use bang–bang control. More complex control systems use proportional control, PID control, and state space control, which are studied in modern control theory. Types of performances Servos can be classified by means of their feedback control systems: type 0 servos: under steady-state conditions they produce a constant value of the output with a constant error signal; type 1 servos: under steady-state conditions they produce a constant value of the output with null error signal, but a constant rate of change of the reference implies a constant error in tracking the reference; type 2 servos: under steady-state conditions they produce a constant value of the output with null error signal. A constant rate of change of the reference implies a null error in tracking the reference. A constant rate of acceleration of the reference implies a constant error in tracking the reference. The servo bandwidth indicates the capability of the servo to follow rapid changes in the commanded input. See also Further reading Hsue-Shen Tsien (1954) Engineering Cybernetics, McGraw Hill, link from HathiTrust References External links Ontario News "pioneer in servo technology" Rane Pro Audio Reference definition of "servo-loop" Seattle Robotics Society's "What is a Servo?" different types of servo motors" Control theory Control devices Mechanical amplifiers
0.776539
0.990776
0.769376
Dispersion relation
In the physical sciences and electrical engineering, dispersion relations describe the effect of dispersion on the properties of waves in a medium. A dispersion relation relates the wavelength or wavenumber of a wave to its frequency. Given the dispersion relation, one can calculate the frequency-dependent phase velocity and group velocity of each sinusoidal component of a wave in the medium, as a function of frequency. In addition to the geometry-dependent and material-dependent dispersion relations, the overarching Kramers–Kronig relations describe the frequency-dependence of wave propagation and attenuation. Dispersion may be caused either by geometric boundary conditions (waveguides, shallow water) or by interaction of the waves with the transmitting medium. Elementary particles, considered as matter waves, have a nontrivial dispersion relation, even in the absence of geometric constraints and other media. In the presence of dispersion, a wave does not propagate with an unchanging waveform, giving rise to the distinct frequency-dependent phase velocity and group velocity. Dispersion Dispersion occurs when sinusoidal waves of different wavelengths have different propagation velocities, so that a wave packet of mixed wavelengths tends to spread out in space. The speed of a plane wave, , is a function of the wave's wavelength : The wave's speed, wavelength, and frequency, f, are related by the identity The function expresses the dispersion relation of the given medium. Dispersion relations are more commonly expressed in terms of the angular frequency and wavenumber . Rewriting the relation above in these variables gives where we now view f as a function of k. The use of ω(k) to describe the dispersion relation has become standard because both the phase velocity ω/k and the group velocity dω/dk have convenient representations via this function. The plane waves being considered can be described by where A is the amplitude of the wave, A0 = A(0, 0), x is a position along the wave's direction of travel, and t is the time at which the wave is described. Plane waves in vacuum Plane waves in vacuum are the simplest case of wave propagation: no geometric constraint, no interaction with a transmitting medium. Electromagnetic waves in vacuum For electromagnetic waves in vacuum, the angular frequency is proportional to the wavenumber: This is a linear dispersion relation. In this case, the phase velocity and the group velocity are the same: and thus both are equal to the speed of light in vacuum, which is frequency-independent. De Broglie dispersion relations For de Broglie matter waves the frequency dispersion relation is non-linear: The equation says the matter wave frequency in vacuum varies with wavenumber in the non-relativistic approximation. The variation has two parts: a constant part due to the de Broglie frequency of the rest mass and a quadratic part due to kinetic energy. Derivation While applications of matter waves occur at non-relativistic velocity, de Broglie applied special relativity to derive his waves. Starting from the relativistic energy–momentum relation: use the de Broglie relations for energy and momentum for matter waves, where is the angular frequency and is the wavevector with magnitude , equal to the wave number. Divide by and take the square root. This gives the relativistic frequency dispersion relation: Practical work with matter waves occurs at non-relativistic velocity. 
To approximate, we pull out the rest-mass dependent frequency: Then we see that the factor is very small so for not too large, we expand and multiply: This gives the non-relativistic approximation discussed above. If we start with the non-relativistic Schrödinger equation, we end up without the first (rest-mass) term. Animation (not reproduced here): phase and group velocity of electrons. The animation portrays the de Broglie phase and group velocities (in slow motion) of three free electrons traveling over a field 0.4 ångströms in width. The momentum per unit mass (proper velocity) of the middle electron is lightspeed, so that its group velocity is 0.707 c. The top electron has twice the momentum, while the bottom electron has half. Note that as the momentum increases, the phase velocity decreases down to c, whereas the group velocity increases up to c, until the wave packet and its phase maxima move together near the speed of light, whereas the wavelength continues to decrease without bound. Both transverse and longitudinal coherence widths (packet sizes) of such high-energy electrons in the lab may be orders of magnitude larger than the ones shown here. Frequency versus wavenumber As mentioned above, when the focus in a medium is on refraction rather than absorption (that is, on the real part of the refractive index), it is common to refer to the functional dependence of angular frequency on wavenumber as the dispersion relation. For particles, this translates to a knowledge of energy as a function of momentum. Waves and optics The name "dispersion relation" originally comes from optics. It is possible to make the effective speed of light dependent on wavelength by making light pass through a material which has a non-constant index of refraction, or by using light in a non-uniform medium such as a waveguide. In this case, the waveform will spread over time, such that a narrow pulse will become an extended pulse, i.e., be dispersed. In these materials, the derivative dω/dk is known as the group velocity and corresponds to the speed at which the peak of the pulse propagates, a value different from the phase velocity. Deep water waves The dispersion relation for deep water waves is often written as ω = √(gk), where g is the acceleration due to gravity. Deep water, in this respect, is commonly defined as the case where the water depth is larger than half the wavelength. In this case the phase velocity is ω/k = √(g/k) and the group velocity is dω/dk, which is half the phase velocity. Waves on a string For an ideal string, the dispersion relation can be written as ω = k√(T/μ), where T is the tension force in the string, and μ is the string's mass per unit length. As for the case of electromagnetic waves in vacuum, ideal strings are thus a non-dispersive medium, i.e. the phase and group velocities are equal and independent (to first order) of vibration frequency. For a nonideal string, where stiffness is taken into account, the dispersion relation acquires an additional wavenumber-dependent correction term whose coefficient is a constant that depends on the string. Electron band structure In the study of solids, the dispersion relation of electrons is of paramount importance. The periodicity of crystals means that many levels of energy are possible for a given momentum and that some energies might not be available at any momentum. The collection of all possible energies and momenta is known as the band structure of a material. Properties of the band structure define whether the material is an insulator, semiconductor or conductor. 
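A small worked example of the deep-water relation ω² = gk given above, computing the phase and group velocities for a chosen wavelength; the 100 m swell is an illustrative assumption.

```python
import math

G = 9.81   # m/s^2, acceleration due to gravity

def deep_water(wavelength_m):
    """Phase and group velocity for deep-water gravity waves, omega^2 = g*k."""
    k = 2.0 * math.pi / wavelength_m      # wavenumber
    omega = math.sqrt(G * k)              # dispersion relation
    v_phase = omega / k                   # sqrt(g/k)
    v_group = 0.5 * v_phase               # d(omega)/dk = half the phase velocity
    return v_phase, v_group

# A 100 m ocean swell: phase speed ~12.5 m/s, energy travelling at half that speed.
print(deep_water(100.0))
```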
Phonons Phonons are to sound waves in a solid what photons are to light: they are the quanta that carry it. The dispersion relation of phonons is also non-trivial and important, being directly related to the acoustic and thermal properties of a material. For most systems, the phonons can be categorized into two main types: those whose bands become zero at the center of the Brillouin zone are called acoustic phonons, since they correspond to classical sound in the limit of long wavelengths. The others are optical phonons, since they can be excited by electromagnetic radiation. Electron optics With high-energy (e.g., ) electrons in a transmission electron microscope, the energy dependence of higher-order Laue zone (HOLZ) lines in convergent beam electron diffraction (CBED) patterns allows one, in effect, to directly image cross-sections of a crystal's three-dimensional dispersion surface. This dynamical effect has found application in the precise measurement of lattice parameters, beam energy, and more recently for the electronics industry: lattice strain. History Isaac Newton studied refraction in prisms but failed to recognize the material dependence of the dispersion relation, dismissing the work of another researcher whose measurement of a prism's dispersion did not match Newton's own. Dispersion of waves on water was studied by Pierre-Simon Laplace in 1776. The universality of the Kramers–Kronig relations (1926–27) became apparent with subsequent papers on the dispersion relation's connection to causality in the scattering theory of all types of waves and particles. See also Ellipsometry Ultrashort pulse Waves in plasmas References External links Poster on CBED simulations to help visualize dispersion surfaces, by Andrey Chuvilin and Ute Kaiser Angular frequency calculator Equations of physics
0.773676
0.994418
0.769357
Heat engine
A heat engine is a system that converts heat to usable energy, particularly mechanical energy, which can then be used to do mechanical work. While originally conceived in the context of mechanical energy, the concept of the heat engine has been applied to various other kinds of energy, particularly electrical, since at least the late 19th century. The heat engine does this by bringing a working substance from a higher state temperature to a lower state temperature. A heat source generates thermal energy that brings the working substance to the higher temperature state. The working substance generates work in the working body of the engine while transferring heat to the colder sink until it reaches a lower temperature state. During this process some of the thermal energy is converted into work by exploiting the properties of the working substance. The working substance can be any system with a non-zero heat capacity, but it usually is a gas or liquid. During this process, some heat is normally lost to the surroundings and is not converted to work. Also, some energy is unusable because of friction and drag. In general, an engine is any machine that converts energy to mechanical work. Heat engines distinguish themselves from other types of engines by the fact that their efficiency is fundamentally limited by Carnot's theorem of thermodynamics. Although this efficiency limitation can be a drawback, an advantage of heat engines is that most forms of energy can be easily converted to heat by processes like exothermic reactions (such as combustion), nuclear fission, absorption of light or energetic particles, friction, dissipation and resistance. Since the heat source that supplies thermal energy to the engine can thus be powered by virtually any kind of energy, heat engines cover a wide range of applications. Heat engines are often confused with the cycles they attempt to implement. Typically, the term "engine" is used for a physical device and "cycle" for the models. Overview In thermodynamics, heat engines are often modeled using a standard engineering model such as the Otto cycle. The theoretical model can be refined and augmented with actual data from an operating engine, using tools such as an indicator diagram. Since very few actual implementations of heat engines exactly match their underlying thermodynamic cycles, one could say that a thermodynamic cycle is an ideal case of a mechanical engine. In any case, fully understanding an engine and its efficiency requires a good understanding of the (possibly simplified or idealised) theoretical model, the practical nuances of an actual mechanical engine and the discrepancies between the two. In general terms, the larger the difference in temperature between the hot source and the cold sink, the larger is the potential thermal efficiency of the cycle. On Earth, the cold side of any heat engine is limited to being close to the ambient temperature of the environment, or not much lower than 300 kelvin, so most efforts to improve the thermodynamic efficiencies of various heat engines focus on increasing the temperature of the source, within material limits. The maximum theoretical efficiency of a heat engine (which no engine ever attains) is equal to the temperature difference between the hot and cold ends divided by the temperature at the hot end, each expressed in absolute temperature. 
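The bound stated above is easy to evaluate; in the short sketch below the temperatures are illustrative assumptions, chosen only to show how raising the hot-side temperature raises the limit.

```python
def carnot_efficiency(T_hot, T_cold):
    """Maximum theoretical heat-engine efficiency, temperatures in kelvins."""
    return (T_hot - T_cold) / T_hot       # equivalently 1 - T_cold/T_hot

# A cold sink near ambient (300 K) and a hot source at 900 K:
print(carnot_efficiency(900.0, 300.0))    # ~0.667 -- no real engine reaches this bound
# Raising the source temperature is the usual route to a higher limit:
print(carnot_efficiency(1500.0, 300.0))   # 0.8
```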
The efficiency of various heat engines proposed or used today has a large range: 3% (97 percent waste heat using low-quality heat) for the ocean thermal energy conversion (OTEC) ocean power proposal; 25% for most automotive gasoline engines; 49% for a supercritical coal-fired power station such as the Avedøre Power Station; and 60% for a combined cycle gas turbine. The efficiency of these processes is roughly proportional to the temperature drop across them. Significant energy may be consumed by auxiliary equipment, such as pumps, which effectively reduces efficiency. Examples Although some cycles have a typical combustion location (internal or external), they can often be implemented with the other. For example, John Ericsson developed an externally heated engine running on a cycle very much like the earlier Diesel cycle. In addition, externally heated engines can often be implemented in open or closed cycles. In a closed cycle the working fluid is retained within the engine at the completion of the cycle, whereas in an open cycle the working fluid is either exchanged with the environment together with the products of combustion, in the case of the internal combustion engine, or simply vented to the environment, in the case of external combustion engines like steam engines and turbines. Everyday examples Everyday examples of heat engines include the thermal power station, internal combustion engine, firearms, refrigerators and heat pumps. Power stations are examples of heat engines run in a forward direction in which heat flows from a hot reservoir into a cool reservoir to produce work as the desired product. Refrigerators, air conditioners and heat pumps are examples of heat engines that are run in reverse, i.e. they use work to take heat energy at a low temperature and raise its temperature in a more efficient way than the simple conversion of work into heat (either through friction or electrical resistance). Refrigerators remove heat from within a thermally sealed chamber at low temperature and vent waste heat at a higher temperature to the environment, while heat pumps take heat from the low-temperature environment and 'vent' it into a thermally sealed chamber (a house) at higher temperature. In general heat engines exploit the thermal properties associated with the expansion and compression of gases according to the gas laws, or the properties associated with phase changes between gas and liquid states. Earth's heat engine Earth's atmosphere and hydrosphere (Earth's heat engine) are coupled processes that constantly even out solar heating imbalances through evaporation of surface water, convection, rainfall, winds and ocean circulation, thereby distributing heat around the globe. A Hadley cell is an example of a heat engine. It involves the rising of warm and moist air in the earth's equatorial region and the descent of colder air in the subtropics, creating a thermally driven direct circulation, with consequent net production of kinetic energy. Phase-change cycles In phase-change cycles and engines, the working fluids are gases and liquids. The engine converts the working fluid from a gas to a liquid, from liquid to gas, or both, generating work from the fluid expansion or compression. 
Rankine cycle (classical steam engine) Regenerative cycle (steam engine more efficient than Rankine cycle) Organic Rankine cycle (Coolant changing phase in temperature ranges of ice and hot liquid water) Vapor to liquid cycle (drinking bird, injector, Minto wheel) Liquid to solid cycle (frost heaving – water changing from ice to liquid and back again can lift rock up to 60 cm.) Solid to gas cycle (firearms – solid propellants combust to hot gases.) Gas-only cycles In these cycles and engines the working fluid is always a gas (i.e., there is no phase change): Carnot cycle (Carnot heat engine) Ericsson cycle (Caloric Ship John Ericsson) Stirling cycle (Stirling engine, thermoacoustic devices) Internal combustion engine (ICE): Otto cycle (e.g. gasoline/petrol engine) Diesel cycle (e.g. Diesel engine) Atkinson cycle (Atkinson engine) Brayton cycle or Joule cycle originally Ericsson cycle (gas turbine) Lenoir cycle (e.g., pulse jet engine) Miller cycle (Miller engine) Liquid-only cycles In these cycles and engines the working fluid are always like liquid: Stirling cycle (Malone engine) Electron cycles Johnson thermoelectric energy converter Thermoelectric (Peltier–Seebeck effect) Thermogalvanic cell Thermionic emission Thermotunnel cooling Magnetic cycles Thermo-magnetic motor (Tesla) Cycles used for refrigeration A domestic refrigerator is an example of a heat pump: a heat engine in reverse. Work is used to create a heat differential. Many cycles can run in reverse to move heat from the cold side to the hot side, making the cold side cooler and the hot side hotter. Internal combustion engine versions of these cycles are, by their nature, not reversible. Refrigeration cycles include: Air cycle machine Gas-absorption refrigerator Magnetic refrigeration Stirling cryocooler Vapor-compression refrigeration Vuilleumier cycle Evaporative heat engines The Barton evaporation engine is a heat engine based on a cycle producing power and cooled moist air from the evaporation of water into hot dry air. Mesoscopic heat engines Mesoscopic heat engines are nanoscale devices that may serve the goal of processing heat fluxes and perform useful work at small scales. Potential applications include e.g. electric cooling devices. In such mesoscopic heat engines, work per cycle of operation fluctuates due to thermal noise. There is exact equality that relates average of exponents of work performed by any heat engine and the heat transfer from the hotter heat bath. This relation transforms the Carnot's inequality into exact equality. This relation is also a Carnot cycle equality Efficiency The efficiency of a heat engine relates how much useful work is output for a given amount of heat energy input. From the laws of thermodynamics, after a completed cycle: and therefore where is the net work extracted from the engine in one cycle. (It is negative, in the IUPAC convention, since work is done by the engine.) is the heat energy taken from the high temperature heat source in the surroundings in one cycle. (It is positive since heat energy is added to the engine.) is the waste heat given off by the engine to the cold temperature heat sink. (It is negative since heat is lost by the engine to the sink.) In other words, a heat engine absorbs heat energy from the high temperature heat source, converting part of it to useful work and giving off the rest as waste heat to the cold temperature heat sink. In general, the efficiency of a given heat transfer process is defined by the ratio of "what is taken out" to "what is put in". 
(For a refrigerator or heat pump, which can be considered as a heat engine run in reverse, this is the coefficient of performance and it is ≥ 1.) In the case of an engine, one desires to extract work and has to put in heat , for instance from combustion of a fuel, so the engine efficiency is reasonably defined as The efficiency is less than 100% because of the waste heat unavoidably lost to the cold sink (and corresponding compression work put in) during the required recompression at the cold temperature before the power stroke of the engine can occur again. The theoretical maximum efficiency of any heat engine depends only on the temperatures it operates between. This efficiency is usually derived using an ideal imaginary heat engine such as the Carnot heat engine, although other engines using different cycles can also attain maximum efficiency. Mathematically, after a full cycle, the overall change of entropy is zero: Note that is positive because isothermal expansion in the power stroke increases the multiplicity of the working fluid while is negative since recompression decreases the multiplicity. If the engine is ideal and runs reversibly, and , and thus , which gives and thus the Carnot limit for heat-engine efficiency, where is the absolute temperature of the hot source and that of the cold sink, usually measured in kelvins. The reasoning behind this being the maximal efficiency goes as follows. It is first assumed that if a more efficient heat engine than a Carnot engine is possible, then it could be driven in reverse as a heat pump. Mathematical analysis can be used to show that this assumed combination would result in a net decrease in entropy. Since, by the second law of thermodynamics, this is statistically improbable to the point of exclusion, the Carnot efficiency is a theoretical upper bound on the reliable efficiency of any thermodynamic cycle. Empirically, no heat engine has ever been shown to run at a greater efficiency than a Carnot cycle heat engine. Figure 2 and Figure 3 show variations on Carnot cycle efficiency with temperature. Figure 2 indicates how efficiency changes with an increase in the heat addition temperature for a constant compressor inlet temperature. Figure 3 indicates how the efficiency changes with an increase in the heat rejection temperature for a constant turbine inlet temperature. Endo-reversible heat-engines By its nature, any maximally efficient Carnot cycle must operate at an infinitesimal temperature gradient; this is because any transfer of heat between two bodies of differing temperatures is irreversible, therefore the Carnot efficiency expression applies only to the infinitesimal limit. The major problem is that the objective of most heat-engines is to output power, and infinitesimal power is seldom desired. A different measure of ideal heat-engine efficiency is given by considerations of endoreversible thermodynamics, where the system is broken into reversible subsystems, but with non reversible interactions between them. A classical example is the Curzon–Ahlborn engine, very similar to a Carnot engine, but where the thermal reservoirs at temperature and are allowed to be different from the temperatures of the substance going through the reversible Carnot cycle: and . The heat transfers between the reservoirs and the substance are considered as conductive (and irreversible) in the form . In this case, a tradeoff has to be made between power output and efficiency. 
If the engine is operated very slowly, the heat flux is low, and the classical Carnot result is found , but at the price of a vanishing power output. If instead one chooses to operate the engine at its maximum output power, the efficiency becomes (Note: T in units of K or °R) This model does a better job of predicting how well real-world heat-engines can do (Callen 1985, see also endoreversible thermodynamics): As shown, the Curzon–Ahlborn efficiency much more closely models that observed. History Heat engines have been known since antiquity but were only made into useful devices at the time of the industrial revolution in the 18th century. They continue to be developed today. Enhancements Engineers have studied the various heat-engine cycles to improve the amount of usable work they could extract from a given power source. The Carnot cycle limit cannot be reached with any gas-based cycle, but engineers have found at least two ways to bypass that limit and one way to get better efficiency without bending any rules: Increase the temperature difference in the heat engine. The simplest way to do this is to increase the hot side temperature, which is the approach used in modern combined-cycle gas turbines. Unfortunately, physical limits (such as the melting point of the materials used to build the engine) and environmental concerns regarding NOx production (if the heat source is combustion with ambient air) restrict the maximum temperature on workable heat-engines. Modern gas turbines run at temperatures as high as possible within the range of temperatures necessary to maintain acceptable NOx output . Another way of increasing efficiency is to lower the output temperature. One new method of doing so is to use mixed chemical working fluids, then exploit the changing behavior of the mixtures. One of the most famous is the so-called Kalina cycle, which uses a 70/30 mix of ammonia and water as its working fluid. This mixture allows the cycle to generate useful power at considerably lower temperatures than most other processes. Exploit the physical properties of the working fluid. The most common such exploitation is the use of water above the critical point (supercritical water). The behavior of fluids above their critical point changes radically, and with materials such as water and carbon dioxide it is possible to exploit those changes in behavior to extract greater thermodynamic efficiency from the heat engine, even if it is using a fairly conventional Brayton or Rankine cycle. A newer and very promising material for such applications is supercritical CO2. SO2 and xenon have also been considered for such applications. Downsides include issues of corrosion and erosion, the different chemical behavior above and below the critical point, the needed high pressures and – in the case of sulfur dioxide and to a lesser extent carbon dioxide – toxicity. Among the mentioned compounds xenon is least suitable for use in a nuclear reactor due to the high neutron absorption cross section of almost all isotopes of xenon, whereas carbon dioxide and water can also double as a neutron moderator for a thermal spectrum reactor. Exploit the chemical properties of the working fluid. A fairly new and novel exploit is to use exotic working fluids with advantageous chemical properties. One such is nitrogen dioxide (NO2), a toxic component of smog, which has a natural dimer as di-nitrogen tetraoxide (N2O4). At low temperature, the N2O4 is compressed and then heated. 
The increasing temperature causes each N2O4 to break apart into two NO2 molecules. This lowers the molecular weight of the working fluid, which drastically increases the efficiency of the cycle. Once the NO2 has expanded through the turbine, it is cooled by the heat sink, which makes it recombine into N2O4. This is then fed back by the compressor for another cycle. Such species as aluminium bromide (Al2Br6), NOCl, and Ga2I6 have all been investigated for such uses. To date, their drawbacks have not warranted their use, despite the efficiency gains that can be realized. Heat engine processes Each process is one of the following: isothermal (at constant temperature, maintained with heat added or removed from a heat source or sink) isobaric (at constant pressure) isometric/isochoric (at constant volume), also referred to as iso-volumetric adiabatic (no heat is added or removed from the system during adiabatic process) isentropic (reversible adiabatic process, no heat is added or removed during isentropic process) See also Carnot heat engine Cogeneration Einstein refrigerator Heat pump Reciprocating engine for a general description of the mechanics of piston engines Stirling engine Thermosynthesis Timeline of heat engine technology References Energy conversion Engine technology Engines Heating, ventilation, and air conditioning Thermodynamics
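Returning to the endoreversible (Curzon–Ahlborn) limit discussed earlier, the sketch below compares it with the Carnot bound for an illustrative, assumed pair of source and sink temperatures.

```python
import math

def carnot(T_hot, T_cold):
    """Reversible (Carnot) efficiency limit, temperatures in kelvins."""
    return 1.0 - T_cold / T_hot

def curzon_ahlborn(T_hot, T_cold):
    """Endoreversible efficiency at maximum power output."""
    return 1.0 - math.sqrt(T_cold / T_hot)

# Illustrative steam-plant-like temperatures: 565 C source, 25 C sink.
T_h, T_c = 838.15, 298.15
print(carnot(T_h, T_c))          # ~0.64  (reversible upper bound, but at vanishing power)
print(curzon_ahlborn(T_h, T_c))  # ~0.40  (closer to efficiencies observed in practice)
```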
0.772761
0.995588
0.769351
Covariance and contravariance of vectors
In physics, especially in multilinear algebra and tensor analysis, covariance and contravariance describe how the quantitative description of certain geometric or physical entities changes with a change of basis. Briefly, a contravariant vector is a list of numbers that transforms oppositely to a change of basis, and a covariant vector is a list of numbers that transforms in the same way. Contravariant vectors are often just called vectors and covariant vectors are called covectors or dual vectors. The terms covariant and contravariant were introduced by James Joseph Sylvester in 1851. Curvilinear coordinate systems, such as cylindrical or spherical coordinates, are often used in physical and geometric problems. Associated with any coordinate system is a natural choice of coordinate basis for vectors based at each point of the space, and covariance and contravariance are particularly important for understanding how the coordinate description of a vector changes by passing from one coordinate system to another. Tensors are objects in multilinear algebra that can have aspects of both covariance and contravariance. Introduction In physics, a vector typically arises as the outcome of a measurement or series of measurements, and is represented as a list (or tuple) of numbers such as The numbers in the list depend on the choice of coordinate system. For instance, if the vector represents position with respect to an observer (position vector), then the coordinate system may be obtained from a system of rigid rods, or reference axes, along which the components v1, v2, and v3 are measured. For a vector to represent a geometric object, it must be possible to describe how it looks in any other coordinate system. That is to say, the components of the vectors will transform in a certain way in passing from one coordinate system to another. A simple illustrative case is that of a Euclidean vector. For a vector, once a set of basis vectors has been defined, then the components of that vector will always vary opposite to that of the basis vectors. That vector is therefore defined as a contravariant tensor. Take a standard position vector for example. By changing the scale of the reference axes from meters to centimeters (that is, dividing the scale of the reference axes by 100, so that the basis vectors now are meters long), the components of the measured position vector are multiplied by 100. A vector's components change scale inversely to changes in scale to the reference axes, and consequently a vector is called a contravariant tensor. A vector, which is an example of a contravariant tensor, has components that transform inversely to the transformation of the reference axes, (with example transformations including rotation and dilation). The vector itself does not change under these operations; instead, the components of the vector change in a way that cancels the change in the spatial axes. In other words, if the reference axes were rotated in one direction, the component representation of the vector would rotate in exactly the opposite way. Similarly, if the reference axes were stretched in one direction, the components of the vector, would reduce in an exactly compensating way. Mathematically, if the coordinate system undergoes a transformation described by an invertible matrix M, so that the basis vectors transform according to , then the components of a vector v in the original basis must be similarly transformed via . The components of a vector are often represented arranged in a column. 
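A numerical illustration of the metres-to-centimetres example above: the basis vectors shrink by a factor of 100, so the components grow by the same factor. NumPy and the particular component values are assumptions for illustration.

```python
import numpy as np

# Change of basis from metre-long to centimetre-long basis vectors: each new basis
# vector is 0.01 times the old one, so M is a pure rescaling.
M = 0.01 * np.eye(3)

v_old = np.array([1.5, 2.0, 0.5])   # position components measured in metres
v_new = np.linalg.inv(M) @ v_old    # contravariant rule: components transform with M^-1

print(v_new)   # [150. 200.  50.]  -- the same point, now expressed in centimetres
```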
By contrast, a covector has components that transform like the reference axes. It lives in the dual vector space, and represents a linear map from vectors to scalars. The dot product operator involving vectors is a good example of a covector. To illustrate, assume we have a covector defined as , where is a vector. The components of this covector in some arbitrary basis are , with being the basis vectors in the corresponding vector space. (This can be derived by noting that we want to get the correct answer for the dot product operation when multiplying by an arbitrary vector , with components ). The covariance of these covector components is then seen by noting that if a transformation described by an invertible matrix M were to be applied to the basis vectors in the corresponding vector space, , then the components of the covector will transform with the same matrix , namely, . The components of a covector are often represented arranged in a row. A third concept related to covariance and contravariance is invariance. A scalar (also called type-0 or rank-0 tensor) is an object that does not vary with the change in basis. An example of a physical observable that is a scalar is the mass of a particle. The single, scalar value of mass is independent to changes in basis vectors and consequently is called invariant. The magnitude of a vector (such as distance) is another example of an invariant, because it remains fixed even if geometrical vector components vary. (For example, for a position vector of length meters, if all Cartesian basis vectors are changed from meters in length to meters in length, the length of the position vector remains unchanged at meters, although the vector components will all increase by a factor of ). The scalar product of a vector and a covector is invariant, because one has components that vary with the base change, and the other has components that vary oppositely, and the two effects cancel out. One thus says that covectors are dual to vectors. Thus, to summarize: A vector or tangent vector, has components that contra-vary with a change of basis to compensate. That is, the matrix that transforms the vector components must be the inverse of the matrix that transforms the basis vectors. The components of vectors (as opposed to those of covectors) are said to be contravariant. In Einstein notation (implicit summation over repeated index), contravariant components are denoted with upper indices as in A covector or cotangent vector has components that co-vary with a change of basis in the corresponding (initial) vector space. That is, the components must be transformed by the same matrix as the change of basis matrix in the corresponding (initial) vector space. The components of covectors (as opposed to those of vectors) are said to be covariant. In Einstein notation, covariant components are denoted with lower indices as in The scalar product of a vector and covector is the scalar , which is invariant. It is the duality pairing of vectors and covectors. Definition The general formulation of covariance and contravariance refers to how the components of a coordinate vector transform under a change of basis (passive transformation). Thus let V be a vector space of dimension n over a field of scalars S, and let each of and be a basis of V. Also, let the change of basis from f to f′ be given by for some invertible n×n matrix A with entries . 
Here, each vector Yj of the f′ basis is a linear combination of the vectors Xi of the f basis, so that Contravariant transformation A vector in V is expressed uniquely as a linear combination of the elements of the f basis as where v[f] are elements of the field S known as the components of v in the f basis. Denote the column vector of components of v by v[f]: so that can be rewritten as a matrix product The vector v may also be expressed in terms of the f′ basis, so that However, since the vector v itself is invariant under the choice of basis, The invariance of v combined with the relationship between f and f′ implies that giving the transformation rule In terms of components, where the coefficients are the entries of the inverse matrix of A. Because the components of the vector v transform with the inverse of the matrix A, these components are said to transform contravariantly under a change of basis. The way A relates the two pairs is depicted in the following informal diagram using an arrow. The reversal of the arrow indicates a contravariant change: Covariant transformation A linear functional α on V is expressed uniquely in terms of its components (elements in S) in the f basis as These components are the action of α on the basis vectors Xi of the f basis. Under the change of basis from f to f′ (via ), the components transform so that Denote the row vector of components of α by α[f]: so that can be rewritten as the matrix product Because the components of the linear functional α transform with the matrix A, these components are said to transform covariantly under a change of basis. The way A relates the two pairs is depicted in the following informal diagram using an arrow. A covariant relationship is indicated since the arrows travel in the same direction: Had a column vector representation been used instead, the transformation law would be the transpose Coordinates The choice of basis f on the vector space V defines uniquely a set of coordinate functions on V, by means of The coordinates on V are therefore contravariant in the sense that Conversely, a system of n quantities vi that transform like the coordinates xi on V defines a contravariant vector (or simply vector). A system of n quantities that transform oppositely to the coordinates is then a covariant vector (or covector). This formulation of contravariance and covariance is often more natural in applications in which there is a coordinate space (a manifold) on which vectors live as tangent vectors or cotangent vectors. Given a local coordinate system xi on the manifold, the reference axes for the coordinate system are the vector fields This gives rise to the frame at every point of the coordinate patch. If yi is a different coordinate system and then the frame f' is related to the frame f by the inverse of the Jacobian matrix of the coordinate transition: Or, in indices, A tangent vector is by definition a vector that is a linear combination of the coordinate partials . Thus a tangent vector is defined by Such a vector is contravariant with respect to change of frame. Under changes in the coordinate system, one has Therefore, the components of a tangent vector transform via Accordingly, a system of n quantities vi depending on the coordinates that transform in this way on passing from one coordinate system to another is called a contravariant vector. 
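A short numerical check of the two transformation rules just described can make them concrete; the random basis, the test vectors, and the NumPy-based setup below are assumptions chosen for illustration. Components of a vector pick up the inverse of the change-of-basis matrix A, components of a linear functional pick up A itself, and the value of the functional on the vector (the duality pairing) is unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
X = rng.normal(size=(n, n))          # columns X[:, i] are the old basis f
A = rng.normal(size=(n, n))          # change-of-basis matrix (a generic random matrix is invertible)
Y = X @ A                            # columns of Y are the new basis f': Y_j = sum_i a_ij X_i

v = rng.normal(size=n)               # an arbitrary geometric vector, in ambient coordinates
w = rng.normal(size=n)               # defines the linear functional alpha(u) = w . u

# Contravariant rule: components of v in each basis differ by A^{-1}.
v_f      = np.linalg.solve(X, v)     # components in the old basis
v_fprime = np.linalg.solve(Y, v)     # components in the new basis
assert np.allclose(v_fprime, np.linalg.inv(A) @ v_f)

# Covariant rule: components of alpha in each basis differ by A.
a_f      = w @ X                     # alpha_i  = <w, X_i>
a_fprime = w @ Y                     # alpha'_j = <w, Y_j>
assert np.allclose(a_fprime, a_f @ A)

# The duality pairing is invariant under the change of basis: both equal <w, v>.
assert np.allclose(a_f @ v_f, a_fprime @ v_fprime)
assert np.allclose(a_f @ v_f, w @ v)
```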
Covariant and contravariant components of a vector with a metric
In a finite-dimensional vector space V over a field K with a symmetric bilinear form $g$ (which may be referred to as the metric tensor), there is little distinction between covariant and contravariant vectors, because the bilinear form allows covectors to be identified with vectors. That is, a vector v uniquely determines a covector α via $\alpha(w) = g(v, w)$ for all vectors w. Conversely, each covector α determines a unique vector v by this equation. Because of this identification of vectors with covectors, one may speak of the covariant components or contravariant components of a vector; they are just representations of the same vector using the reciprocal basis. Given a basis $\mathbf{f} = (X_1, \ldots, X_n)$ of V, there is a unique reciprocal basis $(Y^1, \ldots, Y^n)$ of V determined by requiring that $g(Y^i, X_j) = \delta^i_j$, the Kronecker delta. In terms of these bases, any vector v can be written in two ways: $v = \sum_i v^i[\mathbf{f}]\, X_i = \sum_i v_i[\mathbf{f}]\, Y^i$. The components $v^i[\mathbf{f}]$ are the contravariant components of the vector v in the basis f, and the components $v_i[\mathbf{f}]$ are the covariant components of v in the basis f. The terminology is justified because under a change of basis the contravariant components transform with the inverse of the change-of-basis matrix, while the covariant components transform with that matrix itself.

Euclidean plane
In the Euclidean plane, the dot product allows vectors to be identified with covectors. If $\mathbf{e}_1, \mathbf{e}_2$ is a basis, then the dual basis $\mathbf{e}^1, \mathbf{e}^2$ satisfies $\mathbf{e}^i \cdot \mathbf{e}_j = \delta^i_j$. Thus, $\mathbf{e}^1$ and $\mathbf{e}_2$ are perpendicular to each other, as are $\mathbf{e}^2$ and $\mathbf{e}_1$, and the lengths of $\mathbf{e}^1$ and $\mathbf{e}^2$ are normalized against $\mathbf{e}_1$ and $\mathbf{e}_2$, respectively.

Example
For example, suppose that we are given a basis $\mathbf{e}_1, \mathbf{e}_2$ consisting of a pair of vectors making a 45° angle with one another, such that $\mathbf{e}_1$ has length 2 and $\mathbf{e}_2$ has length 1. Then the dual basis vectors are given as follows: $\mathbf{e}^2$ is the result of rotating $\mathbf{e}_1$ through an angle of 90° (where the sense is measured by assuming the pair $\mathbf{e}_1, \mathbf{e}_2$ to be positively oriented), and then rescaling so that $\mathbf{e}^2 \cdot \mathbf{e}_2 = 1$ holds; $\mathbf{e}^1$ is the result of rotating $\mathbf{e}_2$ through an angle of 90°, and then rescaling so that $\mathbf{e}^1 \cdot \mathbf{e}_1 = 1$ holds. Applying these rules gives the dual basis vectors explicitly in terms of $\mathbf{e}_1$ and $\mathbf{e}_2$, and from them the change of basis matrix going from the original basis to the reciprocal basis can be read off. The contravariant components of a given vector are then its coefficients with respect to $\mathbf{e}_1, \mathbf{e}_2$, while its covariant components are obtained by equating the two expressions for the vector v, i.e. by reading off its coefficients with respect to $\mathbf{e}^1, \mathbf{e}^2$.

Three-dimensional Euclidean space
In the three-dimensional Euclidean space, one can also determine explicitly the dual basis to a given set of basis vectors $\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3$ of $E^3$ that are not necessarily assumed to be orthogonal nor of unit norm. The dual basis vectors are:
$\mathbf{e}^1 = \dfrac{\mathbf{e}_2 \times \mathbf{e}_3}{\mathbf{e}_1 \cdot (\mathbf{e}_2 \times \mathbf{e}_3)}, \qquad \mathbf{e}^2 = \dfrac{\mathbf{e}_3 \times \mathbf{e}_1}{\mathbf{e}_2 \cdot (\mathbf{e}_3 \times \mathbf{e}_1)}, \qquad \mathbf{e}^3 = \dfrac{\mathbf{e}_1 \times \mathbf{e}_2}{\mathbf{e}_3 \cdot (\mathbf{e}_1 \times \mathbf{e}_2)}.$
Even when the $\mathbf{e}_i$ and $\mathbf{e}^i$ are not orthonormal, they are still mutually reciprocal: $\mathbf{e}^i \cdot \mathbf{e}_j = \delta^i_j$. Then the contravariant components of any vector v can be obtained by the dot product of v with the dual basis vectors: $v^i = \mathbf{v} \cdot \mathbf{e}^i$. Likewise, the covariant components of v can be obtained from the dot product of v with the basis vectors, viz. $v_i = \mathbf{v} \cdot \mathbf{e}_i$. Then v can be expressed in two (reciprocal) ways, viz. $\mathbf{v} = \sum_i v^i \mathbf{e}_i$ or $\mathbf{v} = \sum_i v_i \mathbf{e}^i$. Combining the above relations, we have $\mathbf{v} = \sum_i (\mathbf{v} \cdot \mathbf{e}^i)\, \mathbf{e}_i = \sum_i (\mathbf{v} \cdot \mathbf{e}_i)\, \mathbf{e}^i$, and we can convert between the basis and dual basis with $\mathbf{e}_i = \sum_j (\mathbf{e}_i \cdot \mathbf{e}_j)\, \mathbf{e}^j$ and $\mathbf{e}^i = \sum_j (\mathbf{e}^i \cdot \mathbf{e}^j)\, \mathbf{e}_j$. If the basis vectors are orthonormal, then they are the same as the dual basis vectors.

General Euclidean spaces
More generally, in an n-dimensional Euclidean space V, if a basis is $\mathbf{e}_1, \ldots, \mathbf{e}_n$, the reciprocal basis is given by $\mathbf{e}^i = g^{ij}\mathbf{e}_j$ (double indices are summed over), where the coefficients $g^{ij}$ are the entries of the inverse matrix of $g_{ij} = \mathbf{e}_i \cdot \mathbf{e}_j$. Indeed, we then have $\mathbf{e}^i \cdot \mathbf{e}_j = \delta^i_j$. The covariant and contravariant components of any vector are related as above by $v_i = g_{ij} v^j$ and $v^i = g^{ij} v_j$.
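As a concrete illustration of the reciprocal basis and of raising and lowering components with the metric, here is a small numerical sketch of the planar example above (basis vectors of lengths 2 and 1 at a 45° angle); the particular coordinates chosen for the basis vectors and the NumPy-based approach are assumptions made for illustration.

```python
import numpy as np

# The planar example: |e1| = 2, |e2| = 1, 45 degrees apart (coordinates are an arbitrary choice).
e1 = np.array([2.0, 0.0])
e2 = np.array([np.cos(np.pi / 4), np.sin(np.pi / 4)])
E  = np.column_stack([e1, e2])          # basis vectors as columns

# Metric (Gram matrix) g_ij = e_i . e_j and its inverse g^ij.
g     = E.T @ E
g_inv = np.linalg.inv(g)

# Reciprocal (dual) basis e^i = g^ij e_j; its columns satisfy e^i . e_j = delta^i_j.
E_dual = E @ g_inv
assert np.allclose(E_dual.T @ E, np.eye(2))

# Contravariant and covariant components of an arbitrary vector v.
v = np.array([1.0, 2.0])
v_contra = E_dual.T @ v                 # v^i = v . e^i
v_co     = E.T @ v                      # v_i = v . e_i

# Raising and lowering with the metric converts one set into the other,
# and both expansions reproduce the same geometric vector.
assert np.allclose(v_co, g @ v_contra)
assert np.allclose(v_contra, g_inv @ v_co)
assert np.allclose(E @ v_contra, v) and np.allclose(E_dual @ v_co, v)
```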
Use in tensor analysis
The distinction between covariance and contravariance is particularly important for computations with tensors, which often have mixed variance. This means that they have both covariant and contravariant components, or both vector and covector components. The valence of a tensor is the number of covariant and contravariant terms, and in Einstein notation, covariant components have lower indices, while contravariant components have upper indices. The duality between covariance and contravariance intervenes whenever a vector or tensor quantity is represented by its components, although modern differential geometry uses more sophisticated index-free methods to represent tensors. In tensor analysis, a covariant vector varies more or less reciprocally to a corresponding contravariant vector. Expressions for lengths, areas and volumes of objects in the vector space can then be given in terms of tensors with covariant and contravariant indices. Under simple expansions and contractions of the coordinates, the reciprocity is exact; under affine transformations the components of a vector intermingle on going between covariant and contravariant expression. On a manifold, a tensor field will typically have multiple upper and lower indices, where Einstein notation is widely used. When the manifold is equipped with a metric, covariant and contravariant indices become very closely related to one another. Contravariant indices can be turned into covariant indices by contracting with the metric tensor. The reverse is possible by contracting with the (matrix) inverse of the metric tensor. Note that in general, no such relation exists in spaces not endowed with a metric tensor. Furthermore, from a more abstract standpoint, a tensor is simply "there" and its components of either kind are only calculational artifacts whose values depend on the chosen coordinates. The explanation in geometric terms is that a general tensor will have contravariant indices as well as covariant indices, because it has parts that live in the tangent bundle as well as the cotangent bundle. A contravariant vector is one which transforms like $dx^\mu/d\tau$, where $x^\mu$ are the coordinates of a particle at its proper time $\tau$. A covariant vector is one which transforms like $\partial \varphi / \partial x^\mu$, where $\varphi$ is a scalar field.

Algebra and geometry
In category theory, there are covariant functors and contravariant functors. The assignment of the dual space to a vector space is a standard example of a contravariant functor. Contravariant (resp. covariant) vectors are contravariant (resp. covariant) functors from a $GL(n)$-torsor to the fundamental representation of $GL(n)$. Similarly, tensors of higher degree are functors with values in other representations of $GL(n)$. However, some constructions of multilinear algebra are of "mixed" variance, which prevents them from being functors. In differential geometry, the components of a vector relative to a basis of the tangent bundle are covariant if they change with the same linear transformation as a change of basis. They are contravariant if they change by the inverse transformation. This is sometimes a source of confusion for two distinct but related reasons. The first is that vectors whose components are covariant (called covectors or 1-forms) actually pull back under smooth functions, meaning that the operation assigning the space of covectors to a smooth manifold is actually a contravariant functor. Likewise, vectors whose components are contravariant push forward under smooth mappings, so the operation assigning the space of (contravariant) vectors to a smooth manifold is a covariant functor.
Secondly, in the classical approach to differential geometry, it is not bases of the tangent bundle that are the most primitive object, but rather changes in the coordinate system. Vectors with contravariant components transform in the same way as changes in the coordinates (because these actually change oppositely to the induced change of basis). Likewise, vectors with covariant components transform in the opposite way as changes in the coordinates. See also Active and passive transformation Mixed tensor Two-point tensor, a generalization allowing indices to reference multiple vector bases Notes Citations References . . . . . . External links Invariance, Contravariance, and Covariance Tensors Differential geometry Riemannian geometry Vectors (mathematics and physics)
Langevin dynamics
In physics, Langevin dynamics is an approach to the mathematical modeling of the dynamics of molecular systems using the Langevin equation. It was originally developed by the French physicist Paul Langevin. The approach is characterized by the use of simplified models while accounting for omitted degrees of freedom by the use of stochastic differential equations. Langevin dynamics simulations are a kind of Monte Carlo simulation.

Overview
A real-world molecular system is unlikely to be present in vacuum. Jostling of solvent or air molecules causes friction, and the occasional high-velocity collision will perturb the system. Langevin dynamics attempts to extend molecular dynamics to allow for these effects. Also, Langevin dynamics allows temperature to be controlled as with a thermostat, thus approximating the canonical ensemble. Langevin dynamics mimics the viscous aspect of a solvent. It does not fully model an implicit solvent; specifically, the model accounts neither for electrostatic screening nor for the hydrophobic effect. For denser solvents, hydrodynamic interactions are not captured via Langevin dynamics.

For a system of N particles with masses M, with coordinates X = X(t) that constitute a time-dependent random variable, the resulting Langevin equation is
$M\,\ddot{X} = -\nabla U(X) - \gamma M\,\dot{X} + \sqrt{2 M \gamma k_B T}\, R(t),$
where U(X) is the particle interaction potential; $\nabla$ is the gradient operator such that $-\nabla U(X)$ is the force calculated from the particle interaction potentials; the dot is a time derivative such that $\dot{X}$ is the velocity and $\ddot{X}$ is the acceleration; $\gamma$ is the damping constant (units of reciprocal time), also known as the collision frequency; T is the temperature, $k_B$ is the Boltzmann constant; and R(t) is a delta-correlated stationary Gaussian process with zero mean, satisfying
$\langle R(t) \rangle = 0, \qquad \langle R(t)\, R(t') \rangle = \delta(t - t').$
Here, $\delta$ is the Dirac delta. If the main objective is to control temperature, care should be exercised to use a small damping constant $\gamma$. As $\gamma$ grows, it spans from the inertial all the way to the diffusive (Brownian) regime. The Langevin dynamics limit of non-inertia is commonly described as Brownian dynamics. Brownian dynamics can be considered as overdamped Langevin dynamics, i.e. Langevin dynamics where no average acceleration takes place. The Langevin equation can be reformulated as a Fokker–Planck equation that governs the probability distribution of the random variable X.

See also
Hamiltonian mechanics
Statistical mechanics
Implicit solvation
Stochastic differential equations
Langevin equation
Klein–Kramers equation

References

External links
Langevin Dynamics (LD) Simulation

Classical mechanics
Statistical mechanics
Dynamical systems
Symplectic geometry
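A minimal sketch of how the Langevin equation above can be integrated numerically for a single particle in one dimension, using a simple Euler–Maruyama discretization; the harmonic potential, the parameter values, and the discretization scheme are illustrative assumptions (production codes typically use more careful integrators such as BAOAB).

```python
import numpy as np

def langevin_trajectory(n_steps=50_000, dt=1e-3, m=1.0, gamma=1.0, kB_T=1.0):
    """Euler-Maruyama integration of m*a = -dU/dx - gamma*m*v + sqrt(2*m*gamma*kB_T)*R(t)."""
    dU = lambda x: x                      # harmonic potential U(x) = x**2 / 2 (illustrative choice)
    rng = np.random.default_rng(42)
    x, v = 0.0, 0.0
    xs = np.empty(n_steps)
    for i in range(n_steps):
        # Integrated white-noise force over one step has standard deviation sqrt(2*m*gamma*kB_T*dt).
        noise = rng.normal() * np.sqrt(2.0 * m * gamma * kB_T * dt)
        v += (-dU(x) - gamma * m * v) / m * dt + noise / m
        x += v * dt
        xs[i] = x
    return xs

xs = langevin_trajectory()
# For a harmonic well with stiffness k = 1, equilibrium positions should roughly satisfy <x^2> = kB_T / k.
print("sample <x^2> ~", np.var(xs[len(xs) // 2:]))
```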
Process engineering
Process engineering is the understanding and application of the fundamental principles and laws of nature that allow humans to transform raw material and energy into products that are useful to society, at an industrial level. By taking advantage of the driving forces of nature such as pressure, temperature and concentration gradients, as well as the law of conservation of mass, process engineers can develop methods to synthesize and purify large quantities of desired chemical products. Process engineering focuses on the design, operation, control, optimization and intensification of chemical, physical, and biological processes. Their work involves analyzing the chemical makeup of various ingredients and determining how they might react with one another. A process engineer can specialize in a number of areas, including the following: -Agriculture processing -Food and dairy production -Beer and whiskey production -Cosmetics production -Pharmaceutical production -Petrochemical manufacturing -Mineral processing -Printed circuit board production Overview Process engineering involves the utilization of multiple tools and methods. Depending on the exact nature of the system, processes need to be simulated and modeled using mathematics and computer science. Processes where phase change and phase equilibria are relevant require analysis using the principles and laws of thermodynamics to quantify changes in energy and efficiency. In contrast, processes that focus on the flow of material and energy as they approach equilibria are best analyzed using the disciplines of fluid mechanics and transport phenomena. Disciplines within the field of mechanics need to be applied in the presence of fluids or porous and dispersed media. Materials engineering principles also need to be applied, when relevant. Manufacturing in the field of process engineering involves an implementation of process synthesis steps. Regardless of the exact tools required, process engineering is then formatted through the use of a process flow diagram (PFD) where material flow paths, storage equipment (such as tanks and silos), transformations (such as distillation columns, receiver/head tanks, mixing, separations, pumping, etc.) and flowrates are specified, as well as a list of all pipes and conveyors and their contents, material properties such as density, viscosity, particle-size distribution, flowrates, pressures, temperatures, and materials of construction for the piping and unit operations. The process flow diagram is then used to develop a piping and instrumentation diagram (P&ID) which graphically displays the actual process occurring. P&ID are meant to be more complex and specific than a PFD. They represent a less muddled approach to the design. The P&ID is then used as a basis of design for developing the "system operation guide" or "functional design specification" which outlines the operation of the process. It guides the process through operation of machinery, safety in design, programming and effective communication between engineers. From the P&ID, a proposed layout (general arrangement) of the process can be shown from an overhead view (plot plan) and a side view (elevation), and other engineering disciplines are involved such as civil engineers for site work (earth moving), foundation design, concrete slab design work, structural steel to support the equipment, etc. 
All previous work is directed toward defining the scope of the project, then developing a cost estimate to get the design installed, and a schedule to communicate the timing needs for engineering, procurement, fabrication, installation, commissioning, startup, and ongoing production of the process. Depending on needed accuracy of the cost estimate and schedule that is required, several iterations of designs are generally provided to customers or stakeholders who feed back their requirements. The process engineer incorporates these additional instructions (scope revisions) into the overall design and additional cost estimates, and schedules are developed for funding approval. Following funding approval, the project is executed via project management. Principal areas of focus in process engineering Process engineering activities can be divided into the following disciplines: Process design: synthesis of energy recovery networks, synthesis of distillation systems (azeotropic), synthesis of reactor networks, hierarchical decomposition flowsheets, superstructure optimization, design multiproduct batch plants, design of the production reactors for the production of plutonium, design of nuclear submarines. Process control: model predictive control, controllability measures, robust control, nonlinear control, statistical process control, process monitoring, thermodynamics-based control, denoted by three essential items, a collection of measurements, method of taking measurements, and a system of controlling the desired measurement. Process operations: scheduling process networks, multiperiod planning and optimization, data reconciliation, real-time optimization, flexibility measures, fault diagnosis. Supporting tools: sequential modular simulation, equation-based process simulation, AI/expert systems, large-scale nonlinear programming (NLP), optimization of differential algebraic equations (DAEs), mixed-integer nonlinear programming (MINLP), global optimization, optimization under uncertainty, and quality function deployment (QFD). Process Economics: This includes using simulation software such as ASPEN, Super-Pro to find out the break even point, net present value, marginal sales, marginal cost, return on investment of the industrial plant after the analysis of the heat and mass transfer of the plant. Process Data Analytics: Applying data analytics and machine learning methods for process manufacturing problems. History of process engineering Various chemical techniques have been used in industrial processes since time immemorial. However, it wasn't until the advent of thermodynamics and the law of conservation of mass in the 1780s that process engineering was properly developed and implemented as its own discipline. The set of knowledge that is now known as process engineering was then forged out of trial and error throughout the industrial revolution. The term process, as it relates to industry and production, dates back to the 18th century. During this time period, demands for various products began to drastically increase, and process engineers were required to optimize the process in which these products were created. By 1980, the concept of process engineering emerged from the fact that chemical engineering techniques and practices were being used in a variety of industries. By this time, process engineering had been defined as "the set of knowledge necessary to design, analyze, develop, construct, and operate, in an optimal way, the processes in which the material changes". 
By the end of the 20th century, process engineering had expanded from chemical engineering-based technologies to other applications, including metallurgical engineering, agricultural engineering, and product engineering. See also Chemical process modeling Chemical technologist Industrial engineering Industrial process Low-gravity process engineering Materials science Modular process skid Process chemistry Process flowsheeting Process integration Systems engineering process References External links Advanced Process Engineering at Cranfield University (Cranfield, UK) Sargent Centre for Process Systems Engineering (Imperial) Process Systems Engineering at Cornell University (Ithaca, New York) Department of Process Engineering at Stellenbosch University Process Research and Intelligent Systems Modeling (PRISM) group at BYU Process Systems Engineering at CMU Process Systems Engineering Laboratory at RWTH Aachen The Process Systems Engineering Laboratory (MIT) Process Engineering Consulting at Canada Process engineering Engineering disciplines Chemical processes
Convection (heat transfer)
Convection (or convective heat transfer) is the transfer of heat from one place to another due to the movement of fluid. Although often discussed as a distinct method of heat transfer, convective heat transfer involves the combined processes of conduction (heat diffusion) and advection (heat transfer by bulk fluid flow). Convection is usually the dominant form of heat transfer in liquids and gases. Note that this definition of convection is only applicable in heat transfer and thermodynamic contexts. It should not be confused with the dynamic fluid phenomenon of convection, which is typically referred to as natural convection in thermodynamic contexts in order to distinguish the two.

Overview
Convection can be "forced" by movement of a fluid by means other than buoyancy forces (for example, a water pump in an automobile engine). Thermal expansion of fluids may also force convection. In other cases, natural buoyancy forces alone are entirely responsible for fluid motion when the fluid is heated, and this process is called "natural convection". An example is the draft in a chimney or around any fire. In natural convection, an increase in temperature produces a reduction in density, which in turn causes fluid motion due to pressures and forces when fluids of different densities are affected by gravity (or any g-force). For example, when water is heated on a stove, hot water from the bottom of the pan is displaced (or forced up) by the colder, denser liquid, which falls. After heating has stopped, mixing and conduction from this natural convection eventually result in a nearly homogeneous density, and even temperature. Without the presence of gravity (or conditions that cause a g-force of any type), natural convection does not occur, and only forced-convection modes operate. The convection heat transfer mode comprises two mechanisms. In addition to energy transfer due to specific molecular motion (diffusion), energy is transferred by bulk, or macroscopic, motion of the fluid. This motion is associated with the fact that, at any instant, large numbers of molecules are moving collectively or as aggregates. Such motion, in the presence of a temperature gradient, contributes to heat transfer. Because the molecules in the aggregate retain their random motion, the total heat transfer is then due to the superposition of energy transport by the random motion of the molecules and by the bulk motion of the fluid. It is customary to use the term convection when referring to this cumulative transport and the term advection when referring to the transport due to bulk fluid motion.

Types
Two types of convective heat transfer may be distinguished:
Free or natural convection: when fluid motion is caused by buoyancy forces that result from density variations due to variations of temperature in the fluid. In the absence of an internal source, when the fluid is in contact with a hot surface, its molecules separate and scatter, causing the fluid to be less dense. As a consequence, the warmer fluid is displaced upward while the cooler fluid gets denser and sinks. Thus, the hotter volume transfers heat towards the cooler volume of that fluid. Familiar examples are the upward flow of air due to a fire or hot object and the circulation of water in a pot that is heated from below.
Forced convection: when a fluid is forced to flow over the surface by an internal source such as fans, by stirring, and pumps, creating an artificially induced convection current. In many real-life applications (e.g.
heat losses at solar central receivers or cooling of photovoltaic panels), natural and forced convection occur at the same time (mixed convection). Internal and external flow can also classify convection. Internal flow occurs when a fluid is enclosed by a solid boundary, such as when flowing through a pipe. An external flow occurs when a fluid extends indefinitely without encountering a solid surface. Both of these types of convection, either natural or forced, can be internal or external because they are independent of each other. The bulk temperature, or the average fluid temperature, is a convenient reference point for evaluating properties related to convective heat transfer, particularly in applications related to flow in pipes and ducts. Further classification can be made depending on the smoothness and undulations of the solid surfaces. Not all surfaces are smooth, though a bulk of the available information deals with smooth surfaces. Wavy irregular surfaces are commonly encountered in heat transfer devices, which include solar collectors, regenerative heat exchangers, and underground energy storage systems. They have a significant role to play in the heat transfer processes in these applications. Since they bring in an added complexity due to the undulations in the surfaces, they need to be tackled with mathematical finesse through elegant simplification techniques. Also, they do affect the flow and heat transfer characteristics, thereby behaving differently from straight smooth surfaces. For a visual experience of natural convection, a glass filled with hot water and some red food dye may be placed inside a fish tank with cold, clear water. The convection currents of the red liquid may be seen to rise and fall in different regions, then eventually settle, illustrating the process as heat gradients are dissipated.

Newton's law of cooling
Convection-cooling is sometimes loosely assumed to be described by Newton's law of cooling. Newton's law states that the rate of heat loss of a body is proportional to the difference in temperatures between the body and its surroundings while under the effects of a breeze. The constant of proportionality is the heat transfer coefficient. The law applies when the coefficient is independent, or relatively independent, of the temperature difference between object and environment. In classical natural convective heat transfer, the heat transfer coefficient is dependent on the temperature. However, Newton's law does approximate reality when the temperature changes are relatively small, and for forced air and pumped liquid cooling, where the fluid velocity does not rise with increasing temperature difference.

Convective heat transfer
The basic relationship for heat transfer by convection is
$\dot{Q} = h A (T - T_f),$
where $\dot{Q}$ is the heat transferred per unit time, A is the area of the object, h is the heat transfer coefficient, T is the object's surface temperature, and $T_f$ is the fluid temperature. The convective heat transfer coefficient is dependent upon the physical properties of the fluid and the physical situation. Values of h have been measured and tabulated for commonly encountered fluids and flow situations.

See also
Conjugate convective heat transfer
Convection
Forced convection
Natural convection
Mixed convection
Heat transfer coefficient
Heat transfer enhancement
Heisler chart
Thermal conductivity
Convection–diffusion equation

References

Thermodynamics
Heat transfer
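As a quick worked example of the relationship $\dot{Q} = h A (T - T_f)$ given above; the numerical values of h, the area, and the temperatures are illustrative assumptions, not tabulated data.

```python
def convective_heat_rate(h, area, t_surface, t_fluid):
    """Newton's law of cooling form: heat transferred per unit time, Q_dot = h * A * (T_s - T_f)."""
    return h * area * (t_surface - t_fluid)

# Example: a 0.5 m^2 plate at 80 C in 20 C air, with an assumed forced-air
# heat transfer coefficient of 25 W/(m^2*K).
q_dot = convective_heat_rate(h=25.0, area=0.5, t_surface=80.0, t_fluid=20.0)
print(f"Convective heat loss = {q_dot:.0f} W")   # 25 * 0.5 * 60 = 750 W
```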
Andragogy
Andragogy refers to methods and principles used in adult education. The word comes from the Greek ἀνδρ- (andr-), meaning "adult male", and ἀγωγός (agogos), meaning "leader of". Therefore, andragogy literally means "leading men (adult males)", whereas "pedagogy" literally means "leading children". Definitions There are many different theories in the areas of learning, teaching and training. Andragogy commonly is defined as the art or science of teaching adults or helping adults learn. In contrast to pedagogy, or the teaching of children, andragogy is based on a humanistic conception of self-directed and autonomous learners where teachers are defined as facilitators of learning. Although Malcolm Knowles proposed andragogy as a theory, others posit that there is no single theory of adult learning or andragogy. In the literature where adult learning theory is often identified as a principle or an assumption, there are a variety of different approaches and theories that are also evolving in view of evolving higher education instruction, workplace training, new technology and online learning (Omoregie, 2021). Malcolm Knowles identified these adult learner characteristics related to the motivation of adult learning.   Need to know: Adults need to know the reason for learning something. Foundation: Experience (including error) provides the basis for learning activities. Self-concept: Adults need to be responsible for their decisions on education; involvement in the planning and evaluation of their instruction. Readiness: Adults are most interested in learning subjects having immediate relevance to their work and/or personal lives. Orientation: Adult learning is problem-centered rather than content-oriented. Motivation: Adults respond better to internal versus external motivators. Blaschke (2012) described Malcolm Knowles' 1973 theory as "self-directed" learning. The goals include helping learners develop the capacity for self-direction, supporting transformational learning and promoting "emancipatory learning and social action" (Blaschke, 2019, p. 76). Although Knowles' andragogy is a well-known theory in the English-speaking world, his theory has an ancillary role internationally. This is especially true in European countries where andragogy is a term used to refer to a field of systematic reflection. The acceptance of andragogy in European countries, according to St. Clair and Käpplinger (2021) is to accept andragogy as the "scientific study of learning in adults and the concomitant teaching approaches" (p. 485). Further, the definition of andragogy and its application to adult learning is more variable currently due to both the impact of globalization and the rapid expansion of adult online learning. History The term was originally coined by German educator Alexander Kapp in 1833. Andragogy was developed into a theory of adult education by Eugen Rosenstock-Huessy. It later became very popular in the US by the American educator Malcolm Knowles. Knowles asserted that andragogy (Greek: "man-leading") should be distinguished from the more commonly used term pedagogy (Greek: "child-leading"). Knowles collected ideas about a theory of adult education from the end of World War II until he was introduced to the term "androgogy". In 1966, Knowles met Dušan Savićević in Boston. Savićević was the one who shared the term andragogy with Knowles and explained how it was used in the European context. In 1967, Knowles made use of the term "andragogy" to explain his theory of adult education. 
Then after consulting with Merriam-Webster, he corrected the spelling of the term to "andragogy" and continued to make use of the term to explain his multiple ideas about adult learning. Knowles' theory can be stated with six assumptions related to the motivation of adult learning: Need to know: Adults need to know the reason for learning something. Foundation: Experience (including error) provides the basis for learning activities. Self-concept: Adults need to be responsible for their decisions on education; involvement in the planning and evaluation of their instruction. Readiness: Adults are most interested in learning subjects having immediate relevance to their work and/or personal lives. Orientation: Adult learning is problem-centered rather than content-oriented. Motivation: Adults respond better to internal versus external motivators. In most European countries, the Knowles discussion played at best, a marginal role. "Andragogy" was, from 1970 on, connected with emerging academic and professional institutions, publications, or programs, triggered by a similar growth of adult education in practice and theory as in the United States. "Andragogy" functioned here as a header for (places of) systematic reflections, parallel to other academic headers like "biology", "medicine", and "physics". Early examples of this use of andragogy are the Yugoslavian (scholarly) journal for adult education, named Andragogija in 1969, and the Yugoslavian Society for Andragogy; at Palacky University in Olomouc (Czech Republic) the Katedra sociologie a andragogiky (Sociology and Andragogy Department) was established in 1990. Also, Prague University has a Katedra Andragogiky (Andragogical Department); in 1993, Slovenia's Andragoski Center Republike Slovenije (Slovenian Republic Andragogy Center) was founded with the journal Andragoska Spoznanja; in 1995, Bamberg University (Germany) named a Lehrstuhl Andragogik (Androgogy Chair). On this formal level "above practice" and specific approaches, the term "andragogy" could be used relating to all types of theories, for reflection, analysis, training, in person-oriented programs, or human resource development. Principles Adult learning is based upon comprehension, organization and synthesis of knowledge rather than rote memory. Some scholars have proposed seven principles of adult learning: Adults must want to learn: They learn effectively only when they are free to direct their own learning and have a strong inner motivation to develop a new skill or acquire a particular type of knowledge, this sustains learning. Adults will learn only what they feel they need to learn – Adults are practical in their approach to learning; they want to know, "How is this going to help me right now? Is it relevant (content, connection, and application) and does it meet my targeted goals?" Adults learn by doing: Adolescents learn by doing, but adults learn through active practice and participation. This helps in integrating component skills into a coherent whole. Adult learning focuses on problem solving: Adolescents tend to learn skills sequentially. Adults tend to start with a problem and then work to find a solution. A meaningful engagement, such as posing and answering realistic questions and problems is necessary for deeper learning. This leads to more elaborate, longer lasting, and stronger representations of the knowledge (Craik & Lockhart, 1972). Experience affects adult learning: Adults have more experience than adolescents. 
This can be an asset and a liability, if prior knowledge is inaccurate, incomplete, or immature, it can interfere with or distort the integration of incoming information (Clement, 1982; National Research Council, 2000). Adults learn best in an informal situation: Adolescents have to follow a curriculum. Often, adults learn by taking responsibility for the value and need of content they have to understand and the particular goals it will achieve. Being in an inviting, collaborative and networking environment as an active participant in the learning process makes it efficient. Adults want guidance and consideration as equal partners in the process: Adults want information that will help them improve their situation. They do not want to be told what to do and they evaluate what helps and what doesn't. They want to choose options based on their individual needs and the meaningful impact a learning engagement could provide. Socialization is more important among adults. Academic discipline In the field of adult education during recent decades, a process of growth and differentiation emerged as a scholarly and scientific approach, andragogy. It refers to the academic discipline(s) within university programs that focus on the education of adults; andragogy exists today worldwide. The term refers to a new type of education which was not qualified by missions and visions, but by academic learning including: reflection, critique, and historical analyses. Dušan Savićević, who provided Knowles with the term andragogy, explicitly claims andragogy as a discipline, the subject of which is the study of education and learning of adults in all its forms of expression' (Savicevic, 1999, p. 97, similarly Henschke, 2003,), Reischmann, 2003. Recent research and the COVID 19 pandemic have expanded andragogy into the online world internationally, as evidenced by country and international organizations that foster the development of adult learning, research and collaboration in educating adults. New and expanding online instruction is fostered by national organizations, literacy organizations, academic journals and higher education institutions that are helping adults to achieve learning and skills that will contribute to individual economic improvement. New learning resources and approaches are identified, such as finding that using collaborative tools like a wiki can encourage learners to become more self-directed, thereby enriching the classroom environment. Andragogy gives scope to self-directed learners and helps in designing and delivering the focused instructions. The methods used by andragogy can be used in different educational environments (e.g. adolescent education). Internationally there are many academic journals, adult education organizations (including government agencies) and centers for adult learning housed in a plethora of international colleges and universities that are working to promote the field of adult learning, as well as adult learning opportunities in training, traditional classes and in online learning. In academic fields, andrologists are those who practice and specialize in the field of andragogy. Andragologists have received a doctoral degree from an accredited university in Education (EdD) or a Philosophy (PhD) and focused their dissertation utilizing andragogy as a main component of their theoretical framework. 
Differences in learning: The Pedagogy, andragogy and heutagogy continuum
In the 20th century, adult educators began to challenge the application of pedagogical theory and teacher-centered approaches to the teaching of adults. Unlike children, adult learners are not simply recipients of transmitted knowledge. Rather, the adult learner is an active participant in their learning. Adult students also are asked to actively plan their learning process, including identifying learning objectives and how they will be achieved. Knowles (1980) summarized the key characteristics of andragogy in this model: 1) independence or self-directedness, 2) using past experiences to construct learning, 3) association with readiness to learn, and 4) a shift in educational perspective from subject-centered to performance-centered. A new educational strategy has evolved in response to globalization that identifies learners as self-determined, especially in higher education and workplace settings: heutagogy, a process where students learn on their own with some guidance from the teacher. The motivation to learn comes from the students' interest in not only performing, but being recognized for their accomplishment (Akiyildiz, 2019). In addition, in heutagogy, learning is learner-centric: the decisions relating to the learning process are managed by the student. Further, the student determines whether or not the learning objectives are met. Pedagogy, andragogy, and heutagogy thus differ chiefly in how much of the learning process (its goals, methods, and assessment) is directed by the teacher rather than determined by the learner.

Critique
There is no consensus internationally on whether andragogy is a learning theory or a set of principles, characteristics or assumptions of adult learning. Knowles himself changed his position on whether andragogy applied only to adults and came to believe that "pedagogy-andragogy represents a continuum ranging from teacher-directed to student-directed learning and that both approaches are appropriate with children and adults, depending on the situation." Hanson (1996) argues that the difference in learning is not related to the age and stage of one's life, but instead related to individual characteristics and the differences in "context, culture and power" within different educational settings. In another critique of Knowles' work, Knowles was not able to use one of his principles (self-concept) with adult learners to the extent that he describes in his practices. In one course, Knowles appears to allow "near total freedom in learner determination of objectives" but still "intended" the students to choose from a list of 18 objectives on the syllabus. Self-concept can be critiqued not just from the instructor's point of view, but also from the student's point of view. Not all adult learners will know exactly what they want to learn in a course and may seek a more structured outline from an instructor. An instructor cannot assume that an adult will desire self-directed learning in every situation. Kidd (1978) goes further by claiming that principles of learning have to be applied to lifelong development. He suggested that building a theory on adult learning would be meaningless, as there is no real basis for it. Jarvis even implies that andragogy would be more the result of an ideology than a scientific contribution to the comprehension of the learning processes. Knowles himself mentions that andragogy is a "model of assumptions about learning or a conceptual framework that serves as a basis for an emergent theory."
There appears to be a lack of research on whether this framework of teaching and learning principles is more relevant to adult learners or if it is just a set of good practices that could be used for both children and adult learners. The way adults learn is different from the pedagogical approach used to foster learning in K-12 settings. These learning differences are key and can be used to show that the six characteristics/principles of andragogy remain applicable when designing teaching and learning materials, in English as a Foreign Language (EFL), for example. See also References Further reading Loeng, S. (2012). Eugen Rosenstock-Huessy – an andragogical pioneer. Studies in Continuing Education, Reischmann, Jost (2005): Andragogy. In: English, Leona (ed): International Encyclopedia of Adult Education. London: Palgrave Macmillan. S. 58–63. (.pdf-download) Smith, M. K. (1996; 1999) 'Andragogy', in the Encyclopedia of Informal Education. Andragogy and other Learning Theories Philosophy of education
Data and information visualization
Data and information visualization (data viz/vis or info viz/vis) is the practice of designing and creating easy-to-communicate and easy-to-understand graphic or visual representations of a large amount of complex quantitative and qualitative data and information with the help of static, dynamic or interactive visual items. Typically based on data and information collected from a certain domain of expertise, these visualizations are intended for a broader audience to help them visually explore and discover, quickly understand, interpret and gain important insights into otherwise difficult-to-identify structures, relationships, correlations, local and global patterns, trends, variations, constancy, clusters, outliers and unusual groupings within data (exploratory visualization). When intended for the general public (mass communication) to convey a concise version of known, specific information in a clear and engaging manner (presentational or explanatory visualization), it is typically called information graphics. Data visualization is concerned with visually presenting sets of primarily quantitative raw data in a schematic form. The visual formats used in data visualization include tables, charts and graphs (e.g. pie charts, bar charts, line charts, area charts, cone charts, pyramid charts, donut charts, histograms, spectrograms, cohort charts, waterfall charts, funnel charts, bullet graphs, etc.), diagrams, plots (e.g. scatter plots, distribution plots, box-and-whisker plots), geospatial maps (such as proportional symbol maps, choropleth maps, isopleth maps and heat maps), figures, correlation matrices, percentage gauges, etc., which sometimes can be combined in a dashboard. Information visualization, on the other hand, deals with multiple, large-scale and complicated datasets which contain quantitative (numerical) data as well as qualitative (non-numerical, i.e. verbal or graphical) and primarily abstract information and its goal is to add value to raw data, improve the viewers' comprehension, reinforce their cognition and help them derive insights and make decisions as they navigate and interact with the computer-supported graphical display. Visual tools used in information visualization include maps (such as tree maps), animations, infographics, Sankey diagrams, flow charts, network diagrams, semantic networks, entity-relationship diagrams, venn diagrams, timelines, mind maps, etc. Emerging technologies like virtual, augmented and mixed reality have the potential to make information visualization more immersive, intuitive, interactive and easily manipulable and thus enhance the user's visual perception and cognition. In data and information visualization, the goal is to graphically present and explore abstract, non-physical and non-spatial data collected from databases, information systems, file systems, documents, business and financial data, etc. (presentational and exploratory visualization) which is different from the field of scientific visualization, where the goal is to render realistic images based on physical and spatial scientific data to confirm or reject hypotheses (confirmatory visualization). Effective data visualization is properly sourced, contextualized, simple and uncluttered. The underlying data is accurate and up-to-date to make sure that insights are reliable. Graphical items are well-chosen for the given datasets and aesthetically appealing, with shapes, colors and other visual elements used deliberately in a meaningful and non-distracting manner. 
The visuals are accompanied by supporting texts (labels and titles). These verbal and graphical components complement each other to ensure clear, quick and memorable understanding. Effective information visualization is aware of the needs and concerns and the level of expertise of the target audience, deliberately guiding them to the intended conclusion. Such effective visualization can be used not only for conveying specialized, complex, big data-driven ideas to a wider group of non-technical audience in a visually appealing, engaging and accessible manner, but also to domain experts and executives for making decisions, monitoring performance, generating new ideas and stimulating research. In addition, data scientists, data analysts and data mining specialists use data visualization to check the quality of data, find errors, unusual gaps and missing values in data, clean data, explore the structures and features of data and assess outputs of data-driven models. In business, data and information visualization can constitute a part of data storytelling, where they are paired with a coherent narrative structure or storyline to contextualize the analyzed data and communicate the insights gained from analyzing the data clearly and memorably with the goal of convincing the audience into making a decision or taking an action in order to create business value. This can be contrasted with the field of statistical graphics, where complex statistical data are communicated graphically in an accurate and precise manner among researchers and analysts with statistical expertise to help them perform exploratory data analysis or to convey the results of such analyses, where visual appeal, capturing attention to a certain issue and storytelling are not as important. The field of data and information visualization is of interdisciplinary nature as it incorporates principles found in the disciplines of descriptive statistics (as early as the 18th century), visual communication, graphic design, cognitive science and, more recently, interactive computer graphics and human-computer interaction. Since effective visualization requires design skills, statistical skills and computing skills, it is argued by authors such as Gershon and Page that it is both an art and a science. The neighboring field of visual analytics marries statistical data analysis, data and information visualization and human analytical reasoning through interactive visual interfaces to help human users reach conclusions, gain actionable insights and make informed decisions which are otherwise difficult for computers to do. Research into how people read and misread various types of visualizations is helping to determine what types and features of visualizations are most understandable and effective in conveying information. On the other hand, unintentionally poor or intentionally misleading and deceptive visualizations (misinformative visualization) can function as powerful tools which disseminate misinformation, manipulate public perception and divert public opinion toward a certain agenda. Thus data visualization literacy has become an important component of data and information literacy in the information age akin to the roles played by textual, mathematical and visual literacy in the past. Overview The field of data and information visualization has emerged "from research in human–computer interaction, computer science, graphics, visual design, psychology, and business methods. 
It is increasingly applied as a critical component in scientific research, digital libraries, data mining, financial data analysis, market studies, manufacturing production control, and drug discovery". Data and information visualization presumes that "visual representations and interaction techniques take advantage of the human eye's broad bandwidth pathway into the mind to allow users to see, explore, and understand large amounts of information at once. Information visualization focused on the creation of approaches for conveying abstract information in intuitive ways." Data analysis is an indispensable part of all applied research and problem solving in industry. The most fundamental data analysis approaches are visualization (histograms, scatter plots, surface plots, tree maps, parallel coordinate plots, etc.), statistics (hypothesis test, regression, PCA, etc.), data mining (association mining, etc.), and machine learning methods (clustering, classification, decision trees, etc.). Among these approaches, information visualization, or visual data analysis, is the most reliant on the cognitive skills of human analysts, and allows the discovery of unstructured actionable insights that are limited only by human imagination and creativity. The analyst does not have to learn any sophisticated methods to be able to interpret the visualizations of the data. Information visualization is also a hypothesis generation scheme, which can be, and is typically followed by more analytical or formal analysis, such as statistical hypothesis testing. To communicate information clearly and efficiently, data visualization uses statistical graphics, plots, information graphics and other tools. Numerical data may be encoded using dots, lines, or bars, to visually communicate a quantitative message. Effective visualization helps users analyze and reason about data and evidence. It makes complex data more accessible, understandable, and usable, but can also be reductive. Users may have particular analytical tasks, such as making comparisons or understanding causality, and the design principle of the graphic (i.e., showing comparisons or showing causality) follows the task. Tables are generally used where users will look up a specific measurement, while charts of various types are used to show patterns or relationships in the data for one or more variables. Data visualization refers to the techniques used to communicate data or information by encoding it as visual objects (e.g., points, lines, or bars) contained in graphics. The goal is to communicate information clearly and efficiently to users. It is one of the steps in data analysis or data science. According to Vitaly Friedman (2008) the "main goal of data visualization is to communicate information clearly and effectively through graphical means. It doesn't mean that data visualization needs to look boring to be functional or extremely sophisticated to look beautiful. To convey ideas effectively, both aesthetic form and functionality need to go hand in hand, providing insights into a rather sparse and complex data set by communicating its key aspects in a more intuitive way. Yet designers often fail to achieve a balance between form and function, creating gorgeous data visualizations which fail to serve their main purpose — to communicate information". Indeed, Fernanda Viegas and Martin M. Wattenberg suggested that an ideal visualization should not only communicate clearly, but stimulate viewer engagement and attention. 
Data visualization is closely related to information graphics, information visualization, scientific visualization, exploratory data analysis and statistical graphics. In the new millennium, data visualization has become an active area of research, teaching and development. According to Post et al. (2002), it has united scientific and information visualization. In the commercial environment data visualization is often referred to as dashboards. Infographics are another very common form of data visualization. Principles Characteristics of effective graphical displays Edward Tufte has explained that users of information displays are executing particular analytical tasks such as making comparisons. The design principle of the information graphic should support the analytical task. As William Cleveland and Robert McGill show, different graphical elements accomplish this more or less effectively. For example, dot plots and bar charts outperform pie charts. In his 1983 book The Visual Display of Quantitative Information, Edward Tufte defines 'graphical displays' and principles for effective graphical display in the following passage: "Excellence in statistical graphics consists of complex ideas communicated with clarity, precision, and efficiency. Graphical displays should: show the data induce the viewer to think about the substance rather than about methodology, graphic design, the technology of graphic production, or something else avoid distorting what the data has to say present many numbers in a small space make large data sets coherent encourage the eye to compare different pieces of data reveal the data at several levels of detail, from a broad overview to the fine structure serve a reasonably clear purpose: description, exploration, tabulation, or decoration be closely integrated with the statistical and verbal descriptions of a data set. Graphics reveal data. Indeed, graphics can be more precise and revealing than conventional statistical computations." For example, the Minard diagram shows the losses suffered by Napoleon's army in the 1812–1813 period. Six variables are plotted: the size of the army, its location on a two-dimensional surface (x and y), time, the direction of movement, and temperature. The line width illustrates a comparison (size of the army at points in time), while the temperature axis suggests a cause of the change in army size. This multivariate display on a two-dimensional surface tells a story that can be grasped immediately while identifying the source data to build credibility. Tufte wrote in 1983 that: "It may well be the best statistical graphic ever drawn." Not applying these principles may result in misleading graphs, distorting the message, or supporting an erroneous conclusion. According to Tufte, chartjunk refers to the extraneous interior decoration of the graphic that does not enhance the message or gratuitous three-dimensional or perspective effects. Needlessly separating the explanatory key from the image itself, requiring the eye to travel back and forth from the image to the key, is a form of "administrative debris." The ratio of "data to ink" should be maximized, erasing non-data ink where feasible. The Congressional Budget Office summarized several best practices for graphical displays in a June 2014 presentation. These included: a) Knowing your audience; b) Designing graphics that can stand alone outside the report's context; and c) Designing graphics that communicate the key messages in the report. 
Quantitative messages Author Stephen Few described eight types of quantitative messages that users may attempt to understand or communicate from a set of data and the associated graphs used to help communicate the message: Time-series: A single variable is captured over a period of time, such as the unemployment rate or temperature measures over a 10-year period. A line chart may be used to demonstrate the trend over time. Ranking: Categorical subdivisions are ranked in ascending or descending order, such as a ranking of sales performance (the measure) by sales persons (the category, with each sales person a categorical subdivision) during a single period. A bar chart may be used to show the comparison across the sales persons. Part-to-whole: Categorical subdivisions are measured as a ratio to the whole (i.e., a percentage out of 100%). A pie chart or bar chart can show the comparison of ratios, such as the market share represented by competitors in a market. Deviation: Categorical subdivisions are compared against a reference, such as a comparison of actual vs. budget expenses for several departments of a business for a given time period. A bar chart can show comparison of the actual versus the reference amount. Frequency distribution: Shows the number of observations of a particular variable for given interval, such as the number of years in which the stock market return is between intervals such as 0–10%, 11–20%, etc. A histogram, a type of bar chart, may be used for this analysis. A boxplot helps visualize key statistics about the distribution, such as median, quartiles, outliers, etc. Correlation: Comparison between observations represented by two variables (X,Y) to determine if they tend to move in the same or opposite directions. For example, plotting unemployment (X) and inflation (Y) for a sample of months. A scatter plot is typically used for this message. Nominal comparison: Comparing categorical subdivisions in no particular order, such as the sales volume by product code. A bar chart may be used for this comparison. Geographic or geospatial: Comparison of a variable across a map or layout, such as the unemployment rate by state or the number of persons on the various floors of a building. A cartogram is a typical graphic used. Analysts reviewing a set of data may consider whether some or all of the messages and graphic types above are applicable to their task and audience. The process of trial and error to identify meaningful relationships and messages in the data is part of exploratory data analysis. Visual perception and data visualization A human can distinguish differences in line length, shape, orientation, distances, and color (hue) readily without significant processing effort; these are referred to as "pre-attentive attributes". For example, it may require significant time and effort ("attentive processing") to identify the number of times the digit "5" appears in a series of numbers; but if that digit is different in size, orientation, or color, instances of the digit can be noted quickly through pre-attentive processing. Compelling graphics take advantage of pre-attentive processing and attributes and the relative strength of these attributes. For example, since humans can more easily process differences in line length than surface area, it may be more effective to use a bar chart (which takes advantage of line length to show comparison) rather than pie charts (which use surface area to show comparison). 
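The pairings listed above (time-series with line charts, rankings with bar charts, frequency distributions with histograms, correlation with scatter plots) can be sketched in a few lines of MATLAB/Octave; the data below are invented placeholders, and the randn-based series simply stand in for "some measured variable".

% Four of Few's message types, each with its customary chart form.
t = 1:24; rate = 5 + cumsum(0.1*randn(1,24));    % time-series (simulated)
sales = [9 7 12 5];                              % ranking across four sellers
x = randn(1,200); y = 0.8*x + 0.3*randn(1,200);  % correlated pair
subplot(2,2,1); plot(t, rate);              title('Time-series: line chart');
subplot(2,2,2); bar(sort(sales,'descend')); title('Ranking: bar chart');
subplot(2,2,3); hist(randn(1,500), 20);     title('Distribution: histogram');
subplot(2,2,4); scatter(x, y, 10);          title('Correlation: scatter plot');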
Human perception/cognition and data visualization Almost all data visualizations are created for human consumption. Knowledge of human perception and cognition is necessary when designing intuitive visualizations. Cognition refers to processes in human beings like perception, attention, learning, memory, thought, concept formation, reading, and problem solving. Human visual processing is efficient in detecting changes and making comparisons between quantities, sizes, shapes and variations in lightness. When properties of symbolic data are mapped to visual properties, humans can browse through large amounts of data efficiently. It is estimated that 2/3 of the brain's neurons can be involved in visual processing. Proper visualization provides a different approach to show potential connections, relationships, etc. which are not as obvious in non-visualized quantitative data. Visualization can become a means of data exploration. Studies have shown individuals used on average 19% less cognitive resources, and 4.5% better able to recall details when comparing data visualization with text. History The modern study of visualization started with computer graphics, which "has from its beginning been used to study scientific problems. However, in its early days the lack of graphics power often limited its usefulness. The recent emphasis on visualization started in 1987 with the special issue of Computer Graphics on Visualization in Scientific Computing. Since then there have been several conferences and workshops, co-sponsored by the IEEE Computer Society and ACM SIGGRAPH". They have been devoted to the general topics of data visualization, information visualization and scientific visualization, and more specific areas such as volume visualization. In 1786, William Playfair published the first presentation graphics. There is no comprehensive 'history' of data visualization. There are no accounts that span the entire development of visual thinking and the visual representation of data, and which collate the contributions of disparate disciplines. Michael Friendly and Daniel J Denis of York University are engaged in a project that attempts to provide a comprehensive history of visualization. Contrary to general belief, data visualization is not a modern development. Since prehistory, stellar data, or information such as location of stars were visualized on the walls of caves (such as those found in Lascaux Cave in Southern France) since the Pleistocene era. Physical artefacts such as Mesopotamian clay tokens (5500 BC), Inca quipus (2600 BC) and Marshall Islands stick charts (n.d.) can also be considered as visualizing quantitative information. The first documented data visualization can be tracked back to 1160 B.C. with Turin Papyrus Map which accurately illustrates the distribution of geological resources and provides information about quarrying of those resources. Such maps can be categorized as thematic cartography, which is a type of data visualization that presents and communicates specific data and information through a geographical illustration designed to show a particular theme connected with a specific geographic area. Earliest documented forms of data visualization were various thematic maps from different cultures and ideograms and hieroglyphs that provided and allowed interpretation of information illustrated. For example, Linear B tablets of Mycenae provided a visualization of information regarding Late Bronze Age era trades in the Mediterranean. 
The idea of coordinates was used by ancient Egyptian surveyors in laying out towns, earthly and heavenly positions were located by something akin to latitude and longitude at least by 200 BC, and the map projection of a spherical Earth into latitude and longitude by Claudius Ptolemy [–] in Alexandria would serve as reference standards until the 14th century. The invention of paper and parchment allowed further development of visualizations throughout history. Figure shows a graph from the 10th or possibly 11th century that is intended to be an illustration of the planetary movement, used in an appendix of a textbook in monastery schools. The graph apparently was meant to represent a plot of the inclinations of the planetary orbits as a function of the time. For this purpose, the zone of the zodiac was represented on a plane with a horizontal line divided into thirty parts as the time or longitudinal axis. The vertical axis designates the width of the zodiac. The horizontal scale appears to have been chosen for each planet individually for the periods cannot be reconciled. The accompanying text refers only to the amplitudes. The curves are apparently not related in time. By the 16th century, techniques and instruments for precise observation and measurement of physical quantities, and geographic and celestial position were well-developed (for example, a "wall quadrant" constructed by Tycho Brahe [1546–1601], covering an entire wall in his observatory). Particularly important were the development of triangulation and other methods to determine mapping locations accurately. Very early, the measure of time led scholars to develop innovative way of visualizing the data (e.g. Lorenz Codomann in 1596, Johannes Temporarius in 1596). French philosopher and mathematician René Descartes and Pierre de Fermat developed analytic geometry and two-dimensional coordinate system which heavily influenced the practical methods of displaying and calculating values. Fermat and Blaise Pascal's work on statistics and probability theory laid the groundwork for what we now conceptualize as data. According to the Interaction Design Foundation, these developments allowed and helped William Playfair, who saw potential for graphical communication of quantitative data, to generate and develop graphical methods of statistics. In the second half of the 20th century, Jacques Bertin used quantitative graphs to represent information "intuitively, clearly, accurately, and efficiently". John Tukey and Edward Tufte pushed the bounds of data visualization; Tukey with his new statistical approach of exploratory data analysis and Tufte with his book "The Visual Display of Quantitative Information" paved the way for refining data visualization techniques for more than statisticians. With the progression of technology came the progression of data visualization; starting with hand-drawn visualizations and evolving into more technical applications – including interactive designs leading to software visualization. Programs like SAS, SOFA, R, Minitab, Cornerstone and more allow for data visualization in the field of statistics. Other data visualization applications, more focused and unique to individuals, programming languages such as D3, Python and JavaScript help to make the visualization of quantitative data a possibility. 
Private schools have also developed programs to meet the demand for learning data visualization and associated programming libraries, including free programs like The Data Incubator or paid programs like General Assembly. Beginning with the symposium "Data to Discovery" in 2013, ArtCenter College of Design, Caltech and JPL in Pasadena have run an annual program on interactive data visualization. The program asks: How can interactive data visualization help scientists and engineers explore their data more effectively? How can computing, design, and design thinking help maximize research results? What methodologies are most effective for leveraging knowledge from these fields? By encoding relational information with appropriate visual and interactive characteristics to help interrogate, and ultimately gain new insight into data, the program develops new interdisciplinary approaches to complex science problems, combining design thinking and the latest methods from computing, user-centered design, interaction design and 3D graphics. Terminology Data visualization involves specific terminology, some of which is derived from statistics. For example, author Stephen Few defines two types of data, which are used in combination to support a meaningful analysis or visualization: Categorical: Represent groups of objects with a particular characteristic. Categorical variables can either be nominal or ordinal. Nominal variables for example gender have no order between them and are thus nominal. Ordinal variables are categories with an order, for sample recording the age group someone falls into. Quantitative: Represent measurements, such as the height of a person or the temperature of an environment. Quantitative variables can either be continuous or discrete. Continuous variables capture the idea that measurements can always be made more precisely. While discrete variables have only a finite number of possibilities, such as a count of some outcomes or an age measured in whole years. The distinction between quantitative and categorical variables is important because the two types require different methods of visualization. Two primary types of information displays are tables and graphs. A table contains quantitative data organized into rows and columns with categorical labels. It is primarily used to look up specific values. In the example above, the table might have categorical column labels representing the name (a qualitative variable) and age (a quantitative variable), with each row of data representing one person (the sampled experimental unit or category subdivision). A graph is primarily used to show relationships among data and portrays values encoded as visual objects (e.g., lines, bars, or points). Numerical values are displayed within an area delineated by one or more axes. These axes provide scales (quantitative and categorical) used to label and assign values to the visual objects. Many graphs are also referred to as charts. Eppler and Lengler have developed the "Periodic Table of Visualization Methods," an interactive chart displaying various data visualization methods. It includes six types of data visualization methods: data, information, concept, strategy, metaphor and compound. In "Visualization Analysis and Design" Tamara Munzner writes "Computer-based visualization systems provide visual representations of datasets designed to help people carry out tasks more effectively." 
Munzner argues that visualization "is suitable when there is a need to augment human capabilities rather than replace people with computational decision-making methods."

Techniques

Other techniques
Cartogram
Cladogram (phylogeny)
Concept Mapping
Dendrogram (classification)
Information visualization reference model
Grand tour
Graph drawing
Heatmap
Hyperbolic tree
Multidimensional scaling
Parallel coordinates
Problem solving environment
Treemapping

Interactivity
Interactive data visualization enables direct actions on a graphical plot to change elements and link between multiple plots. Interactive data visualization has been a pursuit of statisticians since the late 1960s. Examples of the developments can be found on the American Statistical Association video lending library. Common interactions include:
Brushing: works by using the mouse to control a paintbrush, directly changing the color or glyph of elements of a plot. The paintbrush is sometimes a pointer and sometimes works by drawing an outline of sorts around points; the outline is sometimes irregularly shaped, like a lasso. Brushing is most commonly used when multiple plots are visible and some linking mechanism exists between the plots. There are several different conceptual models for brushing and a number of common linking mechanisms. Brushing scatterplots can be a transient operation in which points in the active plot retain their new characteristics only while they are enclosed or intersected by the brush, or it can be a persistent operation, so that points retain their new appearance after the brush has been moved away. Transient brushing is usually chosen for linked brushing, as just described.
Painting: Persistent brushing is useful when we want to group the points into clusters and then proceed to use other operations, such as the tour, to compare the groups. It is becoming common terminology to call the persistent operation painting.
Identification: which could also be called labeling or label brushing, is another plot manipulation that can be linked. Bringing the cursor near a point or edge in a scatterplot, or a bar in a barchart, causes a label to appear that identifies the plot element. It is widely available in many interactive graphics, and is sometimes called mouseover.
Scaling: maps the data onto the window, and changes in the area of the mapping function help us learn different things from the same plot. Scaling is commonly used to zoom in on crowded regions of a scatterplot, and it can also be used to change the aspect ratio of a plot, to reveal different features of the data.
Linking: connects elements selected in one plot with elements in another plot. The simplest kind of linking is one-to-one, where both plots show different projections of the same data, and a point in one plot corresponds to exactly one point in the other. When using area plots, brushing any part of an area has the same effect as brushing it all and is equivalent to selecting all cases in the corresponding category. Even when some plot elements represent more than one case, the underlying linking rule still links one case in one plot to the same case in other plots. Linking can also be by categorical variable, such as by a subject id, so that all data values corresponding to that subject are highlighted in all the visible plots.

Other perspectives
There are different approaches to the scope of data visualization. One common focus is on information presentation, such as Friedman (2008).
Friendly (2008) presumes two main parts of data visualization: statistical graphics, and thematic cartography. In this line the "Data Visualization: Modern Approaches" (2007) article gives an overview of seven subjects of data visualization: Articles & resources Displaying connections Displaying data Displaying news Displaying websites Mind maps Tools and services All these subjects are closely related to graphic design and information representation. On the other hand, from a computer science perspective, Frits H. Post in 2002 categorized the field into sub-fields: Information visualization Interaction techniques and architectures Modelling techniques Multiresolution methods Visualization algorithms and techniques Volume visualization Within The Harvard Business Review, Scott Berinato developed a framework to approach data visualisation. To start thinking visually, users must consider two questions; 1) What you have and 2) what you're doing. The first step is identifying what data you want visualised. It is data-driven like profit over the past ten years or a conceptual idea like how a specific organisation is structured. Once this question is answered one can then focus on whether they are trying to communicate information (declarative visualisation) or trying to figure something out (exploratory visualisation). Scott Berinato combines these questions to give four types of visual communication that each have their own goals. These four types of visual communication are as follows; idea illustration (conceptual & declarative). Used to teach, explain and/or simply concepts. For example, organisation charts and decision trees. idea generation (conceptual & exploratory). Used to discover, innovate and solve problems. For example, a whiteboard after a brainstorming session. visual discovery (data-driven & exploratory). Used to spot trends and make sense of data. This type of visual is more common with large and complex data where the dataset is somewhat unknown and the task is open-ended. everyday data-visualisation (data-driven & declarative). The most common and simple type of visualisation used for affirming and setting context. For example, a line graph of GDP over time. Applications Data and information visualization insights are being applied in areas such as: Scientific research Digital libraries Data mining Information graphics Financial data analysis Health care Market studies Manufacturing production control Crime mapping eGovernance and Policy Modeling Digital Humanities Data Art Organization Notable academic and industry laboratories in the field are: Adobe Research IBM Research Google Research Microsoft Research Panopticon Software Scientific Computing and Imaging Institute Tableau Software University of Maryland Human-Computer Interaction Lab Conferences in this field, ranked by significance in data visualization research, are: IEEE Visualization: An annual international conference on scientific visualization, information visualization, and visual analytics. Conference is held in October. ACM SIGGRAPH: An annual international conference on computer graphics, convened by the ACM SIGGRAPH organization. Conference dates vary. Conference on Human Factors in Computing Systems (CHI): An annual international conference on human–computer interaction, hosted by ACM SIGCHI. Conference is usually held in April or May. Eurographics: An annual Europe-wide computer graphics conference, held by the European Association for Computer Graphics. Conference is usually held in April or May. 
For further examples, see: :Category:Computer graphics organizations Data presentation architecture Data presentation architecture (DPA) is a skill-set that seeks to identify, locate, manipulate, format and present data in such a way as to optimally communicate meaning and proper knowledge. Historically, the term data presentation architecture is attributed to Kelly Lautt: "Data Presentation Architecture (DPA) is a rarely applied skill set critical for the success and value of Business Intelligence. Data presentation architecture weds the science of numbers, data and statistics in discovering valuable information from data and making it usable, relevant and actionable with the arts of data visualization, communications, organizational psychology and change management in order to provide business intelligence solutions with the data scope, delivery timing, format and visualizations that will most effectively support and drive operational, tactical and strategic behaviour toward understood business (or organizational) goals. DPA is neither an IT nor a business skill set but exists as a separate field of expertise. Often confused with data visualization, data presentation architecture is a much broader skill set that includes determining what data on what schedule and in what exact format is to be presented, not just the best way to present data that has already been chosen. Data visualization skills are one element of DPA." Objectives DPA has two main objectives: To use data to provide knowledge in the most efficient manner possible (minimize noise, complexity, and unnecessary data or detail given each audience's needs and roles) To use data to provide knowledge in the most effective manner possible (provide relevant, timely and complete data to each audience member in a clear and understandable manner that conveys important meaning, is actionable and can affect understanding, behavior and decisions) Scope With the above objectives in mind, the actual work of data presentation architecture consists of: Creating effective delivery mechanisms for each audience member depending on their role, tasks, locations and access to technology Defining important meaning (relevant knowledge) that is needed by each audience member in each context Determining the required periodicity of data updates (the currency of the data) Determining the right timing for data presentation (when and how often the user needs to see the data) Finding the right data (subject area, historical reach, breadth, level of detail, etc.) Utilizing appropriate analysis, grouping, visualization, and other presentation formats Related fields DPA work shares commonalities with several other fields, including: Business analysis in determining business goals, collecting requirements, mapping processes. Business process improvement in that its goal is to improve and streamline actions and decisions in furtherance of business goals Data visualization in that it uses well-established theories of visualization to add or highlight meaning or importance in data presentation. Digital humanities explores more nuanced ways of visualising complex data. Information architecture, but information architecture's focus is on unstructured data and therefore excludes both analysis (in the statistical/data sense) and direct transformation of the actual content (data, for DPA) into new entities and combinations. 
HCI and interaction design, since many of the principles in how to design interactive data visualisation have been developed cross-disciplinary with HCI. Visual journalism and data-driven journalism or data journalism: Visual journalism is concerned with all types of graphic facilitation of the telling of news stories, and data-driven and data journalism are not necessarily told with data visualisation. Nevertheless, the field of journalism is at the forefront in developing new data visualisations to communicate data. Graphic design, conveying information through styling, typography, position, and other aesthetic concerns. See also Analytics Big data Climate change art Color coding in data visualization Computational visualistics Information art Data management Data physicalization Data Presentation Architecture Data profiling Data warehouse Geovisualization Grand Tour (data visualisation) imc FAMOS (1987), graphical data analysis Infographics Information design Information management List of graphical methods List of information graphics software List of countries by economic complexity, example of Treemapping Patent visualisation Software visualization Statistical analysis Visual analytics Warming stripes Notes References Further reading Kawa Nazemi (2014). Adaptive Semantics Visualization Eurographics Association. Andreas Kerren, John T. Stasko, Jean-Daniel Fekete, and Chris North (2008). Information Visualization – Human-Centered Issues and Perspectives. Volume 4950 of LNCS State-of-the-Art Survey, Springer. Spence, Robert Information Visualization: Design for Interaction (2nd Edition), Prentice Hall, 2007, . Jeffrey Heer, Stuart K. Card, James Landay (2005). "Prefuse: a toolkit for interactive information visualization" . In: ACM Human Factors in Computing Systems CHI 2005. Ben Bederson and Ben Shneiderman (2003). The Craft of Information Visualization: Readings and Reflections. Morgan Kaufmann. Colin Ware (2000). Information Visualization: Perception for design. Morgan Kaufmann. Stuart K. Card, Jock D. Mackinlay and Ben Shneiderman (1999). Readings in Information Visualization: Using Vision to Think, Morgan Kaufmann Publishers. Schwabish, Jonathan A. 2014. "An Economist's Guide to Visualizing Data." Journal of Economic Perspectives, 28 (1): 209–34. External links Milestones in the History of Thematic Cartography, Statistical Graphics, and Data Visualization, An illustrated chronology of innovations by Michael Friendly and Daniel J. Denis. Duke University-Christa Kelleher Presentation-Communicating through infographics-visualizing scientific & engineering information-March 6, 2015 Visualization (graphics) Statistical charts and diagrams Information technology governance de:Informationsvisualisierung
Warp drive
A warp drive or a drive enabling space warp is a fictional superluminal (faster than the speed of light) spacecraft propulsion system in many science fiction works, most notably Star Trek, and a subject of ongoing physics research. The general concept of "warp drive" was introduced by John W. Campbell in his 1957 novel Islands of Space and was popularized by the Star Trek series. Its closest real-life equivalent is the Alcubierre drive, a theoretical solution of the field equations of general relativity.

History and characteristics
Warp drive, or a drive enabling space warp, is one of several ways of travelling through space found in science fiction. It has often been discussed as being conceptually similar to hyperspace. A warp drive is a device that distorts the shape of the space-time continuum. A spacecraft equipped with a warp drive may travel at speeds greater than that of light by many orders of magnitude. In contrast to some other fictitious faster-than-light technologies such as a jump drive, the warp drive does not permit instantaneous travel and transfers between two points, but rather involves a measurable passage of time which is pertinent to the concept. In contrast to hyperspace, spacecraft at warp velocity would continue to interact with objects in "normal space". The general concept of warp drive was introduced by John W. Campbell in his 1957 novel Islands of Space. Brave New Words gave the earliest example of the term "space-warp drive" as Fredric Brown's Gateway to Darkness (1949), and also cited an unnamed story from Cosmic Stories (May 1941) as using the word "warp" in the context of space travel, although the usage of this term as a "bend or curvature" in space which facilitates travel can be traced to several works as far back as the mid-1930s, for example Jack Williamson's The Cometeers (1936).

Einstein's space warp and real-world physics
Einstein's theory of special relativity states that travel at the speed of light is impossible for material objects that, unlike photons, have a non-zero rest mass: an infinite amount of kinetic energy would be required to accelerate such an object to exactly the speed of light, and exceeding it is ruled out altogether. Warp drives are one of the science-fiction tropes that serve to circumvent this limitation in fiction to facilitate stories set at galactic scales. However, the concept of space warp has been criticized as "illogical", and has been connected to several other rubber science ideas that do not fit into our current understanding of physics, such as antigravity or negative mass. Relativity itself does allow an effective speed greater than that of light in one sense: because of time dilation and length contraction, a traveller can cover the distance of one light year in less than a year of their own (proper) time, even though they never locally exceed the speed of light. This warping of space and time is precisely specified mathematically by the Lorentz factor, which depends on velocity. Although only theoretical when published over 100 years ago, the effect has since been measured and confirmed many times. In the limit, at light speed time stops completely (relative to a certain reference frame) and it is possible to travel infinite distances across space with no passage of time. Although the concept of warp drive originated in fiction, it has received some scientific consideration, most notably related to the 1990s concept of the Alcubierre drive.
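For reference, the Lorentz factor mentioned above has the standard special-relativistic form (this formula is textbook physics rather than something stated explicitly in the article):

\gamma = \frac{1}{\sqrt{1 - v^2/c^2}}, \qquad \Delta t = \gamma\, \Delta\tau, \qquad L = \frac{L_0}{\gamma}

As the speed $v$ approaches $c$, $\gamma$ grows without bound, the traveller's proper time $\Delta\tau$ advances ever more slowly relative to coordinate time $\Delta t$, and distances contract by the factor $\gamma$; this is the precise sense in which a distance of one light year can be crossed in less than a year of onboard time.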
Alcubierre stated in an email to William Shatner that his theory was directly inspired by the term used in the TV series Star Trek and cites the "'warp drive' of science fiction" in his 1994 article. In 2021, DARPA-funded researcher Harold White, of the Limitless Space Institute, claimed that he had succeeded in creating a real warp bubble, saying "our detailed numerical analysis of our custom Casimir cavities helped us identify a real and manufacturable nano/microstructure that is predicted to generate a negative vacuum energy density such that it would manifest a real nanoscale warp bubble, not an analog, but the real thing." Star Trek Warp drive is one of the fundamental features of the Star Trek franchise and one of the best-known examples of space warp (warp drive) in fiction. In the first pilot episode of Star Trek: The Original Series, "The Cage", it is referred to as a "hyperdrive", with Captain Pike stating the speed to reach planet TalosIV as "time warp, factor 7". The warp drive in Star Trek is one of the most detailed fictional technologies. Compared to the hyperspace drives of other fictional universes, it differs in that a spaceship does not leave the normal space-time continuum and instead the space-time itself is distorted, as is made possible in the general theory of relativity. The basic functional principle of the warp drive in Star Trek is the same for all spaceships. A strong energy source, usually a so-called warp core or sometimes called intermix chamber, generates a high-energy plasma. This plasma is transported to the so-called warp field generators via lines that are reminiscent of pipes. These generators are basically coils in warp nacelles protruding from the spaceship. These generate a subspace field, the so-called warp field or a warp bubble, which distort space-time and propels the bubble and spaceship in the bubble forward. The warp core can be designed in various forms. Humans and most of the other fictional races use a moderated reaction of antideuterium and deuterium. The energy produced passes through a matrix, which is made of a fictional chemical element, called dilithium. However, other species are shown to use different methods for faster-than-light propulsion. The Romulans, for example, use artificial micro-black holes called quantum singularities. The speeds are given in warp factors and follow a geometric progression. The first scale developed by Franz Joseph was simply a cubic progression with no limit. This leads to the use of ever growing warp factors in the Original Series and the Animated Series. For example, warp 14.1 in the TOS-episode "That Which Survives" or warp 36 in the TAS-episode "The Counter-Clock Incident". In order to focus more on the story and away from the technobabble, Gene Roddenberry commissioned Michael Okuda to invent a revised warp scale. Warp 10 should be the absolute limit and stand for infinite speed. In homage to Gene Roddenberry, this limit was also called "Eugene's Limit". Okuda explains this in an author's comment in his technical manual for the USS Enterprise-D. Between Warp 1 (the speed of light) and Warp 9, the increase was still roughly geometric, but the exponent was adjusted so that the speeds were higher compared to the old scale. For instance, Warp 9 is more than 1500 times faster than Warp 1 in comparison to the 729 times (nine to the power of 3) calculated using the original cubic formula. 
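The two scales just described differ only in how a warp factor maps to a multiple of the speed of light. A quick MATLAB/Octave check of the original cubic rule (speed equals the warp factor cubed, in units of c) reproduces the figure quoted above; the revised Okuda-era scale has no single published closed-form formula below warp 9, so only the cubic scale is computed here.

% Original (TOS-era) cubic warp scale: v = w^3, in multiples of c.
w = [1 2 5 9 14.1];          % warp factors mentioned in the text
v = w.^3;                    % speed as a multiple of the speed of light
for k = 1:numel(w)
    fprintf('warp %.1f  ->  %8.1f c\n', w(k), v(k));
end
% warp 9 -> 729 c, matching "nine to the power of 3" above;
% the revised scale instead places warp 9 at more than 1500 c.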
In the same author's comment, Okuda explains that the motivation was to fulfill fan expectations that the new Enterprise is much faster than the original, but without changing the warp factor numbers. Between Warp 9 and Warp 10, the new scale grows exponentially. Only in a single episode of Star Trek Voyager there was a specific numerical speed value given for a warp factor. In the episode "The 37's", Tom Paris tells Amelia Earhart that Warp 9.9 is about 4 billion miles per second (using customary units for the character's benefit). That is more than 14 times the value of Warp 9 and equal to around 21,400 times speed of light. However, this statement contradicts the technical manuals and encyclopedias written by Rick Sternbach and Michael Okuda, where a speed of 3053 times the speed of light was established for a warp factor of 9.9 and a speed of 7912 times the speed of light for a warp factor of 9.99. Both numerical values are well below the value given by Tom Paris. In the episode "Vis à Vis", a coaxial warp drive is mentioned. The working principle is explained in more detail in the Star Trek Encyclopedia. This variant of a warp drive uses spatial folding instead of a warp field and allows an instant movement with nearly infinite velocity. Star Trek has also introduced a so-called Transwarp concept, but without a fixed definition. It is effectively a catch-all phrase for any and all technologies and natural phenomena that enable speeds above Warp 9.99. Rick Sternbach described the basic idea in the Technical Manual: "Finally, we had to provide some loophole for various powerful aliens like Q, who have a knack for tossing the ship million of light years in the time of a commercial break. [...] This lets Q and his friends have fun in the 9.9999+ range, but also lets our ship travel slowly enough to keep the galaxy a big place, and meets the other criteria." See also Bussard collector Exotic matter Gravitational interaction of antimatter Krasnikov tube Negative energy Tachyons Timeline of black hole physics Timeline of gravitational physics and relativity References External links Embedding of the Alcubierre Warp drive 2d plot in Google Warp Drive, When? A NASA feasibility article Special Relativity Simulator What would things look like at near-warp speeds? Alcubierre Warp Drive at the Encyclopedia of Astrobiology, Astronomy, and Spaceflight The Warp Drive Could Become Science Fact Fiction about faster-than-light travel Science fiction themes Star Trek devices
Ergodic process
In physics, statistics, econometrics and signal processing, a stochastic process is said to be in an ergodic regime if an observable's ensemble average equals the time average. In this regime, any collection of random samples from a process must represent the average statistical properties of the entire regime. Conversely, a regime of a process that is not ergodic is said to be in a non-ergodic regime. A regime implies a time-window of a process whereby the ergodicity measure is applied.

Specific definitions
One can discuss the ergodicity of various statistics of a stochastic process. For example, a wide-sense stationary process $X(t)$ has constant mean $\mu_X = E[X(t)]$ and autocovariance $r_X(\tau) = E[(X(t)-\mu_X)(X(t+\tau)-\mu_X)]$ that depends only on the lag $\tau$ and not on time $t$. The properties $\mu_X$ and $r_X(\tau)$ are ensemble averages (calculated over all possible sample functions $X$), not time averages. The process $X(t)$ is said to be mean-ergodic or mean-square ergodic in the first moment if the time average estimate
$\hat{\mu}_X = \frac{1}{T} \int_0^T X(t)\, dt$
converges in squared mean to the ensemble average $\mu_X$ as $T \to \infty$. Likewise, the process is said to be autocovariance-ergodic or ergodic in the second moment if the time average estimate
$\hat{r}_X(\tau) = \frac{1}{T} \int_0^T [X(t+\tau)-\mu_X][X(t)-\mu_X]\, dt$
converges in squared mean to the ensemble average $r_X(\tau)$ as $T \to \infty$. A process which is ergodic in the mean and autocovariance is sometimes called ergodic in the wide sense.

Discrete-time random processes
The notion of ergodicity also applies to discrete-time random processes $X[n]$ for integer $n$. A discrete-time random process $X[n]$ is ergodic in mean if
$\hat{\mu}_X = \frac{1}{N} \sum_{n=1}^{N} X[n]$
converges in squared mean to the ensemble average $E[X[n]]$ as $N \to \infty$.

Examples
Ergodicity means the ensemble average equals the time average. Following are examples to illustrate this principle.

Call centre
Each operator in a call centre spends time alternately speaking and listening on the telephone, as well as taking breaks between calls. Each break and each call are of different length, as are the durations of each 'burst' of speaking and listening, and indeed so is the rapidity of speech at any given moment, which could each be modelled as a random process. Take N call centre operators (N should be a very large integer) and plot the number of words spoken per minute for each operator over a long period (several shifts). For each operator you will have a series of points, which could be joined with lines to create a 'waveform'. Calculate the average value of those points in the waveform; this gives you the time average. There are N waveforms and N operators. These N waveforms are known as an ensemble. Now take a particular instant of time in all those waveforms and find the average value of the number of words spoken per minute. That gives you the ensemble average for that instant. If the ensemble average always equals the time average, then the system is ergodic.

Electronics
Each resistor has an associated thermal noise that depends on the temperature. Take N resistors (N should be very large) and plot the voltage across those resistors for a long period. For each resistor you will have a waveform. Calculate the average value of that waveform; this gives you the time average. There are N waveforms as there are N resistors. These N plots are known as an ensemble. Now take a particular instant of time in all those plots and find the average value of the voltage. That gives you the ensemble average for that instant. If the ensemble average and the time average are the same then it is ergodic.

Examples of non-ergodic random processes
An unbiased random walk is non-ergodic. Its expectation value is zero at all times, whereas its time average is a random variable with divergent variance.
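A small numerical illustration of the ergodic/non-ergodic contrast described above, written as a MATLAB/Octave sketch with arbitrary parameters: zero-mean white noise (a crude stand-in for the resistor-noise example) is mean-ergodic, so the time average of a single long realization lands near the ensemble average across realizations, whereas for an unbiased random walk the time average of each realization is itself random and the two kinds of average need not agree.

% Ergodic case: i.i.d. zero-mean noise.
N = 500; T = 10000;                       % N realizations, T time steps (arbitrary)
noise = randn(N, T);
time_avg_noise     = mean(noise(1, :));   % time average of one realization
ensemble_avg_noise = mean(noise(:, end)); % ensemble average at one instant
% Both are close to the true mean 0 for large T and N.

% Non-ergodic case: unbiased random walk.
walk = cumsum(randn(N, T), 2);
time_avg_walk     = mean(walk(1, :));     % varies wildly between realizations
ensemble_avg_walk = mean(walk(:, end));   % close to 0 (the expectation)
fprintf('noise: time avg %.3f vs ensemble avg %.3f\n', time_avg_noise, ensemble_avg_noise);
fprintf('walk:  time avg %.3f vs ensemble avg %.3f\n', time_avg_walk, ensemble_avg_walk);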
Suppose that we have two coins: one coin is fair and the other has two heads. We choose (at random) one of the coins first, and then perform a sequence of independent tosses of our selected coin. Let X[n] denote the outcome of the nth toss, with 1 for heads and 0 for tails. Then the ensemble average is $E[X[n]] = \tfrac{1}{2}\left(\tfrac{1}{2} + 1\right) = \tfrac{3}{4}$; yet the long-term average is $\tfrac{1}{2}$ for the fair coin and 1 for the two-headed coin. So the long-term time average is either 1/2 or 1. Hence, this random process is not ergodic in mean.

See also
Ergodic hypothesis
Ergodicity
Ergodic theory, a branch of mathematics concerned with a more general formulation of ergodicity
Loschmidt's paradox
Poincaré recurrence theorem

Notes

References

Ergodic theory Signal processing
CMA-ES
Covariance matrix adaptation evolution strategy (CMA-ES) is a particular kind of strategy for numerical optimization. Evolution strategies (ES) are stochastic, derivative-free methods for numerical optimization of non-linear or non-convex continuous optimization problems. They belong to the class of evolutionary algorithms and evolutionary computation. An evolutionary algorithm is broadly based on the principle of biological evolution, namely the repeated interplay of variation (via recombination and mutation) and selection: in each generation (iteration) new individuals (candidate solutions, denoted as $x$) are generated by variation of the current parental individuals, usually in a stochastic way. Then, some individuals are selected to become the parents in the next generation based on their fitness or objective function value $f(x)$. In this way, individuals with better and better $f$-values are generated over the generation sequence. In an evolution strategy, new candidate solutions are usually sampled according to a multivariate normal distribution in $\mathbb{R}^n$. Recombination amounts to selecting a new mean value for the distribution. Mutation amounts to adding a random vector, a perturbation with zero mean. Pairwise dependencies between the variables in the distribution are represented by a covariance matrix. The covariance matrix adaptation (CMA) is a method to update the covariance matrix of this distribution. This is particularly useful if the function $f$ is ill-conditioned. Adaptation of the covariance matrix amounts to learning a second-order model of the underlying objective function similar to the approximation of the inverse Hessian matrix in the quasi-Newton method in classical optimization. In contrast to most classical methods, fewer assumptions on the underlying objective function are made. Because only a ranking (or, equivalently, sorting) of candidate solutions is exploited, neither derivatives nor even an (explicit) objective function is required by the method. For example, the ranking could come about from pairwise competitions between the candidate solutions in a Swiss-system tournament.

Principles
Two main principles for the adaptation of parameters of the search distribution are exploited in the CMA-ES algorithm. First, a maximum-likelihood principle, based on the idea of increasing the probability of successful candidate solutions and search steps. The mean of the distribution is updated such that the likelihood of previously successful candidate solutions is maximized. The covariance matrix of the distribution is updated (incrementally) such that the likelihood of previously successful search steps is increased. Both updates can be interpreted as a natural gradient descent. Also, in consequence, the CMA conducts an iterated principal components analysis of successful search steps while retaining all principal axes. Estimation of distribution algorithms and the Cross-Entropy Method are based on very similar ideas, but estimate (non-incrementally) the covariance matrix by maximizing the likelihood of successful solution points instead of successful search steps. Second, two paths of the time evolution of the distribution mean of the strategy are recorded, called search or evolution paths. These paths contain significant information about the correlation between consecutive steps. Specifically, if consecutive steps are taken in a similar direction, the evolution paths become long. The evolution paths are exploited in two ways.
One path is used for the covariance matrix adaptation procedure in place of single successful search steps and facilitates a possibly much faster variance increase of favorable directions. The other path is used to conduct an additional step-size control. This step-size control aims to make consecutive movements of the distribution mean orthogonal in expectation. The step-size control effectively prevents premature convergence yet allowing fast convergence to an optimum. Algorithm In the following the most commonly used (μ/μw, λ)-CMA-ES is outlined, where in each iteration step a weighted combination of the μ best out of λ new candidate solutions is used to update the distribution parameters. The main loop consists of three main parts: 1) sampling of new solutions, 2) re-ordering of the sampled solutions based on their fitness, 3) update of the internal state variables based on the re-ordered samples. A pseudocode of the algorithm looks as follows. set // number of samples per iteration, at least two, generally > 4 initialize , , , , // initialize state variables while not terminate do // iterate for in do // sample new solutions and evaluate them sample_multivariate_normal(mean, covariance_matrix) ← with // sort solutions // we need later and ← update_m // move mean to better solutions ← update_ps // update isotropic evolution path ← update_pc // update anisotropic evolution path ← update_C // update covariance matrix ← update_sigma // update step-size using isotropic path length return or The order of the five update assignments is relevant: must be updated first, and must be updated before , and must be updated last. The update equations for the five state variables are specified in the following. Given are the search space dimension and the iteration step . The five state variables are , the distribution mean and current favorite solution to the optimization problem, , the step-size, , a symmetric and positive-definite covariance matrix with and , two evolution paths, initially set to the zero vector. The iteration starts with sampling candidate solutions from a multivariate normal distribution , i.e. for The second line suggests the interpretation as unbiased perturbation (mutation) of the current favorite solution vector (the distribution mean vector). The candidate solutions are evaluated on the objective function to be minimized. Denoting the -sorted candidate solutions as the new mean value is computed as where the positive (recombination) weights sum to one. Typically, and the weights are chosen such that . The only feedback used from the objective function here and in the following is an ordering of the sampled candidate solutions due to the indices . The step-size is updated using cumulative step-size adaptation (CSA), sometimes also denoted as path length control. The evolution path (or search path) is updated first. where is the backward time horizon for the evolution path and larger than one ( is reminiscent of an exponential decay constant as where is the associated lifetime and the half-life), is the variance effective selection mass and by definition of , is the unique symmetric square root of the inverse of , and is the damping parameter usually close to one. For or the step-size remains unchanged. The step-size is increased if and only if is larger than the expected value and decreased if it is smaller. For this reason, the step-size update tends to make consecutive steps -conjugate, in that after the adaptation has been successful . 
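Because several of the inline formulas in this section did not survive formatting, here is a compact restatement of the sampling, mean, and step-size (CSA) updates just described. It is written to match the purecmaes MATLAB/Octave listing given later in the article; the symbols follow the usual CMA-ES conventions and should be checked against the original references.

x_i \sim m + \sigma\, \mathcal{N}(0, C), \qquad i = 1, \dots, \lambda

m \leftarrow \sum_{i=1}^{\mu} w_i\, x_{i:\lambda}, \qquad w_1 \ge \dots \ge w_\mu > 0, \quad \sum_{i=1}^{\mu} w_i = 1

p_\sigma \leftarrow (1 - c_\sigma)\, p_\sigma + \sqrt{c_\sigma (2 - c_\sigma)\, \mu_{\mathrm{eff}}}\; C^{-1/2}\, \frac{m - m'}{\sigma}

\sigma \leftarrow \sigma \exp\!\left( \frac{c_\sigma}{d_\sigma} \left( \frac{\lVert p_\sigma \rVert}{E\lVert \mathcal{N}(0, I) \rVert} - 1 \right) \right)

Here $x_{i:\lambda}$ is the $i$-th best of the $\lambda$ samples, $m'$ is the previous mean, and $c_\sigma$, $d_\sigma$ and $\mu_{\mathrm{eff}}$ correspond to cs, damps and mueff in the code. The covariance matrix update, described next, accumulates the second evolution path $p_c$ in the same cumulative fashion.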
Finally, the covariance matrix is updated, where again the respective evolution path is updated first. where denotes the transpose and is the backward time horizon for the evolution path and larger than one, and the indicator function evaluates to one iff or, in other words, , which is usually the case, makes partly up for the small variance loss in case the indicator is zero, is the learning rate for the rank-one update of the covariance matrix and is the learning rate for the rank- update of the covariance matrix and must not exceed . The covariance matrix update tends to increase the likelihood for and for to be sampled from . This completes the iteration step. The number of candidate samples per iteration, , is not determined a priori and can vary in a wide range. Smaller values, for example , lead to more local search behavior. Larger values, for example with default value , render the search more global. Sometimes the algorithm is repeatedly restarted with increasing by a factor of two for each restart. Besides of setting (or possibly instead, if for example is predetermined by the number of available processors), the above introduced parameters are not specific to the given objective function and therefore not meant to be modified by the user. Example code in MATLAB/Octave function xmin=purecmaes % (mu/mu_w, lambda)-CMA-ES % -------------------- Initialization -------------------------------- % User defined input parameters (need to be edited) strfitnessfct = 'frosenbrock'; % name of objective/fitness function N = 20; % number of objective variables/problem dimension xmean = rand(N,1); % objective variables initial point sigma = 0.3; % coordinate wise standard deviation (step size) stopfitness = 1e-10; % stop if fitness < stopfitness (minimization) stopeval = 1e3*N^2; % stop after stopeval number of function evaluations % Strategy parameter setting: Selection lambda = 4+floor(3*log(N)); % population size, offspring number mu = lambda/2; % number of parents/points for recombination weights = log(mu+1/2)-log(1:mu)'; % muXone array for weighted recombination mu = floor(mu); weights = weights/sum(weights); % normalize recombination weights array mueff=sum(weights)^2/sum(weights.^2); % variance-effectiveness of sum w_i x_i % Strategy parameter setting: Adaptation cc = (4+mueff/N) / (N+4 + 2*mueff/N); % time constant for cumulation for C cs = (mueff+2) / (N+mueff+5); % t-const for cumulation for sigma control c1 = 2 / ((N+1.3)^2+mueff); % learning rate for rank-one update of C cmu = min(1-c1, 2 * (mueff-2+1/mueff) / ((N+2)^2+mueff)); % and for rank-mu update damps = 1 + 2*max(0, sqrt((mueff-1)/(N+1))-1) + cs; % damping for sigma % usually close to 1 % Initialize dynamic (internal) strategy parameters and constants pc = zeros(N,1); ps = zeros(N,1); % evolution paths for C and sigma B = eye(N,N); % B defines the coordinate system D = ones(N,1); % diagonal D defines the scaling C = B * diag(D.^2) * B'; % covariance matrix C invsqrtC = B * diag(D.^-1) * B'; % C^-1/2 eigeneval = 0; % track update of B and D chiN=N^0.5*(1-1/(4*N)+1/(21*N^2)); % expectation of % ||N(0,I)|| == norm(randn(N,1)) % -------------------- Generation Loop -------------------------------- counteval = 0; % the next 40 lines contain the 20 lines of interesting code while counteval < stopeval % Generate and evaluate lambda offspring for k=1:lambda arx(:,k) = xmean + sigma * B * (D .* randn(N,1)); % m + sig * Normal(0,C) arfitness(k) = feval(strfitnessfct, arx(:,k)); % objective function call counteval = counteval+1; end % 
Sort by fitness and compute weighted mean into xmean [arfitness, arindex] = sort(arfitness); % minimization xold = xmean; xmean = arx(:,arindex(1:mu))*weights; % recombination, new mean value % Cumulation: Update evolution paths ps = (1-cs)*ps ... + sqrt(cs*(2-cs)*mueff) * invsqrtC * (xmean-xold) / sigma; hsig = norm(ps)/sqrt(1-(1-cs)^(2*counteval/lambda))/chiN < 1.4 + 2/(N+1); pc = (1-cc)*pc ... + hsig * sqrt(cc*(2-cc)*mueff) * (xmean-xold) / sigma; % Adapt covariance matrix C artmp = (1/sigma) * (arx(:,arindex(1:mu))-repmat(xold,1,mu)); C = (1-c1-cmu) * C ... % regard old matrix + c1 * (pc*pc' ... % plus rank one update + (1-hsig) * cc*(2-cc) * C) ... % minor correction if hsig==0 + cmu * artmp * diag(weights) * artmp'; % plus rank mu update % Adapt step size sigma sigma = sigma * exp((cs/damps)*(norm(ps)/chiN - 1)); % Decomposition of C into B*diag(D.^2)*B' (diagonalization) if counteval - eigeneval > lambda/(c1+cmu)/N/10 % to achieve O(N^2) eigeneval = counteval; C = triu(C) + triu(C,1)'; % enforce symmetry [B,D] = eig(C); % eigen decomposition, B==normalized eigenvectors D = sqrt(diag(D)); % D is a vector of standard deviations now invsqrtC = B * diag(D.^-1) * B'; end % Break, if fitness is good enough or condition exceeds 1e14, better termination methods are advisable if arfitness(1) <= stopfitness || max(D) > 1e7 * min(D) break; end end % while, end generation loop xmin = arx(:, arindex(1)); % Return best point of last iteration. % Notice that xmean is expected to be even % better. end % --------------------------------------------------------------- function f=frosenbrock(x) if size(x,1) < 2 error('dimension must be greater one'); end f = 100*sum((x(1:end-1).^2 - x(2:end)).^2) + sum((x(1:end-1)-1).^2); end Theoretical foundations Given the distribution parameters—mean, variances and covariances—the normal probability distribution for sampling new candidate solutions is the maximum entropy probability distribution over , that is, the sample distribution with the minimal amount of prior information built into the distribution. More considerations on the update equations of CMA-ES are made in the following. Variable metric The CMA-ES implements a stochastic variable-metric method. In the very particular case of a convex-quadratic objective function the covariance matrix adapts to the inverse of the Hessian matrix , up to a scalar factor and small random fluctuations. More general, also on the function , where is strictly increasing and therefore order preserving, the covariance matrix adapts to , up to a scalar factor and small random fluctuations. For selection ratio (and hence population size ), the selected solutions yield an empirical covariance matrix reflective of the inverse-Hessian even in evolution strategies without adaptation of the covariance matrix. This result has been proven for on a static model, relying on the quadratic approximation. Maximum-likelihood updates The update equations for mean and covariance matrix maximize a likelihood while resembling an expectation–maximization algorithm. The update of the mean vector maximizes a log-likelihood, such that where denotes the log-likelihood of from a multivariate normal distribution with mean and any positive definite covariance matrix . To see that is independent of remark first that this is the case for any diagonal matrix , because the coordinate-wise maximizer is independent of a scaling factor. Then, rotation of the data points or choosing non-diagonal are equivalent. 
The rank- update of the covariance matrix, that is, the right most summand in the update equation of , maximizes a log-likelihood in that for (otherwise is singular, but substantially the same result holds for ). Here, denotes the likelihood of from a multivariate normal distribution with zero mean and covariance matrix . Therefore, for and , is the above maximum-likelihood estimator. See estimation of covariance matrices for details on the derivation. Natural gradient descent in the space of sample distributions Akimoto et al. and Glasmachers et al. discovered independently that the update of the distribution parameters resembles the descent in direction of a sampled natural gradient of the expected objective function value (to be minimized), where the expectation is taken under the sample distribution. With the parameter setting of and , i.e. without step-size control and rank-one update, CMA-ES can thus be viewed as an instantiation of Natural Evolution Strategies (NES). The natural gradient is independent of the parameterization of the distribution. Taken with respect to the parameters of the sample distribution , the gradient of can be expressed as where depends on the parameter vector . The so-called score function, , indicates the relative sensitivity of w.r.t. , and the expectation is taken with respect to the distribution . The natural gradient of , complying with the Fisher information metric (an informational distance measure between probability distributions and the curvature of the relative entropy), now reads where the Fisher information matrix is the expectation of the Hessian of and renders the expression independent of the chosen parameterization. Combining the previous equalities we get A Monte Carlo approximation of the latter expectation takes the average over samples from where the notation from above is used and therefore are monotonically decreasing in . Ollivier et al. finally found a rigorous derivation for the weights, , as they are defined in the CMA-ES. The weights are an asymptotically consistent estimator of the CDF of at the points of the th order statistic , as defined above, where , composed with a fixed monotonically decreasing transformation , that is, . These weights make the algorithm insensitive to the specific -values. More concisely, using the CDF estimator of instead of itself let the algorithm only depend on the ranking of -values but not on their underlying distribution. This renders the algorithm invariant to strictly increasing -transformations. Now we define such that is the density of the multivariate normal distribution . Then, we have an explicit expression for the inverse of the Fisher information matrix where is fixed and for and, after some calculations, the updates in the CMA-ES turn out as and where mat forms the proper matrix from the respective natural gradient sub-vector. That means, setting , the CMA-ES updates descend in direction of the approximation of the natural gradient while using different step-sizes (learning rates 1 and ) for the orthogonal parameters and respectively. More recent versions allow a different learning rate for the mean as well. The most recent version of CMA-ES also use a different function for and with negative values only for the latter (so-called active CMA). Stationarity or unbiasedness It is comparatively easy to see that the update equations of CMA-ES satisfy some stationarity conditions, in that they are essentially unbiased. 
Under neutral selection, where , we find that and under some mild additional assumptions on the initial conditions and with an additional minor correction in the covariance matrix update for the case where the indicator function evaluates to zero, we find Invariance Invariance properties imply uniform performance on a class of objective functions. They have been argued to be an advantage, because they allow to generalize and predict the behavior of the algorithm and therefore strengthen the meaning of empirical results obtained on single functions. The following invariance properties have been established for CMA-ES. Invariance under order-preserving transformations of the objective function value , in that for any the behavior is identical on for all strictly increasing . This invariance is easy to verify, because only the -ranking is used in the algorithm, which is invariant under the choice of . Scale-invariance, in that for any the behavior is independent of for the objective function given and . Invariance under rotation of the search space in that for any and any the behavior on is independent of the orthogonal matrix , given . More general, the algorithm is also invariant under general linear transformations when additionally the initial covariance matrix is chosen as . Any serious parameter optimization method should be translation invariant, but most methods do not exhibit all the above described invariance properties. A prominent example with the same invariance properties is the Nelder–Mead method, where the initial simplex must be chosen respectively. Convergence Conceptual considerations like the scale-invariance property of the algorithm, the analysis of simpler evolution strategies, and overwhelming empirical evidence suggest that the algorithm converges on a large class of functions fast to the global optimum, denoted as . On some functions, convergence occurs independently of the initial conditions with probability one. On some functions the probability is smaller than one and typically depends on the initial and . Empirically, the fastest possible convergence rate in for rank-based direct search methods can often be observed (depending on the context denoted as linear convergence or log-linear or exponential convergence). Informally, we can write for some , and more rigorously or similarly, This means that on average the distance to the optimum decreases in each iteration by a "constant" factor, namely by . The convergence rate is roughly , given is not much larger than the dimension . Even with optimal and , the convergence rate cannot largely exceed , given the above recombination weights are all non-negative. The actual linear dependencies in and are remarkable and they are in both cases the best one can hope for in this kind of algorithm. Yet, a rigorous proof of convergence is missing. Interpretation as coordinate-system transformation Using a non-identity covariance matrix for the multivariate normal distribution in evolution strategies is equivalent to a coordinate system transformation of the solution vectors, mainly because the sampling equation can be equivalently expressed in an "encoded space" as The covariance matrix defines a bijective transformation (encoding) for all solution vectors into a space, where the sampling takes place with identity covariance matrix. 
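Written out in the usual notation, with C^{1/2} the unique symmetric positive definite square root of C, the sampling step and its "encoded" form read

\[
x_i \;\sim\; m + \sigma\, C^{1/2}\,\mathcal N(0, I)
\qquad\Longleftrightarrow\qquad
C^{-1/2} x_i \;\sim\; C^{-1/2} m + \sigma\,\mathcal N(0, I),
\]

so that in the encoded variables C^{-1/2}x the sampling indeed takes place with the identity covariance matrix.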
Because the update equations in the CMA-ES are invariant under linear coordinate system transformations, the CMA-ES can be re-written as an adaptive encoding procedure applied to a simple evolution strategy with identity covariance matrix. This adaptive encoding procedure is not confined to algorithms that sample from a multivariate normal distribution (like evolution strategies), but can in principle be applied to any iterative search method. Performance in practice In contrast to most other evolutionary algorithms, the CMA-ES is, from the user's perspective, quasi-parameter-free. The user has to choose an initial solution point, , and the initial step-size, . Optionally, the number of candidate samples λ (population size) can be modified by the user in order to change the characteristic search behavior (see above) and termination conditions can or should be adjusted to the problem at hand. The CMA-ES has been empirically successful in hundreds of applications and is considered to be useful in particular on non-convex, non-separable, ill-conditioned, multi-modal or noisy objective functions. One survey of Black-Box optimizations found it outranked 31 other optimization algorithms, performing especially strongly on "difficult functions" or larger-dimensional search spaces. The search space dimension ranges typically between two and a few hundred. Assuming a black-box optimization scenario, where gradients are not available (or not useful) and function evaluations are the only considered cost of search, the CMA-ES method is likely to be outperformed by other methods in the following conditions: on low-dimensional functions, say , for example by the downhill simplex method or surrogate-based methods (like kriging with expected improvement); on separable functions without or with only negligible dependencies between the design variables in particular in the case of multi-modality or large dimension, for example by differential evolution; on (nearly) convex-quadratic functions with low or moderate condition number of the Hessian matrix, where BFGS or NEWUOA or SLSQP are typically at least ten times faster; on functions that can already be solved with a comparatively small number of function evaluations, say no more than , where CMA-ES is often slower than, for example, NEWUOA or Multilevel Coordinate Search (MCS). On separable functions, the performance disadvantage is likely to be most significant in that CMA-ES might not be able to find at all comparable solutions. On the other hand, on non-separable functions that are ill-conditioned or rugged or can only be solved with more than function evaluations, the CMA-ES shows most often superior performance. Variations and extensions The (1+1)-CMA-ES generates only one candidate solution per iteration step which becomes the new distribution mean if it is better than the current mean. For the (1+1)-CMA-ES is a close variant of Gaussian adaptation. Some Natural Evolution Strategies are close variants of the CMA-ES with specific parameter settings. Natural Evolution Strategies do not utilize evolution paths (that means in CMA-ES setting ) and they formalize the update of variances and covariances on a Cholesky factor instead of a covariance matrix. The CMA-ES has also been extended to multiobjective optimization as MO-CMA-ES. Another remarkable extension has been the addition of a negative update of the covariance matrix with the so-called active CMA. Using the additional active CMA update is considered as the default variant nowadays. 
See also References Bibliography Hansen N, Ostermeier A (2001). Completely derandomized self-adaptation in evolution strategies. Evolutionary Computation, 9(2) pp. 159–195. Hansen N, Müller SD, Koumoutsakos P (2003). Reducing the time complexity of the derandomized evolution strategy with covariance matrix adaptation (CMA-ES). Evolutionary Computation, 11(1) pp. 1–18. Hansen N, Kern S (2004). Evaluating the CMA evolution strategy on multimodal test functions. In Xin Yao et al., editors, Parallel Problem Solving from Nature – PPSN VIII, pp. 282–291, Springer. Igel C, Hansen N, Roth S (2007). Covariance Matrix Adaptation for Multi-objective Optimization. Evolutionary Computation, 15(1) pp. 1–28. External links A short introduction to CMA-ES by N. Hansen The CMA Evolution Strategy: A Tutorial CMA-ES source code page Evolutionary algorithms Stochastic optimization Optimization algorithms and methods fr:Stratégie d'évolution#CMA-ES
Centrifugation
Centrifugation is a mechanical process which involves the use of the centrifugal force to separate particles from a solution according to their size, shape, density, medium viscosity and rotor speed. The denser components of the mixture migrate away from the axis of the centrifuge, while the less dense components of the mixture migrate towards the axis. Chemists and biologists may increase the effective gravitational force on the test tube so that the precipitate (pellet) travels quickly and fully to the bottom of the tube. The remaining liquid that lies above the precipitate is called the supernatant or supernate.

There is a correlation between the size and density of a particle and the rate at which the particle separates from a heterogeneous mixture when the only force applied is that of gravity: the larger the size and the density of the particles, the faster they separate from the mixture. By applying a larger effective gravitational force to the mixture, as a centrifuge does, the separation of the particles is accelerated. This is ideal in industrial and laboratory settings because particles that would naturally separate over a long period of time can be separated in much less time.

The rate of centrifugation is specified by the angular velocity, usually expressed as revolutions per minute (RPM), or by the acceleration expressed as g. The conversion factor between RPM and g depends on the radius of the centrifuge rotor. The particles' settling velocity in centrifugation is a function of their size and shape, the centrifugal acceleration, the volume fraction of solids present, the density difference between the particle and the liquid, and the viscosity. The most common application is the separation of solids from highly concentrated suspensions, which is used in the treatment of sewage sludges for dewatering, where a less consistent sediment is produced.

The centrifugation method has a wide variety of industrial and laboratory applications; not only is this process used to separate two immiscible liquids, but also to analyze the hydrodynamic properties of macromolecules. It is one of the most important and commonly used research methods in biochemistry, cell and molecular biology. In the chemical and food industries, special centrifuges can process a continuous stream of particle-laden liquid, turning it into a separated liquid such as plasma. Centrifugation is also the most common method used for uranium enrichment, relying on the slight mass difference between atoms of U-238 and U-235 in uranium hexafluoride gas.

Mathematical formula

In a liquid suspension, many particles or cells will gradually fall to the bottom of the container due to gravity; however, the time required for such separations makes them impractical. Other particles, which are very small, cannot be isolated at all in solution until they are exposed to a high centrifugal force. As the suspension is rotated at a certain speed, or revolutions per minute (RPM), the centrifugal force allows the particles to travel radially away from the rotation axis. The general formula for calculating the revolutions per minute (RPM) of a centrifuge is RPM = √(g / (1.118 × 10⁻⁵ × r)), where g represents the relative centrifugal force (RCF) and r the radius from the center of the rotor to a point in the sample, expressed in centimetres. However, depending on the centrifuge model used, the respective angle of the rotor and the radius may vary, so the formula gets modified. For example, the Sorvall #SS-34 rotor has a maximum radius of 10.8 cm, so the formula becomes RPM = √(g / (1.118 × 10⁻⁵ × 10.8)), which can further simplify to RPM ≈ 91 × √g.
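As a quick numerical illustration of this RPM–RCF relation, the following Octave/MATLAB sketch uses the commonly quoted empirical constant 1.118 × 10⁻⁵ (radius in centimetres). The rotor radius is the Sorvall SS-34 value from the text and 3000 rpm is the historical "standard" speed mentioned below; the 20,000 × g target is a made-up value for illustration only.

% RCF <-> RPM conversion sketch, assuming RCF = 1.118e-5 * r_cm * rpm^2
% with r in centimetres (the commonly used empirical constant).
r_cm = 10.8;                          % Sorvall SS-34 maximum radius, from the text
rpm  = 3000;                          % historical "standard" separation speed
rcf  = 1.118e-5 * r_cm * rpm^2;       % relative centrifugal force, in multiples of g
fprintf('RCF at %d rpm, r = %.1f cm: %.0f x g\n', rpm, r_cm, rcf);

target_rcf = 20000;                   % hypothetical target RCF, illustration only
rpm_needed = sqrt(target_rcf / (1.118e-5 * r_cm));
fprintf('Speed needed for %d x g: %.0f rpm\n', target_rcf, rpm_needed);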
When compared to gravity, the particle force is called the relative centrifugal force (RCF). It is the perpendicular force exerted on the contents of the rotor as a result of the rotation, always expressed relative to the gravity of the Earth, and it is used to compare the strength of rotors of different types and sizes. For instance, an RCF of 1000 × g means that the centrifugal force is 1000 times stronger than the Earth's gravitational force. RCF depends on the speed of rotation in rpm and the distance of the particles from the center of rotation. The most common formula used for calculating RCF is RCF = k × r × rpm², where k ≈ 1.118 × 10⁻⁵ is a constant; r is the radius, expressed in centimetres, between the axis of rotation and a point in the sample; and rpm is the speed in revolutions per minute. Historically, many separations have been carried out at a speed of 3000 rpm; a rough guide to the g-force exerted at this speed is to multiply the centrifugation radius in millimetres by a factor of 10, so a radius of 160 mm gives approximately 1600 × g. This is a rather arbitrary approach, since the RCF applied is linearly dependent on the radius, so a 10% larger radius means that a 10% higher RCF is applied at the same speed. Roughly, at 3000 rpm the above formula can be simplified to RCF ≈ 10 × r (with r in millimetres), with an error of only 0.62%.

Centrifugation in biological research

Microcentrifuges

Microcentrifuges are specially designed table-top models with light, small-volume rotors capable of very fast acceleration up to approximately 17,000 rpm. They are lightweight devices primarily used for short-time centrifugation of samples of up to around 0.2–2.0 mL. However, due to their small scale, they are readily transportable and, if necessary, can be operated in a cold room. They can be refrigerated or not. The microcentrifuge is normally used in research laboratories where small samples of biological molecules, cells, or nuclei need to be subjected to high RCF for relatively short time intervals. Microcentrifuges designed for high-speed operation can reach up to 35,000 rpm, giving an RCF of up to 30,000 × g, and are called high-speed microcentrifuges.

Low-speed centrifuges

Low-speed centrifuges are used to harvest chemical precipitates, intact cells (animal, plant and some microorganisms), nuclei, chloroplasts, large mitochondria and the larger plasma-membrane fragments. Density gradients for purifying cells are also run in these centrifuges. Swinging-bucket rotors tend to be used very widely because of the huge flexibility of sample size through the use of adaptors. These machines have maximum rotor speeds of less than 10,000 rpm and vary from small, bench-top to large, floor-standing centrifuges.

High-speed centrifuges

High-speed centrifuges are typically used to harvest microorganisms, viruses, mitochondria, lysosomes, peroxisomes and intact tubular Golgi membranes. The majority of simple pelleting tasks are carried out in fixed-angle rotors. Some density-gradient work for purifying cells and organelles can be carried out in swinging-bucket rotors, or, in the case of Percoll gradients, in fixed-angle rotors.

Ultracentrifuges

Ultracentrifuges apply much higher centrifugal forces and are used to harvest all membrane vesicles derived from the plasma membrane, endoplasmic reticulum (ER) and Golgi membrane, endosomes, ribosomes, ribosomal subunits, plasmids, DNA, RNA and proteins in fixed-angle rotors.
Compared to microcentrifuges or high-speed centrifuges, ultracentrifuges can isolate much smaller particles and, additionally, whilst microcentrifuges and supercentrifuges separate particles in batches (limited volumes of samples must be handled manually in test tubes or bottles), ultracentrifuges can separate molecules in batch or continuous flow systems. Ultracentrifugation is employed for separation of macromolecules/ligand binding kinetic studies, separation of various lipoprotein fractions from plasma and deprotonisation of physiological fluids for amino acid analysis. They are the most commonly used centrifuge for the density-gradient purification of all particles except cells, and, whilst swinging buckets have been traditionally used for this purpose, fixed-angle rotors and vertical rotors are also used, particularly for self-generated gradients and can improve the efficiency of separation greatly. There are two kinds of ultracentrifuges: the analytical and the preparative. Analytical ultracentrifugation Analytical ultracentrifugation (AUC) can be used for determination of the properties of macromolecules such as shape, mass, composition, and conformation. It is a commonly used biomolecular analysis technique used to evaluate sample purity, to characterize the assembly and disassembly mechanisms of biomolecular complexes, to determine subunit stoichiometries, to identify and characterize macromolecular conformational changes, and to calculate equilibrium constants and thermodynamic parameters for self-associating and hetero-associating systems. Analytical ultracentrifuges incorporate a scanning visible/ultraviolet light-based optical detection system for real-time monitoring of the sample’s progress during a spin. Samples are centrifuged with a high-density solution such as sucrose, caesium chloride, or iodixanol. The high-density solution may be at a uniform concentration throughout the test tube ("cushion") or a varying concentration ("gradient"). Molecular properties can be modeled through sedimentation velocity analysis or sedimentation equilibrium analysis. During the run, the particle or molecules will migrate through the test tube at different speeds depending on their physical properties and the properties of the solution, and eventually form a pellet at the bottom of the tube, or bands at various heights. Preparative ultracentrifugation Preparative ultracentrifuges are often used for separating particles according to their densities, isolating and/or harvesting denser particles for collection in the pellet, and clarifying suspensions containing particles. Sometimes researchers also use preparative ultracentrifuges if they need the flexibility to change the type of rotor in the instrument. Preparative ultracentrifuges can be equipped with a wide range of different rotor types, which can spin samples of different numbers, at different angles, and at different speeds. Fractionation process In biological research, cell fractionation typically includes the isolation of cellular components while retaining the individual roles of each component. Generally, the cell sample is stored in a suspension which is: Buffered—neutral pH, preventing damage to the structure of proteins including enzymes (which could affect ionic bonds) Isotonic (of equal water potential)—this prevents water gain or loss by the organelles Cool—reducing the overall activity of enzyme released later in the procedure Centrifugation is the first step in most fractionations. 
Through low-speed centrifugation, cell debris may be removed, leaving a supernatant preserving the contents of the cell. Repeated centrifugation at progressively higher speeds will fractionate homogenates of cells into their components. In general, the smaller the subcellular component, the greater is the centrifugal force required to sediment it. The soluble fraction of any lysate can then be further separated into its constituents using a variety of methods. Differential centrifugation Differential centrifugation is the simplest method of fractionation by centrifugation, commonly used to separate organelles and membranes found in cells. Organelles generally differ from each other in density and in size, making the use of differential centrifugation, and centrifugation in general, possible. The organelles can then be identified by testing for indicators that are unique to the specific organelles. The most widely used application of this technique is to produce crude subcellular fractions from a tissue homogenate such as that from rat liver. Particles of different densities or sizes in a suspension are sedimented at different rates, with the larger and denser particles sedimenting faster. These sedimentation rates can be increased by using centrifugal force. A suspension of cells is subjected to a series of increasing centrifugal force cycles to produce a series of pellets comprising cells with a declining sedimentation rate. Homogenate includes nuclei, mitochondria, lysosomes, peroxisomes, plasma membrane sheets and a broad range of vesicles derived from a number of intracellular membrane compartments and also from the plasma membrane, typically in a buffered medium. Density gradient centrifugation Density gradient centrifugation is known to be one of the most efficient methods for separating suspended particles, and is used both as a separation technique and as a method for measuring the density of particles or molecules in a mixture. It is used to separate particles on the basis of size, shape, and density by using a medium of graded densities. During a relatively short or slow centrifugation, the particles are separated by size, with larger particles sedimenting farther than smaller ones. Over a long or fast centrifugation, particles travel to locations in the gradient where the density of the medium is the same as that of the particle density; (ρp – ρm) → 0. Therefore, a small, dense particle initially sediments less readily than a large, low density particle. The large particles reach their equilibrium density position early, while the small particles slowly migrate across the large particle zone and ultimately take up an equilibrium position deeper into the gradient. A tube, after being centrifuged by this method, has particles in order of density based on height. The object or particle of interest will reside in the position within the tube corresponding to its density. Nevertheless, some non-ideal sedimentations are still possible when using this method. The first potential issue is the unwanted aggregation of particles, but this can occur in any centrifugation. The second possibility occurs when droplets of solution that contain particles sediment. This is more likely to occur when working with a solution that has a layer of suspension floating on a dense liquid, which in fact have little to no density gradient. 
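A standard first-order estimate of these sedimentation rates, stated here only for illustration (it is not given explicitly in the text), is Stokes' law with the centrifugal acceleration ω²r taking the place of g, valid for small spherical particles settling in the laminar regime:

\[
v \;=\; \frac{d^{2}\,(\rho_p - \rho_m)}{18\,\mu}\;\omega^{2} r ,
\]

where d is the particle diameter, ρp and ρm are the densities of the particle and the medium, μ is the dynamic viscosity of the medium, ω is the angular velocity of the rotor and r is the radial distance from the axis. The d² and (ρp − ρm) factors make explicit why larger and denser particles sediment faster, and why a particle stops migrating where its density matches that of the gradient medium.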
Other applications A centrifuge can be used to isolate small quantities of solids retained in suspension from liquids, such as in the separation of chalk powder from water. In biological research, it can be used in the purification of mammalian cells, fractionation of subcellular organelles, fractionation of membrane vesicles, fractionation of macromolecules and macromolecular complexes, etc. Centrifugation is used in many different ways in the food industry. For example, in the dairy industry, it is typically used in the clarification and skimming of milk, extraction of cream, production and recovery of casein, cheese production, removing bacterial contaminants, etc. This processing technique is also used in the production of beverages, juices, coffee, tea, beer, wine, soy milk, oil and fat processing/recovery, cocoa butter, sugar production, etc. It is also used in the clarification and stabilization of wine. In forensic and research laboratories, it can be used in the separation of urine and blood components. It also aids in separation of proteins using purification techniques such as salting out, e.g. ammonium sulfate precipitation. Centrifugation is also an important technique in waste treatment, being one of the most common processes used for sludge dewatering. This process also plays a role in cyclonic separation, where particles are separated from an air-flow without the use of filters. In a cyclone collector, air moves in a helical path. Particles with high inertia are separated by the centrifugal force whilst smaller particles continue with the air-flow. Centrifuges have also been used to a small degree to isolate lighter-than-water compounds, such as oil. In such situations, the aqueous discharge is obtained at the opposite outlet from which solids with a specific gravity greater than one are the target substances for separation. History By 1923 Theodor Svedberg and his student H. Rinde had successfully analyzed large-grained sols in terms of their gravitational sedimentation. Sols consist of a substance evenly distributed in another substance, also known as a colloid. However, smaller grained sols, such as those containing gold, could not be analyzed. To investigate this problem Svedberg developed an analytical centrifuge, equipped with a photographic absorption system, which would exert a much greater centrifugal effect. In addition, he developed the theory necessary to measure molecular weight. During this time, Svedberg's attention shifted from gold to proteins. By 1900, it had been generally accepted that proteins were composed of amino acids; however, whether proteins were colloids or macromolecules was still under debate. One protein being investigated at the time was hemoglobin. It was determined to have 712 carbon, 1,130 hydrogen, 243 oxygen, two sulfur atoms, and at least one iron atom. This gave hemoglobin a resulting weight of approximately 16,000 dalton (Da) but it was uncertain whether this value was a multiple of one or four (dependent upon the number of iron atoms present). Through a series of experiments utilizing the sedimentation equilibrium technique, two important observations were made: hemoglobin has a molecular weight of 68,000 Da, suggesting that there are four iron atoms present rather than one, and that, no matter where the hemoglobin was isolated from, it had exactly the same molecular weight. 
How something of such a large molecular mass could be consistently found, regardless of where it was sampled from in the body, was unprecedented and favored the idea that proteins are macromolecules rather than colloids. In order to investigate this phenomenon, a centrifuge with even higher speeds was needed, and thus the ultracentrifuge was created to apply the theory of sedimentation-diffusion. The same molecular mass was determined, and the presence of a spreading boundary suggested that it was a single compact particle. Further application of centrifugation showed that under different conditions the large homogeneous particles could be broken down into discrete subunits. The development of centrifugation was a great advance in experimental protein science. Linderstorm-Lang, in 1937, discovered that density gradient tubes could be used for density measurements. He discovered this when working with potato yellow-dwarf virus. This method was also used in Meselson and Stahl's famous experiment in which they proved that DNA replication is semi-conservative by using different isotopes of nitrogen. They used density gradient centrifugation to determine which isotope or isotopes of nitrogen were present in the DNA after cycles of replication. See also Centrifuge References Sources Harrison, Roger G., Todd, Paul, Rudge, Scott R., Petrides D.P. Bioseparations Science and Engineering. Oxford University Press, 2003. Dishon, M., Weiss, G.H., Yphantis, D.A. Numerical Solutions of the Lamm Equation. I. Numerical Procedure. Biopolymers, Vol. 4, 1966. pp. 449–455. Cao, W., Demeler B. Modeling Analytical Ultracentrifugation Experiments with an Adaptive Space-Time Finite Element Solution for Multicomponent Reacting Systems. Biophysical Journal, Vol. 95, 2008. pp. 54–65. Howlett, G.J., Minton, A.P., Rivas, G. Analytical Ultracentrifugation for the Study of Protein Association and Assembly. Current Opinion in Chemical Biology, Vol. 10, 2006. pp. 430–436. Dam, J., Velikovsky, C.A., Mariuzza R.A., et al. Sedimentation Velocity Analysis of Heterogeneous Protein-Protein Interactions: Lamm Equation Modeling and Sedimentation Coefficient Distributions c(s). Biophysical Journal, Vol. 89, 2005. pp. 619–634. Berkowitz, S.A., Philo, J.S. Monitoring the Homogeneity of Adenovirus Preparations (a Gene Therapy Delivery System) Using Analytical Ultracentrifugation''. Analytical Biochemistry, Vol. 362, 2007. pp. 16–37. A
Otolith
An otolith (, ear + , , a stone), also called statoconium, otoconium or statolith, is a calcium carbonate structure in the saccule or utricle of the inner ear, specifically in the vestibular system of vertebrates. The saccule and utricle, in turn, together make the otolith organs. These organs are what allows an organism, including humans, to perceive linear acceleration, both horizontally and vertically (gravity). They have been identified in both extinct and extant vertebrates. Counting the annual growth rings on the otoliths is a common technique in estimating the age of fish. Description Endolymphatic infillings such as otoliths are structures in the saccule and utricle of the inner ear, specifically in the vestibular labyrinth of all vertebrates (fish, amphibians, reptiles, mammals and birds). In vertebrates, the saccule and utricle together make the otolith organs. Both statoconia and otoliths are used as gravity, balance, movement, and directional indicators in all vertebrates and have a secondary function in sound detection in higher aquatic and terrestrial vertebrates. They are sensitive to gravity and linear acceleration. Because of their orientation in the head, the utricle is sensitive to a change in horizontal movement, and the saccule gives information about vertical acceleration (such as when in an elevator). Similar balance receptors called statocysts can be found in many invertebrate groups but are not contained in the structure of an inner ear. Mollusk statocysts are of a similar morphology to the displacement-sensitive organs of vertebrates; however, the function of the mollusk statocyst is restricted to gravity detection and possibly some detection of angular momentum. These are analogous structures, with similar form and function but not descended from a common structure. Statoconia (also called otoconia) are numerous grains, often spherical in shape, between 1 and 50 μm; collectively. Statoconia are also sometimes termed a statocyst. Otoliths (also called statoliths) are agglutinated crystals or crystals precipitated around a nucleus, with well defined morphology and together all may be termed endolymphatic infillings. Mechanism The semicircular canals and sacs in all vertebrates are attached to endolymphatic ducts, which in some groups (such as sharks) end in small openings, called endolymphatic pores, on the dorsal surface of the head. Extrinsic grains may enter through these openings, typically less than a millimeter in diameter. The size of material that enters is limited to sand-sized particles and in the case of sharks is bound together with an endogenous organic matrix that the animal secretes. In mammals, otoliths are small particles, consisting of a combination of a gelatinous matrix and calcium carbonate in the viscous fluid of the saccule and utricle. The weight and inertia of these small particles causes them to stimulate hair cells when the head moves. The hair cells are made up of 40 to 70 stereocilia and one kinocilium, which is connected to an afferent nerve. Hair cells send signals down sensory nerve fibers which are interpreted by the brain as motion. In addition to sensing acceleration of the head, the otoliths can help to sense the orientation via gravity's effect on them. When the head is in a normal upright position, the otolith presses on the sensory hair cell receptors. This pushes the hair cell processes down and prevents them from moving side to side. 
However, when the head is tilted, the pull of gravity on otoliths shifts the hair cell processes to the side, distorting them and sending a message to the central nervous system that the head is tilted. There is evidence that the vestibular system of mammals has retained some of its ancestral acoustic sensitivity and that this sensitivity is mediated by the otolithic organs (most likely the sacculus, due to its anatomical location). In mice lacking the otoconia of the utricle and saccule, this retained acoustic sensitivity is lost. In humans vestibular evoked myogenic potentials occur in response to loud, low-frequency acoustic stimulation in patients with the sensorineural hearing loss. Vestibular sensitivity to ultrasonic sounds has also been hypothesized to be involved in the perception of speech presented at artificially high frequencies, above the range of the human cochlea (~18 kHz). In mice, sensation of acoustic information via the vestibular system has been demonstrated to have a behaviourally relevant effect; response to an elicited acoustic startle reflex is larger in the presence of loud, low frequency sounds that are below the threshold for the mouse cochlea (~4 Hz), raising the possibility that the acoustic sensitivity of the vestibular system may extend the hearing range of small mammals. Paleontology After the death and decomposition of a fish, otoliths may be preserved within the body of an organism or be dispersed before burial and fossilization. Dispersed otoliths are one of the many microfossils which can be found through a micropalaeontological analysis of a fine sediment. Their stratigraphic significance is minimal, but can still be used to characterize a level or interval. Fossil otoliths are rarely found in situ (on the remains of the animal), likely because they are not recognized separately from the surrounding rock matrix. In some cases, due to differences in colour, grain size, or a distinctive shape, they can be identified. These rare cases are of special significance, since the presence, composition, and morphology of the material can clarify the relationship of species and groups. In the case of primitive fish, various fossil material shows that endolymphatic infillings were similar in elemental composition to the rock matrix but were restricted to coarse grained material, which presumably is better for the detection of gravity, displacement, and sound. The presence of these extrinsic grains in osteostracans, chondrichthyans, and acanthodians indicates a common inner ear physiology and presence of open endolymphatic ducts. An unclassified fossil named Gluteus minimus has been thought to be possible otoliths, but it is hitherto unknown to which animal they could belong to. Ecology Composition The composition of fish otoliths is also proving useful to fisheries scientists. The calcium carbonate that the otolith is composed of is primarily derived from the water. As the otolith grows, new calcium carbonate crystals form. As with any crystal structure, lattice vacancies will exist during crystal formation allowing trace elements from the water to bind with the otolith. Studying the trace elemental composition or isotopic signatures of trace elements within a fish otolith gives insight to the water bodies fish have previously occupied. Fish otoliths as old as 172 million years have been used to study the environment in which the fish lived. 
Robotic micromilling devices have also been used to recover very high resolution records of life history, including diet and temperatures throughout the life of the fish, as well as their natal origin. The most studied trace and isotopic signatures are strontium due to the same charge and similar ionic radius to calcium; however, scientists can study multiple trace elements within an otolith to discriminate more specific signatures. A common tool used to measure trace elements in an otolith is a laser ablation inductively coupled plasma mass spectrometer. This tool can measure a variety of trace elements simultaneously. A secondary ion mass spectrometer can also be used. This instrument can allow for greater chemical resolution but can only measure one trace element at a time. The hope of this research is to provide scientists with valuable information on where fish have frequented. Combined with otolith annuli, scientists can add how old fish were when they traveled through different bodies of water. This information can be used to determine fish life cycles so that fisheries scientists can make better informed decisions about fish stocks. Growth rate and age Finfish (class Osteichthyes) have three pairs of otoliths – the sagittae (singular sagitta), lapilli (singular lapillus), and asterisci (singular asteriscus). The sagittae are largest, found just behind the eyes and approximately level with them vertically. The lapilli and asterisci (smallest of the three) are located within the semicircular canals. The sagittae are normally composed of aragonite (although vaterite abnormalities can occur), as are the lapilli, while the asterisci are normally composed of vaterite. The shapes and proportional sizes of the otoliths vary with fish species. In general, fish from highly structured habitats such as reefs or rocky bottoms (e.g. snappers, groupers, many drums and croakers) will have larger otoliths than fish that spend most of their time swimming at high speed in straight lines in the open ocean (e.g. tuna, mackerel, dolphinfish). Flying fish have unusually large otoliths, possibly due to their need for balance when launching themselves out of the water to "fly" in the air. Often, the fish species can be identified from distinct morphological characteristics of an isolated otolith. Fish otoliths accrete layers of calcium carbonate and gelatinous matrix throughout their lives. The accretion rate varies with growth of the fish – often less growth in winter and more in summer – which results in the appearance of rings that resemble tree rings. By counting the rings, it is possible to determine the age of the fish in years. Typically the sagitta is used, as it is largest, but sometimes lapilli are used if they have a more convenient shape. The asteriscus, which is smallest of the three, is rarely used in age and growth studies. In addition, in most species the accretion of calcium carbonate and gelatinous matrix alternates on a daily cycle. It is therefore also possible to determine fish age in days. This latter information is often obtained under a microscope, and provides significant data to early life history studies. By measuring the thickness of individual rings, it has been assumed (at least in some species) to estimate fish growth because fish growth is directly proportional to otolith growth. However, some studies disprove a direct link between body growth and otolith growth. 
At times of lower or zero body growth the otolith continues to accrete leading some researchers to believe the direct link is to metabolism, not growth per se. Otoliths, unlike scales, do not reabsorb during times of decreased energy making it even more useful tool to age a fish. Fish never stop growing entirely, though growth rate in mature fish is reduced. Rings corresponding to later parts of the life cycle tend to be closer together as a result. Furthermore, a small percentage of otoliths in some species bear deformities over time. Age and growth studies of fish are important for understanding such things as timing and magnitude of spawning, recruitment and habitat use, larval and juvenile duration, and population age structure. Such knowledge is in turn important for designing appropriate fisheries management policies. Due to the amount of required human labour in otolith age reading, there is active research in automating that process. Diet research Since the compounds in fish otoliths are resistant to digestion, they are found in the digestive tracts and scats of seabirds and piscivorous marine mammals, such as dolphins, seals, sea lions and walruses. Many fish can be identified to genus and species by their otoliths. Otoliths can therefore, to some extent, be used to deduce and reconstruct the prey composition of marine mammal and seabird diets. Otoliths (sagittae) are bilaterally symmetrical, with each fish having one right and one left. Separating recovered otoliths into right and left, therefore, allows one to infer a minimum number of prey individuals ingested for a given fish species. Otolith size is also proportional to the length and weight of a fish. They can therefore be used to back-calculate prey size and biomass, useful when trying to estimate marine mammal prey consumption, and potential impacts on fish stocks. Otoliths cannot be used alone to reliably estimate cetacean or pinniped diets, however. They may suffer partial or complete erosion in the digestive tract, skewing measurements of prey number and biomass. Species with fragile, easily digested otoliths may be underestimated in the diet. To address these biases, otolith correction factors have been developed through captive feeding experiments, in which seals are fed fish of known size, and the degree of otolith erosion is quantified for different prey taxa. The inclusion of fish vertebrae, jaw bones, teeth, and other informative skeletal elements improves prey identification and quantification over otolith analysis alone. This is especially true for fish species with fragile otoliths, but other distinctive bones, such as Atlantic mackerel (Scomber scombrus), and Atlantic herring (Clupea harengus). Otolith ornaments 'Sea gems' ornaments from fish otoliths have been introduced in the market in India recently, with the efforts of a group of enthusiastic fisher women in Vizhinjam. Scientists from Central Marine Fisheries Research Institute (CMFRI) have trained these fisher-women. Ornaments from fish otoliths, known to the Romans and Egyptians as lucky stones, are continued to be used in countries like Brazil and the Faeröer, and are being collected and sold in an organized and sustainable manner in India. See also Ossicles Otolithic membrane Otolith microchemical analysis Orbiting Frog Otolith, 1970 space mission References External links Otolith Research Lab – Bedford Institute of Oceanography. Auditory system Fish anatomy Paleozoology Articles containing video clips
Diffusion equation
The diffusion equation is a parabolic partial differential equation. In physics, it describes the macroscopic behavior of many micro-particles in Brownian motion, resulting from the random movements and collisions of the particles (see Fick's laws of diffusion). In mathematics, it is related to Markov processes, such as random walks, and applied in many other fields, such as materials science, information theory, and biophysics. The diffusion equation is a special case of the convection–diffusion equation when bulk velocity is zero. It is equivalent to the heat equation under some circumstances. Statement The equation is usually written as: where is the density of the diffusing material at location and time and is the collective diffusion coefficient for density at location ; and represents the vector differential operator del. If the diffusion coefficient depends on the density then the equation is nonlinear, otherwise it is linear. The equation above applies when the diffusion coefficient is isotropic; in the case of anisotropic diffusion, is a symmetric positive definite matrix, and the equation is written (for three dimensional diffusion) as: The diffusion equation has numerous analytic solutions. If is constant, then the equation reduces to the following linear differential equation: which is identical to the heat equation. Historical origin The particle diffusion equation was originally derived by Adolf Fick in 1855. Derivation The diffusion equation can be trivially derived from the continuity equation, which states that a change in density in any part of the system is due to inflow and outflow of material into and out of that part of the system. Effectively, no material is created or destroyed: where j is the flux of the diffusing material. The diffusion equation can be obtained easily from this when combined with the phenomenological Fick's first law, which states that the flux of the diffusing material in any part of the system is proportional to the local density gradient: If drift must be taken into account, the Fokker–Planck equation provides an appropriate generalization. Discretization The diffusion equation is continuous in both space and time. One may discretize space, time, or both space and time, which arise in application. Discretizing time alone just corresponds to taking time slices of the continuous system, and no new phenomena arise. In discretizing space alone, the Green's function becomes the discrete Gaussian kernel, rather than the continuous Gaussian kernel. In discretizing both time and space, one obtains the random walk. Discretization in image processing The product rule is used to rewrite the anisotropic tensor diffusion equation, in standard discretization schemes, because direct discretization of the diffusion equation with only first order spatial central differences leads to checkerboard artifacts. The rewritten diffusion equation used in image filtering: where "tr" denotes the trace of the 2nd rank tensor, and superscript "T" denotes transpose, in which in image filtering D(ϕ, r) are symmetric matrices constructed from the eigenvectors of the image structure tensors. The spatial derivatives can then be approximated by two first order and a second order central finite differences. The resulting diffusion algorithm can be written as an image convolution with a varying kernel (stencil) of size 3 × 3 in 2D and 3 × 3 × 3 in 3D. 
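For reference, the main equations described earlier in this article can be written out explicitly. Writing the density of the diffusing material as φ(r, t) and the diffusion coefficient as D, as is conventional, the general, anisotropic and constant-coefficient forms are

\[
\frac{\partial \phi(\mathbf r, t)}{\partial t}
= \nabla \cdot \bigl[\, D(\phi, \mathbf r)\, \nabla \phi(\mathbf r, t) \,\bigr],
\qquad
\frac{\partial \phi}{\partial t}
= \sum_{i,j} \frac{\partial}{\partial x_i}\!\left[ D_{ij}(\phi, \mathbf r)\,\frac{\partial \phi}{\partial x_j} \right],
\qquad
\frac{\partial \phi}{\partial t} = D\, \nabla^{2} \phi .
\]

The derivation sketched above combines the continuity equation, ∂φ/∂t + ∇·j = 0, with Fick's first law, j = −D(φ, r) ∇φ, which immediately yields the first form.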
See also Continuity equation Heat equation Fokker–Planck equation Fick's laws of diffusion Maxwell–Stefan equation Radiative transfer equation and diffusion theory for photon transport in biological tissue Streamline diffusion Numerical solution of the convection–diffusion equation References Further reading Carslaw, H. S. and Jaeger, J. C. (1959). Conduction of Heat in Solids Oxford: Clarendon Press Jacobs. M.H. (1935) Diffusion Processes Berlin/Heidelberg: Springer Crank, J. (1956). The Mathematics of Diffusion. Oxford: Clarendon Press Mathews, Jon; Walker, Robert L. (1970). Mathematical methods of physics (2nd ed.), New York: W. A. Benjamin, Thambynayagam, R. K. M (2011). The Diffusion Handbook: Applied Solutions for Engineers. McGraw-Hill Ghez, R (2001) Diffusion Phenomena. Long Island, NY, USA: Dover Publication Inc Bennett, T.D: (2013) Transport by Advection and Diffusion. John Wiley & Sons Vogel, G (2019) Adventure Diffusion Springer Gillespie, D.T.; Seitaridou, E (2013) Simple Brownian Diffusion. Oxford University Press External links Diffusion Calculator for Impurities & Dopants in Silicon A tutorial on the theory behind and solution of the Diffusion Equation. Classical and nanoscale diffusion (with figures and animations) Diffusion Partial differential equations Parabolic partial differential equations Functions of space and time it:Leggi di Fick
Michelson–Morley experiment
The Michelson–Morley experiment was an attempt to measure the motion of the Earth relative to the luminiferous aether, a supposed medium permeating space that was thought to be the carrier of light waves. The experiment was performed between April and July 1887 by American physicists Albert A. Michelson and Edward W. Morley at what is now Case Western Reserve University in Cleveland, Ohio, and published in November of the same year. The experiment compared the speed of light in perpendicular directions in an attempt to detect the relative motion of matter, including their laboratory, through the luminiferous aether, or "aether wind" as it was sometimes called. The result was negative, in that Michelson and Morley found no significant difference between the speed of light in the direction of movement through the presumed aether, and the speed at right angles. This result is generally considered to be the first strong evidence against some aether theories, as well as initiating a line of research that eventually led to special relativity, which rules out motion against an aether. Of this experiment, Albert Einstein wrote, "If the Michelson–Morley experiment had not brought us into serious embarrassment, no one would have regarded the relativity theory as a (halfway) redemption." Michelson–Morley type experiments have been repeated many times with steadily increasing sensitivity. These include experiments from 1902 to 1905, and a series of experiments in the 1920s. More recently, in 2009, optical resonator experiments confirmed the absence of any aether wind at the 10−17 level. Together with the Ives–Stilwell and Kennedy–Thorndike experiments, Michelson–Morley type experiments form one of the fundamental tests of special relativity. Detecting the aether Physics theories of the 19th century assumed that just as surface water waves must have a supporting substance, i.e., a "medium", to move across (in this case water), and audible sound requires a medium to transmit its wave motions (such as air or water), so light must also require a medium, the "luminiferous aether", to transmit its wave motions. Because light can travel through a vacuum, it was assumed that even a vacuum must be filled with aether. Because the speed of light is so great, and because material bodies pass through the aether without obvious friction or drag, it was assumed to have a highly unusual combination of properties. Designing experiments to investigate these properties was a high priority of 19th-century physics. Earth orbits around the Sun at a speed of around , or . The Earth is in motion, so two main possibilities were considered: (1) The aether is stationary and only partially dragged by Earth (proposed by Augustin-Jean Fresnel in 1818), or (2) the aether is completely dragged by Earth and thus shares its motion at Earth's surface (proposed by Sir George Stokes, 1st Baronet in 1844). In addition, James Clerk Maxwell (1865) recognized the electromagnetic nature of light and developed what are now called Maxwell's equations, but these equations were still interpreted as describing the motion of waves through an aether, whose state of motion was unknown. Eventually, Fresnel's idea of an (almost) stationary aether was preferred because it appeared to be confirmed by the Fizeau experiment (1851) and the aberration of star light. According to the stationary and the partially dragged aether hypotheses, Earth and the aether are in relative motion, implying that a so-called "aether wind" (Fig. 2) should exist. 
Although it would be theoretically possible for the Earth's motion to match that of the aether at one moment in time, it was not possible for the Earth to remain at rest with respect to the aether at all times, because of the variation in both the direction and the speed of the motion. At any given point on the Earth's surface, the magnitude and direction of the wind would vary with time of day and season. By analyzing the return speed of light in different directions at various different times, it was thought to be possible to measure the motion of the Earth relative to the aether. The expected relative difference in the measured speed of light was quite small, given that the velocity of the Earth in its orbit around the Sun has a magnitude of about one hundredth of one percent of the speed of light. During the mid-19th century, measurements of aether wind effects of first order, i.e., effects proportional to v/c (v being Earth's velocity, c the speed of light) were thought to be possible, but no direct measurement of the speed of light was possible with the accuracy required. For instance, the Fizeau wheel could measure the speed of light to perhaps 5% accuracy, which was quite inadequate for measuring directly a first-order 0.01% change in the speed of light. A number of physicists therefore attempted to make measurements of indirect first-order effects not of the speed of light itself, but of variations in the speed of light (see First order aether-drift experiments). The Hoek experiment, for example, was intended to detect interferometric fringe shifts due to speed differences of oppositely propagating light waves through water at rest. The results of such experiments were all negative. This could be explained by using Fresnel's dragging coefficient, according to which the aether and thus light are partially dragged by moving matter. Partial aether-dragging would thwart attempts to measure any first order change in the speed of light. As pointed out by Maxwell (1878), only experimental arrangements capable of measuring second order effects would have any hope of detecting aether drift, i.e., effects proportional to v2/c2. Existing experimental setups, however, were not sensitive enough to measure effects of that size. 1881 and 1887 experiments Michelson experiment (1881) Michelson had a solution to the problem of how to construct a device sufficiently accurate to detect aether flow. In 1877, while teaching at his alma mater, the United States Naval Academy in Annapolis, Michelson conducted his first known light speed experiments as a part of a classroom demonstration. In 1881, he left active U.S. Naval service while in Germany concluding his studies. In that year, Michelson used a prototype experimental device to make several more measurements. The device he designed, later known as a Michelson interferometer, sent yellow light from a sodium flame (for alignment), or white light (for the actual observations), through a half-silvered mirror that was used to split it into two beams traveling at right angles to one another. After leaving the splitter, the beams traveled out to the ends of long arms where they were reflected back into the middle by small mirrors. They then recombined on the far side of the splitter in an eyepiece, producing a pattern of constructive and destructive interference whose transverse displacement would depend on the relative time it takes light to transit the longitudinal vs. the transverse arms. 
If the Earth is traveling through an aether medium, a light beam traveling parallel to the flow of that aether will take longer to reflect back and forth than would a beam traveling perpendicular to the aether, because the increase in elapsed time from traveling against the aether wind is more than the time saved by traveling with the aether wind. Michelson expected that the Earth's motion would produce a fringe shift equal to 0.04 fringes—that is, of the separation between areas of the same intensity. He did not observe the expected shift; the greatest average deviation that he measured (in the northwest direction) was only 0.018 fringes; most of his measurements were much less. His conclusion was that Fresnel's hypothesis of a stationary aether with partial aether dragging would have to be rejected, and thus he confirmed Stokes' hypothesis of complete aether dragging. However, Alfred Potier (and later Hendrik Lorentz) pointed out to Michelson that he had made an error of calculation, and that the expected fringe shift should have been only 0.02 fringes. Michelson's apparatus was subject to experimental errors far too large to say anything conclusive about the aether wind. Definitive measurement of the aether wind would require an experiment with greater accuracy and better controls than the original. Nevertheless, the prototype was successful in demonstrating that the basic method was feasible. Michelson–Morley experiment (1887) In 1885, Michelson began a collaboration with Edward Morley, spending considerable time and money to confirm with higher accuracy Fizeau's 1851 experiment on Fresnel's drag coefficient, to improve on Michelson's 1881 experiment, and to establish the wavelength of light as a standard of length. At this time Michelson was professor of physics at the Case School of Applied Science, and Morley was professor of chemistry at Western Reserve University (WRU), which shared a campus with the Case School on the eastern edge of Cleveland. Michelson suffered a mental health crisis in September 1885, from which he recovered by October 1885. Morley ascribed this breakdown to the intense work of Michelson during the preparation of the experiments. In 1886, Michelson and Morley successfully confirmed Fresnel's drag coefficient – this result was also considered as a confirmation of the stationary aether concept. This result strengthened their hope of finding the aether wind. Michelson and Morley created an improved version of the Michelson experiment with more than enough accuracy to detect this hypothetical effect. The experiment was performed in several periods of concentrated observations between April and July 1887, in the basement of Adelbert Dormitory of WRU (later renamed Pierce Hall, demolished in 1962). As shown in the diagram to the right, the light was repeatedly reflected back and forth along the arms of the interferometer, increasing the path length to . At this length, the drift would be about 0.4 fringes. To make that easily detectable, the apparatus was assembled in a closed room in the basement of the heavy stone dormitory, eliminating most thermal and vibrational effects. Vibrations were further reduced by building the apparatus on top of a large block of sandstone (Fig. 1), about a foot thick and square, which was then floated in a circular trough of mercury. They estimated that effects of about 0.01 fringe would be detectable. 
Michelson and Morley and other early experimentalists using interferometric techniques in an attempt to measure the properties of the luminiferous aether, used (partially) monochromatic light only for initially setting up their equipment, always switching to white light for the actual measurements. The reason is that measurements were recorded visually. Purely monochromatic light would result in a uniform fringe pattern. Lacking modern means of environmental temperature control, experimentalists struggled with continual fringe drift even when the interferometer was set up in a basement. Because the fringes would occasionally disappear due to vibrations caused by passing horse traffic, distant thunderstorms and the like, an observer could easily "get lost" when the fringes returned to visibility. The advantages of white light, which produced a distinctive colored fringe pattern, far outweighed the difficulties of aligning the apparatus due to its low coherence length. As Dayton Miller wrote, "White light fringes were chosen for the observations because they consist of a small group of fringes having a central, sharply defined black fringe which forms a permanent zero reference mark for all readings." Use of partially monochromatic light (yellow sodium light) during initial alignment enabled the researchers to locate the position of equal path length, more or less easily, before switching to white light. The mercury trough allowed the device to turn with close to zero friction, so that once having given the sandstone block a single push it would slowly rotate through the entire range of possible angles to the "aether wind", while measurements were continuously observed by looking through the eyepiece. The hypothesis of aether drift implies that because one of the arms would inevitably turn into the direction of the wind at the same time that another arm was turning perpendicularly to the wind, an effect should be noticeable even over a period of minutes. The expectation was that the effect would be graphable as a sine wave with two peaks and two troughs per rotation of the device. This result could have been expected because during each full rotation, each arm would be parallel to the wind twice (facing into and away from the wind giving identical readings) and perpendicular to the wind twice. Additionally, due to the Earth's rotation, the wind would be expected to show periodic changes in direction and magnitude during the course of a sidereal day. Because of the motion of the Earth around the Sun, the measured data were also expected to show annual variations. Most famous "failed" experiment After all this thought and preparation, the experiment became what has been called the most famous failed experiment in history. Instead of providing insight into the properties of the aether, Michelson and Morley's article in the American Journal of Science reported the measurement to be as small as one-fortieth of the expected displacement (Fig. 7), but "since the displacement is proportional to the square of the velocity" they concluded that the measured velocity was "probably less than one-sixth" of the expected velocity of the Earth's motion in orbit and "certainly less than one-fourth". Although this small "velocity" was measured, it was considered far too small to be used as evidence of speed relative to the aether, and it was understood to be within the range of an experimental error that would allow the speed to actually be zero. 
For instance, Michelson wrote about the "decidedly negative result" in a letter to Lord Rayleigh in August 1887. From the standpoint of the then current aether models, the experimental results were conflicting. The Fizeau experiment and its 1886 repetition by Michelson and Morley apparently confirmed the stationary aether with partial aether dragging, and refuted complete aether dragging. On the other hand, the much more precise Michelson–Morley experiment (1887) apparently confirmed complete aether dragging and refuted the stationary aether. In addition, the Michelson–Morley null result was further substantiated by the null results of other second-order experiments of different kind, namely the Trouton–Noble experiment (1903) and the experiments of Rayleigh and Brace (1902–1904). These problems and their solution led to the development of the Lorentz transformation and special relativity. After the "failed" experiment Michelson and Morley ceased their aether drift measurements and started to use their newly developed technique to establish the wavelength of light as a standard of length. Light path analysis and consequences Observer resting in the aether The beam travel time in the longitudinal direction can be derived as follows: Light is sent from the source and propagates with the speed of light $c$ in the aether. It passes through the half-silvered mirror at the origin at $T = 0$. The reflecting mirror is at that moment at distance $L$ (the length of the interferometer arm) and is moving with velocity $v$. The beam hits the mirror at time $T_1$ and thus travels the distance $cT_1$. At this time, the mirror has traveled the distance $vT_1$. Thus $cT_1 = L + vT_1$ and consequently the travel time $T_1 = L/(c-v)$. The same consideration applies to the backward journey, with the sign of $v$ reversed, resulting in $cT_2 = L - vT_2$ and $T_2 = L/(c+v)$. The total travel time is:
$$T_\ell = T_1 + T_2 = \frac{L}{c-v} + \frac{L}{c+v} = \frac{2L}{c}\,\frac{1}{1-\frac{v^2}{c^2}}.$$
Michelson obtained this expression correctly in 1881; however, in the transverse direction he obtained the incorrect expression $T_t = 2L/c$, because he overlooked the increase in path length in the rest frame of the aether. This was corrected by Alfred Potier (1882) and Hendrik Lorentz (1886). The derivation in the transverse direction can be given as follows (analogous to the derivation of time dilation using a light clock): The beam is propagating at the speed of light $c$ and hits the mirror at time $T_3$, traveling the distance $cT_3$. At the same time, the mirror has traveled the distance $vT_3$ in the x direction. So in order to hit the mirror, the travel path of the beam is $L$ in the y direction (assuming equal-length arms) and $vT_3$ in the x direction. This inclined travel path follows from the transformation from the interferometer rest frame to the aether rest frame. Therefore, the Pythagorean theorem gives the actual beam travel distance of $\sqrt{L^2 + (vT_3)^2}$. Thus $cT_3 = \sqrt{L^2 + (vT_3)^2}$ and consequently the travel time $T_3 = L/\sqrt{c^2 - v^2}$, which is the same for the backward journey. The total travel time is:
$$T_t = 2T_3 = \frac{2L}{\sqrt{c^2-v^2}} = \frac{2L}{c}\,\frac{1}{\sqrt{1-\frac{v^2}{c^2}}}.$$
The time difference between $T_\ell$ and $T_t$ is given by
$$\Delta T = T_\ell - T_t = \frac{2L}{c}\left(\frac{1}{1-\frac{v^2}{c^2}} - \frac{1}{\sqrt{1-\frac{v^2}{c^2}}}\right).$$
To find the path difference, simply multiply by $c$:
$$\Delta\lambda_1 = c\,\Delta T = 2L\left(\frac{1}{1-\frac{v^2}{c^2}} - \frac{1}{\sqrt{1-\frac{v^2}{c^2}}}\right).$$
The path difference is denoted by $\Delta\lambda$ because the beams are out of phase by some number of wavelengths ($\lambda$). To visualise this, consider taking the two beam paths along the longitudinal and transverse plane and lying them straight (an animation of this is shown at minute 11:00, The Mechanical Universe, episode 41). One path will be longer than the other; this distance is $\Delta\lambda$. Alternatively, consider the rearrangement of the speed of light formula $c\,\Delta T = \Delta\lambda$.
If the relation $\frac{v^2}{c^2} \ll 1$ holds (that is, if the velocity of the aether is small relative to the speed of light), then the expression can be simplified using a first-order binomial expansion,
$$(1-x)^n \approx 1 - nx.$$
Rewriting the path difference in terms of powers of $1 - \frac{v^2}{c^2}$,
$$\Delta\lambda_1 = 2L\left(\left(1-\frac{v^2}{c^2}\right)^{-1} - \left(1-\frac{v^2}{c^2}\right)^{-1/2}\right),$$
and applying the binomial simplification,
$$\Delta\lambda_1 \approx 2L\left(\left(1+\frac{v^2}{c^2}\right) - \left(1+\frac{v^2}{2c^2}\right)\right).$$
Therefore,
$$\Delta\lambda_1 \approx \frac{Lv^2}{c^2}.$$
It can be seen from this derivation that aether wind manifests as a path difference. The path difference is at a maximum when one arm of the interferometer is aligned with the aether wind (and the other is perpendicular to it), and it vanishes when the apparatus is turned 45° to the wind, since both beams are then affected equally. The path difference can be any fraction of the wavelength, depending on the angle and speed of the aether wind. To prove the existence of the aether, Michelson and Morley sought to find the "fringe shift". The idea was simple: the fringes of the interference pattern should shift when rotating it by 90°, as the two beams have exchanged roles. To find the fringe shift, subtract the path difference in the second orientation from the path difference in the first, then divide by the wavelength $\lambda$ of light:
$$n = \frac{\Delta\lambda_1 - \Delta\lambda_2}{\lambda} \approx \frac{2Lv^2}{\lambda c^2}.$$
Note the difference between $\Delta\lambda$, which is some number of wavelengths, and $\lambda$, which is a single wavelength. As can be seen from this relation, the fringe shift n is a unitless quantity. Since L ≈ 11 meters and λ ≈ 500 nanometers, the expected fringe shift was n ≈ 0.44. The negative result led Michelson to the conclusion that there is no measurable aether drift. However, he never accepted this on a personal level, and the negative result haunted him for the rest of his life. Observer comoving with the interferometer If the same situation is described from the view of an observer co-moving with the interferometer, then the effect of aether wind is similar to the effect experienced by a swimmer, who tries to move with velocity $c$ against a river flowing with velocity $v$. In the longitudinal direction the swimmer first moves upstream, so his velocity is diminished due to the river flow to $c - v$. On his way back moving downstream, his velocity is increased to $c + v$. This gives the beam travel times $T_1$ and $T_2$ as mentioned above. In the transverse direction, the swimmer has to compensate for the river flow by moving at a certain angle against the flow direction, in order to sustain his exact transverse direction of motion and to reach the other side of the river at the correct location. This diminishes his speed to $\sqrt{c^2 - v^2}$, and gives the beam travel time $T_3$ as mentioned above. Mirror reflection The classical analysis predicted a relative phase shift between the longitudinal and transverse beams which in Michelson and Morley's apparatus should have been readily measurable. What is not often appreciated (since there was no means of measuring it) is that motion through the hypothetical aether should also have caused the two beams to diverge as they emerged from the interferometer by about $10^{-8}$ radians. For an apparatus in motion, the classical analysis requires that the beam-splitting mirror be slightly offset from an exact 45° if the longitudinal and transverse beams are to emerge from the apparatus exactly superimposed. In the relativistic analysis, Lorentz-contraction of the beam splitter in the direction of motion causes it to become more perpendicular by precisely the amount necessary to compensate for the angle discrepancy of the two beams.
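To make these numbers concrete, the short Python sketch below evaluates the exact classical travel times and the resulting fringe shift for the parameters quoted above (L ≈ 11 m, λ ≈ 500 nm) with an assumed aether-wind speed equal to the Earth's orbital speed of about 30 km/s; it is an illustrative calculation, not material from the original papers.

```python
import math

c = 299_792_458.0   # speed of light (m/s)
v = 30_000.0        # assumed aether-wind speed: Earth's orbital speed (m/s)
L = 11.0            # effective arm length of the 1887 apparatus (m)
lam = 500e-9        # wavelength of the light (m)

beta2 = (v / c) ** 2

# Exact classical travel times for the two arms (observer at rest in the aether)
T_longitudinal = (2 * L / c) / (1 - beta2)
T_transverse = (2 * L / c) / math.sqrt(1 - beta2)

# Path difference in one orientation, and the fringe shift on a 90-degree rotation
delta_lambda = c * (T_longitudinal - T_transverse)  # approximately L * v**2 / c**2
n = 2 * delta_lambda / lam                          # approximately 2 * L * v**2 / (lam * c**2)

print(f"path difference = {delta_lambda:.3e} m")    # ~1.1e-7 m
print(f"fringe shift n  = {n:.2f}")                 # ~0.44, as quoted above
```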
Length contraction and Lorentz transformation A first step to explaining the Michelson and Morley experiment's null result was found in the FitzGerald–Lorentz contraction hypothesis, now simply called length contraction or Lorentz contraction, first proposed by George FitzGerald (1889) in a letter to the same journal that published the Michelson–Morley paper, as "almost the only hypothesis that can reconcile" the apparent contradictions. It was independently also proposed by Hendrik Lorentz (1892). According to this law all objects physically contract by the factor $\sqrt{1-\frac{v^2}{c^2}} = 1/\gamma$ along the line of motion (originally thought to be relative to the aether), $\gamma = 1/\sqrt{1-\frac{v^2}{c^2}}$ being the Lorentz factor. This hypothesis was partly motivated by Oliver Heaviside's discovery in 1888 that electrostatic fields are contracting in the line of motion. But since there was no reason at that time to assume that binding forces in matter are of electric origin, length contraction of matter in motion with respect to the aether was considered an ad hoc hypothesis. If the length contraction of $L$ to $L\sqrt{1-\frac{v^2}{c^2}}$ is inserted into the above formula for $T_\ell$, then the light propagation time in the longitudinal direction becomes equal to that in the transverse direction:
$$T_\ell = \frac{2L\sqrt{1-\frac{v^2}{c^2}}}{c}\,\frac{1}{1-\frac{v^2}{c^2}} = \frac{2L}{c}\,\frac{1}{\sqrt{1-\frac{v^2}{c^2}}} = T_t.$$
However, length contraction is only a special case of the more general relation, according to which the transverse length is larger than the longitudinal length by the ratio $\gamma$. This can be achieved in many ways. If $L_1$ is the moving longitudinal length and $L_2$ the moving transverse length, $L'_1 = L'_2$ being the rest lengths, then it is given:
$$L_1 = \frac{L'_1}{\gamma\varphi}, \qquad L_2 = \frac{L'_2}{\varphi}, \qquad \frac{L_2}{L_1} = \gamma.$$
The factor $\varphi$ can be arbitrarily chosen, so there are infinitely many combinations to explain the Michelson–Morley null result. For instance, if $\varphi = 1$ the relativistic value of length contraction of $L_1$ occurs, but if $\varphi = 1/\gamma$ then no length contraction but an elongation of $L_2$ occurs. This hypothesis was later extended by Joseph Larmor (1897), Lorentz (1904) and Henri Poincaré (1905), who developed the complete Lorentz transformation including time dilation in order to explain the Trouton–Noble experiment, the Experiments of Rayleigh and Brace, and Kaufmann's experiments. It has the form
$$x' = \gamma\varphi\,(x - vt), \qquad y' = \varphi\, y, \qquad z' = \varphi\, z, \qquad t' = \gamma\varphi\left(t - \frac{vx}{c^2}\right).$$
It remained to define the value of $\varphi$, which was shown by Lorentz (1904) to be unity. In general, Poincaré (1905) demonstrated that only $\varphi = 1$ allows this transformation to form a group, so it is the only choice compatible with the principle of relativity, i.e., making the stationary aether undetectable. Given this, length contraction and time dilation obtain their exact relativistic values. Special relativity Albert Einstein formulated the theory of special relativity by 1905, deriving the Lorentz transformation and thus length contraction and time dilation from the relativity postulate and the constancy of the speed of light, thus removing the ad hoc character from the contraction hypothesis. Einstein emphasized the kinematic foundation of the theory and the modification of the notion of space and time, with the stationary aether no longer playing any role in his theory. He also pointed out the group character of the transformation. Einstein was motivated by Maxwell's theory of electromagnetism (in the form as it was given by Lorentz in 1895) and the lack of evidence for the luminiferous aether. This allows a more elegant and intuitive explanation of the Michelson–Morley null result. In a comoving frame the null result is self-evident, since the apparatus can be considered as at rest in accordance with the relativity principle, thus the beam travel times are the same.
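A small numeric check of the contraction hypothesis (an illustrative sketch reusing the same assumed values as before, not historical material): contracting the longitudinal arm by the factor sqrt(1 − v²/c²) makes the two classical travel times agree.

```python
import math

c = 299_792_458.0   # speed of light (m/s)
v = 30_000.0        # assumed speed through the aether (m/s)
L = 11.0            # rest length of each arm (m)

gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
beta2 = (v / c) ** 2

# Classical travel times, with the longitudinal arm contracted to L / gamma
T_longitudinal = (2 * (L / gamma) / c) / (1 - beta2)
T_transverse = (2 * L / c) / math.sqrt(1 - beta2)

print(T_longitudinal - T_transverse)   # ~0: no fringe shift is predicted
```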
In a frame relative to which the apparatus is moving, the same reasoning applies as described above in "Length contraction and Lorentz transformation", except the word "aether" has to be replaced by "non-comoving inertial frame". Einstein also wrote about the experiment in 1916. The extent to which the null result of the Michelson–Morley experiment influenced Einstein is disputed. Alluding to some statements of Einstein, many historians argue that it played no significant role in his path to special relativity, while other statements of Einstein suggest that he was influenced by it. In any case, the null result of the Michelson–Morley experiment helped the notion of the constancy of the speed of light gain widespread and rapid acceptance. It was later shown by Howard Percy Robertson (1949) and others (see Robertson–Mansouri–Sexl test theory) that it is possible to derive the Lorentz transformation entirely from the combination of three experiments. First, the Michelson–Morley experiment showed that the speed of light is independent of the orientation of the apparatus, establishing the relationship between longitudinal (β) and transverse (δ) lengths. Then in 1932, Roy Kennedy and Edward Thorndike modified the Michelson–Morley experiment by making the path lengths of the split beam unequal, with one arm being very short. The Kennedy–Thorndike experiment took place for many months as the Earth moved around the Sun. Their negative result showed that the speed of light is independent of the velocity of the apparatus in different inertial frames. In addition it established that besides length changes, corresponding time changes must also occur, i.e., it established the relationship between longitudinal lengths (β) and time changes (α). Thus, neither experiment on its own provides the individual values of these quantities. This uncertainty corresponds to the undefined factor $\varphi$ described above. It was clear due to theoretical reasons (the group character of the Lorentz transformation as required by the relativity principle) that the individual values of length contraction and time dilation must assume their exact relativistic form. But a direct measurement of one of these quantities was still desirable to confirm the theoretical results. This was achieved by the Ives–Stilwell experiment (1938), measuring α in accordance with time dilation. Combining this value for α with the Kennedy–Thorndike null result shows that β must assume the value of relativistic length contraction. Combining β with the Michelson–Morley null result shows that δ must be zero. Therefore, the complete Lorentz transformation, with $\varphi = 1$, is an unavoidable consequence of the combination of these three experiments. Special relativity is generally considered the solution to all negative aether drift (or isotropy of the speed of light) measurements, including the Michelson–Morley null result. Many high precision measurements have been conducted as tests of special relativity and modern searches for Lorentz violation in the photon, electron, nucleon, or neutrino sector, all of them confirming relativity. Incorrect alternatives As mentioned above, Michelson initially believed that his experiment would confirm Stokes' theory, according to which the aether was fully dragged in the vicinity of the earth (see Aether drag hypothesis). However, complete aether drag contradicts the observed aberration of light and was contradicted by other experiments as well. In addition, Lorentz showed in 1886 that Stokes's attempt to explain aberration is contradictory.
Furthermore, the assumption that the aether is not carried in the vicinity, but only within matter, was very problematic as shown by the Hammar experiment (1935). Hammar directed one leg of his interferometer through a heavy metal pipe plugged with lead. If aether were dragged by mass, it was theorized that the mass of the sealed metal pipe would have been enough to cause a visible effect. Once again, no effect was seen, so aether-drag theories are considered to be disproven. Walther Ritz's emission theory (or ballistic theory) was also consistent with the results of the experiment, not requiring aether. The theory postulates that light always has the same velocity with respect to the source. However, de Sitter noted that emitter theory predicted several optical effects that were not seen in observations of binary stars in which the light from the two stars could be measured in a spectrometer. If emission theory were correct, the light from the stars should experience unusual fringe shifting due to the velocity of the stars being added to the speed of the light, but no such effect could be seen. It was later shown by J. G. Fox that the original de Sitter experiments were flawed due to extinction, but in 1977 Brecher observed X-rays from binary star systems with similar null results. Furthermore, Filippas and Fox (1964) conducted terrestrial particle accelerator tests specifically designed to address Fox's earlier "extinction" objection, the results being inconsistent with source dependence of the speed of light. Subsequent experiments Although Michelson and Morley went on to different experiments after their first publication in 1887, both remained active in the field. Other versions of the experiment were carried out with increasing sophistication. Morley was not convinced of his own results, and went on to conduct additional experiments with Dayton Miller from 1902 to 1904. Again, the result was negative within the margins of error. Miller worked on increasingly larger interferometers, culminating in one with a 32 m (effective) arm length that he tried at various sites, including on top of a mountain at the Mount Wilson Observatory. To avoid the possibility of the aether wind being blocked by solid walls, his mountaintop observations used a special shed with thin walls, mainly of canvas. From noisy, irregular data, he consistently extracted a small positive signal that varied with each rotation of the device, with the sidereal day, and on a yearly basis. His measurements in the 1920s amounted to approximately 10 km/s instead of the nearly 30 km/s expected from the Earth's orbital motion alone. He remained convinced this was due to partial entrainment or aether dragging, though he did not attempt a detailed explanation. He ignored critiques demonstrating the inconsistency of his results and the refutation by the Hammar experiment. Miller's findings were considered important at the time, and were discussed by Michelson, Lorentz and others at a meeting reported in 1928. There was general agreement that more experimentation was needed to check Miller's results. Miller later built a non-magnetic device to eliminate magnetostriction, while Michelson built one of non-expanding Invar to eliminate any remaining thermal effects. Other experimenters from around the world increased accuracy, eliminated possible side effects, or both. So far, no one has been able to replicate Miller's results, and modern experimental accuracies have ruled them out.
Roberts (2006) has pointed out that the primitive data reduction techniques used by Miller and other early experimenters, including Michelson and Morley, were capable of creating apparent periodic signals even when none existed in the actual data. After reanalyzing Miller's original data using modern techniques of quantitative error analysis, Roberts found Miller's apparent signals to be statistically insignificant. Using a special optical arrangement involving a 1/20 wave step in one mirror, Roy J. Kennedy (1926) and K.K. Illingworth (1927) (Fig. 8) converted the task of detecting fringe shifts from the relatively insensitive one of estimating their lateral displacements to the considerably more sensitive task of adjusting the light intensity on both sides of a sharp boundary for equal luminance. If they observed unequal illumination on either side of the step, such as in Fig. 8e, they would add or remove calibrated weights from the interferometer until both sides of the step were once again evenly illuminated, as in Fig. 8d. The number of weights added or removed provided a measure of the fringe shift. Different observers could detect changes as little as 1/1500 to 1/300 of a fringe. Kennedy also carried out an experiment at Mount Wilson, finding only about 1/10 the drift measured by Miller and no seasonal effects. In 1930, Georg Joos conducted an experiment using an automated interferometer with arms forged from pressed quartz having a very low coefficient of thermal expansion, which took continuous photographic strip recordings of the fringes through dozens of revolutions of the apparatus. Displacements of 1/1000 of a fringe could be measured on the photographic plates. No periodic fringe displacements were found, placing an upper limit on the aether wind of about 1.5 km/s. The expected values in these experiments are related to the relative speed between the Earth and the Sun of about 30 km/s. With respect to the speed of the solar system around the galactic center of about 220 km/s, or the speed of the solar system relative to the CMB rest frame of about 370 km/s, the null results of those experiments are even more obvious. Recent experiments Optical tests Optical tests of the isotropy of the speed of light became commonplace. New technologies, including the use of lasers and masers, have significantly improved measurement precision. (Of the later optical experiments, only Essen (1955), Jaseja (1964), and Shamir/Fox (1969) were experiments of Michelson–Morley type, i.e., comparing two perpendicular beams; the other optical experiments employed different methods.) Recent optical resonator experiments During the early 21st century, there has been a resurgence in interest in performing precise Michelson–Morley type experiments using lasers, masers, cryogenic optical resonators, etc. This is in large part due to predictions of quantum gravity that suggest that special relativity may be violated at scales accessible to experimental study. The first of these highly accurate experiments was conducted by Brillet & Hall (1979), in which they analyzed a laser frequency stabilized to a resonance of a rotating optical Fabry–Pérot cavity. They set a limit on the anisotropy of the speed of light resulting from the Earth's motions of Δc/c ≈ $10^{-15}$, where Δc is the difference between the speed of light in the x- and y-directions. As of 2015, optical and microwave resonator experiments have improved this limit to Δc/c ≈ $10^{-18}$. In some of them, the devices were rotated or remained stationary, and some were combined with the Kennedy–Thorndike experiment.
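The fringe-shift relation derived earlier can be inverted to show, at least roughly, how a given fringe sensitivity bounds the speed of any aether wind. The short Python sketch below is only illustrative: it reuses the 11 m arm length of the 1887 apparatus as a stand-in, whereas the instruments of Kennedy, Illingworth and Joos differed in their details.

```python
import math

c = 299_792_458.0   # speed of light (m/s)
L = 11.0            # arm length used as a stand-in (m); the later instruments differed
lam = 500e-9        # wavelength of the light (m)

def wind_bound(min_detectable_fringe_shift: float) -> float:
    """Invert n = 2*L*v**2 / (lam*c**2) for v: the largest wind speed whose
    fringe shift would still fall below the detection threshold."""
    return c * math.sqrt(min_detectable_fringe_shift * lam / (2 * L))

for dn in (0.01, 1 / 1000, 1 / 1500):
    print(f"sensitivity of {dn:.5f} fringe -> wind below ~{wind_bound(dn) / 1000:.1f} km/s")
```

A sensitivity of a thousandth of a fringe over an 11 m path corresponds to a bound of roughly 1.4 km/s, the same order as the kilometre-per-second limit quoted above for Joos's measurement.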
In these modern searches for anisotropies, Earth's direction and velocity (ca. 370 km/s) relative to the CMB rest frame are ordinarily used as references. Other tests of Lorentz invariance Examples of other experiments not based on the Michelson–Morley principle, i.e., non-optical isotropy tests achieving an even higher level of precision, are Clock comparison or Hughes–Drever experiments. In Drever's 1961 experiment, ⁷Li nuclei in the ground state, which has total angular momentum J = 3/2, were split into four equally spaced levels by a magnetic field. Each transition between a pair of adjacent levels should emit a photon of equal frequency, resulting in a single, sharp spectral line. However, since the nuclear wave functions for different values of $M_J$ have different orientations in space relative to the magnetic field, any orientation dependence, whether from an aether wind or from a dependence on the large-scale distribution of mass in space (see Mach's principle), would perturb the energy spacings between the four levels, resulting in an anomalous broadening or splitting of the line. No such broadening was observed. Modern repeats of this kind of experiment have provided some of the most accurate confirmations of the principle of Lorentz invariance. See also Michelson–Morley Award Moving magnet and conductor problem The Light (Glass) LIGO References Notes Experiments Bibliography (Series "A" references) External links Physics experiments Aether theories Case Western Reserve University Tests of special relativity 1887 in science
Inertial navigation system
An inertial navigation system (INS; also inertial guidance system, inertial instrument) is a navigation device that uses motion sensors (accelerometers), rotation sensors (gyroscopes) and a computer to continuously calculate by dead reckoning the position, the orientation, and the velocity (direction and speed of movement) of a moving object without the need for external references. Often the inertial sensors are supplemented by a barometric altimeter and sometimes by magnetic sensors (magnetometers) and/or speed measuring devices. INSs are used on mobile robots and on vehicles such as ships, aircraft, submarines, guided missiles, and spacecraft. Older INS systems generally used an inertial platform as their mounting point to the vehicle and the terms are sometimes considered synonymous. Integrals in the time domain implicitly demand a stable and accurate clock for the quantification of elapsed time. Design Inertial navigation is a self-contained navigation technique in which measurements provided by accelerometers and gyroscopes are used to track the position and orientation of an object relative to a known starting point, orientation and velocity. Inertial measurement units (IMUs) typically contain three orthogonal rate-gyroscopes and three orthogonal accelerometers, measuring angular velocity and linear acceleration respectively. By processing signals from these devices it is possible to track the position and orientation of a device. An inertial navigation system includes at least a computer and a platform or module containing accelerometers, gyroscopes, or other motion-sensing devices. The INS is initially provided with its position and velocity from another source (a human operator, a GPS satellite receiver, etc.), along with the initial orientation, and thereafter computes its own updated position and velocity by integrating information received from the motion sensors. The advantage of an INS is that it requires no external references in order to determine its position, orientation, or velocity once it has been initialized. An INS can detect a change in its geographic position (a move east or north, for example), a change in its velocity (speed and direction of movement) and a change in its orientation (rotation about an axis). It does this by measuring the linear acceleration and angular velocity applied to the system. Since it requires no external reference (after initialization), it is immune to jamming and deception. Gyroscopes measure the angular velocity of the sensor frame with respect to the inertial reference frame. By using the original orientation of the system in the inertial reference frame as the initial condition and integrating the angular velocity, the system's current orientation is known at all times. This can be thought of as the ability of a blindfolded passenger in a car to feel the car turn left and right or tilt up and down as the car ascends or descends hills. Based on this information alone, the passenger knows what direction the car is facing, but not how fast or slow it is moving, or whether it is sliding sideways. Accelerometers measure the linear acceleration of the moving vehicle in the sensor or body frame, but in directions that can only be measured relative to the moving system (since the accelerometers are fixed to the system and rotate with the system, but are not aware of their own orientation).
This can be thought of as the ability of a blindfolded passenger in a car to feel themself pressed back into their seat as the vehicle accelerates forward or pulled forward as it slows down; and feel themself pressed down into their seat as the vehicle accelerates up a hill or rise up out of their seat as the car passes over the crest of a hill and begins to descend. Based on this information alone, they know how the vehicle is accelerating relative to itself; that is, whether it is accelerating forward, backward, left, right, up (toward the car's ceiling), or down (toward the car's floor), measured relative to the car, but not the direction relative to the Earth, since they did not know what direction the car was facing relative to the Earth when they felt the accelerations. However, by tracking both the current angular velocity of the system and the current linear acceleration of the system measured relative to the moving system, it is possible to determine the linear acceleration of the system in the inertial reference frame. Performing integration on the inertial accelerations (using the original velocity as the initial conditions) using the correct kinematic equations yields the inertial velocities of the system and integration again (using the original position as the initial condition) yields the inertial position. In our example, if the blindfolded passenger knew how the car was pointed and what its velocity was before they were blindfolded, and if they were able to keep track of both how the car has turned and how it has accelerated and decelerated since, then they can accurately know the current orientation, position, and velocity of the car at any time. Uses Inertial navigation is used in a wide range of applications including the navigation of aircraft, tactical and strategic missiles, spacecraft, submarines and ships. It is also embedded in some mobile phones for purposes of mobile phone location and tracking. Recent advances in the construction of microelectromechanical systems (MEMS) have made it possible to manufacture small and light inertial navigation systems. These advances have widened the range of possible applications to include areas such as human and animal motion capture. Inertial navigation systems are used in many different moving objects. However, their cost and complexity place constraints on the environments in which they are practical for use. To support the best use of inertial technology, a technical working group for inertial sensors was established in Germany as early as 1965 to bring together the users, manufacturers and researchers of inertial sensors. This working group has developed continuously and is today known as the DGON ISA (Inertial Sensors and Application) Symposium, a leading conference for inertial technologies for more than 60 years. The DGON / IEEE ISA symposium, with about 200 international attendees, is held annually in October in Germany, and the publications of all past DGON ISA conferences are accessible. Drift rate All inertial navigation systems suffer from integration drift: small errors in the measurement of acceleration and angular velocity are integrated into progressively larger errors in velocity, which are compounded into still greater errors in position. Since the new position is calculated from the previous calculated position and the measured acceleration and angular velocity, these errors accumulate roughly proportionally to the time since the initial position was input.
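The following Python sketch illustrates this last point by double-integrating a small, constant accelerometer bias; the bias value, time step and duration are assumptions chosen only to show how quickly the position error grows (a figure of the same order is quoted in the next paragraph).

```python
# Illustrative sketch of integration drift, not a model of any particular INS.
g = 9.80665                 # standard gravity (m/s^2)
bias = 10e-6 * g            # assumed constant accelerometer bias of 10 micro-g (m/s^2)
dt = 0.01                   # integration time step (s)

velocity_error = 0.0
position_error = 0.0
for _ in range(round(17 * 60 / dt)):        # integrate for 17 minutes
    velocity_error += bias * dt             # first integration: velocity error grows linearly
    position_error += velocity_error * dt   # second integration: position error grows quadratically

print(f"position error after 17 minutes: {position_error:.0f} m")   # on the order of 50 m
```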
Even the best accelerometers, with a standard error of 10 micro-g, would accumulate a 50-meter (164-ft) error within 17 minutes. Therefore, the position must be periodically corrected by input from some other type of navigation system. Accordingly, inertial navigation is usually used to supplement other navigation systems, providing a higher degree of accuracy than is possible with the use of any single system. For example, if, in terrestrial use, the inertially tracked velocity is intermittently updated to zero by stopping, the position will remain precise for a much longer time, a so-called zero velocity update. In aerospace particularly, other measurement systems are used to determine INS inaccuracies, e.g., the Honeywell LaseRefV inertial navigation system uses GPS and air data computer outputs to maintain required navigation performance. The navigation error rises with the lower sensitivity of the sensors used. Currently, devices combining different sensors are being developed, e.g., attitude and heading reference systems. Because the navigation error is mainly influenced by the numerical integration of angular rates and accelerations, the pressure reference system was developed to use only one numerical integration of the angular rate measurements. Estimation theory in general, and Kalman filtering in particular, provide a theoretical framework for combining information from various sensors. One of the most common alternative sensors is a satellite navigation radio such as GPS, which can be used for all kinds of vehicles with direct sky visibility. Indoor applications can use pedometers, distance measurement equipment, or other kinds of position sensors. By properly combining the information from an INS and other systems (GPS), the errors in position and velocity remain bounded. Furthermore, INS can be used as a short-term fallback while GPS signals are unavailable, for example when a vehicle passes through a tunnel. In 2011, GPS jamming at the civilian level became a governmental concern. The relative ease with which these systems can be jammed has motivated the military to reduce navigation dependence on GPS technology. Because inertial navigation sensors, unlike GPS, do not depend on radio signals, they cannot be jammed. In 2012, the U.S. Army Research Laboratory reported a method to merge measurements from 10 pairs of MEMS gyroscopes and accelerometers (plus occasional GPS), reducing the positional error by two-thirds for a projectile. The algorithm can correct for systemic biases in individual sensors, using both GPS and a heuristic based on the gun-firing acceleration force. If one sensor consistently over- or underestimates distance, the system can adjust the corrupted sensor's contributions to the final calculation. History Inertial navigation systems were originally developed for rockets. American rocketry pioneer Robert Goddard experimented with rudimentary gyroscopic systems. Goddard's systems were of great interest to contemporary German pioneers including Wernher von Braun. The systems entered more widespread use with the advent of spacecraft, guided missiles, and commercial airliners. Early German World War II V2 guidance systems combined two gyroscopes and a lateral accelerometer with a simple analog computer to adjust the azimuth for the rocket in flight. Analog computer signals were used to drive four graphite rudders in the rocket exhaust for flight control.
The GN&C (Guidance, Navigation, and Control) system for the V2 provided many innovations as an integrated platform with closed loop guidance. At the end of the war von Braun engineered the surrender of 500 of his top rocket scientists, along with plans and test vehicles, to the Americans. They arrived at Fort Bliss, Texas in 1945 under the provisions of Operation Paperclip and were subsequently moved to Huntsville, Alabama, in 1950 where they worked for U.S. Army rocket research programs. In the early 1950s, the US government wanted to insulate itself against over-dependency on the German team for military applications, including the development of a fully domestic missile guidance program. The MIT Instrumentation Laboratory (later to become the Charles Stark Draper Laboratory, Inc.) was chosen by the Air Force Western Development Division to provide a self-contained guidance system backup to Convair in San Diego for the new Atlas intercontinental ballistic missile (construction and testing were completed by the Arma Division of American Bosch Arma). The technical monitor for the MIT task was engineer Jim Fletcher, who later served as NASA Administrator. The Atlas guidance system was to be a combination of an on-board autonomous system and a ground-based tracking and command system. The self-contained system finally prevailed in ballistic missile applications for obvious reasons. In space exploration, a mixture of the two remains. In the summer of 1952, Dr. Richard Battin and Dr. J. Halcombe "Hal" Laning, Jr., researched computationally based solutions to guidance and undertook the initial analytical work on the Atlas inertial guidance in 1954. Other key figures at Convair were Charlie Bossart, the Chief Engineer, and Walter Schweidetzky, head of the guidance group. Schweidetzky had worked with von Braun at Peenemünde during World War II. The initial Delta guidance system assessed the difference in position from a reference trajectory. A velocity-to-be-gained (VGO) calculation was made to correct the current trajectory, with the objective of driving VGO to zero. The mathematics of this approach were fundamentally valid, but the approach was dropped because of the challenges of accurate inertial guidance and limited analog computing power. The challenges faced by the Delta efforts were overcome by the Q system (see Q-guidance) of guidance. The Q system's revolution was to bind the challenges of missile guidance (and associated equations of motion) in the matrix Q. The Q matrix represents the partial derivatives of the velocity with respect to the position vector. A key feature of this approach allowed the components of the vector cross product (v × dv/dt) to be used as the basic autopilot rate signals, a technique that became known as cross-product steering. The Q-system was presented at the first Technical Symposium on Ballistic Missiles held at the Ramo-Wooldridge Corporation in Los Angeles on 21 and 22 June 1956. The Q system was classified information through the 1960s. Derivations of this guidance are used for today's missiles. Guidance in human spaceflight In February 1961 NASA awarded MIT a contract for preliminary design study of a guidance and navigation system for the Apollo program. MIT and the Delco Electronics Div. of General Motors Corp. were awarded the joint contract for design and production of the Apollo Guidance and Navigation systems for the Command Module and the Lunar Module. Delco produced the IMUs (Inertial Measurement Units) for these systems, Kollsman Instrument Corp.
produced the Optical Systems, and the Apollo Guidance Computer was built by Raytheon under subcontract. For the Space Shuttle, open loop guidance was used to guide the Shuttle from lift-off until Solid Rocket Booster (SRB) separation. After SRB separation the primary Space Shuttle guidance is named PEG (Powered Explicit Guidance). PEG takes into account both the Q system and the predictor-corrector attributes of the original "Delta" System (PEG Guidance). Although many updates to the Shuttle's navigation system had taken place over the last 30 years (e.g., GPS in the OI-22 build), the guidance core of the Shuttle GN&C system had evolved little. Within a crewed system, a human interface is needed for the guidance system. As astronauts are the users of the system, many new teams were formed that touch GN&C, since it is a primary interface for "flying" the vehicle. Early use in aircraft inertial guidance One example of a popular INS for commercial aircraft was the Delco Carousel, which provided partial automation of navigation in the days before complete flight management systems became commonplace. The Carousel allowed pilots to enter 9 waypoints at a time and then guided the aircraft from one waypoint to the next using an INS to determine aircraft position and velocity. Boeing Corporation subcontracted the Delco Electronics Div. of General Motors to design and build the first production Carousel systems for the early models (-100, -200 and -300) of the 747 aircraft. The 747 utilized three Carousel systems operating in concert for reliability purposes. The Carousel system and derivatives thereof were subsequently adopted for use in many other commercial and military aircraft. The USAF C-141 was the first military aircraft to utilize the Carousel in a dual system configuration, followed by the C-5A which utilized the triple INS configuration, similar to the 747. The KC-135A fleet was fitted with a single Carousel IV-E system that could operate as a stand-alone INS or be aided by the AN/APN-81 or AN/APN-218 Doppler radar. Some special-mission variants of the C-135 were fitted with dual Carousel IV-E INSs. ARINC Characteristic 704 defines the INS used in commercial air transport. Details INSs contain Inertial Measurement Units (IMUs) which have angular and linear accelerometers (for changes in position); some IMUs include a gyroscopic element (for maintaining an absolute angular reference). Angular accelerometers measure how the vehicle is rotating in space. Generally, there is at least one sensor for each of the three axes: pitch (nose up and down), yaw (nose left and right) and roll (clockwise or counter-clockwise from the cockpit). Linear accelerometers measure non-gravitational accelerations of the vehicle. Since it can move in three axes (up and down, left and right, forward and back), there is a linear accelerometer for each axis. A computer continually calculates the vehicle's current position. First, for each of the six degrees of freedom (x, y, z and θx, θy and θz), it integrates over time the sensed acceleration, together with an estimate of gravity, to calculate the current velocity. Then it integrates the velocity to calculate the current position. Inertial guidance is difficult without computers. The desire to use inertial guidance in the Minuteman missile and Project Apollo drove early attempts to miniaturize computers. Inertial guidance systems are now usually combined with satellite navigation systems through a digital filtering system.
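As a hedged illustration of that combination, the toy one-dimensional Python sketch below blends a drifting inertial position estimate with occasional noisy satellite fixes using a fixed gain; real systems use multi-state Kalman filters, so all of the numbers and the blending scheme here are assumptions chosen only to show the principle.

```python
import random

random.seed(0)
dt = 0.1                       # time step (s)
true_pos, true_vel = 0.0, 5.0  # assumed true motion: constant 5 m/s along one axis
accel_bias = 0.002             # assumed small accelerometer bias (m/s^2)
blend = 0.2                    # fixed blending gain (a real system would use a Kalman gain)

vel_est = true_vel             # the INS is initialized with the correct velocity
ins_pos = 0.0
for step in range(1, 601):     # simulate 60 seconds
    true_pos += true_vel * dt
    vel_est += accel_bias * dt        # INS propagation: biased acceleration -> velocity
    ins_pos += vel_est * dt           # velocity -> position
    if step % 10 == 0:                # once per second, a noisy satellite fix arrives
        gps_fix = true_pos + random.gauss(0.0, 3.0)
        ins_pos += blend * (gps_fix - ins_pos)

print(f"position error after 60 s with fusion: {abs(ins_pos - true_pos):.1f} m")
```

Between fixes the estimate follows the smooth inertial solution; each fix pulls the slowly drifting estimate back toward the satellite measurement, which is the division of labour described next.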
The inertial system provides short term data, while the satellite system corrects accumulated errors of the inertial system. An inertial guidance system that will operate near the surface of the earth must incorporate Schuler tuning so that its platform will continue pointing towards the center of the Earth as a vehicle moves from place to place. Basic schemes Gimballed gyrostabilized platforms Some systems place the linear accelerometers on a gimballed gyrostabilized platform. The gimbals are a set of three rings, each with a pair of bearings initially at right angles. They let the platform twist about any rotational axis (or, rather, they let the platform keep the same orientation while the vehicle rotates around it). There are two gyroscopes (usually) on the platform. Two gyroscopes are used to cancel gyroscopic precession, the tendency of a gyroscope to twist at right angles to an input torque. By mounting a pair of gyroscopes (of the same rotational inertia and spinning at the same speed in opposite directions) at right angles the precessions are cancelled and the platform will resist twisting. This system allows a vehicle's roll, pitch and yaw angles to be measured directly at the bearings of the gimbals. Relatively simple electronic circuits can be used to add up the linear accelerations, because the directions of the linear accelerometers do not change. The big disadvantage of this scheme is that it uses many expensive precision mechanical parts. It also has moving parts that can wear out or jam and is vulnerable to gimbal lock. The primary guidance system of the Apollo spacecraft used a three-axis gyrostabilized platform, feeding data to the Apollo Guidance Computer. Maneuvers had to be carefully planned to avoid gimbal lock. Fluid-suspended gyrostabilized platforms Gimbal lock constrains maneuvering and it would be beneficial to eliminate the slip rings and bearings of the gimbals. Therefore, some systems use fluid bearings or a flotation chamber to mount a gyrostabilized platform. These systems can have very high precisions (e.g., Advanced Inertial Reference Sphere). Like all gyrostabilized platforms, this system runs well with relatively slow, low-power computers. The fluid bearings are pads with holes through which pressurized inert gas (such as helium) or oil presses against the spherical shell of the platform. The fluid bearings are very slippery and the spherical platform can turn freely. There are usually four bearing pads, mounted in a tetrahedral arrangement to support the platform. In premium systems, the angular sensors are usually specialized transformer coils made in a strip on a flexible printed circuit board. Several coil strips are mounted on great circles around the spherical shell of the gyrostabilized platform. Electronics outside the platform uses similar strip-shaped transformers to read the varying magnetic fields produced by the transformers wrapped around the spherical platform. Whenever a magnetic field changes shape, or moves, it will cut the wires of the coils on the external transformer strips. The cutting generates an electric current in the external strip-shaped coils and electronics can measure that current to derive angles. Cheap systems sometimes use bar codes to sense orientations and use solar cells or a single transformer to power the platform. Some small missiles have powered the platform with light from a window or optic fibers to the motor. A research topic is to suspend the platform with pressure from exhaust gases. 
Data is returned to the outside world via the transformers, or sometimes LEDs communicating with external photodiodes. Strapdown systems Lightweight digital computers permit the system to eliminate the gimbals, creating strapdown systems, so called because their sensors are simply strapped to the vehicle. This reduces the cost, eliminates gimbal lock, removes the need for some calibrations and increases the reliability by eliminating some of the moving parts. Angular rate sensors called rate gyros measure the angular velocity of the vehicle. A strapdown system needs a dynamic measurement range several hundred times that required by a gimballed system. That is, it must integrate the vehicle's attitude changes in pitch, roll and yaw, as well as gross movements. Gimballed systems could usually do well with update rates of 50–60 Hz. However, strapdown systems normally update at about 2,000 Hz. The higher rate is needed to let the navigation system integrate the angular rate into an attitude accurately. The data updating algorithms (direction cosines or quaternions) involved are too complex to be accurately performed except by digital electronics. However, digital computers are now so inexpensive and fast that rate gyro systems can now be practically used and mass-produced. The Apollo lunar module used a strapdown system in its backup Abort Guidance System (AGS). Strapdown systems are nowadays commonly used in commercial and military applications (aircraft, ships, ROVs, missiles, etc.). State-of-the-art strapdown systems are based upon ring laser gyroscopes, fibre optic gyroscopes or hemispherical resonator gyroscopes. They use digital electronics and advanced digital filtering techniques such as the Kalman filter. Motion-based alignment The orientation of a gyroscope system can sometimes also be inferred simply from its position history (e.g., GPS). This is, in particular, the case with planes and cars, where the velocity vector usually implies the orientation of the vehicle body. For example, Honeywell's Align in Motion is an initialization process where the initialization occurs while the aircraft is moving, in the air or on the ground. This is accomplished using GPS and an inertial reasonableness test, thereby allowing commercial data integrity requirements to be met. This process has been FAA certified to recover pure INS performance equivalent to stationary alignment procedures for civilian flight times up to 18 hours. It avoids the need for gyroscope batteries on aircraft. Vibrating gyros Less-expensive navigation systems, intended for use in automobiles, may use a vibrating structure gyroscope to detect changes in heading and the odometer pickup to measure distance covered along the vehicle's track. This type of system is much less accurate than a higher-end INS, but it is adequate for the typical automobile application where GPS is the primary navigation system and dead reckoning is only needed to fill gaps in GPS coverage when buildings or terrain block the satellite signals. Hemispherical resonator gyros If a standing wave is induced in a hemispheric resonant structure and then the resonant structure is rotated, the spherical harmonic standing wave rotates through an angle different from the quartz resonator structure due to the Coriolis force. The movement of the outer case with respect to the standing wave pattern is proportional to the total rotation angle and can be sensed by appropriate electronics.
The system resonators are machined from fused quartz due to its excellent mechanical properties. The electrodes that drive and sense the standing waves are deposited directly onto separate quartz structures that surround the resonator. These gyros can operate in either a whole angle mode (which gives them nearly unlimited rate capability) or a force rebalance mode that holds the standing wave in a fixed orientation with respect to the gyro housing (which gives them much better accuracy). This system has almost no moving parts and is very accurate. However, it is still relatively expensive due to the cost of the precision ground and polished hollow quartz hemispheres. Northrop Grumman currently manufactures IMUs (inertial measurement units) for spacecraft that use HRGs. These IMUs have demonstrated extremely high reliability since their initial use in 1996. Safran manufactures large numbers of HRG-based inertial navigation systems dedicated to a wide range of applications. Quartz rate sensors These products include "tuning fork gyros". Here, the gyro is designed as an electronically driven tuning fork, often fabricated out of a single piece of quartz or silicon. Such gyros operate in accordance with the dynamic theory that when an angle rate is applied to a translating body, a Coriolis force is generated. This system is usually integrated on a silicon chip. It has two mass-balanced quartz tuning forks, arranged "handle-to-handle" so forces cancel. Aluminum electrodes evaporated onto the forks and the underlying chip both drive and sense the motion. The system is both manufacturable and inexpensive. Since quartz is dimensionally stable, the system can be accurate. As the forks are twisted about the axis of the handle, the vibration of the tines tends to continue in the same plane of motion. This motion has to be resisted by electrostatic forces from the electrodes under the tines. By measuring the difference in capacitance between the two tines of a fork, the system can determine the rate of angular motion. Current state-of-the-art non-military technology can build small solid-state sensors that can measure human body movements. These devices have no moving parts and are very light. Solid-state devices using the same physical principles are used for image stabilization in small cameras or camcorders. These can be extremely small, a few millimetres across, and are built with microelectromechanical systems (MEMS) technologies. MHD sensor Sensors based on magnetohydrodynamic principles can be used to measure angular velocities. MEMS gyroscope MEMS gyroscopes typically rely on the Coriolis effect to measure angular velocity. They consist of a resonating proof mass mounted in silicon. The gyroscope is, unlike an accelerometer, an active sensor. The proof mass is pushed back and forth by driving combs. A rotation of the gyroscope generates a Coriolis force acting on the mass, which results in motion in a different direction. The motion in this direction is measured by electrodes and represents the rate of turn. Ring laser gyros A ring laser gyro (RLG) splits a beam of laser light into two beams in opposite directions through narrow tunnels in a closed circular optical path around the perimeter of a triangular block of temperature-stable Cervit glass with reflecting mirrors placed in each corner. When the gyro is rotating at some angular rate, the distance traveled by each beam will differ, the shorter path being opposite to the rotation.
The phase shift between the two beams can be measured by an interferometer and is proportional to the rate of rotation (Sagnac effect). In practice, at low rotation rates the output frequency can drop to zero as the result of backscattering causing the beams to synchronise and lock together. This is known as a lock-in, or laser-lock. The result is that there is no change in the interference pattern and therefore no measurement change. To unlock the counter-rotating light beams, laser gyros either have independent light paths for the two directions (usually in fiber optic gyros), or the laser gyro is mounted on a piezo-electric dither motor that rapidly vibrates the laser ring back and forth about its input axis through the lock-in region to decouple the light waves. The shaker is the most accurate, because both light beams use exactly the same path. Thus laser gyros retain moving parts, but they do not move as far. Fiber optic gyros A more recent variation on the optical gyroscope, the fiber optic gyroscope (FOG), uses an external laser and two beams going opposite directions (counter-propagating) in long spools (several kilometers) of fiber optic filament, with the phase difference of the two beams compared after their travel through the spools of fiber. The basic mechanism, monochromatic laser light travelling in opposite paths and the Sagnac effect, is the same in a FOG and an RLG, but the engineering details are substantially different in the FOG compared to earlier laser gyros. Precise winding of the fiber-optic coil is required to ensure the paths taken by the light in opposite directions are as similar as possible. The FOG requires more complex calibration than a ring laser gyro, making the development and manufacture of FOGs more technically challenging than for an RLG. However, FOGs do not suffer from laser lock at low speeds and do not need to contain any moving parts, increasing the maximum potential accuracy and lifespan of a FOG over an equivalent RLG. Pendular accelerometers The basic, open-loop accelerometer consists of a mass attached to a spring. The mass is constrained to move only in line with the spring. Acceleration causes deflection of the mass and the offset distance is measured. The acceleration is derived from the values of deflection distance, mass and the spring constant. The system must also be damped to avoid oscillation. A closed-loop accelerometer achieves higher performance by using a feedback loop to cancel the deflection, thus keeping the mass nearly stationary. Whenever the mass deflects, the feedback loop causes an electric coil to apply an equal and opposite force to the mass, canceling the motion. Acceleration is derived from the amount of counteracting force applied. Because the mass barely moves, the effects of non-linearities of the spring and damping system are greatly reduced. In addition, this accelerometer provides for increased bandwidth beyond the natural frequency of the sensing element. Both types of accelerometers have been manufactured as integrated micro-machinery on silicon chips. TIMU sensors DARPA's Microsystems Technology Office (MTO) is working on a Micro-PNT (Micro-Technology for Positioning, Navigation and Timing) program to design Timing & Inertial Measurement Unit (TIMU) chips that do absolute position tracking on a single chip without GPS-aided navigation. Micro-PNT adds a highly accurate master timing clock integrated into an IMU (Inertial Measurement Unit) chip, making it a Timing & Inertial Measurement Unit chip.
A TIMU chip integrates a 3-axis gyroscope, a 3-axis accelerometer and a 3-axis magnetometer together with a highly accurate master timing clock, so that it can simultaneously measure the motion tracked and combine that with timing from the synchronized clock. Method In one form, the navigational system of equations acquires linear and angular measurements from the inertial and body frame, respectively, and calculates the final attitude and position in the NED frame of reference. Here f is the specific force, ω is the angular rate, a is the acceleration, R is the position, V is the velocity, Ω is the angular velocity of the Earth, g is the acceleration due to gravity, and φ, λ and h are the NED location parameters (latitude, longitude and height). Also, superscripts and subscripts E, I and B represent variables in the Earth-centered, inertial or body reference frame, respectively, and C is a transformation between reference frames. See also References Further reading External links Ferranti Inertial Navigation System (INAS) Inertial Navigation System Principle of operation of an accelerometer Overview of inertial instrument types Oxford Technical Solutions Inertial Navigation Guide Listing of open-source Inertial Navigation system Impact of inertial sensor errors on Inertial Navigation System position and attitude errors Introduction to Inertial Navigation Systems in UAV/Drone Applications Geodesy Aircraft instruments Aerospace engineering Avionics Spacecraft components Missile guidance Navigational equipment Technology systems Navigational aids Inertial navigation
Vector (mathematics and physics)
In mathematics and physics, vector is a term that refers to quantities that cannot be expressed by a single number (a scalar), or to elements of some vector spaces. Such quantities have to be expressed by both a magnitude and a direction. Historically, vectors were introduced in geometry and physics (typically in mechanics) for quantities that have both a magnitude and a direction, such as displacements, forces and velocity. Such quantities are represented by geometric vectors in the same way as distances, masses and time are represented by real numbers. The term vector is also used, in some contexts, for tuples, which are finite sequences (of numbers or other objects) of a fixed length. Both geometric vectors and tuples can be added and scaled, and these vector operations led to the concept of a vector space, which is a set equipped with a vector addition and a scalar multiplication that satisfy some axioms generalizing the main properties of operations on the above sorts of vectors. A vector space formed by geometric vectors is called a Euclidean vector space, and a vector space formed by tuples is called a coordinate vector space. Many vector spaces are considered in mathematics, such as extension fields, polynomial rings, algebras and function spaces. The term vector is generally not used for elements of these vector spaces, and is generally reserved for geometric vectors, tuples, and elements of unspecified vector spaces (for example, when discussing general properties of vector spaces). Vectors in Euclidean geometry Vector quantities Vector spaces Vectors in algebra Every algebra over a field is a vector space, but elements of an algebra are generally not called vectors. However, in some cases, they are called vectors, mainly due to historical reasons. Vector quaternion, a quaternion with a zero real part Multivector or k-vector, an element of the exterior algebra of a vector space. Spinors, also called spin vectors, have been introduced for extending the notion of rotation vector. In fact, rotation vectors represent rotations well locally, but not globally, because a closed loop in the space of rotation vectors may induce a curve in the space of rotations that is not a loop. Also, the manifold of rotation vectors is simply connected, while the manifold of rotations is not. Spinors are elements of a vector subspace of some Clifford algebra. Witt vector, an infinite sequence of elements of a commutative ring, which belongs to an algebra over this ring, and has been introduced for handling carry propagation in the operations on p-adic numbers. Data represented by vectors The set of tuples of real numbers has a natural structure of vector space defined by component-wise addition and scalar multiplication. It is common to call these tuples vectors, even in contexts where vector-space operations do not apply. More generally, when some data can be represented naturally by vectors, they are often called vectors even when addition and scalar multiplication of vectors are not valid operations on these data. Here are some examples. Rotation vector, a Euclidean vector whose direction is that of the axis of a rotation and whose magnitude is the angle of the rotation. Burgers vector, a vector that represents the magnitude and direction of the lattice distortion of dislocation in a crystal lattice Interval vector, in musical set theory, an array that expresses the intervallic content of a pitch-class set Probability vector, in statistics, a vector with non-negative entries that sum to one.
Random vector or multivariate random variable, in statistics, a set of real-valued random variables that may be correlated. However, a random vector may also refer to a random variable that takes its values in a vector space. Logical vector, a vector of 0s and 1s (Booleans). Vectors in calculus Calculus serves as a foundational mathematical tool in the realm of vectors, offering a framework for the analysis and manipulation of vector quantities in diverse scientific disciplines, notably physics and engineering. Vector-valued functions, where the output is a vector, are scrutinized using calculus to derive essential insights into motion within three-dimensional space. Vector calculus extends traditional calculus principles to vector fields, introducing operations like gradient, divergence, and curl, which find applications in physics and engineering contexts. Line integrals, crucial for calculating work along a path within force fields, and surface integrals, employed to determine quantities like flux, illustrate the practical utility of calculus in vector analysis. Volume integrals, essential for computations involving scalar or vector fields over three-dimensional regions, contribute to understanding mass distribution, charge density, and fluid flow rates. See also Vector (disambiguation) Vector spaces with more structure Graded vector space, a type of vector space that includes the extra structure of gradation Normed vector space, a vector space on which a norm is defined Hilbert space Ordered vector space, a vector space equipped with a partial order Super vector space, name for a Z2-graded vector space Symplectic vector space, a vector space V equipped with a non-degenerate, skew-symmetric, bilinear form Topological vector space, a blend of topological structure with the algebraic concept of a vector space Vector fields A vector field is a vector-valued function that, generally, has a domain of the same dimension (as a manifold) as its codomain, Conservative vector field, a vector field that is the gradient of a scalar potential field Hamiltonian vector field, a vector field defined for any energy function or Hamiltonian Killing vector field, a vector field on a Riemannian manifold associated with a symmetry Solenoidal vector field, a vector field with zero divergence Vector potential, a vector field whose curl is a given vector field Vector flow, a set of closely related concepts of the flow determined by a vector field See also Ricci calculus Vector Analysis, a textbook on vector calculus by Wilson, first published in 1901, which did much to standardize the notation and vocabulary of three-dimensional linear algebra and vector calculus Vector bundle, a topological construction that makes precise the idea of a family of vector spaces parameterized by another space Vector calculus, a branch of mathematics concerned with differentiation and integration of vector fields Vector differential, or del, a vector differential operator represented by the nabla symbol Vector Laplacian, the vector Laplace operator, denoted by , is a differential operator defined over a vector field Vector notation, common notation used when working with vectors Vector operator, a type of differential operator used in vector calculus Vector product, or cross product, an operation on two vectors in a three-dimensional Euclidean space, producing a third three-dimensional Euclidean vector perpendicular to the original two Vector projection, also known as vector resolute or vector component, a linear mapping producing 
a vector parallel to a second vector Vector-valued function, a function that has a vector space as a codomain Vectorization (mathematics), a linear transformation that converts a matrix into a column vector Vector autoregression, an econometric model used to capture the evolution and the interdependencies between multiple time series Vector boson, a boson with the spin quantum number equal to 1 Vector measure, a function defined on a family of sets and taking vector values satisfying certain properties Vector meson, a meson with total spin 1 and odd parity Vector quantization, a quantization technique used in signal processing Vector soliton, a solitary wave with multiple components coupled together that maintains its shape during propagation Vector synthesis, a type of audio synthesis Phase vector Notes References Vectors - The Feynman Lectures on Physics Broad-concept articles
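As a concrete supplement to the vector operations and the vector-calculus operators discussed above, the following Python sketch shows coordinate vectors being added and scaled, the dot and cross products, and a finite-difference approximation of a gradient field. It is a minimal illustration only; the use of NumPy, the particular vectors and the grid are implementation choices of this edit, not anything prescribed by the article.

    import numpy as np

    # Coordinate vectors in R^3: addition and scalar multiplication
    u = np.array([1.0, 2.0, 3.0])
    v = np.array([-1.0, 0.5, 2.0])
    print(u + v)        # vector addition, component-wise
    print(2.5 * u)      # scalar multiplication

    # Dot product, cross product (perpendicular to both inputs), and Euclidean norm
    print(np.dot(u, v))
    print(np.cross(u, v))
    print(np.linalg.norm(u))

    # Gradient of the scalar field f(x, y) = x**2 + y**2 on a small grid,
    # approximated by central finite differences (see "Vectors in calculus")
    x = np.linspace(-1.0, 1.0, 5)
    y = np.linspace(-1.0, 1.0, 5)
    X, Y = np.meshgrid(x, y, indexing="ij")
    f = X**2 + Y**2
    df_dx, df_dy = np.gradient(f, x, y)   # components of the gradient vector field
    print(df_dx[2, 2], df_dy[2, 2])       # approximately zero at the origin

At the centre of the grid both gradient components vanish, as expected for the minimum of f, which is the kind of information vector calculus extracts from scalar and vector fields.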
Fencing response
The fencing response is an unnatural position of the arms following a concussion. Immediately after moderate forces have been applied to the brainstem, the forearms are held flexed or extended (typically into the air) for a period lasting up to several seconds after the impact. The fencing response is often observed during athletic competition involving contact, such as combat sports, American football, ice hockey, rugby union, rugby league and Australian rules football. It is used as an overt indicator of injury force magnitude and midbrain localization to aid in injury identification and classification for events including on-field and/or bystander observations of sports-related head injuries. Relationship to fencing reflex and posturing The fencing response is similar to the asymmetrical tonic neck reflex in infants. Like the reflex, a positive fencing response resembles the en garde position that initiates a fencing bout, with the extension of one arm and the flexion of the other. Tonic posturing preceding convulsion has been observed in sports injuries at the moment of impact where extension and flexion of opposite arms occur despite body position or gravity. The fencing response emerges from the separation of tonic posturing from convulsion and refines the tonic posturing phase as an immediate forearm motor response to indicate injury force magnitude and location. Pathophysiology The neuromotor manifestation of the fencing response resembles reflexes initiated by vestibular stimuli. Vestibular stimuli activate primitive reflexes in human infants, such as the asymmetric tonic neck reflex, Moro reflex, and parachute reflex, which are likely mediated by vestibular nuclei in the brainstem. The lateral vestibular nucleus (LVN; Deiter’s nucleus) has descending efferent fibers in the vestibulocochlear nerve distributed to the motor nuclei of the anterior column and exerts an excitatory influence on ipsilateral limb extensor motor neurons while suppressing flexor motor neurons. The anatomical location of the LVN, adjacent to the cerebellar peduncles (see cerebellum), suggests that mechanical forces to the head may stretch the cerebellar peduncles and activate the LVN. LVN activity would manifest as limb extensor activation and flexor inhibition, defined as a fencing response, while flexion of the contralateral limb is likely mediated by crossed inhibition necessary for pattern generation. In simpler terms, the shock of the trauma manually activates the nerves that control the muscle groups responsible for raising the arm. These muscle groups are activated by stimuli in infants for instincts such as grabbing for their mothers or breaking their falls. The LVN has neurons that connect it to motor neurons inside grey matter in the spinal cord, and sends signals to one side of the body that activate motor neurons that cause extension, while suppressing motor neurons that cause flexing. The LVN is located near the connection between the brain and the brain stem, which suggests that excessive force to the head may stretch this connection and thus activate the LVN. The neurons that are stimulated suppress neighboring neurons, which prevents neurons on the other side of the body from being stimulated. 
Injury severity and sports applications In a survey of documented head injuries followed by unconsciousness, most of which involved sporting activities, two thirds of head impacts demonstrated a fencing response, indicating a high incidence of fencing in head injuries leading to unconsciousness, and those pertaining to athletic behavior. Likewise, animal models of diffuse brain injury have illustrated a fencing response upon injury at moderate but not mild levels of severity as well as a correlation between fencing, blood–brain barrier disruption, and nuclear shrinkage within the LVN, all of which indicate diagnostic utility of the response. The most challenging aspect to managing sport-related concussion (mild traumatic brain injury, TBI) is recognizing the injury. Consensus conferences have worked toward objective criteria to identify mild TBI in the context of severe TBI. However, few tools are available for distinguishing mild TBI from moderate TBI. As a result, greater emphasis has regularly been placed on the management of concussions in athletes than on the immediate identification and treatment of such an injury. On-field predictors of injury severity can define return-to-play guidelines and urgency of care, but past criteria have either lacked sufficient incidence for effective utility, did not directly address the severity of the injury, or have become cumbersome and fraught with inter-rater reliability issues. Fencing displays in a televised game Stevan Ridley playing in the NFL for the New England Patriots against Baltimore Ravens clashed with Bernard Pollard. Ridley was knocked unconscious, with medical professionals declaring it fencing response. Steven went on to have a full recovery, a long career, and became a Super Bowl champion (XLIX). He has not reported any signs of permanent brain damage since. Kenny Shaw, NCAA football wide receiver for Florida State, September 17, 2011. Jakub Voracek, professional hockey player for the Philadelphia Flyers, after absorbing a body check by opponent Niklas Kronwall, March 6, 2012 Xiong Fei, professional footballer, after being kicked in the head by Shanghai Shenhua FC teammate Li Jianbin, October 17, 2015. Tom Savage, professional American football quarterback for the Houston Texans, December 10, 2017. Donald Parham, professional American football tight end for the Los Angeles Chargers, December 16, 2021. Tua Tagovailoa, professional American football quarterback for the Miami Dolphins, Sept. 29, 2022 and again on Sept. 12, 2024. Barnabás Varga, Hungarian professional footballer, after a collision with Scotland goalkeeper Angus Gunn in the Euro 2024 Group A match on June 23, 2024. References Sports medicine
Chemical potential
In thermodynamics, the chemical potential of a species is the energy that can be absorbed or released due to a change of the particle number of the given species, e.g. in a chemical reaction or phase transition. The chemical potential of a species in a mixture is defined as the rate of change of free energy of a thermodynamic system with respect to the change in the number of atoms or molecules of the species that are added to the system. Thus, it is the partial derivative of the free energy with respect to the amount of the species, all other species' concentrations in the mixture remaining constant. When both temperature and pressure are held constant, and the number of particles is expressed in moles, the chemical potential is the partial molar Gibbs free energy. At chemical equilibrium or in phase equilibrium, the total sum of the product of chemical potentials and stoichiometric coefficients is zero, as the free energy is at a minimum. In a system in diffusion equilibrium, the chemical potential of any chemical species is uniformly the same everywhere throughout the system. In semiconductor physics, the chemical potential of a system of electrons at zero absolute temperature is known as the Fermi level. Overview Particles tend to move from higher chemical potential to lower chemical potential because this reduces the free energy. In this way, chemical potential is a generalization of "potentials" in physics such as gravitational potential. When a ball rolls down a hill, it is moving from a higher gravitational potential (higher internal energy thus higher potential for work) to a lower gravitational potential (lower internal energy). In the same way, as molecules move, react, dissolve, melt, etc., they will always tend naturally to go from a higher chemical potential to a lower one, changing the particle number, which is the conjugate variable to chemical potential. A simple example is a system of dilute molecules diffusing in a homogeneous environment. In this system, the molecules tend to move from areas with high concentration to low concentration, until eventually, the concentration is the same everywhere. The microscopic explanation for this is based on kinetic theory and the random motion of molecules. However, it is simpler to describe the process in terms of chemical potentials: For a given temperature, a molecule has a higher chemical potential in a higher-concentration area and a lower chemical potential in a low concentration area. Movement of molecules from higher chemical potential to lower chemical potential is accompanied by a release of free energy. Therefore, it is a spontaneous process. Another example, not based on concentration but on phase, is an ice cube on a plate above 0 °C. An H2O molecule that is in the solid phase (ice) has a higher chemical potential than a water molecule that is in the liquid phase (water) above 0 °C. When some of the ice melts, H2O molecules convert from solid to the warmer liquid where their chemical potential is lower, so the ice cube shrinks. At the temperature of the melting point, 0 °C, the chemical potentials in water and ice are the same; the ice cube neither grows nor shrinks, and the system is in equilibrium. A third example is illustrated by the chemical reaction of dissociation of a weak acid HA (such as acetic acid, A = CH3COO−): HA H+ + A− Vinegar contains acetic acid. 
When acid molecules dissociate, the concentration of the undissociated acid molecules (HA) decreases and the concentrations of the product ions (H+ and A−) increase. Thus the chemical potential of HA decreases and the sum of the chemical potentials of H+ and A− increases. When the sums of the chemical potentials of reactants and products are equal, the system is at equilibrium and there is no tendency for the reaction to proceed in either the forward or backward direction. This explains why vinegar is acidic: acetic acid dissociates to some extent, releasing hydrogen ions into the solution. Chemical potentials are important in many aspects of multi-phase equilibrium chemistry, including melting, boiling, evaporation, solubility, osmosis, partition coefficient, liquid-liquid extraction and chromatography. In each case the chemical potential of a given species at equilibrium is the same in all phases of the system. In electrochemistry, ions do not always tend to go from higher to lower chemical potential, but they do always go from higher to lower electrochemical potential. The electrochemical potential completely characterizes all of the influences on an ion's motion, while the chemical potential includes everything except the electric force. (See below for more on this terminology.) Thermodynamic definition The chemical potential μi of species i (atomic, molecular or nuclear) is defined, as all intensive quantities are, by the phenomenological fundamental equation of thermodynamics, which holds for both reversible and irreversible infinitesimal processes: \(dU = T\,dS - P\,dV + \sum_i \mu_i\,dN_i,\) where dU is the infinitesimal change of internal energy U, dS the infinitesimal change of entropy S, dV is the infinitesimal change of volume V for a thermodynamic system in thermal equilibrium, and dNi is the infinitesimal change of particle number Ni of species i as particles are added or subtracted. T is absolute temperature, S is entropy, P is pressure, and V is volume. Other work terms, such as those involving electric, magnetic or gravitational fields, may be added. From the above equation, the chemical potential is given by \(\mu_i = \left(\frac{\partial U}{\partial N_i}\right)_{S,V,N_{j\ne i}}.\) This is because the internal energy U is a state function, so if its differential exists, then the differential is an exact differential of the form \(dU = \sum_k \left(\frac{\partial U}{\partial x_k}\right) dx_k\) for independent variables x1, x2, ... , xN of U. This expression of the chemical potential as a partial derivative of U with respect to the corresponding species particle number is inconvenient for condensed-matter systems, such as chemical solutions, as it is hard to control the volume and entropy to be constant while particles are added. A more convenient expression may be obtained by making a Legendre transformation to another thermodynamic potential: the Gibbs free energy \(G = U + PV - TS\). From the differential \(dG = dU + P\,dV + V\,dP - T\,dS - S\,dT\) (for \(G = U + PV - TS\), the product rule is applied to the PV and TS terms) and using the above expression for \(dU\), a differential relation for \(dG\) is obtained: \(dG = -S\,dT + V\,dP + \sum_i \mu_i\,dN_i.\) As a consequence, another expression for \(\mu_i\) results: \(\mu_i = \left(\frac{\partial G}{\partial N_i}\right)_{T,P,N_{j\ne i}},\) and the change in Gibbs free energy of a system that is held at constant temperature and pressure is simply \(dG = \sum_i \mu_i\,dN_i.\) In thermodynamic equilibrium, when the system concerned is at constant temperature and pressure but can exchange particles with its external environment, the Gibbs free energy is at its minimum for the system, that is \(dG = 0\). It follows that \(\sum_i \mu_i\,dN_i = 0.\) Use of this equality provides the means to establish the equilibrium constant for a chemical reaction.
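The next paragraph refers to analogous expressions obtained via the enthalpy and the Helmholtz free energy. As a compact reference, the standard set of equivalent partial-derivative forms (a textbook identity consistent with the definitions above, supplied here as a supplement rather than quoted from this article's sources) is

\[
\mu_i = \left(\frac{\partial U}{\partial N_i}\right)_{S,V,N_{j\ne i}}
      = \left(\frac{\partial H}{\partial N_i}\right)_{S,P,N_{j\ne i}}
      = \left(\frac{\partial F}{\partial N_i}\right)_{T,V,N_{j\ne i}}
      = \left(\frac{\partial G}{\partial N_i}\right)_{T,P,N_{j\ne i}},
\]

with the enthalpy \(H = U + PV\) and the Helmholtz free energy \(F = U - TS\).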
By making further Legendre transformations from U to other thermodynamic potentials like the enthalpy and Helmholtz free energy , expressions for the chemical potential may be obtained in terms of these: These different forms for the chemical potential are all equivalent, meaning that they have the same physical content, and may be useful in different physical situations. Applications The Gibbs–Duhem equation is useful because it relates individual chemical potentials. For example, in a binary mixture, at constant temperature and pressure, the chemical potentials of the two participants A and B are related by where is the number of moles of A and is the number of moles of B. Every instance of phase or chemical equilibrium is characterized by a constant. For instance, the melting of ice is characterized by a temperature, known as the melting point at which solid and liquid phases are in equilibrium with each other. Chemical potentials can be used to explain the slopes of lines on a phase diagram by using the Clapeyron equation, which in turn can be derived from the Gibbs–Duhem equation. They are used to explain colligative properties such as melting-point depression by the application of pressure. Henry's law for the solute can be derived from Raoult's law for the solvent using chemical potentials. History Chemical potential was first described by the American engineer, chemist and mathematical physicist Josiah Willard Gibbs. He defined it as follows: Gibbs later noted also that for the purposes of this definition, any chemical element or combination of elements in given proportions may be considered a substance, whether capable or not of existing by itself as a homogeneous body. This freedom to choose the boundary of the system allows the chemical potential to be applied to a huge range of systems. The term can be used in thermodynamics and physics for any system undergoing change. Chemical potential is also referred to as partial molar Gibbs energy (see also partial molar property). Chemical potential is measured in units of energy/particle or, equivalently, energy/mole. In his 1873 paper A Method of Geometrical Representation of the Thermodynamic Properties of Substances by Means of Surfaces, Gibbs introduced the preliminary outline of the principles of his new equation able to predict or estimate the tendencies of various natural processes to ensue when bodies or systems are brought into contact. By studying the interactions of homogeneous substances in contact, i.e. bodies, being in composition part solid, part liquid, and part vapor, and by using a three-dimensional volume–entropy–internal energy graph, Gibbs was able to determine three states of equilibrium, i.e. "necessarily stable", "neutral", and "unstable", and whether or not changes will ensue. In 1876, Gibbs built on this framework by introducing the concept of chemical potential so to take into account chemical reactions and states of bodies that are chemically different from each other. In his own words from the aforementioned paper, Gibbs states: In this description, as used by Gibbs, ε refers to the internal energy of the body, η refers to the entropy of the body, and ν is the volume of the body. Electrochemical, internal, external, and total chemical potential The abstract definition of chemical potential given above—total change in free energy per extra mole of substance—is more specifically called total chemical potential. 
If two locations have different total chemical potentials for a species, some of it may be due to potentials associated with "external" force fields (electric potential energy, gravitational potential energy, etc.), while the rest would be due to "internal" factors (density, temperature, etc.) Therefore, the total chemical potential can be split into internal chemical potential and external chemical potential: where i.e., the external potential is the sum of electric potential, gravitational potential, etc. (where q and m are the charge and mass of the species, Vele and h are the electric potential and height of the container, respectively, and g is the acceleration due to gravity). The internal chemical potential includes everything else besides the external potentials, such as density, temperature, and enthalpy. This formalism can be understood by assuming that the total energy of a system, , is the sum of two parts: an internal energy, , and an external energy due to the interaction of each particle with an external field, . The definition of chemical potential applied to yields the above expression for . The phrase "chemical potential" sometimes means "total chemical potential", but that is not universal. In some fields, in particular electrochemistry, semiconductor physics, and solid-state physics, the term "chemical potential" means internal chemical potential, while the term electrochemical potential is used to mean total chemical potential. Systems of particles Electrons in solids Electrons in solids have a chemical potential, defined the same way as the chemical potential of a chemical species: The change in free energy when electrons are added or removed from the system. In the case of electrons, the chemical potential is usually expressed in energy per particle rather than energy per mole, and the energy per particle is conventionally given in units of electronvolt (eV). Chemical potential plays an especially important role in solid-state physics and is closely related to the concepts of work function, Fermi energy, and Fermi level. For example, n-type silicon has a higher internal chemical potential of electrons than p-type silicon. In a p–n junction diode at equilibrium the chemical potential (internal chemical potential) varies from the p-type to the n-type side, while the total chemical potential (electrochemical potential, or, Fermi level) is constant throughout the diode. As described above, when describing chemical potential, one has to say "relative to what". In the case of electrons in semiconductors, internal chemical potential is often specified relative to some convenient point in the band structure, e.g., to the bottom of the conduction band. It may also be specified "relative to vacuum", to yield a quantity known as work function, however, work function varies from surface to surface even on a completely homogeneous material. Total chemical potential, on the other hand, is usually specified relative to electrical ground. In atomic physics, the chemical potential of the electrons in an atom is sometimes said to be the negative of the atom's electronegativity. Likewise, the process of chemical potential equalization is sometimes referred to as the process of electronegativity equalization. This connection comes from the Mulliken electronegativity scale. 
By inserting the energetic definitions of the ionization potential and electron affinity into the Mulliken electronegativity, it is seen that the Mulliken chemical potential is a finite difference approximation of the electronic energy with respect to the number of electrons, i.e., Sub-nuclear particles In recent years, thermal physics has applied the definition of chemical potential to systems in particle physics and its associated processes. For example, in a quark–gluon plasma or other QCD matter, at every point in space there is a chemical potential for photons, a chemical potential for electrons, a chemical potential for baryon number, electric charge, and so forth. In the case of photons, photons are bosons and can very easily and rapidly appear or disappear. Therefore, at thermodynamic equilibrium, the chemical potential of photons is in most physical situations always and everywhere zero. The reason is, if the chemical potential somewhere was higher than zero, photons would spontaneously disappear from that area until the chemical potential went back to zero; likewise, if the chemical potential somewhere was less than zero, photons would spontaneously appear until the chemical potential went back to zero. Since this process occurs extremely rapidly - at least, it occurs rapidly in the presence of dense charged matter or also in the walls of the textbook example for a photon gas of blackbody radiation - it is safe to assume that the photon chemical potential here is never different from zero. A physical situation where the chemical potential for photons can differ from zero are material-filled optical microcavities, with spacings between cavity mirrors in the wavelength regime. In such two-dimensional cases, photon gases with tuneable chemical potential, much reminiscent to gases of material particles, can be observed. Electric charge is different because it is intrinsically conserved, i.e. it can be neither created nor destroyed. It can, however, diffuse. The "chemical potential of electric charge" controls this diffusion: Electric charge, like anything else, will tend to diffuse from areas of higher chemical potential to areas of lower chemical potential. Other conserved quantities like baryon number are the same. In fact, each conserved quantity is associated with a chemical potential and a corresponding tendency to diffuse to equalize it out. In the case of electrons, the behaviour depends on temperature and context. At low temperatures, with no positrons present, electrons cannot be created or destroyed. Therefore, there is an electron chemical potential that might vary in space, causing diffusion. At very high temperatures, however, electrons and positrons can spontaneously appear out of the vacuum (pair production), so the chemical potential of electrons by themselves becomes a less useful quantity than the chemical potential of the conserved quantities like (electrons minus positrons). The chemical potentials of bosons and fermions is related to the number of particles and the temperature by Bose–Einstein statistics and Fermi–Dirac statistics respectively. Ideal vs. non-ideal solutions Generally the chemical potential is given as a sum of an ideal contribution and an excess contribution: In an ideal solution, the chemical potential of species i (μi) is dependent on temperature and pressure. μi0(T, P) is defined as the chemical potential of pure species i. 
Given this definition, the chemical potential of species i in an ideal solution is \(\mu_i = \mu_i^0(T,P) + RT\ln x_i,\) where R is the gas constant and \(x_i\) is the mole fraction of species i contained in the solution. The chemical potential becomes negative infinity when \(x_i = 0\), but this does not lead to nonphysical results because \(x_i = 0\) means that species i is not present in the system. This equation assumes that \(\mu_i\) depends only on the mole fraction \(x_i\) contained in the solution. This neglects the intermolecular interactions between species i and itself and between species i and the other species [i–(j≠i)]. This can be corrected for by factoring in the activity coefficient of species i, defined as γi. This correction yields \(\mu_i = \mu_i^0(T,P) + RT\ln(\gamma_i x_i).\) See also Chemical equilibrium Electrochemical potential Equilibrium chemistry Excess chemical potential Fugacity Partial molar property Thermodynamic activity Thermodynamic equilibrium Sources Citations References External links Physical chemistry Potentials Chemical thermodynamics Thermodynamic properties Chemical engineering thermodynamics
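As a closing numerical illustration of the ideal- and non-ideal-solution expressions above, the following Python sketch evaluates \(\mu_i = \mu_i^0 + RT\ln(\gamma_i x_i)\) for two mole fractions. The reference value \(\mu_i^0 = 0\), the temperature and the mole fractions are arbitrary illustrative choices, not data from the article; the point it demonstrates is the one made in the Overview, namely that the more concentrated region has the higher chemical potential, so spontaneous diffusion runs toward the dilute region.

    import math

    R = 8.314      # gas constant, J/(mol K)
    T = 298.15     # temperature, K (arbitrary illustrative choice)

    def chemical_potential(x, mu0=0.0, gamma=1.0):
        # Chemical potential in J/mol relative to the pure-species reference mu0.
        # gamma = 1 recovers the ideal-solution expression mu = mu0 + R*T*ln(x).
        return mu0 + R * T * math.log(gamma * x)

    # Two regions of an ideal solution with different mole fractions of species i
    mu_dilute = chemical_potential(x=0.01)
    mu_concentrated = chemical_potential(x=0.10)

    # The concentrated region has the higher chemical potential, so species i
    # tends to diffuse from the concentrated region toward the dilute one.
    print(mu_concentrated - mu_dilute)   # roughly +5.7 kJ/mol at 298 K

    # A non-ideal correction: an activity coefficient below 1 lowers mu
    print(chemical_potential(x=0.10, gamma=0.8))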
Evolution
Evolution is the change in the heritable characteristics of biological populations over successive generations. It occurs when evolutionary processes such as natural selection and genetic drift act on genetic variation, resulting in certain characteristics becoming more or less common within a population over successive generations. The process of evolution has given rise to biodiversity at every level of biological organisation. The scientific theory of evolution by natural selection was conceived independently by two British naturalists, Charles Darwin and Alfred Russel Wallace, in the mid-19th century as an explanation for why organisms are adapted to their physical and biological environments. The theory was first set out in detail in Darwin's book On the Origin of Species. Evolution by natural selection is established by observable facts about living organisms: (1) more offspring are often produced than can possibly survive; (2) traits vary among individuals with respect to their morphology, physiology, and behaviour; (3) different traits confer different rates of survival and reproduction (differential fitness); and (4) traits can be passed from generation to generation (heritability of fitness). In successive generations, members of a population are therefore more likely to be replaced by the offspring of parents with favourable characteristics for that environment. In the early 20th century, competing ideas of evolution were refuted and evolution was combined with Mendelian inheritance and population genetics to give rise to modern evolutionary theory. In this synthesis the basis for heredity is in DNA molecules that pass information from generation to generation. The processes that change DNA in a population include natural selection, genetic drift, mutation, and gene flow. All life on Earth—including humanity—shares a last universal common ancestor (LUCA), which lived approximately 3.5–3.8 billion years ago. The fossil record includes a progression from early biogenic graphite to microbial mat fossils to fossilised multicellular organisms. Existing patterns of biodiversity have been shaped by repeated formations of new species (speciation), changes within species (anagenesis), and loss of species (extinction) throughout the evolutionary history of life on Earth. Morphological and biochemical traits tend to be more similar among species that share a more recent common ancestor, which historically was used to reconstruct phylogenetic trees, although direct comparison of genetic sequences is a more common method today. Evolutionary biologists have continued to study various aspects of evolution by forming and testing hypotheses as well as constructing theories based on evidence from the field or laboratory and on data generated by the methods of mathematical and theoretical biology. Their discoveries have influenced not just the development of biology but also other fields including agriculture, medicine, and computer science. Heredity Evolution in organisms occurs through changes in heritable characteristics—the inherited characteristics of an organism. In humans, for example, eye colour is an inherited characteristic and an individual might inherit the "brown-eye trait" from one of their parents. Inherited traits are controlled by genes and the complete set of genes within an organism's genome (genetic material) is called its genotype. The complete set of observable traits that make up the structure and behaviour of an organism is called its phenotype. 
Some of these traits come from the interaction of its genotype with the environment while others are neutral. Some observable characteristics are not inherited. For example, suntanned skin comes from the interaction between a person's genotype and sunlight; thus, suntans are not passed on to people's children. The phenotype is the ability of the skin to tan when exposed to sunlight. However, some people tan more easily than others, due to differences in genotypic variation; a striking example are people with the inherited trait of albinism, who do not tan at all and are very sensitive to sunburn. Heritable characteristics are passed from one generation to the next via DNA, a molecule that encodes genetic information. DNA is a long biopolymer composed of four types of bases. The sequence of bases along a particular DNA molecule specifies the genetic information, in a manner similar to a sequence of letters spelling out a sentence. Before a cell divides, the DNA is copied, so that each of the resulting two cells will inherit the DNA sequence. Portions of a DNA molecule that specify a single functional unit are called genes; different genes have different sequences of bases. Within cells, each long strand of DNA is called a chromosome. The specific location of a DNA sequence within a chromosome is known as a locus. If the DNA sequence at a locus varies between individuals, the different forms of this sequence are called alleles. DNA sequences can change through mutations, producing new alleles. If a mutation occurs within a gene, the new allele may affect the trait that the gene controls, altering the phenotype of the organism. However, while this simple correspondence between an allele and a trait works in some cases, most traits are influenced by multiple genes in a quantitative or epistatic manner. Sources of variation Evolution can occur if there is genetic variation within a population. Variation comes from mutations in the genome, reshuffling of genes through sexual reproduction and migration between populations (gene flow). Despite the constant introduction of new variation through mutation and gene flow, most of the genome of a species is very similar among all individuals of that species. However, discoveries in the field of evolutionary developmental biology have demonstrated that even relatively small differences in genotype can lead to dramatic differences in phenotype both within and between species. An individual organism's phenotype results from both its genotype and the influence of the environment it has lived in. The modern evolutionary synthesis defines evolution as the change over time in this genetic variation. The frequency of one particular allele will become more or less prevalent relative to other forms of that gene. Variation disappears when a new allele reaches the point of fixation—when it either disappears from the population or replaces the ancestral allele entirely. Mutation Mutations are changes in the DNA sequence of a cell's genome and are the ultimate source of genetic variation in all organisms. When mutations occur, they may alter the product of a gene, or prevent the gene from functioning, or have no effect. About half of the mutations in the coding regions of protein-coding genes are deleterious — the other half are neutral. A small percentage of the total mutations in this region confer a fitness benefit. Some of the mutations in other parts of the genome are deleterious but the vast majority are neutral. A few are beneficial. 
Mutations can involve large sections of a chromosome becoming duplicated (usually by genetic recombination), which can introduce extra copies of a gene into a genome. Extra copies of genes are a major source of the raw material needed for new genes to evolve. This is important because most new genes evolve within gene families from pre-existing genes that share common ancestors. For example, the human eye uses four genes to make structures that sense light: three for colour vision and one for night vision; all four are descended from a single ancestral gene. New genes can be generated from an ancestral gene when a duplicate copy mutates and acquires a new function. This process is easier once a gene has been duplicated because it increases the redundancy of the system; one gene in the pair can acquire a new function while the other copy continues to perform its original function. Other types of mutations can even generate entirely new genes from previously noncoding DNA, a phenomenon termed de novo gene birth. The generation of new genes can also involve small parts of several genes being duplicated, with these fragments then recombining to form new combinations with new functions (exon shuffling). When new genes are assembled from shuffling pre-existing parts, domains act as modules with simple independent functions, which can be mixed together to produce new combinations with new and complex functions. For example, polyketide synthases are large enzymes that make antibiotics; they contain up to 100 independent domains that each catalyse one step in the overall process, like a step in an assembly line. One example of mutation is wild boar piglets. They are camouflage coloured and show a characteristic pattern of dark and light longitudinal stripes. However, mutations in the melanocortin 1 receptor (MC1R) disrupt the pattern. The majority of pig breeds carry MC1R mutations disrupting wild-type colour and different mutations causing dominant black colouring. Sex and recombination In asexual organisms, genes are inherited together, or linked, as they cannot mix with genes of other organisms during reproduction. In contrast, the offspring of sexual organisms contain random mixtures of their parents' chromosomes that are produced through independent assortment. In a related process called homologous recombination, sexual organisms exchange DNA between two matching chromosomes. Recombination and reassortment do not alter allele frequencies, but instead change which alleles are associated with each other, producing offspring with new combinations of alleles. Sex usually increases genetic variation and may increase the rate of evolution. The two-fold cost of sex was first described by John Maynard Smith. The first cost is that in sexually dimorphic species only one of the two sexes can bear young. This cost does not apply to hermaphroditic species, like most plants and many invertebrates. The second cost is that any individual who reproduces sexually can only pass on 50% of its genes to any individual offspring, with even less passed on as each new generation passes. Yet sexual reproduction is the more common means of reproduction among eukaryotes and multicellular organisms. The Red Queen hypothesis has been used to explain the significance of sexual reproduction as a means to enable continual evolution and adaptation in response to coevolution with other species in an ever-changing environment. 
Another hypothesis is that sexual reproduction is primarily an adaptation for promoting accurate recombinational repair of damage in germline DNA, and that increased diversity is a byproduct of this process that may sometimes be adaptively beneficial. Gene flow Gene flow is the exchange of genes between populations and between species. It can therefore be a source of variation that is new to a population or to a species. Gene flow can be caused by the movement of individuals between separate populations of organisms, as might be caused by the movement of mice between inland and coastal populations, or the movement of pollen between heavy-metal-tolerant and heavy-metal-sensitive populations of grasses. Gene transfer between species includes the formation of hybrid organisms and horizontal gene transfer. Horizontal gene transfer is the transfer of genetic material from one organism to another organism that is not its offspring; this is most common among bacteria. In medicine, this contributes to the spread of antibiotic resistance, as when one bacteria acquires resistance genes it can rapidly transfer them to other species. Horizontal transfer of genes from bacteria to eukaryotes such as the yeast Saccharomyces cerevisiae and the adzuki bean weevil Callosobruchus chinensis has occurred. An example of larger-scale transfers are the eukaryotic bdelloid rotifers, which have received a range of genes from bacteria, fungi and plants. Viruses can also carry DNA between organisms, allowing transfer of genes even across biological domains. Large-scale gene transfer has also occurred between the ancestors of eukaryotic cells and bacteria, during the acquisition of chloroplasts and mitochondria. It is possible that eukaryotes themselves originated from horizontal gene transfers between bacteria and archaea. Epigenetics Some heritable changes cannot be explained by changes to the sequence of nucleotides in the DNA. These phenomena are classed as epigenetic inheritance systems. DNA methylation marking chromatin, self-sustaining metabolic loops, gene silencing by RNA interference and the three-dimensional conformation of proteins (such as prions) are areas where epigenetic inheritance systems have been discovered at the organismic level. Developmental biologists suggest that complex interactions in genetic networks and communication among cells can lead to heritable variations that may underlay some of the mechanics in developmental plasticity and canalisation. Heritability may also occur at even larger scales. For example, ecological inheritance through the process of niche construction is defined by the regular and repeated activities of organisms in their environment. This generates a legacy of effects that modify and feed back into the selection regime of subsequent generations. Other examples of heritability in evolution that are not under the direct control of genes include the inheritance of cultural traits and symbiogenesis. Evolutionary forces From a neo-Darwinian perspective, evolution occurs when there are changes in the frequencies of alleles within a population of interbreeding organisms, for example, the allele for black colour in a population of moths becoming more common. Mechanisms that can lead to changes in allele frequencies include natural selection, genetic drift, and mutation bias. Natural selection Evolution by natural selection is the process by which traits that enhance survival and reproduction become more common in successive generations of a population. 
It embodies three principles: Variation exists within populations of organisms with respect to morphology, physiology and behaviour (phenotypic variation). Different traits confer different rates of survival and reproduction (differential fitness). These traits can be passed from generation to generation (heritability of fitness). More offspring are produced than can possibly survive, and these conditions produce competition between organisms for survival and reproduction. Consequently, organisms with traits that give them an advantage over their competitors are more likely to pass on their traits to the next generation than those with traits that do not confer an advantage. This teleonomy is the quality whereby the process of natural selection creates and preserves traits that are seemingly fitted for the functional roles they perform. Consequences of selection include nonrandom mating and genetic hitchhiking. The central concept of natural selection is the evolutionary fitness of an organism. Fitness is measured by an organism's ability to survive and reproduce, which determines the size of its genetic contribution to the next generation. However, fitness is not the same as the total number of offspring: instead fitness is indicated by the proportion of subsequent generations that carry an organism's genes. For example, if an organism could survive well and reproduce rapidly, but its offspring were all too small and weak to survive, this organism would make little genetic contribution to future generations and would thus have low fitness. If an allele increases fitness more than the other alleles of that gene, then with each generation this allele has a higher probability of becoming common within the population. These traits are said to be "selected for." Examples of traits that can increase fitness are enhanced survival and increased fecundity. Conversely, the lower fitness caused by having a less beneficial or deleterious allele results in this allele likely becoming rarer—they are "selected against." Importantly, the fitness of an allele is not a fixed characteristic; if the environment changes, previously neutral or harmful traits may become beneficial and previously beneficial traits become harmful. However, even if the direction of selection does reverse in this way, traits that were lost in the past may not re-evolve in an identical form. However, a re-activation of dormant genes, as long as they have not been eliminated from the genome and were only suppressed perhaps for hundreds of generations, can lead to the re-occurrence of traits thought to be lost like hindlegs in dolphins, teeth in chickens, wings in wingless stick insects, tails and additional nipples in humans etc. "Throwbacks" such as these are known as atavisms. Natural selection within a population for a trait that can vary across a range of values, such as height, can be categorised into three different types. The first is directional selection, which is a shift in the average value of a trait over time—for example, organisms slowly getting taller. Secondly, disruptive selection is selection for extreme trait values and often results in two different values becoming most common, with selection against the average value. This would be when either short or tall organisms had an advantage, but not those of medium height. Finally, in stabilising selection there is selection against extreme trait values on both ends, which causes a decrease in variance around the average value and less diversity. 
This would, for example, cause organisms to eventually have a similar height. Natural selection most generally makes nature the measure against which individuals and individual traits, are more or less likely to survive. "Nature" in this sense refers to an ecosystem, that is, a system in which organisms interact with every other element, physical as well as biological, in their local environment. Eugene Odum, a founder of ecology, defined an ecosystem as: "Any unit that includes all of the organisms...in a given area interacting with the physical environment so that a flow of energy leads to clearly defined trophic structure, biotic diversity, and material cycles (i.e., exchange of materials between living and nonliving parts) within the system...." Each population within an ecosystem occupies a distinct niche, or position, with distinct relationships to other parts of the system. These relationships involve the life history of the organism, its position in the food chain and its geographic range. This broad understanding of nature enables scientists to delineate specific forces which, together, comprise natural selection. Natural selection can act at different levels of organisation, such as genes, cells, individual organisms, groups of organisms and species. Selection can act at multiple levels simultaneously. An example of selection occurring below the level of the individual organism are genes called transposons, which can replicate and spread throughout a genome. Selection at a level above the individual, such as group selection, may allow the evolution of cooperation. Genetic drift Genetic drift is the random fluctuation of allele frequencies within a population from one generation to the next. When selective forces are absent or relatively weak, allele frequencies are equally likely to drift upward or downward in each successive generation because the alleles are subject to sampling error. This drift halts when an allele eventually becomes fixed, either by disappearing from the population or by replacing the other alleles entirely. Genetic drift may therefore eliminate some alleles from a population due to chance alone. Even in the absence of selective forces, genetic drift can cause two separate populations that begin with the same genetic structure to drift apart into two divergent populations with different sets of alleles. According to the neutral theory of molecular evolution most evolutionary changes are the result of the fixation of neutral mutations by genetic drift. In this model, most genetic changes in a population are thus the result of constant mutation pressure and genetic drift. This form of the neutral theory has been debated since it does not seem to fit some genetic variation seen in nature. A better-supported version of this model is the nearly neutral theory, according to which a mutation that would be effectively neutral in a small population is not necessarily neutral in a large population. Other theories propose that genetic drift is dwarfed by other stochastic forces in evolution, such as genetic hitchhiking, also known as genetic draft. Another concept is constructive neutral evolution (CNE), which explains that complex systems can emerge and spread into a population through neutral transitions due to the principles of excess capacity, presuppression, and ratcheting, and it has been applied in areas ranging from the origins of the spliceosome to the complex interdependence of microbial communities. 
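The allele-frequency changes described in the natural selection and genetic drift sections can be made concrete with a simulation of the Wright–Fisher model, a standard population-genetics model that is not discussed explicitly in this article but matches the verbal description above: each generation is formed by random sampling from the previous one, with selection biasing the sampling probabilities. The Python sketch below is a minimal illustration under those assumptions; the population size, selection coefficient and random seed are arbitrary.

    import random

    def wright_fisher(p0=0.5, N=100, s=0.0, generations=1000, seed=1):
        # Minimal Wright-Fisher sketch: one locus, two alleles, haploid population
        # of size N. s is the selective advantage of allele A (s = 0 gives pure drift).
        # Returns the trajectory of the frequency of A until fixation or loss.
        random.seed(seed)
        p = p0
        trajectory = [p]
        for _ in range(generations):
            # Selection biases the probability that a sampled copy is allele A ...
            p_sel = p * (1 + s) / (p * (1 + s) + (1 - p))
            # ... and drift enters through random sampling of the next generation.
            copies = sum(1 for _ in range(N) if random.random() < p_sel)
            p = copies / N
            trajectory.append(p)
            if p in (0.0, 1.0):   # allele lost or fixed
                break
        return trajectory

    drift_only = wright_fisher(s=0.0)        # frequency wanders until fixation or loss
    with_selection = wright_fisher(s=0.05)   # a beneficial allele tends to fix faster
    print(len(drift_only), drift_only[-1])
    print(len(with_selection), with_selection[-1])

With s = 0 the frequency changes by sampling error alone until the allele is fixed or lost, and fixation tends to happen sooner for smaller N, which is the point taken up in the next paragraph on effective population size.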
The time it takes a neutral allele to become fixed by genetic drift depends on population size; fixation is more rapid in smaller populations. The number of individuals in a population is not critical, but instead a measure known as the effective population size. The effective population is usually smaller than the total population since it takes into account factors such as the level of inbreeding and the stage of the lifecycle in which the population is the smallest. The effective population size may not be the same for every gene in the same population. It is usually difficult to measure the relative importance of selection and neutral processes, including drift. The comparative importance of adaptive and non-adaptive forces in driving evolutionary change is an area of current research. Mutation bias Mutation bias is usually conceived as a difference in expected rates for two different kinds of mutation, e.g., transition-transversion bias, GC-AT bias, deletion-insertion bias. This is related to the idea of developmental bias. Haldane and Fisher argued that, because mutation is a weak pressure easily overcome by selection, tendencies of mutation would be ineffectual except under conditions of neutral evolution or extraordinarily high mutation rates. This opposing-pressures argument was long used to dismiss the possibility of internal tendencies in evolution, until the molecular era prompted renewed interest in neutral evolution. Noboru Sueoka and Ernst Freese proposed that systematic biases in mutation might be responsible for systematic differences in genomic GC composition between species. The identification of a GC-biased E. coli mutator strain in 1967, along with the proposal of the neutral theory, established the plausibility of mutational explanations for molecular patterns, which are now common in the molecular evolution literature. For instance, mutation biases are frequently invoked in models of codon usage. Such models also include effects of selection, following the mutation-selection-drift model, which allows both for mutation biases and differential selection based on effects on translation. Hypotheses of mutation bias have played an important role in the development of thinking about the evolution of genome composition, including isochores. Different insertion vs. deletion biases in different taxa can lead to the evolution of different genome sizes. The hypothesis of Lynch regarding genome size relies on mutational biases toward increase or decrease in genome size. However, mutational hypotheses for the evolution of composition suffered a reduction in scope when it was discovered that (1) GC-biased gene conversion makes an important contribution to composition in diploid organisms such as mammals and (2) bacterial genomes frequently have AT-biased mutation. Contemporary thinking about the role of mutation biases reflects a different theory from that of Haldane and Fisher. More recent work showed that the original "pressures" theory assumes that evolution is based on standing variation: when evolution depends on events of mutation that introduce new alleles, mutational and developmental biases in the introduction of variation (arrival biases) can impose biases on evolution without requiring neutral evolution or high mutation rates. Several studies report that the mutations implicated in adaptation reflect common mutation biases though others dispute this interpretation. Genetic hitchhiking Recombination allows alleles on the same strand of DNA to become separated. 
However, the rate of recombination is low (approximately two events per chromosome per generation). As a result, genes close together on a chromosome may not always be shuffled away from each other and genes that are close together tend to be inherited together, a phenomenon known as linkage. This tendency is measured by finding how often two alleles occur together on a single chromosome compared to expectations, which is called their linkage disequilibrium. A set of alleles that is usually inherited in a group is called a haplotype. This can be important when one allele in a particular haplotype is strongly beneficial: natural selection can drive a selective sweep that will also cause the other alleles in the haplotype to become more common in the population; this effect is called genetic hitchhiking or genetic draft. Genetic draft caused by the fact that some neutral genes are genetically linked to others that are under selection can be partially captured by an appropriate effective population size. Sexual selection A special case of natural selection is sexual selection, which is selection for any trait that increases mating success by increasing the attractiveness of an organism to potential mates. Traits that evolved through sexual selection are particularly prominent among males of several animal species. Although sexually favoured, traits such as cumbersome antlers, mating calls, large body size and bright colours often attract predation, which compromises the survival of individual males. This survival disadvantage is balanced by higher reproductive success in males that show these hard-to-fake, sexually selected traits. Natural outcomes Evolution influences every aspect of the form and behaviour of organisms. Most prominent are the specific behavioural and physical adaptations that are the outcome of natural selection. These adaptations increase fitness by aiding activities such as finding food, avoiding predators or attracting mates. Organisms can also respond to selection by cooperating with each other, usually by aiding their relatives or engaging in mutually beneficial symbiosis. In the longer term, evolution produces new species through splitting ancestral populations of organisms into new groups that cannot or will not interbreed. These outcomes of evolution are distinguished based on time scale as macroevolution versus microevolution. Macroevolution refers to evolution that occurs at or above the level of species, in particular speciation and extinction, whereas microevolution refers to smaller evolutionary changes within a species or population, in particular shifts in allele frequency and adaptation. Macroevolution is the outcome of long periods of microevolution. Thus, the distinction between micro- and macroevolution is not a fundamental one—the difference is simply the time involved. However, in macroevolution, the traits of the entire species may be important. For instance, a large amount of variation among individuals allows a species to rapidly adapt to new habitats, lessening the chance of it going extinct, while a wide geographic range increases the chance of speciation, by making it more likely that part of the population will become isolated. In this sense, microevolution and macroevolution might involve selection at different levels—with microevolution acting on genes and organisms, versus macroevolutionary processes such as species selection acting on entire species and affecting their rates of speciation and extinction. 
A common misconception is that evolution has goals, long-term plans, or an innate tendency for "progress", as expressed in beliefs such as orthogenesis and evolutionism; realistically, however, evolution has no long-term goal and does not necessarily produce greater complexity. Although complex species have evolved, they occur as a side effect of the overall number of organisms increasing, and simple forms of life still remain more common in the biosphere. For example, the overwhelming majority of species are microscopic prokaryotes, which form about half the world's biomass despite their small size and constitute the vast majority of Earth's biodiversity. Simple organisms have therefore been the dominant form of life on Earth throughout its history and continue to be the main form of life up to the present day, with complex life only appearing more diverse because it is more noticeable. Indeed, the evolution of microorganisms is particularly important to evolutionary research since their rapid reproduction allows the study of experimental evolution and the observation of evolution and adaptation in real time. Adaptation Adaptation is the process that makes organisms better suited to their habitat. Also, the term adaptation may refer to a trait that is important for an organism's survival. For example, the adaptation of horses' teeth to the grinding of grass. By using the term adaptation for the evolutionary process and adaptive trait for the product (the bodily part or function), the two senses of the word may be distinguished. Adaptations are produced by natural selection. The following definitions are due to Theodosius Dobzhansky: Adaptation is the evolutionary process whereby an organism becomes better able to live in its habitat or habitats. Adaptedness is the state of being adapted: the degree to which an organism is able to live and reproduce in a given set of habitats. An adaptive trait is an aspect of the developmental pattern of the organism which enables or enhances the probability of that organism surviving and reproducing. Adaptation may cause either the gain of a new feature, or the loss of an ancestral feature. An example that shows both types of change is bacterial adaptation to antibiotic selection, with genetic changes causing antibiotic resistance by both modifying the target of the drug, or increasing the activity of transporters that pump the drug out of the cell. Other striking examples are the bacteria Escherichia coli evolving the ability to use citric acid as a nutrient in a long-term laboratory experiment, Flavobacterium evolving a novel enzyme that allows these bacteria to grow on the by-products of nylon manufacturing, and the soil bacterium Sphingobium evolving an entirely new metabolic pathway that degrades the synthetic pesticide pentachlorophenol. An interesting but still controversial idea is that some adaptations might increase the ability of organisms to generate genetic diversity and adapt by natural selection (increasing organisms' evolvability). Adaptation occurs through the gradual modification of existing structures. Consequently, structures with similar internal organisation may have different functions in related organisms. This is the result of a single ancestral structure being adapted to function in different ways. The bones within bat wings, for example, are very similar to those in mice feet and primate hands, due to the descent of all these structures from a common mammalian ancestor. 
However, since all living organisms are related to some extent, even organs that appear to have little or no structural similarity, such as arthropod, squid and vertebrate eyes, or the limbs and wings of arthropods and vertebrates, can depend on a common set of homologous genes that control their assembly and function; this is called deep homology. During evolution, some structures may lose their original function and become vestigial structures. Such structures may have little or no function in a current species, yet have a clear function in ancestral species, or other closely related species. Examples include pseudogenes, the non-functional remains of eyes in blind cave-dwelling fish, wings in flightless birds, the presence of hip bones in whales and snakes, and sexual traits in organisms that reproduce via asexual reproduction. Examples of vestigial structures in humans include wisdom teeth, the coccyx, the vermiform appendix, and other behavioural vestiges such as goose bumps and primitive reflexes. However, many traits that appear to be simple adaptations are in fact exaptations: structures originally adapted for one function, but which coincidentally became somewhat useful for some other function in the process. One example is the African lizard Holaspis guentheri, which developed an extremely flat head for hiding in crevices, as can be seen by looking at its near relatives. However, in this species, the head has become so flattened that it assists in gliding from tree to tree—an exaptation. Within cells, molecular machines such as the bacterial flagella and protein sorting machinery evolved by the recruitment of several pre-existing proteins that previously had different functions. Another example is the recruitment of enzymes from glycolysis and xenobiotic metabolism to serve as structural proteins called crystallins within the lenses of organisms' eyes. An area of current investigation in evolutionary developmental biology is the developmental basis of adaptations and exaptations. This research addresses the origin and evolution of embryonic development and how modifications of development and developmental processes produce novel features. These studies have shown that evolution can alter development to produce new structures, such as embryonic bone structures that develop into the jaw in other animals instead forming part of the middle ear in mammals. It is also possible for structures that have been lost in evolution to reappear due to changes in developmental genes, such as a mutation in chickens causing embryos to grow teeth similar to those of crocodiles. It is now becoming clear that most alterations in the form of organisms are due to changes in a small set of conserved genes. Coevolution Interactions between organisms can produce both conflict and cooperation. When the interaction is between pairs of species, such as a pathogen and a host, or a predator and its prey, these species can develop matched sets of adaptations. Here, the evolution of one species causes adaptations in a second species. These changes in the second species then, in turn, cause new adaptations in the first species. This cycle of selection and response is called coevolution. An example is the production of tetrodotoxin in the rough-skinned newt and the evolution of tetrodotoxin resistance in its predator, the common garter snake. In this predator-prey pair, an evolutionary arms race has produced high levels of toxin in the newt and correspondingly high levels of toxin resistance in the snake. 
Cooperation Not all co-evolved interactions between species involve conflict. Many cases of mutually beneficial interactions have evolved. For instance, an extreme cooperation exists between plants and the mycorrhizal fungi that grow on their roots and aid the plant in absorbing nutrients from the soil. This is a reciprocal relationship as the plants provide the fungi with sugars from photosynthesis. Here, the fungi actually grow inside plant cells, allowing them to exchange nutrients with their hosts, while sending signals that suppress the plant immune system. Coalitions between organisms of the same species have also evolved. An extreme case is the eusociality found in social insects, such as bees, termites and ants, where sterile insects feed and guard the small number of organisms in a colony that are able to reproduce. On an even smaller scale, the somatic cells that make up the body of an animal limit their reproduction so they can maintain a stable organism, which then supports a small number of the animal's germ cells to produce offspring. Here, somatic cells respond to specific signals that instruct them whether to grow, remain as they are, or die. If cells ignore these signals and multiply inappropriately, their uncontrolled growth causes cancer. Such cooperation within species may have evolved through the process of kin selection, which is where one organism acts to help raise a relative's offspring. This activity is selected for because if the helping individual contains alleles which promote the helping activity, it is likely that its kin will also contain these alleles and thus those alleles will be passed on. Other processes that may promote cooperation include group selection, where cooperation provides benefits to a group of organisms. Speciation Speciation is the process where a species diverges into two or more descendant species. There are multiple ways to define the concept of "species". The choice of definition is dependent on the particularities of the species concerned. For example, some species concepts apply more readily toward sexually reproducing organisms while others lend themselves better toward asexual organisms. Despite the diversity of various species concepts, these various concepts can be placed into one of three broad philosophical approaches: interbreeding, ecological and phylogenetic. The Biological Species Concept (BSC) is a classic example of the interbreeding approach. Defined by evolutionary biologist Ernst Mayr in 1942, the BSC states that "species are groups of actually or potentially interbreeding natural populations, which are reproductively isolated from other such groups." Despite its wide and long-term use, the BSC like other species concepts is not without controversy, for example, because genetic recombination among prokaryotes is not an intrinsic aspect of reproduction; this is called the species problem. Some researchers have attempted a unifying monistic definition of species, while others adopt a pluralistic approach and suggest that there may be different ways to logically interpret the definition of a species. Barriers to reproduction between two diverging sexual populations are required for the populations to become new species. Gene flow may slow this process by spreading the new genetic variants also to the other populations. Depending on how far two species have diverged since their most recent common ancestor, it may still be possible for them to produce offspring, as with horses and donkeys mating to produce mules. 
Such hybrids are generally infertile. In this case, closely related species may regularly interbreed, but hybrids will be selected against and the species will remain distinct. However, viable hybrids are occasionally formed and these new species can either have properties intermediate between their parent species, or possess a totally new phenotype. The importance of hybridisation in producing new species of animals is unclear, although cases have been seen in many types of animals, with the gray tree frog being a particularly well-studied example. Speciation has been observed multiple times under both controlled laboratory conditions and in nature. In sexually reproducing organisms, speciation results from reproductive isolation followed by genealogical divergence. There are four primary geographic modes of speciation. The most common in animals is allopatric speciation, which occurs in populations initially isolated geographically, such as by habitat fragmentation or migration. Selection under these conditions can produce very rapid changes in the appearance and behaviour of organisms. As selection and drift act independently on populations isolated from the rest of their species, separation may eventually produce organisms that cannot interbreed. The second mode of speciation is peripatric speciation, which occurs when small populations of organisms become isolated in a new environment. This differs from allopatric speciation in that the isolated populations are numerically much smaller than the parental population. Here, the founder effect causes rapid speciation after an increase in inbreeding increases selection on homozygotes, leading to rapid genetic change. The third mode is parapatric speciation. This is similar to peripatric speciation in that a small population enters a new habitat, but differs in that there is no physical separation between these two populations. Instead, speciation results from the evolution of mechanisms that reduce gene flow between the two populations. Generally this occurs when there has been a drastic change in the environment within the parental species' habitat. One example is the grass Anthoxanthum odoratum, which can undergo parapatric speciation in response to localised metal pollution from mines. Here, plants evolve that have resistance to high levels of metals in the soil. Selection against interbreeding with the metal-sensitive parental population produced a gradual change in the flowering time of the metal-resistant plants, which eventually produced complete reproductive isolation. Selection against hybrids between the two populations may cause reinforcement, which is the evolution of traits that promote mating within a species, as well as character displacement, which is when two species become more distinct in appearance. Finally, in sympatric speciation species diverge without geographic isolation or changes in habitat. This form is rare since even a small amount of gene flow may remove genetic differences between parts of a population. Generally, sympatric speciation in animals requires the evolution of both genetic differences and nonrandom mating, to allow reproductive isolation to evolve. One type of sympatric speciation involves crossbreeding of two related species to produce a new hybrid species. This is not common in animals as animal hybrids are usually sterile. This is because during meiosis the homologous chromosomes from each parent are from different species and cannot successfully pair. 
However, it is more common in plants because plants often double their number of chromosomes, to form polyploids. This allows the chromosomes from each parental species to form matching pairs during meiosis, since each parent's chromosomes are represented by a pair already. An example of such a speciation event is when the plant species Arabidopsis thaliana and Arabidopsis arenosa crossbred to give the new species Arabidopsis suecica. This happened about 20,000 years ago, and the speciation process has been repeated in the laboratory, which allows the study of the genetic mechanisms involved in this process. Indeed, chromosome doubling within a species may be a common cause of reproductive isolation, as half the doubled chromosomes will be unmatched when breeding with undoubled organisms. Speciation events are important in the theory of punctuated equilibrium, which accounts for the pattern in the fossil record of short "bursts" of evolution interspersed with relatively long periods of stasis, where species remain relatively unchanged. In this theory, speciation and rapid evolution are linked, with natural selection and genetic drift acting most strongly on organisms undergoing speciation in novel habitats or small populations. As a result, the periods of stasis in the fossil record correspond to the parental population and the organisms undergoing speciation and rapid evolution are found in small populations or geographically restricted habitats and therefore rarely being preserved as fossils. Extinction Extinction is the disappearance of an entire species. Extinction is not an unusual event, as species regularly appear through speciation and disappear through extinction. Nearly all animal and plant species that have lived on Earth are now extinct, and extinction appears to be the ultimate fate of all species. These extinctions have happened continuously throughout the history of life, although the rate of extinction spikes in occasional mass extinction events. The Cretaceous–Paleogene extinction event, during which the non-avian dinosaurs became extinct, is the most well-known, but the earlier Permian–Triassic extinction event was even more severe, with approximately 96% of all marine species driven to extinction. The Holocene extinction event is an ongoing mass extinction associated with humanity's expansion across the globe over the past few thousand years. Present-day extinction rates are 100–1000 times greater than the background rate and up to 30% of current species may be extinct by the mid 21st century. Human activities are now the primary cause of the ongoing extinction event; global warming may further accelerate it in the future. Despite the estimated extinction of more than 99% of all species that ever lived on Earth, about 1 trillion species are estimated to be on Earth currently with only one-thousandth of 1% described. The role of extinction in evolution is not very well understood and may depend on which type of extinction is considered. The causes of the continuous "low-level" extinction events, which form the majority of extinctions, may be the result of competition between species for limited resources (the competitive exclusion principle). If one species can out-compete another, this could produce species selection, with the fitter species surviving and the other species being driven to extinction. 
The intermittent mass extinctions are also important, but instead of acting as a selective force, they drastically reduce diversity in a nonspecific manner and promote bursts of rapid evolution and speciation in survivors. Applications Concepts and models used in evolutionary biology, such as natural selection, have many applications. Artificial selection is the intentional selection of traits in a population of organisms. This has been used for thousands of years in the domestication of plants and animals. More recently, such selection has become a vital part of genetic engineering, with selectable markers such as antibiotic resistance genes being used to manipulate DNA. Proteins with valuable properties have evolved by repeated rounds of mutation and selection (for example modified enzymes and new antibodies) in a process called directed evolution. Understanding the changes that have occurred during an organism's evolution can reveal the genes needed to construct parts of the body, genes which may be involved in human genetic disorders. For example, the Mexican tetra is an albino cavefish that lost its eyesight during evolution. Breeding together different populations of this blind fish produced some offspring with functional eyes, since different mutations had occurred in the isolated populations that had evolved in different caves. This helped identify genes required for vision and pigmentation. Evolutionary theory has many applications in medicine. Many human diseases are not static phenomena, but capable of evolution. Viruses, bacteria, fungi and cancers evolve to be resistant to host immune defences, as well as to pharmaceutical drugs. These same problems occur in agriculture with pesticide and herbicide resistance. It is possible that we are facing the end of the effective life of most of available antibiotics and predicting the evolution and evolvability of our pathogens and devising strategies to slow or circumvent it is requiring deeper knowledge of the complex forces driving evolution at the molecular level. In computer science, simulations of evolution using evolutionary algorithms and artificial life started in the 1960s and were extended with simulation of artificial selection. Artificial evolution became a widely recognised optimisation method as a result of the work of Ingo Rechenberg in the 1960s. He used evolution strategies to solve complex engineering problems. Genetic algorithms in particular became popular through the writing of John Henry Holland. Practical applications also include automatic evolution of computer programmes. Evolutionary algorithms are now used to solve multi-dimensional problems more efficiently than software produced by human designers and also to optimise the design of systems. Evolutionary history of life Origin of life The Earth is about 4.54 billion years old. The earliest undisputed evidence of life on Earth dates from at least 3.5 billion years ago, during the Eoarchean Era after a geological crust started to solidify following the earlier molten Hadean Eon. Microbial mat fossils have been found in 3.48 billion-year-old sandstone in Western Australia. Other early physical evidence of a biogenic substance is graphite in 3.7 billion-year-old metasedimentary rocks discovered in Western Greenland as well as "remains of biotic life" found in 4.1 billion-year-old rocks in Western Australia. Commenting on the Australian findings, Stephen Blair Hedges wrote: "If life arose relatively quickly on Earth, then it could be common in the universe." 
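To make the evolutionary algorithms mentioned under Applications concrete, here is a minimal genetic-algorithm sketch; the bit-string encoding, fitness function and parameter values are arbitrary choices for illustration rather than any particular published method.

```python
import random

# Minimal genetic algorithm: evolve bit strings toward the all-ones string.
GENES, POP, GENERATIONS, MUTATION = 20, 30, 40, 0.02

def fitness(ind):
    return sum(ind)  # number of 1-bits

def crossover(a, b):
    cut = random.randrange(1, GENES)  # single-point crossover
    return a[:cut] + b[cut:]

def mutate(ind):
    return [g ^ 1 if random.random() < MUTATION else g for g in ind]

population = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)  # selection: fitter half become parents
    parents = population[: POP // 2]
    children = [mutate(crossover(*random.sample(parents, 2)))
                for _ in range(POP - len(parents))]
    population = parents + children

print("best fitness:", max(fitness(ind) for ind in population))
```

Despite its simplicity, the same select-recombine-mutate loop underlies the evolution strategies and genetic algorithms referred to above.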
In July 2016, scientists reported identifying a set of 355 genes from the last universal common ancestor (LUCA) of all organisms living on Earth. More than 99% of all species, amounting to over five billion species, that ever lived on Earth are estimated to be extinct. Estimates on the number of Earth's current species range from 10 million to 14 million, of which about 1.9 million are estimated to have been named and 1.6 million documented in a central database to date, leaving at least 80% not yet described. Highly energetic chemistry is thought to have produced a self-replicating molecule around 4 billion years ago, and half a billion years later the last common ancestor of all life existed. The current scientific consensus is that the complex biochemistry that makes up life came from simpler chemical reactions. The beginning of life may have included self-replicating molecules such as RNA and the assembly of simple cells. Common descent All organisms on Earth are descended from a common ancestor or ancestral gene pool. Current species are a stage in the process of evolution, with their diversity the product of a long series of speciation and extinction events. The common descent of organisms was first deduced from four simple facts about organisms: First, they have geographic distributions that cannot be explained by local adaptation. Second, the diversity of life is not a set of completely unique organisms, but organisms that share morphological similarities. Third, vestigial traits with no clear purpose resemble functional ancestral traits. Fourth, organisms can be classified using these similarities into a hierarchy of nested groups, similar to a family tree. Due to horizontal gene transfer, this "tree of life" may be more complicated than a simple branching tree, since some genes have spread independently between distantly related species. To solve this problem and others, some authors prefer to use the "Coral of life" as a metaphor or a mathematical model to illustrate the evolution of life. This view dates back to an idea briefly mentioned by Darwin but later abandoned. Past species have also left records of their evolutionary history. Fossils, along with the comparative anatomy of present-day organisms, constitute the morphological, or anatomical, record. By comparing the anatomies of both modern and extinct species, palaeontologists can infer the lineages of those species. However, this approach is most successful for organisms that had hard body parts, such as shells, bones or teeth. Further, as prokaryotes such as bacteria and archaea share a limited set of common morphologies, their fossils do not provide information on their ancestry. More recently, evidence for common descent has come from the study of biochemical similarities between organisms. For example, all living cells use the same basic set of nucleotides and amino acids. The development of molecular genetics has revealed the record of evolution left in organisms' genomes: dating when species diverged through the molecular clock produced by mutations. For example, these DNA sequence comparisons have revealed that humans and chimpanzees share 98% of their genomes and analysing the few areas where they differ helps shed light on when the common ancestor of these species existed. Evolution of life Prokaryotes inhabited the Earth from approximately 3–4 billion years ago. No obvious changes in morphology or cellular organisation occurred in these organisms over the next few billion years. 
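The molecular-clock dating mentioned above amounts to a simple calculation once a substitution rate is assumed; the numbers below are placeholders for illustration, not measured values.

```python
# Rough molecular-clock estimate of divergence time: t = d / (2 * r)
d = 0.012        # assumed pairwise sequence divergence (fraction of differing sites)
r = 1.0e-9       # assumed substitution rate per site per year
t = d / (2 * r)  # factor 2: both lineages accumulate changes since the split
print(f"estimated divergence time: {t:.1e} years")  # 6.0e+06 years with these inputs
```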
The eukaryotic cells emerged between 1.6 and 2.7 billion years ago. The next major change in cell structure came when bacteria were engulfed by eukaryotic cells, in a cooperative association called endosymbiosis. The engulfed bacteria and the host cell then underwent coevolution, with the bacteria evolving into either mitochondria or hydrogenosomes. Another engulfment of cyanobacterial-like organisms led to the formation of chloroplasts in algae and plants. The history of life was that of the unicellular eukaryotes, prokaryotes and archaea until about 610 million years ago when multicellular organisms began to appear in the oceans in the Ediacaran period. The evolution of multicellularity occurred in multiple independent events, in organisms as diverse as sponges, brown algae, cyanobacteria, slime moulds and myxobacteria. In January 2016, scientists reported that, about 800 million years ago, a minor genetic change in a single molecule called GK-PID may have allowed organisms to go from a single cell organism to one of many cells. Soon after the emergence of these first multicellular organisms, a remarkable amount of biological diversity appeared over approximately 10 million years, in an event called the Cambrian explosion. Here, the majority of types of modern animals appeared in the fossil record, as well as unique lineages that subsequently became extinct. Various triggers for the Cambrian explosion have been proposed, including the accumulation of oxygen in the atmosphere from photosynthesis. About 500 million years ago, plants and fungi colonised the land and were soon followed by arthropods and other animals. Insects were particularly successful and even today make up the majority of animal species. Amphibians first appeared around 364 million years ago, followed by early amniotes and birds around 155 million years ago (both from "reptile"-like lineages), mammals around 129 million years ago, Homininae around 10 million years ago and modern humans around 250,000 years ago. However, despite the evolution of these large animals, smaller organisms similar to the types that evolved early in this process continue to be highly successful and dominate the Earth, with the majority of both biomass and species being prokaryotes. History of evolutionary thought Classical antiquity The proposal that one type of organism could descend from another type goes back to some of the first pre-Socratic Greek philosophers, such as Anaximander and Empedocles. Such proposals survived into Roman times. The poet and philosopher Lucretius followed Empedocles in his masterwork De rerum natura. Middle Ages In contrast to these materialistic views, Aristotelianism had considered all natural things as actualisations of fixed natural possibilities, known as forms. This became part of a medieval teleological understanding of nature in which all things have an intended role to play in a divine cosmic order. Variations of this idea became the standard understanding of the Middle Ages and were integrated into Christian learning, but Aristotle did not demand that real types of organisms always correspond one-for-one with exact metaphysical forms and specifically gave examples of how new types of living things could come to be. A number of Arab Muslim scholars wrote about evolution, most notably Ibn Khaldun, who wrote the book Muqaddimah in 1377 AD, in which he asserted that humans developed from "the world of the monkeys", in a process by which "species become more numerous". 
Pre-Darwinian The "New Science" of the 17th century rejected the Aristotelian approach. It sought to explain natural phenomena in terms of physical laws that were the same for all visible things and that did not require the existence of any fixed natural categories or divine cosmic order. However, this new approach was slow to take root in the biological sciences: the last bastion of the concept of fixed natural types. John Ray applied one of the previously more general terms for fixed natural types, "species", to plant and animal types, but he strictly identified each type of living thing as a species and proposed that each species could be defined by the features that perpetuated themselves generation after generation. The biological classification introduced by Carl Linnaeus in 1735 explicitly recognised the hierarchical nature of species relationships, but still viewed species as fixed according to a divine plan. Other naturalists of this time speculated on the evolutionary change of species over time according to natural laws. In 1751, Pierre Louis Maupertuis wrote of natural modifications occurring during reproduction and accumulating over many generations to produce new species. Georges-Louis Leclerc, Comte de Buffon, suggested that species could degenerate into different organisms, and Erasmus Darwin proposed that all warm-blooded animals could have descended from a single microorganism (or "filament"). The first full-fledged evolutionary scheme was Jean-Baptiste Lamarck's "transmutation" theory of 1809, which envisaged spontaneous generation continually producing simple forms of life that developed greater complexity in parallel lineages with an inherent progressive tendency, and postulated that on a local level, these lineages adapted to the environment by inheriting changes caused by their use or disuse in parents. (The latter process was later called Lamarckism.) These ideas were condemned by established naturalists as speculation lacking empirical support. In particular, Georges Cuvier insisted that species were unrelated and fixed, their similarities reflecting divine design for functional needs. In the meantime, Ray's ideas of benevolent design had been developed by William Paley into the Natural Theology or Evidences of the Existence and Attributes of the Deity (1802), which proposed complex adaptations as evidence of divine design and which was admired by Charles Darwin. Darwinian revolution The crucial break from the concept of constant typological classes or types in biology came with the theory of evolution through natural selection, which was formulated by Charles Darwin and Alfred Wallace in terms of variable populations. Darwin used the expression "descent with modification" rather than "evolution". Partly influenced by An Essay on the Principle of Population (1798) by Thomas Robert Malthus, Darwin noted that population growth would lead to a "struggle for existence" in which favourable variations prevailed as others perished. In each generation, many offspring fail to survive to an age of reproduction because of limited resources. This could explain the diversity of plants and animals from a common ancestry through the working of natural laws in the same way for all types of organism. Darwin developed his theory of "natural selection" from 1838 onwards and was writing up his "big book" on the subject when Alfred Russel Wallace sent him a version of virtually the same theory in 1858. 
Their separate papers were presented together at an 1858 meeting of the Linnean Society of London. At the end of 1859, Darwin's publication of his "abstract" as On the Origin of Species explained natural selection in detail and in a way that led to an increasingly wide acceptance of Darwin's concepts of evolution at the expense of alternative theories. Thomas Henry Huxley applied Darwin's ideas to humans, using paleontology and comparative anatomy to provide strong evidence that humans and apes shared a common ancestry. Some were disturbed by this since it implied that humans did not have a special place in the universe. Pangenesis and heredity The mechanisms of reproductive heritability and the origin of new traits remained a mystery. Towards this end, Darwin developed his provisional theory of pangenesis. In 1865, Gregor Mendel reported that traits were inherited in a predictable manner through the independent assortment and segregation of elements (later known as genes). Mendel's laws of inheritance eventually supplanted most of Darwin's pangenesis theory. August Weismann made the important distinction between germ cells that give rise to gametes (such as sperm and egg cells) and the somatic cells of the body, demonstrating that heredity passes through the germ line only. Hugo de Vries connected Darwin's pangenesis theory to Weismann's germ/soma cell distinction and proposed that Darwin's pangenes were concentrated in the cell nucleus and when expressed they could move into the cytoplasm to change the cell's structure. De Vries was also one of the researchers who made Mendel's work well known, believing that Mendelian traits corresponded to the transfer of heritable variations along the germline. To explain how new variants originate, de Vries developed a mutation theory that led to a temporary rift between those who accepted Darwinian evolution and biometricians who allied with de Vries. In the 1930s, pioneers in the field of population genetics, such as Ronald Fisher, Sewall Wright and J. B. S. Haldane set the foundations of evolution onto a robust statistical philosophy. The false contradiction between Darwin's theory, genetic mutations, and Mendelian inheritance was thus reconciled. The 'modern synthesis' In the 1920s and 1930s, the modern synthesis connected natural selection and population genetics, based on Mendelian inheritance, into a unified theory that included random genetic drift, mutation, and gene flow. This new version of evolutionary theory focused on changes in allele frequencies in population. It explained patterns observed across species in populations, through fossil transitions in palaeontology. Further syntheses Since then, further syntheses have extended evolution's explanatory power in the light of numerous discoveries, to cover biological phenomena across the whole of the biological hierarchy from genes to populations. The publication of the structure of DNA by James Watson and Francis Crick with contribution of Rosalind Franklin in 1953 demonstrated a physical mechanism for inheritance. Molecular biology improved understanding of the relationship between genotype and phenotype. Advances were also made in phylogenetic systematics, mapping the transition of traits into a comparative and testable framework through the publication and use of evolutionary trees. 
In 1973, evolutionary biologist Theodosius Dobzhansky penned that "nothing in biology makes sense except in the light of evolution", because it has brought to light the relations of what first seemed disjointed facts in natural history into a coherent explanatory body of knowledge that describes and predicts many observable facts about life on this planet. One extension, known as evolutionary developmental biology and informally called "evo-devo", emphasises how changes between generations (evolution) act on patterns of change within individual organisms (development). Since the beginning of the 21st century, some biologists have argued for an extended evolutionary synthesis, which would account for the effects of non-genetic inheritance modes, such as epigenetics, parental effects, ecological inheritance and cultural inheritance, and evolvability. Social and cultural responses In the 19th century, particularly after the publication of On the Origin of Species in 1859, the idea that life had evolved was an active source of academic debate centred on the philosophical, social and religious implications of evolution. Today, the modern evolutionary synthesis is accepted by a vast majority of scientists. However, evolution remains a contentious concept for some theists. While various religions and denominations have reconciled their beliefs with evolution through concepts such as theistic evolution, there are creationists who believe that evolution is contradicted by the creation myths found in their religions and who raise various objections to evolution. As had been demonstrated by responses to the publication of Vestiges of the Natural History of Creation in 1844, the most controversial aspect of evolutionary biology is the implication of human evolution that humans share common ancestry with apes and that the mental and moral faculties of humanity have the same types of natural causes as other inherited traits in animals. In some countries, notably the United States, these tensions between science and religion have fuelled the current creation–evolution controversy, a religious conflict focusing on politics and public education. While other scientific fields such as cosmology and Earth science also conflict with literal interpretations of many religious texts, evolutionary biology experiences significantly more opposition from religious literalists. The teaching of evolution in American secondary school biology classes was uncommon in most of the first half of the 20th century. The Scopes Trial decision of 1925 caused the subject to become very rare in American secondary biology textbooks for a generation, but it was gradually re-introduced later and became legally protected with the 1968 Epperson v. Arkansas decision. Since then, the competing religious belief of creationism was legally disallowed in secondary school curricula in various decisions in the 1970s and 1980s, but it returned in pseudoscientific form as intelligent design (ID), to be excluded once again in the 2005 Kitzmiller v. Dover Area School District case. The debate over Darwin's ideas did not generate significant controversy in China. See also Chronospecies References Bibliography The notebook is available from The Complete Work of Charles Darwin Online . Retrieved 2019-10-09. The book is available from The Complete Work of Charles Darwin Online . Retrieved 2014-11-21. "Proceedings of a symposium held at the American Museum of Natural History in New York, 2002." . Retrieved 2014-11-29. 
"Papers from the Symposium on the Limits of Reductionism in Biology, held at the Novartis Foundation, London, May 13–15, 1997." "Based on a conference held in Bellagio, Italy, June 25–30, 1989" Further reading Introductory reading American version. Advanced reading External links General information Adobe Flash required. "History of Evolution in the United States". Salon. Retrieved 2021-08-24. Experiments Online lectures Biology theories
Coulomb's law
Coulomb's inverse-square law, or simply Coulomb's law, is an experimental law of physics that calculates the amount of force between two electrically charged particles at rest. This electric force is conventionally called the electrostatic force or Coulomb force. Although the law was known earlier, it was first published in 1785 by French physicist Charles-Augustin de Coulomb. Coulomb's law was essential to the development of the theory of electromagnetism and maybe even its starting point, as it allowed meaningful discussions of the amount of electric charge in a particle. The law states that the magnitude, or absolute value, of the attractive or repulsive electrostatic force between two point charges is directly proportional to the product of the magnitudes of their charges and inversely proportional to the square of the distance between them. Coulomb discovered that bodies with like electrical charges repel: Coulomb also showed that oppositely charged bodies attract according to an inverse-square law: Here, is a constant, and are the quantities of each charge, and the scalar r is the distance between the charges. The force is along the straight line joining the two charges. If the charges have the same sign, the electrostatic force between them makes them repel; if they have different signs, the force between them makes them attract. Being an inverse-square law, the law is similar to Isaac Newton's inverse-square law of universal gravitation, but gravitational forces always make things attract, while electrostatic forces make charges attract or repel. Also, gravitational forces are much weaker than electrostatic forces. Coulomb's law can be used to derive Gauss's law, and vice versa. In the case of a single point charge at rest, the two laws are equivalent, expressing the same physical law in different ways. The law has been tested extensively, and observations have upheld the law on the scale from 10−16 m to 108 m. History Ancient cultures around the Mediterranean knew that certain objects, such as rods of amber, could be rubbed with cat's fur to attract light objects like feathers and pieces of paper. Thales of Miletus made the first recorded description of static electricity around 600 BC, when he noticed that friction could make a piece of amber attract small objects. In 1600, English scientist William Gilbert made a careful study of electricity and magnetism, distinguishing the lodestone effect from static electricity produced by rubbing amber. He coined the Neo-Latin word electricus ("of amber" or "like amber", from [elektron], the Greek word for "amber") to refer to the property of attracting small objects after being rubbed. This association gave rise to the English words "electric" and "electricity", which made their first appearance in print in Thomas Browne's Pseudodoxia Epidemica of 1646. Early investigators of the 18th century who suspected that the electrical force diminished with distance as the force of gravity did (i.e., as the inverse square of the distance) included Daniel Bernoulli and Alessandro Volta, both of whom measured the force between plates of a capacitor, and Franz Aepinus who supposed the inverse-square law in 1758. Based on experiments with electrically charged spheres, Joseph Priestley of England was among the first to propose that electrical force followed an inverse-square law, similar to Newton's law of universal gravitation. However, he did not generalize or elaborate on this. 
In 1767, he conjectured that the force between charges varied as the inverse square of the distance. In 1769, Scottish physicist John Robison announced that, according to his measurements, the force of repulsion between two spheres with charges of the same sign varied as . In the early 1770s, the dependence of the force between charged bodies upon both distance and charge had already been discovered, but not published, by Henry Cavendish of England. In his notes, Cavendish wrote, "We may therefore conclude that the electric attraction and repulsion must be inversely as some power of the distance between that of the and that of the , and there is no reason to think that it differs at all from the inverse duplicate ratio". Finally, in 1785, the French physicist Charles-Augustin de Coulomb published his first three reports of electricity and magnetism where he stated his law. This publication was essential to the development of the theory of electromagnetism. He used a torsion balance to study the repulsion and attraction forces of charged particles, and determined that the magnitude of the electric force between two point charges is directly proportional to the product of the charges and inversely proportional to the square of the distance between them. The torsion balance consists of a bar suspended from its middle by a thin fiber. The fiber acts as a very weak torsion spring. In Coulomb's experiment, the torsion balance was an insulating rod with a metal-coated ball attached to one end, suspended by a silk thread. The ball was charged with a known charge of static electricity, and a second charged ball of the same polarity was brought near it. The two charged balls repelled one another, twisting the fiber through a certain angle, which could be read from a scale on the instrument. By knowing how much force it took to twist the fiber through a given angle, Coulomb was able to calculate the force between the balls and derive his inverse-square proportionality law. Mathematical form Coulomb's law states that the electrostatic force experienced by a charge, at position , in the vicinity of another charge, at position , in a vacuum is equal to where is the displacement vector between the charges, a unit vector pointing from to and the electric constant. Here, is used for the vector notation. The electrostatic force experienced by , according to Newton's third law, is If both charges have the same sign (like charges) then the product is positive and the direction of the force on is given by ; the charges repel each other. If the charges have opposite signs then the product is negative and the direction of the force on is the charges attract each other. System of discrete charges The law of superposition allows Coulomb's law to be extended to include any number of point charges. The force acting on a point charge due to a system of point charges is simply the vector addition of the individual forces acting alone on that point charge due to each one of the charges. The resulting force vector is parallel to the electric field vector at that point, with that point charge removed. Force on a small charge at position , due to a system of discrete charges in vacuum is where is the magnitude of the th charge, is the vector from its position to and is a unit vector in the direction of . Continuous charge distribution In this case, the principle of linear superposition is also used. 
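In standard SI notation, the scalar and vector statements of the law discussed above, together with the superposition sum for a system of point charges, read:

```latex
% Magnitude of the force between two point charges separated by a distance r
|\mathbf{F}| = \frac{1}{4\pi\varepsilon_0}\,\frac{|q_1 q_2|}{r^{2}}

% Vector form: force on q_1 at position r_1 due to q_2 at position r_2
\mathbf{F}_1 = \frac{q_1 q_2}{4\pi\varepsilon_0}\,
               \frac{\mathbf{r}_1 - \mathbf{r}_2}{|\mathbf{r}_1 - \mathbf{r}_2|^{3}},
\qquad \mathbf{F}_2 = -\mathbf{F}_1

% Superposition: force on a small charge q at r due to N point charges q_i at r_i
\mathbf{F}(\mathbf{r}) = \frac{q}{4\pi\varepsilon_0}
  \sum_{i=1}^{N} q_i\,\frac{\mathbf{r} - \mathbf{r}_i}{|\mathbf{r} - \mathbf{r}_i|^{3}}
```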
For a continuous charge distribution, an integral over the region containing the charge is equivalent to an infinite summation, treating each infinitesimal element of space as a point charge . The distribution of charge is usually linear, surface or volumetric. For a linear charge distribution (a good approximation for charge in a wire) where gives the charge per unit length at position , and is an infinitesimal element of length, For a surface charge distribution (a good approximation for charge on a plate in a parallel plate capacitor) where gives the charge per unit area at position , and is an infinitesimal element of area, For a volume charge distribution (such as charge within a bulk metal) where gives the charge per unit volume at position , and is an infinitesimal element of volume, The force on a small test charge at position in vacuum is given by the integral over the distribution of charge The "continuous charge" version of Coulomb's law is never supposed to be applied to locations for which because that location would directly overlap with the location of a charged particle (e.g. electron or proton) which is not a valid location to analyze the electric field or potential classically. Charge is always discrete in reality, and the "continuous charge" assumption is just an approximation that is not supposed to allow to be analyzed. Coulomb constant The constant of proportionality, , in Coulomb's law: is a consequence of historical choices for units. The constant is the vacuum electric permittivity. Using the CODATA 2018 recommended value for , the Coulomb constant is Limitations There are three conditions to be fulfilled for the validity of Coulomb's inverse square law: The charges must have a spherically symmetric distribution (e.g. be point charges, or a charged metal sphere). The charges must not overlap (e.g. they must be distinct point charges). The charges must be stationary with respect to a nonaccelerating frame of reference. The last of these is known as the electrostatic approximation. When movement takes place, an extra factor is introduced, which alters the force produced on the two objects. This extra part of the force is called the magnetic force. For slow movement, the magnetic force is minimal and Coulomb's law can still be considered approximately correct. A more accurate approximation in this case is, however, the Weber force. When the charges are moving more quickly in relation to each other or accelerations occur, Maxwell's equations and Einstein's theory of relativity must be taken into consideration. Electric field An electric field is a vector field that associates to each point in space the Coulomb force experienced by a unit test charge. The strength and direction of the Coulomb force on a charge depends on the electric field established by other charges that it finds itself in, such that . In the simplest case, the field is considered to be generated solely by a single source point charge. More generally, the field can be generated by a distribution of charges who contribute to the overall by the principle of superposition. If the field is generated by a positive source point charge , the direction of the electric field points along lines directed radially outwards from it, i.e. in the direction that a positive point test charge would move if placed in the field. For a negative point source charge, the direction is radially inwards. The magnitude of the electric field can be derived from Coulomb's law. 
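For the continuous distributions described above, the sum becomes an integral; in the volume case, with charge density ρ, the force on a small test charge q at position r is

```latex
\mathbf{F}(\mathbf{r}) = \frac{q}{4\pi\varepsilon_0}
  \int \rho(\mathbf{r}')\,
  \frac{\mathbf{r} - \mathbf{r}'}{|\mathbf{r} - \mathbf{r}'|^{3}}\,\mathrm{d}^{3}r'
```

and the line and surface cases are obtained by replacing ρ d³r' with λ dℓ' or σ dA'. The Coulomb constant appearing throughout is k_e = 1/(4πε₀) ≈ 8.988 × 10⁹ N·m²·C⁻².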
By choosing one of the point charges to be the source, and the other to be the test charge, it follows from Coulomb's law that the magnitude of the electric field created by a single source point charge Q at a certain distance from it r in vacuum is given by A system of n discrete charges stationed at produces an electric field whose magnitude and direction is, by superposition Atomic forces Coulomb's law holds even within atoms, correctly describing the force between the positively charged atomic nucleus and each of the negatively charged electrons. This simple law also correctly accounts for the forces that bind atoms together to form molecules and for the forces that bind atoms and molecules together to form solids and liquids. Generally, as the distance between ions increases, the force of attraction, and binding energy, approach zero and ionic bonding is less favorable. As the magnitude of opposing charges increases, energy increases and ionic bonding is more favorable. Relation to Gauss's law Deriving Gauss's law from Coulomb's law Deriving Coulomb's law from Gauss's law Strictly speaking, Coulomb's law cannot be derived from Gauss's law alone, since Gauss's law does not give any information regarding the curl of (see Helmholtz decomposition and Faraday's law). However, Coulomb's law can be proven from Gauss's law if it is assumed, in addition, that the electric field from a point charge is spherically symmetric (this assumption, like Coulomb's law itself, is exactly true if the charge is stationary, and approximately true if the charge is in motion). In relativity Coulomb's law can be used to gain insight into the form of the magnetic field generated by moving charges since by special relativity, in certain cases the magnetic field can be shown to be a transformation of forces caused by the electric field. When no acceleration is involved in a particle's history, Coulomb's law can be assumed on any test particle in its own inertial frame, supported by symmetry arguments in solving Maxwell's equation, shown above. Coulomb's law can be expanded to moving test particles to be of the same form. This assumption is supported by Lorentz force law which, unlike Coulomb's law is not limited to stationary test charges. Considering the charge to be invariant of observer, the electric and magnetic fields of a uniformly moving point charge can hence be derived by the Lorentz transformation of the four force on the test charge in the charge's frame of reference given by Coulomb's law and attributing magnetic and electric fields by their definitions given by the form of Lorentz force. The fields hence found for uniformly moving point charges are given by:where is the charge of the point source, is the position vector from the point source to the point in space, is the velocity vector of the charged particle, is the ratio of speed of the charged particle divided by the speed of light and is the angle between and . This form of solutions need not obey Newton's third law as is the case in the framework of special relativity (yet without violating relativistic-energy momentum conservation). Note that the expression for electric field reduces to Coulomb's law for non-relativistic speeds of the point charge and that the magnetic field in non-relativistic limit (approximating ) can be applied to electric currents to get the Biot–Savart law. 
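As a numerical illustration of the electrostatic field expressions above, the short script below evaluates the field of a single point charge and then sums vector contributions from several charges; the charge values and positions are arbitrary.

```python
import math

EPS0 = 8.8541878128e-12       # vacuum permittivity, F/m
K = 1 / (4 * math.pi * EPS0)  # Coulomb constant, ~8.99e9 N m^2 / C^2

def field_of_point_charge(q, source, point):
    """Electric field vector at `point` due to a charge q at `source` (SI units)."""
    rx, ry, rz = (p - s for p, s in zip(point, source))
    r = math.sqrt(rx * rx + ry * ry + rz * rz)
    scale = K * q / r ** 3
    return (scale * rx, scale * ry, scale * rz)

# A 1 nC charge at the origin, field evaluated 10 cm away on the x-axis: ~899 V/m.
print(field_of_point_charge(1e-9, (0.0, 0.0, 0.0), (0.1, 0.0, 0.0)))

# Superposition over an arbitrary set of charges at the same field point.
charges = [(1e-9, (0.0, 0.0, 0.0)), (-1e-9, (0.05, 0.0, 0.0))]
E = [0.0, 0.0, 0.0]
for q, pos in charges:
    E = [a + b for a, b in zip(E, field_of_point_charge(q, pos, (0.1, 0.0, 0.0)))]
print(E)
```

The relativistic field expressions quoted above reduce to these electrostatic values in the limit of small velocities.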
These solutions, when expressed in retarded time also correspond to the general solution of Maxwell's equations given by solutions of Liénard–Wiechert potential, due to the validity of Coulomb's law within its specific range of application. Also note that the spherical symmetry for gauss law on stationary charges is not valid for moving charges owing to the breaking of symmetry by the specification of direction of velocity in the problem. Agreement with Maxwell's equations can also be manually verified for the above two equations. Coulomb potential Quantum field theory The Coulomb potential admits continuum states (with E > 0), describing electron-proton scattering, as well as discrete bound states, representing the hydrogen atom. It can also be derived within the non-relativistic limit between two charged particles, as follows: Under Born approximation, in non-relativistic quantum mechanics, the scattering amplitude is: This is to be compared to the: where we look at the (connected) S-matrix entry for two electrons scattering off each other, treating one with "fixed" momentum as the source of the potential, and the other scattering off that potential. Using the Feynman rules to compute the S-matrix element, we obtain in the non-relativistic limit with Comparing with the QM scattering, we have to discard the as they arise due to differing normalizations of momentum eigenstate in QFT compared to QM and obtain: where Fourier transforming both sides, solving the integral and taking at the end will yield as the Coulomb potential. However, the equivalent results of the classical Born derivations for the Coulomb problem are thought to be strictly accidental. The Coulomb potential, and its derivation, can be seen as a special case of the Yukawa potential, which is the case where the exchanged boson – the photon – has no rest mass. Verification It is possible to verify Coulomb's law with a simple experiment. Consider two small spheres of mass and same-sign charge , hanging from two ropes of negligible mass of length . The forces acting on each sphere are three: the weight , the rope tension and the electric force . In the equilibrium state: and Dividing by: Let be the distance between the charged spheres; the repulsion force between them , assuming Coulomb's law is correct, is equal to so: If we now discharge one of the spheres, and we put it in contact with the charged sphere, each one of them acquires a charge . In the equilibrium state, the distance between the charges will be and the repulsion force between them will be: We know that and: Dividing by, we get: Measuring the angles and and the distance between the charges and is sufficient to verify that the equality is true taking into account the experimental error. In practice, angles can be difficult to measure, so if the length of the ropes is sufficiently great, the angles will be small enough to make the following approximation: Using this approximation, the relationship becomes the much simpler expression: In this way, the verification is limited to measuring the distance between the charges and checking that the division approximates the theoretical value. See also Biot–Savart law Darwin Lagrangian Electromagnetic force Gauss's law Method of image charges Molecular modelling Newton's law of universal gravitation, which uses a similar structure, but for mass instead of charge Static forces and virtual-particle exchange Casimir effect References Spavieri, G., Gillies, G. T., & Rodriguez, M. (2004). Physical implications of Coulomb’s Law. 
Metrologia, 41(5), S159–S170. doi:10.1088/0026-1394/41/5/s06 Related reading External links Coulomb's Law on Project PHYSNET Electricity and the Atom —a chapter from an online textbook A maze game for teaching Coulomb's law—a game created by the Molecular Workbench software Electric Charges, Polarization, Electric Force, Coulomb's Law Walter Lewin, 8.02 Electricity and Magnetism, Spring 2002: Lecture 1 (video). MIT OpenCourseWare. License: Creative Commons Attribution-Noncommercial-Share Alike. Electromagnetism Electrostatics Eponymous laws of physics Force Scientific laws
Thermodynamic potential
A thermodynamic potential (or more accurately, a thermodynamic potential energy) is a scalar quantity used to represent the thermodynamic state of a system. Just as in mechanics, where potential energy is defined as capacity to do work, similarly different potentials have different meanings. The concept of thermodynamic potentials was introduced by Pierre Duhem in 1886. Josiah Willard Gibbs in his papers used the term fundamental functions. While thermodynamic potentials cannot be measured directly, they can be predicted using computational chemistry. One main thermodynamic potential that has a physical interpretation is the internal energy . It is the energy of configuration of a given system of conservative forces (that is why it is called potential) and only has meaning with respect to a defined set of references (or data). Expressions for all other thermodynamic energy potentials are derivable via Legendre transforms from an expression for . In other words, each thermodynamic potential is equivalent to other thermodynamic potentials; each potential is a different expression of the others. In thermodynamics, external forces, such as gravity, are counted as contributing to total energy rather than to thermodynamic potentials. For example, the working fluid in a steam engine sitting on top of Mount Everest has higher total energy due to gravity than it has at the bottom of the Mariana Trench, but the same thermodynamic potentials. This is because the gravitational potential energy belongs to the total energy rather than to thermodynamic potentials such as internal energy. Description and interpretation Five common thermodynamic potentials are: where = temperature, = entropy, = pressure, = volume. is the number of particles of type in the system and is the chemical potential for an -type particle. The set of all are also included as natural variables but may be ignored when no chemical reactions are occurring which cause them to change. The Helmholtz free energy is in ISO/IEC standard called Helmholtz energy or Helmholtz function. It is often denoted by the symbol , but the use of is preferred by IUPAC, ISO and IEC. These five common potentials are all potential energies, but there are also entropy potentials. The thermodynamic square can be used as a tool to recall and derive some of the potentials. Just as in mechanics, where potential energy is defined as capacity to do work, similarly different potentials have different meanings like the below: Internal energy is the capacity to do work plus the capacity to release heat. Gibbs energy is the capacity to do non-mechanical work. Enthalpy is the capacity to do non-mechanical work plus the capacity to release heat. Helmholtz energy is the capacity to do mechanical work plus non-mechanical work. From these meanings (which actually apply in specific conditions, e.g. constant pressure, temperature, etc.), for positive changes (e.g., ), we can say that is the energy added to the system, is the total work done on it, is the non-mechanical work done on it, and is the sum of non-mechanical work done on the system and the heat given to it. Note that the sum of internal energy is conserved, but the sum of Gibbs energy, or Helmholtz energy, are not conserved, despite being named "energy". They can be better interpreted as the potential to perform "useful work", and the potential can be wasted. 
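Using U for internal energy, T temperature, S entropy, p pressure, V volume, and μ_i, N_i the chemical potential and particle number of species i, the potentials referred to above are commonly defined as follows (the grand, or Landau, potential is the usual fifth member of the set):

```latex
U                                   % internal energy
F = U - TS                          % Helmholtz energy
H = U + pV                          % enthalpy
G = U + pV - TS                     % Gibbs energy
\Omega = U - TS - \sum_i \mu_i N_i  % grand (Landau) potential
```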
Thermodynamic potentials are very useful when calculating the equilibrium results of a chemical reaction, or when measuring the properties of materials in a chemical reaction. The chemical reactions usually take place under some constraints such as constant pressure and temperature, or constant entropy and volume, and when this is true, there is a corresponding thermodynamic potential that comes into play. Just as in mechanics, the system will tend towards a lower value of a potential and at equilibrium, under these constraints, the potential will take the unchanging minimum value. The thermodynamic potentials can also be used to estimate the total amount of energy available from a thermodynamic system under the appropriate constraint. In particular: (see principle of minimum energy for a derivation) When the entropy and "external parameters" (e.g. volume) of a closed system are held constant, the internal energy decreases and reaches a minimum value at equilibrium. This follows from the first and second laws of thermodynamics and is called the principle of minimum energy. The following three statements are directly derivable from this principle. When the temperature and external parameters of a closed system are held constant, the Helmholtz free energy decreases and reaches a minimum value at equilibrium. When the pressure and external parameters of a closed system are held constant, the enthalpy decreases and reaches a minimum value at equilibrium. When the temperature , pressure and external parameters of a closed system are held constant, the Gibbs free energy decreases and reaches a minimum value at equilibrium. Natural variables For each thermodynamic potential, there are thermodynamic variables that need to be held constant to specify the potential value at a thermodynamical equilibrium state, such as independent variables for a mathematical function. These variables are termed the natural variables of that potential. The natural variables are important not only to specify the potential value at the equilibrium, but also because if a thermodynamic potential can be determined as a function of its natural variables, all of the thermodynamic properties of the system can be found by taking partial derivatives of that potential with respect to its natural variables and this is true for no other combination of variables. If a thermodynamic potential is not given as a function of its natural variables, it will not, in general, yield all of the thermodynamic properties of the system. The set of natural variables for each of the above four thermodynamic potentials is formed from a combination of the , , , variables, excluding any pairs of conjugate variables; there is no natural variable set for a potential including the - or - variables together as conjugate variables for energy. An exception for this rule is the − conjugate pairs as there is no reason to ignore these in the thermodynamic potentials, and in fact we may additionally define the four potentials for each species. Using IUPAC notation in which the brackets contain the natural variables (other than the main four), we have: If there is only one species, then we are done. But, if there are, say, two species, then there will be additional potentials such as and so on. If there are dimensions to the thermodynamic space, then there are unique thermodynamic potentials. For the most simple case, a single phase ideal gas, there will be three dimensions, yielding eight thermodynamic potentials. 
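For a single species, the natural-variable assignments discussed above are usually written with the natural variables in brackets:

```latex
U(S, V, \{N_i\}), \qquad
F(T, V, \{N_i\}), \qquad
H(S, p, \{N_i\}), \qquad
G(T, p, \{N_i\})
```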
The fundamental equations The definitions of the thermodynamic potentials may be differentiated and, along with the first and second laws of thermodynamics, a set of differential equations known as the fundamental equations follow. (Actually they are all expressions of the same fundamental thermodynamic relation, but are expressed in different variables.) By the first law of thermodynamics, any differential change in the internal energy of a system can be written as the sum of heat flowing into the system subtracted by the work done by the system on the environment, along with any change due to the addition of new particles to the system: where is the infinitesimal heat flow into the system, and is the infinitesimal work done by the system, is the chemical potential of particle type and is the number of the type particles. (Neither nor are exact differentials, i.e., they are thermodynamic process path-dependent. Small changes in these variables are, therefore, represented with rather than .) By the second law of thermodynamics, we can express the internal energy change in terms of state functions and their differentials. In case of reversible changes we have: where is temperature, is entropy, is pressure, and is volume, and the equality holds for reversible processes. This leads to the standard differential form of the internal energy in case of a quasistatic reversible change: Since , and are thermodynamic functions of state (also called state functions), the above relation also holds for arbitrary non-reversible changes. If the system has more external variables than just the volume that can change, the fundamental thermodynamic relation generalizes to: Here the are the generalized forces corresponding to the external variables . Applying Legendre transforms repeatedly, the following differential relations hold for the four potentials (fundamental thermodynamic equations or fundamental thermodynamic relation): The infinitesimals on the right-hand side of each of the above equations are of the natural variables of the potential on the left-hand side. Similar equations can be developed for all of the other thermodynamic potentials of the system. There will be one fundamental equation for each thermodynamic potential, resulting in a total of fundamental equations. The differences between the four thermodynamic potentials can be summarized as follows: The equations of state We can use the above equations to derive some differential definitions of some thermodynamic parameters. If we define to stand for any of the thermodynamic potentials, then the above equations are of the form: where and are conjugate pairs, and the are the natural variables of the potential . From the chain rule it follows that: where is the set of all natural variables of except that are held as constants. This yields expressions for various thermodynamic parameters in terms of the derivatives of the potentials with respect to their natural variables. These equations are known as equations of state since they specify parameters of the thermodynamic state. If we restrict ourselves to the potentials (Internal energy), (Helmholtz energy), (Enthalpy) and (Gibbs energy), then we have the following equations of state (subscripts showing natural variables that are held as constants): where, in the last equation, is any of the thermodynamic potentials (, , , or ), and are the set of natural variables for that potential, excluding . 
If we use all thermodynamic potentials, then we will have more equations of state such as and so on. In all, if the thermodynamic space is dimensions, then there will be  equations for each potential, resulting in a total of equations of state because thermodynamic potentials exist. If the equations of state for a particular potential are known, then the fundamental equation for that potential (i.e., the exact differential of the thermodynamic potential) can be determined. This means that all thermodynamic information about the system will be known because the fundamental equations for any other potential can be found via the Legendre transforms and the corresponding equations of state for each potential as partial derivatives of the potential can also be found. Measurement of thermodynamic potentials The above equations of state suggest methods to experimentally measure changes in the thermodynamic potentials using physically measurable parameters. For example the free energy expressions and can be integrated at constant temperature and quantities to obtain: (at constant T, {Nj} ) (at constant T, {Nj} ) which can be measured by monitoring the measurable variables of pressure, temperature and volume. Changes in the enthalpy and internal energy can be measured by calorimetry (which measures the amount of heat ΔQ released or absorbed by a system). The expressions can be integrated: (at constant P, {Nj} ) (at constant V, {Nj} ) Note that these measurements are made at constant {Nj } and are therefore not applicable to situations in which chemical reactions take place. The Maxwell relations Again, define and to be conjugate pairs, and the to be the natural variables of some potential . We may take the "cross differentials" of the state equations, which obey the following relationship: From these we get the Maxwell relations. There will be of them for each potential giving a total of equations in all. If we restrict ourselves the , , , Using the equations of state involving the chemical potential we get equations such as: and using the other potentials we can get equations such as: Euler relations Again, define and to be conjugate pairs, and the to be the natural variables of the internal energy. Since all of the natural variables of the internal energy are extensive quantities it follows from Euler's homogeneous function theorem that the internal energy can be written as: From the equations of state, we then have: This formula is known as an Euler relation, because Euler's theorem on homogeneous functions leads to it. (It was not discovered by Euler in an investigation of thermodynamics, which did not exist in his day.). Substituting into the expressions for the other main potentials we have: As in the above sections, this process can be carried out on all of the other thermodynamic potentials. Thus, there is another Euler relation, based on the expression of entropy as a function of internal energy and other extensive variables. Yet other Euler relations hold for other fundamental equations for energy or entropy, as respective functions of other state variables including some intensive state variables. The Gibbs–Duhem relation Deriving the Gibbs–Duhem equation from basic thermodynamic state equations is straightforward. Equating any thermodynamic potential definition with its Euler relation expression yields: Differentiating, and using the second law: yields: Which is the Gibbs–Duhem relation. The Gibbs–Duhem is a relationship among the intensive parameters of the system. 
It follows that for a simple system with components, there will be independent parameters, or degrees of freedom. For example, a simple system with a single component will have two degrees of freedom, and may be specified by only two parameters, such as pressure and volume for example. The law is named after Josiah Willard Gibbs and Pierre Duhem. Stability Conditions As the internal energy is a convex function of entropy and volume, the stability condition requires that the second derivative of internal energy with entropy or volume to be positive. It is commonly expressed as . Since the maximum principle of entropy is equivalent to minimum principle of internal energy, the combined criteria for stability or thermodynamic equilibrium is expressed as and for parameters, entropy and volume. This is analogous to and condition for entropy at equilibrium. The same concept can be applied to the various thermodynamic potentials by identifying if they are convex or concave of respective their variables. and Where Helmholtz energy is a concave function of temperature and convex function of volume. and Where enthalpy is a concave function of pressure and convex function of entropy. and Where Gibbs potential is a concave function of both pressure and temperature. In general the thermodynamic potentials (the internal energy and its Legendre transforms), are convex functions of their extrinsic variables and concave functions of intrinsic variables. The stability conditions impose that isothermal compressibility is positive and that for non-negative temperature, . Chemical reactions Changes in these quantities are useful for assessing the degree to which a chemical reaction will proceed. The relevant quantity depends on the reaction conditions, as shown in the following table. denotes the change in the potential and at equilibrium the change will be zero. Most commonly one considers reactions at constant and , so the Gibbs free energy is the most useful potential in studies of chemical reactions. See also Coomber's relationship Notes References Further reading McGraw Hill Encyclopaedia of Physics (2nd Edition), C.B. Parker, 1994, Thermodynamics, From Concepts to Applications (2nd Edition), A. Shavit, C. Gutfinger, CRC Press (Taylor and Francis Group, USA), 2009, Chemical Thermodynamics, D.J.G. Ives, University Chemistry, Macdonald Technical and Scientific, 1971, Elements of Statistical Thermodynamics (2nd Edition), L.K. Nash, Principles of Chemistry, Addison-Wesley, 1974, Statistical Physics (2nd Edition), F. Mandl, Manchester Physics, John Wiley & Sons, 2008, External links Thermodynamic Potentials – Georgia State University Chemical Potential Energy: The 'Characteristic' vs the Concentration-Dependent Kind Thermodynamics Potentials Thermodynamic equations
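Returning to the machinery of the earlier sections, the equations of state and a Maxwell relation can be checked symbolically from concrete closed forms. The monatomic-ideal-gas expressions used below are standard textbook forms assumed for illustration, not formulas quoted from this article; a minimal sketch:

```python
import sympy as sp

# Fundamental relation U(S, V, N) of a monatomic ideal gas (standard textbook
# form, assumed here; the constant 'a' absorbs mass and Planck-constant factors).
S, V, N, T, kB, a, c0 = sp.symbols("S V N T k_B a c_0", positive=True)
U = a * N**sp.Rational(5, 3) * V**sp.Rational(-2, 3) * sp.exp(2*S / (3*N*kB))

# Equations of state as partial derivatives with respect to the natural variables:
T_of_SVN = sp.diff(U, S)         # T = (dU/dS)_{V,N}
P_of_SVN = -sp.diff(U, V)        # P = -(dU/dV)_{S,N}
print(sp.simplify(P_of_SVN * V - N*kB*T_of_SVN))          # -> 0, i.e. P V = N kB T
print(sp.simplify(U - sp.Rational(3, 2)*N*kB*T_of_SVN))   # -> 0, i.e. U = (3/2) N kB T

# Maxwell relation from the Helmholtz energy F(T, V, N) of the same gas
# (again a textbook form; lam is the thermal de Broglie wavelength ~ T**-1/2):
lam = c0 / sp.sqrt(T)
F = -N * kB * T * (sp.log(V / (N * lam**3)) + 1)
S_of_TVN = -sp.diff(F, T)        # S = -(dF/dT)_{V,N}
P_of_TVN = -sp.diff(F, V)        # P = -(dF/dV)_{T,N}
print(sp.simplify(sp.diff(S_of_TVN, V) - sp.diff(P_of_TVN, T)))   # -> 0 (Maxwell relation)
```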
Richardson number
The Richardson number (Ri) is named after Lewis Fry Richardson (1881–1953). It is the dimensionless number that expresses the ratio of the buoyancy term to the flow shear term: where is gravity, is density, is a representative flow speed, and is depth. The Richardson number, or one of several variants, is of practical importance in weather forecasting and in investigating density and turbidity currents in oceans, lakes, and reservoirs. When considering flows in which density differences are small (the Boussinesq approximation), it is common to use the reduced gravity g' and the relevant parameter is the densimetric Richardson number which is used frequently when considering atmospheric or oceanic flows. If the Richardson number is much less than unity, buoyancy is unimportant in the flow. If it is much greater than unity, buoyancy is dominant (in the sense that there is insufficient kinetic energy to homogenize the fluids). If the Richardson number is of order unity, then the flow is likely to be buoyancy-driven: the energy of the flow derives from the potential energy in the system originally. Aviation In aviation, the Richardson number is used as a rough measure of expected air turbulence. A lower value indicates a higher degree of turbulence. Values in the range 10 to 0.1 are typical, with values below unity indicating significant turbulence. Thermal convection In thermal convection problems, Richardson number represents the importance of natural convection relative to the forced convection. The Richardson number in this context is defined as where g is the gravitational acceleration, is the thermal expansion coefficient, Thot is the hot wall temperature, Tref is the reference temperature, L is the characteristic length, and V is the characteristic velocity. The Richardson number can also be expressed by using a combination of the Grashof number and Reynolds number, Typically, the natural convection is negligible when Ri < 0.1, forced convection is negligible when Ri > 10, and neither is negligible when 0.1 < Ri < 10. It may be noted that usually the forced convection is large relative to natural convection except in the case of extremely low forced flow velocities. However, buoyancy often plays a significant role in defining the laminar–turbulent transition of a mixed convection flow. In the design of water filled thermal energy storage tanks, the Richardson number can be useful. Meteorology In atmospheric science, several different expressions for the Richardson number are commonly used: the flux Richardson number (which is fundamental), the gradient Richardson number, and the bulk Richardson number. The flux Richardson number is the ratio of buoyant production (or suppression) of turbulence kinetic energy to the production of turbulence by shear. Mathematically, this is: , where is the virtual temperature, is the virtual potential temperature, is the altitude, is the component of the wind, is the component of the wind, and is the (vertical) component of the wind. A prime (e.g. ) denotes a deviation of the respective field from its Reynolds average. The gradient Richardson number is arrived at by approximating the flux Richardson number using "K-theory". This results in: . The bulk Richardson number results from making a finite difference approximation to the derivatives in the expression for the gradient Richardson number, giving: . Here, for any variable , , i.e. the difference between at altitude and altitude . 
If the lower reference level is taken to be the surface, z = 0, then u(0) = v(0) = 0 (due to the no-slip boundary condition), so the expression simplifies to: $\mathrm{Ri}_b = \dfrac{g\,\Delta\theta_v\, z}{\theta_v\,(u^2 + v^2)}$. Oceanography In oceanography, the Richardson number has a more general form which takes stratification into account. It is a measure of the relative importance of mechanical and density effects in the water column, as described by the Taylor–Goldstein equation, used to model Kelvin–Helmholtz instability, which is driven by sheared flows: $\mathrm{Ri} = \dfrac{N^2}{(\partial u/\partial z)^2}$, where N is the Brunt–Väisälä frequency and u the flow speed. The Richardson number defined above is always considered positive. A negative value of N² (i.e. complex N) indicates unstable density gradients with active convective overturning. Under such circumstances the magnitude of negative Ri is not generally of interest. It can be shown that Ri < 1/4 is a necessary condition for velocity shear to overcome the tendency of a stratified fluid to remain stratified, and some mixing (turbulence) will generally occur. When Ri is large, turbulent mixing across the stratification is generally suppressed. References Dimensionless numbers Atmospheric dispersion modeling Fluid dynamics Buoyancy Dimensionless numbers of fluid mechanics
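The thermal-convection criterion and the oceanographic form above translate directly into a couple of helpers. The first evaluates Ri = gβ(T_hot − T_ref)L/V² and applies the thresholds quoted in the text; the second estimates N² from a density profile by finite differences and flags the Ri < 1/4 shear-instability criterion. All numerical values are illustrative, not taken from the article:

```python
import numpy as np

def thermal_richardson(g, beta, T_hot, T_ref, L, V):
    """Mixed-convection Richardson number, Ri = g*beta*(T_hot - T_ref)*L / V**2."""
    return g * beta * (T_hot - T_ref) * L / V**2

def convection_regime(Ri):
    # Thresholds as quoted above: Ri < 0.1 forced-dominated, Ri > 10 natural-dominated.
    if Ri < 0.1:
        return "forced convection dominates"
    if Ri > 10:
        return "natural convection dominates"
    return "mixed convection"

Ri = thermal_richardson(g=9.81, beta=1/300, T_hot=350.0, T_ref=300.0, L=0.5, V=1.0)
print(round(Ri, 2), convection_regime(Ri))      # ~0.82, mixed convection

def oceanic_richardson(z, rho, u, g=9.81, rho0=1025.0):
    """Ri = N^2 / (du/dz)^2 with N^2 = -(g/rho0) * d(rho)/dz  (z positive upward)."""
    N2 = -(g / rho0) * np.gradient(rho, z)
    return N2 / np.gradient(u, z)**2

z = np.linspace(-50.0, 0.0, 11)          # m, positive upward
rho = 1026.0 - 0.02 * z                  # kg/m^3: denser water below
u = 0.5 + 0.03 * z                       # m/s: sheared current
Ri_profile = oceanic_richardson(z, rho, u)
print(np.round(Ri_profile, 2))
print("shear may overcome stratification (Ri < 0.25):", bool(np.any(Ri_profile < 0.25)))
```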
Gravity wave
In fluid dynamics, gravity waves are waves generated in a fluid medium or at the interface between two media when the force of gravity or buoyancy tries to restore equilibrium. An example of such an interface is that between the atmosphere and the ocean, which gives rise to wind waves. A gravity wave results when fluid is displaced from a position of equilibrium. The restoration of the fluid to equilibrium will produce a movement of the fluid back and forth, called a wave orbit. Gravity waves on an air–sea interface of the ocean are called surface gravity waves (a type of surface wave), while gravity waves that are the body of the water (such as between parts of different densities) are called internal waves. Wind-generated waves on the water surface are examples of gravity waves, as are tsunamis, ocean tides, and the wakes of surface vessels. The period of wind-generated gravity waves on the free surface of the Earth's ponds, lakes, seas and oceans are predominantly between 0.3 and 30 seconds (corresponding to frequencies between 3 Hz and .03 Hz). Shorter waves are also affected by surface tension and are called gravity–capillary waves and (if hardly influenced by gravity) capillary waves. Alternatively, so-called infragravity waves, which are due to subharmonic nonlinear wave interaction with the wind waves, have periods longer than the accompanying wind-generated waves. Atmosphere dynamics on Earth In the Earth's atmosphere, gravity waves are a mechanism that produce the transfer of momentum from the troposphere to the stratosphere and mesosphere. Gravity waves are generated in the troposphere by frontal systems or by airflow over mountains. At first, waves propagate through the atmosphere without appreciable change in mean velocity. But as the waves reach more rarefied (thin) air at higher altitudes, their amplitude increases, and nonlinear effects cause the waves to break, transferring their momentum to the mean flow. This transfer of momentum is responsible for the forcing of the many large-scale dynamical features of the atmosphere. For example, this momentum transfer is partly responsible for the driving of the Quasi-Biennial Oscillation, and in the mesosphere, it is thought to be the major driving force of the Semi-Annual Oscillation. Thus, this process plays a key role in the dynamics of the middle atmosphere. The effect of gravity waves in clouds can look like altostratus undulatus clouds, and are sometimes confused with them, but the formation mechanism is different. Atmospheric gravity waves reaching ionosphere are responsible for the generation of traveling ionospheric disturbances and could be observed by radars. Quantitative description Deep water The phase velocity of a linear gravity wave with wavenumber is given by the formula where g is the acceleration due to gravity. When surface tension is important, this is modified to where σ is the surface tension coefficient and ρ is the density. The gravity wave represents a perturbation around a stationary state, in which there is no velocity. Thus, the perturbation introduced to the system is described by a velocity field of infinitesimally small amplitude, Because the fluid is assumed incompressible, this velocity field has the streamfunction representation where the subscripts indicate partial derivatives. In this derivation it suffices to work in two dimensions , where gravity points in the negative z-direction. 
Next, in an initially stationary incompressible fluid, there is no vorticity, and the fluid stays irrotational, hence In the streamfunction representation, Next, because of the translational invariance of the system in the x-direction, it is possible to make the ansatz where k is a spatial wavenumber. Thus, the problem reduces to solving the equation We work in a sea of infinite depth, so the boundary condition is at The undisturbed surface is at , and the disturbed or wavy surface is at where is small in magnitude. If no fluid is to leak out of the bottom, we must have the condition Hence, on , where A and the wave speed c are constants to be determined from conditions at the interface. The free-surface condition: At the free surface , the kinematic condition holds: Linearizing, this is simply where the velocity is linearized on to the surface Using the normal-mode and streamfunction representations, this condition is , the second interfacial condition. Pressure relation across the interface: For the case with surface tension, the pressure difference over the interface at is given by the Young–Laplace equation: where σ is the surface tension and κ is the curvature of the interface, which in a linear approximation is Thus, However, this condition refers to the total pressure (base+perturbed), thus (As usual, The perturbed quantities can be linearized onto the surface z=0.) Using hydrostatic balance, in the form this becomes The perturbed pressures are evaluated in terms of streamfunctions, using the horizontal momentum equation of the linearised Euler equations for the perturbations, to yield Putting this last equation and the jump condition together, Substituting the second interfacial condition and using the normal-mode representation, this relation becomes Using the solution , this gives Since is the phase speed in terms of the angular frequency and the wavenumber, the gravity wave angular frequency can be expressed as The group velocity of a wave (that is, the speed at which a wave packet travels) is given by and thus for a gravity wave, The group velocity is one half the phase velocity. A wave in which the group and phase velocities differ is called dispersive. Shallow water Gravity waves traveling in shallow water (where the depth is much less than the wavelength), are nondispersive: the phase and group velocities are identical and independent of wavelength and frequency. When the water depth is h, Generation of ocean waves by wind Wind waves, as their name suggests, are generated by wind transferring energy from the atmosphere to the ocean's surface, and capillary-gravity waves play an essential role in this effect. There are two distinct mechanisms involved, called after their proponents, Phillips and Miles. In the work of Phillips, the ocean surface is imagined to be initially flat (glassy), and a turbulent wind blows over the surface. When a flow is turbulent, one observes a randomly fluctuating velocity field superimposed on a mean flow (contrast with a laminar flow, in which the fluid motion is ordered and smooth). The fluctuating velocity field gives rise to fluctuating stresses (both tangential and normal) that act on the air-water interface. The normal stress, or fluctuating pressure acts as a forcing term (much like pushing a swing introduces a forcing term). If the frequency and wavenumber of this forcing term match a mode of vibration of the capillary-gravity wave (as derived above), then there is a resonance, and the wave grows in amplitude. 
As with other resonance effects, the amplitude of this wave grows linearly with time. The air-water interface is now endowed with a surface roughness due to the capillary-gravity waves, and a second phase of wave growth takes place. A wave established on the surface either spontaneously as described above, or in laboratory conditions, interacts with the turbulent mean flow in a manner described by Miles. This is the so-called critical-layer mechanism. A critical layer forms at a height where the wave speed c equals the mean turbulent flow U. As the flow is turbulent, its mean profile is logarithmic, and its second derivative is thus negative. This is precisely the condition for the mean flow to impart its energy to the interface through the critical layer. This supply of energy to the interface is destabilizing and causes the amplitude of the wave on the interface to grow in time. As in other examples of linear instability, the growth rate of the disturbance in this phase is exponential in time. This Miles–Phillips Mechanism process can continue until an equilibrium is reached, or until the wind stops transferring energy to the waves (i.e., blowing them along) or when they run out of ocean distance, also known as fetch length. Analog gravity models and surface gravity waves Surface gravity waves have been recognized as a powerful tool for studying analog gravity models, providing experimental platforms for phenomena typically found in black hole physics. In an experiment, surface gravity waves were utilized to simulate phase space horizons, akin to event horizons of black holes. This experiment observed logarithmic phase singularities, which are central to phenomena like Hawking radiation, and the emergence of Fermi-Dirac distributions, which parallel quantum mechanical systems. By propagating surface gravity water waves, researchers were able to recreate the energy wave functions of an inverted harmonic oscillator, a system that serves as an analog for black hole physics. The experiment demonstrated how the free evolution of these classical waves in a controlled laboratory environment can reveal the formation of horizons and singularities, shedding light on fundamental aspects of gravitational theories and quantum mechanics. See also Acoustic wave Asteroseismology Green's law Horizontal convective rolls Lee wave Lunitidal interval Mesosphere#Dynamic features Morning Glory cloud Orr–Sommerfeld equation Rayleigh–Taylor instability Rogue wave Skyquake Notes References Gill, A. E., "Gravity wave". Glossary of Meteorology. American Meteorological Society (15 December 2014). Crawford, Frank S., Jr. (1968). Waves (Berkeley Physics Course, Vol. 3), (McGraw-Hill, 1968) Free online version Alexander, P., A. de la Torre, and P. Llamedo (2008), Interpretation of gravity wave signatures in GPS radio occultations, J. Geophys. Res., 113, D16117, doi:10.1029/2007JD009390. Further reading External links
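The quantitative results above, the deep-water phase speed modified by surface tension, the shallow-water limit, and the deep-water group velocity of half the phase velocity, can be checked numerically. The finite-depth dispersion relation ω² = gk·tanh(kh) used below is the standard linear-theory result and is assumed here (the text only derives the two limiting cases); the water properties are nominal values:

```python
import numpy as np

g, sigma, rho = 9.81, 0.072, 1000.0      # SI units; nominal clean-water values (assumed)

def deep_water_phase_speed(wavelength):
    """c^2 = g/k + sigma*k/rho for deep-water gravity-capillary waves."""
    k = 2 * np.pi / wavelength
    return np.sqrt(g / k + sigma * k / rho)

wavelengths = np.logspace(-3, 1, 500)                  # 1 mm to 10 m
c = deep_water_phase_speed(wavelengths)
i = np.argmin(c)
print(f"minimum phase speed ~{c[i]:.2f} m/s at wavelength ~{100*wavelengths[i]:.1f} cm")
# -> about 0.23 m/s near 1.7 cm: the gravity-capillary crossover

def omega(k, h):
    """Finite-depth linear dispersion relation (standard form, assumed)."""
    return np.sqrt(g * k * np.tanh(k * h))

def phase_and_group_speed(wavelength, depth):
    k = 2 * np.pi / wavelength
    dk = 1e-6 * k
    c_p = omega(k, depth) / k
    c_g = (omega(k + dk, depth) - omega(k - dk, depth)) / (2 * dk)   # numerical d(omega)/dk
    return c_p, c_g

c_p, c_g = phase_and_group_speed(100.0, 4000.0)
print(round(c_p, 2), round(c_g, 2))       # ~12.5 and ~6.2 m/s: deep water, c_g ~ c_p / 2
c_p, c_g = phase_and_group_speed(100e3, 4000.0)
print(round(c_p, 1), round(c_g, 1), round(float(np.sqrt(g * 4000.0)), 1))
# both approach the shallow-water limit sqrt(g*h) ~ 198 m/s
```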
Gravitational acceleration
In physics, gravitational acceleration is the acceleration of an object in free fall within a vacuum (and thus without experiencing drag). This is the steady gain in speed caused exclusively by gravitational attraction. All bodies accelerate in vacuum at the same rate, regardless of the masses or compositions of the bodies; the measurement and analysis of these rates is known as gravimetry. At a fixed point on the surface, the magnitude of Earth's gravity results from combined effect of gravitation and the centrifugal force from Earth's rotation. At different points on Earth's surface, the free fall acceleration ranges from , depending on altitude, latitude, and longitude. A conventional standard value is defined exactly as 9.80665 m/s² (about 32.1740 ft/s²). Locations of significant variation from this value are known as gravity anomalies. This does not take into account other effects, such as buoyancy or drag. Relation to the Universal Law Newton's law of universal gravitation states that there is a gravitational force between any two masses that is equal in magnitude for each mass, and is aligned to draw the two masses toward each other. The formula is: where and are any two masses, is the gravitational constant, and is the distance between the two point-like masses. Using the integral form of Gauss's Law, this formula can be extended to any pair of objects of which one is far more massive than the other — like a planet relative to any man-scale artifact. The distances between planets and between the planets and the Sun are (by many orders of magnitude) larger than the sizes of the sun and the planets. In consequence both the sun and the planets can be considered as point masses and the same formula applied to planetary motions. (As planets and natural satellites form pairs of comparable mass, the distance 'r' is measured from the common centers of mass of each pair rather than the direct total distance between planet centers.) If one mass is much larger than the other, it is convenient to take it as observational reference and define it as source of a gravitational field of magnitude and orientation given by: where is the mass of the field source (larger), and is a unit vector directed from the field source to the sample (smaller) mass. The negative sign indicates that the force is attractive (points backward, toward the source). Then the attraction force vector onto a sample mass can be expressed as: Here is the frictionless, free-fall acceleration sustained by the sampling mass under the attraction of the gravitational source. It is a vector oriented toward the field source, of magnitude measured in acceleration units. The gravitational acceleration vector depends only on how massive the field source is and on the distance 'r' to the sample mass . It does not depend on the magnitude of the small sample mass. This model represents the "far-field" gravitational acceleration associated with a massive body. When the dimensions of a body are not trivial compared to the distances of interest, the principle of superposition can be used for differential masses for an assumed density distribution throughout the body in order to get a more detailed model of the "near-field" gravitational acceleration. For satellites in orbit, the far-field model is sufficient for rough calculations of altitude versus period, but not for precision estimation of future location after multiple orbits. 
The more detailed models include (among other things) the bulging at the equator for the Earth, and irregular mass concentrations (due to meteor impacts) for the Moon. The Gravity Recovery and Climate Experiment (GRACE) mission launched in 2002 consists of two probes, nicknamed "Tom" and "Jerry", in polar orbit around the Earth measuring differences in the distance between the two probes in order to more precisely determine the gravitational field around the Earth, and to track changes that occur over time. Similarly, the Gravity Recovery and Interior Laboratory mission from 2011 to 2012 consisted of two probes ("Ebb" and "Flow") in polar orbit around the Moon to more precisely determine the gravitational field for future navigational purposes, and to infer information about the Moon's physical makeup. Comparative gravities of the Earth, Sun, Moon, and planets The table below shows comparative gravitational accelerations at the surface of the Sun, the Earth's moon, each of the planets in the Solar System and their major moons, Ceres, Pluto, and Eris. For gaseous bodies, the "surface" is taken to mean visible surface: the cloud tops of the giant planets (Jupiter, Saturn, Uranus, and Neptune), and the Sun's photosphere. The values in the table have not been de-rated for the centrifugal force effect of planet rotation (and cloud-top wind speeds for the giant planets) and therefore, generally speaking, are similar to the actual gravity that would be experienced near the poles. For reference the time it would take an object to fall 100 meters, the height of a skyscraper, is shown, along with the maximum speed reached. Air resistance is neglected. General relativity In Einstein's theory of general relativity, gravitation is an attribute of curved spacetime instead of being due to a force propagated between bodies. In Einstein's theory, masses distort spacetime in their vicinity, and other particles move in trajectories determined by the geometry of spacetime. The gravitational force is a fictitious force. There is no gravitational acceleration, in that the proper acceleration and hence four-acceleration of objects in free fall are zero. Rather than undergoing an acceleration, objects in free fall travel along straight lines (geodesics) on the curved spacetime. Gravitational field See also Air track Gravimetry Gravity of Earth Gravitation of the Moon Gravity of Mars Newton's law of universal gravitation Standard gravity Notes References Gravimetry Gravity Acceleration Temporal rates
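As a numerical companion to the far-field model and the comparative table described above, the point-mass formula g = GM/r² and constant-acceleration kinematics (t = √(2h/g) and v = gt for a 100 m drop, neglecting drag) can be evaluated directly. The constants and tabulated surface gravities below are commonly used approximate values, assumed rather than quoted from the text:

```python
import math

G = 6.674e-11                        # m^3 kg^-1 s^-2

def far_field_g(M, r):
    """Point-mass (far-field) gravitational acceleration, g = G*M / r**2."""
    return G * M / r**2

# Earth: mass ~5.972e24 kg, mean radius ~6.371e6 m (approximate values).
print(round(far_field_g(5.972e24, 6.371e6), 2))           # ~9.82 m/s^2 at the surface
print(round(far_field_g(5.972e24, 6.371e6 + 400e3), 2))   # ~8.7 m/s^2 at ~400 km altitude

def fall_stats(g, height=100.0):
    """Fall time and impact speed for a drop from rest, ignoring air resistance."""
    t = math.sqrt(2 * height / g)    # from h = (1/2) g t^2
    return t, g * t

for body, g_surface in {"Earth": 9.81, "Moon": 1.62, "Mars": 3.72, "Jupiter": 24.8}.items():
    t, v = fall_stats(g_surface)
    print(f"{body:8s} 100 m fall: {t:5.2f} s, {v:6.1f} m/s")
```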
Rotation around a fixed axis
Rotation around a fixed axis or axial rotation is a special case of rotational motion around an axis of rotation fixed, stationary, or static in three-dimensional space. This type of motion excludes the possibility of the instantaneous axis of rotation changing its orientation and cannot describe such phenomena as wobbling or precession. According to Euler's rotation theorem, simultaneous rotation along a number of stationary axes at the same time is impossible; if two rotations are forced at the same time, a new axis of rotation will result. This concept assumes that the rotation is also stable, such that no torque is required to keep it going. The kinematics and dynamics of rotation around a fixed axis of a rigid body are mathematically much simpler than those for free rotation of a rigid body; they are entirely analogous to those of linear motion along a single fixed direction, which is not true for free rotation of a rigid body. The expressions for the kinetic energy of the object, and for the forces on the parts of the object, are also simpler for rotation around a fixed axis, than for general rotational motion. For these reasons, rotation around a fixed axis is typically taught in introductory physics courses after students have mastered linear motion; the full generality of rotational motion is not usually taught in introductory physics classes. Translation and rotation A rigid body is an object of a finite extent in which all the distances between the component particles are constant. No truly rigid body exists; external forces can deform any solid. For our purposes, then, a rigid body is a solid which requires large forces to deform it appreciably. A change in the position of a particle in three-dimensional space can be completely specified by three coordinates. A change in the position of a rigid body is more complicated to describe. It can be regarded as a combination of two distinct types of motion: translational motion and circular motion. Purely translational motion occurs when every particle of the body has the same instantaneous velocity as every other particle; then the path traced out by any particle is exactly parallel to the path traced out by every other particle in the body. Under translational motion, the change in the position of a rigid body is specified completely by three coordinates such as x, y, and z giving the displacement of any point, such as the center of mass, fixed to the rigid body. Purely rotational motion occurs if every particle in the body moves in a circle about a single line. This line is called the axis of rotation. Then the radius vectors from the axis to all particles undergo the same angular displacement at the same time. The axis of rotation need not go through the body. In general, any rotation can be specified completely by the three angular displacements with respect to the rectangular-coordinate axes x, y, and z. Any change in the position of the rigid body is thus completely described by three translational and three rotational coordinates. Any displacement of a rigid body may be arrived at by first subjecting the body to a displacement followed by a rotation, or conversely, to a rotation followed by a displacement. We already know that for any collection of particles—whether at rest with respect to one another, as in a rigid body, or in relative motion, like the exploding fragments of a shell, the acceleration of the center of mass is given by where M is the total mass of the system and acm is the acceleration of the center of mass. 
There remains the matter of describing the rotation of the body about the center of mass and relating it to the external forces acting on the body. The kinematics and dynamics of rotational motion around a single axis resemble the kinematics and dynamics of translational motion; rotational motion around a single axis even has a work-energy theorem analogous to that of particle dynamics. Kinematics Angular displacement Given a particle that moves along the circumference of a circle of radius , having moved an arc length , its angular position is relative to its initial position, where . In mathematics and physics it is conventional to treat the radian, a unit of plane angle, as 1, often omitting it. Units are converted as follows: An angular displacement is a change in angular position: where is the angular displacement, is the initial angular position and is the final angular position. Angular velocity Change in angular displacement per unit time is called angular velocity with direction along the axis of rotation. The symbol for angular velocity is and the units are typically rad s−1. Angular speed is the magnitude of angular velocity. The instantaneous angular velocity is given by Using the formula for angular position and letting , we have also where is the translational speed of the particle. Angular velocity and frequency are related by Angular acceleration A changing angular velocity indicates the presence of an angular acceleration in rigid body, typically measured in rad s−2. The average angular acceleration over a time interval Δt is given by The instantaneous acceleration α(t) is given by Thus, the angular acceleration is the rate of change of the angular velocity, just as acceleration is the rate of change of velocity. The translational acceleration of a point on the object rotating is given by where r is the radius or distance from the axis of rotation. This is also the tangential component of acceleration: it is tangential to the direction of motion of the point. If this component is 0, the motion is uniform circular motion, and the velocity changes in direction only. The radial acceleration (perpendicular to direction of motion) is given by It is directed towards the center of the rotational motion, and is often called the centripetal acceleration. The angular acceleration is caused by the torque, which can have a positive or negative value in accordance with the convention of positive and negative angular frequency. The relationship between torque and angular acceleration (how difficult it is to start, stop, or otherwise change rotation) is given by the moment of inertia: . Equations of kinematics When the angular acceleration is constant, the five quantities angular displacement , initial angular velocity , final angular velocity , angular acceleration , and time can be related by four equations of kinematics: Dynamics Moment of inertia The moment of inertia of an object, symbolized by , is a measure of the object's resistance to changes to its rotation. The moment of inertia is measured in kilogram metre² (kg m2). It depends on the object's mass: increasing the mass of an object increases the moment of inertia. It also depends on the distribution of the mass: distributing the mass further from the center of rotation increases the moment of inertia by a greater degree. 
For a single particle of mass a distance from the axis of rotation, the moment of inertia is given by Torque Torque is the twisting effect of a force F applied to a rotating object which is at position r from its axis of rotation. Mathematically, where × denotes the cross product. A net torque acting upon an object will produce an angular acceleration of the object according to just as F = ma in linear dynamics. The work done by a torque acting on an object equals the magnitude of the torque times the angle through which the torque is applied: The power of a torque is equal to the work done by the torque per unit time, hence: Angular momentum The angular momentum is a measure of the difficulty of bringing a rotating object to rest. It is given by where the sum is taken over all particles in the object. Angular momentum is the product of moment of inertia and angular velocity: just as p = mv in linear dynamics. The analog of linear momentum in rotational motion is angular momentum. The greater the angular momentum of the spinning object such as a top, the greater its tendency to continue to spin. The angular momentum of a rotating body is proportional to its mass and to how rapidly it is turning. In addition, the angular momentum depends on how the mass is distributed relative to the axis of rotation: the further away the mass is located from the axis of rotation, the greater the angular momentum. A flat disk such as a record turntable has less angular momentum than a hollow cylinder of the same mass and velocity of rotation. Like linear momentum, angular momentum is vector quantity, and its conservation implies that the direction of the spin axis tends to remain unchanged. For this reason, the spinning top remains upright whereas a stationary one falls over immediately. The angular momentum equation can be used to relate the moment of the resultant force on a body about an axis (sometimes called torque), and the rate of rotation about that axis. Torque and angular momentum are related according to just as F = dp/dt in linear dynamics. In the absence of an external torque, the angular momentum of a body remains constant. The conservation of angular momentum is notably demonstrated in figure skating: when pulling the arms closer to the body during a spin, the moment of inertia is decreased, and so the angular velocity is increased. Kinetic energy The kinetic energy due to the rotation of the body is given by just as in linear dynamics. Kinetic energy is the energy of motion. The amount of translational kinetic energy found in two variables: the mass of the object and the speed of the object as shown in the equation above. Kinetic energy must always be either zero or a positive value. While velocity can have either a positive or negative value, velocity squared will always be positive. Vector expression The above development is a special case of general rotational motion. In the general case, angular displacement, angular velocity, angular acceleration, and torque are considered to be vectors. An angular displacement is considered to be a vector, pointing along the axis, of magnitude equal to that of . A right-hand rule is used to find which way it points along the axis; if the fingers of the right hand are curled to point in the way that the object has rotated, then the thumb of the right hand points in the direction of the vector. The angular velocity vector also points along the axis of rotation in the same way as the angular displacements it causes. 
If a disk spins counterclockwise as seen from above, its angular velocity vector points upwards. Similarly, the angular acceleration vector points along the axis of rotation in the same direction that the angular velocity would point if the angular acceleration were maintained for a long time. The torque vector points along the axis around which the torque tends to cause rotation. To maintain rotation around a fixed axis, the total torque vector has to be along the axis, so that it only changes the magnitude and not the direction of the angular velocity vector. In the case of a hinge, only the component of the torque vector along the axis has an effect on the rotation, other forces and torques are compensated by the structure. Mathematical representation Examples and applications Constant angular speed The simplest case of rotation around a fixed axis is that of constant angular speed. Then the total torque is zero. For the example of the Earth rotating around its axis, there is very little friction. For a fan, the motor applies a torque to compensate for friction. Similar to the fan, equipment found in the mass production manufacturing industry demonstrate rotation around a fixed axis effectively. For example, a multi-spindle lathe is used to rotate the material on its axis to effectively increase the productivity of cutting, deformation and turning operations. The angle of rotation is a linear function of time, which modulo 360° is a periodic function. An example of this is the two-body problem with circular orbits. Centripetal force Internal tensile stress provides the centripetal force that keeps a spinning object together. A rigid body model neglects the accompanying strain. If the body is not rigid this strain will cause it to change shape. This is expressed as the object changing shape due to the "centrifugal force". Celestial bodies rotating about each other often have elliptic orbits. The special case of circular orbits is an example of a rotation around a fixed axis: this axis is the line through the center of mass perpendicular to the plane of motion. The centripetal force is provided by gravity, see also two-body problem. This usually also applies for a spinning celestial body, so it need not be solid to keep together unless the angular speed is too high in relation to its density. (It will, however, tend to become oblate.) For example, a spinning celestial body of water must take at least 3 hours and 18 minutes to rotate, regardless of size, or the water will separate. If the density of the fluid is higher the time can be less. See orbital period. Plane of rotation See also Anatomical terms of motion Artificial gravity by rotation Axle Axial precession Axial tilt Axis–angle representation Carousel, Ferris wheel Center pin Centrifugal force Centrifuge Centripetal force Circular motion Coriolis effect Fictitious force Flywheel Gyration Instant centre of rotation Linear-rotational analogs Optical axis Revolutions per minute Revolving door Rigid body angular momentum Rotation matrix Rotational speed Rotational symmetry Run-out References Fundamentals of Physics Extended 7th Edition by Halliday, Resnick and Walker. Concepts of Physics Volume 1, by H. C. Verma, 1st edition, Celestial mechanics Euclidean symmetries Rotation
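The constant-angular-acceleration kinematics and the angular-momentum relations above mirror their linear counterparts; the sketch below spins a flywheel up from rest and then applies L = Iω to the figure-skater example (all numbers are illustrative):

```python
import math

def spin_up(omega0, alpha, t):
    """Constant angular acceleration: angular velocity and displacement after time t."""
    omega = omega0 + alpha * t                   # omega = omega0 + alpha*t
    theta = omega0 * t + 0.5 * alpha * t**2      # theta = omega0*t + (1/2)*alpha*t^2
    return omega, theta

omega, theta = spin_up(omega0=0.0, alpha=2.0, t=5.0)      # rad/s^2, s
print(omega, "rad/s,", round(theta / (2 * math.pi), 2), "revolutions")
print(math.isclose(omega**2, 2 * 2.0 * theta))    # cross-check: omega^2 = omega0^2 + 2*alpha*theta

# Figure skater: pulling the arms in reduces I, so omega rises to conserve L = I*omega.
I1, omega1, I2 = 4.0, 2.0, 1.6                    # kg m^2, rad/s, kg m^2 (illustrative)
omega2 = I1 * omega1 / I2
print(omega2, "rad/s;  KE:", 0.5*I1*omega1**2, "->", 0.5*I2*omega2**2, "J")
```

The "3 hours and 18 minutes" figure for a spinning body of water follows from equating the surface gravity of a uniform sphere, g = (4/3)πGρR, with the centripetal acceleration ω²R; the radius cancels, leaving T = √(3π/(Gρ)). A quick check under that assumption:

```python
import math

G = 6.674e-11                                     # m^3 kg^-1 s^-2

def critical_rotation_period(density):
    """Minimum period T = sqrt(3*pi/(G*rho)) before a uniform fluid body separates."""
    return math.sqrt(3 * math.pi / (G * density))

print(round(critical_rotation_period(1000.0) / 3600.0, 2), "hours")   # ~3.3 h, i.e. ~3 h 18 min
```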
Physics engine
A physics engine is computer software that provides an approximate simulation of certain physical systems, such as rigid body dynamics (including collision detection), soft body dynamics, and fluid dynamics, of use in the domains of computer graphics, video games and film (CGI). Their main uses are in video games (typically as middleware), in which case the simulations are in real-time. The term is sometimes used more generally to describe any software system for simulating physical phenomena, such as high-performance scientific simulation. Description There are generally two classes of physics engines: real-time and high-precision. High-precision physics engines require more processing power to calculate very precise physics and are usually used by scientists and computer-animated movies. Real-time physics engines—as used in video games and other forms of interactive computing—use simplified calculations and decreased accuracy to compute in time for the game to respond at an appropriate rate for game play. A physics engine is essentially a big calculator that does mathematics needed to simulate physics. Scientific engines One of the first general purpose computers, ENIAC, was used as a very simple type of physics engine. It was used to design ballistics tables to help the United States military estimate where artillery shells of various mass would land when fired at varying angles and gunpowder charges, also accounting for drift caused by wind. The results were calculated a single time only, and were tabulated into printed tables handed out to the artillery commanders. Physics engines have been commonly used on supercomputers since the 1980s to perform computational fluid dynamics modeling, where particles are assigned force vectors that are combined to show circulation. Due to the requirements of speed and high precision, special computer processors known as vector processors were developed to accelerate the calculations. The techniques can be used to model weather patterns in weather forecasting, wind tunnel data for designing air- and watercraft or motor vehicles including racecars, and thermal cooling of computer processors for improving heat sinks. As with many calculation-laden processes in computing, the accuracy of the simulation is related to the resolution of the simulation and the precision of the calculations; small fluctuations not modeled in the simulation can drastically change the predicted results. Tire manufacturers use physics simulations to examine how new tire tread types will perform under wet and dry conditions, using new tire materials of varying flexibility and under different levels of weight loading. Game engines In most computer games, speed of the processors and gameplay are more important than accuracy of simulation. This leads to designs for physics engines that produce results in real-time but that replicate real world physics only for simple cases and typically with some approximation. More often than not, the simulation is geared towards providing a "perceptually correct" approximation rather than a real simulation. However some game engines, such as Source, use physics in puzzles or in combat situations. This requires more accurate physics so that, for example, the momentum of an object can knock over an obstacle or lift a sinking object. Physically-based character animation in the past only used rigid body dynamics because they are faster and easier to calculate, but modern games and movies are starting to use soft body physics. 
Soft body physics are also used for particle effects, liquids and cloth. Some form of limited fluid dynamics simulation is sometimes provided to simulate water and other liquids as well as the flow of fire and explosions through the air. Collision detection Objects in games interact with the player, the environment, and each other. Typically, most 3D objects in games are represented by two separate meshes or shapes. One of these meshes is the highly complex and detailed shape visible to the player in the game, such as a vase with elegant curved and looping handles. For purpose of speed, a second, simplified invisible mesh is used to represent the object to the physics engine so that the physics engine treats the example vase as a simple cylinder. It would thus be impossible to insert a rod or fire a projectile through the handle holes on the vase, because the physics engine model is based on the cylinder and is unaware of the handles. The simplified mesh used for physics processing is often referred to as the collision geometry. This may be a bounding box, sphere, or convex hull. Engines that use bounding boxes or bounding spheres as the final shape for collision detection are considered extremely simple. Generally a bounding box is used for broad phase collision detection to narrow down the number of possible collisions before costly mesh on mesh collision detection is done in the narrow phase of collision detection. Another aspect of precision in discrete collision detection involves the framerate, or the number of moments in time per second when physics is calculated. Each frame is treated as separate from all other frames, and the space between frames is not calculated. A low framerate and a small fast-moving object causes a situation where the object does not move smoothly through space but instead seems to teleport from one point in space to the next as each frame is calculated. Projectiles moving at sufficiently high speeds will miss targets, if the target is small enough to fit in the gap between the calculated frames of the fast moving projectile. Various techniques are used to overcome this flaw, such as Second Lifes representation of projectiles as arrows with invisible trailing tails longer than the gap in frames to collide with any object that might fit between the calculated frames. By contrast, continuous collision detection such as in Bullet or Havok does not suffer this problem. Soft-body dynamics An alternative to using bounding box-based rigid body physics systems is to use a finite element-based system. In such a system, a 3-dimensional, volumetric tessellation is created of the 3D object. The tessellation results in a number of finite elements which represent aspects of the object's physical properties such as toughness, plasticity, and volume preservation. Once constructed, the finite elements are used by a solver to model the stress within the 3D object. The stress can be used to drive fracture, deformation and other physical effects with a high degree of realism and uniqueness. As the number of modeled elements is increased, the engine's ability to model physical behavior increases. The visual representation of the 3D object is altered by the finite element system through the use of a deformation shader run on the CPU or GPU. Finite Element-based systems had been impractical for use in games due to the performance overhead and the lack of tools to create finite element representations out of 3D art objects. 
With higher performance processors and tools to rapidly create the volumetric tessellations, real-time finite element systems began to be used in games, beginning with Star Wars: The Force Unleashed that used Digital Molecular Matter for the deformation and destruction effects of wood, steel, flesh and plants using an algorithm developed by Dr. James O'Brien as a part of his PhD thesis. Brownian motion In the real world, physics is always active. There is a constant Brownian motion jitter to all particles in our universe as the forces push back and forth against each other. For a game physics engine, such constant active precision is unnecessarily wasting the limited CPU power, which can cause problems such as decreased framerate. Thus, games may put objects to "sleep" by disabling the computation of physics on objects that have not moved a particular distance within a certain amount of time. For example, in the 3D virtual world Second Life, if an object is resting on the floor and the object does not move beyond a minimal distance in about two seconds, then the physics calculations are disabled for the object and it becomes frozen in place. The object remains frozen until physics processing reactivates for the object after collision occurs with some other active physical object. Paradigms Physics engines for video games typically have two core components, a collision detection/collision response system, and the dynamics simulation component responsible for solving the forces affecting the simulated objects. Modern physics engines may also contain fluid simulations, animation control systems and asset integration tools. There are three major paradigms for the physical simulation of solids: Penalty methods, where interactions are commonly modelled as mass-spring systems. This type of engine is popular for deformable, or soft-body physics. Constraint based methods, where constraint equations are solved that estimate physical laws. Impulse based methods, where impulses are applied to object interactions. However, this is actually just a special case of a constraint based method combined with an iterative solver that propagates impulses throughout the system. Finally, hybrid methods are possible that combine aspects of the above paradigms. Limitations A primary limit of physics engine realism is the approximated result of the constraint resolutions and collision result due to the slow convergence of algorithms. Collision detection computed at a too low frequency can result in objects passing through each other and then being repelled with an abnormal correction force. On the other hand, approximated results of reaction force is due to the slow convergence of typical Projected Gauss Seidel solver resulting in abnormal bouncing. Any type of free-moving compound physics object can demonstrate this problem, but it is especially prone to affecting chain links under high tension, and wheeled objects with actively physical bearing surfaces. Higher precision reduces the positional/force errors, but at the cost of needing greater CPU power for the calculations. Physics processing unit (PPU) A physics processing unit (PPU) is a dedicated microprocessor designed to handle the calculations of physics, especially in the physics engine of video games. Examples of calculations involving a PPU might include rigid body dynamics, soft body dynamics, collision detection, fluid dynamics, hair and clothing simulation, finite element analysis, and fracturing of objects. 
The idea is that specialized processors offload time-consuming tasks from a computer's CPU, much like how a GPU performs graphics operations in the main CPU's place. The term was coined by Ageia's marketing to describe their PhysX chip to consumers. Several other technologies in the CPU-GPU spectrum have some features in common with it, although Ageia's solution was the only complete one designed, marketed, supported, and placed within a system exclusively as a PPU. General-purpose computing on graphics processing unit (GPGPU) Hardware acceleration for physics processing is now usually provided by graphics processing units that support more general computation, a concept known as general-purpose computing on graphics processing units (GPGPU). AMD and NVIDIA provide support for rigid body dynamics computations on their latest graphics cards. NVIDIA's GeForce 8 series supports a GPU-based Newtonian physics acceleration technology named Quantum Effects Technology. NVIDIA provides an SDK Toolkit for CUDA (Compute Unified Device Architecture) technology that offers both a low and high-level API to the GPU. For their GPUs, AMD offers a similar SDK, called Close to Metal (CTM), which provides a thin hardware interface. PhysX is an example of a physics engine that can use GPGPU based hardware acceleration when it is available. Engines Real-time physics engines Open source Advanced Simulation Library - open source hardware accelerated multiphysics simulation software Box2D Bullet Chipmunk physics engine - 2D physics engine Jolt Physics - Horizon Forbidden West physics engine Newton Game Dynamics Open Dynamics Engine PAL (Physics Abstraction Layer) - A uniform API that supports multiple physics engines PhysX Project Chrono - An open source simulation engine for multi-physics applications. Siconos Modeling and the simulation of mechanical systems with contact, impact and Coulomb's friction SOFA (Simulation Open Framework Architecture) Tokamak physics engine Public domain Phyz (Dax Phyz) - 2.5D physics simulator/editor. Closed source/limited free distribution Digital Molecular Matter Havok Chaos by Epic Games Vortex by CMLabs Simulations AGX Multiphysics by Algoryx Simulation AB Algodoo by Algoryx Simulation AB Rubikon by Valve Corporation High precision physics engines VisSim - Visual Simulation engine for linear and nonlinear dynamics See also Game physics Ragdoll physics Procedural animation Rigid body dynamics Soft body dynamics Physics processing unit Cell microprocessor Linear complementarity problem Impulse/constraint physics engines require a solver for such problems to handle multi-point collisions. Finite Element Analysis References Further reading Bourg, David M. (2002) Physics for Game Developers. O'Reilly & Associates. External links Computer graphics Video game development Articles containing video clips
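Several of the ideas above are easy to sketch in a few lines of generic code, not taken from any particular engine. The first block shows a broad-phase axis-aligned bounding-box test and the frame-to-frame "tunnelling" problem for a fast projectile; the second shows a penalty-method contact, a spring-damper force applied only while two bodies interpenetrate. All constants are illustrative:

```python
def aabb_overlap(min_a, max_a, min_b, max_b):
    """Broad-phase collision check between two axis-aligned bounding boxes."""
    return all(max_a[i] >= min_b[i] and max_b[i] >= min_a[i] for i in range(3))

print(aabb_overlap((0, 0, 0), (1, 1, 1), (0.5, 0.5, 0.5), (2, 2, 2)))   # True

def hits_wall(speed, wall_x, wall_thickness, dt, total_time=0.1):
    """Discrete stepping: the projectile is only tested at per-frame positions."""
    x, t = 0.0, 0.0
    while t < total_time:
        x += speed * dt
        t += dt
        if wall_x <= x <= wall_x + wall_thickness:
            return True
    return False

# A 350 m/s projectile vs a 0.2 m thick wall at x = 10 m:
print(hits_wall(350.0, 10.0, 0.2, dt=1/30))     # False: it "teleports" past the wall
print(hits_wall(350.0, 10.0, 0.2, dt=1/1000))   # True: finer steps catch the collision
```

```python
# Penalty-method contact: a ball dropped onto the floor at y = 0. While the ball
# penetrates the floor, a stiff spring-damper force pushes it back out.
# Semi-implicit Euler integration; all constants are illustrative.
m, g = 1.0, 9.81             # kg, m/s^2
k, c = 5000.0, 50.0          # penalty stiffness (N/m) and contact damping (N*s/m)
y, v = 1.0, 0.0              # initial height (m) and velocity (m/s)
dt = 0.001

for _ in range(3000):
    penetration = max(0.0, -y)
    f_contact = max(0.0, k * penetration - c * v) if penetration > 0.0 else 0.0
    v += (-g + f_contact / m) * dt
    y += v * dt

print(round(y, 4), round(v, 4))   # settles just below y = 0 with a small residual penetration
```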
Electronvolt
In physics, an electronvolt (symbol eV), also written electron-volt and electron volt, is a measure of the amount of kinetic energy gained by a single electron accelerating through an electric potential difference of one volt in vacuum. When used as a unit of energy, the numerical value of 1 eV in joules (symbol J) is equal to the numerical value of the charge of an electron in coulombs (symbol C). Under the 2019 revision of the SI, this sets 1 eV equal to the exact value 1.602176634×10⁻¹⁹ J. Historically, the electronvolt was devised as a standard unit of measure through its usefulness in electrostatic particle accelerator sciences, because a particle with electric charge q gains an energy E = qV after passing through a voltage of V. Definition and use An electronvolt is the amount of energy gained or lost by a single electron when it moves through an electric potential difference of one volt. Hence, it has a value of one volt, which is 1 J/C, multiplied by the elementary charge e = 1.602176634×10⁻¹⁹ C. Therefore, one electronvolt is equal to 1.602176634×10⁻¹⁹ J. The electronvolt (eV) is a unit of energy, but is not an SI unit. It is a commonly used unit of energy within physics, widely used in solid state, atomic, nuclear and particle physics, and high-energy astrophysics. It is commonly used with SI prefixes milli- (10⁻³), kilo- (10³), mega- (10⁶), giga- (10⁹), tera- (10¹²), peta- (10¹⁵) or exa- (10¹⁸), the respective symbols being meV, keV, MeV, GeV, TeV, PeV and EeV. The SI unit of energy is the joule (J). In some older documents, and in the name Bevatron, the symbol BeV is used, where the B stands for billion. The symbol BeV is therefore equivalent to GeV, though neither is an SI unit. Relation to other physical properties and units In the fields of physics in which the electronvolt is used, other quantities are typically measured in units derived from the electronvolt, formed as products of the electronvolt with fundamental constants of importance in the theory. Mass By mass–energy equivalence, the electronvolt corresponds to a unit of mass. It is common in particle physics, where units of mass and energy are often interchanged, to express mass in units of eV/c², where c is the speed of light in vacuum (from E = mc²). It is common to informally express mass in terms of eV as a unit of mass, effectively using a system of natural units with c set to 1. The kilogram equivalent of 1 eV/c² is approximately 1.783×10⁻³⁶ kg. For example, an electron and a positron, each with a mass of 0.511 MeV/c², can annihilate to yield 1.022 MeV of energy. A proton has a mass of 0.938 GeV/c². In general, the masses of all hadrons are of the order of 1 GeV/c², which makes the GeV/c² a convenient unit of mass for particle physics: 1 GeV/c² is approximately 1.783×10⁻²⁷ kg. The atomic mass constant (mu), one twelfth of the mass of a carbon-12 atom, is close to the mass of a proton. To convert a mass in daltons to its electronvolt mass-equivalent, multiply by approximately 931.494 MeV/c² per dalton. Momentum By dividing a particle's kinetic energy in electronvolts by the fundamental constant c (the speed of light), one can describe the particle's momentum in units of eV/c. In natural units in which the fundamental velocity constant c is numerically 1, the c may informally be omitted to express momentum using the unit electronvolt. The energy–momentum relation E² = p² + m² in natural units (with c = 1) is a Pythagorean equation. When a relatively high energy is applied to a particle with relatively low rest mass, it can be approximated as E ≈ p in high-energy physics, such that an applied energy expressed in the unit eV conveniently results in an approximately numerically equivalent change of momentum when expressed in the unit eV/c. The dimension of momentum is M L T⁻¹. The dimension of energy is M L² T⁻².
Dividing a unit of energy (such as eV) by a fundamental constant (such as the speed of light) that has the dimension of velocity facilitates the required conversion for using a unit of energy to quantify momentum. For example, if the momentum p of an electron is , then the conversion to MKS system of units can be achieved by: Distance In particle physics, a system of natural units in which the speed of light in vacuum c and the reduced Planck constant ħ are dimensionless and equal to unity is widely used: . In these units, both distances and times are expressed in inverse energy units (while energy and mass are expressed in the same units, see mass–energy equivalence). In particular, particle scattering lengths are often presented using a unit of inverse particle mass. Outside this system of units, the conversion factors between electronvolt, second, and nanometer are the following: The above relations also allow expressing the mean lifetime τ of an unstable particle (in seconds) in terms of its decay width Γ (in eV) via . For example, the meson has a lifetime of 1.530(9) picoseconds, mean decay length is , or a decay width of . Conversely, the tiny meson mass differences responsible for meson oscillations are often expressed in the more convenient inverse picoseconds. Energy in electronvolts is sometimes expressed through the wavelength of light with photons of the same energy: Temperature In certain fields, such as plasma physics, it is convenient to use the electronvolt to express temperature. The electronvolt is divided by the Boltzmann constant to convert to the Kelvin scale: where kB is the Boltzmann constant. The kB is assumed when using the electronvolt to express temperature, for example, a typical magnetic confinement fusion plasma is (kiloelectronvolt), which is equal to 174 MK (megakelvin). As an approximation: kBT is about (≈ ) at a temperature of . Wavelength The energy E, frequency ν, and wavelength λ of a photon are related by where h is the Planck constant, c is the speed of light. This reduces to A photon with a wavelength of (green light) would have an energy of approximately . Similarly, would correspond to an infrared photon of wavelength or frequency . Scattering experiments In a low-energy nuclear scattering experiment, it is conventional to refer to the nuclear recoil energy in units of eVr, keVr, etc. This distinguishes the nuclear recoil energy from the "electron equivalent" recoil energy (eVee, keVee, etc.) measured by scintillation light. For example, the yield of a phototube is measured in phe/keVee (photoelectrons per keV electron-equivalent energy). The relationship between eV, eVr, and eVee depends on the medium the scattering takes place in, and must be established empirically for each material. Energy comparisons Molar energy One mole of particles given 1 eV of energy each has approximately 96.5 kJ of energy – this corresponds to the Faraday constant (F ≈ ), where the energy in joules of n moles of particles each with energy E eV is equal to E·F·n. See also Orders of magnitude (energy) References External links Fundamental Physical Constants from NIST Particle physics Units of chemical measurement Units of energy Voltage Electron
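The unit relations described above (the energy, mass, temperature and photon-wavelength equivalents of the electronvolt) are easy to script. The helper names below are ad hoc; the constants are the exact SI values quoted to the digits shown.

```python
# Converting electronvolts to joules, kilograms, kelvins and nanometres,
# following the relations described in the article above.
E_CHARGE = 1.602176634e-19   # J per eV (exact since the 2019 SI revision)
C_LIGHT  = 2.99792458e8      # m/s (exact)
K_BOLTZ  = 1.380649e-23      # J/K (exact)
H_PLANCK = 6.62607015e-34    # J*s (exact)

def ev_to_joules(e_ev):
    return e_ev * E_CHARGE

def ev_to_kg(e_ev):
    """Mass equivalent of an energy given in eV, i.e. E/c^2."""
    return ev_to_joules(e_ev) / C_LIGHT**2

def ev_to_kelvin(e_ev):
    """Temperature for which k_B * T equals the given energy."""
    return ev_to_joules(e_ev) / K_BOLTZ

def ev_to_wavelength_nm(e_ev):
    """Photon wavelength (nm) corresponding to the given photon energy."""
    return H_PLANCK * C_LIGHT / ev_to_joules(e_ev) * 1e9

print(ev_to_kg(0.511e6))          # electron mass, ~9.1e-31 kg
print(ev_to_kelvin(15e3))         # ~1.7e8 K, the fusion-plasma example above
print(ev_to_wavelength_nm(2.33))  # ~532 nm, green light
```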
Abraham–Lorentz force
In the physics of electromagnetism, the Abraham–Lorentz force (also known as the Lorentz–Abraham force) is the reaction force on an accelerating charged particle caused by the particle emitting electromagnetic radiation by self-interaction. It is also called the radiation reaction force, the radiation damping force, or the self-force. It is named after the physicists Max Abraham and Hendrik Lorentz. The formula, although predating the theory of special relativity, was initially calculated in a non-relativistic velocity approximation; it was extended to arbitrary velocities by Max Abraham and was shown to be physically consistent by George Adolphus Schott. The non-relativistic form is called the Lorentz self-force, while the relativistic version is called the Lorentz–Dirac force; collectively they are known as the Abraham–Lorentz–Dirac force. The equations are in the domain of classical physics, not quantum physics, and therefore may not be valid at distances of roughly the Compton wavelength or below. There are, however, two analogs of the formula that are both fully quantum and relativistic: one is called the "Abraham–Lorentz–Dirac–Langevin equation", the other is the self-force on a moving mirror. The force is proportional to the square of the object's charge, multiplied by the jerk that it is experiencing. (Jerk is the rate of change of acceleration.) The force points in the direction of the jerk. For example, in a cyclotron, where the jerk points opposite to the velocity, the radiation reaction is directed opposite to the velocity of the particle, providing a braking action. The Abraham–Lorentz force is the source of the radiation resistance of a radio antenna radiating radio waves. There are pathological solutions of the Abraham–Lorentz–Dirac equation in which a particle accelerates in advance of the application of a force, so-called pre-acceleration solutions. Since this would represent an effect occurring before its cause (retrocausality), some theories have speculated that the equation allows signals to travel backward in time, thus challenging the physical principle of causality. One resolution of this problem was discussed by Arthur D. Yaghjian and was further discussed by Fritz Rohrlich and Rodrigo Medina. Furthermore, some authors argue that a radiation reaction force is unnecessary, introducing a corresponding stress-energy tensor that naturally conserves energy and momentum in Minkowski space and other suitable spacetimes. Definition and description The Lorentz self-force, derived for the non-relativistic velocity approximation v ≪ c, is given in SI units by F = (μ0 q² / 6πc) ȧ = (q² / 6πε0c³) ȧ, or in Gaussian units by F = (2q² / 3c³) ȧ, where F is the force, ȧ is the derivative of acceleration, or the third derivative of displacement, also called jerk, μ0 is the magnetic constant, ε0 is the electric constant, c is the speed of light in free space, and q is the electric charge of the particle. Physically, an accelerating charge emits radiation (according to the Larmor formula), which carries momentum away from the charge. Since momentum is conserved, the charge is pushed in the direction opposite the direction of the emitted radiation. In fact the formula above for radiation force can be derived from the Larmor formula, as shown below. The Abraham–Lorentz force, a generalization of the Lorentz self-force to arbitrary velocities, involves the Lorentz factor γ associated with v, the velocity of the particle. The formula is consistent with special relativity and reduces to the Lorentz self-force expression in the low-velocity limit.
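As a rough numerical illustration of the non-relativistic self-force quoted above, the sketch below evaluates it for an electron, together with the characteristic time q²/(6πε0mc³) that reappears later in the article; the jerk value is an arbitrary assumption chosen only to show the scale of the effect.

```python
import math

EPS0 = 8.8541878128e-12   # F/m, vacuum permittivity
C    = 2.99792458e8       # m/s, speed of light
Q_E  = 1.602176634e-19    # C, elementary charge
M_E  = 9.1093837015e-31   # kg, electron mass

def lorentz_self_force(q, jerk):
    """Non-relativistic radiation-reaction force F = q^2 * jerk / (6*pi*eps0*c^3)."""
    return q**2 * jerk / (6.0 * math.pi * EPS0 * C**3)

# Characteristic time tau = q^2 / (6*pi*eps0*m*c^3): the scale below which
# the classical description discussed later in the article becomes suspect.
tau_e = Q_E**2 / (6.0 * math.pi * EPS0 * M_E * C**3)
print(f"tau for the electron ~ {tau_e:.2e} s")            # ~6e-24 s

# Self-force on an electron for an assumed, arbitrary jerk of 1e30 m/s^3.
print(f"F_rad ~ {lorentz_self_force(Q_E, 1e30):.2e} N")
```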
The covariant form of radiation reaction deduced by Dirac for arbitrary shape of elementary charges is found to be: History The first calculation of electromagnetic radiation energy due to current was given by George Francis FitzGerald in 1883, in which radiation resistance appears. However, dipole antenna experiments by Heinrich Hertz made a bigger impact and gathered commentary by Poincaré on the amortissement or damping of the oscillator due to the emission of radiation. Qualitative discussions surrounding damping effects of radiation emitted by accelerating charges was sparked by Henry Poincaré in 1891. In 1892, Hendrik Lorentz derived the self-interaction force of charges for low velocities but did not relate it to radiation losses. Suggestion of a relationship between radiation energy loss and self-force was first made by Max Planck. Planck's concept of the damping force, which did not assume any particular shape for elementary charged particles, was applied by Max Abraham to find the radiation resistance of an antenna in 1898, which remains the most practical application of the phenomenon. In the early 1900s, Abraham formulated a generalization of the Lorentz self-force to arbitrary velocities, the physical consistency of which was later shown by George Adolphus Schott. Schott was able to derive the Abraham equation and attributed "acceleration energy" to be the source of energy of the electromagnetic radiation. Originally submitted as an essay for the 1908 Adams Prize, he won the competition and had the essay published as a book in 1912. The relationship between self-force and radiation reaction became well-established at this point. Wolfgang Pauli first obtained the covariant form of the radiation reaction and in 1938, Paul Dirac found that the equation of motion of charged particles, without assuming the shape of the particle, contained Abraham's formula within reasonable approximations. The equations derived by Dirac are considered exact within the limits of classical theory. Background In classical electrodynamics, problems are typically divided into two classes: Problems in which the charge and current sources of fields are specified and the fields are calculated, and The reverse situation, problems in which the fields are specified and the motion of particles are calculated. In some fields of physics, such as plasma physics and the calculation of transport coefficients (conductivity, diffusivity, etc.), the fields generated by the sources and the motion of the sources are solved self-consistently. In such cases, however, the motion of a selected source is calculated in response to fields generated by all other sources. Rarely is the motion of a particle (source) due to the fields generated by that same particle calculated. The reason for this is twofold: Neglect of the "self-fields" usually leads to answers that are accurate enough for many applications, and Inclusion of self-fields leads to problems in physics such as renormalization, some of which are still unsolved, that relate to the very nature of matter and energy. These conceptual problems created by self-fields are highlighted in a standard graduate text. [Jackson] The difficulties presented by this problem touch one of the most fundamental aspects of physics, the nature of the elementary particle. Although partial solutions, workable within limited areas, can be given, the basic problem remains unsolved. One might hope that the transition from classical to quantum-mechanical treatments would remove the difficulties. 
While there is still hope that this may eventually occur, the present quantum-mechanical discussions are beset with even more elaborate troubles than the classical ones. It is one of the triumphs of comparatively recent years (~ 1948–1950) that the concepts of Lorentz covariance and gauge invariance were exploited sufficiently cleverly to circumvent these difficulties in quantum electrodynamics and so allow the calculation of very small radiative effects to extremely high precision, in full agreement with experiment. From a fundamental point of view, however, the difficulties remain. The Abraham–Lorentz force is the result of the most fundamental calculation of the effect of self-generated fields. It arises from the observation that accelerating charges emit radiation. The Abraham–Lorentz force is the average force that an accelerating charged particle feels in the recoil from the emission of radiation. The introduction of quantum effects leads one to quantum electrodynamics. The self-fields in quantum electrodynamics generate a finite number of infinities in the calculations that can be removed by the process of renormalization. This has led to a theory that is able to make the most accurate predictions that humans have made to date. (See precision tests of QED.) The renormalization process fails, however, when applied to the gravitational force. The infinities in that case are infinite in number, which causes the failure of renormalization. Therefore, general relativity has an unsolved self-field problem. String theory and loop quantum gravity are current attempts to resolve this problem, formally called the problem of radiation reaction or the problem of self-force. Derivation The simplest derivation for the self-force is found for periodic motion from the Larmor formula for the power radiated from a point charge that moves with velocity much lower than that of speed of light: If we assume the motion of a charged particle is periodic, then the average work done on the particle by the Abraham–Lorentz force is the negative of the Larmor power integrated over one period from to : The above expression can be integrated by parts. If we assume that there is periodic motion, the boundary term in the integral by parts disappears: Clearly, we can identify the Lorentz self-force equation which is applicable to slow moving particles as: A more rigorous derivation, which does not require periodic motion, was found using an effective field theory formulation. A generalized equation for arbitrary velocities was formulated by Max Abraham, which is found to be consistent with special relativity. An alternative derivation, making use of theory of relativity which was well established at that time, was found by Dirac without any assumption of the shape of the charged particle. Signals from the future Below is an illustration of how a classical analysis can lead to surprising results. The classical theory can be seen to challenge standard pictures of causality, thus signaling either a breakdown or a need for extension of the theory. In this case the extension is to quantum mechanics and its relativistic counterpart quantum field theory. See the quote from Rohrlich in the introduction concerning "the importance of obeying the validity limits of a physical theory". For a particle in an external force , we have where This equation can be integrated once to obtain The integral extends from the present to infinitely far in the future. 
Thus future values of the force affect the acceleration of the particle in the present. The future values are weighted by a factor that decays exponentially, falling off rapidly for times greater than the characteristic time t0 = q²/(6πε0mc³) in the future. Therefore, signals from an interval approximately t0 into the future affect the acceleration in the present. For an electron, this time is approximately 10⁻²⁴ seconds, which is roughly the time it takes for a light wave to travel across the "size" of an electron, the classical electron radius. One way to define this "size" is as follows: it is (up to some constant factor) the distance such that two electrons placed at rest that far apart and then allowed to fly apart would have sufficient energy to reach half the speed of light. In other words, it forms the length (or time, or energy) scale where something as light as an electron would be fully relativistic. It is worth noting that this expression does not involve the Planck constant at all, so although it indicates something is wrong at this length scale, it does not directly relate to quantum uncertainty, or to the frequency–energy relation of a photon. Although it is common in quantum mechanics to treat ħ → 0 as a "classical limit", some speculate that even the classical theory needs renormalization, no matter how the Planck constant would be fixed. Abraham–Lorentz–Dirac force To find the relativistic generalization, Dirac renormalized the mass in the equation of motion with the Abraham–Lorentz force in 1938. This renormalized equation of motion is called the Abraham–Lorentz–Dirac equation of motion. Definition The expression derived by Dirac is written in metric signature (− + + +). With Liénard's relativistic generalization of Larmor's formula in the co-moving frame, one can show it to be a valid force by manipulating the time-averaged equation for power. Paradoxes Pre-acceleration Similar to the non-relativistic case, there are pathological solutions of the Abraham–Lorentz–Dirac equation that anticipate a change in the external force and according to which the particle accelerates in advance of the application of a force, so-called preacceleration solutions. One resolution of this problem was discussed by Yaghjian, and is further discussed by Rohrlich and Medina. Runaway solutions Runaway solutions are solutions of the ALD equation in which the acceleration of the particle grows exponentially over time; they are considered unphysical. Hyperbolic motion The ALD force is known to vanish for constant proper acceleration, i.e. hyperbolic motion in a Minkowski spacetime diagram. Whether electromagnetic radiation exists under such conditions was a matter of debate until Fritz Rohrlich resolved the problem by showing that hyperbolically moving charges do emit radiation. Subsequently, the issue has been discussed in the context of energy conservation and the equivalence principle, and is classically resolved by considering the "acceleration energy" or Schott energy. Self-interactions However, the antidamping mechanism resulting from the Abraham–Lorentz force can be compensated by other nonlinear terms, which are frequently disregarded in expansions of the retarded Liénard–Wiechert potential. Landau–Lifshitz radiation damping force The Abraham–Lorentz–Dirac force leads to some pathological solutions.
In order to avoid this, Lev Landau and Evgeny Lifshitz came with the following formula for radiation damping force, which is valid when the radiation damping force is small compared with the Lorentz force in some frame of reference (assuming it exists), so that the equation of motion of the charge in an external field can be written as Here is the four-velocity of the particle, is the Lorentz factor and is the three-dimensional velocity vector. The three-dimensional Landau–Lifshitz radiation damping force can be written as where is the total derivative. Experimental observations While the Abraham–Lorentz force is largely neglected for many experimental considerations, it gains importance for plasmonic excitations in larger nanoparticles due to large local field enhancements. Radiation damping acts as a limiting factor for the plasmonic excitations in surface-enhanced Raman scattering. The damping force was shown to broaden surface plasmon resonances in gold nanoparticles, nanorods and clusters. The effects of radiation damping on nuclear magnetic resonance were also observed by Nicolaas Bloembergen and Robert Pound, who reported its dominance over spin–spin and spin–lattice relaxation mechanisms for certain cases. The Abraham–Lorentz force has been observed in the semiclassical regime in experiments which involve the scattering of a relativistic beam of electrons with a high intensity laser. In the experiments, a supersonic jet of helium gas is intercepted by a high-intensity (1018–1020 W/cm2) laser. The laser ionizes the helium gas and accelerates the electrons via what is known as the “laser-wakefield” effect. A second high-intensity laser beam is then propagated counter to this accelerated electron beam. In a small number of cases, inverse-Compton scattering occurs between the photons and the electron beam, and the spectra of the scattered electrons and photons are measured. The photon spectra are then compared with spectra calculated from Monte Carlo simulations that use either the QED or classical LL equations of motion. Collective effects The effects of radiation reaction are often considered within the framework of single-particle dynamics. However, interesting phenomena arise when a collection of charged particles is subjected to strong electromagnetic fields, such as in a plasma. In such scenarios, the collective behavior of the plasma can significantly modify its properties due to radiation reaction effects. Theoretical studies have shown that in environments with strong magnetic fields, like those found around pulsars and magnetars, radiation reaction cooling can alter the collective dynamics of the plasma. This modification can lead to instabilities within the plasma. Specifically, in the high magnetic fields typical of these astrophysical objects, the momentum distribution of particles is bunched and becomes anisotropic due to radiation reaction forces, potentially driving plasma instabilities and affecting overall plasma behavior. Among these instabilities, the firehose instability can arise due to the anisotropic pressure. See also Lorentz force Cyclotron radiation Synchrotron radiation Electromagnetic mass Radiation resistance Radiation damping Wheeler–Feynman absorber theory Magnetic radiation reaction force References Further reading See sections 11.2.2 and 11.2.3 Donald H. Menzel (1960) Fundamental Formulas of Physics, Dover Publications Inc., , vol. 1, p. 345. 
Stephen Parrott (1987) Relativistic Electrodynamics and Differential Geometry, § 4.3 Radiation reaction and the Lorentz–Dirac equation, pages 136–45, and § 5.5 Peculiar solutions of the Lorentz–Dirac equation, pp. 195–204, Springer-Verlag . External links MathPages – Does A Uniformly Accelerating Charge Radiate? Feynman: The Development of the Space-Time View of Quantum Electrodynamics EC. del Río: Radiation of an accelerated charge Electrodynamics Electromagnetic radiation Radiation Hendrik Lorentz
Mass-to-charge ratio
The mass-to-charge ratio (m/Q) is a physical quantity relating the mass (quantity of matter) and the electric charge of a given particle, expressed in units of kilograms per coulomb (kg/C). It is most widely used in the electrodynamics of charged particles, e.g. in electron optics and ion optics. It appears in the scientific fields of electron microscopy, cathode ray tubes, accelerator physics, nuclear physics, Auger electron spectroscopy, cosmology and mass spectrometry. The importance of the mass-to-charge ratio, according to classical electrodynamics, is that two particles with the same mass-to-charge ratio move in the same path in a vacuum, when subjected to the same electric and magnetic fields. Some disciplines use the charge-to-mass ratio (Q/m) instead, which is the multiplicative inverse of the mass-to-charge ratio. The CODATA recommended value for an electron is Origin When charged particles move in electric and magnetic fields the following two laws apply: Lorentz force law: Newton's second law of motion: where F is the force applied to the ion, m is the mass of the particle, a is the acceleration, Q is the electric charge, E is the electric field, and v × B is the cross product of the ion's velocity and the magnetic flux density. This differential equation is the classic equation of motion for charged particles. Together with the particle's initial conditions, it completely determines the particle's motion in space and time in terms of m/Q. Thus mass spectrometers could be thought of as "mass-to-charge spectrometers". When presenting data in a mass spectrum, it is common to use the dimensionless m/z, which denotes the dimensionless quantity formed by dividing the mass number of the ion by its charge number. Combining the two previous equations yields: This differential equation is the classic equation of motion of a charged particle in a vacuum. Together with the particle's initial conditions, it determines the particle's motion in space and time. It immediately reveals that two particles with the same m/Q ratio behave in the same way. This is why the mass-to-charge ratio is an important physical quantity in those scientific fields where charged particles interact with magnetic or electric fields. Exceptions There are non-classical effects that derive from quantum mechanics, such as the Stern–Gerlach effect that can diverge the path of ions of identical m/Q. Symbols and units The IUPAC-recommended symbols for mass and charge are m and Q, respectively, however using a lowercase q for charge is also very common. Charge is a scalar property, meaning that it can be either positive (+) or negative (−). The Coulomb (C) is the SI unit of charge; however, other units can be used, such as expressing charge in terms of the elementary charge (e). The SI unit of the physical quantity m/Q is kilogram per coulomb. Mass spectrometry and m/z The units and notation above are used when dealing with the physics of mass spectrometry; however, the m/z notation is used for the independent variable in a mass spectrum. This notation eases data interpretation since it is numerically more related to the dalton. For example, if an ion carries one charge the m/z is numerically equivalent to the molecular or atomic mass of the ion in daltons (Da), where the numerical value of m/Q is abstruse. The m refers to the molecular or atomic mass number (number of nucleons) and z to the charge number of the ion; however, the quantity of m/z is dimensionless by definition. 
An ion with a mass of 100 Da (daltons) carrying two charges will be observed at . However, the empirical observation is one equation with two unknowns and could have arisen from other ions, such as an ion of mass 50 Da carrying one charge. Thus, the m/z of an ion alone neither infers mass nor the number of charges. Additional information, such as the mass spacing between mass isotopomers or the relationship between multiple charge states, is required to assign the charge state and infer the mass of the ion from the m/z. This additional information is often but not always available. Thus, the m/z is primarily used to report an empirical observation in mass spectrometry. This observation may be used in conjunction with other lines of evidence to subsequently infer the physical attributes of the ion, such as mass and charge. On rare occasions, the thomson has been used as a unit of the x-axis of a mass spectrum. History In the 19th century, the mass-to-charge ratios of some ions were measured by electrochemical methods. The first attempt to measure the mass-to-charge ratio of cathode ray particles, assuming them to be ions, was made in 1884-1890 by German-born British physicist Arthur Schuster. He put an upper limit of 10^10 coul/kg, but even that resulted in much greater value than expected, so little credence was given to his calculations at the time. In 1897, the mass-to-charge ratio of the electron was first measured by J. J. Thomson. By doing this, he showed that the electron was in fact a particle with a mass and a charge, and that its mass-to-charge ratio was much smaller than that of the hydrogen ion H+. In 1898, Wilhelm Wien separated ions (canal rays) according to their mass-to-charge ratio with an ion optical device with superimposed electric and magnetic fields (Wien filter). In 1901 Walter Kaufman measured the increase of electromagnetic mass of fast electrons (Kaufmann–Bucherer–Neumann experiments), or relativistic mass increase in modern terms. In 1913, Thomson measured the mass-to-charge ratio of ions with an instrument he called a parabola spectrograph. Today, an instrument that measures the mass-to-charge ratio of charged particles is called a mass spectrometer. Charge-to-mass ratio The charge-to-mass ratio (Q/m) of an object is, as its name implies, the charge of an object divided by the mass of the same object. This quantity is generally useful only for objects that may be treated as particles. For extended objects, total charge, charge density, total mass, and mass density are often more useful. Derivation: or Since , or Equations and yield Significance In some experiments, the charge-to-mass ratio is the only quantity that can be measured directly. Often, the charge can be inferred from theoretical considerations, so the charge-to-mass ratio provides a way to calculate the mass of a particle. Often, the charge-to-mass ratio can be determined by observing the deflection of a charged particle in an external magnetic field. The cyclotron equation, combined with other information such as the kinetic energy of the particle, will give the charge-to-mass ratio. One application of this principle is the mass spectrometer. The same principle can be used to extract information in experiments involving the cloud chamber. The ratio of electrostatic to gravitational forces between two particles will be proportional to the product of their charge-to-mass ratios. 
It turns out that gravitational forces are negligible on the subatomic level, due to the extremely small masses of subatomic particles. Electron The electron charge-to-mass quotient, , is a quantity that may be measured in experimental physics. It bears significance because the electron mass me is difficult to measure directly, and is instead derived from measurements of the elementary charge e and . It also has historical significance; the Q/m ratio of the electron was successfully calculated by J. J. Thomson in 1897—and more successfully by Dunnington, which involves the angular momentum and deflection due to a perpendicular magnetic field. Thomson's measurement convinced him that cathode rays were particles, which were later identified as electrons, and he is generally credited with their discovery. The CODATA recommended value is CODATA refers to this as the electron charge-to-mass quotient, but ratio is still commonly used. There are two other common ways of measuring the charge-to-mass ratio of an electron, apart from Thomson and Dunnington's methods. The magnetron method: Using a GRD7 Valve (Ferranti valve), electrons are expelled from a hot tungsten-wire filament towards an anode. The electron is then deflected using a solenoid. From the current in the solenoid and the current in the Ferranti Valve, e/m can be calculated. Fine beam tube method: A heater heats a cathode, which emits electrons. The electrons are accelerated through a known potential, so the velocity of the electrons is known. The beam path can be seen when the electrons are accelerated through a helium (He) gas. The collisions between the electrons and the helium gas produce a visible trail. A pair of Helmholtz coils produces a uniform and measurable magnetic field at right angles to the electron beam. This magnetic field deflects the electron beam in a circular path. By measuring the accelerating potential (volts), the current (amps) to the Helmholtz coils, and the radius of the electron beam, e/m can be calculated. Zeeman Effect The charge-to-mass ratio of an electron may also be measured with the Zeeman effect, which gives rise to energy splittings in the presence of a magnetic field B: Here mj are quantum integer values ranging from −j to j, with j as the eigenvalue of the total angular momentum operator J, with where S is the spin operator with eigenvalue s and L is the angular momentum operator with eigenvalue l. gJ is the Landé g-factor, calculated as The shift in energy is also given in terms of frequency υ and wavelength λ as Measurements of the Zeeman effect commonly involve the use of a Fabry–Pérot interferometer, with light from a source (placed in a magnetic field) being passed between two mirrors of the interferometer. If δD is the change in mirror separation required to bring the mth-order ring of wavelength into coincidence with that of wavelength λ, and ΔD brings the ring of wavelength λ into coincidence with the mth-order ring, then It follows then that Rearranging, it is possible to solve for the charge-to-mass ratio of an electron as See also Gyromagnetic ratio Thomson (unit) References Bibliography IUPAP Red Book SUNAMCO 87-1 "Symbols, Units, Nomenclature and Fundamental Constants in Physics" (does not have an online version) Symbols Units and Nomenclature in Physics IUPAP-25, E.R. Cohen & P. 
Giacomo, Physics 146A (1987) 1–68 External links BIPM SI brochure AIP style manual NIST on units and manuscript check list Physics Today's instructions on quantities and units Physical quantities Mass spectrometry Metrology Ratios
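The fine beam tube method described above translates directly into a short calculation: the accelerating potential and the Helmholtz-coil field give e/m through e/m = 2V/(B·r)², which follows from eV = ½mv² and mv²/r = evB. The apparatus values below are invented for illustration, not measurements.

```python
import math

MU0 = 4e-7 * math.pi   # T*m/A, vacuum permeability

def helmholtz_field(n_turns, current, coil_radius):
    """On-axis field at the centre of a Helmholtz pair: B = (4/5)^(3/2) * mu0 * n * I / R."""
    return (4.0 / 5.0) ** 1.5 * MU0 * n_turns * current / coil_radius

def charge_to_mass(accel_voltage, b_field, beam_radius):
    """e/m from the fine-beam-tube relation e/m = 2*V / (B*r)^2."""
    return 2.0 * accel_voltage / (b_field * beam_radius) ** 2

# Assumed apparatus: 130-turn coils of radius 0.15 m carrying 1.28 A,
# 220 V accelerating potential, 5 cm beam radius.
B = helmholtz_field(n_turns=130, current=1.28, coil_radius=0.15)
print(charge_to_mass(accel_voltage=220.0, b_field=B, beam_radius=0.05))
# ~1.8e11 C/kg, close to the magnitude of the CODATA value quoted above
```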
Van 't Hoff equation
The Van 't Hoff equation relates the change in the equilibrium constant, , of a chemical reaction to the change in temperature, T, given the standard enthalpy change, , for the process. The subscript means "reaction" and the superscript means "standard". It was proposed by Dutch chemist Jacobus Henricus van 't Hoff in 1884 in his book Études de Dynamique chimique (Studies in Dynamic Chemistry). The Van 't Hoff equation has been widely utilized to explore the changes in state functions in a thermodynamic system. The Van 't Hoff plot, which is derived from this equation, is especially effective in estimating the change in enthalpy and entropy of a chemical reaction. Equation Summary and uses The standard pressure, , is used to define the reference state for the Van 't Hoff equation, which is where denotes the natural logarithm, is the thermodynamic equilibrium constant, and is the ideal gas constant. This equation is exact at any one temperature and all pressures, derived from the requirement that the Gibbs free energy of reaction be stationary in a state of chemical equilibrium. In practice, the equation is often integrated between two temperatures under the assumption that the standard reaction enthalpy is constant (and furthermore, this is also often assumed to be equal to its value at standard temperature). Since in reality and the standard reaction entropy do vary with temperature for most processes, the integrated equation is only approximate. Approximations are also made in practice to the activity coefficients within the equilibrium constant. A major use of the integrated equation is to estimate a new equilibrium constant at a new absolute temperature assuming a constant standard enthalpy change over the temperature range. To obtain the integrated equation, it is convenient to first rewrite the Van 't Hoff equation as The definite integral between temperatures and is then In this equation is the equilibrium constant at absolute temperature , and is the equilibrium constant at absolute temperature . Development from thermodynamics Combining the well-known formula for the Gibbs free energy of reaction where is the entropy of the system, with the Gibbs free energy isotherm equation: we obtain Differentiation of this expression with respect to the variable while assuming that both and are independent of yields the Van 't Hoff equation. These assumptions are expected to break down somewhat for large temperature variations. Provided that and are constant, the preceding equation gives as a linear function of and hence is known as the linear form of the Van 't Hoff equation. Therefore, when the range in temperature is small enough that the standard reaction enthalpy and reaction entropy are essentially constant, a plot of the natural logarithm of the equilibrium constant versus the reciprocal temperature gives a straight line. The slope of the line may be multiplied by the gas constant to obtain the standard enthalpy change of the reaction, and the intercept may be multiplied by to obtain the standard entropy change. Van 't Hoff isotherm The Van 't Hoff isotherm can be used to determine the temperature dependence of the Gibbs free energy of reaction for non-standard state reactions at a constant temperature: where is the Gibbs free energy of reaction under non-standard states at temperature , is the Gibbs free energy for the reaction at , is the extent of reaction, and is the thermodynamic reaction quotient. 
Since , the temperature dependence of both terms can be described by Van t'Hoff equations as a function of T. This finds applications in the field of electrochemistry. particularly in the study of the temperature dependence of voltaic cells. The isotherm can also be used at fixed temperature to describe the Law of Mass Action. When a reaction is at equilibrium, and . Otherwise, the Van 't Hoff isotherm predicts the direction that the system must shift in order to achieve equilibrium; when , the reaction moves in the forward direction, whereas when , the reaction moves in the backwards direction. See Chemical equilibrium. Van 't Hoff plot For a reversible reaction, the equilibrium constant can be measured at a variety of temperatures. This data can be plotted on a graph with on the -axis and on the axis. The data should have a linear relationship, the equation for which can be found by fitting the data using the linear form of the Van 't Hoff equation This graph is called the "Van 't Hoff plot" and is widely used to estimate the enthalpy and entropy of a chemical reaction. From this plot, is the slope, and is the intercept of the linear fit. By measuring the equilibrium constant, , at different temperatures, the Van 't Hoff plot can be used to assess a reaction when temperature changes. Knowing the slope and intercept from the Van 't Hoff plot, the enthalpy and entropy of a reaction can be easily obtained using The Van 't Hoff plot can be used to quickly determine the enthalpy of a chemical reaction both qualitatively and quantitatively. This change in enthalpy can be positive or negative, leading to two major forms of the Van 't Hoff plot. Endothermic reactions For an endothermic reaction, heat is absorbed, making the net enthalpy change positive. Thus, according to the definition of the slope: When the reaction is endothermic, (and the gas constant ), so Thus, for an endothermic reaction, the Van 't Hoff plot should always have a negative slope. Exothermic reactions For an exothermic reaction, heat is released, making the net enthalpy change negative. Thus, according to the definition of the slope: For an exothermic reaction , so Thus, for an exothermic reaction, the Van 't Hoff plot should always have a positive slope. Error propagation At first glance, using the fact that it would appear that two measurements of would suffice to be able to obtain an accurate value of : where and are the equilibrium constant values obtained at temperatures and respectively. However, the precision of values obtained in this way is highly dependent on the precision of the measured equilibrium constant values. The use of error propagation shows that the error in will be about 76 kJ/mol times the experimental uncertainty in , or about 110 kJ/mol times the uncertainty in the values. Similar considerations apply to the entropy of reaction obtained from . Notably, when equilibrium constants are measured at three or more temperatures, values of and are often obtained by straight-line fitting. The expectation is that the error will be reduced by this procedure, although the assumption that the enthalpy and entropy of reaction are constant may or may not prove to be correct. If there is significant temperature dependence in either or both quantities, it should manifest itself in nonlinear behavior in the Van 't Hoff plot; however, more than three data points would presumably be needed in order to observe this. 
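A Van 't Hoff plot is straightforward to build numerically: fit ln K against 1/T and read the standard enthalpy from the slope and the standard entropy from the intercept, as described above. The equilibrium constants below are synthetic values generated for illustration, not experimental data.

```python
import numpy as np

R = 8.314  # J/(mol*K), gas constant

# Synthetic "measurements": K at several temperatures, generated from assumed
# dH = -55 kJ/mol and dS = -120 J/(mol K) via the linear form ln K = -dH/(R*T) + dS/R.
dH_true, dS_true = -55e3, -120.0
T = np.array([298.0, 310.0, 320.0, 335.0, 350.0])          # K
K = np.exp(-dH_true / (R * T) + dS_true / R)

# Van 't Hoff plot: straight-line fit of ln K versus 1/T.
slope, intercept = np.polyfit(1.0 / T, np.log(K), 1)
dH = -slope * R          # standard enthalpy change, J/mol
dS = intercept * R       # standard entropy change, J/(mol K)
print(f"dH ~ {dH/1000:.1f} kJ/mol, dS ~ {dS:.1f} J/(mol K)")   # recovers ~-55 and ~-120
```

With noisy real data the same fit applies, and the error-propagation caveats discussed above determine how trustworthy the recovered enthalpy and entropy are.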
Applications of the Van 't Hoff plot Van 't Hoff analysis In biological research, the Van 't Hoff plot is also called Van 't Hoff analysis. It is most effective in determining the favored product in a reaction. It may obtain results different from direct calorimetry such as differential scanning calorimetry or isothermal titration calorimetry due to various effects other than experimental error. Assume two products B and C form in a reaction: a A + d D → b B, a A + d D → c C. In this case, can be defined as ratio of B to C rather than the equilibrium constant. When > 1, B is the favored product, and the data on the Van 't Hoff plot will be in the positive region. When < 1, C is the favored product, and the data on the Van 't Hoff plot will be in the negative region. Using this information, a Van 't Hoff analysis can help determine the most suitable temperature for a favored product. In 2010, a Van 't Hoff analysis was used to determine whether water preferentially forms a hydrogen bond with the C-terminus or the N-terminus of the amino acid proline. The equilibrium constant for each reaction was found at a variety of temperatures, and a Van 't Hoff plot was created. This analysis showed that enthalpically, the water preferred to hydrogen bond to the C-terminus, but entropically it was more favorable to hydrogen bond with the N-terminus. Specifically, they found that C-terminus hydrogen bonding was favored by 4.2–6.4 kJ/mol. The N-terminus hydrogen bonding was favored by 31–43 J/(K mol). This data alone could not conclude which site water will preferentially hydrogen-bond to, so additional experiments were used. It was determined that at lower temperatures, the enthalpically favored species, the water hydrogen-bonded to the C-terminus, was preferred. At higher temperatures, the entropically favored species, the water hydrogen-bonded to the N-terminus, was preferred. Mechanistic studies A chemical reaction may undergo different reaction mechanisms at different temperatures. In this case, a Van 't Hoff plot with two or more linear fits may be exploited. Each linear fit has a different slope and intercept, which indicates different changes in enthalpy and entropy for each distinct mechanisms. The Van 't Hoff plot can be used to find the enthalpy and entropy change for each mechanism and the favored mechanism under different temperatures. In the example figure, the reaction undergoes mechanism 1 at high temperature and mechanism 2 at low temperature. Temperature dependence If the enthalpy and entropy are roughly constant as temperature varies over a certain range, then the Van 't Hoff plot is approximately linear when plotted over that range. However, in some cases the enthalpy and entropy do change dramatically with temperature. A first-order approximation is to assume that the two different reaction products have different heat capacities. Incorporating this assumption yields an additional term in the expression for the equilibrium constant as a function of temperature. A polynomial fit can then be used to analyze data that exhibits a non-constant standard enthalpy of reaction: where Thus, the enthalpy and entropy of a reaction can still be determined at specific temperatures even when a temperature dependence exists. 
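When the heat capacities of products and reactants differ, the plot curves, as noted in the Temperature dependence section above. The text mentions a polynomial fit; the sketch below uses the closely related form ln K = a + b/T + c·ln T, which follows if a constant ΔrCp° is assumed (that functional form and the data are assumptions for illustration only).

```python
import numpy as np

R = 8.314          # J/(mol*K)
T0 = 298.15        # K, reference temperature

# Synthetic data with a temperature-dependent enthalpy, assuming constant dCp:
dH0, dS0, dCp = -40e3, -80.0, 150.0                 # illustrative values
T = np.linspace(280.0, 380.0, 9)
dH = dH0 + dCp * (T - T0)
dS = dS0 + dCp * np.log(T / T0)
lnK = -dH / (R * T) + dS / R

# Fit ln K = a + b*(1/T) + c*ln(T) by least squares.
X = np.column_stack([np.ones_like(T), 1.0 / T, np.log(T)])
a, b, c = np.linalg.lstsq(X, lnK, rcond=None)[0]
print(f"recovered dCp ~ {c * R:.1f} J/(mol K)")                        # ~150
print(f"recovered dH at T0 ~ {(c * R * T0 - b * R) / 1000:.1f} kJ/mol")  # ~-40
```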
Surfactant self-assembly The Van 't Hoff relation is particularly useful for the determination of the micellization enthalpy of surfactants from the temperature dependence of the critical micelle concentration (CMC): However, the relation loses its validity when the aggregation number is also temperature-dependent, and the following relation should be used instead: with and being the free energies of the surfactant in a micelle with aggregation number and respectively. This effect is particularly relevant for nonionic ethoxylated surfactants or polyoxypropylene–polyoxyethylene block copolymers (Poloxamers, Pluronics, Synperonics). The extended equation can be exploited for the extraction of aggregation numbers of self-assembled micelles from differential scanning calorimetric thermograms. See also Clausius–Clapeyron relation Van 't Hoff factor Gibbs–Helmholtz equation Solubility equilibrium Arrhenius equation References Equilibrium chemistry Eponymous equations of physics Thermochemistry Jacobus Henricus van 't Hoff
Material derivative
In continuum mechanics, the material derivative describes the time rate of change of some physical quantity (like heat or momentum) of a material element that is subjected to a space-and-time-dependent macroscopic velocity field. The material derivative can serve as a link between Eulerian and Lagrangian descriptions of continuum deformation. For example, in fluid dynamics, the velocity field is the flow velocity, and the quantity of interest might be the temperature of the fluid. In which case, the material derivative then describes the temperature change of a certain fluid parcel with time, as it flows along its pathline (trajectory). Other names There are many other names for the material derivative, including: advective derivative convective derivative derivative following the motion hydrodynamic derivative Lagrangian derivative particle derivative substantial derivative substantive derivative Stokes derivative total derivative, although the material derivative is actually a special case of the total derivative Definition The material derivative is defined for any tensor field y that is macroscopic, with the sense that it depends only on position and time coordinates, : where is the covariant derivative of the tensor, and is the flow velocity. Generally the convective derivative of the field , the one that contains the covariant derivative of the field, can be interpreted both as involving the streamline tensor derivative of the field , or as involving the streamline directional derivative of the field , leading to the same result. Only this spatial term containing the flow velocity describes the transport of the field in the flow, while the other describes the intrinsic variation of the field, independent of the presence of any flow. Confusingly, sometimes the name "convective derivative" is used for the whole material derivative , instead for only the spatial term . The effect of the time-independent terms in the definitions are for the scalar and tensor case respectively known as advection and convection. Scalar and vector fields For example, for a macroscopic scalar field and a macroscopic vector field the definition becomes: In the scalar case is simply the gradient of a scalar, while is the covariant derivative of the macroscopic vector (which can also be thought of as the Jacobian matrix of as a function of ). In particular for a scalar field in a three-dimensional Cartesian coordinate system , the components of the velocity are , and the convective term is then: Development Consider a scalar quantity , where is time and is position. Here may be some physical variable such as temperature or chemical concentration. The physical quantity, whose scalar quantity is , exists in a continuum, and whose macroscopic velocity is represented by the vector field . The (total) derivative with respect to time of is expanded using the multivariate chain rule: It is apparent that this derivative is dependent on the vector which describes a chosen path in space. For example, if is chosen, the time derivative becomes equal to the partial time derivative, which agrees with the definition of a partial derivative: a derivative taken with respect to some variable (time in this case) holding other variables constant (space in this case). This makes sense because if , then the derivative is taken at some constant position. This static position derivative is called the Eulerian derivative. 
An example of this case is a swimmer standing still and sensing temperature change in a lake early in the morning: the water gradually becomes warmer due to heating from the sun. In which case the term is sufficient to describe the rate of change of temperature. If the sun is not warming the water (i.e. ), but the path is not a standstill, the time derivative of may change due to the path. For example, imagine the swimmer is in a motionless pool of water, indoors and unaffected by the sun. One end happens to be at a constant high temperature and the other end at a constant low temperature. By swimming from one end to the other the swimmer senses a change of temperature with respect to time, even though the temperature at any given (static) point is a constant. This is because the derivative is taken at the swimmer's changing location and the second term on the right is sufficient to describe the rate of change of temperature. A temperature sensor attached to the swimmer would show temperature varying with time, simply due to the temperature variation from one end of the pool to the other. The material derivative finally is obtained when the path is chosen to have a velocity equal to the fluid velocity That is, the path follows the fluid current described by the fluid's velocity field . So, the material derivative of the scalar is An example of this case is a lightweight, neutrally buoyant particle swept along a flowing river and experiencing temperature changes as it does so. The temperature of the water locally may be increasing due to one portion of the river being sunny and the other in a shadow, or the water as a whole may be heating as the day progresses. The changes due to the particle's motion (itself caused by fluid motion) is called advection (or convection if a vector is being transported). The definition above relied on the physical nature of a fluid current; however, no laws of physics were invoked (for example, it was assumed that a lightweight particle in a river will follow the velocity of the water), but it turns out that many physical concepts can be described concisely using the material derivative. The general case of advection, however, relies on conservation of mass of the fluid stream; the situation becomes slightly different if advection happens in a non-conservative medium. Only a path was considered for the scalar above. For a vector, the gradient becomes a tensor derivative; for tensor fields we may want to take into account not only translation of the coordinate system due to the fluid movement but also its rotation and stretching. This is achieved by the upper convected time derivative. Orthogonal coordinates It may be shown that, in orthogonal coordinates, the -th component of the convection term of the material derivative of a vector field is given by where the are related to the metric tensors by In the special case of a three-dimensional Cartesian coordinate system (x, y, z), and being a 1-tensor (a vector with three components), this is just: where is a Jacobian matrix. See also Navier–Stokes equations Euler equations (fluid dynamics) Derivative (generalizations) Lie derivative Levi-Civita connection Spatial acceleration Spatial gradient References Further reading Fluid dynamics Multivariable calculus Rates Generalizations of the derivative
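The split into a local rate of change plus an advective term can be checked numerically. The sketch below evaluates DT/Dt for a made-up temperature field on a 2-D grid using finite differences; the fields and the uniform velocity are arbitrary illustrative choices.

```python
import numpy as np

# Grid and made-up fields: T(x, y, t) = x*y + t, uniform velocity u = (1, 2).
n = 101
x = np.linspace(0.0, 1.0, n)
y = np.linspace(0.0, 1.0, n)
dx = dy = x[1] - x[0]
X, Y = np.meshgrid(x, y, indexing="ij")

def temperature(t):
    return X * Y + t

u = np.array([1.0, 2.0])     # flow velocity (m/s), an arbitrary choice
dt = 1e-3
T_now, T_next = temperature(0.0), temperature(dt)

# Material derivative: DT/Dt = dT/dt + u . grad T
dTdt = (T_next - T_now) / dt              # local (Eulerian) rate of change
dTdx, dTdy = np.gradient(T_now, dx, dy)   # spatial gradient components
material = dTdt + u[0] * dTdx + u[1] * dTdy

# Analytically DT/Dt = 1 + y + 2*x; compare at the grid centre (x = y = 0.5):
i = n // 2
print(material[i, i], 1.0 + Y[i, i] + 2.0 * X[i, i])   # both ~2.5
```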
Strain energy
In physics, the elastic potential energy gained by a wire during elongation with a tensile (stretching) or compressive (contractile) force is called strain energy. For linearly elastic materials, the strain energy is U = ½σεV = σ²V/(2E), where σ is the stress, ε is the strain, V is the volume, and E is Young's modulus (E = σ/ε). The external work done on an elastic member in causing it to distort from its unstressed state is transformed into strain energy, which is a form of potential energy. Strain energy in the form of elastic deformation is mostly recoverable in the form of mechanical work. Molecular strain In a molecule, strain energy is released when the constituent atoms are allowed to rearrange themselves in a chemical reaction. For example, the heat of combustion of cyclopropane (696 kJ/mol) is higher than that of propane (657 kJ/mol) for each additional CH2 unit. Compounds with unusually large strain energy include tetrahedranes, propellanes, cubane-type clusters, fenestranes and cyclophanes. References Chemical bonding Structural analysis
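A quick numerical check of the linear-elastic expression above, for a uniformly stretched wire; the wire dimensions, load and modulus are arbitrary illustrative values.

```python
import math

def strain_energy(force, length, diameter, youngs_modulus):
    """Strain energy U = sigma^2 * V / (2E) stored in a uniformly stretched wire."""
    area = math.pi * (diameter / 2.0) ** 2
    stress = force / area          # sigma = F / A
    volume = area * length
    return stress ** 2 * volume / (2.0 * youngs_modulus)

# A 2 m steel wire (E ~ 200 GPa), 1 mm diameter, loaded with 100 N:
U = strain_energy(force=100.0, length=2.0, diameter=1e-3, youngs_modulus=200e9)
print(f"{U * 1000:.1f} mJ")   # ~64 mJ
```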
Joule–Thomson effect
In thermodynamics, the Joule–Thomson effect (also known as the Joule–Kelvin effect or Kelvin–Joule effect) describes the temperature change of a real gas or liquid (as differentiated from an ideal gas) when it expands, typically as a result of the pressure loss from flow through a valve or porous plug, while it is kept insulated so that no heat is exchanged with the environment. This procedure is called a throttling process or Joule–Thomson process. The effect is entirely due to deviation from ideality; an ideal gas shows no JT effect. At room temperature, all gases except hydrogen, helium, and neon cool upon expansion by the Joule–Thomson process when being throttled through an orifice; these three gases rise in temperature when forced through a porous plug at room temperature, but fall in temperature when they start out at sufficiently low temperatures. Most liquids, such as hydraulic oils, will be warmed by the Joule–Thomson throttling process. The temperature at which the JT effect switches sign is the inversion temperature. The gas-cooling throttling process is commonly exploited in refrigeration processes such as liquefiers in the industrial air separation process. In hydraulics, the warming effect from Joule–Thomson throttling can be used to find internally leaking valves, as these produce heat that can be detected by a thermocouple or thermal-imaging camera. Throttling is a fundamentally irreversible process. The throttling due to the flow resistance in supply lines, heat exchangers, regenerators, and other components of (thermal) machines is a source of losses that limits their performance. Since it is a constant-enthalpy process, it can be used to experimentally measure the lines of constant enthalpy (isenthalps) on the pressure–temperature diagram of a gas. Combined with the specific heat capacity at constant pressure, it allows the complete measurement of the thermodynamic potential for the gas. History The effect is named after James Prescott Joule and William Thomson, 1st Baron Kelvin, who discovered it in 1852. It followed upon earlier work by Joule on Joule expansion, in which a gas undergoes free expansion in a vacuum and the temperature is unchanged, if the gas is ideal. Description The adiabatic (no heat exchanged) expansion of a gas may be carried out in a number of ways. The change in temperature experienced by the gas during expansion depends not only on the initial and final pressure, but also on the manner in which the expansion is carried out. If the expansion process is reversible, meaning that the gas is in thermodynamic equilibrium at all times, it is called an isentropic expansion. In this scenario, the gas does positive work during the expansion, and its temperature decreases. In a free expansion, on the other hand, the gas does no work and absorbs no heat, so the internal energy is conserved. Expanded in this manner, the temperature of an ideal gas would remain constant, but the temperature of a real gas decreases, except at very high temperature. The method of expansion discussed in this article, in which a gas or liquid at pressure P1 flows into a region of lower pressure P2 without significant change in kinetic energy, is called the Joule–Thomson expansion. The expansion is inherently irreversible. During this expansion, enthalpy remains unchanged (see proof below). Unlike a free expansion, work is done, causing a change in internal energy.
Whether the internal energy increases or decreases is determined by whether work is done on or by the fluid; that is determined by the initial and final states of the expansion and the properties of the fluid. The temperature change produced during a Joule–Thomson expansion is quantified by the Joule–Thomson coefficient, μJT. This coefficient may be either positive (corresponding to cooling) or negative (heating); the regions where each occurs for molecular nitrogen, N2, are shown in the figure. Note that most conditions in the figure correspond to N2 being a supercritical fluid, where it has some properties of a gas and some of a liquid, but cannot really be described as being either. The coefficient is negative at both very high and very low temperatures; at very high pressure it is negative at all temperatures. The maximum inversion temperature (621 K for N2) occurs as zero pressure is approached. For N2 gas at low pressures, μJT is negative at high temperatures and positive at low temperatures. At temperatures below the gas-liquid coexistence curve, N2 condenses to form a liquid and the coefficient again becomes negative. Thus, for N2 gas below 621 K, a Joule–Thomson expansion can be used to cool the gas until liquid N2 forms. Physical mechanism There are two factors that can change the temperature of a fluid during an adiabatic expansion: a change in internal energy or the conversion between potential and kinetic internal energy. Temperature is the measure of thermal kinetic energy (energy associated with molecular motion), so a change in temperature indicates a change in thermal kinetic energy. The internal energy is the sum of thermal kinetic energy and thermal potential energy. Thus, even if the internal energy does not change, the temperature can change due to conversion between kinetic and potential energy; this is what happens in a free expansion and typically produces a decrease in temperature as the fluid expands. If work is done on or by the fluid as it expands, then the total internal energy changes. This is what happens in a Joule–Thomson expansion and can produce larger heating or cooling than observed in a free expansion. In a Joule–Thomson expansion the enthalpy remains constant. The enthalpy, H, is defined as H = U + PV, where U is internal energy, P is pressure, and V is volume. Under the conditions of a Joule–Thomson expansion, the change in PV represents the work done by the fluid (see the proof below). If PV increases, with H constant, then U must decrease as a result of the fluid doing work on its surroundings. This produces a decrease in temperature and results in a positive Joule–Thomson coefficient. Conversely, a decrease in PV means that work is done on the fluid and the internal energy increases. If the increase in kinetic energy exceeds the increase in potential energy, there will be an increase in the temperature of the fluid and the Joule–Thomson coefficient will be negative. For an ideal gas, PV does not change during a Joule–Thomson expansion. As a result, there is no change in internal energy; since there is also no change in thermal potential energy, there can be no change in thermal kinetic energy and, therefore, no change in temperature. In real gases, PV does change. The ratio of the value of PV to that expected for an ideal gas at the same temperature is called the compressibility factor, Z. For a gas, this is typically less than unity at low temperature and greater than unity at high temperature (see the discussion in compressibility factor). 
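A numerical illustration (not from the article) of the compressibility-factor behaviour just described, using a van der Waals model of N2. The van der Waals constants are standard literature values and an assumption of this sketch.

```python
# Compressibility factor Z = PV/(nRT) for a van der Waals model of N2, showing
# Z < 1 at low temperature and Z > 1 at high temperature at moderate pressure.

R = 8.314      # J/(mol K)
A = 0.1370     # Pa m^6 / mol^2, van der Waals 'a' for N2 (assumed literature value)
B = 3.87e-5    # m^3 / mol,      van der Waals 'b' for N2 (assumed literature value)

def molar_volume(p, t, v_lo=4.0e-5, v_hi=1.0, tol=1e-12):
    """Solve p = RT/(V - b) - a/V^2 for the gas-phase molar volume by bisection."""
    f = lambda v: R * t / (v - B) - A / v**2 - p
    lo, hi = v_lo, v_hi          # f(lo) > 0 near V = b, f(hi) < 0 in the dilute limit
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

def Z(p, t):
    return p * molar_volume(p, t) / (R * t)

for t in (300.0, 600.0):   # K; the Boyle temperature of this model is a/(Rb) ~ 426 K
    print(f"T = {t:5.0f} K, P = 50 bar:  Z = {Z(50e5, t):.4f}")
# Expected output: Z slightly below 1 at 300 K and slightly above 1 at 600 K.
```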
At low pressure, the value of Z always moves towards unity as a gas expands. Thus at low temperature, Z and PV will increase as the gas expands, resulting in a positive Joule–Thomson coefficient. At high temperature, Z and PV decrease as the gas expands; if the decrease is large enough, the Joule–Thomson coefficient will be negative. For liquids, and for supercritical fluids under high pressure, PV increases as pressure increases. This is due to molecules being forced together, so that the volume can barely decrease due to higher pressure. Under such conditions, the Joule–Thomson coefficient is negative, as seen in the figure above. The physical mechanism associated with the Joule–Thomson effect is closely related to that of a shock wave, although a shock wave differs in that the change in bulk kinetic energy of the gas flow is not negligible. The Joule–Thomson (Kelvin) coefficient The rate of change of temperature with respect to pressure in a Joule–Thomson process (that is, at constant enthalpy H) is the Joule–Thomson (Kelvin) coefficient μJT = (∂T/∂P)_H. This coefficient can be expressed in terms of the gas's specific volume V, its heat capacity at constant pressure Cp, and its coefficient of thermal expansion α as μJT = (V/Cp)(αT − 1). See the derivation below for the proof of this relation. The value of μJT is typically expressed in °C/bar (SI units: K/Pa) and depends on the type of gas and on the temperature and pressure of the gas before expansion. Its pressure dependence is usually only a few percent for pressures up to 100 bar. All real gases have an inversion point at which the value of μJT changes sign. The temperature of this point, the Joule–Thomson inversion temperature, depends on the pressure of the gas before expansion. In a gas expansion the pressure decreases, so the sign of ∂P is negative by definition. With that in mind, the sign of μJT determines the outcome: below the inversion temperature μJT is positive, so ∂T is negative and the gas cools on throttling; above the inversion temperature μJT is negative, so ∂T is positive and the gas warms. Helium and hydrogen are two gases whose Joule–Thomson inversion temperatures at a pressure of one atmosphere are very low (e.g., about 40 K, −233 °C for helium). Thus, helium and hydrogen warm when expanded at constant enthalpy at typical room temperatures. On the other hand, nitrogen and oxygen, the two most abundant gases in air, have inversion temperatures of 621 K (348 °C) and 764 K (491 °C) respectively: these gases can be cooled from room temperature by the Joule–Thomson effect. For an ideal gas, μJT is always equal to zero: ideal gases neither warm nor cool upon being expanded at constant enthalpy. Theoretical models For a van der Waals gas, in the low-pressure limit the coefficient is μJT ≈ (1/Cp)(2a/(RT) − b), with inversion temperature Tinv = 2a/(Rb). For the Dieterici gas, the reduced inversion temperature and the relation between reduced pressure and reduced inversion temperature can likewise be obtained in closed form. This is plotted on the right. The critical point falls inside the region where the gas cools on expansion. The outside region is where the gas warms on expansion. Applications In practice, the Joule–Thomson effect is achieved by allowing the gas to expand through a throttling device (usually a valve) which must be very well insulated to prevent any heat transfer to or from the gas. No external work is extracted from the gas during the expansion (the gas must not be expanded through a turbine, for example). The cooling produced in the Joule–Thomson expansion makes it a valuable tool in refrigeration. The effect is applied in the Linde technique as a standard process in the petrochemical industry, where the cooling effect is used to liquefy gases, and in many cryogenic applications (e.g. 
for the production of liquid oxygen, nitrogen, and argon). A gas must be below its inversion temperature to be liquefied by the Linde cycle. For this reason, simple Linde cycle liquefiers, starting from ambient temperature, cannot be used to liquefy helium, hydrogen, or neon. They must first be cooled to their inversion temperatures, which are −233 °C (helium), −71 °C (hydrogen), and −42 °C (neon). Proof that the specific enthalpy remains constant In thermodynamics so-called "specific" quantities are quantities per unit mass (kg) and are denoted by lower-case characters. So h, u, and v are the specific enthalpy, specific internal energy, and specific volume (volume per unit mass, or reciprocal density), respectively. In a Joule–Thomson process the specific enthalpy h remains constant. To prove this, the first step is to compute the net work done when a mass m of the gas moves through the plug. This amount of gas has a volume of V1 = m v1 in the region at pressure P1 (region 1) and a volume V2 = m v2 when in the region at pressure P2 (region 2). Then in region 1, the "flow work" done on the amount of gas by the rest of the gas is: W1 = m P1v1. In region 2, the work done by the amount of gas on the rest of the gas is: W2 = m P2v2. So, the total work done on the mass m of gas is W = W1 − W2 = m(P1v1 − P2v2). The change in internal energy minus the total work done on the amount of gas is, by the first law of thermodynamics, the total heat supplied to the amount of gas. In the Joule–Thomson process, the gas is insulated, so no heat is absorbed. This means that m u2 − m u1 = W1 − W2 = m(P1v1 − P2v2), where u1 and u2 denote the specific internal energies of the gas in regions 1 and 2, respectively. Using the definition of the specific enthalpy h = u + Pv, the above equation implies that h1 = h2, where h1 and h2 denote the specific enthalpies of the amount of gas in regions 1 and 2, respectively. Throttling in the T-s diagram A very convenient way to get a quantitative understanding of the throttling process is by using diagrams such as h-T diagrams, h-P diagrams, and others. Commonly used are the so-called T-s diagrams. Figure 2 shows the T-s diagram of nitrogen as an example. Various points are indicated in the figure. As shown before, throttling keeps h constant. For example, throttling from 200 bar and 300 K (point a in fig. 2) follows the isenthalp (line of constant specific enthalpy) of 430 kJ/kg. At 1 bar it results in point b, which has a temperature of 270 K. So throttling from 200 bar to 1 bar gives a cooling from room temperature to below the freezing point of water. Throttling from 200 bar and an initial temperature of 133 K (point c in fig. 2) to 1 bar results in point d, which is in the two-phase region of nitrogen at a temperature of 77.2 K. Since the enthalpy is an extensive parameter, the enthalpy in d (hd) is equal to the enthalpy in e (he) multiplied by the mass fraction of the liquid in d (xd) plus the enthalpy in f (hf) multiplied by the mass fraction of the gas in d (1 − xd). So hd = xd he + (1 − xd) hf. With numbers: 150 = xd × 28 + (1 − xd) × 230, so xd is about 0.40. This means that the mass fraction of the liquid in the liquid–gas mixture leaving the throttling valve is 40%. Derivation of the Joule–Thomson coefficient It is difficult to think physically about what the Joule–Thomson coefficient, μJT, represents. Also, modern determinations of μJT do not use the original method used by Joule and Thomson, but instead measure a different, closely related quantity. Thus, it is useful to derive relationships between μJT and other, more conveniently measured quantities, as described below. 
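Before the derivation below, a quick arithmetic check of the two-phase throttling example above. The enthalpy values are the ones quoted in that example; nothing else is assumed.

```python
# Lever rule for the liquid fraction after throttling into the two-phase region:
#   h_d = x_d * h_e + (1 - x_d) * h_f
# where h_e is the saturated-liquid enthalpy and h_f the saturated-vapour enthalpy.

h_d = 150.0   # kJ/kg, enthalpy after throttling (point d in the example)
h_e = 28.0    # kJ/kg, saturated liquid (point e)
h_f = 230.0   # kJ/kg, saturated vapour (point f)

x_d = (h_f - h_d) / (h_f - h_e)   # solve the lever rule for x_d
print(f"liquid mass fraction x_d = {x_d:.2f}")   # ~0.40, matching the text
```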
The first step in obtaining these results is to note that the Joule–Thomson coefficient involves the three variables T, P, and H. A useful result is immediately obtained by applying the cyclic rule; in terms of these three variables that rule may be written (∂T/∂P)_H (∂H/∂T)_P (∂P/∂H)_T = −1. Each of the three partial derivatives in this expression has a specific meaning. The first is μJT = (∂T/∂P)_H, the second is the constant pressure heat capacity, Cp, defined by Cp = (∂H/∂T)_P, and the third is the inverse of the isothermal Joule–Thomson coefficient, μT, defined by μT = (∂H/∂P)_T. This last quantity is more easily measured than μJT. Thus, the expression from the cyclic rule becomes μJT = −μT/Cp. This equation can be used to obtain Joule–Thomson coefficients from the more easily measured isothermal Joule–Thomson coefficient. It is used in the following to obtain a mathematical expression for the Joule–Thomson coefficient in terms of the volumetric properties of a fluid. To proceed further, the starting point is the fundamental equation of thermodynamics in terms of enthalpy; this is dH = T dS + V dP. Now "dividing through" by dP, while holding temperature constant, yields (∂H/∂P)_T = T (∂S/∂P)_T + V. The partial derivative on the left is the isothermal Joule–Thomson coefficient, μT, and the one on the right can be expressed in terms of the coefficient of thermal expansion via a Maxwell relation. The appropriate relation is (∂S/∂P)_T = −(∂V/∂T)_P = −Vα, where α is the cubic coefficient of thermal expansion. Replacing these two partial derivatives yields μT = −TVα + V. This expression can now replace μT in the earlier equation for μJT to obtain: μJT = (V/Cp)(αT − 1). This provides an expression for the Joule–Thomson coefficient in terms of the commonly available properties heat capacity, molar volume, and thermal expansion coefficient. It shows that the Joule–Thomson inversion temperature, at which μJT is zero, occurs when the coefficient of thermal expansion is equal to the inverse of the temperature. Since this is true at all temperatures for ideal gases (see expansion in gases), the Joule–Thomson coefficient of an ideal gas is zero at all temperatures. Joule's second law It is easy to verify that for an ideal gas defined by suitable microscopic postulates, αT = 1, so the temperature change of such an ideal gas in a Joule–Thomson expansion is zero. For such an ideal gas, this theoretical result implies that: The internal energy of a fixed mass of an ideal gas depends only on its temperature (not pressure or volume). This rule was originally found by Joule experimentally for real gases and is known as Joule's second law. More refined experiments found important deviations from it. See also Critical point (thermodynamics) Enthalpy and Isenthalpic process Ideal gas Liquefaction of gases MIRI (Mid-Infrared Instrument), a J–T loop is used on one of the instruments of the James Webb Space Telescope Refrigeration Reversible process (thermodynamics) References Bibliography External links Joule–Thomson effect module, University of Notre Dame Thermodynamics Cryogenics Engineering thermodynamics Gases Heating, ventilation, and air conditioning Thomson effect William Thomson, 1st Baron Kelvin
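A small sketch of the closing result μJT = (V/Cp)(αT − 1). The van der Waals low-pressure approximation used here is a standard textbook estimate, not the article's derivation, and the N2 constants and heat capacity are assumed literature values.

```python
# Joule-Thomson coefficient: zero for an ideal gas (alpha = 1/T), and approximately
# mu_JT = (2a/(R*T) - b) / Cp for a van der Waals gas at low pressure, with
# inversion temperature T_inv = 2a/(R*b).

R = 8.314     # J/(mol K)
A = 0.1370    # Pa m^6 mol^-2, van der Waals 'a' for N2 (assumed)
B = 3.87e-5   # m^3 mol^-1,    van der Waals 'b' for N2 (assumed)
CP = 29.1     # J/(mol K), molar heat capacity of N2 near room temperature (assumed)

def mu_jt_ideal(T: float) -> float:
    """Ideal gas: alpha = 1/T, so (V/Cp)*(alpha*T - 1) vanishes identically."""
    return 0.0

def mu_jt_vdw(T: float) -> float:
    """Low-pressure van der Waals estimate of the Joule-Thomson coefficient, in K/Pa."""
    return (2 * A / (R * T) - B) / CP

T = 300.0
print(f"mu_JT (ideal gas)      = {mu_jt_ideal(T):.2e} K/Pa")
print(f"mu_JT (vdW N2, 300 K)  = {mu_jt_vdw(T):.2e} K/Pa  (~{mu_jt_vdw(T) * 1e5:.2f} K/bar)")
print(f"vdW inversion estimate = {2 * A / (R * B):.0f} K")
# The vdW inversion estimate (~850 K) overshoots the measured maximum inversion
# temperature of N2 (621 K quoted above); the model is only qualitative.
```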
0.771298
0.996883
0.768893
Curved spacetime
In physics, curved spacetime is the mathematical model in which, with Einstein's theory of general relativity, gravity naturally arises, as opposed to being described as a fundamental force in Newton's static Euclidean reference frame. Objects move along geodesics—curved paths determined by the local geometry of spacetime—rather than being influenced directly by distant bodies. This framework led to two fundamental principles: coordinate independence, which asserts that the laws of physics are the same regardless of the coordinate system used, and the equivalence principle, which states that the effects of gravity are indistinguishable from those of acceleration in sufficiently small regions of space. These principles laid the groundwork for a deeper understanding of gravity through the geometry of spacetime, as formalized in Einstein's field equations. Introduction Newton's theories assumed that motion takes place against the backdrop of a rigid Euclidean reference frame that extends throughout all space and all time. Gravity is mediated by a mysterious force, acting instantaneously across a distance, whose actions are independent of the intervening space. In contrast, Einstein denied that there is any background Euclidean reference frame that extends throughout space. Nor is there any such thing as a force of gravitation, only the structure of spacetime itself. In spacetime terms, the path of a satellite orbiting the Earth is not dictated by the distant influences of the Earth, Moon and Sun. Instead, the satellite moves through space only in response to local conditions. Since spacetime is everywhere locally flat when considered on a sufficiently small scale, the satellite is always following a straight line in its local inertial frame. We say that the satellite always follows along the path of a geodesic. No evidence of gravitation can be discovered following alongside the motions of a single particle. In any analysis of spacetime, evidence of gravitation requires that one observe the relative accelerations of two bodies or two separated particles. In Fig. 5-1, two separated particles, free-falling in the gravitational field of the Earth, exhibit tidal accelerations due to local inhomogeneities in the gravitational field such that each particle follows a different path through spacetime. The tidal accelerations that these particles exhibit with respect to each other do not require forces for their explanation. Rather, Einstein described them in terms of the geometry of spacetime, i.e. the curvature of spacetime. These tidal accelerations are strictly local. It is the cumulative total effect of many local manifestations of curvature that result in the appearance of a gravitational force acting at a long range from Earth. Different observers viewing the scenarios presented in this figure interpret the scenarios differently depending on their knowledge of the situation. (i) A first observer, at the center of mass of particles 2 and 3 but unaware of the large mass 1, concludes that a force of repulsion exists between the particles in scenario A while a force of attraction exists between the particles in scenario B. (ii) A second observer, aware of the large mass 1, smiles at the first reporter's naiveté. This second observer knows that in reality, the apparent forces between particles 2 and 3 really represent tidal effects resulting from their differential attraction by mass 1. 
(iii) A third observer, trained in general relativity, knows that there are, in fact, no forces at all acting between the three objects. Rather, all three objects move along geodesics in spacetime. Two central propositions underlie general relativity. The first crucial concept is coordinate independence: The laws of physics cannot depend on what coordinate system one uses. This is a major extension of the principle of relativity from the version used in special relativity, which states that the laws of physics must be the same for every observer moving in non-accelerated (inertial) reference frames. In general relativity, to use Einstein's own (translated) words, "the laws of physics must be of such a nature that they apply to systems of reference in any kind of motion." This leads to an immediate issue: In accelerated frames, one feels forces that seemingly would enable one to assess one's state of acceleration in an absolute sense. Einstein resolved this problem through the principle of equivalence. The equivalence principle states that in any sufficiently small region of space, the effects of gravitation are the same as those from acceleration. In Fig. 5-2, person A is in a spaceship, far from any massive objects, that undergoes a uniform acceleration of g. Person B is in a box resting on Earth. Provided that the spaceship is sufficiently small so that tidal effects are non-measurable (given the sensitivity of current gravity measurement instrumentation, A and B presumably should be Lilliputians), there are no experiments that A and B can perform which will enable them to tell which setting they are in. An alternative expression of the equivalence principle is to note that in Newton's universal law of gravitation, F = mgg, and in Newton's second law, F = mia, there is no a priori reason why the gravitational mass mg should be equal to the inertial mass mi. The equivalence principle states that these two masses are identical. To go from the elementary description above of curved spacetime to a complete description of gravitation requires tensor calculus and differential geometry, topics both requiring considerable study. Without these mathematical tools, it is possible to write about general relativity, but it is not possible to demonstrate any non-trivial derivations. Curvature of time In the discussion of special relativity, forces played no more than a background role. Special relativity assumes the ability to define inertial frames that fill all of spacetime, all of whose clocks run at the same rate as the clock at the origin. Is this really possible? In a nonuniform gravitational field, experiment dictates that the answer is no. Gravitational fields make it impossible to construct a global inertial frame. In small enough regions of spacetime, local inertial frames are still possible. General relativity involves the systematic stitching together of these local frames into a more general picture of spacetime. Years before publication of the general theory in 1916, Einstein used the equivalence principle to predict the existence of gravitational redshift in the following thought experiment: (i) Assume that a tower of height h (Fig. 5-3) has been constructed. (ii) Drop a particle of rest mass m from the top of the tower. It falls freely with acceleration g, reaching the ground with velocity v = √(2gh), so that its total energy E, as measured by an observer on the ground, is E = mc² + ½mv² = mc² + mgh. (iii) A mass-energy converter transforms the total energy of the particle into a single high energy photon, which it directs upward. 
(iv) At the top of the tower, an energy-mass converter transforms the energy E′ of the photon, as received at the top, back into a particle of rest mass m. It must be that E′ = mc², since otherwise one would be able to construct a perpetual motion device. We therefore predict that E′/E = mc²/(mc² + mgh) = 1/(1 + gh/c²) < 1, so that a photon climbing in Earth's gravitational field loses energy and is redshifted. Early attempts to measure this redshift through astronomical observations were somewhat inconclusive, but definitive laboratory observations were performed by Pound & Rebka (1959) and later by Pound & Snider (1964). Light has an associated frequency, and this frequency may be used to drive the workings of a clock. The gravitational redshift leads to an important conclusion about time itself: Gravity makes time run slower. Suppose we build two identical clocks whose rates are controlled by some stable atomic transition. Place one clock on top of the tower, while the other clock remains on the ground. An experimenter on top of the tower observes that signals from the ground clock are lower in frequency than those of the clock next to her on the tower. Light going up the tower is just a wave, and it is impossible for wave crests to disappear on the way up. Exactly as many oscillations of light arrive at the top of the tower as were emitted at the bottom. The experimenter concludes that the ground clock is running slow, and can confirm this by bringing the tower clock down to compare side by side with the ground clock. For a 1 km tower, the discrepancy would amount to about 9.4 nanoseconds per day, easily measurable with modern instrumentation. Clocks in a gravitational field do not all run at the same rate. Experiments such as the Pound–Rebka experiment have firmly established curvature of the time component of spacetime. The Pound–Rebka experiment says nothing about curvature of the space component of spacetime. But the theoretical arguments predicting gravitational time dilation do not depend on the details of general relativity at all. Any theory of gravity will predict gravitational time dilation if it respects the principle of equivalence. This includes Newtonian gravitation. A standard demonstration in general relativity is to show how, in the "Newtonian limit" (i.e. the particles are moving slowly, the gravitational field is weak, and the field is static), curvature of time alone is sufficient to derive Newton's law of gravity. Newtonian gravitation is a theory of curved time. General relativity is a theory of curved time and curved space. Given G as the gravitational constant, M as the mass of a Newtonian star, and orbiting bodies of insignificant mass at distance r from the star, the spacetime interval for Newtonian gravitation is one for which only the time coefficient is variable: Δs² = (1 − 2GM/(c²r)) (cΔt)² − (Δx² + Δy² + Δz²). Curvature of space The coefficient (1 − 2GM/(c²r)) in front of (cΔt)² describes the curvature of time in Newtonian gravitation, and this curvature completely accounts for all Newtonian gravitational effects. As expected, this correction factor is directly proportional to G and M, and because of the r in the denominator, the correction factor increases as one approaches the gravitating body, meaning that time is curved. But general relativity is a theory of curved space and curved time, so if there are terms modifying the spatial components of the spacetime interval presented above, should not their effects be seen on, say, planetary and satellite orbits due to curvature correction factors applied to the spatial terms? The answer is that they are seen, but the effects are tiny. 
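A numerical check of the tower-clock figure quoted above, using the weak-field rate-shift formula gh/c² (an assumption of this sketch, standard in the literature). The 22.5 m height of the Harvard tower used by Pound and Rebka is likewise an assumed literature value, not taken from the text.

```python
# Fractional gravitational rate shift between clocks separated by height h: gh/c^2.
g = 9.81                 # m/s^2, surface gravity (assumed)
c = 2.998e8              # m/s, speed of light
seconds_per_day = 86400.0

def rate_difference(height_m: float) -> float:
    """Fractional gravitational frequency/rate shift gh/c^2 (weak-field approximation)."""
    return g * height_m / c**2

for h in (1000.0, 22.5):
    frac = rate_difference(h)
    print(f"h = {h:7.1f} m : gh/c^2 = {frac:.2e},  clock offset = {frac * seconds_per_day * 1e9:.4g} ns/day")
# h = 1000 m gives about 9.4 ns/day, matching the figure quoted in the text;
# h = 22.5 m gives the ~2.5e-15 fractional shift the Pound-Rebka setup had to resolve.
```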
The reason is that planetary velocities are extremely small compared to the speed of light, so that for planets and satellites of the solar system, the (cΔt)² term dwarfs the spatial terms. Despite the minuteness of the spatial terms, the first indications that something was wrong with Newtonian gravitation were discovered over a century-and-a-half ago. In 1859, Urbain Le Verrier, in an analysis of available timed observations of transits of Mercury over the Sun's disk from 1697 to 1848, reported that known physics could not explain the orbit of Mercury, unless there possibly existed a planet or asteroid belt within the orbit of Mercury. The perihelion of Mercury's orbit exhibited an excess rate of precession over that which could be explained by the tugs of the other planets. The ability to detect and accurately measure the minute value of this anomalous precession (only 43 arc seconds per tropical century) is testimony to the sophistication of 19th century astrometry. As the astronomer who had earlier discovered the existence of Neptune "at the tip of his pen" by analyzing irregularities in the orbit of Uranus, Le Verrier's announcement triggered a two-decades long period of "Vulcan-mania", as professional and amateur astronomers alike hunted for the hypothetical new planet. This search included several false sightings of Vulcan. It was ultimately established that no such planet or asteroid belt existed. In 1916, Einstein was to show that this anomalous precession of Mercury is explained by the spatial terms in the curvature of spacetime. Curvature in the temporal term, being simply an expression of Newtonian gravitation, has no part in explaining this anomalous precession. The success of his calculation was a powerful indication to Einstein's peers that the general theory of relativity could be correct. The most spectacular of Einstein's predictions was his calculation that the curvature terms in the spatial components of the spacetime interval could be measured in the bending of light around a massive body. Light has a slope of ±1 on a spacetime diagram. Its movement in space is equal to its movement in time. For the weak field expression of the invariant interval, Einstein calculated an exactly equal but opposite sign curvature in its spatial components. In Newton's gravitation, the coefficient in front of (cΔt)² predicts bending of light around a star. In general relativity, the coefficient in front of the spatial terms predicts a doubling of the total bending. The story of the 1919 Eddington eclipse expedition and Einstein's rise to fame is well told elsewhere. Sources of spacetime curvature In Newton's theory of gravitation, the only source of gravitational force is mass. In contrast, general relativity identifies several sources of spacetime curvature in addition to mass. In the Einstein field equations, the sources of gravity are presented on the right-hand side in the stress–energy tensor. Fig. 5-5 classifies the various sources of gravity in the stress–energy tensor: T^00 (red): the total mass–energy density, including any contributions to the potential energy from forces between the particles, as well as kinetic energy from random thermal motions. T^0i and T^i0 (orange): these are momentum density terms. Even if there is no bulk motion, energy may be transmitted by heat conduction, and the conducted energy will carry momentum. T^ij are the rates of flow of the i component of momentum per unit area in the j direction. 
Even if there is no bulk motion, random thermal motions of the particles will give rise to momentum flow, so the i = j terms (green) represent isotropic pressure, and the i ≠ j terms (blue) represent shear stresses. One important conclusion to be derived from the equations is that, colloquially speaking, gravity itself creates gravity. Energy has mass. Even in Newtonian gravity, the gravitational field is associated with an energy, called the gravitational potential energy. In general relativity, the energy of the gravitational field feeds back into creation of the gravitational field. This makes the equations nonlinear and hard to solve in anything other than weak field cases. Numerical relativity is a branch of general relativity using numerical methods to solve and analyze problems, often employing supercomputers to study black holes, gravitational waves, neutron stars and other phenomena in the strong field regime. Energy-momentum In special relativity, mass-energy is closely connected to momentum. Just as space and time are different aspects of a more comprehensive entity called spacetime, mass–energy and momentum are merely different aspects of a unified, four-dimensional quantity called four-momentum. In consequence, if mass–energy is a source of gravity, momentum must also be a source. The inclusion of momentum as a source of gravity leads to the prediction that moving or rotating masses can generate fields analogous to the magnetic fields generated by moving charges, a phenomenon known as gravitomagnetism. It is well known that the force of magnetism can be deduced by applying the rules of special relativity to moving charges. (An eloquent demonstration of this was presented by Feynman in volume II of his Lectures on Physics, available online.) Analogous logic can be used to demonstrate the origin of gravitomagnetism. In Fig. 5-7a, two parallel, infinitely long streams of massive particles have equal and opposite velocities −v and +v relative to a test particle at rest and centered between the two. Because of the symmetry of the setup, the net force on the central particle is zero. Assume v ≪ c so that velocities are simply additive. Fig. 5-7b shows exactly the same setup, but in the frame of the upper stream. The test particle has a velocity of +v, and the bottom stream has a velocity of +2v. Since the physical situation has not changed, only the frame in which things are observed, the test particle should not be attracted towards either stream. It is not at all clear that the forces exerted on the test particle are equal. (1) Since the bottom stream is moving faster than the top, each particle in the bottom stream has a larger mass energy than a particle in the top. (2) Because of Lorentz contraction, there are more particles per unit length in the bottom stream than in the top stream. (3) Another contribution to the active gravitational mass of the bottom stream comes from an additional pressure term which, at this point, we do not have sufficient background to discuss. All of these effects together would seemingly demand that the test particle be drawn towards the bottom stream. The test particle is not drawn to the bottom stream because of a velocity-dependent force that serves to repel a particle that is moving in the same direction as the bottom stream. This velocity-dependent gravitational effect is gravitomagnetism. Matter in motion through a gravitomagnetic field is hence subject to so-called frame-dragging effects analogous to electromagnetic induction. 
It has been proposed that such gravitomagnetic forces underlie the generation of the relativistic jets (Fig. 5-8) ejected by some rotating supermassive black holes. Pressure and stress Quantities that are directly related to energy and momentum should be sources of gravity as well, namely internal pressure and stress. Taken together, mass–energy, momentum, pressure and stress all serve as sources of gravity: collectively, they are what tells spacetime how to curve. General relativity predicts that pressure acts as a gravitational source with exactly the same strength as mass–energy density. The inclusion of pressure as a source of gravity leads to dramatic differences between the predictions of general relativity versus those of Newtonian gravitation. For example, the pressure term sets a maximum limit to the mass of a neutron star. The more massive a neutron star, the more pressure is required to support its weight against gravity. The increased pressure, however, adds to the gravity acting on the star's mass. Above a certain mass determined by the Tolman–Oppenheimer–Volkoff limit, the process becomes runaway and the neutron star collapses to a black hole. The stress terms become highly significant when performing calculations such as hydrodynamic simulations of core-collapse supernovae. These predictions for the roles of pressure, momentum and stress as sources of spacetime curvature are elegant and play an important role in theory. In regard to pressure, the early universe was radiation dominated, and it is highly unlikely that any of the relevant cosmological data (e.g. nucleosynthesis abundances, etc.) could be reproduced if pressure did not contribute to gravity, or if it did not have the same strength as a source of gravity as mass–energy. Likewise, the mathematical consistency of the Einstein field equations would be broken if the stress terms did not contribute as a source of gravity. Experimental test of the sources of spacetime curvature Definitions: Active, passive, and inertial mass Bondi distinguishes between different possible types of mass: (1) active gravitational mass is the mass which acts as the source of a gravitational field; (2) passive gravitational mass is the mass which reacts to a gravitational field; (3) inertial mass is the mass which reacts to acceleration. This inertial mass is the same as the inertial mass mi in the discussion of the equivalence principle. In Newtonian theory, the third law of action and reaction dictates that the active and passive gravitational masses must be the same. On the other hand, whether the passive gravitational and inertial masses are equal is an empirical result. In general relativity, the equality of passive gravitational and inertial mass is dictated by the equivalence principle. There is no "action and reaction" principle dictating any necessary relationship between active and passive gravitational mass. Pressure as a gravitational source The classic experiment to measure the strength of a gravitational source (i.e. its active mass) was first conducted in 1797 by Henry Cavendish (Fig. 5-9a). Two small but dense balls are suspended on a fine wire, making a torsion balance. Bringing two large test masses close to the balls introduces a detectable torque. Given the dimensions of the apparatus and the measurable spring constant of the torsion wire, the gravitational constant G can be determined. To study pressure effects by compressing the test masses is hopeless, because attainable laboratory pressures are insignificant in comparison with the mass–energy of a metal ball. However, the repulsive electromagnetic pressures resulting from protons being tightly squeezed inside atomic nuclei are typically on the order of 10²⁸ atm ≈ 10³³ Pa ≈ 10³³ kg·s⁻²·m⁻¹. 
This amounts to about 1% of the nuclear mass density of approximately 10¹⁸ kg/m³ (after factoring in c² ≈ 9×10¹⁶ m²·s⁻²). If pressure does not act as a gravitational source, then the ratio of active to passive mass should be lower for nuclei with higher atomic number Z, in which the electrostatic pressures are higher. Kreuzer (1968) did a Cavendish experiment using a Teflon mass suspended in a mixture of the liquids trichloroethylene and dibromoethane having the same buoyant density as the Teflon (Fig. 5-9b). Fluorine has atomic number Z = 9, while bromine has Z = 35. Kreuzer found that repositioning the Teflon mass caused no differential deflection of the torsion bar, hence establishing active mass and passive mass to be equivalent to a precision of 5×10⁻⁵. Although Kreuzer originally considered this experiment merely to be a test of the ratio of active mass to passive mass, Clifford Will (1976) reinterpreted the experiment as a fundamental test of the coupling of sources to gravitational fields. In 1986, Bartlett and Van Buren noted that lunar laser ranging had detected a 2 km offset between the moon's center of figure and its center of mass. This indicates an asymmetry in the distribution of Fe (abundant in the Moon's core) and Al (abundant in its crust and mantle). If pressure did not contribute equally to spacetime curvature as does mass–energy, the moon would not be in the orbit predicted by classical mechanics. They used their measurements to tighten the limits on any discrepancies between active and passive mass to about 10⁻¹². With decades of additional lunar laser ranging data, Singh et al. (2023) reported improvement on these limits by a factor of about 100. Gravitomagnetism The existence of gravitomagnetism was proven by Gravity Probe B, a satellite-based mission which launched on 20 April 2004. The spaceflight phase lasted until 2005. The mission aim was to measure spacetime curvature near Earth, with particular emphasis on gravitomagnetism. Initial results confirmed the relatively large geodetic effect (which is due to simple spacetime curvature, and is also known as de Sitter precession) to an accuracy of about 1%. The much smaller frame-dragging effect (which is due to gravitomagnetism, and is also known as Lense–Thirring precession) was difficult to measure because of unexpected charge effects causing variable drift in the gyroscopes. Nevertheless, by August 2008, the frame-dragging effect had been confirmed to within 15% of the expected result, while the geodetic effect was confirmed to better than 0.5%. Subsequent measurements of frame dragging by laser-ranging observations of the LARES and LAGEOS satellites have improved on the measurement, with results (as of 2016) demonstrating the effect to within 5% of its theoretical value, although there has been some disagreement on the accuracy of this result. Another effort, the Gyroscopes in General Relativity (GINGER) experiment, seeks to use three 6 m ring lasers mounted at right angles to each other 1400 m below the Earth's surface to measure this effect. The first ten years of experience with a prototype ring laser gyroscope array, GINGERINO, established that the full scale experiment should be able to measure gravitomagnetism due to the Earth's rotation to within a 0.1% level or even better. See also Spacetime topology Notes References Concepts in physics Theoretical physics Theory of relativity Time Time in physics Conceptual models
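An order-of-magnitude check of the geodetic (de Sitter) precession mentioned above. The circular-orbit formula Ω ≈ 3GMv/(2c²r²) and the approximate Gravity Probe B altitude are assumptions of this sketch, not taken from the text; it is a rough estimate rather than the mission analysis.

```python
import math

# Geodetic precession rate for a satellite gyroscope on a circular orbit (magnitude).
G = 6.674e-11       # m^3 kg^-1 s^-2
M = 5.972e24        # kg, Earth mass
c = 2.998e8         # m/s
R_earth = 6.371e6   # m
altitude = 642e3    # m, approximate Gravity Probe B orbit (assumed)

r = R_earth + altitude
v = math.sqrt(G * M / r)                      # circular orbital speed
omega = 3 * G * M * v / (2 * c**2 * r**2)     # rad/s, geodetic precession rate

arcsec_per_year = omega * 3.156e7 * (180 / math.pi) * 3600
print(f"orbital speed       = {v:.0f} m/s")
print(f"geodetic precession = {arcsec_per_year:.1f} arcsec/yr")
# Comes out near 6.6 arcsec/yr, the scale of the geodetic effect that the mission
# confirmed to better than 0.5%.
```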
0.790829
0.972256
0.768888
Resource
A resource is any material available in our environment that is technologically accessible, economically feasible and culturally sustainable and helps us to satisfy our needs and wants. Resources can broadly be classified according to their availability as renewable or non-renewable resources. An item may become a resource with the development of technology. The benefits of resource utilization may include increased wealth, proper functioning of a system, or enhanced well-being. From a human perspective, a resource is anything that satisfies human needs and wants. The concept of resources has been developed across many established areas of work, for example in economics, biology and ecology, computer science, management, and human resources – linked to the concepts of competition, sustainability, conservation, and stewardship. In application within human society, commercial or non-commercial factors require resource allocation through resource management. The concept of resources can also be tied to the direction of leadership over resources; this may include human resources issues, for which leaders are responsible in managing, supporting, or directing those matters and the resulting necessary actions. Examples include professional groups, innovative leaders, and technical experts in archiving, academic management, association management, business management, healthcare management, military management, public administration, spiritual leadership and social networking administration. Definition of size asymmetry Resource competition can vary from completely symmetric (all individuals receive the same amount of resources, irrespective of their size, known also as scramble competition) to perfectly size symmetric (all individuals exploit the same amount of resource per unit biomass) to absolutely size asymmetric (the largest individuals exploit all the available resource). Economic versus biological There are three fundamental differences between the economic and ecological views: 1) the economic resource definition is human-centered (anthropocentric) and the biological or ecological resource definition is nature-centered (biocentric or ecocentric); 2) the economic view includes desire along with necessity, whereas the biological view is about basic biological needs; and 3) economic systems are based on markets of currency exchanged for goods and services, whereas biological systems are based on natural processes of growth, maintenance, and reproduction. Computer resources A computer resource is any physical or virtual component of limited availability within a computer or information management system. Computer resources include means for input, processing, output, communication, and storage. Natural Natural resources are derived from the environment. Many natural resources are essential for human survival, while others are used to satisfy human desire. Conservation is the management of natural resources with the goal of sustainability. Natural resources may be further classified in different ways. Resources can be categorized based on origin: Abiotic resources comprise non-living things (e.g., land, water, air, and minerals such as gold, iron, copper, silver). Biotic resources are obtained from the biosphere. Forests and their products, animals, birds and their products, fish and other marine organisms are important examples. 
Minerals such as coal and petroleum are sometimes included in this category because they were formed from fossilized organic matter, over long periods. Natural resources are also categorized based on the stage of development: Potential resources are known to exist and may be used in the future. For example, petroleum may exist in many parts of India and Kuwait that have sedimentary rocks, but until the time it is actually drilled out and put into use, it remains a potential resource. Actual resources are those, that have been surveyed, their quantity and quality determined, and are being used in present times. For example, petroleum and natural gas are actively being obtained from the Mumbai High Fields. The development of an actual resource, such as wood processing depends on the technology available and the cost involved. That part of the actual resource that can be developed profitably with the available technology is known as a reserve resource, while that part that can not be developed profitably due to a lack of technology is known as a stock resource. Natural resources can be categorized based on renewability: Non-renewable resources are formed over very long geological periods. Minerals and fossils are included in this category. Since their formation rate is extremely slow, they cannot be replenished, once they are depleted. Even though metals can be recycled and reused, whereas petroleum and gas cannot, they are still considered non-renewable resources. Renewable resources, such as forests and fisheries, can be replenished or reproduced relatively quickly. The highest rate at which a resource can be used sustainably is the sustainable yield. Some resources, such as sunlight, air, and wind, are called perpetual resources because they are available continuously, though at a limited rate. Human consumption does not affect their quantity. Many renewable resources can be depleted by human use, but may also be replenished, thus maintaining a flow. Some of these, such as crops, take a short time for renewal; others, such as water, take a comparatively longer time, while others, such as forests, need even longer periods. Depending upon the speed and quantity of consumption, overconsumption can lead to depletion or the total and everlasting destruction of a resource. Important examples are agricultural areas, fish and other animals, forests, healthy water and soil, cultivated and natural landscapes. Such conditionally renewable resources are sometimes classified as a third kind of resource or as a subtype of renewable resources. Conditionally renewable resources are presently subject to excess human consumption and the only sustainable long-term use of such resources is within the so-called zero ecological footprint, where humans use less than the Earth's ecological capacity to regenerate. Natural resources are also categorized based on distribution: Ubiquitous resources are found everywhere (for example, air, light, and water). Localized resources are found only in certain parts of the world (for example metal ores and geothermal power). Actual vs. potential natural resources are distinguished as follows: Actual resources are those resources whose location and quantity are known and we have the technology to exploit and use them. Potential resources are those of which we have insufficient knowledge or do not have the technology to exploit them at present. Based on ownership, resources can be classified as individual, community, national, and international. 
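Purely illustrative: the classification scheme described above encoded as a small data structure. The category names mirror the text; the example instances and their assignments are assumptions of this sketch.

```python
from dataclasses import dataclass
from enum import Enum

class Origin(Enum):
    BIOTIC = "biotic"
    ABIOTIC = "abiotic"

class Stage(Enum):
    POTENTIAL = "potential"
    ACTUAL = "actual"
    RESERVE = "reserve"
    STOCK = "stock"

class Renewability(Enum):
    RENEWABLE = "renewable"
    CONDITIONALLY_RENEWABLE = "conditionally renewable"
    NON_RENEWABLE = "non-renewable"
    PERPETUAL = "perpetual"

class Distribution(Enum):
    UBIQUITOUS = "ubiquitous"
    LOCALIZED = "localized"

class Ownership(Enum):
    INDIVIDUAL = "individual"
    COMMUNITY = "community"
    NATIONAL = "national"
    INTERNATIONAL = "international"

@dataclass
class NaturalResource:
    name: str
    origin: Origin
    stage: Stage
    renewability: Renewability
    distribution: Distribution
    ownership: Ownership

examples = [
    NaturalResource("petroleum (Mumbai High)", Origin.ABIOTIC, Stage.ACTUAL,
                    Renewability.NON_RENEWABLE, Distribution.LOCALIZED, Ownership.NATIONAL),
    NaturalResource("sunlight", Origin.ABIOTIC, Stage.ACTUAL,
                    Renewability.PERPETUAL, Distribution.UBIQUITOUS, Ownership.INTERNATIONAL),
    NaturalResource("forest", Origin.BIOTIC, Stage.ACTUAL,
                    Renewability.CONDITIONALLY_RENEWABLE, Distribution.LOCALIZED, Ownership.COMMUNITY),
]

for r in examples:
    print(f"{r.name}: {r.origin.value}, {r.renewability.value}, {r.distribution.value}")
```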
Labour or human resources In economics, labor or human resources refers to the human work in the production of goods and rendering of services. Human resources can be defined in terms of skills, energy, talent, abilities, or knowledge. In a project management context, human resources are those employees responsible for undertaking the activities defined in the project plan. Capital or infrastructure In economics, capital goods or capital are "those durable produced goods that are in turn used as productive inputs for further production" of goods and services. A typical example is the machinery used in a factory. At the macroeconomic level, "the nation's capital stock includes buildings, equipment, software, and inventories during a given year." Capitals are the most important economic resource. Tangible versus intangible Whereas, tangible resources such as equipment have an actual physical existence, intangible resources such as corporate images, brands and patents, and other intellectual properties exist in abstraction. Use and sustainable development Typically resources cannot be consumed in their original form, but rather through resource development they must be processed into more usable commodities and usable things. The demand for resources is increasing as economies develop. There are marked differences in resource distribution and associated economic inequality between regions or countries, with developed countries using more natural resources than developing countries. Sustainable development is a pattern of resource use, that aims to meet human needs while preserving the environment. Sustainable development means that we should exploit our resources carefully to meet our present requirement without compromising the ability of future generations to meet their own needs. The practice of the three R's – reduce, reuse, and recycle must be followed to save and extend the availability of resources. Various problems are related to the usage of resources: Environmental degradation Over-consumption Resource curse Resource depletion Tragedy of the commons Various benefits can result from the wise usage of resources: Economic growth Ethical consumerism Prosperity Quality of life Sustainability Wealth See also Natural resource management Resource-based view Waste management References Further reading Elizabeth Kolbert, "Needful Things: The raw materials for the world we've built come at a cost" (largely based on Ed Conway, Material World: The Six Raw Materials That Shape Modern Civilization, Knopf, 2023; Vince Beiser, The World in a Grain; and Chip Colwell, So Much Stuff: How Humans Discovered Tools, Invented Meaning, and Made More of Everything, Chicago), The New Yorker, 30 October 2023, pp. 20–23. Kolbert mainly discusses the importance to modern civilization, and the finite sources of, six raw materials: high-purity quartz (needed to produce silicon chips), sand, iron, copper, petroleum (which Conway lumps together with another fossil fuel, natural gas), and lithium. Kolbert summarizes archeologist Colwell's review of the evolution of technology, which has ended up giving the Global North a superabundance of "stuff," at an unsustainable cost to the world's environment and reserves of raw materials. External links Resource economics Ecology
0.771257
0.996928
0.768887
Rayleigh scattering
Rayleigh scattering is the scattering or deflection of light, or other electromagnetic radiation, by particles with a size much smaller than the wavelength of the radiation. For light frequencies well below the resonance frequency of the scattering medium (normal dispersion regime), the amount of scattering is inversely proportional to the fourth power of the wavelength (e.g., a blue color is scattered much more than a red color as light propagates through air). The phenomenon is named after the 19th-century British physicist Lord Rayleigh (John William Strutt). Rayleigh scattering results from the electric polarizability of the particles. The oscillating electric field of a light wave acts on the charges within a particle, causing them to move at the same frequency. The particle, therefore, becomes a small radiating dipole whose radiation we see as scattered light. The particles may be individual atoms or molecules; it can occur when light travels through transparent solids and liquids, but is most prominently seen in gases. Rayleigh scattering of sunlight in Earth's atmosphere causes diffuse sky radiation, which is the reason for the blue color of the daytime and twilight sky, as well as the yellowish to reddish hue of the low Sun. Sunlight is also subject to Raman scattering, which changes the rotational state of the molecules and gives rise to polarization effects. Scattering by particles with a size comparable to, or larger than, the wavelength of the light is typically treated by the Mie theory, the discrete dipole approximation and other computational techniques. Rayleigh scattering applies to particles that are small with respect to wavelengths of light, and that are optically "soft" (i.e., with a refractive index close to 1). Anomalous diffraction theory applies to optically soft but larger particles. History In 1869, while attempting to determine whether any contaminants remained in the purified air he used for infrared experiments, John Tyndall discovered that bright light scattering off nanoscopic particulates was faintly blue-tinted. He conjectured that a similar scattering of sunlight gave the sky its blue hue, but he could not explain the preference for blue light, nor could atmospheric dust explain the intensity of the sky's color. In 1871, Lord Rayleigh published two papers on the color and polarization of skylight to quantify Tyndall's effect in water droplets in terms of the tiny particulates' volumes and refractive indices. In 1881, with the benefit of James Clerk Maxwell's 1865 proof of the electromagnetic nature of light, he showed that his equations followed from electromagnetism. In 1899, he showed that they applied to individual molecules, with terms containing particulate volumes and refractive indices replaced with terms for molecular polarizability. Small size parameter approximation The size of a scattering particle is often parameterized by the ratio where r is the particle's radius, λ is the wavelength of the light and x is a dimensionless parameter that characterizes the particle's interaction with the incident radiation such that: Objects with x ≫ 1 act as geometric shapes, scattering light according to their projected area. At the intermediate x ≃ 1 of Mie scattering, interference effects develop through phase variations over the object's surface. Rayleigh scattering applies to the case when the scattering particle is very small (x ≪ 1, with a particle size < 1/10 of wavelength) and the whole surface re-radiates with the same phase. 
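A minimal sketch of the dimensionless size parameter x = 2πr/λ described above and the rough regime it implies. The cutoff values and particle radii are illustrative assumptions, not definitive boundaries.

```python
import math

def size_parameter(radius_m: float, wavelength_m: float) -> float:
    """Dimensionless size parameter x = 2*pi*r / lambda."""
    return 2 * math.pi * radius_m / wavelength_m

def regime(x: float) -> str:
    if x < 0.1:                # roughly "particle much smaller than the wavelength"
        return "Rayleigh scattering"
    elif x < 10:
        return "Mie scattering (intermediate)"
    return "geometric-optics limit"

wavelength = 550e-9            # m, green light
for radius in (0.2e-9, 50e-9, 500e-9, 50e-6):   # molecule, nanoparticle, aerosol, drizzle drop
    x = size_parameter(radius, wavelength)
    print(f"r = {radius:.1e} m -> x = {x:.3g}: {regime(x)}")
```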
Because the particles are randomly positioned, the scattered light arrives at a particular point with a random collection of phases; it is incoherent and the resulting intensity is just the sum of the squares of the amplitudes from each particle and therefore proportional to the inverse fourth power of the wavelength and the sixth power of its size. The wavelength dependence is characteristic of dipole scattering and the volume dependence will apply to any scattering mechanism. In detail, the intensity of light scattered by any one of the small spheres of radius r and refractive index n from a beam of unpolarized light of wavelength λ and intensity I0 is given by where R is the distance to the particle and θ is the scattering angle. Averaging this over all angles gives the Rayleigh scattering cross-section of the particles in air: Here n is the refractive index of the spheres that approximate the molecules of the gas; the index of the gas surrounding the spheres is neglected, an approximation that introduces an error of less than 0.05%. The fraction of light scattered by scattering particles over the unit travel length (e.g., meter) is the number of particles per unit volume N times the cross-section. For example, air has a refractive index of 1.0002793 at atmospheric pressure, where there are about molecules per cubic meter, and therefore the major constituent of the atmosphere, nitrogen, has a Rayleigh cross section of at a wavelength of 532 nm (green light). This means that about a fraction 10−5 of the light will be scattered for every meter of travel. The strong wavelength dependence of the scattering (~λ−4) means that shorter (blue) wavelengths are scattered more strongly than longer (red) wavelengths. From molecules The expression above can also be written in terms of individual molecules by expressing the dependence on refractive index in terms of the molecular polarizability α, proportional to the dipole moment induced by the electric field of the light. In this case, the Rayleigh scattering intensity for a single particle is given in CGS-units by and in SI-units by . Effect of fluctuations When the dielectric constant of a certain region of volume is different from the average dielectric constant of the medium , then any incident light will be scattered according to the following equation where represents the variance of the fluctuation in the dielectric constant . Cause of the blue color of the sky The blue color of the sky is a consequence of three factors: the blackbody spectrum of sunlight coming into the Earth's atmosphere, Rayleigh scattering of that light off oxygen and nitrogen molecules, and the response of the human visual system. The strong wavelength dependence of the Rayleigh scattering (~λ−4) means that shorter (blue) wavelengths are scattered more strongly than longer (red) wavelengths. This results in the indirect blue and violet light coming from all regions of the sky. The human eye responds to this wavelength combination as if it were a combination of blue and white light. Some of the scattering can also be from sulfate particles. For years after large Plinian eruptions, the blue cast of the sky is notably brightened by the persistent sulfate load of the stratospheric gases. Some works of the artist J. M. W. Turner may owe their vivid red colours to the eruption of Mount Tambora in his lifetime. 
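A quick numerical illustration of the λ⁻⁴ dependence discussed above: the relative scattering strength of short versus long visible wavelengths. The wavelength choices are illustrative assumptions.

```python
# Relative Rayleigh scattering strength versus red light, using the lambda^-4 law.
violet, blue, green, red = 400e-9, 450e-9, 532e-9, 700e-9   # m

def relative_strength(lam: float, reference: float = red) -> float:
    return (reference / lam) ** 4

for name, lam in [("violet 400 nm", violet), ("blue 450 nm", blue), ("green 532 nm", green)]:
    print(f"{name}: scattered ~{relative_strength(lam):.1f}x more strongly than red 700 nm")
# Blue light at 450 nm comes out roughly 6x stronger than red at 700 nm, which is why
# scattered skylight is dominated by the short-wavelength end of the solar spectrum.
```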
In locations with little light pollution, the moonlit night sky is also blue, because moonlight is reflected sunlight, with a slightly lower color temperature due to the brownish color of the Moon. The moonlit sky is not perceived as blue, however, because at low light levels human vision comes mainly from rod cells that do not produce any color perception (Purkinje effect). Of sound in amorphous solids Rayleigh scattering is also an important mechanism of wave scattering in amorphous solids such as glass, and is responsible for acoustic wave damping and phonon damping in glasses and granular matter at low or not too high temperatures. This is because in glasses at higher temperatures the Rayleigh-type scattering regime is obscured by the anharmonic damping (typically with a ~λ−2 dependence on wavelength), which becomes increasingly more important as the temperature rises. In amorphous solids – glasses – optical fibers Rayleigh scattering is an important component of the scattering of optical signals in optical fibers. Silica fibers are glasses, disordered materials with microscopic variations of density and refractive index. These give rise to energy losses due to the scattered light, with the following coefficient: where n is the refraction index, p is the photoelastic coefficient of the glass, k is the Boltzmann constant, and β is the isothermal compressibility. Tf is a fictive temperature, representing the temperature at which the density fluctuations are "frozen" in the material. In porous materials Rayleigh-type λ−4 scattering can also be exhibited by porous materials. An example is the strong optical scattering by nanoporous materials. The strong contrast in refractive index between pores and solid parts of sintered alumina results in very strong scattering, with light completely changing direction each five micrometers on average. The λ−4-type scattering is caused by the nanoporous structure (a narrow pore size distribution around ~70 nm) obtained by sintering monodispersive alumina powder. See also HRS Computing – scientific simulation software Works References Further reading C.F. Bohren, D. Huffman, Absorption and scattering of light by small particles, John Wiley, New York 1983. Contains a good description of the asymptotic behavior of Mie theory for small size parameter (Rayleigh approximation). Gives a brief history of theories of why the sky is blue leading up to Rayleigh's discovery, and a brief description of Rayleigh scattering. External links HyperPhysics description of Rayleigh scattering Full physical explanation of sky color, in simple terms Scattering, absorption and radiative transfer (optics) Atmospheric optical phenomena Visibility Light Phenomena Scientific phenomena Physical phenomena
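A rough sketch of how the λ⁻⁴ scaling translates into optical-fibre attenuation figures. The 0.14 dB/km Rayleigh contribution at 1550 nm is an assumed, commonly quoted reference value, not taken from the text, and real fibres have additional loss mechanisms.

```python
# Extrapolate a Rayleigh scattering loss coefficient from one wavelength to another
# using the approximate lambda^-4 scaling.

def rayleigh_loss_db_per_km(wavelength_nm: float,
                            ref_loss_db_per_km: float = 0.14,
                            ref_wavelength_nm: float = 1550.0) -> float:
    return ref_loss_db_per_km * (ref_wavelength_nm / wavelength_nm) ** 4

for wl in (850.0, 1310.0, 1550.0):
    print(f"{wl:6.0f} nm : Rayleigh loss ~ {rayleigh_loss_db_per_km(wl):.2f} dB/km")
# 850 nm comes out near 1.5 dB/km and 1310 nm near 0.27 dB/km, illustrating why Rayleigh
# scattering dominates silica-fibre attenuation at shorter wavelengths.
```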
Conservation of mass
In physics and chemistry, the law of conservation of mass or principle of mass conservation states that for any system closed to all transfers of matter and energy, the mass of the system must remain constant over time. The law implies that mass can neither be created nor destroyed, although it may be rearranged in space, or the entities associated with it may be changed in form. For example, in chemical reactions, the mass of the chemical components before the reaction is equal to the mass of the components after the reaction. Thus, during any chemical reaction and low-energy thermodynamic processes in an isolated system, the total mass of the reactants, or starting materials, must be equal to the mass of the products. The concept of mass conservation is widely used in many fields such as chemistry, mechanics, and fluid dynamics. Historically, mass conservation in chemical reactions was primarily demonstrated in the 17th century and finally confirmed by Antoine Lavoisier in the late 18th century. The formulation of this law was of crucial importance in the progress from alchemy to the modern natural science of chemistry. In reality, the conservation of mass only holds approximately and is considered part of a series of assumptions in classical mechanics. The law has to be modified to comply with the laws of quantum mechanics and special relativity under the principle of mass–energy equivalence, which states that energy and mass form one conserved quantity. For very energetic systems the conservation of mass only is shown not to hold, as is the case in nuclear reactions and particle-antiparticle annihilation in particle physics. Mass is also not generally conserved in open systems. Such is the case when any energy or matter is allowed into, or out of, the system. However, unless radioactivity or nuclear reactions are involved, the amount of energy entering or escaping such systems (as heat, mechanical work, or electromagnetic radiation) is usually too small to be measured as a change in the mass of the system. For systems that include large gravitational fields, general relativity has to be taken into account; thus mass–energy conservation becomes a more complex concept, subject to different definitions, and neither mass nor energy is as strictly and simply conserved as is the case in special relativity. Formulation and examples The law of conservation of mass can only be formulated in classical mechanics, in which the energy scales associated with an isolated system are much smaller than , where is the mass of a typical object in the system, measured in the frame of reference where the object is at rest, and is the speed of light. The law can be formulated mathematically in the fields of fluid mechanics and continuum mechanics, where the conservation of mass is usually expressed using the continuity equation, given in differential form as where is the density (mass per unit volume), is the time, is the divergence, and is the flow velocity field. The interpretation of the continuity equation for mass is the following: For a given closed surface in the system, the change, over any time interval, of the mass enclosed by the surface is equal to the mass that traverses the surface during that time interval: positive if the matter goes in and negative if the matter goes out. For the whole isolated system, this condition implies that the total mass , the sum of the masses of all components in the system, does not change over time, i.e. 
where is the differential that defines the integral over the whole volume of the system. The continuity equation for the mass is part of the Euler equations of fluid dynamics. Many other convection–diffusion equations describe the conservation and flow of mass and matter in a given system. In chemistry, the calculation of the amount of reactant and products in a chemical reaction, or stoichiometry, is founded on the principle of conservation of mass. The principle implies that during a chemical reaction the total mass of the reactants is equal to the total mass of the products. For example, in the following reaction where one molecule of methane and two oxygen molecules are converted into one molecule of carbon dioxide and two of water. The number of molecules resulting from the reaction can be derived from the principle of conservation of mass, as initially four hydrogen atoms, 4 oxygen atoms and one carbon atom are present (as well as in the final state); thus the number water molecules produced must be exactly two per molecule of carbon dioxide produced. Many engineering problems are solved by following the mass distribution of a given system over time; this methodology is known as mass balance. History As early as 520 BCE, Jain philosophy, a non-creationist philosophy based on the teachings of Mahavira, stated that the universe and its constituents such as matter cannot be destroyed or created. The Jain text Tattvarthasutra (2nd century CE) states that a substance is permanent, but its modes are characterised by creation and destruction. An important idea in ancient Greek philosophy was that "Nothing comes from nothing", so that what exists now has always existed: no new matter can come into existence where there was none before. An explicit statement of this, along with the further principle that nothing can pass away into nothing, is found in Empedocles (c.4th century BCE): "For it is impossible for anything to come to be from what is not, and it cannot be brought about or heard of that what is should be utterly destroyed." A further principle of conservation was stated by Epicurus around the 3rd century BCE, who wrote in describing the nature of the Universe that "the totality of things was always such as it is now, and always will be". Discoveries in chemistry By the 18th century the principle of conservation of mass during chemical reactions was widely used and was an important assumption during experiments, even before a definition was widely established, though an expression of the law can be dated back to Hero of Alexandria’s time, as can be seen in the works of Joseph Black, Henry Cavendish, and Jean Rey. One of the first to outline the principle was Mikhail Lomonosov in 1756. He may have demonstrated it by experiments and certainly had discussed the principle in 1748 in correspondence with Leonhard Euler, though his claim on the subject is sometimes challenged. According to the Soviet physicist Yakov Dorfman:The universal law was formulated by Lomonosov on the basis of general philosophical materialistic considerations, it was never questioned or tested by him, but on the contrary, served him as a solid starting position in all research throughout his life. A more refined series of experiments were later carried out by Antoine Lavoisier who expressed his conclusion in 1773 and popularized the principle of conservation of mass. 
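The methane-combustion bookkeeping described earlier in this section can be checked numerically by summing molar masses on each side of CH4 + 2 O2 → CO2 + 2 H2O. The short Python sketch below does this; the atomic masses are rounded values assumed for illustration.

```python
# Rounded atomic masses in g/mol (assumed illustrative values)
H, C, O = 1.008, 12.011, 15.999

CH4 = C + 4 * H
O2  = 2 * O
CO2 = C + 2 * O
H2O = 2 * H + O

reactants = CH4 + 2 * O2   # one methane molecule plus two oxygen molecules
products  = CO2 + 2 * H2O  # one carbon dioxide molecule plus two water molecules

print(f"reactants: {reactants:.3f} g per mole of CH4 burned")
print(f"products : {products:.3f} g per mole of CH4 burned")
# Both sums agree (about 80.04 g), as conservation of mass requires.
```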
The demonstrations of the principle disproved the then popular phlogiston theory that said that mass could be gained or lost in combustion and heat processes. The conservation of mass was obscure for millennia because of the buoyancy effect of the Earth's atmosphere on the weight of gases. For example, a piece of wood weighs less after burning; this seemed to suggest that some of its mass disappears, or is transformed or lost. Careful experiments were performed in which chemical reactions such as rusting were allowed to take place in sealed glass ampoules; it was found that the chemical reaction did not change the weight of the sealed container and its contents. Weighing of gases using scales was not possible until the invention of the vacuum pump in the 17th century. Once understood, the conservation of mass was of great importance in progressing from alchemy to modern chemistry. Once early chemists realized that chemical substances never disappeared but were only transformed into other substances with the same weight, these scientists could for the first time embark on quantitative studies of the transformations of substances. The idea of mass conservation plus a surmise that certain "elemental substances" also could not be transformed into others by chemical reactions, in turn led to an understanding of chemical elements, as well as the idea that all chemical processes and transformations (such as burning and metabolic reactions) are reactions between invariant amounts or weights of these chemical elements. Following the pioneering work of Lavoisier, the exhaustive experiments of Jean Stas supported the consistency of this law in chemical reactions, even though they were carried out with other intentions. His research indicated that in certain reactions the loss or gain could not have been more than 2 to 4 parts in 100,000. The difference in the accuracy aimed at and attained by Lavoisier on the one hand, and by Morley and Stas on the other, is enormous. Modern physics The law of conservation of mass was challenged with the advent of special relativity. In one of the Annus Mirabilis papers of Albert Einstein in 1905, he suggested an equivalence between mass and energy. This theory implied several assertions, like the idea that internal energy of a system could contribute to the mass of the whole system, or that mass could be converted into electromagnetic radiation. However, as Max Planck pointed out, a change in mass as a result of extraction or addition of chemical energy, as predicted by Einstein's theory, is so small that it could not be measured with the available instruments and could not be presented as a test of special relativity. Einstein speculated that the energies associated with newly discovered radioactivity were significant enough, compared with the mass of systems producing them, to enable their change of mass to be measured, once the energy of the reaction had been removed from the system. This later indeed proved to be possible, although it was eventually to be the first artificial nuclear transmutation reaction in 1932, demonstrated by Cockcroft and Walton, that proved the first successful test of Einstein's theory regarding mass loss with energy gain. The law of conservation of mass and the analogous law of conservation of energy were finally generalized and unified into the principle of mass–energy equivalence, described by Albert Einstein's equation . 
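Planck's observation that chemically induced mass changes are far too small to measure can be illustrated with a rough calculation: dividing the heat released by a typical reaction by the square of the speed of light gives the mass carried away with that energy. The sketch below is an order-of-magnitude estimate with an assumed reaction energy for burning one mole of methane, not a value taken from this article.

```python
c = 2.998e8          # speed of light, m/s
E = 890e3            # approximate heat released by burning 1 mol of CH4, in joules (assumed)
m_reactants = 0.080  # approximate mass of 1 mol CH4 plus 2 mol O2, in kg

delta_m = E / c**2   # mass equivalent of the released energy, from E = m c^2

print(f"mass carried away ~ {delta_m:.1e} kg")             # ~1e-11 kg
print(f"relative change   ~ {delta_m / m_reactants:.1e}")  # ~1e-10
# A relative change of about one part in 10^10 is far below what chemical balances could detect.
```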
Special relativity also redefines the concept of mass and energy, which can be used interchangeably and are defined relative to the frame of reference. Several quantities had to be defined for consistency, such as the rest mass of a particle (mass in the rest frame of the particle) and the relativistic mass (in another frame). The latter term is usually less frequently used. Generalization Special relativity In special relativity, the conservation of mass does not apply if the system is open and energy escapes. However, it does continue to apply to totally closed (isolated) systems. If energy cannot escape a system, its mass cannot decrease. In relativity theory, so long as any type of energy is retained within a system, this energy exhibits mass. Also, mass must be differentiated from matter, since matter may not be perfectly conserved in isolated systems, even though mass is always conserved in such systems. However, matter is so nearly conserved in chemistry that violations of matter conservation were not measured until the nuclear age, and the assumption of matter conservation remains an important practical concept in most systems in chemistry and other studies that do not involve the high energies typical of radioactivity and nuclear reactions. The mass associated with chemical amounts of energy is too small to measure The change in mass of certain kinds of open systems where atoms or massive particles are not allowed to escape, but other types of energy (such as light or heat) are allowed to enter, escape or be merged, went unnoticed during the 19th century, because the change in mass associated with addition or loss of small quantities of thermal or radiant energy in chemical reactions is very small. (In theory, mass would not change at all for experiments conducted in isolated systems where heat and work were not allowed in or out.) Mass conservation remains correct if energy is not lost The conservation of relativistic mass implies the viewpoint of a single observer (or the view from a single inertial frame) since changing inertial frames may result in a change of the total energy (relativistic energy) for systems, and this quantity determines the relativistic mass. The principle that the mass of a system of particles must be equal to the sum of their rest masses, though true in classical physics, may be false in special relativity. Rest masses cannot be summed to derive the total mass of a system because this practice does not take into account other forms of energy, such as kinetic energy, potential energy, and the energy of massless particles such as photons. All forms of energy in a system affect the total mass of the system. For moving massive particles in a system, examining the rest masses of the various particles also amounts to introducing many different inertial observation frames, which is prohibited if total system energy and momentum are to be conserved. Additionally, in the rest frame of any one particle this procedure ignores the momenta of other particles, which affect the system mass if the other particles are in motion in this frame. For the special type of mass called invariant mass, changing the inertial frame of observation for a whole closed system has no effect on the measure of invariant mass of the system, which remains both conserved and invariant (unchanging), even for different observers who view the entire system. 
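A minimal numerical illustration of why rest masses cannot simply be summed: two photons, each individually massless, form a system whose invariant mass is nonzero, obtained by subtracting the square of the total momentum (times c) from the square of the total energy and taking the square root. The photon energies in the sketch below are arbitrary assumed values.

```python
import math

c = 1.0  # work in units where c = 1, so energy, momentum and mass share the same unit

# Two photons with assumed energies, travelling in opposite directions along x.
E1, p1 = 2.0, (+2.0, 0.0, 0.0)   # for a photon, |p| = E/c
E2, p2 = 3.0, (-3.0, 0.0, 0.0)

E_tot = E1 + E2
p_tot = tuple(a + b for a, b in zip(p1, p2))
p_mag = math.sqrt(sum(x * x for x in p_tot))

# Invariant mass of the system: (M c^2)^2 = E_tot^2 - (|p_tot| c)^2
M = math.sqrt(E_tot**2 - (p_mag * c)**2) / c**2

print(f"total energy   = {E_tot}")
print(f"|total p|      = {p_mag}")
print(f"invariant mass = {M:.3f}  (nonzero, although each photon is massless)")
```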
Invariant mass is a system combination of energy and momentum, which is invariant for any observer, because in any inertial frame, the energies and momenta of the various particles always add to the same quantity (the momentum may be negative, so the addition amounts to a subtraction). The invariant mass is the relativistic mass of the system when viewed in the center of momentum frame. It is the minimum mass which a system may exhibit, as viewed from all possible inertial frames. The conservation of both relativistic and invariant mass applies even to systems of particles created by pair production, where energy for new particles may come from kinetic energy of other particles, or from one or more photons as part of a system that includes other particles besides a photon. Again, neither the relativistic nor the invariant mass of totally closed (that is, isolated) systems changes when new particles are created. However, different inertial observers will disagree on the value of this conserved mass, if it is the relativistic mass (i.e., relativistic mass is conserved but not invariant). However, all observers agree on the value of the conserved mass if the mass being measured is the invariant mass (i.e., invariant mass is both conserved and invariant). The mass–energy equivalence formula gives a different prediction in non-isolated systems, since if energy is allowed to escape a system, both relativistic mass and invariant mass will escape also. In this case, the mass–energy equivalence formula predicts that the change in mass of a system is associated with the change in its energy due to energy being added or subtracted: This form of the equation in terms of changes was the form in which it was originally presented by Einstein. In this sense, mass changes in any system are explained if the mass of the energy added or removed from the system is taken into account. The formula implies that bound systems have an invariant mass (rest mass for the system) less than the sum of their parts, if the binding energy has been allowed to escape the system after the system has been bound. This may happen by converting system potential energy into some other kind of active energy, such as kinetic energy or photons, which easily escape a bound system. The difference in system masses, called a mass defect, is a measure of the binding energy in bound systems – in other words, the energy needed to break the system apart. The greater the mass defect, the larger the binding energy. The binding energy (which itself has mass) must be released (as light or heat) when the parts combine to form the bound system, and this is the reason the mass of the bound system decreases when the energy leaves the system. The total invariant mass is actually conserved, when the mass of the binding energy that has escaped, is taken into account. General relativity In general relativity, the total invariant mass of photons in an expanding volume of space will decrease, due to the red shift of such an expansion. The conservation of both mass and energy therefore depends on various corrections made to energy in the theory, due to the changing gravitational potential energy of such systems. See also Charge conservation Conservation law Fick's laws of diffusion Law of definite proportions Law of multiple proportions References Mass Conservation laws
Orbital mechanics
Orbital mechanics or astrodynamics is the application of ballistics and celestial mechanics to the practical problems concerning the motion of rockets, satellites, and other spacecraft. The motion of these objects is usually calculated from Newton's laws of motion and the law of universal gravitation. Orbital mechanics is a core discipline within space-mission design and control. Celestial mechanics treats more broadly the orbital dynamics of systems under the influence of gravity, including both spacecraft and natural astronomical bodies such as star systems, planets, moons, and comets. Orbital mechanics focuses on spacecraft trajectories, including orbital maneuvers, orbital plane changes, and interplanetary transfers, and is used by mission planners to predict the results of propulsive maneuvers. General relativity is a more exact theory than Newton's laws for calculating orbits, and it is sometimes necessary to use it for greater accuracy or in high-gravity situations (e.g. orbits near the Sun). History Until the rise of space travel in the twentieth century, there was little distinction between orbital and celestial mechanics. At the time of Sputnik, the field was termed 'space dynamics'. The fundamental techniques, such as those used to solve the Keplerian problem (determining position as a function of time), are therefore the same in both fields. Furthermore, the history of the fields is almost entirely shared. Johannes Kepler was the first to successfully model planetary orbits to a high degree of accuracy, publishing his laws in 1605. Isaac Newton published more general laws of celestial motion in the first edition of Philosophiæ Naturalis Principia Mathematica (1687), which gave a method for finding the orbit of a body following a parabolic path from three observations. This was used by Edmund Halley to establish the orbits of various comets, including that which bears his name. Newton's method of successive approximation was formalised into an analytic method by Leonhard Euler in 1744, whose work was in turn generalised to elliptical and hyperbolic orbits by Johann Lambert in 1761–1777. Another milestone in orbit determination was Carl Friedrich Gauss's assistance in the "recovery" of the dwarf planet Ceres in 1801. Gauss's method was able to use just three observations (in the form of pairs of right ascension and declination), to find the six orbital elements that completely describe an orbit. The theory of orbit determination has subsequently been developed to the point where today it is applied in GPS receivers as well as the tracking and cataloguing of newly observed minor planets. Modern orbit determination and prediction are used to operate all types of satellites and space probes, as it is necessary to know their future positions to a high degree of accuracy. Astrodynamics was developed by astronomer Samuel Herrick beginning in the 1930s. He consulted the rocket scientist Robert Goddard and was encouraged to continue his work on space navigation techniques, as Goddard believed they would be needed in the future. Numerical techniques of astrodynamics were coupled with new powerful computers in the 1960s, and humans were ready to travel to the Moon and return. Practical techniques Rules of thumb The following rules of thumb are useful for situations approximated by classical mechanics under the standard assumptions of astrodynamics outlined below. 
The specific example discussed is of a satellite orbiting a planet, but the rules of thumb could also apply to other situations, such as orbits of small bodies around a star such as the Sun. Kepler's laws of planetary motion: Orbits are elliptical, with the heavier body at one focus of the ellipse. A special case of this is a circular orbit (a circle is a special case of ellipse) with the planet at the center. A line drawn from the planet to the satellite sweeps out equal areas in equal times no matter which portion of the orbit is measured. The square of a satellite's orbital period is proportional to the cube of its average distance from the planet. Without applying force (such as firing a rocket engine), the period and shape of the satellite's orbit will not change. A satellite in a low orbit (or a low part of an elliptical orbit) moves more quickly with respect to the surface of the planet than a satellite in a higher orbit (or a high part of an elliptical orbit), due to the stronger gravitational attraction closer to the planet. If thrust is applied at only one point in the satellite's orbit, it will return to that same point on each subsequent orbit, though the rest of its path will change. Thus one cannot move from one circular orbit to another with only one brief application of thrust. From a circular orbit, thrust applied in a direction opposite to the satellite's motion changes the orbit to an elliptical one; the satellite will descend and reach the lowest orbital point (the periapse) at 180 degrees away from the firing point; then it will ascend back. The period of the resultant orbit will be less than that of the original circular orbit. Thrust applied in the direction of the satellite's motion creates an elliptical orbit with its highest point (apoapse) 180 degrees away from the firing point. The period of the resultant orbit will be longer than that of the original circular orbit. The consequences of the rules of orbital mechanics are sometimes counter-intuitive. For example, if two spacecrafts are in the same circular orbit and wish to dock, unless they are very close, the trailing craft cannot simply fire its engines to go faster. This will change the shape of its orbit, causing it to gain altitude and actually slow down relative to the leading craft, missing the target. The space rendezvous before docking normally takes multiple precisely calculated engine firings in multiple orbital periods, requiring hours or even days to complete. To the extent that the standard assumptions of astrodynamics do not hold, actual trajectories will vary from those calculated. For example, simple atmospheric drag is another complicating factor for objects in low Earth orbit. These rules of thumb are decidedly inaccurate when describing two or more bodies of similar mass, such as a binary star system (see n-body problem). Celestial mechanics uses more general rules applicable to a wider variety of situations. Kepler's laws of planetary motion, which can be mathematically derived from Newton's laws, hold strictly only in describing the motion of two gravitating bodies in the absence of non-gravitational forces; they also describe parabolic and hyperbolic trajectories. In the close proximity of large objects like stars the differences between classical mechanics and general relativity also become important. 
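The counter-intuitive rendezvous behaviour described above can be checked with a short calculation: a small prograde burn from a circular low Earth orbit raises the semi-major axis and therefore the orbital period, so the thrusting spacecraft drifts backward relative to a target ahead of it in the original orbit. The Python sketch below uses the standard two-body relations (the vis-viva equation and Kepler's third law, both given later in this article) with assumed round numbers for Earth.

```python
import math

mu = 3.986e14          # Earth's gravitational parameter, m^3/s^2 (approximate)
r = 6771e3             # circular orbit radius, roughly 400 km above the surface, m
dv = 10.0              # small prograde burn, m/s (assumed)

v_circ = math.sqrt(mu / r)                     # circular orbital speed
T_circ = 2 * math.pi * math.sqrt(r**3 / mu)    # original period (Kepler's third law)

v_new = v_circ + dv
a_new = 1 / (2 / r - v_new**2 / mu)            # vis-viva solved for the new semi-major axis
T_new = 2 * math.pi * math.sqrt(a_new**3 / mu)

print(f"original period : {T_circ / 60:.2f} min")
print(f"new period      : {T_new / 60:.2f} min  (longer, so the craft falls behind each orbit)")
```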
Laws of astrodynamics The fundamental laws of astrodynamics are Newton's law of universal gravitation and Newton's laws of motion, while the fundamental mathematical tool is differential calculus. In a Newtonian framework, the laws governing orbits and trajectories are in principle time-symmetric. Standard assumptions in astrodynamics include non-interference from outside bodies, negligible mass for one of the bodies, and negligible other forces (such as from the solar wind, atmospheric drag, etc.). More accurate calculations can be made without these simplifying assumptions, but they are more complicated. The increased accuracy often does not make enough of a difference in the calculation to be worthwhile. Kepler's laws of planetary motion may be derived from Newton's laws, when it is assumed that the orbiting body is subject only to the gravitational force of the central attractor. When an engine thrust or propulsive force is present, Newton's laws still apply, but Kepler's laws are invalidated. When the thrust stops, the resulting orbit will be different but will once again be described by Kepler's laws which have been set out above. The three laws are: The orbit of every planet is an ellipse with the Sun at one of the foci. A line joining a planet and the Sun sweeps out equal areas during equal intervals of time. The squares of the orbital periods of planets are directly proportional to the cubes of the semi-major axis of the orbits. Escape velocity The formula for an escape velocity is derived as follows. The specific energy (energy per unit mass) of any space vehicle is composed of two components, the specific potential energy and the specific kinetic energy. The specific potential energy associated with a planet of mass M is given by where G is the gravitational constant and r is the distance between the two bodies; while the specific kinetic energy of an object is given by where v is its Velocity; and so the total specific orbital energy is Since energy is conserved, cannot depend on the distance, , from the center of the central body to the space vehicle in question, i.e. v must vary with r to keep the specific orbital energy constant. Therefore, the object can reach infinite only if this quantity is nonnegative, which implies The escape velocity from the Earth's surface is about 11 km/s, but that is insufficient to send the body an infinite distance because of the gravitational pull of the Sun. To escape the Solar System from a location at a distance from the Sun equal to the distance Sun–Earth, but not close to the Earth, requires around 42 km/s velocity, but there will be "partial credit" for the Earth's orbital velocity for spacecraft launched from Earth, if their further acceleration (due to the propulsion system) carries them in the same direction as Earth travels in its orbit. Formulae for free orbits Orbits are conic sections, so the formula for the distance of a body for a given angle corresponds to the formula for that curve in polar coordinates, which is: is called the gravitational parameter. and are the masses of objects 1 and 2, and is the specific angular momentum of object 2 with respect to object 1. The parameter is known as the true anomaly, is the semi-latus rectum, while is the orbital eccentricity, all obtainable from the various forms of the six independent orbital elements. Circular orbits All bounded orbits where the gravity of a central body dominates are elliptical in nature. 
A special case of this is the circular orbit, which is an ellipse of zero eccentricity. The formula for the velocity of a body in a circular orbit at distance r from the center of gravity of mass M can be derived as follows: Centrifugal acceleration matches the acceleration due to gravity. So, Therefore, where is the gravitational constant, equal to 6.6743 × 10−11 m3/(kg·s2) To properly use this formula, the units must be consistent; for example, must be in kilograms, and must be in meters. The answer will be in meters per second. The quantity is often termed the standard gravitational parameter, which has a different value for every planet or moon in the Solar System. Once the circular orbital velocity is known, the escape velocity is easily found by multiplying by : To escape from gravity, the kinetic energy must at least match the negative potential energy. Therefore, Elliptical orbits If , then the denominator of the equation of free orbits varies with the true anomaly , but remains positive, never becoming zero. Therefore, the relative position vector remains bounded, having its smallest magnitude at periapsis , which is given by: The maximum value is reached when . This point is called the apoapsis, and its radial coordinate, denoted , is Let be the distance measured along the apse line from periapsis to apoapsis , as illustrated in the equation below: Substituting the equations above, we get: a is the semimajor axis of the ellipse. Solving for , and substituting the result in the conic section curve formula above, we get: Orbital period Under standard assumptions the orbital period of a body traveling along an elliptic orbit can be computed as: where: is the standard gravitational parameter, is the length of the semi-major axis. Conclusions: The orbital period is equal to that for a circular orbit with the orbit radius equal to the semi-major axis, For a given semi-major axis the orbital period does not depend on the eccentricity (See also: Kepler's third law). Velocity Under standard assumptions the orbital speed of a body traveling along an elliptic orbit can be computed from the Vis-viva equation as: where: is the standard gravitational parameter, is the distance between the orbiting bodies. is the length of the semi-major axis. The velocity equation for a hyperbolic trajectory is . Energy Under standard assumptions, specific orbital energy of elliptic orbit is negative and the orbital energy conservation equation (the Vis-viva equation) for this orbit can take the form: where: is the speed of the orbiting body, is the distance of the orbiting body from the center of mass of the central body, is the semi-major axis, is the standard gravitational parameter. Conclusions: For a given semi-major axis the specific orbital energy is independent of the eccentricity. Using the virial theorem we find: the time-average of the specific potential energy is equal to the time-average of is the time-average of the specific kinetic energy is equal to Parabolic orbits If the eccentricity equals 1, then the orbit equation becomes: where: is the radial distance of the orbiting body from the mass center of the central body, is specific angular momentum of the orbiting body, is the true anomaly of the orbiting body, is the standard gravitational parameter. As the true anomaly θ approaches 180°, the denominator approaches zero, so that r tends towards infinity. Hence, the energy of the trajectory for which e=1 is zero, and is given by: where: is the speed of the orbiting body. 
In other words, the speed anywhere on a parabolic path is: Hyperbolic orbits If , the orbit formula, describes the geometry of the hyperbolic orbit. The system consists of two symmetric curves. The orbiting body occupies one of them; the other one is its empty mathematical image. Clearly, the denominator of the equation above goes to zero when . we denote this value of true anomaly since the radial distance approaches infinity as the true anomaly approaches , known as the true anomaly of the asymptote. Observe that lies between 90° and 180°. From the trigonometric identity it follows that: Energy Under standard assumptions, specific orbital energy of a hyperbolic trajectory is greater than zero and the orbital energy conservation equation for this kind of trajectory takes form: where: is the orbital velocity of orbiting body, is the radial distance of orbiting body from central body, is the negative semi-major axis of the orbit's hyperbola, is standard gravitational parameter. Hyperbolic excess velocity Under standard assumptions the body traveling along a hyperbolic trajectory will attain at infinity an orbital velocity called hyperbolic excess velocity that can be computed as: where: is standard gravitational parameter, is the negative semi-major axis of orbit's hyperbola. The hyperbolic excess velocity is related to the specific orbital energy or characteristic energy by Calculating trajectories Kepler's equation One approach to calculating orbits (mainly used historically) is to use Kepler's equation: . where M is the mean anomaly, E is the eccentric anomaly, and is the eccentricity. With Kepler's formula, finding the time-of-flight to reach an angle (true anomaly) of from periapsis is broken into two steps: Compute the eccentric anomaly from true anomaly Compute the time-of-flight from the eccentric anomaly Finding the eccentric anomaly at a given time (the inverse problem) is more difficult. Kepler's equation is transcendental in , meaning it cannot be solved for algebraically. Kepler's equation can be solved for analytically by inversion. A solution of Kepler's equation, valid for all real values of is: Evaluating this yields: Alternatively, Kepler's Equation can be solved numerically. First one must guess a value of and solve for time-of-flight; then adjust as necessary to bring the computed time-of-flight closer to the desired value until the required precision is achieved. Usually, Newton's method is used to achieve relatively fast convergence. The main difficulty with this approach is that it can take prohibitively long to converge for the extreme elliptical orbits. For near-parabolic orbits, eccentricity is nearly 1, and substituting into the formula for mean anomaly, , we find ourselves subtracting two nearly-equal values, and accuracy suffers. For near-circular orbits, it is hard to find the periapsis in the first place (and truly circular orbits have no periapsis at all). Furthermore, the equation was derived on the assumption of an elliptical orbit, and so it does not hold for parabolic or hyperbolic orbits. These difficulties are what led to the development of the universal variable formulation, described below. Conic orbits For simple procedures, such as computing the delta-v for coplanar transfer ellipses, traditional approaches are fairly effective. Others, such as time-of-flight are far more complicated, especially for near-circular and hyperbolic orbits. 
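The numerical route just described can be made concrete with a short sketch that solves Kepler's equation for the eccentric anomaly by Newton's method and then converts it to the true anomaly; the eccentricity and mean anomaly are arbitrary assumed inputs.

```python
import math

def eccentric_anomaly(M, e, tol=1e-12, max_iter=50):
    """Solve Kepler's equation M = E - e*sin(E) for E using Newton's method."""
    E = M if e < 0.8 else math.pi        # a common starting guess
    for _ in range(max_iter):
        f = E - e * math.sin(E) - M      # residual of Kepler's equation
        f_prime = 1 - e * math.cos(E)    # derivative of the residual with respect to E
        step = f / f_prime
        E -= step
        if abs(step) < tol:
            break
    return E

# Example with assumed values: eccentricity 0.3, mean anomaly 1.0 rad.
e, M = 0.3, 1.0
E = eccentric_anomaly(M, e)

# True anomaly from the eccentric anomaly (standard conversion for an ellipse).
nu = 2 * math.atan2(math.sqrt(1 + e) * math.sin(E / 2),
                    math.sqrt(1 - e) * math.cos(E / 2))

print(f"E  = {E:.6f} rad   (check: E - e*sin(E) = {E - e*math.sin(E):.6f}, should equal M)")
print(f"nu = {nu:.6f} rad")
```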
The patched conic approximation The Hohmann transfer orbit alone is a poor approximation for interplanetary trajectories because it neglects the planets' own gravity. Planetary gravity dominates the behavior of the spacecraft in the vicinity of a planet and in most cases Hohmann severely overestimates delta-v, and produces highly inaccurate prescriptions for burn timings. A relatively simple way to get a first-order approximation of delta-v is based on the 'Patched Conic Approximation' technique. One must choose the one dominant gravitating body in each region of space through which the trajectory will pass, and to model only that body's effects in that region. For instance, on a trajectory from the Earth to Mars, one would begin by considering only the Earth's gravity until the trajectory reaches a distance where the Earth's gravity no longer dominates that of the Sun. The spacecraft would be given escape velocity to send it on its way to interplanetary space. Next, one would consider only the Sun's gravity until the trajectory reaches the neighborhood of Mars. During this stage, the transfer orbit model is appropriate. Finally, only Mars's gravity is considered during the final portion of the trajectory where Mars's gravity dominates the spacecraft's behavior. The spacecraft would approach Mars on a hyperbolic orbit, and a final retrograde burn would slow the spacecraft enough to be captured by Mars. Friedrich Zander was one of the first to apply the patched-conics approach for astrodynamics purposes, when proposing the use of intermediary bodies' gravity for interplanetary travels, in what is known today as a gravity assist. The size of the "neighborhoods" (or spheres of influence) vary with radius : where is the semimajor axis of the planet's orbit relative to the Sun; and are the masses of the planet and Sun, respectively. This simplification is sufficient to compute rough estimates of fuel requirements, and rough time-of-flight estimates, but it is not generally accurate enough to guide a spacecraft to its destination. For that, numerical methods are required. The universal variable formulation To address computational shortcomings of traditional approaches for solving the 2-body problem, the universal variable formulation was developed. It works equally well for the circular, elliptical, parabolic, and hyperbolic cases, the differential equations converging well when integrated for any orbit. It also generalizes well to problems incorporating perturbation theory. Perturbations The universal variable formulation works well with the variation of parameters technique, except now, instead of the six Keplerian orbital elements, we use a different set of orbital elements: namely, the satellite's initial position and velocity vectors and at a given epoch . In a two-body simulation, these elements are sufficient to compute the satellite's position and velocity at any time in the future, using the universal variable formulation. Conversely, at any moment in the satellite's orbit, we can measure its position and velocity, and then use the universal variable approach to determine what its initial position and velocity would have been at the epoch. In perfect two-body motion, these orbital elements would be invariant (just like the Keplerian elements would be). However, perturbations cause the orbital elements to change over time. Hence, the position element is written as and the velocity element as , indicating that they vary with time. 
The technique to compute the effect of perturbations becomes one of finding expressions, either exact or approximate, for the functions and . The following are some effects which make real orbits differ from the simple models based on a spherical Earth. Most of them can be handled on short timescales (perhaps less than a few thousand orbits) by perturbation theory because they are small relative to the corresponding two-body effects. Equatorial bulges cause precession of the node and the perigee Tesseral harmonics of the gravity field introduce additional perturbations Lunar and solar gravity perturbations alter the orbits Atmospheric drag reduces the semi-major axis unless make-up thrust is used Over very long timescales (perhaps millions of orbits), even small perturbations can dominate, and the behavior can become chaotic. On the other hand, the various perturbations can be orchestrated by clever astrodynamicists to assist with orbit maintenance tasks, such as station-keeping, ground track maintenance or adjustment, or phasing of perigee to cover selected targets at low altitude. Orbital maneuver In spaceflight, an orbital maneuver is the use of propulsion systems to change the orbit of a spacecraft. For spacecraft far from Earth—for example those in orbits around the Sun—an orbital maneuver is called a deep-space maneuver (DSM). Orbital transfer Transfer orbits are usually elliptical orbits that allow spacecraft to move from one (usually substantially circular) orbit to another. Usually they require a burn at the start, a burn at the end, and sometimes one or more burns in the middle. The Hohmann transfer orbit requires a minimal delta-v. A bi-elliptic transfer can require less energy than the Hohmann transfer, if the ratio of orbits is 11.94 or greater, but comes at the cost of increased trip time over the Hohmann transfer. Faster transfers may use any orbit that intersects both the original and destination orbits, at the cost of higher delta-v. Using low thrust engines (such as electrical propulsion), if the initial orbit is supersynchronous to the final desired circular orbit then the optimal transfer orbit is achieved by thrusting continuously in the direction of the velocity at apogee. This method however takes much longer due to the low thrust. For the case of orbital transfer between non-coplanar orbits, the change-of-plane thrust must be made at the point where the orbital planes intersect (the "node"). As the objective is to change the direction of the velocity vector by an angle equal to the angle between the planes, almost all of this thrust should be made when the spacecraft is at the node near the apoapse, when the magnitude of the velocity vector is at its lowest. However, a small fraction of the orbital inclination change can be made at the node near the periapse, by slightly angling the transfer orbit injection thrust in the direction of the desired inclination change. This works because the cosine of a small angle is very nearly one, resulting in the small plane change being effectively "free" despite the high velocity of the spacecraft near periapse, as the Oberth Effect due to the increased, slightly angled thrust exceeds the cost of the thrust in the orbit-normal axis. Gravity assist and the Oberth effect In a gravity assist, a spacecraft swings by a planet and leaves in a different direction, at a different speed. This is useful to speed or slow a spacecraft instead of carrying more fuel. 
This maneuver can be approximated by an elastic collision at large distances, though the flyby does not involve any physical contact. Due to Newton's Third Law (equal and opposite reaction), any momentum gained by a spacecraft must be lost by the planet, or vice versa. However, because the planet is much, much more massive than the spacecraft, the effect on the planet's orbit is negligible. The Oberth effect can be employed, particularly during a gravity assist operation. This effect is that use of a propulsion system works better at high speeds, and hence course changes are best done when close to a gravitating body; this can multiply the effective delta-v. Interplanetary Transport Network and fuzzy orbits It is now possible to use computers to search for routes using the nonlinearities in the gravity of the planets and moons of the Solar System. For example, it is possible to plot an orbit from high Earth orbit to Mars, passing close to one of the Earth's Trojan points. Collectively referred to as the Interplanetary Transport Network, these highly perturbative, even chaotic, orbital trajectories in principle need no fuel beyond that needed to reach the Lagrange point (in practice keeping to the trajectory requires some course corrections). The biggest problem with them is they can be exceedingly slow, taking many years. In addition launch windows can be very far apart. They have, however, been employed on projects such as Genesis. This spacecraft visited the Earth-Sun point and returned using very little propellant. See also Celestial mechanics Chaos theory Kepler orbit Lagrange point Mechanical engineering N-body problem Roche limit Spacecraft propulsion Universal variable formulation References Further reading Many of the options, procedures, and supporting theory are covered in standard works such as: External links ORBITAL MECHANICS (Rocket and Space Technology) Java Astrodynamics Toolkit Astrodynamics-based Space Traffic and Event Knowledge Graph
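As a rough companion to the transfer-orbit discussion above, the sketch below evaluates the two burns of a Hohmann transfer between circular, coplanar orbits, here from a low Earth orbit to geostationary radius with assumed round numbers. The delta-v expressions are the standard textbook results rather than formulas quoted in this article.

```python
import math

mu = 3.986e14    # Earth's gravitational parameter, m^3/s^2 (approximate)
r1 = 6778e3      # initial circular orbit radius, roughly 400 km altitude, m
r2 = 42164e3     # geostationary orbit radius, m

a_t = (r1 + r2) / 2                                              # transfer-ellipse semi-major axis
dv1 = math.sqrt(mu / r1) * (math.sqrt(2 * r2 / (r1 + r2)) - 1)   # burn to enter the transfer ellipse
dv2 = math.sqrt(mu / r2) * (1 - math.sqrt(2 * r1 / (r1 + r2)))   # burn to circularize at r2
t_transfer = math.pi * math.sqrt(a_t**3 / mu)                    # half the transfer-ellipse period

print(f"dv1 ≈ {dv1:.0f} m/s, dv2 ≈ {dv2:.0f} m/s, total ≈ {dv1 + dv2:.0f} m/s")
print(f"transfer time ≈ {t_transfer / 3600:.2f} h")
```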
Maxwell relations
Maxwell's relations are a set of equations in thermodynamics which are derivable from the symmetry of second derivatives and from the definitions of the thermodynamic potentials. These relations are named for the nineteenth-century physicist James Clerk Maxwell. Equations The structure of Maxwell relations is a statement of equality among the second derivatives for continuous functions. It follows directly from the fact that the order of differentiation of an analytic function of two variables is irrelevant (Schwarz theorem). In the case of Maxwell relations the function considered is a thermodynamic potential and and are two different natural variables for that potential, we have where the partial derivatives are taken with all other natural variables held constant. For every thermodynamic potential there are possible Maxwell relations where is the number of natural variables for that potential. The four most common Maxwell relations The four most common Maxwell relations are the equalities of the second derivatives of each of the four thermodynamic potentials, with respect to their thermal natural variable (temperature , or entropy and their mechanical natural variable (pressure , or volume where the potentials as functions of their natural thermal and mechanical variables are the internal energy , enthalpy , Helmholtz free energy , and Gibbs free energy . The thermodynamic square can be used as a mnemonic to recall and derive these relations. The usefulness of these relations lies in their quantifying entropy changes, which are not directly measurable, in terms of measurable quantities like temperature, volume, and pressure. Each equation can be re-expressed using the relationship which are sometimes also known as Maxwell relations. Derivations Short derivation This section is based on chapter 5 of. Suppose we are given four real variables , restricted to move on a 2-dimensional surface in . Then, if we know two of them, we can determine the other two uniquely (generically). In particular, we may take any two variables as the independent variables, and let the other two be the dependent variables, then we can take all these partial derivatives. Proposition: Proof: This is just the chain rule. Proposition: Proof. We can ignore . Then locally the surface is just . Then , etc. Now multiply them. Proof of Maxwell's relations: There are four real variables , restricted on the 2-dimensional surface of possible thermodynamic states. This allows us to use the previous two propositions. It suffices to prove the first of the four relations, as the other three can be obtained by transforming the first relation using the previous two propositions. Pick as the independent variables, and as the dependent variable. We have . Now, since the surface is , that is,which yields the result. Another derivation Based on. Since , around any cycle, we haveTake the cycle infinitesimal, we find that . That is, the map is area-preserving. By the chain rule for Jacobians, for any coordinate transform , we haveNow setting to various values gives us the four Maxwell relations. For example, setting gives us Extended derivations Maxwell relations are based on simple partial differentiation rules, in particular the total differential of a function and the symmetry of evaluating second order partial derivatives. Derivation based on Jacobians If we view the first law of thermodynamics, as a statement about differential forms, and take the exterior derivative of this equation, we get since . 
This leads to the fundamental identity The physical meaning of this identity can be seen by noting that the two sides are the equivalent ways of writing the work done in an infinitesimal Carnot cycle. An equivalent way of writing the identity is The Maxwell relations now follow directly. For example, The critical step is the penultimate one. The other Maxwell relations follow in similar fashion. For example, General Maxwell relationships The above are not the only Maxwell relationships. When other work terms involving other natural variables besides the volume work are considered or when the number of particles is included as a natural variable, other Maxwell relations become apparent. For example, if we have a single-component gas, then the number of particles N  is also a natural variable of the above four thermodynamic potentials. The Maxwell relationship for the enthalpy with respect to pressure and particle number would then be: where is the chemical potential. In addition, there are other thermodynamic potentials besides the four that are commonly used, and each of these potentials will yield a set of Maxwell relations. For example, the grand potential yields: See also Table of thermodynamic equations Thermodynamic equations References James Clerk Maxwell Thermodynamic equations
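The relations above can also be verified symbolically: choose a concrete Helmholtz free energy A(T, V), compute the entropy and pressure from it, and compare the two mixed derivatives. The SymPy sketch below uses a simplified ideal-gas-like form of A (constants and reference terms dropped), assumed purely for illustration.

```python
import sympy as sp

T, V, n, R, c_v = sp.symbols("T V n R c_v", positive=True)

# A simplified ideal-gas-like Helmholtz free energy (illustrative, constants dropped).
A = -n * R * T * sp.log(V) - n * c_v * T * sp.log(T)

S = -sp.diff(A, T)   # entropy obtained from the potential
P = -sp.diff(A, V)   # pressure obtained from the potential (recovers P = n R T / V here)

lhs = sp.diff(S, V)  # (dS/dV) at constant T
rhs = sp.diff(P, T)  # (dP/dT) at constant V

print("P =", sp.simplify(P))
print("Maxwell relation (dS/dV)_T = (dP/dT)_V holds:", sp.simplify(lhs - rhs) == 0)
```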
Convection–diffusion equation
The convection–diffusion equation is a parabolic partial differential equation that combines the diffusion and convection (advection) equations. It describes physical phenomena where particles, energy, or other physical quantities are transferred inside a physical system due to two processes: diffusion and convection. Depending on context, the same equation can be called the advection–diffusion equation, drift–diffusion equation, or (generic) scalar transport equation. Equation The general equation in conservative form is where is the variable of interest (species concentration for mass transfer, temperature for heat transfer), is the diffusivity (also called diffusion coefficient), such as mass diffusivity for particle motion or thermal diffusivity for heat transport, is the velocity field that the quantity is moving with. It is a function of time and space. For example, in advection, might be the concentration of salt in a river, and then would be the velocity of the water flow as a function of time and location. Another example, might be the concentration of small bubbles in a calm lake, and then would be the velocity of bubbles rising towards the surface by buoyancy (see below) depending on time and location of the bubble. For multiphase flows and flows in porous media, is the (hypothetical) superficial velocity. describes sources or sinks of the quantity , i.e. the creation or destruction of the quantity. For example, for a chemical species, means that a chemical reaction is creating more of the species, and means that a chemical reaction is destroying the species. For heat transport, might occur if thermal energy is being generated by friction. represents gradient and represents divergence. In this equation, represents concentration gradient. In general, , , and may vary with space and time. In cases in which they depend on concentration as well, the equation becomes nonlinear, giving rise to many distinctive mixing phenomena such as Rayleigh–Bénard convection when depends on temperature in the heat transfer formulation and reaction–diffusion pattern formation when depends on concentration in the mass transfer formulation. Often there are several quantities, each with its own convection–diffusion equation, where the destruction of one quantity entails the creation of another. For example, when methane burns, it involves not only the destruction of methane and oxygen but also the creation of carbon dioxide and water vapor. Therefore, while each of these chemicals has its own convection–diffusion equation, they are coupled together and must be solved as a system of differential equations. Derivation The convection–diffusion equation can be derived in a straightforward way from the continuity equation, which states that the rate of change for a scalar quantity in a differential control volume is given by flow and diffusion into and out of that part of the system along with any generation or consumption inside the control volume: where is the total flux and is a net volumetric source for . There are two sources of flux in this situation. First, diffusive flux arises due to diffusion. This is typically approximated by Fick's first law: i.e., the flux of the diffusing material (relative to the bulk motion) in any part of the system is proportional to the local concentration gradient. 
Second, when there is overall convection or flow, there is an associated flux called advective flux: The total flux (in a stationary coordinate system) is given by the sum of these two: Plugging into the continuity equation: Common simplifications In a common situation, the diffusion coefficient is constant, there are no sources or sinks, and the velocity field describes an incompressible flow (i.e., it has zero divergence). Then the formula simplifies to: In this case the equation can be put in the simple convective form: where the derivative of the left hand side is the material derivative of the variable c. In non-interacting material, (for example, when temperature is close to absolute zero, dilute gas has almost zero mass diffusivity), hence the transport equation is simply the continuity equation: Using Fourier transform in both temporal and spatial domain (that is, with integral kernel ), its characteristic equation can be obtained: which gives the general solution: where is any differentiable scalar function. This is the basis of temperature measurement for near Bose–Einstein condensate via time of flight method. Stationary version The stationary convection–diffusion equation describes the steady-state behavior of a convection–diffusion system. In a steady state, , so the equation to solve becomes the second order equation: In one spatial dimension, the equation can be written as Which can be integrated one time in the space variable x to give: Where D is not zero, this is an inhomogeneous first-order linear differential equation with variable coefficients in the variable c(x): where the coefficients are: and: On the other hand, in the positions x where D=0, the first-order diffusion term disappears and the solution becomes simply the ratio: Velocity in response to a force In some cases, the average velocity field exists because of a force; for example, the equation might describe the flow of ions dissolved in a liquid, with an electric field pulling the ions in some direction (as in gel electrophoresis). In this situation, it is usually called the drift–diffusion equation or the Smoluchowski equation, after Marian Smoluchowski who described it in 1915 (not to be confused with the Einstein–Smoluchowski relation or Smoluchowski coagulation equation). Typically, the average velocity is directly proportional to the applied force, giving the equation: where is the force, and characterizes the friction or viscous drag. (The inverse is called mobility.) Derivation of Einstein relation When the force is associated with a potential energy (see conservative force), a steady-state solution to the above equation (i.e. ) is: (assuming and are constant). In other words, there are more particles where the energy is lower. This concentration profile is expected to agree with the Boltzmann distribution (more precisely, the Gibbs measure). From this assumption, the Einstein relation can be proven: Similar equations in other contexts The convection–diffusion equation is a relatively simple equation describing flows, or alternatively, describing a stochastically-changing system. Therefore, the same or similar equation arises in many contexts unrelated to flows through space. It is formally identical to the Fokker–Planck equation for the velocity of a particle. It is closely related to the Black–Scholes equation and other equations in financial mathematics. 
It is closely related to the Navier–Stokes equations, because the flow of momentum in a fluid is mathematically similar to the flow of mass or energy. The correspondence is clearest in the case of an incompressible Newtonian fluid, in which case the Navier–Stokes equation is: where is the momentum of the fluid (per unit volume) at each point (equal to the density multiplied by the velocity ), is viscosity, is fluid pressure, and is any other body force such as gravity. In this equation, the term on the left-hand side describes the change in momentum at a given point; the first term on the right describes the diffusion of momentum by viscosity; the second term on the right describes the advective flow of momentum; and the last two terms on the right describes the external and internal forces which can act as sources or sinks of momentum. In probability theory The convection–diffusion equation (with ) can be viewed as a stochastic differential equation, describing random motion with diffusivity and bias . For example, the equation can describe the Brownian motion of a single particle, where the variable describes the probability distribution for the particle to be in a given position at a given time. The reason the equation can be used that way is because there is no mathematical difference between the probability distribution of a single particle, and the concentration profile of a collection of infinitely many particles (as long as the particles do not interact with each other). The Langevin equation describes advection, diffusion, and other phenomena in an explicitly stochastic way. One of the simplest forms of the Langevin equation is when its "noise term" is Gaussian; in this case, the Langevin equation is exactly equivalent to the convection–diffusion equation. However, the Langevin equation is more general. In semiconductor physics In semiconductor physics, this equation is called the drift–diffusion equation. The word "drift" is related to drift current and drift velocity. The equation is normally written: where and are the concentrations (densities) of electrons and holes, respectively, is the elementary charge, and are the electric currents due to electrons and holes respectively, and are the corresponding "particle currents" of electrons and holes respectively, represents carrier generation and recombination ( for generation of electron-hole pairs, for recombination.) is the electric field vector and are electron and hole mobility. The diffusion coefficient and mobility are related by the Einstein relation as above: where is the Boltzmann constant and is absolute temperature. The drift current and diffusion current refer separately to the two terms in the expressions for , namely: This equation can be solved together with Poisson's equation numerically. An example of results of solving the drift diffusion equation is shown on the right. When light shines on the center of semiconductor, carriers are generated in the middle and diffuse towards two ends. The drift–diffusion equation is solved in this structure and electron density distribution is displayed in the figure. One can see the gradient of carrier from center towards two ends. 
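Readers who want to experiment with the equation can integrate its one-dimensional form directly. The sketch below advances the equation with constant diffusivity and velocity, no sources, and periodic boundaries, using a simple explicit finite-difference scheme (central differences for diffusion, upwind for advection); the grid, time step, and parameter values are assumed for illustration and chosen to keep the scheme stable.

```python
import numpy as np

# 1D convection-diffusion: dc/dt = D * d2c/dx2 - v * dc/dx   (constant D and v, no sources)
L, nx = 1.0, 200
dx = L / nx
D, v = 1e-3, 0.5                          # diffusivity and advection velocity (assumed)
dt = 0.2 * min(dx**2 / D, dx / abs(v))    # conservative time step for stability
steps = 500

x = np.linspace(0.0, L, nx, endpoint=False)
c = np.exp(-((x - 0.3) / 0.05) ** 2)      # initial Gaussian pulse of concentration

for _ in range(steps):
    # Periodic boundaries via np.roll; upwind difference assumes v > 0.
    diffusion = D * (np.roll(c, -1) - 2 * c + np.roll(c, 1)) / dx**2
    advection = -v * (c - np.roll(c, 1)) / dx
    c = c + dt * (diffusion + advection)

print(f"pulse peak now near x ≈ {x[np.argmax(c)]:.2f}, peak height ≈ {c.max():.3f}")
```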
See also Advanced Simulation Library Buckley–Leverett equation Burgers' equation Conservation equations Double diffusive convection Incompressible Navier–Stokes equations Natural convection Nernst–Planck equation Numerical solution of the convection–diffusion equation Notes References Further reading Diffusion Parabolic partial differential equations Stochastic differential equations Transport phenomena Equations of physics
Stefan–Boltzmann law
The Stefan–Boltzmann law, also known as Stefan's law, describes the intensity of the thermal radiation emitted by matter in terms of that matter's temperature. It is named for Josef Stefan, who empirically derived the relationship, and Ludwig Boltzmann who derived the law theoretically. For an ideal absorber/emitter or black body, the Stefan–Boltzmann law states that the total energy radiated per unit surface area per unit time (also known as the radiant exitance) is directly proportional to the fourth power of the black body's temperature, : The constant of proportionality, , is called the Stefan–Boltzmann constant. It has the value In the general case, the Stefan–Boltzmann law for radiant exitance takes the form: where is the emissivity of the surface emitting the radiation. The emissivity is generally between zero and one. An emissivity of one corresponds to a black body. Detailed explanation The radiant exitance (previously called radiant emittance), , has dimensions of energy flux (energy per unit time per unit area), and the SI units of measure are joules per second per square metre (J⋅s⁻¹⋅m⁻²), or equivalently, watts per square metre (W⋅m⁻²). The SI unit for absolute temperature, , is the kelvin (K). To find the total power, , radiated from an object, multiply the radiant exitance by the object's surface area, : Matter that does not absorb all incident radiation emits less total energy than a black body. Emissions are reduced by a factor , where the emissivity, , is a material property which, for most matter, satisfies . Emissivity can in general depend on wavelength, direction, and polarization. However, the emissivity which appears in the non-directional form of the Stefan–Boltzmann law is the hemispherical total emissivity, which reflects emissions as totaled over all wavelengths, directions, and polarizations. The form of the Stefan–Boltzmann law that includes emissivity is applicable to all matter, provided that matter is in a state of local thermodynamic equilibrium (LTE) so that its temperature is well-defined. (This is a trivial conclusion, since the emissivity, , is defined to be the quantity that makes this equation valid. What is non-trivial is the proposition that , which is a consequence of Kirchhoff's law of thermal radiation.) A so-called grey body is a body for which the spectral emissivity is independent of wavelength, so that the total emissivity, , is a constant. In the more general (and realistic) case, the spectral emissivity depends on wavelength. The total emissivity, as applicable to the Stefan–Boltzmann law, may be calculated as a weighted average of the spectral emissivity, with the blackbody emission spectrum serving as the weighting function. It follows that if the spectral emissivity depends on wavelength then the total emissivity depends on the temperature, i.e., . However, if the dependence on wavelength is small, then the dependence on temperature will be small as well. Wavelength- and subwavelength-scale particles, metamaterials, and other nanostructures are not subject to ray-optical limits and may be designed to have an emissivity greater than 1. In national and international standards documents, the symbol is recommended to denote radiant exitance; a superscript circle (°) indicates a term relating to a black body. (A subscript "e" is added when it is important to distinguish the energetic (radiometric) quantity radiant exitance, , from the analogous human vision (photometric) quantity, luminous exitance, denoted .)
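As a numerical illustration of the two relations just described (exitance proportional to the fourth power of temperature, total power equal to exitance times area), the short script below estimates the power radiated by a hot surface; the emissivity and geometry are illustrative values chosen for the example, not figures from this article.

```python
# Radiated power from the Stefan-Boltzmann law: M = emissivity * sigma * T**4, P = A * M.
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def radiated_power(temperature_k, area_m2, emissivity=1.0):
    """Total power in watts radiated by a surface at the given absolute temperature."""
    exitance = emissivity * SIGMA * temperature_k ** 4  # W m^-2
    return area_m2 * exitance

# Example: a 1 m^2 surface with emissivity 0.8 (an assumed value) at 500 K.
print(f"{radiated_power(500.0, 1.0, emissivity=0.8):.0f} W")  # about 2835 W
```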
In common usage, the symbol used for radiant exitance (often called radiant emittance) varies among different texts and in different fields. The Stefan–Boltzmann law may be expressed as a formula for radiance as a function of temperature. Radiance is measured in watts per square metre per steradian (W⋅m⁻²⋅sr⁻¹). The Stefan–Boltzmann law for the radiance of a black body is: The Stefan–Boltzmann law expressed as a formula for radiation energy density is: where is the speed of light. History In 1864, John Tyndall presented measurements of the infrared emission by a platinum filament and the corresponding color of the filament. The proportionality to the fourth power of the absolute temperature was deduced by Josef Stefan (1835–1893) in 1877 on the basis of Tyndall's experimental measurements, in the article Über die Beziehung zwischen der Wärmestrahlung und der Temperatur (On the relationship between thermal radiation and temperature) in the Bulletins from the sessions of the Vienna Academy of Sciences. A derivation of the law from theoretical considerations was presented by Ludwig Boltzmann (1844–1906) in 1884, drawing upon the work of Adolfo Bartoli. Bartoli in 1876 had derived the existence of radiation pressure from the principles of thermodynamics. Following Bartoli, Boltzmann considered an ideal heat engine using electromagnetic radiation instead of an ideal gas as working matter. The law was almost immediately experimentally verified. Heinrich Weber in 1888 pointed out deviations at higher temperatures, but perfect accuracy within measurement uncertainties was confirmed up to temperatures of 1535 K by 1897. The law, including the theoretical prediction of the Stefan–Boltzmann constant as a function of the speed of light, the Boltzmann constant and the Planck constant, is a direct consequence of Planck's law as formulated in 1900. Stefan–Boltzmann constant The Stefan–Boltzmann constant, , is derived from other known physical constants: where is the Boltzmann constant, is the Planck constant, and is the speed of light in vacuum. As of the 2019 revision of the SI, which establishes exact fixed values for , , and , the Stefan–Boltzmann constant is exactly: Thus, Prior to this, the value of was calculated from the measured value of the gas constant. The numerical value of the Stefan–Boltzmann constant is different in other systems of units. Examples Temperature of the Sun With his law, Stefan also determined the temperature of the Sun's surface. He inferred from the data of Jacques-Louis Soret (1827–1890) that the energy flux density from the Sun is 29 times greater than the energy flux density of a certain warmed metal lamella (a thin plate). A round lamella was placed at such a distance from the measuring device that it would be seen at the same angular diameter as the Sun. Soret estimated the temperature of the lamella to be approximately 1900 °C to 2000 °C. Stefan surmised that 1/3 of the energy flux from the Sun is absorbed by the Earth's atmosphere, so he took for the correct Sun's energy flux a value 3/2 times greater than Soret's value, namely 29 × 3/2 = 43.5. Precise measurements of atmospheric absorption were not made until 1888 and 1904. The temperature Stefan obtained was a median value of previous ones, 1950 °C and the absolute thermodynamic one 2200 K. As 2.57⁴ ≈ 43.5, it follows from the law that the temperature of the Sun is 2.57 times greater than the temperature of the lamella, so Stefan got a value of 5430 °C or 5700 K.
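Both the constant and Stefan's estimate can be reproduced with a few lines of arithmetic; the sketch below derives σ from the exact SI values of k, h and c and then applies the fourth-power law to the 43.5 flux ratio and the 2200 K lamella temperature quoted above.

```python
import math

# Stefan-Boltzmann constant from fundamental constants: sigma = 2*pi^5*k^4 / (15*h^3*c^2).
k = 1.380649e-23      # Boltzmann constant, J/K (exact since the 2019 SI revision)
h = 6.62607015e-34    # Planck constant, J s (exact since the 2019 SI revision)
c = 299_792_458.0     # speed of light in vacuum, m/s (exact)
sigma = 2 * math.pi ** 5 * k ** 4 / (15 * h ** 3 * c ** 2)
print(f"sigma = {sigma:.9e} W m^-2 K^-4")   # 5.670374419e-08

# Stefan's 1877 estimate: the solar flux is 43.5 times the lamella flux, so
# T_sun = 43.5**(1/4) * T_lamella.
T_lamella = 2200.0                      # K, the absolute temperature quoted above
T_sun = 43.5 ** 0.25 * T_lamella
print(f"T_sun = {T_sun:.0f} K")         # about 5650 K, in line with the ~5700 K above
```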
This was the first sensible value for the temperature of the Sun. Before this, values ranging from as low as 1800 °C to as high as were claimed. The lower value of 1800 °C was determined by Claude Pouillet (1790–1868) in 1838 using the Dulong–Petit law. Pouillet also took just half the value of the Sun's correct energy flux. Temperature of stars The temperature of stars other than the Sun can be approximated using a similar means by treating the emitted energy as a black body radiation. So: where is the luminosity, is the Stefan–Boltzmann constant, is the stellar radius and is the effective temperature. This formula can then be rearranged to calculate the temperature: or alternatively the radius: The same formulae can also be simplified to compute the parameters relative to the Sun: where is the solar radius, and so forth. They can also be rewritten in terms of the surface area A and radiant exitance : where and With the Stefan–Boltzmann law, astronomers can easily infer the radii of stars. The law is also met in the thermodynamics of black holes in so-called Hawking radiation. Effective temperature of the Earth Similarly we can calculate the effective temperature of the Earth T⊕ by equating the energy received from the Sun and the energy radiated by the Earth, under the black-body approximation (Earth's own production of energy being small enough to be negligible). The luminosity of the Sun, L⊙, is given by: At Earth, this energy is passing through a sphere with a radius of a0, the distance between the Earth and the Sun, and the irradiance (received power per unit area) is given by The Earth has a radius of R⊕, and therefore has a cross-section of . The radiant flux (i.e. solar power) absorbed by the Earth is thus given by: Because the Stefan–Boltzmann law uses a fourth power, it has a stabilizing effect on the exchange and the flux emitted by Earth tends to be equal to the flux absorbed, close to the steady state where: T⊕ can then be found: where T⊙ is the temperature of the Sun, R⊙ the radius of the Sun, and a0 is the distance between the Earth and the Sun. This gives an effective temperature of 6 °C on the surface of the Earth, assuming that it perfectly absorbs all emission falling on it and has no atmosphere. The Earth has an albedo of 0.3, meaning that 30% of the solar radiation that hits the planet gets scattered back into space without absorption. The effect of albedo on temperature can be approximated by assuming that the energy absorbed is multiplied by 0.7, but that the planet still radiates as a black body (the latter by definition of effective temperature, which is what we are calculating). This approximation reduces the temperature by a factor of 0.71/4, giving . The above temperature is Earth's as seen from space, not ground temperature but an average over all emitting bodies of Earth from surface to high altitude. Because of the greenhouse effect, the Earth's actual average surface temperature is about , which is higher than the effective temperature, and even higher than the temperature that a black body would have. In the above discussion, we have assumed that the whole surface of the earth is at one temperature. Another interesting question is to ask what the temperature of a blackbody surface on the earth would be assuming that it reaches equilibrium with the sunlight falling on it. This of course depends on the angle of the sun on the surface and on how much air the sunlight has gone through. 
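Before turning to that local-equilibrium question, the planet-averaged effective temperature derived above can be checked numerically; the solar temperature, solar radius and Earth-Sun distance below are the usual nominal values, assumed here for the example.

```python
# Effective temperature of the Earth from radiative balance:
#   absorbed: (1 - albedo) * pi * R_earth**2 * L_sun / (4 * pi * a0**2)
#   emitted:  4 * pi * R_earth**2 * sigma * T_earth**4
# Equating the two gives T_earth = T_sun * sqrt(R_sun / (2 * a0)) * (1 - albedo)**0.25,
# independent of the Earth's radius.
T_SUN = 5778.0           # K, nominal solar effective temperature (assumed value)
R_SUN = 6.957e8          # m, nominal solar radius (assumed value)
A0 = 1.495978707e11      # m, mean Earth-Sun distance (1 au)

def earth_effective_temperature(albedo=0.0):
    return T_SUN * (R_SUN / (2 * A0)) ** 0.5 * (1 - albedo) ** 0.25

print(f"black Earth: {earth_effective_temperature(0.0):.0f} K")   # about 279 K, i.e. ~6 C
print(f"albedo 0.3 : {earth_effective_temperature(0.3):.0f} K")   # about 255 K, i.e. ~-18 C
```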
When the sun is at the zenith and the surface is horizontal, the irradiance can be as high as 1120 W/m2. The Stefan–Boltzmann law then gives a temperature of or . (Above the atmosphere, the result is even higher: .) We can think of the earth's surface as "trying" to reach equilibrium temperature during the day, but being cooled by the atmosphere, and "trying" to reach equilibrium with starlight and possibly moonlight at night, but being warmed by the atmosphere. Origination Thermodynamic derivation of the energy density The fact that the energy density of the box containing radiation is proportional to can be derived using thermodynamics. This derivation uses the relation between the radiation pressure p and the internal energy density , a relation that can be shown using the form of the electromagnetic stress–energy tensor. This relation is: Now, from the fundamental thermodynamic relation we obtain the following expression, after dividing by and fixing : The last equality comes from the following Maxwell relation: From the definition of energy density it follows that where the energy density of radiation only depends on the temperature, therefore Now, the equality is after substitution of Meanwhile, the pressure is the rate of momentum change per unit area. Since the momentum of a photon is the same as the energy divided by the speed of light, where the factor 1/3 comes from the projection of the momentum transfer onto the normal to the wall of the container. Since the partial derivative can be expressed as a relationship between only and (if one isolates it on one side of the equality), the partial derivative can be replaced by the ordinary derivative. After separating the differentials the equality becomes which leads immediately to , with as some constant of integration. Derivation from Planck's law The law can be derived by considering a small flat black body surface radiating out into a half-sphere. This derivation uses spherical coordinates, with θ as the zenith angle and φ as the azimuthal angle; and the small flat blackbody surface lies on the xy-plane, where θ = /2. The intensity of the light emitted from the blackbody surface is given by Planck's law, where is the amount of power per unit surface area per unit solid angle per unit frequency emitted at a frequency by a black body at temperature T. is the Planck constant is the speed of light, and is the Boltzmann constant. The quantity is the power radiated by a surface of area A through a solid angle in the frequency range between and . The Stefan–Boltzmann law gives the power emitted per unit area of the emitting body, Note that the cosine appears because black bodies are Lambertian (i.e. they obey Lambert's cosine law), meaning that the intensity observed along the sphere will be the actual intensity times the cosine of the zenith angle. To derive the Stefan–Boltzmann law, we must integrate over the half-sphere and integrate from 0 to ∞. Then we plug in for I: To evaluate this integral, do a substitution, which gives: The integral on the right is standard and goes by many names: it is a particular case of a Bose–Einstein integral, the polylogarithm, or the Riemann zeta function . The value of the integral is (where is the Gamma function), giving the result that, for a perfect blackbody surface: Finally, this proof started out only considering a small flat surface. However, any differentiable surface can be approximated by a collection of small flat surfaces. 
So long as the geometry of the surface does not cause the blackbody to reabsorb its own radiation, the total energy radiated is just the sum of the energies radiated by each surface; and the total surface area is just the sum of the areas of each surface—so this law holds for all convex blackbodies, too, so long as the surface has the same temperature throughout. The law extends to radiation from non-convex bodies by using the fact that the convex hull of a black body radiates as though it were itself a black body. Energy density The total energy density U can be similarly calculated, except the integration is over the whole sphere and there is no cosine, and the energy flux (U c) should be divided by the velocity c to give the energy density U: Thus is replaced by , giving an extra factor of 4. Thus, in total: The product is sometimes known as the radiation constant or radiation density constant. Decomposition in terms of photons The Stefan–Boltzmann law can be expressed as where the flux of photons, , is given by and the average energy per photon, , is given by Marr and Wilkin (2012) recommend that students be taught about instead of being taught Wien's displacement law, and that the above decomposition be taught when the Stefan–Boltzmann law is taught. See also Black-body radiation Rayleigh–Jeans law Sakuma–Hattori equation Notes References Laws of thermodynamics Power laws Heat transfer Ludwig Boltzmann
Aberration (astronomy)
In astronomy, aberration (also referred to as astronomical aberration, stellar aberration, or velocity aberration) is a phenomenon where celestial objects exhibit an apparent motion about their true positions based on the velocity of the observer: It causes objects to appear to be displaced towards the observer's direction of motion. The change in angle is of the order of where is the speed of light and the velocity of the observer. In the case of "stellar" or "annual" aberration, the apparent position of a star to an observer on Earth varies periodically over the course of a year as the Earth's velocity changes as it revolves around the Sun, by a maximum angle of approximately 20 arcseconds in right ascension or declination. The term aberration has historically been used to refer to a number of related phenomena concerning the propagation of light in moving bodies. Aberration is distinct from parallax, which is a change in the apparent position of a relatively nearby object, as measured by a moving observer, relative to more distant objects that define a reference frame. The amount of parallax depends on the distance of the object from the observer, whereas aberration does not. Aberration is also related to light-time correction and relativistic beaming, although it is often considered separately from these effects. Aberration is historically significant because of its role in the development of the theories of light, electromagnetism and, ultimately, the theory of special relativity. It was first observed in the late 1600s by astronomers searching for stellar parallax in order to confirm the heliocentric model of the Solar System. However, it was not understood at the time to be a different phenomenon. In 1727, James Bradley provided a classical explanation for it in terms of the finite speed of light relative to the motion of the Earth in its orbit around the Sun, which he used to make one of the earliest measurements of the speed of light. However, Bradley's theory was incompatible with 19th-century theories of light, and aberration became a major motivation for the aether drag theories of Augustin Fresnel (in 1818) and G. G. Stokes (in 1845), and for Hendrik Lorentz's aether theory of electromagnetism in 1892. The aberration of light, together with Lorentz's elaboration of Maxwell's electrodynamics, the moving magnet and conductor problem, the negative aether drift experiments, as well as the Fizeau experiment, led Albert Einstein to develop the theory of special relativity in 1905, which presents a general form of the equation for aberration in terms of such theory. Explanation Aberration may be explained as the difference in angle of a beam of light in different inertial frames of reference. A common analogy is to consider the apparent direction of falling rain. If rain is falling vertically in the frame of reference of a person standing still, then to a person moving forwards the rain will appear to arrive at an angle, requiring the moving observer to tilt their umbrella forwards. The faster the observer moves, the more tilt is needed. The net effect is that light rays striking the moving observer from the sides in a stationary frame will come angled from ahead in the moving observer's frame. This effect is sometimes called the "searchlight" or "headlight" effect. In the case of annual aberration of starlight, the direction of incoming starlight as seen in the Earth's moving frame is tilted relative to the angle observed in the Sun's frame. 
Since the direction of motion of the Earth changes during its orbit, the direction of this tilting changes during the course of the year, and causes the apparent position of the star to differ from its true position as measured in the inertial frame of the Sun. While classical reasoning gives intuition for aberration, it leads to a number of physical paradoxes observable even at the classical level (see history). The theory of special relativity is required to correctly account for aberration. The relativistic explanation is very similar to the classical one however, and in both theories aberration may be understood as a case of addition of velocities. Classical explanation In the Sun's frame, consider a beam of light with velocity equal to the speed of light , with x and y velocity components and , and thus at an angle such that . If the Earth is moving at velocity in the x direction relative to the Sun, then by velocity addition the x component of the beam's velocity in the Earth's frame of reference is , and the y velocity is unchanged, . Thus the angle of the light in the Earth's frame in terms of the angle in the Sun's frame is In the case of , this result reduces to , which in the limit may be approximated by . Relativistic explanation The reasoning in the relativistic case is the same except that the relativistic velocity addition formulas must be used, which can be derived from Lorentz transformations between different frames of reference. These formulas are where , giving the components of the light beam in the Earth's frame in terms of the components in the Sun's frame. The angle of the beam in the Earth's frame is thus In the case of , this result reduces to , and in the limit this may be approximated by . This relativistic derivation keeps the speed of light constant in all frames of reference, unlike the classical derivation above. Relationship to light-time correction and relativistic beaming Aberration is related to two other phenomena, light-time correction, which is due to the motion of an observed object during the time taken by its light to reach an observer, and relativistic beaming, which is an angling of the light emitted by a moving light source. It can be considered equivalent to them but in a different inertial frame of reference. In aberration, the observer is considered to be moving relative to a (for the sake of simplicity) stationary light source, while in light-time correction and relativistic beaming the light source is considered to be moving relative to a stationary observer. Consider the case of an observer and a light source moving relative to each other at constant velocity, with a light beam moving from the source to the observer. At the moment of emission, the beam in the observer's rest frame is tilted compared to the one in the source's rest frame, as understood through relativistic beaming. During the time it takes the light beam to reach the observer the light source moves in the observer's frame, and the 'true position' of the light source is displaced relative to the apparent position the observer sees, as explained by light-time correction. Finally, the beam in the observer's frame at the moment of observation is tilted compared to the beam in source's frame, which can be understood as an aberrational effect. Thus, a person in the light source's frame would describe the apparent tilting of the beam in terms of aberration, while a person in the observer's frame would describe it as a light-time effect. 
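Written out in one common convention (θ measured from the direction of the Earth's motion in the Sun's frame, θ′ the corresponding angle in the Earth's frame; other sign and angle conventions appear in the literature), the classical and relativistic velocity-addition results sketched above are:

```latex
% Classical (Galilean) velocity addition:
\tan\theta' = \frac{\sin\theta}{\dfrac{v}{c} + \cos\theta}
% Relativistic (Lorentz) velocity addition, with \gamma = 1/\sqrt{1 - v^{2}/c^{2}}:
\tan\theta' = \frac{\sin\theta}{\gamma\left(\dfrac{v}{c} + \cos\theta\right)}
% For v << c both give a deflection of order v/c, about 20.5 arcseconds for the Earth.
```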
The relationship between these phenomena is only valid if the observer and source's frames are inertial frames. In practice, because the Earth is not an inertial rest frame but experiences centripetal acceleration towards the Sun, many aberrational effects such as annual aberration on Earth cannot be considered light-time corrections. However, if the time between emission and detection of the light is short compared to the orbital period of the Earth, the Earth may be approximated as an inertial frame and aberrational effects are equivalent to light-time corrections. Types The Astronomical Almanac describes several different types of aberration, arising from differing components of the Earth's and observed object's motion: Stellar aberration: "The apparent angular displacement of the observed position of a celestial body resulting from the motion of the observer. Stellar aberration is divided into diurnal, annual, and secular components." Annual aberration: "The component of stellar aberration resulting from the motion of the Earth about the Sun." Diurnal aberration: "The component of stellar aberration resulting from the observer's diurnal motion about the center of the Earth due to the Earth's rotation." Secular aberration: "The component of stellar aberration resulting from the essentially uniform and almost rectilinear motion of the entire solar system in space. Secular aberration is usually disregarded." Planetary aberration: "The apparent angular displacement of the observed position of a solar system body from its instantaneous geocentric direction as would be seen by an observer at the geocenter. This displacement is caused by the aberration of light and light-time displacement." Annual aberration Annual aberration is caused by the motion of an observer on Earth as the planet revolves around the Sun. Due to orbital eccentricity, the orbital velocity of Earth (in the Sun's rest frame) varies periodically during the year as the planet traverses its elliptic orbit and consequently the aberration also varies periodically, typically causing stars to appear to move in small ellipses. Approximating Earth's orbit as circular, the maximum displacement of a star due to annual aberration is known as the constant of aberration, conventionally represented by . It may be calculated using the relation substituting the Earth's average speed in the Sun's frame for and the speed of light . Its accepted value is 20.49552 arcseconds (sec) or 0.000099365 radians (rad) (at J2000). Assuming a circular orbit, annual aberration causes stars exactly on the ecliptic (the plane of Earth's orbit) to appear to move back and forth along a straight line, varying by on either side of their position in the Sun's frame. A star that is precisely at one of the ecliptic poles (at 90° from the ecliptic plane) will appear to move in a circle of radius about its true position, and stars at intermediate ecliptic latitudes will appear to move along a small ellipse. For illustration, consider a star at the northern ecliptic pole viewed by an observer at a point on the Arctic Circle. Such an observer will see the star transit at the zenith, once every day (strictly speaking sidereal day). At the time of the March equinox, Earth's orbit carries the observer in a southwards direction, and the star's apparent declination is therefore displaced to the south by an angle of . On the September equinox, the star's position is displaced to the north by an equal and opposite amount. 
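The constant of aberration quoted above is just the ratio of the Earth's mean orbital speed to the speed of light, expressed as an angle; a quick check (the orbital speed is the usual mean value, assumed here):

```python
import math

C = 299_792_458.0       # speed of light, m/s
V_EARTH = 29_784.0      # mean orbital speed of the Earth, m/s (assumed nominal value)

kappa_rad = V_EARTH / C                           # constant of aberration in radians
kappa_arcsec = math.degrees(kappa_rad) * 3600.0   # converted to arcseconds
print(f"kappa = {kappa_rad:.9f} rad = {kappa_arcsec:.2f} arcsec")  # ~0.0000993 rad, ~20.49 arcsec
```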
On either solstice, the displacement in declination is 0. Conversely, the amount of displacement in right ascension is 0 on either equinox and at maximum on either solstice. In actuality, Earth's orbit is slightly elliptic rather than circular, and its speed varies somewhat over the course of its orbit, which means the description above is only approximate. Aberration is more accurately calculated using Earth's instantaneous velocity relative to the barycenter of the Solar System. Note that the displacement due to aberration is orthogonal to any displacement due to parallax. If parallax is detectable, the maximum displacement to the south would occur in December, and the maximum displacement to the north in June. It is this apparently anomalous motion that so mystified early astronomers. Solar annual aberration A special case of annual aberration is the nearly constant deflection of the Sun from its position in the Sun's rest frame by towards the west (as viewed from Earth), opposite to the apparent motion of the Sun along the ecliptic (which is from west to east, as seen from Earth). The deflection thus makes the Sun appear to be behind (or retarded) from its rest-frame position on the ecliptic by a position or angle . This deflection may equivalently be described as a light-time effect due to motion of the Earth during the 8.3 minutes that it takes light to travel from the Sun to Earth. The relation with is : [0.000099365 rad / 2 π rad] x [365.25 d x 24 h/d x 60 min/h] = 8.3167 min ≈ 8 min 19 sec = 499 sec. This is possible since the transit time of sunlight is short relative to the orbital period of the Earth, so the Earth's frame may be approximated as inertial. In the Earth's frame, the Sun moves, at a mean velocity v = 29.789 km/s, by a distance ≈ 14,864.7 km in the time it takes light to reach Earth, ≈ 499 sec for the orbit of mean radius = 1 AU = 149,597,870.7 km. This gives an angular correction ≈ 0.000099364 rad = 20.49539 sec, which can be solved to give ≈ 0.000099365 rad = 20.49559 sec, very nearly the same as the aberrational correction (here is in radian and not in arcsecond). Diurnal aberration Diurnal aberration is caused by the velocity of the observer on the surface of the rotating Earth. It is therefore dependent not only on the time of the observation, but also the latitude and longitude of the observer. Its effect is much smaller than that of annual aberration, and is only 0.32 arcseconds in the case of an observer at the Equator, where the rotational velocity is greatest. Secular aberration The secular component of aberration, caused by the motion of the Solar System in space, has been further subdivided into several components: aberration resulting from the motion of the solar system barycenter around the center of our Galaxy, aberration resulting from the motion of the Galaxy relative to the Local Group, and aberration resulting from the motion of the Local Group relative to the cosmic microwave background. Secular aberration affects the apparent positions of stars and extragalactic objects. The large, constant part of secular aberration cannot be directly observed and "It has been standard practice to absorb this large, nearly constant effect into the reported" positions of stars. In about 200 million years, the Sun circles the galactic center, whose measured location is near right ascension (α = 266.4°) and declination (δ = −29.0°). 
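The arithmetic relating the aberration constant to the Sun-Earth light-travel time, quoted above, can be checked the same way:

```python
import math

KAPPA_RAD = 0.000099365          # constant of aberration in radians, as given above
YEAR_MINUTES = 365.25 * 24 * 60  # length of a year in minutes

# The Sun's apparent retardation kappa corresponds to the fraction kappa/(2*pi) of a
# full orbit, so the light-travel time is that fraction of a year:
light_travel_minutes = KAPPA_RAD / (2 * math.pi) * YEAR_MINUTES
print(f"{light_travel_minutes:.3f} min = {light_travel_minutes * 60:.0f} s")  # ~8.32 min, ~499 s
```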
The constant, unobservable, effect of the solar system's motion around the galactic center has been computed variously as 150 or 165 arcseconds. The other, observable, part is an acceleration toward the galactic center of approximately 2.5 × 10−10 m/s2, which yields a change of aberration of about 5 μas/yr. Highly precise measurements extending over several years can observe this change in secular aberration, often called the secular aberration drift or the acceleration of the Solar System, as a small apparent proper motion. Recently, highly precise astrometry of extragalactic objects using both Very Long Baseline Interferometry and the Gaia space observatory have successfully measured this small effect. The first VLBI measurement of the apparent motion, over a period of 20 years, of 555 extragalactic objects towards the center of our galaxy at equatorial coordinates of α = 263° and δ = −20° indicated a secular aberration drift 6.4 ±1.5 μas/yr. Later determinations using a series of VLBI measurements extending over almost 40 years determined the secular aberration drift to be 5.83 ± 0.23 μas/yr in the direction α = 270.2 ± 2.3° and δ = −20.2° ± 3.6°. Optical observations using only 33 months of Gaia satellite data of 1.6 million extragalactic sources indicated an acceleration of the solar system of 2.32 ± 0.16 × 10−10 m/s2 and a corresponding secular aberration drift of 5.05 ± 0.35 μas/yr in the direction of α = 269.1° ± 5.4°, δ = −31.6° ± 4.1°. It is expected that later Gaia data releases, incorporating about 66 and 120 months of data, will reduce the random errors of these results by factors of 0.35 and 0.15. The latest edition of the International Celestial Reference Frame (ICRF3) adopted a recommended galactocentric aberration constant of 5.8 μas/yr and recommended a correction for secular aberration to obtain the highest positional accuracy for times other than the reference epoch 2015.0. Planetary aberration Planetary aberration is the combination of the aberration of light (due to Earth's velocity) and light-time correction (due to the object's motion and distance), as calculated in the rest frame of the Solar System. Both are determined at the instant when the moving object's light reaches the moving observer on Earth. It is so called because it is usually applied to planets and other objects in the Solar System whose motion and distance are accurately known. Discovery and first observations The discovery of the aberration of light was totally unexpected, and it was only by considerable perseverance and perspicacity that Bradley was able to explain it in 1727. It originated from attempts to discover whether stars possessed appreciable parallaxes. Search for stellar parallax The Copernican heliocentric theory of the Solar System had received confirmation by the observations of Galileo and Tycho Brahe and the mathematical investigations of Kepler and Newton. As early as 1573, Thomas Digges had suggested that parallactic shifting of the stars should occur according to the heliocentric model, and consequently if stellar parallax could be observed it would help confirm this theory. Many observers claimed to have determined such parallaxes, but Tycho Brahe and Giovanni Battista Riccioli concluded that they existed only in the minds of the observers, and were due to instrumental and personal errors. 
However, in 1680 Jean Picard, in his Voyage d'Uranibourg, stated, as a result of ten years' observations, that Polaris, the Pole Star, exhibited variations in its position amounting to 40 annually. Some astronomers endeavoured to explain this by parallax, but these attempts failed because the motion differed from that which parallax would produce. John Flamsteed, from measurements made in 1689 and succeeding years with his mural quadrant, similarly concluded that the declination of Polaris was 40 less in July than in September. Robert Hooke, in 1674, published his observations of γ Draconis, a star of magnitude 2m which passes practically overhead at the latitude of London (hence its observations are largely free from the complex corrections due to atmospheric refraction), and concluded that this star was 23 more northerly in July than in October. James Bradley's observations Consequently, when Bradley and Samuel Molyneux entered this sphere of research in 1725, there was still considerable uncertainty as to whether stellar parallaxes had been observed or not, and it was with the intention of definitely answering this question that they erected a large telescope at Molyneux's house at Kew. They decided to reinvestigate the motion of γ Draconis with a telescope constructed by George Graham (1675–1751), a celebrated instrument-maker. This was fixed to a vertical chimney stack in such manner as to permit a small oscillation of the eyepiece, the amount of which (i.e. the deviation from the vertical) was regulated and measured by the introduction of a screw and a plumb line. The instrument was set up in November 1725, and observations on γ Draconis were made starting in December. The star was observed to move 40 southwards between September and March, and then reversed its course from March to September. At the same time, 35 Camelopardalis, a star with a right ascension nearly exactly opposite to that of γ Draconis, was 19" more northerly at the beginning of March than in September. The asymmetry of these results, which were expected to be mirror images of each other, were completely unexpected and inexplicable by existing theories. Early hypotheses Bradley and Molyneux discussed several hypotheses in the hope of finding the solution. Since the apparent motion was evidently caused neither by parallax nor observational errors, Bradley first hypothesized that it could be due to oscillations in the orientation of the Earth's axis relative to the celestial sphere – a phenomenon known as nutation. 35 Camelopardalis was seen to possess an apparent motion which could be consistent with nutation, but since its declination varied only one half as much as that of γ Draconis, it was obvious that nutation did not supply the answer (however, Bradley later went on to discover that the Earth does indeed nutate). He also investigated the possibility that the motion was due to an irregular distribution of the Earth's atmosphere, thus involving abnormal variations in the refractive index, but again obtained negative results. On August 19, 1727, Bradley embarked upon a further series of observations using a telescope of his own erected at the Rectory, Wanstead. This instrument had the advantage of a larger field of view and he was able to obtain precise positions of a large number of stars over the course of about twenty years. 
During his first two years at Wanstead, he established the existence of the phenomenon of aberration beyond all doubt, and this also enabled him to formulate a set of rules that would allow the calculation of the effect on any given star at a specified date. Development of the theory of aberration Bradley eventually developed his explanation of aberration in about September 1728 and this theory was presented to the Royal Society in mid January the following year. One well-known story was that he saw the change of direction of a wind vane on a boat on the Thames, caused not by an alteration of the wind itself, but by a change of course of the boat relative to the wind direction. However, there is no record of this incident in Bradley's own account of the discovery, and it may therefore be apocryphal. The following table shows the magnitude of deviation from true declination for γ Draconis and the direction, on the planes of the solstitial colure and ecliptic prime meridian, of the tangent of the velocity of the Earth in its orbit for each of the four months where the extremes are found, as well as expected deviation from true ecliptic longitude if Bradley had measured its deviation from right ascension: Bradley proposed that the aberration of light not only affected declination, but right ascension as well, so that a star in the pole of the ecliptic would describe a little ellipse with a diameter of about 40", but for simplicity, he assumed it to be a circle. Since he only observed the deviation in declination, and not in right ascension, his calculations for the maximum deviation of a star in the pole of the ecliptic are for its declination only, which will coincide with the diameter of the little circle described by such star. For eight different stars, his calculations are as follows: Based on these calculations, Bradley was able to estimate the constant of aberration at 20.2", which is equal to 0.00009793 radians, and with this was able to estimate the speed of light at per second. By projecting the little circle for a star in the pole of the ecliptic, he could simplify the calculation of the relationship between the speed of light and the speed of the Earth's annual motion in its orbit as follows: Thus, the speed of light to the speed of the Earth's annual motion in its orbit is 10,210 to one, from whence it would follow, that light moves, or is propagated as far as from the Sun to the Earth in 8 minutes 12 seconds. The original motivation of the search for stellar parallax was to test the Copernican theory that the Earth revolves around the Sun. The change of aberration in the course of the year demonstrates the relative motion of the Earth and the stars. Retrodiction on Descartes' lightspeed argument In the prior century, René Descartes argued that if light were not instantaneous, then shadows of moving objects would lag; and if propagation times over terrestrial distances were appreciable, then during a lunar eclipse the Sun, Earth, and Moon would be out of alignment by hours' motion, contrary to observation. Huygens commented that, on Rømer's lightspeed data (yielding an earth-moon round-trip time of only seconds), the lag angle would be imperceptible. What they both overlooked is that aberration (as understood only later) would exactly counteract the lag even if large, leaving this eclipse method completely insensitive to light speed. (Otherwise, shadow-lag methods could be made to sense absolute translational motion, contrary to a basic principle of relativity.) 
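Bradley's figures can be re-run directly: an aberration constant of 20.2 arcseconds gives a light-to-orbital-speed ratio of about 10,210, and dividing the time the Earth needs to cover one orbital radius by that ratio reproduces his 8 minute 12 second Sun-to-Earth light time. A sketch of that arithmetic:

```python
import math

kappa_arcsec = 20.2
kappa_rad = math.radians(kappa_arcsec / 3600.0)

# Bradley's ratio of the speed of light to the Earth's orbital speed:
ratio = 1.0 / math.tan(kappa_rad)
print(f"speed ratio: {ratio:.0f}")          # about 10,210

# The Earth covers a distance equal to its orbital radius in (1 year)/(2*pi);
# light crosses the same distance `ratio` times faster.
year_s = 365.25 * 86400
light_time_s = (year_s / (2 * math.pi)) / ratio
minutes, seconds = divmod(light_time_s, 60)
print(f"light travel time: {int(minutes)} min {seconds:.0f} s")  # about 8 min 12 s
```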
Historical theories of aberration The phenomenon of aberration became a driving force for many physical theories during the 200 years between its observation and the explanation by Albert Einstein. The first classical explanation was provided in 1729, by James Bradley as described above, who attributed it to the finite speed of light and the motion of Earth in its orbit around the Sun. However, this explanation proved inaccurate once the wave nature of light was better understood, and correcting it became a major goal of the 19th century theories of luminiferous aether. Augustin-Jean Fresnel proposed a correction due to the motion of a medium (the aether) through which light propagated, known as "partial aether drag". He proposed that objects partially drag the aether along with them as they move, and this became the accepted explanation for aberration for some time. George Stokes proposed a similar theory, explaining that aberration occurs due to the flow of aether induced by the motion of the Earth. Accumulated evidence against these explanations, combined with new understanding of the electromagnetic nature of light, led Hendrik Lorentz to develop an electron theory which featured an immobile aether, and he explained that objects contract in length as they move through the aether. Motivated by these previous theories, Albert Einstein then developed the theory of special relativity in 1905, which provides the modern account of aberration. Bradley's classical explanation Bradley conceived of an explanation in terms of a corpuscular theory of light in which light is made of particles. His classical explanation appeals to the motion of the earth relative to a beam of light-particles moving at a finite velocity, and is developed in the Sun's frame of reference, unlike the classical derivation given above. Consider the case where a distant star is motionless relative to the Sun, and the star is extremely far away, so that parallax may be ignored. In the rest frame of the Sun, this means light from the star travels in parallel paths to the Earth observer, and arrives at the same angle regardless of where the Earth is in its orbit. Suppose the star is observed on Earth with a telescope, idealized as a narrow tube. The light enters the tube from the star at angle and travels at speed taking a time to reach the bottom of the tube, where it is detected. Suppose observations are made from Earth, which is moving with a speed . During the transit of the light, the tube moves a distance . Consequently, for the particles of light to reach the bottom of the tube, the tube must be inclined at an angle different from , resulting in an apparent position of the star at angle . As the Earth proceeds in its orbit it changes direction, so changes with the time of year the observation is made. The apparent angle and true angle are related using trigonometry as: . In the case of , this gives . While this is different from the more accurate relativistic result described above, in the limit of small angle and low velocity they are approximately the same, within the error of the measurements of Bradley's day. These results allowed Bradley to make one of the earliest measurements of the speed of light. Luminiferous aether In the early nineteenth century the wave theory of light was being rediscovered, and in 1804 Thomas Young adapted Bradley's explanation for corpuscular light to wavelike light traveling through a medium known as the luminiferous aether. 
His reasoning was the same as Bradley's, but it required that this medium be immobile in the Sun's reference frame and must pass through the earth unaffected, otherwise the medium (and therefore the light) would move along with the earth and no aberration would be observed. He wrote: However, it soon became clear Young's theory could not account for aberration when materials with a non-vacuum refractive index were present. An important example is of a telescope filled with water. The speed of light in such a telescope will be slower than in vacuum, and is given by rather than where is the refractive index of the water. Thus, by Bradley and Young's reasoning the aberration angle is given by . which predicts a medium-dependent angle of aberration. When refraction at the telescope's objective is taken into account this result deviates even more from the vacuum result. In 1810 François Arago performed a similar experiment and found that the aberration was unaffected by the medium in the telescope, providing solid evidence against Young's theory. This experiment was subsequently verified by many others in the following decades, most accurately by Airy in 1871, with the same result. Aether drag models Fresnel's aether drag In 1818, Augustin Fresnel developed a modified explanation to account for the water telescope and for other aberration phenomena. He explained that the aether is generally at rest in the Sun's frame of reference, but objects partially drag the aether along with them as they move. That is, the aether in an object of index of refraction moving at velocity is partially dragged with a velocity bringing the light along with it. This factor is known as "Fresnel's dragging coefficient". This dragging effect, along with refraction at the telescope's objective, compensates for the slower speed of light in the water telescope in Bradley's explanation. With this modification Fresnel obtained Bradley's vacuum result even for non-vacuum telescopes, and was also able to predict many other phenomena related to the propagation of light in moving bodies. Fresnel's dragging coefficient became the dominant explanation of aberration for the next decades. Stokes' aether drag However, the fact that light is polarized (discovered by Fresnel himself) led scientists such as Cauchy and Green to believe that the aether was a totally immobile elastic solid as opposed to Fresnel's fluid aether. There was thus renewed need for an explanation of aberration consistent both with Fresnel's predictions (and Arago's observations) as well as polarization. In 1845, Stokes proposed a 'putty-like' aether which acts as a liquid on large scales but as a solid on small scales, thus supporting both the transverse vibrations required for polarized light and the aether flow required to explain aberration. Making only the assumptions that the fluid is irrotational and that the boundary conditions of the flow are such that the aether has zero velocity far from the Earth, but moves at the Earth's velocity at its surface and within it, he was able to completely account for aberration. The velocity of the aether outside of the Earth would decrease as a function of distance from the Earth so light rays from stars would be progressively dragged as they approached the surface of the Earth. The Earth's motion would be unaffected by the aether due to D'Alembert's paradox. Both Fresnel and Stokes' theories were popular. 
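Fresnel's partial-drag hypothesis is conventionally summarised by the expression below for the speed of light in a medium of refractive index n moving at speed v (to first order in v/c); the notation is standard rather than taken from this article.

```latex
% Speed of light in the moving medium, with Fresnel's dragging coefficient f:
u = \frac{c}{n} + f\,v,
\qquad
f = 1 - \frac{1}{n^{2}}
```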
However, the question of aberration was put aside during much of the second half of the 19th century as focus of inquiry turned to the electromagnetic properties of aether. Lorentz' length contraction In the 1880s once electromagnetism was better understood, interest turned again to the problem of aberration. By this time flaws were known to both Fresnel's and Stokes' theories. Fresnel's theory required that the relative velocity of aether and matter to be different for light of different colors, and it was shown that the boundary conditions Stokes had assumed in his theory were inconsistent with his assumption of irrotational flow. At the same time, the modern theories of electromagnetic aether could not account for aberration at all. Many scientists such as Maxwell, Heaviside and Hertz unsuccessfully attempted to solve these problems by incorporating either Fresnel or Stokes' theories into Maxwell's new electromagnetic laws. Hendrik Lorentz spent considerable effort along these lines. After working on this problem for a decade, the issues with Stokes' theory caused him to abandon it and to follow Fresnel's suggestion of a (mostly) stationary aether (1892, 1895). However, in Lorentz's model the aether was completely immobile, like the electromagnetic aethers of Cauchy, Green and Maxwell and unlike Fresnel's aether. He obtained Fresnel's dragging coefficient from modifications of Maxwell's electromagnetic theory, including a modification of the time coordinates in moving frames ("local time"). In order to explain the Michelson–Morley experiment (1887), which apparently contradicted both Fresnel's and Lorentz's immobile aether theories, and apparently confirmed Stokes' complete aether drag, Lorentz theorized (1892) that objects undergo "length contraction" by a factor of in the direction of their motion through the aether. In this way, aberration (and all related optical phenomena) can be accounted for in the context of an immobile aether. Lorentz' theory became the basis for much research in the next decade, and beyond. Its predictions for aberration are identical to those of the relativistic theory. Special relativity Lorentz' theory matched experiment well, but it was complicated and made many unsubstantiated physical assumptions about the microscopic nature of electromagnetic media. In his 1905 theory of special relativity, Albert Einstein reinterpreted the results of Lorentz' theory in a much simpler and more natural conceptual framework which disposed of the idea of an aether. His derivation is given above, and is now the accepted explanation. Robert S. Shankland reported some conversations with Einstein, in which Einstein emphasized the importance of aberration: Other important motivations for Einstein's development of relativity were the moving magnet and conductor problem and (indirectly) the negative aether drift experiments, already mentioned by him in the introduction of his first relativity paper. Einstein wrote in a note in 1952: While Einstein's result is the same as Bradley's original equation except for an extra factor of , Bradley's result does not merely give the classical limit of the relativistic case, in the sense that it gives incorrect predictions even at low relative velocities. Bradley's explanation cannot account for situations such as the water telescope, nor for many other optical effects (such as interference) that might occur within the telescope. 
This is because in the Earth's frame it predicts that the direction of propagation of the light beam in the telescope is not normal to the wavefronts of the beam, in contradiction with Maxwell's theory of electromagnetism. It also does not preserve the speed of light c between frames. However, Bradley did correctly infer that the effect was due to relative velocities. See also Apparent place Stellar parallax Astronomical nutation Proper motion Timeline of electromagnetism and classical optics Relativistic aberration Notes References Further reading P. Kenneth Seidelmann (Ed.), Explanatory Supplement to the Astronomical Almanac (University Science Books, 1992), 127–135, 700. Stephen Peter Rigaud, Miscellaneous Works and Correspondence of the Rev. James Bradley, D.D. F.R.S. (1832). Charles Hutton, Mathematical and Philosophical Dictionary (1795). H. H. Turner, Astronomical Discovery (1904). Thomas Simpson, Essays on Several Curious and Useful Subjects in Speculative and Mix'd Mathematicks (1740). :de:August Ludwig Busch, Reduction of the Observations Made by Bradley at Kew and Wansted to Determine the Quantities of Aberration and Nutation (1838). External links Courtney Seligman on Bradley's observations Electromagnetic radiation Astrometry Radiation
Rotating reference frame
A rotating frame of reference is a special case of a non-inertial reference frame that is rotating relative to an inertial reference frame. An everyday example of a rotating reference frame is the surface of the Earth. (This article considers only frames rotating about a fixed axis. For more general rotations, see Euler angles.) Fictitious forces All non-inertial reference frames exhibit fictitious forces; rotating reference frames are characterized by three: the centrifugal force, the Coriolis force, and, for non-uniformly rotating reference frames, the Euler force. Scientists in a rotating box can measure the rotation speed and axis of rotation by measuring these fictitious forces. For example, Léon Foucault was able to show the Coriolis force that results from Earth's rotation using the Foucault pendulum. If Earth were to rotate many times faster, these fictitious forces could be felt by humans, as they are when on a spinning carousel. Centrifugal force In classical mechanics, centrifugal force is an outward force associated with rotation. Centrifugal force is one of several so-called pseudo-forces (also known as inertial forces), so named because, unlike real forces, they do not originate in interactions with other bodies situated in the environment of the particle upon which they act. Instead, centrifugal force originates in the rotation of the frame of reference within which observations are made. Coriolis force The mathematical expression for the Coriolis force appeared in an 1835 paper by a French scientist Gaspard-Gustave Coriolis in connection with hydrodynamics, and also in the tidal equations of Pierre-Simon Laplace in 1778. Early in the 20th century, the term Coriolis force began to be used in connection with meteorology. Perhaps the most commonly encountered rotating reference frame is the Earth. Moving objects on the surface of the Earth experience a Coriolis force, and appear to veer to the right in the northern hemisphere, and to the left in the southern. Movements of air in the atmosphere and water in the ocean are notable examples of this behavior: rather than flowing directly from areas of high pressure to low pressure, as they would on a non-rotating planet, winds and currents tend to flow to the right of this direction north of the equator, and to the left of this direction south of the equator. This effect is responsible for the rotation of large cyclones (see Coriolis effects in meteorology). Euler force In classical mechanics, the Euler acceleration (named for Leonhard Euler), also known as azimuthal acceleration or transverse acceleration is an acceleration that appears when a non-uniformly rotating reference frame is used for analysis of motion and there is variation in the angular velocity of the reference frame's axis. This article is restricted to a frame of reference that rotates about a fixed axis. The Euler force is a fictitious force on a body that is related to the Euler acceleration by F  = ma, where a is the Euler acceleration and m is the mass of the body. Relating rotating frames to stationary frames The following is a derivation of the formulas for accelerations as well as fictitious forces in a rotating frame. It begins with the relation between a particle's coordinates in a rotating frame and its coordinates in an inertial (stationary) frame. Then, by taking time derivatives, formulas are derived that relate the velocity of the particle as seen in the two frames, and the acceleration relative to each frame. 
Using these accelerations, the fictitious forces are identified by comparing Newton's second law as formulated in the two different frames. Relation between positions in the two frames To derive these fictitious forces, it's helpful to be able to convert between the coordinates of the rotating reference frame and the coordinates of an inertial reference frame with the same origin. If the rotation is about the axis with a constant angular velocity (so and which implies for some constant where denotes the angle in the -plane formed at time by and the -axis), and if the two reference frames coincide at time (meaning when so take or some other integer multiple of ), the transformation from rotating coordinates to inertial coordinates can be written whereas the reverse transformation is This result can be obtained from a rotation matrix. Introduce the unit vectors representing standard unit basis vectors in the rotating frame. The time-derivatives of these unit vectors are found next. Suppose the frames are aligned at and the -axis is the axis of rotation. Then for a counterclockwise rotation through angle : where the components are expressed in the stationary frame. Likewise, Thus the time derivative of these vectors, which rotate without changing magnitude, is where This result is the same as found using a vector cross product with the rotation vector pointed along the z-axis of rotation namely, where is either or Time derivatives in the two frames Introduce unit vectors , now representing standard unit basis vectors in the general rotating frame. As they rotate they will remain normalized and perpendicular to each other. If they rotate at the speed of about an axis along the rotation vector then each unit vector of the rotating coordinate system (such as or ) abides by the following equation: So if denotes the transformation taking basis vectors of the inertial- to the rotating frame, with matrix columns equal to the basis vectors of the rotating frame, then the cross product multiplication by the rotation vector is given by . If is a vector function that is written as and we want to examine its first derivative then (using the product rule of differentiation): where denotes the rate of change of as observed in the rotating coordinate system. As a shorthand the differentiation is expressed as: This result is also known as the transport theorem in analytical dynamics and is also sometimes referred to as the basic kinematic equation. Relation between velocities in the two frames A velocity of an object is the time-derivative of the object's position, so The time derivative of a position in a rotating reference frame has two components, one from the explicit time dependence due to motion of the object itself in the rotating reference frame, and another from the frame's own rotation. Applying the result of the previous subsection to the displacement the velocities in the two reference frames are related by the equation where subscript means the inertial frame of reference, and means the rotating frame of reference. Relation between accelerations in the two frames Acceleration is the second time derivative of position, or the first time derivative of velocity where subscript means the inertial frame of reference, the rotating frame of reference, and where the expression, again, in the bracketed expression on the left is to be interpreted as an operator working onto the bracketed expression on the right. 
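Before carrying the differentiation to second order, the two relations just obtained can be written compactly (subscripts i and r denote the inertial and rotating frames, Ω the angular velocity vector of the rotating frame; the symbols are conventional and may differ from those used above):

```latex
% Transport theorem (basic kinematic equation) for an arbitrary vector Q:
\left(\frac{d\mathbf{Q}}{dt}\right)_{\!i}
  = \left(\frac{d\mathbf{Q}}{dt}\right)_{\!r} + \boldsymbol{\Omega}\times\mathbf{Q}
% Applied to the position vector r, relating the velocities in the two frames:
\mathbf{v}_{i} = \mathbf{v}_{r} + \boldsymbol{\Omega}\times\mathbf{r}
```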
Relation between accelerations in the two frames
Acceleration is the second time derivative of position, or the first time derivative of velocity:

a_i = (d²r/dt²)_i = ((d/dt)_r + Ω ×)((dr/dt)_r + Ω × r),

where subscript i means the inertial frame of reference, r the rotating frame of reference, and where the expression (d/dt)_r + Ω × in the bracketed expression on the left is to be interpreted as an operator working onto the bracketed expression on the right. As Ω × Ω = 0, the first time derivatives of Ω inside either frame, when expressed with respect to the basis of e.g. the inertial frame, coincide. Carrying out the differentiations and re-arranging some terms yields the acceleration relative to the rotating reference frame,

a_r = a_i - 2Ω × v_r - Ω × (Ω × r) - (dΩ/dt) × r,

where a_r is the apparent acceleration in the rotating reference frame, the term -Ω × (Ω × r) represents the centrifugal acceleration, and the term -2Ω × v_r is the Coriolis acceleration. The last term, -(dΩ/dt) × r, is the Euler acceleration and is zero in uniformly rotating frames.

Newton's second law in the two frames
When the expression for acceleration is multiplied by the mass of the particle, the three extra terms on the right-hand side result in fictitious forces in the rotating reference frame, that is, apparent forces that result from being in a non-inertial reference frame, rather than from any physical interaction between bodies. Using Newton's second law of motion F = ma, we obtain:

the Coriolis force  F_Coriolis = -2m Ω × v_r,
the centrifugal force  F_centrifugal = -m Ω × (Ω × r),
and the Euler force  F_Euler = -m (dΩ/dt) × r,

where m is the mass of the object being acted upon by these fictitious forces. Notice that all three forces vanish when the frame is not rotating, that is, when Ω = 0.

For completeness, the inertial acceleration a_i due to impressed external forces can be determined from the total physical force F_imp in the inertial (non-rotating) frame (for example, force from physical interactions such as electromagnetic forces) using Newton's second law in the inertial frame, F_imp = m a_i. Newton's law in the rotating frame then becomes

F_r = F_imp + F_centrifugal + F_Coriolis + F_Euler = m a_r.

In other words, to handle the laws of motion in a rotating reference frame, treat the fictitious forces as if they were real forces acting on the body, and otherwise proceed as if the frame were inertial.

Use in magnetic resonance
It is convenient to consider magnetic resonance in a frame that rotates at the Larmor frequency of the spins. The rotating wave approximation may also be used.

See also
Absolute rotation
Centrifugal force (rotating reference frame): centrifugal force as seen from systems rotating about a fixed axis
Mechanics of planar particle motion: fictitious forces exhibited by a particle in planar motion as seen by the particle itself and by observers in a co-rotating frame of reference
Coriolis force: the effect of the Coriolis force on the Earth and other rotating systems
Inertial frame of reference
Non-inertial frame
Fictitious force: a more general treatment of the subject of this article

References

External links
Animation clip showing scenes as viewed from both an inertial frame and a rotating frame of reference, visualizing the Coriolis and centrifugal forces.

Frames of reference
Classical mechanics
Astronomical coordinate systems
Rotation
Sources of electrical energy
This article provides information on the following six methods of producing electric power.
Friction: Energy produced by rubbing two materials together.
Heat: Energy produced by heating the junction where two unlike metals are joined.
Light: Energy produced by light being absorbed by photoelectric cells, or solar power.
Chemical: Energy produced by chemical reaction in a voltaic cell, such as an electric battery.
Pressure: Energy produced by compressing or decompressing specific crystals.
Magnetism: Energy produced in a conductor that cuts or is cut by magnetic lines of force.

Friction
Friction is the least-used of the six methods of producing energy. If a cloth rubs against an object, the object displays an effect called frictional electricity: the object becomes charged by the rubbing process and now possesses a static electrical charge, hence it is also called static electricity. There are two main types of electrical charge: positive and negative. Each type of charge attracts the opposite type and repels the same type. This can be stated in the following way: like charges repel and unlike charges attract. Static electricity has several applications. Its main application is in Van de Graaff generators, used to produce high voltages in order to test the dielectric strength of insulating materials. Other uses are in electrostatic painting and sandpaper manufacturing, where the coarse grains acquire a negative charge as they move across the negative plate. As unlike charges attract, the positive plate attracts the coarse grains, and their impact velocity enables them to be embedded into the adhesive.

Heat
In 1821 Thomas Seebeck discovered that the junction between two metals generates a voltage that is a function of temperature. If a closed circuit consists of conductors of two different metals, and if one junction of the two metals is at a higher temperature than the other, an electromotive force is created with a specific polarity. In the case of copper and iron, for example, the electrons flow along the iron from the cold junction to the hot one, crossing from the iron to the copper at the hot junction and from the copper to the iron at the cold junction. This production of an electromotive force is known as the Seebeck effect, and it is utilized in the most widely employed method of thermometry, the thermocouple.

Light
The sun's rays can be used to produce electrical energy. The direct user of sunlight is the solar cell or photovoltaic cell, which converts sunlight directly into electrical energy without the incorporation of a mechanical device. This technology is simpler than fossil-fuel-driven systems of producing electrical energy. A solar cell is formed by a light-sensitive p-n junction semiconductor, which when exposed to sunlight is excited to conduction by the photons in the light. When light, in the form of photons, hits the cell and strikes an atom, photo-ionisation creates electron-hole pairs. The junction's electrostatic field separates these pairs, establishing an electromotive force in the process: the field sends the electrons to the n-type material and the holes to the p-type material. If an external current path is provided, electrical energy will be available to do work. The electron flow provides the current, and the cell's electric field creates the voltage. With both current and voltage, the silicon cell has power. The greater the amount of light falling on the cell's surface, the greater the probability of photons releasing electrons, and hence the more electric energy is produced.
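Two rough numerical sketches of the heat and light methods just described follow; the coefficient and operating-point values below are typical textbook figures chosen purely for illustration, not measured data.

```python
# Rough back-of-envelope sketches for two of the methods above; the coefficient
# values are typical textbook figures, not authoritative data.

# Heat (Seebeck effect): the open-circuit EMF of a thermocouple is roughly
# proportional to the temperature difference between its junctions.
seebeck_coefficient = 41e-6     # V/K, approximate value for a type-K thermocouple
t_hot, t_cold = 400.0, 300.0    # K
emf = seebeck_coefficient * (t_hot - t_cold)
print(f"Thermocouple EMF ~ {emf * 1e3:.1f} mV")            # ~4.1 mV

# Light (photovoltaic cell): with both a voltage and a current, the cell delivers power.
cell_voltage = 0.55             # V at the operating point (illustrative)
cell_current = 2.8              # A under bright sunlight (illustrative)
print(f"Cell power ~ {cell_voltage * cell_current:.2f} W")  # ~1.54 W
```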
Chemical
When a zinc electrode and a copper electrode are placed in a dilute solution of sulfuric acid, the two metals react to each other's presence within the electrolyte and develop a potential difference of about 1 volt between them. When a conducting path joins the electrodes externally, the zinc electrode dissolves slowly into the acid electrolyte: the zinc enters the electrolyte in the form of positive ions while its electrons are left on the electrode. The copper electrode, on the other hand, does not dissolve in the electrolyte. Instead, it gives up electrons to the positively charged hydrogen ions in the electrolyte, turning them into molecules of hydrogen gas that bubble up around the electrode. The zinc ions combine with the sulfate ions to form zinc sulfate, and this salt falls to the bottom of the cell. The effect of all this is that the dissolving zinc electrode becomes negatively charged, the copper electrode is left with a positive charge, and electrons from the zinc pass through the external circuit to the copper electrode.

Pressure
The molecules of some crystals and ceramics are permanently polarised: some parts of the molecule are positively charged, while other parts are negatively charged. These materials produce an electric charge when the material changes dimension as a result of an imposed external force. The charge produced is referred to as piezoelectricity. Many crystalline materials, such as the natural crystals of quartz and Rochelle salt, together with manufactured polycrystalline ceramics such as lead zirconate titanate and barium titanate, exhibit piezoelectric effects. Piezoelectric materials are used as buzzers inside pagers, ultrasonic cleaners and mobile phones, and in gas igniters. In addition, piezoelectric sensors are able to convert pressure, force, vibration, or shock into electrical energy. Because they respond only to changing (dynamic) events, they are also used in flow meters, accelerometers and level detectors, as well as in motor vehicles, to sense changes in transmission, fuel-injection and coolant pressures. When a voltage or an applied electric field stresses a piezo element electrically, its dimensions change. This phenomenon is known as the reverse (converse) piezoelectric effect, and is closely related to electrostriction. This effect enables the element to act as a translating device called an actuator. Piezoelectric materials are used in power actuators, converting electrical energy into mechanical energy, and in acoustic transducers, converting electric fields into sound waves.

Magnetism
The most useful and widely employed application of magnetism is in the production of electrical energy. The mechanical power needed to assist in this production is provided by a number of different sources. These sources are called prime movers, and include diesel, petrol and natural gas engines. Coal, oil, natural gas, biomass and nuclear energy are energy sources that are used to heat water to produce super-heated steam. Non-mechanical prime movers include water, steam, wind, wave motion and tidal currents. These non-mechanical prime movers drive a turbine that is coupled to a generator. Generators that employ the principle of electromagnetic induction carry out the final conversion of these energy sources.
In order to do this, three conditions must be met before a voltage is created by magnetism: there must be movement, a conductor (or conductors) and a magnetic field. In accordance with these conditions, when a conductor or conductors move through a magnetic field so as to cut the lines of force, the free electrons in the conductor are set in motion and an electromotive force (an "electric pressure") is induced, driving an alternating current through an external circuit. Such an arrangement may be referred to as an elementary alternator: it consists of a single wire loop, called an armature, with each end attached to a slip-ring, arranged so as to revolve midway between the magnetic poles. Two copper-graphite brushes bearing on the slip-rings connect the armature to the external circuit and collect the alternating current generated in the conductor when the alternator is in operation. Another machine used for converting mechanical energy into electrical energy by means of electromagnetic induction is called a dynamo or direct-current generator. The key difference between an alternator and a generator is that the alternator delivers AC (alternating current) to the external circuit, while the generator delivers DC (direct current). In both machines alternating current is induced in the armature, but the type of current delivered to the external circuit depends on the way in which the induced current is collected. In an alternator, the current is collected by brushes bearing against slip-rings; in a generator, a form of rotating switch called the commutator is placed between the armature and the external circuit. The commutator is designed to reverse the connections with the external circuit at the instant of each reversal of induced current in the armature, producing rectified current, or direct current. This rectified current is not pure like the current of a voltaic cell but is instead a pulsating current that is constant in direction and varying in intensity.

Electric power
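A minimal numerical sketch of the elementary alternator described above, using the standard induced-EMF expression e(t) = N·B·A·ω·sin(ωt) for a flat coil of N turns and area A rotating at angular speed ω in a uniform field B; all values are illustrative.

```python
import math

# Elementary alternator sketch: e(t) = N * B * A * w * sin(w t). Illustrative values only.
N = 50                  # turns on the armature
B = 0.2                 # T, flux density between the poles
A = 0.01                # m^2, loop area
w = 2 * math.pi * 50    # rad/s, i.e. 50 revolutions per second

peak_emf = N * B * A * w
for frac in (0.0, 0.25, 0.5):          # sample points over one revolution
    t = frac / 50.0                    # seconds
    print(f"t = {t*1e3:4.1f} ms  e = {peak_emf * math.sin(w * t):6.2f} V")
print(f"Peak EMF = {peak_emf:.2f} V")  # ~31.4 V
```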
Heat capacity ratio
In thermal physics and thermodynamics, the heat capacity ratio, also known as the adiabatic index, the ratio of specific heats, or Laplace's coefficient, is the ratio of the heat capacity at constant pressure to the heat capacity at constant volume. It is sometimes also known as the isentropic expansion factor and is denoted by γ (gamma) for an ideal gas or κ (kappa), the isentropic exponent, for a real gas. The symbol γ is used by aerospace and chemical engineers:

γ = C_P / C_V = c_P / c_V,

where C is the heat capacity and c the specific heat capacity (heat capacity per unit mass) of the gas; the same ratio is obtained from the molar heat capacities (heat capacity per mole). The suffixes P and V refer to constant-pressure and constant-volume conditions respectively. The heat capacity ratio is important for its applications in thermodynamic reversible processes, especially involving ideal gases; the speed of sound depends on this factor.

Thought experiment
To understand this relation, consider the following thought experiment. A closed pneumatic cylinder contains air. The piston is locked. The pressure inside is equal to atmospheric pressure. This cylinder is heated to a certain target temperature. Since the piston cannot move, the volume is constant. The temperature and pressure will rise. When the target temperature is reached, the heating is stopped. The amount of energy added equals C_V ΔT, with ΔT representing the change in temperature. The piston is now freed and moves outwards, stopping as the pressure inside the chamber reaches atmospheric pressure. We assume the expansion occurs without exchange of heat (adiabatic expansion). Doing this work, the air inside the cylinder will cool to below the target temperature. To return to the target temperature (still with a free piston), the air must be heated, but is no longer under constant volume, since the piston is free to move as the gas is reheated. This extra heat amounts to about 40% more than the previous amount added. In this example, the amount of heat added with a locked piston is proportional to C_V, whereas the total amount of heat added is proportional to C_P. Therefore, the heat capacity ratio in this example is 1.4.

Another way of understanding the difference between C_P and C_V is that C_P applies if work is done to the system, which causes a change in volume (such as by moving a piston so as to compress the contents of a cylinder), or if work is done by the system, which changes its temperature (such as heating the gas in a cylinder to cause a piston to move). C_V applies only if no work is done. Consider the difference between adding heat to the gas with a locked piston and adding heat with a piston free to move, so that pressure remains constant. In the second case, the gas will both heat and expand, causing the piston to do mechanical work on the atmosphere. The heat that is added to the gas goes only partly into heating the gas, while the rest is transformed into the mechanical work performed by the piston. In the first, constant-volume case (locked piston), there is no external motion, and thus no mechanical work is done on the atmosphere; C_V is used. In the second case, additional work is done as the volume changes, so the amount of heat required to raise the gas temperature (the specific heat capacity) is higher for this constant-pressure case.
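As a numerical illustration of the piston example, the ratio for dry air can be formed from approximate tabulated specific heats, and the resulting value feeds directly into the ideal-gas expression for the speed of sound, sqrt(γRT/M), mentioned in the introduction. The figures below are rounded textbook values used only for illustration.

```python
import math

# Sketch: gamma for dry air from approximate tabulated specific heats, and its
# effect on the speed of sound mentioned above.
c_p = 1005.0   # J/(kg K), specific heat of dry air at constant pressure (approx.)
c_v = 718.0    # J/(kg K), specific heat at constant volume (approx.)
gamma = c_p / c_v
print(f"gamma = {gamma:.3f}")            # ~1.400, matching the piston example

R = 8.314      # J/(mol K), gas constant
M = 0.02896    # kg/mol, molar mass of dry air
T = 293.15     # K
speed_of_sound = math.sqrt(gamma * R * T / M)
print(f"speed of sound ~ {speed_of_sound:.0f} m/s")   # ~343 m/s at 20 degrees C
```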
Ideal-gas relations
For an ideal gas, the molar heat capacity is at most a function of temperature, since the internal energy is solely a function of temperature for a closed system, i.e., U = U(n, T), where n is the amount of substance in moles. In thermodynamic terms, this is a consequence of the fact that the internal pressure of an ideal gas vanishes. Mayer's relation allows us to deduce the value of C_V from the more easily measured (and more commonly tabulated) value of C_P:

C_V = C_P - nR.

This relation may be used to show that the heat capacities may be expressed in terms of the heat capacity ratio γ and the gas constant R:

C_P = γnR / (γ - 1)  and  C_V = nR / (γ - 1).

Relation with degrees of freedom
The classical equipartition theorem predicts that the heat capacity ratio for an ideal gas can be related to the thermally accessible degrees of freedom f of a molecule by

γ = 1 + 2/f.

Thus we observe that for a monatomic gas, with 3 translational degrees of freedom per atom:

γ = 5/3 ≈ 1.6667.

As an example of this behavior, at 273 K (0 °C) the noble gases He, Ne, and Ar all have nearly the same value of γ, equal to 1.664.

For a diatomic gas, often 5 degrees of freedom are assumed to contribute at room temperature, since each molecule has 3 translational and 2 rotational degrees of freedom, and the single vibrational degree of freedom is often not included since vibrations are often not thermally active except at high temperatures, as predicted by quantum statistical mechanics. Thus we have

γ = 7/5 = 1.4.

For example, terrestrial air is primarily made up of diatomic gases (around 78% nitrogen, N2, and 21% oxygen, O2), and at standard conditions it can be considered to be an ideal gas. The above value of 1.4 is highly consistent with the measured adiabatic indices for dry air within a temperature range of 0–200 °C, exhibiting a deviation of only 0.2%.

For a linear triatomic molecule such as CO2, there are only 5 degrees of freedom (3 translations and 2 rotations), assuming vibrational modes are not excited. However, as mass increases and the frequency of vibrational modes decreases, vibrational degrees of freedom start to enter into the equation at far lower temperatures than is typically the case for diatomic molecules. For example, it requires a far larger temperature to excite the single vibrational mode of H2, for which one quantum of vibration is a fairly large amount of energy, than the bending or stretching vibrations of CO2.

For a non-linear triatomic gas, such as water vapor, which has 3 translational and 3 rotational degrees of freedom, this model predicts

γ = 8/6 = 4/3 ≈ 1.33.
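The equipartition prediction γ = 1 + 2/f from the section above can be tabulated directly; a short sketch:

```python
# Sketch of the equipartition prediction gamma = 1 + 2/f for the thermally
# accessible degrees of freedom f discussed above.
cases = {
    "monatomic (He, Ne, Ar)": 3,                 # translation only
    "diatomic (N2, O2), vibration frozen": 5,    # 3 translational + 2 rotational
    "non-linear triatomic (H2O vapour)": 6,      # 3 translational + 3 rotational
}
for label, f in cases.items():
    gamma = 1 + 2 / f
    print(f"{label}: f = {f}, gamma = {gamma:.3f}")
# monatomic -> 1.667, diatomic -> 1.400, non-linear triatomic -> 1.333
```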
Real-gas relations
As noted above, as temperature increases, higher-energy vibrational states become accessible to molecular gases, thus increasing the number of degrees of freedom and lowering γ. Conversely, as the temperature is lowered, rotational degrees of freedom may become unequally partitioned as well. As a result, both C_P and C_V increase with increasing temperature. Despite this, if the density is fairly low and intermolecular forces are negligible, the two heat capacities may still continue to differ from each other by a fixed constant (as above, C_P = C_V + nR), which reflects the relatively constant difference in work done during expansion for constant-pressure versus constant-volume conditions. Thus, the ratio of the two values, γ, decreases with increasing temperature. However, when the gas density is sufficiently high and intermolecular forces are important, thermodynamic expressions may sometimes be used to accurately describe the relationship between the two heat capacities, as explained below. Unfortunately the situation can become considerably more complex if the temperature is sufficiently high for molecules to dissociate or carry out other chemical reactions, in which case thermodynamic expressions arising from simple equations of state may not be adequate.

Thermodynamic expressions
Values based on approximations (particularly C_P - C_V = nR) are in many cases not sufficiently accurate for practical engineering calculations, such as flow rates through pipes and valves at moderate to high pressures. An experimental value should be used rather than one based on this approximation, where possible. A rigorous value for the ratio can also be calculated by determining C_V from the residual properties, using general relations such as

C_P - C_V = T (∂V/∂T)_P (∂P/∂T)_V.

Values for C_P are readily available and recorded, but values for C_V need to be determined via relations such as these. See relations between specific heats for the derivation of the thermodynamic relations between the heat capacities. The above definition is the approach used to develop rigorous expressions from equations of state (such as Peng–Robinson), which match experimental values so closely that there is little need to develop a database of ratios or C_V values. Values can also be determined through finite-difference approximation.

Adiabatic process
This ratio gives the important relation for an isentropic (quasistatic, reversible, adiabatic) process of a simple compressible, calorically perfect ideal gas:

PV^γ is constant.

Using the ideal gas law, PV = nRT:

P^(1-γ) T^γ is constant, and
TV^(γ-1) is constant,

where P is the pressure of the gas, V is the volume, and T is the thermodynamic temperature. In gas dynamics we are interested in the local relations between pressure, density and temperature, rather than considering a fixed quantity of gas. By considering the density ρ as the inverse of the volume for a unit mass, we can take ρ = 1/V in these relations. Since for constant entropy we have P ∝ ρ^γ, that is, ln P = γ ln ρ + constant, it follows that

γ = (∂ ln P / ∂ ln ρ) at constant entropy.

For an imperfect or non-ideal gas, Chandrasekhar defined three different adiabatic indices so that the adiabatic relations can be written in the same form as above; these are used in the theory of stellar structure:

Γ1 = (∂ ln P / ∂ ln ρ) at constant entropy,
(Γ2 - 1)/Γ2 = (∂ ln T / ∂ ln P) at constant entropy,
Γ3 - 1 = (∂ ln T / ∂ ln ρ) at constant entropy.

All of these are equal to γ in the case of an ideal gas.

See also
Relations between heat capacities
Heat capacity
Specific heat capacity
Speed of sound
Thermodynamic equations
Thermodynamics
Volumetric heat capacity

Notes

References

Thermodynamic properties
Physical quantities
Ratios
Thought experiments in physics
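A short numerical sketch of the adiabatic relations above, applied to an illustrative compression of air treated as an ideal diatomic gas (γ = 1.4); the initial state and compression ratio are arbitrary example values.

```python
# Adiabatic compression of an ideal diatomic gas (gamma = 1.4); illustrative values.
gamma = 1.4
T1 = 293.15               # K, initial temperature
P1 = 101_325.0            # Pa, initial pressure
compression_ratio = 5.0   # V1 / V2

# From TV^(gamma-1) = constant and PV^gamma = constant:
T2 = T1 * compression_ratio ** (gamma - 1)
P2 = P1 * compression_ratio ** gamma
print(f"T2 = {T2:.0f} K, P2 = {P2 / 101_325:.1f} atm")   # ~558 K, ~9.5 atm
```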
Particle physics
Particle physics or high-energy physics is the study of fundamental particles and forces that constitute matter and radiation. The field also studies combinations of elementary particles up to the scale of protons and neutrons, while the study of combinations of protons and neutrons is called nuclear physics. The fundamental particles in the universe are classified in the Standard Model as fermions (matter particles) and bosons (force-carrying particles). There are three generations of fermions, although ordinary matter is made only from the first fermion generation. The first generation consists of up and down quarks, which form protons and neutrons, and electrons and electron neutrinos. The three fundamental interactions known to be mediated by bosons are electromagnetism, the weak interaction, and the strong interaction. Quarks cannot exist on their own but form hadrons. Hadrons that contain an odd number of valence quarks (at least three) are called baryons, and those that contain an even number (usually a quark and an antiquark) are called mesons. Two baryons, the proton and the neutron, make up most of the mass of ordinary matter. Mesons are unstable and the longest-lived last for only a few hundredths of a microsecond. They occur after collisions between particles made of quarks, such as fast-moving protons and neutrons in cosmic rays. Mesons are also produced in cyclotrons or other particle accelerators. Particles have corresponding antiparticles with the same mass but with opposite electric charges. For example, the antiparticle of the electron is the positron. The electron has a negative electric charge, while the positron has a positive charge. These antiparticles can theoretically form a corresponding form of matter called antimatter. Some particles, such as the photon, are their own antiparticle. These elementary particles are excitations of the quantum fields that also govern their interactions. The dominant theory explaining these fundamental particles and fields, along with their dynamics, is called the Standard Model. The reconciliation of gravity with current particle physics theory has not been achieved; many theories have addressed this problem, such as loop quantum gravity, string theory and supersymmetry. Practical particle physics is the study of these particles in radioactive processes and in particle accelerators such as the Large Hadron Collider. Theoretical particle physics is the study of these particles in the context of cosmology and quantum theory. The two are closely interrelated: the Higgs boson was postulated by theoretical particle physicists and its presence confirmed by practical experiments.

History
The idea that all matter is fundamentally composed of elementary particles dates from at least the 6th century BC. In the 19th century, John Dalton, through his work on stoichiometry, concluded that each element of nature was composed of a single, unique type of particle. The word atom, after the Greek word atomos meaning "indivisible", has since then denoted the smallest particle of a chemical element, but physicists later discovered that atoms are not, in fact, the fundamental particles of nature, but are conglomerates of even smaller particles, such as the electron. The early 20th century explorations of nuclear physics and quantum physics led to proofs of nuclear fission in 1939 by Lise Meitner (based on experiments by Otto Hahn) and of nuclear fusion by Hans Bethe in that same year; both discoveries also led to the development of nuclear weapons.
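As a small illustration of the quark picture sketched in the introduction, the electric charge of a hadron is simply the sum of the fractional charges of its valence quarks; a minimal sketch follows (the particle list is only an example).

```python
# Electric charge of a hadron as the sum of its valence quarks' fractional charges.
from fractions import Fraction

quark_charge = {"u": Fraction(2, 3), "d": Fraction(-1, 3),
                "ubar": Fraction(-2, 3), "dbar": Fraction(1, 3)}

hadrons = {
    "proton (uud)": ["u", "u", "d"],
    "neutron (udd)": ["u", "d", "d"],
    "pi+ meson (u dbar)": ["u", "dbar"],
}
for name, quarks in hadrons.items():
    charge = sum(quark_charge[q] for q in quarks)
    print(f"{name}: charge = {charge}")
# proton -> 1, neutron -> 0, pi+ -> 1
```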
Throughout the 1950s and 1960s, a bewildering variety of particles was found in collisions of particles from beams of increasingly high energy. It was referred to informally as the "particle zoo". Important discoveries such as the CP violation by James Cronin and Val Fitch raised new questions about the matter-antimatter imbalance. After the formulation of the Standard Model during the 1970s, physicists clarified the origin of the particle zoo. The large number of particles was explained as combinations of a (relatively) small number of more fundamental particles and framed in the context of quantum field theories. This reclassification marked the beginning of modern particle physics.

Standard Model
The current state of the classification of all elementary particles is explained by the Standard Model, which gained widespread acceptance in the mid-1970s after experimental confirmation of the existence of quarks. It describes the strong, weak, and electromagnetic fundamental interactions, using mediating gauge bosons. The species of gauge bosons are the eight gluons, the W+, W- and Z bosons, and the photon. The Standard Model also contains 24 fundamental fermions (12 particles and their associated anti-particles), which are the constituents of all matter. Finally, the Standard Model also predicted the existence of a type of boson known as the Higgs boson. On 4 July 2012, physicists with the Large Hadron Collider at CERN announced they had found a new particle that behaves similarly to what is expected from the Higgs boson. The Standard Model, as currently formulated, has 61 elementary particles. Those elementary particles can combine to form composite particles, accounting for the hundreds of other species of particles that have been discovered since the 1960s. The Standard Model has been found to agree with almost all the experimental tests conducted to date. However, most particle physicists believe that it is an incomplete description of nature and that a more fundamental theory awaits discovery (see Theory of Everything). In recent years, measurements of neutrino mass have provided the first experimental deviations from the Standard Model, since neutrinos do not have mass in the Standard Model.

Subatomic particles
Modern particle physics research is focused on subatomic particles, including atomic constituents such as electrons, protons, and neutrons (protons and neutrons are composite particles called baryons, made of quarks), on particles produced by radioactive and scattering processes, such as photons, neutrinos, and muons, and on a wide range of exotic particles. All particles and their interactions observed to date can be described almost entirely by the Standard Model. Dynamics of particles are also governed by quantum mechanics; they exhibit wave–particle duality, displaying particle-like behaviour under certain experimental conditions and wave-like behaviour in others. In more technical terms, they are described by quantum state vectors in a Hilbert space, which is also treated in quantum field theory. Following the convention of particle physicists, the term elementary particles is applied to those particles that are, according to current understanding, presumed to be indivisible and not composed of other particles.

Quarks and leptons
Ordinary matter is made from first-generation quarks (up, down) and leptons (electron, electron neutrino). Collectively, quarks and leptons are called fermions, because they have a half-integer quantum spin (1/2, 3/2, etc.).
This causes the fermions to obey the Pauli exclusion principle, where no two particles may occupy the same quantum state. Quarks have fractional elementary electric charge (-1/3 or +2/3) and leptons have whole-numbered electric charge (0 or -1). Quarks also have color charge, which is labeled arbitrarily, with no correlation to actual light color, as red, green and blue. Because the interactions between quarks store energy that can convert into other particles when the quarks are pulled far enough apart, quarks cannot be observed independently. This is called color confinement. There are three known generations of quarks (up and down, strange and charm, top and bottom) and leptons (electron and its neutrino, muon and its neutrino, tau and its neutrino), with strong indirect evidence that a fourth generation of fermions does not exist.

Bosons
Bosons are the mediators or carriers of fundamental interactions, such as electromagnetism, the weak interaction, and the strong interaction. Electromagnetism is mediated by the photon, the quantum of light. The weak interaction is mediated by the W and Z bosons. The strong interaction is mediated by the gluon, which can link quarks together to form composite particles. Due to the aforementioned color confinement, gluons are never observed independently. The Higgs boson gives mass to the W and Z bosons via the Higgs mechanism; the gluon and photon are expected to be massless. All bosons have an integer quantum spin (0 or 1) and can occupy the same quantum state.

Antiparticles and color charge
Most of the aforementioned particles have corresponding antiparticles, which compose antimatter. Normal particles have positive lepton or baryon number, and antiparticles have these numbers negative. Most properties of corresponding antiparticles and particles are the same, with a few being reversed; for example, the electron's antiparticle, the positron, has the opposite charge. To differentiate between antiparticles and particles, a plus or minus sign is added in superscript; for example, the electron and the positron are denoted e- and e+. When a particle and an antiparticle interact with each other, they are annihilated and convert to other particles. Some particles, such as the photon or gluon, have no antiparticles. Quarks and gluons additionally have color charges, which influence the strong interaction. Quarks' color charges are called red, green and blue (though the particles themselves have no physical color), while those of antiquarks are called antired, antigreen and antiblue. The gluon comes in eight color-charge states, which result from the way quarks interact to form composite particles (gauge symmetry SU(3)).

Composite
The neutrons and protons in atomic nuclei are baryons: the neutron is composed of two down quarks and one up quark, and the proton is composed of two up quarks and one down quark. A baryon is composed of three quarks, and a meson is composed of two quarks (one quark, one antiquark). Baryons and mesons are collectively called hadrons. Quarks inside hadrons are governed by the strong interaction and are thus subject to quantum chromodynamics (color charges). The bound quarks must have a total color charge that is neutral, or "white", by analogy with mixing the primary colors. More exotic hadrons can have other numbers or arrangements of quarks (tetraquarks, pentaquarks). An atom is made from protons, neutrons and electrons. By modifying the particles inside a normal atom, exotic atoms can be formed.
A simple example would be the hydrogen-4.1, which has one of its electrons replaced with a muon. Hypothetical The graviton is a hypothetical particle that can mediate the gravitational interaction, but it has not been detected or completely reconciled with current theories. Many other hypothetical particles have been proposed to address the limitations of the Standard Model. Notably, supersymmetric particles aim to solve the hierarchy problem, axions address the strong CP problem, and various other particles are proposed to explain the origins of dark matter and dark energy. Experimental laboratories The world's major particle physics laboratories are: Brookhaven National Laboratory (Long Island, United States). Its main facility is the Relativistic Heavy Ion Collider (RHIC), which collides heavy ions such as gold ions and polarized protons. It is the world's first heavy ion collider, and the world's only polarized proton collider. Budker Institute of Nuclear Physics (Novosibirsk, Russia). Its main projects are now the electron-positron colliders VEPP-2000, operated since 2006, and VEPP-4, started experiments in 1994. Earlier facilities include the first electron–electron beam–beam collider VEP-1, which conducted experiments from 1964 to 1968; the electron-positron colliders VEPP-2, operated from 1965 to 1974; and, its successor VEPP-2M, performed experiments from 1974 to 2000. CERN (European Organization for Nuclear Research) (Franco-Swiss border, near Geneva). Its main project is now the Large Hadron Collider (LHC), which had its first beam circulation on 10 September 2008, and is now the world's most energetic collider of protons. It also became the most energetic collider of heavy ions after it began colliding lead ions. Earlier facilities include the Large Electron–Positron Collider (LEP), which was stopped on 2 November 2000 and then dismantled to give way for LHC; and the Super Proton Synchrotron, which is being reused as a pre-accelerator for the LHC and for fixed-target experiments. DESY (Deutsches Elektronen-Synchrotron) (Hamburg, Germany). Its main facility was the Hadron Elektron Ring Anlage (HERA), which collided electrons and positrons with protons. The accelerator complex is now focused on the production of synchrotron radiation with PETRA III, FLASH and the European XFEL. Fermi National Accelerator Laboratory (Fermilab) (Batavia, United States). Its main facility until 2011 was the Tevatron, which collided protons and antiprotons and was the highest-energy particle collider on earth until the Large Hadron Collider surpassed it on 29 November 2009. Institute of High Energy Physics (IHEP) (Beijing, China). IHEP manages a number of China's major particle physics facilities, including the Beijing Electron–Positron Collider II(BEPC II), the Beijing Spectrometer (BES), the Beijing Synchrotron Radiation Facility (BSRF), the International Cosmic-Ray Observatory at Yangbajing in Tibet, the Daya Bay Reactor Neutrino Experiment, the China Spallation Neutron Source, the Hard X-ray Modulation Telescope (HXMT), and the Accelerator-driven Sub-critical System (ADS) as well as the Jiangmen Underground Neutrino Observatory (JUNO). KEK (Tsukuba, Japan). It is the home of a number of experiments such as the K2K experiment, a neutrino oscillation experiment and Belle II, an experiment measuring the CP violation of B mesons. SLAC National Accelerator Laboratory (Menlo Park, United States). 
Its 2-mile-long linear particle accelerator began operating in 1962 and was the basis for numerous electron and positron collision experiments until 2008. Since then the linear accelerator is being used for the Linac Coherent Light Source X-ray laser as well as advanced accelerator design research. SLAC staff continue to participate in developing and building many particle detectors around the world. Theory Theoretical particle physics attempts to develop the models, theoretical framework, and mathematical tools to understand current experiments and make predictions for future experiments (see also theoretical physics). There are several major interrelated efforts being made in theoretical particle physics today. One important branch attempts to better understand the Standard Model and its tests. Theorists make quantitative predictions of observables at collider and astronomical experiments, which along with experimental measurements is used to extract the parameters of the Standard Model with less uncertainty. This work probes the limits of the Standard Model and therefore expands scientific understanding of nature's building blocks. Those efforts are made challenging by the difficulty of calculating high precision quantities in quantum chromodynamics. Some theorists working in this area use the tools of perturbative quantum field theory and effective field theory, referring to themselves as phenomenologists. Others make use of lattice field theory and call themselves lattice theorists. Another major effort is in model building where model builders develop ideas for what physics may lie beyond the Standard Model (at higher energies or smaller distances). This work is often motivated by the hierarchy problem and is constrained by existing experimental data. It may involve work on supersymmetry, alternatives to the Higgs mechanism, extra spatial dimensions (such as the Randall–Sundrum models), Preon theory, combinations of these, or other ideas. Vanishing-dimensions theory is a particle physics theory suggesting that systems with higher energy have a smaller number of dimensions. A third major effort in theoretical particle physics is string theory. String theorists attempt to construct a unified description of quantum mechanics and general relativity by building a theory based on small strings, and branes rather than particles. If the theory is successful, it may be considered a "Theory of Everything", or "TOE". There are also other areas of work in theoretical particle physics ranging from particle cosmology to loop quantum gravity. Practical applications In principle, all physics (and practical applications developed therefrom) can be derived from the study of fundamental particles. In practice, even if "particle physics" is taken to mean only "high-energy atom smashers", many technologies have been developed during these pioneering investigations that later find wide uses in society. Particle accelerators are used to produce medical isotopes for research and treatment (for example, isotopes used in PET imaging), or used directly in external beam radiotherapy. The development of superconductors has been pushed forward by their use in particle physics. The World Wide Web and touchscreen technology were initially developed at CERN. Additional applications are found in medicine, national security, industry, computing, science, and workforce development, illustrating a long and growing list of beneficial practical applications with contributions from particle physics. 
Future Major efforts to look for physics beyond the Standard Model include the Future Circular Collider proposed for CERN and the Particle Physics Project Prioritization Panel (P5) in the US that will update the 2014 P5 study that recommended the Deep Underground Neutrino Experiment, among other experiments. See also References External links Particle physics