Fields: content_id, page_title, section_title, breadcrumb, text
c_f3ly1571y6p2
Quantum graph
Summary
Metric_graph
In mathematics and physics, a quantum graph is a linear, network-shaped structure of vertices connected by edges (i.e., a graph) in which each edge is given a length and a differential (or pseudo-differential) equation is posed on each edge. An example would be a power network consisting of power lines (edges) connected at transformer stations (vertices); the differential equations would then describe the voltage along each of the lines, with boundary conditions at the adjacent vertices ensuring that the currents over all incident edges sum to zero at each vertex. Quantum graphs were first studied by Linus Pauling as models of free electrons in organic molecules in the 1930s. They also arise in a variety of mathematical contexts, e.g. as model systems in quantum chaos, in the study of waveguides, in photonic crystals and in Anderson localization, or as the limit of shrinking thin wires. Quantum graphs have become prominent models in mesoscopic physics used to obtain a theoretical understanding of nanotechnology. Another, simpler notion of quantum graphs was introduced by Freedman et al. Aside from actually solving the differential equations posed on a quantum graph for purposes of concrete applications, typical questions that arise are those of controllability (what inputs have to be provided to bring the system into a desired state, for example providing sufficient power to all houses on a power network) and identifiability (how and where one has to measure something to obtain a complete picture of the state of the system, for example measuring the pressure in a water pipe network to determine whether or not a pipe is leaking).
c_tz0uc2ivxyor
Recurrent tensor
Summary
Recurrent_tensor
In mathematics and physics, a recurrent tensor, with respect to a connection $\nabla$ on a manifold M, is a tensor T for which there is a one-form $\omega$ on M such that $\nabla T = \omega \otimes T$.
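A minimal worked example, following only from the definition above (it is not part of the summary itself): any nonvanishing parallel tensor rescaled by a positive smooth function is recurrent. If $\nabla T = 0$ and $f > 0$, then
$$\nabla (fT) = df \otimes T + f\,\nabla T = \frac{df}{f} \otimes (fT) = d(\log f) \otimes (fT),$$
so $fT$ is recurrent with recurrence one-form $\omega = d(\log f)$.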
c_c5dtmsmblysf
Scalar fields
Summary
Scalar_field_(physics)
In mathematics and physics, a scalar field is a function associating a single number to every point in a space – possibly physical space. The scalar may either be a pure mathematical number (dimensionless) or a scalar physical quantity (with units). In a physical context, scalar fields are required to be independent of the choice of reference frame. That is, any two observers using the same units will agree on the value of the scalar field at the same absolute point in space (or spacetime) regardless of their respective points of origin. Examples used in physics include the temperature distribution throughout space, the pressure distribution in a fluid, and spin-zero quantum fields, such as the Higgs field. These fields are the subject of scalar field theory.
c_kcmnd9szkkoj
Soliton wave
Summary
Soliton_wave
In mathematics and physics, a soliton is a nonlinear, self-reinforcing, localized wave packet that is strongly stable, in that it preserves its shape while propagating freely, at constant velocity, and recovers it even after collisions with other such localized wave packets. Its remarkable stability can be traced to a balanced cancellation of nonlinear and dispersive effects in the medium. (Dispersive effects are a property of certain systems where the speed of a wave depends on its frequency.) Solitons were subsequently found to provide stable solutions of a wide class of weakly nonlinear dispersive partial differential equations describing physical systems.
c_btup9ypi418r
Soliton wave
Summary
Soliton_wave
The soliton phenomenon was first described in 1834 by John Scott Russell (1808–1882) who observed a solitary wave in the Union Canal in Scotland. He reproduced the phenomenon in a wave tank and named it the "Wave of Translation". The term soliton was coined by Zabusky and Kruskal to describe localized, strongly stable propagating solutions to the Korteweg–de Vries equation, which models waves of the type seen by Russell. The name was meant to characterize the solitary nature of the waves, with the 'on' suffix recalling the usage for particles such as electrons, baryons or hadrons, reflecting their observed particle-like behaviour.
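As a concrete illustration (a standard fact about the Korteweg–de Vries equation, added here and not drawn from the summary above): for the normalization $u_t + 6uu_x + u_{xxx} = 0$, a one-soliton solution is
$$u(x,t) = \frac{c}{2}\,\operatorname{sech}^2\!\left(\frac{\sqrt{c}}{2}\,(x - ct - a)\right),$$
a localized hump that travels at speed $c$ without changing shape; taller solitons (larger $c$) travel faster.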
c_hwxjxnd8llpg
Tensor analysis
Summary
Tensor_analysis
In mathematics and physics, a tensor field assigns a tensor to each point of a mathematical space (typically a Euclidean space or manifold). Tensor fields are used in differential geometry, algebraic geometry, general relativity, in the analysis of stress and strain in materials, and in numerous applications in the physical sciences. As a tensor is a generalization of a scalar (a pure number representing a value, for example speed) and a vector (a magnitude together with a direction, like velocity), a tensor field is a generalization of a scalar field or vector field that assigns, respectively, a scalar or vector to each point of space. If a tensor A is defined on the set of vector fields X(M) over a manifold M, then A is called a tensor field on M. Many mathematical structures called "tensors" are also tensor fields. For example, the Riemann curvature tensor is a tensor field, as it associates a tensor to each point of a Riemannian manifold, which is a topological space.
c_k0n5a0qpq7ob
Traveling plane wave
Summary
Traveling_plane_wave
In mathematics and physics, a traveling plane wave is a special case of plane wave, namely a field whose evolution in time can be described as simple translation of its values at a constant wave speed $c$, along a fixed direction of propagation $\vec{n}$. Such a field can be written as $F(\vec{x}, t) = G(\vec{x} \cdot \vec{n} - ct)$, where $G(u)$ is a function of a single real parameter $u = d - ct$. The function $G$ describes the profile of the wave, namely the value of the field at time $t = 0$, for each displacement $d = \vec{x} \cdot \vec{n}$. For each displacement $d$, the moving plane perpendicular to $\vec{n}$ at distance $d + ct$ from the origin is called a wavefront.
c_wdw9k51w5y5s
Traveling plane wave
Summary
Traveling_plane_wave
This plane too travels along the direction of propagation $\vec{n}$ with velocity $c$; and the value of the field is then the same, and constant in time, at every one of its points. The wave $F$ may be a scalar or vector field; its values are the values of $G$. A sinusoidal plane wave is a special case, when $G(u)$ is a sinusoidal function of $u$.
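A minimal numerical sketch of this definition, assuming NumPy is available; the function name traveling_plane_wave and the Gaussian profile are illustrative choices, not taken from the source:

    import numpy as np

    def traveling_plane_wave(G, n, c):
        # Build F(x, t) = G(x . n - c t) for a profile G, a direction n, and speed c.
        n = np.asarray(n, dtype=float)
        n = n / np.linalg.norm(n)                      # normalize the propagation direction
        def F(x, t):
            d = np.dot(np.asarray(x, dtype=float), n)  # displacement d = x . n
            return G(d - c * t)
        return F

    # Example: a Gaussian pulse moving along the x-axis with speed 2.
    F = traveling_plane_wave(lambda u: np.exp(-u**2), n=[1.0, 0.0, 0.0], c=2.0)
    print(F([3.0, 0.0, 0.0], 0.0), F([5.0, 0.0, 0.0], 1.0))  # equal: the wave has simply translated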
c_p4p97l1jvvom
Vector space
Summary
Linear_space
In mathematics and physics, a vector space (also called a linear space) is a set whose elements, often called vectors, may be added together and multiplied ("scaled") by numbers called scalars. Scalars are often real numbers, but can be complex numbers or, more generally, elements of any field. The operations of vector addition and scalar multiplication must satisfy certain requirements, called vector axioms.
c_boaix5xqd703
Vector space
Summary
Linear_space
The terms real vector space and complex vector space are often used to specify the nature of the scalars: real coordinate space or complex coordinate space. Vector spaces generalize Euclidean vectors, which allow modeling of physical quantities, such as forces and velocity, that have not only a magnitude, but also a direction. The concept of vector spaces is fundamental for linear algebra, together with the concept of matrices, which allows computing in vector spaces.
c_z66rmk2e1dki
Vector space
Summary
Linear_space
This provides a concise and synthetic way for manipulating and studying systems of linear equations. Vector spaces are characterized by their dimension, which, roughly speaking, specifies the number of independent directions in the space. This means that, for two vector spaces over a given field and with the same dimension, the properties that depend only on the vector-space structure are exactly the same (technically the vector spaces are isomorphic).
c_v5g8wi0e2nva
Vector space
Summary
Linear_space
A vector space is finite-dimensional if its dimension is a natural number. Otherwise, it is infinite-dimensional, and its dimension is an infinite cardinal. Finite-dimensional vector spaces occur naturally in geometry and related areas.
c_95iyq6agz3kk
Vector space
Summary
Linear_space
Infinite-dimensional vector spaces occur in many areas of mathematics. For example, polynomial rings are countably infinite-dimensional vector spaces, and many function spaces have the cardinality of the continuum as a dimension. Many vector spaces that are considered in mathematics are also endowed with other structures. This is the case of algebras, which include field extensions, polynomial rings, associative algebras and Lie algebras. This is also the case of topological vector spaces, which include function spaces, inner product spaces, normed spaces, Hilbert spaces and Banach spaces.
c_o3a4l1zx0cwv
Acceleration vector
Summary
Acceleration_(differential_geometry)
In mathematics and physics, acceleration is the rate of change of velocity of a curve with respect to a given linear connection. This operation provides us with a measure of the rate and direction of the "bend".
c_30xfa66ld0gr
Equipotential
Summary
Equipotential_surface
In mathematics and physics, an equipotential or isopotential refers to a region in space where every point is at the same potential. This usually refers to a scalar potential (in that case it is a level set of the potential), although it can also be applied to vector potentials. An equipotential of a scalar potential function in n-dimensional space is typically an (n − 1)-dimensional space. The del operator illustrates the relationship between a vector field and its associated scalar potential field.
c_z34bgi5lfbbn
Equipotential
Summary
Equipotential_surface
An equipotential region might be referred to as being 'of equipotential' or simply be called 'an equipotential'. An equipotential region of a scalar potential in three-dimensional space is often an equipotential surface (or potential isosurface), but it can also be a three-dimensional mathematical solid in space. The gradient of the scalar potential (and hence also its opposite, as in the case of a vector field with an associated potential field) is everywhere perpendicular to the equipotential surface, and zero inside a three-dimensional equipotential region.
c_2xeo2d1hy0oj
Equipotential
Summary
Equipotential_surface
Electrical conductors offer an intuitive example. If a and b are any two points within or at the surface of a given conductor, and given there is no flow of charge being exchanged between the two points, then the potential difference is zero between the two points. Thus, an equipotential would contain both points a and b as they have the same potential.
c_i81svgzbxjpp
Equipotential
Summary
Equipotential_surface
Extending this definition, an isopotential is the locus of all points that are of the same potential. Gravity is perpendicular to the equipotential surfaces of the gravity potential, and in electrostatics and steady electric currents, the electric field (and hence the current, if any) is perpendicular to the equipotential surfaces of the electric potential (voltage). In gravity, a hollow sphere has a three-dimensional equipotential region inside, with no gravity from the sphere (see shell theorem).
c_9m08ebyhqf4t
Equipotential
Summary
Equipotential_surface
In electrostatics, a conductor is a three-dimensional equipotential region. In the case of a hollow conductor (Faraday cage), the equipotential region includes the space inside. A ball will not be accelerated left or right by the force of gravity if it is resting on a flat, horizontal surface, because it is an equipotential surface. For the gravity of Earth, the corresponding geopotential isosurface (the equigeopotential) that best fits mean sea level is called the geoid.
c_lvwbxzf05kae
D'Alembert–Euler condition
Summary
D'Alembert–Euler_condition
In mathematics and physics, especially the study of mechanics and fluid dynamics, the d'Alembert–Euler condition is a requirement that the streaklines of a flow are irrotational. Let x = x(X,t) be the coordinates of the point x into which X is carried at time t by a (fluid) flow. Let $\ddot{\mathbf{x}} = \frac{D^2\mathbf{x}}{Dt^2}$ be the second material derivative of x. Then the d'Alembert–Euler condition is $\mathrm{curl}\,\ddot{\mathbf{x}} = \mathbf{0}$. The d'Alembert–Euler condition is named for Jean le Rond d'Alembert and Leonhard Euler, who independently first described its use in the mid-18th century. It is not to be confused with the Cauchy–Riemann conditions.
c_hyoovzti80e7
Asymptotic homogenization
Summary
Asymptotic_homogenization
In mathematics and physics, homogenization is a method of studying partial differential equations with rapidly oscillating coefficients, such as $\nabla \cdot \left( A\!\left(\frac{\vec{x}}{\epsilon}\right) \nabla u_{\epsilon} \right) = f$, where $\epsilon$ is a very small parameter and $A(\vec{y})$ is a 1-periodic coefficient: $A(\vec{y} + \vec{e}_i) = A(\vec{y})$ for $i = 1, \dots, n$. It turns out that the study of these equations is also of great importance in physics and engineering, since equations of this type govern the physics of inhomogeneous or heterogeneous materials. Of course, all matter is inhomogeneous at some scale, but frequently it is convenient to treat it as homogeneous. A good example is the continuum concept which is used in continuum mechanics.
c_6b4l6afrepl5
Asymptotic homogenization
Summary
Asymptotic_homogenization
Under this assumption, materials such as fluids, solids, etc. can be treated as homogeneous materials, and associated with these materials are material properties such as shear modulus, elastic moduli, etc. Frequently, inhomogeneous materials (such as composite materials) possess microstructure and are therefore subjected to loads or forcings which vary on a length scale which is far bigger than the characteristic length scale of the microstructure. In this situation, one can often replace the equation above with an equation of the form $\nabla \cdot (A^{*} \nabla u) = f$, where $A^{*}$ is a constant tensor coefficient known as the effective property associated with the material in question. It can be explicitly computed as
$$A_{ij}^{*} = \int_{(0,1)^n} A(\vec{y}) \left( \nabla w_j(\vec{y}) + \vec{e}_j \right) \cdot \vec{e}_i \, dy_1 \dots dy_n, \qquad i, j = 1, \dots, n,$$
from 1-periodic functions $w_j$ satisfying:
c_2ll7pnovch64
Asymptotic homogenization
Summary
Asymptotic_homogenization
$$\nabla_y \cdot \left( A(\vec{y}) \nabla w_j \right) = -\nabla_y \cdot \left( A(\vec{y})\, \vec{e}_j \right).$$
This process of replacing an equation with a highly oscillatory coefficient with one with a homogeneous (uniform) coefficient is known as homogenization.
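A standard one-dimensional illustration (a well-known fact, not stated in the summary above): for $n = 1$ the cell problem can be solved in closed form, and the effective coefficient is the harmonic mean of $A$,
$$A^{*} = \left( \int_0^1 \frac{dy}{A(y)} \right)^{-1},$$
which is in general strictly smaller than the arithmetic mean $\int_0^1 A(y)\,dy$ unless $A$ is constant.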
c_h8bkqavpng6y
Asymptotic homogenization
Summary
Asymptotic_homogenization
This subject is inextricably linked with the subject of micromechanics for this very reason. In homogenization one equation is replaced by another if $u_{\epsilon} \approx u$ for small enough $\epsilon$, provided $u_{\epsilon} \to u$ in some appropriate norm as $\epsilon \to 0$. As a result of the above, homogenization can therefore be viewed as an extension of the continuum concept to materials which possess microstructure.
c_prglxl4fe0jj
Asymptotic homogenization
Summary
Asymptotic_homogenization
The analogue of the differential element in the continuum concept (which contains enough atomic or molecular structure to be representative of that material) is known as the "Representative Volume Element" in homogenization and micromechanics. This element contains enough statistical information about the inhomogeneous medium to be representative of the material. Therefore, averaging over this element gives an effective property such as $A^{*}$ above.
c_hlyw7p9pia00
Asymptotic homogenization
Summary
Asymptotic_homogenization
Classical results of homogenization theory were obtained for media with periodic microstructure, modeled by partial differential equations with periodic coefficients. These results were later generalized to spatially homogeneous random media, modeled by differential equations with random coefficients whose statistical properties are the same at every point in space. In practice, many applications require a more general way of modeling that is neither periodic nor statistically homogeneous. To this end the methods of homogenization theory have been extended to partial differential equations whose coefficients are neither periodic nor statistically homogeneous (so-called arbitrarily rough coefficients).
c_5v8p88co8hww
Warped model
Summary
Warped_model
In mathematics and physics, in particular differential geometry and general relativity, a warped geometry is a Riemannian or Lorentzian manifold whose metric tensor can be written in the form
$$ds^2 = g_{ab}(y)\,dy^a\,dy^b + f(y)\,g_{ij}(x)\,dx^i\,dx^j.$$
The geometry almost decomposes into a Cartesian product of the y geometry and the x geometry, except that the x part is warped, i.e. it is rescaled by a scalar function of the other coordinates y. For this reason, the metric of a warped geometry is often called a warped product metric. Warped geometries are useful in that separation of variables can be used when solving partial differential equations over them.
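A familiar example (added here for illustration; it is not in the summary above): Euclidean 3-space in spherical coordinates is a warped product, with the radial coordinate playing the role of y and the unit sphere carrying the x coordinates,
$$ds^2 = dr^2 + r^2\left(d\theta^2 + \sin^2\theta\,d\varphi^2\right), \qquad f(r) = r^2.$$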
c_seh0x65jp5r3
Generalizations of Pauli matrices
Summary
Generalizations_of_Pauli_matrices
In mathematics and physics, in particular quantum information, the term generalized Pauli matrices refers to families of matrices which generalize the (linear algebraic) properties of the Pauli matrices. Here, a few classes of such matrices are summarized.
c_h1b35j16ax8v
Euler's Equation
Summary
Euler's_equations
In mathematics and physics, many topics are named in honor of Swiss mathematician Leonhard Euler (1707–1783), who made many important discoveries and innovations. Many of these items named after Euler include their own unique function, equation, formula, identity, number (single or sequence), or other mathematical entity. Many of these entities have been given simple and ambiguous names such as Euler's function, Euler's equation, and Euler's formula. Euler's work touched upon so many fields that he is often the earliest written reference on a given matter. In an effort to avoid naming everything after Euler, some discoveries and theorems are attributed to the first person to have proved them after Euler.
c_ia6rp1eg25un
Multiple-scale analysis
Summary
Multiple-scale_analysis
In mathematics and physics, multiple-scale analysis (also called the method of multiple scales) comprises techniques used to construct uniformly valid approximations to the solutions of perturbation problems, both for small as well as large values of the independent variables. This is done by introducing fast-scale and slow-scale variables for an independent variable, and subsequently treating these variables, fast and slow, as if they are independent. In the solution process of the perturbation problem thereafter, the resulting additional freedom – introduced by the new independent variables – is used to remove (unwanted) secular terms. The latter puts constraints on the approximate solution, which are called solvability conditions. Mathematics research from about the 1980s proposes that coordinate transforms and invariant manifolds provide a sounder support for multiscale modelling (for example, see center manifold and slow manifold).
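A textbook illustration of the method (a standard example, not drawn from the summary above): for the weakly damped oscillator $\ddot{y} + \epsilon\,\dot{y} + y = 0$ with $0 < \epsilon \ll 1$, introduce a fast time $T_0 = t$ and a slow time $T_1 = \epsilon t$ and expand $y = y_0(T_0,T_1) + \epsilon\,y_1(T_0,T_1) + \cdots$. Writing $y_0 = A(T_1)e^{iT_0} + \text{c.c.}$, the solvability condition that removes the secular term at order $\epsilon$ is
$$2\frac{dA}{dT_1} + A = 0 \quad\Longrightarrow\quad A(T_1) = A(0)\,e^{-T_1/2},$$
so the uniformly valid leading-order approximation is $y \approx a\,e^{-\epsilon t/2}\cos(t + \varphi)$, capturing the slow amplitude decay that a naive expansion in $\epsilon$ misses.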
c_tum3ojj8tki0
Anti-de Sitter space
Summary
Anti-de_Sitter_space
In mathematics and physics, n-dimensional anti-de Sitter space (AdSn) is a maximally symmetric Lorentzian manifold with constant negative scalar curvature. Anti-de Sitter space and de Sitter space are named after Willem de Sitter (1872–1934), professor of astronomy at Leiden University and director of the Leiden Observatory. Willem de Sitter and Albert Einstein worked together closely in Leiden in the 1920s on the spacetime structure of the universe. Manifolds of constant curvature are most familiar in the case of two dimensions, where the elliptic plane or surface of a sphere is a surface of constant positive curvature, a flat (i.e., Euclidean) plane is a surface of constant zero curvature, and a hyperbolic plane is a surface of constant negative curvature.
c_b4p5n6549cbg
Anti-de Sitter space
Summary
Anti-de_Sitter_space
Einstein's general theory of relativity places space and time on equal footing, so that one considers the geometry of a unified spacetime instead of considering space and time separately. The cases of spacetime of constant curvature are de Sitter space (positive), Minkowski space (zero), and anti-de Sitter space (negative). As such, they are exact solutions of the Einstein field equations for an empty universe with a positive, zero, or negative cosmological constant, respectively. Anti-de Sitter space generalises to any number of space dimensions. In higher dimensions, it is best known for its role in the AdS/CFT correspondence, which suggests that it is possible to describe a force in quantum mechanics (like electromagnetism, the weak force or the strong force) in a certain number of dimensions (for example four) with a string theory where the strings exist in an anti-de Sitter space, with one additional (non-compact) dimension.
c_nvtetlrgunlp
Toda field theory
Summary
Toda_field_theory
In mathematics and physics, specifically the study of field theory and partial differential equations, a Toda field theory, named after Morikazu Toda, is specified by a choice of Lie algebra and a specific Lagrangian.
c_n0mexcyue0o2
Super Minkowski space
Summary
Super_Minkowski_space
In mathematics and physics, super Minkowski space or Minkowski superspace is a supersymmetric extension of Minkowski space, sometimes used as the base manifold (or rather, supermanifold) for superfields. It is acted on by the super Poincaré algebra.
c_k0x9n60m8ix6
Kinetic Monte Carlo surface growth method
Summary
Kinetic_Monte_Carlo_surface_growth_method
In mathematics and physics, surface growth refers to models used in the dynamical study of the growth of a surface, usually by means of a stochastic differential equation of a field.
c_3a3dzh9ttibq
Artin billiard
Summary
Artin_billiard
In mathematics and physics, the Artin billiard is a type of dynamical billiard first studied by Emil Artin in 1924. It describes the geodesic motion of a free particle on the non-compact Riemann surface $\mathbb{H}/\Gamma$, where $\mathbb{H}$ is the upper half-plane endowed with the Poincaré metric and $\Gamma = \mathrm{PSL}(2,\mathbb{Z})$ is the modular group. It can be viewed as the motion on the fundamental domain of the modular group with the sides identified. The system is notable in that it is an exactly solvable system that is strongly chaotic: it is not only ergodic, but is also strongly mixing.
c_7fi0o13sgevj
Artin billiard
Summary
Artin_billiard
As such, it is an example of an Anosov flow. Artin's paper used symbolic dynamics for analysis of the system.
c_7vpamnw7nun6
Artin billiard
Summary
Artin_billiard
The quantum mechanical version of Artin's billiard is also exactly solvable. The eigenvalue spectrum consists of a bound state and a continuous spectrum above the energy $E = 1/4$. The wave functions are given by Bessel functions.
c_d9qcbhtgchls
Christoffel symbol
Summary
Connection_coefficient
In mathematics and physics, the Christoffel symbols are an array of numbers describing a metric connection. The metric connection is a specialization of the affine connection to surfaces or other manifolds endowed with a metric, allowing distances to be measured on that surface. In differential geometry, an affine connection can be defined without reference to a metric, and many additional concepts follow: parallel transport, covariant derivatives, geodesics, etc. also do not require the concept of a metric. However, when a metric is available, these concepts can be directly tied to the "shape" of the manifold itself; that shape is determined by how the tangent space is attached to the cotangent space by the metric tensor.
c_josd06glal7p
Christoffel symbol
Summary
Connection_coefficient
Abstractly, one would say that the manifold has an associated (orthonormal) frame bundle, with each "frame" being a possible choice of a coordinate frame. An invariant metric implies that the structure group of the frame bundle is the orthogonal group O(p, q). As a result, such a manifold is necessarily a (pseudo-)Riemannian manifold.
c_t41dc9qd01iq
Christoffel symbol
Summary
Connection_coefficient
The Christoffel symbols provide a concrete representation of the connection of (pseudo-)Riemannian geometry in terms of coordinates on the manifold. Additional concepts, such as parallel transport, geodesics, etc. can then be expressed in terms of Christoffel symbols.
c_hdz1y83qqguv
Christoffel symbol
Summary
Connection_coefficient
In general, there are an infinite number of metric connections for a given metric tensor; however, there is a unique connection that is free of torsion, the Levi-Civita connection. It is common in physics and general relativity to work almost exclusively with the Levi-Civita connection, by working in coordinate frames (called holonomic coordinates) where the torsion vanishes. For example, in Euclidean spaces, the Christoffel symbols describe how the local coordinate bases change from point to point.
c_0gejk67u1im8
Christoffel symbol
Summary
Connection_coefficient
At each point of the underlying n-dimensional manifold, for any local coordinate system around that point, the Christoffel symbols are denoted $\Gamma^{k}{}_{ij}$ for i, j, k = 1, 2, ..., n. Each entry of this n × n × n array is a real number. Under linear coordinate transformations on the manifold, the Christoffel symbols transform like the components of a tensor, but under general coordinate transformations (diffeomorphisms) they do not. Most of the algebraic properties of the Christoffel symbols follow from their relationship to the affine connection; only a few follow from the fact that the structure group is the orthogonal group O(p, q) (or the Lorentz group O(3, 1) for general relativity).
c_fcbr0v56asjv
Christoffel symbol
Summary
Connection_coefficient
Christoffel symbols are used for performing practical calculations. For example, the Riemann curvature tensor can be expressed entirely in terms of the Christoffel symbols and their first partial derivatives. In general relativity, the connection plays the role of the gravitational force field with the corresponding gravitational potential being the metric tensor. When the coordinate system and the metric tensor share some symmetry, many of the Γijk are zero. The Christoffel symbols are named for Elwin Bruno Christoffel (1829–1900).
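For reference (a standard formula, not quoted in the summaries above), the Christoffel symbols of the Levi-Civita connection of a metric $g$ are
$$\Gamma^{k}{}_{ij} = \tfrac{1}{2}\,g^{kl}\left(\partial_i g_{jl} + \partial_j g_{il} - \partial_l g_{ij}\right),$$
and geodesics are the curves satisfying $\ddot{x}^{k} + \Gamma^{k}{}_{ij}\,\dot{x}^{i}\dot{x}^{j} = 0$.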
c_6sy3onfsg7ij
Kadomtsev–Petviashvili equation
Summary
Kadomtsev–Petviashvili_equation
In mathematics and physics, the Kadomtsev–Petviashvili equation (often abbreviated as KP equation) is a partial differential equation to describe nonlinear wave motion. Named after Boris Borisovich Kadomtsev and Vladimir Iosifovich Petviashvili, the KP equation is usually written as
$$\partial_x\!\left(\partial_t u + u\,\partial_x u + \epsilon^2\,\partial_{xxx} u\right) + \lambda\,\partial_{yy} u = 0,$$
where $\lambda = \pm 1$. The above form shows that the KP equation is a generalization to two spatial dimensions, x and y, of the one-dimensional Korteweg–de Vries (KdV) equation. To be physically meaningful, the wave propagation direction has to be not too far from the x direction, i.e. with only slow variations of solutions in the y direction.
c_xjlx77y364x5
Kadomtsev–Petviashvili equation
Summary
Kadomtsev–Petviashvili_equation
Like the KdV equation, the KP equation is completely integrable. It can also be solved using the inverse scattering transform, much like the nonlinear Schrödinger equation. In 2002, the regularized version of the KP equation, naturally referred to as the Benjamin–Bona–Mahony–Kadomtsev–Petviashvili equation (or simply the BBM-KP equation), was introduced as an alternative model for small amplitude long waves in shallow water moving mainly in the x direction in 2+1 space:
$$\partial_x\!\left(\partial_t u + u\,\partial_x u + \epsilon^2\,\partial_{xxt} u\right) + \lambda\,\partial_{yy} u = 0,$$
where $\lambda = \pm 1$. The BBM-KP equation provides an alternative to the usual KP equation, in a similar way that the Benjamin–Bona–Mahony equation is related to the classical Korteweg–de Vries equation, as the linearized dispersion relation of the BBM-KP is a good approximation to that of the KP but does not exhibit the unwanted limiting behavior as the Fourier variable dual to x approaches $\pm\infty$.
c_5l8yjl4lx49h
Magnus expansion
Summary
Magnus_expansion
In mathematics and physics, the Magnus expansion, named after Wilhelm Magnus (1907–1990), provides an exponential representation of the solution of a first-order homogeneous linear differential equation for a linear operator. In particular, it furnishes the fundamental matrix of a system of linear ordinary differential equations of order n with varying coefficients. The exponent is aggregated as an infinite series, whose terms involve multiple integrals and nested commutators.
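Concretely (the standard first terms of the expansion, added for illustration): for $Y'(t) = A(t)\,Y(t)$ with $Y(0) = Y_0$, one writes $Y(t) = \exp(\Omega(t))\,Y_0$ with $\Omega(t) = \sum_{k\ge 1}\Omega_k(t)$, whose first two terms are
$$\Omega_1(t) = \int_0^t A(t_1)\,dt_1, \qquad \Omega_2(t) = \frac{1}{2}\int_0^t\!\!\int_0^{t_1} \left[A(t_1), A(t_2)\right] dt_2\,dt_1.$$
If all the $A(t)$ commute, only $\Omega_1$ survives and the familiar exponential-of-the-integral solution is recovered.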
c_46a91dfy7q0o
Poincaré recurrence theorem
Summary
Recurrence_time
In mathematics and physics, the Poincaré recurrence theorem states that certain dynamical systems will, after a sufficiently long but finite time, return to a state arbitrarily close to (for continuous state systems), or exactly the same as (for discrete state systems), their initial state. The Poincaré recurrence time is the length of time elapsed until the recurrence. This time may vary greatly depending on the exact initial state and required degree of closeness. The result applies to isolated mechanical systems subject to some constraints, e.g., all particles must be bound to a finite volume.
c_hc17h0thj9ow
Poincaré recurrence theorem
Summary
Recurrence_time
The theorem is commonly discussed in the context of ergodic theory, dynamical systems and statistical mechanics. Systems to which the Poincaré recurrence theorem applies are called conservative systems. The theorem is named after Henri Poincaré, who discussed it in 1890; it was proved by Constantin Carathéodory using measure theory in 1919.
c_mg5iywu2f3ow
Geometric center
Summary
Centroid
In mathematics and physics, the centroid, also known as the geometric center or center of figure, of a plane figure or solid figure is the arithmetic mean position of all the points in the surface of the figure. The same definition extends to any object in n-dimensional Euclidean space. In geometry, one often assumes uniform mass density, in which case the barycenter or center of mass coincides with the centroid. Informally, it can be understood as the point at which a cutout of the shape (with uniformly distributed mass) could be perfectly balanced on the tip of a pin. In physics, if variations in gravity are considered, then a center of gravity can be defined as the weighted mean of all points weighted by their specific weight. In geography, the centroid of a radial projection of a region of the Earth's surface to sea level is the region's geographical center.
c_f2tyoo9j7zhu
Diamagnetic inequality
Summary
Diamagnetic_inequality
In mathematics and physics, the diamagnetic inequality relates the Sobolev norm of the absolute value of a section of a line bundle to its covariant derivative. The diamagnetic inequality has an important physical interpretation: a charged particle in a magnetic field has more energy in its ground state than it would in a vacuum. To precisely state the inequality, let $L^2(\mathbb{R}^n)$ denote the usual Hilbert space of square-integrable functions, and $H^1(\mathbb{R}^n)$ the Sobolev space of square-integrable functions with square-integrable derivatives. Let $f, A_1, \dots, A_n$ be measurable functions on $\mathbb{R}^n$ and suppose that $A_j \in L^2_{\text{loc}}(\mathbb{R}^n)$ is real-valued, $f$ is complex-valued, and $f, (\partial_1 + iA_1)f, \dots, (\partial_n + iA_n)f \in L^2(\mathbb{R}^n)$. Then, for almost every $x \in \mathbb{R}^n$,
$$\left|\nabla |f|(x)\right| \le \left|(\nabla + iA)f(x)\right|.$$
In particular, $|f| \in H^1(\mathbb{R}^n)$.
c_rfex29mgveun
Heat diffusion
Summary
Heat_equation
In mathematics and physics, the heat equation is a certain partial differential equation. Solutions of the heat equation are sometimes known as caloric functions. The theory of the heat equation was first developed by Joseph Fourier in 1822 for the purpose of modeling how a quantity such as heat diffuses through a given region. As the prototypical parabolic partial differential equation, the heat equation is among the most widely studied topics in pure mathematics, and its analysis is regarded as fundamental to the broader field of partial differential equations.
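For reference (the standard form, which the summary does not display): with a thermal diffusivity $\alpha > 0$, the heat equation for $u(x_1,\dots,x_n,t)$ reads
$$\frac{\partial u}{\partial t} = \alpha\,\Delta u = \alpha \sum_{i=1}^{n} \frac{\partial^2 u}{\partial x_i^2}.$$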
c_pcwwmysnt982
Heat diffusion
Summary
Heat_equation
The heat equation can also be considered on Riemannian manifolds, leading to many geometric applications. Following work of Subbaramiah Minakshisundaram and Åke Pleijel, the heat equation is closely related with spectral geometry. A seminal nonlinear variant of the heat equation was introduced to differential geometry by James Eells and Joseph Sampson in 1964, inspiring the introduction of the Ricci flow by Richard Hamilton in 1982 and culminating in the proof of the Poincaré conjecture by Grigori Perelman in 2003.
c_6ttzwl5ju4jf
Heat diffusion
Summary
Heat_equation
Certain solutions of the heat equation known as heat kernels provide subtle information about the region on which they are defined, as exemplified through their application to the Atiyah–Singer index theorem. The heat equation, along with variants thereof, is also important in many fields of science and applied mathematics. In probability theory, the heat equation is connected with the study of random walks and Brownian motion via the Fokker–Planck equation. The Black–Scholes equation of financial mathematics is a small variant of the heat equation, and the Schrödinger equation of quantum mechanics can be regarded as a heat equation in imaginary time.
c_df0v2vrwwlij
Heat diffusion
Summary
Heat_equation
In image analysis, the heat equation is sometimes used to resolve pixelation and to identify edges. Following Robert Richtmyer and John von Neumann's introduction of "artificial viscosity" methods, solutions of heat equations have been useful in the mathematical formulation of hydrodynamical shocks. Solutions of the heat equation have also been given much attention in the numerical analysis literature, beginning in the 1950s with work of Jim Douglas, D.W. Peaceman, and Henry Rachford Jr.
c_zjcakn03qeug
Inverse scattering
Summary
Inverse_scattering
In mathematics and physics, the inverse scattering problem is the problem of determining characteristics of an object, based on data of how it scatters incoming radiation or particles. It is the inverse problem to the direct scattering problem, which is to determine how radiation or particles are scattered based on the properties of the scatterer. Soliton equations are a class of partial differential equations which can be studied and solved by a method called the inverse scattering transform, which reduces the nonlinear PDEs to a linear inverse scattering problem.
c_3fphq55lqv9j
Inverse scattering
Summary
Inverse_scattering
The nonlinear Schrödinger equation, the Korteweg–de Vries equation and the KP equation are examples of soliton equations. In one space dimension the inverse scattering problem is equivalent to a Riemann–Hilbert problem. Since its early statement for radiolocation, many applications have been found for inverse scattering techniques, including echolocation, geophysical surveys, nondestructive testing, medical imaging, and quantum field theory.
c_2b44kqv2qooh
Orientation entanglement
Summary
Orientation_entanglement
In mathematics and physics, the notion of orientation entanglement is sometimes used to develop intuition relating to the geometry of spinors or alternatively as a concrete realization of the failure of the special orthogonal groups to be simply connected.
c_g8sz0159r1ls
Plate trick
Summary
Plate_trick
In mathematics and physics, the plate trick, also known as Dirac's string trick, the belt trick, or the Balinese cup trick, is any of several demonstrations of the idea that rotating an object with strings attached to it by 360 degrees does not return the system to its original state, while a second rotation of 360 degrees, a total rotation of 720 degrees, does. Mathematically, it is a demonstration of the theorem that SU(2) (which double-covers SO(3)) is simply connected. To say that SU(2) double-covers SO(3) essentially means that the unit quaternions represent the group of rotations twice over. A detailed, intuitive, yet semi-formal articulation can be found in the article on tangloids.
c_xhvl0jq3a2w5
Right hand grip rule
Summary
Right-hand_screw_rule
In mathematics and physics, the right-hand rule is a common mnemonic for understanding the orientation of axes in three-dimensional space. It is also a convenient method for quickly finding the direction of the cross product of two vectors. Rather than a mathematical fact, it is a convention, closely related to the convention that rotation around an axis is positive if the sense of rotation as seen from the axis is counterclockwise and negative if it is clockwise. Most left-hand and right-hand rules arise from the fact that the three axes of three-dimensional space have two possible orientations.
c_v6gks3y4qoz3
Right hand grip rule
Summary
Right-hand_screw_rule
One can see this by holding one's hands outward and together, palms up, with the thumbs outstretched to the right and left, and the fingers making a curling motion from straight outward to pointing upward. If the curling motion of the fingers represents a movement from the first (x-axis) to the second (y-axis), then the third (z-axis) can point along either thumb. Left-hand and right-hand rules arise when dealing with coordinate axes.
c_kyyv6l9bn29p
Right hand grip rule
Summary
Right-hand_screw_rule
The rule can be used to find the direction of the magnetic field, rotation, spirals, electromagnetic fields, mirror images, and enantiomers in mathematics and chemistry. The sequence is often: index finger along the first vector, then middle finger along the second, then thumb along the third. Two other sequences also work, because they preserve the cyclic nature of the cross product (and the underlying Levi-Civita symbol): middle finger, thumb, index finger; and thumb, index finger, middle finger.
c_lbj5keb7o06a
Spectral asymmetry
Summary
Spectral_asymmetry
In mathematics and physics, the spectral asymmetry is the asymmetry in the distribution of the spectrum of eigenvalues of an operator. In mathematics, the spectral asymmetry arises in the study of elliptic operators on compact manifolds, and is given a deep meaning by the Atiyah-Singer index theorem. In physics, it has numerous applications, typically resulting in a fractional charge due to the asymmetry of the spectrum of a Dirac operator.
c_vwnzs15qbnpp
Spectral asymmetry
Summary
Spectral_asymmetry
For example, the vacuum expectation value of the baryon number is given by the spectral asymmetry of the Hamiltonian operator. The spectral asymmetry of the confined quark fields is an important property of the chiral bag model. For fermions, it is known as the Witten index, and can be understood as describing the Casimir effect for fermions.
c_amnuzi3iq22x
Generator (mathematics)
Summary
Generator_(mathematics)
In mathematics and physics, the term generator or generating set may refer to any of a number of related concepts. The underlying concept in each case is that of a smaller set of objects, together with a set of operations that can be applied to it, that result in the creation of a larger collection of objects, called the generated set. The larger set is then said to be generated by the smaller set. It is commonly the case that the generating set has a simpler set of properties than the generated set, thus making it easier to discuss and examine. It is usually the case that properties of the generating set are in some way preserved by the act of generation; likewise, the properties of the generated set are often reflected in the generating set.
c_8e8dmsqlxp50
Vector notation
Summary
Vector_notation
In mathematics and physics, vector notation is a commonly used notation for representing vectors, which may be Euclidean vectors, or more generally, members of a vector space. For representing a vector, the common typographic convention is lower case, upright boldface type, as in v. The International Organization for Standardization (ISO) recommends either bold italic serif, as in v, or non-bold italic serif accented by a right arrow, as in $\vec{v}$. In advanced mathematics, vectors are often represented in a simple italic type, like any variable.
c_xfmer9gnawig
Quota rule
Summary
Quota_rule
In mathematics and political science, the quota rule describes a desired property of a proportional apportionment or election method. It states that the number of seats allocated to a given party should be between the lower and upper roundings (called the lower and upper quotas) of its fractional proportional share (called the natural quota). As an example, if a party deserves 10.56 seats out of 15, the quota rule states that when the seats are allotted, the party may get 10 or 11 seats, but not fewer or more. Many common election methods, such as all highest averages methods, violate the quota rule.
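A small Python sketch of this check, under the definitions just stated; the function name satisfies_quota_rule and the vote totals are made up for illustration:

    from math import floor, ceil

    def satisfies_quota_rule(votes, total_seats, seats):
        # Check that each party's seat count lies between its lower and upper quota.
        total_votes = sum(votes)
        for v, s in zip(votes, seats):
            natural_quota = total_seats * v / total_votes  # fractional proportional share
            if not (floor(natural_quota) <= s <= ceil(natural_quota)):
                return False
        return True

    # Example from the text: a party whose natural quota is 10.56 may receive 10 or 11 seats.
    print(satisfies_quota_rule([1056, 444], 15, [11, 4]))  # True
    print(satisfies_quota_rule([1056, 444], 15, [12, 3]))  # False: 12 exceeds the upper quota of 11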
c_0zeqa9o3d38o
Skorokhod's embedding theorem
Summary
Skorokhod's_embedding_theorem
In mathematics and probability theory, Skorokhod's embedding theorem is either or both of two theorems that allow one to regard any suitable collection of random variables as a Wiener process (Brownian motion) evaluated at a collection of stopping times. Both results are named for the Ukrainian mathematician A. V. Skorokhod.
c_q04wq0ie3j1e
Continuum percolation theory
Summary
Continuum_percolation_theory
In mathematics and probability theory, continuum percolation theory is a branch of mathematics that extends discrete percolation theory to continuous space (often Euclidean space ℝn). More specifically, the underlying points of discrete percolation form types of lattices, whereas the underlying points of continuum percolation are often randomly positioned in some continuous space and form a type of point process. For each point, a random shape is frequently placed on it and the shapes overlap with each other to form clumps or components.
c_ac1xo9wdumbt
Continuum percolation theory
Summary
Continuum_percolation_theory
As in discrete percolation, a common research focus of continuum percolation is studying the conditions of occurrence for infinite or giant components. Other shared concepts and analysis techniques exist in these two types of percolation theory as well as the study of random graphs and random geometric graphs. Continuum percolation arose from an early mathematical model for wireless networks, which, with the rise of several wireless network technologies in recent years, has been generalized and studied in order to determine the theoretical bounds of information capacity and performance in wireless networks. In addition to this setting, continuum percolation has gained application in other disciplines including biology, geology, and physics, such as the study of porous material and semiconductors, while becoming a subject of mathematical interest in its own right.
c_w906f2tqolrn
Borell–TIS inequality
Summary
Borell–TIS_inequality
In mathematics and probability, the Borell–TIS inequality is a result bounding the probability of a deviation of the uniform norm of a centered Gaussian stochastic process above its expected value. The result is named for Christer Borell and its independent discoverers Boris Tsirelson, Ildar Ibragimov, and Vladimir Sudakov. The inequality has been described as "the single most important tool in the study of Gaussian processes."
c_vred2azkmcdq
Harmonic spinor
Summary
Harmonic_spinor
In mathematics and quantum mechanics, a Dirac operator is a differential operator that is a formal square root, or half-iterate, of a second-order operator such as a Laplacian. The original case which concerned Paul Dirac was to factorise formally an operator for Minkowski space, to get a form of quantum theory compatible with special relativity; to get the relevant Laplacian as a product of first-order operators he introduced spinors. It was first published in 1928.
c_4yr8h6q6tvct
Symbols of grouping
Summary
Symbols_of_grouping
In mathematics and related subjects, understanding a mathematical expression depends on an understanding of symbols of grouping, such as parentheses (), brackets [], and braces {}. These same symbols are also used in ways where they are not symbols of grouping. For example, in the expression 3(x+y) the parentheses are symbols of grouping, but in the expression (3, 5) the parentheses may indicate an open interval. The most common symbols of grouping are the parentheses and the brackets, and the brackets are usually used to avoid too many repeated parentheses.
c_yytvpti8k34r
Symbols of grouping
Summary
Symbols_of_grouping
For example, to indicate the product of binomials, parentheses are usually used, thus: $(2x+3)(3x+4)$. But if one of the binomials itself contains parentheses, as in $(2(a+b)+3)$, one or more pairs of parentheses may be replaced by brackets, thus: $[2(a+b)+3]$. Beyond elementary mathematics, brackets are mostly used for other purposes, e.g. to denote a closed interval or an equivalence class, so they appear rarely for grouping.
c_u0a9ywtl2g3s
Symbols of grouping
Summary
Symbols_of_grouping
The usage of the word "parentheses" varies from country to country. In the United States, the word parentheses (singular "parenthesis") is used for the curved symbol of grouping, but in many other countries the curved symbol of grouping is called a "bracket" and the symbol of grouping with two right angles joined is called a "square bracket". The symbol of grouping known as "braces" has two major uses.
c_epmxyyvbsilh
Symbols of grouping
Summary
Symbols_of_grouping
If two of these symbols are used, one on the left and the mirror image of it on the right, it almost always indicates a set, as in $\{a, b, c\}$, the set containing three members $a$, $b$, and $c$. But if it is used only on the left, it groups two or more simultaneous equations. There are other symbols of grouping.
c_wn596xqoqjil
Symbols of grouping
Summary
Symbols_of_grouping
One is the bar above an expression, as in the square root sign, in which the bar is a symbol of grouping. For example, in √(p+q) the bar extends over the sum p+q, so the expression denotes the square root of the sum. The bar is also a symbol of grouping in repeated decimal digits.
c_6ulc00xxix5n
Symbols of grouping
Summary
Symbols_of_grouping
A decimal point followed by one or more digits with a bar over them, for example 0.123 with a bar over the digits 123, represents the repeating decimal 0.123123123... . A superscript is understood to be grouped as long as it continues in the form of a superscript. For example, if an x has a superscript of the form a+b, the sum is the exponent. For example, in x^(2+3) it is understood that the 2+3 is grouped, and that the exponent is the sum of 2 and 3. These rules are understood by all mathematicians.
c_880rtqmmi0b0
Nonlinear differential equation
Summary
Nonlinear_equation
In mathematics and science, a nonlinear system (or a non-linear system) is a system in which the change of the output is not proportional to the change of the input. Nonlinear problems are of interest to engineers, biologists, physicists, mathematicians, and many other scientists since most systems are inherently nonlinear in nature. Nonlinear dynamical systems, describing changes in variables over time, may appear chaotic, unpredictable, or counterintuitive, contrasting with much simpler linear systems. Typically, the behavior of a nonlinear system is described in mathematics by a nonlinear system of equations, which is a set of simultaneous equations in which the unknowns (or the unknown functions in the case of differential equations) appear as variables of a polynomial of degree higher than one or in the argument of a function which is not a polynomial of degree one.
c_9vzcuabimkb0
Nonlinear differential equation
Summary
Nonlinear_equation
In other words, in a nonlinear system of equations, the equation(s) to be solved cannot be written as a linear combination of the unknown variables or functions that appear in them. Systems can be defined as nonlinear, regardless of whether known linear functions appear in the equations. In particular, a differential equation is linear if it is linear in terms of the unknown function and its derivatives, even if nonlinear in terms of the other variables appearing in it.
c_drl49wysnx4i
Nonlinear differential equation
Summary
Nonlinear_equation
As nonlinear dynamical equations are difficult to solve, nonlinear systems are commonly approximated by linear equations (linearization). This works well up to some accuracy and some range for the input values, but some interesting phenomena such as solitons, chaos, and singularities are hidden by linearization. It follows that some aspects of the dynamic behavior of a nonlinear system can appear to be counterintuitive, unpredictable or even chaotic.
c_lhtbbs89the1
Nonlinear differential equation
Summary
Nonlinear_equation
Although such chaotic behavior may resemble random behavior, it is in fact not random. For example, some aspects of the weather are seen to be chaotic, where simple changes in one part of the system produce complex effects throughout. This nonlinearity is one of the reasons why accurate long-term forecasts are impossible with current technology. Some authors use the term nonlinear science for the study of nonlinear systems. This term is disputed by others: "Using a term like nonlinear science is like referring to the bulk of zoology as the study of non-elephant animals."
c_3m4euqt5ulqn
Hereditarily finite set
Summary
Hereditarily_finite_set
In mathematics and set theory, hereditarily finite sets are defined as finite sets whose elements are all hereditarily finite sets. In other words, the set itself is finite, and all of its elements are finite sets, recursively all the way down to the empty set.
c_yelrf7621wzi
Analytic representation
Summary
Analytic_representation
In mathematics and signal processing, an analytic signal is a complex-valued function that has no negative frequency components. The real and imaginary parts of an analytic signal are real-valued functions related to each other by the Hilbert transform. The analytic representation of a real-valued function is an analytic signal, comprising the original function and its Hilbert transform. This representation facilitates many mathematical manipulations.
c_7cwr8cjb08iu
Analytic representation
Summary
Analytic_representation
The basic idea is that the negative frequency components of the Fourier transform (or spectrum) of a real-valued function are superfluous, due to the Hermitian symmetry of such a spectrum. These negative frequency components can be discarded with no loss of information, provided one is willing to deal with a complex-valued function instead. That makes certain attributes of the function more accessible and facilitates the derivation of modulation and demodulation techniques, such as single-sideband. As long as the manipulated function has no negative frequency components (that is, it is still analytic), the conversion from complex back to real is just a matter of discarding the imaginary part. The analytic representation is a generalization of the phasor concept: while the phasor is restricted to time-invariant amplitude, phase, and frequency, the analytic signal allows for time-variable parameters.
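A short Python sketch, assuming SciPy and NumPy are installed; note that scipy.signal.hilbert returns the analytic signal itself (the original signal plus i times its Hilbert transform), not the Hilbert transform alone:

    import numpy as np
    from scipy.signal import hilbert  # returns the analytic signal

    t = np.linspace(0, 1, 1000, endpoint=False)
    x = np.cos(2 * np.pi * 5 * t)          # real-valued signal
    z = hilbert(x)                          # analytic signal: x + i * H(x)
    envelope = np.abs(z)                    # instantaneous amplitude
    phase = np.unwrap(np.angle(z))          # instantaneous phase
    print(np.allclose(x, z.real))           # True: the real part is the original signal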
c_ls5m83b47ws9
Hilbert Transform
Summary
Hilbert_Transform
In mathematics and signal processing, the Hilbert transform is a specific singular integral that takes a function u(t) of a real variable and produces another function of a real variable H(u)(t). The Hilbert transform is given by the Cauchy principal value of the convolution with the function $1/(\pi t)$ (see § Definition). The Hilbert transform has a particularly simple representation in the frequency domain: it imparts a phase shift of ±90° (π/2 radians) to every frequency component of a function, the sign of the shift depending on the sign of the frequency (see § Relationship with the Fourier transform). The Hilbert transform is important in signal processing, where it is a component of the analytic representation of a real-valued signal u(t). The Hilbert transform was first introduced by David Hilbert in this setting, to solve a special case of the Riemann–Hilbert problem for analytic functions.
c_hscs546823g3
Z-transform
Summary
Z-transform
In mathematics and signal processing, the Z-transform converts a discrete-time signal, which is a sequence of real or complex numbers, into a complex frequency-domain (the z-domain or z-plane) representation. It can be considered as a discrete-time equivalent of the Laplace transform (the s-domain or s-plane). This similarity is explored in the theory of time-scale calculus. While the continuous-time Fourier transform is evaluated on the s-domain's vertical axis (the imaginary axis), the discrete-time Fourier transform is evaluated along the z-domain's unit circle.
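For reference (the standard definition, which the summary does not display), the bilateral Z-transform of a sequence x[n] is
$$X(z) = \mathcal{Z}\{x[n]\} = \sum_{n=-\infty}^{\infty} x[n]\,z^{-n},$$
with the unilateral version summing over $n \ge 0$; the series converges on an annular region of convergence in the z-plane.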
c_ni60dxsxye5j
Z-transform
Summary
Z-transform
The s-domain's left half-plane maps to the area inside the z-domain's unit circle, while the s-domain's right half-plane maps to the area outside of the z-domain's unit circle. One of the means of designing digital filters is to take analog designs, subject them to a bilinear transform which maps them from the s-domain to the z-domain, and then produce the digital filter by inspection, manipulation, or numerical approximation. Such methods tend not to be accurate except in the vicinity of the complex unity, i.e. at low frequencies.
c_w2dt7ftvkzxv
Advanced Z-transform
Summary
Advanced_Z-transform
In mathematics and signal processing, the advanced z-transform is an extension of the z-transform, to incorporate ideal delays that are not multiples of the sampling time. It takes the form
$$F(z, m) = \sum_{k=0}^{\infty} f(kT + m)\,z^{-k},$$
where T is the sampling period and m (the "delay parameter") is a fraction of the sampling period. It is also known as the modified z-transform. The advanced z-transform is widely applied, for example to accurately model processing delays in digital control.
c_kpf8re6g68bs
Constant-Q transform
Summary
Variable-Q_transform
In mathematics and signal processing, the constant-Q transform and variable-Q transform, simply known as CQT and VQT, transform a data series to the frequency domain. They are related to the Fourier transform and very closely related to the complex Morlet wavelet transform. Their design is suited for musical representation. The transform can be thought of as a series of filters fk, logarithmically spaced in frequency, with the k-th filter having a spectral width δfk equal to a multiple of the previous filter's width:
$$\delta f_k = 2^{1/n}\cdot \delta f_{k-1} = \left(2^{1/n}\right)^{k}\cdot \delta f_{\text{min}},$$
where δfk is the bandwidth of the k-th filter, fmin is the central frequency of the lowest filter, and n is the number of filters per octave.
c_t7nypcrs1j16
Collaboration graph
Summary
Collaboration_graph
In mathematics and social science, a collaboration graph is a graph modeling some social network where the vertices represent participants of that network (usually individual people) and where two distinct participants are joined by an edge whenever there is a collaborative relationship between them of a particular kind. Collaboration graphs are used to measure the closeness of collaborative relationships between the participants of the network.
c_mdb3xj51kz37
Constant-energy surface
Summary
Gamma_point
In mathematics and solid state physics, the first Brillouin zone (named after Léon Brillouin) is a uniquely defined primitive cell in reciprocal space. In the same way the Bravais lattice is divided up into Wigner–Seitz cells in the real lattice, the reciprocal lattice is broken up into Brillouin zones. The boundaries of this cell are given by planes related to points on the reciprocal lattice. The importance of the Brillouin zone stems from the description of waves in a periodic medium given by Bloch's theorem, in which it is found that the solutions can be completely characterized by their behavior in a single Brillouin zone.
c_t8gy9vz8prai
Constant-energy surface
Summary
Gamma_point
The first Brillouin zone is the locus of points in reciprocal space that are closer to the origin of the reciprocal lattice than they are to any other reciprocal lattice points (see the derivation of the Wigner–Seitz cell). Another definition is as the set of points in k-space that can be reached from the origin without crossing any Bragg plane. Equivalently, this is the Voronoi cell around the origin of the reciprocal lattice.
c_f1erkmv81kmn
Constant-energy surface
Summary
Gamma_point
There are also second, third, etc., Brillouin zones, corresponding to a sequence of disjoint regions (all with the same volume) at increasing distances from the origin, but these are used less frequently. As a result, the first Brillouin zone is often called simply the Brillouin zone. In general, the n-th Brillouin zone consists of the set of points that can be reached from the origin by crossing exactly n − 1 distinct Bragg planes.
c_kxvcfstz9bi5
Constant-energy surface
Summary
Gamma_point
A related concept is that of the irreducible Brillouin zone, which is the first Brillouin zone reduced by all of the symmetries in the point group of the lattice (point group of the crystal). The concept of a Brillouin zone was developed by Léon Brillouin (1889–1969), a French physicist. Within the Brillouin zone, a constant-energy surface represents the locus of all the $\vec{k}$-points (that is, all the electron momentum values) that have the same energy. The Fermi surface is a special constant-energy surface that separates the unfilled orbitals from the filled ones at zero kelvin.
c_sdsrctzfpqt2
Dimension of an algebraic variety
Summary
Dimension_of_an_algebraic_variety
In mathematics and specifically in algebraic geometry, the dimension of an algebraic variety may be defined in various equivalent ways. Some of these definitions are of geometric nature, while some others are purely algebraic and rely on commutative algebra. Some are restricted to algebraic varieties while others apply also to any algebraic set. Some are intrinsic, as independent of any embedding of the variety into an affine or projective space, while others are related to such an embedding.
c_88xzfgk4r4v9
Formal space
Summary
Rational_homotopy_theory
In mathematics and specifically in topology, rational homotopy theory is a simplified version of homotopy theory for topological spaces, in which all torsion in the homotopy groups is ignored. It was founded by Dennis Sullivan (1977) and Daniel Quillen (1969). This simplification of homotopy theory makes certain calculations much easier. Rational homotopy types of simply connected spaces can be identified with (isomorphism classes of) certain algebraic objects called Sullivan minimal models, which are commutative differential graded algebras over the rational numbers satisfying certain conditions.
c_falm61b9hcdk
Formal space
Summary
Rational_homotopy_theory
A geometric application was the theorem of Sullivan and Micheline Vigué-Poirrier (1976): every simply connected closed Riemannian manifold X whose rational cohomology ring is not generated by one element has infinitely many geometrically distinct closed geodesics. The proof used rational homotopy theory to show that the Betti numbers of the free loop space of X are unbounded. The theorem then follows from a 1969 result of Detlef Gromoll and Wolfgang Meyer.
c_ngd2mjnbn8nb
Skorokhod's representation theorem
Summary
Skorokhod's_representation_theorem
In mathematics and statistics, Skorokhod's representation theorem is a result that shows that a weakly convergent sequence of probability measures whose limit measure is sufficiently well-behaved can be represented as the distribution/law of a pointwise convergent sequence of random variables defined on a common probability space. It is named for the Soviet mathematician A. V. Skorokhod.
c_oe1vyq8b28h0
Mean of circular quantities
Summary
Circular_mean
In mathematics and statistics, a circular mean or angular mean is a mean designed for angles and similar cyclic quantities, such as times of day, and fractional parts of real numbers. This is necessary since most of the usual means may not be appropriate on angle-like quantities. For example, the arithmetic mean of 0° and 360° is 180°, which is misleading because 360° equals 0° modulo a full cycle.
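A brief Python sketch of this idea: represent each angle as a unit vector, average the vectors, and take the angle of the mean resultant (the function name circular_mean_deg is an illustrative choice, not from the source):

    import numpy as np

    def circular_mean_deg(angles_deg):
        # Mean of angles in degrees, computed from the mean resultant vector.
        a = np.deg2rad(np.asarray(angles_deg, dtype=float))
        return np.rad2deg(np.arctan2(np.sin(a).mean(), np.cos(a).mean()))

    # The arithmetic mean of 0 and 360 is 180, but the circular mean is 0 (up to rounding),
    # and the circular mean of 350 and 10 is 0 rather than 180.
    print(circular_mean_deg([0.0, 360.0]))   # approximately 0
    print(circular_mean_deg([350.0, 10.0]))  # approximately 0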