content_id
|
page_title
|
section_title
|
breadcrumb
|
text
|
---|---|---|---|---|
c_tzal0xvd8c55
|
Decomposable operator
|
Summary
|
Direct_integral
|
Factors are analogous to full matrix algebras over a field, and von Neumann wanted to prove a continuous analogue of the Artin–Wedderburn theorem classifying semi-simple rings. Results on direct integrals can be viewed as generalizations of results about finite-dimensional C*-algebras of matrices; in this case the results are easy to prove directly. The infinite-dimensional case is complicated by measure-theoretic technicalities. Direct integral theory was also used by George Mackey in his analysis of systems of imprimitivity and his general theory of induced representations of locally compact separable groups.
|
c_oaruqm485cmc
|
Bulgarian solitaire
|
Summary
|
Bulgarian_solitaire
|
In mathematics and game theory, Bulgarian solitaire is a card game that was introduced by Martin Gardner. In the game, a pack of $N$ cards is divided into several piles. Then for each pile, remove one card; collect the removed cards together to form a new pile (piles of zero size are ignored). If $N$ is a triangular number (that is, $N = 1 + 2 + \cdots + k$ for some $k$), then it is known that Bulgarian solitaire will reach a stable configuration in which the sizes of the piles are $1, 2, \ldots, k$. This state is reached in $k^2 - k$ moves or fewer. If $N$ is not triangular, no stable configuration exists and a limit cycle is reached.
|
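The move described in the Bulgarian solitaire entry above is easy to simulate. Below is a minimal Python sketch (illustrative function names, not from any library) that applies the move repeatedly until a configuration repeats; for the triangular number N = 6 it reaches the stable configuration (3, 2, 1).

```python
# A minimal simulation of Bulgarian solitaire (illustrative names, no library).
def move(piles):
    """Take one card from every pile and form a new pile from the removed cards."""
    new_pile = len(piles)
    piles = [p - 1 for p in piles if p > 1]      # piles shrunk to zero are ignored
    return sorted(piles + [new_pile], reverse=True)

def play(piles):
    """Iterate moves until a configuration repeats; return it and the move count."""
    seen, steps = set(), 0
    while tuple(piles) not in seen:
        seen.add(tuple(piles))
        piles = move(piles)
        steps += 1
    return piles, steps

# N = 6 = 1 + 2 + 3 is triangular, so the stable configuration [3, 2, 1] is reached.
print(play([4, 1, 1]))
```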
c_2r8x9nkwnqfx
|
Shortest-path graph
|
Summary
|
Shortest-path_graph
|
In mathematics and geographic information science, a shortest-path graph is an undirected graph defined from a set of points in the Euclidean plane. The shortest-path graph is proposed with the idea of inferring edges between a point set such that the shortest path taken over the inferred edges will roughly align with the shortest path taken over the imprecise region represented by the point set. The edge set of the shortest-path graph varies based on a single parameter t ≥ 1. When the weight of an edge is defined as its Euclidean length raised to the power of the parameter t ≥ 1, the edge is present in the shortest-path graph if and only if it is the least weight path between its endpoints.
|
c_h4j1crg4wnex
|
Discrete symmetry
|
Summary
|
Discrete_symmetry
|
In mathematics and geometry, a discrete symmetry is a symmetry that describes non-continuous changes in a system. For example, a square possesses discrete rotational symmetry, as only rotations by multiples of right angles will preserve the square's original appearance. Discrete symmetries sometimes involve some type of 'swapping', these swaps usually being called reflections or interchanges. In mathematics and theoretical physics, a discrete symmetry is a symmetry under the transformations of a discrete group—e.g.
|
c_bktwr9z7t4y3
|
Discrete symmetry
|
Summary
|
Discrete_symmetry
|
a topological group with a discrete topology whose elements form a finite or a countable set. One of the most prominent discrete symmetries in physics is parity symmetry. It manifests itself in various elementary physical quantum systems, such as quantum harmonic oscillator, electron orbitals of Hydrogen-like atoms by forcing wavefunctions to be even or odd. This in turn gives rise to selection rules that determine which transition lines are visible in atomic absorption spectra.
|
c_xbdj7is6h80a
|
Block (permutation group theory)
|
Summary
|
Block_(permutation_group_theory)
|
In mathematics and group theory, a block system for the action of a group G on a set X is a partition of X that is G-invariant. In terms of the associated equivalence relation on X, G-invariance means that x ~ y implies gx ~ gyfor all g ∈ G and all x, y ∈ X. The action of G on X induces a natural action of G on any block system for X. The set of orbits of the G-set X is an example of a block system. The corresponding equivalence relation is the smallest G-invariant equivalence on X such that the induced action on the block system is trivial.
|
c_cps87sw0a9w7
|
Block (permutation group theory)
|
Summary
|
Block_(permutation_group_theory)
|
The partition into singleton sets is a block system and if X is non-empty then the partition into one set X itself is a block system as well (if X is a singleton set then these two partitions are identical). A transitive (and thus non-empty) G-set X is said to be primitive if it has no other block systems. For a non-empty G-set X the transitivity requirement in the previous definition is only necessary in the case when |X|=2 and the group action is trivial.
|
c_7rah8f8ymunt
|
Multiplicative group
|
Summary
|
Multiplicative_notation
|
In mathematics and group theory, the term multiplicative group refers to one of the following concepts: the group under multiplication of the invertible elements of a field, ring, or other structure for which one of its operations is referred to as multiplication (in the case of a field F, the group is (F ∖ {0}, •), where 0 refers to the zero element of F and the binary operation • is the field multiplication); or the algebraic torus GL(1).
|
c_9davk616mouk
|
Variadic functions
|
Summary
|
Variadic_functions
|
In mathematics and in computer programming, a variadic function is a function of indefinite arity, i.e., one which accepts a variable number of arguments. Support for variadic functions differs widely among programming languages. The term variadic is a neologism, dating back to 1936–1937. The term was not widely used until the 1970s.
|
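Since the entry above concerns programming-language support for variadic functions, here is a minimal Python sketch: the *args parameter collects an arbitrary number of positional arguments into a tuple, so the function has indefinite arity.

```python
# A minimal Python example of a variadic function: *args collects any number of
# positional arguments into a tuple, so the function has indefinite arity.
def total(*args):
    return sum(args)

print(total())            # 0   (no arguments)
print(total(1, 2))        # 3
print(total(1, 2, 3, 4))  # 10
```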
c_gxg25yklqxwb
|
Lehmer code
|
Summary
|
Lehmer_code
|
In mathematics and in particular in combinatorics, the Lehmer code is a particular way to encode each possible permutation of a sequence of n numbers. It is an instance of a scheme for numbering permutations and is an example of an inversion table. The Lehmer code is named in reference to Derrick Henry Lehmer, but the code had been known since 1888 at least.
|
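As a small illustration of the encoding described above, the following sketch computes the Lehmer code of a permutation given in one-line notation, where entry i counts how many later entries are smaller than entry i; the helper name is illustrative.

```python
# A small sketch: the Lehmer code of a permutation in one-line notation, where
# entry i counts the later entries that are smaller than entry i.
def lehmer_code(perm):
    return [sum(1 for later in perm[i + 1:] if later < perm[i])
            for i in range(len(perm))]

print(lehmer_code([2, 0, 3, 1]))  # [2, 0, 1, 0]
```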
c_pqnmq4zo3bl7
|
Measurable function
|
Summary
|
Borel_function
|
In mathematics and in particular measure theory, a measurable function is a function between the underlying sets of two measurable spaces that preserves the structure of the spaces: the preimage of any measurable set is measurable. This is in direct analogy to the definition that a continuous function between topological spaces preserves the topological structure: the preimage of any open set is open. In real analysis, measurable functions are used in the definition of the Lebesgue integral. In probability theory, a measurable function on a probability space is known as a random variable.
|
c_8h5vh150xey8
|
Hurwitz's theorem (complex analysis)
|
Summary
|
Hurwitz's_theorem_(complex_analysis)
|
In mathematics and in particular the field of complex analysis, Hurwitz's theorem is a theorem associating the zeroes of a sequence of holomorphic, compact locally uniformly convergent functions with that of their corresponding limit. The theorem is named after Adolf Hurwitz.
|
c_yjrcmkfuodox
|
Stone–von Neumann theorem
|
Summary
|
Stone–von_Neumann_theorem
|
In mathematics and in theoretical physics, the Stone–von Neumann theorem refers to any one of a number of different formulations of the uniqueness of the canonical commutation relations between position and momentum operators. It is named after Marshall Stone and John von Neumann.
|
c_c1bk25t90b0l
|
Sigma-martingale
|
Summary
|
Sigma-martingale
|
In mathematics and information theory of probability, a sigma-martingale is a semimartingale with an integral representation. Sigma-martingales were introduced by C.S. Chou and M. Emery in 1977 and 1978. In financial mathematics, sigma-martingales appear in the fundamental theorem of asset pricing as an equivalent condition to no free lunch with vanishing risk (a no-arbitrage condition).
|
c_bw1z7shx7iiu
|
Sanov's theorem
|
Summary
|
Sanov's_theorem
|
In mathematics and information theory, Sanov's theorem gives a bound on the probability of observing an atypical sequence of samples from a given probability distribution. In the language of large deviations theory, Sanov's theorem identifies the rate function for large deviations of the empirical measure of a sequence of i.i.d. random variables.
|
c_rf7xfigy7oxz
|
Sanov's theorem
|
Summary
|
Sanov's_theorem
|
Let A be a set of probability distributions over an alphabet X, and let q be an arbitrary distribution over X (where q may or may not be in A). Suppose we draw n i.i.d. samples from q, represented by the vector $x^{n} = x_{1}, x_{2}, \ldots, x_{n}$.
|
c_xptt4xh1xnl4
|
Sanov's theorem
|
Summary
|
Sanov's_theorem
|
Then, we have the following bound on the probability that the empirical measure $\hat{p}_{x^{n}}$ of the samples falls within the set A: $q^{n}(\hat{p}_{x^{n}} \in A) \leq (n+1)^{|X|} 2^{-n D_{\mathrm{KL}}(p^{*}\|q)}$, where $q^{n}$ is the joint probability distribution on $X^{n}$, and $p^{*}$ is the information projection of q onto A. In words, the probability of drawing an atypical distribution is bounded by a function of the KL divergence from the true distribution to the atypical one; in the case that we consider a set of possible atypical distributions, there is a dominant atypical distribution, given by the information projection. Furthermore, if A is the closure of its interior, $\lim_{n\to\infty} \frac{1}{n} \log q^{n}(\hat{p}_{x^{n}} \in A) = -D_{\mathrm{KL}}(p^{*}\|q)$.
|
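The bound above is easy to check numerically in a toy case. The sketch below (an illustration, not from the article) takes a fair coin as q, the set A of distributions assigning probability at least 0.7 to heads, and the information projection p* = (0.7, 0.3); it compares the exact probability of an atypical empirical measure with the bound (n+1)^|X| 2^(-n D(p*||q)).

```python
# A toy numerical check of Sanov's bound (an illustration, not from the article).
# Alphabet X = {heads, tails}, true distribution q = (1/2, 1/2), and
# A = {p : p(heads) >= 0.7}; the information projection of q onto A is p* = (0.7, 0.3).
from math import comb, log2

n = 500
k_min = 7 * n // 10                                  # empirical counts with >= 70% heads
D = 0.7 * log2(0.7 / 0.5) + 0.3 * log2(0.3 / 0.5)    # D_KL(p* || q) in bits

exact = sum(comb(n, k) for k in range(k_min, n + 1)) / 2 ** n
bound = (n + 1) ** 2 * 2 ** (-n * D)                 # (n+1)^|X| * 2^(-n D)

print(f"D(p*||q)    = {D:.4f} bits")
print(f"exact tail  = {exact:.3e}")                  # the true probability
print(f"Sanov bound = {bound:.3e}")                  # larger, as the theorem guarantees
```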
c_yzmp54rbbf0z
|
Sturm–Liouville eigenproblem
|
Summary
|
Sturm–Liouville_equation
|
In mathematics and its applications, a Sturm–Liouville problem is a second-order linear ordinary differential equation of the form $\frac{d}{dx}\left[p(x)\frac{dy}{dx}\right] + q(x)y = -\lambda w(x)y$, for given functions $p(x)$, $q(x)$ and $w(x)$, together with some boundary conditions at extreme values of $x$. The goals of a given Sturm–Liouville problem are: to find the λ for which there exists a non-trivial solution to the problem (such values λ are called the eigenvalues of the problem); and, for each eigenvalue λ, to find the corresponding solution $y = y(x)$ of the problem.
|
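A common way to approximate the eigenvalues λ of a regular Sturm–Liouville problem is a finite-difference discretization. The sketch below (a minimal illustration, not from the article) treats the simplest case p = w = 1, q = 0 on [0, π] with Dirichlet boundary conditions, whose exact eigenvalues are k² = 1, 4, 9, ....

```python
# A finite-difference sketch of the simplest regular Sturm-Liouville problem,
#   -y'' = lambda * y  on [0, pi],  y(0) = y(pi) = 0   (p = w = 1, q = 0),
# whose exact eigenvalues are k^2 = 1, 4, 9, ...
import numpy as np

N = 200                                        # interior grid points
h = np.pi / (N + 1)
main = (2.0 / h**2) * np.ones(N)               # tridiagonal approximation of -d^2/dx^2
off = (-1.0 / h**2) * np.ones(N - 1)
A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

print(np.linalg.eigvalsh(A)[:4])               # close to 1, 4, 9, 16
```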
c_b6ot4yv2astv
|
Sturm–Liouville eigenproblem
|
Summary
|
Sturm–Liouville_equation
|
Such functions $y$ are called the eigenfunctions associated to each λ. Sturm–Liouville theory is the general study of Sturm–Liouville problems. In particular, for a "regular" Sturm–Liouville problem, it can be shown that there are an infinite number of eigenvalues, each with a unique eigenfunction, and that these eigenfunctions form an orthonormal basis of a certain Hilbert space of functions.
|
c_mq15u4v29r3g
|
Sturm–Liouville eigenproblem
|
Summary
|
Sturm–Liouville_equation
|
This theory is important in applied mathematics, where Sturm–Liouville problems occur very frequently, particularly when dealing with separable linear partial differential equations. For example, in quantum mechanics, the one-dimensional time-independent Schrödinger equation is a Sturm–Liouville problem. Sturm–Liouville theory is named after Jacques Charles François Sturm (1803–1855) and Joseph Liouville (1809–1882) who developed the theory.
|
c_7ix8mtp0konv
|
Parametric family of functions
|
Summary
|
Parametric_family
|
In mathematics and its applications, a parametric family or a parameterized family is a family of objects (a set of related objects) whose differences depend only on the chosen values for a set of parameters. Common examples are parametrized (families of) functions, probability distributions, curves, shapes, etc.
|
c_x460vzaldcvp
|
Stefan problem
|
Summary
|
Stefan_problem
|
In mathematics and its applications, particularly to phase transitions in matter, a Stefan problem is a particular kind of boundary value problem for a system of partial differential equations (PDE), in which the boundary between the phases can move with time. The classical Stefan problem aims to describe the evolution of the boundary between two phases of a material undergoing a phase change, for example the melting of a solid, such as ice to water. This is accomplished by solving heat equations in both regions, subject to given boundary and initial conditions. At the interface between the phases (in the classical problem) the temperature is set to the phase change temperature.
|
c_rq78e5sok0oa
|
Stefan problem
|
Summary
|
Stefan_problem
|
To close the mathematical system a further equation, the Stefan condition, is required. This is an energy balance which defines the position of the moving interface. Note that this evolving boundary is an unknown (hyper-)surface; hence, Stefan problems are examples of free boundary problems. Analogous problems occur, for example, in the study of porous media flow, mathematical finance and crystal growth from monomer solutions.
|
c_u30wxawc999c
|
Mean square
|
Summary
|
Mean_square
|
In mathematics and its applications, the mean square is normally defined as the arithmetic mean of the squares of a set of numbers or of a random variable. It may also be defined as the arithmetic mean of the squares of the deviations between a set of numbers and a reference value (e.g., a mean or an assumed mean of the data), in which case it may be known as the mean square deviation. When the reference value is the assumed true value, the result is known as the mean squared error. A typical estimate for the sample variance from a set of sample values $x_i$ uses a divisor of the number of values minus one, $n-1$, rather than $n$ as in a simple quadratic mean, and this is still called the "mean square" (e.g. in analysis of variance): $s^2 = \frac{1}{n-1}\sum (x_i - \bar{x})^2$. The second moment of a random variable, $E(X^2)$, is also called the mean square. The square root of a mean square is known as the root mean square (RMS or rms), and can be used as an estimate of the standard deviation of a random variable.
|
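The definitions above translate directly into a few lines of NumPy; the sketch below computes the mean square, the sample variance with divisor n − 1, and the root mean square for a small data set.

```python
# A short numerical illustration of the mean square, the sample variance with
# divisor n - 1, and the root mean square (a sketch, not from the entry).
import numpy as np

x = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])

mean_square = np.mean(x ** 2)                               # arithmetic mean of the squares
sample_var = np.sum((x - x.mean()) ** 2) / (len(x) - 1)     # divisor n - 1
rms = np.sqrt(mean_square)                                  # root mean square

print(mean_square, sample_var, rms)
print(np.isclose(sample_var, np.var(x, ddof=1)))            # True: matches NumPy's ddof=1
```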
c_g33afrr0gbue
|
Root Mean Square
|
Summary
|
Root_mean_square_voltage
|
In mathematics and its applications, the root mean square of a set of numbers $x_i$ (abbreviated as RMS or rms and denoted in formulas as either $x_{\mathrm{RMS}}$ or $\mathrm{RMS}_x$) is defined as the square root of the mean square (the arithmetic mean of the squares) of the set. The RMS is also known as the quadratic mean (denoted $M_2$) and is a particular case of the generalized mean. The RMS of a continuously varying function (denoted $f_{\mathrm{RMS}}$) can be defined in terms of an integral of the squares of the instantaneous values during a cycle. For alternating electric current, RMS is equal to the value of the constant direct current that would produce the same power dissipation in a resistive load. In estimation theory, the root-mean-square deviation of an estimator is a measure of the imperfection of the fit of the estimator to the data.
|
c_izjdtlvbevnp
|
Signed distance function
|
Summary
|
Signed_distance_function
|
In mathematics and its applications, the signed distance function (or oriented distance function) is the orthogonal distance of a given point x to the boundary of a set Ω in a metric space, with the sign determined by whether or not x is in the interior of Ω. The function has positive values at points x inside Ω, it decreases in value as x approaches the boundary of Ω where the signed distance function is zero, and it takes negative values outside of Ω. However, the alternative convention is also sometimes taken instead (i.e., negative inside Ω and positive outside).
|
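A minimal concrete example of a signed distance function is that of a disk. The sketch below uses the convention described in the entry above (positive inside Ω, zero on the boundary, negative outside); the widely used alternative convention simply flips the sign.

```python
# A signed distance function for a disk of radius r centred at c, using the
# convention of the entry above (positive inside, zero on the boundary, negative
# outside); the common alternative convention flips the sign.
import numpy as np

def sdf_disk(x, c=(0.0, 0.0), r=1.0):
    return r - np.linalg.norm(np.asarray(x, dtype=float) - np.asarray(c, dtype=float))

print(sdf_disk((0.0, 0.0)))   #  1.0  interior point
print(sdf_disk((1.0, 0.0)))   #  0.0  boundary point
print(sdf_disk((2.0, 0.0)))   # -1.0  exterior point
```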
c_yl10bfsf2mb0
|
Axiom of heredity
|
Summary
|
Axiom_of_heredity
|
In mathematics and logic, Ackermann set theory (AST) is an axiomatic set theory proposed by Wilhelm Ackermann in 1956. AST differs from Zermelo–Fraenkel set theory (ZF) in that it allows proper classes, that is, objects that are not sets, including a class of all sets. It replaces several of the standard ZF axioms for constructing new sets with a principle known as Ackermann's schema. Intuitively, the schema allows a new set to be constructed if it can be defined by a formula which does not refer to the class of all sets.
|
c_svhojgq60nzo
|
Axiom of heredity
|
Summary
|
Axiom_of_heredity
|
In its use of classes, AST differs from other alternative set theories such as Morse–Kelley set theory and Von Neumann–Bernays–Gödel set theory in that a class may be an element of another class. William N. Reinhardt established in 1970 that AST is effectively equivalent in strength to ZF, putting it on equal foundations. In particular, AST is consistent if and only if ZF is consistent.
|
c_trn58xw8se77
|
Corollary
|
Summary
|
Corollary
|
In mathematics and logic, a corollary (KORR-ə-lerr-ee, UK: korr-OL-ər-ee) is a theorem of less importance which can be readily deduced from a previous, more notable statement. A corollary could, for instance, be a proposition which is incidentally proved while proving another proposition; it might also be used more casually to refer to something which naturally or incidentally accompanies something else (e.g., violence as a corollary of revolutionary social changes).
|
c_jom03t28tnxd
|
Direct proof
|
Summary
|
Direct_proof
|
In mathematics and logic, a direct proof is a way of showing the truth or falsehood of a given statement by a straightforward combination of established facts, usually axioms, existing lemmas and theorems, without making any further assumptions. In order to directly prove a conditional statement of the form "If p, then q", it suffices to consider the situations in which the statement p is true. Logical deduction is employed to reason from assumptions to conclusion. The type of logic employed is almost invariably first-order logic, employing the quantifiers for all and there exists.
|
c_1iqddi12kl32
|
Direct proof
|
Summary
|
Direct_proof
|
Common proof rules used are modus ponens and universal instantiation. In contrast, an indirect proof may begin with certain hypothetical scenarios and then proceed to eliminate the uncertainties in each of these scenarios until an inescapable conclusion is forced. For example, instead of showing directly p ⇒ q, one proves its contrapositive ~q ⇒ ~p (one assumes ~q and shows that it leads to ~p). Since p ⇒ q and ~q ⇒ ~p are equivalent by the principle of transposition (see law of excluded middle), p ⇒ q is indirectly proved. Proof methods that are not direct include proof by contradiction, including proof by infinite descent. Direct proof methods include proof by exhaustion and proof by induction.
|
c_kwy9af9k6a39
|
Ordered logic
|
Summary
|
Ordered_logic
|
In mathematics and logic, a higher-order logic (abbreviated HOL) is a form of predicate logic that is distinguished from first-order logic by additional quantifiers and, sometimes, stronger semantics. Higher-order logics with their standard semantics are more expressive, but their model-theoretic properties are less well-behaved than those of first-order logic. The term "higher-order logic" is commonly used to mean higher-order simple predicate logic.
|
c_dy5k30waa4mz
|
Ordered logic
|
Summary
|
Ordered_logic
|
Here "simple" indicates that the underlying type theory is the theory of simple types, also called the simple theory of types. Leon Chwistek and Frank P. Ramsey proposed this as a simplification of the complicated and clumsy ramified theory of types specified in the Principia Mathematica by Alfred North Whitehead and Bertrand Russell. Simple types is sometimes also meant to exclude polymorphic and dependent types.
|
c_6qrwf2ujhn82
|
Vacuous truth
|
Summary
|
Vacuously_true
|
In mathematics and logic, a vacuous truth is a conditional or universal statement (a universal statement that can be converted to a conditional statement) that is true because the antecedent cannot be satisfied. It is sometimes said that a statement is vacuously true because it does not really say anything. For example, the statement "all cell phones in the room are turned off" will be true when no cell phones are in the room. In this case, the statement "all cell phones in the room are turned on" would also be vacuously true, as would the conjunction of the two: "all cell phones in the room are turned on and turned off", which would otherwise be incoherent and false.
|
c_43tpc5i42d9m
|
Vacuous truth
|
Summary
|
Vacuously_true
|
More formally, a relatively well-defined usage refers to a conditional statement (or a universal conditional statement) with a false antecedent. One example of such a statement is "if Tokyo is in France, then the Eiffel Tower is in Bolivia". Such statements are considered vacuous truths, because the fact that the antecedent is false prevents using the statement to infer anything about the truth value of the consequent.
|
c_wy48djh89wjb
|
Vacuous truth
|
Summary
|
Vacuously_true
|
In essence, a conditional statement, that is based on the material conditional, is true when the antecedent ("Tokyo is in France" in the example) is false regardless of whether the conclusion or consequent ("the Eiffel Tower is in Bolivia" in the example) is true or false because the material conditional is defined in that way. Examples common to everyday speech include conditional phrases used as idioms of improbability like "when hell freezes over..." and "when pigs can fly...", indicating that not before the given (impossible) condition is met will the speaker accept some respective (typically false or absurd) proposition. In pure mathematics, vacuously true statements are not generally of interest by themselves, but they frequently arise as the base case of proofs by mathematical induction.
|
c_oq8hrcsbqc8e
|
Vacuous truth
|
Summary
|
Vacuously_true
|
This notion has relevance in pure mathematics, as well as in any other field that uses classical logic. Outside of mathematics, statements which can be characterized informally as vacuously true can be misleading. Such statements make reasonable assertions about qualified objects which do not actually exist.
|
c_dax6rvsqjcv5
|
Vacuous truth
|
Summary
|
Vacuously_true
|
For example, a child might truthfully tell their parent "I ate every vegetable on my plate", when there were no vegetables on the child's plate to begin with. In this case, the parent can believe that the child has actually eaten some vegetables, even though that is not true. In addition, a vacuous truth is often used colloquially with absurd statements, either to confidently assert something (e.g. "the dog was red, or I'm a monkey's uncle" to strongly claim that the dog was red), or to express doubt, sarcasm, disbelief, incredulity or indignation (e.g. "yes, and I'm the King of England" to disagree with a previously made statement).
|
c_japhvu0qj9vf
|
Lexical ambiguity
|
Mathematical interpretation of ambiguity
|
Lexical_ambiguity > Mathematical interpretation of ambiguity
|
In mathematics and logic, ambiguity can be considered to be an instance of the logical concept of underdetermination—for example, $X = Y$ leaves open what the value of X is—while its opposite is a self-contradiction, also called inconsistency, paradoxicalness, or oxymoron, or in mathematics an inconsistent system—such as $X = 2,\ X = 3$, which has no solution. Logical ambiguity and self-contradiction are analogous to visual ambiguity and impossible objects, such as the Necker cube and impossible cube, or many of the drawings of M. C. Escher.
|
c_mpm2e9zf150u
|
Axiomatic proof
|
Summary
|
Axiomatic_framework
|
In mathematics and logic, an axiomatic system is any set of axioms from which some or all axioms can be used in conjunction to logically derive theorems. A theory is a consistent, relatively-self-contained body of knowledge which usually contains an axiomatic system and all its derived theorems. An axiomatic system that is completely described is a special kind of formal system. A formal theory is an axiomatic system (usually formulated within model theory) that describes a set of sentences that is closed under logical implication. A formal proof is a complete rendition of a mathematical proof within a formal system.
|
c_pum2p1tu2hgw
|
Infinitary operation
|
Summary
|
Finitary
|
In mathematics and logic, an operation is finitary if it has finite arity, i.e. if it has a finite number of input values. Similarly, an infinitary operation is one with an infinite number of input values. In standard mathematics, an operation is finitary by definition. Therefore these terms are usually only used in the context of infinitary logic.
|
c_tz41zacuc2hn
|
Effectiveness
|
Usage
|
Operational_effectiveness > Usage
|
In mathematics and logic, effective is used to describe metalogical methods that fit the criteria of an effective procedure. In group theory, a group element acts effectively (or faithfully) on a point, if that point is not fixed by the action. In physics, an effective theory is, similar to a phenomenological theory, a framework intended to explain certain (observed) effects without the claim that the theory correctly models the underlying (unobserved) processes.
|
c_h36pk8pkr4n4
|
Effectiveness
|
Usage
|
Operational_effectiveness > Usage
|
In heat transfer, effectiveness is a measure of the performance of a heat exchanger when using the NTU method. In medicine, effectiveness relates to how well a treatment works in practice, especially as shown in pragmatic clinical trials, as opposed to efficacy, which measures how well it works in explanatory clinical trials or research laboratory studies. In management, effectiveness relates to getting the right things done. Peter Drucker reminds us that "effectiveness can and must be learned". In human–computer interaction, effectiveness is defined as "the accuracy and completeness of users' tasks while using a system". In military science, effectiveness is a criterion used to assess changes determined in the target system, in its behavior, capability, or assets, tied to the attainment of an end state, achievement of an objective, or creation of an effect, while combat effectiveness is: "...the readiness of a military unit to engage in combat based on behavioral, operational, and leadership considerations. Combat effectiveness measures the ability of a military force to accomplish its objective and is one component of overall military effectiveness."
|
c_pn9x68r0woj0
|
Plural quantification
|
Summary
|
Multigrade_predicate
|
In mathematics and logic, plural quantification is the theory that an individual variable x may take on plural, as well as singular, values. As well as substituting individual objects such as Alice, the number 1, the tallest building in London etc. for x, we may substitute both Alice and Bob, or all the numbers between 0 and 10, or all the buildings in London over 20 stories. The point of the theory is to give first-order logic the power of set theory, but without any "existential commitment" to such objects as sets. The classic expositions are Boolos 1984 and Lewis 1991.
|
c_r66u2ki2wt46
|
Uniqueness quantification
|
Summary
|
Uniqueness_quantification
|
In mathematics and logic, the term "uniqueness" refers to the property of being the one and only object satisfying a certain condition. This sort of quantification is known as uniqueness quantification or unique existential quantification, and is often denoted with the symbols "∃!" or "∃=1". For example, the formal statement $\exists!\, n \in \mathbb{N}\,(n - 2 = 4)$ may be read as "there is exactly one natural number $n$ such that $n - 2 = 4$".
|
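Uniqueness quantification can be expanded into ordinary quantifiers and equality; a standard expansion (a well-known identity, stated here for illustration) is:

```latex
\exists!\,x\,P(x)
\;\Longleftrightarrow\;
\exists x\,\bigl(P(x)\wedge\forall y\,(P(y)\rightarrow y=x)\bigr)
```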
c_x7rfqbbngnsv
|
Mackey–Glass equations
|
Summary
|
Mackey-Glass_equations
|
In mathematics and mathematical biology, the Mackey–Glass equations, named after Michael Mackey and Leon Glass, refer to a family of delay differential equations whose behaviour manages to mimic both healthy and pathological behaviour in certain biological contexts, controlled by the equation's parameters. Originally, they were used to model the variation in the relative quantity of mature cells in the blood. In the equations, $P(t)$ represents the density of cells over time, and $\beta_0, \theta, n, \tau, \gamma$ are parameters of the equations. Equation (2), in particular, is notable in dynamical systems since it can result in chaotic attractors with various dimensions.
|
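The extracted entry above does not reproduce the equations themselves, so the sketch below integrates the commonly quoted form of the Mackey–Glass delay equation, dP/dt = βP(t−τ)/(1+P(t−τ)^n) − γP(t), with the classic parameter values that give chaotic behaviour; the exact parametrization with β₀ and θ used in the article may differ.

```python
# A rough sketch integrating a commonly quoted form of the Mackey-Glass equation,
#   dP/dt = beta * P(t - tau) / (1 + P(t - tau)**n) - gamma * P(t),
# by the explicit Euler method with a history buffer.  The parameter values are the
# classic "chaotic" choices; the article's parametrization with beta_0 and theta may differ.
import numpy as np

beta, gamma, n, tau = 0.2, 0.1, 10, 17.0
dt, steps = 0.1, 5000
delay = int(tau / dt)                       # number of grid points spanning the delay

P = np.empty(steps + delay)
P[:delay] = 1.2                             # constant history on [-tau, 0]

for i in range(delay, steps + delay - 1):
    P_tau = P[i - delay]                    # delayed value P(t - tau)
    dP = beta * P_tau / (1.0 + P_tau ** n) - gamma * P[i]
    P[i + 1] = P[i] + dt * dP

print(P[-5:])                               # a few values of the resulting trajectory
```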
c_gpncse912f5v
|
Boolean value
|
Summary
|
Boolean_operation_(Boolean_algebra)
|
In mathematics and mathematical logic, Boolean algebra is a branch of algebra. It differs from elementary algebra in two ways. First, the values of the variables are the truth values true and false, usually denoted 1 and 0, whereas in elementary algebra the values of the variables are numbers.
|
c_dvf6u64zj69x
|
Boolean value
|
Summary
|
Boolean_operation_(Boolean_algebra)
|
Second, Boolean algebra uses logical operators such as conjunction (and) denoted as ∧, disjunction (or) denoted as ∨, and the negation (not) denoted as ¬. Elementary algebra, on the other hand, uses arithmetic operators such as addition, multiplication, subtraction, and division. Boolean algebra is therefore a formal way of describing logical operations, in the same way that elementary algebra describes numerical operations.
|
c_k18gfgv3czcz
|
Boolean value
|
Summary
|
Boolean_operation_(Boolean_algebra)
|
Boolean algebra was introduced by George Boole in his first book The Mathematical Analysis of Logic (1847), and set forth more fully in his An Investigation of the Laws of Thought (1854). According to Huntington, the term "Boolean algebra" was first suggested by Henry M. Sheffer in 1913, although Charles Sanders Peirce gave the title "A Boolian Algebra with One Constant" to the first chapter of his "The Simplest Mathematics" in 1880. Boolean algebra has been fundamental in the development of digital electronics, and is provided for in all modern programming languages. It is also used in set theory and statistics.
|
c_24mncqk1epc8
|
Infimal convolution
|
Summary
|
Convex_conjugate
|
In mathematics and mathematical optimization, the convex conjugate of a function is a generalization of the Legendre transformation which applies to non-convex functions. It is also known as Legendre–Fenchel transformation, Fenchel transformation, or Fenchel conjugate (after Adrien-Marie Legendre and Werner Fenchel). It allows in particular for a far reaching generalization of Lagrangian duality.
|
c_ssb52a16m3m2
|
Slater integrals
|
Summary
|
Slater_integrals
|
In mathematics and mathematical physics, Slater integrals are certain integrals of products of three spherical harmonics. They occur naturally when applying an orthonormal basis of functions on the unit sphere that transform in a particular way under rotations in three dimensions. Such integrals are particularly useful when computing properties of atoms which have natural spherical symmetry. These integrals are defined below along with some of their mathematical properties.
|
c_91vy1tr2zqrk
|
Holonomic basis
|
Summary
|
Holonomic_basis
|
In mathematics and mathematical physics, a coordinate basis or holonomic basis for a differentiable manifold M is a set of basis vector fields {e1, ..., en} defined at every point P of a region of the manifold as $\mathbf{e}_{\alpha} = \lim_{\delta x^{\alpha}\to 0} \frac{\delta \mathbf{s}}{\delta x^{\alpha}}$, where δs is the displacement vector between the point P and a nearby point Q whose coordinate separation from P is δxα along the coordinate curve xα (i.e. the curve on the manifold through P for which the local coordinate xα varies and all other coordinates are constant). It is possible to make an association between such a basis and directional derivative operators. Given a parameterized curve C on the manifold defined by xα(λ) with the tangent vector u = uαeα, where uα = dxα/dλ, and a function f(xα) defined in a neighbourhood of C, the variation of f along C can be written as $\frac{df}{d\lambda} = \frac{dx^{\alpha}}{d\lambda}\frac{\partial f}{\partial x^{\alpha}} = u^{\alpha}\frac{\partial}{\partial x^{\alpha}} f$.
|
c_ud8u4hxiyh0b
|
Holonomic basis
|
Summary
|
Holonomic_basis
|
Since we have that u = uαeα, the identification is often made between a coordinate basis vector eα and the partial derivative operator ∂/∂xα, under the interpretation of vectors as operators acting on functions. A local condition for a basis {e1, ..., en} to be holonomic is that all mutual Lie derivatives vanish: $[\mathbf{e}_{\alpha}, \mathbf{e}_{\beta}] = \mathcal{L}_{\mathbf{e}_{\alpha}} \mathbf{e}_{\beta} = 0$. A basis that is not holonomic is called an anholonomic, non-holonomic or non-coordinate basis. Given a metric tensor g on a manifold M, it is in general not possible to find a coordinate basis that is orthonormal in any open region U of M. An obvious exception is when M is the real coordinate space Rn considered as a manifold with g being the Euclidean metric δij ei ⊗ ej at every point.
|
c_xjaxhwmgwz5g
|
Factorization algebra
|
Summary
|
Factorization_algebra
|
In mathematics and mathematical physics, a factorization algebra is an algebraic structure first introduced by Beilinson and Drinfel'd in an algebro-geometric setting as a reformulation of chiral algebras, and also studied in a more general setting by Costello to study quantum field theory.
|
c_ecg3ca91tez0
|
Complex spacetime
|
Summary
|
Complex_spacetime
|
In mathematics and mathematical physics, complex spacetime extends the traditional notion of spacetime described by real-valued space and time coordinates to complex-valued space and time coordinates. The notion is entirely mathematical with no physics implied, but should be seen as a tool, for instance, as exemplified by the Wick rotation.
|
c_qxa6valiikjy
|
Probabilistic potential theory
|
Summary
|
Probabilistic_potential_theory
|
In mathematics and mathematical physics, potential theory is the study of harmonic functions. The term "potential theory" was coined in 19th-century physics when it was realized that two fundamental forces of nature known at the time, namely gravity and the electrostatic force, could be modeled using functions called the gravitational potential and electrostatic potential, both of which satisfy Poisson's equation—or in the vacuum, Laplace's equation. There is considerable overlap between potential theory and the theory of Poisson's equation to the extent that it is impossible to draw a distinction between these two fields. The difference is more one of emphasis than subject matter and rests on the following distinction: potential theory focuses on the properties of the functions as opposed to the properties of the equation.
|
c_hamovbh2wg5q
|
Probabilistic potential theory
|
Summary
|
Probabilistic_potential_theory
|
For example, a result about the singularities of harmonic functions would be said to belong to potential theory whilst a result on how the solution depends on the boundary data would be said to belong to the theory of the Laplace equation. This is not a hard and fast distinction, and in practice there is considerable overlap between the two fields, with methods and results from one being used in the other. Modern potential theory is also intimately connected with probability and the theory of Markov chains.
|
c_prdyslztc9wb
|
Probabilistic potential theory
|
Summary
|
Probabilistic_potential_theory
|
In the continuous case, this is closely related to analytic theory. In the finite state space case, this connection can be introduced by introducing an electrical network on the state space, with resistance between points inversely proportional to transition probabilities and densities proportional to potentials. Even in the finite case, the analogue I-K of the Laplacian in potential theory has its own maximum principle, uniqueness principle, balance principle, and others.
|
c_0wv6h5k9kkqo
|
Raising and lowering indices
|
Summary
|
Index_gymnastics
|
In mathematics and mathematical physics, raising and lowering indices are operations on tensors which change their type. Raising and lowering indices are a form of index manipulation in tensor expressions.
|
c_jrt17zeh2i9h
|
Euler–Rodrigues formula
|
Summary
|
Euler–Rodrigues_parameters
|
In mathematics and mechanics, the Euler–Rodrigues formula describes the rotation of a vector in three dimensions. It is based on Rodrigues' rotation formula, but uses a different parametrization. The rotation is described by four Euler parameters due to Leonhard Euler. The Rodrigues formula (named after Olinde Rodrigues), a method of calculating the position of a rotated point, is used in some software applications, such as flight simulators and computer games.
|
c_liq922wi3wmn
|
Aluthge transform
|
Summary
|
Aluthge_transform
|
In mathematics and more precisely in functional analysis, the Aluthge transformation is an operation defined on the set of bounded operators of a Hilbert space. It was introduced by Ariyadasa Aluthge to study p-hyponormal linear operators.
|
c_6v2h6jutsa9a
|
Commuting probability
|
Summary
|
Commuting_probability
|
In mathematics and more precisely in group theory, the commuting probability (also called degree of commutativity or commutativity degree) of a finite group is the probability that two randomly chosen elements commute. It can be used to measure how close to abelian a finite group is. It can be generalized to infinite groups equipped with a suitable probability measure, and can also be generalized to other algebraic structures such as rings.
|
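The definition above can be checked by brute force for a small group. The sketch below computes the commuting probability of the symmetric group S₃, which equals 1/2 (S₃ has 3 conjugacy classes and order 6, and the commuting probability of a finite group is the number of conjugacy classes divided by the order).

```python
# Brute-force check (a sketch): the commuting probability of the symmetric group S3.
from itertools import permutations, product

G = list(permutations(range(3)))                 # the 6 elements of S3 in one-line notation

def compose(g, h):
    """Composition (g o h)(i) = g(h(i))."""
    return tuple(g[h[i]] for i in range(3))

commuting_pairs = sum(compose(g, h) == compose(h, g) for g, h in product(G, G))
print(commuting_pairs / len(G) ** 2)             # 0.5 = 3 conjugacy classes / order 6
```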
c_z9hnnrxhfi8v
|
Simple radical extension
|
Summary
|
Simple_radical_extension
|
In mathematics and more specifically in field theory, a radical extension of a field K is an extension of K that is obtained by adjoining a sequence of nth roots of elements.
|
c_pvt9nqttwxxr
|
Centering matrix
|
Summary
|
Centering_matrix
|
In mathematics and multivariate statistics, the centering matrix is a symmetric and idempotent matrix, which when multiplied with a vector has the same effect as subtracting the mean of the components of the vector from every component of that vector.
|
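The defining property is easy to verify numerically: with C = I − (1/n)J, multiplying a vector by C subtracts the mean of its components, and C is symmetric and idempotent. A short NumPy check:

```python
# A quick NumPy check (a sketch) of the centering matrix C = I - J/n.
import numpy as np

n = 5
C = np.eye(n) - np.ones((n, n)) / n

x = np.array([3.0, 1.0, 4.0, 1.0, 5.0])
print(np.allclose(C @ x, x - x.mean()))   # True: C subtracts the component mean
print(np.allclose(C @ C, C))              # True: C is idempotent
print(np.allclose(C, C.T))                # True: C is symmetric
```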
c_y6gon7ykvl5t
|
Adaptive stepsize
|
Summary
|
Adaptive_step_size
|
In mathematics and numerical analysis, an adaptive step size is used in some methods for the numerical solution of ordinary differential equations (including the special case of numerical integration) in order to control the errors of the method and to ensure stability properties such as A-stability. Using an adaptive stepsize is of particular importance when there is a large variation in the size of the derivative. For example, when modeling the motion of a satellite about the earth as a standard Kepler orbit, a fixed time-stepping method such as the Euler method may be sufficient.
|
c_fepfxv02p54d
|
Adaptive stepsize
|
Summary
|
Adaptive_step_size
|
However, things are more difficult if one wishes to model the motion of a spacecraft taking into account both the Earth and the Moon, as in the three-body problem. There, scenarios emerge where one can take large time steps when the spacecraft is far from the Earth and Moon, but if the spacecraft gets close to colliding with one of the planetary bodies, then small time steps are needed. Romberg's method and Runge–Kutta–Fehlberg are examples of numerical integration methods which use an adaptive stepsize.
|
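A toy illustration of adaptive step-size control (a sketch, not the Runge–Kutta–Fehlberg method itself) is step doubling with the Euler method: take one step of size h and two steps of size h/2, use their difference as an error estimate, and grow or shrink h accordingly.

```python
# A toy sketch (not the RKF method itself) of adaptive step-size control by step
# doubling with the Euler method: compare one step of size h with two of size h/2.
def adaptive_euler(f, t, y, t_end, h=0.1, tol=1e-4):
    while t < t_end:
        h = min(h, t_end - t)
        y_big = y + h * f(t, y)                            # one Euler step of size h
        y_mid = y + (h / 2) * f(t, y)
        y_small = y_mid + (h / 2) * f(t + h / 2, y_mid)    # two Euler steps of size h/2
        if abs(y_small - y_big) <= tol:                    # accept and enlarge the step
            t, y = t + h, y_small
            h *= 1.5
        else:                                              # reject and shrink the step
            h *= 0.5
    return y

# dy/dt = y with y(0) = 1, integrated to t = 1; the result approximates e = 2.718...
print(adaptive_euler(lambda t, y: y, 0.0, 1.0, 1.0))
```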
c_nhiwl84fvdai
|
Mexican hat wavelet
|
Summary
|
Mexican_hat_wavelet
|
In mathematics and numerical analysis, the Ricker wavelet $\psi(t) = \frac{2}{\sqrt{3\sigma}\,\pi^{1/4}}\left(1 - \left(\frac{t}{\sigma}\right)^{2}\right) e^{-\frac{t^{2}}{2\sigma^{2}}}$ is the negative normalized second derivative of a Gaussian function, i.e., up to scale and normalization, the second Hermite function. It is a special case of the family of continuous wavelets (wavelets used in a continuous wavelet transform) known as Hermitian wavelets. The Ricker wavelet is frequently employed to model seismic data, and as a broad spectrum source term in computational electrodynamics. It is usually only referred to as the Mexican hat wavelet in the Americas, due to taking the shape of a sombrero when used as a 2D image processing kernel.
|
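The 1-D formula quoted above is straightforward to implement; the sketch below evaluates it on a grid and checks numerically that the wavelet is L²-normalized.

```python
# The 1-D Ricker ("Mexican hat") wavelet as given in the entry above, evaluated on
# a grid; the Riemann sum of psi^2 confirms that it is L2-normalised.
import numpy as np

def ricker(t, sigma=1.0):
    prefactor = 2.0 / (np.sqrt(3.0 * sigma) * np.pi ** 0.25)
    return prefactor * (1.0 - (t / sigma) ** 2) * np.exp(-t ** 2 / (2.0 * sigma ** 2))

t = np.linspace(-8, 8, 4001)
psi = ricker(t)
print(np.sum(psi ** 2) * (t[1] - t[0]))   # approximately 1
```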
c_6thah0v7e9f1
|
Mexican hat wavelet
|
Summary
|
Mexican_hat_wavelet
|
It is also known as the Marr wavelet for David Marr. In two dimensions it takes the form $\psi(x,y) = \frac{1}{\pi\sigma^{4}}\left(1 - \frac{1}{2}\left(\frac{x^{2}+y^{2}}{\sigma^{2}}\right)\right) e^{-\frac{x^{2}+y^{2}}{2\sigma^{2}}}$. The multidimensional generalization of this wavelet is called the Laplacian of Gaussian function. In practice, this wavelet is sometimes approximated by the difference of Gaussians (DoG) function, because the DoG is separable and can therefore save considerable computation time in two or more dimensions.
|
c_qj509pjanqz2
|
Mexican hat wavelet
|
Summary
|
Mexican_hat_wavelet
|
The scale normalized Laplacian (in $L_{1}$-norm) is frequently used as a blob detector and for automatic scale selection in computer vision applications; see Laplacian of Gaussian and scale space. The relation between this Laplacian of the Gaussian operator and the difference-of-Gaussians operator is explained in appendix A in Lindeberg (2015). The Mexican hat wavelet can also be approximated by derivatives of cardinal B-splines.
|
c_u58ajeqo40ay
|
Van Wijngaarden transformation
|
Summary
|
Van_Wijngaarden_transformation
|
In mathematics and numerical analysis, the van Wijngaarden transformation is a variant on the Euler transform used to accelerate the convergence of an alternating series. One algorithm to compute Euler's transform runs as follows: compute a row of partial sums and form rows of averages between neighbors; the first column $s_{j,0}$ then contains the partial sums of the Euler transform. Adriaan van Wijngaarden's contribution was to point out that it is better not to carry this procedure through to the very end, but to stop two-thirds of the way.
|
c_q53bbf8xyocs
|
Van Wijngaarden transformation
|
Summary
|
Van_Wijngaarden_transformation
|
If $a_0, a_1, \ldots, a_{12}$ are available, then $s_{8,4}$ is almost always a better approximation to the sum than $s_{12,0}$. In many cases the diagonal terms do not converge in one cycle, so the process of averaging is to be repeated with the diagonal terms by bringing them into a row. (For example, this will be needed in a geometric series with ratio $-4$.) This process of successive averaging of the averages of partial sums can be replaced by using a formula to calculate the diagonal term directly. For a simple but concrete example, consider the Leibniz formula for π: the algorithm described above produces a triangular table of successive averages whose entries approach π/4 far faster than the original partial sums.
|
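The averaging scheme described above can be reproduced in a few lines. The sketch below applies it to the Leibniz series for π/4 with terms a₀, ..., a₁₂ and prints the error of the plain partial sum s₁₂,₀, of the entry s₈,₄ obtained by stopping part-way, and of the fully averaged value; the indexing follows the text, with row k holding the k-fold averages.

```python
# The averaging scheme applied to the Leibniz series pi/4 = 1 - 1/3 + 1/5 - ...
# (a sketch): row 0 holds the partial sums s_{j,0}, and row k+1 averages
# neighbouring entries of row k, so rows[k][j] = s_{j,k}.
from math import pi

terms = [(-1) ** k / (2 * k + 1) for k in range(13)]          # a_0, ..., a_12 with signs
rows = [[sum(terms[: j + 1]) for j in range(len(terms))]]     # partial sums s_{j,0}

while len(rows[-1]) > 1:
    prev = rows[-1]
    rows.append([(prev[j] + prev[j + 1]) / 2 for j in range(len(prev) - 1)])

print(abs(rows[0][12] - pi / 4))   # error of the plain partial sum s_{12,0}
print(abs(rows[4][8] - pi / 4))    # error of s_{8,4}: much smaller
print(abs(rows[-1][0] - pi / 4))   # error of the fully averaged single entry
```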
c_9mpybcoybjeh
|
Pseudo-Boolean function
|
Summary
|
Pseudo-Boolean_function
|
In mathematics and optimization, a pseudo-Boolean function is a function of the form $f: \mathbf{B}^{n} \to \mathbb{R}$, where B = {0, 1} is a Boolean domain and n is a nonnegative integer called the arity of the function. A Boolean function is then a special case, where the values are also restricted to 0 or 1.
|
c_7soo7vjp43mc
|
Consistent and inconsistent equations
|
Summary
|
Consistent_equations
|
In mathematics and particularly in algebra, a system of equations (either linear or nonlinear) is called consistent if there is at least one set of values for the unknowns that satisfies each equation in the system—that is, when substituted into each of the equations, they make each equation hold true as an identity. In contrast, a linear or nonlinear equation system is called inconsistent if there is no set of values for the unknowns that satisfies all of the equations. If a system of equations is inconsistent, then it is possible to manipulate and combine the equations in such a way as to obtain contradictory information, such as 2 = 1, or $x^{3} + y^{3} = 5$ and $x^{3} + y^{3} = 6$ (which implies 5 = 6). Both types of equation system, consistent and inconsistent, can be any of overdetermined (having more equations than unknowns), underdetermined (having fewer equations than unknowns), or exactly determined.
|
c_alyqd72ucaz0
|
Initial condition
|
Summary
|
Seed_value
|
In mathematics and particularly in dynamic systems, an initial condition, in some contexts called a seed value, is a value of an evolving variable at some point in time designated as the initial time (typically denoted t = 0). For a system of order k (the number of time lags in discrete time, or the order of the largest derivative in continuous time) and dimension n (that is, with n different evolving variables, which together can be denoted by an n-dimensional coordinate vector), generally nk initial conditions are needed in order to trace the system's variables forward through time. In both differential equations in continuous time and difference equations in discrete time, initial conditions affect the value of the dynamic variables (state variables) at any future time.
|
c_wr04w7xl4ugg
|
Initial condition
|
Summary
|
Seed_value
|
In continuous time, the problem of finding a closed form solution for the state variables as a function of time and of the initial conditions is called the initial value problem. A corresponding problem exists for discrete time situations. While a closed form solution is not always possible to obtain, future values of a discrete time system can be found by iterating forward one time period per iteration, though rounding error may make this impractical over long horizons.
|
c_v838pmpw72wa
|
Circumgon
|
Summary
|
Circumgon
|
In mathematics and particularly in elementary geometry, a circumgon is a geometric figure which circumscribes some circle, in the sense that it is the union of the outer edges of non-overlapping triangles each of which has a vertex at the center of the circle and opposite side on a line that is tangent to the circle. The limiting case in which part or all of the circumgon is a circular arc is permitted.
|
c_71wqojji75q1
|
Circumgon
|
Summary
|
Circumgon
|
A circumgonal region is the union of those triangular regions. Every triangle is a circumgonal region because it circumscribes the circle known as the incircle of the triangle. Every square is a circumgonal region.
|
c_1kejvtesvmt1
|
Circumgon
|
Summary
|
Circumgon
|
In fact, every regular polygon is a circumgonal region, as is more generally every tangential polygon. But not every polygon is a circumgonal region: for example, a non-square rectangle is not. A circumgonal region need not even be a convex polygon: for example, it could consist of three triangular wedges meeting only at the circle's center.
|
c_6qfozvp9sss8
|
Circumgon
|
Summary
|
Circumgon
|
All circumgons have common properties regarding area–perimeter ratios and centroids. It is these properties that make circumgons interesting objects of study in elementary geometry. The concept and the terminology of a circumgon were introduced and their properties investigated first by Tom M. Apostol and Mamikon A. Mnatsakanian in a paper published in 2004.
|
c_c0gmycmmvhsx
|
Pairwise Stone space
|
Summary
|
Pairwise_Stone_space
|
In mathematics and particularly in topology, a pairwise Stone space is a bitopological space $(X, \tau_1, \tau_2)$ which is pairwise compact, pairwise Hausdorff, and pairwise zero-dimensional. Pairwise Stone spaces are a bitopological version of the Stone spaces. Pairwise Stone spaces are closely related to spectral spaces. Theorem: If $(X, \tau)$ is a spectral space, then $(X, \tau, \tau^{*})$ is a pairwise Stone space, where $\tau^{*}$ is the de Groot dual topology of $\tau$. Conversely, if $(X, \tau_1, \tau_2)$ is a pairwise Stone space, then both $(X, \tau_1)$ and $(X, \tau_2)$ are spectral spaces.
|
c_060kiooovmhy
|
Spherical harmonic function
|
Summary
|
Spherical_harmonic_function
|
In mathematics and physical science, spherical harmonics are special functions defined on the surface of a sphere. They are often employed in solving partial differential equations in many scientific fields. Since the spherical harmonics form a complete set of orthogonal functions and thus an orthonormal basis, each function defined on the surface of a sphere can be written as a sum of these spherical harmonics. This is similar to periodic functions defined on a circle that can be expressed as a sum of circular functions (sines and cosines) via Fourier series.
|
c_bgvhkb77a33n
|
Spherical harmonic function
|
Summary
|
Spherical_harmonic_function
|
Like the sines and cosines in Fourier series, the spherical harmonics may be organized by (spatial) angular frequency, as seen in the rows of functions in the illustration on the right. Further, spherical harmonics are basis functions for irreducible representations of SO(3), the group of rotations in three dimensions, and thus play a central role in the group theoretic discussion of SO(3).
|
c_sxozxy3z9cby
|
Spherical harmonic function
|
Summary
|
Spherical_harmonic_function
|
Spherical harmonics originate from solving Laplace's equation in the spherical domains. Functions that are solutions to Laplace's equation are called harmonics. Despite their name, spherical harmonics take their simplest form in Cartesian coordinates, where they can be defined as homogeneous polynomials of degree $\ell$ in $(x, y, z)$ that obey Laplace's equation.
|
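The Cartesian characterization above (homogeneous polynomials satisfying Laplace's equation) can be checked symbolically; the sketch below verifies that a few degree-2 polynomials are harmonic.

```python
# A symbolic check that some degree-2 homogeneous polynomials are harmonic,
# i.e. satisfy Laplace's equation (the Cartesian characterization above).
import sympy as sp

x, y, z = sp.symbols('x y z')

def laplacian(f):
    return sp.diff(f, x, 2) + sp.diff(f, y, 2) + sp.diff(f, z, 2)

for poly in (x * y, x * z, x**2 - y**2, 2 * z**2 - x**2 - y**2):
    print(poly, '->', sp.simplify(laplacian(poly)))   # each Laplacian is 0
```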
c_a49qaj380syr
|
Spherical harmonic function
|
Summary
|
Spherical_harmonic_function
|
The connection with spherical coordinates arises immediately if one uses the homogeneity to extract a factor of radial dependence $r^{\ell}$ from the above-mentioned polynomial of degree $\ell$; the remaining factor can be regarded as a function of the spherical angular coordinates $\theta$ and $\varphi$ only, or equivalently of the orientational unit vector $\mathbf{r}$ specified by these angles. In this setting, they may be viewed as the angular portion of a set of solutions to Laplace's equation in three dimensions, and this viewpoint is often taken as an alternative definition. Notice, however, that spherical harmonics are not functions on the sphere which are harmonic with respect to the Laplace–Beltrami operator for the standard round metric on the sphere: the only harmonic functions in this sense on the sphere are the constants, since harmonic functions satisfy the maximum principle.
|
c_q9mwmmrrn4th
|
Spherical harmonic function
|
Summary
|
Spherical_harmonic_function
|
Spherical harmonics, as functions on the sphere, are eigenfunctions of the Laplace–Beltrami operator (see the section Higher dimensions below). A specific set of spherical harmonics, denoted $Y_{\ell}^{m}(\theta, \varphi)$ or $Y_{\ell}^{m}(\mathbf{r})$, are known as Laplace's spherical harmonics, as they were first introduced by Pierre Simon de Laplace in 1782.
|
c_kxnuahuvjasy
|
Spherical harmonic function
|
Summary
|
Spherical_harmonic_function
|
These functions form an orthogonal system, and are thus basic to the expansion of a general function on the sphere as alluded to above. Spherical harmonics are important in many theoretical and practical applications, including the representation of multipole electrostatic and electromagnetic fields, electron configurations, gravitational fields, geoids, the magnetic fields of planetary bodies and stars, and the cosmic microwave background radiation. In 3D computer graphics, spherical harmonics play a role in a wide variety of topics including indirect lighting (ambient occlusion, global illumination, precomputed radiance transfer, etc.) and modelling of 3D shapes.
|
c_sf2wz02nnscu
|
Canonical commutation relation algebra
|
Summary
|
Canonical_commutation_relation_algebra
|
In mathematics and physics, CCR algebras (after canonical commutation relations) and CAR algebras (after canonical anticommutation relations) arise from the quantum mechanical study of bosons and fermions respectively. They play a prominent role in quantum statistical mechanics and quantum field theory.
|
c_fnda6cezxryr
|
Laplace's equation
|
Summary
|
Laplace’s_equation
|
In mathematics and physics, Laplace's equation is a second-order partial differential equation named after Pierre-Simon Laplace, who first studied its properties. This is often written as $\nabla^{2} f = 0$ or $\Delta f = 0$, where $\Delta = \nabla \cdot \nabla = \nabla^{2}$ is the Laplace operator, $\nabla \cdot$ is the divergence operator (also symbolized "div"), $\nabla$ is the gradient operator (also symbolized "grad"), and $f(x, y, z)$ is a twice-differentiable real-valued function. The Laplace operator therefore maps a scalar function to another scalar function. If the right-hand side is specified as a given function, $h(x, y, z)$, we have $\Delta f = h$. This is called Poisson's equation, a generalization of Laplace's equation.
|
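A classic way to see Laplace's equation in action is to solve it numerically on a grid, where a discrete harmonic function is one in which every interior value equals the average of its four neighbours. The sketch below (a minimal illustration) uses Jacobi iteration on the unit square with the top edge held at 1 and the other edges at 0.

```python
# A minimal sketch: solve Laplace's equation on the unit square by Jacobi
# iteration, holding the top edge at 1 and the other edges at 0.  At convergence
# each interior value is the average of its four neighbours (discrete harmonicity).
import numpy as np

N = 50
u = np.zeros((N, N))
u[0, :] = 1.0                                  # boundary condition on the top edge

for _ in range(5000):                          # plain Jacobi sweeps
    u[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                            u[1:-1, :-2] + u[1:-1, 2:])

print(u[N // 2, N // 2])                       # centre value, close to 0.25 by symmetry
```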
c_3pit1xy51w2q
|
Laplace's equation
|
Summary
|
Laplace’s_equation
|
Laplace's equation and Poisson's equation are the simplest examples of elliptic partial differential equations. Laplace's equation is also a special case of the Helmholtz equation. The general theory of solutions to Laplace's equation is known as potential theory.
|
c_j0zvtusp79k2
|
Laplace's equation
|
Summary
|
Laplace’s_equation
|
The twice continuously differentiable solutions of Laplace's equation are the harmonic functions, which are important in multiple branches of physics, notably electrostatics, gravitation, and fluid dynamics. In the study of heat conduction, the Laplace equation is the steady-state heat equation. In general, Laplace's equation describes situations of equilibrium, or those that do not depend explicitly on time.
|
c_tkyhwpokcw9h
|
Lieb–Thirring conjecture
|
Summary
|
Lieb–Thirring_inequality
|
In mathematics and physics, Lieb–Thirring inequalities provide an upper bound on the sums of powers of the negative eigenvalues of a Schrödinger operator in terms of integrals of the potential. They are named after E. H. Lieb and W. E. Thirring. The inequalities are useful in studies of quantum mechanics and differential equations and imply, as a corollary, a lower bound on the kinetic energy of $N$ quantum mechanical particles that plays an important role in the proof of stability of matter.
|
c_71bxmg3rcbgt
|
Tensor diagram notation
|
Summary
|
Penrose's_graphical_notation
|
In mathematics and physics, Penrose graphical notation or tensor diagram notation is a (usually handwritten) visual depiction of multilinear functions or tensors proposed by Roger Penrose in 1971. A diagram in the notation consists of several shapes linked together by lines. The notation widely appears in modern quantum theory, particularly in matrix product states and quantum circuits. In particular, categorical quantum mechanics, which includes the ZX-calculus, is a fully comprehensive reformulation of quantum theory in terms of Penrose diagrams, and is now widely used in the quantum industry. The notation has been studied extensively by Predrag Cvitanović, who used it, along with Feynman's diagrams and other related notations, in developing "birdtracks", a group-theoretical diagram to classify the classical Lie groups. Penrose's notation has also been generalized using representation theory to spin networks in physics, and with the presence of matrix groups to trace diagrams in linear algebra.
|
c_8l7n74qyvync
|
Hamiltonian flow
|
Summary
|
Hamiltonian_vector_field
|
In mathematics and physics, a Hamiltonian vector field on a symplectic manifold is a vector field defined for any energy function or Hamiltonian. Named after the physicist and mathematician Sir William Rowan Hamilton, a Hamiltonian vector field is a geometric manifestation of Hamilton's equations in classical mechanics. The integral curves of a Hamiltonian vector field represent solutions to the equations of motion in the Hamiltonian form. The diffeomorphisms of a symplectic manifold arising from the flow of a Hamiltonian vector field are known as canonical transformations in physics and (Hamiltonian) symplectomorphisms in mathematics. Hamiltonian vector fields can be defined more generally on an arbitrary Poisson manifold. The Lie bracket of two Hamiltonian vector fields corresponding to functions f and g on the manifold is itself a Hamiltonian vector field, with the Hamiltonian given by the Poisson bracket of f and g.
|
c_rf2i1s4mhcw7
|
Global mode
|
Summary
|
Global_mode
|
In mathematics and physics, a global mode of a system is one in which the system executes coherent oscillations in time. Suppose a quantity $y(x,t)$ which depends on space $x$ and time $t$ is governed by some partial differential equation which does not have an explicit dependence on $t$. Then a global mode is a solution of this PDE of the form $y(x,t) = \hat{y}(x) e^{i\omega t}$, for some frequency $\omega$.
|
c_y5f0g23p652a
|
Global mode
|
Summary
|
Global_mode
|
If $\omega$ is complex, then the imaginary part corresponds to the mode exhibiting exponential growth or exponential decay. The concept of a global mode can be compared to that of a normal mode; the PDE may be thought of as a dynamical system of infinitely many equations coupled together. Global modes are used in the stability analysis of hydrodynamical systems.
|
c_edvng0xi1fns
|
Global mode
|
Summary
|
Global_mode
|
Philip Drazin introduced the concept of a global mode in his 1974 paper, and gave a technique for finding the normal modes of a linear PDE problem in which the coefficients or geometry vary slowly in x {\displaystyle x} . This technique is based on the WKBJ approximation, which is a special case of multiple-scale analysis. His method extends the Briggs–Bers technique, which gives a stability analysis for linear PDEs with constant coefficients.
|
c_dhtv2lfsr6cs
|
Non-perturbative
|
Summary
|
Non-perturbative
|
In mathematics and physics, a non-perturbative function or process is one that cannot be described by perturbation theory. An example is the function $f(x) = e^{-1/x^{2}}$, which does not have a Taylor series at x = 0. Every coefficient of the Taylor expansion around x = 0 is exactly zero, but the function is non-zero if x ≠ 0.
|
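The flatness of the example function at the origin can be checked symbolically; the sketch below uses SymPy to confirm that the first few derivatives of exp(−1/x²) all tend to 0 as x → 0, so every Taylor coefficient at the origin vanishes.

```python
# A symbolic check that f(x) = exp(-1/x^2) is flat at the origin: its first few
# derivatives all tend to 0 as x -> 0, so every Taylor coefficient there vanishes.
import sympy as sp

x = sp.symbols('x')
f = sp.exp(-1 / x**2)

for k in range(4):
    print(k, sp.limit(sp.diff(f, x, k), x, 0))   # prints 0 for each k
```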
c_nhar1hrsjmaa
|
Non-perturbative
|
Summary
|
Non-perturbative
|
In physics, such functions arise for phenomena which are impossible to understand by perturbation theory, at any finite order. In quantum field theory, 't Hooft–Polyakov monopoles, domain walls, flux tubes, and instantons are examples. A concrete, physical example is given by the Schwinger effect, whereby a strong electric field may spontaneously decay into electron-positron pairs.
|
c_s6pjae0a5atr
|
Non-perturbative
|
Summary
|
Non-perturbative
|
For not too strong fields, the rate per unit volume of this process is given by $\Gamma = \frac{(eE)^{2}}{4\pi^{3}} e^{-\frac{\pi m^{2}}{eE}}$, which cannot be expanded in a Taylor series in the electric charge $e$, or the electric field strength $E$. Here $m$ is the mass of an electron and we have used units where $c = \hbar = 1$. In theoretical physics, a non-perturbative solution is one that cannot be described in terms of perturbations about some simple background, such as empty space. For this reason, non-perturbative solutions and theories yield insights into areas and subjects that perturbative methods cannot reveal.
|
c_ucve0163qv3k
|
Exact solutions of nonlinear partial differential equations
|
Summary
|
Exact_solutions_of_nonlinear_partial_differential_equations
|
In mathematics and physics, a nonlinear partial differential equation is a partial differential equation with nonlinear terms. They describe many different physical systems, ranging from gravitation to fluid dynamics, and have been used in mathematics to solve problems such as the Poincaré conjecture and the Calabi conjecture. They are difficult to study: almost no general techniques exist that work for all such equations, and usually each individual equation has to be studied as a separate problem. The distinction between a linear and a nonlinear partial differential equation is usually made in terms of the properties of the operator that defines the PDE itself.
|