content_id | page_title | section_title | breadcrumb | text
---|---|---|---|---|
c_8d2cqab2s2j7
|
Matrix inversion lemma
|
Summary
|
Woodbury_matrix_identity
|
In mathematics (specifically linear algebra), the Woodbury matrix identity, named after Max A. Woodbury, says that the inverse of a rank-k correction of some matrix can be computed by doing a rank-k correction to the inverse of the original matrix. Alternative names for this formula are the matrix inversion lemma, Sherman–Morrison–Woodbury formula, or just Woodbury formula. However, the identity appeared in several papers before the Woodbury report. The Woodbury matrix identity is {\displaystyle \left(A+UCV\right)^{-1}=A^{-1}-A^{-1}U\left(C^{-1}+VA^{-1}U\right)^{-1}VA^{-1},} where A, U, C and V are conformable matrices: A is n×n, C is k×k, U is n×k, and V is k×n. This can be derived using blockwise matrix inversion.
|
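The Woodbury identity above can be sanity-checked exactly over the rationals. The sketch below is illustrative only; the helper names (`matmul`, `inverse`, etc.) are my own, and it verifies the identity for one 2×2 matrix A with a rank-1 correction (n = 2, k = 1):

```python
from fractions import Fraction

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def matadd(X, Y):
    return [[X[i][j] + Y[i][j] for j in range(len(X[0]))] for i in range(len(X))]

def matsub(X, Y):
    return [[X[i][j] - Y[i][j] for j in range(len(X[0]))] for i in range(len(X))]

def inverse(M):
    # Gauss-Jordan elimination over exact rationals.
    n = len(M)
    A = [[Fraction(M[i][j]) for j in range(n)] +
         [Fraction(1 if i == j else 0) for j in range(n)] for i in range(n)]
    for col in range(n):
        pivot = next(r for r in range(col, n) if A[r][col] != 0)
        A[col], A[pivot] = A[pivot], A[col]
        p = A[col][col]
        A[col] = [x / p for x in A[col]]
        for r in range(n):
            if r != col and A[r][col] != 0:
                f = A[r][col]
                A[r] = [a - f * b for a, b in zip(A[r], A[col])]
    return [row[n:] for row in A]

# A is n x n, C is k x k, U is n x k, V is k x n, with n = 2 and k = 1.
A = [[4, 1], [2, 3]]
U = [[1], [2]]
C = [[5]]
V = [[3, 1]]

# Left side: invert the rank-1-corrected matrix directly.
lhs = inverse(matadd(A, matmul(matmul(U, C), V)))
# Right side: the Woodbury formula, using only A^-1 and a k x k inverse.
Ainv = inverse(A)
middle = inverse(matadd(inverse(C), matmul(matmul(V, Ainv), U)))
rhs = matsub(Ainv, matmul(matmul(matmul(matmul(Ainv, U), middle), V), Ainv))
assert lhs == rhs
```

Because the arithmetic uses `Fraction`, the two sides agree exactly, not just to floating-point tolerance.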
c_n1i7j5r1rmnc
|
Matrix inversion lemma
|
Summary
|
Woodbury_matrix_identity
|
While the identity is primarily used on matrices, it holds in a general ring or in an Ab-category. The Woodbury matrix identity allows cheap computation of inverses and solutions to linear equations. However, little is known about the numerical stability of the formula. There are no published results concerning its error bounds. Anecdotal evidence suggests that it may diverge even for seemingly benign examples (when both the original and modified matrices are well-conditioned).
|
c_i3vorkraa8wv
|
Positive operator
|
Summary
|
Positive_operator
|
In mathematics (specifically linear algebra, operator theory, and functional analysis) as well as physics, a linear operator {\displaystyle A} acting on an inner product space is called positive-semidefinite (or non-negative) if, for every {\displaystyle x\in \operatorname {Dom} (A)}, {\displaystyle \langle Ax,x\rangle \in \mathbb {R} } and {\displaystyle \langle Ax,x\rangle \geq 0}, where {\displaystyle \operatorname {Dom} (A)} is the domain of {\displaystyle A}. Positive-semidefinite operators are denoted {\displaystyle A\geq 0}. The operator is said to be positive-definite, and written {\displaystyle A>0}, if {\displaystyle \langle Ax,x\rangle >0} for all {\displaystyle x\in \operatorname {Dom} (A)\setminus \{0\}}. In physics (specifically quantum mechanics), such operators represent quantum states, via the density matrix formalism.
|
c_0ntyx9jg0xsw
|
Multiple integrals
|
Summary
|
Double_integration
|
In mathematics (specifically multivariable calculus), a multiple integral is a definite integral of a function of several real variables, for instance, f(x, y) or f(x, y, z). Integrals of a function of two variables over a region in {\displaystyle \mathbb {R} ^{2}} (the real-number plane) are called double integrals, and integrals of a function of three variables over a region in {\displaystyle \mathbb {R} ^{3}} (real-number 3D space) are called triple integrals. For multiple integrals of a single-variable function, see the Cauchy formula for repeated integration.
|
c_glk15gp7969k
|
Alexander's theorem
|
Summary
|
Alexander's_theorem
|
In mathematics, Alexander's theorem states that every knot or link can be represented as a closed braid; that is, a braid in which the corresponding ends of the strings are connected in pairs. The theorem is named after James Waddell Alexander II, who published a proof in 1923. Braids were first considered as a tool of knot theory by Alexander. His theorem gives a positive answer to the question: is it always possible to transform a given knot into a closed braid?
|
c_kve32bft1by9
|
Alexander's theorem
|
Summary
|
Alexander's_theorem
|
A good construction example is found in Colin Adams's book. However, the correspondence between knots and braids is clearly not one-to-one: a knot may have many braid representations. For example, conjugate braids yield equivalent knots. This leads to a second fundamental question: which closed braids represent the same knot type? This question is addressed in Markov's theorem, which gives 'moves' relating any two closed braids that represent the same knot.
|
c_1v7hf8u3qif4
|
Antoine's horned sphere
|
Summary
|
Antoine's_horned_sphere
|
In mathematics Antoine's necklace is a topological embedding of the Cantor set in 3-dimensional Euclidean space, whose complement is not simply connected. It also serves as a counterexample to the claim that all Cantor spaces are ambiently homeomorphic to each other. It was discovered by Louis Antoine (1921).
|
c_8gxvpef9uvx1
|
Haboush's theorem
|
Summary
|
Haboush's_theorem
|
In mathematics, Haboush's theorem, often still referred to as the Mumford conjecture, states that for any semisimple algebraic group G over a field K, and for any linear representation ρ of G on a K-vector space V, given v ≠ 0 in V that is fixed by the action of G, there is a G-invariant polynomial F on V, without constant term, such that F(v) ≠ 0. The polynomial can be taken to be homogeneous, in other words an element of a symmetric power of the dual of V, and if the characteristic is p > 0 the degree of the polynomial can be taken to be a power of p. When K has characteristic 0 this was well known; in fact Weyl's theorem on the complete reducibility of the representations of G implies that F can even be taken to be linear. Mumford's conjecture about the extension to prime characteristic p was proved by W. J. Haboush (1975), about a decade after the problem had been posed by David Mumford, in the introduction to the first edition of his book Geometric Invariant Theory.
|
c_zcvkk2im2kr8
|
Nef polygon
|
Summary
|
Nef_polygon
|
In mathematics, Nef polygons and Nef polyhedra are the sets of polygons and polyhedra which can be obtained from a finite set of halfplanes (halfspaces) by Boolean operations of set intersection and set complement. The objects are named after the Swiss mathematician Walter Nef (1919–2013), who introduced them in his 1978 book on polyhedra. Since other Boolean operations, such as union or difference, may be expressed via intersection and complement operations, the sets of Nef polygons (polyhedra) are closed with respect to these operations as well. In addition, the class of Nef polyhedra is closed with respect to the topological operations of taking closure, interior, exterior, and boundary. Boolean operations, such as difference or intersection, may produce non-regular sets. However, the class of Nef polyhedra is also closed with respect to the operation of regularization. Convex polytopes are a special subclass of Nef polyhedra, being the set of polyhedra which are the intersections of a finite set of half-planes.
|
c_ya8bxn487yfc
|
Cauchy–Euler operator
|
Summary
|
Cauchy–Euler_operator
|
In mathematics, a Cauchy–Euler operator is a differential operator of the form {\displaystyle p(x)\cdot {d \over dx}} for a polynomial p. It is named after Augustin-Louis Cauchy and Leonhard Euler. The simplest example is that in which p(x) = x, which has eigenvalues n = 0, 1, 2, 3, ... and corresponding eigenfunctions x^n.
|
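The eigenvalue relation (x d/dx) x^n = n x^n can be checked numerically. The sketch below uses a central finite difference for the derivative (an approximation, and the function name is my own):

```python
def cauchy_euler_apply(f, x, h=1e-6):
    # Apply the simplest Cauchy-Euler operator x * d/dx to f at x,
    # using a central finite difference for the derivative.
    return x * (f(x + h) - f(x - h)) / (2 * h)

for n in range(5):
    f = lambda x, n=n: x ** n
    x0 = 1.7
    # Eigenvalue relation: (x d/dx) x^n = n * x^n
    assert abs(cauchy_euler_apply(f, x0) - n * f(x0)) < 1e-4
```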
c_j39c7aykjwjz
|
Dirac structure
|
Summary
|
Dirac_structure
|
In mathematics, a Dirac structure is a geometric construction generalizing both symplectic structures and Poisson structures, and having several applications to mechanics. It is based on the notion of constraint introduced by Paul Dirac and was first introduced by Ted Courant and Alan Weinstein. In more detail, let V be a real vector space, and V* its dual. A (linear) Dirac structure on V is a linear subspace D of {\displaystyle V\times V^{*}} satisfying: for all {\displaystyle (v,\alpha )\in D} one has {\displaystyle \left\langle \alpha ,\,v\right\rangle =0}; and D is maximal with respect to this property. In particular, if V is finite dimensional then the second criterion is satisfied if {\displaystyle \dim D=\dim V}.
|
c_rztmrarnaro0
|
Dirac structure
|
Summary
|
Dirac_structure
|
(Similar definitions can be made for vector spaces over other fields.) An alternative (equivalent) definition often used is that {\displaystyle D} satisfies {\displaystyle D=D^{\perp }}, where orthogonality is with respect to the symmetric bilinear form on {\displaystyle V\times V^{*}} given by {\displaystyle {\bigl \langle }(u,\alpha ),\,(v,\beta ){\bigr \rangle }=\left\langle \alpha ,v\right\rangle +\left\langle \beta ,u\right\rangle .}
|
c_7jnktpfn5nhi
|
Lie coalgebra
|
Summary
|
Lie_coalgebra
|
In mathematics a Lie coalgebra is the dual structure to a Lie algebra. In finite dimensions, these are dual objects: the dual vector space to a Lie algebra naturally has the structure of a Lie coalgebra, and conversely.
|
c_8q5pwmymvhda
|
Polynomial solutions of P-recursive equations
|
Summary
|
Polynomial_solutions_of_P-recursive_equations
|
In mathematics, a P-recursive equation can be solved for polynomial solutions. Sergei A. Abramov in 1989 and Marko Petkovšek in 1992 described an algorithm which finds all polynomial solutions of those recurrence equations with polynomial coefficients. The algorithm computes a degree bound for the solution in a first step. In a second step, an ansatz for a polynomial of this degree is used and the unknown coefficients are computed by a system of linear equations. This article describes this algorithm. In 1995, Abramov, Bronstein and Petkovšek showed that the polynomial case can be solved more efficiently by considering power series solutions of the recurrence equation in a specific power basis (i.e. not the ordinary basis {\textstyle (x^{n})_{n\in \mathbb {N} }}). Other algorithms which compute rational or hypergeometric solutions of a linear recurrence equation with polynomial coefficients also use algorithms which compute polynomial solutions.
|
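The two-step scheme (degree bound, then linear system for the ansatz coefficients) can be sketched on a toy recurrence. This is not the Abramov–Petkovšek algorithm itself, only an illustration of the ansatz step, with my own function names; it finds a polynomial y with y(n+1) − y(n) = 2n + 1:

```python
from fractions import Fraction

def solve_linear(M, b):
    # Gaussian elimination over exact rationals (square, nonsingular system).
    n = len(M)
    A = [[Fraction(x) for x in row] + [Fraction(v)] for row, v in zip(M, b)]
    for c in range(n):
        p = next(r for r in range(c, n) if A[r][c] != 0)
        A[c], A[p] = A[p], A[c]
        A[c] = [x / A[c][c] for x in A[c]]
        for r in range(n):
            if r != c and A[r][c] != 0:
                f = A[r][c]
                A[r] = [u - f * v for u, v in zip(A[r], A[c])]
    return [A[r][n] for r in range(n)]

# Step 1 (degree bound): differencing drops the degree by one, and the
# right-hand side 2n + 1 has degree 1, so deg y <= 2.
# Step 2 (ansatz): y(n) = a1*n + a2*n^2 (the constant term cancels in
# y(n+1) - y(n) and stays free); equate at enough sample points.
samples = [0, 1]
M = [[(n + 1) ** i - n ** i for i in (1, 2)] for n in samples]
b = [2 * n + 1 for n in samples]
a1, a2 = solve_linear(M, b)
assert (a1, a2) == (0, 1)   # y(n) = n^2, plus an arbitrary constant
```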
c_easc89x6jrbo
|
P-recursive equation
|
Summary
|
P-recursive_equation
|
In mathematics a P-recursive equation is a linear equation of sequences where the coefficient sequences can be represented as polynomials. P-recursive equations are linear recurrence equations (or linear recurrence relations or linear difference equations) with polynomial coefficients. These equations play an important role in different areas of mathematics, specifically in combinatorics.
|
c_4pxhbutoafus
|
P-recursive equation
|
Summary
|
P-recursive_equation
|
The sequences which are solutions of these equations are called holonomic, P-recursive or D-finite. From the late 1980s, the first algorithms were developed to find solutions for these equations. Sergei A. Abramov, Marko Petkovšek and Mark van Hoeij described algorithms to find polynomial, rational, hypergeometric and d'Alembertian solutions.
|
c_n4ogpkjvl6ol
|
Steinberg symbol
|
Summary
|
Steinberg_symbol
|
In mathematics, a Steinberg symbol is a pairing function which generalises the Hilbert symbol and plays a role in the algebraic K-theory of fields. It is named after mathematician Robert Steinberg. For a field F we define a Steinberg symbol (or simply a symbol) to be a function {\displaystyle (\cdot ,\cdot ):F^{*}\times F^{*}\rightarrow G}, where G is an abelian group, written multiplicatively, such that: {\displaystyle (\cdot ,\cdot )} is bimultiplicative; and if {\displaystyle a+b=1} then {\displaystyle (a,b)=1}. The symbols on F derive from a "universal" symbol, which may be regarded as taking values in {\displaystyle F^{*}\otimes F^{*}/\langle a\otimes 1-a\rangle }. By a theorem of Matsumoto, this group is {\displaystyle K_{2}F} and is part of the Milnor K-theory for a field.
|
c_4qtqr0apmeht
|
Yetter–Drinfeld category
|
Summary
|
Yetter–Drinfeld_category
|
In mathematics a Yetter–Drinfeld category is a special type of braided monoidal category. It consists of modules over a Hopf algebra which satisfy some additional axioms.
|
c_yh05d0pfkm6z
|
Cocycle
|
Summary
|
Cocycle
|
In mathematics a cocycle is a closed cochain. Cocycles are used in algebraic topology to express obstructions (for example, to integrating a differential equation on a closed manifold). They are likewise used in group cohomology. In autonomous dynamical systems, cocycles are used to describe particular kinds of map, as in the Oseledets theorem.
|
c_ynw9a2b8h3ts
|
Group structure and the axiom of choice
|
Summary
|
Group_structure_and_the_axiom_of_choice
|
In mathematics, a group is a set together with a binary operation on the set called multiplication that obeys the group axioms. The axiom of choice is an axiom of ZFC set theory which in one form states that every set can be well-ordered. In ZF set theory, i.e. ZFC without the axiom of choice, the following statements are equivalent: (1) for every nonempty set X there exists a binary operation • such that (X, •) is a group; (2) the axiom of choice is true.
|
c_vogz43mby46o
|
System of linear inequalities
|
Summary
|
Linear_inequality
|
In mathematics, a linear inequality is an inequality which involves a linear function. A linear inequality contains one of the symbols of inequality: < (less than), > (greater than), ≤ (less than or equal to), ≥ (greater than or equal to), or ≠ (not equal to). A linear inequality looks exactly like a linear equation, with the inequality sign replacing the equality sign.
|
c_qgxq9tmx6e9t
|
Power closed
|
Summary
|
Power_closed
|
In mathematics, a p-group {\displaystyle G} is called power closed if for every section {\displaystyle H} of {\displaystyle G} the product of {\displaystyle p^{k}}th powers is again a {\displaystyle p^{k}}th power. Regular p-groups are an example of power closed groups. On the other hand, powerful p-groups, for which the product of {\displaystyle p^{k}}th powers is again a {\displaystyle p^{k}}th power, are not power closed, as this property does not hold for all sections of powerful p-groups. The power closed 2-groups of exponent at least eight are described in (Mann 2005, Th. 16).
|
c_zkrolcthxlal
|
Partial differential algebraic equation
|
Summary
|
Partial_differential_algebraic_equation
|
In mathematics a partial differential algebraic equation (PDAE) set is an incomplete system of partial differential equations that is closed with a set of algebraic equations.
|
c_rai5ej77trdb
|
Polydivisible number
|
Summary
|
Polydivisible_number
|
In mathematics a polydivisible number (or magic number) is a number in a given number base with digits abcde... that has the following properties: Its first digit a is not 0. The number formed by its first two digits ab is a multiple of 2. The number formed by its first three digits abc is a multiple of 3. The number formed by its first four digits abcd is a multiple of 4. etc.
|
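The prefix-divisibility property above translates directly into code. A minimal sketch (function name my own), checked against the well-known base-10 polydivisible number 3816547290:

```python
def is_polydivisible(n, base=10):
    # The number formed by the first k digits must be divisible by k.
    digits = []
    m = n
    while m:
        digits.append(m % base)
        m //= base
    digits.reverse()
    if not digits or digits[0] == 0:
        return False
    prefix = 0
    for k, d in enumerate(digits, start=1):
        prefix = prefix * base + d
        if prefix % k != 0:
            return False
    return True

assert is_polydivisible(3816547290)   # a well-known base-10 example
assert not is_polydivisible(125)      # 125 is not a multiple of 3
```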
c_c79vb6yogv2d
|
Primitive abundant number
|
Summary
|
Primitive_abundant_number
|
In mathematics, a primitive abundant number is an abundant number whose proper divisors are all deficient numbers. For example, 20 is a primitive abundant number because: The sum of its proper divisors is 1 + 2 + 4 + 5 + 10 = 22, so 20 is an abundant number. The sums of the proper divisors of 1, 2, 4, 5 and 10 are 0, 1, 3, 1 and 8 respectively, so each of these numbers is a deficient number. The first few primitive abundant numbers are: 20, 70, 88, 104, 272, 304, 368, 464, 550, 572 ... (sequence A071395 in the OEIS). The smallest odd primitive abundant number is 945. A variant definition is abundant numbers having no abundant proper divisor (sequence A091191 in the OEIS). It starts: 12, 18, 20, 30, 42, 56, 66, 70, 78, 88, 102, 104, 114
|
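The worked example for 20 can be reproduced with a few small helpers (names my own; a brute-force sketch, not an efficient implementation):

```python
def proper_divisor_sum(n):
    return sum(d for d in range(1, n) if n % d == 0)

def is_abundant(n):
    return proper_divisor_sum(n) > n

def is_deficient(n):
    return proper_divisor_sum(n) < n

def is_primitive_abundant(n):
    # Abundant, and every proper divisor is deficient.
    return is_abundant(n) and all(
        is_deficient(d) for d in range(1, n) if n % d == 0)

assert is_primitive_abundant(20)
assert not is_primitive_abundant(12)   # 12 is abundant, but its divisor 6 is perfect
assert [n for n in range(1, 110) if is_primitive_abundant(n)] == [20, 70, 88, 104]
```

The last assertion recovers the start of the sequence listed above (A071395); note how 12, which is in the variant sequence A091191, is excluded here because its divisor 6 is perfect rather than deficient.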
c_d2wgvnvg8lfi
|
Radial basis function
|
Summary
|
Radial_basis_function
|
In mathematics, a radial basis function (RBF) is a real-valued function {\textstyle \varphi } whose value depends only on the distance between the input and some fixed point: either the origin, so that {\textstyle \varphi (\mathbf {x} )={\hat {\varphi }}(\left\|\mathbf {x} \right\|)}, or some other fixed point {\textstyle \mathbf {c} }, called a center, so that {\textstyle \varphi (\mathbf {x} )={\hat {\varphi }}(\left\|\mathbf {x} -\mathbf {c} \right\|)}. Any function {\textstyle \varphi } that satisfies the property {\textstyle \varphi (\mathbf {x} )={\hat {\varphi }}(\left\|\mathbf {x} \right\|)} is a radial function. The distance is usually Euclidean distance, although other metrics are sometimes used.
|
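A common concrete choice is the Gaussian RBF. The sketch below (function name and shape parameter `epsilon` are my own choices) shows the defining radial property: the value depends only on the distance to the center:

```python
import math

def gaussian_rbf(x, center, epsilon=1.0):
    # Gaussian radial basis function: phi(x) = exp(-(eps * ||x - c||)^2).
    r = math.dist(x, center)
    return math.exp(-(epsilon * r) ** 2)

# Radial: equal distances from the center give equal values.
assert gaussian_rbf((1.0, 0.0), (0.0, 0.0)) == gaussian_rbf((0.0, 1.0), (0.0, 0.0))
assert gaussian_rbf((0.0, 0.0), (0.0, 0.0)) == 1.0
```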
c_fdciznnhhqmg
|
Radial basis function
|
Summary
|
Radial_basis_function
|
They are often used as a collection { φ k } k {\displaystyle \{\varphi _{k}\}_{k}} which forms a basis for some function space of interest, hence the name. Sums of radial basis functions are typically used to approximate given functions. This approximation process can also be interpreted as a simple kind of neural network; this was the context in which they were originally applied to machine learning, in work by David Broomhead and David Lowe in 1988, which stemmed from Michael J. D. Powell's seminal research from 1977. RBFs are also used as a kernel in support vector classification. The technique has proven effective and flexible enough that radial basis functions are now applied in a variety of engineering applications.
|
c_x6bol0ynw945
|
Regular Hadamard matrices
|
Summary
|
Regular_Hadamard_matrices
|
In mathematics, a regular Hadamard matrix is a Hadamard matrix whose row and column sums are all equal. While the order of a Hadamard matrix must be 1, 2, or a multiple of 4, regular Hadamard matrices carry the further restriction that the order be a square number. The excess, denoted E(H), of a Hadamard matrix H of order n is defined to be the sum of the entries of H. The excess satisfies the bound |E(H)| ≤ n^{3/2}. A Hadamard matrix attains this bound if and only if it is regular.
|
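A small concrete check of these definitions (the particular 4×4 matrix below is my own illustrative choice): it is Hadamard, regular with all row and column sums equal to 2, and its excess attains the bound n^{3/2} = 8 for n = 4:

```python
H = [
    [ 1,  1,  1, -1],
    [ 1,  1, -1,  1],
    [ 1, -1,  1,  1],
    [-1,  1,  1,  1],
]

def is_hadamard(M):
    # Entries are +-1 and rows are pairwise orthogonal: H * H^T = n * I.
    n = len(M)
    return all(
        sum(M[i][k] * M[j][k] for k in range(n)) == (n if i == j else 0)
        for i in range(n) for j in range(n))

row_sums = [sum(row) for row in H]
col_sums = [sum(col) for col in zip(*H)]
excess = sum(row_sums)

assert is_hadamard(H)
assert len(set(row_sums)) == 1 and len(set(col_sums)) == 1  # regular
assert excess == 8  # attains |E(H)| = n^(3/2) for n = 4
```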
c_7yl0g4skvbtk
|
Stack (mathematics)
|
Summary
|
Category_fibered_in_groupoids
|
In mathematics a stack or 2-sheaf is, roughly speaking, a sheaf that takes values in categories rather than sets. Stacks are used to formalise some of the main constructions of descent theory, and to construct fine moduli stacks when fine moduli spaces do not exist. Descent theory is concerned with generalisations of situations where isomorphic, compatible geometrical objects (such as vector bundles on topological spaces) can be "glued together" within a restriction of the topological basis. In a more general set-up the restrictions are replaced with pullbacks; fibred categories then make a good framework to discuss the possibility of such gluing.
|
c_yrvpkcnyzynn
|
Stack (mathematics)
|
Summary
|
Category_fibered_in_groupoids
|
The intuitive meaning of a stack is that it is a fibred category such that "all possible gluings work". The specification of gluings requires a definition of coverings with regard to which the gluings can be considered. It turns out that the general language for describing these coverings is that of a Grothendieck topology. Thus a stack is formally given as a fibred category over another base category, where the base has a Grothendieck topology and where the fibred category satisfies a few axioms that ensure existence and uniqueness of certain gluings with respect to the Grothendieck topology.
|
c_fm6atqkvm2hm
|
Countably compact space
|
Summary
|
Countable_compactness
|
In mathematics a topological space is called countably compact if every countable open cover has a finite subcover.
|
c_lr2dqkuh75rv
|
Translation surface
|
Summary
|
Translation_surface
|
In mathematics a translation surface is a surface obtained from identifying the sides of a polygon in the Euclidean plane by translations. An equivalent definition is a Riemann surface together with a holomorphic 1-form. These surfaces arise in dynamical systems where they can be used to model billiards, and in Teichmüller theory. A particularly interesting subclass is that of Veech surfaces (named after William A. Veech) which are the most symmetric ones.
|
c_3jbxe17o6g29
|
Eberlein compactum
|
Summary
|
Eberlein_compactum
|
In mathematics an Eberlein compactum, studied by William Frederick Eberlein, is a compact topological space homeomorphic to a subset of a Banach space with the weak topology. Every compact metric space, more generally every one-point compactification of a locally compact metric space, is Eberlein compact. The converse is not true.
|
c_28x3ztqyij94
|
Singly and doubly even
|
Summary
|
Singly_and_doubly_even
|
In mathematics, an even integer, that is, a number that is divisible by 2, is called evenly even or doubly even if it is a multiple of 4, and oddly even or singly even if it is not. The former names are traditional ones, derived from ancient Greek mathematics; the latter have become common in recent decades. These names reflect a basic concept in number theory, the 2-order of an integer: how many times the integer can be divided by 2. This is equivalent to the multiplicity of 2 in the prime factorization. A singly even number can be divided by 2 only once; it is even but its quotient by 2 is odd. A doubly even number is an integer that is divisible more than once by 2; it is even and its quotient by 2 is also even. The separate consideration of oddly and evenly even numbers is useful in many parts of mathematics, especially in number theory, combinatorics, and coding theory (see even codes), among others.
|
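The 2-order described above is easy to compute by repeated division, and the singly/doubly even classification follows from it. A minimal sketch (function names my own):

```python
def two_order(n):
    # Multiplicity of 2 in the prime factorization of a positive integer n:
    # how many times n can be divided by 2.
    k = 0
    while n % 2 == 0:
        n //= 2
        k += 1
    return k

def classify(n):
    v = two_order(n)
    if v == 0:
        return "odd"
    return "singly even" if v == 1 else "doubly even"

assert classify(10) == "singly even"   # 10 / 2 = 5 is odd
assert classify(12) == "doubly even"   # 12 is a multiple of 4
assert classify(7) == "odd"
```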
c_50gs3g8zrf2l
|
Narrowing of algebraic value sets
|
Introduction
|
Narrowing_of_algebraic_value_sets > Introduction
|
In mathematics an expression represents a single value. A function maps one or more values to one unique value. Inverses of functions are not always well defined as functions. Sometimes extra conditions are required to make an inverse of a function fit the definition of a function.
|
c_sz5rq3g09bne
|
Narrowing of algebraic value sets
|
Introduction
|
Narrowing_of_algebraic_value_sets > Introduction
|
Some Boolean operations do not have inverses that can be defined as functions. In particular, the disjunction "or" has inverses that allow two values. In natural language "or" represents alternate possibilities.
|
c_ht29ty9yqrxy
|
Narrowing of algebraic value sets
|
Introduction
|
Narrowing_of_algebraic_value_sets > Introduction
|
Narrowing is based on value sets that allow multiple values to be packaged and considered as a single value. This allows the inverses of functions to always be considered as functions. To achieve this, value sets must record the context to which a value belongs.
|
c_wcg8aiiikppq
|
Narrowing of algebraic value sets
|
Introduction
|
Narrowing_of_algebraic_value_sets > Introduction
|
A variable may only take on a single value in each possible world. The value sets tag each value in the value set with the world to which it belongs. Possible worlds belong to world sets.
|
c_6x4l5nwzj3mu
|
Narrowing of algebraic value sets
|
Introduction
|
Narrowing_of_algebraic_value_sets > Introduction
|
A world set is a set of all mutually exclusive worlds. Combining values from different possible worlds is impossible, because that would mean combining mutually exclusive possible worlds. The application of functions to value sets creates combinations of value sets from different worlds.
|
c_8fm5qlsu2gqi
|
Narrowing of algebraic value sets
|
Introduction
|
Narrowing_of_algebraic_value_sets > Introduction
|
Narrowing reduces those worlds by eliminating combinations of different worlds from the same world set. Narrowing rules also detect situations where some combinations of worlds are shown to be impossible. No backtracking is required in the use of narrowing. By packaging the possible values in a value set, all combinations of values may be considered at the same time. Evaluation proceeds as for a functional language, combining values in value sets, with narrowing rules eliminating impossible values from the sets.
|
c_xdmpu5dz99u0
|
Isogonal trajectory
|
Summary
|
Orthogonal_trajectories
|
In mathematics an orthogonal trajectory is a curve, which intersects any curve of a given pencil of (planar) curves orthogonally. For example, the orthogonal trajectories of a pencil of concentric circles are the lines through their common center (see diagram). Suitable methods for the determination of orthogonal trajectories are provided by solving differential equations. The standard method establishes a first order ordinary differential equation and solves it by separation of variables.
|
c_yzuhnzuf2gf2
|
Isogonal trajectory
|
Summary
|
Orthogonal_trajectories
|
Both steps may be difficult or even impossible. In such cases one has to apply numerical methods. Orthogonal trajectories are used in mathematics for example as curved coordinate systems (e.g. elliptic coordinates) or appear in physics as electric fields and their equipotential curves. If the trajectory intersects the given curves by an arbitrary (but fixed) angle, one gets an isogonal trajectory.
|
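The standard method mentioned above can be made concrete on the pencil of concentric circles. This worked derivation follows the usual first-order-ODE-plus-separation-of-variables route:

```latex
% Pencil of concentric circles: x^2 + y^2 = c. Differentiating eliminates c:
2x + 2y\,y' = 0 \quad\Rightarrow\quad y' = -\frac{x}{y}.
% An orthogonal trajectory must have the negative-reciprocal slope:
y' = \frac{y}{x}.
% Separation of variables:
\frac{dy}{y} = \frac{dx}{x}
\quad\Rightarrow\quad \ln|y| = \ln|x| + C
\quad\Rightarrow\quad y = m x,
```

recovering the lines through the common center, as stated for the concentric-circle example.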
c_2hhqak2vz7nb
|
Rectangular mask short-time Fourier transform
|
Summary
|
Rectangular_mask_short-time_Fourier_transform
|
In mathematics and Fourier analysis, a rectangular mask short-time Fourier transform (rec-STFT) has the simple form of short-time Fourier transform. Other types of the STFT may require more computation time than the rec-STFT. The rectangular mask function can be defined for some bound (B) over time (t) as {\displaystyle w(t)={\begin{cases}1;&|t|\leq B\\0;&|t|>B\end{cases}}} We can change B for different tradeoffs between desired time resolution and frequency resolution. Rec-STFT: {\displaystyle X(t,f)=\int _{t-B}^{t+B}x(\tau )e^{-j2\pi f\tau }\,d\tau } Inverse form: {\displaystyle x(t)=\int _{-\infty }^{\infty }X(t_{1},f)e^{j2\pi ft}\,df{\text{ where }}t-B<t_{1}<t+B}
|
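The rec-STFT integral can be discretized with a simple Riemann sum over the rectangular window [t−B, t+B]. The sketch below is a numerical approximation for illustration only (function name and parameters are my own); for a pure 5 Hz tone, the magnitude of X(0, f) peaks at f = 5 among the candidate frequencies:

```python
import cmath
import math

def rec_stft(x, t, f, B, dt=1e-3):
    # Riemann-sum discretization of
    #   X(t, f) = integral_{t-B}^{t+B} x(tau) * e^{-j 2 pi f tau} dtau
    steps = int(round(2 * B / dt))
    total = 0j
    for k in range(steps):
        tau = t - B + (k + 0.5) * dt
        total += x(tau) * cmath.exp(-2j * math.pi * f * tau) * dt
    return total

f0 = 5.0
signal = lambda tau: math.cos(2 * math.pi * f0 * tau)
freqs = [1.0, 3.0, 5.0, 7.0, 9.0]
mags = [abs(rec_stft(signal, t=0.0, f=f, B=1.0)) for f in freqs]
assert freqs[mags.index(max(mags))] == f0
```

Shrinking B trades frequency resolution for time resolution, exactly the tradeoff described above.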
c_5li34uvimk4l
|
Bol loop
|
Summary
|
Bol_loop
|
In mathematics and abstract algebra, a Bol loop is an algebraic structure generalizing the notion of group. Bol loops are named for the Dutch mathematician Gerrit Bol who introduced them in (Bol 1937). A loop, L, is said to be a left Bol loop if it satisfies the identity {\displaystyle a(b(ac))=(a(ba))c} for every a, b, c in L, while L is said to be a right Bol loop if it satisfies {\displaystyle ((ca)b)a=c((ab)a)} for every a, b, c in L. These identities can be seen as weakened forms of associativity, or a strengthened form of (left or right) alternativity. A loop is both left Bol and right Bol if and only if it is a Moufang loop. Alternatively, a right or left Bol loop is Moufang if and only if it satisfies the flexible identity a(ba) = (ab)a. Different authors use the term "Bol loop" to refer to either a left Bol or a right Bol loop.
|
c_bq3u946ickz2
|
Boolean domain
|
Summary
|
Boolean_domain
|
In mathematics and abstract algebra, a Boolean domain is a set consisting of exactly two elements whose interpretations include false and true. In logic, mathematics and theoretical computer science, a Boolean domain is usually written as {0, 1} or {\displaystyle \mathbb {B} }. The algebraic structure that naturally builds on a Boolean domain is the Boolean algebra with two elements.
|
c_hhdla871r1qn
|
Boolean domain
|
Summary
|
Boolean_domain
|
The initial object in the category of bounded lattices is a Boolean domain. In computer science, a Boolean variable is a variable that takes values in some Boolean domain. Some programming languages feature reserved words or symbols for the elements of the Boolean domain, for example false and true. However, many programming languages do not have a Boolean datatype in the strict sense. In C or BASIC, for example, falsity is represented by the number 0 and truth is represented by the number 1 or −1, and all variables that can take these values can also take any other numerical values.
|
c_9y5qvrykqnci
|
Relation algebra
|
Summary
|
Relation_algebra
|
In mathematics and abstract algebra, a relation algebra is a residuated Boolean algebra expanded with an involution called converse, a unary operation. The motivating example of a relation algebra is the algebra {\displaystyle 2^{X^{2}}} of all binary relations on a set X, that is, subsets of the cartesian square X^2, with R•S interpreted as the usual composition of binary relations R and S, and with the converse of R as the converse relation. Relation algebra emerged in the 19th-century work of Augustus De Morgan and Charles Peirce, which culminated in the algebraic logic of Ernst Schröder. The equational form of relation algebra treated here was developed by Alfred Tarski and his students, starting in the 1940s. Tarski and Givant (1987) applied relation algebra to a variable-free treatment of axiomatic set theory, with the implication that mathematics founded on set theory could itself be conducted without variables.
|
c_5rnh9th9bd1d
|
List of group theory topics
|
Summary
|
List_of_group_theory_topics
|
In mathematics and abstract algebra, group theory studies the algebraic structures known as groups. The concept of a group is central to abstract algebra: other well-known algebraic structures, such as rings, fields, and vector spaces, can all be seen as groups endowed with additional operations and axioms. Groups recur throughout mathematics, and the methods of group theory have influenced many parts of algebra. Linear algebraic groups and Lie groups are two branches of group theory that have experienced advances and have become subject areas in their own right.
|
c_wd7h2hrl5ju3
|
List of group theory topics
|
Summary
|
List_of_group_theory_topics
|
Various physical systems, such as crystals and the hydrogen atom, may be modelled by symmetry groups. Thus group theory and the closely related representation theory have many important applications in physics, chemistry, and materials science. Group theory is also central to public key cryptography.
|
c_wuf2cp7j7gzt
|
Two-element Boolean algebra
|
Summary
|
Two-element_Boolean_algebra
|
In mathematics and abstract algebra, the two-element Boolean algebra is the Boolean algebra whose underlying set (or universe or carrier) B is the Boolean domain. The elements of the Boolean domain are 1 and 0 by convention, so that B = {0, 1}. Paul Halmos's name for this algebra "2" has some following in the literature, and will be employed here.
|
c_o48cyf8dv0eu
|
Vaughan's identity
|
Summary
|
Vaughan's_lemma
|
In mathematics and analytic number theory, Vaughan's identity is an identity found by R. C. Vaughan (1977) that can be used to simplify Vinogradov's work on trigonometric sums. It can be used to estimate summatory functions of the form {\displaystyle \sum _{n\leq N}f(n)\Lambda (n)} where f is some arithmetic function of the natural integers n, whose values in applications are often roots of unity, and Λ is the von Mangoldt function.
|
c_421z05kjclq1
|
Perturbation theory
|
Summary
|
Perturbation_analysis
|
In mathematics and applied mathematics, perturbation theory comprises methods for finding an approximate solution to a problem, by starting from the exact solution of a related, simpler problem. A critical feature of the technique is a middle step that breaks the problem into "solvable" and "perturbative" parts. In perturbation theory, the solution is expressed as a power series in a small parameter {\displaystyle \varepsilon }. The first term is the known solution to the solvable problem.
|
c_5ewq2i42yyzw
|
Perturbation theory
|
Summary
|
Perturbation_analysis
|
Successive terms in the series at higher powers of {\displaystyle \varepsilon } usually become smaller. An approximate 'perturbation solution' is obtained by truncating the series, usually by keeping only the first two terms: the solution to the known problem and the 'first order' perturbation correction.
|
c_4iz0ddq1ui48
|
Perturbation theory
|
Summary
|
Perturbation_analysis
|
Perturbation theory is used in a wide range of fields, and reaches its most sophisticated and advanced forms in quantum field theory. Perturbation theory (quantum mechanics) describes the use of this method in quantum mechanics. The field in general remains actively and heavily researched across multiple disciplines.
|
c_c7nby6xhxnsp
|
Generalized momentum
|
Summary
|
Conjugate_momentum
|
In mathematics and classical mechanics, canonical coordinates are sets of coordinates on phase space which can be used to describe a physical system at any given point in time. Canonical coordinates are used in the Hamiltonian formulation of classical mechanics. A closely related concept also appears in quantum mechanics; see the Stone–von Neumann theorem and canonical commutation relations for details. As Hamiltonian mechanics are generalized by symplectic geometry and canonical transformations are generalized by contact transformations, so the 19th century definition of canonical coordinates in classical mechanics may be generalized to a more abstract 20th century definition of coordinates on the cotangent bundle of a manifold (the mathematical notion of phase space).
|
c_uabr7bt8uun7
|
Poisson commutativity
|
Summary
|
Poisson_brackets
|
In mathematics and classical mechanics, the Poisson bracket is an important binary operation in Hamiltonian mechanics, playing a central role in Hamilton's equations of motion, which govern the time evolution of a Hamiltonian dynamical system. The Poisson bracket also distinguishes a certain class of coordinate transformations, called canonical transformations, which map canonical coordinate systems into canonical coordinate systems. A "canonical coordinate system" consists of canonical position and momentum variables (below symbolized by q i {\displaystyle q_{i}} and p i {\displaystyle p_{i}} , respectively) that satisfy canonical Poisson bracket relations. The set of possible canonical transformations is always very rich.
|
c_yc254oy1qxcw
|
Poisson commutativity
|
Summary
|
Poisson_brackets
|
For instance, it is often possible to choose the Hamiltonian itself H = H ( q , p , t ) {\displaystyle H=H(q,p,t)} as one of the new canonical momentum coordinates. In a more general sense, the Poisson bracket is used to define a Poisson algebra, of which the algebra of functions on a Poisson manifold is a special case. There are other general examples, as well: it occurs in the theory of Lie algebras, where the tensor algebra of a Lie algebra forms a Poisson algebra; a detailed construction of how this comes about is given in the universal enveloping algebra article. Quantum deformations of the universal enveloping algebra lead to the notion of quantum groups. All of these objects are named in honor of Siméon Denis Poisson.
|
c_9w1eucwjxlhs
|
Centered hexagonal number
|
Summary
|
Centered_hexagonal_number
|
In mathematics and combinatorics, a centered hexagonal number, or hex number, is a centered figurate number that represents a hexagon with a dot in the center and all other dots surrounding the center dot in a hexagonal lattice. The following figures illustrate this arrangement for the first four centered hexagonal numbers: Centered hexagonal numbers should not be confused with cornered hexagonal numbers, which are figurate numbers in which the associated hexagons share a vertex. The sequence of centered hexagonal numbers starts out as follows (sequence A003215 in the OEIS): 1, 7, 19, 37, 61, 91, 127, 169, 217, 271, 331, 397, 469, 547, 631, 721, 817, 919, ...
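The n-th centered hexagonal number is a center dot plus n − 1 rings of 6, 12, 18, ... dots, which gives the closed form 3n(n − 1) + 1; a short Python sketch (illustrative only) reproduces the sequence above:

```python
def centered_hexagonal(n):
    # n-th centered hexagonal number: a center dot plus n-1 hexagonal
    # rings of 6, 12, 18, ... dots, i.e. 1 + 6*(1 + 2 + ... + (n-1)).
    return 3 * n * (n - 1) + 1

print([centered_hexagonal(n) for n in range(1, 11)])
# [1, 7, 19, 37, 61, 91, 127, 169, 217, 271]
```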
|
c_hc4z51i8j0l0
|
Elementary cellular automaton
|
Summary
|
Elementary_cellular_automaton
|
In mathematics and computability theory, an elementary cellular automaton is a one-dimensional cellular automaton where there are two possible states (labeled 0 and 1) and the rule to determine the state of a cell in the next generation depends only on the current state of the cell and its two immediate neighbors. There is an elementary cellular automaton (rule 110, defined below) which is capable of universal computation, and as such it is one of the simplest possible models of computation.
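A minimal Python sketch (illustrative, not from the text) of one generation of an elementary cellular automaton, using the bits of the rule number as a lookup table over the eight possible neighborhoods:

```python
def step(cells, rule=110):
    # One generation of an elementary cellular automaton: each new cell
    # depends only on its left neighbor, itself, and its right neighbor.
    # The 3-bit neighborhood (0..7) indexes into the bits of `rule`;
    # the boundary here is periodic (wrap-around), an assumption.
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

row = [0] * 10 + [1]  # single live cell
for _ in range(5):
    print("".join(".#"[c] for c in row))
    row = step(row)
```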
|
c_z1jx4p8g0t5z
|
Delaunay triangulation
|
Summary
|
Delaunay_triangulation
|
In mathematics and computational geometry, a Delaunay triangulation (also known as a Delone triangulation) for a given set P of discrete points in a general position is a triangulation DT(P) such that no point in P is inside the circumcircle of any triangle in DT(P). Delaunay triangulations maximize the minimum of all the angles of the triangles in the triangulation; they tend to avoid sliver triangles. The triangulation is named after Boris Delaunay for his work on this topic from 1934. For a set of points on the same line there is no Delaunay triangulation (the notion of triangulation is degenerate for this case). For four or more points on the same circle (e.g., the vertices of a rectangle) the Delaunay triangulation is not unique: each of the two possible triangulations that split the quadrangle into two triangles satisfies the "Delaunay condition", i.e., the requirement that the circumcircles of all triangles have empty interiors.
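The empty-circumcircle requirement can be checked with the standard "lifted" determinant test; the sketch below (illustrative only) assumes the triangle's vertices are given counterclockwise:

```python
def in_circumcircle(a, b, c, d):
    # True if point d lies strictly inside the circumcircle of triangle
    # (a, b, c), given counterclockwise vertices: the 3x3 determinant
    # test commonly used to verify the Delaunay condition.
    ax, ay = a[0] - d[0], a[1] - d[1]
    bx, by = b[0] - d[0], b[1] - d[1]
    cx, cy = c[0] - d[0], c[1] - d[1]
    det = (
        (ax * ax + ay * ay) * (bx * cy - cx * by)
        - (bx * bx + by * by) * (ax * cy - cx * ay)
        + (cx * cx + cy * cy) * (ax * by - bx * ay)
    )
    return det > 0

a, b, c = (1, 0), (0, 1), (-1, 0)         # counterclockwise triangle
print(in_circumcircle(a, b, c, (0, 0)))   # True: origin is inside
print(in_circumcircle(a, b, c, (0, -2)))  # False: well outside
```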
|
c_cg3ppbsvfbkd
|
Delaunay triangulation
|
Summary
|
Delaunay_triangulation
|
By considering circumscribed spheres, the notion of Delaunay triangulation extends to three and higher dimensions. Generalizations are possible to metrics other than Euclidean distance. However, in these cases a Delaunay triangulation is not guaranteed to exist or be unique.
|
c_od7izchw5q2n
|
Gabriel graph
|
Summary
|
Gabriel_graph
|
In mathematics and computational geometry, the Gabriel graph of a set S {\displaystyle S} of points in the Euclidean plane expresses one notion of proximity or nearness of those points. Formally, it is the graph G {\displaystyle G} with vertex set S {\displaystyle S} in which any two distinct points p ∈ S {\displaystyle p\in S} and q ∈ S {\displaystyle q\in S} are adjacent precisely when the closed disc having p q {\displaystyle pq} as a diameter contains no other points. Another way of expressing the same adjacency criterion is that p {\displaystyle p} and q {\displaystyle q} should be the two closest given points to their midpoint, with no other given point being as close. Gabriel graphs naturally generalize to higher dimensions, with the empty disks replaced by empty closed balls. Gabriel graphs are named after K. Ruben Gabriel, who introduced them in a paper with Robert R. Sokal in 1969.
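The adjacency criterion above translates directly into a brute-force construction; a minimal Python sketch (names are illustrative):

```python
def gabriel_graph(points):
    # Brute-force Gabriel graph: points p and q are adjacent iff no other
    # point lies in the closed disc having pq as a diameter, i.e. no r
    # satisfies |r - m|^2 <= (|pq|/2)^2 where m is the midpoint of pq.
    def d2(u, v):
        return (u[0] - v[0]) ** 2 + (u[1] - v[1]) ** 2
    edges = []
    for i, p in enumerate(points):
        for j in range(i + 1, len(points)):
            q = points[j]
            m = ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)
            r2 = d2(p, q) / 4
            if all(d2(r, m) > r2 for k, r in enumerate(points) if k not in (i, j)):
                edges.append((i, j))
    return edges

# The point near the middle blocks the long edge between the endpoints:
print(gabriel_graph([(0, 0), (2, 0), (1, 0.1)]))  # [(0, 2), (1, 2)]
```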
|
c_9z9xgi6sli7u
|
Euler's method
|
Summary
|
Euler's_method
|
In mathematics and computational science, the Euler method (also called the forward Euler method) is a first-order numerical procedure for solving ordinary differential equations (ODEs) with a given initial value. It is the most basic explicit method for numerical integration of ordinary differential equations and is the simplest Runge–Kutta method. The Euler method is named after Leonhard Euler, who first proposed it in his book Institutionum calculi integralis (published 1768–1770). The Euler method is a first-order method, which means that the local error (error per step) is proportional to the square of the step size, and the global error (error at a given time) is proportional to the step size. The Euler method often serves as the basis to construct more complex methods, e.g., the predictor–corrector method.
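A minimal sketch of the forward Euler step y_{n+1} = y_n + h·f(t_n, y_n) (illustrative only); for y′ = y with y(0) = 1 the exact value y(1) = e ≈ 2.71828, and shrinking the step size h shrinks the global error roughly in proportion, as expected for a first-order method:

```python
def euler(f, t0, y0, h, steps):
    # Forward Euler: advance by y_{n+1} = y_n + h * f(t_n, y_n).
    t, y = t0, y0
    for _ in range(steps):
        y = y + h * f(t, y)
        t = t + h
    return y

print(euler(lambda t, y: y, 0.0, 1.0, 0.1, 10))    # ≈ 2.5937
print(euler(lambda t, y: y, 0.0, 1.0, 0.01, 100))  # ≈ 2.7048
```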
|
c_5wl0td3zfp0b
|
Distinct degree factorization
|
Summary
|
Polynomial_factorization_over_finite_fields
|
In mathematics and computer algebra, the factorization of a polynomial consists of decomposing it into a product of irreducible factors. This decomposition is theoretically possible and is unique for polynomials with coefficients in any field, but rather strong restrictions on the field of the coefficients are needed to allow the computation of the factorization by means of an algorithm. In practice, algorithms have been designed only for polynomials with coefficients in a finite field, in the field of rationals or in a finitely generated field extension of one of them. All factorization algorithms, including the case of multivariate polynomials over the rational numbers, reduce the problem to this case; see polynomial factorization. It is also used for various applications of finite fields, such as coding theory (cyclic redundancy codes and BCH codes), cryptography (public key cryptography by the means of elliptic curves), and computational number theory. As the reduction of the factorization of multivariate polynomials to that of univariate polynomials does not have any specificity in the case of coefficients in a finite field, only polynomials with one variable are considered in this article.
|
c_oykv3bpyti3c
|
Computational differentiation
|
Summary
|
Algorithmic_differentiation
|
In mathematics and computer algebra, automatic differentiation (auto-differentiation, autodiff, or AD), also called algorithmic differentiation or computational differentiation, is a set of techniques to evaluate the partial derivative of a function specified by a computer program. Automatic differentiation exploits the fact that every computer calculation, no matter how complicated, executes a sequence of elementary arithmetic operations (addition, subtraction, multiplication, division, etc.) and elementary functions (exp, log, sin, cos, etc.). By applying the chain rule repeatedly to these operations, partial derivatives of arbitrary order can be computed automatically, accurately to working precision, and using at most a small constant factor more arithmetic operations than the original program.
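Forward-mode automatic differentiation can be sketched with dual numbers, which carry a (value, derivative) pair through each elementary operation and apply the chain rule at every step (a minimal illustration, not a production implementation; only + and * are implemented here):

```python
class Dual:
    # A dual number (val, dot): val is the function value, dot is the
    # derivative, propagated by the chain rule through each operation.
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # Product rule: (uv)' = u'v + uv'.
        return Dual(self.val * o.val, self.dot * o.val + self.val * o.dot)
    __rmul__ = __mul__

def derivative(f, x):
    # Seed the input with derivative 1 and read off the output derivative.
    return f(Dual(x, 1.0)).dot

# f(x) = 3x^2 + 2x, so f'(x) = 6x + 2 and f'(4) = 26.
print(derivative(lambda x: 3 * x * x + 2 * x, 4.0))  # 26.0
```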
|
c_3l4ztpq8dc8m
|
Polynomial factorization
|
Summary
|
Polynomial_factorization
|
In mathematics and computer algebra, factorization of polynomials or polynomial factorization expresses a polynomial with coefficients in a given field or in the integers as the product of irreducible factors with coefficients in the same domain. Polynomial factorization is one of the fundamental components of computer algebra systems. The first polynomial factorization algorithm was published by Theodor von Schubert in 1793. Leopold Kronecker rediscovered Schubert's algorithm in 1882 and extended it to multivariate polynomials and coefficients in an algebraic extension.
|
c_3ucrcecwd0ge
|
Polynomial factorization
|
Summary
|
Polynomial_factorization
|
But most of the knowledge on this topic is not older than circa 1965 and the first computer algebra systems: When the long-known finite step algorithms were first put on computers, they turned out to be highly inefficient. The fact that almost any uni- or multivariate polynomial of degree up to 100 and with coefficients of a moderate size (up to 100 bits) can be factored by modern algorithms in a few minutes of computer time indicates how successfully this problem has been attacked during the past fifteen years. (Erich Kaltofen, 1982) Nowadays, modern algorithms and computers can quickly factor univariate polynomials of degree more than 1000 having coefficients with thousands of digits. For this purpose, even for factoring over the rational numbers and number fields, a fundamental step is a factorization of a polynomial over a finite field.
|
c_34d7ho1sulih
|
Binary exponentiation
|
Summary
|
Repeated_squaring
|
In mathematics and computer programming, exponentiating by squaring is a general method for fast computation of large positive integer powers of a number, or more generally of an element of a semigroup, like a polynomial or a square matrix. Some variants are commonly referred to as square-and-multiply algorithms or binary exponentiation. These can be of quite general use, for example in modular arithmetic or powering of matrices. For semigroups for which additive notation is commonly used, like elliptic curves used in cryptography, this method is also referred to as double-and-add.
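A minimal Python sketch of square-and-multiply (illustrative only), optionally reducing modulo m at each step as in modular arithmetic:

```python
def power(base, exp, mod=None):
    # Square-and-multiply: scan the bits of exp from least significant,
    # squaring the base each step; O(log exp) multiplications in total.
    result = 1
    while exp > 0:
        if exp & 1:  # this bit is set: multiply the current square in
            result = result * base if mod is None else (result * base) % mod
        base = base * base if mod is None else (base * base) % mod
        exp >>= 1
    return result

print(power(3, 13))       # 1594323
print(power(3, 13, 100))  # 23
```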
|
c_47qkox9220b3
|
Indicial notation
|
Summary
|
Index_notation
|
In mathematics and computer programming, index notation is used to specify the elements of an array of numbers. The formalism of how indices are used varies according to the subject. In particular, there are different methods for referring to the elements of a list, a vector, or a matrix, depending on whether one is writing a formal mathematical paper for publication, or when one is writing a computer program.
|
c_lm0ar0t37jod
|
Precedence rule
|
Summary
|
Standard_mathematical_order_of_operations
|
In mathematics and computer programming, the order of operations is a collection of rules that reflect conventions about which operations to perform first in order to evaluate a given mathematical expression. These rules are formalized with a ranking of the operators. The rank of an operator is called its precedence, and an operation with a higher precedence is performed before operations with lower precedence. Calculators generally perform operations with the same precedence from left to right, but some programming languages and calculators adopt different conventions.
|
c_hvqzm02w6ytg
|
Precedence rule
|
Summary
|
Standard_mathematical_order_of_operations
|
For example, multiplication is granted a higher precedence than addition, and it has been this way since the introduction of modern algebraic notation. Thus, in the expression 1 + 2 × 3, the multiplication is performed before addition, and the expression has the value 1 + (2 × 3) = 7, and not (1 + 2) × 3 = 9. When exponents were introduced in the 16th and 17th centuries, they were given precedence over both addition and multiplication and placed as a superscript to the right of their base.
|
c_zhx5h5cmmqd1
|
Precedence rule
|
Summary
|
Standard_mathematical_order_of_operations
|
Thus 3 + 5² = 28 and 3 × 5² = 75. These conventions exist to avoid notational ambiguity while allowing notation to remain brief. Where it is desired to override the precedence conventions, or even simply to emphasize them, parentheses ( ) can be used.
|
c_gwnpeynvnv9m
|
Precedence rule
|
Summary
|
Standard_mathematical_order_of_operations
|
For example, (2 + 3) × 4 = 20 forces addition to precede multiplication, while (3 + 5)² = 64 forces addition to precede exponentiation. If multiple pairs of parentheses are required in a mathematical expression (such as in the case of nested parentheses), the parentheses may be replaced by brackets or braces to avoid confusion, as in [2 × (3 + 4)] − 5 = 9.
|
c_mg4r5nk01xnu
|
Precedence rule
|
Summary
|
Standard_mathematical_order_of_operations
|
These rules are meaningful only when the usual notation (called infix notation) is used. When functional or Polish notation are used for all operations, the order of operations results from the notation itself. Internet memes sometimes present ambiguous infix expressions that cause disputes and increase web traffic. Most of these ambiguous expressions involve mixed division and multiplication, where there is no general agreement about the order of operations.
|
c_vacpduzow6b1
|
Fixed-point combinator
|
Summary
|
Fixed_point_combinator
|
In mathematics and computer science in general, a fixed point of a function is a value that is mapped to itself by the function. In combinatory logic for computer science, a fixed-point combinator (or fixpoint combinator) is a higher-order function fix {\displaystyle {\textsf {fix}}} that returns some fixed point of its argument function, if one exists. Formally, if the function f has one or more fixed points, then fix f = f ( fix f ) , {\displaystyle {\textsf {fix}}\ f=f\ ({\textsf {fix}}\ f)\ ,} and hence, by repeated application, fix f = f ( f ( … f ( fix f ) … ) ) . {\displaystyle {\textsf {fix}}\ f=f\ (f\ (\ldots f\ ({\textsf {fix}}\ f)\ldots ))\ .}
|
c_dettoiq5u8y1
|
Horner's method
|
Summary
|
Horner_scheme
|
In mathematics and computer science, Horner's method (or Horner's scheme) is an algorithm for polynomial evaluation. Although named after William George Horner, this method is much older, as it has been attributed to Joseph-Louis Lagrange by Horner himself, and can be traced back many hundreds of years to Chinese and Persian mathematicians. After the introduction of computers, this algorithm became fundamental for computing efficiently with polynomials. The algorithm is based on Horner's rule, in which a polynomial is written in nested form: a 0 + a 1 x + a 2 x 2 + a 3 x 3 + ⋯ + a n x n = a 0 + x ( a 1 + x ( a 2 + x ( a 3 + ⋯ + x ( a n − 1 + x a n ) ⋯ ) ) ) .
|
c_6nuvmwcosh8k
|
Horner's method
|
Summary
|
Horner_scheme
|
{\displaystyle {\begin{aligned}a_{0}&+a_{1}x+a_{2}x^{2}+a_{3}x^{3}+\cdots +a_{n}x^{n}\\&=a_{0}+x{\bigg (}a_{1}+x{\Big (}a_{2}+x{\big (}a_{3}+\cdots +x(a_{n-1}+x\,a_{n})\cdots {\big )}{\Big )}{\bigg )}.\end{aligned}}} This allows the evaluation of a polynomial of degree n with only n {\displaystyle n} multiplications and n {\displaystyle n} additions. This is optimal, since there are polynomials of degree n that cannot be evaluated with fewer arithmetic operations.Alternatively, Horner's method also refers to a method for approximating the roots of polynomials, described by Horner in 1819. It is a variant of the Newton–Raphson method made more efficient for hand calculation by the application of Horner's rule. It was widely used until computers came into general use around 1970.
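The nested form above translates directly into a loop; a minimal Python sketch (illustrative only), taking coefficients from highest degree to lowest:

```python
def horner(coeffs, x):
    # Evaluate a_n*x**n + ... + a_1*x + a_0 via the nested (Horner) form;
    # coeffs = [a_n, ..., a_1, a_0]. Exactly n multiplications and
    # n additions for a degree-n polynomial.
    result = 0
    for a in coeffs:
        result = result * x + a
    return result

# 2x^3 - 6x^2 + 2x - 1 at x = 3:
print(horner([2, -6, 2, -1], 3))  # 5
```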
|
c_8rioiafinabd
|
Recamán's sequence
|
Summary
|
Recamán's_sequence
|
In mathematics and computer science, Recamán's sequence is a well-known sequence defined by a recurrence relation. Because its elements are related to the previous elements in a straightforward way, they are often defined using recursion. It is named after its inventor Bernardo Recamán Santos, a Colombian mathematician.
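The recurrence (a(0) = 0; a(n) = a(n−1) − n if that value is positive and not already in the sequence, otherwise a(n−1) + n) can be sketched as follows (illustrative only):

```python
def recaman(n):
    # a(0) = 0; a(k) = a(k-1) - k if that is positive and not already
    # present, otherwise a(k) = a(k-1) + k.
    seq, seen = [0], {0}
    for k in range(1, n):
        back = seq[-1] - k
        nxt = back if back > 0 and back not in seen else seq[-1] + k
        seq.append(nxt)
        seen.add(nxt)
    return seq

print(recaman(10))  # [0, 1, 3, 6, 2, 7, 13, 20, 12, 21]
```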
|
c_5egl93ujryjq
|
Zeno machine
|
Summary
|
Zeno_machine
|
In mathematics and computer science, Zeno machines (abbreviated ZM, and also called accelerated Turing machine, ATM) are a hypothetical computational model related to Turing machines that are capable of carrying out computations involving a countably infinite number of algorithmic steps. These machines are ruled out in most models of computation. The idea of Zeno machines was first discussed by Hermann Weyl in 1927; the name refers to Zeno's paradoxes, attributed to the ancient Greek philosopher Zeno of Elea. Zeno machines play a crucial role in some theories. The theory of the Omega Point devised by physicist Frank J. Tipler, for instance, can only be valid if Zeno machines are possible.
|
c_gv581axajpf4
|
Balanced boolean function
|
Summary
|
Balanced_boolean_function
|
In mathematics and computer science, a balanced boolean function is a boolean function whose output yields as many 0s as 1s over its input set. This means that for a uniformly random input string of bits, the probability of getting a 1 is 1/2. Examples of balanced boolean functions are the function that copies the first bit of its input to the output, and the function that produces the exclusive or of the input bits.
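The definition can be checked by exhaustive enumeration of the input set; a minimal Python sketch (illustrative only):

```python
from itertools import product

def is_balanced(f, n):
    # A boolean function on n bits is balanced when it outputs 1 on
    # exactly half of all 2**n possible inputs.
    outputs = [f(bits) for bits in product((0, 1), repeat=n)]
    return sum(outputs) * 2 == len(outputs)

xor3 = lambda bits: bits[0] ^ bits[1] ^ bits[2]  # parity: balanced
and3 = lambda bits: bits[0] & bits[1] & bits[2]  # AND: 1 on only one input
print(is_balanced(xor3, 3))  # True
print(is_balanced(and3, 3))  # False
```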
|
c_lia68dlec6sm
|
Normal form (mathematics)
|
Summary
|
Data_normalization
|
In mathematics and computer science, a canonical, normal, or standard form of a mathematical object is a standard way of presenting that object as a mathematical expression. Often, it is one which provides the simplest representation of an object and allows it to be identified in a unique way. The distinction between "canonical" and "normal" forms varies from subfield to subfield. In most fields, a canonical form specifies a unique representation for every object, while a normal form simply specifies its form, without the requirement of uniqueness. The canonical form of a positive integer in decimal representation is a finite sequence of digits that does not begin with zero.
|
c_fzxm7krzo4np
|
Normal form (mathematics)
|
Summary
|
Data_normalization
|
More generally, for a class of objects on which an equivalence relation is defined, a canonical form consists in the choice of a specific object in each class. For example: Jordan normal form is a canonical form for matrix similarity. The row echelon form is a canonical form, when one considers as equivalent a matrix and its left product by an invertible matrix. In computer science, and more specifically in computer algebra, when representing mathematical objects in a computer, there are usually many different ways to represent the same object.
|
c_zf5iq2rieuos
|
Normal form (mathematics)
|
Summary
|
Data_normalization
|
In this context, a canonical form is a representation such that every object has a unique representation (with canonicalization being the process through which a representation is put into its canonical form). Thus, the equality of two objects can easily be tested by testing the equality of their canonical forms. Despite this advantage, canonical forms frequently depend on arbitrary choices (like ordering the variables), which introduce difficulties for testing the equality of two objects resulting from independent computations.
|
c_fri1lo7oj8uh
|
Normal form (mathematics)
|
Summary
|
Data_normalization
|
Therefore, in computer algebra, normal form is a weaker notion: A normal form is a representation such that zero is uniquely represented. This allows testing for equality by putting the difference of two objects in normal form. Canonical form can also mean a differential form that is defined in a natural (canonical) way.
|
c_drszpyy14ext
|
Recursive step
|
Formal definitions
|
Recursive_structure > Formal definitions
|
In mathematics and computer science, a class of objects or methods exhibits recursive behavior when it can be defined by two properties: a simple base case (or cases), a terminating scenario that does not use recursion to produce an answer, and a recursive step, a set of rules that reduces all successive cases toward the base case. For example, the following is a recursive definition of a person's ancestor. One's ancestor is either: one's parent (base case), or one's parent's ancestor (recursive step). The Fibonacci sequence is another classic example of recursion: Fib(0) = 0 as base case 1, Fib(1) = 1 as base case 2, and for all integers n > 1, Fib(n) = Fib(n − 1) + Fib(n − 2). Many mathematical axioms are based upon recursive rules. For example, the formal definition of the natural numbers by the Peano axioms can be described as: "Zero is a natural number, and each natural number has a successor, which is also a natural number."
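The Fibonacci definition above transcribes directly into code (a minimal illustration):

```python
def fib(n):
    # The base cases terminate the recursion; the recursive step
    # reduces n toward them.
    if n == 0:
        return 0  # base case 1
    if n == 1:
        return 1  # base case 2
    return fib(n - 1) + fib(n - 2)  # recursive step

print([fib(n) for n in range(10)])  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```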
|
c_1zz25zg2kl17
|
Recursive step
|
Formal definitions
|
Recursive_structure > Formal definitions
|
By this base case and recursive rule, one can generate the set of all natural numbers. Other recursively defined mathematical objects include factorials, functions (e.g., recurrence relations), sets (e.g., Cantor ternary set), and fractals. There are various more tongue-in-cheek definitions of recursion; see recursive humor.
|
c_eaxzzwinokl7
|
Functional form
|
Summary
|
Functional_form
|
In mathematics and computer science, a higher-order function (HOF) is a function that does at least one of the following: takes one or more functions as arguments (i.e. a procedural parameter, which is a parameter of a procedure that is itself a procedure), or returns a function as its result. All other functions are first-order functions. In mathematics, higher-order functions are also termed operators or functionals. The differential operator in calculus is a common example, since it maps a function to its derivative, also a function. Higher-order functions should not be confused with other uses of the word "functor" throughout mathematics, see Functor (disambiguation). In the untyped lambda calculus, all functions are higher-order; in a typed lambda calculus, from which most functional programming languages are derived, higher-order functions that take one function as argument are values with types of the form ( τ 1 → τ 2 ) → τ 3 {\displaystyle (\tau _{1}\to \tau _{2})\to \tau _{3}} .
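A minimal illustration of a higher-order function, echoing the differential-operator example: it both takes a function as argument and returns a function as result (the central-difference approximation is an illustrative stand-in for the exact operator):

```python
def derivative(f, h=1e-6):
    # Higher-order function: maps a function f to a new function that
    # approximates f' by a central difference.
    return lambda x: (f(x + h) - f(x - h)) / (2 * h)

square = lambda x: x * x
dsquare = derivative(square)   # a function mapping x to roughly 2x
print(round(dsquare(3.0), 6))  # 6.0
```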
|
c_23dpn7ns4qrc
|
History monoid
|
Summary
|
History_monoid
|
In mathematics and computer science, a history monoid is a way of representing the histories of concurrently running computer processes as a collection of strings, each string representing the individual history of a process. The history monoid provides a set of synchronization primitives (such as locks, mutexes or thread joins) for providing rendezvous points between a set of independently executing processes or threads. History monoids occur in the theory of concurrent computation, and provide a low-level mathematical foundation for process calculi, such as CSP the language of communicating sequential processes, or CCS, the calculus of communicating systems.
|
c_z5cnmdhhnmbe
|
History monoid
|
Summary
|
History_monoid
|
History monoids were first presented by M.W. Shields. History monoids are isomorphic to trace monoids (free partially commutative monoids) and to the monoid of dependency graphs. As such, they are free objects and are universal. The history monoid is a type of semi-abelian categorical product in the category of monoids.
|
c_1owrofo2ba1r
|
Circuit-finding oracle
|
Summary
|
Circuit-finding_oracle
|
In mathematics and computer science, a matroid oracle is a subroutine through which an algorithm may access a matroid, an abstract combinatorial structure that can be used to describe the linear dependencies between vectors in a vector space or the spanning trees of a graph, among other applications. The most commonly used oracle of this type is an independence oracle, a subroutine for testing whether a set of matroid elements is independent. Several other types of oracle have also been used; some of them have been shown to be weaker than independence oracles, some stronger, and some equivalent in computational power. Many algorithms that perform computations on matroids have been designed to take an oracle as input, allowing them to run efficiently without change on many different kinds of matroids, and without additional assumptions about what kind of matroid they are using. For instance, given an independence oracle for any matroid, it is possible to find the minimum weight basis of the matroid by applying a greedy algorithm that adds elements to the basis in sorted order by weight, using the independence oracle to test whether each element can be added. In computational complexity theory, the oracle model has led to unconditional lower bounds proving that certain matroid problems cannot be solved in polynomial time, without invoking unproved assumptions such as the assumption that P ≠ NP. Problems that have been shown to be hard in this way include testing whether a matroid is binary or uniform, or testing whether it contains certain fixed minors.
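The greedy minimum-weight-basis algorithm described above can be sketched against any independence oracle; the uniform matroid oracle below is a hypothetical example (a set is independent iff it has at most k elements):

```python
def min_weight_basis(elements, weights, independent):
    # Greedy algorithm using only an independence oracle: scan elements
    # in increasing weight order, keeping each one whose addition leaves
    # the current set independent.
    basis = []
    for e in sorted(elements, key=lambda e: weights[e]):
        if independent(basis + [e]):
            basis.append(e)
    return basis

# Hypothetical example: the uniform matroid U(2, 4), where a set is
# independent iff it has at most 2 elements.
oracle = lambda s: len(s) <= 2
print(min_weight_basis([0, 1, 2, 3], {0: 5, 1: 1, 2: 4, 3: 2}, oracle))
# [1, 3]: the two lightest elements
```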
|
c_y9gcqewxegyk
|
Prolongable morphism
|
Summary
|
Prolongable_morphism
|
In mathematics and computer science, a morphic word or substitutive word is an infinite sequence of symbols which is constructed from a particular class of endomorphism of a free monoid. Every automatic sequence is morphic.
|
c_um7f0u4o4pww
|
Pebble game
|
Summary
|
Pebble_game
|
In mathematics and computer science, a pebble game is a type of mathematical game played by placing "pebbles" or "markers" on a directed acyclic graph according to certain rules: A given step of the game consists of either placing a pebble on an empty vertex or removing a pebble from a previously pebbled vertex. A vertex may be pebbled only if all its predecessors have pebbles. The objective of the game is to successively pebble each vertex of G (in any order) while minimizing the number of pebbles that are ever on the graph simultaneously.
|
c_74ggv3mxt3sd
|
Primality certificate
|
Summary
|
Primality_certificate
|
In mathematics and computer science, a primality certificate or primality proof is a succinct, formal proof that a number is prime. Primality certificates allow the primality of a number to be rapidly checked without having to run an expensive or unreliable primality test. "Succinct" usually means that the proof should be at most polynomially larger than the number of digits in the number itself (for example, if the number has b bits, the proof might contain roughly b2 bits). Primality certificates lead directly to proofs that problems such as primality testing and the complement of integer factorization lie in NP, the class of problems verifiable in polynomial time given a solution.
|
c_83c56z838xdl
|
Primality certificate
|
Summary
|
Primality_certificate
|
These problems already trivially lie in co-NP. This was the first strong evidence that these problems are not NP-complete, since if they were, it would imply that NP is a subset of co-NP, a result widely believed to be false; in fact, this was the first demonstration of a problem in NP ∩ co-NP not known, at the time, to be in P. Producing certificates for the complement problem, to establish that a number is composite, is straightforward: it suffices to give a nontrivial divisor. Standard probabilistic primality tests such as the Baillie–PSW primality test, the Fermat primality test, and the Miller–Rabin primality test also produce compositeness certificates in the event where the input is composite, but do not produce certificates for prime inputs.
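Verifying a compositeness certificate, as described above, amounts to checking a single nontrivial divisor (a minimal sketch; the numbers are illustrative):

```python
def check_compositeness_certificate(n, d):
    # A nontrivial divisor d certifies that n is composite; verifying
    # the certificate is one division, far cheaper than factoring n.
    return 1 < d < n and n % d == 0

print(check_compositeness_certificate(8051, 83))  # True: 8051 = 83 * 97
print(check_compositeness_certificate(8051, 7))   # False: 7 does not divide 8051
```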
|
c_01ibon2slol1
|
Random tree
|
Summary
|
Random_tree
|
In mathematics and computer science, a random tree is a tree or arborescence that is formed by a stochastic process. Types of random trees include: uniform spanning tree, a spanning tree of a given graph in which each different tree is equally likely to be selected; random minimal spanning tree, a spanning tree of a graph formed by choosing random edge weights and using the minimum spanning tree for those weights; random binary tree, a binary tree with a given number of nodes, formed by inserting the nodes in a random order or by selecting all possible trees uniformly at random; random recursive tree, an increasingly labelled tree, which can be generated using a simple stochastic growth rule; treap or randomized binary search tree, a data structure that uses random choices to simulate a random binary tree for non-random update sequences; rapidly exploring random tree, a fractal space-filling pattern used as a data structure for searching high-dimensional spaces; Brownian tree, a fractal tree structure created by diffusion-limited aggregation processes; random forest, a machine-learning classifier based on choosing random subsets of variables for each tree and using the most frequent tree output as the overall classification; and branching process, a model of a population in which each individual has a random number of children.
|
c_z4gn381q2roh
|
Rational series
|
Summary
|
Rational_series
|
In mathematics and computer science, a rational series is a generalisation of the concept of formal power series over a ring to the case when the basic algebraic structure is no longer a ring but a semiring, and the indeterminates adjoined are not assumed to commute. They can be regarded as algebraic expressions of a formal language over a finite alphabet.
|
c_yvpdfh77ja2p
|
Recursive definition
|
Summary
|
Inductive_definition
|
In mathematics and computer science, a recursive definition, or inductive definition, is used to define the elements in a set in terms of other elements in the set (Aczel 1977:740ff). Some examples of recursively-definable objects include factorials, natural numbers, Fibonacci numbers, and the Cantor ternary set. A recursive definition of a function defines values of the function for some inputs in terms of the values of the same function for other (usually smaller) inputs. For example, the factorial function n!
|
c_ow3vf4452g9c
|
Recursive definition
|
Summary
|
Inductive_definition
|
is defined by the rules 0! = 1.
|
c_jy4vjh0kby2w
|
Recursive definition
|
Summary
|
Inductive_definition
|
(n + 1)! = (n + 1) ⋅ n!.
|
c_4y0csrlrn20u
|
Recursive definition
|
Summary
|
Inductive_definition
|
{\displaystyle {\begin{aligned}&0!=1.\\&(n+1)!=(n+1)\cdot n!.\end{aligned}}} This definition is valid for each natural number n, because the recursion eventually reaches the base case of 0. The definition may also be thought of as giving a procedure for computing the value of the function n!, starting from n = 0 and proceeding onwards with n = 1, 2, 3 etc. The recursion theorem states that such a definition indeed defines a function that is unique. The proof uses mathematical induction. An inductive definition of a set describes the elements in a set in terms of other elements in the set.
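The two rules transcribe directly into a recursive function (a minimal illustration):

```python
def factorial(n):
    # Direct transcription of the recursive definition; the recursion
    # terminates because each call reduces n toward the base case 0.
    if n == 0:
        return 1                  # 0! = 1
    return n * factorial(n - 1)   # (n+1)! = (n+1) * n!

print([factorial(n) for n in range(6)])  # [1, 1, 2, 6, 24, 120]
```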
|